PostPosted: Fri May 14, 2021 3:29 pm 
BigEd

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10986
Location: England
Let me sit back and see what the two of you make of it.


PostPosted: Fri May 14, 2021 3:40 pm 
enso

Joined: Sat Sep 29, 2012 10:15 pm
Posts: 904
option8: sorry about the noise - BigEd and I like to poke at each other sometimes...

To summarize: from the standpoint of information science, there is no direct way to prove 8-bitness.

But there may be a 'side channel': a way to extract that information statistically, from some other task that leaks it while providing a genuine benefit to the user.

P.S. For instance: a list of the gas stations you use, with the times you gas up, reveals a lot about where you live, whether you have a job, and roughly what kind of car you have. Is there something your 8-bit users do that others don't? Buy floppy disks, maybe...

_________________
In theory, there is no difference between theory and practice. In practice, there is. ...Jan van de Snepscheut


PostPosted: Fri May 14, 2021 4:18 pm 
option8

Joined: Thu May 13, 2021 5:15 pm
Posts: 7
Wow.

So, yes, I know a modern machine can emulate a 6502, including all the hardware bugs, perfectly. I'm less concerned with that than with showing a thing was done in 6502 assembly or, failing that, in a period-correct implementation of BASIC. I can plug values into a BASIC program, let it run, and get the results. Or I can write a faster version of the same algorithm in Python and get the results in 1/1000th the time. The Python script will give me faster, but noticeably different, results.

Looking for that algorithm.


PostPosted: Fri May 14, 2021 8:11 pm 
BigDumbDinosaur

Joined: Thu May 28, 2009 9:46 pm
Posts: 8508
Location: Midwestern USA
option8 wrote:
I can plug values into a BASIC program, let it run, and get the results. Or, I can write a faster version of the same algorithm in python, and get the results in 1/1000th the time. The python script will give me faster, but noticeably different, results.

How would the Python script produce "noticeably different" results? If it has been written to perform exactly the same computations as the BASIC program, it should produce exactly the same results. Processing speed in itself is irrelevant.

On the other hand, if you are relying on the fact that floating-point arithmetic in old eight-bit versions of BASIC occasionally produced errors due to conversion rounding effects, that still doesn't prove much, other than that the computation was done by something with less-than-ideal arithmetic features. It definitely won't prove that the machine that did the computation is powered by a specific microprocessor, as the effect I described stems from floating-point vagaries, not machine architecture.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Fri May 14, 2021 8:39 pm 
enso

Joined: Sat Sep 29, 2012 10:15 pm
Posts: 904
Let me try again. A simulator of a processor is a detailed model of said processor, containing the same registers, datapaths, and instructions. When correctly implemented, there can be no detectable difference: a program executing on a simulator has the same execution path and side effects as the same program running on real hardware. (In fact, 'real' hardware is itself a simulation of binary logic, built with analog transistors on a silicon wafer.)

Given that, there is absolutely nothing you can do to see if you are running on 'real' hardware or emulated virtual hardware, or 100 guys with calculators. Any quirks can be faithfully simulated.

The quirks you've brought up are bugs. There is no reason a BASIC floating-point division should give you a different result than Python's; if the IEEE 754 standard (from 1985) is followed carefully, both should be the same - especially since modern hardware handles floating point directly. And again, relying on undocumented bugs of poorly-implemented software is probably not a good idea - and it has nothing to do with detecting what processor you are running on anyway.
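
As a quick illustration (an example added here, not anything from the standard itself): in Python, where floats are IEEE-754 doubles, a given expression has one exactly defined result, so any conforming host produces the identical bit pattern:
Code:
import struct

# 700.1 - 700 evaluates to a single, exactly specified IEEE-754 double.
# Its bit pattern is identical on any conforming machine - 'real' hardware,
# an emulator, or a simulation running inside another simulation.
print(struct.pack(">d", 700.1 - 700).hex())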

P.S. There is nothing inherently 'faster' about Python as compared to BASIC. I wrote a BASIC implementation for the Atari ST that was pretty much as fast as C in most situations.

P.P.S. There is no algorithm possible. Any algorithm can be run on anything, including a simulation of an entirely different machine (maybe within another simulation). It's turtles all the way down with information theory.

The only hope is a 'side channel', as I mentioned before - something reasonably unique to a particular execution environment, which the user might be coaxed into providing, or even volunteer in exchange for some benefit... the way I put up with my browser spying on me in exchange for the joy of arguing with BigEd here. Confirmed by statistics, perhaps. I would need more information.

_________________
In theory, there is no difference between theory and practice. In practice, there is. ...Jan van de Snepscheut


PostPosted: Sat May 15, 2021 4:52 am 
option8

Joined: Thu May 13, 2021 5:15 pm
Posts: 7
Quote:
How would the Python script produce "noticeably different" results?


A simple example:

Python:
Code:
>>> print(700.1-700)
0.1

Applesoft BASIC:
Code:
]?700.1-700
.0999999046


After poking around in BASIC, I've knocked together this proof-of-concept proof-of-work "algorithm":
Code:
nonce = 1
seed = 700
difficulty = .111
F = ((seed + 2*nonce + nonce/100) - (seed + 2*nonce + nonce/1000)) mod difficulty
the solution is the nonce where F = 0



The Applesoft BASIC code:
Code:
10 B = 1 : G = .111
20 A = B * 2
30 C = 700 + A + (B / 1000)
40 D = 700 + A + (B / 100)
50 E = D - C
60 F = E - INT(E / G) * G : REM F = E MOD G
70 IF F = 0 THEN GOTO 100
80 B = B + 1 : GOTO 20
100 PRINT B : END


eventually prints "15429"

In Python:
Code:
import math

nonce = 1
seed = 700
difficulty = .111

def pow():  # note: shadows Python's built-in pow()
    # D - C from the BASIC program; in exact arithmetic this is 0.009 * nonce
    result = ((seed + (2*nonce) + float(nonce)/100.0)
              - (seed + (2*nonce) + float(nonce)/1000.0))
    f = math.fmod(result, difficulty)  # F = result MOD difficulty
    return f

while pow() > 0:
    nonce += 1
else:  # runs once the loop condition fails, i.e. when f reaches 0
    print(nonce)


results in... well, I've let it run, and haven't been patient enough to let it get to a result.

If I add "print(f)" into the pow() function, it outputs
Code:
0.00900000000001
0.018
0.0269999999999
0.0359999999999
0.045
0.054
0.0630000000001
0.072
0.081
0.09
0.099
0.108
0.00599999999996
...
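
For comparison, a sketch in Python 3 using the fractions module - the same "proof of work" in exact rational arithmetic, with no rounding anywhere (an illustration, not part of the original experiment):
Code:
from fractions import Fraction

nonce = 1
seed = Fraction(700)
difficulty = Fraction(111, 1000)   # .111, exactly

def f(n):
    c = seed + 2*n + Fraction(n, 1000)
    d = seed + 2*n + Fraction(n, 100)
    return (d - c) % difficulty    # d - c is exactly 9n/1000

while f(nonce) != 0:
    nonce += 1
print(nonce)                       # -> 37

Exact arithmetic says every multiple of 37 is a solution. 15429 = 417 × 37, so the Applesoft run's rounded floats skip the first 416 exact solutions before happening to land on zero, and Python's doubles never land on it at all.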


PostPosted: Sat May 15, 2021 5:07 am 
enso

Joined: Sat Sep 29, 2012 10:15 pm
Posts: 904
How does it help you to find out if I ran your code on a real Apple 2 or an Apple simulation on my Linux x86 box?

BigEd, I concede. Perhaps we should be talking about how many cycles a Z80 takes vs. 6502, as this is going nowhere.

_________________
In theory, there is no difference between theory and practice. In practice, there is. ...Jan van de Snepscheut


PostPosted: Sat May 15, 2021 6:26 am 
GARTHWILSON

Joined: Fri Aug 30, 2002 1:09 am
Posts: 8545
Location: Southern California
option8 wrote:
Quote:
How would the Python script produce "noticeably different" results?

A simple example:
python -
Code:
>>> print(700.1-700)
0.1

AppleSoft BASIC -
Code:
]?700.1-700
.0999999046

I don't know the innards of either one; but it looks like the Python one is doing it either in decimal floating-point (more likely) or in binary scaled integer (less likely), whereas the AppleSoft BASIC is doing it in binary floating-point. The latter is off in the 7th digit which would be about right for 24 bits of mantissa, as .1 decimal, when converted to binary, is a repeating .000110011001100110011... There's no way to make the conversion exact. That has nothing to do with the processor though.
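
(A note on reproducing this: Applesoft's 40-bit float format actually carries a 32-bit significand; the error lands in the 7th digit of the result because subtracting 700 cancels the leading bits and exposes the representation error of 700.1. A minimal Python 3 sketch, assuming that stored-representation error dominates, reproduces the quoted digits:)
Code:
from math import frexp, ldexp

def to_sig_bits(x, bits=32):
    # Round x to a binary significand of `bits` bits (Applesoft keeps 32).
    m, e = frexp(x)                 # x == m * 2**e, with 0.5 <= |m| < 1
    return ldexp(round(m * (1 << bits)), e - bits)

stored = to_sig_bits(700.1)         # what a 32-bit-significand float holds
print(stored - 700)                 # 0.09999990463256836, i.e. .0999999046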

_________________
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?


PostPosted: Sat May 15, 2021 6:47 am 
enso

Joined: Sat Sep 29, 2012 10:15 pm
Posts: 904
option8, let's check the premises, as I suspect one of us is misunderstanding something.

Proof of work algorithms don't do what you think they do. They do not offer proof that any individual has done any work - not at all. They prove that, taken together, the participants perform a certain number of hashes on average. But an individual miner can get lucky on the first try, performing hardly any work, and still satisfy the system.

Nor do proof of work algorithms have anything to do with the speed of the participants. You could compute a winning result on an abacus - unlikely as that is. So your idea of a 6502 being slower than modern chips, or Python being faster, again has nothing to do with proof of work or proof of 6502.

You are focusing on rounding errors and 'quirks' as proof terms. That is not how proof of work algorithms work. The only important thing is the result: it does not matter how you got it, as long as it satisfies the system. They are not really algorithms; they are specifications, and you are free to code them any way you like and run them on any device you want. (See the sketch below.)
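
To make that concrete, here is a toy hashcash-style check in Python (an illustration only, not any real network's rule). The verifier just re-runs one hash; it neither knows nor cares what hardware found the nonce:
Code:
import hashlib

def satisfies(header: bytes, nonce: int, difficulty_bits: int) -> bool:
    """True if sha256(header || nonce) falls below the difficulty target."""
    digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

# Whoever submits a satisfying nonce wins - whether it took a warehouse
# of ASICs, a 6502, or one lucky first guess. Only the result counts.
print(satisfies(b"block header", 42, 16))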

I suggest you forget about 'proof of work', as it does pretty much the opposite of what you want. Nonces and difficulty have no business here - they exist for a very specific and very different reason.

How to prove that someone has a 6502? It is an entirely different problem from proof of work. You cannot ask them to solve some kind of problem, as that does not help (any processor can solve it). You don't care about statistical measures; you want to know whether a specific participant has access to a real 6502.

You seem to be ok with providing some code for them to run, apparently on an Apple 2 machine, and apparently, for some reason, they run it. Why? Do you trust their results? Proof of work exists explicitly to eliminate trust. What are you trying to accomplish, and what are the rules?

_________________
In theory, there is no difference between theory and practice. In practice, there is. ...Jan van de Snepscheut


PostPosted: Sat May 15, 2021 2:25 pm 
enso

Joined: Sat Sep 29, 2012 10:15 pm
Posts: 904
GARTHWILSON wrote:
I don't know the innards of either one; but it looks like the Python one is doing it either in decimal floating-point (more likely) or in binary scaled integer (less likely), whereas the AppleSoft BASIC is doing it in binary floating-point. The latter is off in the 7th digit which would be about right for 24 bits of mantissa, as .1 decimal, when converted to binary, is a repeating .000110011001100110011... There's no way to make the conversion exact. That has nothing to do with the processor though.


According to https://docs.python.org/3/tutorial/floatingpoint.html, Python uses binary floating point. The tutorial explains how adding 0.1 three times does not equal 0.3, and later advises:

Binary floating-point arithmetic holds many surprises like this. The problem with “0.1” is explained in precise detail below, in the “Representation Error” section. See The Perils of Floating Point for a more complete account of other common surprises.
...
Why is that? 1/10 is not exactly representable as a binary fraction. Almost all machines today (November 2000) use IEEE-754 floating point arithmetic, and almost all platforms map Python floats to IEEE-754 “double precision”. 754 doubles contain 53 bits of precision, so on input the computer strives to convert 0.1 to the closest fraction it can of the form J/2**N where J is an integer containing exactly 53 bits.


The Perils of Floating Point.
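
(To see the exact stored value directly: Python's decimal module can convert a float without rounding. A one-line illustration, added here for reference:)
Code:
from decimal import Decimal

# Decimal(float) converts exactly, exposing the true value behind 0.1
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625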

_________________
In theory, there is no difference between theory and practice. In practice, there is. ...Jan van de Snepscheut


PostPosted: Sat May 15, 2021 2:54 pm 

Joined: Mon May 12, 2014 6:18 pm
Posts: 365
EDIT: enso beat me to it but here's an example anyway:
GARTHWILSON wrote:
I don't know the innards of either one; but it looks like the Python one is doing it either in decimal floating-point (more likely) or in binary scaled integer (less likely)
Just for context, Python doesn't use decimal internally, probably because it would be much slower without hardware support. Here's a Python program to demonstrate the rounding error:
Code:
x = 0
for i in range(1000000):   # i.e. a FOR loop with 1,000,000 iterations
    x = x + 0.1
print(x)
x should be 100,000 after 1,000,000 iterations but is actually 100,000.00000133288.

Python has decimal numbers built in if you need them:
Code:
from decimal import Decimal

x = Decimal(0)
for i in range(1000000):
    x = x + Decimal("0.1")
print(x)
which results in x equal to 100,000 exactly.


PostPosted: Sat May 15, 2021 4:02 pm 
option8

Joined: Thu May 13, 2021 5:15 pm
Posts: 7
Quote:
option8, let's check the premises, as I suspect one of us is misunderstanding something.

Yeah. I've changed the rules a bit since I started down this road - and all the bickering (er, friendly banter) hasn't helped :)

I'm trying to work out a way of authenticating that some operation has been done on a vintage computer (Apple II) and not on a modern computer.

Ideally, this would also exclude emulators - but that's a different challenge. If the proof were based on some quirk specific to the 6502, it would apply to a wider range of participants, but limiting it to the Apple II by way of Applesoft, or something else specific in ROM, is acceptable.

The proof-of-concept I posted above is just that: it shows that a modern language running on a faster machine gives a different result for the very same calculation. The ultimate "proof of work" is still to be determined.


PostPosted: Sat May 15, 2021 4:19 pm 
BigEd

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10986
Location: England
There's a previous thread which touches on detectable differences between Basics, as seen in simple floating point calculations, linking in turn to related investigations and discoveries.

So if we were to find ourselves in front of a Basic prompt, and wondered which Basic it was, there are some ideas there for tests to run. (A bit like calculator forensics.)


PostPosted: Sat May 15, 2021 4:26 pm 
enso

Joined: Sat Sep 29, 2012 10:15 pm
Posts: 904
option8 wrote:
...
The proof-of-concept I posted above is just that - showing that a modern language running on a faster machine will give a different result, based solely on doing the same calculations. The ultimate "proof of work" is still to be determined.


No... what you empirically and anecdotally demonstrate (not prove, in any sense of the word) is that one implementation of floating point differs from another. You then propose to somehow identify that one is 'faster' and more 'modern', without any rational basis. This is not even wrong.

I am out.

_________________
In theory, there is no difference between theory and practice. In practice, there is. ...Jan van de Snepscheut


PostPosted: Sun May 16, 2021 9:58 am 
BigDumbDinosaur

Joined: Thu May 28, 2009 9:46 pm
Posts: 8508
Location: Midwestern USA
option8 wrote:
Quote:
How would the Python script produce "noticeably different" results?

A simple example:
python -
Code:
>>> print(700.1-700)
0.1

AppleSoft BASIC -
Code:
]?700.1-700
.0999999046

Very good. You have succeeded in proving that Applesoft BASIC, which uses a 32-bit or 40-bit float (depending on interpreter version), is slightly off compared to the IEEE-754 double precision used by Python when computing the same result. That is all you have proved. There is nothing there to say one was running on a 6502 and the other on an x86.

Pardon me if I seem dunderheaded, but what exactly are you trying to achieve here? Having followed this entire topic, I get the distinct impression of someone tilting at windmills. :D

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!

