Let me try again. A simulator of a processor is a detailed model of said processor, containing the same registers, datapaths, and instructions. When correctly implemented, there can be no detectable difference: a program executing on a simulator has the same execution path and side effects as the same program running on real hardware. (In fact, 'real' hardware is itself a simulation of binary logic, built with analog transistors on a silicon wafer.)
Given that, there is absolutely nothing you can do to tell whether you are running on 'real' hardware, on emulated virtual hardware, or on 100 guys with calculators. Any quirk can be faithfully simulated.
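To make that concrete, here's a toy sketch in Python (simplified memory model, not a full emulator) of a simulator reproducing a genuine hardware quirk bit-for-bit - the 6502's well-known JMP ($xxFF) page-wrap bug. A program observing this behaviour has no way to know whether it's talking to the silicon or to the model:

```python
# Illustrative sketch only: an emulator can reproduce a hardware quirk exactly.
# The 6502 bug: JMP ($xxFF) fetches the high byte of the target from the start
# of the *same* page instead of the next page.

def jmp_indirect_6502(memory, pointer):
    """Return the jump target, reproducing the page-wrap quirk."""
    lo = memory[pointer]
    if pointer & 0xFF == 0xFF:
        # Quirky behaviour: high byte comes from the start of the same page.
        hi = memory[pointer & 0xFF00]
    else:
        hi = memory[pointer + 1]
    return (hi << 8) | lo

memory = bytearray(0x10000)
memory[0x02FF] = 0x34   # low byte of target
memory[0x0200] = 0x12   # high byte the real chip (and the model) actually uses
memory[0x0300] = 0x56   # high byte a 'correct' CPU would have used
assert jmp_indirect_6502(memory, 0x02FF) == 0x1234
```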
The quirks you've brought up are bugs. There is no reason a BASIC floating-point division should give you a different result than Python's: if the IEEE 754 standard (from 1985) is followed carefully, both should be the same, especially since modern hardware handles floating point directly. And again, relying on undocumented bugs in poorly implemented software is probably not a good idea - and it has nothing to do with detecting what processor you are running on anyway.
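To illustrate with Python (any language using 64-bit IEEE 754 doubles would do), the quotient's exact bit pattern is fully determined by the standard, so a conforming BASIC dividing the same doubles must produce the identical bits:

```python
# Rough illustration: IEEE 754 double-precision division is fully specified,
# so any correct implementation (Python, a BASIC using doubles, or the FPU
# itself) must return the exact same 64-bit pattern.
import struct

def bits(x):
    """Hex dump of the 64-bit IEEE 754 representation of a float."""
    return struct.pack('>d', x).hex()

q = 1.0 / 3.0
print(q, bits(q))   # 0.3333333333333333 3fd5555555555555
# A differing result indicates a bug or a different float format
# (e.g. a 40-bit BASIC float), not a different CPU.
```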
P.S. There is nothing inherently 'faster' about Python as compared to BASIC. I wrote a BASIC implementation for the Atari ST that was pretty much as fast as C in most situations.
P.P.S. No such algorithm is possible. Any algorithm can be run on anything, including a simulation of an entirely different machine (which may itself be running inside another simulation). It's turtles all the way down with information theory.
The only hope is a 'side channel', as I mentioned before - something reasonably unique to a particular execution environment that the user might be coerced into providing, or might even volunteer in exchange for some benefit... the way I put up with my browser spying on me in exchange for the joy of arguing with BigEd here. Confirmed by statistics, perhaps. I would need more information.
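Purely as an illustration of what I mean by a side channel - the workload and the idea of comparing distributions here are my own made-up sketch, not a real fingerprinting scheme - timing statistics are an observation of the environment rather than of the instruction set:

```python
# Crude, illustrative sketch: time a fixed workload many times and look at the
# distribution. The point is only that timing is an *environmental* observation,
# not something the program can derive from its own instructions.
import statistics
import time

def timing_fingerprint(runs=200, n=50_000):
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        total = 0
        for i in range(n):          # fixed integer workload
            total += i * i
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples), statistics.stdev(samples)

mean, jitter = timing_fingerprint()
print(f"mean={mean:.6f}s  jitter={jitter:.6f}s")
# Comparing such distributions against known references *might* hint at the
# environment - but a sufficiently careful simulator can fake these too.
```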