wayfarer wrote:
NZQRC are the 5 types of 'algebraic' numbers, as I listed in my earlier post, they have historical use and are widely accepted....
Indeed. But it was not the "ℕℤℚℝℂ" part I was talking about, but this:
wayfarer wrote:
NZQRC Integer, Signed, Rational, High-Precision 'point' numbers (or Scientific Notation (OR|| Symbolic Real Numbers))....
Calling ℕ "integer" instead of "natural," and ℤ "signed integer" instead of "integer," is likely to confuse people, since that's not standard terminology. That's why I suggested looking at what Wikipedia calls these, which I think you'll find agrees with whatever that Google search you suggested gives you. (Only three of the nine results I get on the first page do, because the other six call them 自然数, 整数, etc., probably because I happen to be in Japan; it's worth remembering that Google doesn't give everyone the same results.)
And of course you need to remember that the terminology used for number representations in computers is often different, which may be where some of your confusion between mathematical natural numbers and integers is coming from. When talking about number representations in computer languages, in most contexts an "integer" is mathematically neither a natural number nor an integer, but a member of a modular arithmetic residue class.
So it's perfectly fine to call a 16-bit value holding a limited range of values from ℕ or ℤ an "integer," but it's not correct to say that it's of type ℕ or ℤ. (Calling it a "signed integer" or "unsigned integer" makes it clear that you're using the computing term, not the mathematical term, because "signed integer" would be redundant when describing ℤ and "unsigned integer" is not ℤ.)
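To make the residue-class point concrete, here is a minimal 6502 sketch (RESULT is just a made-up two-byte location, and the values are chosen purely for illustration): incrementing $FFFF as a 16-bit "unsigned integer" wraps around to $0000, i.e. the machine is doing arithmetic modulo 2¹⁶, which is something neither ℕ nor ℤ does.
Code:
        ; 16-bit add of $0001 to $FFFF, little-endian
        CLC
        LDA #$FF
        ADC #$01        ; low byte:  $FF + $01 = $00, carry set
        STA RESULT
        LDA #$FF
        ADC #$00        ; high byte: $FF + $00 + carry = $00, carry set again
        STA RESULT+1    ; 16-bit result is $0000: arithmetic mod 2^16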
Oh, and algebraic numbers are also a very different thing in standard math from the way you're using them: an algebraic number is a root of a non-zero polynomial with rational coefficients, so for example √2 is algebraic but π is not.
And why do I go on about this? Because I agree with you that:
wayfarer wrote:
establishing jargon/lingo is important.
(Though I would call it "terminology.")
It's worth noting that in some situations the differences between the mathematical representations and the computer representations are even greater. For example, in 6502 assembly language an 8-bit value is not inherently signed or unsigned, and even some of the instructions you use on it (`ADC`, `SBC`) are also not signed or unsigned. The signed/unsigned interpretation happens not in the byte itself, or when doing an `ADC` on the byte, but _when you check the flags._ Up to that point it's essentially both at the same time.
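Here is a minimal sketch of what I mean (A_VAL, B_VAL, SUM, and the branch targets are made-up names, not from any particular library): the `ADC` produces exactly the same byte either way, and only the flag you choose to branch on afterwards commits you to a signed or an unsigned reading.
Code:
        ; add two bytes; the ADC itself is neither signed nor unsigned
        CLC
        LDA A_VAL
        ADC B_VAL
        STA SUM
        ; unsigned interpretation: carry set means the result exceeded 255
        BCS unsigned_overflow
        ; signed interpretation: V set means the result left -128..127
        BVS signed_overflow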
Quote:
and are accurately represented by e^(iπ)= -1 (Euler's Identity or variations thereof) (not sure how to get superscript working here)
Sadly the support for math in this forum software is non-existent, so I use the Unicode superscripts and subscripts, which you can cut and paste from that Wikipedia page if you don't have an easy way of typing them at your keyboard. There are similar pages for blackboard bold (whence I got ℕ and so on) and other mathematical notations.
Quote:
I am really leaning to a symbolic maths/computer algebraic solver and want to say (X+Q) is a numerator. Or a denominator, so this would be in a way, a 'functional programming' idea to my understanding. IE, 'any term can be the output of an entire process'. That 'variable' can be entire programs in a Unix like environment with system calls and piping.
The "output of an entire process" in Unix-like environments with pipes is just a string, so that's already in pretty much all languages. The aspect of functional programming that you're thinking I think is reifying functions, which in many languages cannot be stored in a variable, passed around as a value, and operated upon. In Haskell, for example:
Code:
add1 x = 1 + x -- function definition
apply f x = f x -- function to apply a function to one argument
y = apply add1 3 -- y = 4, but same as "y = add1 3"
add2 x = (+) 2 x -- using (+) function in prefix form
add3 = (+) 3 -- "point free" style:
-- • "(+)" is a function that takes two arguments.
-- • "(+) 1" is a partial application, evaluating to a
-- function that takes one argument.
-- • The assignment makes add3 the same function that
-- takes one argument.
So if you take some time to wrap your head around that (the second paragraph there can be a bit difficult if you've not encountered this sort of thing before) you'll notice that in Haskell the whole line between "variables" and "functions" is blurred: in fact it's not even really there. When you say "x = 1", are you creating a symbol x bound to a value, or bound to a function that takes no arguments? Well, both, because they're the same thing! Functions are values, no different from numbers or characters or strings except in what you can do with them (e.g., apply them to another value).
Lisp works similarly, and might be an easier place to start on this sort of thing. Also, it is probably a lot easier to write a Lisp for the 6502 than a Haskell, if you wanted to go that direction.
Quote:
I think this is 'inline functions' as a math term or such and I would like to have the basis for this myself... never calculating a value and operating on logic and rules, only generating the actual 'digits' when required at run time when needed.
"Inlining a function" usually refers to a compiler replacing a call to a function with the body of a function in the compiled code. (Above, that would be changing `y = apply add1 3` to `y = 1 + 3` at the machine code level.)
I don't know if there's a particular term for what you seem to be referring to, but I'd call it just "doing the math algebraically, until you decide to do the calculations."
Quote:
at some level a data structure or Struct, should something like __SCINUM (operand)*10^(exponent)
the ability to construct that needs to be in place in any low level library, Scientific Notation should be 'trivial' to implement if all other aspects are in place, ie, floating or fixed point numbers, exponents, operands and multi term 'numbers'.
Again, terminology issue. You seem to be separating exponents from floating point, but exponents are an essential part of floating point, since that's what enables the decimal point to "float."
And again, scientific notation is orthogonal to this; you do not need it for floating point. A parser reading a floating point constant will produce the same value whether it reads "0.0000012" or "1.2e-6", and the same is true of "1200000000000000000000" and "1.2e21".
Quote:
I might grab some Functional Programming at some point, I have a lot on my plate right now. However, for a 65xx Maths library, making it 'support functional languages' and 'use symbolic maths' are kinda the same thing a little. A variable can be a function can be a term.
Well, that's precisely what you can learn from functional programming, and probably learn more quickly and easily than trying to work it out independently. You might try going through the first few chapters of The Little Schemer to get a quick intro. It starts out teaching the basics of list manipulation (which is worthwhile in itself!) and then quickly gets into higher-order functions (functions that operate on functions) and how they're used. And it's a fun book that, at least at the start, is pretty simple and quick to get through, though I warn everyone to be careful that they really understand each chapter before going on to the next, as it sometimes looks on the surface easier than it really is.
Quote:
As other mentioned 'arbitrary precision', a lot of why I am doing this is to better understand how that might be accomplished.
How to work with 'any sized number' and declare that precision during runtime.
That's relatively easy, though on 6502 it stays relatively easy only as long as you keep your numbers below 2000 bits or so. Basically, it's just like doing 16- or 32-bit arithmetic on the 6502 except you keep going until you're done rather than signalling overflow (or just quietly throwing away the carry) when you reach byte 2 or 4.
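Here is a minimal sketch of that idea, assuming little-endian operands at made-up locations SRC and DST of N bytes each (an illustration only, not code from any particular library): the loop counter is kept in X because DEX, INY, and BNE leave the carry flag alone, so the carry simply ripples from byte to byte for as many bytes as you care to process.
Code:
        ; DST := DST + SRC, both N-byte little-endian numbers
        LDX #N          ; number of bytes to process
        LDY #0          ; byte index, starting at the least significant byte
        CLC
loop:   LDA DST,Y
        ADC SRC,Y       ; carry ripples in from the previous byte
        STA DST,Y
        INY
        DEX             ; INY and DEX do not touch the carry
        BNE loop
        ; on exit, carry set means the true result needed N+1 bytes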
A while back I did a quick experiment with this, writing a routine to read a decimal number and convert it to a bigint up to around a hundred-odd bytes long. (That limit is due to the 255-character length limit on the input string; the routines manipulating the native encoding are good up to 255 bytes.) Reading a decimal number is not only handy in and of itself, but also brings in the first native operation you need: a multiply by ten. You can find the code here and the tests here.
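For anyone curious about the multiply by ten, here is a single-byte sketch of the usual shift-and-add trick (×10 = ×8 + ×2); the linked routine applies the same idea across a multi-byte bigint, so take this only as an illustration. TMP is a made-up scratch location, and overflow is ignored.
Code:
        ; multiply the value in A by ten (assuming the result fits in a byte)
        ASL A           ; A = x*2
        STA TMP         ; save x*2
        ASL A           ; A = x*4
        ASL A           ; A = x*8
        CLC
        ADC TMP         ; A = x*8 + x*2 = x*10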