C in a uC... what were they thinking??
Re: C in a uC... what were they thinking??
cbmeeks wrote:
... I have an unhealthy obsession with the Stooges ...
In 1988 my 65C02 got six new registers and 44 new full-speed instructions!
https://laughtonelectronics.com/Arcana/ ... mmary.html
Re: C in a uC... what were they thinking??
Quote:
According to animation historian Michael Barrier, Paul Julian's preferred spelling of the sound effect was "hmeep hmeep".
- GARTHWILSON
- Forum Moderator
- Posts: 8774
- Joined: 30 Aug 2002
- Location: Southern California
- Contact:
Re: C in a uC... what were they thinking??
cbmeeks wrote:
My whole point is that there is always going to be code bloat. There will always be someone better (who writes better, less-bloated code), etc. Additionally, my point is that it's silly to try and reinvent the world on everything you do. Did you write your own web browser to post to this forum? Did you write your own OS, network stack, etc.?
Quote:
I'm a fan of all of them for their remarkable brilliance
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?
Re: C in a uC... what were they thinking??
GARTHWILSON wrote:
As I remember, Larry was an excellent pianist and violinist.
Cat; the other white meat.
Re: C in a uC... what were they thinking??
I'm a little late to the discussion but I had a few thoughts:
I had a similar response when someone showed me how they were using C++ constructs to get the compiler to take variable assignments in code and turn them into pin flips on the MSP430 (similar class of microcontroller to what Arduino uses). The code he was getting was extremely small and fast. You can do a lot of really neat stuff with C++ for microcontrollers with no more bloat than you would with C.
I definitely disagree with you on that. Some people point out that C is a good fit for microcontrollers because it is often more efficient on certain systems. Even a $1 MSP430 with 16 registers can be a lot to juggle mentally and use perfectly efficiently. For a large program on a moderately complex microcontroller, the compiler will often produce assembly as good as or better than what you could do yourself. There are all sorts of tricks the compiler knows that you would not think of. One that sticks in my mind on the MSP430 is XORing a register with itself, since IIRC it takes fewer cycles than loading a 0. Compare that to loading a 0 from the 'constant generator' register, subtracting a register from itself, storing the 0 in firmware and loading it in, or whatever other trick you could use to get 0, and it is easy to see how the compiler saves cycles in ways you haven't thought of. Another example is the x86 architecture. In the 80s, writing your own assembly code could be more efficient than C, but compiler designers have had 30+ years to perfect what they do, and for large projects it would now be very hard to out-optimize the compiler by hand.
That is a good point. I have experimented with them a lot and they are useful, but there are things I don't think they can do (or I just haven't figured out how yet). For example, it would be wasteful to give each function that needs to be fast its own section of zero page. Instead, you could let them reuse the same chunk, but then you have a problem when one function calls a second function, because the callee will try to use the first function's zero-page chunk. I have not figured out how to make the macro tell the function whether it is being called from another function, and should therefore use a different part of zero page, or whether it is being called this time as a top-level function and should use the first chunk. If you were willing to have all or part of a function reproduced twice in your binary, you could write a program to automatically make a second copy of any function that is called as both a top-level and a second-level function, so that each copy uses a different part of zero page (assuming, of course, the binary-space/zero-page trade-off makes sense for you). You could also tell the program how much zero page you are willing to give up for functions, so it would not start duplicating functions (which bloats your binary) until it ran out of zero page.
Another helpful thing C compilers do is look for stretches of functions that are similar, or can be rearranged to be similar, so that only one copy is needed, saving space. These functions could be totally unrelated in your mind (bit-banging an SD card and driving an LCD), but the compiler will find coincidental similarities and keep only one version if the stretch is long enough. For chips with lots of registers this could involve changing which register holds one of the variables in the SD card routine so that it matches the LCD routine (not that it matters to the C programmer, since registers are invisible to him). Now imagine writing assembly code that does the same thing. You would need to create two or three versions of a function and call the right version in the right place (God help you if you change the function but only remember to update two of the three versions), while also factoring out and sharing the parts those versions have in common that don't depend on the different zero-page chunks, while also finding coincidental similarities across thousands of lines of completely unrelated functions that can be factored out as well. Even if you could do all that, a minor change to a function might force massive reorganization, since stretches of it may no longer match stretches elsewhere, while new sections acquire new similarities you now need to find. An assembly programmer could not write a large program that is as small and memory-efficient as the compiler's AND still easily readable by humans.
This got me thinking that it might be useful to keep the parts of cc65 that are fast and efficient and do the rest in assembly. I don't think it would be fun to test all of cc65's functionality to see what is worth keeping, though. It might be interesting to start a new project and implement only the parts of C that can be converted to assembly very efficiently. I think math calculations would be a good candidate. So would loops like "for", if your compiler were smart enough, for example, to see whether the X register held anything useful before the loop, and whether X was used for anything afterward, to know if X should be saved before the loop used it. Also, maybe having functions would allow for some of the optimizations I mentioned above. Admittedly, this might only appeal to a small group, since you would have to know/like both C and assembly.
BitWise wrote:
I'm always amazed that the generated code actually fits on an Arduino. The C++ libraries on UNIX and Windows generate masses of code bloat.
BigDumbDinosaur wrote:
Any high level language is inefficient when implemented on a small system, especially a µcontroller.
GARTHWILSON wrote:
Macros could definitely be putting a dent in the popularity of HLLs.
BitWise wrote:
Complex data structures, control logic (e.g. lots of nested loops and conditionals) and maths are a pain to write in assembler (even with the structured assembly directives in my assembler). A compiler reliably cranks out the code for these kinds of things and all in fraction of the time it would take to do them by hand. With the added benefit the source is normally smaller and simpler to understand.
Re: C in a uC... what were they thinking??
Druzyek wrote:
That is a good point. I have experimented with them a lot and they are useful, but there are things I don't think they can do ... it would be wasteful to give each function that needs to be fast its own section of zero page. Instead, you could let them reuse the same chunk but you have a problem when one function calls a second function ...
Stacks to the rescue. ("Stacks" is plural, and includes a virtual stack in ZP, and possibly other ones, not just the page-1 hardware stack.) Stacks remove the conflict of what functions get how much of ZP, and there's no need to reproduce functions. They can have their own environments, and even be recursive. http://wilsonminesco.com/stacks/
Quote:
Another helpful thing you could do that C compilers do is look for stretches of functions that are similar or can be rearranged to be similar so that you only need one copy of it, saving space ... It would not be possible for an assembly programmer to write a large program that is as small and memory efficient as the compiler's AND still easily readable by humans.
This sounds more like something for projects with multiple programmers, rather than the one-man projects we are probably all working on. We know what functions and routines we've written, and we don't write near-duplicates. I remember this was part of the story of the Apollo guidance computer, though: during development, it was found that different members of the team had written their own versions of routines because they had not coordinated with the others to avoid duplicate work. This was a problem because of the limited amount of ROM they had.
BitWise wrote:
Complex data structures, control logic (e.g. lots of nested loops and conditionals) and maths are a pain to write in assembler (even with the structured assembly directives in my assembler). A compiler reliably cranks out the code for these kinds of things and all in fraction of the time it would take to do them by hand. With the added benefit the source is normally smaller and simpler to understand.
In my assembly-language flow-control macros, the only small frustration (and it doesn't happen very often) is when you want something like

Code: Select all
If VIAPB's bit 6 is set AND foobar is less than $3F
    <do this stuff>
END_IF

but instead have to nest the conditions:

Code: Select all
IF_BIT  VIA1PB, 6, IS_SET
    LDA  FOOBAR
    CMP  #$3F
    IF_CARRY_CLR
        <do this stuff>
    END_IF
END_IF

which assembles this:

Code: Select all
        BIT  VIA1PB
        BVC  1$
        LDA  FOOBAR
        CMP  #$3F
        BCS  1$
        <do this stuff>
1$:     <continue>

The MSP430 however, with its 16 16-bit registers, will always be more suitable to C than the 6502 is, or probably even more than the 65816 too. I would really like for someone to improve upon cc65. As it is, it generates horribly inefficient code. Anyone with any experience at all can do much better in assembly.
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?
Re: C in a uC... what were they thinking??
GARTHWILSON wrote:
Stacks to the rescue. ("Stacks" is plural, and includes a virtual stack in ZP, and possibly other ones, not just the page-1 hardware stack.)
Quote:
This sounds more like something that has multiple programmers working on it, rather than one-man projects which we are probably all working on. We know what functions and routines we've done, and we don't write near-duplicate ones.
Quote:
Code: Select all
If VIAPB's bit 6 is set AND foobar is less than $3F
<do this stuff>
END_IF

Code: Select all
IF VIA1PB, 6, BIT_IS_SET, AND, FOOBAR, #3F, CMP_LT

Re: C in a uC... what were they thinking??
Oneironaut wrote:
Recently, I had the job of taking over someone's prototype development, and part of the system included C code to read RAW data from an SD card. Having recently written my own SD card code for the 6502, I figured it would be an easy project to do. It took me only a few hours to get SD access on the 6502 using assembly, and it is minimal code.
Well, after 2 days I finally gave up!
Bool this, struct that, misdirection, bloat, bloat, bloat!.... and for what!!??
Sorry, I just had to rant. Why would anyone want to write code like this??
I eventually downloaded the PIC assembly list and just coded the thing from scratch in 3 hours.
Didn't even know PIC assembly, and it was still easier than trying to follow C breadcrumbs!
The code went from at least 400 lines to about 30, and program memory usage was way down.
I have never been able to figure out why anyone thought that C on a small uC was a good idea.
Is there some draw to putting fluffy names on everything and simply writing a lot of code for nothing?
A uC can do nothing more than turn an IO on and off... it's just ones and zeros man!
Anyhow, I am so glad I am not doing it for a living.
Now back to 6502 assembly, where life is good!
Cheers!
Radical Brad

Terrible code can be written in any language. C is not immune.
Microcontrollers can do anything large computers can do. Google Turing machine and Turing complete.
In the last 40 years or so I have heard countless times how "language x will solve all our problems." No language can save us from bad programming.
Modern compilers for modern processors (the PIC is NOT modern, dating from about 1974/75) can do better than the vast majority of us.
Lines of source don't necessarily equate to bytes of code.
But the big question: why did anyone think C on a microcontroller was a good idea?
By 1955 people had already realized that assembly was harder than necessary. A lot of REALLY smart people put in a LOT of effort to make programming more efficient, likely at the expense of runtime efficiency. At a time when a computer with 4K words of memory was fairly large, they put in the effort to create languages like COBOL, FORTRAN, and even ALGOL. If you can find references on the web to Gier Algol, it is rather interesting. The machine the compiler ran on and targeted had 1K words of RAM. The compiler took something like ten passes to compile a program. But it was deemed worthwhile. Keep in mind this was before most of the theoretical science of compilers existed. It was not a trivial exercise.
Jump forward ten years to when the first UNIX version written in C (after first being written in assembly) came about. The entire system ran on a machine with 128K bytes of memory or less. The compiler could generate code for smaller machines. By that time, the early 70s, it was already accepted that MOST programs didn't benefit from being written in assembly. That was the preaching of most of the great names in CS: Wirth, Dijkstra, Brinch Hansen, and many others.
Today, we can get a 32-bit processor with 32K of Flash and 4K of RAM, built onto a board with all necessary components, for $4:
http://www.mouser.com/ProductDetail/Cyp ... PHYg%3d%3d That is a MUCH more powerful machine than the ones high level languages were first created for. Typically, those will be used in a fixed function device. If you can write the code in C (or some other HLL) and it serves its purpose, why not? The fact is that most people find it easier to write in some HLL than in assembly. The killer app for early micros (with about 4K bytes memory typical) was a high level language (BASIC) to make them easier to program. If you find assembly easier than any HLL, then that's great. But you are most definitely in the minority.
I enjoy writing assembly from time to time, and sometimes it is even necessary. I like to think I'm competent
http://smalltimeelectronics.com/project ... vrcog.html. But for day to day development, I am much more productive with C or C++. We are all different, with different tastes and talents. But the vast majority, perhaps 99% or more, will be more productive with a high level language than assembler.
Re: C in a uC... what were they thinking??
Druzyek wrote:
GARTHWILSON wrote:
Stacks to the rescue. ("Stacks" is plural, and includes a virtual stack in ZP, and possibly other ones, not just the page-1 hardware stack.)
Quote:
Quote:
This sounds more like something that has multiple programmers working on it, rather than one-man projects which we are probably all working on. We know what functions and routines we've done, and we don't write near-duplicate ones.
I'm always collating in my mind, factoring, simplifying, re-ordering, on the lookout for tail-call elimination possibilities, etc. Being so detail-oriented is a necessary part of being a good programmer. Unfortunately the PC world is anything but, where abuse of HLLs and compilers has resulted in a staggering level of bloat and inefficiency. Compilers remove the need for the programmer to be knowledgeable in every processor he programs for; but if you do a lot of assembly-language programming for one processor and get good at it, particularly one as simple as the '02 where you can really master it (unlike many more-complex processors), I do not believe for a moment that a compiler can surpass the human.
(That's not to say I do everything in assembly though. I definitely don't.)
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?
Re: C in a uC... what were they thinking??
GARTHWILSON wrote:
Druzyek wrote:
GARTHWILSON wrote:
Stacks to the rescue. ("Stacks" is plural, and includes a virtual stack in ZP, and possibly other ones, not just the page-1 hardware stack.)
Quote:
Trying to dynamically allocate ZP sounds like a recipe for trouble though, whether by hand or having the compiler do it.
Quote:
This sounds more like something that has multiple programmers working on it, rather than one-man projects which we are probably all working on. We know what functions and routines we've done, and we don't write near-duplicate ones.
Quote:
Unfortunately the PC world is anything but, where abuse of HLLs and compilers has resulted in a staggering level of bloat and inefficiency.
Quote:
Compilers remove the need for the programmer to be knowledgeable in every processor he programs for; but if you do a lot of assembly-language programming for one processor and get good at it, particularly one as simple as the '02 where you can really master it (unlike many more-complex processors), I do not believe for a moment that a compiler can surpass the human.
- commodorejohn
- Posts: 299
- Joined: 21 Jan 2016
- Location: Placerville, CA
- Contact:
Re: C in a uC... what were they thinking??
Druzyek wrote:
Right, that is a skill we should all try to develop. The problem is that it will never be enough for you to outperform an optimizing compiler on larger programs. On modern processors, it does all of those things better than you can.
-
White Flame
- Posts: 704
- Joined: 24 Jul 2012
Re: C in a uC... what were they thinking??
Yeah, when it comes to large programs, human design will win, because architectural improvements matter much more than micro-optimizations. But individual functions with large bodies, or complex loops, are certainly places where hand coding gets outrun by the compilers.
The biggest modern change is writing to the cache hierarchy, which has more to do with architecture than coding, designing your data structures to fit in cache lines and finagling what the overall access patterns will be.
Last edited by White Flame on Sat Oct 07, 2017 8:50 pm, edited 1 time in total.
Re: C in a uC... what were they thinking??
Quote:
Compilers definitely surpass humans on modern chips...
Or to put it more simply: garbage in = garbage out holds true even for the most sophisticated compiler.
6502 sources on GitHub: https://github.com/Klaus2m5
Re: C in a uC... what were they thinking??
commodorejohn wrote:
People keep making this assertion like it's a religious mantra, but I'm not convinced. There's nothing magic about large programs or optimizing compilers, and the only thing that makes "modern processors" particularly complicated is out-of-order execution (hell, even back in the Pentium days clever programmers were optimizing for multiple-issue.) And sure, a program can probably work out the particulars of instruction ordering with less tedium than a human, but then the next generation of CPUs comes out and it's all different anyways...
Compilers scale across volume. Compilers don't lose track of things. Compilers are much, much better at ferreting out arcane data paths in terms of deciding stuff NOT to compile, and NOT to include.
Consider C++, generic programming, and the standard library. It offers all of these wonderful control structures and other high level things, creates code for them all on the fly during compile, and then proceeds to throw away all the parts that you don't need. You literally pay for only what you use. As a developer you don't have to decide what is safe to toss out. The compiler knows and will do that for you.
Compilers enforce type systems that make large-system development safer and more feasible. Compilers don't get tired or lazy. When people think they know better and take shortcuts in the code, they're mostly right. When compilers do that, they're always right.
Having the compilers and runtime systems handle much of the lower-level minutiae of system development leads to safer, more secure software.
- barrym95838
- Posts: 2056
- Joined: 30 Jun 2013
- Location: Sacramento, CA, USA
Re: C in a uC... what were they thinking??
whartung wrote:
Compilers scale across volume. Compilers don't lose track of things. Compilers are much, much better at ferreting out arcane data paths in terms of deciding stuff NOT to compile, and NOT to include.
Consider C++, generic programming, and the standard library. It offers all of these wonderful control structures and other high level things, creates code for them all on the fly during compile, and then proceeds to throw away all the parts that you don't need. You literally pay for only what you use. As a developer you don't have to decide what is safe to toss out. The compiler knows and will do that for you ...
https://www.youtube.com/watch?v=zBkNBP00wJE&t=4300s
Mike B.