PostPosted: Wed Jul 19, 2017 8:39 pm 

Joined: Fri Dec 11, 2009 3:50 pm
Posts: 3367
Location: Ontario, Canada
cbmeeks wrote:
... I have an unhealthy obsession with the Stooges ...

For me it's Wile E. Coyote! :D Always so optimistic with his inventions -- I really appreciate that indomitable spirit of creativity! (despite his utter ineptitude :roll: )

_________________
In 1988 my 65C02 got six new registers and 44 new full-speed instructions!
https://laughtonelectronics.com/Arcana/ ... mmary.html


PostPosted: Wed Jul 19, 2017 8:43 pm 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10986
Location: England
Quote:
According to animation historian Michael Barrier, Paul Julian's preferred spelling of the sound effect was "hmeep hmeep".


PostPosted: Wed Jul 19, 2017 8:48 pm 

Joined: Fri Aug 30, 2002 1:09 am
Posts: 8545
Location: Southern California
cbmeeks wrote:
My whole point is that there is always going to be code bloat. There will always be someone better (who writes better, less-bloated code), etc. Additionally, my point is that it's silly to try and reinvent the world on everything you do. Did you write your own web browser to post to this forum? Did you write your own OS, network stack, etc.?

Browsers and OSs are not part of my work. To me, they're just appliances. True, I choose my battles, and I don't even try to understand those. But if it's my project that I'm responsible for and my name goes on it, I won't take that approach.

Quote:
I'm a fan of all of them for their remarkable brilliance

As I remember, Larry was an excellent pianist and violinist.

_________________
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?


PostPosted: Wed Jul 19, 2017 8:52 pm 

Joined: Wed Aug 17, 2005 12:07 am
Posts: 1250
Location: Soddy-Daisy, TN USA
GARTHWILSON wrote:
As I remember, Larry was an excellent pianist and violinist.


That's especially true with the violin. As many people know, Larry accidentally spilled acid on himself when he was a small child. The doctor recommended that he take up violin lessons as physical therapy.

_________________
Cat; the other white meat.


PostPosted: Wed Oct 04, 2017 7:21 am 

Joined: Mon May 12, 2014 6:18 pm
Posts: 365
I'm a little late to the discussion but I had a few thoughts:
BitWise wrote:
I'm always amazed that the generated code actually fits on an Arduino. The C++ libraries on UNIX and Windows generate masses of code bloat.
I had a similar response when someone showed me how they were using C++ constructs to get the compiler to turn variable assignments in code into pin flips on the MSP430 (a similar class of microcontroller to what the Arduino uses). The code he was getting was extremely small and fast. You can do a lot of really neat stuff with C++ for microcontrollers with no more bloat than you would get with C.
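For the curious, the same zero-cost idea translates to our world as a macro: the named-pin write below expands to a few inline instructions instead of a function call. This is only a minimal sketch, assuming ca65-style .macro syntax; the VIA address and all the names here are made up.
Code:
VIA1PB = $6000          ; assumed VIA port B address

.macro SET_PIN port, bit
        LDA  port       ; read-modify-write the port register
        ORA  #(1 << bit)
        STA  port
.endmacro

        SET_PIN VIA1PB, 3   ; expands to three inline instructions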

BigDumbDinosaur wrote:
Any high level language is inefficient when implemented on a small system, especially a µcontroller.
I definitely disagree with you on that. Some people point out that C is actually a good fit for microcontrollers because of how efficient it can be on those systems. Even a $1 MSP430 with 16 registers can be a lot to juggle mentally and use perfectly efficiently. For a large program on a moderately complex microcontroller, the compiler will often produce assembly as good as or better than what you could do yourself. There are all sorts of weird things the compiler knows to do that you would not think of. One that sticks in my mind on the MSP430 is XORing a register with itself, since IIRC it takes fewer cycles than loading a 0. Compare that to loading a 0 from the 'constant generator' register, or subtracting a register from itself, or storing the 0 in firmware and loading it in, or whatever other weird thing you could do to get 0, and it is easy to see how the compiler saves cycles in ways you haven't thought of. Another example is the x86 architecture. In the 80s, writing your own assembly code could be more efficient than C, but compiler designers have had 30+ years to perfect what they do, and for large projects it would now be very hard to out-optimize the compiler by hand.
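A 6502-flavored analog of that kind of peephole trick, for what it's worth (65C02 assumed for STZ; FLAG is a hypothetical zero-page byte). A compiler that tracks register liveness knows exactly when the shorter form is safe; by hand, that is easy to get wrong.
Code:
FLAG = $10              ; hypothetical zero-page flag byte
; the obvious way: 4 bytes, 5 cycles, and it clobbers A
        LDA  #0
        STA  FLAG
; what a peephole pass can emit on a 65C02: 2 bytes, 3 cycles, A untouched
        STZ  FLAG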

GARTHWILSON wrote:
Macros could definitely be putting a dent in the popularity of HLLs.
That is a good point. I have experimented with them a lot and they are useful, but there are things I don't think they can do (or I just haven't figured out how yet). For example, it would be wasteful to give each function that needs to be fast its own section of zero page. Instead, you could let them reuse the same chunk, but you have a problem when one function calls a second function, because the second will try to use the first function's zero-page chunk. I have not figured out how to make the macro tell the function either that it is being called from another function, and should therefore use a different part of zero page, OR that it is being called this time as a top-level function and should use the first chunk of zero page. If you were willing to have all or part of a function reproduced twice in your binary, you could write a program to automatically make a second copy of any function that is called as both a top-level AND a second-level function, so each version uses a different part of zero page (assuming, of course, the binary-space/zero-page trade-off makes sense for you). You could also tell the program how much zero page you are willing to give up for functions, so it would not start making copies of functions (which bloats your binary) until it ran out of zero page.
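Until someone automates that, the by-hand version of the idea might look something like this (a sketch only, assuming ca65-style conditional assembly; every name is hypothetical): the call site states the nesting level and the macro picks a zero-page chunk.
Code:
ZP_TOP  = $10           ; scratch chunk for top-level invocations
ZP_NEST = $20           ; scratch chunk for nested invocations

.macro BUMP_COUNT level ; a "function as macro" needing one ZP byte
  .if level = 0
        INC  ZP_TOP     ; called at top level: first chunk
  .else
        INC  ZP_NEST    ; called from another function: second chunk
  .endif
.endmacro

        BUMP_COUNT 0    ; top-level call site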

Another helpful thing C compilers do is look for stretches of functions that are similar, or can be rearranged to be similar, so that only one copy is needed, saving space. These functions could be totally unrelated in your mind (bit-banging an SD card and driving an LCD), but the compiler will find coincidental similarities and keep only one version if the stretch is long enough. For chips with lots of registers, this could involve changing which register is used for one of the variables in the SD card routine so that it will look like the LCD routine (not that it matters to the C programmer, since registers are invisible to him). Now, imagine writing assembly code that would do the same thing. You would need to create two to three versions of a function and then call the right version in the right place (God help you if you change the function but only remember to update two of the three versions), while also chopping out and sharing the parts of those versions that don't depend on the different zero-page chunks, while also finding coincidental similarities in the thousands of lines of completely unrelated functions that could be chopped out and shared as well. Even if you could do that, you might still have to do massive reorganizing for even a minor change to a function, since stretches of it may no longer be similar to stretches of others, while new sections could have new similarities which you now need to find. It would not be possible for an assembly programmer to write a large program that is as small and memory-efficient as the compiler's AND still easily readable by humans.
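Done by hand, the merged output of such a pass might look like this contrived sketch: two unrelated wait routines that happen to share an identical countdown stretch, which is kept only once (all labels hypothetical).
Code:
SD_WAIT:  LDA  #$FF     ; unique setup: long delay for the SD card
          JMP  SPIN     ; jump into the shared stretch
LCD_WAIT: LDA  #$20     ; unique setup: short delay for the LCD
                        ; falls through into the shared stretch
SPIN:     SEC
          SBC  #1       ; shared countdown, kept only once
          BNE  SPIN
          RTS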

BitWise wrote:
Complex data structures, control logic (e.g. lots of nested loops and conditionals) and maths are a pain to write in assembler (even with the structured assembly directives in my assembler). A compiler reliably cranks out the code for these kinds of things, all in a fraction of the time it would take to do them by hand, with the added benefit that the source is normally smaller and simpler to understand.
This got me thinking that it might be useful to keep the parts of cc65 that are fast and efficient and do the rest in assembly. I don't think it would be fun to test all of the functionality of cc65 to see what is worth keeping, though. It might be interesting to start a new project and implement only the parts of C that can be converted to assembly very efficiently. I think math calculations would be a good candidate. So would loops like "for", if your compiler were smart enough, for example, to see whether the X register held anything useful before the loop, and whether X was used for anything afterward, to know if X should be saved before the loop uses it. Also, maybe having functions would allow for some optimizations like I mentioned above. Admittedly, this might only appeal to a small group, since you would have to know/like both C and assembly.
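To make the "for" loop point concrete, here are the two translations such a compiler would pick between, as a rough sketch (BODY is a hypothetical stand-in for the loop body):
Code:
BODY:   RTS             ; stand-in for the real loop body

; for (i = 0; i < 10; i++) with i in X, when X is dead around the loop:
        LDX  #0
LOOP1:  JSR  BODY
        INX
        CPX  #10
        BNE  LOOP1

; the same loop when X is live across it: spill and restore X
        TXA
        PHA
        LDX  #0
LOOP2:  JSR  BODY
        INX
        CPX  #10
        BNE  LOOP2
        PLA
        TAX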


PostPosted: Wed Oct 04, 2017 8:21 am 

Joined: Fri Aug 30, 2002 1:09 am
Posts: 8545
Location: Southern California
Druzyek wrote:
... it would be wasteful to give each function that needs to be fast its own section of zero page. Instead, you could let them reuse the same chunk, but you have a problem when one function calls a second function, because the second will try to use the first function's zero-page chunk. ... You could also tell the program how much zero page you are willing to give up for functions, so it would not start making copies of functions (which bloats your binary) until it ran out of zero page.

Stacks to the rescue. ("Stacks" is plural, and includes a virtual stack in ZP, and possibly other ones, not just the page-1 hardware stack.) Stacks remove the conflict of what functions get how much of ZP, and there's no need to reproduce functions. They can have their own environments, and even be recursive. http://wilsonminesco.com/stacks/
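The core of a ZP data stack is only a few instructions; with X reserved as the data-stack pointer, it's something like this (the labels are just for illustration):
Code:
PUSHA:  DEX             ; make room on the ZP stack
        STA  0,X        ; store A at the new top of stack
        RTS

PULLA:  LDA  0,X        ; fetch the top of stack into A
        INX             ; drop it
        RTS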

Quote:
Another helpful thing C compilers do is look for stretches of functions that are similar, or can be rearranged to be similar, so that only one copy is needed, saving space. ... It would not be possible for an assembly programmer to write a large program that is as small and memory-efficient as the compiler's AND still easily readable by humans.

This sounds more like something that has multiple programmers working on it, rather than one-man projects which we are probably all working on. We know what functions and routines we've done, and we don't write near-duplicate ones. I remember this was part of the story of the Apollo guidance computer, though: during the process, it was found that different members of the team had written their own versions of routines because they had not coordinated with the others to avoid duplicate work. This was a problem because of the amount of ROM they were limited to.

BitWise wrote:
Complex data structures, control logic (e.g. lots of nested loops and conditionals) and maths are a pain to write in assembler (even with the structured assembly directives in my assembler). A compiler reliably cranks out the code for these kinds of things, all in a fraction of the time it would take to do them by hand, with the added benefit that the source is normally smaller and simpler to understand.

In my assembly-language flow-control macros, the only small frustration (and it doesn't happen very often) is when you want something like
Code:
       If VIAPB's bit 6 is set AND foobar is less than $3F
           <do this stuff>
       END_IF
In that case, unless I sharpen my pencil and get serious about some more macro work (which I hope to do at some point anyway), I'd need an IF structure inside another IF structure. It might look something like:
Code:
        IF_BIT  VIA1PB, 6, IS_SET
            LDA  FOOBAR
            CMP  #$3F
            IF_CARRY_CLR
                <do this stuff>
            END_IF
        END_IF

which assembles to this:
Code:
        BIT  VIA1PB
        BVC  1$
        LDA  FOOBAR
        CMP  #$3F
        BCS  1$
        <do this stuff>
 1$:    <continue>


The MSP430 however, with its 16 16-bit registers, will always be more suitable to C than the 6502 is, and probably more than the 65816 too. I would really like for someone to improve upon cc65. As it is, it generates horribly inefficient code; anyone with any experience at all can do much better in assembly.

_________________
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?


PostPosted: Wed Oct 04, 2017 9:02 am 

Joined: Mon May 12, 2014 6:18 pm
Posts: 365
GARTHWILSON wrote:
Stacks to the rescue. ("Stacks" is plural, and includes a virtual stack in ZP, and possibly other ones, not just the page-1 hardware stack.)
True, but LDA zp is 3 cycles, whereas LDA zp,X is 4. The extra cycle you save may not always be worth the space you give up, but with a good compiler you would at least have the option of choosing size or speed.
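Side by side, for anyone following along (TEMP is a hypothetical dedicated ZP byte):
Code:
TEMP = $10
        LDA  TEMP       ; fixed ZP variable: 3 cycles, byte permanently claimed
        LDA  0,X        ; ZP,X stack access: 4 cycles, byte reusable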

Quote:
This sounds more like something that has multiple programmers working on it, rather than one-man projects which we are probably all working on. We know what functions and routines we've done, and we don't write near-duplicate ones.
No, that's not what I mean at all. All my projects have been one-man projects and I have seen big reductions when compiling for size. Whole functions and routines do not need to be near-duplicates for you to benefit. You just need stretches that the compiler can recognize as functionally similar, which is not always obvious to you since it may involve rearranging some things.

Quote:
Code:
       If VIAPB's bit 6 is set AND foobar is less than $3F
           <do this stuff>
       END_IF
That is neat! I should try to make a macro like that. Would you expand it like this:
Code:
IF VIA1PB, 6, BIT_IS_SET, AND, FOOBAR, #$3F, CMP_LT


PostPosted: Sat Oct 07, 2017 4:30 am 

Joined: Sun Oct 01, 2017 1:54 pm
Posts: 20
Oneironaut wrote:
Recently, I had the job of taking over someone's prototype development, and part of the system included C code to read RAW data from an SD card. Having recently written my own SD card code for the 6502, I figured it would be an easy project. It took me only a few hours to get SD access on the 6502 using assembly, and it is minimal code.

Well, after 2 days I finally gave up!
Bool this, struct that, misdirection, bloat, bloat, bloat!.... and for what!!??
Sorry, I just had to rant. Why would anyone want to write code like this??

I eventually downloaded the PIC assembly list and just coded the thing from scratch in 3 hours.
Didn't even know PIC assembly, and it was still easier than trying to follow C breadcrumbs!
The code went from at least 400 lines to about 30, and program memory usage was way down.

I have never been able to figure out why anyone thought C on a small uC was a good idea.
Is there some draw to putting fluffy names on everything and simply writing a lot of code for nothing?
A uC can do nothing more than turn an I/O on and off... it's just ones and zeros, man!

Anyhow, I am so glad I am not doing it for a living.
Now back to 6502 assembly, where life is good!

Cheers!
Radical Brad


Most of what I would say has already been said, but I will summarize my views briefly:
Terrible code can be written in any language; C is not immune.
Microcontrollers can do anything large computers can do. Google "Turing machine" and "Turing complete".
In the last 40 years or so I have heard countless times how "language X will solve all our problems." No language can save us from bad programming.
Modern compilers for modern processors (the PIC is NOT modern, dating from about 1974/75) can do better than the vast majority of us.
Lines of source don't necessarily equate to bytes of code.

But the big question: why did anyone think C on a microcontroller was a good idea?
By 1955 people had already realized that assembly was harder than necessary. A lot of REALLY smart people put in a LOT of effort to make programming more efficient, likely at the expense of runtime efficiency. At a time when a computer with 4K words of memory was fairly large, they put in the effort to create languages like COBOL, FORTRAN, and even ALGOL. If you can find references on the web to GIER ALGOL, it is rather interesting: the machine the compiler ran on and targeted had 1K words of RAM, and the compiler took something like ten passes to compile a program. But it was deemed worthwhile. Keep in mind this was before most of the theoretical science of compilers existed; it was not a trivial exercise.

Jump forward ten years, to when the first UNIX version written in C (after first being written in assembly) came about. The entire system ran on a machine with 128K bytes or less, and the compiler could generate code for smaller machines. By that time, the early 70s, it was already accepted that MOST programs didn't benefit from being written in assembly. That was the preaching of most of the great names in CS: Wirth, Dijkstra, Brinch Hansen, and many others.

Today, we can get a 32-bit processor with 32K of Flash and 4K of RAM, built onto a board with all necessary components, for $4:
http://www.mouser.com/ProductDetail/Cypress-Semiconductor/CY8CKIT-049-42XX/?qs=sGAEpiMZZMtIClGg9PMp4%2fOw7X%2fI83v%2fWc%2f7t3%2fqmz2hEinfsBPHYg%3d%3d
That is a MUCH more powerful machine than the ones high level languages were first created for. Typically, parts like that end up in a fixed-function device. If you can write the code in C (or some other HLL) and it serves its purpose, why not? The fact is that most people find it easier to write in some HLL than in assembly. The killer app for early micros (with about 4K bytes of memory typical) was a high level language (BASIC) to make them easier to program. If you find assembly easier than any HLL, that's great, but you are most definitely in the minority.

I enjoy writing assembly from time to time, and sometimes it is even necessary. I like to think I'm competent:
http://smalltimeelectronics.com/projects/vidovly/avrcog.html
But for day-to-day development, I am much more productive with C or C++. We are all different, with different tastes and talents, but the vast majority, perhaps 99% or more, will be more productive with a high level language than with assembler.


PostPosted: Sat Oct 07, 2017 4:45 am 

Joined: Fri Aug 30, 2002 1:09 am
Posts: 8545
Location: Southern California
Druzyek wrote:
GARTHWILSON wrote:
Stacks to the rescue. ("Stacks" is plural, and includes a virtual stack in ZP, and possibly other ones, not just the page-1 hardware stack.)
True, but LDA zp is 3 cycles, whereas LDA zp,X is 4. The extra cycle you save may not always be worth the space you give up, but with a good compiler you would at least have the option of choosing size or speed.

I suppose you could do that with a macro too. One of the things it would look at for its conditional assembly would be which way you want it. Similarly, in my '816 Forth, the header macro looks to see if you want normal headers or if you want locally or globally headerless code, and if headerless, omits the name and link fields (but still lays down the code field pointer). One line in the source code can change the behavior of one or more macros, at many invocations. Trying to dynamically allocate ZP sounds like a recipe for trouble though, whether by hand or having the compiler do it.
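Translated into ca65-style syntax (my assembler's syntax differs, and this sketch lays down only the name field), the pattern is roughly:
Code:
HEADERS = 1             ; this one line changes every invocation below

.macro HEADER name
  .if HEADERS
        .byte .strlen(name), name   ; name field, only when wanted
  .endif
                        ; code field would be laid down either way
.endmacro

        HEADER "DUP"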

Quote:
Quote:
This sounds more like something that has multiple programmers working on it, rather than one-man projects which we are probably all working on. We know what functions and routines we've done, and we don't write near-duplicate ones.
No, that's not what I mean at all. All my projects have been one-man projects and I have seen big reductions when compiling for size. Whole functions and routines do not need to be near-duplicates for you to benefit. You just need stretches that the compiler can recognize as functionally similar, which is not always obvious to you since it may involve rearranging some things.

I'm always collating in my mind, factoring, simplifying, re-ordering, on the look-out for tail-call elimination possibilities, etc. Being so detail-oriented is a necessary part of being a good programmer. Unfortunately the PC world is anything but, where abuse of HLLs and compilers has resulted in a staggering level of bloat and inefficiency. Compilers remove the need for the programmer to be knowledgeable in every processor he programs for; but if you do a lot of assembly-language programming for one processor and get good at it, particularly one as simple as the '02 where you can really master it (unlike many more-complex processors), I do not believe for a moment that a compiler can surpass the human.
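Tail-call elimination being the classic example (FOO stands in for any subroutine):
Code:
        JSR  FOO        ; a subroutine whose last act is calling another...
        RTS
; ...can be rewritten so FOO's own RTS returns to our caller:
        JMP  FOO        ; saves a byte, 9 cycles, and 2 stack bytes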

(That's not to say I do everything in assembly though. I definitely don't.)

_________________
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?


PostPosted: Sat Oct 07, 2017 5:41 pm 

Joined: Mon May 12, 2014 6:18 pm
Posts: 365
GARTHWILSON wrote:
Druzyek wrote:
GARTHWILSON wrote:
Stacks to the rescue. ("Stacks" is plural, and includes a virtual stack in ZP, and possibly other ones, not just the page-1 hardware stack.)
True, but LDA zp is 3 cycles, whereas LDA zp,X is 4. The extra cycle you save may not always be worth the space you give up, but with a good compiler you would at least have the option of choosing size or speed.
I suppose you could do that with a macro too.
You could pick between LDA zp and LDA zp,X with a macro, but you could not use macros to rearrange firmware, as I was describing above, to keep ZP free and reusable. For large programs, a compiler does this kind of thing better than a human can, with or without macros.

Quote:
Trying to dynamically allocate ZP sounds like a recipe for trouble though, whether by hand or having the compiler do it.
Right, too complicated for a person, but compilers can easily do this kind of thing. Some programs run 2-3 times faster with speed optimizations, and I think a good compiler could manage ZP better than a person.

Quote:
Quote:
Quote:
This sounds more like something that has multiple programmers working on it, rather than one-man projects which we are probably all working on. We know what functions and routines we've done, and we don't write near-duplicate ones.
No, that's not what I mean at all. All my projects have been one-man projects and I have seen big reductions when compiling for size. Whole functions and routines do not need to be near-duplicates for you to benefit. You just need stretches that the compiler can recognize as functionally similar, which is not always obvious to you since it may involve rearranging some things.
I'm always collating in my mind, factoring, simplifying, re-ordering, on the look-out for tail-call elimination possibilities, etc.
Right, that is a skill we should all try to develop. The problem is that it will never be enough for you to outperform an optimizing compiler on larger programs. On modern processors, it does all of those things better than you can.

Quote:
Unfortunately the PC world is anything but, where abuse of HLLs and compilers has resulted in a staggering level of bloat and inefficiency.
Agreed

Quote:
Compilers remove the need for the programmer to be knowledgeable in every processor he programs for; but if you do a lot of assembly-language programming for one processor and get good at it, particularly one as simple as the '02 where you can really master it (unlike many more-complex processors), I do not believe for a moment that a compiler can surpass the human.
Compilers definitely surpass humans on modern chips, but I don't think we know yet whether that is true for compiling larger programs on the 6502, even if it's true for short routines. Part of the advantage of HLLs seems to me to be that abstraction sets certain rules, which lead to certain assumptions, and given those assumptions we can optimize in ways that would be difficult in assembly. The question is whether the inefficiencies that might be necessary to implement all of C, given the architecture of the 6502, would outweigh the gains you would get from optimization.


PostPosted: Sat Oct 07, 2017 6:07 pm 

Joined: Thu Jan 21, 2016 7:33 pm
Posts: 282
Location: Placerville, CA
Druzyek wrote:
Right, that is a skill we should all try to develop. The problem is that it will never be enough for you to outperform an optimizing compiler on larger programs. On modern processors, it does all of those things better than you can.

People keep making this assertion like it's a religious mantra, but I'm not convinced. There's nothing magic about large programs or optimizing compilers, and the only thing that makes "modern processors" particularly complicated is out-of-order execution (hell, even back in the Pentium days clever programmers were optimizing for multiple-issue.) And sure, a program can probably work out the particulars of instruction ordering with less tedium than a human, but then the next generation of CPUs comes out and it's all different anyways...


PostPosted: Sat Oct 07, 2017 8:34 pm 

Joined: Tue Jul 24, 2012 2:27 am
Posts: 679
Yeah, when it comes to large programs, human design will win, because architectural improvements matter much more than micro-optimizations. Individual functions with large bodies, or complex loops, are certainly the places where hand coding gets outrun by the compilers.

The biggest modern change is writing for the cache hierarchy, which has more to do with architecture than coding: designing your data structures to fit in cache lines and finagling what the overall access patterns will be.

_________________
WFDis Interactive 6502 Disassembler
AcheronVM: A Reconfigurable 16-bit Virtual CPU for the 6502 Microprocessor


Last edited by White Flame on Sat Oct 07, 2017 8:50 pm, edited 1 time in total.

PostPosted: Sat Oct 07, 2017 8:44 pm 

Joined: Sat Jul 28, 2012 11:41 am
Posts: 442
Location: Wiesbaden, Germany
Quote:
Compilers definitely surpass humans on modern chips...
Compilers are also written by humans, so sometimes some humans surpass other humans by knowledge and experience. However, it is much harder for the human writing a compiler to anticipate all the stupid coding of another human.

Or to put it more simply: garbage in = garbage out holds true even for the most sophisticated compiler.

_________________
6502 sources on GitHub: https://github.com/Klaus2m5


PostPosted: Sat Oct 07, 2017 9:19 pm 

Joined: Sat Dec 13, 2003 3:37 pm
Posts: 1004
commodorejohn wrote:
People keep making this assertion like it's a religious mantra, but I'm not convinced. There's nothing magic about large programs or optimizing compilers, and the only thing that makes "modern processors" particularly complicated is out-of-order execution (hell, even back in the Pentium days clever programmers were optimizing for multiple-issue.) And sure, a program can probably work out the particulars of instruction ordering with less tedium than a human, but then the next generation of CPUs comes out and it's all different anyways...

There is something magic about compilers and large programs -- cognitive load. Compilers don't suffer from it, but humans do.

Compilers scale across volume. Compilers don't lose track of things. Compilers are much, much better at ferreting out arcane data paths in terms of deciding stuff NOT to compile, and NOT to include.

Consider C++, generic programming, and the standard library. It offers all of these wonderful control structures and other high-level things, creates code for them all on the fly during compilation, and then throws away all the parts that you don't need. You literally pay for only what you use. As a developer, you don't have to decide what is safe to toss out; the compiler knows and will do that for you.

Compilers enforce type systems that make large-system development safer and more feasible. Compilers don't get tired or lazy. When people think they know better and take shortcuts in the code, they're mostly right; when compilers do that, they're always right.

Having the compilers and runtime systems handle much of the lower-level minutiae that goes into system development can lead to safer, more secure software.


PostPosted: Sun Oct 08, 2017 4:12 am 

Joined: Sun Jun 30, 2013 10:26 pm
Posts: 1949
Location: Sacramento, CA, USA
whartung wrote:
Compilers scale across volume. Compilers don't lose track of things. Compilers are much, much better at ferreting out arcane data paths in terms of deciding stuff NOT to compile, and NOT to include.

Consider C++, generic programming, and the standard library. It offers all of these wonderful control structures and other high-level things, creates code for them all on the fly during compilation, and then throws away all the parts that you don't need. You literally pay for only what you use. As a developer, you don't have to decide what is safe to toss out; the compiler knows and will do that for you ...

Have you seen Jason Turner's neat presentation which illustrates several of your points? The x86 to 6502 translation is very simple-minded, but the C++ to x86 bit is very clever, especially in the optimizing magic surrounding "const".

https://www.youtube.com/watch?v=zBkNBP00wJE&t=4300s

Mike B.

