PostPosted: Tue Feb 12, 2019 1:17 am 

Joined: Fri Dec 11, 2009 3:50 pm
Posts: 3367
Location: Ontario, Canada
backspace119 wrote:
as simple an issue as it seems to be, it seems impossible to solve correctly.

As I said in the other thread, I may have to just switch to using wait states and (begrudgingly) buy a bunch of oscillator cans
I wonder if perhaps you're confusing two separate goals.

Changing the oscillator frequency is a good way to see how fast the computer itself can go (without the EEPROM). For that test you want a continuously variable clock. You'd test by starting up some sort of program, then you would manually increase the clock frequency little by little until the program crashes. During this test you could use either a trimpot-controlled oscillator (Garth & I each described one) or your idea of a digital potentiometer connected to an LTC6900 (which is a cool suggestion, BTW). Problem solved -- now we know how fast the computer (the CPU, decoding and memory) can run. Notice: for this testing, there's no need to change the frequency rapidly.

Next we move on to the goal of getting the EEPROM or other slow device working. And now we *do* need a way to change speed rapidly! Understand that, as a program executes, the CPU will issue a memory access every cycle, and the accesses are scrambled and unpredictable, including code fetches, zero-page, stack and I/O accesses, to name a few. Since these are accesses we don't want slowed down, the goal is to be ready for the occasional EEPROM access when it occurs, and only slow that access down. And there's no advance notice; the slowdown has to happen instantly (in less than one cycle).

A trimpot is too slow, and so is a digital potentiometer. You need logic that can extend the cycle during which the EEPROM is accessed, and it needs to happen automatically. Starting with the /CS from the slow device, you can use external logic to do that ("clock stretching") or you can use the RDY input, which uses internal CPU logic to accomplish basically the same thing. In other words, wait states. And in many cases a single wait state grants enough extra time, which means the circuit can be very simple indeed.
[Attachment: simple wait-state generator.gif]
(The circuit for two wait-states is also very simple.)
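
If it helps to see the behaviour in the abstract, here's a rough cycle-by-cycle sketch in Python. It models the idea of a single wait state driven by the slow device's /CS -- not the exact gates in the attached schematic -- so treat it as an illustration only.

Code:
# Behavioural model of a single-wait-state generator. One entry in `accesses`
# per PHI2 cycle; True means the address decodes to the slow device.
def run(accesses):
    wait_granted = False      # the flip-flop: did we already insert the wait state?
    for n, slow in enumerate(accesses, start=1):
        if slow and not wait_granted:
            rdy = False           # pull RDY low: the CPU repeats this bus cycle
            wait_granted = True   # remembered at the clock edge
        else:
            rdy = True            # normal cycle, or the repeat of a stalled one
            wait_granted = False
        print(f"cycle {n}: slow_device={slow} RDY={rdy}")

# Two fast cycles, one EEPROM access (stalled once, then repeated), one fast cycle.
run([False, False, True, True, False])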

-- Jeff

_________________
In 1988 my 65C02 got six new registers and 44 new full-speed instructions!
https://laughtonelectronics.com/Arcana/ ... mmary.html


PostPosted: Tue Feb 12, 2019 6:24 am 

Joined: Fri Jan 25, 2019 5:40 am
Posts: 346
Location: Knoxville, TN
After reading a couple of other threads on here and doing more research, I believe I'm going to skip the LTC6900 entirely. The added complexity doesn't really seem to be worth it; sure, I'll pay a few extra bucks for some extra cans, but I won't have to mess around with moving the phi2 timing around, which could badly impact other circuitry.

So much for a variable clock on this build. BUT! I do still want to try and figure this out, and maybe build another machine specifically to explore a variable clock; I'm certain it will have some use to someone at some point (maybe even me). So if anyone has good ideas, leave them here, and if I get another machine running I'll explore them.

My newest plan is to use a monostable circuit to pull RDY down for 150 ns when accessing the EEPROM, since that should be enough for most models. This seems like a cleaner approach, and should be much faster (it doesn't waste any time moving the clock around).

EDIT: whoa, there's another reply on a new page; reading that now, may edit this again


PostPosted: Tue Feb 12, 2019 6:29 am 

Joined: Fri Jan 25, 2019 5:40 am
Posts: 346
Location: Knoxville, TN
Dr Jefyll wrote:
I wonder if perhaps you're confusing two separate goals.

Changing the oscillator frequency is a good way to see how fast the computer itself can go (without the EEPROM). For that test you want a continuously variable clock. You'd test by starting up some sort of program, then you would manually increase the clock frequency little by little until the program crashes. During this test you could use either a trimpot-controlled oscillator (Garth & I each described one) or your idea of a digital potentiometer connected to an LTC6900 (which is a cool suggestion, BTW). Problem solved -- now we know how fast the computer (the CPU, decoding and memory) can run. Notice: for this testing, there's no need to change the frequency rapidly.

Thanks for this. I really think it is a pretty slick way to let the computer control its own frequency, but it just isn't the right tool for accessing slow devices.

Dr Jefyll wrote:
Next we move on to the goal of getting the EEPROM or other slow device working. And now we *do* need a way to change speed rapidly! Understand that, as a program executes, the CPU will issue a memory access every cycle, and the accesses are scrambled and unpredictable, including code fetches, zero-page, stack and I/O accesses, to name a few. Since these are accesses we don't want slowed down, the goal is to be ready for the occasional EEPROM access when it occurs, and only slow that access down. And there's no advance notice; the slowdown has to happen instantly (in less than one cycle).

A trimpot is too slow, and so is a digital potentiometer. You need logic that can extend the cycle during which the EEPROM is accessed, and it needs to happen automatically. Starting with the /CS from the slow device, you can use external logic to do that ("clock stretching") or you can use the RDY input, which uses internal CPU logic to accomplish basically the same thing. In other words, wait states. And in many cases a single wait state grants enough extra time, which means the circuit can be very simple indeed. (The circuit for two wait-states is also very simple.)

-- Jeff

I didn't see your post before posting my last one, but it seems I came to the same conclusion you were trying to lead me to. I'm hoping to use a monostable circuit, though, rather than clock cycles, to get exactly the amount of time I need for the wait state. Will this work?


PostPosted: Tue Feb 12, 2019 7:48 am 

Joined: Fri Jan 25, 2019 5:40 am
Posts: 346
Location: Knoxville, TN
Cross-posting here because this belongs here.

I can probably keep the variable clock with the 6900/1799, and use a monostable for the RDY generation, making it independent of the clock, which should solve all the issues.

Does anyone know if the 6900/1799 are well behaved when switching clock rates?

EDIT: I've been told wait states must be locked to the clock rate, so this won't work, but I can still generate wait states off the variable clock


PostPosted: Tue Feb 12, 2019 12:39 pm 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10976
Location: England
The clock speed of a design varies according to the parts you use, of course, but also according to temperature and voltage. So, if you have a finely-controllable supply voltage, such as from a bench power supply, you can to some extent explore the robustness of your design by varying voltage while keeping clock frequency constant. (It's less easy to control temperature.) For example, if your design runs reliably at 10MHz with a 5.1V supply but not with 5.0V, then it's marginal. If it runs fine at 4.0V then you've probably got some headroom.
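
If you want to keep track of where the margins lie, even a simple table of pass/fail results by supply voltage and clock frequency is enough. A quick sketch (the readings below are hypothetical placeholders, not measurements):

Code:
# Sketch: record bench-test pass/fail results by supply voltage and clock
# frequency, then report the lowest supply at which each frequency still
# passed. The entries below are hypothetical placeholders -- substitute
# your own measurements.
from collections import defaultdict

results = {
    # (supply_volts, clock_MHz): passed?
    (5.1, 10.0): True,
    (5.0, 10.0): False,   # marginal: fails with only a 0.1 V drop
    (5.0,  8.0): True,
    (4.0,  8.0): True,    # plenty of headroom at 8 MHz
}

lowest_passing = defaultdict(lambda: None)
for (volts, mhz), ok in results.items():
    if ok and (lowest_passing[mhz] is None or volts < lowest_passing[mhz]):
        lowest_passing[mhz] = volts

for mhz, volts in sorted(lowest_passing.items()):
    print(f"{mhz} MHz: lowest passing supply observed = {volts} V")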


PostPosted: Tue Feb 12, 2019 2:45 pm 

Joined: Fri Dec 11, 2009 3:50 pm
Posts: 3367
Location: Ontario, Canada
All good points, Ed. If we intend to operate beyond the manufacturer's rated specs then the burden of testing falls on us.

backspace119 wrote:
EDIT: I've been told wait states must be locked to the clock rate, so this won't work, but I can still generate wait states off the variable clock
Wait states off the variable clock -- yes, I think that's your best bet. And RDY would be used to generate the wait states.

I realize this falls somewhat short of your apparent goal, which seemingly is to fully optimize the speed of both EEPROM and non-EEPROM bus cycles. However, the tradeoff is a favorable one. You end up with a nice, simple circuit, and the non-EEPROM bus cycles *are* fully optimized. EEPROM bus cycles get somewhat optimized, with the sole restriction that the amount of extra time must be an integer multiple of one entire cycle. That may be no restriction at all, if the numbers happen to work out exactly. In the worst case you'll have almost one entire cycle of unnecessary delay, which is not the end of the world when you consider that EEPROM accesses are an occasional thing -- they don't necessarily dominate the overall performance of the computer, given that EEPROM accesses may be considerably outnumbered by non-EEPROM accesses (such as zero-page, stack, I/O etc).
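
To put a number on that worst case, the arithmetic looks like this (a sketch with assumed figures -- a 20 MHz clock, a guessed zero-wait-state budget and a 150 ns device -- purely for illustration):

Code:
import math

# How much extra time do whole-cycle wait states buy, and how much is wasted?
# All figures are assumptions chosen for illustration, not measurements.
cycle_ns      = 50.0    # one PHI2 cycle at 20 MHz
avail_zero_ws = 20.0    # access time available with zero wait states (assumed)
device_needs  = 150.0   # access time the slow device requires (assumed)

extra_needed = device_needs - avail_zero_ws
wait_states  = math.ceil(extra_needed / cycle_ns)   # must be whole cycles
granted      = avail_zero_ws + wait_states * cycle_ns
wasted       = granted - device_needs               # less than one cycle, worst case

print(f"{wait_states} wait state(s); {granted:.0f} ns granted, {wasted:.0f} ns unused")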

It is possible to reach your goal of optimizing the speed of both EEPROM and non-EEPROM cycles, but the performance boost compared to Plan A is likely to be small to none. Moreover, RDY can't achieve this, and the alternative (ie, clock stretching) demands logic that's considerably more complex than that required by RDY.

( RDY is generally used with a constant frequency applied to the CPU clock input... and when RDY is false, it tells the CPU to repeat the current bus cycle in its entirety. With clock stretching, RDY is tied high (ie, unused), and the signal applied to the CPU clock input is variable. But fully realizing your goals would require generating *two* independently and continuously variable clock pulses, and selecting between them on the fly. You'd have to ask yourself whether you're equal to the challenge, and whether the questionable advantage is worth the extra effort and the risk to your project's success. )

-- Jeff

_________________
In 1988 my 65C02 got six new registers and 44 new full-speed instructions!
https://laughtonelectronics.com/Arcana/ ... mmary.html


PostPosted: Tue Feb 12, 2019 3:25 pm 

Joined: Tue Nov 16, 2010 8:00 am
Posts: 2353
Location: Gouda, The Netherlands
If you're trying to optimize performance, it's also possible to copy the EEPROM contents to SRAM and then run from there.


PostPosted: Tue Feb 12, 2019 3:55 pm 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10976
Location: England
There's a whole research field of asynchronous design: other than one or two examples, it's really not how computers are designed to work these days. If you had the choice between a reliable machine of a certain performance, and an unreliable machine that ran 10% faster, which would you choose? Which is another way of saying, someone who says they want the maximum performance usually doesn't mean to say they will compromise correctness or reliability. If your filesystem gets trashed every two or three weeks so you need to recover from backup, that's probably not acceptable.


PostPosted: Tue Feb 12, 2019 5:55 pm 

Joined: Fri Dec 11, 2009 3:50 pm
Posts: 3367
Location: Ontario, Canada
Arlet, I'm glad you mentioned that. I wanted to touch on that point, but the post seemed overburdened already.

Ed, I agree that over-aggressive optimization raises the risk of serious grief due to compromised reliability. Allow me to point out (perhaps unnecessarily) that the "conservative vs aggressive" question is unrelated to the separate (but also risk-laden) question of whether to attempt a simple variable clocking scheme or a complex variable clocking scheme. Simple schemes (even mere oscillator swapping) can be approached conservatively or aggressively, and the same is true of complex schemes.

The ultimate scheme -- an asynchronous design -- clearly risks requiring more expertise and patience than is available from the OP and forumites wishing to assist. It is intriguing, but IMO it would be distinctly experimental in nature -- IOW, not something that you'd immediately want to commit to PCB artwork. :| And the amount of benefit is questionable anyway.

-- Jeff

_________________
In 1988 my 65C02 got six new registers and 44 new full-speed instructions!
https://laughtonelectronics.com/Arcana/ ... mmary.html


PostPosted: Tue Feb 12, 2019 6:45 pm 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10976
Location: England
Yes, sorry, I went off in an odd direction just there... reading backspace119's comments about using delay lines, I felt that perhaps before cooking up a novel approach to wait-states, it's best to be fully familiar with synchronous design, which means a good understanding of
- setup times
- hold times
- clock skew
- clock-to-Q delays
and how to calculate circuit safety using appropriate constraints and specifications. Otherwise, it's a case of trying to run before you can walk.
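
As a concrete instance of that kind of calculation, the usual single-clock-domain budget for one register-to-register path looks like this (the numbers are made-up examples, not figures for any particular part):

Code:
# Classic single-clock-domain check for one register-to-register path:
#   T_clock >= t_clock_to_Q + t_logic + t_setup + t_skew
# (treating clock skew as an uncertainty that eats into the budget).
# There is also a hold check: t_clock_to_Q(min) + t_logic(min) >= t_hold + t_skew.
# The numbers below are made-up examples, not datasheet values.

t_co    = 6.0   # clock-to-Q delay of the launching flip-flop, ns
t_logic = 18.0  # worst-case combinational delay between the registers, ns
t_su    = 4.0   # setup time of the capturing flip-flop, ns
t_skew  = 1.0   # clock skew / uncertainty between the two flip-flops, ns

t_min_period = t_co + t_logic + t_su + t_skew
f_max_mhz = 1000.0 / t_min_period
print(f"minimum clock period: {t_min_period} ns -> f_max about {f_max_mhz:.1f} MHz")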

(Now, plugging in a tried-and-tested approach is another matter, and much more likely to result in a satisfactory outcome.)


PostPosted: Tue Feb 12, 2019 8:12 pm 

Joined: Fri Jan 25, 2019 5:40 am
Posts: 346
Location: Knoxville, TN
Wow, a lot of replies while I was gone. Instead of quoting everyone, I'm just going to reply to everyone generally.

As far as knowing the ropes around the things you mentioned, BigEd: I know the math behind them and their definitions, but I've had no real-world practice with them, so a simpler approach may be better to keep me out of trouble.

For a variable clock with wait states, I think this is probably the simplest way to go. The only issue is that I'd like to use a max of 2 wait states, which, if my math is right, gets me 100 ns plus the propagation delay of the wait-state logic for my entire access time at 20 MHz. This is still a bit fast for slow EEPROMs, but I've been considering grabbing 70 ns NVRAM to replace the EEPROM, which would make this a non-issue.

As far as copying contents from EEPROM to SRAM goes: the goal is to copy (probably) the entire program from EEPROM to SRAM, but I would also like to use the EEPROM to store some data and have it be writable by the machine. This system data may work better being saved by the RTC, since it has some spare room IIRC. Either way, though, I don't want to back myself into a corner on the EEPROM, so I'd prefer to have a clean way of accessing it on the fly.

As far as testing goes, that is my intention: I want to test all of this, collect the data, and hopefully end up with a workable design at some fairly high speed.


PostPosted: Tue Feb 12, 2019 9:05 pm 

Joined: Fri Dec 11, 2009 3:50 pm
Posts: 3367
Location: Ontario, Canada
backspace119 wrote:
I'd like to use a max of 2 wait states, which, if my math is right, gets me 100 ns plus the propagation delay of the wait-state logic for my entire access time at 20 MHz.
Memory doesn't "see" the propagation delay of the wait state logic -- that's a separate issue, and not a critical one. (It's pretty easy to ensure that the low state of the decoder feeding the EEPROM's /CS input gets passed to the CPU RDY input soon enough to satisfy the minimal setup time RDY requires before PHI2 falls at the end of the cycle.)

If you're running at 20 MHz then each cycle is 50 ns, and, with zero wait states, the real-world memory access time available might be approximately 15 to 30 ns. (The specs won't eliminate the guesswork because, according to the specs, 20 MHz operation is impossible.)

Every wait state is one complete cycle (50 ns), so two wait states will increase the zero-wait-state access time by 100 ns, for a total of approximately 115 to 130 ns.
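
Put as arithmetic, using the same rough numbers (the 15 to 30 ns zero-wait-state figure is an estimate, not a spec value):

Code:
# Access time available to memory at 20 MHz as a function of wait states.
# The zero-wait-state figure is only an estimate (roughly 15 to 30 ns);
# the specs don't cover 20 MHz operation, so treat it as an assumption.
cycle_ns = 50.0                       # one cycle at 20 MHz
zero_ws_low, zero_ws_high = 15.0, 30.0

for wait_states in range(4):
    low  = zero_ws_low  + wait_states * cycle_ns
    high = zero_ws_high + wait_states * cycle_ns
    print(f"{wait_states} wait state(s): roughly {low:.0f} to {high:.0f} ns available")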

_________________
In 1988 my 65C02 got six new registers and 44 new full-speed instructions!
https://laughtonelectronics.com/Arcana/ ... mmary.html


PostPosted: Tue Feb 12, 2019 9:16 pm 

Joined: Fri Jan 25, 2019 5:40 am
Posts: 346
Location: Knoxville, TN
Dr Jefyll wrote:
backspace119 wrote:
I'd like to use a max of 2 wait states, which, if my math is right, gets me 100 ns plus the propagation delay of the wait-state logic for my entire access time at 20 MHz.
Memory doesn't "see" the propagation delay of the wait state logic -- that's a separate issue, and not a critical one. (It's pretty easy to ensure that the low state of the decoder feeding the EEPROM's /CS input gets passed to the CPU RDY input soon enough to satisfy the minimal setup time RDY requires before PHI2 falls at the end of the cycle.)

If you're running at 20 MHz then each cycle is 50 ns, and, with zero wait states, the real-world memory access time available might be approximately 15 to 30 ns. (The specs won't eliminate the guesswork because, according to the specs, 20 MHz operation is impossible.)

Every wait state is one complete cycle (50 ns), so two wait states will increase the zero-wait-state access time by 100 ns, for a total of approximately 115 to 130 ns.


What I meant was that the propagation delay of the wait-state logic would increase the wait-state time, which is actually incorrect now that I think about it. I do know that the memory (or anything else being accessed) won't ever see that delay, though.

I had done the math by halving the period to give me the time I have to access anything, which lands me at 25 ns. (Garth mentions this is only a slight oversimplification in his primer, and I believe this is because our access only falls in half the period, PHI2 high.) 25 ns is really incredibly hard to beat, with glue logic and all involved.

As far as the wait states go, I was planning on 3 wait states if using the 150 ns EEPROM, because that would give me 3 cycles plus a little, which should be enough for it. Instead, though, I'm looking at 70 ns NVRAM, since 3 wait states require two flip-flop chips where 2 wait states only need one.
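
Roughly, the wait-state count per part works out like this (same 20 MHz assumption, and the ~25 ns zero-wait-state figure is my own estimate rather than a datasheet number):

Code:
import math

# Minimum wait states needed per device at 20 MHz. The ~25 ns zero-wait-state
# figure is my own rough estimate; decode and glue delays eat into it, so
# round up if the result looks tight.
cycle_ns   = 50.0
zero_ws_ns = 25.0

for name, t_acc in [("150 ns EEPROM", 150.0), ("70 ns NVRAM", 70.0)]:
    ws = max(0, math.ceil((t_acc - zero_ws_ns) / cycle_ns))
    print(f"{name}: at least {ws} wait state(s) "
          f"({zero_ws_ns + ws * cycle_ns:.0f} ns available)")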


PostPosted: Tue Feb 12, 2019 10:16 pm 

Joined: Fri Jan 25, 2019 5:40 am
Posts: 346
Location: Knoxville, TN
I remember there being an issue with running wait states while the top 8 bits are demultiplexed by the circuit on the WDC datasheet (which I'm currently doing). I'm looking around for it now, but does anyone know what the issue is? I need to take care of it before I add in the wait states.


PostPosted: Tue Feb 12, 2019 11:21 pm 

Joined: Fri Jan 25, 2019 5:40 am
Posts: 346
Location: Knoxville, TN
I think I see the issue: the 245 may end up losing the data it has and storing address bits instead, because PHI2 will keep clocking but the CPU won't be doing anything. Am I correct on this? If so, I figure I'll have to qualify the CE of the 245 with the wait states/RDY line. Does this sound correct?

