PostPosted: Thu Aug 19, 2021 6:24 am 

Joined: Sat Dec 01, 2018 1:53 pm
Posts: 730
Location: Tokyo, Japan
The RC2014 bus has just one clock signal, CLK, used for all system timing. The RC6502 bus is a variant that added two more clock lines, so it has:
  • Φ0 (pin 21, the same pin as RC2014 CLK), which is the clock input to CPU pin 37, if an external oscillator is used. (Presumably this line is unused if there's no external oscillator feeding the CPU, though that's not entirely clear as there's no real documentation about this.)
  • Φ1 (pin 23), the inverted Φ2 output from CPU pin 3.
  • Φ2 (pin 19), the clock output from CPU pin 39 which is used for all timing (including RWB qualification) except for the input to the CPU itself.

In some recent discussions about the RC6502 bus, we've been talking about which of these we can drop to free up bus lines for other purposes. Φ0 has to stay because some CPU boards have the CPU take its clock from a different board. Φ1 seems unnecessary (one can get it by inverting Φ0). I am uncertain, however, about whether Φ2 is really necessary.

What would be nicest is if the bus reverted to the RC2014 arrangement where there's only one clock line. In the cases where the CPU's internal clock is used (via a crystal or RC network on pins 37 and 39 of a 6502), where the CPU doesn't have a clock output (MOS 6510), or where the vendor specifically says to avoid using that output (WDC W65C02), it's easy: there is only one clock signal that can be put on the bus.

The sticky case is CPUs that are driven by an external clock but have their own clock output, which is of course always at least slightly delayed from the clock input. Opinion seems to be divided on which clock to use.

The modern WDC W65C02 has a clock output, but the data sheet makes it pretty clear that it ought not be used: under the timing diagram it says, "PHI1O and PHI2O clock delay from PHI2 [input] is no longer specified or tested and WDC recommends using an oscillator for system time base and PHI2 processor input clock." It doesn't say whether the concern is too little or too much delay on PHI2O.

On older systems (which is really the main area of interest here) it's not unusual to see an external clock on the Φ0 input but the CPU's Φ2 output used for system timing (including RWB qualification). Some examples:

On the other hand, as well as newer systems, which I guess would generally use Φ0 as the system clock per WDC's advice, plenty of older systems also seem to do this, leaving the CPU's Φ2 output unconnected. These include the Apple 1, Apple II and Apple IIc. And of course the C64 must do it this way since its 6510 CPU has no Φ2 output. The same is true of any systems using some of the MOS 6500 family CPUs that were available from the very start, including the 6501 and the 6512 through 6515.

Particularly given that the 6501 and 6512 through 6515 had to work properly using Φ0 as the system clock, it seems to me implausible that the internal CPU design was different enough between those and the 6502 that the 6502 wouldn't work in the same way. But that's just my thinking, not based on any actual knowledge of chip design.

Does anybody have any thoughts on this, or any experience with issues using Φ0? Under what circumstances is it reasonable to say that you really should use Φ2 over Φ0?

_________________
Curt J. Sampson - github.com/0cjs


PostPosted: Thu Aug 19, 2021 6:26 am 

Joined: Sat Dec 01, 2018 1:53 pm
Posts: 730
Location: Tokyo, Japan
Just in case it's helpful, here's a crystal oscillator external clock to the Φ0 input (yellow) vs. the 6502 Φ2 output (cyan) on an old NMOS 6502. (That's the Φ1 output in pink below.) I don't know the exact manufacturer or date of this CPU because it's been relabeled, but from the current draw (>100 mA) it's definitely NMOS. You can see that there's considerable delay there, though most of it comes from the slow ramp-up of the leading edge; there's little delay on the falling edge.

That's from my RC6502 Apple 1 clone SBC board. I've got some more timing examinations of Φ0 vs. Φ2 vs. RWB on that board that I'll put together and post in a subsequent message, in case that provides any illumination on this topic. (Or maybe in a different topic if I'm finding myself too confused about them.)


Attachments:
rigol.png

_________________
Curt J. Sampson - github.com/0cjs
PostPosted: Thu Aug 19, 2021 7:22 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8507
Location: Midwestern USA
Interesting question!

A fundamental aspect of 6502-powered computers is that the clock is the final authority on all timing, even when using asynchronous hardware, e.g., async static RAM. Therefore the quality and stability of the clock source, as well as its correct usage, are paramount in achieving the best stability and performance.

My advice is to use a "can" oscillator to generate the Ø2 clock, and distribute that clock to all devices that refer to it. A stable "roll your own" oscillator circuit involving a crystal resonator and supporting passives can be tricky to implement and, in some cases, may fail to work due to voltage and/or temperature fluctuations. The manufacturers of can oscillators have solved these problems and guarantee their devices will operate properly over the full voltage and temperature range specified for the device. A can oscillator costs only slightly more than the parts needed for a "roll your own" circuit and, if using a half-can oscillator, takes up no more space on the PCB.

Per WDC recommendations, the two clock outputs on the MPU should not be used in a new design. Those outputs were used in older systems in which the Ø2 signal was generated by a crystal circuit attached to the MPU's clock input. That arrangement couldn't tolerate much loading, hence the two clock outputs.

At the relatively low speeds NMOS 6502 systems run, the lag between clock in and clock out is inconsequential (also note that the two outputs aren't necessarily 180° out of phase, due to variations in internal prop time). After the introduction of the 65C02, as Ø2 rates went into double digits, the lag between clock in and clock out became significant and timing problems often arose. This is what led to WDC's recommendation¹ to not use the clock outputs and instead drive everything from an oscillator.

Once Ø2 goes into double digits, clock symmetry becomes a consideration. Can oscillators don't necessarily produce a 50 percent duty cycle output. If the clock is sufficiently asymmetric and at a high enough frequency, MPU timing violations will occur, which will be difficult to identify and correct. Asymmetry can be corrected by running the oscillator's output through a flip-flop, which will assure a 50 percent duty cycle. Plus, if Ø1 is needed, as would be the case in a 65C816 unit for latching the bank bits, it can be derived from the same flop at no additional expense. In this arrangement, Ø1 out and Ø2 out will be exactly 180° out-of-phase (as seen on a 'scope or logic analyzer).
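
(A toy model of the divide-by-two idea, to make the duty-cycle point concrete. This is my own sketch, not the POC schematic: a flip-flop with /Q fed back to D toggles only on rising edges of the oscillator, so Ø2 comes out at half the oscillator frequency with an exactly 50 percent duty cycle, and the complementary output gives Ø1 for free.)

Code:
    # Toy model of the divide-by-two clock generator idea (one half of a
    # 74AC74 with /Q fed back to D).  Not the actual POC circuit, just an
    # illustration: the flop toggles only on rising edges of the oscillator,
    # so PHI2 has an exact 50% duty cycle even if the oscillator doesn't.

    def divide_by_two(osc_samples):
        """osc_samples: 0/1 samples of the oscillator.  Returns (phi2, phi1)."""
        q, prev = 0, 0
        phi2, phi1 = [], []
        for level in osc_samples:
            if prev == 0 and level == 1:   # rising edge clocks the flop
                q ^= 1                     # /Q -> D feedback makes it toggle
            prev = level
            phi2.append(q)                 # Q  output -> PHI2
            phi1.append(1 - q)             # /Q output -> PHI1, exact complement
        return phi2, phi1

    # A deliberately asymmetric oscillator: 60% high, 40% low each period.
    osc = ([1] * 6 + [0] * 4) * 4
    phi2, phi1 = divide_by_two(osc)
    print("osc :", osc)
    print("PHI2:", phi2)   # half the frequency, exactly 50% duty cycle
    print("PHI1:", phi1)   # 180 degrees out of phase with PHI2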

Here's the clock generator I used in POCs V1.0, V1.1 and V1.2:

Attachment:
File comment: Two-Phase Clock Generator
clock_gen_2phase.GIF

An augmented clock generator is used in POCs V1.3 and V2.0:

Attachment:
File comment: Stretchable Two-Phase Clock Generator
clock_gen_2phase_stretched.gif

The above version can be stretched on the high phase to produce one or two wait-states, depending on how jumper JP1 is set. The wait-state is "armed" when the /WSE signal is driven low during GCLK ("global clock") low. When GCLK goes high the wait-state commences. PHI1 will stay low and PHI2 will stay high for one or two GCLK cycles, according to how JP1 is configured. The MPU will halt with all outputs "frozen" until the next fall of GCLK.

Note the use of a 74AC74 in both circuits to produce the clock outputs. WDC MPUs require a clock input that swings from ground to VCC and has no more than 5 nanosecond rise/fall time. Other CMOS logic types, although able to meet the output amplitude requirement, may not be able to meet the rise/fall times.

The 74AC74 can source/sink 24 mA, producing a robust clock with a rise/fall time well below the maximum of 5 nanoseconds. However, the extremely fast edges will likely give rise to some ringing, which can be suppressed by various means. Series resistance placed physically close to the flop's outputs will help, as illustrated in the "stretchable" clock circuit.

GCLK is not affected by wait-stating, since it is driven by the Q output of flop U2a. In addition to supplying the time base for the 74AC109 J-K flop, GCLK is meant for use with 65xx peripherals, especially the 65C22. Were the latter's Ø2 input to be connected to PHI2 in the above circuit, the C22's timers would stop any time a wait-state was in effect, even though the C22 wasn't being selected at the time.

Given that all operations are slaved to the rise and fall of the clock, interesting problems can arise if the read/write circuits are not correctly implemented. As a general rule, writes should not be allowed while Ø2 is low. During Ø2 low, address bus transitions may momentarily cause false chip selects, or may momentarily select the right device but the wrong address within the device. If writing is enabled during that time, data corruption is likely to occur. Hence writing must always be qualified by Ø2, with one exception: if the target device is a 65xx peripheral, its RWB input must be directly connected to the MPU's RWB output; it must not be qualified with Ø2. Also, the chip and register selects must be stable prior to the rise of Ø2.

In my POC units, RAM, ROM and I/O all have separate /RD inputs and, in the case of RAM and I/O, /WD control inputs (aka /OE and /WE). Therefore, these signals have to be derived from the MPU's RWB output, using Ø2 to qualify them.

Attachment:
File comment: Fully-Qualified Read/Write Generation
read_write_qualify_alt.gif

This entire circuit can be implemented on a single quad NAND—the fourth gate can be used to invert the MPU's low-true reset to produce a high-true equivalent for devices that need it (e.g., the NXP 28L92 dual UART). It illustrates use of a 74AC00 but other CMOS types could be used, 74AHC00², for example. 74HC00 in this application is okay up to about 10-12 MHz, beyond which prop delay may become a concern.
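
(For anyone who'd rather see the gating as a truth table, here's a minimal sketch of one common arrangement. It's my own rendering of the idea, not necessarily gate-for-gate identical to the schematic above: three gates of the quad NAND produce /RD and /WD, and both strobes stay inactive whenever Ø2 is low.)

Code:
    # Sketch of PHI2-qualified read/write strobes built from NAND gates.
    # My own rendering of the idea described above; the actual schematic
    # may differ in detail.  RWB high = read, low = write; /RD and /WD
    # are active-low and can only assert while PHI2 is high.

    def nand(a, b):
        return 0 if (a and b) else 1

    def strobes(phi2, rwb):
        n_rwb = nand(rwb, rwb)       # one gate wired as an inverter
        n_rd  = nand(phi2, rwb)      # /RD: low only when PHI2=1 and RWB=1
        n_wd  = nand(phi2, n_rwb)    # /WD: low only when PHI2=1 and RWB=0
        return n_rd, n_wd

    print("PHI2 RWB | /RD /WD")
    for phi2 in (0, 1):
        for rwb in (0, 1):
            n_rd, n_wd = strobes(phi2, rwb)
            print(f"  {phi2}    {rwb}  |  {n_rd}   {n_wd}")
    # With PHI2 low, both strobes stay high (inactive), so address-bus
    # transitions during that phase can't cause spurious reads or writes.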

In a 65C02 system, the gated /RD signal may seem unnecessary. Consider, however, that an I/O device may react badly to having its /RD input asserted during Ø2 low, when the address bus is momentarily unsettled. An ill-timed access of this sort may cause trouble if a read-sensitive register is accidentally "touched." Such an event can be completely avoided by using /RD.

In a 65C816, use of /RD is mandatory, since the 816 drives the data bus with the A16-A23 bits during Ø2 low. If /RD is not used, bus contention will occur and the A16-A23 bit pattern will be garbled.

A design feature I occasionally see in 6502 systems is use of Ø2 to qualify chip selects. I recommend against doing so, primarily because of performance implications. If a chip select isn't asserted until Ø2 high then the device will not be ready for access until some time after the rise of the clock, narrowing the window during which reliable access is possible. This will set a hard limit on the maximum clock rate that can be run.

——————————
¹This recommendation first appeared in the 65C02 data sheet from 2004.
²74AHC logic generally exhibits prop times similar to 74AC, but with less-aggressive output behavior.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Wed Aug 25, 2021 12:55 am 

Joined: Sat Dec 01, 2018 1:53 pm
Posts: 730
Location: Tokyo, Japan
BDD, thanks for your detailed answer. Though I'm operating on the assumption that the external clock will always be clean and properly conditioned, it's good to remember that one needs to be careful to achieve this or it may affect what otherwise would be working timings. I use a can oscillator on my RC6502 Apple 1 clone SBC, and intend to continue using those on other boards.

Last week I 'scoped out my Apple 1 clone SBC to look at some of the timings related to Φ0 (the CPU clock) and Φ2 (the system clock output from the CPU). I was focusing on the data bus here. I now realize I actually wanted to be looking at the address bus, but something that concerned me a little came up with the data bus signals, so I thought I'd post this here, both to get confirmation that I'm looking at these things properly and to get thoughts on the data bus timing.

The code I'm using to test is a tight loop incrementing a location in memory. Below I give the address in binary (LSB only) and hex, the data in memory and the assembly language code. I kick the machine off by jumping to $80; the loop is obviously $81-$84. The code at $85 returns to Wozmon; this is executed only if I change the BCC to a BNE to run the loop only for a short time. (I used this to test the code.)

Code:
    7.adbit.0   addr    data        disassembly
    ────────────────────────────────────────────────────────
    0111.1111   007F    nn          (incremented location)
    1000.0000   0080    18          CLC
    1000.0001   0081    E6 7F       INC $7F
    1000.0010
    1000.0011   0083    90 FC       BCC $0081
    1000.0100
    1000.0101   0085    4C 1F FF    JMP FF1F

    #   Alternate for testing code; return to mon after incrementing to 0
    1000.0011   0083    D0 FC       BNE $0081

    time  addr  data    action
    ──────────────────────────────────────────────────────
     T-2   80   r 18    read opcode     CLC
     T-1   81   r E6    execute         CLC

     T0 0  81   r E6    read opcode     INC zp      p. A-8
     T1 1  82   r 7F    read addr       (zp)
     T2 2  7F   r n_    read data
     T3 3  7F   W n_    execute increment           !!! book says read cycle
     T4 4  7F   W n+    write data
     T5 0  83   r 90    read opcode     BCC         p. A-13
     T6 1  84   r FC    read offset
     T7 2  81   r ??    offset added to PC (seen on addr bus)
       (3)              (skipped because branch doesn't cross page boundary)

Below the code I have the actions of the CPU worked out cycle by cycle. The details for the cycle behaviour of each instruction were taken from the MOS MCS6500 Microcomputer Family Programming Manual and then confirmed against the timing capture. The T-times above are annotated in the capture below. This is followed by another capture with A3 instead of A0 on the third channel, as that helped confirm that I was correctly differentiating between the program reads (A3=0) and data access (A3=1).

Attachment:
21h19-a1loop_phi0-rwb-a0-d0-annotated.png

Attachment:
21h19-a1loop_phi0-rwb-a3-d0.png

So as I mentioned, this experiment turns out not to be properly set up to confirm that the address lines change well before Φ0 rises (which of course would also be well before Φ2's later rise). But it does show that the data bus appears to be changing well after the Φ0 rise on which we'd enable a write, and even probably a little after the Φ2 rise. (The latter is harder to determine because it's not a sharp edge.) Following are a couple of single-shot captures of a loop execution where D0 (which changes with every loop) goes from 0 to 1 and from 1 to 0, with the third channel now showing Φ2 instead of an address line. The top half of the capture is at the same time base as the captures above, with a selected area marked, and the lower half is a zoom-in of that selected area to make it easier to see the exact relationships between the signals when D0 changes.

Attachment:
21h19-a1loop_phi0-rwb-p2o-d0-single0to1-zoom.png

Attachment:
21h19-a1loop_phi0-rwb-p2o-d0-single1to0-zoom.png

I see two things here that worry me.

First, RWB asserts write on cycle T3, while the CPU is still incrementing the value it's read, as well as asserting write on T4 when it's writing the now-incremented value. This is not what the book says it should be doing, and it will cause the previous value to be rewritten just before the new value is written. Obviously this isn't a big deal for RAM, but it seems to me it could be a problem for other devices, such as output ports that trigger an indicator to a remote device on write (such as with a 6821 PIA's control output pins in certain modes). Those might see this as two sequential writes of the old and new values, rather than just a single write of the new value. Is this kind of thing to be expected with the INC instruction, making it something that just generally should not be used for I/O unless "extra" writes are acceptable?

And second, I notice that the data lines are changing after the Φ0 rise that one might use to qualify the write, and even around or possibly after the Φ2 rise that this board uses to qualify the write. I think this is not a problem for RAM, since the proper data value will have been held for some time before the chip's select and write signals are deasserted, but I'm not totally sure about this. But again, for I/O, this could cause incorrect values to be output, even if just for brief fractions of a cycle, which seems as if it would be a bad thing, and this looks as if it could happen on any write instruction, not just the read-modify-write INC instruction. Is that something that I/O devices are supposed to be designed to be careful of, e.g. by not reading the data lines until the system clock's falling edge?

I guess I need to go back and do some further experimentation to determine timings for the address bus line and a non-RMW instruction, but I'd appreciate any feedback on what I've got so far, and any thoughts on how use of Φ0 vs. Φ2 as the system clock might make a difference to this, and whether it should make a difference to this.

_________________
Curt J. Sampson - github.com/0cjs


PostPosted: Wed Aug 25, 2021 2:09 am 

Joined: Fri Aug 30, 2002 1:09 am
Posts: 8544
Location: Southern California
As long as you don't do transparent latches for output devices, you're fine. The 65xx I/O ICs know to only use the value that's on the bus at the time Φ2 falls (with the necessary short setup & hold times). IOW, they're not transparent latches, but edge-triggered registers. On the '22, the setup time is tDCW, and the hold time is tHW in the data sheets.

_________________
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?


PostPosted: Wed Aug 25, 2021 7:33 am 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10986
Location: England
If you can, it might be instructive to recapture at a slower clock speed. I think this will show that address and control signal changes are caused by the falling edge of the clock, with the rising edge being irrelevant. On the other hand, the databus is not driven during clock low, so will be indeterminate or will hold its previous value during clock low, and then will be driven to the right value as a consequence of, and therefore a little after, the rising edge.

Of course, if the time for the address and control signals to take on their new values happens to be less than half a clock cycle, it might appear that the changes are caused by the clock rising. That's why running slower might be informative.

It doesn't surprise me to see the unincremented value written before the incremented value: I haven't memorised which RMW instructions do this, but visual6502 shows a double write and it's been discussed previously. Any given documentation might be wrong, or might refer to a different design: it was useful back in the day to have such tables of clock by clock behaviour, but they are not definitive, as you've found.


PostPosted: Wed Aug 25, 2021 8:30 am 

Joined: Tue Sep 03, 2002 12:58 pm
Posts: 336
cjs wrote:
First, RWB asserts write on cycle T3, while the CPU is still incrementing the value it's read, as well as asserting write on T4 when it's writing the now-incremented value. This is not what the book says it should be doing, and it will cause the previous value to be rewritten just before the new value is written.

Which "book" is this? If it's talking about the 65C02, then some of the details won't apply to NMOS chips.
On NMOS, RMW instructions access the target address three times: read the original value, write the original value back, then write the new value. I don't think it was officially documented anywhere, but it's well known among demo coders, and is a key part of a popular way of acknowledging interrupts on the C64.
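
(In other words, a write-sensitive register at the target address sees the pattern below during an NMOS INC. This is just a restatement of the behaviour described above, with the T-cycle numbers from cjs's table.)

Code:
    # What a device at the target address sees during an NMOS-6502 INC,
    # per the read / write-old / write-new behaviour described above.
    # A write-sensitive I/O register therefore sees two writes, not one.

    def nmos_inc_target_accesses(old_value):
        new_value = (old_value + 1) & 0xFF
        return [
            ("read",  old_value),   # T2: original value read
            ("write", old_value),   # T3: original value written back
            ("write", new_value),   # T4: incremented value written
        ]

    for op, value in nmos_inc_target_accesses(0x41):
        print(f"{op:5s} ${value:02X}")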


PostPosted: Wed Aug 25, 2021 8:34 am 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10986
Location: England
Quote:
Which "book" is this?


See above:
Quote:
The details for the cycle behaviour of each instruction were taken from the MOS MCS6500 Microcomputer Family Programming Manual


PostPosted: Wed Aug 25, 2021 8:46 am 

Joined: Sat Dec 01, 2018 1:53 pm
Posts: 730
Location: Tokyo, Japan
BigEd wrote:
If you can, it might be instructive to recapture at a slower clock speed. I think this will show that address and control signal changes are caused by the falling edge of the clock, with the rising edge being irrelevant.

I'm not clear on why a slower clock speed would make a difference here. Ought the CPU not have the same behaviour and timing constraints regardless of clock speed, so long as the speed is within the ranges on the spec sheet?

Were you perhaps instead trying to say that you felt there weren't enough samples to see the timing accurately with the current 1 MHz clock? (It looked to me as if there were plenty, even in the small segment in the zoomed-in view. It looks like I'm capturing at least a hundred samples per clock half-cycle.)

I'm also finding "the rising edge being irrelevant" comment a bit odd for the purposes of this thread, since aren't we supposed to use that rising edge to qualify things like RWB? The (not insignificant) difference between rising edge timings for Φ0 and Φ2 thus might be very relevant, depending on how early or late relative to one or the other we can be when doing things like reading the address lines.

If your comments were just all about not seeing the address bus relationship with the clocks, that's just because in the second set of captures I don't even have an address line on a probe and in the first set I've not set up the code to properly generate changes that would clearly show the relationship between the clocks, RWB and the address lines. (To do that I need to go redo this experiment to be focusing on the address bus rather than the data bus, as I think I mentioned.) But even from what's there it seems clear that the address is changing after the falling edge of Φ0, and probably after the falling edge of Φ2 (which is very little delayed from the Φ0 at that edge), and given that as far as the address lines go we're always reading around the rising edge, it looks to me as if either clock would be absolutely fine for qualifying that.

Quote:
On the other hand, the databus is not driven during clock low, so will be indeterminate or will hold its previous value during clock low...

This I also find confusing. Did you mean perhaps that the data bus state is not changed during clock low? If it's never driven during clock low I don't see how it could hold its previous value, since there would be nothing driving the bus. It seems to me that if you're supposed to read the data bus on the falling edge of the clock, it must be driven for at least part of clock low or the data might vanish before the reader gets around to reading it.

Quote:
It doesn't surprise me to see the unincremented value written before the incremented value: I haven't memorised which RMW instructions do this, but visual6502 shows a double write and it's been discussed previously.

Ah, so this is known behaviour! Ok, that's fair enough then. And thanks for the hint about being able to use Visual 6502 to check these things.

Quote:
...it was useful back in the day to have such tables of clock by clock behaviour, but they are not definitive, as you've found.

And still useful today! I'm guessing that the description in the manual was an error or something where an update got missed when the design changed.

_________________
Curt J. Sampson - github.com/0cjs


PostPosted: Wed Aug 25, 2021 11:04 am 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10986
Location: England
Sorry, I have been unclear. I will confess I have a hobby horse, which is that the role of the two phases of the clock is often misunderstood, and I'd like it to be better and more widely understood.

I was reacting to this:
Quote:
this experiment turns out not to be properly set up to confirm that the address lines change well before Φ0 rises

because it implies some linkage between the rise of the clock and the change of the address lines. My reaction contained a poor suggestion - that you could learn something important by running at a different speed - and it's poor because it doesn't relate to what we see in your images.

If we imagine a 6502 running at rather a fast clock rate and with rather heavily loaded address lines, I think we will see the address lines settling as late as we like, without relation to the rising edge of the clock. If we then slowed that 6502 right down, we'd see the address lines settling before the rising edge of the clock, simply because enough time has passed since the falling edge. If we only had the second situation to observe, we might not realise the first situation is possible, and that it tells us something.

In other words, without somehow moving the rising edge around in time, it's difficult to infer from observation whether or not it's an important milestone, whether or not it causes anything.

My belief is that the rising edge is a convenient, but not a definitive, timing marker, and that it's potentially confusing when we see people speaking as if it is definitive. We understand the 6502 better if we understand this aspect of the timing, although it's clear that we don't have to understand it (or, agree with me) in order to build interesting things successfully.

However, the whole thing is more complicated because of the data bus, which is indeed driven by the CPU only during the second phase of the cycle (and of course only during a write.) If you have pullups or pulldowns on your databus, you'll see the effect of the undriven bus. If you arrange some other device to access memory during this phase, you'll find you can, because the CPU is not driving. If there's nothing driving the bus, the bus will for a time hold its value, possibly being weakly pulled up, over time, if there are TTL I/Os on it.


PostPosted: Wed Aug 25, 2021 5:42 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8507
Location: Midwestern USA
Something I'm not clear on is which processor is being used. Is this CMOS or NMOS?

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Wed Aug 25, 2021 5:50 pm 

Joined: Fri Jul 09, 2021 10:12 pm
Posts: 741
It's always made most sense to me that the period right around the falling edge of the clock was what mattered for both reads and writes. For RAM it doesn't matter much if you write throughout phase 2 as the data written at the end is what matters. For edge-triggered I/O though it is most important to trigger it after enough time has passed for the data lines to be stable, and you might as well just use the falling edge of the clock as you probably don't have a good way to measure the right data setup time to trigger the write earlier.

As an example, I sometimes use 74HC273 or similar as a simple output port, and its clock needs to be triggered late in phase 2, not right at the start of it - so I use the falling edge. Bus capacitance is enough to keep the data valid long enough after the CPU lets go.

Slightly more distantly related, in my video circuits with bus sharing between the CPU and the video circuit I also delay writes until e.g. halfway through phase 2, because I don't allow the CPU's write address onto the video address bus until phase 2 starts, and I want to give the address some time to settle before enabling writes. I could try to enable the transceivers earlier, but there's no benefit because the data is held valid through to the end of phase 2, so I might as well just write it then. In this case I do have a much faster clock signal that I can use to time things like this within the CPU clock cycle.


PostPosted: Wed Aug 25, 2021 6:43 pm 

Joined: Fri Aug 30, 2002 1:09 am
Posts: 8544
Location: Southern California
cjs wrote:
If it's never driven during clock low I don't see how it could hold its previous value, since there would be nothing driving the bus.

After the end of the write, bus capacitance will hold the data for a surprisingly long time when nothing is driving the bus: not just nanoseconds, not just microseconds, but even milliseconds, according to related experiments of mine. (Actually, I tripped across that fact accidentally, and although I observed the milliseconds part, I did not go further to see just how much longer I could take it. This applies only if all loads are high-impedance, not taking any DC current to speak of. For example, don't use 74LS!) A 30pF capacitance on the bus (all IC inputs and outputs on a particular bit, plus capacitance in the board, sockets, etc.) multiplied by a 1MΩ DC load (just the ICs' leakage) is already a 30µs time constant; and the DC load will probably be far lighter than that. As you add more ICs, the DC load tends to get heavier, but so does the capacitive load, meaning the time constant doesn't necessarily change. In reality, the leakage on the insulated gates of NMOS and CMOS will usually be far lighter (like hundreds of MΩ), which is how I observed even milliseconds of hold time when I was developing the software for the tester for the 4Mx8 5V 10ns SRAM modules I provide; I slowed things way down at some point, for some reason I no longer remember.
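
(Putting those figures into a two-line calculation, using only the numbers quoted above plus an assumed 300 MΩ for the "hundreds of MΩ" gate-leakage case:)

Code:
    # Back-of-envelope hold time for an undriven bus line, using the figures
    # above: 30 pF of bus capacitance, and either a 1 Mohm DC load or a
    # gate-leakage load assumed here to be 300 Mohm ("hundreds of MOhm").

    C_bus = 30e-12                       # 30 pF per bus line

    for r_leak in (1e6, 300e6):
        tau = r_leak * C_bus             # RC time constant
        print(f"R = {r_leak/1e6:5.0f} Mohm -> tau = {tau*1e6:7.1f} us"
              f" ({tau*1e3:.2f} ms)")

    # 1 Mohm gives the 30 us time constant quoted above; 300 Mohm gives
    # 9 ms, consistent with seeing the data persist for milliseconds.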

_________________
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?


PostPosted: Thu Aug 26, 2021 12:42 am 

Joined: Sat Dec 01, 2018 1:53 pm
Posts: 730
Location: Tokyo, Japan
BigDumbDinosaur wrote:
Something I'm not clear on is which processor is being used. Is this CMOS or NMOS?

The processor used in my captures is relabeled as NMOS and almost certainly is NMOS (unless someone made a CMOS processor that pulls >100 mA; all the CMOS I've seen have been less than one tenth that). The target processors for the original question are both NMOS and CMOS, ranging from antiques all the way through to modern WDC parts.

Continuing, I just want to make sure I'm clear on this: "first phase" is Φ0 and Φ2 low, right?

BigEd wrote:
I was reacting to this:
Quote:
this experiment turns out not be properly set up to confirm that the address lines change well before Φ0 rises
because it implies some linkage between the rise of the clock and the change of the address lines.

Well, there is an explicit linkage, is there not? If you follow all the timing calculations transitively, you come up with the conclusion that the address lines must be valid before Φ0 rises, and thus you can (and should?) use the Φ0 rise as a signal that the address bus value is stable and may be read. Or am I getting confused here about how we build systems?

I was not meaning to say that Φ0 rising causes the address lines to do anything, and I am unclear about how my statement could be interpreted that way, since if the address lines become valid before the Φ0 rising edge, that rising edge clearly can't have caused the event that preceded it. (But I welcome comments on where I might have phrased things poorly.)

Quote:
If we imagine a 6502 running at rather a fast clock rate and with rather heavily loaded address lines, I think we will see the address lines settling as late as we like, without relation to the rising edge of the clock.

Wouldn't that be a broken system, then, if you can no longer use the rising edge of Φ0 (or Φ2) to determine that what's on the address bus is now valid (and therefore it's now, e.g., safe to assert a RAM's write signal)? Could such a thing happen if you're obeying all the timing restrictions in the data sheet and not doing things that would cause bus contention?

Quote:
My belief is that the rising edge is a convenient, but not a definitive, timing marker...

This is where I'm really confused. If we don't have a definitive timing marker for when the address lines are valid (and can then, e.g., assert a RAM's write signal), how can we have reliably working systems?

Quote:
...the data bus, which is indeed driven by the CPU only during the second phase of the cycle (and of course only during a write.)...If you arrange some other device to access memory during this phase, you'll find you can, because the CPU is not driving.

I think by "this phase" you actually meant the other phase, Φ2 low instead of Φ2 high, right?

Looking around at the diagrams, only the one from the 6510 (1982) data sheet gives me much clarity on this, and seems to indicate that A) it will definitely start being driven sometime before the Φ2 falling edge, and B) it will stop being driven some time after the Φ1 rising edge that follows the Φ2 falling edge.

I get conclusion A) from T-MDS (data setup time) being max 200 ns (at 1 MHz) from the start of the Φ2 rising edge, which is well under the 470 ns minimum of PWHΦ2 alone, even before you add in the length of time it takes the edge to rise (T-R). I get conclusion B) from T-HW (data hold time–write) being a minimum of 10 ns after Φ1 finishes rising, which is no earlier than when Φ2 finishes falling.

Attachment:
timing-6510-mos-1982.png
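
(Putting rough numbers on that, using only the 1 MHz figures quoted above and ignoring the rise/fall times:)

Code:
    # Rough write-data margin at 1 MHz, from the data-sheet figures quoted
    # above (rise/fall times ignored).  T_MDS: max delay from the start of
    # the PHI2 rise until write data is valid.  PWH_PHI2: minimum PHI2 high
    # time.  T_HW: minimum hold after PHI1 rises (i.e. after PHI2 has fallen).

    T_MDS    = 200e-9
    PWH_PHI2 = 470e-9
    T_HW     = 10e-9

    setup_before_fall = PWH_PHI2 - T_MDS
    print(f"Write data stable at least {setup_before_fall*1e9:.0f} ns before "
          f"PHI2 falls, and held at least {T_HW*1e9:.0f} ns afterwards.")
    # -> at least 270 ns of setup before the falling edge, so qualifying the
    #    write with PHI0 instead of PHI2 only eats into that margin by the
    #    PHI0-to-PHI2 delay.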


GARTHWILSON wrote:
After the end of the write, bus capacitance will hold the data for a surprisingly long time when nothing is driving the bus

Well, I knew that, but it just now finally sunk in that of course I'm seeing the results of this in my captures above: while I can tell when something starts to drive the data bus, I can't tell when it stops because the data bus won't change much, if at all, when that happens. (Though there is an interesting little "bump" in the zoom of DB0 0→1 increment capture just after the Φ2 falling edge where the D0 level actually increases slightly (a fraction of a volt) for some reason at just the time when you would expect the CPU to stop driving it.)

At any rate, if my above timing analysis is correct, and a correct summary is "on writes the data bus will start being driven sometime after Φ2↑ and well before Φ2↓, and will stop being driven very shortly after Φ2↓," then I think I'm good with this part.

And then, getting back to the original topic of this discussion, I think we can also conclude that, given the relatively huge amount of time between data setup finishing and Φ2 falling, the Φ0 falling edge should also always be perfectly fine for indicating that data on the data bus are driven and stable and so for that particular purpose it's always safe to use Φ0 instead of Φ2. The only way I can see that breaking is if there's some stupidly long delay between Φ0 and Φ2 which would probably mean a system terribly broken in many ways, right?

_________________
Curt J. Sampson - github.com/0cjs


PostPosted: Thu Aug 26, 2021 8:37 am 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10986
Location: England
Yes, sorry, my typo there: as the 6502 uses the databus during phi2 high, the free half for other purposes is the opposite, the first phase.

Jeff wrote up a very thorough treatment with animated diagrams here
https://laughtonelectronics.com/Arcana/ ... iming.html
and discussed here
viewtopic.php?f=4&t=2909

You are of course quite right that, for the older parts, you can add up the various maximum and minimum timings and conclude that phi2 is a good signal to use to declare that the address bus is stable, and this is often done. As I say, it's useful. But it's not definitive: in some systems one might have another clock which rises earlier and gives more time to a memory access. It would of course be necessary to derive that clock in a safe way, such as by using a much higher speed master clock to determine the edges.

In the more modern faster parts, it all gets more compressed, and the documentation is lacking. I think I've seen it said that the modern datasheet doesn't really give room for 14MHz operation of a system - but we're saved by the practical observation that we don't seem to see worst-case behaviour.

But to the original discussion, I'm all for a thorough analysis of phi0 vs phi2 and the pitfalls to be avoided.

In the same area as an undriven bus holding a value for a little while, hold time constraints are very important and often overlooked. And slightly behind those is the question of bus contention. Which is to say, commonly one frets about meeting the timing during an access - meeting setup times - but that's only part of the job, as the various devices see the clock fall and the control lines change but need their hold times to be met.

