Post new topic Reply to topic  [ 564 posts ]  Go to page Previous  1 ... 17, 18, 19, 20, 21, 22, 23 ... 38  Next
Author Message
 Post subject: Re: POC Computer
PostPosted: Sun Mar 02, 2014 11:01 pm 

Joined: Mon Mar 02, 2009 7:27 pm
Posts: 3258
Location: NC, USA
A machine code monitor really gives an SBC its power. Great job with all the multiple drive interfaces too.
One day I'll get there in my system! :roll:

_________________
65Org16:https://github.com/ElEctric-EyE/verilog-6502


 Post subject: Re: POC Computer
PostPosted: Mon Mar 03, 2014 1:20 am 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8144
Location: Midwestern USA
barrym95838 wrote:
BigDumbDinosaur wrote:
Here's a crude video of POC V1.1. It's not publicly-accessible. Eventually I'll do a different one at a higher resolution.

After hearing your voice for the first time, I'd have to say that you sound more like a big smart dinosaur than a big dumb one! :D

Mike

Amazing how I can fool people! :lol:

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


 Post subject: Re: POC Computer
PostPosted: Mon Mar 03, 2014 1:29 am 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8144
Location: Midwestern USA
ElEctric_EyE wrote:
A machine code monitor really gives an SBC its power.

The monitor is where a major part of the firmware development time was expended. I was going to adapt code that I had developed for the 65C02 over 20 years ago but then decided to start with a blank canvas and write something that was optimized for the 65C816.

Quote:
Great job with all the multiple drive interfaces too.

Thanks! In theory, I can attach anything SCSI to the unit, even a flatbed scanner. However, the driver only understands basic block and stream device commands for reading or writing a disk, CD or tape.

 Post subject: POC Computer
PostPosted: Thu Mar 13, 2014 5:43 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8144
Location: Midwestern USA
Periodically I go through all our accumulated electronic stuff and get rid of old items that are not likely to be used any more (we give some of this stuff to the local high school for use in their electronics classes). I was doing so over last weekend and found a 28.322 MHz half-can oscillator that had been languishing for a long time, still in its sealed static bag. The P.O. number on the bag was A20821-02, meaning that it was ordered on August 21, 2002, with -02 meaning it was the second P.O. generated that day.

Anyhow, I haven't a clue why we have an oscillator with a somewhat odd frequency, but I'm sure it was probably intended for a project that never got completed. Since it was sitting there in front of me I figured why not see if POC will boot with it. POC went through the entire POST without a hitch, made the usual attempt at loading an OS from disk and then entered the M/L monitor. A check of all monitor commands, including SCSI ones, showed that full operation was present. I did a bunch of memory dumps and disassemblies in an attempt to trip up I/O processing. I watched the uptime counter as well. I read and wrote some disk blocks. Nothing was messing up.

So now I know that the unit will run at 14.161 MHz with the host adapter in place, even though on paper, I/O timing violations surely must be occurring all over the place. What's interesting is that if I insert a 30 MHz oscillator, which results in a 15 MHz clock, POC will POST but the SCSI subsystem will not respond. So it must be that SCSI operation is right on the ragged edge at 14.161 MHz.

I ended up reinstalling the 25 MHz oscillator that was in POC, slowing the clock to a sedate 12.5 MHz.

 Post subject: Re: POC Computer
PostPosted: Thu Mar 13, 2014 10:25 pm 

Joined: Sat Oct 20, 2012 8:41 pm
Posts: 87
Location: San Diego
BigDumbDinosaur wrote:
So it must be that SCSI operation is right on the ragged edge at 14.161 MHz.
I ended up reinstalling the 25 MHz oscillator that was in POC, slowing the clock to a sedate 12.5 MHz.


I have a wire-wrapped SBC board with an 816 that runs fine (temp stable) at 10 MHz (20 MHz half-can osc). I have tried it at 12 MHz (24 MHz osc) but it's a no-go. I do have a few 74HCT gates on that board that could be changed to AC or ABT (which I don't have, maybe on some future order) and I might get it to run at 12 MHz... Maybe, possibly... :lol:


 Post subject: Re: POC Computer
PostPosted: Fri Mar 14, 2014 4:50 am 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8144
Location: Midwestern USA
clockpulse wrote:
BigDumbDinosaur wrote:
So it must be that SCSI operation is right on the ragged edge at 14.161 MHz.
I ended up reinstalling the 25 MHz oscillator that was in POC, slowing the clock to a sedate 12.5 MHz.

I have a wire-wrapped SBC board with an 816 that runs fine (temp stable) at 10 MHz (20 MHz half-can osc). I have tried it at 12 MHz (24 MHz osc) but it's a no-go. I do have a few 74HCT gates on that board that could be changed to AC or ABT (which I don't have, maybe on some future order) and I might get it to run at 12 MHz... Maybe, possibly... :lol:

In some cases, 74HC, despite the name, isn't any faster than the 74LS equivalents. More likely, though, your clock speed limit is due to insufficient drive on some signals. 74AC drives significantly harder than 74HC and 74ABT drives even harder. Of course, the stronger drive coupled with much faster edges may give rise to ringing and other maladies. Still, it can't hurt to try.

 Post subject: Re: POC Computer
PostPosted: Fri Mar 14, 2014 7:42 pm 

Joined: Sat Oct 20, 2012 8:41 pm
Posts: 87
Location: San Diego
BigDumbDinosaur wrote:
More likely, though, your clock speed limit is due to insufficient drive on some signals.


I also have 128K of Dallas 100ns NVRAM on board and I'm decoding the 816 bank address using the WDC schematic, which uses an AC573 and an AC245 on the data bus. All things considered, 10 MHz is pretty good in this case. However, I might build another version some day with a CPLD (1504 or 1508) and boot the gates... of course, when I get time. :roll:


 Post subject: Re: POC Computer
PostPosted: Fri Mar 14, 2014 9:35 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8144
Location: Midwestern USA
clockpulse wrote:
BigDumbDinosaur wrote:
More likely, though, your clock speed limit is due to insufficient drive on some signals.


I also have 128K of Dallas 100ns NVRAM on board and I'm decoding the 816 bank address using the WDC schematic, which uses an AC573 and an AC245 on the data bus. All things considered, 10 MHz is pretty good in this case. However, I might build another version some day with a CPLD (1504 or 1508) and boot the gates... of course, when I get time. :roll:

Faster RAM would be a big help. I'm using a 128KB SRAM with 12ns access time.

PostPosted: Thu Apr 03, 2014 6:38 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8144
Location: Midwestern USA
A while ago, I had replaced the 2692 DUART in POC V1.1 with a 26C92. The gain included 8-deep RxD and TxD FIFOs, and faster chip timings. The latter allowed V1.1 to be booted at 15 MHz, but without the SCSI host adapter being present.

In POC V1.0, bus loading would not allow stable operation above 12.5 MHz under any conditions—a number of hardware patches with blue wire aggravated the problem. These issues were rectified in V1.1, as all hardware patches were rolled into a new PCB layout, along with a better arrangement of the address and data buses. The result was that V1.1 would run at 12.5 MHz with the host adapter, but not at 15 MHz—host adapter initialization would fail and the unit would immediately enter the M/L monitor. Timing analysis indicated that the NCR 53C94 "advanced SCSI controller" (ASC) could not respond to chip selects within the approximate 33ns half-cycle time of a 15 MHz Ø2 clock. I needed to either wait-state host adapter accesses or come up with a faster ASC.

Enter the 53CF94 "enhanced SCSI-2 controller" or ESC. Key features are:

  • Support for up to 40 MHz operation. Like the 'C94 ASC, the 'CF94 uses a clock input to sequence its state machines and the SCSI bus. The maximum possible DMA transfer rate in megabytes per second is equal to the input clock rate in megahertz divided by two. With the 'C94, 12.5 MB/sec is the limit, since the maximum allowable clock input is 25 MHz. The 'CF94 supports a maximum of 20 MB/sec DMA.

  • Higher SCSI bus speeds. The 'CF94 can run the SCSI bus at a maximum speed of 10 MB/sec in synchronous mode and 7 MB/sec in asynchronous mode (the default in POC V1.1), compared to the 'C94's maximum performance of 5 MB/sec synchronous and 3.5 MB/sec asynchronous.

  • Faster chip timings. The 'CF94 is intended for use with higher clock rate systems without requiring wait-states. Therefore, all aspects of the MPU interface are substantially faster than that of the 'C94.

  • The 'CF94 is plug-compatible with the 'C94 and at reset presents the same programming model, making it possible for the 'CF94 to be used in POC's host adapter without changing anything.
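As an editorial sanity check of the clock-to-throughput relationship in the first bullet (a sketch, not from the original post; the function name is made up):

```python
def max_dma_mb_per_s(clock_mhz):
    # Datasheet relationship described above: peak DMA rate (MB/s)
    # equals the chip's input clock (MHz) divided by two.
    return clock_mhz / 2.0

print(max_dma_mb_per_s(25))   # 53C94 at its 25 MHz clock ceiling
print(max_dma_mb_per_s(40))   # 53CF94 at its 40 MHz ceiling
```

This reproduces the 12.5 MB/sec and 20 MB/sec figures quoted above.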

A little searching tracked down some ESCs from a parts liquidator and I received them yesterday. The 53CF94 was produced by NCR, Emulex and AMD, the latter being what I received. I removed the 'C94 from V1.1 and installed the 'CF94. POC booted without error, enumerated the SCSI bus and attempted the ISL from SCSI ID 0. So far, so good.

Next I replaced the 25 MHz clock oscillator (12.5 MHz Ø2) with a 30 MHz part (15 MHz Ø2) and powered on. POC went through the entire start-up sequence without error, fully enumerating the SCSI bus and attempting ISL. Issuing various commands to both the monitor and the SCSI subsystem indicated that the system was fully functional and stable.

I've run some tests on SCSI throughput to see what effect the increased clock rate has had. It is somewhat better than expected. With a 12.5 MHz Ø2 rate, the absolute maximum read performance off the disk is 497 KB/sec, using a 32 KB block size (that is, reading 32 KB from the disk in a single access). In order to assure test consistency, I'm reading from logical block address (LBA) $00000000, which takes advantage of the disk's ability to buffer a full track. Sampling occurs with a second read at LBA $00000000, which assures that the data will be read from the disk's buffer and not the medium itself, eliminating mechanical considerations from the test.

Since I'm using "pretend DMA" to access the 'CF94, I expected that the performance would increase in proportion to the Ø2 increase, since real-time throughput is determined by how quickly the 65C816 can read from the FIFO and then write to RAM. Accordingly, I expected to see about 596 KB/sec on a read access using the 'CF94. Not so! Repeated tests showed that I was managing about 650 KB/sec on a 32 KB read. Performance had improved out of proportion to the Ø2 rate increase, yet nothing in the driver code has been changed. So what did change? :?:
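For what it's worth, the scaling expectation works out like this (editor's arithmetic, not from the post):

```python
baseline_kb_s = 497                        # measured read rate at 12.5 MHz Phi2
expected_kb_s = baseline_kb_s * 15 / 12.5  # if throughput scaled with the clock
print(round(expected_kb_s))                # about 596 KB/s expected at 15 MHz
```

The observed ~650 KB/sec therefore exceeds a purely clock-proportional gain.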

The answer turns out to be simple: on each access to the FIFO, the DREQ handshake output from the 'CF94 is tested to see if the FIFO has data waiting. This is accomplished with a bit instruction and a branch:

Code:
;= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
;
;ssxfrin: DMA INPUT TRANSFER
;
;   ———————————————————————————————————————————
;   Preparatory Ops : .X: 16 bit buffer pointer
;
;   Register Returns: none: IRQ will terminate
;                           this routine
;   ———————————————————————————————————————————
;
ssxfrin  bit dreq_srd          ;FIFO empty?
         bpl ssxfrin           ;yes, keep trying
;
         sei                   ;don't interrupt while...
         lda fifo_srd          ;getting & storing...
         sta mm_ram,x          ;datum
         cli                   ;IRQs now okay
         inx                   ;next buffer location
         bne ssxfrin           ;get more
;
         bra ssxfrCOM          ;MFU!
;


With the 'C94, the BPL SSXFRIN branch would be taken at periodic intervals, as the 'C94 could not always keep data in the FIFO. With the 'CF94, it appears that that branch is only being taken once at the start of the transfer, and at the end of a transfer when the final byte has been copied from the FIFO but before the 'CF94 interrupts due to a bus phase change (the phase change is caused by the target device, e.g., the disk, so there can be a lag after the final byte has been read). It appears that the branch is not taken during the actual transfer because the 'CF94 is able to keep copying data from the bus to the FIFO faster than the '816 can copy it out of the FIFO and into memory. I know from testing that this was not the case with the 'C94, which occasionally "stalled", causing the BPL SSXFRIN branch to be taken.

Something that clearly affects read transfer performance is the bracketing of the load/store instructions with SEI and CLI. I'm doing this because it is theoretically possible for the 'CF94 to interrupt immediately after the LDA FIFO_SRD instruction has executed. Were that to happen and the interrupt were to be processed, the last byte would be lost, as the STA MM_RAM,X instruction would never get executed. Of course, this wouldn't be a problem with a real DMA controller.

PostPosted: Mon Apr 14, 2014 7:37 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8144
Location: Midwestern USA
BigDumbDinosaur wrote:
Something that clearly affects read transfer performance is the bracketing of the load/store instructions with SEI and CLI. I'm doing this because it is theoretically possible for the 'CF94 to interrupt immediately after the LDA FIFO_SRD instruction has executed. Were that to happen and the interrupt were to be processed, the last byte would be lost, as the STA MM_RAM,X instruction would never get executed. Of course, this wouldn't be a problem with a real DMA controller.

I have been chewing on this for a while and in fact, had been exchanging E-mail with another member on ways to enhance the SCSI driver's performance. His input indirectly led to my making a discovery.

I had recently gotten a copy of the AMD documentation for the 53CF94 and after carefully studying the timing diagrams, concluded that during DMA transfers the chip's state machines are slaved to the /DMAWR input, which is what tells the 'CF94 when to connect the FIFO to the data bus. According to my observation, the "I'm done" IRQ doesn't come until eight 'CF94 clock periods have elapsed after the last time that /DMAWR is toggled, giving the 65C816 plenty of time to store the final byte and loop around to the BIT DREQ_SRD instruction. This would seem to suggest that there is no danger of losing the last byte due to an IRQ generated by the 'CF94. So I burned a ROM in which the SEI and CLI instructions have been omitted to see if I was on the right track. The transfer loop now appears as follows:

Code:
;= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
;
;ssxfrin: DMA INPUT TRANSFER
;
;   ———————————————————————————————————————————
;   Preparatory Ops : .X: 16 bit buffer pointer
;
;   Register Returns: none: IRQ will terminate
;                           this routine
;   ———————————————————————————————————————————
;
ssxfrin  bit dreq_srd          ;FIFO empty?
         bpl ssxfrin           ;yes, keep trying
;
         lda fifo_srd          ;get & store...
         sta mm_ram,x          ;datum
         inx                   ;next buffer location
         bne ssxfrin           ;get more
;
         bra ssxfrCOM          ;MFU!
;

Eliminating the four Ø2 clock cycles per loop iteration that were consumed by SEI and CLI increased the data transfer rate during reads of 32KB to 700-710 KB/sec. The theoretical maximum is 750 KB/sec (derived by cycle counting), but bus protocol overhead steals some time away from actual transfers. Loading smaller amounts of data per SCSI transaction produces a somewhat slower transfer rate, due to bus protocol becoming a larger part of the time required to complete a transaction. Nevertheless, the performance improvement has helped there as well. A 2KB load occurs at around 650 KB/sec, which is faster than I was able to achieve with the 53C94 and the old code.

Repeated "stress testing" using loads ranging from 512 bytes (one disk block) to 48K (96 contiguous disk blocks) demonstrated that SEI and CLI in the read loop were unnecessary. The 48K load occurred in 0.066 seconds, producing an effective transfer rate of 744,727 bytes per second at a Ø2 clock rate of 15 MHz. For now, I'm going to say that that is the outer limit for SCSI performance on POC V1.1.
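The 750 KB/sec ceiling can be reproduced by cycle-counting the loop above (editor's sketch; the per-instruction counts assume 65C816 absolute and absolute-indexed addressing with same-page branches):

```python
cycles = {
    "bit abs":       4,   # bit dreq_srd
    "bpl not taken": 2,   # fall through when the FIFO has data
    "lda abs":       4,   # lda fifo_srd
    "sta abs,x":     5,   # sta mm_ram,x
    "inx":           2,
    "bne taken":     3,   # same-page branch back to ssxfrin
}
per_byte = sum(cycles.values())        # 20 cycles per byte moved
ceiling = 15_000_000 // per_byte       # at 15 MHz Phi2
print(ceiling)                         # 750000 bytes/sec theoretical maximum

measured = round(48 * 1024 / 0.066)    # the 48 KB load in 0.066 seconds
print(measured)                        # 744727 bytes/sec, as reported
```

The measured rate comes within about 0.7 percent of the cycle-count ceiling.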

PostPosted: Mon Apr 06, 2015 6:42 am 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8144
Location: Midwestern USA
Recently, I acquired some Philips/NXP 28L92 dual UARTs (DUART) for experimentation, and installed one into POC last week in place of the NXP 26C92 DUART that I have been running.

The 'L92 has a programming model that is a superset of the 'C92, along with some enhanced features, the most important of which are the deeper receive and transmit FIFOs. The 'C92 has 8-deep FIFOs and the 'L92 has 16-deep FIFOs. Other features are 3.3 or 5 volt operation and the ability to work with either an x86 or MC68000 style bus. The 'L92 AC characteristics indicate that it is faster overall than the 'C92.

At reset, the 'L92 appears to the host system as a 'C92, which means it worked without any driver changes. However, that also meant that the FIFOs were configured to be 8-deep, as they are in the 'C92. I didn't want to tinker with the driver right away, so I instead manually entered a program to diddle the appropriate registers to enable the 16-deep FIFOs. At the same time, I enabled two other features that proved to have a significant effect on the number of interrupts generated during sustained data transfer.

It is possible to tell the 'L92 to not generate a TxD interrupt until the TxD FIFO has completely emptied. Normally, an IRQ would be generated as soon as space became available in the FIFO. By telling the 'L92 to not interrupt until the FIFO has been completely emptied, most sustained transmissions (e.g., during a full screen paint) will incur only one IRQ per 16 bytes transmitted. When the IRQ does come, the ISR can stuff the FIFO to capacity before moving on. This is in contrast to the way the 'C92 was operating, in which it would generate an IRQ as soon as space became available in the TxD FIFO. Depending on the rate at which the foreground program is feeding bytes to the driver, the interrupt rate relative to the data rate can increase.

Similarly, it is possible to tell the 'L92 to not generate an RxD interrupt until the receive FIFO is full. Again, normal operation causes an IRQ to be generated as soon as at least one byte has been shifted off the wire and into the FIFO. By telling the 'L92 to not interrupt until the FIFO is full, only one IRQ per 16 bytes received will be generated and all 16 bytes can be gotten from the FIFO and buffered before the ISR moves on.
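To put a number on the IRQ reduction (editor's illustration; the 2000-byte screen paint is a made-up payload):

```python
import math

fifo_depth = 16                         # 28L92 FIFO depth
payload = 2000                          # hypothetical full screen paint, in bytes
irqs = math.ceil(payload / fifo_depth)  # one IRQ per FIFO-full with the new setting
print(irqs)                             # 125 IRQs, versus up to 2000 if an IRQ
                                        # could fire for every byte of FIFO space
```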

There is a little booby-trap involving the delayed RxD IRQ: if fewer than 16 bytes arrive, the IRQ will never come and data will go "stale" sitting in the FIFO. The solution is an RxD watchdog timer that times out 64 bit clocks after the last byte has been received if the MPU has not read from the FIFO. When the watchdog times out it generates an RxD IRQ, so the MPU will know that data is getting stale in the FIFO. I enabled the watchdog, and since a single keystroke produces an immediate response, I know that it works.
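Assuming the port runs at 115.2K bps (implied by the 11,520 bytes-per-second figure later in the thread), the 64-bit-clock watchdog timeout works out to roughly half a millisecond, consistent with the apparently instant keystroke response (editor's arithmetic):

```python
baud = 115_200                  # assumed port speed
bit_time_us = 1e6 / baud        # one bit clock, in microseconds
watchdog_us = 64 * bit_time_us  # watchdog fires this long after the last byte
print(round(watchdog_us))       # about 556 microseconds
```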

The effect on the IRQ rate is significant. Monitoring the IRQ circuit with the logic probe showed a marked decrease in the IRQ rate during transmission, especially when something like a memory dump is painting the screen. Even more dramatic is what happens when I'm using the M/L monitor's L (load S-record data) function to load a program from my UNIX server into POC.

During the load, input is arriving at port B at the rate of 11,520 bytes per second. At the same time, the monitor is printing a progress indicator to the console screen as each received S-record is processed, so the transmit side of port A is busy as well. Again watching the logic probe, I could see a major decrease in the number of interrupts being processed, especially since port B RxD IRQs were a fraction of what they were before. Some "quick and dirty" timing of the loading of a 32KB file showed that the processing time was nearly cut in half.
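The 11,520 bytes-per-second figure is consistent with 115.2K bps asynchronous framing (editor's note; the exact frame format is an assumption):

```python
baud = 115_200
bits_per_frame = 1 + 8 + 1     # start bit + 8 data bits + stop bit, no parity
print(baud // bits_per_frame)  # 11520 bytes per second on the wire
```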

The next step is to rework the drivers and burn a new ROM to make it permanent.


Attachments:
File comment: NXP 28L92 Dual UART
uart_dual_28L92.pdf [336.96 KiB]



Last edited by BigDumbDinosaur on Sun Jun 26, 2022 8:36 am, edited 1 time in total.
PostPosted: Wed Apr 29, 2015 7:16 am 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8144
Location: Midwestern USA
Sometimes it's amazing what can be discovered via pure serendipity.

One day not too long ago I ran a program that I wrote that displays the date, time and system uptime on the console, in a style similar to the date command in Linux, along with the uptime command. The date and time are read from the Maxim DS1511 real-time clock (RTC) and the uptime comes from a counter that is incremented at one second intervals by a 100 Hz jiffy IRQ. It's not complicated by any measure.

By sheer happenstance, I ran the same program exactly six hours later. The date information from the previous run was still on the screen along with the new information and as I looked at the display I realized that the new uptime didn't make sense. The time of day was indeed exactly six hours later than before, but the uptime was a little more than one percent too high.

Curious about what I was seeing, I "manually" (meaning, in the machine language monitor) called a BIOS function that generates a time delay, using a delay of 60 seconds. I watched the time display on my Linux console and when it was at a zero seconds value, started the delay. The delay expired in a hair more than 58 seconds. I tried the same thing again, however this time with a 300 second delay instead of 60. It expired in about 256 seconds.

Thinking that there might be some sort of bug in the part of the interrupt service routine (ISR) that drives all this timekeeping stuff, causing it to slightly over-count, I pored over the code. I found nothing. The jiffy IRQ drives two software clocks and the time delay counter. One clock is the uptime, which is maintained in 32 bits, and the other is UNIX time, which is maintained in 40 bits. The time delay is a 16 bit down-counter. A little experimenting showed that all three were running fast. This discovery led me to write some code that would display the date, time and uptime, wait for 10 minutes and then display the date, time and uptime once more. The results were as before: uptime was slightly more than one percent too high. Digging a little more into this, I wrote some code that would display only the uptime at one hour intervals and let it run overnight. The error was consistently there.

The jiffy IRQ is generated by the watchdog timer in the RTC, which is configured to interrupt at 10 millisecond intervals. Clearly that IRQ rate was not exact and apparently was running a bit fast. Yet the clock and calendar part of the RTC was dead nuts, which I determined with another program that gets a formatted time string from my UNIX server and uses it to correct the RTC. During correction, the program reports how much of a difference exists between the RTC and the time string, and it was clear from that information that the RTC is an accurate timekeeper.

Another possibility was the generation of spurious interrupts, which can be triggered by noise or improper device configuration. However, the code that updates the clocks is very careful about verifying that the watchdog is interrupting. So it was not likely that spurious interrupts were the culprit.

Carrying on with the detective work, I thought that perhaps the RTC was defective and thus removed it from POC and installed a spare one I had. Much to my amazement, the replacement had the same error rate as the other one, but was otherwise keeping good time. It had to be a problem with the watchdog itself, which meant I couldn't rely on it for any kind of timekeeping. So I decided to try something else to satisfy myself that my clock updating code wasn't malfunctioning.

The 28L92 DUART (also the 2692 and 26C92) has a 16 bit counter/timer (C/T) that can be used in several ways, one being as a free-running timer. The C/T is slaved to the 3.6864 MHz X1 clock that the DUART uses for baud rate timing, and may be programmed to generate evenly spaced IRQs over a wide range of periodic rates. Owing to the way the C/T works, writing a value of 18432 into its registers will cause it to underflow exactly 100 times per second, making it suitable for jiffy IRQ generation.

A relatively minor change to the ISR took care of detecting and processing C/T IRQs, and a change to the DUART setup table took care of configuring the C/T so it would run at the right rate and generate IRQs. Repeating some of the tests that had confirmed the watchdog was running a bit fast proved that the C/T jiffy IRQ was right on the money. For example, manually running the time delay function in the BIOS for 10 minutes produced an exactly 600 second delay. Feeling bold, I tried a 60 minute delay, and it indeed timed out in 3600 seconds.
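The divisor arithmetic behind the 18432 value works out exactly (editor's sketch; the factor of two reflects the C/T behavior alluded to above, where one full IRQ period spans two terminal counts):

```python
xtal_hz = 3_686_400            # DUART X1 baud-rate crystal
n = 18_432                     # value written to the C/T registers
# One full IRQ period spans 2 * N input clocks, owing to the way
# the C/T sequences its terminal counts (per the post above).
jiffy_hz = xtal_hz // (2 * n)
print(jiffy_hz)                # 100 Hz, with no remainder
```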

So it looks as though the DUART is now in charge of producing the jiffy IRQ.

As to why the watchdog is running fast, I have a theory but can't really confirm it. The DS1511's time base is a 32.768 kHz crystal-controlled oscillator, whose period is 30.517578125 microseconds. The watchdog's purported 10 millisecond timeout resolution is not exactly achievable with that time base: 10 ms is not an integer multiple of the oscillator period, so the watchdog cannot run at exactly the desired rate. This characteristic has no effect on timekeeping, as the clock part of the RTC has a one second resolution.

I should note that in its intended purpose, the watchdog timer doesn't have to be very precise, as all it's supposed to do is give the MPU a swift kick in the you-know-what if the machine crashes. Whether that happens 10ms or 11ms after the MPU goes belly-up isn't terribly important in the scheme of things. :lol:

 Post subject: Re: POC Computer
PostPosted: Wed Apr 29, 2015 8:32 am 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10793
Location: England
Interesting... I can't presently see how it manages to be 1% fast. The 10ms granularity of setting the watchdog period would mean the counter needs to tick every 327.68 ticks - if it ticks every 327 ticks then it would be 0.2% fast.
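BigEd's figures above can be reproduced (editor's arithmetic, appended for reference):

```python
osc_period_us = 1e6 / 32768              # 30.517578125 microseconds per tick
ticks_for_10ms = 10_000 / osc_period_us
print(ticks_for_10ms)                    # 327.68 -- not an integer

truncated_period_us = 327 * osc_period_us
error_pct = (10_000 - truncated_period_us) / 10_000 * 100
print(round(error_pct, 2))               # about 0.21% fast if the counter
                                         # truncates to 327 ticks -- which does
                                         # not account for the observed ~1%
```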

It would be interesting to know what accuracy you'd see if you ran with a 20ms watchdog instead of 10ms.

Also interesting is that the 'repetitive' mode of the watchdog, according to the datasheet, reloads the counter when the interrupt is serviced and the registers read - which would tend to make the thing run very slightly slow:
Quote:
in repetitive mode. When the watchdog times out, both WDF and IRQF are set. IRQ goes active and IRQF goes to 1. The watchdog timer is reloaded when the processor performs a write of the watchdog registers and the timeout period restarts


Also interesting, if you're swapping parts around, once it's been plugged in, if you don't take care to disable the oscillator then it will keep running and use up the internal battery:
Quote:
When the DS1511 is shipped from the factory, the internal oscillator is turned off. This feature prevents the lithium
energy cell from being used until it is installed in a system. The oscillator is automatically enabled when power is
first applied.


 Post subject: Re: POC Computer
PostPosted: Wed Apr 29, 2015 1:35 pm 

Joined: Tue Mar 05, 2013 4:31 am
Posts: 1373
That is an interesting find... but as you dig into the Maxim datasheet, it makes sense that the WD timeout cannot be that accurate. The BCD registers used to set the timeout value are coded against a 10ms unit, which, as you've discovered, is not truly achievable.

I've been looking at the Benchmarq BQ4845 RTC, which also provides a programmable WD timeout and a periodic interrupt. Looking at the periodic timeout values they show, they do make sense based on the 32.768 kHz internal clock, albeit they round the lower values (30.5175 usec, 488.281 usec, etc.). Based on the values presented, it's derived from the chained divide-by-two stack, which provides up to a 500ms delay.

Regardless, neither chip will give you a jiffy clock that is exactly 100Hz, but the BQ4845 could be used for a more accurate jiffy clock using one of its known timeout values (e.g., 128Hz). For my 65C02 system, I used the 65C22 timer as a free-running counter, but that requires setting the timer value based on the CPU clock rate. I'm using a 4ms (250Hz) value, which can be accurately configured for typical CPU clock rates from 1 to 16 MHz. Of course, can oscillator precision varies a bit from unit to unit; I have 3 systems running concurrently and they all drift a bit differently.
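A quick check that a 4ms tick fits the VIA's 16-bit timer across that clock range (editor's sketch; this ignores the T1's small fixed cycle overhead in free-run mode):

```python
period_s = 0.004                        # 4 ms -> 250 Hz jiffy
counts_by_clock = {}
for mhz in (1, 2, 4, 8, 14, 16):
    counts = round(mhz * 1_000_000 * period_s)
    assert counts <= 0xFFFF             # fits the 16-bit counter/latch
    counts_by_clock[mhz] = counts
print(counts_by_clock)                  # e.g. a 16 MHz clock needs 64000 counts
```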

_________________
Regards, KM
https://github.com/floobydust


 Post subject: Re: POC Computer
PostPosted: Wed Apr 29, 2015 3:23 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8144
Location: Midwestern USA
BigEd wrote:
Also interesting is that the 'repetitive' mode of the watchdog, according to the datasheet, reloads the counter when the interrupt is serviced and the registers read - which would tend to make the thing run very slightly slow:
Quote:
in repetitive mode. When the watchdog times out, both WDF and IRQF are set. IRQ goes active and IRQF goes to 1. The watchdog timer is reloaded when the processor performs a write of the watchdog registers and the timeout period restarts

That blurb in the data sheet is misleading. It is only necessary to read the interrupt status to clear the IRQ. The registers themselves don't have to be read to maintain operation. I found that out through experimentation and verified it with Maxim's technical support.

Quote:
Also interesting, if you're swapping parts around, once it's been plugged in, if you don't take care to disable the oscillator then it will keep running and use up the internal battery:
Quote:
When the DS1511 is shipped from the factory, the internal oscillator is turned off. This feature prevents the lithium
energy cell from being used until it is installed in a system. The oscillator is automatically enabled when power is
first applied.

That's correct. I have a small program that turns off the oscillator to conserve the battery.
