Not necessarily; I would expect the signal to be generated from a constant divider of 800 pixels per line, irrespective of the actual dot clock frequency. In which case, the signal would be slightly slower than expected, but I'd expect many or all displays to cope with it.
What do you mean, not necessarily? The standard gives you 25.422 µs to display 640 pixels. If you are taking 0.04 µs to display a pixel you will lose the trailing 8 pixels. There's no ambiguity about that.
I think we may be confusing each other. The standard gives timings based on 800 pixels per line, of which 640 are reserved for the image data. If your clock rate is inexact, but the signals are coming from a divider chain, then the overall frequency is wrong but there are still 640 pixels being emitted - they just don't take the length of time that the standard expects them to take.
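To put numbers on that, here's a little sketch of the divider-chain case. With a fixed 800-count line divider, a slow dot clock still produces exactly 640 active pixels; only the durations stretch. (The 25 MHz figure below is just an illustrative off-frequency clock, not from the standard.)

```python
# With an 800-count divider chain, any dot clock still yields 640
# active pixels per line -- the line period just scales with the clock.
H_TOTAL, H_ACTIVE = 800, 640          # pixels per line / active pixels

def line_timing(dot_clock_hz):
    """Return (line period, active period) in microseconds."""
    pixel_us = 1e6 / dot_clock_hz
    return H_TOTAL * pixel_us, H_ACTIVE * pixel_us

nominal = line_timing(25_175_000)     # standard 25.175 MHz dot clock
slow    = line_timing(25_000_000)     # hypothetical slightly slow clock

# Both cases emit exactly 640 active pixels; only the timings differ.
print(f"nominal: line {nominal[0]:.3f} us, active {nominal[1]:.3f} us")
print(f"slow:    line {slow[0]:.3f} us, active {slow[1]:.3f} us")
```

The slow clock gives a 32.0 µs line instead of 31.778 µs, so the whole frame runs a little under 60 Hz, but the pixel count per line is unchanged.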
But this is an analogue standard. The display scan time is set by the interval between the syncs, not the exact timing of those syncs, and on the originally intended CRT displays, there is no internal reference to blanking period at all. The signal to the tube base is either black - through the blanking periods - or has information (which may be black) and that information will light the screen. And it will light it whenever it occurs: if you pulse the video signal during flyback, you'll see it. If your image is shifted 10 µs relative to where it should be, you'll see it shifted by a third of the screen.
The digital display has to work out where the 'beam' is. It has to know how to isolate the active line from the blanking, which, again, is not a signal presented to it. One approach might be to take a very stern view about the timings, and sample 640 times (with appropriate stretching) across the line based on an exact interval after the line sync pulse. This would indeed lose pixels if the dot clock is too slow, but it would also lose pixels at one end or the other of the line if there is any error in the sync timing. A better approach might be to look for active picture - the first and last non-black pixels on the line - across a field or two and derive internal timings from that.
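The 'look for active picture' idea can be sketched like so. This is purely illustrative - the function names and the threshold are my inventions, not anything from real display firmware - but it shows the shape of the technique: scan a line of sampled levels for the first and last samples above the black level.

```python
# Sketch of active-picture detection on one sampled line.
# BLACK_THRESHOLD is an assumed noise floor above black level.
BLACK_THRESHOLD = 0.05

def active_span(samples):
    """Return (first, last) indices of non-black samples, or None."""
    lit = [i for i, s in enumerate(samples) if s > BLACK_THRESHOLD]
    if not lit:
        return None          # pathological case: an all-black line
    return lit[0], lit[-1]

# A toy line: black back porch, 640 lit samples, black front porch.
line = [0.0] * 100 + [0.7] * 640 + [0.0] * 60
print(active_span(line))     # -> (100, 739)
```

A real display would accumulate this over a field or two and take the extremes, which is exactly why a mostly black screen gives it nothing to chew on.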
I suspect that the monitor I use for testing uses both methods. It has an internal model of where things should be, and until the auto button is pressed that's how it displays. An off-frequency input is displayed shifted, with black pixels at one side or the other. When the auto button is pressed, things shimmy around for a second or two while it does a phase lock and at that point it displays the full pixels in the full width of the screen. Thereafter the settings are stored internally and remembered for next time.
Things can get confused with pathological signals, for example a mostly black screen with a flashing cursor in one corner, or colour bars. With the first, I think it just assumes standard timings, and won't show any issue until you fill the screen to both edges (at which point you can push the auto button if necessary now it has something to chew on!) but colour bars traditionally begin with a black bar. Auto on that, and the black bar disappears (which itself suggests a very wide range of adjustment in the PLL in the display). If you have colour bars with a red section on the lower quarter of the screen, then that extends the full picture width and there's no problem.
As an aside regarding timings, although TinyVGA.com indicates timings in pixels, all those timings are actually related to 8-pixel-wide character cells, which I find much easier to think in since that's directly related to when you need to sample video RAM. I don't know much (anything) about CPLDs but I suspect smaller numbers are easier to manage.
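For anyone who wants the numbers, here's the 640x480@60 horizontal timing as commonly published (the pixel figures match what TinyVGA lists), with the division by 8 done to show the character-cell view:

```python
# 640x480@60 horizontal timings in pixels, converted to 8-pixel
# character cells. Pixel figures are the commonly published ones;
# the cell column is just division by 8.
H_TIMINGS_PX = {"active": 640, "front porch": 16, "sync": 96, "back porch": 48}

for name, px in H_TIMINGS_PX.items():
    print(f"{name:12s} {px:3d} px = {px // 8:3d} cells")

total = sum(H_TIMINGS_PX.values())
print(f"{'total':12s} {total:3d} px = {total // 8:3d} cells")
```

So a whole line is 100 cells, of which 80 are active - much friendlier counters to build than 800 and 640.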
Neil