ElEctric_EyE wrote:
This is the result of the SDRAM pipeline?
No, right now I have a list of sprites that look like this:
Code:
X=0, Y=00, Sprite=0
X=0, Y=16, Sprite=0
X=0, Y=32, Sprite=0
...
X=0, Y=208, Sprite=2
X=0, Y=224, Sprite=2
That's a list of 15 sprites in the Mario game, forming the first column of tiles (Y runs from 0 to 224 in steps of 16). Sprite=0 means a 16x16 image of sky, and Sprite=2 is a 16x16 image of the rocks. This is then followed by 16 more columns, at X=16, X=32, and so on.
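In C terms, that column could be built like this. This is only a sketch of the idea; the struct layout and names (`SpriteDesc`, `build_first_column`) are my own assumptions, not the actual descriptor format:

```c
#include <stdint.h>

/* Hypothetical descriptor layout -- field names are assumptions. */
typedef struct {
    int16_t x;      /* horizontal position in pixels */
    int16_t y;      /* vertical position in pixels */
    uint8_t sprite; /* index of the 16x16 image (0 = sky, 2 = rocks) */
} SpriteDesc;

/* First column of the playfield: 15 tiles of 16 pixels, Y = 0..224. */
#define ROWS 15
SpriteDesc column0[ROWS];

void build_first_column(void) {
    for (int row = 0; row < ROWS; row++) {
        column0[row].x = 0;
        column0[row].y = (int16_t)(row * 16);
        /* bottom two rows are rocks (Sprite=2), the rest sky (Sprite=0) */
        column0[row].sprite = (row >= 13) ? 2 : 0;
    }
}
```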
Now, if I want to scroll the entire play field 1 pixel to the left, the CPU has to go through that list and subtract #1 from all the X coordinates, like this:
Code:
X=-1, Y=00, Sprite=0
X=-1, Y=16, Sprite=0
X=-1, Y=32, Sprite=0
...
X=-1, Y=208, Sprite=2
X=-1, Y=224, Sprite=2
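The cost being described is one write per sprite per frame. A minimal C sketch of that loop (again with an assumed descriptor struct):

```c
#include <stdint.h>

/* Assumed absolute-coordinate descriptor; names are mine. */
typedef struct {
    int16_t x, y;
    uint8_t sprite;
} SpriteDesc;

/* Without relative coordinates, scrolling means rewriting every X,
 * every frame -- the work grows with the number of sprites. */
void scroll_left_absolute(SpriteDesc *list, int count, int pixels) {
    for (int i = 0; i < count; i++)
        list[i].x -= (int16_t)pixels; /* one write per sprite */
}
```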
With my new feature, the first column would look like this:
Code:
X=00, Y=00, Sprite=0
X+=0, Y+=16, Sprite=0
X+=0, Y+=16, Sprite=0
...
X+=0, Y+=16, Sprite=2
X+=0, Y+=16, Sprite=2
The "Y+=16" notation means that the sprite is positioned 16 pixels lower than the previous sprite. This option would be encoded using an extra bit in the sprite descriptor. Now, if you want to move all the sprites one pixel to the left, the CPU only has to change the first one:
Code:
X=-1, Y=00, Sprite=0
X+=0, Y+=16, Sprite=0
X+=0, Y+=16, Sprite=0
...
X+=0, Y+=16, Sprite=2
X+=0, Y+=16, Sprite=2
The other 14 sprites in this list would automatically be shifted as well, because each one's position is computed relative to the previous sprite: -1 + 0 = -1.
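A software model of how the sprite engine might walk such a list, accumulating positions so relative entries inherit any change made upstream. The exact bit layout and names are assumptions; only the absolute/relative flag is from the post:

```c
#include <stdint.h>

/* Sketch of the proposed encoding: one extra bit marks a relative entry. */
typedef struct {
    int16_t dx, dy;   /* absolute position, or offset from previous sprite */
    uint8_t sprite;
    uint8_t relative; /* the "extra bit" in the sprite descriptor */
} SpriteDesc;

/* Walk the list once, producing the final screen coordinates.
 * Absolute entries reset the running position; relative entries add to it,
 * so editing one absolute entry moves everything chained after it. */
void resolve(const SpriteDesc *list, int count, int16_t *xs, int16_t *ys) {
    int16_t x = 0, y = 0;
    for (int i = 0; i < count; i++) {
        if (list[i].relative) {
            x += list[i].dx;
            y += list[i].dy;
        } else {
            x = list[i].dx;
            y = list[i].dy;
        }
        xs[i] = x;
        ys[i] = y;
    }
}
```

With the first entry set to X=-1 and the rest marked relative, every resolved X comes out as -1 after a single CPU write, which is the whole point of the feature.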