For the two ROM technologies you asked about, the DRAMs of the same era were much slower; see the 1973 Intel Memory Design Handbook.
Compared to modern DRAMs, i.e. DDR SDRAMs, these ROM technologies have comparable random access speeds but dramatically slower sequential access speeds. The ROMs you mentioned are implemented specifically to support random access. The multiplexer required to connect a memory cell to the sense amps, even when the cell holds a static mask-programmed value, is a complex device in its own right: there is at least one logic level per address bit in the decode path that selects the desired memory cell and connects it to the output drivers. There's no free lunch in IC design, and that random-access multiplexer represents a significant amount of delay.
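To make the "one logic level per address bit" point concrete, here's a minimal sketch that estimates decoder/multiplexer depth and delay. The per-level gate delay is a purely illustrative number I've chosen, not a figure from any datasheet:

```python
import math

def decoder_levels(num_words: int) -> int:
    """Address bits needed to select one of `num_words` words,
    which is also the depth of a simple binary decode/mux tree."""
    return math.ceil(math.log2(num_words))

def mux_delay_ns(num_words: int, gate_delay_ns: float) -> float:
    """Crude delay estimate: one gate delay per decode level.
    `gate_delay_ns` is an assumed illustrative value."""
    return decoder_levels(num_words) * gate_delay_ns

# An 8K-word ROM needs 13 address bits, so ~13 logic levels:
print(decoder_levels(8192))     # 13
print(mux_delay_ns(8192, 2.0))  # 26.0 (ns, at an assumed 2 ns/level)
```

The point is only that the delay grows with the number of address bits the selector has to resolve, which is why a wide random-access mux is never free.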
A modern DRAM has a random access delay of around 10 clock cycles. Sequential access is significantly faster once the row has been loaded from the array into the row buffer and the column selector (i.e. the starting point within the row) has been preset. The associated multiplexers are faster than the ROM's because they generally decode only about half as many address bits, and the entire row is read from the DRAM array in parallel.
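Here's a rough sketch of that random-vs-sequential gap, using illustrative timing parameters (the tRCD and CAS values are assumptions for the example, not taken from any particular part):

```python
def random_access_cycles(trcd: int, cas: int) -> int:
    """A random access must open the row (tRCD) and then issue
    a column read (CAS latency)."""
    return trcd + cas

def amortized_sequential_cycles(cas: int, burst_length: int) -> float:
    """Once the row buffer holds the row, a burst pays the CAS
    latency once and then streams roughly one word per cycle
    (ignoring DDR's two beats per clock, for simplicity)."""
    return (cas + burst_length) / burst_length

# Assumed illustrative timings: tRCD = 5, CAS = 5, burst of 8 words
print(random_access_cycles(5, 5))        # 10 cycles per word
print(amortized_sequential_cycles(5, 8)) # 1.625 cycles per word
```

With these assumed numbers, streaming within an open row is several times cheaper per word than repeated random accesses, which is the asymmetry described above.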
If a ROM were designed for sequential access within a row of the array, it would be faster than a DRAM, since it doesn't have to refresh its memory cells. Modern NAND flash provides very high-speed sequential access. It's not quite as fast as a modern DRAM, but it's giving SDRAM a run for its money. Like DRAM, it uses a paged access mechanism optimized for sequential access, hence its extensive use in high-density solid-state drives (SSDs).
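The paged-access point can be illustrated the same way: NAND pays a one-time page load, then streams the page out. All the numbers below (page size, page read time, per-byte transfer time) are assumptions chosen for illustration, not specs of any real device:

```python
def nand_sequential_throughput_mb_s(page_bytes: int,
                                    t_page_read_us: float,
                                    t_xfer_ns_per_byte: float) -> float:
    """Amortized sequential throughput: one page-load penalty,
    then a fixed per-byte streaming cost. Returns MB/s."""
    total_s = t_page_read_us * 1e-6 + page_bytes * t_xfer_ns_per_byte * 1e-9
    return page_bytes / total_s / 1e6

# Assumed: 4 KiB page, 25 us page read, 10 ns/byte transfer
print(round(nand_sequential_throughput_mb_s(4096, 25.0, 10.0), 1))
```

The page-load cost is amortized across thousands of bytes, which is why sequential NAND throughput can approach, without matching, what SDRAM delivers.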