Just now, I was musing on how far we’ve come with computers

Just now, I was musing on how far we’ve come with computers since I first started working with them in the late 1970s. I managed to catch the wave just as systems were transitioning from the half-million-dollar VAX/VMS and IBM System/370 behemoths of the past to minicomputers (about $10,000), a few years before the first home microcomputers (about $1,000+) became popular. So, for example, at MIT I worked quite extensively on an old Data General Nova II computer. The retail price was perhaps $10,000, but I think our lab inherited it for free from the HEAO project downstairs as they migrated upward to the (then-new) Data General Eclipse computers, which were totally incompatible with the older parts. The specs on that computer were impressive for the day:

1) Two 16Kx16 memory boards delivered an impressive 32K words (16-bit words) of memory. Better still, because it was core memory you could turn the computer off while a program was running, come back the next day, turn it on, and it would pick up right where it left off without a hiccup.

2) The CPU board was about two feet square, covered with small-scale TTL chips, the new technology revolutionizing computers of the era. It could run through about 400,000 instructions per second. I think the LARGEST chip on the CPU board was a 74181 bit-slice ALU, a chip with the equivalent of about 75 logic gates (perhaps 150 transistors). For its time, that was a pretty impressive chip!

3) We had two large magnetic disk drives whose platters hummed noisily along (I remember I tended to program with a radio or headphones on). Each one held a whopping 5MB of storage. At one point we inherited a ten-platter drive (50MB, WOW!!) but it had a hardware problem that I couldn’t get around, so we eventually put it out for recycling. We also scored a 9-track magnetic tape drive from a downstairs lab, and after figuring out why a particular set of transistors kept burning out, managed to use it occasionally (though it was so loud my co-workers encouraged me NOT to make use of it). Each tape reel could store perhaps 100MB of data (guessing).

4) I programmed principally in Fortran IV for astronomical calculations, but astronomers at the time were also fiddling with a novel stack-based interactive language called Forth, which got me hooked on weird software ideas. (A toy sketch of Forth’s stack idea appears later in this post.)

Where this is headed is this observation. Except for the disk and tape drives, which would still need to be external systems, that entire computer and its memory could be implemented on a single CHIP, such as the Xilinx Artix-7 FPGA that I’m currently programming, with room to spare. Instead of operating at 400K instructions per second, you could easily clock through perhaps 100M instructions per second on a chip costing about $145. That chip has about 4.5MB of distributed on-chip memory, or roughly 75 times as much memory as our 32K-word system supported (and I believe the instruction set could address a maximum of 64K words).

Now, if you purchase a fully assembled FPGA development board, such as my current favorite, the Digilent Nexys-4 board, for about $320 retail, you get USB and Ethernet connectivity, 16MB of additional onboard memory, color VGA video output, and several expansion connectors. You could easily emulate these early systems if you wanted to; a minimal sketch of the idea appears below.
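To give a flavor of what such an emulation involves, here is a toy fetch-decode-execute loop in C. The instruction format and opcodes are invented purely for illustration; this is NOT the actual Nova II instruction set, just the general shape of a 16-bit word-addressed machine.

    #include <stdint.h>
    #include <stdio.h>

    /* Toy 16-bit accumulator machine, invented for illustration only
       (NOT the Nova II encoding). Top 4 bits of each word are the
       opcode; the low 12 bits are a memory address (4K words). */
    #define MEM_WORDS 4096

    enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2, OP_STORE = 3, OP_JMP = 4 };

    static uint16_t mem[MEM_WORDS];

    static void run(uint16_t pc) {
        uint16_t acc = 0;
        for (;;) {
            uint16_t instr = mem[pc++];      /* fetch */
            uint16_t op    = instr >> 12;    /* decode */
            uint16_t addr  = instr & 0x0FFF;
            switch (op) {                    /* execute */
            case OP_LOAD:  acc = mem[addr];  break;
            case OP_ADD:   acc += mem[addr]; break;
            case OP_STORE: mem[addr] = acc;  break;
            case OP_JMP:   pc = addr;        break;
            default:                         /* OP_HALT or unknown */
                printf("halted, acc = %u\n", acc);
                return;
            }
        }
    }

    int main(void) {
        /* Tiny program: mem[102] = mem[100] + mem[101], then halt. */
        mem[0] = (OP_LOAD  << 12) | 100;
        mem[1] = (OP_ADD   << 12) | 101;
        mem[2] = (OP_STORE << 12) | 102;
        mem[3] = (OP_HALT  << 12);
        mem[100] = 2;
        mem[101] = 3;
        run(0);
        printf("mem[102] = %u\n", mem[102]);
        return 0;
    }

On the FPGA itself you would express that same loop as a state machine in VHDL or Verilog, but the shape of the problem (fetch, decode, execute against a word-addressed memory) is identical.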
Indeed, I’m considering at some point down the line developing an FPGA implementation of the Intel 8085 CPU. That would be for fun and not for profit; unlike the Nova II, where virtually none of the original operating system and programs are extant today, there are loads of archived programs for CP/M and other early microcomputer systems that would run on an Intel 8085 system.
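As for Forth, mentioned in item 4 above: what hooked me was how little machinery it needs. Here is a toy sketch in C of the core idea, a data stack plus “words” that pop their operands and push results. Everything here is a simplified illustration (no dictionary, no bounds checking), not a real Forth implementation.

    #include <stdio.h>

    /* Toy Forth-style data stack. Simplified illustration only:
       no underflow/overflow checks, no interactive dictionary. */
    static int stack[64];
    static int sp = 0;                  /* stack pointer */

    static void push(int v) { stack[sp++] = v; }
    static int  pop(void)   { return stack[--sp]; }

    /* A few primitive "words" */
    static void w_add(void) { int b = pop(), a = pop(); push(a + b); }
    static void w_mul(void) { int b = pop(), a = pop(); push(a * b); }
    static void w_dup(void) { int a = pop(); push(a); push(a); }
    static void w_dot(void) { printf("%d ", pop()); }  /* Forth's "." */

    int main(void) {
        /* Equivalent of the Forth line:  3 4 + DUP * .   => prints 49 */
        push(3);
        push(4);
        w_add();
        w_dup();
        w_mul();
        w_dot();
        printf("\n");
        return 0;
    }

The real language adds a dictionary so that new words can be defined interactively in terms of old ones, which is what made it feel so alive on tiny machines.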
In any case, it just shows you how far we’ve come with silicon designs, and also, by comparison, how far we have yet to go. With the Von Neumann architecture for CPUs and memory, I think we’ve fallen into a bit of a trap performance-wise. Specialized GPUs and FPGAs can boost data throughput, but they are more difficult to program with current imperative languages, which are ill-suited to describing parallel and distributed programming tasks. The next wave will be systems that can exploit the performance that densely populated silicon chips can now deliver. But I’m repeating myself now, as I’m starting to echo what I previously said about this Bret Victor lecture on what “the future of programming” might look like. [In itself, this talk is another trip down memory lane.]

Posted on: Sat, 01 Nov 2014 23:21:36 +0000
