Beginnings to the Front End
Computers can be made more capable by offering more transistors, which are increasingly smaller and faster-switching, or by better physical circuits and fabrication processes. Alternatively, a builder can rethink machine design: structural layout, function organization, and implementation schemes, i.e., the architecture, guided by rigorous performance evaluation [Jean-Loup Baer, Computer Systems Architecture, 1980]. It quickly became clear that gains in capability and performance from architectural changes are demonstrably superior to those from a brute-force approach focused on improving the physical aspects alone.
In the quest for ever faster, more capable computing, three long-term approaches to building computers emerged. One focused on centralized processing of instruction streams (Amdahl/IBM). The second advocated using autonomous processors and memories in parallel to overlap important activities and increase responsiveness (Thornton/CDC 6600). Both utilized parallel resources (memory buffers and functional units) to maximize performance. A third proposed exploiting the natural parallelism in data streams (Dennis/Dataflow), which implied the availability of parallel resources. Flynn identified an array of processors working in unison on vectors of data as a practically interesting mode of computing. His term, SIMD (single instruction, multiple data), caught on and stuck.
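The SIMD mode Flynn described can be sketched in a few lines. This is an illustrative toy, not code from any of the machines mentioned: a single operation is applied in lockstep across whole vectors of data, which is the essence of the classification.

```python
# Illustrative sketch of SIMD: one instruction stream (the operation),
# many data elements processed in unison. A real SIMD machine performs
# these element-wise additions simultaneously in hardware; the loop here
# merely stands in for the shared instruction stream.
def simd_add(a, b):
    return [x + y for x, y in zip(a, b)]

print(simd_add([1, 2, 3, 4], [10, 20, 30, 40]))  # [11, 22, 33, 44]
```

The point of the sketch is that control is issued once for the whole vector, rather than once per element as in a purely sequential instruction stream.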
Early hardware builders had realized that control could force artificial sequencing of instructions where data flow and logical independence allowed concurrent processing. In practice, centralized control and the workload characteristics of that era significantly limited the returns. In particular, Amdahl demonstrated in a seminal work that the amount of exploitable parallelism is a fundamental limiting factor on performance. Those early experiences proved important in the long run. Moreover, the three major performance bottlenecks identified by Flynn (storage-I/O, execution, and branch decisions) remain just as valid today. A fourth, parallelization, has emerged since.
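Amdahl's limit can be made concrete with the standard speedup formula: if a fraction p of the work is parallelizable across n processors, overall speedup is 1 / ((1 - p) + p/n), so the serial remainder (1 - p) caps the gain no matter how many processors are added. A minimal sketch:

```python
def amdahl_speedup(p, n):
    """Overall speedup when a fraction p of the work runs in parallel
    on n processors; the serial fraction (1 - p) bounds the result."""
    return 1.0 / ((1.0 - p) + p / n)

# With 95% of the work parallelizable, 10 processors give under 7x...
print(round(amdahl_speedup(0.95, 10), 2))        # 6.9
# ...and even a million processors cannot exceed 1/0.05 = 20x.
print(round(amdahl_speedup(0.95, 1_000_000), 1))  # 20.0
```

This is exactly why the amount of exploitable parallelism, not the raw count of parallel resources, is the fundamental limiting factor.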
By the late seventies, there was a strong realization of the need to simplify the front end. Adding more capable instructions with creative (read: elaborate) coding schemes results in costly decoding. Moore had predicted (perhaps demanded) a steady increase in the transistors available to designers, an insight that lived long past its time thanks to advances in semiconductor technology, driven by believers who turned the original observation into a law. Costly decoding, however, is not a wise place to spend a transistor budget, which is better spent on processing logic and on larger memories placed closer to the functional units (and thus faster).
The RISC (reduced instruction set computer) approach to machine instructions enables faster decoding and facilitates fast execution. Coherently designed instruction sets based on elementary operations also simplify compilers and help them focus on producing more efficient machine code. A side effect of RISC, perhaps unforeseen at the time, ended up being the more significant trend in the long run: simpler RISC processors are naturally power-efficient. The original MIPS processor (mid-80s), while not alone and certainly not the first, was a very significant RISC design in terms of its long-term influence on modern processors. It was the first RISC to achieve wide commercial success, and it remains a showcase for a purist interpretation of RISC design principles.
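The decoding contrast can be made concrete. In a fixed-width RISC encoding such as the classic MIPS I R-type format, every field sits at a known bit position, so decoding is nothing more than shifts and masks, with no variable-length parsing. A minimal sketch (field layout per the well-known MIPS I encoding; the example instruction is the textbook add):

```python
# A 32-bit MIPS R-type word packs six fixed fields:
# opcode(6) | rs(5) | rt(5) | rd(5) | shamt(5) | funct(6)
# Fixed positions mean the decoder is a handful of shift-and-mask gates.
def decode_rtype(word):
    return {
        "opcode": (word >> 26) & 0x3F,
        "rs":     (word >> 21) & 0x1F,
        "rt":     (word >> 16) & 0x1F,
        "rd":     (word >> 11) & 0x1F,
        "shamt":  (word >> 6)  & 0x1F,
        "funct":  word         & 0x3F,
    }

# "add $t0, $t1, $t2" encodes as 0x012A4020: rd=8, rs=9, rt=10, funct=0x20.
print(decode_rtype(0x012A4020))
```

A variable-length CISC encoding, by contrast, cannot even locate the operand fields until earlier bytes have been examined, which is precisely the decoding cost the RISC designers set out to eliminate.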
Modern computers, even at the low end, build on most, if not all, of the previous approaches in some way. They differ in design priorities and trade-offs depending on the target application. Computer architecture is the art, science, and engineering of building computers. Modern computers still benefit significantly from improvements in the design and characteristics of physical devices, circuits, and fabrication processes, but they rely on architectural changes for the big gains.