Advanced Computer Architecture

Big Picture Summary 3

Updated Friday November 20, 2020 7:37 PM GMT+3

Revised 11/20/2020 - This article is not standalone. It's meant to cap a detailed reading of a similarly titled set of technical papers (see the course calendar).

Memory and I/O

It is interesting to learn about cache memory from a computer architect's (system builder's) viewpoint. Undergraduates often encounter the topic in the context of memory organization, where it is hard to justify why one would go to such trouble in the first place. In reality, the cache was born out of the need to bridge a significant performance gap between two critical system components. It was a clever expressway proposed to improve effective memory bandwidth: by satisfying most accesses locally, it reduces trips to slower main memory and raises the data rate into the processor.
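The size of that gap is easiest to see with the standard average memory access time (AMAT) model. The numbers below are hypothetical, chosen only to illustrate the effect, not taken from the papers:

```python
def amat(hit_time_ns, miss_rate, miss_penalty_ns):
    """Average memory access time for a single cache level:
    AMAT = hit time + miss rate * miss penalty."""
    return hit_time_ns + miss_rate * miss_penalty_ns

# Without a cache, every access pays the full (assumed) DRAM latency.
no_cache = 100.0

# With a cache: assume a 1 ns hit and 5% of accesses still go to DRAM.
with_cache = amat(1.0, 0.05, 100.0)   # 1 + 0.05 * 100 = 6.0 ns

print(no_cache / with_cache)          # effective speedup from caching
```

Even with a modest 95% hit rate, the processor sees memory as more than an order of magnitude faster, which is why the "expressway" was worth the trouble.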

The cache, however, introduced major system performance issues of its own. A cache designed to keep up with a fast processor also generates extra traffic to maintain correct operation transparently to programs (neither programs nor programmers are aware of it), which was particularly challenging for earlier single-bus systems. Eventually, multibus and later dedicated high-speed point-to-point interconnects removed some of those concerns.
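One source of that hidden traffic is keeping multiple caches consistent. The toy sketch below illustrates the write-invalidate idea in a snooping, write-through style; it is a deliberately minimal model of the concept, not any specific protocol from the papers:

```python
class SnoopingCache:
    """Toy cache that snoops a shared 'bus' (a plain list of caches)."""

    def __init__(self, name, bus):
        self.name, self.lines, self.bus = name, {}, bus
        bus.append(self)                      # join the shared bus

    def read(self, addr, memory):
        if addr not in self.lines:            # miss: fetch from memory
            self.lines[addr] = memory[addr]
        return self.lines[addr]

    def write(self, addr, value, memory):
        self.lines[addr] = value
        memory[addr] = value                  # write-through, for simplicity
        for other in self.bus:                # the housekeeping traffic:
            if other is not self:
                other.lines.pop(addr, None)   # invalidate stale copies

bus, memory = [], {0x10: 1}
a, b = SnoopingCache("A", bus), SnoopingCache("B", bus)
a.read(0x10, memory); b.read(0x10, memory)    # both caches hold the line
a.write(0x10, 2, memory)                      # A's write invalidates B's copy
print(b.read(0x10, memory))                   # B misses and re-fetches: 2
```

Every write here costs a broadcast on the shared bus, which is exactly why such traffic strained single-bus systems.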

Cache design is not easy because it involves many, often conflicting, design factors and performance considerations. There is no one-size-fits-all solution. A successful cache involves careful tradeoffs that yield a net increase in performance while limiting housekeeping traffic. It is a big part of addressing the execution concern identified by Flynn.
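One such tradeoff: a bigger or more associative cache usually lowers the miss rate but raises the hit time. The design points and latencies below are invented for illustration, but the crossover they show is the real phenomenon:

```python
MISS_PENALTY = 100.0   # assumed main-memory latency in ns

# Hypothetical design points: (name, hit time in ns, miss rate)
designs = [
    ("small/fast",  1.0, 0.10),   # quick hits, misses often
    ("medium",      1.5, 0.04),   # balanced
    ("large/slow",  3.0, 0.03),   # rarely misses, but every hit is slower
]

for name, hit, miss in designs:
    print(f"{name:11s} AMAT = {hit + miss * MISS_PENALTY:.1f} ns")
```

Here the medium design wins (5.5 ns) over both the small (11.0 ns) and the large (6.0 ns) ones: growing the cache past a point costs more in hit time than it saves in misses, which is the sense in which the factors conflict.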

Learning from those who developed solutions to these issues helps us better understand modern computers. Today, the need for caching is more pressing than ever, with faster and increasingly bandwidth-hungry processors. Fortunately, advances in semiconductor technology have created more room on-chip (and in-package) for bigger, faster, more closely connected caches.

Nowadays, so-called cloud storage is turning out to be an increasingly relevant I/O device. No longer just for offline backup, it is becoming part of daily workflows, much like a traditional online I/O device. Network-based cloud storage relies on a local networking component at the user's end, and Ethernet has predominantly become that component. It is now a big part of what users perceive as the Internet, and a big part of what characterizes the user experience of cloud storage. In a sense, Ethernet significantly powers cloud storage.

The original Ethernet design is an example of a decentralized scheme, with all the issues that arise in such an environment. Don't let the fact that it happens to describe a networking technology distract you. Watch for parallels in multiprocessing.
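The decentralization shows most clearly in how classic Ethernet resolves contention: with no arbiter, colliding stations each wait a random number of slot times drawn from a window that doubles after every collision (truncated binary exponential backoff). A minimal sketch of that rule, with the classic 10 Mb/s slot time assumed for scale:

```python
import random

SLOT_TIME_US = 51.2   # classic 10 Mb/s Ethernet slot time, in microseconds

def backoff_slots(attempt, rng=random):
    """After the n-th collision, draw a delay uniformly from
    [0, 2**min(n, 10) - 1] slot times (truncated exponential backoff)."""
    window = 2 ** min(attempt, 10)
    return rng.randrange(window)

# Two stations that keep colliding pick from an ever-growing window,
# so their retries spread out with no central coordinator involved.
for attempt in range(1, 5):
    print(f"collision {attempt}: wait "
          f"{backoff_slots(attempt) * SLOT_TIME_US:.1f} us")
```

The multiprocessing parallel to watch for: processors contending for a shared bus or lock face the same problem, and randomized backoff reappears there for the same reason.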