Crossing the Reconfigurable Computing Chasm

In 1960, Gerald Estrin presented “Organization of computer systems: the fixed plus variable structure computer” at the Western Joint IRE-AIEE-ACM Computer Conference. His abstract reads in part: “…a growing number of important problems have been recorded which are not practicably computable by existing systems. These latter problems have provided the incentive for the present development of several large scale digital computers with the goal of one or two orders of magnitude increase in overall computational speed.” His solution to the problem is right there in the title: the “fixed plus variable structure computer,” thus giving birth to the concept of reconfigurable computing.

Yep. The idea of reconfigurable computing predates Moore’s Law, which was born five years later in 1965.

Over the next 58 years, we have chased that reconfigurable computing carrot, dangling enticingly just out of reach from the end of our ever-evolving programming pole. And, for at least three of those six decades, we have had reasonable hardware available to fulfill the promise of reconfigurable computing: an architectural alternative that could deliver Estrin’s “one or two” orders of magnitude increase in overall computational speed. In fact, we now know it might deliver as much as three or four orders of magnitude, and that is on top of our almost-entitled biennial doubling due to Moore’s Law.

And, yet, we are still not there.

One could argue that Moore’s Law has prevented us from realizing the promise of reconfigurable computing. After all, simply riding the von Neumann horse from one semiconductor process node to the next gave us a reliable 2x improvement in price, performance, and power every other year, and we didn’t have to rewrite our software or even redesign our computing architecture to get it. When you get that kind of bounty almost for free, who needs to be greedy?
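The arithmetic behind that bounty is worth making explicit. Here is a minimal back-of-the-envelope sketch (in Python, assuming only the 2x-every-other-year cadence and the one-to-four-orders-of-magnitude speedups cited above) of why riding the von Neumann horse was so tempting:

```python
# Back-of-the-envelope comparison: compounded Moore's Law scaling vs. the
# one-time speedup promised by a reconfigurable architecture.
# Assumptions (from the article's framing): a 2x improvement every 2 years,
# and reconfigurable speedups of 10x to 10,000x (one to four orders of
# magnitude).

import math

YEARS = 58            # 1960 through the time of writing
DOUBLING_PERIOD = 2   # Moore's Law cadence, in years

# Cumulative gain from process scaling alone, compounded over 58 years.
moore_gain = 2 ** (YEARS / DOUBLING_PERIOD)
print(f"Moore's Law gain over {YEARS} years: {moore_gain:.2e} "
      f"(~{math.log10(moore_gain):.0f} orders of magnitude)")

# A one-time architectural speedup is matched by Moore's Law alone after a
# fixed number of doublings: years = DOUBLING_PERIOD * log2(speedup).
for speedup in (10, 100, 1_000, 10_000):
    years_to_match = DOUBLING_PERIOD * math.log2(speedup)
    print(f"A {speedup:>6,}x one-time speedup is overtaken by process "
          f"scaling in ~{years_to_match:.0f} years")
```

Compounded over 58 years, process scaling alone yields roughly nine orders of magnitude, and even a one-time 10,000x architectural win is overtaken by about 27 years of routine doubling, with no software rewrite required.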

Reconfigurable computing has always had a die-hard cult-like academic…
