Ye Olde FPGA Acceleration

Unless you’ve been hiding under a rock, you know that FPGA-based compute acceleration is suddenly a hot topic. And, even from under the rock, you probably got the memo that Intel paid over $16B to acquire Altera a couple years back – mostly to capitalize on this “new” emerging killer app for FPGA technology. These days there is an enormous battle brewing for control of the data center, as Moore’s Law slows to a crawl and engineers look for alternative ways to crunch more data faster with less power.

As we’ve discussed at length in these pages, FPGAs are outstanding platforms for accelerating many types of compute workloads, particularly those where datapaths lend themselves to massively parallel arithmetic operations. FPGAs can crush conventional processors by implementing important chunks of computationally intense algorithms in hardware, with dramatic reduction in latency and (often more important) power consumption.
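To make that concrete, here is a minimal sketch (not from the article) of the kind of loop an FPGA high-level-synthesis tool can flatten into parallel, pipelined hardware. The pragmas follow Xilinx Vivado HLS syntax (Intel's HLS compiler uses different directives), and the function name is purely illustrative:

```c
/* Illustrative HLS-style C: an element-wise multiply that a
 * high-level-synthesis tool can turn into a pipelined, partially
 * unrolled hardware datapath. Pragmas shown are Vivado HLS syntax. */
void vec_mul(const float a[1024], const float b[1024], float out[1024])
{
    for (int i = 0; i < 1024; i++) {
#pragma HLS UNROLL factor=8   /* replicate the multiplier 8x for parallelism */
#pragma HLS PIPELINE          /* overlap successive loop iterations in hardware */
        out[i] = a[i] * b[i];
    }
}
```

Where a conventional processor grinds through that loop one (or a few) multiplies per cycle, the synthesized datapath performs many in parallel, every cycle, without fetching or decoding a single instruction.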

The big downside to FPGA-based acceleration is the programming model. In order to get optimal performance from a heterogeneous computing system with FPGAs and conventional processors working together, you need a way to partition the problem, turn conventional code into appropriate FPGA architectures, and realize that whole thing in a well-conceived hardware configuration. This requires, among other things, a good deal of expertise in FPGA design, as well as an overall strategy that accounts for getting data into and out of those FPGA accelerators, and a memory and storage architecture that’s up to the task. Getting it right is no small feat, and there are countless ways to go wrong along the way and end up with very little gain from your FPGA investment.
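For a sense of what that host-side choreography looks like, here is a hedged, generic sketch using the OpenCL host API, the programming model both major FPGA vendors promoted for acceleration at the time. Everything in it, including the kernel name vec_mul, is illustrative rather than anything a particular vendor ships, and a real FPGA flow would load a precompiled bitstream with clCreateProgramWithBinary instead of building kernel source at run time:

```c
/* Generic OpenCL host-side sketch of the offload pattern: copy data to the
 * accelerator, run a kernel, copy the results back. Error checking omitted
 * to keep the shape of the pattern visible. */
#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>
#include <stdio.h>

#define N 1024

static const char *kernel_src =
    "__kernel void vec_mul(__global const float *a,\n"
    "                      __global const float *b,\n"
    "                      __global float *out) {\n"
    "    int i = get_global_id(0);\n"
    "    out[i] = a[i] * b[i];\n"
    "}\n";

int main(void)
{
    float a[N], b[N], out[N];
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f; }

    /* Find a device. On an FPGA card this would be
       CL_DEVICE_TYPE_ACCELERATOR; DEFAULT keeps the sketch runnable
       on any OpenCL implementation. */
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, NULL);

    /* Build the kernel (an FPGA flow loads a prebuilt binary here). */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "vec_mul", NULL);

    /* Step 1: move input data into the accelerator's memory. */
    cl_mem da  = clCreateBuffer(ctx, CL_MEM_READ_ONLY,  sizeof(a),   NULL, NULL);
    cl_mem db  = clCreateBuffer(ctx, CL_MEM_READ_ONLY,  sizeof(b),   NULL, NULL);
    cl_mem dov = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof(out), NULL, NULL);
    clEnqueueWriteBuffer(q, da, CL_TRUE, 0, sizeof(a), a, 0, NULL, NULL);
    clEnqueueWriteBuffer(q, db, CL_TRUE, 0, sizeof(b), b, 0, NULL, NULL);

    /* Step 2: launch the accelerated kernel. */
    clSetKernelArg(k, 0, sizeof(cl_mem), &da);
    clSetKernelArg(k, 1, sizeof(cl_mem), &db);
    clSetKernelArg(k, 2, sizeof(cl_mem), &dov);
    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);

    /* Step 3: move the results back to host memory. */
    clEnqueueReadBuffer(q, dov, CL_TRUE, 0, sizeof(out), out, 0, NULL, NULL);
    clFinish(q);

    printf("out[10] = %f\n", out[10]);  /* expect 20.0 */
    return 0;
}
```

Even in this toy form, the pattern makes the article's point: the kernel itself is the easy part, while the partitioning, the data movement in steps 1 and 3, and the memory architecture behind those buffers are where real designs are won or lost.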

At this week’s Supercomputing conference in Dallas, Bittware (acquired by Molex earlier this year) announced they were “joining forces” with Nallatech (acquired last year as part of Molex’s purchase of Interconnect Systems, Inc.). In the FPGA acceleration…
