Streamlining Embedded Design using FPGAs
Although the first FPGA, the XC2064, was launched by Xilinx (now AMD-Xilinx) in 1984, the term FPGA was popularized by Actel in 1988. From the time they were introduced, FPGAs were considered to have an edge over ASICs and PALs in several respects, the most important being little or no NRE (Non-Recurring Engineering) cost. Other advantages include a reduced risk of failure and the ability to be reprogrammed in the field.
The beauty of FPGAs stems from the fact that the design is not tied to fixed hardware; the circuit does not even exist before we write the code! It is like drawing a circuit on a blank piece of paper. Once the HDL code is synthesized into its hardware counterpart, it gets wired up inside the FPGA chip. This means that your code is turned into a circuit specific to your application. This flexibility is exactly what makes FPGAs popular for embedded system applications.
What are FPGAs?
Field Programmable Gate Arrays (FPGAs) are semiconductor devices that, as the name suggests, are field programmable. Their internal circuitry is not mapped when you buy them; you have to program them (or re-program them) using HDLs (Hardware Description Languages). They consist of an array of logic blocks, DSP blocks, on-chip block RAMs (BRAMs), I/O pads, and routing channels.
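To give a feel for what programming an FPGA looks like, here is a minimal Verilog sketch (not taken from the article) of a 4-bit counter; the module and signal names are purely illustrative. A synthesis tool maps a description like this onto the logic blocks, flip-flops, and routing channels mentioned above.

```verilog
// Illustrative sketch: a 4-bit counter described in Verilog.
// The tools synthesize this description into flip-flops and lookup tables
// and wire it up inside the FPGA fabric.
module counter4 (
    input  wire       clk,   // clock input, e.g. from an on-board oscillator
    input  wire       rst,   // synchronous reset
    output reg  [3:0] count  // 4-bit count value, e.g. driven out to LEDs
);
    always @(posedge clk) begin
        if (rst)
            count <= 4'd0;
        else
            count <= count + 4'd1;
    end
endmodule
```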
Some cutting-edge features of FPGAs
- Offer a high degree of parallelism in application execution
- Allow custom processing architectures to be implemented in the same fabric
- Allow even data path widths and register lengths to be tailored to the application (see the sketch after this list)
- Deliver very good performance per watt
- Can be revised in production more quickly than other devices
- Offer a large amount of flexible I/O
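As a concrete illustration of the parallelism and adjustable data-path widths mentioned above, here is a hedged Verilog sketch; the module and parameter names are illustrative, not from the article. WIDTH sets the data-path width and LANES sets how many additions are performed on every clock edge; each lane becomes its own piece of hardware, so all of them update simultaneously.

```verilog
// Illustrative sketch: LANES independent adders of WIDTH bits each.
// Both parameters can be changed to suit the application before synthesis.
module parallel_add #(
    parameter WIDTH = 16,  // data-path width, adjustable per application
    parameter LANES = 4    // number of additions performed in parallel
) (
    input  wire                   clk,
    input  wire [LANES*WIDTH-1:0] a,   // LANES operands packed side by side
    input  wire [LANES*WIDTH-1:0] b,
    output reg  [LANES*WIDTH-1:0] sum
);
    integer i;
    always @(posedge clk) begin
        // Every lane is separate hardware, so all LANES results are
        // computed in the same clock cycle rather than sequentially.
        for (i = 0; i < LANES; i = i + 1)
            sum[i*WIDTH +: WIDTH] <= a[i*WIDTH +: WIDTH] + b[i*WIDTH +: WIDTH];
    end
endmodule
```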
The 3 Ages of FPGAs
In his paper titled “Three Ages of FPGAs: A Retrospective on the First Thirty Years of FPGA Technology”, Stephen Trimberger, an American computer scientist, electrical engineer, and inventor, notes that FPGAs have passed through three distinct phases of development. He calls these phases “Ages”, each eight years long.
These are:
- Age of Invention, 1984–1991: FPGAs were much smaller than the applications that users wanted to put into them. As a result, multiple-FPGA systems became popular.
- Age of Expansion, 1992–1999: FPGA vendors found themselves competing against both ASIC technology and EDA technology.
- Age of Accumulation, 2000–2007: By the start of the new millennium, FPGAs were common components of digital systems.
Evolution of FPGAs
Chetan Khona, Director of Industrial, Vision, Healthcare & Sciences at AMD-Xilinx says, “If you go back to about 1996, back then you had a very common embedded architecture where you’d have a general-purpose processor, a DSP, memory, and an FPGA as a companion chip. Primarily, the role of the FPGA was to handle custom I/O and industrial communications.” Being programmable, FPGAs were then used to perform functions for which there was no specific chip available.
Over the past ten years, FPGAs have grown in both speed and capacity, and now even DIYers and hobbyists are using them for their projects. They have made their way into consumer electronics, automotive, and industrial applications. However, communications and networking are still the two largest markets for FPGA products.
To popularize FPGA usage in the embedded systems market, FPGA companies started offering soft processor cores that can be implemented using the FPGA logic. A soft-core processor is a processor built out of the FPGA fabric itself and described in a hardware description language such as Verilog or VHDL. More recently, FPGA vendors have started providing hardened embedded processors inside the FPGA. For example, Altera’s Arria V FPGA includes an 800 MHz dual-core Cortex-A9 MPCore.
Today, FPGA design is getting simpler than ever before. Even the need for in-depth knowledge of HDLs for embedded FPGA design might be eliminated in the future. “There are many customers that use MATLAB Simulink, C++, OpenCL, Python, ROS 2, and various graphical means to do partial or complete Xilinx designs today. AMD-Xilinx has been investing in alternatives to traditional VHDL and Verilog design for over two decades to enable embedded developers with various backgrounds to take advantage of the benefits of adaptive computing,” says Mr Khona.
When to Use FPGAs (and When Not To)
When it comes to embedded design, many conscious decisions have to be made while selecting components and deciding the system architecture. One of these is choosing the brains of your system – an FPGA, a microcontroller, a PLC, and so on. Using an FPGA will not provide any advantage unless its features are leveraged wisely for the application being built.
So when is using an FPGA going to be the most beneficial?
- When there’s no dedicated chip available in the market for your application
- When you’re not going to mass-produce in large volumes (around 10,000+ units), where an ASIC’s NRE cost would pay off
- When you want to produce chips in bulk, but want to test them on an FPGA to make sure that your technology is sound
- When your application runs particularly slow on a microcontroller
- When efficient parallel processing is crucial to the functioning of your application
- When you want hardware and firmware updates to happen quickly, and without much downtime
- When you are a company launching its first product and you want to get to market quickly, without spending money on technology that will definitely need upgrades
- When you need precise, clock-by-clock control of timing signals and there are many hardware interfaces involved (a minimal sketch of such cycle-accurate control follows this list)
- When you want to dedicate some hardware resource on the FPGA to accelerate certain operations
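As a small example of the clock-by-clock timing control mentioned in the list, here is a hedged Verilog sketch of a strobe generator; the parameter and signal names are illustrative. The point is that the output is asserted for exactly one clock cycle every PERIOD cycles, a guarantee that is hard to make in software on a microcontroller.

```verilog
// Illustrative sketch: assert 'strobe' for exactly one clock cycle
// every PERIOD cycles, giving cycle-accurate timing in hardware.
module strobe_gen #(
    parameter PERIOD = 100           // strobe once every PERIOD clock cycles
) (
    input  wire clk,
    input  wire rst,                 // synchronous reset
    output reg  strobe               // high for exactly one cycle per period
);
    reg [$clog2(PERIOD)-1:0] cnt;
    always @(posedge clk) begin
        if (rst) begin
            cnt    <= 0;
            strobe <= 1'b0;
        end else if (cnt == PERIOD-1) begin
            cnt    <= 0;
            strobe <= 1'b1;          // asserted on a known clock edge
        end else begin
            cnt    <= cnt + 1'b1;
            strobe <= 1'b0;
        end
    end
endmodule
```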
FPGAs offer designers several performance advantages as mentioned above. However, if these advantages are not relevant to your end product or requirements, you can consider options apart from FPGAs.
Also, embedded system design using FPGAs is more challenging than using an off-the-shelf microcontroller, because this approach involves many additional steps. It should also be noted that because FPGAs are used for tasks beyond the capacity of microcontrollers, they are not necessarily optimized for low power consumption. If a low-power system is to be developed, an ASIC can be created after the design has been proven on an FPGA.
FPGAs and Industry 4.0 Market Trends
The demand for extensive computation grew rapidly as IoT became popular, and this has increased the demand for and dependency on FPGAs. According to studies, the field-programmable gate array (FPGA) market is expected to witness a CAGR (compound annual growth rate) of 7.67% during the forecast period (2021–2026).
Smart Cities
In smart cities, one can expect a large number of high-definition images, videos, and data. This data needs to be collected and processed, which is why FPGA-enabled smart cities are on the rise. FPGAs are being heavily explored in domains like IoT security, interfacing with other IoT devices for image processing, and so on.
Bristol, UK, is on its way to becoming the world’s first programmable city, built on AMD-Xilinx FPGAs. Bristol Is Open (BIO), a joint venture between the University of Bristol, the city of Bristol, and many other collaborators, aims to achieve this by opening up a scalable software-defined network (SDN) operating system to researchers and companies.
Predictive Maintenance and Medical Diagnosis
Medical applications are vast, very application-specific, and demand extremely high performance. The problem is that finding a specially dedicated processor that fits each of these applications perfectly is difficult. This is why FPGAs are popular when it comes to med-tech. As medical technology grows, we will see more and more FPGAs in such machines.
During the past two years alone, a lot of research has been done in this field due to the Coronavirus pandemic. Many research papers were published and several new devices were invented and deployed. For example, Nuvation Engineering’s low-cost portable ventilator uses an Altera FPGA for real-time breath control.
Big Data, ML, and Edge Computing
Recent advances in data science, ML, and AI have promised computation at the edge, provided AI hardware catches up with the software. This is where FPGAs come into the picture: they fit the requirements of edge computing very well. But as FPGAs are used in such applications, an important concern needs to be addressed – FPGAs have always been a niche field, limited to experts.
However, things are changing quite rapidly. Last year, AMD-Xilinx announced the Kria portfolio of adaptive system-on-modules (SOMs), which are production-ready and provide a new way of bringing adaptive computing to AI and software developers. The Kria modules offer not only hardware but also application support around that hardware in the form of the Xilinx App Store.
“We took the solution stack that we developed a few years ago and embedded all of that, including the applications, into the Xilinx App Store,” mentions Mr Khona. “Now, customers can take an entire app, like a smart camera app, directly from the Xilinx App Store, and they can put them directly into our hardware without doing any FPGA design whatsoever.”
FPGAs and Moore’s Law
It is widely acknowledged that Moore’s Law is slowly coming to an end. In his paper titled “What’s Next? [The end of Moore’s law]”, R. Stanley Williams, a research scientist at HP Labs, writes, “The end of Moore’s law may be the best thing that has happened in computing since the beginning of Moore’s law. Confronting the end of an epoch should enable a new era of creativity…”
The very creativity mentioned in this paper is what the scientific community is working on. Here are a few of these solutions:
- Finding alternatives to the CMOS technology by considering different materials
- Trying new models of computation like neuromorphic computing, quantum computing, etc…
- Using hybrid architectures with specialized accelerators for performance/energy (This is where FPGAs are involved)
Ever since Microsoft and Intel started using FPGAs in their data centers, FPGAs have been viewed as a great way of accelerating data processing, and they are gaining popularity even in crypto mining! By integrating FPGAs as co-processors or accelerators, designers can offload much of the computing load from the CPU without degrading performance. FPGAs can be tailored to an application in terms of data-path widths and register lengths, and this allows designers to keep achieving performance improvements even as Moore’s Law slows down.
Are FPGAs a Solution to the Chip Shortage?
The global chip shortage of 2020-21 was a direct consequence of the coronavirus pandemic. Automobile companies anticipated a decrease in their sales and reduced their chip orders. This happened in a lot of sectors, and many manufacturers were forced to focus more on consumer-centric chips since people started upgrading their laptops and personal computers. As it turns out, car sales started increasing again by the end of 2020, but automobile companies did not have enough chips to produce units.
Many believe that this supply-chain problem can be solved by creatively using FPGAs and leveraging their reusability. Their reconfigurable architecture makes them an ideal candidate and their increasing usage in embedded applications will help in reducing the number of chips needed globally.
The author, Aaryaa Padhyegurjar, is an Industry 4.0 enthusiast with a keen interest in innovation and research.