It’s no secret that higher performance means higher thermal management requirements. Denser electronics packed into smaller spaces often leave designers searching for more creative ways to dissipate the increased heat, particularly with conduction-type cooling methods. OpenVPX enables extraordinary leaps in aggregate system bandwidth and processing speed that mandate new approaches to the resulting thermal challenges.
OpenVPX has introduced optical and RF signals to the backplane, removing these otherwise discrete connectors from the front of the cards. While the new backplane connections eliminate what would otherwise be a jumble of cables, the aggregate high-speed signals now traversing the backplane add heat to the system, exacerbating already difficult-to-manage temperature rises.
Some of the most complex cards are used in applications such as signals intelligence for communications and recording signals on the battlefield – including enemy communications – taking in audio inputs and triangulating the source of enemy fire.
Many high-performance applications require processor and FPGA (Field-Programmable Gate Array) bandwidth that drives up the thermal load inside the chassis, necessitating new thermal management strategies. One example is a recent aerospace application that required many RF inputs – 36 payload slots, each with 16 RF signals – along with several large radar arrays requiring vast amounts of RF I/O.
Tight Spaces Mean More Heat
Embedded sub-systems must sometimes be packaged to fit existing tight spaces in aircraft, ground vehicles, submarines, spacecraft and other rugged, compact environments, a constraint that has led to the need for optimized SWaP-C (size, weight and power-cooling*). While OpenVPX offers significant improvements in field-deployed system signal integrity, speed and capability, it has created new challenges in these space-constrained installations.
As higher performance systems are implemented, the choice between 3U VPX and 6U VPX becomes a matter of what functionality can be packaged on the smaller card vs. the larger. And as processors and FPGAs enable more capability, the 3U VPX form factor is favored for its reduced size and weight. This pushes the existing convection and conduction cooling techniques defined by the standard to their limits.
That concentration of power in a smaller board has heavily impacted chassis and backplane designs and complicated thermal management in systems using a 3U card, making heat dissipation a larger issue. However, new cooling options under the VITA 48 umbrella are being developed to accommodate the increased heat of these high-performance systems.
Beyond Traditional Convection and Conduction
Most current applications find conduction cooling, as defined by VITA 48.2, and its counterpart, convection cooling, sufficient. But the added complexity and heat generation of new boards and connectors quickly push systems beyond these defined limits.
As VPX has grown in popularity, the VITA standards committees have defined additional cooling methods under VITA 48 to ensure future thermal needs are adequately handled. Current iterations are:
- VITA 48.4, liquid flow-through, probably the most efficient, handling up to 450 watts per card
- VITA 48.5, air flow-through, which has the advantage of allowing air to be metered to specific cards
- VITA 48.7, air flow-by
- VITA 48.8, air flow-through cooling without sealing, for small form factor 3U and 6U VPX modules (ANSI ratified October 2017)
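As a rough illustration of how a system architect might sanity-check a card against the figure cited above for liquid flow-through, the sketch below flags card power budgets that exceed the roughly 450 watts per card mentioned for VITA 48.4. The helper name and the example card powers are illustrative assumptions, not values from any VITA specification.

```python
# Illustrative sketch only: the ~450 W figure comes from the discussion
# of VITA 48.4 liquid flow-through above; everything else is assumed.
LIQUID_FLOW_THROUGH_LIMIT_W = 450

def cards_over_budget(card_powers_w):
    """Return the per-card power values that exceed the cited ~450 W figure."""
    return [p for p in card_powers_w if p > LIQUID_FLOW_THROUGH_LIMIT_W]

# Hypothetical payload of three cards; only the 475 W card is flagged.
print(cards_over_budget([120, 300, 475]))  # -> [475]
```

In practice, a real thermal budget would be built from vendor datasheets and chassis-level analysis rather than a single per-card threshold.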
The environment in which boards are developed and tested typically differs from that of the final deployed unit: a lab chassis, for example, can usually rely on fan cooling alone, whereas a deployed unit might need conduction cooling. The cooling method for a deployed system should be chosen on the basis of the most practical design, taking into account the housing, the card heat sink and the chassis itself.
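For conduction-cooled designs of the kind described above, a first-order feasibility check is the standard steady-state relation T = T_ambient + P × θ, where θ is the total thermal resistance of the path from the component through the heat sink, wedge locks and chassis wall. The sketch below applies that relation; the numeric values (60 W card, 0.5 °C/W path, 55 °C ambient) are illustrative assumptions, not measured data.

```python
def case_temp_c(power_w, theta_c_per_w, ambient_c):
    """First-order steady-state estimate: T = T_ambient + P * theta.

    theta_c_per_w is the combined thermal resistance (deg C per watt)
    of the conduction path: component case -> heat sink -> wedge locks
    -> chassis wall. All example values here are assumptions.
    """
    return ambient_c + power_w * theta_c_per_w

# Hypothetical 60 W card with a 0.5 deg C/W path at 55 deg C ambient.
print(case_temp_c(60, 0.5, 55))  # -> 85.0
```

Even a back-of-envelope estimate like this shows why denser 3U cards strain conduction cooling: doubling card power doubles the temperature rise over ambient for the same thermal path.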
OpenVPX into the Future
OpenVPX has allowed new definitions for VPX backplanes and systems, giving system architects and end users a far wider range of choices in critical high-speed applications and paving the way for more open architecture and multi-vendor interoperability in the future. It fosters technology growth over time without requiring changes to system architecture, using adaptations within the standards themselves to enable new capabilities and build high-performance embedded computing (HPEC) hardware.
System density is only increasing, and end users are still searching for ways to fit smaller boxes into more compact spaces so they can put even more electronics into their applications. Which, of course, means more heat.
* For purposes of this discussion, the “C” in SWaP-C refers to “Cooling,” whereas some definitions take the “C” to mean “Cost.”