It’s no secret that higher performance means higher thermal management requirements. Packing denser electronics into smaller spaces often leaves designers searching for more creative ways to dissipate the added heat, particularly with conduction-type cooling methods. OpenVPX enables extraordinary leaps in aggregate system bandwidth and processing speed, and those leaps mandate new methods to meet the resulting thermal challenges.
OpenVPX has introduced optical and RF signals to the backplane, removing these otherwise discrete connectors from the front of the cards. While the new backplane connections eliminate what would otherwise be a jumble of cables, the aggregate high-speed signals that now traverse the backplane generate substantial heat, exacerbating already difficult-to-manage temperature rises.
Some of the most complex cards are used in applications such as signals intelligence for communications and for recording signals on the battlefield – including enemy communications – taking in audio inputs and triangulating the source of enemy fire.
Many high-performance applications require processor and FPGA (field-programmable gate array) bandwidth that drives up the thermal load inside the chassis, necessitating new thermal management strategies. One example is a recent aerospace application with extensive RF requirements – 36 payload slots, each carrying 16 RF signals, plus several large radar arrays that demand vast amounts of RF I/O.
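To put that RF count in perspective, here is a quick back-of-the-envelope tally in Python. The slot and signal counts come from the example above; the per-signal dissipation figure is purely a placeholder assumption for illustration.

    # Back-of-the-envelope RF I/O tally for the aerospace example above.
    PAYLOAD_SLOTS = 36               # payload slots in the chassis (from the example)
    RF_SIGNALS_PER_SLOT = 16         # RF signals routed to each slot (from the example)
    ASSUMED_WATTS_PER_SIGNAL = 0.25  # hypothetical dissipation per RF path, in watts

    total_rf_signals = PAYLOAD_SLOTS * RF_SIGNALS_PER_SLOT
    rf_thermal_load = total_rf_signals * ASSUMED_WATTS_PER_SIGNAL

    print(f"Total RF signals across the backplane: {total_rf_signals}")  # 576
    print(f"Illustrative RF-path dissipation: {rf_thermal_load:.0f} W")  # 144 W

Even before any processor or FPGA power is counted, more than 500 RF paths converge on the backplane, so every watt per path matters.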
Embedded sub-systems must sometimes be packaged to fit existing tight spaces in aircraft, ground vehicles, submarines, spacecraft and other rugged, compact environments, which has driven the need for optimized SWaP-C (size, weight and power-cooling*). While OpenVPX offers significant improvements in signal integrity, speed and capability for field-deployed systems, it has created new challenges in these space-constrained installations.
As higher-performance systems are implemented, the choice between 3U VPX and 6U VPX becomes a matter of what functionality can be packaged on the smaller card versus the larger one. And as processors and FPGAs pack in more capability, the 3U VPX form factor is increasingly favored for its reduced size and weight. This pushes the convection and conduction cooling techniques defined by the standard to their limits.
That concentration of power on a smaller board has heavily impacted chassis and backplane designs and complicated thermal management in systems using 3U cards, making heat dissipation a larger issue. However, new cooling options under the VITA 48 umbrella are working to accommodate the increased heat in these high-performance systems.
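The standard Eurocard footprints show why the 3U card concentrates heat: a 3U VPX card measures 100 mm x 160 mm, while a 6U card measures 233.35 mm x 160 mm, so the same payload power lands on well under half the area. The sketch below works through the numbers; the 150 W card power budget is an illustrative assumption, not a figure from any standard.

    # Rough power-density comparison of 3U vs. 6U VPX cards, using the
    # standard Eurocard footprints (3U: 100 mm x 160 mm; 6U: 233.35 mm x 160 mm).
    CARD_AREAS_CM2 = {
        "3U": 10.0 * 16.0,    # ~160 cm^2
        "6U": 23.335 * 16.0,  # ~373 cm^2
    }
    PAYLOAD_POWER_W = 150.0   # hypothetical per-card power budget

    for form_factor, area_cm2 in CARD_AREAS_CM2.items():
        density = PAYLOAD_POWER_W / area_cm2
        print(f"{form_factor}: {density:.2f} W/cm^2 over {area_cm2:.0f} cm^2")

Under this assumption the 3U card runs at roughly 0.94 W/cm^2 versus about 0.40 W/cm^2 for the 6U card – more than double the power density for the same payload.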
Most current applications find conduction cooling, as defined by VITA 48.2, and its companion, convection cooling, sufficient. But the added complexity and heat generation of new boards and connectors quickly push system cooling beyond these defined limits.
As VPX has grown in popularity, the VITA standards committees have defined additional cooling methods under VITA 48 to ensure future thermal needs are adequately handled. Current iterations include:
- VITA 48.1 – air (convection) cooling
- VITA 48.2 – conduction cooling
- VITA 48.4 – liquid flow-through (LFT) cooling
- VITA 48.5 – air flow-through (AFT) cooling
- VITA 48.7 – air flow-by (AFB) cooling
- VITA 48.8 – air flow-through cooling without wedge locks
The environment in which boards are developed and tested is typically different from that of the final deployed unit; a lab chassis, for example, can usually rely on fan cooling alone, whereas a deployed unit might need conduction cooling. The proper cooling method for a deployed system should be based on the most practical design, taking into account the housing, the card heat sink and the chassis itself.
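As a thought experiment, that selection logic can be captured in a simple heuristic. The sketch below is illustrative only: the wattage thresholds are rough assumptions for discussion, not values from VITA 48 or any other standard.

    # Illustrative (not normative) heuristic for matching a VITA 48 cooling
    # approach to a slot's power and deployment environment. The wattage
    # thresholds below are rough assumptions, not values from any standard.
    def suggest_cooling(slot_power_w: float, deployed: bool) -> str:
        """Return a plausible cooling approach for a single VPX slot."""
        if not deployed:
            return "lab chassis: fan (forced-air) cooling is usually enough"
        if slot_power_w <= 100:
            return "VITA 48.2 conduction cooling"
        if slot_power_w <= 200:
            return "VITA 48.5/48.8 air flow-through or VITA 48.7 air flow-by"
        return "VITA 48.4 liquid flow-through cooling"

    print(suggest_cooling(75, deployed=False))   # lab chassis: fan cooling
    print(suggest_cooling(150, deployed=True))   # air flow-through / flow-by
    print(suggest_cooling(250, deployed=True))   # liquid flow-through

A real selection would also weigh the housing, card heat sink and chassis, as noted above, but the basic trade of power level against environment holds.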
(See the blog post on VITA 48.4 and alternate cooling methods.)
OpenVPX has allowed new definitions for VPX backplanes and systems, giving system architects and end users a far wider range of choices in critical high-speed applications and paving the way for more open architectures and multi-vendor interoperability in the future. It fosters technology growth over time without requiring changes to system architecture, using adaptations within the standards themselves to enable new capabilities and build HPEC (high-performance embedded computing) hardware.
System density is only increasing, and end users are still searching for ways to fit smaller boxes into more compact spaces so they can put even more electronics into their applications. Which, of course, means more heat.
* For purposes of this discussion, the “C” in SWaP-C refers to “Cooling,” whereas in some definitions the “C” means “Cost.”
Looking back, we can now see a shift in how development platforms are designed and how they are used by our integrator customer base. That shift is making the development stages of a deployable system project easier and less expensive, putting solutions into the hands of the warfighter faster than ever before. Development hardware can also be shared between projects or inherited by subsequent ones, saving not only lab budget but also the time to order and receive all-new hardware for each new development project.
In the past few years, several end-of-life (EOL) announcements in the embedded computing market have brought both angst and opportunity. Moving away from a tried-and-true solution always brings with it the need to review not only the mechanical elements of an embedded system but its integration and networking elements as well. And when that review is forced on a designer, as in the case of an EOL announcement, it may mean forced choices among less-than-optimal alternatives. Or it could be something different altogether.