Factors of Increased Heat Generation in OpenVPX Systems

Publish Date:
May 10, 2019

It’s no secret that higher performance means higher thermal management requirements. Denser electronics packed into smaller spaces often leave designers searching for more creative ways to dissipate the added heat, particularly with conduction-type cooling methods. OpenVPX enables extraordinary leaps in aggregate system bandwidth and processing speeds, which mandate new methods to meet the resulting thermal challenges.

OpenVPX has introduced optical and RF signals to the backplane, removing these otherwise discrete connectors from the front of the cards. While the new backplane connections eliminate what would otherwise be a jumble of cables, the aggregate high-speed signals that now traverse the backplane rapidly heat up the system, exacerbating the already difficult-to-manage temperature increases.

Some of the most complex cards are used in applications such as signals intelligence – recording battlefield signals, including enemy communications, taking in audio inputs, and triangulating the source of enemy fire.

Many high-performance applications require processor and FPGA (field-programmable gate array) bandwidth that drives up the thermal load inside the chassis, necessitating new thermal management strategies. One example is a recent aerospace application that required extensive RF I/O: 36 payload slots, each with 16 RF signals, alongside large radar arrays demanding vast amounts of RF I/O.
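To give a rough sense of the scale involved, the aggregate RF I/O in a system like the aerospace example above can be tallied with simple arithmetic. The slot and signal counts come from the text; the per-signal dissipation figure below is a purely hypothetical placeholder, not a measured or specified value.

```python
# Back-of-envelope I/O and thermal tally for the aerospace example above.
# Slot and signal counts are from the text; the per-signal heat figure
# is a hypothetical placeholder, not a measured value.
payload_slots = 36
rf_signals_per_slot = 16

total_rf_signals = payload_slots * rf_signals_per_slot
print(total_rf_signals)  # 576 RF signals routed through the backplane

# Hypothetical: if each high-speed signal path dissipated ~0.25 W in
# connectors and transceivers, backplane I/O alone would contribute:
watts_per_signal = 0.25
io_heat_watts = total_rf_signals * watts_per_signal
print(io_heat_watts)  # 144.0 W of heat before any processor load
```

Even under conservative per-signal assumptions, the I/O contribution alone is significant before any processor or FPGA load is counted.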

Tight Spaces Mean More Heat

Embedded sub-systems must sometimes be packaged to fit existing tight spaces in aircraft, ground vehicles, submarines, spacecraft and other rugged, compact environments, which has led to the need for optimized SWaP-C (size, weight and power-cooling*). While OpenVPX offers significant improvements in field-deployed system signal integrity, speed and capability, it has created new challenges in these space-constrained installations.

As higher performance systems are implemented, the choice between 3U VPX and 6U VPX becomes a matter of what functionality can be packaged on the smaller card vs. the larger. And as processors and FPGAs enable more capability, the 3U VPX form factor is favored for its reduced size and weight. This pushes the existing convection and conduction cooling techniques defined by the standard to their limits.

That concentration of power in a smaller board has heavily impacted chassis and backplane designs and complicated thermal management in systems using a 3U card, making heat dissipation a larger issue. However, new cooling options under the VITA 48 umbrella are working to accommodate the increased heat in these high performance systems.

Beyond Traditional Convection and Conduction

Most current applications find conduction cooling, as defined by VITA 48.2, and its counterpart, convection cooling, sufficient. But the added complexity and heat generation of new boards and connectors quickly push systems beyond these defined limits.

As VPX has grown in popularity, the VITA standards committees have defined additional cooling methods under the VITA 48 umbrella – including liquid flow-through (VITA 48.4) and air flow-through (VITA 48.5) – to ensure future thermal needs are adequately handled.

The environment in which boards are developed and tested is typically different from that of the final deployed unit: a lab chassis, for example, can usually rely on fan cooling alone, whereas a deployed unit might need conduction cooling. The proper cooling method for a deployed system should be based on the most practical design, taking into account the housing, the card heat sink and the chassis itself.
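The trade-off above can be sketched as a simple thermal-budget check. All capacity figures in this sketch are illustrative placeholders, not VITA-specified limits; real capacities depend on the chassis, card heat sinks, and ambient conditions.

```python
# Hypothetical thermal-budget check for choosing a cooling approach.
# Capacity numbers are illustrative placeholders, not VITA-specified limits.
COOLING_CAPACITY_W = {
    "convection (fan)": 75,      # e.g. a lab chassis with forced air
    "conduction": 100,           # card edge to chassis sidewall
    "liquid flow-through": 200,  # for the densest payloads
}

def viable_methods(card_power_w: float) -> list[str]:
    """Return the cooling methods whose (hypothetical) capacity covers the load."""
    return [m for m, cap in COOLING_CAPACITY_W.items() if cap >= card_power_w]

# Under these example figures, a dense 3U payload card drawing 120 W
# would rule out both fan and conduction cooling:
print(viable_methods(120))  # ['liquid flow-through']
```

The same check run against a 60 W lab card would return all three methods, reflecting the point above that a lab chassis can often get by on fan cooling alone.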

See blog post on VITA 48.4 and alternate cooling methods

OpenVPX into the Future

OpenVPX has allowed new definitions for VPX backplanes and systems, giving system architects and end users a far wider range of choices in critical high-speed applications, paving the way for more open architecture and multi-vendor interoperability in the future. It fosters technology growth over time, without requiring changes to system architecture, using adaptations within the standards themselves to enable new capabilities and build high-performance embedded computing (HPEC) hardware.

System density is only increasing, and end users are still searching for ways to fit smaller boxes into more compact spaces, so they can put even more electronics into their applications. Which, of course, means more heat.

* For purposes of this discussion, the “C” in SWaP-C refers to “Cooling,” whereas some definitions take the “C” to mean “Cost.”

