“AI technologies are the most powerful tools in generations for expanding knowledge, increasing prosperity, and enriching the human experience.” — National Security Commission on Artificial Intelligence; 2021 Final Report

In a world where the number of data inputs and video feeds continues to grow, embedded system designers need the tools and means to properly manage these inputs and make the data actionable. For mission-critical and safety-related military and defense operations, this task is even more important.
Implementing AI-based solutions in rugged embedded computing isn’t the only trend affecting system development. The mandate across the U.S. Department of Defense (DoD) for systems and electronics to be interoperable across all platforms and manufacturers is also driving change within the industry.
Fortunately, the SOSA™ Technical Standard, one of the open standards initiatives supported by the DoD’s Modular Open Systems Approach (MOSA), enables the level of data computation and processing that AI requirements demand. A common architecture lets designers quickly develop the advanced processing capabilities that AI-based computation requires.
Supporting AI Infrastructure Through SOSA
AI applications make use of single board computers (SBCs), GPGPUs, and FPGA accelerators within an embedded system. In SOSA, the boards that implement them are called plug-in cards (PICs).
It’s the actual application — ISR, EW, etc. — that drives the algorithms and data sets specific to the use case, which in turn drives the system topology.
Some system implementations may require more than one GPGPU or accelerator. Because GPGPUs and accelerators rely on the Expansion Plane, a system designed to align with SOSA must account for the connections needed to support those data transfers.
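The topology concern above can be illustrated with a small sketch. The model below is hypothetical and simplified for illustration; the class names, profile labels, and plane names are assumptions, not terminology taken from the SOSA Technical Standard. It simply checks that every accelerator PIC in a notional chassis has an Expansion Plane link for bulk data movement.

```python
from dataclasses import dataclass, field

# Hypothetical model of a SOSA-aligned chassis, for illustration only.
# Profile and plane labels are simplified, not drawn from the standard.

@dataclass
class PlugInCard:
    name: str
    profile: str                               # e.g. "SBC" or "accelerator"
    planes: set = field(default_factory=set)   # planes the slot wires up

def expansion_plane_ok(cards):
    """Every accelerator PIC needs an Expansion Plane link for bulk transfers."""
    return all("expansion" in c.planes
               for c in cards if c.profile == "accelerator")

chassis = [
    PlugInCard("host SBC", "SBC", {"control", "data", "expansion"}),
    PlugInCard("GPGPU 1", "accelerator", {"data", "expansion"}),
    PlugInCard("GPGPU 2", "accelerator", {"data", "expansion"}),
]

print(expansion_plane_ok(chassis))  # True: both accelerators are connected
```

A check like this is trivial for two cards, but the same idea scales to validating a multi-accelerator configuration before hardware is ordered.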
Effective System Development
When building an embedded system that requires AI-level data processing as well as adherence to the SOSA Technical Standard, observing a few design principles will help you meet all of your system requirements.
Simplicity of SOSA Proves Performance
As part of ensuring interoperability across different systems and platforms, SOSA restricts the number of acceptable profiles that can be applied in system development. This limited set of design options benefits compute-intensive systems: profiles are reused, reducing the need for complex integration efforts.
The goal of the standard is to design a non-proprietary open systems architecture to lower system development costs as well as make system reconfigurability and future system upgrades easier and faster. A key part is ensuring conformance for sensor components and SOSA modules in alignment with the Technical Standard.
AI is applied in SOSA-aligned embedded systems to process large amounts of sensor data (ISR video feeds, remote sensing, and the like) and turn it into actionable insights in real time. SOSA’s modular architecture supports the high-performance processors, GPGPUs, FPGAs, and accelerators needed to run AI workloads efficiently in rugged military applications.
The SOSA Technical Standard provides a common open architecture that enables interoperability across manufacturers and platforms. This simplifies integration of AI accelerators and compute-intensive boards, reducing development complexity while supporting technology reuse and modular scalability.
AI in SOSA systems typically uses single board computers (SBCs), general purpose GPUs (GPGPUs), and FPGA accelerators mounted on plug-in cards (PICs) that conform to SOSA slot profiles. These modules handle compute-intensive tasks and high-speed data transfer across the system fabric.
SOSA limits the acceptable board profiles and enforces open interfaces, which means boards and AI accelerators can be interchanged or upgraded without major redesigns. This modularity shortens development cycles and extends lifecycle support.
AI use cases include intelligence, surveillance, and reconnaissance (ISR), electronic warfare (EW), sensor fusion, target detection and tracking, and autonomous decision-making at the tactical edge. AI enables fast pattern recognition and complex analytics where humans or traditional processors can’t keep up.
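The core pattern described above, turning a raw sensor stream into actionable detections, can be sketched minimally. This is an illustrative toy, not any fielded algorithm: the function name, window size, and threshold are all assumptions, and a real ISR or EW workload would run far richer models on GPGPU or FPGA hardware.

```python
from collections import deque

# Toy sketch of edge detection logic on a sensor stream.
# Window and threshold values are illustrative only.

def detect_events(samples, window=5, threshold=2.0):
    """Flag sample indices that exceed the recent moving average by `threshold`x."""
    recent = deque(maxlen=window)
    events = []
    for i, s in enumerate(samples):
        if len(recent) == window and s > (sum(recent) / window) * threshold:
            events.append(i)
        recent.append(s)
    return events

feed = [1.0, 1.1, 0.9, 1.0, 1.2, 5.0, 1.0, 1.1]
print(detect_events(feed))  # [5]: the spike stands out against the baseline
```

The point is the shape of the pipeline, streaming data in, maintaining context, and emitting only the actionable events, which is exactly the workload the processors and accelerators discussed above are sized for.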

Looking back, we can now see a shift in how development platforms are designed and how they are used by our integrator customer base. That shift is making it easier and less expensive to perform the development stages of a deployable system project and to put solutions into the hands of the warfighter faster than ever before. Development hardware can also be shared between projects, or inherited by subsequent projects. This saves not only lab budget but also the time to order and receive all-new hardware for a new development project.

In the past few years, several end-of-life (EOL) announcements in the embedded computing market have created both angst and opportunity. Making the shift away from a tried-and-true solution always brings the need to review not only the mechanical elements of an embedded system but the integration and networking elements as well. And when that review is forced upon a designer, as in the case of an EOL announcement, it may mean forced choices of less-than-optimal alternatives. Or it could be something different altogether.