Motion Recruitment | Jobspring | Workbridge

Multimodal AI System Engineer / DoD

Alexandria, Virginia

100% Remote

Direct Hire

$180k - $220k

A cutting-edge technology organization focused on transforming maritime domain awareness is looking for an AI Systems Engineer - Multi-Modal to join its highly collaborative engineering team. This role centers on developing machine learning and sensor fusion solutions that deliver real-time insights from diverse data sources (e.g., vision, RF, acoustic) across distributed platforms. You'll help shape the future of real-world intelligence systems used to monitor remote environments and support critical decision-making.

This is a full-time opportunity with flexibility around remote work and offers the chance to contribute to mission-driven projects where initiative and deep technical expertise are highly valued.

Required Skills & Experience
  • 7+ years of professional experience building and deploying advanced machine learning systems, especially those involving multiple sensor modalities (vision, RF, acoustics).
  • Master’s or PhD in Machine Learning, Computer Vision, Signal Processing, Robotics, Computer Science, or a closely related discipline.
  • Expertise with Python and deep learning frameworks such as PyTorch or TensorFlow.
  • Strong knowledge of sensor fusion principles, data alignment, and feature extraction across heterogeneous data sources.
  • Demonstrated ability to transition models from research prototypes to reliable, production-grade execution.
  • Effective communicator who can partner with engineering, product, and domain experts across disciplines.
Desired Skills & Experience
  • Prior work on embedded or edge-focused systems with real-time performance constraints.
  • Familiarity with maritime, aerospace, robotics, or other sensor-rich deployment environments.
  • Comfortable navigating ambiguity and driving model design from first principles through validation and optimization.
  • Experience with best practices in code quality, experiment tracking, and reproducible workflows.
What You Will Be Doing

Tech Breakdown:
  • 55% Design, develop, and validate multi-modal machine learning components
  • 45% Pipeline architecture, optimization, and cross-disciplinary collaboration
Daily Responsibilities:
  • Partner with system engineers and domain specialists to translate raw sensor streams into robust model inputs.
  • Architect and implement multi-modal data pipelines including alignment, augmentation, and preprocessing.
  • Prototype and scale new machine learning approaches that combine diverse inputs (e.g., video, radio frequency, acoustics).
  • Optimize models and inference workflows for deployment on resource-constrained and remote-connected systems.
  • Document design decisions, training strategies, and evaluation results to support repeatability and cross-team understanding.
The Offer

You will receive the following benefits:
  • Competitive base salary with budget for education and professional development.
  • Flexible schedule with remote-first collaboration.
  • Opportunity to make a tangible impact advancing real-world perception systems.
  • Work alongside a passionate, mission-oriented team that values innovation and autonomy.
#LI-OP

Posted by: Olivia Policastro