
Introducing Lite Models: Embedded-Optimized Scene Perception for Autoware


Alternative Embedded Innovation (AEI), as part of the Autoware Foundation ecosystem, is proud to present Lite Models, a new family of open-source AI scene-perception models engineered and optimized specifically for embedded deployment in automotive edge-AI applications for ADAS and self-driving.


Architecture Exploration

This effort focused on optimizing three existing open-source models within the Autoware Vision Pilot stack (autowarefoundation/autoware_vision_pilot), a fully open-source ADAS and autonomous driving system that aims to deliver production-ready, safety-certifiable SAE L2 ADAS functionality for automotive OEMs and Tier-1 suppliers. The models under consideration were EgoLanes (lane segmentation), SceneSeg (scene segmentation), and Scene3D (depth estimation). We optimized all three models while preserving their original functionality, enabling deployment on constrained embedded platforms where multiple perception modules must run concurrently.

At AEI, we conducted a structured architecture exploration focused on:

  • Reducing computational complexity
  • Improving hardware efficiency
  • Preserving segmentation quality and high resolution
  • Maintaining compatibility with embedded inference frameworks

The primary architectural change replaces the original decoder with a significantly lighter design (sketched in code after the list) built around:

  • An ASPP (Atrous Spatial Pyramid Pooling) block inspired by DeepLabV3+
  • A custom lightweight segmentation head
  • An EfficientNet backbone for fast and robust feature extraction
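
The PyTorch sketch below shows how these three components might fit together. It is a minimal illustration rather than the published architecture: the channel widths, dilation rates, backbone variant (efficientnet_b0), and class count are all assumptions. A Scene3D Lite-style depth model would follow the same layout but end in a single-channel regression head.

```python
# Minimal sketch of a Lite-style model: EfficientNet backbone -> ASPP ->
# lightweight segmentation head. All sizes here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import efficientnet_b0

class ASPP(nn.Module):
    """Atrous Spatial Pyramid Pooling block, in the spirit of DeepLabV3+."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                # 1x1 conv for rate 1, dilated 3x3 convs for larger rates
                nn.Conv2d(in_ch, out_ch, 3 if r > 1 else 1,
                          padding=r if r > 1 else 0, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
            for r in rates
        ])
        self.project = nn.Sequential(
            nn.Conv2d(out_ch * len(rates), out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        # concatenate multi-scale context, then project back down
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

class LiteSegModel(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.backbone = efficientnet_b0(weights=None).features  # feature extractor
        self.aspp = ASPP(1280, 128)              # 1280 = B0 feature channels
        self.head = nn.Conv2d(128, num_classes, 1)  # lightweight 1x1 conv head

    def forward(self, x):
        h, w = x.shape[-2:]
        logits = self.head(self.aspp(self.backbone(x)))
        # upsample back to input resolution to preserve fine-grained detail
        return F.interpolate(logits, size=(h, w), mode="bilinear",
                             align_corners=False)

# quick shape check
model = LiteSegModel(num_classes=4).eval()
with torch.no_grad():
    out = model(torch.randn(1, 3, 320, 640))
print(out.shape)  # torch.Size([1, 4, 320, 640])
```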

This redesign reduces decoder complexity while preserving multi-scale context aggregation, which is critical for accurate estimation of fine-grained features. The resulting models are called:

  • EgoLanes Lite, an efficient lane segmentation and classification model
  • SceneSeg Lite, a lightweight semantic segmentation model
  • Scene3D Lite, a compact monocular depth estimation model

Across these models, we achieved more than 20× compute reduction while preserving practical task-level robustness. As part of the Lite architecture redesign, all three perception models were optimized for INT8 inference using TensorRT on embedded hardware.
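
As an illustration of that INT8 path, the following sketch builds a TensorRT engine from an exported ONNX model via the TensorRT Python API. The file names are placeholders, and a real deployment would supply a calibrator (e.g. an implementation of trt.IInt8EntropyCalibrator2) fed with representative images, which is omitted here.

```python
# Hedged sketch: building an INT8 TensorRT engine from an exported ONNX model.
import tensorrt as trt

LOGGER = trt.Logger(trt.Logger.WARNING)

def build_int8_engine(onnx_path: str, engine_path: str, calibrator=None):
    builder = trt.Builder(LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))

    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.INT8)
    if calibrator is not None:
        # without a calibrator, INT8 needs explicit per-tensor dynamic ranges
        config.int8_calibrator = calibrator

    engine_bytes = builder.build_serialized_network(network, config)
    with open(engine_path, "wb") as f:
        f.write(engine_bytes)

# e.g. build_int8_engine("sceneseg_lite.onnx", "sceneseg_lite.engine")
```

The serialized engine file can then be loaded by the TensorRT runtime on the target device.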

All Lite variants share a unified design strategy that ensures consistency across the perception stack and simplifies deployment on heterogeneous hardware.


Performance

Lite Models were benchmarked on embedded hardware (NVIDIA Jetson Orin Nano) using TensorRT-optimized inference backends.

Compared to the original perception models, Lite Models achieve:

  • Substantial reduction in computational cost
  • Significantly higher real-time throughput
  • Efficient INT8 acceleration compatibility

This enables the Lite models to operate comfortably within embedded perception pipelines where segmentation, depth estimation, and object detection must coexist under strict latency budgets.

Most importantly, Lite Models preserve the accuracy of the original, larger perception models on validation benchmarks.

Below is a consolidated comparison showing operation reduction and quantized inference speed improvements.

Compute Reduction

  • SceneSeg: 224 GOPs → SceneSeg Lite 7.82 GOPs (~28× reduction)
  • Scene3D: 224 GOPs → Scene3D Lite 7.78 GOPs (~28× reduction)
  • EgoLanes: 119 GOPs → EgoLanes Lite 6.10 GOPs (~20× reduction)

INT8 Embedded Throughput (Jetson Orin Nano)

  Model      Baseline FPS (FP32 TensorRT)   Lite FPS (INT8 TensorRT)   Speed-Up
  SceneSeg   10.2 FPS                       87.6 FPS                   ~8.6×
  Scene3D    10.0 FPS                       91.4 FPS                   ~9.1×
  EgoLanes   20.5 FPS                       104.3 FPS                  ~5.1×
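
As a rough check of the latency-budget claim above: if the three Lite models ran sequentially on a single accelerator at the INT8 throughputs in the table (an illustrative assumption; real pipelines may overlap execution), the combined per-frame cost would be about 1000/87.6 + 1000/91.4 + 1000/104.3 ≈ 11.4 + 10.9 + 9.6 ≈ 32 ms, meaning the full Lite perception stack would still fit within a ~30 FPS camera budget.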

Alternative Embedded Innovation — Deployment-Driven Embedded AI

Alternative Embedded Innovation specializes in:

  • Embedded machine learning for automotive and robotics systems
  • Hardware-aware model architecture design
  • Quantization and acceleration strategies, including structural and data-type optimization of AI models
  • Model-based development 
  • Safe and secure development of complex multidisciplinary systems in compliance with ISO 26262 and ISO/PAS 8800
  • Integration of perception modules into safety-oriented architectures on platforms beyond NVIDIA, including Infineon, AMD, Renesas, Texas Instruments, and others

By contributing optimized perception modules such as EgoLanes Lite to the Autoware ecosystem, we support the shared mission of the Autoware Foundation:

To advance open, scalable, and deployment-ready autonomous driving software through industrial collaboration and transparent validation. EgoLanes Lite, SceneSeg Lite, and Scene3D Lite represent a concrete step toward that objective: engineering perception models not only for accuracy, but for real automotive deployment on compute-constrained platforms.