
Blinded by the White: The Need for Multi-Modal Sensor Fusion in Autonomous Systems

Everyone admires LiDAR’s detailed 3D point clouds for autonomous vehicles and robotics. These clouds create vivid maps of the environment, helping machines "see" in three dimensions. Yet, when dense fog or smoke fills the air, LiDAR’s vision fades into uselessness. In contrast, radar systems continue to perform reliably in these challenging conditions. Why does this happen? The answer lies in the fundamental differences between the wavelengths LiDAR and radar use, and how these wavelengths interact with particles in the air.


Understanding these differences is crucial for building autonomous systems that can operate safely in all weather conditions. This post explains why LiDAR struggles where radar excels, using a simple explanation of attenuation and a real-world example: a train moving through fog. We will also explore why combining multiple sensor types is the best path forward, and how Xelec’s expertise supports this approach.



How LiDAR and Radar Use Different Wavelengths


LiDAR (Light Detection and Ranging) uses near-infrared light, typically around 905 nanometers (nm). This wavelength is very short, close to visible light, which allows LiDAR to create highly detailed 3D images by measuring the time it takes for light pulses to bounce back from objects.
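To make the time-of-flight idea concrete, here is a minimal sketch (illustrative only, not tied to any particular LiDAR product) of how a round-trip pulse delay converts to a distance:

```python
# Minimal time-of-flight ranging sketch (illustrative only).
# A LiDAR emits a light pulse, times the echo, and converts that delay to distance.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def range_from_round_trip(delay_s: float) -> float:
    """Distance to the reflecting object, given the round-trip pulse delay."""
    return SPEED_OF_LIGHT * delay_s / 2.0  # divide by 2: the pulse travels out and back

# Example: an echo arriving 667 nanoseconds after emission
print(f"{range_from_round_trip(667e-9):.1f} m")  # ~100.0 m
```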


Radar (Radio Detection and Ranging), on the other hand, uses much longer wavelengths, often around 4 millimeters (mm) in the millimeter-wave radar band. These radio waves have lower frequencies and longer wavelengths compared to LiDAR’s light pulses.
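To see where those numbers come from, wavelength is simply the speed of light divided by frequency. The sketch below assumes a 77 GHz radar (a common automotive millimeter-wave band) as a representative figure alongside the 905 nm LiDAR mentioned above:

```python
# Wavelength-frequency relationship: wavelength = c / frequency.
# The 77 GHz figure is an assumed representative automotive radar band.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

radar_frequency_hz = 77e9    # 77 GHz millimetre-wave radar (assumed)
lidar_wavelength_m = 905e-9  # 905 nm near-infrared LiDAR (from the text)

radar_wavelength_m = SPEED_OF_LIGHT / radar_frequency_hz
lidar_frequency_hz = SPEED_OF_LIGHT / lidar_wavelength_m

print(f"Radar wavelength: {radar_wavelength_m * 1e3:.1f} mm")    # ~3.9 mm
print(f"LiDAR frequency:  {lidar_frequency_hz / 1e12:.0f} THz")  # ~331 THz
print(f"Wavelength ratio: {radar_wavelength_m / lidar_wavelength_m:,.0f}x")  # ~4,300x
```

That roughly 4,000-fold difference in wavelength is what drives the very different behavior in fog described below.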


Why Wavelength Matters


The difference in wavelength affects how each sensor interacts with particles like fog droplets, smoke, or dust:


  • LiDAR’s short wavelength is easily scattered or absorbed by tiny water droplets or smoke particles. This scattering reduces the amount of light that returns to the sensor, a phenomenon called attenuation.

  • Radar’s longer wavelength passes through these particles with much less scattering and absorption, so the radar signal remains strong even in dense fog or smoke.



What Is Attenuation? A Simple Explanation


Attenuation means the weakening of a signal as it travels through a medium. Imagine shining a flashlight through thick fog: the light dims because tiny water droplets scatter and absorb it, preventing it from traveling very far. This is attenuation in action.


For LiDAR, attenuation happens when the near-infrared light pulses hit fog or smoke. The light scatters in many directions, and only a small fraction returns to the sensor. This reduces the sensor’s effective range and accuracy.


Radar waves, with their longer wavelengths, behave more like radio signals that can pass through fog with less interference. This means radar can detect objects even when visibility is near zero for LiDAR.
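Attenuation is commonly modelled as an exponential decay of signal power with distance (a Beer-Lambert style relationship). The sketch below uses made-up extinction coefficients purely to illustrate the trend; real values depend on fog density, droplet size, and the specific sensor:

```python
import math

# Beer-Lambert style attenuation: power falls off exponentially with distance.
#   received_fraction = exp(-extinction_coefficient * distance)
# The coefficients below are illustrative placeholders, NOT measured values;
# fog attenuates near-infrared light far more strongly than millimetre waves.

EXTINCTION_LIDAR_FOG = 0.05    # per metre, assumed for 905 nm light in dense fog
EXTINCTION_RADAR_FOG = 0.0005  # per metre, assumed for 77 GHz waves in the same fog

def received_fraction(extinction_per_m: float, distance_m: float) -> float:
    """Fraction of the transmitted power surviving a one-way trip through fog."""
    return math.exp(-extinction_per_m * distance_m)

for distance in (25, 50, 100):
    lidar = received_fraction(EXTINCTION_LIDAR_FOG, distance)
    radar = received_fraction(EXTINCTION_RADAR_FOG, distance)
    print(f"{distance:>4} m: LiDAR {lidar:6.1%}  vs  radar {radar:6.1%}")
```

With these illustrative numbers, the LiDAR return collapses to under 1% of its transmitted power at 100 m while the radar return is barely affected, which is exactly the gap the train example below makes tangible.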



The Train in Fog: A Case Study


Picture a train moving through a thick fog bank. The train’s autonomous system relies heavily on LiDAR to detect obstacles and navigate safely. As the fog thickens, the LiDAR’s 3D point cloud becomes sparse and unreliable. The system might miss obstacles or misjudge distances, creating a safety risk.


In contrast, radar sensors on the train continue to detect objects accurately. The radar waves penetrate the fog, providing consistent data about the train’s surroundings.


This example shows why relying on LiDAR alone is risky. Autonomous systems must combine data from multiple sensors (cameras, LiDAR, and radar) to build a complete and reliable picture of the environment. This approach is called multi-modal sensor fusion.



[Image: Eye-level view of a train navigating through dense fog, with a radar sensor visible on the front]


Why Multi-Modal Sensor Fusion Matters


No single sensor type can handle every environmental challenge perfectly. Cameras provide color and texture but struggle in low light. LiDAR offers detailed 3D maps but fails in fog or smoke. Radar penetrates fog but provides less spatial detail.


Combining these sensors allows autonomous systems to:


  • Cross-check data to reduce errors

  • Fill gaps when one sensor’s data is weak or missing

  • Adapt to changing conditions like rain, fog, or dust


This fusion improves safety and reliability, making autonomous vehicles and robots more capable in real-world environments.
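As a minimal sketch of the cross-checking idea (this is not Xelec's production pipeline; the per-sensor confidence values, weights, and fog flag are assumptions for illustration), a fusion layer can down-weight a degraded sensor and still confirm an obstacle from the others:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # "lidar", "radar", or "camera"
    distance_m: float  # estimated distance to the obstacle
    confidence: float  # 0..1, reported by the sensor's own pipeline

def fuse_distance(detections: list[Detection], dense_fog: bool) -> float | None:
    """Confidence-weighted distance estimate; LiDAR and camera are
    down-weighted when dense fog is reported (illustrative weights)."""
    weights = {"lidar": 1.0, "camera": 1.0, "radar": 1.0}
    if dense_fog:
        weights["lidar"] = 0.2   # heavily attenuated in fog (assumed factor)
        weights["camera"] = 0.3  # poor visibility (assumed factor)

    total = sum(d.confidence * weights[d.sensor] for d in detections)
    if total == 0:
        return None  # nothing trustworthy to fuse
    return sum(d.distance_m * d.confidence * weights[d.sensor] for d in detections) / total

# Example: a sparse, low-confidence LiDAR return but a strong radar return in fog
obstacle = [
    Detection("lidar", 38.0, 0.2),
    Detection("radar", 41.5, 0.9),
]
print(f"Fused obstacle distance: {fuse_distance(obstacle, dense_fog=True):.1f} m")  # ~41.4 m
```

The design point is that no single reading is trusted blindly: each sensor contributes in proportion to how much the current conditions favor it, so the fused estimate leans on radar in fog and on LiDAR and cameras in clear weather.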



How Xelec Supports Multi-Modal Sensor Fusion


Xelec specializes in developing sensor solutions that integrate multiple technologies. Our expertise includes:


  • Designing radar systems optimized for harsh weather conditions

  • Integrating LiDAR and radar data streams for real-time processing

  • Developing algorithms that intelligently combine sensor inputs to improve object detection and tracking


By focusing on multi-modal sensor fusion, Xelec helps autonomous systems overcome the limitations of individual sensors. Our solutions ensure machines can "see" clearly, even when the environment tries to blind them.



Moving Forward with Clear Vision


LiDAR’s stunning 3D images capture the imagination, but they are not enough for safe autonomous operation in all conditions. Radar’s ability to see through fog and smoke makes it a critical partner in sensor fusion.


The train in fog example highlights the real dangers of relying on a single sensor type. Autonomous systems must combine the strengths of LiDAR, radar, and cameras to navigate safely and reliably.


Xelec’s expertise in multi-modal sensor fusion provides the tools and knowledge to build autonomous systems that work in the real world, no matter the weather. For developers and engineers, the takeaway is clear: build sensor systems that complement each other, so your autonomous machines never get blinded by the white.



