Giving robots superhuman vision using radio signals


Freddy Liu (EAS’25), Haowen Lai (Gr’28) and Mingmin Zhao, Assistant Professor in CIS, from left, setting up a robot equipped with PanoRadar for a test run. Credit: University of Pennsylvania

In the race to develop robust perception systems for robots, one persistent challenge has been operating in bad weather and harsh conditions. Traditional light-based sensors such as cameras and LiDAR (Light Detection and Ranging), for example, fail in heavy smoke and fog.

However, nature has shown that vision doesn’t have to be constrained by light’s limitations—many organisms have evolved ways to perceive their environment without relying on light. Bats navigate using the echoes of sound waves, while sharks hunt by sensing electrical fields from their prey’s movements.

Radio waves, whose wavelengths are orders of magnitude longer than those of visible light, can better penetrate smoke and fog, and can even pass through certain materials, all capabilities beyond human vision. Yet robots have traditionally relied on a limited toolbox: they either use cameras and LiDAR, which provide detailed images but fail in challenging conditions, or traditional radar, which can see through walls and other occlusions but produces crude, low-resolution images.

A new way to see

Now, researchers from the University of Pennsylvania School of Engineering and Applied Science (Penn Engineering) have developed PanoRadar, a new tool to give robots superhuman vision by transforming simple radio waves into detailed, 3D views of the environment.

“Our initial question was whether we could combine the best of both sensing modalities,” says Mingmin Zhao, Assistant Professor in Computer and Information Science. “The robustness of radio signals, which are resilient to fog and other challenging conditions, and the high resolution of visual sensors.”

PanoRadar works like a lighthouse, with a rotating sensor that emits radio waves, whose echoes are processed by AI into an accurate, 3D image of the surroundings.

In a paper to be presented at the International Conference on Mobile Computing and Networking (MobiCom 2024), held Nov. 18–22 in Washington, D.C., Zhao and his team describe how PanoRadar leverages radio waves and artificial intelligence (AI) to let robots navigate even the most challenging environments, like smoke-filled buildings or foggy roads.

The team, from the Wireless, Audio, Vision, and Electronics for Sensing (WAVES) Lab and the Penn Research In Embedded Computing and Integrated Systems Engineering (PRECISE) Center, includes doctoral student Haowen Lai, recent master’s graduate Gaoxiang Luo and undergraduate research assistant Yifei (Freddy) Liu.

Spinning like a lighthouse

PanoRadar operates like a lighthouse that sweeps its beam in a circle to scan the entire horizon. Its rotating vertical array of antennas sends out radio waves and listens for their reflections from the environment, much as a lighthouse’s beam reveals the presence of ships and coastal features.

Thanks to AI, PanoRadar goes beyond this simple scanning strategy. Unlike a lighthouse that merely illuminates different areas as it rotates, PanoRadar combines measurements from all rotation angles to enhance its imaging resolution. The rotation creates a dense array of virtual measurement points, allowing PanoRadar to achieve imaging resolution comparable to LiDAR while the sensor itself costs only a fraction as much as typical LiDAR systems.
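To make the virtual-array idea concrete, here is a minimal back-projection sketch in Python. It is an illustration under assumed parameters (a 77 GHz carrier, a 5 cm rotation radius, and a single-tone echo model), not the team’s actual processing pipeline:

```python
import numpy as np

# Minimal back-projection sketch of the "virtual array" idea. All parameters
# are assumptions for illustration (the article does not give them): a 77 GHz
# millimeter-wave carrier, a 5 cm rotation radius, a single-tone echo model.

C = 3e8                  # speed of light, m/s
FREQ = 77e9              # assumed carrier frequency, Hz
WAVELENGTH = C / FREQ    # ~3.9 mm
RADIUS = 0.05            # assumed rotation radius of the antenna, m

def antenna_position(angle):
    """Location of the rotating antenna at a given rotation angle (radians)."""
    return np.array([RADIUS * np.cos(angle), RADIUS * np.sin(angle)])

def backproject(echoes, angles, grid_points):
    """Coherently focus echoes from all rotation angles onto 2D grid points.

    Each rotation angle contributes one virtual measurement; undoing the
    round-trip propagation phase makes returns from a true scatterer add up
    in phase, which is what sharpens the image.
    """
    image = np.zeros(len(grid_points), dtype=complex)
    for echo, angle in zip(echoes, angles):
        pos = antenna_position(angle)
        dist = 2.0 * np.linalg.norm(grid_points - pos, axis=1)  # round trip
        image += echo * np.exp(1j * 2 * np.pi * dist / WAVELENGTH)
    return np.abs(image)

# Quick check: echoes from a single simulated scatterer should focus to a
# peak at the scatterer's location.
angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
target = np.array([2.0, 1.0])
echoes = np.array([
    np.exp(-1j * 2 * np.pi * 2.0 * np.linalg.norm(target - antenna_position(a))
           / WAVELENGTH)
    for a in angles
])
xs = np.linspace(1.5, 2.5, 101)
profile = backproject(echoes, angles, np.array([[x, 1.0] for x in xs]))
print(xs[np.argmax(profile)])  # prints 2.0: the peak sits at the scatterer
```

The coherent sum is the key step: echoes recorded at many rotation angles are phase-aligned for each candidate location, so a small, inexpensive antenna behaves like a much larger aperture.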

“The key innovation is in how we process these radio wave measurements,” explains Zhao. “Our signal processing and machine learning algorithms are able to extract rich 3D information from the environment.”

Teaching the AI

One of the biggest challenges Zhao’s team faced was developing algorithms to maintain high-resolution imaging while the robot moves. “To achieve LiDAR-comparable resolution with radio signals, we needed to combine measurements from many different positions with sub-millimeter accuracy,” explains Lai, the lead author of the paper. “This becomes particularly challenging when the robot is moving, as even small motion errors can significantly impact the imaging quality.”
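A back-of-the-envelope calculation shows why such precision matters. Assuming a millimeter-wave carrier around 77 GHz (the article does not state PanoRadar’s operating band), a 1 mm error in the robot’s estimated position changes the round-trip path by 2 mm, more than half a wavelength:

```python
import math

# Why sub-millimeter accuracy matters: at millimeter wavelengths, a tiny
# position error becomes a large phase error. The 77 GHz band is an
# assumption; the article does not state PanoRadar's operating frequency.
wavelength = 3e8 / 77e9                # ~3.9 mm
position_error = 1e-3                  # 1 mm of motion estimation error
round_trip_error = 2 * position_error  # the echo travels out and back
phase_error = 2 * math.pi * round_trip_error / wavelength
print(f"{phase_error:.2f} rad")        # ~3.23 rad, over half a cycle
```

A phase error of roughly 3.2 radians nearly inverts an echo, so measurements that should reinforce each other cancel instead, and the image blurs.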

Another challenge the team tackled was teaching their system to understand what it sees. “Indoor environments have consistent patterns and geometries,” says Luo. “We leveraged these patterns to help our AI system interpret the radar signals, similar to how humans learn to make sense of what they see.” During training, the machine learning model used LiDAR data as ground truth to check its predictions against reality and steadily improve.
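The supervision scheme Luo describes can be sketched as an ordinary training loop in which co-collected LiDAR depth serves as the label for the radar input. The network, loss, and tensor shapes below are placeholders for illustration, not the paper’s architecture:

```python
import torch
from torch import nn

# Schematic of the supervision described above: a network maps processed radar
# input to a depth image, and co-collected LiDAR depth is the training label.
# The architecture, loss, and tensor shapes are placeholders, not the paper's.

class RadarToDepth(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(                  # stand-in for the real model
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, radar_heatmap):
        return self.net(radar_heatmap)

model = RadarToDepth()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()                              # assumed loss function

def train_step(radar_heatmap, lidar_depth):
    """One supervised update: LiDAR depth is the 'reality check' for radar."""
    optimizer.zero_grad()
    loss = loss_fn(model(radar_heatmap), lidar_depth)
    loss.backward()
    optimizer.step()
    return loss.item()

# Random stand-in tensors with shape (batch, channel, height, width):
print(train_step(torch.randn(2, 1, 64, 512), torch.rand(2, 1, 64, 512)))
```

Once trained, such a model produces its 3D output from radar alone; the LiDAR is only needed during training to supply ground truth.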

“Our field tests across different buildings showed how radio sensing can excel where traditional sensors struggle,” says Liu. “The system maintains precise tracking through smoke and can even map spaces with glass walls.”

This is because radio waves aren’t easily blocked by airborne particles, and the system can even “capture” things that LiDAR can’t, like glass surfaces. PanoRadar’s high resolution also means it can accurately detect people, a critical feature for applications like autonomous vehicles and rescue missions in hazardous environments.

Looking ahead, the team plans to explore how PanoRadar could work alongside other sensing technologies like cameras and LiDAR, creating more robust, multi-modal perception systems for robots. The team is also expanding its tests to include various robotic platforms and autonomous vehicles.

“For high-stakes tasks, having multiple ways of sensing the environment is crucial,” says Zhao. “Each sensor has its strengths and weaknesses, and by combining them intelligently, we can create robots that are better equipped to handle real-world challenges.”

More information:
Haowen Lai et al., Enabling Visual Recognition at Radio Frequency, Proceedings of the 30th Annual International Conference on Mobile Computing and Networking (MobiCom 2024). DOI: 10.1145/3636534.3649369

Provided by
University of Pennsylvania

Citation:
Giving robots superhuman vision using radio signals (2024, November 12)
retrieved 12 November 2024
from https://techxplore.com/news/2024-11-robots-superhuman-vision-radio.html
