Conversations remotely detected from cell phone vibrations, researchers report

[Image: a cell phone call. Credit: Pixabay/CC0 Public Domain]

An emerging form of surveillance, wireless tapping, explores the possibility of remotely deciphering conversations from the tiny vibrations produced by a cell phone's earpiece. With the goal of protecting users' privacy from potential bad actors, a team of computer science researchers at Penn State demonstrated that transcriptions of phone calls can be generated from radar measurements taken up to 3 meters, or about 10 feet, from a phone. While accuracy remains limited, around 60% for a vocabulary of up to 10,000 words, the findings raise important questions about future privacy risks.

They published their research in Proceedings of WiSec 2025: 18th ACM Conference on Security and Privacy in Wireless and Mobile Networks. The work builds upon a 2022 project in which the team used a radar sensor and voice recognition software to wirelessly identify 10 predefined words, letters and numbers with up to 83% accuracy.

“When we talk on a cell phone, we tend to ignore the vibrations that come through the earpiece and cause the whole phone to vibrate,” said first author Suryoday Basak, doctoral candidate in computer science. “If we capture these same vibrations using remote radars and bring in machine learning to help us learn what is being said, using context clues, we can determine whole conversations. By understanding what is possible, we can help the public be aware of the potential risks.”

Basak and his advisor, Mahanth Gowda, associate professor of computer science and engineering, who co-authored the paper, used a millimeter-wave radar sensor—the same type of technology used in self-driving cars, motion detectors and 5G wireless networks—to explore the potential for compact, radar-based devices that could be miniaturized to fit inside everyday objects like pens.

Their experimental setup is only for research purposes, the researchers said, developed in anticipation of what bad actors could potentially create. They then adapted Whisper, an open-source, large-scale speech recognition model powered by artificial intelligence (AI), to decode the vibrations into recognizable speech transcriptions.
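The paper itself details the full pipeline, but the general idea of the preprocessing step can be sketched as follows: the radar yields a displacement time series of the phone's surface, which must be converted into an audio-like waveform before a speech model can consume it. The sketch below is illustrative only and is not the authors' implementation; the function name, the radar sampling rate, and the linear-interpolation resampling are all assumptions. The one concrete detail it relies on is that Whisper expects 16 kHz mono audio.

```python
import numpy as np

def radar_to_waveform(displacement, radar_rate, audio_rate=16_000):
    """Convert a radar displacement time series into a mono waveform.

    Hypothetical preprocessing sketch: the earpiece vibration signal
    captured by the radar is resampled to the 16 kHz rate that speech
    models such as Whisper expect, then peak-normalized.
    """
    displacement = np.asarray(displacement, dtype=np.float64)
    # Remove the DC offset (the phone's static position).
    displacement -= displacement.mean()
    # Resample from the radar's rate to the audio rate via
    # linear interpolation (a real system might use a proper
    # polyphase resampler instead).
    duration = len(displacement) / radar_rate
    n_out = int(round(duration * audio_rate))
    t_in = np.arange(len(displacement)) / radar_rate
    t_out = np.arange(n_out) / audio_rate
    waveform = np.interp(t_out, t_in, displacement)
    # Peak-normalize to [-1, 1] so amplitudes are model-friendly.
    peak = np.max(np.abs(waveform))
    if peak > 0:
        waveform /= peak
    return waveform.astype(np.float32)
```

The resulting array could then be handed to a speech recognizer; in the researchers' case, a fine-tuned Whisper model produces the transcription from such vibration-derived signals.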

“Over the last three years, there’s been a huge explosion in AI capabilities and…

