Microsoft's new AI-powered Recall feature for Windows is facing renewed scrutiny from cybersecurity experts and privacy advocates over potential security vulnerabilities.
Recall, designed to create a searchable timeline of user activity by periodically capturing screenshots, has been criticized for storing sensitive data in plain text. Security researchers warn that this approach could expose passwords, financial information, and private communications if a device is compromised.
"The fundamental architecture of Recall creates significant privacy risks," explained cybersecurity analyst Dr. Elena Rodriguez. "By capturing everything users do without adequate encryption, Microsoft is essentially building a treasure trove for potential attackers."
Microsoft has defended Recall as an optional feature whose data is stored locally, but critics argue that the default settings and implementation details remain problematic. The company faces growing pressure to implement stronger encryption and clearer user controls before Recall's broader rollout.
This controversy emerges as tech companies increasingly integrate AI features into operating systems, highlighting the ongoing tension between innovation and user privacy protection.