a literal horse race, not to be mistaken for the horse race among all of the generative AI vendors
News Analysis

Microsoft Recall Woes Are Emblematic of Broader Generative AI Security Issues

By David Barry
In their race to push out new generative AI capabilities, vendors are skipping due diligence. Microsoft's Recall mess is just one high-profile example.

On May 20, Microsoft introduced Copilot+ PCs. According to the company, the new PCs — slated for a June 18 release — would offer enhanced performance and innovative AI experiences. They would also arrive with a new feature called Recall.

The Recall feature was designed to help users find information on their PC via an explorable visual timeline. Recall would build the timeline by taking continuous snapshots of everything that appears on a person's computer screen.
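
As a rough illustration of that snapshot-timeline pattern (and not Microsoft's implementation), a continuous-capture loop can be only a few lines: grab the screen on a timer and write timestamped images to a local folder that an index can later search. This sketch assumes the Pillow library, whose ImageGrab module works on Windows and macOS.

```python
# Minimal sketch of a snapshot-timeline capture loop -- illustrative only,
# not Recall's code. Assumes Pillow; ImageGrab.grab() captures the screen.
import time
from datetime import datetime
from pathlib import Path

from PIL import ImageGrab

SNAPSHOT_DIR = Path("snapshots")  # hypothetical local store
SNAPSHOT_DIR.mkdir(exist_ok=True)


def capture_loop(interval_seconds: float = 5.0) -> None:
    """Save a timestamped screenshot every few seconds."""
    while True:
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        ImageGrab.grab().save(SNAPSHOT_DIR / f"{stamp}.png")
        time.sleep(interval_seconds)


if __name__ == "__main__":
    capture_loop()
```

Everything contentious about Recall follows from that simplicity: the capture itself is trivial, so what matters is how the resulting archive is protected.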

Rolling Back Recall

After high-profile security professionals posted a series of criticisms of the feature, Microsoft backed down and buried the release in the Windows Insider Program. It will now emerge sometime in “the coming weeks.”

A blog outlining the updated timeline reads: “Following receiving feedback on Recall from our Windows Insider Community, as we typically do, we plan to make Recall (preview) available for all Copilot+ PCs coming soon.”

Later in the post, Pavan Davuluri, corporate vice president for Windows and devices, explained that even Windows Insider customers would need a Copilot+ PC to use Recall because of its hardware requirements.

Related Article: The Double-Sided Coin of Using Generative AI for Cybersecurity

Recall's First Adjustments

The June 13 announcement marks the second time Microsoft has pulled back on Recall.

When it was first announced, Microsoft explained it designed Recall to use local AI models to screenshot everything a user does or sees on their computer. The protests that quickly followed forced Microsoft to change its plan. In a June 7 blog post, the company stated it was updating Recall’s access settings with the following:

  1. The set-up experience on Copilot+ PCs will require users to specifically choose to turn Recall on. If they do not, it remains off by default.
  2. To enable Recall, users will have to re-enroll with Windows Hello — a feature to sign in to Windows devices with your face, fingerprint or a PIN. Proof of presence is also required to view your timeline and search in Recall.
  3. Additional layers of data protection, including “just in time” decryption secured by Windows Hello Enhanced Sign-in Security (ESS), ensure that Recall snapshots are decrypted and accessible only when the user successfully authenticates (a simplified sketch of this gating pattern follows the list).
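
To make that third change concrete, here is a minimal sketch of the just-in-time decryption pattern Microsoft describes. It is an illustration only, not Microsoft's ESS implementation: `authenticate_user` is a hypothetical stand-in for Windows Hello proof of presence, and Fernet stands in for whatever hardware-backed key protection Windows actually uses.

```python
# Illustrative sketch of "just in time" decryption -- NOT Microsoft's code.
# Assumes the `cryptography` package; authenticate_user() is a hypothetical
# stand-in for Windows Hello face/fingerprint/PIN proof of presence.
from cryptography.fernet import Fernet


def authenticate_user() -> bool:
    """Placeholder for a Windows Hello proof-of-presence check."""
    return input("PIN: ") == "1234"  # illustrative only


def read_snapshot(path: str, key: bytes) -> bytes:
    """Snapshots stay encrypted at rest; decrypt only after a fresh,
    successful authentication."""
    if not authenticate_user():
        raise PermissionError("Not authenticated; snapshot stays encrypted.")
    with open(path, "rb") as f:
        return Fernet(key).decrypt(f.read())
```

The point of the pattern is that the data on disk is never readable in bulk; each access requires an in-the-moment authentication.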

It remains unclear whether these protections will apply to Windows Insider Program participants, but it seems likely, given that the June 13 update was appended to the original post outlining the default settings and Windows Hello enablement.

'The Dumbest Cybersecurity Move in a Decade'

In the original announcement, Microsoft said the screenshots taken by Recall would be stored locally and analyzed locally using the generative AI capabilities of the machine. The advantage, it said, would be that users could “retrace their steps.” Microsoft also noted that control of the data remains in the hands of the user.

"You are always in control of what is saved. You can disable saving snapshots, pause temporarily, filter applications and delete your snapshots at any time,” the blog reads.

Microsoft argued that even if the machine storing the data were hacked, the culprits could not access or use the data Recall harvested and stored on the computer.

However, former Microsoft threat analyst Kevin Beaumont claimed to have done exactly that, showing that it is, in fact, possible to export the data, which is stored locally in a SQLite database.
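
Beaumont's point is easy to see in a few lines of code: any process running as the user can open an unencrypted SQLite file and dump it. The path and query below are hypothetical stand-ins for illustration, not Recall's confirmed schema.

```python
# Hedged illustration of why an unencrypted local SQLite store is trivially
# readable. The path and table names are hypothetical, not Recall's schema.
import sqlite3

DB_PATH = r"C:\Users\me\AppData\Local\ExampleRecallStore\history.db"  # hypothetical

with sqlite3.connect(DB_PATH) as conn:
    # Enumerate the tables; any of them can then be dumped wholesale.
    for (name,) in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"
    ):
        print(name)
```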

Microsoft told media outlets a hacker cannot exfiltrate Copilot+ Recall activity remotely.

Reality: how do you think hackers will exfiltrate this plain text database of everything the user has ever viewed on their PC? Very easily, I have it automated.

HT detective pic.twitter.com/Njv2C9myxQ

— Kevin Beaumont (@GossiTheDog) May 30, 2024

On Mastodon he wrote, “I’m not being hyperbolic when I say this is the dumbest cybersecurity move in a decade.”

Related Article: A Zero Trust Security Primer

Generative AI Security Comes to the Fore

Recall is just one high-profile entry on a growing list of concerns about reckless security practices among those building generative AI, as well as about the lack of transparency in the development of large language models.

A group called Right to Warn about Advanced Artificial Intelligence, made up of “former employees at frontier AI companies” including Daniel Kokotajlo, a former researcher in OpenAI’s governance division, released an open letter stating that while its members believe “in the potential of AI technology to deliver unprecedented benefits to humanity,” they also have major concerns about the risks it poses.

“These risks range from the further entrenchment of existing inequalities to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction,” the letter reads.

Other members of the group include William Saunders, a research engineer who left OpenAI in February, and three other former OpenAI employees: Carroll Wainwright, Jacob Hilton and Daniel Ziegler. Several current OpenAI employees endorsed the letter anonymously because they feared retaliation from the company, Kokotajlo said. One current and one former employee of Google DeepMind, Google's central AI lab, also signed.

The New York Times' coverage of the letter quoted Saunders: “When I signed up for OpenAI, I did not sign up for this attitude of ‘Let’s put things out into the world and see what happens and fix them afterward.’”

A Vote in Favor of Local LLMs

Some support for Recall has come from those pointing to the benefits of local LLMs and the ability to secure and air-gap the computers that run them. ITVA.AI founder and CEO Ben Tran told Reworked that while giants like OpenAI, Google and Anthropic have focused on large models that run in the cloud, Microsoft and Meta have been developing smaller models, like Microsoft's Phi and Meta's Llama, which can run locally on personal machines.

Enterprises are eagerly awaiting the release of open-source models like Meta's 400B+ parameter Llama 3 to harness locally for AI applications, he continued.

Even before Recall's release, Tran said, you could download Microsoft's Phi model and Meta's Llama model from Hugging Face, disconnect your computer from the internet and run the models locally.
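
As a minimal sketch of what Tran describes, the snippet below loads a small model from Hugging Face and generates text entirely on the local machine. It assumes the transformers and torch packages and the publicly available microsoft/phi-2 checkpoint; after the first download, the weights live in the local cache, so the machine can be taken offline (or forced offline with the HF_HUB_OFFLINE=1 environment variable) and inference still works.

```python
# Minimal local-inference sketch, per Tran's description. Assumes the
# `transformers` and `torch` packages and the public microsoft/phi-2 model.
# After the first download the weights are cached locally, so you can
# disconnect from the internet (or set HF_HUB_OFFLINE=1) and still run this.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "In one sentence, why can local inference improve data security?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```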

Microsoft's new product uses these smaller models and locally stored screenshots to deliver AI applications directly on the device. When implemented correctly, he added, this is the most secure way to use AI and LLMs.

Related Article: Thinking of Building an LLM? You Might Need a Vector Database

Upholding AI Security Standards

Generative AI's intersection with security remains a complex issue, Cyber Command CEO and founder Reade Taylor told Reworked. The deployment of AI in security applications has been pivotal in staying ahead of sophisticated cyberattacks.

However, for any of this to work, he said, transparency in AI operations and stringent compliance checks are non-negotiable. Microsoft and other vendors must maintain high security standards, mirroring practices already in place across various industries, to ensure AI's responsible and secure use.

The broader implication speaks to the need for all tech vendors — not just Microsoft — to prioritize security in their AI advancements. That means constant assessment, regular updates and user education on best practices, measures Microsoft must match to keep Recall from becoming a cybersecurity liability, Taylor concluded.

About the Author
David Barry

David is a Europe-based journalist with 35 years' experience who has spent the last 15 following the development of workplace technologies, from the early days of document management through enterprise content management and content services. Now, with the rise of remote and hybrid work models, he covers the evolution of technologies that enable collaboration, communication and work, and has recently spent a great deal of time exploring the far reaches of AI, generative AI and general AI.

Main image: Pietro Mattia | Unsplash