A person looking at a smartphone that is "reading his mind"
Editorial

AI Can Already Read Your Mind (And You Probably Agreed to It)

By Timothy Cook
The mind-reading tech isn't coming in ten years. It's already in your pocket.

When people hear "mind-reading technology," they picture brain chips like Neuralink drilled into skulls. Those implants are close to reality, but they still read as science-fiction dystopia, and they are not the real threat: most of us could opt out simply by declining the surgery.

The real threat lies in the invisibility of devices we don't realize have the same capabilities, and in terms and conditions we rarely read. By integrating new AI models, the consumer wearable industry has built something pervasive: inference-based cognitive surveillance running on devices you already own.

The Signals You Can't Hide

Your smartwatch measures heart rate variability. The VR headset tracks your eye movements. Earbuds with microphones can detect micro-changes in your voice.

These devices aren't measuring your thoughts directly. They're measuring the subconscious biological signals that betray your thoughts, and AI statistical models have gotten good enough at inference that the distinction barely matters anymore.

TikTok's January 2026 privacy policy update made explicit what many platforms had previously buried in vague language. The company may now collect "biometric identifiers and biometric information" including "faceprints and voiceprints" from user content. The policy notes this collection may occur during content creation, before users even decide to post.

TikTok's policy is explicit about what others obscure, but the collection infrastructure is expanding beyond phones.

Meta's Ray-Ban smart glasses (the company sold over seven million pairs in 2025) now bring always-available cameras and microphones into physical spaces. A February 2026 New York Times investigation documented how wearers record restaurant staff, strangers and conversations without subjects' awareness. The recording indicator is a faint light that most people miss or mistake for Bluetooth.

Meta Ray-Bans beside a photo taken with Meta Ray-Bans at Shakespeare's Globe theatre in London

Google, Apple and OpenAI are all developing similar products, with facial recognition capabilities expected soon. The inference problem is no longer confined to devices you wear. It now includes devices others wear while looking at you.

Related Article: The AI Device Wars Just Kicked Off In A Big Way

Biometric Psychography: What Inference Can Expose

Researchers Magee, Ienca and Farahany have shown that algorithms can already infer sensitive facets of identity from physiological signals. These include:

  • Sexual orientation
  • Personality traits
  • Drug use
  • Mental health conditions

Researchers have used similar techniques to uncover proxies for PINs, romantic attractions and skill levels at various tasks. Brittan Heller termed this capability "biometric psychography": the extraction of psychological profiles from physiological data. Your body often betrays your mind. And sensors are getting more ubiquitous, not less.

Imagine a college student scrolling through TikTok late at night. A video discussing the early signs of severe anxiety or an eating disorder pops onto his "For You" page. He doesn't "like" the video. He doesn't even leave a comment, save it or share it with a friend. In fact, after only a brief glance, he quickly swipes away because he doesn't want to leave a digital trail of his insecurities. Consciously, he has chosen not to engage.

But the app wasn't just tracking his explicit clicks. In the three seconds the video was on screen, the software recorded the exact milliseconds his thumb hovered over the glass before he finally swiped away. It measured the subtle deceleration of his scrolling speed the moment the video registered in his brain. It logged the time of day, his battery level and the fact that he paused just a fraction of a second longer on this video than on the comedy skit before it.

He never typed a word about his mental health. But by analyzing his subconscious behavioral biometrics, the algorithm was capable of inferring his psychological state. The tech company now holds a deeply personal profile of his vulnerabilities, extracted entirely from involuntary digital body language he couldn't hide.

TikTok's updated policy makes this data extraction explicit. Under its "Information We Collect Automatically" section, the platform notes that it tracks granular "Usage Information," including the exact duration of your screen time and how you interact with the platform. Furthermore, the policy legally permits the collection of "keystroke patterns or rhythms" and device touch interactions. The algorithm doesn't need to analyze your face through the camera to gauge your emotions. It measures your micro-hesitations and physical screen behaviors, frame by frame, before you've made any conscious decision to share anything.
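To make concrete how little explicit input this kind of inference needs, here is a minimal, hypothetical sketch of scoring "passive interest" from scroll telemetry alone. The feature names, thresholds and weights are invented for illustration; this is not TikTok's actual model, only a demonstration that timing signals by themselves carry the information.

```python
# Hypothetical sketch: scoring passive interest from scroll telemetry.
# Feature names, caps and weights are invented for illustration;
# no real platform's model is shown here.

from dataclasses import dataclass

@dataclass
class ScrollEvent:
    dwell_ms: int        # how long the video stayed on screen
    hover_ms: int        # thumb hovering over the glass before the swipe
    decel_ratio: float   # scroll-speed drop vs. the user's baseline (1.0 = none)

def passive_interest(event: ScrollEvent, baseline_dwell_ms: int) -> float:
    """Return a 0..1 interest score built from involuntary timing alone.

    No like, comment, save or share is consulted: the score comes
    entirely from how the thumb behaved before the conscious swipe.
    """
    dwell_signal = min(event.dwell_ms / max(baseline_dwell_ms, 1), 2.0) / 2.0
    hover_signal = min(event.hover_ms / 500, 1.0)            # cap at 500 ms
    decel_signal = min(max(event.decel_ratio - 1.0, 0.0), 1.0)
    # Simple weighted blend; a production system would learn these weights.
    return 0.5 * dwell_signal + 0.3 * hover_signal + 0.2 * decel_signal

# A three-second pause on a video the user "didn't engage with"
# still scores near the top of the scale.
event = ScrollEvent(dwell_ms=3000, hover_ms=400, decel_ratio=1.8)
print(passive_interest(event, baseline_dwell_ms=1500))
```

The point of the sketch is that every input is involuntary: the user's only "choice" was swiping away, and even the timing of that swipe became a feature.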

The Pre-Consent Collection Problem

This is entirely different from traditional data collection. Biometric signals are captured continuously, automatically and before any user action triggers them. If you click "buy" on a pair of shoes, you initiate the action, and the data flows from that choice. If you merely view a pair of shoes and your heart rate rises or you subconsciously grin, you initiated nothing. The response was biological, and the data collected from it is divorced from your agency.

Biometric data in isolation means little: a heart rate is just an isolated data point. It becomes dangerous when algorithmic inference gets involved. Heart rate variability measured while you read a news article is data. Pupil dilation during a product presentation is biometric data. A stress response to a political claim, or a reaction to a provocative point of view, lets an algorithm infer a political viewpoint.
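The danger of combination can be shown in a few lines. The following hypothetical sketch, with entirely invented timestamps and labels, joins two individually "meaningless" streams, stress spikes from a wearable and a log of what was on screen, into a topic-level stress profile:

```python
# Hypothetical sketch: why a lone reading is harmless but a join is not.
# All timestamps, thresholds and labels are invented for illustration.

stress_spikes_ms = [47_900, 93_100, 95_000]   # wearable: moments of elevated stress
screen_log = [                                 # phone: what was on screen, and when
    (0,      30_000,  "sports recap"),
    (30_000, 60_000,  "tax policy op-ed"),
    (60_000, 120_000, "immigration debate clip"),
]

def topics_at_spikes(spikes, log):
    """Join the two streams: count stress spikes per on-screen topic."""
    hits = {}
    for t in spikes:
        for start, end, topic in log:
            if start <= t < end:
                hits[topic] = hits.get(topic, 0) + 1
    return hits

print(topics_at_spikes(stress_spikes_ms, screen_log))
```

Neither stream alone says anything about politics; the join does, and neither stream required a click, a search or a post to exist.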

You didn't even have to click anything. You didn't search for anything. You didn't post anything. You just... reacted. Biologically. Involuntarily.

Traditional consent frameworks assume a moment of choice. You decide to share something, and the law assumes you understood what you were sharing (even if you didn't). But cognitive biometric collection happens before any decision. The data exists because you have a body and the sensors or cameras are present.

Related Article: AI Knows What You Want Before You Do. Welcome to the New Personalization Era

When 'Opt Out' Isn't an Option

Pre-consent collection assumes the device is yours. But Meta's smart glasses introduce a different vector: cognitive biometric data collected by devices other people wear while observing you.

In public spaces, your facial expressions, emotional reactions and behavioral patterns can now be captured by any stranger wearing ordinary-looking eyewear. The Times quoted one content creator: "Most of the time I approach someone with glasses, they don't realize what's happening."

This isn't a hypothetical. It's about to be a $290 billion market with millions of new recording devices deployed in a single year, and the companies manufacturing them are the same ones building the inference models.

Traditional privacy debate assumes a controllable perimeter. You decide what to share and that becomes protected. You click accept. You hand over your data in exchange for a service. But cognitive biometric inference through third-party devices dissolves that perimeter entirely. You cannot opt out of someone else's glasses. You cannot consent to a sensor you don't know is pointed at you. You cannot manage privacy settings on a device that belongs to a stranger standing three feet away in a coffee shop.

Privacy law was built for a world where data collection required your participation. Biometric inference requires only your presence. The college student scrolling TikTok at least chose to open the app. The person being recorded by a stranger's smart glasses made no choice at all. Their face, their micro-expressions, their emotional reactions became data because they existed in a physical space where a $290 device was pointed in their direction.


The mind-reading technology isn't coming. It's here, operating through inference rather than direct neural access, and increasingly through devices you don't own and didn't agree to. The question is no longer only whether the data you consented to share is protected. Your presence in a public space is enough to make your cognitive state someone else's product.


About the Author
Timothy Cook

Timothy Cook, M.Ed., is Director of The Cognitive Privacy Project and writes the "Algorithmic Mind" column for Psychology Today.

Main image: Simpler Media Group