Consider the following scenario:
Anxiety in response to rapid technological advances. Depression as the known world seems to “disappear” amid accelerating innovation. A growing sense of purposelessness. Psychologists and doctors, puzzled by this trend, call it neurasthenia and recommend a high-calorie diet and isolation.
This situation is not from today but from the Industrial Revolution, when electrification, mechanization and the rapid adoption of critical technologies left many, especially the stable upper class, deeply unsettled.
Neurasthenia remains a diagnosis in some Asian cultures, but it eventually faded in the U.S., absorbed into the Diagnostic and Statistical Manual of Mental Disorders (DSM) concepts of depression and anxiety. The impact of technology on our collective well-being, however, remains a broad modern discourse. Social media use, its purported impact on teen depression and the increasing ubiquity and power of large language models (LLMs) are opening new discussions about how technology affects our collective and individual psychological health.
Here are some highlights of the growing perspectives within the psychology field on AI, particularly the rise of generative AI.
Historical Context and Notable Commentators on Psychology and AI
Alan Turing’s seminal essay, which asks, “Can machines think?”, explores the practical possibilities of a computer’s psychology. B. F. Skinner’s behaviorist approach to reinforcement underlies the reinforcement learning techniques used in modern LLM training. Psychology’s influence continues: human-computer interaction (HCI), sentiment analysis and neuroscience, a co-parent of AI, are all fields with deep roots in psychology.
There are notable psychology-AI crossovers. Geoffrey Hinton, born in 1947, began as an experimental psychologist and completed his Ph.D. in artificial intelligence in 1978. Known as a “godfather of deep learning,” Hinton pioneered back-propagation for training multi-layer neural networks on large amounts of data, paving the way for modern LLMs. He worked at Google until 2023; upon his departure, he began warning publicly about the lack of investment in safe and ethical AI.
Alongside Hinton’s calls for safe AI, questions of bias are critical for psychology, especially in the context of the patient relationship. Arthur C. Evans Jr., Ph.D., CEO of the American Psychological Association, testified before the U.S. Senate about opportunities to improve access to education and health care with AI, while also warning about perpetuating existing biases and compromising privacy. He encouraged a more explicit partnership between AI companies and the psychology community: “Without incorporating psychological science deeply into the development of AI tools, we risk continuing to harm already disadvantaged populations and creating systems that perpetuate harmful stereotypes and bias.”
Gary Marcus, a cognitive psychologist, entrepreneur and professor emeritus at NYU, is another voice of note who has been called the “loudest critic of AI.” Marcus frequently addresses what he sees as overhype of GenAI, particularly concerning corporate transparency and consistency. In regular media appearances and his personal newsletter, he advocates for more effective governance and market collaboration by AI companies.
Additionally, local-level criticism of AI is emerging. For example, Lisa Strohman, a cognitive psychologist and founder of the Digital Citizens Academy, views teenage digital communication via social media as especially dangerous when combined with the power of deepfakes. She fears that “lots of damage can be done quickly” to individual well-being given the power of GenAI. Given Facebook’s role in both social media and AI development, a merging of social media and AI critiques appears likely.
Recent Research Explores AI-Worker Health and Productivity Tensions
Many of the concerns raised by psychologists echo those held by the public in research polling. AI’s influence on bias, empowerment and the future of work is a consistent public concern. Specific studies exploring these ideas further include:
AI Use at Work May Increase Insomnia, Loneliness and Alcohol Consumption for Some
A study published in the Journal of Applied Psychology describes a correlation between AI usage and feelings of social disconnection, leading to increased alcohol use and insomnia. The study, which summarizes four sub-studies, concludes that “employees with higher levels of anxious attachment are more sensitive to interacting with AI.” One author highlights AI’s usefulness but emphasizes the need to view it as distinct from other work tools, given its cognitive nature and its role as a pseudo-replacement for a social work environment.
AI May Contribute to Feelings of Worker Disempowerment
The U.K.-based Institute for the Future of Work recently published a report finding that advanced tools reduce workers’ “quality of life,” causing disempowerment and workplace anxiety. While these concerns are echoed in polling data, the study is notable for highlighting how difficult clear analysis is in this space: it treats AI as part of a broader grouping of “advanced technology” that also includes wearables. Given its general-purpose nature, AI can be implicitly or explicitly part of any digital experience, making direct ties challenging.
AI May Disproportionately Help Lower Performers, Challenge High Performers
In a positive contrast to the two studies above, a cross-functional paper analyzes the impact of AI usage on a group of Boston Consulting Group consultants. It found that after using ChatGPT, the lowest performers in a workplace improved the quality of their work by 43%, versus a 17% increase for the top performers. From a psychological perspective, the same performance gain may affect people differently: lower performers may feel more confident, while top performers may become more anxious.
Using AI effectively often requires a particular way of thinking. The study notes that the autonomous, non-human nature of these tools can especially challenge high-performing employees, who tend to have an established approach and a more internalized locus of control. Adoption of AI may therefore be contingent on a team’s starting productivity and quality of work.
AI Will Continue to Impact Psychology
As in other industries and fields, AI will affect not just what psychology studies but the practice of psychology itself. AI chatbots can expand access to therapy, but they risk removing the human element. With increasing consumer access to software-based solutions, American Psychological Association leadership and industry groups are pushing to partner and experiment with new types of therapy. Yet bias in AI and privacy concerns remain significant hurdles.
This, in turn, is leading some researchers to look beyond chatbots toward more general-purpose AI tools, such as natural language transcription to accelerate note-taking and analysis. The opportunity is to look past the latest LLM release and embrace the growing AI ecosystem, including for traditionally non-digitized teams.
AI is a Cognitively Complex General-Purpose Solution
The transition from gas lamps to electric bulbs in the early 20th century was a visible change; today’s AI revolution is subtler and more cognitively complex. Neurasthenia may not be re-emerging in the 21st century (though its prescription sounds like the modern equivalent of healthy eating and a digital detox), and AI does offer significant potential to empower individuals, improve work performance and provide therapy bots on demand. These benefits, however, are tempered by psychological concerns that AI may disempower individuals, consolidate bias and compromise privacy.