An employee looks at a computer monitor at a home office.
Feature

Report Shows How Consumers Are Using Generative AI

3 minute read
Chris Sheehan avatar
By
SAVED
Where can GenAI improve?

Generative AI is not new, despite what you might be seeing. Researchers have been studying the concept of artificial intelligence and machine learning (ML) since at least the 1950s and 1960s. It wasn’t until OpenAI's introduction of ChatGPT and the demonstration of what the technology could do that GenAI moved from the realm of sci-fi and tech into public consciousness. Put simply, the technology exploded last year, and it has already had a major impact on multiple industries. According to PwC's "Global CEO Survey," 70% said they expected GenAI to “significantly change the way their company creates, delivers and captures value in the next three years.”

But what about consumers, the actual users of GenAI tools? The "Generative AI Survey" by our team surveyed more than 6,300 global consumers, software developers and digital quality testing professionals on the topic found that user satisfaction with the technology is increasing but persistent challenges around bias and privacy remain.

GenAI Growth Challenges: Inaccurate Responses, UX Flaws and Harmful Output

Over the past six months, the technology has gained traction. Ninety-one percent of our survey respondents have used GenAI tools to conduct research, and 33% use it for research daily, compared to 23% who were actively using a GenAI service for research last September. At the same time, 50% of respondents have experienced responses or content they consider biased, up from 47% last year. Still, 75% of those respondents felt that chatbots are getting better at managing toxic or inaccurate responses.

Only 19% of respondents said GenAI tools understand their questions and provide helpful answers every time they use them, with that number dropping to 9% in Europe. While there’s clearly room for improvement with the user experience (UX) (consumers want to see better source attribution, more localized responses and support for more languages, for example), it represents a significant jump from the 7% of respondents that said the responses they received from GenAI chatbots were always relevant and appropriate last year. The most common UX issues that GenAI users encounter are general answers that don’t provide enough detail, misunderstood prompts and convincing but slightly incorrect answers, with only 10% citing obviously wrong answers.

A large majority of users (89%) are concerned about the quality of the data being used to train GenAI systems, including 11% who said they would never provide private information. This represents only a slight drop from last year, where 91% of users expressed concern about the quality of data, and is clearly another area for improvement.

Changing Consumer GenAI Use Cases

PwC’s study “The Path to Generative AI Value: Setting the Flywheel in Motion” believes that “only about 15% of the potential GenAI value rests with the ... patterns for which early GenAI services have become known” and that “across industries, the top five GenAI use cases can create 50% to 80% of the overall value derived from the technology.”

Consumers have changed how they use the technology since it first came onto the scene. Originally, users were interested in how GenAI may further creative pursuits but are now using it for more work-related tasks. In our study last year, 78% of consumers said they were using GenAI, compared to 91% this year. In addition, 81% of respondents have replaced search engines with chatbots for queries and nearly one-third (32%) use chatbots for search daily. Additional use cases include language translation, creative writing and drafting emails, proposals and similar business communications. For software developers and testers, chatbots are helping to write and debug code, build test cases and reporting.

What Comes Next

AI is changing fast, and the appetite to see what it can do is also expanding. User feedback will be critical in refining not only the bugs and glitches in AI applications, but also how the application responds appropriately and relevantly to end users. This will require continuous fine-tuning and understanding the nuances of training large language models (LLMs). Red teaming, a common adversarial approach in cybersecurity, can also be applied to GenAI and used to test multiple points of failure. Looking at bias specifically, a diverse group of testers could use red teaming to identify where the biased and unsafe material is coming from — and then use the information to re-train models or develop guardrails.

Building the right testing team and assessing user interaction, feedback and functional bugs as well as considering accessibility and inclusivity will help organizations continue to build quality applications and services that meet consumers’ current and future needs.

About the Author
Chris Sheehan

As Applause's SVP and GM of strategic accounts, Chris Sheehan enables the success of Applause’s strategic account business, including strategy, sales and operations to ensure continued growth and customer success of its largest customers. Since joining Applause in 2015, Sheehan has held roles on multiple teams, including software delivery, product strategy and customer success. Connect with Chris Sheehan:

Main image: By Annie Spratt.
Featured Research