It’s official: Generative AI has become a “thing.” Everywhere you look, the media, academia and panels of experts — even occasional users — all have a take on the technology. And while those perspectives are loud and widespread, we can draw lessons from how new technologies have gained traction in the past to filter out the noise and assess whether the interest is likely to stick.
Marketing buzz tends to evolve rather than jump from topic to topic. In 2019, the buzz was about AI; today it’s about generative AI. Many of the claims made four years ago are still being made, just aimed at new areas: AI improves accuracy, provides statistics and gives HR more time for “other tasks.”
But what does that even mean?
A Cluttered Landscape
Recently, Workday released a series of AI and ML capabilities designed to help businesses “drive productivity, streamline business processes, empower their people and make better decisions.”
Oyster launched an AI-powered chatbot to answer questions about global hiring and remote-work regulations. Skillsoft released a beta “conversation simulator” that will provide employees with “a safe space” to practice important business conversations using an AI trainer.
Meanwhile, SAP released a new AI-based assistant called Joule to wide media coverage, although TechTarget pointed out that few users have actually gotten their hands on the product. It’s also not clear what’s different about Joule compared to copilots from companies like Microsoft or Salesforce. Some believe Joule is more about SAP positioning itself as an industry leader than offering a true AI solution.
“This technology is not available now, and the LLMs that it will be built on are still in development,” Joshua Greenbaum, principal at Enterprise Applications Consulting in Berkeley, Calif., told TechTarget. “The fact that SAP has 20,000 customers who have agreed to give some version of their enterprise data to build the models is muddied by the fact that we don’t know who these companies are, what their business processes are or what corners of the global economy they represent.”
Inevitability Is Everything
Such sentiments won't slow AI's growth.
More than 60% of HR leaders are participating in enterprise-wide discussions about leveraging generative AI, according to Gartner. Fifty-eight percent are collaborating on AI with IT leaders, and 45% are working with legal and compliance functions to explore potential use cases.
Signals are mixed about how excited employees are about these possibilities — and about whether their excitement matches their organization’s enthusiasm for the technology’s potential.
For one thing, workers don’t want the technology to take over their learning and development activities, according to a Wiley survey. A growing number of organizations are exploring AI to optimize their training and career development programs, yet more than half of the study’s respondents — 59% — said they prefer having a human instructor in charge of workforce development, compared with a mere 7% who said AI would do a better job.
And although the majority of respondents told Wiley they want L&D content to be developed by subject matter experts, the Paychex 2023 Pulse of HR Report predicts a continued reliance on technology to enhance upskilling and reskilling initiatives.
It probably doesn’t help that corporate leaders are enamored with AI for what workers might consider the wrong reasons. Notably, Gallup found that 72% of Fortune 500 CHROs see AI replacing jobs within the next three years.
Roadmap Obstacles
Bear in mind, the world of generative AI can be a place of unintended consequences.
Researchers from Stanford University and the University of California, Berkeley, recently found that ChatGPT’s performance on basic math operations has declined significantly since the application was launched. They attribute the decline to “drift,” which occurs when attempts to improve one part of an AI model adversely impact the performance of other parts.
It's not the first time advanced technology has traveled in unanticipated directions.
In 2018, news broke that Amazon had built a machine-learning application for talent acquisition that didn’t like women. Designed to help recruiters review resumes, the program compared applicants against patterns found in CVs submitted over a 10-year period. Because so many more men than women populated the tech workforce during that time, the system inevitably taught itself that male candidates were stronger than their female counterparts. Amazon abandoned the effort.
More recently, two attorneys were sanctioned after they used ChatGPT to find judicial opinions relevant to a case they were arguing. When following up, the judge and opposing counsel could find no trace of the citations, and for good reason: ChatGPT had fabricated them while generating answers to the attorneys’ queries. The attorneys, the judge wrote, “abandoned their responsibilities” by submitting judicial opinions that didn’t exist.
“Changing [an AI model] in one direction can worsen it in other directions,” Stanford Professor James Zou told The Wall Street Journal. “It makes it very challenging to consistently improve.”
That means, Zou added, that AI systems need to be monitored very closely.
But none of this should be surprising. Generative AI is a new technology that was released to the public early in its lifecycle.
For leaders, and particularly CHROs, the question is whether they can look at experiences like Amazon’s and anticipate what traps might be lurking for their own organization. Amazon’s efforts weren’t ill-conceived; they were just early. Like generative AI. Which means this is a time when it’s especially useful to ask “what if?”