Feature

Generative AI in the Workplace Is Inevitable. Planning for It Should Be Too

By David Barry
Generative AI's march into the digital workplace is unlikely to slow. A little prep work can go a long way towards a successful deployment.

Adobe research confirms what is becoming increasingly clear: organizations are quickly adopting generative AI. The belief that generative AI will increase worker productivity is the biggest driver behind the push.

The survey of 6,049 digital workers across five countries (the US, UK, Australia, India and Japan) found that four out of five business leaders say workers use generative AI often, and 39% say they will likely soon use it every day.

On the flip side, 25% of leaders said they were not ready to deploy it because they don't have the right security, privacy and trust guardrails in place, and in many cases they said they didn't know how to use the technology effectively. Even so, two thirds of respondents said AI is already saving them time, and 61% said it helps them work faster.

So is the march to bring generative AI into all corners of the digital workplace inevitable?

Generative AI Is Already Here

Generative AI will not only be a part of every aspect of the digital workplace, but it’s already here for many knowledge worker roles, said Matt Heying, product and technology executive at Mural.

He noted the generative AI integrations vendors like Microsoft, Google, Slack, Zoom and Salesforce have already introduced to their product suites, which bring the software into the routine workflows of most employees.

“While these are some of the biggest tech companies in the world, it is just the tip of the iceberg in respect of the digital workplace. Any enterprise or organization whose employees use software in their daily workflows will likely find AI in their workplace from this point on,” he said.

However, he said it is still early days, and many vendors haven't fully worked out how generative AI will change their core value or functionality.

“I see a lot of vendors merely sprinkling AI 'magic' on the top. It reminds me of the early days of mobile or cloud technologies where everyone just rushed to turn their product into an app,” he said.

Unresolved issues around security, compliance and trust remain, he continued, and organizations should focus now on building guidelines and policies for acceptable use of AI in the digital workplace.

“I think we’ve already accepted that AI is coming for the workplace; the question is to which workflows and to what degree," he said. “Although at first glance, it seems like end-to-end AI is the best option, it’s important to remember that it is a tool above all else. Employees will still be the ones figuring out how to best use it.”

Related Article: Generative AI Writing Job Descriptions: Adult Supervision Required

The Relationship Between Age and Adoption

The speed of uptake, however, is likely dependent on a worker's age, said Maxime Vermeir, senior director of AI Strategy at ABBYY. 

He pointed to research commissioned by ABBYY, which showed younger employees are more likely to explore new tools to do their jobs, with 68% of those under 35 using digital assistants compared to only 27% of respondents over 55.

Among those not yet using digital assistants, the research found that 79% of younger workers believed the assistants would improve their productivity, compared to 66% of respondents over 55.

"One of the most common reasons for intelligent automation projects’ failure is that employees are not being trained well enough,” he said. “Leaders need to focus on upskilling older workers to use AI-enhanced technology and clearly explain their benefits."

Generative AI is a powerful tool, he added, but requires the domain-specific knowledge and context that comes from an enterprise’s data foundation.

Related Article: Generative AI, The Great Productivity Booster?

AI Won't Live Up to the Hype Without Training

ChatGPT is already a household name, and organizations are now among the more than 100 million users the software reached within two months of its initial launch, said Amani Gharib, director of HR research and advisory services at McLean & Company.

AI tools have the potential to optimize workflows in the digital workplace, she continued. However, companies must provide employees with AI-related training and development in order to optimize benefits and reduce risk, as well as address ethical considerations.

“When an organization provides training and development around the appropriate use of AI technology, it builds ethical behaviors that lead to an atmosphere of trust and integrity,” she said. “Additionally, an ethical workforce means that an organization is less likely to suffer damage to its reputation for the misuse of AI."

She recommends using a combination of approaches to meet training needs, including virtual and/or on-site training sessions, cross-functional learning groups, and clearly outlined developmental steps for adapting to the new technology. Topics should include navigating AI ethically, reducing risk, avoiding plagiarism, and making informed and ethical decisions around privacy.

“By offering such training and development opportunities, organizations will be able to promote ethical behavior and provide direction to all employees when faced with AI-related ethical decisions,” she said.

This needs to be done from the get-go, she added. When AI is being introduced into the organization, HR must communicate guidelines around its use in a timely, clear and consistent manner — managing the people side of the implementation.

“By conducting risk assessments and creating policies around AI technology, HR will be able to manage ethical risks and considerations accordingly,” she added. “It is also just as important for organizations to build resilience and manage the disruption of AI through effective change management and transformational leadership.”


Related Article: Who's Responsible for Responsible AI?

Slow Your Roll on Generative AI Deployments

Organizations should walk, not sprint, in their evaluation and adoption of generative AI, said Frank Schneider, VP and AI evangelist at Verint.

While it's a useful solution, it's not ready to be unleashed in raw form; it must be tested and implemented with the appropriate guardrails in place.

Four pillars can act as guideposts when assessing whether or not your organization is ready for generative AI, he said.

1. Security and Compliance

Use of the technology must be governed and managed from the perspective of legal compliance, data privacy and the protection of your business's sensitive proprietary information, Schneider said. This is an ongoing effort, as changes take place in the technological and regulatory landscape.

2. Extensibility

For generative AI to reach its potential, it must be open and extensible so that when a new data source emerges, it can easily ingest that data. In today's world of AI and humans working together, he said, orchestrating customer journeys with interoperability across customer-facing and employee tools is essential.

3. Validation

Validation of any information generative AI delivers is necessary so people can trust its results. AI models must be contextually appropriate, validated to be the right type for the right use case and audience, and within the scope of the desired workflow or conversation flow. Validated data is key to ensuring the best outcomes and the most relevant and valuable insights for the organization and its objectives.

4. Scrutability

Generative AI must be implemented in an environment where its function can be clearly understood and proven with careful investigation. In addition, organizations should opt for some level of human-in-the-loop oversight to ensure the generative AI model is making decisions in a responsible and ethical manner.
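
To make the idea of human-in-the-loop oversight a little more concrete, here is a minimal sketch in Python of one way such a review gate could work. The names (Draft, REVIEW_THRESHOLD, needs_human_review, release) and the confidence threshold are hypothetical illustrations, not anything from Verint or the article; the only point is that generated output is held back until it clears a check or a person approves it.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Draft:
    prompt: str
    answer: str
    confidence: float  # a score between 0 and 1 assigned by whatever evaluation step the organization uses

# Hypothetical threshold: anything scoring below it is routed to a person for review.
REVIEW_THRESHOLD = 0.8

def needs_human_review(draft: Draft) -> bool:
    """Flag drafts that should not be released without a human sign-off."""
    return draft.confidence < REVIEW_THRESHOLD

def release(draft: Draft, human_approves: Callable[[Draft], bool]) -> Optional[str]:
    """Return the answer only if it clears the gate or a reviewer approves it."""
    if not needs_human_review(draft):
        return draft.answer
    if human_approves(draft):   # in practice, a reviewer UI or ticket queue
        return draft.answer
    return None                 # held back; never shown to the end user

# Example usage with a stand-in reviewer that rejects anything uncertain.
if __name__ == "__main__":
    d = Draft("What is our refund policy?", "Refunds are issued within 30 days.", 0.65)
    print(release(d, human_approves=lambda draft: False))  # -> None (held for review)
```

However an organization implements it, the gate gives the "scrutability" Schneider describes a concrete enforcement point: every answer either passes a documented check or carries a human approval.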

“Transparency into how AI learns, what is contained within a model, and how it provides output is key to a successful implementation,” Schneider concluded.

About the Author
David Barry

David is a European-based journalist of 35 years who has spent the last 15 following the development of workplace technologies, from the early days of document management, enterprise content management and content services. Now, with the development of new remote and hybrid work models, he covers the evolution of technologies that enable collaboration, communications and work and has recently spent a great deal of time exploring the far reaches of AI, generative AI and General AI.

Main image: Christian Holzinger | unsplash