More than two years after ChatGPT’s release, organizations are still struggling to safely and effectively implement generative AI. Gartner predicts that roughly 30% of AI projects will be abandoned after proof of concept due to poor data quality, inadequate risk controls, escalating costs or unclear business value.
Early AI adopters are well aware of these challenges. AI, while powerful, is not magical or instantly transformative. To use it effectively, organizations need to do more to prepare both their data and their people for the technology. That preparation includes stronger information management, AI enablement and community knowledge sharing at every step of the process.
Without Enablement and Analytics, AI Falls Short
Before you can use a new technology effectively, you have to understand how to operate it safely and productively. But research shows that most companies lack AI training programs.
Fewer than half of organizations currently offer their employees dedicated AI training, according to AvePoint's 2024 AI and Information Management Report. This presents an obvious problem for organizations looking to adopt AI: generative AI tools like chatbots are generally intuitive, but they only go so far when employees haven't been taught how to use them. Lack of training can harm productivity and contribute to AI disillusionment, which is on the rise. A global study from the Upwork Research Institute found that 77% of employees say AI has increased their workloads and hampered their productivity.
Without proper training, AI can easily do more harm than good by creating additional work and driving down morale.
But AI enablement goes far beyond training programs. Top-down employee education is only the start. AI enablement should also include community initiatives like employee working groups, which can drive organic adoption and peer-to-peer knowledge sharing. Managers should also encourage AI use and champion top users, which can help alleviate AI skepticism or fatigue. Software that measures and benchmarks user activity can help here, and it will become increasingly critical as leaders look to grow AI usage and demonstrate measurable ROI, as the sketch below illustrates. With the right training and socialization, AI can deliver on its promise; without them, AI-adopting organizations may never see the ROI and transformative impact they've been told to expect.
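To make the benchmarking idea concrete, here is a minimal sketch of the kind of adoption metric such software might compute from a usage log. The log format, the adoption_metrics function and the figures in it are hypothetical illustrations, not a reference to any particular product.

```python
from collections import Counter
from datetime import date

# Hypothetical usage log: one (user, day) record each time an employee
# runs a prompt through a sanctioned AI tool.
usage_log = [
    ("alice", date(2025, 3, 3)),
    ("alice", date(2025, 3, 3)),
    ("bob", date(2025, 3, 3)),
    ("alice", date(2025, 3, 4)),
]

def adoption_metrics(log, headcount):
    """Return simple adoption figures a leader could track month over month."""
    prompts_per_user = Counter(user for user, _ in log)
    active_users = len(prompts_per_user)
    return {
        "active_users": active_users,                  # how many people use AI at all
        "adoption_rate": active_users / headcount,     # share of the workforce
        "top_users": prompts_per_user.most_common(3),  # candidates to champion
    }

print(adoption_metrics(usage_log, headcount=10))
# {'active_users': 2, 'adoption_rate': 0.2, 'top_users': [('alice', 3), ('bob', 1)]}
```

Even a crude adoption rate like this gives managers a baseline: a way to spot and champion top users, and to identify teams that need more enablement before expecting ROI from them.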
I recently spoke about this topic with Daniel Larsson, senior architect at Cristie Nordic, who specializes in AI education. "The purpose of [generative AI] is to keep the conversation going," he said. "It's supposed to lead you on, ask questions and interact [with you]. Once you understand what it's built for, it takes away the issue with hallucinations." He said companies can overcome AI resistance and help their employees use the technology to greater effect, without negative side effects, when they demystify AI and clearly outline what it can and can’t do.
We Also Need Better Data and Stronger Information Management
AI adopters also need to pay attention to the quality of the data that powers their technology, as well as the policies and infrastructure that protect sensitive user, citizen and customer information. AI needs relevant, well-organized data to work well; this is the core of the old "garbage in, garbage out" principle. Organizations therefore need to ensure their data is secured and optimized for AI. Without an information management strategy that dictates how and when generative AI tools handle data, organizations that work with sensitive information expose themselves to enormous risk.
The right software helps IT leaders with information management, but cyber resilience and data security have a huge human component. It’s not enough to have software that secures and optimizes your data for AI; you also need policies that guide employee actions around AI. Without rules to regulate this activity, employees could unintentionally overshare and expose sensitive data.
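As a rough illustration of what an automated guardrail for such a policy could look like, here is a minimal sketch of a pre-prompt redaction check. The patterns, the redact_prompt function and the example prompt are all hypothetical; a real deployment would lean on a dedicated data loss prevention tool rather than a couple of regular expressions.

```python
import re

# Hypothetical patterns an acceptable use policy might flag before a
# prompt is sent to an external generative AI service.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace policy-flagged strings with placeholders and report what was found."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt, findings

safe_prompt, flags = redact_prompt(
    "Summarize this complaint from jane.doe@example.com, SSN 123-45-6789."
)
print(safe_prompt)  # placeholders replace the raw identifiers
print(flags)        # ['email', 'us_ssn'] -> could trigger a warning or a training nudge
```

The point is less the code than the workflow: the policy decides what counts as sensitive, software enforces it before a prompt ever leaves the organization, and the findings feed back into employee training.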
Employee error causes 88% of data breaches, according to Stanford researchers, and the risk grows when AI is used without guardrails. Yet only 53% of organizations that use AI say they have an acceptable use policy in place.
Speed bumps are inevitable as AI adoption expands. When (and before) challenges arise, leaders need to focus on the human side of AI with user education, strong policies and support for community knowledge sharing throughout their organizations.