"Learn AI or become a dinosaur within three years,” investor Mark Cuban once famously said.
With the advances we’ve seen so far this year, I’d argue that’s closer to one year. And business leaders are taking note, with the call to upskill and reskill people for the AI era coming from the very top.
Boards want to see progress in AI proficiency, AI software features and AI use cases. This is driving demand for AI skills, with AI jumping from the sixth most scarce technology skill in 2024 to number one in 2025 (the steepest rise recorded for any technology skill in more than 15 years).
Table of Contents
- Not All AI Training Is Made Equal
- A Shift in Responsibilities and Skills Needed
- Experimentation (and Failure) Are Vital
- Safe Training Environments
- Learners Feel Able to Fail
- Stretching Skills Through Continuous Challenge
Not All AI Training Is Made Equal
AI skills initiatives are a top priority across the public and private sectors, from Citi rolling out mandatory AI prompt training to 175,000 employees to more than 60 US organizations signing up to the White House’s Pledge to America’s Youth: Investing in AI Education.
But not all AI training is made equal. In our rush to train people in AI skills as quickly as possible, hoping to keep pace with AI developments, we increase the risk that people learn theory that doesn’t translate into performance when it’s needed. Indeed, one EY survey estimated that companies are losing up to 40% of the potential return on investment in their AI projects due to inadequate training.
A Shift in Responsibilities and Skills Needed
We need to rethink training: not as something people must consume in order to understand and build skills, but as something they practice with and make mistakes in.
AI is rewriting entire workflows and industries, and our abilities and our thinking need to evolve with it. AI shifts the work required of humans from lower-order tasks, such as data cleaning and meeting scheduling, to higher-level cognitive activities like networking and relationship building, team management, coaching and decision making.
AI governance and critically assessing an AI solution’s output will also be increasingly sought-after human skills. AI’s emergence in our workplaces is creating a new skill category known as evolved skills: the ability to achieve a set outcome successfully through iteration and exploration, using a mix of AI and human reasoning.
Related Article: Overwhelmed By AI? How to Make AI Training Practical & Impactful
Experimentation (and Failure) Are Vital
Ultimately, this change in the skills and tasks required of humans makes self-directed experimentation a more important part of your training strategy.
By 2030, up to 30% of current working hours are expected to be automated using AI solutions. To get there, workers need to know which tasks AI can take over effectively, how to assess its outputs and where its limitations lie. Without clear oversight, we end up with hallucinations introduced into court papers or customers owed refunds because of AI-generated errors (see the Air Canada chatbot debacle or Deloitte's AI report fiasco).
Such mistakes are only acceptable in a safe (not live) environment. We will make mistakes; they’re an integral part of the learning process (in fact, failing around 16% of the time leads to better proficiency). But we shouldn’t be experimenting with sensitive data, on live systems or on real deliverables.
Safe Training Environments
Instead, leaders can offer non-live environments such as sandboxes and virtual IT labs, where people can try out AI tools without risking operations or data.
Similarly, pilot projects and proofs of concept can give an indication of an AI solution’s potential in an organization, but they are resource-intensive to implement. With 95% of enterprise AI pilots currently failing to deliver value, running several at once is a costly and time-consuming way to test which AI tools will work and which won’t. Run too many, and your board and your employees will be disheartened by future AI rollouts.
Before going ahead with a pilot, use hands-on environments like labs to test an AI feature in a near-real-life mirror of your live system. A lab can be set up with the same processes, compliance needs, user permissions and tech stack that your people will use in the real world.

This is useful for the AI solution’s product and technical enablement teams too: setting up their software within an enterprise’s unique workflows shows them early on whether there are incompatibilities to work around. It also helps with customer education, because if a user of the software repeatedly gets stuck on a certain step, that could indicate that further training is required or that the product needs improvement.
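To make this concrete, here is a minimal, hypothetical sketch (in Python) of what such a lab definition might look like. The `LabEnvironment` structure and every field in it are illustrative assumptions, not any real product’s API.

```python
from dataclasses import dataclass

# Hypothetical illustration only: a lab definition that mirrors the live
# environment's structure while substituting safe, synthetic inputs.
@dataclass
class LabEnvironment:
    name: str
    tech_stack: list[str]             # the same stack the team uses live
    user_roles: dict[str, list[str]]  # the same permission model
    compliance_checks: list[str]      # the same policies outputs must pass
    data_source: str = "synthetic"    # never production data in the lab

prod_mirror = LabEnvironment(
    name="claims-processing-lab",
    tech_stack=["crm", "document-store", "llm-assistant"],
    user_roles={"agent": ["read", "draft"], "reviewer": ["read", "approve"]},
    compliance_checks=["pii-masking", "audit-logging"],
)

assert prod_mirror.data_source != "production"  # the one hard rule
```

The point of the sketch is the shape: the lab copies the live system’s roles, policies and stack, and the only thing it refuses to copy is the data.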
Learners Feel Able to Fail
With the risk to data and continuity out of the picture, learners feel psychologically safe to test the limits of an AI tool. Failure, in this context, becomes a growth and innovation opportunity.
If the AI produces an incorrect result, the learner can go back and try again. They may get feedback from the lab on where they went wrong. They can spin up another instance with different settings and see if that changes the result. They can try different prompts or input data.
In other words, they are free to keep tweaking their work alongside the AI until they understand how best to reach the result they want. This kind of experimentation and iteration can’t be taught through theory alone.
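A minimal sketch of that loop, with stubbed-out functions standing in for whatever AI tool and feedback mechanism a lab actually exposes (every name and value here is a hypothetical placeholder):

```python
import random

def run_ai_task(prompt: str, temperature: float) -> str:
    """Stub standing in for the AI tool inside the lab."""
    return random.choice(["correct summary", "hallucinated detail"])

def output_is_valid(output: str) -> bool:
    """Stub standing in for the lab's feedback/assessment step."""
    return output == "correct summary"

# The learner's loop: try, check, tweak, try again. A failure here is
# feedback, not an incident, because nothing live is at risk.
prompt = "Summarize this claim in two sentences."
temperature = 0.9
for attempt in range(1, 6):
    output = run_ai_task(prompt, temperature)
    if output_is_valid(output):
        print(f"Attempt {attempt}: success at temperature={temperature:.1f}")
        break
    print(f"Attempt {attempt}: failed, adjusting settings and retrying")
    temperature = max(0.1, temperature - 0.2)          # dial down randomness
    prompt += " Use only facts stated in the source."  # tighten the prompt
```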
Related Article: AI Skills Training: Strategies for Technical Teams vs. End-Users
Stretching Skills Through Continuous Challenge
Moreover, labs are now becoming advanced enough to adapt to each learner.
One setup could be generated (by an AI model) for a basic user and another for a more advanced one. As learners work through steps and are assessed in the lab, the instructions and challenges that follow can adapt to their demonstrated skill level. If they want more of a challenge, they can enter a prompt to make the next version of the lab harder. If they want to try something new, they can prompt an AI lab generator to create scenarios and tasks that stretch their skills and force them to apply those skills in novel ways.
This is where the future of lab development itself is headed, somewhat symbiotically: hands-on experiences that become increasingly adaptive.
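As a rough illustration of how that adaptive logic might work, here is a hypothetical difficulty adjuster that keeps a learner near the roughly 16% failure rate mentioned earlier. The scale and thresholds are assumptions made for the sketch, not a documented algorithm.

```python
def next_difficulty(current: int, recent_results: list[bool]) -> int:
    """Nudge lab difficulty (an arbitrary 1-10 scale) toward the zone
    where the learner fails roughly 16% of the time. The thresholds
    below are illustrative assumptions, not a documented algorithm."""
    success_rate = sum(recent_results) / len(recent_results)
    if success_rate > 0.90:   # too easy: the learner isn't being stretched
        return min(10, current + 1)
    if success_rate < 0.70:   # too hard: frustration outweighs learning
        return max(1, current - 1)
    return current            # productive-struggle zone: stay put

# A learner who passed all of their last 10 challenges gets a harder lab.
print(next_difficulty(4, [True] * 10))  # -> 5
```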
If AI is rewriting how we work, then hands-on learning must rewrite how we grow. Real progress happens when people can explore AI safely, test boldly and adapt continuously. And that requires training that adapts with them and provides them the opportunity to learn, make mistakes, iterate and experiment. The future won’t belong to the most trained teams, but to the most practiced ones.