Artificial intelligence usage in software isn’t new. It dates back nearly 70 years to the 1956 Dartmouth Summer Research Project on Artificial Intelligence, where the Logic Theorist was presented – a program designed to replicate the problem-solving skills of humans. Over the years the technology has advanced, and AI has, in many ways silently, made its way into many of the digital experiences we consume.
So why all the hype today?
The short answer is that recent advances, particularly in generative AI, have democratized it. When OpenAI released a free preview of ChatGPT in December 2022, it received more than a million sign-ups in the first five days. A million. AI is no longer just a tool for data scientists or developers. Because of tools like ChatGPT, we are quickly approaching a world where anyone can harness the power of AI to solve complex problems, automate tasks and unlock new opportunities.
Today’s AI is capable of bringing advanced personalization, in-depth analytics and strategic insights to your digital experiences. When it is combined with intuitive, elegant interfaces that allow seamless interaction and with the rich, diverse datasets that fuel it, you can deliver next-level digital experiences to your end users.
As you are likely considering how you can leverage AI to improve your digital experience, it is important to make sure you aren’t considering just AI, but ethical AI. You can start by creating an AI strategy for your digital experience that mitigates unconscious bias and that supports the establishment of regulation.
Mitigating Unconscious Bias
According to a global study commissioned by Progress, 66% of organizations experience data bias today and 78% are concerned data bias will become a bigger issue as use increases for AI and machine learning (ML).
Generative AI relies heavily on the quality of its training models for content generation. A careful curation of training data is essential to ensure that it represents the diverse user base and use cases that generative AI will encounter. It is especially critical given AI’s potential to perpetuate unconscious biases, which can negatively impact marginalized groups. Integrating generative AI into your digital experience should serve to promote an inclusive and accessible digital environment by mitigating unconscious bias.
Collecting data from a wide range of sources across demographics, geography and other relevant factors allows you to incorporate diversity into the training data. Because by definition unconscious bias is not something you are consciously aware of, automated tools can help detect and mitigate bias in the training data. Some companies use ML algorithms to identify and remove biased language from text datasets, while others employ statistical techniques to recognize and address disparities in demographic representation.
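As a minimal sketch of the statistical techniques mentioned above, the check below compares each demographic group’s observed share of a training dataset against an expected share and flags groups whose representation deviates beyond a tolerance. The function name, the `region` field and the sample data are all hypothetical, and real bias audits involve far more than raw counts; this only illustrates the idea of detecting representation disparities automatically.

```python
from collections import Counter

def representation_gaps(records, group_key, expected_shares, threshold=0.05):
    """Flag groups whose share of the data deviates from the expected
    share by more than `threshold` (absolute difference in proportion)."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in expected_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > threshold:
            gaps[group] = round(observed - expected, 3)
    return gaps

# Hypothetical sample: "north" is over-represented relative to a 50/50 target.
records = [{"region": "north"}] * 70 + [{"region": "south"}] * 30
print(representation_gaps(records, "region", {"north": 0.5, "south": 0.5}))
# → {'north': 0.2, 'south': -0.2}
```

In practice the expected shares would come from the demographics of your actual user base, and flagged groups would prompt targeted data collection or re-weighting rather than simple deletion.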
Still not convinced? There are real business risks for those who don’t maintain data integrity by removing bias. We have seen it already: financial institutions that wrongly rejected otherwise qualified loan candidates because the AI model had a built-in bias that discriminated by zip code, and HR systems that favored male candidates over equally qualified female candidates. If your digital experience relies on biased AI models, you will not only deliver a poor experience for your users but also risk excluding a significant percentage of your target buyers.
Regulation (or Lack Thereof) and How to Navigate It
As often happens with technology that gains rapid adoption, regulation hasn’t caught up. If a generative AI tool scrapes the web to train its large language model (LLM), who owns the output of the data from the generative AI?
Consider this: If an LLM consumes and shares open source code today, but the maintainer of that code later withdraws consent for the LLM to process it, is the development team that implemented code produced by that generative AI left with a massive problem?
These questions are just a few examples of what government regulation and oversight seek to address.
The good news is that the EU is developing the Artificial Intelligence Act to improve regulations concerning data quality, transparency, human oversight and accountability, addressing ethical concerns and implementation issues. Other countries and government entities will likely follow suit.
In the absence of regulation, it is imperative that you educate yourself on the kind of AI your team will be implementing, understand where regulation might be put in place in the future and ensure your team establishes guardrails to protect both your data and the data your LLM is leveraging.
AI advancements bring huge improvements to the digital experience you can deliver. Mitigating unconscious bias, staying educated on emerging regulation and thinking ahead about what regulation we might need will allow you to deliver experiences that not only meaningfully impact your end users but do so in an ethical way.
Read the full report “Data Bias: The Hidden Risk of AI” at progress.com.