The Gist
- Ethical AI imperative. Ethical governance in AI is crucial to manage privacy and data use, ensuring human-centric technology evolution.
- AI's transformative role. Generative AI significantly transforms human-brand interactions, necessitating ethical and transparent practices.
- Future collaboration focus. Investing in ethical AI practices is key to a harmonious future between humans and AI technologies.
In an era where the human experience is arguably the scarcest "resource" of exponential value (besides time, of course), generative AI is making a big splash, undoubtedly and irrevocably transforming interactions between humans, brands and businesses. A foundation must be laid so both species, humanity and AI, can come together to increase the probability of a better future.
To be specific, narrow AI use cases have been around for decades; we've all heard Geoffrey Hinton speak. Put simply, the household Web2 brands behind the FAANG stocks, Facebook (Meta), Amazon, Apple, Netflix and Google, all exploited AI for profit.
In this Web2 "era," we humans, specifically our data (of intentions, needs and wants), were essentially the "products." In short, our data was monetized, oftentimes infringing on our privacy.
Make no mistake, AI has been around for decades, seven to eight in fact, before generative AI and large language models (LLMs) made a big splash in 2022. That splash meant the power of AI was, and will be, increasingly democratized: in our hands or pockets, just a prompt or spoken request away. What's less apparent is how much further we've shifted the perennial gray line of data privacy, yielding yet more data to train these new LLMs such as OpenAI's ChatGPT, Google's Bard and Meta's Llama 2.
From chatbots that offer 24/7 customer support to personalized content recommendations, generative AI has the potential to elevate the human experience to new heights. However, harnessing this transcendental power necessitates stringent ethical practices and governance frameworks to ensure the well-being and privacy of, well, humans (customers, employees, citizens).
It should be common knowledge now that generative AI hallucinates as it strives to provide us the "best" answers possible — sometimes making things up along the way. All the more reason for ethical guidelines and boundaries to guide our thoughts and actions, and to steer a positive growth trajectory for what is poised to become the most disruptive human invention. Period.
We will dig into the importance of ethical AI frameworks and boundaries in human-centric AI applications, supported by examples across various industries, with a focus on applications of generative AI and LLMs; I can't possibly cover the cumulative ethical impact of general AI in this short article.
Ethical AI Example No. 1: Personalized Content Recommendations
Streaming Services
Generative AI algorithms analyze user preferences and viewing history to suggest personalized content. While this definitely enhances the human experience, it also raises concerns about privacy. Ethical governance must dictate how much data is collected, ensuring that customer data is not exploited or shared without consent. There is much debate among experts about the spectrum of data: zero-party, first-party, third-party data, etc. ChatGPT, for example, was trained on text databases from the internet, a whopping 570GB of data drawn from books, web texts, Wikipedia, articles and other writing. Even back in 2022, the results were jaw-dropping, and OpenAI corralled millions of users within weeks. Now we're grappling with synthetic data, i.e., data fabricated by AI itself for training purposes.
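To make the governance point concrete, here is a minimal sketch of a consent-gated recommender. All names (`User`, `recommend`, the catalog) are illustrative, not any streaming platform's real API; the ethical boundary is simply that no profiling happens without an explicit opt-in.

```python
# Minimal sketch: consent-gated content recommendations.
# The data model and scoring are illustrative assumptions, not a real service.
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    consented: bool                      # explicit opt-in for data use
    watch_history: list = field(default_factory=list)

CATALOG = {
    "space-doc": {"science", "space"},
    "cooking-show": {"food"},
    "mars-drama": {"space", "drama"},
}

def recommend(user: User, top_n: int = 1) -> list:
    """Rank unseen titles by tag overlap with the user's history,
    but only if the user has consented to profiling."""
    if not user.consented:
        return []                        # ethical boundary: no consent, no profiling
    seen_tags = set()
    for title in user.watch_history:
        seen_tags |= CATALOG[title]
    scores = {
        title: len(tags & seen_tags)
        for title, tags in CATALOG.items()
        if title not in user.watch_history
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

alice = User("alice", consented=True, watch_history=["space-doc"])
bob = User("bob", consented=False, watch_history=["space-doc"])
print(recommend(alice))   # tag overlap favors "mars-drama"
print(recommend(bob))     # [] because bob never opted in
```

The design choice worth noting is that consent is checked before any history is read, so the "how much data is collected" question is answered in code, not in a policy document alone.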
Related Article: Ethical AI Principles: Balancing AI Risk and Reward for Brands & Customers
Ethical AI Example No. 2: Virtual Assistants and Chatbots
Ecommerce
Many ecommerce platforms use chatbots powered by generative AI to assist customers. The ethical boundary here lies in transparency: customers must be made aware that they are interacting with a chatbot, not a human. Ethical governance ensures that companies are clear about the nature of these interactions. Ecommerce giants like Amazon and Shopify use generative AI (e.g., Shopify Magic) to help sellers write product descriptions. The risk here is that fake content, images or even reviews can cause reputational and financial damage, not to mention degrade the human experience.
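The transparency boundary can be enforced mechanically. Below is a minimal sketch, assuming a hypothetical `SupportBot` class (not any vendor's real API), in which the very first reply always carries a disclosure that the customer is talking to software.

```python
# Minimal sketch: a support bot that discloses its nature up front.
# The class name and messages are illustrative assumptions.
class SupportBot:
    DISCLOSURE = "You are chatting with an automated assistant, not a human."

    def __init__(self):
        self.disclosed = False

    def reply(self, message: str) -> str:
        # Transparency boundary: the first reply always leads with the disclosure.
        if not self.disclosed:
            self.disclosed = True
            return f"{self.DISCLOSURE} How can I help?"
        return f"Here is what I found for: {message!r}"

bot = SupportBot()
print(bot.reply("Where is my order?"))  # first reply includes the disclosure
print(bot.reply("Where is my order?"))  # later replies answer normally
```

Baking the disclosure into the reply path, rather than a settings page, means the governance rule cannot be silently skipped.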
Related Article: Generative AI: Exploring Ethics, Copyright and Regulation
Ethical AI Example No. 3: Content Generation
Marketing and Advertising
Generative AI can automate content creation for marketing campaigns, optimizing efficiency and consistency. However, ethical boundaries are crucial in preventing the dissemination of false or misleading information. Governance frameworks ensure that AI-generated content aligns with ethical advertising standards. There have been several cases of generative AI infringing copyrights, e.g., Getty Images suing the creators of the AI art tool Stable Diffusion for scraping its content.
Ethical AI Example No. 4: Healthcare and Telemedicine
Healthcare Apps
AI-driven virtual healthcare assistants are on the rise, providing medical information and even diagnoses. Ethical practices demand that generative AI in healthcare adhere to strict regulations to safeguard patient privacy, maintain accurate diagnoses and avoid medical misinformation. There is growing concern that generative AI is stoking medical malpractice concerns in unexpected ways. We need to be especially cautious about using generative AI with private or confidential information; entering patient-specific details could violate HIPAA (the Health Insurance Portability and Accountability Act) and lead to legal trouble. In the near future, doctors will likely spend more time on empathy, such as breaking bad news, counseling and supporting the mental health of family members, leaving more and more of medical work to robots and AI.
Ethical AI Example No. 5: Social Media Engagement
Social Media Platforms, E-Books, Email
Generative AI is employed to enhance social media content, such as personalized news feeds and recommendation algorithms. Ethical governance frameworks should monitor and control content curation to avoid promoting divisive or harmful content, thus preserving a positive online environment.
Generative AI is also being put to illicit use creating e-books sold on Amazon, or generating fake political ads. And now, spam and scams are out of control thanks to AI’s ability to target us more easily.
Spam is unsolicited commercial email aimed at nudging us toward buying something, clicking on links, installing malware or changing our views. One email blast can make $1,000 in a matter of hours. Advances in AI mean spammers can replace traditional hit-or-miss approaches with more targeted and persuasive messages, thanks in part to AI's easy access to social media posts.
A recent report from Europol projects that as much as 90% of internet content could be AI-generated within a few years. Misinformation erodes trust, which underpins an accretive, positive human experience.
Ethical AI Example No. 6: Financial Services
Banking and Finance
AI-powered chatbots and virtual assistants are becoming increasingly prevalent in the financial sector. Ethical practices require clear data protection measures and the prevention of unfair or discriminatory practices, and governance frameworks can ensure AI applications adhere to these guidelines. Key concerns include embedded bias, privacy shortcomings, opaqueness about how outcomes are generated, robustness issues, cybersecurity and AI's impact on broader financial stability.
An important challenge for AI systems is embedded bias, particularly in a highly regulated and sensitive sector like financial services. Embedded bias occurs when computer systems systematically and unfairly discriminate against certain individuals or groups in favor of others. In a financial sector increasingly dependent on AI-supported decisions, embedded bias could lead to, among other things, unethical practices, financial exclusion and damaged public trust.
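One way governance teams operationalize a bias check is a demographic parity test: compare approval rates across groups and flag the model when the gap exceeds a policy threshold. The sketch below is a deliberately simplified illustration; the group labels, loan data and 0.2 threshold are invented for the example, and real fairness reviews use richer metrics than a single rate gap.

```python
# Minimal sketch: flagging embedded bias with a demographic parity check.
# All data and the threshold are illustrative assumptions.
def approval_rate(decisions):
    """Share of positive (1) decisions in a list of 0/1 loan outcomes."""
    return sum(decisions) / len(decisions)

def parity_gap(decisions_by_group):
    """Largest difference in approval rates between any two groups.
    decisions_by_group maps a group label to a list of 0/1 decisions."""
    rates = [approval_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

loans = {
    "group_a": [1, 1, 1, 0],   # 75% approved
    "group_b": [1, 0, 0, 0],   # 25% approved
}
gap = parity_gap(loans)
print(f"parity gap: {gap:.2f}")
if gap > 0.2:                  # illustrative governance threshold
    print("flag model for fairness review")
```

A check like this does not prove a model is fair, but it gives a governance framework a measurable trigger for human review instead of relying on anecdote.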
Ensuring Ethical AI: Balancing Technological Advancement With Ethical Governance and Customer Trust
In summary, generative AI holds great promise for enhancing the human experience across industries, but its implementation must be guided by ethical practices and governance frameworks. As these examples show, issues of privacy, transparency, accuracy and fairness must be carefully addressed. Failure to establish these ethical boundaries can harm customers, from privacy breaches to misinformation.
Businesses that wish to leverage generative AI for better human experiences must invest not only in the technology but also in ethical practices and governance. Start today! We're already "too late," as OpenAI's Sam Altman implies in his pleas to governments and regulators to lean in and take charge of what's certain to be the most transformative invention humankind has seen.
This approach ensures that the power of generative AI is harnessed responsibly, delivering a seamless, customer-centric experience while upholding the highest ethical standards. Such a "code of ethics" also vastly increases the probability that both species, humanity and AI, can create a better future together.