I recently had to share a short biography with a conference organizer. There are few things more mortifying for a Brit than writing about yourself in the third person, so I thought I’d see if ChatGPT could do a better job.
Structurally it wasn’t bad at all. It gave a good overview of my work, my experience and my interests. Until I got to the third paragraph.
“Sharon is author of two books: ‘The Digital Workplace: A Practical Guide’ and ‘The Future of Work: A Guide for Leaders.’”
Impressive, right? There’s just one problem: I’m not the author. In fact, no one is. These books don’t exist.
How had these been attributed to me?
How ChatGPT Works
ChatGPT works by using a deep neural network called a transformer. It is pre-trained on a large corpus of text data from the internet, allowing it to learn patterns, language structures and some background knowledge. During pre-training, the model learns to predict the next word in a sentence based on the previous context, which helps it understand and generate coherent responses.
Once the pre-training is complete, the model is fine-tuned on specific tasks or datasets to make it more effective and reliable. Fine-tuning involves training the model on a narrower dataset that is carefully generated with human reviewers following guidelines provided by OpenAI. These reviewers help curate and rate possible model outputs to ensure high-quality responses.
When you interact with ChatGPT, you input a message or a prompt. The model takes that input and generates a response based on the context provided. It uses the patterns and knowledge it has learned during pre-training and fine-tuning to generate a coherent and relevant response to your query.
But coherent and relevant are not the same thing as correct. The model generates sentences that are statistically plausible given what it has learned, not sentences it has verified to be true.
That is, based on the data it learned about me, ChatGPT surmised that "The Digital Workplace: A Practical Guide" was a book I might conceivably have written.
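You can see how plausible-but-false text emerges from next-word prediction with a deliberately crude illustration. This is not how GPT actually works (transformers use learned neural representations, not word counts), but a toy bigram model over a tiny made-up corpus shows the same failure mode: it stitches together the most statistically likely next words, and in doing so invents an authorship claim the corpus never makes:

```python
from collections import Counter, defaultdict

# A tiny, entirely made-up corpus standing in for web-scale training text.
corpus = (
    "sharon is a consultant . "
    "sharon writes about the digital workplace . "
    "alex is the author of a book . "
    "jo is the author of a guide for leaders . "
    "sam is the author of a guide for managers ."
).split()

# Count which word follows which: a bigram model, the simplest possible
# version of "predict the next word from the previous context".
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the word most often seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

# Greedily generate a continuation, one most-likely word at a time.
sentence = ["sharon"]
for _ in range(9):
    sentence.append(next_word(sentence[-1]))
print(" ".join(sentence))
# → sharon is the author of a guide for leaders .
```

Nothing in the corpus says Sharon wrote anything, yet "is" is most often followed by "the author of," so the model confidently asserts it. Scale the same principle up by many orders of magnitude and you get fluent, well-structured hallucinations.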
Related Article: Generative AI Results Should Come With a Warning Label
What ChatGPT Doesn’t Know
The problem is, ChatGPT can’t independently validate if its responses or assumptions are correct. It doesn’t know that I simply don’t have the patience to write a whole book.
That’s just a small example of why — while ChatGPT can help — you'll still need critical thinking and information verification from reliable sources.
Infinite Interns
ChatGPT and its Google competitor, Bard, are like having infinite interns. Undoubtedly useful for legwork, first drafts and ideas, but the quality isn’t anywhere near good enough to put in front of a client.
Just as with your office interns, ChatGPT needs guidance and supervision. Generative AI models can sometimes produce unpredictable or difficult-to-control outputs. In cases where the generated content needs to adhere to specific guidelines or comply with regulations, this lack of control can be problematic.
While the “it was the intern” excuse occurs so frequently it’s become its own meme, it’s impossible — not to mention embarrassing — to blame AI for generated output, especially if it produces inappropriate, misleading or harmful content.
Like your interns, ChatGPT doesn’t have as sharp an appreciation of data security as your more experienced staff. Content pasted into these tools (for example to generate a summary) creates new privacy and security risks, as sensitive or confidential information may be unintentionally incorporated into training data or leaked through generated outputs.
Safeguarding data becomes crucial to prevent breaches or misuse.
Related Article: Are You Giving Employees Guidelines on Generative AI Use? You Should Be
Building an AI-Employee Alliance
While the risks are real, employers should resist the temptation to lock tools down. Instead, look for ways to bring these powerful capabilities into the digital workplace in a secure manner.
That’s partly about having the right tools. Paid-for versions reduce the risk of data leakage, while Microsoft’s Copilot will bring this capability safely within your own tenancy.
But for AI to live up to its potential means not just having, but knowing how to use the tools. A recent Microsoft report highlighted how organizations need to develop their employees’ understanding of and confidence with using AI in their work.
That means understanding what the tools can do, and how to use them to get the best results (what Microsoft calls AI Aptitude). But it also means working iteratively, confidently and responsibly alongside, and augmented by, AI — in what it terms the AI-Employee Alliance.
The same report found people were enthusiastic at the prospect of AI rescuing us from burnout or tedium, with limited concern about automation leading to job losses (rightly so, in my view). Employees appear alive to the possibilities, perhaps recognizing how much of the work many of us do today is wasting our time and skills.
A quick experiment with ChatGPT and its ilk for simple tasks like writing a biography is a great way for employees to understand these tools' potential. I'm using ChatGPT regularly for first drafts — including this one — essentially using it to create a structure to kick-start my thinking. After editing for style and fact-checking, my final work looks nothing like the start, but that first draft speeds up the process of getting to the final. Like me, ChatGPT is a long way away from being able to write a book (or at least, a good book).
Running experiments like these will also highlight the potential pitfalls: security risks, bias, intellectual property, plagiarism, trust, credibility. With these tools already mainstream, now’s the time to assess and mitigate your risks. That could be through careful data curation, model transparency and interpretability, guidelines for use, and ongoing evaluation of the AI systems available for employee use.
But most important of all is in building that employee-AI alliance, where individuals are equipped to use these tools safely and responsibly. Help employees understand where AI can help, how to make the most of it, and where — like my book-authoring track record — its capabilities are overstated, and you'll set them up for success.