Quantum computing and generative AI are two powerhouse technologies, but the potential that comes from combining them may be even greater.
When integrated with AI, quantum computing's already impressive ability to solve complex problems and process vast amounts of data has the potential to become even faster and more accurate. This is particularly true for machine learning (ML), which focuses on developing algorithms that can learn from data and predict future actions based on that data. The more data the machine has access to, the more accurate the algorithms it creates.
The combination of quantum computing and machine learning could yield algorithms capable of processing and analyzing data far more efficiently than current machine learning products.
Training those algorithms has so far required enormous computational power ... until now. Recent developments suggest that applying quantum mechanics through quantum computing could make training both easier and more effective.
That said, a few hurdles stand in the way before this rose-colored future can come to fruition.
Quantum Supremacy and Generative AI
Generative AI technologies will play a big role in taking quantum computing from its current aspirational state to the end goal of quantum supremacy, where a quantum computer outperforms classical compute methods with an exponential advantage, said Rosemary Francis, chief scientist on the high-performance computing (HPC) team at Altair, which she joined through the company's 2020 acquisition of Ellexus, the firm she founded.
Machine learning itself, Francis said, could be a good candidate for acceleration using quantum computing because the algorithms already tolerate a certain level of uncertainty. It is likely to be the lighter side of data analytics that benefits first, she said, rather than the more complex applications of deep learning.
The implementation of quantum computing involves a lot of statistical analysis, both because quantum computing is probabilistic by nature and because of its currently high error rates. As quantum computing scales, areas of analysis and data processing that can be accelerated with machine learning will emerge.
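To see why statistical analysis enters the picture, consider a minimal, purely classical sketch: it simulates repeated "shots" of a noisy single-qubit measurement and then inverts an assumed symmetric readout-error model to recover the underlying probability. The error model and the numbers here are illustrative assumptions, not figures from any real device.

```python
import random

def measure_noisy_qubit(p_one: float, error_rate: float) -> int:
    """Simulate one shot: the qubit reads |1> with probability p_one,
    but the readout flips the result with probability error_rate."""
    outcome = 1 if random.random() < p_one else 0
    if random.random() < error_rate:  # symmetric readout error
        outcome = 1 - outcome
    return outcome

def estimate_p_one(shots: int, p_one: float, error_rate: float) -> float:
    """Average many noisy shots, then invert the assumed error model.
    With symmetric error e: observed = p*(1-e) + (1-p)*e."""
    raw = sum(measure_noisy_qubit(p_one, error_rate)
              for _ in range(shots)) / shots
    return (raw - error_rate) / (1 - 2 * error_rate)

random.seed(0)
est = estimate_p_one(shots=100_000, p_one=0.7, error_rate=0.05)
print(round(est, 2))  # statistically close to 0.7
```

A single shot tells you almost nothing; only the statistics over many shots, corrected for the error model, yield a usable answer — which is exactly the kind of data processing the quantum workflow hands off to classical analysis.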
“It is popular belief that quantum computing will replace high-performance computing and machine learning, but in fact practical applications are more likely to have a hybrid compute model where HPC and AI are a critical part of the quantum computing workflow,” she said.
Related Article: How Close Are IBM's Quantum Computing Predictions to Reality?
The Impact for Business
The advantages of combining the two technologies, quantum computing and generative AI, are clear.
For business specifically, it would give organizations the horsepower to analyze many different data sets in parallel, enabling them to generate an expected outcome, said Todd Moore, global head of data security products at the France-based Thales Group. In turn, that outcome can be used to help solve complex mathematical puzzles and resolve a multitude of other real-world problems, even predicting the weather more accurately.
“Coupling generative AI with quantum computing will help provide more accurate outcomes," he said. "Since quantum computing can make reliable predictions, ultimately the data set that AI is built upon increases that reliability."
But for both quantum computing and generative AI to become mainstream and successful, there needs to be confidence in the accuracy of the data that is being used to create positive (or negative) outcomes. Moore said the old saying “crap in, crap out” has never been truer.
“Being able to guarantee the integrity of data is a core principle of data security. From the time that data is created and throughout its lifecycle, there needs to be a way to validate and maintain data integrity,” he said.
Data encryption and access controls can be deployed today to ensure that integrity is maintained throughout the data's lifetime.
One of the clear advantages with this combination, he said, is that deploying a quantum-resistant architecture to protect the data at rest, in use and in motion will provide a lifetime of data integrity and data accuracy.
“Quantum computing has the potential to crack legacy encryption algorithms. The need for data security that is crypto-agile has never been more important than in today's world. Making sure that generative AI is working upon reliable data sets will be paramount for future adoption,” he said.
Related Article: How Algorithmic Trust Models Can Help Ensure Data Privacy
The Challenge of Interpretability
AI interpretability usually refers to the ability of an AI system to express a human-understandable decision-making process upon request. And because machine learning is still the main way of doing AI today, interpretability remains challenging.
Hugues Poiget, lead machine learning engineer at Scortex, which was acquired by TRIGO last year, said that's mostly due to the complexity of the algorithms, which comprise millions of parameters and are not explicitly engineered for a given problem; the lack of transparency about the data used to train the model (even very famous models are trained on private data inaccessible to the end user); and the way AI systems learn.
Another factor challenging interpretability is that, despite the community's investment in finding ways to interpret AI decisions, the technology is usually embedded in products before interpretability mechanisms are available.
“When the hype starts, it’s common that detractors highlight the lack of interpretability mechanisms for those very new approaches,” Poiget said. Large language models (LLMs) and their derivatives (e.g., WinCLIP) raise the possibility that interpretability may become easier, because a human can ask the model why it made a given prediction. For instance, he said, try posing a question to ChatGPT and then asking the model why it answered the way it did.
End users of such AI systems should not forget that asking the model to explain its prediction only generates a new prediction, which is unlikely to reveal why the initial prediction was made. Nevertheless, he said, it's important for AI engineers to continue working on engineering interpretability features.
Related Article: Tech Giants Dominate Quantum Computing But It's Still Anybody's Game
Are We There Yet?
There are, of course, caveats.
Matt Wallace, CTO at Faction, said it’s not guaranteed that quantum computing will speed up machine learning. This, he said, is partly because computing the weights and biases for ML involves many tiny computations to evaluate the loss function that trains the algorithm.
“Most speed-ups of quantum involve solving a problem faster. That said, if quantum computing reaches the needed scale, they will be applicable, but it won’t be today,” he said.
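Wallace's point about the training loop can be seen in a toy example: fitting a one-parameter-pair linear model by gradient descent. Every epoch recomputes the loss gradient across every sample, which is the mass of "many tiny computations" he describes. This is a generic illustrative sketch, not anything specific to Faction's work, and the data and learning rate are made up for the example.

```python
def train_linear(xs, ys, lr=0.05, epochs=1000):
    """Fit y = w*x + b by gradient descent on mean squared error.
    Each epoch re-evaluates the gradient over every sample, so
    training cost scales with data size times epoch count."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of mean((w*x + b - y)^2) with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]  # generated from y = 2x + 1
w, b = train_linear(xs, ys)
print(round(w, 2), round(b, 2))  # w near 2, b near 1
```

Scaling this loop from five samples to billions of parameters and tokens is what makes modern ML training so expensive, and it is precisely this iterative, fine-grained structure that does not map cleanly onto today's quantum speed-ups.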
Generative AI is expected to have a considerable impact here, mainly because LLMs are a great tool for learning; they’re strong at explaining, generating and debugging code. But that also means older systems will have to be replaced.
"Having a training set that ended in September 2021 won’t help quantum computing become more practical," Wallace said. "But other tools will be GPT4+ capable, which will refresh understanding newer data."
Because of the speed at which these technologies are evolving, it is very likely that new algorithms will be needed for the use of quantum computing on AI, Wallace said.
“The algorithms we have today are based on ... classic computations. An algorithm that can build a neural network (NN) taking advantage of quantum to provide quadratic speed-ups could mean an ability to generate a much more powerfully accurate and insightful NN," he said, "but this is really speculative at this point.”