Editor's Note: This is Part 4 of our AI Revolution series. Don't miss Part 1, Part 2 and Part 3.
Business leaders do not need to become coders to thrive in the age of AI. David De Cremer, author of "The AI-Savvy Leader," argues the key for leaders is not fluency in Python or mastery of neural networks but a foundational understanding of artificial intelligence. De Cremer believes that leaders should focus their attention on grasping the principles of AI, the mechanics of machine learning and the ethical implications that follow. It’s about seeing the big picture.
That said, AI-savvy leaders should possess basic fluency with statistics and modeling. This isn’t about running regressions or fine-tuning algorithms but about understanding the language of data and how models predict outcomes. Leaders equipped with this foundational knowledge are positioned to ask the right questions, interpret insights accurately, recognize whether the right data is being used and make informed strategic decisions.
They are also positioned to help in AI deployment. Typically, this is where organizations stumble, and this is why AI-savvy leadership can make a real difference. Leaders who understand what it takes to operationalize AI — from data readiness to model training and ethical safeguards — are far more likely to navigate the complexities of AI projects. De Cremer’s message is clear: a leader's job is to know how to wield AI effectively.
What Is AI? A Leader's Guide
Artificial intelligence (AI) has been a focal point of technology discussions since 1959, with pioneers like Marvin Minsky at MIT envisioning machines that could mimic human cognition. My own journey with AI began in the early 1980s, working on systems that were groundbreaking at the time. But what exactly is an AI system today? More importantly, what isn't it?
AI is an umbrella term for technologies designed to automate business processes and create intelligent systems. AI spans a spectrum of innovations, including data science, machine learning (ML), generative AI (GenAI) and the newest frontier — agentic AI. At its core, AI leverages statistics, modeling, machine learning, deep learning, neural networks and data mining to analyze information and make predictions about future or otherwise unknown events. Let’s look at what these terms mean so you can make sense of AI’s capabilities.
Machine Learning: ML finds patterns in historical data and uses them to improve performance on specific tasks. ML excels at working with structured data — information neatly organized in predefined formats like rows and columns; think of an Excel spreadsheet. Its primary strength is predictive modeling: making accurate forecasts based on historical data patterns.
Deep Learning: Deep learning is a more complex form of machine learning that processes vast volumes of data through multi-layered neural networks, loosely modeled on how we believe the human brain organizes memories. It’s "deep" because data flows through numerous layers before reaching a result, which allows it to recognize intricate patterns and make sophisticated predictions.
Generative AI: Unlike traditional AI models that predict outcomes, generative AI can create new content: text, images, video and even audio. It learns from existing datasets and synthesizes new outputs based on what it has seen in its training data. It is especially powerful with unstructured data like voice, video and free-form text.
Agentic AI: Agentic AI systems can autonomously make decisions, act and solve problems with minimal human intervention. Agents go beyond mere response generation, mimicking the decision-making processes of human employees based on historical data and processes. They understand context, set goals, execute tasks and adapt dynamically to changes.
Different scenarios call for different types of AI. Predictive analytics and machine learning are ideal for optimizing supply chains, managing risk and enhancing customer insights. Deep learning is a game-changer in fields requiring pattern recognition — like healthcare diagnostics and fraud detection. Generative AI is transforming creative industries, enabling rapid content generation, while agentic AI is pushing boundaries in autonomous operations and digital workforce augmentation and automation.
Beneath these technologies lies a robust data architectural foundation: the cloud and the modern data stack. This includes cloud data platforms, data ingestion mechanisms, data warehouses or lakehouses, a semantic layer, metadata management and governance. These elements are crucial for scalable and secure AI deployments — a topic we will explore in greater detail in the next article.
Related Article: The AI Revolution, Part 1: Drawing the Lines That Will Define the Future of AI
Machine Learning Essentials
At the core of machine learning is the development of models that identify data patterns and make predictions. This process is grounded in statistical methods including regression analysis and error minimization. To be clear, a model is a simplified representation of reality designed to highlight critical aspects of complex systems. A well-constructed model, as I learned in a modeling class in graduate school, does more than mimic real-world processes; it reveals interesting and often unexpected insights. Effective modeling is tailored to specific problem sets, allowing it to address targeted challenges with precision and clarity.
Key tasks for ML model development include:
- Feature Identification and Analysis: Discovering and analyzing the potential patterns and key features within data.
- Feature and Relationship Analysis: Understanding how features interact and influence outcomes.
- Model Prototyping and Training: Developing preliminary models and refining them through training with datasets.
- Model Evaluation and Testing: Assessing the accuracy and reliability of models under various conditions.
- Modeling Prediction and Post-Processing: Generating predictions and refining outputs to meet business needs.
- Model Maintenance and Performance Validation: Updating models to reflect changing data and validating their performance.
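For readers who want to see what these tasks look like in practice, the lifecycle above can be sketched in a few lines of Python with scikit-learn. This is a minimal illustration on synthetic data, not a production recipe; the feature names and numbers are invented for the example.

```python
# A minimal sketch of the ML lifecycle using scikit-learn and synthetic
# data. Every name and number here is illustrative, not a real dataset.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Feature identification: two features that plausibly drive the outcome.
rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(200, 2))             # e.g., ad spend, price
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(0, 0.5, 200)

# Prototyping and training: fit a preliminary model on a training split.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_train, y_train)

# Evaluation and testing: assess accuracy on held-out data.
score = r2_score(y_test, model.predict(X_test))
print(f"R^2 on held-out data: {score:.3f}")

# Prediction and post-processing: score a new record.
new_record = np.array([[5.0, 2.0]])
print(f"Predicted outcome: {model.predict(new_record)[0]:.2f}")
```

Even this toy example exercises most of the tasks in the list: feature selection, training, held-out evaluation and scoring new data. Maintenance is the step it omits, which is exactly where MLOps, discussed next, comes in.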
Ensuring machine learning models remain accurate and relevant over time is the job of MLOps (machine learning operations). MLOps applies the principles of DevOps, the practice of combining software development (dev) and IT operations (ops) to streamline the software lifecycle through collaboration and automation, to keep prediction models stable, well-calibrated and adaptive to evolving data environments.
For machine learning to deliver real value, organizations must integrate its output into their business processes and workflows. This integration enables systems to adjust dynamically as underlying data shifts, maintaining alignment with business goals and operational realities. When done right, machine learning doesn't just support decision-making — it transforms it, enabling more responsive and intelligent business operations.
Generative AI Essentials
GenAI enables the creation and manipulation of human-like text, images and other content. To understand how it works, you need to understand its key components and how they integrate, including large language models (LLMs), vector databases and applications like ChatGPT. These elements work in tandem to deliver intelligent, context-aware responses that are accurate and contextually relevant most of the time.
LLM Infrastructure
At the heart of a GenAI platform is the large language model. LLMs, such as GPT (Generative Pre-trained Transformer), are designed to produce detailed, coherent content by understanding and predicting word sequences.
- Generative: Able to create text by generating new sequences of words, not just classifying or labeling inputs.
- Pretrained: Trained on vast datasets, collected from both online and offline repositories, enabling them to learn language structure and context deeply.
- Transformer Architecture: Excels at understanding relationships between words in a sequence, allowing the model to predict the next word with high accuracy. Training happens through a process called self-supervised learning, in which the text itself supplies the labels.
- Fine-Tuning: Further training on specific datasets to perform specialized language tasks, enhancing their accuracy and relevance in particular applications.
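To make the idea of next-word prediction concrete, here is a deliberately tiny sketch. Real LLMs use transformer networks with billions of parameters; this bigram counter only illustrates the self-supervised training signal — the "labels" are simply the next words already present in the text, so no human annotation is needed. The corpus and function names are invented for the example.

```python
# A toy illustration of next-word prediction, the training signal behind
# LLMs. Real models use transformers with billions of parameters; this
# bigram counter only shows the idea of learning from word sequences.
from collections import Counter, defaultdict

corpus = "the model predicts the next word the model learns patterns".split()

# Self-supervised learning: the label for each word is simply the word
# that follows it in the text itself.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "model" follows "the" most often here
```

Scale this intuition up to trillions of words and a transformer instead of a lookup table, and you have the essence of how an LLM learns language.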
Vector Databases and Embeddings
LLMs, and the GenAI platforms built on them, rely on embeddings and vector databases. In natural language processing (NLP), words and phrases are represented as vectors — arrays of numbers that encode semantic meaning.
- Embeddings: These are high-dimensional vectors (more on dimensionality below) that map words into semantic space. Words with similar meanings have vectors that are close to each other, which allows the model to understand context and relationships.
- Tokens: Inputs are divided into smaller units called tokens, each mapped to a vector that carries semantic information.
- Vector Operations: Vectors can be manipulated mathematically. In particular, cosine similarity measures the angle between two vectors to indicate how related they are: a score of 1 means perfect alignment, while 0 indicates no relationship.
To visualize how this works, imagine scoring different types of meat and meat brands on a few attributes, such as succulence and deliciousness. Plot each meat's scores as a vector in 3D space: vectors that sit close together represent meats with similar characteristics. That is a low-dimensional example; generative AI embeddings typically run to thousands of dimensions, which is what enables LLMs to understand nuanced language and generate contextually rich responses.
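The low-dimensional analogy can even be run as code. In this sketch, each item is a 3-D vector of attribute scores; the attribute names and numbers are made up purely for illustration.

```python
# Cosine similarity on 3-D "meat" vectors. The attributes and scores
# below are invented for illustration only.
import math

def cosine_similarity(a, b):
    """Angle-based similarity: 1.0 = same direction, near 0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

ribeye = [9.0, 8.5, 7.0]   # succulence, deliciousness, tenderness
sirloin = [8.5, 8.0, 6.5]
tofu = [2.0, 4.0, 9.0]

print(cosine_similarity(ribeye, sirloin))  # close to 1: very similar
print(cosine_similarity(ribeye, tofu))     # lower: less similar
```

An LLM does exactly this kind of comparison, just across thousands of dimensions and billions of tokens rather than three scores and three foods.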
ChatGPT and Applications in GenAI Platforms
ChatGPT is a GenAI platform that leverages the infrastructure of pre-trained transformers and vector databases to generate interactive, conversational text. By understanding user prompts and responding with coherent and contextually appropriate text, it exemplifies the power of GenAI to reshape communication, customer service and digital interactions.
When these components are seamlessly integrated, a GenAI platform is capable of not just understanding human language but engaging with it meaningfully — transforming raw data into intelligent, actionable insights. Occasionally, though, hallucinations occur: because the model always attempts to provide a response, it can produce wrong or misleading answers.
Related Article: The AI Revolution, Part 2: Why Reinventing Your Business Beats Optimizing It
Agentic AI Essentials
Agentic AI represents the next evolution of artificial intelligence, where generative AI converges with workforce automation platforms to drive real-world actions. While GenAI excels at understanding and generating human-like responses, Agentic AI turns these insights into executable workflows, bridging the gap between intelligent conversation and automated task completion. This transformation is made possible through a workforce/agent orchestration platform, which seamlessly coordinates digital agents to perform tasks, make decisions and respond dynamically to changes in data and business environments.
At the core of Agentic AI are agent frameworks and enterprise agentic workflows. These frameworks enable the creation and deployment of specialized agents that execute specific tasks autonomously, while enterprise workflows ensure that these agents operate cohesively within business processes. The result is an intelligent, self-directed system capable of not just understanding intent but taking action — optimizing operations, enhancing productivity and scaling decision-making across the enterprise. Agentic AI redefines automation by empowering GenAI with the capability to act, not just respond.
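The agentic pattern described above can be reduced to a loop: perceive the data, decide on a goal, act and adapt. The following is a minimal sketch under invented assumptions (a toy inventory scenario with made-up names and rules); real platforms orchestrate many agents, tools and guardrails.

```python
# A minimal sketch of an agentic loop: perceive data, decide on a goal,
# act and adapt. Every name and rule here is a made-up illustration of
# the pattern, not any vendor's framework.

def perceive(environment):
    """Read the current state of the business data."""
    return environment["inventory"]

def decide(inventory, reorder_point=10):
    """Set a goal based on context: reorder when stock runs low."""
    return "reorder" if inventory < reorder_point else "wait"

def act(decision, environment):
    """Execute the chosen task and update the environment."""
    if decision == "reorder":
        environment["inventory"] += 50   # simulate a placed order arriving
        environment["orders"] += 1
    return environment

# The agent runs autonomously, adapting as the data changes.
env = {"inventory": 4, "orders": 0}
for _ in range(3):
    decision = decide(perceive(env))
    env = act(decision, env)
    env["inventory"] -= 15               # simulate ongoing demand
print(env)  # → {'inventory': 9, 'orders': 1}
```

Notice that no human tells the agent when to reorder; the decision emerges from the data and the workflow's logic. Swap the hard-coded rule for an LLM-driven decision step and you have the essence of GenAI plus workforce automation.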
The AI Stack Simplified
Now let’s visualize everything we have discussed in a single diagram. On the top left is machine learning. Data is stored in a data lake or data warehouse in the cloud. This data is then connected to Spark or an ML/data model, which processes it to produce a prediction as output.
Next down is the GenAI stack. In this layer, the LLM generates embeddings, which are stored in a vector database. Unlike traditional databases that organize information in rows and columns, a vector database stores information as embedded chunks of content. The output of this process is a chatbot response to a user's prompt.
Finally, on the bottom left is agentic AI, which integrates GenAI with data and workflows or scripts to create actions. Traditionally, scripts were designed for human-driven action; here they are automated, so actions are generated based on the data and the logic of the workflow.
Related Article: The AI Revolution, Part 3: Lessons From the Leaders Shaping AI’s Next Chapter
Strategic AI Leadership: Understanding Over Coding
The AI revolution is not about mastering code — it's about mastering understanding. First, leaders should grasp AI fundamentals: the principles of machine learning, deep learning and generative AI. Second, successful AI integration depends on data fluency and a solid grasp of how models drive decisions, not just the technical details. Finally, operationalizing AI demands a strategic vision, ethical awareness and the ability to navigate complexity — not just technical know-how.
In the age of AI, leadership is about wielding intelligence effectively, not building it from scratch.