UK AI chip startup Fractile has raised $220 million in Series B funding, adding fresh capital to its effort to build hardware for one of AI’s biggest emerging bottlenecks: inference.
As AI models take on longer, more complex tasks, the cost and speed of generating outputs are becoming a major constraint. Solving that problem, according to Fractile, will require a complete rethink of the chips that power frontier AI systems.
Table of Contents
- Fractile Targets the Inference Issue
- The Series B Round
- Why Inference Is AI’s Growing Problem
- Fractile’s UK Expansion Plans
- Anthropic Reportedly Shows Interest
- Fractile Looks Beyond Today’s GPU Limits
Fractile Targets the Inference Issue
Fractile is developing chips based on in-memory compute, an approach that allows calculations to happen directly inside computer memory. The objective is to reduce the time, energy and cost associated with moving data between memory and processors.
That goal is paramount as inference — the process of running an AI model to generate answers, code, reasoning steps or other outputs — becomes increasingly expensive as models are used for more complex work.
In a blog post announcing the funding, Fractile said some advanced AI workloads already require outputs reaching tens of millions of tokens. At current speeds, the company argued, those workloads can take far too long to be practical at scale.
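To see why output length matters, a rough back-of-envelope calculation helps. The token count and decode speed below are illustrative assumptions, not figures from Fractile:

```python
# Back-of-envelope: wall-clock time to generate a very long output
# at a fixed sequential decode speed. All numbers are illustrative.

def generation_time_days(output_tokens: int, tokens_per_second: float) -> float:
    """Wall-clock days to decode `output_tokens` at the given speed."""
    seconds = output_tokens / tokens_per_second
    return seconds / 86_400  # seconds in a day

# A workload producing 30 million tokens at an assumed 50 tokens/s
# per-stream decode rate:
days = generation_time_days(30_000_000, 50.0)
print(f"{days:.1f} days")  # roughly a week of continuous decoding
```

At those assumed rates, a single tens-of-millions-of-tokens workload occupies close to a week of continuous generation, which is the kind of latency Fractile argues is impractical at scale.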
The Series B Round
The $220 million round was led by Accel, Factorial Funds and Peter Thiel's Founders Fund. Other participants included:
- Conviction
- Gigascale
- O1A
- Felicis
- Buckley Ventures
- 8VC
Fractile said the funding will help accelerate its path toward getting its first AI chips and systems into customers’ hands.
The company has also drawn backing from notable technology executives. Former Arm and Acorn Computers executive Stan Boland previously invested in Fractile, along with entrepreneur Hermann Hauser, Wayve co-founder Amar Shah and former Intel CEO Pat Gelsinger.
Why Inference Is AI’s Growing Problem
Much of the AI infrastructure conversation has centered on training large models. But as generative AI moves deeper into production, inference is becoming a critical cost center.
Training creates the model. Inference runs it every time a user or system asks it to do something. That makes inference especially important for enterprise AI use cases such as:
- AI coding assistants
- Drug discovery workflows
- Materials research
- Long-context enterprise agents
- Other multi-step AI systems that generate large volumes of output
Existing chip architectures are constrained by memory bandwidth, making it harder to run large AI models quickly and economically. According to Fractile founder and CEO Walter Goodwin, faster inference could help turn workloads that currently take weeks or months into tasks that can be completed in days or hours.
"Faster speed is not just about going from 10 seconds to 100 milliseconds. It is about going from days, weeks, months — down to something that is much, much shorter."
- Walter Goodwin
Founder & CEO, Fractile
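The memory-bandwidth constraint can be sketched with a standard back-of-envelope estimate: in sequential (batch-1) decoding, every generated token requires streaming the model's weights from memory, so bandwidth alone bounds the token rate no matter how fast the compute units are. The parameter count and bandwidth figure below are illustrative assumptions, not Fractile's numbers:

```python
# Rough upper bound on batch-1 decode speed when limited purely by
# memory bandwidth: each token requires reading all model weights once.
# All hardware figures are illustrative assumptions.

def max_tokens_per_second(params_billions: float,
                          bytes_per_param: int,
                          bandwidth_tb_s: float) -> float:
    """Token-rate ceiling if weight reads saturate memory bandwidth."""
    model_bytes = params_billions * 1e9 * bytes_per_param
    bandwidth_bytes = bandwidth_tb_s * 1e12
    return bandwidth_bytes / model_bytes

# A 70B-parameter model in 16-bit precision on ~3 TB/s of HBM:
rate = max_tokens_per_second(70, 2, 3.0)
print(f"~{rate:.0f} tokens/s")  # ~21 tokens/s per stream
```

Under these assumptions the ceiling sits near 21 tokens per second per stream, which is why in-memory compute, by removing the weight-streaming step, is pitched as a way around the bottleneck rather than just a faster processor.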
Fractile’s UK Expansion Plans
The Series B follows Fractile’s February 2026 announcement that it plans to invest £100 million, or about $135 million, in its UK operations over the next three years.
That expansion includes:
- Growing its existing sites in London and Bristol
- Creating a new hardware engineering facility in Bristol
- Hiring across the UK, the US and Taiwan
Fractile said it is currently recruiting in London, Bristol, San Francisco and Taipei.
Anthropic Reportedly Shows Interest
Fractile’s chips are not yet commercially available, but the company appears to be attracting early customer interest.
Earlier this month, The Information reported that Anthropic held discussions with Fractile about buying its inference chips once the hardware becomes available in 2027.
As frontier AI companies push toward larger models, longer context windows and more agentic workloads, they are also looking for ways to reduce their dependence on expensive, power-hungry infrastructure.
Fractile Looks Beyond Today’s GPU Limits
Fractile was founded in 2022 by Dr. Walter Goodwin, then a PhD student at the University of Oxford’s Robotics Institute.
In the company’s most recent funding announcement, it described itself as a full-stack effort spanning AI research, foundry processes and chip design.
“Since founding, we’ve been working across the full stack, from foundational AI research to foundry process innovation to chip micro-architecture, to aggressively chase the most promising solutions and develop systems that break the trade-off curve, reject the inference pareto frontier of cost-versus-latency, and chart a course to changing what we can do with the world’s best AI models,” company officials wrote.
Nvidia GPUs still dominate the AI hardware market, but startups like Fractile are trying to carve out a role by targeting specific technical limits in today’s infrastructure. The opportunity is clear: if AI’s future depends on running increasingly long and complex workloads, inference hardware could become one of the next major competitive arenas.