Key Takeaways
- Featherless.ai raised $20 million led by AMD Ventures and Airbus Ventures.
- Platform delivers serverless inference for over 30,000 open models.
- IT teams gain control, avoiding vendor lock-in and supporting data sovereignty.
Featherless.ai, a serverless inference platform specializing in open-source AI, announced it closed $20 million in Series A funding on April 30. The funding round was co-led by AMD Ventures and Airbus Ventures, with participation from BMW i Ventures, Kickstart Ventures, Panache Ventures and Wavemaker Ventures.
The company emerged in 2023 to offer a serverless AI inference platform that lets teams run over 30,000 open-source AI models across language, vision and audio without the need to manage infrastructure. It positions itself as a neutral layer unaligned with any hyperscaler, chipmaker or proprietary ecosystem. Through its AMD collaboration, it ensures popular open-source models run natively on AMD ROCm, offering an auditable alternative to proprietary hardware. Core infrastructure is hosted in the U.S. and EU, with a global team across Canada, Europe, Singapore and Australia.
Benefits of Serverless Inference
The company emphasizes the role of serverless inference platforms in reducing the cost and complexity of deploying open-source AI at enterprise scale. These platforms are built on specialized GPU cloud infrastructure designed for high-performance training and production inference, featuring:
- Bare-metal GPU and CPU compute nodes
- Managed Kubernetes services for seamless container orchestration
- AI-optimized storage and networking for efficient data access
- On-demand GPU access with flexible scaling options
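From a developer's perspective, "serverless" means none of the infrastructure above is managed client-side: a request is just an HTTP call to a hosted endpoint. As a minimal sketch, the snippet below assembles a request body in the OpenAI-compatible chat-completions format that many inference platforms expose; the endpoint URL and model name are illustrative assumptions, not documented Featherless.ai values.

```python
import json

# Placeholder endpoint; a real platform would document its own URL.
API_URL = "https://api.example-inference.com/v1/chat/completions"

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Assemble the JSON body for a chat-completion call.

    No GPU provisioning or container orchestration happens client-side;
    the serverless platform handles scheduling and scaling.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

# Hypothetical model identifier for illustration only.
payload = build_chat_request("meta-llama/Llama-3.1-8B-Instruct",
                             "Summarize our Q3 report.")
body = json.dumps(payload)  # ready to POST with any HTTP client
```

Because the wire format is a plain JSON payload, swapping the `model` string is the only change needed to move a workload between any of the platform's hosted models.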
Featherless.ai's U.S.- and EU-hosted infrastructure supports growing data-residency requirements, letting organizations keep inference traffic and data within a chosen jurisdiction.
The company said it will use the new funding in four areas:
- Expanding the open model library
- Shipping the open-source agent runtime
- Deepening optimization across hardware
- Scaling enterprise deployments with regional sovereignty
This investment signals a turning point in the AI market. While the first wave of adoption was defined by proprietary, closed-door ecosystems, we provide a neutral ground for a second phase where companies can own and run their own models without being tethered to a single cloud provider or a restricted tech stack.
— Eugene Cheah, CEO and Co-founder, Featherless.ai
Tiered Models & Avoiding Lock-In
Many organizations route routine tasks to smaller, local models while reserving foundation models for complex reasoning, matching compute intensity to task complexity.
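The routing pattern described above can be sketched in a few lines. This is an illustrative example, not Featherless.ai code: the model names and the complexity heuristic (prompt length plus keyword hints) are assumptions standing in for whatever policy a real deployment would use.

```python
SMALL_MODEL = "local/small-7b"         # hypothetical cheap local model
LARGE_MODEL = "hosted/foundation-70b"  # hypothetical foundation model

# Crude stand-in signals that a prompt needs deeper reasoning.
COMPLEX_HINTS = ("prove", "analyze", "step by step", "plan")

def route(prompt: str) -> str:
    """Pick a model tier by matching compute intensity to task complexity."""
    text = prompt.lower()
    if len(text) > 500 or any(hint in text for hint in COMPLEX_HINTS):
        return LARGE_MODEL  # complex reasoning -> foundation model
    return SMALL_MODEL      # routine task -> smaller, cheaper model

route("Translate 'hello' to French")                # routes to SMALL_MODEL
route("Analyze the tradeoffs between both designs")  # routes to LARGE_MODEL
```

Production routers typically replace the keyword heuristic with a learned classifier or a cost/latency budget, but the shape of the decision is the same.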
Hyperscaler dependency remains a recognized risk. OpenAI's $38 billion AWS deal, which diversifies its compute beyond Microsoft Azure, signals that even major AI vendors are pursuing multi-cloud strategies. Enterprises can draw a similar lesson: distributing inference workloads across providers reduces both pricing and availability risk.
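One concrete way to distribute inference workloads is ordered failover: try a preferred provider and fall back to alternates on an outage or rate limit. The sketch below is a generic pattern under the assumption that each provider is wrapped as a callable; it is not tied to any specific vendor SDK.

```python
from typing import Callable

def call_with_failover(providers: list[Callable[[str], str]],
                       prompt: str) -> str:
    """Try each provider client in order; return the first success."""
    last_err: Exception | None = None
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as err:  # e.g. rate limit, regional outage
            last_err = err        # remember the failure, try the next one
    raise RuntimeError("all providers failed") from last_err
```

In practice the provider list would be ordered by price or latency, and a production version would retry with backoff and distinguish transient errors from permanent ones.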
Addressing ROI Challenges
Despite rising AI budgets, many organizations struggle to realize a return on their generative AI investments. Featherless.ai aims to address this by providing streamlined infrastructure that eases integration with production workflows.