Awaira Score
68
Out of 100
Valuation
N/A
Post-money
Total Raised
$160M
All rounds
Founded
2018
100-500 employees
What They Build
March 2026
Untether AI designs AI inference accelerator chips using a compute-near-memory architecture that places processing elements directly adjacent to memory rather than in a separate compute chip, dramatically reducing the data-movement bottleneck that limits inference throughput and energy efficiency in conventional von Neumann architectures. The Toronto-based company develops both the chip architecture and the supporting software compiler toolchain required to deploy AI models on its hardware.
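The data-movement argument can be made concrete with a toy energy model. The numbers below are rough order-of-magnitude figures from the computer-architecture literature, not Untether AI's published specifications; they illustrate why moving bytes off-chip, rather than the arithmetic itself, tends to dominate inference energy:

```python
# Illustrative energy model: compute vs. data movement for one inference layer.
# Constants are generic order-of-magnitude values, NOT Untether AI figures.

PJ_PER_MAC = 0.2          # energy of one multiply-accumulate, picojoules
PJ_PER_BYTE_DRAM = 100.0  # moving one byte from off-chip DRAM (von Neumann)
PJ_PER_BYTE_NEAR = 1.0    # reading one byte from adjacent on-chip memory

def inference_energy_pj(macs: int, bytes_moved: int, pj_per_byte: float) -> float:
    """Total energy = compute energy + data-movement energy."""
    return macs * PJ_PER_MAC + bytes_moved * pj_per_byte

# A small layer: one million MACs touching 1 MB of weights once.
macs, weight_bytes = 1_000_000, 1_000_000

von_neumann = inference_energy_pj(macs, weight_bytes, PJ_PER_BYTE_DRAM)
near_memory = inference_energy_pj(macs, weight_bytes, PJ_PER_BYTE_NEAR)

print(f"von Neumann:   {von_neumann / 1e6:.1f} uJ")  # data movement dominates
print(f"near-memory:   {near_memory / 1e6:.1f} uJ")
```

Under these assumed constants the off-chip case spends roughly 500x more energy moving data than computing, which is the bottleneck a compute-near-memory layout attacks.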
Company Info
Stage: Series C
Employees: 100-500
Country: 🇨🇦 Canada
Funding Rounds
Series C · No public funding round data available yet.
More from Canada
🇨🇦 View all AI companies in Canada →

Alternatives
View all alternatives to Untether AI →

Frequently Asked Questions
What is Untether AI's valuation?
Untether AI's valuation is not publicly disclosed.
Who invested in Untether AI?
Untether AI's investors include Intel Capital, Export Development Canada, and BDC Capital.
When did Untether AI last raise funding?
No public funding round data is currently available for Untether AI.
How many employees does Untether AI have?
Untether AI has approximately 100-500 employees.
What does Untether AI do?
Untether AI designs AI inference accelerator chips using a compute-near-memory architecture that places processing elements directly adjacent to memory rather than in a separate compute chip, dramatically reducing the data-movement bottleneck that limits inference throughput and energy efficiency in conventional von Neumann architectures. The Toronto-based company develops both the chip architecture and the supporting software compiler toolchain required to deploy AI models on its hardware.

The company has raised approximately $160 million in venture funding from investors including Intel Capital, Export Development Canada, and BDC Capital. Untether AI targets inference-intensive applications including edge AI deployment, data-centre inference servers, and cloud inference acceleration, where power efficiency and throughput per dollar are the primary evaluation metrics. The compute-near-memory approach is architecturally distinct from both GPU-based inference and other AI accelerator designs, with published benchmarks showing significant performance-per-watt advantages for specific inference workload types.

Untether AI competes in the AI inference chip market against NVIDIA's TensorRT-based GPU inference stack, Groq, Cerebras, and other accelerator companies pursuing different architectural approaches to the inference-efficiency problem. Canada's position as a global AI research hub, anchored by the Vector Institute in Toronto and Mila in Montreal, provides a strong talent pipeline and research-collaboration environment for chip design and AI systems work. The company's architecture-first approach requires a longer path to production deployment than adapting existing GPU infrastructure, but it targets a structurally different point on the performance-efficiency curve.