
GPU

Definition

Graphics Processing Unit — a specialized processor originally designed for rendering graphics but now essential for training and running AI models due to its ability to perform thousands of parallel computations.

GPUs have become the backbone of modern AI computing. Unlike CPUs, which excel at sequential tasks with a few powerful cores, GPUs contain thousands of smaller cores optimized for parallel processing — perfectly suited for the matrix multiplications that dominate neural network computation. NVIDIA dominates the AI GPU market with its A100, H100, and H200 accelerators, while AMD and Intel compete with alternatives. A single H100 GPU costs around $30,000-$40,000, and training frontier LLMs requires clusters of thousands of GPUs running for months. The global shortage of AI GPUs has made them one of the most strategically important commodities in tech. Cloud providers such as AWS, Google Cloud, CoreWeave, and Lambda Labs offer on-demand GPU access for organizations that cannot purchase hardware outright.
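To see why matrix multiplication parallelizes so well, note that each output element is an independent dot product, so a GPU can compute all of them simultaneously across its cores. The sketch below illustrates this decomposition with NumPy on the CPU; the matrix sizes are arbitrary assumptions chosen for illustration.

```python
import numpy as np

# A neural-network layer is dominated by matrix multiplication.
# Each output element C[i, j] is an independent dot product of a row
# of A with a column of B, so all of them can run in parallel on a
# GPU's thousands of cores. (Illustrative CPU sketch using NumPy.)

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))   # e.g. a batch of 4 input vectors
B = rng.standard_normal((3, 2))   # e.g. a layer's weight matrix

# Element-by-element view: 4 * 2 = 8 independent dot products.
C_manual = np.empty((4, 2))
for i in range(4):
    for j in range(2):
        C_manual[i, j] = A[i, :] @ B[:, j]

# Vectorized view: one call, which a GPU library (e.g. cuBLAS via
# PyTorch or JAX) would dispatch across many cores at once.
C = A @ B
assert np.allclose(C, C_manual)
```

Because none of the dot products depends on another, the work scales almost perfectly with core count, which is why GPUs outperform CPUs on this workload.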
