The startup has contracted for "several gigawatts" of computing power based on Google's new Tensor Processing Units (TPUs), starting in 2027. This effectively confirms analysts' expectations: Anthropic is preparing to train ultra-large frontier models comparable in resource intensity to future projects from Microsoft and OpenAI. Buying chips through the Google ecosystem (rather than depending directly on NVIDIA) lets the company optimize CAPEX and lock in a stable inference cost for years to come.
Source: Anthropic / TechCrunch
#Hardware #Anthropic #Update #TPU