There is no contradiction here; it is the classic division of labor in AI. NVIDIA's data-center hardware (B200 clusters) is critical for *training* colossal multimodal models. Proprietary, highly energy-efficient AI6 silicon, by contrast, is designed for *inference* directly at the edge: in the onboard computers of electric vehicles running Full Self-Driving and in the heads of humanoid Optimus robots. By controlling the edge infrastructure, Musk frees his physical products from dependence on third-party vendors.
Source: Reuters / CNA
Tags: Hardware, Tesla, Elon Musk, Edge AI, NVIDIA