xAI Expands Colossus Supercomputer Campus with Third Building
Elon Musk’s artificial intelligence startup xAI has acquired a **third building** at its Colossus supercomputing campus near Memphis, Tennessee, in a move aimed at massively increasing its AI compute capacity by 2026.[4] The new structure will support xAI’s effort to build one of the world’s most powerful AI training facilities and to compete directly with leading frontier labs such as OpenAI and Anthropic.[4]
MACROHARDRR: The New Building in xAI’s Colossus Complex
The newly purchased building has been named **“MACROHARDRR”**, extending Musk’s tongue‑in‑cheek “Macrohard” branding that he uses as a dig at Microsoft’s dominance in cloud and AI infrastructure.[1][3] Located next to the existing Colossus data center in the broader Memphis area, the facility is being converted into a large‑scale computing hub to run xAI’s next generation of AI models.[1][4]
According to reports, the expanded site is designed to deliver **nearly 2 gigawatts** of power capacity dedicated to AI workloads once the build‑out is complete.[1][4] That level of energy infrastructure would make Colossus one of the largest single‑site AI compute installations in the world.[1]
Toward 1 Million GPUs and Record‑Scale AI Compute
xAI’s long‑term plan is for the combined Colossus campus to house **at least 1 million GPUs**, forming a supercomputer optimized for training and serving large‑scale AI models.[1][4] Industry analyses indicate that the current and planned build‑out at Colossus includes hundreds of thousands of high‑end NVIDIA accelerators, with the ultimate goal of reaching the million‑GPU mark as additional power and space come online.[1]
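As a rough sanity check on how the power and GPU figures relate, the back‑of‑envelope sketch below estimates how many accelerators roughly 2 gigawatts of facility power could sustain. The per‑GPU chip power, system overhead multiplier, and PUE values are illustrative assumptions, not figures from the reporting.

```python
# Back-of-envelope: how many GPUs can ~2 GW of facility power sustain?
# Per-GPU figures below are illustrative assumptions, not from the article.

FACILITY_POWER_W = 2.0e9   # ~2 GW planned capacity (figure from the article)
GPU_CHIP_W = 700           # assumed chip power for an H100 SXM-class accelerator
SYSTEM_OVERHEAD = 1.5      # assumed multiplier for CPUs, memory, networking
PUE = 1.3                  # assumed power usage effectiveness (cooling, distribution)

watts_per_gpu = GPU_CHIP_W * SYSTEM_OVERHEAD * PUE  # all-in draw per GPU
max_gpus = FACILITY_POWER_W / watts_per_gpu

print(f"All-in draw per GPU: {watts_per_gpu / 1e3:.2f} kW")       # ~1.4 kW
print(f"GPUs supportable at ~2 GW: {max_gpus / 1e6:.2f} million") # ~1.47 million
```

Under these assumptions, roughly 1.4 to 1.5 million GPUs could run concurrently, which is broadly consistent with the stated million‑GPU target; newer accelerators with higher per‑chip power draw would lower that ceiling.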
Musk has publicly stated that he wants xAI to accumulate **more AI compute than any other player**, framing the arms race in terms of raw training capacity for increasingly capable models.[3] The third building and its associated power infrastructure are central to that ambition, enabling xAI to scale beyond today’s leading clusters operated by major tech firms.[1][3][4]
Supporting Frontier AI Models and Tools
The expanded Colossus campus is designed primarily to train and run xAI’s own frontier‑scale models, which compete with systems like ChatGPT and other large language models from top AI labs.[1][3] With more GPUs and higher power density, xAI expects to:
- Train larger and more complex models
- Shorten training cycles for model updates
- Run multiple large experiments in parallel
- Support advanced AI assistants and developer tools built on its models
This infrastructure push reflects the industry‑wide consensus that access to massive compute is a key bottleneck for progress in cutting‑edge AI, from natural language systems to multimodal and agentic models.[1][2][3]
Environmental and Community Concerns
The rapid scale‑up has triggered **environmental concerns** and local scrutiny.[4][5] Delivering nearly 2 gigawatts of power for AI workloads requires extensive on‑site generation and grid connections, raising questions about:
- Overall energy consumption and carbon footprint
- Local air quality impacts from fossil‑fuel‑based generation
- Noise and land‑use issues around the expanded complex
Reporting on xAI’s earlier turbine deployments at Colossus describes community pushback and ongoing litigation related to smog, noise, and permitting for additional power units.[5] Similar debates are expected as the MACROHARDRR building is converted and tied into the broader energy system that feeds the campus.[4][5]
Timeline and Competitive Context
The new building is slated to begin **data center operations in 2026**, aligning with xAI’s broader roadmap to ramp up its AI clusters over the next few years.[1][4] The expansion comes as major rivals, including OpenAI and the large cloud hyperscalers, pursue their own multi‑gigawatt data center projects to support frontier AI research and deployment.[2][3]
With the MACROHARDRR acquisition and the planned power upgrades, xAI positions Colossus as a flagship facility in the global AI infrastructure race—one that, if fully realized, could rival or exceed the capacity of the largest existing AI supercomputers.[1][2][3][4]