xAI has two AI data centers. The first data center ran 100,000 Nvidia H100s for Grok 3 training, and it had gained another 50,000 H100s and 50,000 H200s by the time Grok 3 was released. xAI then added 30,000 B200 chips. Taken together, this is roughly 400,000 H100 equivalents.
xAI is about to activate, or has already activated, the second AI data center with 110,000 Nvidia B200s, which are equivalent to about 550,000 H100s. The second data center will expand to 1 million Nvidia B200 or B300 chips over the next 5-8 months, which means installing roughly 110,000 to 180,000 B200-class chips per month.
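Those totals can be sanity-checked with back-of-the-envelope arithmetic. The conversion factors in the sketch below (one H200 worth about 2 H100s, one B200 worth about 5 H100s) are assumptions inferred from the article's own figures, not official Nvidia benchmarks:

```python
# Back-of-the-envelope check of the quoted H100-equivalent totals.
# Conversion factors are assumptions inferred from the article's numbers
# (110,000 B200 == 550,000 H100), not official benchmarks.
H200_PER_H100 = 2.0
B200_PER_H100 = 5.0

# First data center: 100k H100 (Grok 3) + 50k H100 + 50k H200 + 30k B200
first_dc = 100_000 + 50_000 + 50_000 * H200_PER_H100 + 30_000 * B200_PER_H100
print(f"First data center:  {first_dc:,.0f} H100 equivalents")    # ~400,000

# Second data center: 110k B200 now, scaling to 1M B200/B300
second_dc_now = 110_000 * B200_PER_H100
second_dc_full = 1_000_000 * B200_PER_H100
print(f"Second DC (now):    {second_dc_now:,.0f} H100 equivalents")   # 550,000
print(f"Second DC (full):   {second_dc_full:,.0f} H100 equivalents")  # 5,000,000

# Implied monthly install rate over a 5-8 month buildout
remaining = 1_000_000 - 110_000
print(f"Chips per month: {remaining / 8:,.0f} to {remaining / 5:,.0f}")  # ~111k-178k
```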
xAI will likely dedicate the majority of the second data center's resources to pre-training a next-generation model—potentially xAI's most ambitious yet—while using the first data center for supplementary pre-training tasks or smaller-scale experiments.
Post-training (or fine-tuning) refines the pre-trained model by training it on specific tasks or datasets to improve performance in targeted areas. This phase requires fewer resources than pre-training but benefits from parallelization and flexibility; a minimal sketch of what one such fine-tuning job looks like follows the list below.
First Data Center:
With 400,000 H100 equivalents, this data center can handle fine-tuning Grok 3.5 or its successors for specific applications (e.g., scientific reasoning, conversational accuracy, or domain-specific knowledge).
Second Data Center:
Initial Phase: Allocate a portion of the 550,000 H100 equivalents to fine-tune the pre-trained models on diverse tasks, running multiple fine-tuning jobs in parallel.
Expanded Phase: With 5,000,000 H100 equivalents, xAI can scale up fine-tuning efforts significantly, tailoring multiple model variants for different use cases (e.g., enterprise solutions, research tools, or real-time systems).
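To make the fine-tuning phase concrete, here is a minimal supervised fine-tuning loop in PyTorch. The model, dataset, and hyperparameters are toy placeholders for illustration; xAI's actual training stack is not public:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder stand-in for a pre-trained language model; the real
# architecture and weights are not public.
model = nn.Sequential(nn.Embedding(1000, 64), nn.Flatten(), nn.Linear(64 * 16, 1000))

# Toy task-specific dataset: 16-token sequences with a next-token label.
inputs = torch.randint(0, 1000, (256, 16))
labels = torch.randint(0, 1000, (256,))
loader = DataLoader(TensorDataset(inputs, labels), batch_size=32, shuffle=True)

# Fine-tuning typically uses a much smaller learning rate than pre-training
# so the model adapts to the target task without forgetting general knowledge.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for x, y in loader:
        optimizer.zero_grad()
        logits = model(x)
        loss = loss_fn(logits, y)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

Each job like this is self-contained, which is why parallelization matters: a pool of hundreds of thousands of H100 equivalents can run many independent fine-tuning jobs at once, one per task or model variant.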
Reinforcement learning (RL) enhances the model's decision-making and adaptability by rewarding desired outputs and penalizing undesired ones. This phase is computationally intensive for complex tasks and critical for achieving human-like performance; a bare-bones policy-gradient sketch appears after the list below.
First Data Center:
Apply RL to Grok 3.5 or smaller models to refine their behavior, such as improving response coherence or task-specific accuracy.
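To illustrate the reward-and-penalize loop in its simplest form, here is a bare-bones REINFORCE-style policy-gradient sketch in PyTorch. The policy, reward function, and reward values are hypothetical stand-ins; production pipelines use learned reward models and algorithms such as PPO, and nothing here reflects xAI's actual setup:

```python
import torch
import torch.nn as nn

# Toy "policy": scores 4 candidate responses for a fixed prompt context.
policy = nn.Linear(8, 4)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def reward_fn(action: int) -> float:
    # Hypothetical reward: pretend response 2 is the preferred output.
    return 1.0 if action == 2 else -0.1

context = torch.randn(8)  # stand-in for an encoded prompt
for step in range(200):
    logits = policy(context)
    dist = torch.distributions.Categorical(logits=logits)
    action = dist.sample()                   # sample a candidate response
    reward = reward_fn(action.item())        # score it (desired vs. undesired)
    loss = -dist.log_prob(action) * reward   # REINFORCE: boost rewarded actions
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(policy(context).softmax(dim=-1))  # probability mass shifts toward action 2
```

The same idea scales up to LLMs: sampled outputs that score well under the reward signal have their probability increased, and the phase becomes compute-hungry because each "action" is a full model generation.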