Grok 4 Vending Machine Win, Stealth Grok 4 coding Leading to Possible AGI with Grok 5
xAI has two AI data centers. The first data center had 100,000 Nvidia H100 for Grok 3 training, and another 50,000 H100 and 50,000 H200 were in place by the time Grok 3 was released. xAI then added 30,000 B200 chips. Counting each H200 as roughly two H100 and each B200 as roughly five, this equals about 400,000 H100 equivalents.
xAI has activated, or will soon activate, the second AI data center with 110,000 Nvidia B200, which are equal to about 550,000 H100. The second AI data center will expand to 1 million Nvidia B200 or B300 chips over the next 5-8 months. This means adding 100,000 to 200,000 B200 chips every month.
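The chip-count arithmetic above can be checked with a quick sketch. The H100-equivalence factors are assumptions: the B200 factor of 5 is implied by the article's own 110,000 B200 = 550,000 H100 figure, and the H200 factor of 2 is the value that makes the first data center's total come out to 400,000.

```python
# Back-of-envelope check of the H100-equivalent totals.
# Equivalence factors are assumptions chosen to be consistent with the
# article's own numbers (each H200 ~ 2x H100, each B200 ~ 5x H100).
H200_FACTOR = 2
B200_FACTOR = 5

# First data center: 100k H100 (Grok 3 training) + 50k H100 + 50k H200 + 30k B200
first_dc = 100_000 + 50_000 + 50_000 * H200_FACTOR + 30_000 * B200_FACTOR
print(first_dc)  # 400000 H100 equivalents

# Second data center: 110k B200 initially, expanding to 1M B200-class chips
second_dc_initial = 110_000 * B200_FACTOR
second_dc_full = 1_000_000 * B200_FACTOR
print(second_dc_initial, second_dc_full)  # 550000 and 5000000

# Expansion pace: (1M - 110k) chips delivered over 5 to 8 months
fast = (1_000_000 - 110_000) / 5  # ~178,000 chips/month
slow = (1_000_000 - 110_000) / 8  # ~111,250 chips/month
```

The 5-to-8-month window lines up with the stated 100,000-to-200,000 chips-per-month installation pace.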
xAI will likely dedicate the majority of the second data center's resources to pre-training a next-generation model—potentially xAI's most ambitious yet—while using the first data center for supplementary pre-training tasks or smaller-scale experiments.
Post-training (or fine-tuning) refines the pre-trained model by training it on specific tasks or datasets to improve performance in targeted areas. This phase requires fewer resources than pre-training but benefits from parallelization and flexibility.
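The idea that post-training touches far fewer parameters than pre-training can be shown with a toy sketch: a "pre-trained" one-dimensional linear model keeps its learned weight frozen while only a small adapter parameter (here, the bias) is updated on new task data. This is an illustration of the concept, not xAI's actual pipeline.

```python
# Toy post-training sketch: freeze the pre-trained weight, adapt only the bias.
w_pretrained = 2.0  # frozen parameter learned during "pre-training"
b = 0.0             # small parameter set updated during post-training

# New target task: y = 2x + 5 (same weight as pre-training, shifted bias)
data = [(x, 2.0 * x + 5.0) for x in range(-5, 6)]

lr = 0.1
for epoch in range(200):
    for x, y in data:
        pred = w_pretrained * x + b
        grad_b = 2 * (pred - y)  # d/db of squared error
        b -= lr * grad_b         # only the bias moves; w stays frozen

print(round(b, 3))  # converges to 5.0
```

Because only one parameter is trained, convergence is cheap, which mirrors why fine-tuning needs far less compute than pre-training and why many such jobs can run side by side.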
First Data Center:
With 400,000 H100 equivalents, this data center can handle fine-tuning Grok 3.5 or its successors for specific applications (e.g., scientific reasoning, conversational accuracy, or domain-specific knowledge).
Second Data Center:
Initial Phase: Allocate a portion of the 550,000 H100 equivalents to fine-tune the pre-trained models on diverse tasks, running multiple fine-tuning jobs in parallel.
Expanded Phase: With 5,000,000 H100 equivalents, xAI can scale up fine-tuning efforts significantly, tailoring multiple model variants for different use cases (e.g., enterprise solutions, research tools, or real-time systems).
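A rough capacity sketch shows what "multiple fine-tuning jobs in parallel" could mean at these scales. Only the H100-equivalent totals come from the article; the per-job GPU count is a hypothetical illustration.

```python
# Hypothetical parallel fine-tuning capacity at the stated cluster sizes.
initial_equivalents = 550_000     # second data center, initial phase
expanded_equivalents = 5_000_000  # second data center, expanded phase
gpus_per_job = 8_192              # hypothetical size of one large fine-tuning job

initial_jobs = initial_equivalents // gpus_per_job
expanded_jobs = expanded_equivalents // gpus_per_job
print(initial_jobs, expanded_jobs)  # 67 and 610
```

Even with generously sized jobs, the expanded cluster could in principle run hundreds of model variants concurrently.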
Reinforcement learning (RL) enhances the model's decision-making and adaptability by rewarding desired outputs and penalizing undesired ones. This phase is computationally intensive for complex tasks and critical for achieving human-like performance.
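The reward-and-penalize loop described above can be sketched with a minimal policy-gradient (REINFORCE-style) update over two candidate responses: rewarding one response and penalizing the other steadily shifts the policy toward the rewarded behavior. This is a toy illustration, not xAI's training setup.

```python
import math
import random

random.seed(0)

logits = [0.0, 0.0]          # policy parameters over two candidate responses
REWARD = {0: -1.0, 1: 1.0}   # desired response gets +1, undesired gets -1
lr = 0.1

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

for step in range(500):
    p = softmax(logits)
    a = 0 if random.random() < p[0] else 1  # sample a response from the policy
    r = REWARD[a]
    # REINFORCE update: grad of log pi(a) w.r.t. logit i is onehot(a) - p[i]
    for i in range(2):
        grad = (1.0 if i == a else 0.0) - p[i]
        logits[i] += lr * r * grad

p_final = softmax(logits)
print(round(p_final[1], 3))  # probability of the rewarded response climbs toward 1
```

The computational cost in practice comes from generating and scoring candidate outputs at scale, which is why RL for frontier models consumes so much of the cluster.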
First Data Center:
Apply RL to Grok 3.5 or smaller models to refine their behavior, such as improving response coherence or task-specific accuracy.