GPT-4 can output roughly 25,000 words of text. That is enough to draft a higher-quality long-form story, while GPT-3.5 could only produce a very short one.
GPT-4 can score 1410 on the SAT versus 1260 for GPT-3.5.
GPT-4 can score 161 on the LSAT versus 149 for GPT-3.5.
GPT-4 can score in the 99th percentile on the GRE verbal test (a graduate school admissions exam) versus the 63rd percentile for GPT-3.5.
GPT-4 is a Transformer-based model pre-trained to predict the next token in a document. The post-training alignment process results in improved performance on measures of factuality and adherence to desired behavior. A core component of this project was developing infrastructure and optimization methods that behave predictably across a wide range of scales. This allowed us to accurately predict some aspects of GPT-4's performance based on models trained with no more than 1/1,000th the compute of GPT-4.
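As a rough illustration of the pre-training objective described above, the sketch below shows a single next-token-prediction training step in PyTorch. Everything in it (the tiny one-layer model, the vocabulary size, the random tokens) is a hypothetical stand-in, not GPT-4's actual architecture or data.

```python
# Minimal sketch: next-token prediction with cross-entropy loss.
import torch
import torch.nn as nn

vocab_size, d_model = 1000, 64                         # hypothetical sizes
embed = nn.Embedding(vocab_size, d_model)
block = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
head  = nn.Linear(d_model, vocab_size)

tokens = torch.randint(0, vocab_size, (2, 16))         # stand-in for real documents
inputs, targets = tokens[:, :-1], tokens[:, 1:]        # target = input shifted by one

# Causal mask: each position may only attend to earlier tokens
mask = nn.Transformer.generate_square_subsequent_mask(inputs.size(1))
hidden = block(embed(inputs), src_mask=mask)
logits = head(hidden)                                  # (batch, seq, vocab)

# The pre-training loss: how well each next token was predicted
loss = nn.functional.cross_entropy(logits.reshape(-1, vocab_size),
                                   targets.reshape(-1))
print(loss.item())
```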
A large focus of the GPT-4 project was building a deep learning stack that scales predictably. The primary reason is that for very large training runs like GPT-4, it is not feasible to do extensive model-specific tuning. To address this, we developed infrastructure and optimization methods that have very predictable behavior across multiple scales. These improvements allowed us to reliably predict some aspects of the performance of GPT-4 from smaller models trained using 1,000×–10,000× less compute.
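One way to picture that kind of prediction is a scaling-law fit: train several small runs, fit a curve of final loss against compute, and read the curve off at full scale. The sketch below assumes a simple power law with an irreducible loss floor; the functional form and every number in it are illustrative assumptions, not figures from the GPT-4 report.

```python
# Minimal sketch: extrapolating loss from small runs via a power-law fit.
import numpy as np
from scipy.optimize import curve_fit

def power_law(compute, a, b, c):
    # c is an irreducible loss floor; a and b shape the decaying term
    return a * compute ** (-b) + c

# Hypothetical (normalized compute, final loss) pairs from runs using
# roughly 10x to 10,000x less compute than the target model.
compute = np.array([1e-4, 3e-4, 1e-3, 3e-3, 1e-2, 1e-1])
loss    = np.array([4.98, 4.38, 3.82, 3.39, 3.00, 2.41])

params, _ = curve_fit(power_law, compute, loss, p0=(1.0, 0.2, 1.5))
a, b, c = params

# Evaluate the fitted curve at the full run (normalized compute = 1.0)
predicted = power_law(1.0, a, b, c)
print(f"a={a:.2f}, b={b:.2f}, c={c:.2f}; predicted full-scale loss: {predicted:.2f}")
```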