What does "PhD-level" AI mean? OpenAI's rumored $20,000 agent plan explained.
The AI industry has a new buzzword: "PhD-level AI." According to a report from The Information, OpenAI may be planning to launch several specialized AI "agent" products, including a $20,000 monthly tier focused on supporting "PhD-level research." Other reportedly planned agents include a "high-income knowledge worker" assistant at $2,000 monthly and a software developer agent at $10,000 monthly.
OpenAI has not yet confirmed these prices, but the company has invoked PhD-level AI capabilities before. So what exactly constitutes "PhD-level AI"? The term refers to models that supposedly perform tasks requiring doctoral-level expertise: conducting advanced research, writing and debugging complex code without human intervention, and analyzing large datasets to generate comprehensive reports. The key claim is that these models can tackle problems that typically require years of specialized academic training.
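To make that "agent" framing concrete, here is a minimal, purely illustrative sketch of what a research-agent loop might look like: decompose a task, work the steps, synthesize a report. Nothing here reflects OpenAI's actual products or API; query_model and research_agent are hypothetical placeholders.

```python
# Illustrative sketch only: a toy "research agent" loop in the spirit of the
# rumored "PhD-level research" tier. Not OpenAI's product or API.

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to a hosted LLM."""
    return f"[model response to: {prompt[:40]}...]"

def research_agent(task: str, max_steps: int = 3) -> str:
    """Decompose a research task, work each step, then synthesize a report."""
    plan = query_model(f"Break this research task into {max_steps} steps: {task}")
    findings = [
        query_model(f"Carry out step {i + 1} of this plan:\n{plan}\nTask: {task}")
        for i in range(max_steps)
    ]
    # The rumored agents would end by producing a cited, comprehensive report.
    return query_model("Write a report with citations from:\n" + "\n".join(findings))

print(research_agent("Summarize recent work on test-time compute scaling"))
```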
Companies like OpenAI base their "PhD-level" claims on performance in specific benchmark tests. For example, OpenAI's o1 series models reportedly performed well in science, coding, and math tests, with results similar to those of human PhD students on challenging tasks. The company's Deep Research tool, which can generate research papers with citations, scored 26.6 percent on "Humanity's Last Exam," a comprehensive evaluation covering over 3,000 questions across more than 100 subjects.
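Whatever one makes of the "PhD-level" label, a benchmark score like 26.6 percent reduces to simple accuracy: correct answers divided by total questions. The toy harness below shows that arithmetic; the questions and the model_answer stub are invented placeholders, not the actual exam.

```python
# Minimal sketch of benchmark scoring: accuracy = correct / total * 100.
# The two "questions" and model_answer() are hypothetical placeholders.

questions = [
    {"prompt": "Q1: ...", "answer": "A"},
    {"prompt": "Q2: ...", "answer": "C"},
]

def model_answer(prompt: str) -> str:
    """Hypothetical model call; always returns choice 'A' here."""
    return "A"

def score(benchmark: list[dict]) -> float:
    correct = sum(model_answer(q["prompt"]) == q["answer"] for q in benchmark)
    return 100.0 * correct / len(benchmark)

print(f"score: {score(questions):.1f}%")  # 50.0% on this toy set
```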
OpenAI's latest advancement along these lines comes from their o3 and o3-mini models, announced in December. These models build upon the o1 family launched earlier last year. Like o1, the o3 models use what OpenAI calls "private chain of thought," a simulated reasoning technique where the model runs through an internal dialog and iteratively works through issues before presenting a final answer.
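OpenAI has not published how that internal dialog actually works, but the general shape of a hidden scratchpad is easy to sketch. In the toy version below, llm is a hypothetical stand-in for a model call; the scratchpad grows across rounds, and only the distilled final answer reaches the user.

```python
# Conceptual sketch of "private chain of thought": the model extends a
# hidden scratchpad over several rounds, then emits only a final answer.
# llm() is a hypothetical placeholder, not OpenAI's API or o3's internals.

def llm(prompt: str) -> str:
    """Hypothetical stand-in for a hosted model call."""
    return f"[reasoning about: {prompt[:30]}...]"

def answer_with_hidden_reasoning(question: str, rounds: int = 3) -> str:
    scratchpad = ""
    for i in range(rounds):
        # Each round extends a private scratchpad the user never sees.
        scratchpad += llm(f"Step {i + 1}: reason about {question}\n{scratchpad}") + "\n"
    # Only the final, distilled answer is returned to the user.
    return llm(f"Given this reasoning:\n{scratchpad}\nAnswer: {question}")

print(answer_with_hidden_reasoning("What is 17 * 24?"))
```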
This approach ostensibly mirrors how human researchers spend time thinking through complex problems rather than firing off immediate answers. According to OpenAI, the more compute a model spends on this inference-time reasoning, the better its answers get. So here's the key point: for $20,000 a month, a customer would presumably be buying a very large budget of thinking time for the AI model to spend on difficult problems.
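One well-documented way to convert extra inference-time compute into better answers is self-consistency sampling: draw many independent answers and keep the majority. Whether OpenAI's pricier tiers work this way is purely an assumption; the runnable toy below only demonstrates why a bigger sampling budget tends to help.

```python
# Sketch of one generic way to spend more inference-time compute:
# self-consistency, i.e., sample N answers and take a majority vote.
# sample_answer() is a toy stochastic model that is right 60% of the time;
# nothing here is claimed about how OpenAI's tiers actually work.

import random
from collections import Counter

def sample_answer(question: str) -> str:
    """Toy model: returns the right answer ('42') with probability 0.6."""
    return "42" if random.random() < 0.6 else str(random.randint(0, 99))

def majority_vote(question: str, n_samples: int) -> str:
    votes = Counter(sample_answer(question) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

# A bigger sampling budget makes the majority answer more reliable.
for n in (1, 5, 25):
    trials = 200
    hits = sum(majority_vote("toy question", n) == "42" for _ in range(trials))
    print(f"n={n:2d}: {hits / trials:.0%} correct")
```

With a single sample, the toy model is right about 60 percent of the time; at 25 samples, the majority answer is right almost always. That scaling behavior is the basic economics behind selling "more thinking time."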
According to OpenAI, o3 earned a record-breaking score on the ARC-AGI visual reasoning benchmark, reaching 87.5 percent in high-compute testing—comparable to human performance at an 85 percent threshold. The model also scored 96.7 percent on the 2024 American Invitational Mathematics Exam, missing just one question, and reached 87.7 percent on GPQA Diamond, which contains graduate-level biology, physics, and chemistry questions.
On the FrontierMath benchmark from Epoch AI, o3 solved 25.2 percent of problems, while no other model has exceeded 2 percent, suggesting a leap in mathematical reasoning capabilities over the previous model.