Recent work demonstrates AI generating novel mathematical objects (Bellman functions for optimal control) quickly, linking them to isoperimetric profiles and the Takagi function (which is related to the Riemann Hypothesis). It advances the understanding of square-function instabilities without changing the world overnight, but it highlights AI's role in taking "small steps" toward deeper insights in stochastic processes and analysis.
Significant, decades-old open math problems have also been solved by OpenAI's GPT-5.2.
Paul Erdős, the prolific Hungarian mathematician, posed over 1,000 open problems across fields like combinatorics, number theory, and graph theory. Many remain unsolved, and some carry cash prizes. Recently, large language models (LLMs) like OpenAI's GPT-5.2 (released in late 2025) have made headlines for contributing to solutions, often autonomously or with minimal human guidance. Progress is tracked on erdosproblems.com, a site maintained by mathematician Thomas Bloom, and verified by experts like Terence Tao (a Fields Medalist).
Overall, since Christmas 2025, 15 Erdős problems have been marked solved, with 11 crediting AI involvement. Tao's analysis identifies 8 cases of meaningful autonomous AI progress and 6 where AI built on prior research.
AI doesn't just generate initial proofs; it also excels at iteratively refining them. For instance, in the process of turning raw AI-generated arguments into full research papers, tools like GPT can:
– automatically rewrite sections for better clarity;
– adjust phrasing, variable choices, or logical flow;
– incorporate historical context, literature references, and natural-language explanations;
– produce multiple drafts quickly, raising output from a "generic AI-produced document" to something approaching acceptable research-paper quality.
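The multi-draft refinement loop described above can be sketched in a few lines. This is an illustrative toy, not any tool's actual implementation: the `revise` function stands in for an LLM rewrite pass (here a stub that just strips filler words so the loop is runnable), and the loop stops once a pass no longer changes the draft.

```python
# Hypothetical sketch of iterative draft refinement.
# `revise` is a stand-in for an LLM rewrite call, stubbed here
# as a filler-word remover so the example runs without any API.

FILLER = {"basically", "very", "clearly"}

def revise(draft: str) -> str:
    """Stub rewrite pass: drop filler words, keep everything else."""
    return " ".join(w for w in draft.split() if w.lower() not in FILLER)

def refine(draft: str, passes: int = 3) -> list[str]:
    """Produce successive drafts, stopping early once a pass is a fixed point."""
    drafts = [draft]
    for _ in range(passes):
        nxt = revise(drafts[-1])
        if nxt == drafts[-1]:
            break  # no further change; the draft has converged
        drafts.append(nxt)
    return drafts

if __name__ == "__main__":
    print(refine("The bound is basically very clearly tight.")[-1])
```

A real pipeline would replace the stub with model calls and keep every intermediate draft for human review, which is the part the loop structure above preserves.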
In one specific example Tao referenced (the autonomous solve of Erdős problem #728), the collaboration led to a new writeup of the proof that he judged to be within the ballpark of an acceptable standard for a research paper, with room for improvement but far beyond the initial raw output.
Tao describes this as shifting proof-writing toward a search problem at scale. AI can generate thousands of mini-lemmas, variations, or exposition styles, then use checkers (like the Lean proof assistant) to validate and cull the weak ones, while humans focus on high-level direction. He likens this rapid iteration to "vibe-coding," and sees it as complementary to human strengths: a kind of useful vibe-math proofing that lets far more possibilities be considered far more rapidly, so a good human mathematician can have their research enhanced and sped up.
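To make the validate-and-cull step concrete, here is a toy mini-lemma of the kind such a pipeline might generate. This is purely illustrative (not from the Erdős work), assuming Lean 4 with Mathlib: the candidate is kept only because Lean's kernel accepts the proof; a flawed candidate would be rejected and culled automatically.

```lean
import Mathlib

-- Toy "mini-lemma": a sum of squares of reals is nonnegative.
-- The kernel checks the proof; failure would cull the candidate.
theorem sum_sq_nonneg (a b : ℝ) : 0 ≤ a ^ 2 + b ^ 2 :=
  add_nonneg (sq_nonneg a) (sq_nonneg b)
```

At scale, thousands of such candidates can be checked mechanically, leaving humans to judge which surviving lemmas actually advance the argument.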