The authors show that:
- Before 2010, training compute grew in line with Moore's law, doubling roughly every 20 months.
- With the rise of Deep Learning in the early 2010s, the scaling of training compute accelerated, doubling approximately every 6 months.
- In late 2015, a new trend emerged as firms developed large-scale ML models with 10- to 100-fold larger training compute requirements.
Based on these observations, they split the history of compute in ML into three eras: the Pre-Deep-Learning Era, the Deep Learning Era, and the Large-Scale Era. Overall, the work highlights the fast-growing compute requirements for training advanced ML systems.
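To make the quoted doubling times concrete, the sketch below converts them into annual multiplicative growth factors (a standard exponential-growth calculation; the 20-month and 6-month figures are taken from the trends above, everything else is illustrative):

```python
def annual_growth_factor(doubling_months: float) -> float:
    """Multiplicative growth per year implied by a given doubling time."""
    return 2.0 ** (12.0 / doubling_months)

# Doubling times quoted above:
pre_2010 = annual_growth_factor(20)  # Moore's-law-like trend, ~1.5x per year
deep_era = annual_growth_factor(6)   # Deep Learning Era, ~4x per year

print(f"~{pre_2010:.1f}x per year before 2010")
print(f"~{deep_era:.1f}x per year in the Deep Learning Era")
```

A 6-month doubling time thus means training compute grows by roughly a factor of four every year, versus about 1.5x per year under the earlier Moore's-law-like trend.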
They conduct a detailed investigation into the compute demand of milestone ML models over time, making the following contributions:
1. They curate a dataset of 123 milestone Machine Learning systems, annotated with the compute it took to train them.