The researchers show the following:
Before 2010, training compute grew in line with Moore's Law, doubling roughly every 20 months.
Since the advent of Deep Learning in the early 2010s, the scaling of training compute has accelerated, doubling approximately every 6 months.
In late 2015, a new trend emerged as firms developed large-scale ML models with 10- to 100-fold larger training compute requirements.
Based on these observations, they split the history of compute in ML into three eras: the Pre Deep Learning Era, the Deep Learning Era, and the Large-Scale Era. Overall, the work highlights the fast-growing compute requirements for training advanced ML systems.
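Doubling times like these follow from a log-linear fit of training compute against calendar time. Below is a minimal sketch of that kind of fit, assuming NumPy and synthetic data points chosen to mimic a roughly 6-month doubling trend; the numbers are illustrative, not the paper's actual dataset.

```python
import numpy as np

# Synthetic (year, training FLOP) pairs chosen to mimic a ~6-month
# doubling trend -- illustrative numbers, not the paper's data.
years = np.array([2012.0, 2014.0, 2016.0, 2018.0, 2020.0])
flops = 10.0 ** (17.0 + 0.6 * (years - 2012.0))

# Fit log10(compute) as a linear function of calendar time.
# The slope is in orders of magnitude per year.
slope, _ = np.polyfit(years, np.log10(flops), 1)

# Growth rate to doubling time: slope * T = log10(2)  =>  T = log10(2) / slope.
doubling_time_months = 12.0 * np.log10(2.0) / slope
print(f"Estimated doubling time: {doubling_time_months:.1f} months")  # ~6.0
```

Fitting in log space is what makes a constant doubling time show up as a straight line, which is how the three eras appear as three distinct slopes.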
They present a detailed investigation into the compute demand of milestone ML models over time and make the following contributions:
1. They curate a dataset of 123 milestone Machine Learning systems, annotated with the compute it took to train them.
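Annotating a model with its training compute typically relies on counting arithmetic operations. The sketch below uses a common heuristic for dense transformer models, roughly 6 FLOP per parameter per training token; both the heuristic and the example figures are illustrative assumptions here, not necessarily the paper's exact estimation procedure.

```python
def training_compute_flop(n_params: float, n_tokens: float) -> float:
    """Operation-counting heuristic for dense transformers:
    ~6 FLOP per parameter per training token
    (~2 for the forward pass, ~4 for the backward pass).
    """
    return 6.0 * n_params * n_tokens

# Hypothetical example: a 175B-parameter model trained on 300B tokens.
print(f"{training_compute_flop(175e9, 300e9):.2e} FLOP")  # ~3.15e+23
```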