The U.S. Department of Energy's Argonne National Laboratory and Hewlett Packard Enterprise (NYSE: HPE) have unveiled Polaris, a new testbed supercomputer for preparing critical workloads for future exascale systems. Polaris will deliver up to four times the performance of Argonne's current supercomputers.
Polaris will enable scientists and developers to test and optimize software codes and applications to tackle a range of artificial intelligence (AI), engineering and scientific projects planned for the forthcoming exascale supercomputer, Aurora, a joint collaboration between Argonne, Intel and HPE.
The $500+ million exaflop-class Aurora was planned for 2021 but has slipped to 2022-2023, largely because it is waiting on Intel's Sapphire Rapids server chips. The original plan called for a 180 petaflop Aurora in 2018, but delays in earlier Intel chips forced a redesign.
Polaris will deliver approximately 44 petaflops of peak double-precision performance and nearly 1.4 exaflops of theoretical AI performance, the latter figure based on mixed-precision compute. The 1.4 AI exaflops number is not measured in standard FP64 (64-bit floating point), the precision used for conventional supercomputer performance metrics.
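To make the gap between the two figures concrete, here is a rough back-of-the-envelope sketch in Python. It assumes Polaris's publicly reported configuration of 560 nodes with four NVIDIA A100 GPUs each, and NVIDIA's published per-GPU peak numbers; none of these figures come from this article, so treat them as assumptions rather than official specifications.

# Back-of-the-envelope check of the Polaris peak numbers quoted above.
# Assumed configuration (not stated in this article): 560 nodes, each with
# 4 NVIDIA A100 GPUs, using NVIDIA's published per-GPU peak throughput.

NODES = 560
GPUS_PER_NODE = 4
FP64_TENSOR_TFLOPS_PER_GPU = 19.5   # A100 FP64 tensor-core peak
FP16_SPARSE_TFLOPS_PER_GPU = 624.0  # A100 FP16 tensor-core peak with sparsity

gpus = NODES * GPUS_PER_NODE

fp64_pflops = gpus * FP64_TENSOR_TFLOPS_PER_GPU / 1_000       # teraflops -> petaflops
ai_eflops = gpus * FP16_SPARSE_TFLOPS_PER_GPU / 1_000_000     # teraflops -> exaflops

print(f"FP64 peak:             {fp64_pflops:.1f} PFLOPS")  # ~43.7 PF, i.e. the ~44 PF figure
print(f"Mixed-precision peak:  {ai_eflops:.2f} EFLOPS")    # ~1.40 EF, i.e. the AI exaflops figure

Under those assumptions the arithmetic lines up with the quoted numbers: the ~44 petaflop figure corresponds to FP64 throughput, while the ~1.4 exaflop figure only appears when counting low-precision tensor-core operations, which is why the two are not comparable.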