The U.S. Department of Energy's Argonne National Laboratory and Hewlett Packard Enterprise (NYSE: HPE) unveiled a new testbed supercomputer that will deliver up to four times faster performance than Argonne's current supercomputers and prepare critical workloads for future exascale systems.
Polaris will enable scientists and developers to test and optimize software codes and applications to tackle a range of artificial intelligence (AI), engineering and scientific projects planned for the forthcoming exascale supercomputer, Aurora, a joint collaboration between Argonne, Intel and HPE.
The $500+ million exaflop Aurora was planned for 2021 but has been delayed to 2022-2023, waiting on Intel's Sapphire Rapids server chips. The original plan called for a 180-petaflop Aurora in 2018, but delays in earlier Intel chips forced a new plan.
Polaris will deliver approximately 44 petaflops of peak double-precision performance and nearly 1.4 exaflops of theoretical AI performance, the latter based on mixed-precision compute capabilities. The 1.4 AI exaflops figure does not use standard FP64 (64-bit floating point), which is the metric used for traditional supercomputer performance rankings.
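The gap between the 44-petaflop and 1.4-exaflop figures comes down to which number format is being counted. As a rough sketch, the back-of-the-envelope arithmetic below shows how such peak figures are typically derived. The node count, GPUs per node, and per-GPU peak rates are assumptions based on the publicly reported Polaris configuration (560 nodes with four NVIDIA A100 GPUs each), not figures stated in this article.

    # Sketch: how peak FP64 vs. mixed-precision "AI" FLOPS are typically derived.
    # Node/GPU counts and per-GPU peaks are assumptions from the publicly
    # reported Polaris configuration, not from this article.

    NODES = 560                 # assumed node count
    GPUS_PER_NODE = 4           # assumed NVIDIA A100 GPUs per node

    FP64_TENSOR_TFLOPS = 19.5   # A100 peak double precision (tensor core), teraflops
    FP16_SPARSE_TFLOPS = 624.0  # A100 peak FP16 mixed precision with sparsity, teraflops

    total_gpus = NODES * GPUS_PER_NODE

    peak_fp64_pflops = total_gpus * FP64_TENSOR_TFLOPS / 1e3   # teraflops -> petaflops
    peak_ai_eflops = total_gpus * FP16_SPARSE_TFLOPS / 1e6     # teraflops -> exaflops

    print(f"Peak FP64: {peak_fp64_pflops:.1f} petaflops")      # ~43.7, i.e. about 44 PF
    print(f"Theoretical AI peak: {peak_ai_eflops:.2f} exaflops")  # ~1.40 EF

The same hardware produces both numbers; the roughly 30x difference comes entirely from counting low-precision tensor-core operations instead of standard 64-bit floating point.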