Scientists at DGIST in Korea and at UC Irvine and UC San Diego in the US have developed a computer architecture that runs unsupervised machine-learning algorithms faster, and with significantly less energy, than state-of-the-art graphics processing units. The key is processing data in an all-digital format, directly where it is stored in memory. The researchers call the new architecture DUAL.
Researchers have long explored processing-in-memory (PIM) approaches. Most PIM architectures, however, are analog and require analog-to-digital and digital-to-analog converters, which consume a large share of a chip's power and area. They also suit supervised machine learning, which trains on labeled datasets, better than unsupervised learning.
To overcome these issues, Kim and his colleagues developed DUAL, which stands for digital-based unsupervised learning acceleration. DUAL performs computations on digital data stored inside memory. It works by mapping every data point into a high-dimensional space; imagine data points stored across many locations in the human brain.
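The high-dimensional mapping described above resembles hyperdimensional computing, where each data point becomes a very long bipolar vector and similar points map to similar vectors. The article does not give DUAL's exact encoding, so the following is a minimal sketch of a generic hyperdimensional encoder, with all dimensions, level counts, and function names chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

D = 10_000          # hypervector dimensionality (typical in HD computing)
N_FEATURES = 16     # number of input features (illustrative)
LEVELS = 32         # quantization levels for feature values (illustrative)

# One random bipolar hypervector per feature position and per value level.
position_hvs = rng.choice([-1, 1], size=(N_FEATURES, D))
level_hvs = rng.choice([-1, 1], size=(LEVELS, D))

def encode(x):
    """Map a feature vector with values in [0, 1) to a D-dimensional
    hypervector: bind each feature's position HV with its quantized
    level HV (elementwise multiply), then bundle (sum) and binarize."""
    levels = np.minimum((x * LEVELS).astype(int), LEVELS - 1)
    bound = position_hvs * level_hvs[levels]   # bind position with value
    return np.sign(bound.sum(axis=0))          # bundle and binarize

a = encode(rng.random(N_FEATURES))
b = encode(rng.random(N_FEATURES))
similarity = (a @ b) / D   # cosine-like similarity for bipolar vectors
```

Because encoding and similarity reduce to elementwise multiplies, additions, and sign operations on digital vectors, they are the kind of computation a digital PIM design can execute where the data sits, without analog converters.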
The scientists found that DUAL efficiently speeds up many different clustering algorithms across a wide range of large-scale datasets, and significantly improves energy efficiency over a state-of-the-art graphics processing unit. They believe it is the first digital-based PIM architecture to accelerate unsupervised machine learning.
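Clustering in a high-dimensional bipolar space can be sketched as a k-means-style loop: assign each point to the centroid with the highest dot-product similarity, then rebuild each centroid by bundling (summing) its members. This is not DUAL's published algorithm, just an illustration of why such workloads suit in-memory digital hardware; the toy data and all parameters below are made up:

```python
import numpy as np

rng = np.random.default_rng(1)
D, K, N = 10_000, 3, 300   # hypervector width, clusters, points (illustrative)

# Toy data: N random bipolar hypervectors standing in for encoded points.
points = rng.choice([-1, 1], size=(N, D))

# Initialize K centroids from randomly chosen points.
centroids = points[rng.choice(N, K, replace=False)].astype(float)

for _ in range(5):
    # Assignment step: highest dot-product similarity; for bipolar
    # vectors this is equivalent to smallest Hamming distance.
    sims = points @ centroids.T          # shape (N, K)
    labels = sims.argmax(axis=1)
    # Update step: bundle (sum) each cluster's members into its centroid.
    for k in range(K):
        members = points[labels == k]
        if len(members):
            centroids[k] = members.sum(axis=0)
```

The dominant cost is the `points @ centroids.T` similarity search, a massively parallel multiply-accumulate over data that never needs to leave memory, which is exactly the operation PIM architectures aim to accelerate.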
It is not clear whether DUAL's improvements would carry over to the Cerebras wafer-scale chips. The Cerebras wafer-scale AI chips have 18 gigabytes of on-chip memory, which would make them an ideal platform for superior processing in memory.