Scientists at DGIST in Korea and at UC Irvine and UC San Diego in the US have developed a computer architecture that processes unsupervised machine learning algorithms faster than state-of-the-art graphics processing units while consuming significantly less energy. The key is processing data where it is stored, in computer memory, and keeping it in an all-digital format. The researchers presented the new architecture, called DUAL.
Scientists have been looking into processing-in-memory (PIM) approaches. But most PIM architectures are analog-based and require analog-to-digital and digital-to-analog converters, which consume a large share of chip power and area. These analog architectures also work better with supervised machine learning, which relies on labeled datasets to train the algorithm.
To overcome these issues, Kim and his colleagues developed DUAL, which stands for digital-based unsupervised learning acceleration. DUAL enables computations on digital data stored inside computer memory. It works by mapping all the data points into a high-dimensional space; imagine data points stored in many locations within the human brain.
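The article does not spell out DUAL's exact encoding, but the general hyperdimensional-computing idea it builds on can be sketched in a few lines. The Python sketch below is a hypothetical illustration, not the authors' implementation: it maps ordinary feature vectors to binary hypervectors with a random projection and sign thresholding, producing the kind of all-digital, high-dimensional representation that bitwise in-memory hardware can operate on.

```python
import numpy as np

def encode_hypervectors(X, dim=10000, seed=0):
    """Map each row of X (n x d features) to a {0,1} hypervector of length dim.

    Uses a random projection followed by sign thresholding, a common
    hyperdimensional-computing encoding. Hypothetical illustration only;
    the actual DUAL encoding may differ.
    """
    rng = np.random.default_rng(seed)
    proj = rng.standard_normal((dim, X.shape[1]))  # one random vector per output bit
    # Sign-threshold the projections to obtain an all-digital representation.
    return (X @ proj.T > 0).astype(np.uint8)
```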
The scientists found that DUAL efficiently speeds up many different clustering algorithms across a wide range of large-scale datasets and significantly improves energy efficiency compared with a state-of-the-art graphics processing unit. The researchers believe this is the first digital-based PIM architecture that can accelerate unsupervised machine learning.
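To see why such binary representations suit clustering in digital memory, the sketch below (again an assumed illustration, not the published DUAL pipeline; hamming_kmeans is a hypothetical helper) runs a k-means-style loop in which similarity is Hamming distance, so each iteration reduces to XOR and bit counting, operations a digital PIM array can perform where the data sits.

```python
import numpy as np

def hamming_kmeans(H, k=3, iters=20, seed=0):
    """Cluster binary hypervectors H (n x dim, values 0/1) by Hamming distance.

    Centroids stay binary via a per-bit majority vote, so every step reduces
    to XOR and bit counting. Hypothetical illustration, not the DUAL pipeline.
    """
    rng = np.random.default_rng(seed)
    centroids = H[rng.choice(len(H), size=k, replace=False)].copy()
    labels = np.zeros(len(H), dtype=int)
    for _ in range(iters):
        # Hamming distance: XOR the bits, then count how many differ.
        dists = np.stack([np.count_nonzero(H ^ c, axis=1) for c in centroids], axis=1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = H[labels == j]
            if len(members):
                centroids[j] = (members.mean(axis=0) >= 0.5).astype(np.uint8)
    return labels, centroids

# Toy usage (hypothetical data):
# X = np.random.rand(200, 16)
# H = encode_hypervectors(X, dim=4096)
# labels, _ = hamming_kmeans(H, k=4)
```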
It is not clear whether the DUAL architecture offers improvements that could carry over to the Cerebras wafer-scale chips. The Cerebras wafer-scale AI chips have 18 gigabytes of on-chip memory, which would be an ideal substrate for superior processing in memory.