A recent analysis of the future of warfare indicates that countries that continue to develop AI for military use risk losing control of the battlefield. Those that don't risk eradication. Whether you're for or against the AI arms race: it's happening. Here's what that means, according to a trio of experts.
Researchers from ASRC Federal, a private company that provides support for the intelligence and defense communities, and the University of Maryland recently published a paper on the pre-print server arXiv discussing the potential ramifications of integrating AI systems into modern warfare.
The paper focuses on the near-future consequences of the AI arms race, under the assumption that AI will not somehow run amok or take over. In essence, it's a short, sober, and terrifying look at how these various machine learning technologies will play out, based on an analysis of current cutting-edge military AI and its predicted integration at scale.
The paper begins with a warning about impending catastrophe, explaining that there will almost certainly be a "normal accident" involving AI – an incident we should expect, though of a nature and scope we cannot predict. Basically, the militaries of the world will break some civilian eggs making the AI arms race omelet:
Study of this field began with accidents such as Three Mile Island, but AI technologies embody similar risks. Finding and exploiting these weaknesses to induce defective behavior will become a permanent feature of military strategy.
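To make that quoted claim concrete: the "weaknesses" in question are the kind studied in adversarial machine learning, where tiny, deliberate input perturbations cause a trained model to misbehave. Below is a minimal sketch of one such attack (the fast gradient sign method), assuming a trained PyTorch image classifier; the model, function names, and parameters are illustrative assumptions, not anything taken from the paper.

```python
# Illustrative sketch only: a fast-gradient-sign attack of the sort
# adversarial-ML research uses to "induce defective behavior" in a
# trained classifier. The model and epsilon value are hypothetical.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.03):
    """Return a copy of `image` perturbed to encourage misclassification."""
    image = image.clone().detach().requires_grad_(True)
    # Compute the classification loss for the true label.
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Nudge every pixel in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    # Keep pixel values in the valid [0, 1] range.
    return adversarial.clamp(0, 1).detach()
```

A perturbation this small is typically imperceptible to a human observer, which is precisely why the authors expect probing for such failure modes to become a standing element of military strategy.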
If you're thinking killer robots duking it out in our cities while civilians run screaming for shelter, you're not wrong – but robots as a proxy for soldiers isn't humanity's biggest concern when it comes to AI warfare. This paper discusses what happens after we reach the point at which it becomes obvious humans are holding machines back in warfare.
According to the researchers, the problem isn't one we can frame as good versus evil. Sure, it's easy to say we shouldn't allow robots to murder humans autonomously, but that's not how the decision-making process of the future is going to work.
The researchers describe it as a slippery slope: