Microsoft is launching a new AI-powered moderation service that it says is designed to foster safer online environments and communities. Called Azure AI Content Safety, the new offering, available through the Azure AI product platform, offers a range of AI models trained to detect "inappropriate" content across images and text.
The models — which can understand text in English, Spanish, German, French, Japanese, Portuguese, Italian, and Chinese — assign a severity score to flagged content, indicating to moderators which content requires action.

"Microsoft has been working on solutions in response to the challenge of harmful content appearing in online communities for over two years. We recognized that existing systems weren't effectively taking into account context or able to work in multiple languages," a Microsoft spokesperson said via email. "New [AI] models are able to understand content and cultural context so much better. They are multilingual from the start … and they provide clear and understandable explanations, allowing users to understand why content was flagged or removed."
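To make the severity-score workflow concrete, here is a minimal, hypothetical sketch of how a moderation pipeline built on such a service might route content. The category names, the toy blocklist scorer, and the review threshold are all illustrative assumptions, not the real Azure AI Content Safety API:

```python
# Hypothetical sketch of a severity-scoring moderation workflow.
# The scorer below is a stand-in for a call to a hosted moderation
# model; category names and scores are illustrative only.

CATEGORIES = ("hate", "violence", "sexual", "self_harm")

def score_text(text: str) -> dict:
    """Toy scorer: assigns a severity per category from a tiny blocklist.

    A real service would return model-derived scores; this stand-in just
    matches keywords so the routing logic below can be demonstrated.
    """
    blocklist = {"attack": ("violence", 4), "slur": ("hate", 6)}
    scores = {category: 0 for category in CATEGORIES}
    for word, (category, severity) in blocklist.items():
        if word in text.lower():
            scores[category] = max(scores[category], severity)
    return scores

def needs_review(scores: dict, threshold: int = 2) -> bool:
    """Route anything at or above the threshold to a human moderator."""
    return any(severity >= threshold for severity in scores.values())

result = score_text("Plan the attack at dawn")
print(result["violence"], needs_review(result))  # → 4 True
```

The key design point the article describes is the hand-off: the model only scores and flags, while the severity number tells human moderators what to prioritize.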