AI coding tools have made writing lengthy code much easier, but experts say that speed comes with new security risks and a continued need for human oversight.
Developers say artificial intelligence (AI) slashes a lot of the grunt work in writing code, but seasoned developers are spotting flaws at an alarming rate.
The security testing company Veracode published research in July, gathered from more than 100 large language model (LLM) AI tools, showing that while AI generates working code at astonishing speed, that code is also rife with exploitable flaws.
The report noted that 45 percent of code samples failed security tests and introduced vulnerabilities cataloged by the cybersecurity nonprofit Open Worldwide Application Security Project (OWASP).
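To make the finding concrete, the sketch below shows one of the most common OWASP-listed flaws that security scanners flag in generated code: SQL injection through string interpolation, next to the parameterized query that fixes it. This is a generic illustration, not an example taken from the Veracode report.

```python
# Generic illustration of an OWASP-listed flaw (SQL injection), not a
# sample from the Veracode study. Uses an in-memory SQLite database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # Vulnerable: user input is interpolated directly into the SQL string,
    # so crafted input like "' OR '1'='1" returns every row in the table.
    query = f"SELECT name, role FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Safe: a parameterized query treats the input as data, never as SQL.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

# The injection string dumps the whole table through the unsafe version...
print(find_user_unsafe("' OR '1'='1"))  # [('alice', 'admin')]
# ...but matches no rows once the query is parameterized.
print(find_user_safe("' OR '1'='1"))    # []
```

Automated tests of the kind Veracode describes probe generated code with exactly this sort of crafted input; code that builds queries by string formatting fails, while the parameterized version passes.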
Veracode researchers called the study's findings a "wake-up call for developers, security leaders, and anyone relying on AI to move faster."
Some experts say the high number of security flaws isn't shocking given AI's current limitations with coding.
"I'm surprised the percentage isn't higher. AI-generated code, even when it works, tends to have a lot of logical flaws that simply reflect a lack of context and thoughtfulness," Kirk Sigmon, programmer and partner at intellectual property law firm Banner Witcoff, told The Epoch Times.
Cybersecurity researcher and former mission operator for the Iris Lunar Rover, Harshvardhan Chunawala, compared AI code writing to home building. He said it's like having AI draft a quick blueprint for a house, but the blueprint might include doors that don't lock, windows that don't fit, or wiring that's unsafe.
And with AI's advance into critical digital infrastructure, he said the system isn't just making "blueprints" anymore, but ordering materials and beginning construction before a foundation inspection has taken place.