OpenAI As We Knew It Is Dead, Now It's A Loose Cannon In The Hands Of A Megalomaniac Technocrat
Sam Altman is a Technocrat who plays the long game. After first getting involved with OpenAI in 2015, he jockeyed to take the helm in 2019, but he was constrained by a board of directors. By 2023, the board saw through him and ousted him. Then the real Sam Altman appeared: he "clawed his way back to power," kicked the directors off the board, and reconstituted another board subservient to him.
By 2024, top executives saw through Altman's Hitleresque schemes and fled the company, leaving the safety team in shambles. Just this week, the company's Chief Technology Officer (CTO), Mira Murati, abruptly walked out.
Now Vox notes that "he's stripped the board of its control entirely" and taken dictatorial control of OpenAI.
Read the original, simple Charter below. Altman originally signed it; since then, he has violated every word of it.
OpenAI Charter
Our Charter describes the principles we use to execute on OpenAI's mission.
This document reflects the strategy we've refined over the past two years, including feedback from many people internal and external to OpenAI. The timeline to AGI remains uncertain, but our Charter will guide us in acting in the best interests of humanity throughout its development.
OpenAI's mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome. To that end, we commit to the following principles:
Broadly distributed benefits
We commit to use any influence we obtain over AGI's deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.
Long-term safety
We are committed to doing the research required to make AGI safe, and to driving the broad adoption of such research across the AI community.
We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be "a better-than-even chance of success in the next two years."
Technical leadership
To be effective at addressing AGI's impact on society, OpenAI must be on the cutting edge of AI capabilities—policy and safety advocacy alone would be insufficient.
We believe that AI will have broad societal impact before AGI, and we'll strive to lead in those areas that are directly aligned with our mission and expertise.
Cooperative orientation
We will actively cooperate with other research and policy institutions; we seek to create a global community working together to address AGI's global challenges.
We are committed to providing public goods that help society navigate the path to AGI. Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research.