Business Insider: Factcheck Your AI Stories Or Else
On Thursday afternoon, Business Insider staffers received a long-awaited memo from editor in chief Jamie Heller, outlining the Axel Springer-owned outlet's rules for using artificial intelligence. The guidelines had been eagerly anticipated since earlier this summer, when Mathias Döpfner, chief of the German publishing giant Axel Springer, urged employees at a corporate town hall to weave A.I. into their daily work, but left the specifics to individual publications.
Heller's note filled the information vacuum, spelling out both restrictions and expectations for staff. "This policy is designed to enable and encourage us to experiment with A.I., a rapidly changing technology, as we continue to pursue the biggest stories about business, tech, innovation, and more for our audience," she wrote in the memo, obtained by Status.
The guidelines unsurprisingly said that BI journalists can use A.I.—"just like any other tool"—for tasks such as research. And they said staffers "may use approved A.I. tools for specific tasks and enhancements for images and video." But it was, of course, the rules about using ChatGPT in writing that caught the attention of most staffers.
The policy explicitly stated that BI journalists can now use A.I. to assist in writing their stories. More specifically, in a FAQ sent to staffers, the outlet said it is acceptable for them to use A.I. to write first drafts. "Can I use ChatGPT to write my first draft? Yes, but you must make sure your final work is yours," the policy stated.
Notably, while BI's guidelines now permit reporters to use ChatGPT to generate first drafts, the outlet simultaneously discouraged the practice. "Writing is a valuable critical thinking process, and strong writing makes our journalism better and more distinct," the policy explained. "By writing your first draft yourself, you sharpen your thinking about the points you want to communicate and help ensure the writing is accurate and interesting—as our audience expects." The guidelines also stressed that the final product must remain the journalist's "own creative expression."
The policy makes BI one of the first major U.S. newsrooms to formally greenlight such far-reaching use of A.I., though other outlets are certainly warming to the technology. BI's rules also stop short of offering transparency to readers on the matter. Echoing Döpfner's comments at the company town hall this summer, the guidelines stated that "most uses of A.I. by our journalists do not require disclosure." If the company starts to publish content created solely by A.I. or without human review, it will be upfront with readers, the policy said. But when reporters use A.I. to assist with writing—even to produce entire first drafts—those stories likely will not include a disclaimer for audiences.