How WIRED Will Use Generative AI Tools

Like just about everyone else over the past few months, journalists have been experimenting with generative AI tools like ChatGPT to see if they can help us do our jobs better. AI programs can’t call sources and elicit information from them, but they can produce passable transcripts of those calls, and new generative AI tools can condense hundreds of pages of those transcripts into a digest.

But writing stories is another matter. I’ve taken a few fliers on it – sometimes with disastrous results. It turns out that current AI tools are pretty good at producing convincing (if formulaic) copy that is riddled with falsehoods.

This is the dilemma: we want to be on the front lines of new technology, but also to be appropriately ethical and prudent. Here are some ground rules for how we use the current suite of generative AI tools. We understand that AI will evolve, so we may modify our views over time, and we will acknowledge any changes in this post. We welcome responses in the comments.

Text generators (such as LaMDA and ChatGPT)

We do not publish stories with text generated by artificial intelligence, except when the fact that it was AI-generated is the point of the story. (In such cases we will disclose the use and flag any errors.) This applies not only to full stories but also to snippets, such as asking for a few boilerplate sentences about how Crispr works or what quantum computing is. It also applies to editorial text published on other platforms, such as email newsletters. (If we use it for non-editorial purposes such as marketing emails, which are already automated, we will disclose it.)

This is for obvious reasons: existing AI tools are prone to both errors and bias, and they often produce writing that is dull and unoriginal. We also believe that someone who writes for a living needs to constantly think about how best to express complex ideas in their own words. Finally, an AI tool may inadvertently plagiarize someone else’s words. If a writer uses one to create text for publication without disclosure, we will treat that as plagiarism.

We also do not publish text edited by AI. While using AI to shrink an existing 1,200-word story down to 900 words might seem less problematic than writing a story from scratch, we think there are still drawbacks. Aside from the risk that the AI tool will introduce factual errors or changes in meaning, editing is also a matter of judging what is most relevant, original, or interesting about a piece. That judgment depends on understanding both the subject and the readership, and AI can do neither.

We may try using artificial intelligence to suggest headlines or text for short social media posts. We currently generate many such suggestions manually, and an editor has to approve the final choices for accuracy. Using an AI tool to speed up idea generation will not fundamentally change this process.

We might try using artificial intelligence to generate story ideas. The AI might assist the brainstorming process with a prompt such as “Suggest stories about the impact of genetic testing on privacy” or “Provide a list of cities where predictive policing has been controversial.” This may save some time, and we will continue to explore how it can be useful. But some of our limited testing has shown that it can also produce false leads or dull ideas. In any case, the real work, which only humans can do, is in evaluating which ideas are worth pursuing. Where applicable, for any AI tool we use, we will acknowledge the sources it used to generate the information.
