Many journalists have strong reactions when they hear about artificial intelligence or machine learning being used in the newsroom. Most of those reactions are negative.
AI in the newsroom brings both benefits and drawbacks. They include (but are definitely not limited to):

Pros:
- Information can be published at large scale
- Large volumes of data can be processed quickly
- Text can be drafted automatically, leaving journalists free to add context
- News can be covered in real time
- Journalists are freed up to produce in-depth, high-value content

Cons:
- Fears that it could threaten human writers’ jobs
- Reliance on external datasets: changes in how data is selected, formatted, and shared by sources can break an automated pipeline. Think garbage in, garbage out.
- High development costs
- Ethical concerns
Despite the challenges, some newsrooms are implementing AI in their work in unique ways. A couple of years ago, we reviewed how newsrooms were using the technology to create AI reporters and analyze large datasets.
During the pandemic, more and more outlets experimented with the technology for tasks such as reporting COVID-19 case numbers and sending text-message updates to readers.
Here are a few ways that newsrooms are using AI and showing employees that it doesn’t have to be scary.
Columbia Journalism Review recently looked at how several newsrooms were using automated journalism to report on the pandemic. As the numbers of infections, deaths, and (eventually) vaccinations changed daily, AI offered a way to turn that external data into regular reports.
CJR’s Samuel Danzon-Chambaud explains: “This type of structured data that can fit into predictable story frames lays the groundwork for automated journalism, a computational process that creates automated pieces of news without any human intervention, except for the initial programming.”
The newsrooms in this study used automated journalism to create dashboards, newsletters, visualizations, and more.
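The idea Danzon-Chambaud describes — structured data slotted into a predictable story frame — can be illustrated with a short sketch. Everything here (the field names, the template wording, the sample numbers) is an invented example, not any newsroom's actual system:

```python
# Minimal sketch of template-driven automated journalism: one row of
# structured data is slotted into a predictable story frame to produce
# a draft paragraph. All field names and figures are illustrative.

TEMPLATE = (
    "{region} reported {new_cases:,} new COVID-19 cases on {date}, "
    "bringing the total to {total_cases:,}. That is {direction} of "
    "{delta:,} compared with the previous day."
)

def generate_story(record: dict) -> str:
    """Fill the story frame from one row of structured data."""
    delta = record["new_cases"] - record["prev_day_cases"]
    return TEMPLATE.format(
        region=record["region"],
        new_cases=record["new_cases"],
        date=record["date"],
        total_cases=record["total_cases"],
        direction="an increase" if delta >= 0 else "a decrease",
        delta=abs(delta),
    )

row = {
    "region": "Example County",
    "date": "March 1, 2021",
    "new_cases": 120,
    "prev_day_cases": 95,
    "total_cases": 14230,
}
print(generate_story(row))
```

Because the inputs arrive in the same shape every day, the same frame can generate hundreds of localized reports with no human intervention beyond the initial programming — exactly the property that makes this kind of data a fit for automation.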
Automating Local News
RADAR (Reporters And Data And Robots) was launched in 2018 with grant support from Google’s Digital News Innovation Fund. It provides AI-generated, data-driven articles to digital, print, and broadcast outlets across the UK and Ireland.
It filed 250,000 articles in its first 18 months. RADAR’s “Live Tech” blends human editorial skills with automation to produce quality content quickly and at massive scale.
At a time when local news deserts are spreading in the U.S. and around the world, this service utilizes AI tech to provide local communities with coverage to keep them informed.
The AP is also working with local newsrooms to better utilize AI. “We are not building tools to replace people,” Aimee Rinehart, The Associated Press’ program manager for AI, told Poynter. “We are building tools to automate tasks and hopefully broker opportunities for journalists to do deeper, richer stories.” The AP is currently surveying local newsrooms to determine where they stand with AI and will choose several to work with for a year of strategy experimentation.
KPCC-LAist has learned that actively engaging with the community is critical for breaking news stories like the pandemic and California wildfires. The newsroom worked with Quartz to use machine learning that could sort through thousands of reader questions and organize them by topic, theme, or trend.
This system allowed the team of journalists to quickly respond to readers, building stronger relationships and trust with the community.
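The core idea — automatically grouping thousands of similar reader questions so journalists can answer them by topic — can be sketched in a few lines. To be clear, the KPCC-LAist/Quartz system used a trained machine-learning model; the stdlib-only version below is only a simplified illustration that groups questions by word overlap, with invented sample questions and an arbitrary similarity threshold:

```python
# Simplified illustration of sorting reader questions by topic.
# Real systems use trained ML models; this sketch greedily groups
# questions whose word sets overlap (Jaccard similarity).

STOPWORDS = {"the", "a", "an", "is", "are", "do", "i", "to", "of",
             "for", "can", "how", "when", "where", "my", "in", "get"}

def tokens(question: str) -> set:
    """Lowercase the question and keep only its content words."""
    words = question.lower().replace("?", "").split()
    return {w for w in words if w not in STOPWORDS}

def jaccard(a: set, b: set) -> float:
    """Similarity of two word sets: shared words / total words."""
    return len(a & b) / len(a | b) if a | b else 0.0

def group_questions(questions, threshold=0.1):
    """Greedy grouping: each question joins the first existing group
    containing a similar-enough question, else starts a new group.
    The 0.1 threshold is an arbitrary choice for this toy example."""
    groups = []
    for q in questions:
        t = tokens(q)
        for g in groups:
            if any(jaccard(t, tokens(other)) >= threshold for other in g):
                g.append(q)
                break
        else:
            groups.append([q])
    return groups

questions = [
    "When can I get a COVID-19 vaccine?",
    "Where do I sign up for a vaccine appointment?",
    "Is my neighborhood under a wildfire evacuation order?",
    "How close is the wildfire to Pasadena?",
]
for group in group_questions(questions):
    print(group)
```

On these four sample questions, the sketch yields one vaccine group and one wildfire group, which is the kind of topical bucketing that lets a small team answer many readers at once.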
Caitlin Hernandez, a KPCC-LAist assistant producer, explained how the service helped a reader: “One question asker recently shared that she’d never heard of LAist until she was googling how to find answers to her COVID-19 questions. When we not only provided a space to ask but also a quick response, she found herself coming back to the site over and over — now saying she won’t go anywhere else for essential news.”
RJI’s Maggie Doheny tried using the AI tool Hemingway to edit a draft and compared it with the final version that was run by a human editor. While the tool helped spot things like passive voice and difficult-to-read sentences, she found that some things just need a human eye.
Doheny explained, “An AI editor can’t determine newsworthiness and doesn’t provide any restructuring ideas or see where you need more context or sources. Not only that, it won’t delete repetitive parts that may be unnecessary — a major role of an editor is to help your copy become more clear and concise.”
Tools like this could be useful for reviewing a first draft to catch the basics. Once writers have made those adjustments, the draft can be passed to a human editor for the final pass.
Mis- and disinformation are a constant problem on social media and something the media must navigate, and companies like AuCoDe aim to help.
The AI-based startup uses machine learning to detect controversies and misinformation online and turn the data into actionable intelligence. AuCoDe was recently announced as a partner in FACT CHAMP, a fact-checking project that addresses misinformation in collaboration with Asian American and Pacific Islander (AAPI) communities.
Other AI Solutions
In June 2020, JournalismAI, a project of Polis and the Google News Initiative, launched a global collaboration experiment. A group of participants from more than 20 news organizations worldwide worked in teams to explore how AI could help address certain challenges.
The teams investigated how to use the technology to help with auto-summarization, newsroom biases, archive article suggestions (to help journalists provide context), and audience loyalty. The teams presented their findings at the JournalismAI Festival in December 2020.
AI clearly offers a number of opportunities for streamlining some of the work done in newsrooms (without necessarily costing human jobs). Most of the experiments have used the technology to add efficiency and free up reporters for more in-depth work, rather than to replace what they do entirely. It will be interesting to see how it is implemented down the line, or perhaps even sooner. The AI Academy for Small Newsrooms, a six-week training program that launched in September 2021, will share its action plans openly to help other publishers around the world.
It's important to remember that robots aren’t right all the time and can’t replace a journalist’s human character. AI-generated articles lack the personality and in-depth analysis that a journalist can provide.
Still worried about it? GPT-3, a language generator, wrote an op-ed for The Guardian to convince you that humans have nothing to fear from AI. It explains, “I am not asking humans to like me. But they should see me as a friendly robot. I am a servant of humans.”
See the original post on Beyond Bylines.