
The Rise of Generative AI and What It Means for the Communications Industry

When OpenAI made ChatGPT available to the public in November 2022, it transformed the way much of the world viewed generative artificial intelligence (AI). The ability to create detailed written content from a simple prompt put generative AI in the spotlight, and within three months of ChatGPT’s launch it had passed 100 million active users. (By comparison, it took TikTok nine months and Instagram two-and-a-half years to reach that same 100 million milestone.)

Generative tools like Google Gemini, Claude, Apple Intelligence, and image creators DALL-E and Midjourney followed as the technology filtered into many different sectors and industries, integrating into the tools we use day-to-day.

The world of public relations – one driven by words and images – has already started to embrace it. According to the 2024 Global Comms Report, 32% of comms professionals say they use generative AI frequently, 33% use it infrequently, 27% are considering starting to experiment with it, 7% are unlikely to use it, and 1% aren’t sure yet. The most prominent use case is creating content for external audiences.

[Chart: generative AI use cases in PR and comms]

As adoption increases, so too does the volume of AI-generated media (think text, art, video and audio). Meanwhile, as these content generators become more widely available for experimentation and product development, it is vital for PR and corporate communications professionals to understand the opportunities and challenges generative AI presents to their organization and audiences. The broader comms industry also has a responsibility to use AI in an ethical, responsible manner, while protecting and investing in human-led creativity and professional development.

Cision Executive Director of AI Strategy Antony Cousins notes: “I've been working in AI for 10 years and the thing that gets me up in the morning is not the idea of putting people out of work – that is not something I could get behind – but the idea that the work you're going to be doing in the future is much more human-orientated. AI isn’t coming for your job, but what it is coming for is specific tasks, for things that the AI can be better at than you.”

How Generative AI Can Support Comms Teams

So where can generative AI be applied to help comms pros today? Think of it as support for carrying out routine tasks such as developing the framework for a press release, announcement or catalog/e-commerce copy; incorporating SEO keywords into content; generating lists of media outlets and journalists to target; and even helping to compose pitches as in CisionOne Outreach.
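To make the drafting use case concrete, here is a minimal sketch of how a comms team might script a routine drafting task against an LLM API. It assumes the OpenAI Python SDK and an API key in the environment; the model name, prompt and keywords are illustrative placeholders, not a CisionOne feature.

    # A minimal sketch of scripting a routine drafting task against an LLM API.
    # Assumes the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY set in
    # the environment; the model name, prompt and keywords are illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prompt = (
        "Draft a press release framework (headline, subhead, lead paragraph, "
        "three body sections, boilerplate and media contact) for a B2B software "
        "launch, weaving in the SEO keywords 'media monitoring' and 'PR analytics'."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a PR copywriter."},
            {"role": "user", "content": prompt},
        ],
    )

    # The output is a starting framework only; a human still reviews and edits it.
    print(response.choices[0].message.content)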

Brands can also go further by employing AI tools to generate audio versions of press releases or web content, or to support accessibility through text-to-speech features that make content available to people with visual impairments. Generative AI is also now being baked into many social media publishing platforms (e.g. Brandwatch), allowing users to quickly create and modify content to be published across brand channels.
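As a rough illustration of the audio use case, the snippet below turns press release text into an MP3 with the open-source gTTS library. It is a sketch of the general text-to-speech technique, not how Brandwatch or any particular platform implements it; the text and file name are placeholders.

    # A minimal sketch of creating an audio version of a press release using the
    # open-source gTTS (Google Text-to-Speech) library. The text and file name
    # are placeholders; production workflows would add human review of the copy.
    from gtts import gTTS

    release_text = (
        "Acme Corp today announced the launch of its new analytics platform, "
        "giving communications teams real-time insight into earned media coverage."
    )

    gTTS(text=release_text, lang="en").save("press_release.mp3")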

For communicators, staying up to date on trends in AI and how media organizations and brands are applying these technologies will be crucial to staying competitive.

As Cision Head of EMEA Analysis Barnaby Barron explains, generative AI is evolving quickly and opening doors for comms teams. "The potential for AI applications within PR and comms teams is huge, what we really encourage is testing different tools to see what works for you. At the same time, the efficiencies you can make should ensure PR and comms professionals have time to be really creative, something AI can’t match humans at."

The Risks and Challenges of Generative AI in PR and Comms

Though AI’s potential to reshape PR is significant, there are just as many ways it can be disruptive and potentially harmful. AI tools are “trained” on specific, historical data sets, and because AI is essentially “guessing” the next best response to a prompt based on that data, flawed or limited training data can make the content it serves up problematic.

That guess is only as good as the data on which it has been trained. With that in mind, comms teams need to ensure that their data is robust enough to align with the task they're asking AI to do.

As reporting by outlets like Wired and the New York Times has made clear, generative AI is prone to inaccuracies and “hallucinations.” There is still a way to go before we have AI content generators – whether in the form of a chatbot designed to imitate conversation or a tool for creating long-form text – that we can rely on to deliver truly accurate and factual content.

For these reasons, humans who interact with synthetic media generators need to be mindful of the prompts they use to elicit responses. ChatGPT needs good quality prompts to create good quality output. And it can’t help you if you mistakenly feed it the wrong information.

The language or imagery used in requests may also carry unconscious bias – and may lead even professional-grade tools astray. Though improvements will likely limit this fallibility over time, communicators who take the initiative to learn and understand best practices will reap the greatest benefits from the technology.

5 Fast Facts About Generative AI and Ethical Use:

  1. AI-generated content can’t be copyrighted. Under existing U.S. intellectual property law, only content created by a human being can be protected by copyright; purely AI-generated material effectively falls into the public domain.
  2. AI content can attract lawsuits. Because AI algorithms are trained on huge amounts of existing content, there is a risk that the original creators of that content could bring copyright infringement claims based on the use of their intellectual property in training the AI or in the content it produces. (Getty Images and several individual artists have already filed suit against companies pioneering AI image generators over the use of their images and artistic styles.)
  3. AI is prone to bias. AI that has been trained on flawed content may perpetuate bias and stereotypes or generate content that is misleading or outright false.
  4. AI-generated media is being used maliciously – for example, to create deepfakes and perpetuate fake news. Sophisticated media monitoring will be critical for identifying misinformation or disinformation that could harm your brand’s reputation and responding to it before it gains traction.
  5. There’s a lack of regulation around AI. It can be difficult to verify the origin and authenticity of machine-generated content, which can undermine trust in the PR industry and in the media more broadly. It is too soon to tell what guardrails will be legislated. However, policy makers are already moving on this issue. In the U.S., the Department of Commerce's National Telecommunications and Information Administration (NTIA) recently launched a request for comment (RFC) regarding AI accountability.

Generative AI Can Enhance – Not Replace – Human Creativity in PR

As AI becomes more ubiquitous, public relations practitioners will need to be aware of the benefits and caveats that come with these innovations. For the most effective applications of these emerging technologies, the human element will continue to be essential.

One might use an AI-backed software solution to write an outline for a case study or press release, or to generate social media imagery with the greatest potential for consumer engagement, for example. But to guarantee accuracy, any such content would still need to be reviewed, vetted, and optimized by humans with subject matter expertise.

Tools like ChatGPT can read your question and provide a response, but they won’t understand the context. Though it can appear you’re having a human-like conversation with a chatbot, it’s only providing responses based on what its existing data suggests should come next. ChatGPT can’t think on its feet or interpret ideas, and it can’t go materially beyond what has already been created on a topic. For that reason, it lacks the ability to exceed a brief in the way humans can.

However, AI can help you generate an answer if you're posing the right question. For example, ask AI to simply write a press release and you'll likely end up with something routine and uninspiring. Is it a product press release? Who is the audience? And who are the main competitors? The specificity of the question, coupled with the right data training set, will lead to better results.
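To illustrate, the sketch below contrasts a vague request with a prompt built from structured inputs (product, audience, competitors, angle). All company names and details are hypothetical.

    # Illustrative only: building a specific press release prompt from structured
    # inputs instead of a one-line request. All company details are hypothetical.

    def build_press_release_prompt(product: str, audience: str,
                                   competitors: list[str], angle: str) -> str:
        """Compose a detailed prompt so the model has product, audience and competitive context."""
        return (
            f"Write a 400-word product press release announcing {product}. "
            f"Audience: {audience}. "
            f"Differentiate it from competitors such as {', '.join(competitors)}. "
            f"Lead with this angle: {angle}. "
            "Include a quote placeholder for a spokesperson and a media contact block."
        )

    vague_prompt = "Write a press release."  # likely to produce routine, uninspiring copy

    specific_prompt = build_press_release_prompt(
        product="Acme Monitor 3.0, a media monitoring tool",
        audience="trade press covering marketing technology",
        competitors=["generic social listening tools"],
        angle="real-time broadcast coverage for in-house comms teams",
    )

    print(specific_prompt)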

No matter the use case, AI, when used responsibly and mindfully, with a guiding human hand, can empower practitioners to work smarter, not harder.

AI Definitions: Your Cheat Sheet of Need-to-Know Terms

  • Artificial Intelligence (AI): Computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
  • Chatbot: A computer program designed to simulate conversation with human users, especially over the internet.
  • Deepfake: AI-synthesized media that is false, such as doctored videos where one person’s head has been placed on another person’s body, or realistic “photographs” of people who don’t exist.
  • Generative AI: AI models that can create new and original content such as text, images, video or music, based on patterns learned from existing data. Tools like ChatGPT, Gemini, Claude, Midjourney, DALL-E, and Synthesia are all generative AI.
  • Large Language Model (LLM): An AI model trained on a large body of text and developed to produce text, respond to questions using natural language, or translate material from one language to another.
  • Natural Language Processing (NLP): A field of AI that focuses on enabling computers to understand, interpret, and generate human language.
  • Predictive Analytics: The use of data, statistical algorithms, and machine learning techniques to identify the likelihood of future outcomes based on historical data.
  • Sentiment Analysis: The use of natural language processing, text analysis, and computational linguistics to systematically identify and quantify subjective information such as positivity or negativity.

Looking for more info on artificial intelligence in PR and communications? Download From AI to Z: A Starter Guide to Using Generative AI in PR & Comms.

If you’d like to learn about CisionOne, our all-in-one PR platform, speak to an expert today.

Simon Reynolds

Simon is the Content Marketing Manager at Cision UK. He worked as a journalist for more than a decade, writing on staff and freelance for Hearst, Dennis, Future and Autovia titles before joining Cision in 2022.