BBC Programme Director for Generative AI, Peter Archer, is a big-picture man. He and his team are committed to ensuring AI reinforces – and never undermines – the BBC’s mission.

That’s why the BBC is integrating AI to support creativity, transparency and audience trust without compromising its famously high editorial standards.

From training to transparency guidelines, newsroom innovation to AI literacy, the BBC sets an example for how public service media (PSM) can lead responsibly on AI.

🤖 Integrating generative AI in line with BBC values

Peter’s in no doubt about the opportunities of AI to further the BBC’s mission and bring value to audiences and staff. He’s equally aware of the risks and challenges, like inaccuracies in model output and ensuring AI is used responsibly to support creativity and human endeavour.

He said: “AI changes much, but it doesn’t change the BBC as an organisation. Our mission, values and standards won’t change. To ensure our use of AI is in line with our values we have three key principles for its use:

1. First, we will always act in the best interests of the public, using generative AI to strengthen our public mission and deliver greater value to audiences.

2. Second, we will always prioritise talent and creativity. No technology can replicate or replace human creativity. We will always prioritise and prize authentic, human storytelling by reporters, writers and broadcasters who are the best in their fields.

3. Third, we will be open and transparent. Trust is the foundation of the BBC’s relationship with audiences. Our leaders will always remain accountable to the public for all content and services produced and published by the BBC.”

⛑️ Protecting editorial integrity is vital

“Using AI responsibly is mission-critical for us,” Peter told us. “In addition to our three principles, we’ve put in place guidelines to ensure it’s used responsibly at the BBC. Chief among them is our Editorial Guidance on the use of AI, which helps both internal BBC staff and suppliers understand key issues in the use of AI and how to use it responsibly.”

With all new technology, knowledge is key, and the BBC is leaving no one behind.

Peter said: “We have developed internal training on AI, called AI Essentials. It’s mandatory for everyone at the BBC; we don’t make courses mandatory often but responsible use of AI is a key issue for us.”

🖥️ + 🤗 Rolling out AI needs tech readiness and human acceptance

Possessing new tools is one thing, but they’re useless unless staff are on board.

“We’re excited about the potential for AI to help us further our mission,” Peter told us. “We’re already forging ahead with tools and models that are delivering value to our audiences and to staff. For example, using AI to assist journalists in our newsroom; helping sports journalists create live text pages from football broadcasts; and automatically creating subtitles for BBC Sounds.”

The BBC is also releasing tools like AI assistants, image editors and coding assistants into the business.

“The pace of change is amongst the biggest challenges – it’s hard to keep up with new technology at the best of times, but generative AI is in another league. This means we need to keep our approach flexible. More broadly, all media organisations have a lot going on and generative AI must compete with other priority activities, too. As with all ML- [machine learning] and AI-related endeavours, it’s critical we’re building on strong foundations, so having a strong data strategy is more important than ever.”

⚖️ How do you balance AI-driven efficiencies and protecting audience trust?

Public trust is the bedrock of public service media, and the BBC is taking no chances, putting transparency front and centre.

“AI-generated content can blur the lines between what’s real and artificial and it’s important to help audiences understand the role AI plays in our work, and how it impacts the content they consume,” Peter said.

“Being transparent about where and when we use AI helps to maintain trust in our content and services. We’ve recently published transparency guidelines for staff and suppliers. In particular, we require that any AI output which could be mistaken for real output is clearly labelled, so the audience understands the role AI has played. For example, the use of AI ‘face-swapping’ techniques to anonymise contributors.”

🔮 The role of gen-AI in the future of PSM

So how does he think generative AI will change things for PSM in the future?

“I’m positive about the potential for GenAI. There are so many opportunities, it’s a real challenge to prioritise what matters most. Stepping back, I see three key opportunities for PSM:

1. First, using GenAI to make the most of our content, for example maximising its value through reformatting and ever greater personalisation.

2. Second, creating new experiences for our audiences. AI assistants are a good example of this for now, but there will be many more.

3. Third, giving our teams the tools to transform their experience of work, such as saving time and working more effectively.”

He added: “Beyond PSM’s use of GenAI, we have a really important part to play in helping our audience get to grips with AI, for example through AI literacy activities; and a critical role in ensuring the information ecosystem is healthy.

In particular, being a trusted voice in the midst of AI services, like assistants, that can produce significant inaccuracies. That fundamental mission of PSM – to be a trusted voice for our audiences – has never been more important.”
