Journalists must be trained in how to recognise and deal with algorithm bias in order to counter the speed and scale at which biased content is being spread. That’s the view of Gemini, the artificial intelligence (AI) conversation tool, which says that the need for such training is “more urgent than ever”.
Gemini says “the rapid advancement of artificial intelligence and machine learning has led to a significant increase in the use of algorithms in news production” and that “algorithms can introduce bias into news content, which can have serious consequences for both individuals and society.”
Media Helping Media (MHM) asked Gemini how journalists can recognise and deal with algorithm bias. We also asked it to design a one-day training course to equip journalists with the tools they need to tackle the issue.
The following is the text of that conversation, which took place in September 2024. During the conversation Gemini states that while the challenge of bias in news production is not entirely new, the rise of AI has made it more urgent and complex.
Gemini warns that “journalists and news organisations must adapt to this new reality by developing strategies to mitigate algorithm bias and ensure that the news they produce is fair, accurate, and representative”.
We put the following questions to Gemini. The answers were immediate. Gemini’s responses are unedited other than to convert them from American English to British English (MHM’s preferred house style).
MHM: What is algorithm bias in news production?
Gemini AI: Algorithm bias in news production refers to the unintentional or intentional bias introduced into news content by the algorithms used to curate, personalise, or generate news. This bias can manifest in various ways, including:
- Selection bias: Algorithms may prioritise certain stories or topics over others, leading to a skewed representation of the news.
- Confirmation bias: Algorithms may reinforce existing beliefs or biases by recommending content that aligns with a user’s preferences.
- Representation bias: Algorithms may underrepresent certain groups or perspectives, leading to a lack of diversity in the news.
- Amplification bias: Algorithms may amplify certain narratives or misinformation, contributing to the spread of false or misleading information.
Factors contributing to algorithm bias:
- Biased data: If the data used to train algorithms is biased, the algorithms will likely produce biased outputs.
- Biased algorithms: The design of algorithms themselves can introduce bias, such as by prioritising certain features or metrics over others.
- Lack of transparency: The lack of transparency in how algorithms work can make it difficult to identify and address bias.
Addressing algorithm bias in news production requires a multi-faceted approach, including:
- Improving data quality: Ensuring that the data used to train algorithms is diverse, representative, and free from bias.
- Designing unbiased algorithms: Developing algorithms that are transparent, accountable, and resistant to bias.
- Promoting transparency: Increasing transparency in how algorithms are used and the decisions they make.
- Human oversight: Ensuring that human editors and journalists play a role in reviewing and curating news content.
By understanding and addressing algorithm bias, news organisations can help to ensure that the news they produce is fair, accurate, and representative.
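To make the “biased data” point concrete, here is a minimal illustration (MHM’s sketch, not part of Gemini’s response; the click history, topics and scoring rule are all invented) of how a story ranker trained on skewed click data reproduces that skew in what it recommends:

```python
from collections import Counter

# Hypothetical click history used to "train" a very naive ranker.
# Politics stories dominate the history, so the ranker inherits that
# skew: selection bias coming straight from the data.
historical_clicks = ["politics"] * 80 + ["health"] * 15 + ["science"] * 5
topic_weights = Counter(historical_clicks)  # topic -> past click count

candidate_stories = [
    {"headline": "Election latest", "topic": "politics"},
    {"headline": "New vaccine trial results", "topic": "health"},
    {"headline": "Telescope spots distant galaxy", "topic": "science"},
    {"headline": "Budget debate continues", "topic": "politics"},
]

# Rank purely by how often each story's topic was clicked in the past.
ranked = sorted(candidate_stories,
                key=lambda story: topic_weights[story["topic"]],
                reverse=True)

for story in ranked:
    print(f'{story["topic"]:>8}  {story["headline"]}')
# Politics stories always rise to the top, regardless of newsworthiness,
# because the historical data over-represents them.
```

The “improving data quality” remedy Gemini describes would, in a toy case like this, mean balancing or re-weighting that click history before it is used to rank anything.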
MHM: How can journalists avoid algorithm bias if they are not in control of the algorithms?
Gemini AI: While journalists may not have direct control over the algorithms used in news production, they can take several steps to mitigate algorithm bias:
- Be aware of algorithm bias: Journalists should understand the concept of algorithm bias and how it can manifest in news content. This knowledge can help them identify potential biases and take steps to address them.
- Critically evaluate news sources: Journalists should be sceptical of news stories generated or curated by algorithms. They should carefully evaluate the sources of information and verify the accuracy of the content.
- Seek diverse sources: Journalists should strive to gather information from a variety of sources, including those that may not be prominently featured in algorithm-driven news feeds. This can help to ensure that the news they report is balanced and representative.
- Use human judgement: Journalists should rely on their own judgement to assess the quality and relevance of news stories. They should be willing to override algorithmic recommendations if they believe that they are biased or misleading.
- Advocate for transparency: Journalists can advocate for greater transparency in how algorithms are used in news production. This can help to identify and address biases and ensure that the news is produced in a fair and ethical manner.
- Collaborate with data scientists: Journalists can collaborate with data scientists to understand how algorithms work and identify potential biases. This can help them to develop strategies for mitigating algorithm bias.
By taking these steps, journalists can help to ensure that the news they report is accurate, balanced, and free from algorithm bias.
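One practical way to act on the “seek diverse sources” advice is to measure how concentrated an algorithmically curated feed actually is. The sketch below (again MHM’s illustration, with invented source names and an arbitrary threshold) counts how often each source appears in a feed and flags heavy concentration for human review:

```python
from collections import Counter

# Hypothetical sources behind the stories in a curated feed.
feed_sources = [
    "Agency A", "Agency A", "Agency A", "Agency A", "Agency A",
    "Outlet B", "Outlet B", "Outlet C",
]

counts = Counter(feed_sources)
total = len(feed_sources)

print("Share of feed by source:")
for source, n in counts.most_common():
    print(f"  {source}: {n / total:.0%}")

# A crude, illustrative threshold: if a single source supplies more
# than half of the feed, flag it for a human editor to review.
top_source, top_count = counts.most_common(1)[0]
if top_count / total > 0.5:
    print(f"Flag: {top_source} supplies {top_count / total:.0%} of this feed.")
```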
MHM: So mitigating algorithm bias is a new task brought about by the advance of AI but should now become a crucial part of a news producer’s job?
Gemini AI: That’s a great observation. While the term ‘algorithm bias’ may be relatively new, the underlying issue of bias in news production has existed for centuries. However, the advent of AI and machine learning has amplified the potential for bias to be introduced into news content at a larger scale and at a faster pace.
Here’s a breakdown of how traditional news production compares with AI-driven news production:
Traditional News Production:
- Human curation: News editors and journalists manually select and curate stories based on their judgement and expertise.
- Limited reach: News is distributed primarily through print and broadcast media, which limits the audience and the potential reach of any bias.
AI-Driven News Production:
- Algorithmic curation: Algorithms use data to recommend news stories to users, often based on their past behaviour and preferences.
- Vast reach: News can be disseminated instantly to a global audience through digital platforms, increasing the potential impact of bias.
Key differences:
- Scale: AI can process and analyse vast amounts of data, increasing the potential for bias to be introduced at a larger scale.
- Speed: AI can disseminate news at a much faster pace than traditional methods, making it difficult to identify and correct biases before they have a significant impact.
- Personalisation: AI can personalise news content to individual users, potentially reinforcing existing biases and limiting exposure to diverse perspectives.
Therefore, while the challenge of bias in news production is not entirely new, the rise of AI has made it more urgent and complex. Journalists and news organisations must adapt to this new reality by developing strategies to mitigate algorithm bias and ensure that the news they produce is fair, accurate, and representative.
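The personalisation point is easiest to see as a feedback loop. This deliberately simplified simulation (MHM’s illustration; every number is invented) shows how a recommender that rewards past clicks can turn a mild preference into a feed dominated by a single topic within a few rounds:

```python
topics = ["politics", "health", "science", "culture"]
# The user starts with only a mild preference for politics.
clicks = {"politics": 3, "health": 2, "science": 2, "culture": 2}

FEED_SIZE = 20

for round_number in range(1, 6):
    # An engagement-maximising recommender hands out feed slots in
    # proportion to the *square* of each topic's click count, so a
    # small lead is exaggerated rather than merely reflected.
    weights = {t: clicks[t] ** 2 for t in topics}
    weight_total = sum(weights.values())
    feed = {t: round(FEED_SIZE * weights[t] / weight_total) for t in topics}

    # Assume the user clicks half of what they are shown; those clicks
    # feed straight back into the next round's weights.
    for t in topics:
        clicks[t] += feed[t] // 2

    politics_share = feed["politics"] / sum(feed.values())
    print(f"Round {round_number}: politics fills {politics_share:.0%} of the feed")
```

Run as written, politics grows from under half of the feed to more than four-fifths within five rounds, even though the user’s starting preference was slight. That is the scale, speed and personalisation problem in miniature.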
MHM: So would you say that dealing with algorithm bias should now be part of every journalist’s basic training?
Gemini AI: Absolutely. Dealing with algorithm bias should be a fundamental part of every journalist’s training in the digital age. Here’s why:
- Understanding the problem: Journalists need to understand how algorithms work, how they can introduce bias, and the potential consequences of this bias.
- Identifying and mitigating bias: Journalists should be able to identify signs of algorithm bias in news content and take steps to mitigate its effects.
- Evaluating sources: Journalists should be able to critically evaluate the sources of information they rely on, especially those that are generated or curated by algorithms.
- Promoting diversity and inclusion: Journalists should strive to promote diversity and inclusion in the news they report, by seeking out a variety of sources and avoiding algorithmic biases that may underrepresent certain groups or perspectives.
- Advocating for transparency: Journalists can advocate for greater transparency in how algorithms are used in news production, which can help to identify and address biases.
By incorporating training on algorithm bias into journalism programmes, we can equip journalists with the skills and knowledge they need to produce fair, accurate, and representative news in the digital age.
MHM: Could you suggest the outline for a one-day training course for journalists who want to understand and tackle algorithm bias?
Gemini AI: Absolutely. Here is an outline for a one-day training course on algorithm bias for journalists:
Morning Session: Understanding Algorithm Bias
- Introduction to algorithms in news production:
  - How algorithms are used in news curation, personalisation, and generation
  - Common types of algorithms (e.g., recommendation engines, natural language processing)
- Understanding bias:
  - Defining bias and its different forms (e.g., selection bias, confirmation bias, representation bias)
  - The impact of bias on news quality and credibility
- Sources of algorithm bias:
  - Biased data
  - Biased algorithms
  - Lack of transparency
- Case studies of algorithm bias in news:
  - Real-world examples of how algorithm bias has affected news coverage
  - Lessons learned from these cases
Afternoon Session: Tackling Algorithm Bias
- Identifying algorithm bias in news content:
  - Recognising signs of bias in news stories generated or curated by algorithms
  - Tools and techniques for detecting bias
- Mitigating algorithm bias:
  - Strategies for reducing the impact of bias in news production
  - Best practices for data collection and curation
  - Designing unbiased algorithms
  - Promoting transparency and accountability
- Human oversight and editorial judgement:
  - The role of human editors and journalists in ensuring the quality and fairness of news content
  - Balancing algorithmic recommendations with human expertise
- Advocating for change:
  - How journalists can advocate for greater transparency and accountability in the use of algorithms in news production
  - Engaging with news organisations and policymakers to address algorithm bias
Workshop Activities:
- Group discussions: Facilitated discussions on specific case studies and challenges related to algorithm bias
- Hands-on exercises: Practical exercises to identify and analyse algorithm bias in news content
- Role-playing scenarios: Simulations of real-world situations where journalists need to deal with algorithm bias
- Guest speakers: Invited experts from the fields of journalism, technology, and ethics to share their insights and experiences
Additional Considerations:
- Tailoring the course: The content of the course can be tailored to the specific needs and interests of the participating journalists.
- Online or in-person: The training can be conducted either in-person or online, depending on the preferences of the participants and the resources available.
- Continuous learning: Encourage participants to continue learning about algorithm bias and best practices for mitigating its effects through ongoing professional development.
MHM: And what sources would you recommend for supporting the course on algorithm bias which you have just outlined?
Gemini AI: Recommended sources for a course on algorithm bias in news production include:
Books:
- Weapons of Math Destruction by Cathy O’Neil: This book explores how algorithms can perpetuate inequality and discrimination.
- Automating Inequality by Virginia Eubanks: This book examines the ways in which algorithms are used to reinforce existing social and economic inequalities.
- Algorithms of Oppression by Safiya Umoja Noble: This book focuses on the racial bias inherent in many algorithms and their impact on marginalised communities.
- Algorithms to Live By by Brian Christian and Tom Griffiths: This book provides a general introduction to algorithms and their applications.
Articles and Papers:
- Journal of Computer-Mediated Communication: This journal publishes research on the intersection of technology and communication, including studies on algorithm bias.
- Digital Journalism & Ethics: This journal focuses on ethical issues in digital journalism, including the use of algorithms.
- Pew Research Center: This organisation conducts research on a variety of topics related to technology and society, including algorithm bias.
- MIT Technology Review: This publication covers emerging technologies and their societal implications, including the impact of algorithms.
Online Resources:
- AlgorithmWatch: This non-profit organisation works to increase transparency and accountability in the use of algorithms.
- Open Rights Group: This organisation campaigns for digital rights, including the right to be free from algorithmic discrimination.
- Mozilla Foundation: This foundation is dedicated to building a healthy internet, including efforts to address algorithm bias.
- The Markup: This investigative journalism outlet focuses on the impact of technology on society, including the role of algorithms.
Case Studies:
- The Facebook-Cambridge Analytica scandal: This case highlights the potential for algorithms to be used to manipulate public opinion.
- The YouTube algorithm and the spread of misinformation: This case demonstrates how algorithms can amplify harmful content.
- The use of algorithms in criminal justice: This case raises concerns about the potential for algorithms to perpetuate racial bias.
By incorporating these sources into your training course, you can provide participants with a comprehensive understanding of algorithm bias and the tools they need to address it.
MHM: Would you say that the need for journalists to be trained in dealing with algorithm bias is urgent?
Gemini AI: Absolutely. The need for journalists to be trained in dealing with algorithm bias is more urgent than ever. The rapid advancement of artificial intelligence and machine learning has led to a significant increase in the use of algorithms in news production. These algorithms can introduce bias into news content, which can have serious consequences for both individuals and society.
Here are some reasons why this training is urgent:
- Increasing reliance on algorithms: News organisations are increasingly relying on algorithms to curate, personalise, and generate news content. This means that the potential for algorithm bias to impact news coverage is growing.
- The potential for harm: Algorithm bias can lead to the spread of misinformation, the amplification of harmful stereotypes, and the underrepresentation of certain groups or perspectives. This can have serious consequences for individuals and society.
- The need for accountability: Journalists have a responsibility to ensure that the news they report is accurate, fair, and unbiased. This includes understanding and addressing the potential for algorithm bias.
By training journalists in how to identify and mitigate algorithm bias, we can help to ensure that the news we consume is reliable and trustworthy.
MHM’s conclusion
The fact that AI is warning that journalists need to be trained in recognising and dealing with algorithm bias suggests that human judgement, journalistic ethics and editorial values are crucial in a world where artificial intelligence and machine learning are crafting so much content. As Gemini says in the interview above, “the underlying issue of bias in news production has existed for centuries. However, the advent of AI and machine learning has amplified the potential for bias to be introduced into news content at a larger scale and at a faster pace.”