by Ryan Jones, Director of Marketing, Precision Risk Management, former journalist and a University of Iowa Journalism School alum.
e-mail: [email protected]

It should go without question that the photos you see in an ag publication are real and accurate, but in 2024, AI-generated images have progressed far enough to appear accurate at a glance. Ag publications need to recognize the ethical slippery slope they are on when they use generative images without proper guardrails in place.

While reading some agricultural publications, I was dismayed to see AI images used above the fold on a controversial subject. In one case, the AI image completely misrepresented the subject. By running a fake and misleading photo, the publication left its readers less informed, not more, even though the image accompanied a fact-based journalism story.

It is easy to understand how a misleading image could end up above the fold. The image looks good: professional, clean, and provided by Adobe, a great photo for the front page of a newspaper. The only complication is that the image is a lie, which should be a problem for both the publisher and the reader.

Generative images have tremendous power to inform or misinform. When scrolling through social media, you are almost certainly passing over AI-generated images without noticing. Recent social media posts of pink dolphins in North Carolina went viral, capturing the attention of millions. Sadly, every one of those viewers was being deceived by AI-generated images of the supposed dolphins.

Journalists have a responsibility to provide the truth to their audiences so that we have a more informed public. In fact, it's their primary role. By ensuring the accuracy of their reporting, journalists help maintain trust in their publications and in the institution of journalism. When verification steps are skipped, that trust erodes.

For decades, journalists have applied strong verification steps. With AI, the usual steps may not be enough. In the agricultural publication I mentioned, the image was labeled as AI-generated in the caption, but merely passing along Adobe's AI stock image captions is not enough. Journalists have the responsibility of verifying the truth of the images they present to their readers.

As journalists and marketers, we can't accept inaccuracies from new AI technology just because it is easy to use. Journalists must verify that images are accurate before placing them alongside their reporting.

To be clear, the use of AI images alone is not the major concern. Journalists and marketers turn to graphic design and illustration when a real photo is unavailable or impossible to capture, such as illustrating how underground tile works. They base those images on accurate designs. Illustrations are tools for showing the truth of a subject when we can't take a photo. AI needs to be treated in exactly the same way: as a tool to help us illustrate the truth.

Today, some publications are failing to take the power of generative images seriously enough. We need better oversight from both journalistic publications and marketers to ensure we are providing high-quality, truthful products. We need humans who can bring context, judgment, and ethical consideration to decisions about where and how to use these new tools. We also need organizations to prioritize accuracy and integrity over convenience and cost savings.