Radio World’s longtime supporter Mark Durenberger wrote to me to express concern about national publications that reportedly are using generative artificial intelligence tools to create articles — with or without transparency to readers. What about Radio World, he wondered?
Radio World’s policy, set by me and stated briefly in a past column, is not to use language-based artificial intelligence to create our stories.
If this were to change for a reason that I currently can’t imagine, I would explain it to readers.
(I exclude from this discussion the widely available AI-based tools we all use on our phones, such as the search engines that are part of daily American life, but it’s worth noting that AI in one form or another is all around us already.)
Our freelancers and columnists have been told of our policy and reminded to conform to it in their own work.
The only AI-based language tools that I’m conscious of using in our editorial workflow are those that create transcriptions of audio or video interviews we do. A human reviews such content for accuracy before it is used in reported material.
Generative AI of course can also be used to create graphics and photos. Radio World does not publish AI-generated images unless they are labeled as such, and to date they have been used only in stories about AI itself.
I would not have a concern about using AI tools to create “concept images” or other graphics that merely add visual interest; I don’t feel we’d need to tell you, the reader, that AI was used in such cases, given that graphics tools have existed for decades to let us create or modify such images.
But an original image in a news or information context should not be manipulated to change meaning or mislead the viewer. And if a created image might be misinterpreted or misunderstood by a reader as being “real” when in fact it is not, we should identify it.
Our parent company, like many media organizations, is still trying to figure all this out; and the ground shifts around us as the tools become more capable, creating gray areas even as we try to stake out boundaries. But so far, it has been easy for me to tell where the “line” is, based on my gut and common sense.
I would not, for instance, publish a photo in RW where we had taken the head of a person from one snapshot and cropped it into a different group photo to make her look better. However, this is an example where someone else might have a different opinion about what constitutes acceptable modification, whether AI was involved or not.
I suspect Mark Durenberger’s main concern is with language-based content, and the short answer is that my overall intention for Radio World is to take the conservative course in applying any of these tools. If Future as a company decides tomorrow that it could replace your friendly editors with RW-GPT5, all bets are off; but I don’t see any signs of that to date.
[Read More Radio World Stories About Artificial Intelligence]