AI has Knowledge, only humans have Wisdom
Professor Bill Silcock warns that democracy, trust, and truth hinge on human transparency and media literacy, not on machines
Interview by Robert Antony
COLOMBO – Artificial intelligence may reshape the world, but it cannot replace human judgment, integrity, or wisdom, says Bill Silcock, Emeritus Professor of Journalism and Mass Communication at Arizona State University. In this exclusive interview with Virakesari, he stresses that AI is merely a tool and its impact depends entirely on how people, politicians, and citizens choose to use it. As misinformation and deepfakes grow more sophisticated, Silcock argues that the only true defence is an educated public, transparent journalism, and a society where every citizen becomes a ‘deepfake detector’.
Excerpts from the interview:
Question: How does AI contribute to the spread of disinformation? What kind of challenge is this? And how can journalists and citizens effectively combat this trend?
Answer: This is a very big challenge. The keywords for the current era are ‘Disinformation’ and ‘Media Literacy’. To face this challenge, we must continuously educate the public.
The Missouri School of Journalism has described journalists as ‘the public’s first schoolteachers’. The job of a journalist is to simplify complex world issues and present them truthfully and accurately to people who do not have the time to follow them (they are not unintelligent, just time-constrained). AI is only a tool that can assist in doing this.
The important thing here is trust in the tool. That is why global media organizations like the British Broadcasting Corporation (BBC) and the Associated Press (AP) use AI only with human oversight and clear guidelines. For example, AP does not permit the use of photographs created with generative AI.
Q: Some individuals intentionally spread false information online. How big a challenge is this to society with the growth of AI?
A: Yes, it is certainly a huge challenge. It is challenging both when done intentionally and when it happens accidentally. Let me give you an example. A story about an unknown American soldier’s grave was circulated on Facebook. It seemed like a nice, moving story, so I shared it too. But a friend of mine said, ‘This is not a true story.’ There was no malicious intent in this, but the information was not completely true. Similarly, even when beautiful pictures are shared, some people point out that they are not real.
Now, this misinformation is much more sophisticated and authentic-looking than before. We see visuals of a giant whale attacking a ship. Unlike older fakes that looked like Hollywood movie clips, today’s fabrications appear very real.
Therefore, each of us must become a ‘check and balance’ for one another. Before sharing anything on social media, we must verify the facts. While detecting deepfakes is the job of journalists, we must elevate every citizen to become a ‘Deepfake Detector’ (deepfakes are highly realistic fake images, videos, or audio created with sophisticated AI to mimic real people or events, making it difficult to distinguish truth from fabrication). Correcting these is a very challenging task. It requires media literacy.
Q: You’ve highlighted media literacy as a key solution. Where should this education begin – schools, universities, or community programs? And how can it be effectively implemented?
A: Yes, absolutely, that is how it must happen. A child should start exploring this around the age of 12. Bright young people who own smartphones today and already know how to create graphics and more must be taught how the media operates.
Media literacy classes must be made compulsory everywhere, from the Sri Lankan education system to the Arizona state education system.
Q: As AI tools become more advanced, will traditional journalism transition to the next level or will it continue as is?
A: It must definitely grow. It will transition to the next level. Look at the history of news. It is always changing. Technology is the biggest driving force of change. Every time technology brings a change, humans must intervene and ensure fidelity to the fundamental habits of journalism: telling the truth, getting the facts right, and providing the information society needs to make decisions.
Journalists must be fundamentally transparent in their work. As I talk to you now, I am using AI to gather some information I did not know. But it is in your hands to take what I have said and turn it into a logical story.
AI may have Knowledge, but you have Wisdom. I don’t know whether generative AI can gain wisdom. But as long as you are transparent in your actions, the bond of Trust will endure. That is the most important thing.
Q: How can AI be harnessed to strengthen democratic values like free speech and fair elections, rather than undermine them?
A: Artificial intelligence is only a tool. Its effect depends on how people use it, whether they are politicians or ordinary citizens. Used correctly, these tools can help promote democratic values and principles.
AI poses two major threats to democracy. The first is environmental: cooling the large data centres built for AI operations requires enormous amounts of water, which will have a huge impact on the environment. The second is that the big corporations behind AI, such as the maker of ChatGPT, Amazon, and Facebook, are rushing into the AI world, much as companies rushed into radio in the early 1900s. But the motivation of these players is profit rather than the enhancement of democracy. When their greed and profit motive violate the basic principles of democracy, it begins to affect politicians. That is the threat to democracy.
Q: What is the solution to this?
A: Education. Under the principle of ‘one man, one vote,’ people must be educated so that every ordinary citizen can make an Informed Decision.
–Robert Antony, a multiple award-winning journalist, currently serves as Assistant Editor at Virakesari