Advancing dialogue with the help of AI
AI can be used to create safe spaces for audiences as well as new revenue streams for digital newsrooms, argues Rappler's Don Kevin Hapal.
DW Freedom: Don Kevin Hapal, you are Head of Data and Innovation at Philippine digital news outlet Rappler, and currently developing new digital exchange spaces for your audiences. Why don’t you just use what big social media platforms have to offer?
Don Kevin Hapal: We are often overly dependent on big platforms like Facebook. Our Facebook traffic has dropped significantly over the past year.
Generally speaking, these platforms have given up on working with newsrooms. So this is not a question of whether we can be independent; we have to be. We need to get our communities onto our own platforms directly.
And, at the same time, we wanted to build a space where people could not only access information from our newsroom but also have a conversation without all the toxicity, harassment and fear: a safe conversation that is moderated and managed by journalists.
But that makes for a rather small public sphere. Is it limited to Rappler's community only?
We are hoping to bring other newsrooms in to use the same technology, so that in the end different communities can be in exchange with one another. We use a decentralized platform technology for this called Matrix. The idea behind this is that we want people to have control over their own communication.
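To give a sense of what building on Matrix can look like, here is a minimal sketch using the open-source matrix-nio Python client. The homeserver, user ID, room ID and password are placeholders for illustration, not Rappler's actual setup.

```python
# Minimal sketch: posting a newsroom notice to a Matrix room with matrix-nio.
# All identifiers below are placeholders, not Rappler's infrastructure.
import asyncio
from nio import AsyncClient

async def main():
    client = AsyncClient("https://matrix.example.org", "@newsroom-bot:example.org")
    await client.login("REPLACE_WITH_PASSWORD")

    # Because Matrix is federated, this room could also be joined by users
    # whose accounts live on other homeservers, e.g. partner newsrooms.
    await client.room_send(
        room_id="!communityRoom:example.org",
        message_type="m.room.message",
        content={"msgtype": "m.text", "body": "Good morning! Today's discussion is open."},
    )
    await client.close()

asyncio.run(main())
```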
So you offer users of the Rappler app the opportunity to take part in different exchanges.
Yes, and we have plugged a lot of AI features into this app to help keep the conversation clean. The AI checks for violations of community guidelines, but we also have a layer of human moderation for the cases that should be handled by humans.
The AI checks 24/7 and flags problems; a human moderator then takes a look and decides on actions. On top of that, we use a chatbot that we call Rai.
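The flag-then-review flow described here can be sketched roughly as follows. The guideline check and review queue are simplified placeholders standing in for Rappler's actual moderation system, in which an AI model would do the checking.

```python
# Illustrative sketch of the "AI flags, human decides" moderation flow.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Flag:
    message_id: str
    reason: str

@dataclass
class ReviewQueue:
    pending: List[Flag] = field(default_factory=list)

def ai_check(message_id: str, text: str) -> List[Flag]:
    """Stand-in for the automated guideline check (in practice an LLM or classifier)."""
    banned_terms = {"slur_example", "doxxing_example"}  # placeholder guideline rules
    if any(term in text.lower() for term in banned_terms):
        return [Flag(message_id, "possible community guideline violation")]
    return []

def human_review(queue: ReviewQueue) -> None:
    """Humans make the final call; the AI only surfaces candidates around the clock."""
    for flag in queue.pending:
        print(f"Moderator reviews {flag.message_id}: {flag.reason}")

queue = ReviewQueue()
queue.pending.extend(ai_check("msg-001", "This post contains slur_example."))
human_review(queue)
```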
What is special about this chatbot?
Rai mainly draws on the latest information from the Rappler website. It is built on a large language model (LLM), but we limit the data it uses in order to avoid hallucinations, using a technique called GraphRAG, a retrieval method for enhancing accuracy.
It is still early days; we started building this in 2023, and acceptance so far seems to be OK. Rai provides users with a unique Filipino and Asian perspective on world events. And this project shows that we can integrate AI responsibly to support journalistic rigor in the face of disinformation.
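The core idea of limiting the model to retrieved newsroom content can be illustrated with a plain retrieval-augmented sketch. This is not the GraphRAG pipeline itself or Rappler's implementation; the article index and keyword retrieval are stand-ins for the graph-based retrieval a real system would use.

```python
# Simplified illustration of retrieval-grounded answering: the model only sees
# passages retrieved from the newsroom's own articles, which is the general idea
# behind restricting data to reduce hallucinations.
from typing import Dict, List

ARTICLE_INDEX: Dict[str, str] = {
    "elections-2025": "Placeholder summary of a Rappler election explainer.",
    "ai-governance": "Placeholder summary of a Rappler piece on AI policy.",
}

def retrieve(question: str, index: Dict[str, str], k: int = 2) -> List[str]:
    """Naive keyword overlap standing in for graph/vector retrieval."""
    words = question.lower().split()
    scored = sorted(index.items(), key=lambda kv: -sum(w in kv[1].lower() for w in words))
    return [text for _, text in scored[:k]]

def build_prompt(question: str, passages: List[str]) -> str:
    """Confine the model to the retrieved passages; it should refuse otherwise."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the passages below. If they do not contain the answer, say so.\n"
        f"Passages:\n{context}\n\nQuestion: {question}"
    )

prompt = build_prompt("How is AI governance covered?", retrieve("AI governance", ARTICLE_INDEX))
print(prompt)  # this prompt would then be passed to the underlying LLM
```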
You also run something called "AI dialogue"?
One of our general ambitions is to develop new deliberative technologies and to work with our audience through surveys. For this we are experimenting with an online, AI-moderated focus group discussion tool called AI dialogue.
A bot hosts online discussion groups that talk about specific topics. The AI asks people a lot of questions and generates ideas for policy recommendations from these discussions. Overall, it is a three-step process: we synthesize information, the bot asks follow-up questions, and then it creates a quick summary of what has been said.
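That three-step loop could be sketched roughly as below. The placeholder functions stand in for the LLM calls an actual AI dialogue tool would make; the sample responses are invented for illustration.

```python
# Sketch of the three-step loop: synthesize, ask follow-ups, summarize.
from typing import List

def synthesize(responses: List[str]) -> str:
    """Step 1: condense what participants have said so far."""
    return "Synthesis: " + " / ".join(responses)

def follow_up_questions(synthesis: str) -> List[str]:
    """Step 2: generate follow-up questions (an LLM would do this in practice)."""
    return [f"Can you say more about: '{synthesis[:60]}...'?"]

def summarize(responses: List[str], answers: List[str]) -> str:
    """Step 3: quick summary plus draft policy-style recommendations."""
    return f"Summary of {len(responses) + len(answers)} contributions; draft recommendations follow."

responses = ["We need clearer rules on AI in local services.", "Consultations should be in Filipino too."]
questions = follow_up_questions(synthesize(responses))
answers = ["For example, translated consultation forms."]
print(summarize(responses, answers))
```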
Can you give an example?
One example is a creative process we started for collecting democratic input on how AI could be governed. We ran this as a project in cooperation with OpenAI.
We built a scalable conversation where many people gave their input. For other topics we have been working with local governments, because we can offer AI-powered public consultations. They used the tool to consult with their constituencies and then came up with local policies.
This line of work has turned out to be profitable for us, too: We have been able to monetize AI-powered market research.
But don't traditional representative audience surveys deliver more reliable insights?
Traditional surveys have their limitations, too. What we offer is a way to complement them with qualitative information.
In your experience, what is the most important thing to consider when governing AI?
We at Rappler were the first to publish an AI guideline. What is important to me is that, whatever we build, we ensure transparency. We must be open about what data is used and for what purpose.
Rappler is a digital company. Has your innovative power made you an interesting partner for Big Tech?
We have to recognize that most of Big Tech's business is with other sectors. I know that in the US some news organizations have been able to strike deals with them. But I think we cannot do that here in the Philippines; they are not interested. When we did the project with OpenAI, we had a lot of conversations with them, but in the end nothing concrete came out of it.
Don Kevin Hapal was introduced to data journalism while researching and writing about social media, disinformation, and propaganda. One of his investigative pieces led to one of the biggest network takedowns by Facebook, covering 220 pages, 73 accounts, and 29 Instagram accounts, with a combined following of 43 million users.
Interview: Jan Lublinski