Rumors vs. Reality: Dr. Message and Cofacts Combat Misinformation
2020-07-23

Software engineers at Trend Micro created the “Dr. Message” anti-fraud platform to benefit the public. From left to right: Kalin, John, Paul Liu, Eric, and Ted. (photo by Lin Min-hsuan)


Misinformation is everywhere these days. Friends often pass on scam cellphone messages like “Download the latest Hello Kitty emojis—tonight only!” or “Your package has been sent out, please collect it now,” with links to phishing websites, as well as fake news such as “[Health minister] Chen Shih-chung says to stay indoors until the Dragon Boat Festival.” It’s hard to tell which of these Internet rumors and “special offers” are true and which are false. But you can avoid falling prey to scams and hoaxes by forwarding the messages to a fact-checking platform.

 

The software engineers who created the online fact-checking platforms Dr. Message and Cofacts initially wrote anti-fraud chatbots to protect friends and family. But because people want to avoid being deceived by misinformation, Trend Micro’s Dr. Message has attracted nearly 400,000 followers on Line and Facebook, and Cofacts has more than 180,000 followers on those platforms. The engineers write programs to benefit the public, reducing the societal harm that misinformation causes, thus turning their love for relatives and friends into a social good.

Collectively solving social problems

Dr. Message analyzes more than 55 million pieces of information every month, and has successfully identified more than 2 million fake messages and items of digital misinformation. Trend software engineer “Big John” (all Dr. Message’s programmers use aliases to protect themselves from online fraudsters) created the app in 2018 after his mother got taken in by a fraudulent Line message promising free Mickey Mouse emoji stickers if she shared the message with three friends. In just two days, Big John and fellow chatbot study group member Kalin developed a “sniffer dog” chatbot identification program that their family members could use to detect phishing websites.

“My mom didn’t find it easy to use,” says Big John, scratching his head. It turned out that the chatbot wasn’t gathering enough information on scams, fake news, and phishing websites. Consequently, the program couldn’t detect fraudulent messaging. The engineering team then asked Paul Liu, senior manager of Trend Micro’s Global Consumer Sales Enablement and Business Sales Department, to promote Dr. Message on Mobile01 and other online forums. They hoped to use data provided by users to improve the app’s sensitivity in detecting misinformation.

Fact-checking surges

“We started out doing scams!” quips Paul Liu. What he means is that Dr. Message originally checked out only fraudulent websites and phony Line accounts. Because he’d worked with the Criminal Investigation Bureau while employed at Microsoft, Liu collaborated with the bureau’s 165 National Anti-fraud Platform, and from August 2018 he linked Dr. Message with the Taiwan FactCheck Center, Cofacts and MyGoPen databases. In October of the same year he brought in more public-sector sources by including the corrective information sections on the websites of the Ministry of Foreign Affairs, the Taiwan Food and Drug Administration and the Taiwan Centers for Disease Control.

In February 2020, Dr. Message took part in the US–Taiwan Tech Challenge, organized by the American Institute in Taiwan and the Institute for Information Industry, winning first prize, US$175,000 (about NT$5.3 million). When Paul Liu talks about the team’s motivation for entering the competition, there’s sadness in his voice. Although Trend provided funds for the platform’s operation, the team’s supervisor didn’t support it, causing team members to nearly give up and withdraw from the project on several occasions.

All this changed when Trend CEO Eva Chen met with the team and affirmed that it was demonstrating the core competence and corporate social responsibility expected of a global information security company. In June 2020 Dr. Message began collaborating with Trend Micro’s Japanese and Philippine branches, going on to launch Japanese and English versions of the Dr. Message software. The aim is to reduce misinformation and fake news that might result in financial loss or reputational harm for people in those countries.
 

The Dr. Message team took first prize at the US–Taiwan Tech Challenge, receiving recognition of their efforts to combat disinformation. (photo by Kent Chuang)


The Cofacts approach

Cofacts gets its name from the concept of collaborative fact-checking, whereby the public are invited to join in efforts to identify false information.

One day in September 2016, Cofacts founder Johnson Liang was riding on the MRT, chatting with his school friend Billion Lee, a graduate of National Taiwan University’s Political Science Department. “People pass on rumors, but they don’t Google to find out if they’re true,” Liang said. He then asked Lee to help him write a chatbot program that anyone could use.

Liang used the online spreadsheet service Airtable to set up a fact-checking database. Lee, who was responsible for verifying content, identified a slew of fake messages daily, but the job proved too much for one person. They needed more fact-checking editors.

When Minister Without Portfolio Audrey Tang learned of the situation, she posted a note on her Facebook page announcing that Cofacts needed editors. The post attracted no applicants, but it drew lots of people to the Cofacts website to check out suspected misinformation. The surge of interest made Cofacts an overnight sensation, overloading the Airtable database. Consequently, Johnson Liang designed a new database in 2017; furthermore, to apply for a g0v Civic Tech Prototype Grant, he invited his NTU Department of Computer Science & Information Engineering classmates Zucien and GGM to write a proposal. Since receiving the grant, the team has held regular meetings every Wednesday evening.

Everyone’s a disinformation detective

In addition to expert programming skills, the Cofacts team also has an ambitious “open source” philosophy.

His long hair rolled up in a bun, the clear-faced Liang casually remarks, “Wikipedia, the programming Q&A website Stack Overflow, and Yahoo Answers all rely on collaborative fact-checking, open to any and all volunteer editors.”

In this open-source spirit, Cofacts has made its database and code available to the public. Developer Carol Hsu’s fact-checking bot Meiyuyi, recently popular with Line groups, accesses Cofacts’ database. Via Audrey Tang, a Thai NPO called Open Dream got to know Cofacts and, using Cofacts’ English-language toolkit, created a Thai version of the software. “That’s how open-source projects work,” says Johnson Liang.
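
For developers who want to follow Meiyuyi’s or Open Dream’s example, a rough idea of what “accessing Cofacts’ database” can look like is sketched below in Python. The GraphQL endpoint URL, query name, and field names are assumptions drawn from the publicly documented cofacts/rumors-api open-source project rather than details given in this article, so check Cofacts’ own developer documentation before relying on them.

    # Minimal sketch: query Cofacts' open GraphQL API for replies to a suspicious message.
    # The endpoint URL and the ListArticles query/field names are assumptions taken from
    # the public cofacts/rumors-api project; verify them against the current schema.
    import json
    import urllib.request

    COFACTS_GRAPHQL = "https://api.cofacts.tw/graphql"  # assumed public endpoint

    QUERY = """
    query ($text: String!) {
      ListArticles(filter: { moreLikeThis: { like: $text } }, first: 3) {
        edges {
          node {
            text
            articleReplies {
              reply { type text }
            }
          }
        }
      }
    }
    """

    def check_message(text):
        """POST a suspicious message and return any matching articles with crowd-sourced replies."""
        payload = json.dumps({"query": QUERY, "variables": {"text": text}}).encode("utf-8")
        request = urllib.request.Request(
            COFACTS_GRAPHQL,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            return json.loads(response.read().decode("utf-8"))

    if __name__ == "__main__":
        result = check_message("Your package has been sent out, please collect it now")
        print(json.dumps(result, ensure_ascii=False, indent=2))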

Media literacy

Billion Lee analyzes the types of information that Cofacts checks, including false healthcare messages, ads for free emoji stickers, online scams, political news, and “zombie” misinformation that resurfaces long after being debunked. “Whether rumors or responses, it’s all worth checking. But in the end, you always have to judge for yourself,” Lee says.

Dr. Message plans to release Facebook and Twitter versions. AI expert Ted emphasizes that computers still have many limitations in understanding human language and text. Fact-checking platforms can help the public more easily determine if information is false. But most importantly, people need to develop media literacy—the ability to spot false information on their own.