TikTok and Reddit’s attempts to combat misinformation

TikTok has grown into a massive social media platform since it merged with Musical.ly in 2018, and as a result, the platform has seen a large amount of misinformation shared by users. In an article from Capitol Technology University, the author noted that a 2022 report found that roughly a fifth of TikTok videos contain misinformation. Because of this, it is important to analyze TikTok’s efforts to combat the misinformation and disinformation that targets its platform.

Considering that a large number of younger adults and kids use the app, content that could trick users is extremely dangerous. For example, a 17-year-old interviewed for an article in The National stated that artificial intelligence makes it hard for them to distinguish fact from fiction, which leads to more misinformation being spread. Because of this, TikTok has committed to labelling AI-generated content; however, experts say that isn’t enough to combat the problem. Another effort TikTok has made to limit the spread of misinformation is creating a guide with the non-profit MediaSmarts to help parents and their kids navigate the platform safely and avoid being tricked by AI-generated content. I think this is a good step toward preventing people from falling for misinformation, as learning from a young age how to navigate such platforms can be extremely beneficial.

According to Capitol Technology University, TikTok has worked to combat misinformation by collaborating with reputable fact-checking organizations such as PolitiFact and Snopes to help find and remove misinformation from the platform. I think this is an important step, as using third-party, unbiased organizations can make users feel more comfortable using the platform and more confident about whether what they are reading is accurate or misleading. TikTok has also launched educational campaigns to help users recognize misinformation through in-app videos, articles, and quizzes. According to an article in TikTok’s newsroom, TikTok partnered with The Journal to support Media Literacy Ireland’s campaign by creating in-app videos intended to educate users on how to spot and counter mis/disinformation. I think using videos and other tactics like quizzes is a great out-of-the-box method of getting users engaged in learning about misinformation before they fall for it. Capitol Technology University also stated that TikTok has an automated moderation system that scans for violations of its community guidelines, which are then reviewed by TikTok content moderators.

Reading through TikTok’s Safety Center, I found numerous statements that detail their attempts to curb misinformation on their platform. TikTok states that its “Community Guidelines prohibit harmful misinformation about health, elections, climate change and more,” and that “When content goes against these rules, we remove it or make it ineligible for the For You feed as outlined in our policies.” TikTok makes it clear that these policies apply not only to intentional disinformation but also to unintentional misinformation. I think this is important because even misinformation that is unintentional can be harmful once it spreads. In addition, TikTok confirms that it works with independent fact-checkers and allows users to report misinformation if they spot it. If a creator uses artificial intelligence, they must label the content as AIGC, which TikTok provides a tool for. TikTok also automatically labels content as AI-generated if it is identified as such. This is valuable because, as stated earlier, AI has become so advanced that it can be incredibly difficult to distinguish AI-generated from non-AI content. TikTok also has features to help users find reliable sources, such as search reminders, informational banners, and election centers. This is also a good way for the platform to help users access accurate information from reputable sources. For example, during a crisis, TikTok states that it will display search banners or pop-ups to direct users to authoritative sources with trustworthy updates.

I think TikTok is on the right track with promoting media literacy and should expand on that effort. Creating fun, educational in-app content can help attract users and make them more aware and better able to recognize misinformation, not just on TikTok but also on other platforms. While TikTok has taken steps to create fact-checking systems, I think it should find more ways to prevent misinformation from being posted in the first place. Flagging accounts that have previously posted misinformation could be a beneficial way to keep misinformation from being shared on TikTok. Stemming from this, I think TikTok should improve its AI and machine learning tools to spot misinformation before users have the chance to report misleading content. This would limit how far a post spreads before it reaches a large audience.

Reddit, another platform I use quite often, unfortunately takes a drastically more lenient stance on preventing misinformation. Reddit’s Content Policy features a list of only eight rules, and only a couple of them even somewhat relate to misinformation. One rule tells users to “Post authentic content into communities where you have a personal interest, and do not cheat or engage in content manipulation (including spamming, vote manipulation, ban evasion, or subscriber fraud) or otherwise interfere with or disrupt Reddit communities.” Another tells users not to “impersonate an individual or an entity in a misleading or deceptive manner.” Once the COVID-19 pandemic began, Reddit was pressed on its minimal efforts to curb the misinformation and disinformation posted on its platform.

According to CNN, “Reddit banned one prominent subreddit called r/NoNewNormal, which described itself as hosting a ‘[skeptical] discussion of the “new normal” that has manifested as an outcome of the coronavirus (COVID-19) pandemic’ and has been flagged by several prominent subreddits as a significant source of Covid-19 and vaccine misinformation.” However, Reddit’s security team stated that the subreddit was not banned for spreading misinformation but for “brigading.” In the same article, CNN reported that Reddit had placed 54 subreddits under quarantine, removing them from search and recommendations and placing them behind a warning. On the other hand, CEO Steve Huffman has refused to have the platform itself fight COVID-19 misinformation, insisting that such actions should be taken by the volunteer moderators of individual subreddits. Huffman said, “Given the rapid state of change, we believe it is best to enable communities to engage in debate and dissent, and for us to link to the CDC wherever appropriate,” and added, “While we believe the CDC is the best and most up to date source of information regarding COVID-19, disagreeing with them is not against our policies.” The platform has suggested that users report misinformation when they see it, and stated that it will work closely with moderators if misinformation regularly crops up in their subreddits.

I think that Reddit has not done enough to fight the spread of misinformation. While the platform is intended to give its users the ability to voice their opinions, it can be dangerous to allow misinformation to spread without any interference from the platform itself. Reddit should have a fact-checking system of some sort that can identify when misleading information is being spread. Further, the platform should block users who post misinformation or disinformation. Reddit could use AI tools like TikTok’s to spot misinformation, and it should update its rules to be stricter about sharing false information.

