We have reached a point where it’s hard to imagine life without endless content streaming on our connected devices. From wacky TikTok videos to serious talking heads and even hardcore terrorist campaigns, it’s all out there for our consumption.
This content diet is inadvertently shaping our perspective on the world around us, and some of its effects are widely debated. Mental health experts, for example, have warned that fear-inducing videos on channels like YouTube could affect brain development in young children. They have also cautioned against viewers being exposed to self-harm videos, or even ones that show how to make Molotov cocktails.
But who should be held responsible for the many ills taking place today as a direct (or indirect) result of such content? That said, it would be wrong to view all the content on social media channels and other platforms through the same lens of eagle-eyed scrutiny; much of it consists of genuine gems of free information exchange.
So far, the internet giants have faced few legal obligations relating to their content. In the US, under the provisions of Section 230 of the Communications Decency Act, tech platforms are not legally liable for the content they host. Despite various attempts by both parties in the US Congress to amend the act, content governance is still dictated by the terms of the tech companies themselves.
Tech giants such as Facebook, Instagram, Twitter, TikTok and YouTube have regularly sheltered behind this Section 230 arrangement while flooding user feeds, ad nauseam, with content they “think” is fit for their audiences.
Raising Eyebrows
Perhaps the most challenging aspect of the deluge of content generation has been the advent of fake news. Its damaging effects have shattered our faith in information integrity: we can no longer tell fact from fiction, and rampant misinformation and disinformation have propelled us into a state of perpetual confusion. Even established sources of information are struggling to cope with this debilitating phenomenon. To top it all off, new technologies have made generating false content a cakewalk; think AI-generated material such as deepfakes.
To tackle these challenges, news agencies have set up dedicated fact-checking units. However, social scientists have concluded that the direct impact of fact-checked corrections is “often very limited.” Fact-checking and content moderation are no mean feat, and they will require continuous training and motivational support to achieve efficient and rewarding outcomes. Research shows that Facebook users engage with fake news 70 million times per month on average, down from the 2016 peak of 200 million monthly engagements. On Twitter, people share false content 4 million to 6 million times per month, a figure that has not declined since the 2016 US election, which made history for the unbridled micro-targeting of users with misinformation to sway their votes.
Similarly, unsuspecting app users have been duped out of their savings and personal data by nefarious perpetrators wielding clickbait dressed up as hard-to-resist giveaways and gift offers. Online shoppers continue to face disheartening surprises, such as discovering that a coveted recent purchase is fake merchandise. Despite application owners’ claims of tightened content and data privacy policies, the online harm consumers keep experiencing tells a different story altogether.
Who’s Got the Data?
Many governments across the world have banned platforms like TikTok, citing the company’s ambiguity about how it manages users’ data.
Not surprisingly, in a recent ruling on a 2014 lawsuit filed by a group of activists against Google Korea, South Korea’s Supreme Court demanded that Google come clean on the details of its sharing of South Korean nationals’ personal information with third parties. According to news reports, the complaint alleges that Google passed user information to the American government’s “PRISM” intelligence program.
However, under US law, Google has the right to reject the demand. The Korean Supreme Court has maintained that Google’s obligation to abide by US law does not “legitimize” its practice in South Korea. Under South Korean law, online service providers must comply with individual users’ requests and provide records of whether — and how — their data has been shared with a third party.
Unless such data governance policies are codified into transparent and binding regulations, the global digital economy will continue to face a challenging environment.
Putting Measures in Place
Given the Pandora’s box that online media has become, governments around the world are trying to work out how to sanitize digital platforms. One such initiative in Europe has taken the shape of the Digital Services Act (DSA). Guided by the principle that “what is illegal offline should be illegal online,” the DSA aims to provide clearer and more standardized rules for digital service providers, large and small, across the European market. It will regulate how platforms moderate content, advertise and use recommendation algorithms across online marketplaces, social networks, app stores, travel and accommodation platforms, and more.
Moreover, initiatives such as UNESCO’s Internet for Trust conference in February 2023 brought together over 3,000 representatives of governments, regulatory bodies, digital companies, academia and civil society to discuss the dilemma of misleading information on social media channels. UNESCO will finalize and publish the resulting guidelines by mid-2023; governments, regulatory and judicial bodies, civil society, the media and digital companies themselves will then be expected to use them to improve the reliability of information online while continuing to promote freedom of expression and human rights.
The Long and Winding Path
Experts predict an even more difficult battle against misinformation in the future unless efforts are made to rein it in now. Given the arms race toward AI supremacy among tech giants, including Google and Microsoft, misinformation may come to be presented ever more convincingly, even unintentionally, as is already being witnessed with chatbot services like Google Bard and ChatGPT. Such AI chatbots have already come under the scrutiny of governments around the globe.
Most importantly, while efforts such as the DSA, the Internet for Trust and many similar initiatives are under way, internet companies would do well to act proactively on their platform policies: enforcing them consistently, coordinating automation and human reviewers for critical operations, building closed-loop feedback mechanisms, and the like, so that they stay on top of challenges that may surface in the future.
“If people continue to tolerate social media algorithms that reward lies, future generations will inherit a world in which truth has been dangerously devalued,” warned UNESCO director-general Audrey Azoulay during her prescient keynote address at the Internet for Trust conference.