“And be it indeed that I have erred, mine error remaineth with myself” Job 19:4.
Laws regarding personal responsibility—whether biblical or contemporary—hold every person accountable for their own actions. United States law extends the concept of personhood to corporations as well as humans (U.S. Code Title 1, Section 1: “…the words ‘person’ and ‘whoever’ include corporations, companies, associations, firms, partnerships, societies, and joint stock companies, as well as individuals…”).
Personal responsibility is paramount in today’s culture of 24/7 access to information, and it underlies the implicit understanding that anything published as news has been verified as true. Therefore, whether a company creates content itself or merely allows previously written material to be posted, it bears the responsibility for monitoring and preventing the spread of fake news.
Consumers have the power to click anywhere for free information, but the media corporations that supply it require income to operate, and they generate that income largely by selling advertising space. For example, 97.9% of Facebook’s (now Meta) total 2020 revenue came from advertising. Not far behind, 81.6% of Google’s revenue in the fourth quarter of 2021 came from advertising. High-traffic sites like Meta are more attractive to advertisers because a large volume of views drives the click-throughs and purchases that keep advertisers in business. With advertising this lucrative, media companies create and promote clickbait titles and fake news, enticing the consumer’s valuable attention with a steady stream of new articles to click on. The need to feed the coffers seems to take precedence over personal responsibility for the content.
With great power must also come great responsibility, and the Kingpin-like media giants counter claims of mismanagement by proudly announcing their policies and programs to protect their trusting consumers. However, human fact-checkers and artificial-intelligence algorithms take an average of 10 to 20 hours to find and flag a piece of fake news. In that window, a single post can be shared thousands of times and seen by millions of viewers, most of whom will never learn that, almost a day later, the original was flagged as fake.
Additionally, media algorithms amplify whatever an individual consumer has selected before, trapping people in an echo chamber by promoting more of what already pleases them. For example, in December 2016, fed constant assurance that presidential candidate Hillary Clinton was running a child sex ring out of the basement of a pizza parlor, a man armed himself with a semiautomatic rifle and stormed the restaurant to right the wrongs he believed the authorities refused to address. The news was fake, the man went to prison, the pizza parlor was damaged by gunfire, and its employees and patrons suffered lasting psychological harm. The present version of corporate responsibility for fake news is not yet good enough.
I believe that a small change in how posts are flagged can make a meaningful difference. Media companies should require authors to tag their contribution as part of the posting process, e.g., “news–unverified” or “opinion”. Rather than trawling through millions of posts looking for fake news, fact-checkers can search for the “news–unverified” tag and begin the verification process much sooner. After verification, the fact-checker changes the tag to either “news–fake” or “news–confirmed”. These tags must be embedded in the post, such that sharing the post automatically shares the tag as well. Consumers will know at once whether a post represents opinion or fact, and whether that fact has been confirmed yet. Corporations will demonstrate their personal responsibility for the content they author or repost from others. This achievable endeavor will benefit both consumer and corporation, and it will lead to a safer online community.
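To make the proposed workflow concrete, here is a minimal sketch in Python of how embedded, share-following tags could be modeled. Every name in it (Tag, Post, verification_queue, record_verdict) is hypothetical; no existing platform’s API is implied.

    from dataclasses import dataclass
    from enum import Enum
    from typing import Optional

    class Tag(Enum):
        # The four tags proposed above; the names are illustrative.
        OPINION = "opinion"
        NEWS_UNVERIFIED = "news-unverified"
        NEWS_CONFIRMED = "news-confirmed"
        NEWS_FAKE = "news-fake"

    @dataclass
    class Post:
        post_id: str
        body: str
        tag: Tag                         # required at posting time
        origin: Optional["Post"] = None  # set when this post is a share

        def effective_tag(self) -> Tag:
            # A share reads its tag from the original post, so a later
            # verdict on the original is visible on every copy.
            return self.origin.effective_tag() if self.origin else self.tag

        def share(self, new_id: str) -> "Post":
            return Post(post_id=new_id, body=self.body,
                        tag=self.tag, origin=self)

    def verification_queue(posts: list[Post]) -> list[Post]:
        # Fact-checkers pull only unverified news, rather than trawling
        # through every post on the platform.
        return [p for p in posts
                if p.origin is None and p.tag is Tag.NEWS_UNVERIFIED]

    def record_verdict(post: Post, confirmed: bool) -> None:
        # The fact-checker replaces the provisional tag with a verdict.
        post.tag = Tag.NEWS_CONFIRMED if confirmed else Tag.NEWS_FAKE

    # A verdict on the original immediately changes what every share reports.
    original = Post("p1", "Breaking: ...", Tag.NEWS_UNVERIFIED)
    copy = original.share("p2")
    record_verdict(original, confirmed=False)
    assert copy.effective_tag() is Tag.NEWS_FAKE

The key design choice in this sketch is that a share holds a reference to its original rather than a private copy of the tag, so a single verdict by a fact-checker propagates instantly to every share already in circulation.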