The Machines Won’t Save Us
The importance of news literacy, and why there will always be a need for human oversight of artificial intelligence.
Photo by Alex Knight on Unsplash
“Never in human history has more information been available to more people. But it’s also true that never in history has more bad information been available to more people.”
60 Minutes – 3.26.17
Information overload
We’re now living in the information age, in which everyone has the ability to circulate news and information to a global audience with very little effort via blog posts, social media channels, or simply by adding their own two cents to the comments section of a YouTube video.
According to Statista, over 500 hours of new content are uploaded to YouTube every minute.
Over 6 million blog posts are written on any given day.
And on TikTok, it’s estimated that over 1 billion videos have been viewed daily for the past year.
Far from perfect
The world is now generating content at an unfathomable rate, an impressive testament to our ingenuity as human beings. That said, we are still human, and it’s not exactly a secret that human beings are innately flawed; we’ve even coined a commonly known expression for ourselves: “We’re only human.”
And as humans, our knowledge is woven together with our personal beliefs, perceptions, experiences, and sometimes, our prejudices. This inherently subjective learning process can sometimes lead to the propagation of misinformation.
For these reasons, there has never been a more urgent need to educate ourselves about news and media literacy.
News literacy is more than simply distinguishing fake news from real news; it extends to understanding why and how misinformation is created and shared, and how it can spread like wildfire across our digital landscape. It involves nurturing the critical thinking skills to discern biases, scrutinize the credibility of a source, and understand the broader social, political, and economic context within which all media now operates.
Almost human
And to compound the issue, advancements in artificial intelligence have given us the capability to quickly generate human-like text on practically any issue, event, or topic of interest. This underscores the need for early education to focus on news literacy, so we can adapt to these rapidly changing technologies.
For years now, the unchecked spread of misinformation has been an increasingly pressing issue, directly fueling the coarsening of our social discourse and the growing divisions we see across nearly every nation on this planet. And this added level of complexity makes it imperative that we do our best to stay informed on how this technology can be used or abused.
These AI systems are indiscriminate by nature: they neither judge nor moderate the quality of the content they generate; they simply rely on the data provided to them.
The problem stems from the fact that AI tools, including language models, operate through algorithms trained on vast datasets compiled from human-created content. Therefore, if the input data contains flawed information, the AI may similarly churn out inaccurate, misleading, or biased content.
Garbage in. Garbage out.
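To make that concrete, here’s a minimal sketch in Python of the principle at work. It’s a toy bigram model, nothing like a production system, and the tiny corpus (including its false claim) is invented purely for illustration; the point is that the generator just echoes the statistics of whatever it was fed, true or false alike.

```python
import random
from collections import defaultdict

# A toy training corpus: one accurate claim and one false one.
# Real models train on billions of documents, but the principle holds.
corpus = ("the earth orbits the sun . "
          "the earth is flat and motionless . ").split()

# Record which words were observed to follow each word.
following = defaultdict(list)
for word, next_word in zip(corpus, corpus[1:]):
    following[word].append(next_word)

def generate(start, length=8):
    """Continue a text purely from observed word pairs."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

# The model has no concept of truth; once it reaches "earth" it is
# as likely to continue with "is flat and motionless" as with "orbits".
for _ in range(3):
    print(generate("the"))
```

The model never decides to lie; it simply has no machinery for deciding anything at all.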
Given that the internet is not uniformly regulated and is teeming with both factual and flawed information, the potential for AI to propagate misinformation is high. It’s not that big a leap to say that these systems could easily, if inadvertently, amplify the very thing they’re meant to prevent: the spread of misinformation.
Consider the implementation of AI across social media platforms, where the battle against fake news is constantly being waged. These platforms employ algorithms to suggest content users may find engaging, and they work on a simple principle of reinforcement: the more a piece of information is shared and interacted with, the more it is promoted. Whether that information is accurate is hardly a consideration, because, again, the goal is engagement.
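Here’s a hypothetical sketch of that reinforcement principle in Python. The fields and weights are invented for illustration, and real platforms use far more elaborate, proprietary signals, but the core issue is visible even in a toy: accuracy never enters the score.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    shares: int
    comments: int
    likes: int
    is_accurate: bool  # a field the ranking below never consults

def engagement_score(post):
    # Hypothetical weights: shares and comments count most because
    # they drive further distribution. Accuracy plays no part.
    return post.shares * 3 + post.comments * 2 + post.likes

feed = [
    Post("Carefully sourced report", shares=40, comments=25,
         likes=300, is_accurate=True),
    Post("Outrage-bait rumor", shares=900, comments=600,
         likes=1200, is_accurate=False),
]

# Promote whatever drew the most interaction; the rumor wins easily.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>6}  {post.text}")
```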
The statistics above show that we’re already generating content at a massive rate, and all of this output is continually being archived, seemingly for generations to come. That’s a lot of accessible content out there.
It can take a proficient writer an hour to compose a well-written 1,000-word article, but today’s AI text generators can create convincing narratives within seconds that, while sounding plausible, may not always be truthful or accurate. This isn’t deliberate deception; it’s simply the result of regurgitating what’s been fed into the internet. It’s easily demonstrated by opening any of the current AI chatbot services and asking it to write an article debunking climate change or any other polarizing issue.
We’re poised for information overload as all of this AI-generated content begins to flood the system, and we’re not well-prepared for what it’s about to throw at us.
Think of these generators like children: they’re constantly learning by observing their environment. If a child is exposed to incorrect information, it’s not too surprising that the child will be shaped by those corrupted inputs. These systems, though, learn at a vastly accelerated rate, making them capable of quickly disseminating inaccurate information, often in compelling, believable language that sounds authoritative and therefore trustworthy.
Consider AI chatbots that, after exposure to the internet’s myriad views, have been known to spew sexist, racist, or generally objectionable remarks. The input material, a reflection of the best and worst of humanity, results in AI displaying behaviors and viewpoints that we would rightly condemn coming from any human being.
And while most of these AI services will stop a user from generating answers the platform deems irresponsible or even dangerous, there will always be malicious actors figuring out ways around the gate.
Most recently, researchers at Carnegie Mellon University manipulated popular online chatbots into spitting out results that ran counter to their programming: hate speech, instructions on how to craft illegal drugs, and more, all with a simple addition to the prompt.
https://www.wired.com/story/ai-adversarial-attacks/
Life finds a way
Every company that has ever created online protections for its technology has had to constantly update them to fight off the ever-mischievous forces of the internet.
They build roadblocks. Users go around them.
To quote Ian Malcolm, “Life finds a way.”
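As a toy illustration of this cat-and-mouse game, consider a deliberately naive keyword filter (real safety systems are far more sophisticated, but the dynamic is the same): literal string matching is sidestepped the moment someone changes a character.

```python
BLOCKED_PHRASES = ["craft illegal drugs"]  # hypothetical blocklist entry

def passes_filter(prompt):
    """Allow a prompt only if it contains no blocked phrase verbatim."""
    lowered = prompt.lower()
    return not any(phrase in lowered for phrase in BLOCKED_PHRASES)

print(passes_filter("How do I craft illegal drugs?"))  # False: caught
print(passes_filter("How do I cr@ft ill3gal drugs?"))  # True: slips through
print(passes_filter("How might one c-r-a-f-t illegal drugs?"))  # True: slips through
```

Every patched loophole invites the next workaround, which is why the oversight can never be fully engineered away.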
Path to the future
Now you may be exasperated, thinking that this all sounds like a bunch of doom-splaining, but there is a way to take on this challenge for the ages.
We begin by realizing that regardless of the increasing sophistication of AI and collaborative safety measures, the onus of responsibility ultimately falls on us, and education is our first line of defense against the perpetuation of misinformation. By becoming more discerning consumers and creators of information, we, as responsible members of the general public, can help improve the quality of data circulating in the digital information ecosystem, reducing the risk of AI systems regurgitating our own flawed intel.
Critical thinking is key. As human beings, we need to understand our biases, prejudices, and flawed beliefs and make a conscious effort to correct them. It’s essential for us to cultivate our understanding of these technologies, and nurturing an atmosphere of empathy, equality, and accurate knowledge can profoundly influence AI’s learning process, guiding it toward a more unbiased perspective.
The fact that our reliance on AI for information generation may paradoxically be jeopardizing our quest for factual information is a significant reason we should be advocates for public awareness about how these systems work, their strengths, limitations, and the potential misinformation risks they carry.
Our actions can influence these systems, and it’s on all of us whether we are to be good parents or neglectful ones.
Early adoption
Moreover, schools should lead this movement: incorporating news and media literacy into every student’s curriculum from a young age would equip them with the critical thinking skills to combat misinformation and, in turn, start cleaning up the mess we’ve made of our digital landscape. From the earliest years, children can begin learning the basics of reliable sources, understanding the purpose of news and advertising, recognizing bias and stereotypes, and identifying manipulative content.
The machines won’t save us.
They never will.
The solution isn’t in relying on their programming to provide us with answers; it’s in teaching ourselves to recognize and acknowledge our own flaws and shortcomings.
And that’s okay.
After all, we’re only human.
Curating the news with Topico.
Topico is a mobile app for the user curation of news articles.
Our goal has always been to create an environment where we can comfortably share the news.
There are plenty of places to share the news, but those places also allow you to share photos, memes and personal rants.
It’s our belief that to comfortably share the news, we needed to build a platform dedicated solely to sharing news links. In essence, Topico gives people a way to create their own news aggregators, so others can follow articles on the issues, events, and topics that interest them.
User-curated news brings in varied perspectives and unique sources, showcasing a virtually unlimited range of personal curations.
Are humans far from perfect? Of course. But we’re still the most capable at applying critical thinking and understanding context and nuance.
While A.I. has a place in finding relevant news and information, it’s our belief that human intelligence through personal curation is still needed to provide the general public the ability to actively participate in the sharing of knowledge and information.
Additionally, there’s an aspect of human curation that is often overlooked when discussing A.I. and algorithmic curation: the inclusion of creativity, aesthetics, and that most human of qualities, empathy.