Imagine waking up one morning. You throw on a pot of coffee. You settle down with your laptop. Open your favorite online news site.
And nothing’s there.
What’s happening? Where are all the articles covering today’s current events?
Well, it seems that in order to turn a profit, the publishers decided to embrace artificial intelligence as the savior of the news industry and auto-generate the articles of the day.
It’s not necessarily their fault. It was simply an act of survival, because over 70% of Americans don’t want to pay for the news.
Their number one reason?
“It’s free, man.”
For most people, journalism isn’t a product worth purchasing. It’s just considered to be another free offering of the internet.
Paywalls? Eh, they’ll simply google the story and find it elsewhere. That’s because the news has become an expected accessory of the internet, and the idea of purchasing “the news” is like paying for wi-fi in a coffee shop. It’s now just a part of the service.
And so the news outlets simply sent all the wordsmiths and reporters home, closed their doors, and accepted the machines as their most sustainable business strategy.
But something strange happened.
The machines didn’t spit out the latest story on the war in Ukraine, or that insightful review of the latest film release.
Because generative AI can only generate content from what’s already been put out into the world, it’s incapable of working in real-time.
It can’t magically know that bombs have fallen on a school in Bakhmut, or spontaneously decide to watch, comprehend, and craft a film review. It requires direction. It’s also unable to perform tasks like critical analysis because it lacks human intuition, emotional depth, and an understanding of the complex ethical and cultural considerations at play.
Sure, it could scan the social networks or attempt to scrape blogs and forums for real-time information, but without any human oversight, you would see the news devolve into the written equivalent of a cafeteria food fight, with a comparable level of quality in the cuisine.
It could probably pull data from financial reports, sports scores, and weather and geological surveys, but what about all those stories that can only come from the human experience? Those creative, nuanced perspectives that breathe life onto the page because they’re coming from a place of emotion, personal engagement, or the basic understanding of context?
The journalist who notes the tension in a courtroom, or has spent decades building confidential relationships, or uncovers that story of personal perseverance during a tragedy.
That’s all gone.
And with professional journalists no longer contributing to the information ecosystem, AI is left with only being able to plagiarize itself.
So what happens when that missile strikes a classroom in some foreign war?
Citizens would naturally take to social media. They could post pics with quick captions, or maybe they’re savvy enough to add a bit more detail.
They could then start attracting “likes”, catching the attention of the internet.
Corrupt governments would then set their propaganda machines to flood the system with their own narrative, complete with fake or exploited imagery. All this occurs in a matter of seconds as their own bot farms start manipulating the visibility of the content through “likes” and shares.
Bots begin to fight bots for internet supremacy over the “truth”, creating the illusion of a popular consensus where there is none, degrading public discourse, and elevating distrust and paranoia.
And all this occurs in less time than it takes a journalist to boot up their laptop.
And now the problem that AI was originally thought to be able to fix is not only unresolved, it’s compounded, broken, and corrupted.
Flooding the market
Advanced AI systems will continually improve at generating realistic, persuasive text, making it easier to craft and spread false narratives. They can churn out biased articles, misleading news, and increasingly convincing deepfake audio and video, all designed to trigger emotional responses and persuade people to believe or propagate a false storyline.
And all of this data will flood the reservoir from which legitimate news services feed their own AI, making it increasingly difficult to verify sources, identify users, and deploy fact-checking systems.
These reputable services are then slowed by having to authenticate the veracity of a story and implement safeguards and responsible approaches to validate the information flooding the system. This slows their dissemination speed, while the bad actors face no such impediments to their only goal: to quickly spread fear, doubt, and distrust.
These bad actors have the added advantage of being unaffected by the regulations placed on legitimate news services. They aren’t handcuffed by requirements for transparency, accountability, or even basic respect for human rights.
It’s an uneven playing field.
The battle between legitimate news services vs. foreign (or domestic) propaganda machines is like watching a boxer following the Marquess of Queensberry rules go up against a steroid-enhanced MMA fighter in the octagon.
It’s a slaughter.
And to make matters worse, the potential flood of misinformation reinforces the distrust that people, sealed in their filter bubbles, already have of the mainstream media.
There’s no doubt that there have to be regulations in place when integrating artificial intelligence into fields like news and journalism, but, as we know all too well, they will only apply to legitimate and credible news outlets.
Regulation won’t eliminate misinformation, or worse, disinformation. It will simply apply rules that are only adhered to by one side.
And while it’s imperative that researchers develop AI systems capable of fact-checking and disinformation detection, which could help mitigate the propagation of falsehoods, such systems only help in identifying misinformation. They don’t stop it from drowning the channels it thrives in: social media, comments, and memes.
And if we start removing journalists from the equation, we lose a powerful weapon in the war against misinformation: a referenceable source with a background of education, credits, and accomplishments.
So what now?
Personally, I don’t want my newsfeeds, emails, and chat services overrun with inhuman, bot-processed, regurgitated articles that can only form a complete sentence based on what a human has already diligently crafted in the first place.
AI is great as a supporting tool: it can jumpstart writer’s block or provide simple yes-or-no answers.
Nevertheless, I still want news generated from the human experience.
And to ensure that journalists are protected, the solution begins with changing the way we consume news and information and recognizing the value of professional journalism.
The first line of defense.
Education and news literacy.
Considering the current landscape, this has to begin at every age level. While it’s essential for children to receive a basic education on internet privacy, fact vs. fiction, and digital etiquette at a young age, many of these lessons apply to older adults as well.
Simply understanding that everything they see or hear in the media is content helps them conceptualize what they’re consuming. Teaching them to identify that some content is factual, some is opinion, and some is made-up stories is a basic fundamental of news literacy.
Critical thinking. Asking questions about the content they’re absorbing.
- Do you think this is real or true? Why, or why not?
- What do you think is the purpose or intention of what you’re consuming?
- What’s missing? Are there open questions? A lack of evidence? Is there another perspective?
Sourcing. Where did this come from?
- Who is the author? What are their credentials?
- What is the background and history of this source? When was it published?
- Does the source have an editorial process?
Format. How is the content presented?
- Does it use inflammatory or sensational language?
- Is it relying heavily on emotion rather than facts or logical arguments?
- Is it an advertisement? Is it trying to sell you a product or service?
These are basic considerations that adults often don’t or won’t apply when consuming news and information.
Trust and accountability.
Regaining trust in journalism and mainstream media is a complex but vital task when it comes to building a culture for news literacy.
Common refrains are “I don’t trust legacy news” and “The news is biased and corrupt.”
I often ask people, “If you don’t get your news from the news… where do you get your news?”
It’s a question that rarely gets a straight answer, and as stated throughout this article, there are a LOT of disingenuous, unregulated, unreliable sources to fill that void.
Trying to get these people to read reputable sources as well as their own online sources may very well be a fruitless endeavor because of the safety of their filter bubble, but again, that’s why it’s essential to teach children at a young age to apply critical thinking skills to all of the content they consume.
With that said, there are well-known but sensational news outlets that construct provocative headlines to get clicks like any other form of content.
They hurt professional journalists who are uncovering stories that are essential for a more informed democracy.
These opinion-based outlets will always exist in the 24-hour news cycle, but they mostly rely on referencing an existing article, and then espousing their viewpoints, opinions, and theories.
They are the overly sweet dessert, whereas the referenced article is often the healthy vegetables. Critical thinking tells you which is which, but the consumer still has to eat a balanced meal.
Additionally, these outlets only sell what consumers are buying.
Regaining trust in journalism requires us all to manage our diet, and once we are careful with what we consume, the menu will begin to change as well.
We need journalism. We need quality journalism, and we ourselves need to make changes to create a more informed, less divisive society.
Instead of the usual summary, it seemed apropos to end with a question to artificial intelligence using ChatGPT 4.0.
Topico is a mobile app for the user curation of news articles.
Our goal has always been to create an environment where we can comfortably share the news.
There are plenty of places to share the news, but those places also allow you to share photos, memes, and personal rants.
It’s our belief that to comfortably share the news, we needed to build a platform dedicated to sharing only news links. Essentially, Topico provides a way for people to create their own news aggregators, so others can follow articles on the issues, events, and topics that interest them.
User-curated news provides varied perspectives and unique sources, showcasing an endless range of personal curations.
Are humans far from perfect? Of course, but they’re still the most capable of applying critical thinking and understanding context and nuance.
While A.I. has a place in finding relevant news and information, it’s our belief that human intelligence through personal curation is still needed to provide the general public the ability to actively participate in the sharing of knowledge and information.
Additionally, there’s an aspect of human curation that is often overlooked when discussing A.I. and algorithmic curation, and that’s the inclusion of creativity, aesthetics, and the most human of qualities, empathy.