Have you ever stumbled upon an image online and wondered whether it’s real or created by artificial intelligence (AI)? As AI-generated content continues to flood the internet, it’s becoming increasingly difficult to distinguish between what’s genuine and what’s not. The rise of AI has unlocked a world of creative possibilities, but it also poses significant challenges, transforming the way we consume and interact with online content.
From AI-generated images, music, and videos dominating social media to deepfakes and bots scamming users, AI now touches a substantial part of the internet. According to a study by Graphite, the volume of AI-made content surpassed human-created content in late 2024, a shift driven largely by the launch of ChatGPT in 2022. Another study suggests that 74.2% of pages in its sample contained AI-generated content as of April 2025, underscoring the need for effective content authentication and AI detection.
As AI-generated content becomes more sophisticated and nearly indistinguishable from human-made work, a pressing question arises: How well can users actually identify what’s real as we enter 2026? The answer carries significant implications for digital media, online trust, and information authenticity.
AI content fatigue kicks in: Demand for human-made content is rising
After a few years of excitement around AI’s “magic,” online users are increasingly experiencing AI content fatigue, a collective exhaustion in response to the unrelenting pace of AI innovation. The fatigue is fueled by growing awareness of how much online content is AI-generated, and what that means for online credibility and trust.
According to a spring 2025 Pew Research Center survey, a median of 34% of adults globally said they were more concerned than excited about the increased use of AI, while 42% were equally concerned and excited. That shift in public perception highlights the need for transparent AI labeling and content certification.
“AI content fatigue has been cited in multiple studies as the novelty of AI-generated content is slowly wearing off, and in its current form, often feels predictable and available in abundance,” Adrian Ott, chief AI officer at EY Switzerland, told Cointelegraph. “The key to addressing this issue lies in human-made content and authenticity, which are essential for rebuilding online trust and credibility.”

“In some sense, AI content can be compared to processed food,” he said, drawing parallels between how the two have evolved. “Just as consumers increasingly prefer organic food and local produce, online users will seek out human-crafted content and authentic media that provides a more personal and emotional connection.”
“When it first became possible, it flooded the market. But over time, people started going back to local, quality food where they know the origin,” Ott said, adding:
“It might go in a similar direction with content. You can make the case that humans like to know who is behind the thoughts that they read, and a painting is not only judged by its quality but by the story behind the artist.”
Ott suggested that labels like “human-crafted” might emerge as trust signals in online content, similar to “organic” labels in food.
Managing AI content: Certifying real content emerges as a workable approach
Although many argue that most people can spot AI text or images at a glance, detecting AI-created content is more complicated than it seems. Increasingly sophisticated deepfakes and AI-generated media are making the real and the synthetic ever harder to tell apart.
A September Pew Research study found that 76% of Americans say it’s essential to be able to spot AI content, yet only 47% are confident they can accurately detect it. That gap highlights the need for effective AI detection tools and content authentication methods.
“While some people fall for fake photos, videos or news, others might refuse to believe anything at all or conveniently dismiss real footage as ‘AI-generated’ when it doesn’t fit their narrative,” EY’s Ott said, pointing to the flip side of the detection problem: misplaced skepticism toward genuine content.

According to Ott, global regulators seem to be moving in the direction of labeling AI content, but “there will always be ways around that.” Instead, he suggested a reverse approach: certify real content the moment it is captured, so authenticity can be traced back to an actual event rather than trying to detect fakes after the fact.
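Mechanically, certifying content at capture usually means binding a cryptographic hash of the media to a signature from the capturing device. The Python sketch below illustrates that general idea; the function names, record format and Ed25519 key handling (via the third-party `cryptography` package) are illustrative assumptions, not a description of any specific regulator’s or vendor’s scheme.

```python
# A minimal sketch of capture-time certification, NOT any vendor's actual
# scheme: the capture device hashes the media bytes and signs the hash with
# a device-held private key, so the content can later be verified as
# unmodified since capture. Requires `pip install cryptography`.
import hashlib
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In a real device, this key would live in secure hardware, not in memory.
device_key = Ed25519PrivateKey.generate()
device_pub = device_key.public_key()


def certify_at_capture(media: bytes) -> dict:
    """Bind the content hash to a capture timestamp and sign the record."""
    record = {
        "sha256": hashlib.sha256(media).hexdigest(),
        "captured_at": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    return {"record": record, "signature": device_key.sign(payload).hex()}


def verify(media: bytes, cert: dict) -> bool:
    """Confirm the media matches the signed capture-time record."""
    if hashlib.sha256(media).hexdigest() != cert["record"]["sha256"]:
        return False  # content was altered after capture
    payload = json.dumps(cert["record"], sort_keys=True).encode()
    try:
        device_pub.verify(bytes.fromhex(cert["signature"]), payload)
        return True
    except InvalidSignature:
        return False  # certificate was forged or tampered with


frame = b"raw sensor data from the camera"
cert = certify_at_capture(frame)
print(verify(frame, cert))         # True: traceable to the original capture
print(verify(frame + b"x", cert))  # False: post-capture edit detected
```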
Blockchain’s role in establishing “proof of origin”
“With synthetic media becoming harder to distinguish from real footage, relying on authentication after the fact is no longer effective,” said Jason Crawforth, founder and CEO of Swear, a startup that develops video authentication software.
“Protection will come from systems that embed trust into content from the start,” Crawforth said. That principle underpins Swear, which uses blockchain technology and AI to establish that digital media is trustworthy from the moment it’s created.

Swear’s authentication software takes a blockchain-based fingerprinting approach: each piece of content is anchored to a blockchain ledger to provide proof of origin, a verifiable “digital DNA” that cannot be altered without detection.
“Any modification, no matter how discreet, becomes identifiable by comparing the content to its blockchain-verified original in the Swear platform,” Crawforth said, adding:
“Without built-in authenticity, all media, past and present, faces the risk of doubt […] Swear doesn’t ask, ‘Is this fake?’, it proves ‘This is real.’ That shift is what makes our solution both proactive and future-proof in the fight toward protecting the truth.”
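In broad strokes, the pattern Crawforth describes can be shown in a few lines of code. The sketch below is a toy illustration of hash-based fingerprinting anchored to an append-only, hash-chained ledger; the in-memory ledger and function names are assumptions for illustration, not Swear’s actual code, API or blockchain design.

```python
# A toy illustration of the fingerprint-and-anchor pattern, NOT Swear's
# actual implementation: each piece of content is hashed, the hash is
# appended to a hash-chained ledger, and any later copy is checked by
# recomputing its hash against the anchored original.
import hashlib

ledger: list[dict] = []  # stand-in for a blockchain: append-only entries


def anchor(content: bytes, content_id: str) -> None:
    """Record the content's fingerprint, chained to the previous entry."""
    prev = ledger[-1]["entry_hash"] if ledger else "0" * 64
    fingerprint = hashlib.sha256(content).hexdigest()
    entry_hash = hashlib.sha256(
        (prev + content_id + fingerprint).encode()
    ).hexdigest()
    ledger.append({
        "content_id": content_id,
        "fingerprint": fingerprint,
        "prev": prev,  # chaining makes silent rewrites of old entries detectable
        "entry_hash": entry_hash,
    })


def is_authentic(content: bytes, content_id: str) -> bool:
    """True only if the content's hash matches its anchored fingerprint."""
    fingerprint = hashlib.sha256(content).hexdigest()
    return any(
        e["content_id"] == content_id and e["fingerprint"] == fingerprint
        for e in ledger
    )


footage = b"original drone footage bytes"
anchor(footage, "clip-001")
print(is_authentic(footage, "clip-001"))         # True
print(is_authentic(footage + b"!", "clip-001"))  # False: edit detected
```

Because any modification to the content changes its hash, the verifier never has to judge whether something “looks fake”; it only has to check whether the bytes still match the fingerprint recorded at creation.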
So far, Swear’s technology has been used by digital creators and enterprise partners, mostly for visual and audio media from video-capturing devices such as bodycams and drones.
“While social media integration is a long-term vision, our current focus is on the security and surveillance industry, where video integrity is mission-critical,” Crawforth said.
2026 outlook: Responsibility of platforms and inflection points
As 2026 approaches, online users are increasingly concerned about the growing volume of AI-generated content and their own ability to distinguish synthetic from human-created media, making transparent AI labeling and content certification all the more important for online trust.
While AI experts emphasize the importance of clearly labeling “real” content versus AI-created media, it remains uncertain how quickly online platforms will recognize the need to prioritize trusted, human-made content as AI continues to flood the internet.

“Ultimately, it’s the responsibility of platform providers to give users tools to filter out AI content and surface high-quality material. If they don’t, people will leave,” Ott said.
As demand grows for tools that identify human-made media, it is worth recognizing that the core issue is often not the AI content itself but the intentions behind its creation. Deepfakes and misinformation are not new phenomena, though AI has dramatically increased their scale and speed.
With only a handful of startups focused on identifying authentic content in 2025, the issue has not yet escalated to the point where platforms, governments or users are taking urgent, coordinated action.
According to Swear’s Crawforth, humanity has yet to reach the inflection point where manipulated media causes visible, undeniable harm:
“Whether in legal cases, investigations, corporate governance, journalism, or public safety. Waiting for that moment would be a mistake; the groundwork for authenticity should be laid now.”