Quick Thoughts on the Evolution and Erosion of Trust in the AI Age

We are living through crazy times. Trust (or confidence in the veracity of the information we obtain) shapes reality and underpins societal function. Yet, trust has become fluid, evolving alongside technological and cultural shifts. Let’s examine these shifts.

Historically, trust was placed in institutions and authoritative bodies. The credibility of traditional media organizations and governmental bodies was built on processes of verification and accountability. When a major news outlet reported a story, there was implicit trust that it had been vetted. However, trust in these institutions has diminished drastically amid political polarization, corporate interests, and failures of transparency. Misinformation around wars and economic crises, along with biased reporting, has fed the perception that these institutions serve agendas rather than the truth. As institutional credibility waned, individuals increasingly turned to alternative sources of information, a shift that coincided with the rise of social media and the democratization of content creation. Anyone with a platform and reach could shape public discourse. The result was a paradigm shift—from trust in institutions to trust in individuals.

People now trust individuals: influencers, podcasters, and social media personalities. This shift has reoriented how information is disseminated. Instead of top-down verification (supply side), trust is now bottom-up (demand side), dictated by popularity and ideological alignment. People follow commentators who reinforce their worldviews rather than challenge them, leading to echo chambers that entrench biases rather than cultivate critical thinking. As is evident today, this transformation in trust is dangerous. Unlike institutions, which are held to some level of accountability, individuals operate with fewer constraints. There are no editorial standards or fact-checking requirements to ensure accuracy. Misinformation spreads easily within the fragment of society that accepts it, and then, more dangerously, across fragments. Objective truth becomes less relevant to society than subjective belief.

On top of this, let’s add the effect of AI. Can we move from people-trust to AI-trust? Imagine a story where a dozen street cameras and hundreds of sensors (inside and outside the vehicles) record a fatal car accident. What if an agentic AI integrated and analyzed the recordings and wrote up the narrative? Would this “news” be more trustworthy than the more subjective assessment of a police officer? Or will AI engender mistrust? Deepfake technology, for instance, is advancing to the point where human-like avatars present news that is nearly impossible to distinguish from genuine footage. AI is rapidly improving at human-like traits such as empathy, explainability, and reliability, which makes AI content more believable, but more believable is not the same as more trustworthy. Humans are forgiven their cognitive biases and limitations; machines are expected to be infallible, yet they are only as objective as the data they are trained on. The algorithms behind AI are still set by humans, which means biases and errors can persist. AI’s uncanny ability to mimic reality may paradoxically erode trust—if everything can be faked, how can anything be believed?

Getting to the fabric of trust, let’s assume for convenience that (objective) facts exist. In societal discourse, narratives are woven around these facts. Some facts are given more weight; others are downplayed or ignored altogether. Newscasters, podcasters, journalists, media outlets, and individuals all build narratives. The connections between facts can be based on logic or on subjective opinion. Logically built narratives are closer to the truth than those based on subjective values or beliefs. Institutional trust traditionally had standards (grounded in logic or expertise) for verifying narratives. People-trust weaves narratives from the subjectivity of the storyteller. AI-trust rests on narratives produced by models statistically trained on massive amounts of data. Which narrative do we trust as all kinds of information bombard us? We can throw up our hands in frustration, passively consume what we prefer, or hope for improvements in technology or society. Yet hope alone is not a strategy.

So, what is the solution? On the technological front, if we rely on AI-generated content, we must develop verification systems, whether through digital birth certificates for information, blockchain-based authenticity markers, or independent AI oversight bodies. On the societal front, until such systems exist, skepticism should be our default mode of thinking. Rather than assuming information is true unless proven false (think of the quaint era when everything in print was assumed to be true), we must operate under the assumption that all information is false unless verified.
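To make the idea of a “digital birth certificate” a bit more concrete, here is a minimal sketch in Python (assuming the third-party cryptography package; the publisher and reader roles are hypothetical and not tied to any existing standard): a publisher signs a fingerprint of the story with its private key, and any reader can check that signature against the publisher’s public key before trusting the content.

```python
# Minimal sketch of a "digital birth certificate" for a piece of content.
# Assumes the third-party `cryptography` package (pip install cryptography);
# the publisher/reader setup is a hypothetical illustration, not a standard.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Publisher side: create a key pair and sign the content's fingerprint ---
publisher_key = Ed25519PrivateKey.generate()
publisher_public_key = publisher_key.public_key()

article = b"A dozen street cameras and hundreds of sensors recorded the accident."
digest = hashlib.sha256(article).digest()   # fingerprint of the content
signature = publisher_key.sign(digest)      # the "birth certificate"

# --- Reader side: verify the content against the signed fingerprint ---
def is_authentic(content: bytes, sig: bytes, public_key) -> bool:
    """Return True only if `content` matches what the key holder signed."""
    try:
        public_key.verify(sig, hashlib.sha256(content).digest())
        return True
    except InvalidSignature:
        return False

print(is_authentic(article, signature, publisher_public_key))                 # True
print(is_authentic(article + b" [edited]", signature, publisher_public_key))  # False
```

Such a scheme only proves that content is unaltered since signing and attributable to a particular key holder; it says nothing about whether the content itself is true, which is why the societal half of the solution still matters.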

In closing, trust is no longer a static concept tied to authoritative institutions; it is a fragmented narrative shaped by technological and societal changes. We have transitioned from trusting institutions to trusting individuals to contemplating trust in AI. This transformation has both liberated and destabilized the way we consume information. While AI holds the potential to provide objective truth, it also amplifies the risk of manipulation and deception. Ultimately, the future of trust will depend on our ability to critically evaluate information, demand transparency, and establish verifiable truth mechanisms. The next time we encounter a piece of news—whether from a government, an influencer, or an AI—we must ask: Do I believe this because it is true, or because I want it to be true? This single question may be the last safeguard against a world where trust is no longer tethered to reality.

