Without a clear way to benchmark information, it is difficult to judge its veracity. Historically, the greatest purveyors of trust were institutions – we trusted the FBI, Congress, certain news sources – and granted them respectability based on their assumed credibility. With institutional trust declining, many of us flounder in our information silos, often benchmarking information by alignment with a belief system (shaped by upbringing or religion), by groupthink (the shared view of a group's members), or by repetition (it has been shown that people believe things they hear repeatedly). This replaces institutional trust, which is established over years of verification cycles, with feeble benchmarks that have little to do with truth or accuracy.
But what if we replaced people with machines? Today, many news stories are being compiled and even written by AI. Imagine a fatal accident at an intersection recorded by a dozen street cameras and hundreds of sensors, both inside and outside the vehicles. What if the recordings were integrated and analyzed by an AI? Pushing it further, what if the AI took the data and analysis, identified the guilty party, and wrote up the narrative? We would then have moved from institutional trust to people trust to machine trust. Would we trust this “news”? Or would this objectivity be questioned because of a lack of trust in the algorithms or the companies that designed them?
Our AI trajectory points toward richer and vaster data, better curation and interpretation algorithms, and more robust conclusions. AI-based news could be an answer once we have clear transparency and accountability frameworks. The hope is to reach some kind of equilibrium where news can be certified – like a food nutrition label – based on the extent of objective data versus algorithmic sourcing and curation.
At some point we have to resolve the trust issue with news. Society cannot function well if there are competing truths that need constant mediation. So when we lose trust in institutional data and in people, can we trust machines that are derivatives of those two? AI can give us deepfakes that exacerbate the problem, but it can also provide remedies – as long as we can figure out how to trust the darn algorithms.