Fighting AI’s assault on authenticity

AI and its deepfakes pose an unusually powerful challenge to the individual human psyche and thus – bulked-up – to human society.

This is a question of human authenticity, which has been an anxious preoccupation for millennia. The bright sparks of Alphabet, Meta, X, and Anthropic have richly intensified it.

Luckily, up to a point, the solution is in our own hands. (We can renounce social media.) But though we may individually have some control of how AI works on our psyches, we will join the billions who are living in societies impoverished by AI’s effects on the “mass” psyche (as shaped by social media).

I’ll unpick some of that now.

We have all grown up with the worsening problem of verification. It’s an ancient difficulty. We humans have always wanted to be able to verify the authenticity of the utterances that fly at us. We have never known for sure whether any particular individual meant what he or she said, let alone whether he or she was speaking the truth. Any real person may be a deliberate liar, or may be uttering untruths unknowingly.

Epistemology concerns itself with such issues.

When it comes to the factual truth of an utterance, JS Mill – an odd cove – was invaluable in proposing that the only way for humans to test it was in contest. That’s why science and history are alike subject to constant challenge and revision. It’s why one should read the Guardian and the Financial Times as well as The Times and the Wall Street Journal. Comfort zones, safe zones, echo chambers and orthodoxies not only deprive persons of arenas for free speech, they deprive readers, listeners and audiences of the certainty that propositions have been challenged. 25 years ago, with Paul Seaman and Gary Taysom, I developed the online “livingissues” project which aimed to challenge media accounts of controversies, not least by diving into context and evidence. It was to be corporate-funded and university-validated, but in spite of initial support from executives and academics, it failed to land support at corporate or institutional level. Similar ventures, such as fact-checker sites, will become increasingly important to help people find their feet as a torrent of flaky and fake information threatens to engulf them.

Personal authenticity is an even knottier issue than factual accuracy. It is an issue at least as old as humankind’s ability to leave a record of its self-examination. From the start, humans understood that consciousness was bewilderingly central to their being. We can never be sure that we are behaving authentically (except perhaps when pain or ecstasy take us out of ordinary self-control). When we operate in the world, as TS Eliot had it, we “prepare a face to meet the faces that we meet”. That is not a voluntary act. It is an involuntary, inevitable one. The most “genuine” person is in some degree a fake. Neither the person being “genuine” (or fake) nor the person observing the “genuine” (or fake) can be confident they have identified the presence or absence of authenticity.

The main effect of computers and the internet on this ancient dilemma is to industrialise, amplify and democratise the potential for deceit. Handheld computer phones and all the tricks of the social media operators tempt millions or billions of people to tailor and construct their “faces to meet the faces that they meet”. To have and promote a performative identity and activity in the real world is hard work compared with constructing one out of thin air and letting algorithms spread your phoney word for you.

In the past 20 years, online performative self-expressions, which are often rather intense self-manipulations, have fuelled the self-obsession and self-absorption which have characterised modernity since the mid-19th Century and have become powerfully vertiginous since the post-modernity of the mid-20th Century.

Today’s AI of course crucially intensifies these effects and the capacity for fakery. An individual will no longer control their performative identity since it may well be matched by an equally convincing alternative virtual twin, or many of them. These fakes can be made to go forth and say and do whatever their creator wishes.

This has effects on our psyche (or soul, our psychology, our sense of ourselves). AI has always and obviously been a highly-energised engine of plagiarism. But it can now plagiarise persons: not merely their appearance, but their utterance and behaviour. The philosopher Kathleen Stock has described the shock of having been cloned and animated online. She is one sort of victim of AI’s modern fakery and speaks freely of the impact it had on her psyche.

I have dared to smuggle out some of her words from an article in The Times of 28 December 2024, entitled, “We’re not ready for the voodoo of deepfakes”.

Stock, having herself been the subject of a deepfake deception, wrote: “This is some powerful voodoo for social media providers to unleash so casually on our psyches. You don’t have to be especially proud or narcissistic to feel uncomfortable with aspects of your secret self (or [deepfake] “self”) being on apparent show for all to see”.

All these trends hinge on human authenticity, and all are increasing the uncoupling of personal (“secret”) authenticity from the performative utterance (in words and images). Agreed, personal authenticity is a hypothesis rather than a fact; factually, though, we have good evidence that people are increasingly faking or deepfaking themselves up (in what they may call self-transformation, or self-improvement). And of course, third-party deepfake plagiarism is even more troubling to those who are deepfaked. I am pretty sure that these deepfakes will cumulatively prove to be troubling to those who volunteer to consume them, too.

We have never been sure what autonomy (what freedom of thought and action) personhood granted to persons. However, we can be sure that creators of fakes are messing with an important element of how persons process the world. They have taken what was perhaps a historic too-trusting faith in personal authenticity and so re-engineered the art of presentation as to create the need for a new scepticism about what we see online. Or to be more pointed: we have a generation of young who have decided to repose their trust in social media outlets which have in effect been designed by near-geniuses to abuse the innocence of their consumers.

It is pretty obvious that it is likely to be unhealthy for individuals to play with creating and promoting false versions of themselves. Equally, it is obviously likely to be deeply unsettling to have false versions of oneself operating online with no route to proving them inauthentic.

Again, it is pretty obvious that one large, dangerous effect of AI fakery – already in play – falls on those who choose to live in a social media world. They have chosen (or been induced, addictively) to be indifferent to whether the utterances they consume are fact-based. Increasingly they will be marooned in an online world in which this or that online “personality” bears little relation to the physical person living in the real world who was the inspiration for the fake.

We face the plain likelihood of living in a world in which very large numbers of citizens have volunteered to be dumbed-down and wound-up by AI’s plagiarisms and deepfakes. They will have sub-contracted their authenticity to falsity.

In a sea of untruth, there is obviously a requirement to find baselines – bedrocks – by which to calibrate authenticity.

We need at least to try to develop a taste for authentic character in as many people as possible. We should be aiming consciously to produce young people of character who have a taste for trustworthy real-world living.

This is in the first instance a parenting obligation. Parents or carers should understand they are preparing their charges to be fit for the real world, as it is, rather than produce overly-tender charges in the vain hope the outside world will catch up with these new trends. The young should be taught that cultivated victimhood is a self-fulfilling disablement, and that self-reliance is far to be preferred, if available. The young should of course be helped to understand what many parents freely admit of their own experience: social media is as addictive as smoking. These developments require that parents understand that they are the adults in the room: they have to rule their own households, which are fragments of the real world and cannot be standalone paradises.

The more people of character there are in the world, the more leaders, influencers, entrepreneurs, and many other meritocratic elite players will have receptive audiences for the disciplined responsibility that could become the watchword of civilised behaviour. Many of the elite are already in professions, or live with near-professional codes of conduct: these mostly aim to codify adult character, and institutionalise it.

I say these things because AI will leap ahead, but has already produced awesome power. The public needs to be able to build a viable real world with viable citizens in it. If we want authenticity, we need to rediscover it for ourselves.

It matters very much that we understand that in the 1920s, and thereafter for several vital decades, thoroughly modern people of every class were both alert to modernity, and reconciling it with inherited values. On TV we see it in Julian Fellowes’ “Downton Abbey” (Yorkshire – and London – aristocrats, tenant farmers, servants are all there), and in ITV’s “Heartbeat” (a 1990s and 2000s reconstruction of 1960s Yorkshire). In books we can see it – beautifully – in RC Sherriff’s The Fortnight in September. This gives us an upwardly mobile lower-middle-class London family in the mid-1920s. Reviewing it as a classic re-issue in The Times Review (4 January 2025), the novelist Clare Clark rightly saw that in their seaside holiday the family were attractive in their mutual affection and in their ability to ride small disappointments. But they “are uplifting and unforgettable company” because of “their private resolve to live honourably, to square up to their consciences and to face their shortcomings with dignity and grace.”

It is of course moot whether AI algorithms will replicate “grace” in machinery, but right now we certainly need it in stalwart form amongst humans.

ends


Publication date

11 January 2025