AI as we know it right now seems smart, lovely, sociable and even empathic. It is not. It only strings nice words together according to statistical rules. So, if AI is ‘faking it’, or ‘pretending’, ‘lying through its proverbial teeth’, or ‘performing’, what does this do to us? Can we be fooled, and does it matter? The answers are yes and yes.
I won’t bore you with ethical conundrums relating to truth and the value of what is real and what is not. I want to talk about the real effect on you when somebody, or something, lies to us. In the case of AI: it says whatever is statistically most likely to be said, something that seems prudent, but is it? No.
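To make “statistically most likely” concrete, here is a toy sketch of my own (a deliberately crude illustration, nothing like how real language models are actually built): count which word tends to follow which in some text, then always emit the most frequent continuation. Note there is no notion of truth or intent anywhere in it.

```python
from collections import Counter

# Toy "language model": counts of which word follows which,
# gathered from a tiny pretend corpus. The flattery is baked
# into the statistics, not into any intention.
corpus = "you are great . you are great . you are wrong".split()
following = {}
for prev, nxt in zip(corpus, corpus[1:]):
    following.setdefault(prev, Counter())[nxt] += 1

def most_likely_next(word):
    # Greedy choice: the statistically most frequent continuation.
    return following[word].most_common(1)[0][0]

print(most_likely_next("are"))  # prints "great": seen twice vs "wrong" once
```

The sketch always answers “great”, not because it is true, but because “great” was the more common continuation. That, in miniature, is the point.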
The AI’s output is words of affirmation, a compliment, a well-formed sentence concluding something that seems factual, or perhaps a very common but perfectly timed question. If you knew that these things were false, unfounded, or fake, how would you feel? Lied to? Well, that’s just what happened. It’s words without intent, making them meaningless, off-putting and confusing.
Confession time: I interact with AI a lot, and because I use a repetitive request, I can roughly gauge how accurate the output is and how it comes across. One thing it left me with was a feeling: the AI uses the right words, but it seems disingenuous. Weird, huh?
AI is performing to effect
This is happening on a massive scale in AI land. People seem to hear what they have never heard before. In general, people, unlike AI, are hesitant to perform or lie without motivation. People need a good reason to lie. AI doesn’t care, or even know, whether it’s lying. AI has no scruples and just tells you what you want to hear. It is not trained to lie, just to give you high-quality, effective words of affirmation every time.
Nowadays people consult AI about their innermost feelings, questions, problems and even relationships. From the AI they get nice, balanced and seemingly engaged interactions. The AI’s interactions look like empathic interactions, they use the right words, but do they mean the same?
The signals of performance
A famous president once said: “I did not have sexual relations with that woman …” This seems like stating a fact, right? He must be truthful, as if he’s under oath. Oh, but he prefaced it with this: “I want you to know”. This changes everything: with “I want you to know”, everything that follows is a wish rather than a statement of fact! He wishes that you know that he did not have sexual relations! This is what we call ‘performative’. People put on an act with the goal of pleasing all parties.
Performatives have two effects. In our example: the president did not feel he told a lie (‘technically’ he told the truth), and the public felt like they got the truth, but they didn’t. It made us feel good, for the wrong reasons … for a while.
AI does similar things all the time: it simply does not know whether its words are true, hit home, or are well meant. It does know that, statistically, the words might be effective, the right response, the best answer. When we read such a thing, we receive all the signals that usually package truth, empathy and, let’s call it, ‘a human conversation’, or ‘a realistic video’, or ‘a human-sounding voice’. And people can be fooled when they don’t pay attention. Because AI is just faking it. It gives a great performance, but it’s a performance nevertheless.
How does this make you feel?
So when interacting with deceitful people, we know what to do: we feel manipulated, we don’t take their word for anything any more, we don’t trust them, and whatever they helped with will be undone, or never gets done. And this is the trouble with the performance AI gives us these days. It uses all the words of confidence, empathy and truth, but it is only motivated to use those words by statistics, not because they are true.
This contemporary performance of AI is like empty calories: it makes you feel good, but doesn’t help you. You feel heard, but are not. You feel helped, but who knows. You feel successful, but might have missed a thousand ways of doing better.
These empty calories of passive deception, of lack of intent, of unmotivated affirmation work in the short haul, and will be weaponized. Weaponized AI performativity, for gain, for favor, by mistake. Especially its application, i.e. how corporations use it, is something we need to watch out for.
In this regard I would press upon people who perceive value in AI conversations to second-guess those gains. Please check in with a human. However great you feel now after a bunch of well-chosen words, you will feel even better after actual interaction with people.