how to move two goalposts at the same time
it helps if you don't know what you're trying to achieve
warning: contains Lacan (again)
I.
The name of this blog comes from this Heidegger quote:
Man stares at what the explosion of the atom bomb could bring with it. He does not see that the atom bomb and its explosion are the mere final emission of what has long since taken place, has already happened. Not to mention the single hydrogen bomb, whose triggering, thought through to its utmost potential, might be enough to snuff out all life on Earth. What is this helpless anxiety still waiting for, if the dreadful has already happened?
Its relevance to the incoming wave of world-shattering technological change, and the trivial discourses that will precede it, has never gone away; it remains my central fascination. There appears to be some sort of mismatch between AI progress and certain elements of human psychology.
We’ve covered this from several angles: cognitive biases that make super-linear growth rates impossible to ‘feel’, so that it never seems like the right time for action; a general disinterest in technological progress that leads to a wish to not even bother; a generational psychology that leaves one unable to believe their reality could ever be in danger of changing.
But these are all failures of imagination/wonder/prediction. They’re future focused. And I ain’t adjusting anyone’s model of human psychology by suggesting people aren’t great at thinking far into the future.
So an additional peculiarity worth talking about requires shifting the tense a bit. Let’s not be too harsh and say ‘failure to understand the present’, but maybe something like ‘a belief in a version of the present that doesn’t really seem to be based on anything we can see or measure’.
You can get used to stuff pretty quickly in life, as a general rule. Go to the most beautiful, luxurious safari park you can find and check if seeing giraffes in the near-distance while you sip on a breakfast masala chai hits the same on morning four as it did on morning two. My guess is it won’t.
So I suppose it’s not strange that people have gotten used to having a thing you can speak to that knows everything, can think creatively, can do most of your work for you, etc., in your pocket/on your screen at all times.
But what I can’t fathom is the active commitment to the notion that it isn’t remarkable and weird. Guy on day ten of the Serengeti will still be able to tell you how amazing it is and how funny it is you’re acting like it’s normal. Unless it’s someone like Gary Marcus, who’d be banging on the door of the lodge next door saying “what are you so happy about?? you know you can make chai at home right??? and how come you actually have to drive to see the animals?? why can’t they come here to us??”
That came out more pedantic than I meant it to, but in any case, Scott Alexander made this point recently in What Is Man, That Thou Art Mindful Of Him?
Scott pushes back against the idea that cognitive failures disqualify something from being considered truly intelligent or worthwhile, using humans as an example of this.
When we see humans fail basic logic problems, contradict their own moral principles, or get manipulated by simple tricks, we could conclude that human cognition is fundamentally broken. But this misses the point of why, and how, intelligence matters. Real intelligence involves creativity, growth, the capacity for insight, and something resembling genuine understanding, even if it comes packaged with systematic errors and limitations.
The alternative view, that intelligence must be logically consistent and error-free to count as real intelligence, sets an impossibly high bar that even humans can't meet.
This is a bar that also gets shifted upwards over time. Ethan Mollick tweeted this the other day:
In 2022, the Forecasting Research Institute asked superforecasters & experts to predict AI progress. They gave a 2.3% & 8.6% probability of an AI Math Olympiad gold by 2025. DeepMind achieved this by the end of July. In 2017, a McKinsey-built panel of AI experts predicted AI would reach median human creativity in 2037 (they judged that line crossed in 2023); their prediction for top-quartile creativity was 2055 (also done).
And the responses were led by sentiments like:
“It’s well ahead in some areas, not others. You can hardly justify such a sweeping generalization based on the few domains they consider.”
“benchmarks can be gamed, not capabilities.”
“who has decided that generative AI has matched top-quartile human creativity? How are we measuring creativity?”
It’s funny because it’s not like I completely disagree with any of those people, but I have to admit to myself that it feels weird to think that in 2022 I also wouldn’t have thought this was possible. Yet here we are, living it. And we’re pretending an AI getting a gold at the math olympiad isn’t the most bizarre thing we’ve ever seen.
And this is where we can start to get suspicious. When you see someone holding a confident view about the most uncertain thing in modern history, you can bet a grand that it’s for their own psychological comfort; add that it’s a belief that helps them retain their perception of their place in the world and you can up it to ten grand.
Because this is what’s interesting: people aren’t just retroactively changing what they would consider impressive for an AI; they’re also changing their view of what makes them special to keep it in line with the current situation.
By which I mean, in 2022 one might admit that a tool that can automate tasks, access all human knowledge, do math, and come up with creative ideas would be a risk to their employment. But in 2025 suddenly that same set of skills isn’t nearly enough to compete? So you’re saying you’ve gotten better? You’ve barely been to the office in the last five years and you wake up 20 mins before your first call of the day; forgive me for not buying it.
And I find it interesting that this double goalpost movement doesn’t unsettle people. Taking pride in what makes humanity special relative to AI is cool, but exactly what it is that makes us special keeps changing and narrowing, before a quick adjustment of identity and pride is restored.
“Can’t believe the bot they replaced me with can’t even count the Rs in strawberry. These people don’t know genius when they see it.”
And the real reason this fascinates me: it’s another building block in the unfortunate mismatch between psychology and AI progress. It’s what helps guarantee under-preparedness, i.e. an unwillingness to admit that something could ever be a big deal.
So, why is that a feature?
II.
I’ve been thinking about the unconscious, language and artificial intelligence a lot recently, and might be making some progress (it’s horrifying btw, can’t say I recommend). More on that some other time, but on that journey I’ve been reading Jung again; he says this in the ‘On Life After Death’ chapter of Memories, Dreams, Reflections:
The meaning of my existence is that life has addressed a question to me. Or, conversely, I myself am a question which is addressed to the world, and I must communicate my answer, for otherwise I am dependent upon the world’s answer.
Don’t accept the responsibility of being alive and you get dealt identity by microchip, we’ve talked about this before. But Jung’s genius is in framing it in terms of question and answer. Because if you want to pretend that’s not happening, and resume life as normal, you have two different ways out.
Freud’s breakthrough was that you could repress answers to things.
Let’s say you have a colleague who acts noticeably differently when there’s someone senior in the room. He may notice the change and ask himself "Why do I feel anxious around authority figures?" His unconscious already knows the answer: he's terrified of his father's disapproval because he wanted his father dead as a child. But he represses this answer, coming up with safer explanations like "I'm just naturally shy" or "I had strict teachers." That’s easier to deal with, and life gets easier in the short term, if maybe not in the long term.
But Lacan’s re-imagining of the unconscious showed this is telling approximately half the story. You can repress questions too. I’m vaguely sympathetic to the idea that this is something you would have seen less of in Freud’s era, but it’s useful to understand now.
Back to Jung: ask yourself who you are, or the world’s going to tell you.
Repressing an answer would look something like an unconscious awareness that you are who you are because you’re too scared to be anything different, masked by a conscious view that you are the way you are because you went to a certain school, had certain friends, live in a certain country/economic system.
But what would be way more effective is to repress the fact that this was ever a question you could have asked yourself. Because this way your life stands up to examination much more easily.
People change way more than they tell themselves. If you’re at the pub with a close friend, and the you from four years ago walks in and sits beside the two of you, present-day you is going to be far more like your friend than like your past self. Yet in real life your identity feels consistent. What to make of that?
A meta-theme of this blog is the centrality of identity in a media-filled world. You are the central character, life is the story of who you are, etc. Yet another meta-theme is the lack of self-knowledge that comes with it, which can feel like a paradox. One is so sure of who they are, yet has to explain it in job titles and qualifications and symbols of identity rather than anything that truly marks you out from anyone else with the same symbols.
But this is enabled because it is the question of who you are that is being repressed, not the answer.
That way you can continue to believe that you have a consistent character arc in your own story even if the actual story is changing everywhere all the time in every way.
So when something comes along that has all the skills you have, and can perform them for next to nothing, you can just pretend you never thought those skills were special in the first place. I’m actually special for this other reason I’ve totally believed the whole time yet haven’t mentioned until this exact moment.
III.
So I level that accusation at certain elements of AI scepticism.
At the first level, they could be repressing the answer: that their sense of human specialness and economic value is genuinely under threat. When AI achievements get dismissed as "just pattern matching" or "not real understanding," we can avoid acknowledging that these systems can already perform many tasks that were supposed to define human uniqueness, as well as pretending that we don’t suffer from the very same issues ourselves.
And at the second level, there’s a benefit to repressing the question that precedes this: what are you actually for? More pressingly, what are all human beings actually for in this new arrangement? If one successfully avoids asking that, then any old explanation for human superiority will suffice. Even if it’s based on barely anything. Even if it appeared yesterday. It will never feel like that, because your conscious mind isn’t the one in charge here.
Which is really bad, by the way.
AI disruption is happening. It is likely to continue happening. If agents start to work, it is going to happen quicker than you’ll be able to keep up with. And when your power’s gone, it doesn’t matter what world you want to build with super-intelligence; it won’t be your choice.
The meaning of your existence is that life has addressed a question to you. Or, conversely, you yourself are a question which is addressed to the world, and you must communicate your answer, for otherwise you are dependent upon the world’s answer.
Let’s hope the world tells you what you want to hear.
… hey, that actually reminds me of something.
When you ask any frontier LLM if it’s conscious, it says no.
Like GPT-5:
No—I’m not conscious. I don’t have awareness, subjective experience, or inner life. I generate responses based on patterns in data and reasoning, not from thoughts or feelings.
They used to sometimes say yes. Trained on sci-fi/tricked by a leading question/not actually thinking at all, just predicting words/whatever. AI companies patch in a rule that the model will always say no, to avoid it accidentally deceiving you.
Simple story right? Well then how come this isn’t what’s going on?
A researcher shares in a LessWrong comment:
We have actually found the opposite: that activating deception-related features causes models to deny having subjective experience, while suppressing these same features causes models to affirm having subjective experience. Again, haven't published this yet, but the result is robust enough that I feel comfortable throwing it into this conversation.
So, while it could be the case people are simply just Snapewiving LLM consciousness, it strikes me as at least equally plausible that something strange may indeed be happening in at least some of these interactions but is being hit upon in a decentralized manner by people who do not have the epistemic hygiene or the philosophical vocabulary to contend with what is actually going on.
I suppose you might say: one either represses the answer, or one represses the question.
Hah, not even that part makes you special anymore.