8 Comments
Jxsh8:

This article was the first time I've been genuinely scared about the prospect of widespread AI. Super-intelligence functioning as a self-deluding defence mechanism—the shadow that thing would create.

nope:

Having seen people undergo psychosis, I don't think it's "regular pattern recognition being pushed too far". It resembles something more like Alzheimer's or (ironically) a hallucinating AI. It's not that they make non-obvious connections; it's that they see connections that aren't there at all, or say things that make no sense in any context.

Stefan Kelly:

Yes, but the question is why that happens and why evolution didn't get rid of it.

And hmm, great point about hallucination; I wish I'd thought of that.

Jared Peterson:

I think of it as assigning relevance to things that are irrelevant, and then trying to generate reasons why they're relevant.

Imagine your brain keeps telling you there is an interesting pattern of men wearing red ties, and that this must mean something significant. Perhaps it's not that irrational to conclude there is a communist conspiracy, as John Nash did. If you start with the premise that an arbitrary pattern is relevant, any attempt to explain it leads to delusion.

Audrey:

Interesting.

Nazar Androshchuk:

Every time I read one of your articles, I’m exposed to new ideas and immediately agree with them. Good article, as always. Coincidentally, I just started trying to learn more about machine learning technology this week.

G. Retriever:

The real AI risk has always been us and what we do with it.

Bb:

Fantastic article. You really hit it spot-on. I hope more people can read this.
