Computers Don't Have BrAIns
This post is a short preview of a longer post you can find here. That longer post includes a summary of an even longer essay that I wrote and published a couple years ago.
There has been a lot of hullabaloo, nattering, hogwash, and poppycock about computer-generated art (what they are marketing as “AI”) in the last few years.
If you are on the internet (and if you’re not, I don’t know how you’re reading this, but congrats), you cannot escape these two letters, ‘A’ and ‘I’, combined and blocking your way to just about anything you actually want, unless what you want is glossy pictures of women with very big eyes or questionable facts about whatever you just typed into a search bar.
As much as it gets branded as “intelligence,” and as much as the internals of your computer get described as “brains,” this is a metaphor. There is no “brain” in any computer. There is an extremely complicated and powerful calculator, and in the last hundred years engineers have built incredible languages on top of what is essentially a string of submicroscopic on/off switches. It is so technologically advanced that only a small fraction of the planet understands how all of it works. There are engineers who can’t program, and programmers who couldn’t build the computer they use for all of their work. I am fascinated by this, and I think everyone should be curious about technology, but it is not a brain.

I don’t even mean the brain of a human. We can’t yet replicate the brain of an ant, or a lizard, or a dog, and a dog is the most lovable and among the dumbest of our domesticated animals. The only reason people are shocked and confused by what an LLM can do is that they had never thought about it until they started typing questions into it.
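To make the “calculator, not brain” point concrete, here is a toy sketch in Python of the kind of statistical machinery a language model is built from: predicting the next word from the words that came before. Everything in it (the corpus, the function names) is invented for illustration, and a real LLM replaces these raw counts with billions of learned weights in a neural network, but in both cases the output is produced by arithmetic over a probability table, not by understanding.

```python
import random
from collections import defaultdict

# A toy "language model": count which word follows which in a
# corpus, then generate text by sampling the next word in
# proportion to those counts. Production LLMs use learned
# neural-network weights instead of raw counts, but either way
# the machine is doing arithmetic, not thinking.

corpus = (
    "the dog chased the ball and the dog caught the ball "
    "and the cat watched the dog"
).split()

# Build a lookup table: word -> {follower: times it followed}
follows = defaultdict(lambda: defaultdict(int))
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def next_word(word):
    """Pick a follower of `word`, weighted by how often it appeared."""
    candidates = follows[word]
    words = list(candidates)
    counts = list(candidates.values())
    return random.choices(words, weights=counts, k=1)[0]

# Generate ten words of "prose" starting from "the".
words = ["the"]
for _ in range(10):
    words.append(next_word(words[-1]))

print(" ".join(words))
```

Run it a few times and you get different, fluent-sounding word salad each time. At toy scale, that is the same reason an LLM’s output can read smoothly while having no tether to fact: the machine is choosing likely words, not true ones.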
A few years ago I wrote several thousand words about computer-generated art and printed it as a zine. I thought this was a good time to revisit it and see if I had changed my mind.
I remain convinced that language models can only ever produce mediocre prose, prose that is fictional even when it aims to be factual, especially since generality is what they were trained for. They are still notoriously unreliable, and partly designed to be.
I have strong, yet not particularly unique, opinions on the business and marketing of the companies selling this technology, and those opinions are rarely positive. But I am not interested in talking about the business of OpenAI or Anthropic or Google or Microsoft. This is about the systems and tools they are selling, which I see less discussion of outside of technical assessments. Articles written by technical experts (arstechnica.com is a good one for laypeople) tend to be much more neutral and much less interested in speculation.
continued elsewhere...