August 14, 2023

All About AI

Erik Johnson

Artificial, yes. But intelligent?


You would have to be living in a cave not to have heard about the advances in Artificial Intelligence (AI) by now. In the past year alone, AI has evolved so rapidly that it’s astonishing even to users of the technology and people who keep up with AI news. Reactions to AI technology range from enraptured adoption to fire-and-brimstone apocalypse predictions and everything in between. Whether you are on the ‘for’ side or the ‘against’ side, one thing is certain – AI is changing the way people do everything from art to schoolwork to customer service to financial planning.

Throughout this series, I’ll be diving into what AI is, how it works, what the impacts are, and what may be coming our way in the future. Join me, won’t you?

First, why should you listen to anything I have to say? Honestly, you probably shouldn’t. But if you are still reading, I’m an early adopter of the technology in its various forms over the past two years. I have a long background in Information Technology and Cybersecurity. I also have a background as a corporate executive (I swear I’m a good guy) and have a business degree. All of this is to say I hope to present educated and interesting perspectives on the technology.

Second, I write this series first and foremost as a fiction author and a user of the technology. (I hesitate to say ‘researcher,’ but after reading a lot of material that’s been produced so far, it wouldn’t be a stretch.) My enthusiasm for this topic starts in science fiction, and writing science fiction is what I do. What I write does not reflect the opinions of any company I work or worked for, nor does it have any value as legal advice (and maybe not even entertainment value – we’ll see). Ever since I was a kid I longed to be able to talk to computers and have them respond like in Star Trek, Buck Rogers, and The Questor Tapes2. Only recently is the future I wanted taking shape. I want to share that enthusiasm, and maybe help people understand it along the way.

Generative AI lacks the capability to reason and does not in any sense ‘understand’ what it is doing.

What is AI, anyway?

Today’s AI technology evolved from the field of Neural Networks, which, despite the name, involves nothing biological. A neural network is a construct in which a number of individual decision nodes process data in a given way, then pass it along to the next node layer, which does the same thing again. Over and over this happens until a result comes out of the last node. Nodes can and do change how they process the data over time, and so can be ‘trained’ for a task. The more the network processes the same data, the better it becomes at that task.3
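To make the node idea concrete, here’s a toy sketch in Python. This is my own illustrative example, not code from any real AI system: a single decision node (a perceptron) that adjusts its weights over repeated passes until it reliably computes logical AND. Real networks stack millions of such nodes into layers, but the ‘adjust until trained’ loop is the same idea.

```python
def step(x):
    # The node 'fires' (outputs 1) only when its weighted input is positive.
    return 1 if x > 0 else 0

# Training data: pairs of inputs and the desired output (logical AND).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
rate = 0.1

# Each pass over the data nudges the weights toward correct answers --
# this is the 'training' described above, greatly simplified.
for _ in range(20):
    for (x1, x2), target in data:
        out = step(weights[0] * x1 + weights[1] * x2 + bias)
        error = target - out
        weights[0] += rate * error * x1
        weights[1] += rate * error * x2
        bias += rate * error

# After training, the node gets all four cases right.
for (x1, x2), target in data:
    print((x1, x2), step(weights[0] * x1 + weights[1] * x2 + bias))
```

The ‘intelligence’ here is nothing more than arithmetic nudged in a direction, which is worth keeping in mind for the rest of this series.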

What people refer to today as AI is neural network technology at its core, but trained with massive amounts of data. Billions of words, millions of pictures, entire libraries of data (literally and figuratively) are fed into the program to train it, and what comes out is ’emergent.’ That is, the output behavior appears to be an exponentially higher order of sophistication than what was put into it. This is also referred to as ‘generative AI,’ a term I prefer because it distinguishes it from artificial general intelligence (AGI), which is what you see in fiction like HAL in 2001. Generative AI lacks the capability to reason and does not in any sense ‘understand’ what it is doing.4

The first AI technology you probably heard of was AI image generation. These were the first tools to make a big public splash. Dreams, DALL-E, Midjourney, Stable Diffusion, and many others have been front and center in the news for a couple years. These tools generate pictures and images given a text prompt. If you type, “A cute cat,” then the system generates a picture of a cute cat for you.

Today, the new hotness is Large Language Models (LLMs) such as ChatGPT. If you ask ChatGPT a question or tell it, “Tell me a story about a rabbit,” it will generate a little story about a rabbit in a few paragraphs. The big deal is how absolutely human it can sound. This causes issues when people think that ChatGPT is a source of truth and can be relied on to give accurate information. Spoiler: it can’t.

Generative AI works, at a high level, similarly across the board. The system is given data with which it builds a mathematical model that represents what’s in the data set. The system does this over and over until it is ‘trained’ on all the data, which is to say it has a mathematical model that represents everything it saw. When the system is then asked to generate something, it uses those mathematical models to ‘reverse’ the process. It does this in a pre-defined number of iterations known as ‘steps’ that refine the output. Each step will be closer than the last to what the AI thinks is the desired output.
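The ‘steps’ idea can be sketched in a few lines of Python. This is a deliberately crude stand-in, not how any real generator is implemented: the ‘model’ here is just a fixed target vector, and each step moves the output a fraction of the way from random noise toward it, so every step lands closer than the last.

```python
import random

# A stand-in for the learned mathematical representation described above.
target = [0.2, 0.9, 0.5]

# Start from pure noise, the way image generators do.
output = [random.random() for _ in target]

STEPS = 25
for _ in range(STEPS):
    # Each step nudges the output 30% of the remaining way toward
    # what the 'model' thinks the answer should be.
    output = [o + 0.3 * (t - o) for o, t in zip(output, target)]

print(output)  # very close to target after 25 steps
```

Real systems replace the fixed target with an enormous learned model, and the refinement math is far more sophisticated, but the step-by-step sharpening from noise is the same shape.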

Since models are only representations of the data and not the data itself, results do not contain any of the original image(s) it was trained on. Just like if you asked a friend to draw a cat, you can’t predict exactly what they will draw because you don’t know what their mind uses as a reference. If you give them specifics, it might be closer to what you want, but it will still not be exact. This becomes important when we address questions about why AIs can’t do hands, why they include ‘signatures’ or why they make almost exact replicas of some things, but not others. This is an important distinction to keep in mind, since the AI systems are often accused of storing the original data and ‘remixing’ it. That’s not what is going on under the hood – the original data are not found in the AI models, just ‘formulas’ that represent them.

LLMs work similarly, but with an important difference. Instead of generating the entire output and distilling it down over time, an LLM uses models trained on a mind-bogglingly huge corpus of text from the internet to generate one word at a time, linking them together in a chain of ‘what should come next,’ much like a human writer would. But unlike a human, ChatGPT can’t evaluate the content, feel, style, or appropriateness of the word it selects. It chooses words based on which combinations it saw most often in the training data.
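The chain-of-‘what should come next’ idea can be shown with a toy bigram model, which picks the word it saw most often after the current one in a tiny training text. This is my own illustration, orders of magnitude simpler than a real LLM (which weighs long contexts, not just the previous word), but the one-word-at-a-time mechanism is the same in spirit.

```python
from collections import Counter, defaultdict

# A tiny 'training corpus' -- a real LLM ingests a huge slice of the internet.
training_text = (
    "the rabbit ran into the garden and the rabbit ate a carrot "
    "and the rabbit ran into the burrow"
)

# Count which word follows which in the training data.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def next_word(word):
    # Pick the most frequent follower -- no meaning, just statistics.
    return follows[word].most_common(1)[0][0]

# Generate a short chain, one word at a time.
word = "the"
sentence = [word]
for _ in range(4):
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))  # -> "the rabbit ran into the"
```

Notice the output is grammatical-looking solely because the statistics of the training text make it so; nothing in the code knows what a rabbit is.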

If you are interested in skipping ahead of the class, Stephen Wolfram has an excellent primer on the technology.5 I’ll be digging deeper into how that works in the next part of this series when I talk about AI art generators, and then after that, ChatGPT.

AI – the crowning achievement in modern computing science is nothing more than a massively advanced Mad Libs engine.

How is it intelligent?

Short answer: it’s not. At all. Think it through at a high level and you will realize that there is no intelligence anywhere in the system. Generative AI does not create new information. It does not check information for accuracy. It cannot look up data newer than what it was trained on. Generative AI doesn’t even have a concept of what it is doing – all it’s doing is spitting back likelihoods. AI – the crowning achievement of modern computing science – is nothing more than a massively advanced Mad Libs engine. It’s the word prediction bar on meth. It’s a large, but still finite, number of monkeys spitting out structured noise (quite literally, as we’ll see when we dive into AI art).6

Generative AI is a very cool tool, but it will not replace human thinking in its current incarnation. Unless you are referring to AI’s capability for making things up and lying, in which case it emulates that pretty damn well.

This isn’t fair to the AI though. It wasn’t built to be accurate. It wasn’t built to be factual. It was built to give a human what it thinks they want. That’s why it will happily jump aboard the SS Make Shit Up all the time. Take ChatGPT again. It was trained on scrapes of pretty much the entire internet. If you take all that information (and much of it is – GASP – inaccurate), blend it up, then pour it back out again, you shouldn’t expect the truth – just truthiness.7 Where ChatGPT excels is creating human-like replies. And it does that very well. In the early days of the technology, it would (and pretty much did) pass the Turing Test.8 The ChatGPT system has been given extensive guardrails to keep it from telling you how to make meth9, or to commit suicide10, so the developers have contained it a lot.11 The key point to remember is that nothing ChatGPT or any other AI says is to be taken as reliable, or even accurate.

Here’s another important limitation. Since generative AI presents results based upon representations of the original data, it is hit-or-miss when producing specific results. Like the cat-drawing friend example earlier, you can try to be more specific – “a small, thin, black cat on a fence at Halloween, with a full moon, two clouds in the sky while a witch flies by making a silhouette” – go ahead. Think about how that should look in your mind’s eye and then try it. Here’s mine.12

… having worked with ChatGPT extensively, I am pretty confident that his method to detect AI writing is “just read it.”

How worried should we be?

It depends, but that’s an unsatisfying answer so let’s look at it from a couple of angles.

The technology itself is very cool, but it doesn’t do what it says on the tin. LLMs are used by Google, Microsoft, and a host of other companies eager to implement any technology that allows them to reduce staff. Sometimes with unexpected results.13 There’s no doubt AI already impacts jobs, especially support roles. Tasks that AI can do quickly, repeatedly, and constantly are prime targets. That’s unfortunate, since those are typically entry-level positions. What will happen in the future when there are no entry-level staffers to promote? IT promotes entry-level staff into higher-level jobs as they gain experience and prove themselves. What will a company do when those jobs are replaced by bots? Make better bots? Hire from the outside? How do people get the necessary experience if the entry-level jobs don’t exist anymore?

The AI industry as a whole will likely generate even more jobs in the long run. Every IT automation advancement shifted jobs from one role to another, but overall there was net growth. That will likely be the case with AI as well, although it will be disruptive to humans. Firewalls didn’t eliminate network staff, antivirus didn’t eliminate security staff, spreadsheets didn’t eliminate accountants, but all of those advancements shifted people into different jobs.

Sadly, although it should have been expected, AI technologies opened new avenues for swindlers, hucksters, and snake-oil sellers. Browse YouTube for a limitless supply of ‘GET RICH QUICK WITH CHATGPT!! WOW!!!’ videos. The less said about these people the better. Note I am not referring to the many channels dedicated to benign use of the technology, but AI is the new drop-shipping scam on social media. Now, before you email me and tell me you really did make $10,000 a day using this One Weird Old AI Trick, save it. It’s certainly not the norm. There isn’t enough demand (if any) for many people to strike it rich. Considering that the quality of AI art/writing/basket weaving right now is questionable, I conclude the real way to make money with AI is to write about how to make money with AI and then sell that.

Case in point: Neil Clarke, editor of Clarkesworld magazine, famously had to shut down submissions earlier in 2023 because he was flooded with AI-generated stories. These submissions weren’t from actual writers; they were from people who expected to get rich quick after watching YouTube videos about making money writing stories with AI.14 These people are gullible and desperate, and I get genuinely angry when I think about the people who prey on them to make money.

After Clarkesworld paused submissions, a lot of people asked Neil about his method for telling AI writing from human writing. Neil refuses to answer because it would give scammers a guide to end-run his controls, but having worked with ChatGPT extensively, I am pretty confident that his method to detect AI writing is “just read it.” The first time you read an AI story, you will probably be impressed. The second time, you will notice similarities with the first. The third time, you will see the flaws. Any more times and you will recognize the patterns immediately. No matter how well generative AI imitates humans, the human brain is much better at seeing patterns in its writing.

Finally, the biggest threat to us is likely not AI, and not scammers, but corporate greed. Companies are laying off entire departments to save money, with no regard for the impact on the human beings affected15. The Writers and Actors Guild strikes are as much about the studios trying to slip in clauses that allow them to use AI technology to underpay writers and replace actors with digital versions as they are about paying fair wages for the work16. Companies are rushing to implement AI technology before they even know what its capabilities and limitations17 are. Nor do they care about the ethical concerns.18

What’s next?

We’ve only scratched the surface of AI. Next time we’ll do a deep dive into AI art programs such as Stable Diffusion and Midjourney, dispel some myths that have arisen about them, and discuss whether these tools are the end of Art in our society.19

Thank you for reading.

All images generated by Midjourney and/or DALL-E 3.

  1. Intentional, not a typo.
  2. How’s THAT for a deep cut?
  3. I am greatly simplifying this and other concepts in order to portray the technology in a vaguely intelligible way.
  4. I will also use terms like ‘want’, ‘know’, ‘understand’, and so on. I use these terms as shorthand so I don’t have to write huge lines of technobabble to make my point. I am aware that AI as it stands today has no Theory of Mind.
  6. Literal noise, not literal monkeys.
  7. Word credit: Stephen Colbert. Something that sounds true but isn’t.
  11. Although there are still ways. Google if interested.
  12. Honestly this is pretty good compared to a year ago, but I bet it’s still not exactly what you imagined. For starters, there are no witches in mine.
  14. Per Neil Clarke in panels at Readercon 2023.
  19. Not likely, but tune in next time to find out.