Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell — book review

About the book

Those who know me won’t be surprised that I thoroughly enjoyed this book. Not only do I love non-fiction, but as a retired computer professional I’m a typical computer nerd as well. Although I found parts of it dry and technical, the book cleared up a number of misconceptions I had about Artificial Intelligence (AI).

Beware the singularity?

One topic the author brought up was the “singularity”, that theoretical point in time when computers become smarter than humans and decide to “take over”, becoming masters over us mere living, breathing, flesh-and-blood animals. There are people seriously concerned about this, although I don’t think it’s going to happen that way (I’ll say more on this subject later).

The book has examples of AI in action, such as computer recognition of handwritten numbers.

AI fails

She also gives some examples of spectacular AI fails, such as identifying a pair of African-Americans in a photo as gorillas. (This isn’t artificial intelligence. This is artificial stupidity.)

Interest in and funding for AI research follows a boom-and-bust pattern — people get all excited about the wonderful world AI promises, but when the technology doesn’t deliver (AI is infamous for taking a lot longer than people think it should) interest and funding dry up.

One disappointment for me about this book was that the author didn’t mention the one thing that’s on everybody’s mind these days — ChatGPT. But this book was published before ChatGPT was released. Maybe I can find out more about this somewhere else.

This concludes my review of the book itself. What follows are my own thoughts, including things I learned from the book and how I now understand AI:

Some of my own thoughts

Late in my professional career, I wrote a program that could convert units of measurement — like knots into km/h. It looked up both the “convert from” and the “convert to” units in a table, and generated a short program to do the conversion (usually by multiplying by a constant it computed). I called this AI, although based on the book, that’s not what AI is. Yet I was quite proud of it.
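The table-lookup approach I describe above can be sketched in a few lines. This is just an illustration, not my original program; the unit names and factors here are my own examples.

```python
# A minimal sketch of a table-driven unit converter: each unit maps to a
# factor converting it into a common base unit (meters per second, for
# speed), and a conversion is just a ratio of two table entries.
TO_BASE = {
    "knots": 0.514444,   # 1 knot  = 0.514444 m/s
    "kph":   1 / 3.6,    # 1 km/h  = 0.27778  m/s
    "mph":   0.44704,    # 1 mph   = 0.44704  m/s
}

def convert(value, from_unit, to_unit):
    """Look up both units and multiply by the combined constant."""
    factor = TO_BASE[from_unit] / TO_BASE[to_unit]
    return value * factor

print(round(convert(10, "knots", "kph"), 2))  # prints 18.52
```

As in my old program, the real work is the single multiplication; the table just supplies the constant.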

One question presented in the book was, does AI think or does it merely simulate thinking? I would have to go with the latter. Try this simple thought experiment: think of something. Now did you know you were thinking of something? Of course you did. Since you knew that you were thinking, you were thinking. But an AI computer just executes code; it doesn’t know that it’s thinking — therefore it’s not thinking. (It could, however, be programmed to always answer “yes” to the question, “are you actually thinking?”)

What does training an AI actually mean? Let me give you a human-based analogy: You can easily tell the difference between men and women, right? How did you learn how to do that? Did someone sit down with you and describe the various secondary sex characteristics? (Clothes on, please. I want to keep this discussion G-rated.)

No, of course not. More likely, you would assert (or guess) one or the other, and if you were wrong, one of the people around you would have corrected you, or at least told you that you were mistaken. This is essentially how an AI network gets trained: it guesses, is right sometimes and wrong sometimes, and gets feedback on which was which. This is how an AI learns. (Perhaps this is how we learn all of our “unwritten rules.” I still wish I could buy a book with at least 2,000 of those rules written down in it.)
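This guess-and-correct loop can be shown with the simplest possible artificial neuron, the classic perceptron. The training data below is made up purely for illustration (points above the line y = x versus below it); the point is that the weights get nudged only when the guess was wrong.

```python
# A toy illustration of learning from feedback: a single artificial
# neuron guesses a label, and when the guess is wrong, its weights are
# nudged toward the correct answer (the perceptron learning rule).

# Made-up training data: points above the line y = x are class 1,
# points below it are class 0 (points exactly on the line are skipped).
data = [((x, y), 1 if y > x else 0)
        for x in range(-5, 6) for y in range(-5, 6) if x != y]

w = [0.0, 0.0]   # weights
b = 0.0          # bias
lr = 0.1         # learning rate

for _ in range(20):                      # a few passes over the data
    for (x, y), label in data:
        guess = 1 if w[0] * x + w[1] * y + b > 0 else 0
        error = label - guess            # the "you were wrong" feedback
        w[0] += lr * error * x           # weights change only on mistakes
        w[1] += lr * error * y
        b += lr * error

correct = sum(1 for (x, y), label in data
              if (1 if w[0] * x + w[1] * y + b > 0 else 0) == label)
print(f"{correct}/{len(data)} correct after training")  # 110/110
```

Notice that no one ever “describes” the rule to the neuron; it gets there purely by being told when it was right and when it was wrong.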

Let’s go back to an AI that recognizes handwritten digits. For this, multiple layers in a network are needed. The first layer might just look for loops. 0, 6, and 9 each contain one loop, 8 contains two loops, and a 4 may or may not contain a triangle. Once this first layer has been successfully trained, we can take its output and push it into a second layer, which further analyzes what it’s got to work with. And so on. (I noticed that none of the sample 1’s in the book have a “hook” at the top of the digit. I don’t think an AI trained to recognize simple one-stroke 1’s, but not 1’s with a hook, would be very useful.) And if adjacent digits touch, the AI would have to separate them — that might prove difficult.
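The “push its output into a second layer” idea can be sketched as a forward pass through two dense layers. Everything here is illustrative: the 9-pixel “image” and the random, untrained weights are my own stand-ins, not anything from the book.

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def layer(inputs, weights, biases):
    # Each neuron computes a weighted sum of the previous layer's
    # outputs, then squashes the result into the range (0, 1).
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# A crude 3x3 "image" of a digit with a loop, flattened to 9 pixels.
pixels = [0, 1, 0,
          1, 0, 1,
          0, 1, 0]

# Random (untrained) weights, just to show the data flow:
# pixels -> 4 hidden features -> 10 digit scores.
w1 = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(4)]
b1 = [0.0] * 4
w2 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(10)]
b2 = [0.0] * 10

hidden = layer(pixels, w1, b1)    # first layer: low-level features
scores = layer(hidden, w2, b2)    # second layer works on those features
print("digit guess:", scores.index(max(scores)))
```

With untrained weights the guess is meaningless, of course; training (the feedback loop described earlier) is what would make the hidden layer come to detect useful features like loops.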

Taking over the world?

Can machines decide to suddenly take over the world? No, because computers cannot actually want to do anything. But computer systems have their own (human) masters behind them, writing specs and writing code, and the AI will wind up doing exactly what it was designed to do — fix an election, cheat investors out of their money, start a war, or whatever. There’s the real danger. And in these days of wealth being concentrated in fewer and fewer hands, as it was a century ago in the USA, it’s the rich who will wind up controlling AI.
