In God, Human, Animal, Machine (Doubleday, Aug.), Wired columnist Meghan O’Gieblyn explores what it means to be human in a technological world.
Why this book now?
The book really grew out of my dissatisfaction with the current state of technology criticism, which is often focused on the immediate economic or political consequences of new technologies. It seemed like there was almost a kind of historical amnesia: people were failing to make connections between ongoing debates about AI and much older philosophical conversations that considered the same questions, such as the relationship between body and mind or whether we have free will. I noticed that some of these religious and spiritual ideas were reappearing in conversations about technology. The book was an attempt to think through some of those larger questions, and to consider what these technologies ultimately mean for humanity and how we understand ourselves.
What were you surprised to learn as you were writing?
Just how much the book was going to be about the question of meaning. I came to realize that science, as much as religion, is trying to satisfy the human longing to know, for example, the larger significance of our lives. I realized just how much my own interest in these topics was bound up with my desire to see my life as meaningful and to see the lives of humans generally as meaningful. Meaning is intimately connected to the question of consciousness, which is one of the big remaining mysteries in science, though one that people in the tech world tend to shy away from. The act of writing the book was, I suppose, an attempt to carve out some sort of larger meaning from my experience and to consider whether it’s possible to find meaning in a disenchanted world.
What lessons would you like readers to take away?
If we’re doomed to metaphor, we need to be aware of the way we’re using language and make sure it doesn’t start slipping into literalism or fundamentalism. Meaning is something we have to create for ourselves. We have a long history of looking elsewhere to avoid the hard work of meaning-making, and that impulse is now being transposed onto technology. There’s a pervasive idea that once we have these super-powerful algorithms, we won’t have to do that work; we’ll have oracles that will answer anything we ask, not just questions of fact but also moral and ethical ones. There’s already a sense that technologies know more about the world than we do, and that we can defer to them on the kinds of questions that used to fall within the realm of ethics or moral philosophy. But we can’t rely on machines to evolve a theory of meaning or to give purpose to our lives; we have to do that ourselves.