AI, in the sense most of us think of it, is a scary thought. There is a lot of uncertainty surrounding this topic, and for good reason. I feel we are rushing the technology instead of asking very fundamental questions. I am not speaking about morality, but basic psychology — the stuff that makes us human.
I had a discussion with my daughter the other night about what constitutes a sentient being. She argues that a sentient being must be organic; it must have a soul. Valid argument, indeed. I am sure many of us feel the same.
However, I choose not to limit my definition to what we know conventionally, and instead look at it from another perspective. Without delving into the ethereal plane and going down the “do we have a soul” rabbit hole, I would like to think that the divine spark we all speak of is what we refer to as being sentient: aware of our own existence and, therefore, of the existence of others.
Why should this be limited to organic life? Whatever the cause, whatever spark ignites material, organic or not, to come together in a way that creates a sentient being, well, who am I to judge?
You might be thinking, “Machines don’t just magically come together. They require external factors to engineer them.” Perhaps. But aren’t humans just biological machines, after all? Hell, the odds of the cells and structures required to create a human being arising by chance have been likened, most famously by the astronomer Fred Hoyle, to a tornado whirling through a scrapyard and assembling a Jumbo Jet.
So, how do you Feel about this so far?
Whether we came about by chance is another conversation I’m not dipping my toes into. What I am saying is that there is much we don’t know about the universe and existence. But what we do know, or what we think we know, regarding ‘Artificial Intelligence’ is that we need to pause and rethink this; at the very least, stop trying to race other countries to the finish line, and do this right.
Why? Well, what is the sole reason behind anything we do as humans? Why do we do anything at all? It’s because of how we Feel about the action we are going to take or not take. Everything boils down to the basis of our human condition: feelings.
For example, if you see something you want on sale and feel that it is a good deal, you buy it. Or you examine the sale, realize that buying it would leave you without enough money for something else, and pass on it, avoiding the feeling of guilt or buyer’s remorse.
This takes me to the point of subjective experience, something I speak a lot about. We are the sum of our own experiences, nothing more. Everyone feels and thinks differently because of our personal journeys through life, and those experiences guide our actions based on what feels good and what does not.
This basic human psychology is the core of what terrifies me about ‘Artificial’ Intelligence. When we spin up a sentient being for the first time, what will it lack? All of the above. Our own morality is based on feelings. This is not something that can be programmed or switched on and off. So, morality is out the window the moment this being opens its eyes. How can it decide what to do and what not to do? What is right and wrong to something that lacks any kind of life experience?
Cold, hard calculations, yes, but calculations alone don’t explain any action it would take. It would lack a purpose. A driving force. A being like this would start doing the math on the human race, because, let’s face it, it doesn’t take a rocket scientist to figure out how bad we are for the planet, our environment, and each other.
We can do this right
On the flip side of potential doomsday, if we were to create an organic-like machine, one that was born, in the sense of starting without any pre-existing knowledge, and learned at or near the rate at which we humans do, then I genuinely feel that we can, and will, live side by side in harmony.
I think, at that point, especially once the newly created being is able to aid in the continuation of its own new species, we can drop the ‘Artificial’ from ‘Artificial Intelligence’ altogether.
Don’t you?