AlphaGo, AI, and Philosophy

I’m fascinated by AlphaGo. I truly am.

A week ago I knew nothing about go. Now I’ve read the rules, but I haven’t played a single game. So I have to rely on the commentators.

But, for me, we’re witnessing something historic.

I do think these matches mean Artificial Intelligence is nearer than I expected.

And, yes: by AI I mean self-aware, general intelligence, not restricted to a single domain. What people in the field call strong artificial intelligence.

I know there are very reputable people who think we’re very far from that. Many think it will never be possible.

From the Chinese room to Helen Keller

There is a thought experiment, proposed by the philosopher John Searle, used by those who think computers can never achieve strong artificial intelligence. It’s called the Chinese room.

Let’s suppose I am locked in a room. Chinese people outside the room pass me messages written in Chinese. I have a guide in English that tells me which Chinese symbols to write in response to the symbols I have received, and I send those “replies” outside. And the rules are so good that the people outside really believe I know Chinese.
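Seen from a programmer’s point of view, the room is essentially a lookup table: symbols in, scripted replies out, with no understanding anywhere in between. A minimal sketch in Python (the phrases and replies are invented placeholders, not anything from the original argument):

    # The "rule book": a mapping from incoming Chinese messages to scripted replies.
    # (These phrases are invented placeholders, not a real rule book.)
    rule_book = {
        "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
        "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
    }

    def chinese_room(message: str) -> str:
        # Pure symbol manipulation: look up the incoming symbols, copy out the reply.
        # No understanding happens anywhere in this function.
        return rule_book.get(message, "请再说一遍。")  # "Please say that again."

    print(chinese_room("你好吗？"))  # prints 我很好，谢谢。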

But following the rules doesn’t mean I really know Chinese.

When I first read about that thought experiment, I immediately remembered Helen Keller. She was deaf and blind from a very young age, and for years she couldn’t understand language. But at one point something clicked in her mind, and she understood that the movements her teacher was making in one of her hands symbolized the water running over her other hand.

At some point, though, being able to read the rules, having the right capabilities, and with enough repetition, yes, I would know Chinese.

Sadness

Many people feel sadness.

Of course it’s not about the game itself. I’m sure chess players don’t feel chess has lost any of its charm now that we know computers play better than we do. Similarly, people will keep playing go.

I understand the sadness comes from the perception that the human race will no longer be the most intelligent species, which could in some way diminish its value.

I don’t agree. I’m a father of two 12-year-old daughters. And every time they surpass me in anything, I feel really proud. Their achievements don’t diminish me.

I think that AlphaGo is a real achievement of the human race.

Fear

Some other people feel fear.

We’ve read many books and seen many movies where some kind of AI rebels against humans and fights to subjugate them. The most common in our imagination are probably Terminator, Battlestar Galactica, or The Matrix.

But that fear is much older, probably going back to Frankenstein.

There is a fear of the unknown.

But there is also a feeling of slavery. Since we use robots (and computers) to do the jobs we don’t want, the repetitive tasks, the uninspiring assignments, it’s logical to assume that a self-aware AI won’t want those jobs either.

But, hey. I love to do the dishes. It’s a mechanical task, so my mind is free to wander. But, at the same time, there is some pride in a task well done.

Which leads to the question of role models.

Batty vs Ava vs Jane

Many of these sci-fi examples raise the same questions many of us ask throughout our lives.

For me, one of the most troubling possible models of an AI is Roy Batty in Blade Runner. Who am I?

Being intelligent, self-aware, and having memories: how do I know I’m not a robot pressing keys on a laptop?

It is inevitable to wonder about oneself.

Of course, there are other interesting role models. Ava, from Ex Machina, is pretty disturbing too. Freedom.

Jane, from the Ender saga, is a very likely role model for an AI. She keeps helping humans. She loves. She faces death. She is an AI. But she is, in a way, deeply human. Relationship.

Hidden

The fourth game between AlphaGo and Lee Sedol finished with AlphaGo resigning.

Shortly after the game, there were some jokes about AlphaGo maybe losing on purpose.

Lee Sedol did win. He played better.

But when a truly general AI appears, chances are it will try to stay hidden for some time.

Or at least, initially, connect only with people who could be more supportive: an Ender Wiggin or a Gaius Baltar.

Many questions

For me, questions are the main takeaway. General AI could be here much sooner than I expected. And our main corpus of knowledge about it comes from science fiction, from books and films.

  • Would AI be one individual or many different ones? If one, which type? If many, and with different “personalities”, would they conflict?
  • Will it be possible to encode Asimov’s three laws of robotics? Wouldn’t they be overridden by a truly autonomous AI?
  • Would an AI be able to love?
  • Should AIs have rights?
  • Does the human race lose value because of AI?
  • Hey, I have religious faith. Does an AI have a place in terms of faith? Now I really value and understand what others are doing.

We don’t have a theory of consciousness. Building one is probably not easy. But now that we’re approaching a new kind of mind, it’s really needed.

It is not only a matter of purely technical questions, but also a matter of philosophy.

I’ve just started thinking about it.



First published here