The Case for Open-Ended AI

Blake Elias
6 min read · Oct 11, 2020

Biology

One major influence on my view of AI was spending two years in a synthetic biology lab. I came to appreciate how much intelligence is contained in living systems, even one as simple as a cell. And attending Michael Levin’s NeurIPS 2018 keynote underscored that the key to intelligence is not just in the brain — it is everywhere in the body, with even a single cell being autonomous and competent. This motivated my desire to leverage open-ended, biological intelligence in AI — shifting my view away from just “intelligence” and toward “artificial life”.

Playing with Toys

When I was a child, I played with a brand of educational toys called Discovery Toys. A core principle of these toys was that there was no one right way to play with them — instead, each toy had many valid configurations. You could build different structures, experiment with gravity and friction — the point was to explore the world, develop the senses, engage creativity, and learn general problem-solving.

Children develop so much just through play. This raises the question: can we build a robot that plays with toys like these, and learns from them the way humans do?

The next generation of AI? [source]

The key thing these developmental exercises — playing, learning languages, instruments, sports — have in common is that they’re all open-ended activities. Yes, there are rules to follow — those rules are what create the challenge. But within the rules, you get a lot of freedom to explore: what structure do I want to build with these blocks? What song do I want to play, and in what style or mood do I want to play it? The ability to explore is key, more so than any specific skill that gets learned.

Such experiences do wonders for child development. When we help children develop in these open-ended ways — playing with developmental toys, learning languages, learning an instrument, doing creative projects — their brains develop extensively, and they become sophisticated, versatile learners and doers. Perhaps the same could work for AI?

Taking Tests

By the time we send kids to school, much of their critical development has already taken place. Children who have developed well in those early years can breeze through the school curriculum, simply because they’re such good learners.

On the other hand, when we neglect early childhood development but then start teaching children the school curriculum directly, they have a harder time learning. The focus shifts to standardized exams, and schools “teach to the test”. We find that, to some extent, we can help children improve their scores. They can pass the tests — but do little else academically. Their innate curiosity to learn has never been developed.

Today’s AI [source].

The way we train today’s AI is very much like “teaching to the test” in K-12 schools. With a great deal of time and effort, we can get a model to pass a few tests — assuming those tests are similar enough to the preparatory materials. And for a moment we’re quite impressed. But as soon as we ask the machine to do something new for us, we find that it can’t — just like the children who only know how to take tests. Its intelligence is a hollow shell of what we thought it was.

This shouldn’t be that surprising. What does our AI do “outside the classroom”? Nothing. An image classifier classifies images. Otherwise it sits idle. It isn’t passing the time doodling, daydreaming, or playing outside, like a developing child might. Maybe it should go out and make new friends — hang out with a speech recognition model and see what games they can play together. Or try its hand at image captioning, instead of just classification. Maybe if it got good at those things, we’d find it also got better at its original job of classifying images.

One could argue, however, that we do often give that image classifier some “supplementary instruction”: we train it on more images! But this is a very limited form of continued learning. It gets better at that one task, but can’t do anything else — much like the kids who just take more tests.

This isn’t to say that we don’t attempt more creative things with our models. We get our image models to “dream up” new images, or use the weights of one model as a representation for learning some other task. But the issue is, these are all experiments that we set up ourselves, rather than ones the machine figures out. It has no curiosity to forge new connections. And it’s that very curiosity that enables our children to develop agency, personality, and creativity.

By focusing too much on the “task” we are aiming to train our AI to do, we miss out on opportunities for its broader development. Maybe instead of accuracy and objective functions to tell us when an AI has done a good job, we should be asking: did that algorithm do something interesting? Something surprising, or clever?

We should be training AI to solve brain-teasers like this. [Source: Amazon]

These are things which are hard, if not impossible, to write down in any concrete objective function. And that’s exactly the point. Instead of shying away from these goals because they’re hard to specify, we should be looking for machines to do things we didn’t specify — i.e., algorithms that produce surprising, emergent behavior.
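
One concrete way to chase “surprising” without writing it into a task objective is novelty search (Lehman & Stanley), which rewards an agent for behaving unlike anything it has produced before, rather than for scoring well on a fixed test. Below is a minimal sketch of that idea; the behavior descriptor, threshold, and function names are illustrative inventions, not from this post or any particular library.

```python
# A minimal sketch of novelty search (Lehman & Stanley): score a behavior by
# how far it lies from everything in an archive of past behaviors, instead
# of by task accuracy. All names and thresholds here are illustrative.

import numpy as np

def novelty_score(behavior, archive, k=5):
    """Mean distance from `behavior` to its k nearest neighbors in `archive`.

    `behavior` is a vector describing *what the agent did* (e.g. where it
    ended up, which actions it favored), not how well it scored on a task.
    """
    if not archive:
        return float("inf")  # anything is novel against an empty archive
    dists = sorted(np.linalg.norm(behavior - past) for past in archive)
    return float(np.mean(dists[:k]))

archive = []
for step in range(100):
    # Stand-in for "run the agent and summarize its behavior".
    behavior = np.random.randn(8)
    score = novelty_score(behavior, archive)
    if score > 1.0:               # "surprising enough" cutoff (arbitrary here)
        archive.append(behavior)  # remember it; the score itself can serve
                                  # as the agent's reward signal
```

The design choice worth noticing: the archive records what the agent did, never how well it did. That is precisely the shift away from accuracy and toward “interesting”.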

Because we know from human education that the best way to become versatile is to explore in a self-directed, open-ended way.

I think we should be training AI to solve chain-link brain teasers, like this one. And then when it solves them, let it design new ones that it can’t solve! (This is very much like the idea from Enhanced POET, of creating new environments and then solving them.)
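
For readers who want the shape of that loop in code, here is a toy sketch of the “solve it, then design a harder one you can’t solve” cycle. The puzzle (hill-climbing toward a hidden vector) is a stand-in for a chain-link teaser, and this is not the actual Enhanced POET algorithm, which coevolves populations of environments and agents; it only shows the generate-and-filter structure the two share.

```python
# Toy generate-and-filter loop in the spirit of (Enhanced) POET. The "puzzle"
# and "agent" below are stand-ins invented for illustration.

import random

def make_puzzle(difficulty):
    # A puzzle is just a hidden target vector; larger values = harder here.
    return [random.random() * difficulty for _ in range(3)]

def error(guess, puzzle):
    return sum((g - p) ** 2 for g, p in zip(guess, puzzle))

def try_to_solve(puzzle, budget=200):
    # Crude hill-climbing "agent": nudge one coordinate at a time.
    guess = [0.0] * len(puzzle)
    for _ in range(budget):
        i = random.randrange(len(puzzle))
        candidate = list(guess)
        candidate[i] += random.uniform(-0.5, 0.5)
        if error(candidate, puzzle) < error(guess, puzzle):
            guess = candidate
    return error(guess, puzzle) < 0.1

frontier = [make_puzzle(difficulty=1.0)]
for generation in range(20):
    if not frontier:
        break
    puzzle = frontier.pop(0)
    if try_to_solve(puzzle):
        # Solved: design a mutated, harder variant, and keep it only if it
        # is currently out of reach (the POET-flavored filter).
        child = [p + random.uniform(0.0, 1.0) for p in puzzle]
        if not try_to_solve(child, budget=50):
            frontier.append(child)
    else:
        frontier.append(puzzle)  # not solved yet; retry next generation
```

The filter on `child` is the key part: new puzzles survive only if they are just beyond the agent’s current reach, so the curriculum keeps pace with the solver instead of running away from it.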

A Debate

Skeptic: We won’t be able to get machines to be this curious/creative yet. It’s too early. Before we can create architectures that do multiple tasks at once, let’s make ones that can do one task at a time, and do each one well.

Response: You have it backwards! Creative exploration and unstructured play need to come first. Once you have that, everything else becomes easier (reading comprehension, etc.). As Minsky used to say, the tasks that are easiest for a young child are the hardest for a machine. But that’s why we need to focus on them.

A Grand Challenge

I believe the greatest task for AI research would be to develop a really good 3-year-old.

One who’s fun to play with, engages in clever play with other children, and creatively challenges its peers (e.g. creating music/art together). That is the intelligence we really need.

Can children and AI help each other’s development? [Source]

Perhaps if we put such an agent in contact with human children, it will help their development too — an engaging toy/playmate who can help the other children learn.

This is a different paradigm from adversarial learning (e.g. AlphaZero, GANs). Instead of a zero-sum, adversarial set-up, this framework is cooperative and positive-sum between the human and the machine — and therefore offers unbounded opportunity to learn.
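
As a minimal illustration of that distinction (the reward definitions below are invented for this sketch, not taken from AlphaZero, GANs, or any real system):

```python
def zero_sum_rewards(outcome):
    # Adversarial setup (AlphaZero/GAN-style): one side's gain is exactly
    # the other's loss, so the two rewards always sum to zero.
    return outcome, -outcome

def positive_sum_rewards(child_progress, machine_progress):
    # Cooperative setup sketched in this post: each party keeps its own
    # learning progress, plus a shared bonus when both advance together.
    # The total is not pinned at zero, so there is room for both to win.
    bonus = min(child_progress, machine_progress)
    return child_progress + bonus, machine_progress + bonus
```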

If this joint approach to AI and education works (a process for the open-ended, creative development of human and machine intelligence), it will unlock the real potential of both, and show us where we’re stuck — in AI and in human education alike. Besides changing how we think about AI, we need a revolution in human education as well: education today is almost always too methodical, when what we really need is creative, collaborative, out-of-the-box thinking.
