I have long believed that the best things to do are those that drive you internally, rather than those that bring only extrinsic rewards (e.g. payment). So when I recently came across the lecture below, “Behavior without Utility” (by Prof. Simon DeDeo of Carnegie Mellon and the Santa Fe Institute), I was intrigued.

DeDeo proposes a way to explain human behavior that doesn’t depend on representing one’s desires with a “utility function” — i.e. a formula that assigns a number to each possible state of the world, capturing how much a person likes or dislikes it, and thereby encoding the person’s preferences.
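To make the idea concrete, here is a minimal sketch (my own illustration, not DeDeo’s formalism) of the utility-function view: each state of the world gets a score, and “preference” reduces to comparing scores. The states and numbers below are made up.

```python
def utility(state):
    # Toy scoring: how much this (hypothetical) person likes each state
    # of the world; higher means more preferred.
    scores = {"rainy_commute": -2.0, "sunny_walk": 3.0, "free_pizza": 5.0}
    return scores[state]

def prefers(state_a, state_b):
    # Under the utility view, all preference reduces to this comparison.
    return utility(state_a) > utility(state_b)

print(prefers("free_pizza", "rainy_commute"))  # True
```

The whole explanatory burden then rests on that one scoring function — which is exactly the representation DeDeo is trying to do without.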


If we’re going to solve our largest challenges, we need ways of making collective decisions as a society and building consensus. We need them for this pandemic. How are we going to solve it? With a vaccine? With lockdowns? Or not solve it at all, and just let it take its natural course? Similarly, we’re going to need to make decisions about climate change and other issues that affect everyone.

I argue that this pandemic is our best shot at getting it right. Our best chance to learn something. Why? The solution options are pretty straightforward: we know what works and what doesn’t…

If COVID-19 were 10 times as deadly, we would have eliminated it by now. It would have been so clearly dangerous that we would be unwilling to accept its continued spread, and we would work harder to eliminate it. In fact, I bet that fewer people would die from such a disease.*

From an evolutionary perspective, such a virus would be less fit, as it would provoke such a strong social response that it gets itself banished. COVID-19, however, has found a more sly strategy: be harmful enough to have a real effect on humans, but not so harmful that…


One major influence on my view of AI was spending two years in a synthetic biology lab. I came to appreciate how much intelligence is contained in living systems, even ones as simple as a single cell. Attending Michael Levin’s NeurIPS 2018 keynote underscored that the key to intelligence is not just in the brain — it is everywhere in the body, with even a single cell being autonomous and competent. This motivated my desire to leverage open-ended, biological intelligence in AI — shifting my view away from just “intelligence” and toward “artificial life”.

Playing with Toys

When I was a child…

If we want to talk about general intelligence, with agents that undergo life-long, open-ended learning, then we have to talk about agents existing in a continuous feedback loop with a changing environment. It’s not enough to design a “perfectly intelligent” agent that we train just once and then put out in the world — the “train → test” paradigm of machine learning. An agent that’s truly alive and intelligent is never done learning: over its lifetime, it will constantly be “tested” in the real world — and, as a result of these tests, will continue to “re-train” itself.
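This feedback loop can be sketched in a few lines. The code below is a toy illustration of my own (not a real system): an agent tracks a drifting signal, treating every interaction as both a “test” and a bit of “re-training”. A once-trained, frozen agent would be stranded when the environment shifts; the lifelong learner follows it.

```python
import random

random.seed(0)

class Agent:
    def __init__(self):
        self.estimate = 0.0  # the agent's current model of the environment

    def act_and_learn(self, observation, lr=0.1):
        # "Test": compare the model's prediction against the world;
        # "re-train": nudge the model toward what the world returned.
        error = observation - self.estimate
        self.estimate += lr * error
        return error

agent = Agent()
true_signal = 1.0
for step in range(200):
    if step == 100:
        true_signal = 5.0  # the environment changes mid-life
    obs = true_signal + random.gauss(0, 0.1)
    agent.act_and_learn(obs)

# An agent frozen after the initial training phase would still predict ~1.0;
# the lifelong learner has tracked the shift toward 5.0.
print(round(agent.estimate, 1))
```

The point isn’t the update rule (a plain running average would do); it’s that learning never has a final “done” state.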

We have…

Strikingly absent from the present scientific discourse is the role of attitude and optimism in creating solutions.

Taking the example of COVID-19, the main thing preventing us from achieving elimination is the belief that elimination is impossible. As Henry Ford said: “Whether you think you can, or you think you can’t — you’re right.”

A positive, “we can solve it” approach enables us to ask the right questions. We ask whether we can eliminate COVID-19, and see that we can. We ask how this can be realized, and see that each person just needs to make a small sacrifice. We…

The most striking thing about COVID-19 is that, six-plus months after reaching US shores, it is still rampant within our borders — when we could eliminate it within five weeks if we took the correct actions.

This is a serious enough threat to public health, the economy, and national security that there’s a significant upside to full containment — as we’ve achieved for other pandemic threats (e.g. Ebola, SARS).

At times, we’ve gone 90% of the way to elimination — but then we give up and let resurgences take hold. Restaurants want to re-open. Consumers want their usual lifestyle back…

Programming is at the heart of intelligence. If we want to develop “Artificial General Intelligence”, it will certainly be necessary to have algorithms that can write code, because this is something that humans can do. I claim it will also be sufficient: if we can get computers to code, they’ll be able to build many other systems autonomously — in particular, they’ll be able to code new AI systems, and will therefore be able to self-improve. In this sense, getting computers to synthesize new, original code is the most fundamental problem we can pose in AI.

I am intrigued by…

Let’s look at one example of something amazing that our human visual apparatus can do. Take a look at the image below, and say the first word that comes to mind:

What do you see?

Do you see the triangle?

Many people say “triangle” before they say “three Pac-Man / pie-chart shapes” (which is what’s really there). Why does our vision work this way? Would state-of-the-art computer vision systems see this “triangle that isn’t there”? In general, do our AI/ML systems infer this kind of missing information, and then take that leap of imagination?

When I was an undergraduate at…

On Broken Incentives and Deadlock

By Shawn Jain and Blake Elias
[Shawn Jain is an AI Resident at Microsoft Research.
Blake Elias is a Researcher at the New England Complex Systems Institute.]


Several years ago, Blake read “Good Strategy / Bad Strategy: The Difference and Why It Matters”, a book on business and management strategy. Chapter 8, “Chain-Link Systems”, describes situations in which a system’s performance is limited by its weakest sub-unit, or “link”:

There are portions of organizations, and even of economies, that are chain-linked. When each link is managed somewhat separately, the system can get stuck in a…
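The chain-link idea has a simple structure, which the toy sketch below illustrates (my own example, not from the book): overall performance is capped by the weakest link, so improving any other link alone yields no gain at all. The link names and numbers are hypothetical.

```python
def system_performance(links):
    # In a chain-linked system, output is bounded by the weakest sub-unit.
    return min(links.values())

links = {"design": 0.9, "manufacturing": 0.4, "marketing": 0.8}
print(system_performance(links))  # 0.4

# Strengthening a non-limiting link doesn't move the needle...
links["design"] = 1.0
print(system_performance(links))  # still 0.4

# ...only fixing the weakest link does.
links["manufacturing"] = 0.7
print(system_performance(links))  # 0.7
```

This is exactly why, when each link is managed separately, no individual manager sees a payoff for improving their own link — the incentive structure deadlocks.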

