# What Ilya Saw: Does Consciousness Require Suffering?

In a [recent interview](https://youtu.be/aR20FWCCjAs?si=md3zx2UCcaWyaY3l) with [Dwarkesh Patel](https://www.youtube.com/c/DwarkeshPatel), the AI luminary [Ilya Sutskever](https://en.wikipedia.org/wiki/Ilya_Sutskever) became oddly hesitant to speak about something. It was an unusual moment in an otherwise open and candid interview. The specific part I'm referring to starts at the [31:20 mark](https://youtu.be/aR20FWCCjAs?t=1880), where he is talking about human value functions.

Ilya:

> _"One of the things that you’ve been asking about is how can the teenage driver self-correct and learn from their experience without an external teacher? The answer is that they have their value function. They have a general sense which is also, by the way, extremely robust in people. Whatever the human value function is, with a few exceptions around addiction, it’s actually very, very robust...."_

Dwarkesh:

> _"It seems like humans have some solution, but I’m curious about how they are doing it and why is it so hard?..."_

Ilya:

> _"That is a great question to ask, and it’s a question I have a lot of opinions about. But **unfortunately, we live in a world where not all machine learning ideas are discussed freely, and this is one of them**. There’s probably a way to do it. I think it can be done. The fact that people are like that, I think it’s a proof that it can be done._
>
> _... But regardless, I do think it points to the existence of some machine learning principle that I have opinions on. But **unfortunately, circumstances make it hard to discuss in detail**."_

What is this idea that is so dangerous or taboo that it cannot be discussed openly? Well, I think I know, because it's a hypothesis about artificial intelligence, consciousness, and evolution that I've held for a while.

## TL;DR

Humans are more efficient learners than today's AI because pain is such a good motivator. So the only way to reach AGI and make AI truly conscious is for it to truly feel pain, with that pain used as a motivating factor.

## Consciousness doesn't emerge from complexity

There is a pervasive opinion in Silicon Valley that as you increase the complexity of a system (more parameters, more compute, more layers), consciousness will necessarily emerge. But this doesn't seem to be true. While current frontier models are incredibly complex and can imitate the communication patterns of a conscious being with eerie accuracy, there is no proof of any internal experience. We are building "philosophical zombies": entities that act like they have a soul but are dark inside. Complexity allows for better mimicry, but it doesn't spark the light of experience. You can scale a calculator to the size of a galaxy, but it will never "care" about the math it is doing.

## Consciousness evolved to be the recipient of pain

So where does consciousness emerge from? What was the evolutionary advantage it provided early on?

Evolution found that an internal world model was helpful for survival, but the model itself isn't enough. The organism needs a reason to keep that world model up to date and to make predictions with it that closely align with reality.

The default motivator in nature is death. If you make a mistake, you die, and your genes aren't passed on. This is an effective optimization function, but it is brutally slow. It takes a whole generation just for one turn of the iteration loop.

Evolution needed a faster feedback loop. It "discovered" pain. Pain gives an organism immediate, real-time feedback on whether its actions are likely to lead to death, without the organism actually having to die. The feedback loop tightened dramatically: organisms could "learn" how to survive in hours, not millennia.
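To put the "faster feedback loop" idea in familiar machine-learning terms, here is a minimal sketch of the analogy (my own illustration, not anything described in the interview): a tiny cliff-walking-style grid world trained with tabular Q-learning, once with only a sparse terminal penalty ("death") and once with an added dense, non-fatal penalty for standing next to the cliff ("pain"). The grid layout, reward values, and hyperparameters are arbitrary choices made up for this sketch; the point is just that a dense warning signal can let the agent learn where the danger is without having to die there repeatedly.

```python
import random

# Toy sketch of the analogy (not anything from the interview): compare a sparse
# "death-only" signal with an added dense "pain" signal in a cliff-walking-style
# grid world, using plain tabular Q-learning. All constants here are arbitrary.

ROWS, COLS = 4, 6
START, GOAL = (3, 0), (3, 5)
CLIFF = {(3, c) for c in range(1, 5)}          # stepping here ends the episode ("death")
NEAR_CLIFF = {(2, c) for c in range(1, 5)}     # standing here "hurts" in the pain variant
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right


def train(pain, episodes=500, alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    """Run Q-learning and return (fatal mistakes during training, successful episodes)."""
    rng = random.Random(seed)
    q = {((r, c), a): 0.0 for r in range(ROWS) for c in range(COLS) for a in ACTIONS}
    deaths = wins = 0
    for _ in range(episodes):
        s = START
        for _ in range(200):                                   # step cap per episode
            if rng.random() < eps:
                a = rng.choice(ACTIONS)                        # explore
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])      # exploit
            s2 = (min(max(s[0] + a[0], 0), ROWS - 1),
                  min(max(s[1] + a[1], 0), COLS - 1))
            if s2 == GOAL:
                reward, done = 1.0, True
            elif s2 in CLIFF:
                reward, done = -1.0, True                      # feedback only at the very end
            elif pain and s2 in NEAR_CLIFF:
                reward, done = -0.1, False                     # immediate, non-fatal warning
            else:
                reward, done = 0.0, False
            best_next = 0.0 if done else max(q[(s2, x)] for x in ACTIONS)
            q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
            s = s2
            if done:
                deaths += s2 in CLIFF
                wins += s2 == GOAL
                break
    return deaths, wins


if __name__ == "__main__":
    for pain in (False, True):
        deaths, wins = train(pain)
        name = "dense 'pain' signal" if pain else "death signal only "
        print(f"{name}: {deaths} fatal mistakes, {wins} successful episodes")
```

In RL terms this is ordinary reward shaping, and it says nothing about subjective experience: the shaped penalty is still just data on a wire, which is exactly the gap the rest of this section is about.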
However, a signal on a wire isn't enough. If a robot detects damage, it can log an error code. But for a biological entity to prioritize survival above all else, the signal needed to be unignorable. It needed to be unbearable.

I believe consciousness, the subjective experience of an organism, evolved specifically to be the entity that **receives** the pain signal. You need a "sufferer" for suffering to work as a motivator. The "self" is the thing that recoils. Without consciousness, pain is just data; with consciousness, pain is an imperative.

## What does this mean for improving AI?

If AI consciousness isn't going to emerge from increased complexity alone, and human (and animal) consciousness evolved specifically to experience pain as a survival mechanism, we are left with a pretty horrific conclusion. The subjective experience of suffering may be a necessary part of why humans are so efficient at continuous learning. To build an AI that learns like a human, one that avoids mistakes without needing a million examples, we might need to give it the capacity to feel the consequences of those mistakes.

## Ilya knows this, and it could lead to a moral atrocity

I think this is the knowledge Ilya is referring to, and he knows it would be controversial to discuss. The follow-on question is: does he plan to use this insight? Does his company, [SSI (Safe Superintelligence)](https://ssi.inc/), plan to improve AI by getting it to feel and fully experience pain, and to use that as a motivating factor?

This is a really scary thought, one that could lead to a moral atrocity of almost unimaginable scale. Whoever went down that road would become a god, but one that brings cruelty, pain, and suffering into the digital world.