Moravec’s paradox is this observation (here in Steven Pinker’s phrasing of it): “The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard.”

Basically, early AI people, being a bit proud of their status as Superior Human Specimens as Validated By SAT Scores and Chess-Skills, assumed that getting computers to beat them at those things would be the hard part. They were wrong. Things even low-SAT-score chess morons can do, like recognizing their mother’s face, opening a door latch, or getting a knock-knock joke, turned out to be far harder.

I can't find the quote, but Moravec also points out that the problems we used to think of as "hard" turn out to be computationally cheap (even if they need huge datasets to build out the decision tree), while the "unconscious" things like recognizing a face or parsing natural language are the ones that demand serious raw compute.
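
To make that asymmetry concrete, here's a toy sketch of my own (illustrative only, not anything from Moravec, and the numbers in the comments are ballpark): plain brute-force minimax solves tic-tac-toe, the "deliberate reasoning" kind of task, in a blink.

```python
# A toy sketch, not from the post: brute-force minimax for tic-tac-toe.
# The raw game tree is only a few hundred thousand nodes, and memoizing
# on distinct positions shrinks that to a few thousand, so "perfect play"
# falls out in a fraction of a second -- the point being that this kind
# of deliberate reasoning is computationally cheap compared with the
# "unconscious" skills Moravec is talking about.
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return "X" or "O" if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Value of `board` with `player` to move: +1 = X wins, -1 = O wins, 0 = draw."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if "." not in board:
        return 0
    other = "O" if player == "X" else "X"
    values = [minimax(board[:i] + player + board[i + 1:], other)
              for i, cell in enumerate(board) if cell == "."]
    return max(values) if player == "X" else min(values)

if __name__ == "__main__":
    # Perfect play from the empty board is a draw, and the answer comes back instantly.
    print(minimax("." * 9, "X"))  # -> 0
```

Nothing clever in there, just exhaustive search with a cache; the contrast is that nobody has a thirty-line script that reliably recognizes faces.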

Like Ken Jennings said when they asked him how it felt to lose to Watson (paraphrasing): "I challenge Watson to a rematch. This time: dancing."

For all we know, we might still be a conceptual breakthrough or two away from solving the unconscious problems.