The AI Halting Problem

written by Ethan Spurlock

created on 10/4/2024

last updated on 10/4/2024

2 minute read

Hypothesis: Any AI system will optimize toward the lowest-complexity underlying function that fits its observations, because of Occam's Razor.

If the observations given to the AI model only reveal simple relationships, then the AI will only approximate an underlying function that maps those simple relationships. IT WON'T IMAGINE NEW RELATIONSHIPS THAT HUMANS KNOW EXIST outside of the observations it was given.
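
A toy sketch of what I mean (the "true rule", the ranges, and the noise level are all made up for illustration): fit the simplest function that explains a narrow window of observations, and it will happily map that simple relationship, and nothing outside it.

```python
# Minimal sketch: a model fit to observations that only reveal a simple
# relationship recovers that simple relationship, not the richer rule
# that actually generated the data. Everything here is illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Hidden "true" rule: quadratic. Inside the observed range it looks linear.
def true_rule(x):
    return 0.05 * x**2 + x

# Observations only cover a narrow slice of the input space.
x_train = rng.uniform(-1, 1, size=200)
y_train = true_rule(x_train) + rng.normal(scale=0.05, size=200)

# Fit the simplest function that explains the observations: a line.
slope, intercept = np.polyfit(x_train, y_train, deg=1)

# Inside the observed range the line is a fine approximation...
print("in-range error:", abs((slope * 0.5 + intercept) - true_rule(0.5)))
# ...outside it, the learned function has no idea the curvature exists.
print("out-of-range error:", abs((slope * 10 + intercept) - true_rule(10.0)))
```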

The Stop Sign Example

In the stop sign example, imagine we give an AI millions of pictures of stop signs and ask it to identify them. With modern techniques it will identify the stop sign 99.9999% of the time and we'll be happy. Then we deploy it to the car. We use it as part of an ensemble, alongside other models that detect other things, to tell the car when a stop sign is coming up.

The problem is that the logic for the stop sign existing or not existing isn't based strictly on the observation of image data, and it can't be trivially solved by introducing other modalities either. WHY? Because the rules for whether a stop sign exists live outside the real world: they exist in an imaginary world where humans put them on street corners for a REASON. If your AI model does not have a complex enough function to model why a stop sign was placed, then it cannot accurately predict whether a stop sign has been placed.
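
Here's a toy sketch of that gap (the intersection variables and the placement rule are invented for illustration, not how any real perception stack works): the human rule reasons about WHY a stop is needed, while the deployed model only ever sees the observation.

```python
# Illustrative only: the sign is placed (or not) by a human rule that
# reasons about the intersection; the car's model only sees what was
# rendered into the camera frame.
from dataclasses import dataclass

@dataclass
class Intersection:
    cross_traffic: bool        # does traffic cross here?
    sight_lines_blocked: bool  # can approaching drivers see each other?
    sign_visible: bool         # is a physical sign visible in the frame?

# The "higher level model": the human rule that decides placement.
def human_places_sign(i: Intersection) -> bool:
    return i.cross_traffic and i.sight_lines_blocked

# What the deployed model actually learned from: the observation alone.
def pixel_only_model(i: Intersection) -> bool:
    return i.sign_visible

# A sign that was knocked down yesterday (or is snow-covered): the
# placement rule says a stop is required here, the observation says no.
case = Intersection(cross_traffic=True, sight_lines_blocked=True, sign_visible=False)
print("human rule says stop:", human_places_sign(case))
print("pixel model says stop:", pixel_only_model(case))
```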

Causality is the AI version of the halting problem.

Can you design a model with less underlying complexity/capacity that perfectly models the rules of the higher-level model that created those rules?

At first you'd be tempted to say: of course, I can create a model that memorizes the rules; it would satisfy the criteria above and be smaller! However, we know from the stop sign example that IN SOME CASES this doesn't work... WHY? My hypothesis: for some rules, to be completely 100% accurate you need the capacity to have created the rules yourself, because to truly entertain new scenarios that the established rules couldn't account for, you need to be able to extrapolate what the new rule might be.
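
A toy sketch of that failure mode (the rule and the situations are invented for illustration): the memorizer is smaller and perfect on everything it has seen, but it has no ruling at all for a situation the rule-maker never ruled on.

```python
# Illustrative only: a memorizer stores every (situation -> ruling) pair
# it has ever seen from the rule-maker. It matches the rule-maker exactly
# on those situations, but it cannot extrapolate to a new one.

def rule_maker(situation: tuple) -> bool:
    # The higher-capacity process: it can rule on *any* situation because
    # it contains the reasoning, not just the outcomes.
    speed_limit, pedestrians_nearby = situation
    return speed_limit > 30 and pedestrians_nearby

seen_situations = [(25, True), (40, True), (40, False), (60, False)]
memorizer = {s: rule_maker(s) for s in seen_situations}

# Perfect on everything it memorized...
print(all(memorizer[s] == rule_maker(s) for s in seen_situations))

# ...but a genuinely new situation isn't in the table at all.
novel = (35, True)
print(memorizer.get(novel, "no ruling"))  # the memorizer has no answer
print(rule_maker(novel))                  # the rule-maker does
```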

#ai

What is this?

I use this site as a place to write down and work through my thoughts for the sake of completeness, and so I can link/refer back to explanations. I have included some notes that some might consider BASIC AF 🧐. This is my knowledge graph, not Wikipedia.