What AI can't do!

A problem in economics illustrates the difference between artificial and human intelligence. Understanding tacit knowledge and the limits of AI is crucial to using it effectively and fairly.

The "Red Bus - Blue Bus" problem is one of the few clear thought experiments ever conducted by econometricians. It demonstrates a central drawback associated with statistical estimation: the probability that an individual will make a particular choice when faced with multiple alternatives. If you are indifferent between taking a car or a red bus to work, your assessment of the probability of choosing a certain option is equivalent to a coin toss. The probability of driving is 50 percent, and the probability that you will take the red bus is also 50 percent. Therefore, your chances of selection are 1:1.

Now introduce a third transport option, in two different scenarios, again assuming that the traveler is indifferent between the alternatives. In the first scenario a new railway line opens, so the traveler's options are car, red bus, and train. The estimated probabilities are now one third each: one third car, one third red bus, one third train. As in the two-choice scenario, the odds are even, now 1:1:1.

In the second scenario, suppose the new option is a blue bus rather than a train, so the traveler can drive, take a red bus, or take a blue bus. Is there any real difference between taking a red bus and taking a blue bus? No, it is effectively the same choice. Intuitively, the probability should therefore remain 50 percent for the car and split between the buses: 25 percent for the red bus and 25 percent for the blue bus.

This is because the actual decision is exactly the same as in the original two-choice scenario: take the car or take a bus - in other words, the red bus and the blue bus stand for a single alternative. The color of the bus is irrelevant to the traveler's choice of transport, so the probability of choosing the red bus (or the blue bus) is only half the probability of taking a bus at all. The method by which these probabilities are estimated, however, cannot detect such irrelevant alternatives: the multinomial logit model treats car, red bus, and blue bus as three distinct options and assigns them odds of 1:1:1, exactly as in the scenario with the train.
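To make this concrete, here is a minimal sketch in Python (the language and function names are our illustration, not part of the original problem) of the multinomial logit calculation. With equal utilities, the formula P(i) = exp(V_i) / Σ_j exp(V_j) hands every alternative one third, whether the third option is a train or merely a recolored bus:

```python
import math

def choice_probabilities(utilities):
    """Multinomial logit: P(i) = exp(V_i) / sum_j exp(V_j)."""
    exp_u = [math.exp(v) for v in utilities]
    total = sum(exp_u)
    return [e / total for e in exp_u]

# The traveler is indifferent, so every alternative gets the same utility.
# Nothing in the arithmetic registers that two labels name the same vehicle.
for alternatives in (["car", "red bus", "train"],
                     ["car", "red bus", "blue bus"]):
    probs = choice_probabilities([0.0] * len(alternatives))
    print({name: round(p, 3) for name, p in zip(alternatives, probs)})

# Both scenarios print one third per option, although intuition says the
# second should be car: 0.5, red bus: 0.25, blue bus: 0.25.
```

The model sees only three labels with equal utilities; the duplicate alternative is invisible to it.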

Algorithmic flaws

The (non-)choice between a red bus and a blue bus is a good example of how algorithmic calculations can fail. In their raw form, such models cannot distinguish the nuances of a linguistic description. For a human being it is intuitive why the red bus and the blue bus are identical when weighing transport alternatives, and just as intuitive that introducing a train instead of a blue bus makes a genuine difference. For an algorithmic process, however, it is extremely difficult to express why the bus color is irrelevant as a programmable rule. Why is this the case? The standard repair, sketched below, is telling: it works only because a human supplies the missing judgment.
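Econometricians typically address such cases with a nested logit. Below is a simplified two-stage sketch (a full nested logit uses inclusive-value terms; the structure and names are our own illustration). The hand-written nest dictionary is the point: the grouping is asserted by the analyst, not discovered by the model.

```python
import math

def softmax(utilities):
    exp_u = [math.exp(v) for v in utilities]
    total = sum(exp_u)
    return [e / total for e in exp_u]

# The nest structure is written by hand: a person, not the model, asserts
# that the two buses are the same choice. This is the tacit knowledge.
nests = {"car": ["car"], "bus": ["red bus", "blue bus"]}

# Stage 1: choose between the nests themselves (car vs. bus), indifferent.
nest_probs = dict(zip(nests, softmax([0.0] * len(nests))))

# Stage 2: split each nest's share among its equally attractive members.
for nest, members in nests.items():
    for member, p in zip(members, softmax([0.0] * len(members))):
        print(f"{member}: {nest_probs[nest] * p:.2f}")
# Prints car: 0.50, red bus: 0.25, blue bus: 0.25 -- the intuitive split.
```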

This conundrum is an example of Polanyi's Paradox, named after the chemist and philosopher Michael Polanyi. Simply stated, the paradox is: "We know more than we can tell" - that is, many of the tasks we perform rest on tacit, intuitive knowledge that is difficult to codify and automate. Polanyi's Paradox comes into play whenever a person can do something but cannot describe how they do it.

Polanyi's Paradox explains why machines cannot take over all human tasks.

Evolutionary Skills

The "Moravec Paradox" is a formalized paradox put forth by researchers Hans Moravec, Rodney Brooks, and Marvin Minsky which states:

  • We should assume that the difficulty of reverse-engineering a human skill is roughly proportional to the amount of time that skill took to evolve in animals.
  • The oldest human abilities are largely unconscious and therefore appear effortless to us.
  • Consequently, we should expect skills that appear effortless to be difficult to reverse-engineer, while skills that require effort need not be difficult to engineer at all.

The paradox is that abstract thinking and formal knowledge require very little computational power, while sensorimotor skills, the visualization of future outcomes, and perceptual inference require enormous computing resources. It turns out to be comparatively easy to make computers perform at adult level on intelligence tests, and difficult or impossible to give them the perception and mobility of a one-year-old.

The future of AI is complementary

For artificial intelligence, these paradoxes lead to a sobering conclusion and to a fundamental question of resource allocation. If the skills that are simplest for humans are the ones machines find most difficult, and if this tacit knowledge is hard or impossible to codify, then automating even the simplest tasks that people perform unconsciously demands enormous amounts of time, effort, and resources.

In other words, there is an inverse relationship between how easily a human performs a skill and how easily that skill can be described and replicated by a machine. The most important economic question is therefore: is it worth developing AI to take over intuitive human tasks? Why invest ever more resources to build an AI that performs ever simpler tasks?

This suggests a natural slowdown in the general development of AI. If the opportunity costs of research that teaches machines ever simpler human tasks grow too high, development will slow as returns diminish.

In the ideal case, the future of AI lies in complementing human abilities, not in replacing them. The current view of AI and its interaction with human abilities needs a serious rethink regarding the kinds of problems it is developed to solve. Do we really need AI to tell us that red buses and blue buses are the same?
