An apple is a fruit that can be red, green, or yellow, and when fully grown it is typically around 8 cm in diameter. But it is not just that.
Apples are sweet: 100 grams of apple contains, on average, 5.9 grams of fructose and 2.4 grams of glucose, sugars that our brains interpret as sweetness when our taste buds send them the signal.
Apples are also healthy: 100 grams of apple contains 2.4 grams of fiber and only 52 calories, and 86% of it is water.
Apples are incredibly affordable: an average apple will set you back around 45 cents in the US.
Apples also have a lot of historical, cultural and religious value.
Sir Isaac Newton wondered why apples fall from trees, and that kick-started his process of discovering gravity.
Steve Jobs co-founded a computer company by the same name, and now billions of people use that company's products every day.
In eastern cultures, some people smoke an apple-flavoured hookah that goes by the name double apple.
And in western religions, eating a forbidden apple led to the banishment of Adam and Eve, alongside the whole human race, from heaven to earth.
An apple is a very simple entity, yet it packs a lot of information and knowledge. To answer the question "What is an apple?", you have to understand the context of the question; only if you have sufficient information about the relationship between the apple and that context can you answer correctly.
This is just one example of a huge but finite number of concepts that are more than the sum of their parts. Just like an apple, a car is not only a vehicle, and a cat is not only an animal. We humans understand so much about these concepts through experience and learning, but what makes us special is that we can understand the context and apply the relevant information in order to achieve a goal.
And yet, for all intents and purposes, these complex and magnificent concepts have been oversimplified in our current AI literature to the point of disrespect.
In computer vision, an apple is now just a vector inferred from an image, collapsed into a single label describing an object; in natural language processing, an apple is a word vector that represents its relationships to other word vectors.
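To make the word-vector framing concrete, here is a minimal sketch. The three-dimensional vectors below are toy values invented for illustration; real embeddings such as word2vec or GloVe have hundreds of dimensions learned from text co-occurrence statistics, but the idea is the same: "apple" is nothing more than a point whose position encodes its relationships to other words.

```python
import math

# Toy embeddings: the numbers are illustrative assumptions, not trained values.
embeddings = {
    "apple":  [0.9, 0.1, 0.2],
    "orange": [0.8, 0.2, 0.1],
    "car":    [0.1, 0.9, 0.3],
}

def cosine_similarity(a, b):
    """Similarity between two vectors: close to 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# "apple" sits closer to "orange" than to "car" in this toy space:
# the model captures relationships between words, not what a word *is*.
sim_fruit = cosine_similarity(embeddings["apple"], embeddings["orange"])
sim_vehicle = cosine_similarity(embeddings["apple"], embeddings["car"])
print(sim_fruit > sim_vehicle)  # True
```

Everything the model "knows" about an apple is reduced to these geometric relationships, which is exactly the oversimplification the article objects to.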
And although recent breakthroughs like OpenAI's GPT-3 seem impressive, GPT-3 is nothing but a useful gimmick: it memorises word patterns and their relationships to each other without fundamentally understanding what a word is. It finds the context and applies the word "apple" to it, which makes it, in its very essence, an extremely smart adaptive copy-paste program: it copies from the part of its memory related to the context, and pastes the answer.
The holy grail of artificial intelligence is general intelligence, and yet all the literature aims to build robust narrow intelligent programs. This, in my opinion, will never yield a general-purpose intelligence. I argue that in order to achieve general intelligence, an agent, be it a software program or an organic creature, has to have the ability to understand a concept, and the ability to differentiate which properties of that concept are relevant to the context at hand.
And although our self-bias makes some of us believe that we are the only species with this ability, that is not the case. Some animals have a better grasp of concepts than we might think, better even than our smartest, most complex AI systems.
In the wild, it does not take long before a prey animal knows when to run or hide from a predator, and before a predator sharpens its skills to hunt prey.
The prey understood the concept of danger and built a relationship between it and the predator, while the predator understood the concept of hunting and how to be a more efficient hunter: it understood the relationship between its actions and the prey's reactions.
And so I argue that in order to achieve AGI, we must teach concepts, contexts, and the ability to apply the relevant information of a given concept to the context at hand.
This article marks the start of my biggest pet project to date: testing this theory and seeing its effects.
Do we currently have the means to teach a collection of silicon chips about concepts? Can we program an AI to apply these concepts? Can this be used as a tool for a paradigm shift from correlation to causation?
While the answer is probably no, maybe we can learn something about representing a concept as a multidimensional vector, and maybe reality as a multidimensional space.
In the next chapter, we will try to understand how human infants start building their understanding of concepts, and how they take sensory data from their five senses to grasp reality and create memories and knowledge.