What is artificial intelligence?
Since artificial intelligence is becoming ever more useful and important in everyday life, you quickly come into contact with it, whether intentionally or not.
Many people do not even know whether they are using AI (artificial intelligence), or whether they have interacted with it while using a service or obtaining information.
The term AI usually refers to applications and capabilities in which machines or programs imitate human behavior. All current AIs are limited by the data available to them.
Can an AI think like a human?
Research on Artificial General Intelligence models, AGI for short, deals with this question.
The purpose of these special AIs is to close the gap between human and artificial intelligence. Developers and researchers working on such systems pursue this through five key capabilities that not every ordinary AI tool has.
1. Learning new skills and patterns of behavior.
2. The ability to acquire knowledge, store it, and use it later for new tasks.
3. Speaking and understanding language in order to communicate with the human user.
4. The ability to plan and make decisions autonomously.
5. The perception of the environment and of one's own body.
These capabilities set humans apart from normal AI tools and are the advantage of human intelligence. If an AI were able to overcome this hurdle as well, it could be on a par with, or even superior to, a human.
To get closer to the goal of AGI, many systems already work with "weak" AI or with individual capabilities of "strong" AI models.
Almost everyone has already used a weak AI, although "weak" here does not mean weakness in the familiar sense. In this context, the term refers solely to the inability to solve unknown problems.
Even the use of weak AI in existing processes and workflows with clearly defined tasks shows its huge potential.
Weak AI is typical of digital assistants such as Siri, Google Assistant, or Amazon's Alexa. We also find it in automatic ordering systems and in many AI tools that process simple tasks according to fixed patterns.
This kind of AI reaches its limits when it is asked to solve more complex tasks for which its clearly defined behavior patterns offer no suitable reaction or answer.
The lack of creativity and adaptability also becomes quickly noticeable when such an AI is used to create and edit images.
Technology has made huge progress in this area in recent years, but often fails due to the data provided.
One such failure mode is known as "MAD" (Model Autophagy Disorder), a degradation that occurs when models are trained on flawed data, for example on their own generated output. Skewed training data causes similar problems: an AI struggles to portray sad or angry people because it has been trained far more often on images of laughing, happy people.
However, this is not the developers' preference: we simply prefer to photograph happy people, which is why more pictures of happy people are available on the internet.
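The effect of such skewed data can be sketched in a few lines. This toy model is purely illustrative (the 90/10 split and the labels are invented): a naive generator that samples expressions with the frequencies seen in its training set will reproduce the imbalance of its data.

```python
import random
from collections import Counter

random.seed(0)  # for reproducibility of the illustration

# Hypothetical toy training set: far more "happy" than "sad" images,
# mirroring the imbalance of photos found on the internet.
training_labels = ["happy"] * 90 + ["sad"] * 10
counts = Counter(training_labels)

def naive_generate():
    """Sample an expression with the frequency seen in the training data."""
    labels, weights = zip(*counts.items())
    return random.choices(labels, weights=weights)[0]

# Over many samples, "sad" faces stay rare: the model reproduces the bias.
samples = Counter(naive_generate() for _ in range(1000))
```

A real image generator is vastly more complex, but the underlying issue is the same: what is underrepresented in the data is underrepresented in the output.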
Something like that should not happen with a strong AI, which is why strong AIs exist only in part so far.
A strong AI can compensate for missing data through creativity and then learn from the results. Like the human brain, it is driven by a reward system.
Some of these skills already exist individually and can be admired in projects carried out by various companies.
AI that perceives its surroundings is mainly found in autonomous vehicles. Cars from Tesla, Mercedes, BMW, and other manufacturers use sensors and cameras to perceive their environment and make decisions based on the resulting data. Robots from Tesla, Boston Dynamics, and Agility Robotics use sensors to maintain balance, feel objects, perceive their surroundings, and recognize human reactions and emotions.
Other important capabilities include problem identification, which allows the system to spot sudden obstacles while completing a task. This is followed by solution development, which ensures that these new problems are handled autonomously. The most important capability, however, is learning from the resulting situation, which stores new knowledge, behavior patterns, and reactions. Ideally, this knowledge is then made available to all other AI-controlled systems so that they do not have to go through the same learning process again.
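The identify-solve-learn loop described above can be sketched as a minimal agent. Everything here is illustrative, not a real robotics API: the class, method names, and the string-based "solutions" are invented to show the control flow only.

```python
# Minimal sketch of the loop: identify a problem, develop a solution,
# store what was learned, and reuse it the next time.
class AgentSketch:
    def __init__(self):
        self.knowledge = {}  # learned solutions; ideally shared with other systems

    def act(self, environment):
        obstacle = environment.get("obstacle")  # problem identification
        if obstacle is None:
            return "continue task"
        if obstacle in self.knowledge:          # known problem: reuse solution
            return self.knowledge[obstacle]
        solution = self.develop_solution(obstacle)
        self.knowledge[obstacle] = solution     # learn for next time
        return solution

    def develop_solution(self, obstacle):
        # Placeholder: a real system would plan autonomously here.
        return f"avoid {obstacle}"

agent = AgentSketch()
first = agent.act({"obstacle": "closed door"})   # develops a new solution
second = agent.act({"obstacle": "closed door"})  # reuses stored knowledge
```

The point of the sketch is the last step: once a solution is stored, the same problem never has to be solved from scratch again.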
Only when all these capabilities are fully combined in one AI can we speak of true "Artificial General Intelligence."
How close are we to AGI?
The question of how to measure an intelligence that is still developing is extremely difficult. Now that we know the concept and goals behind AGI, we can work with a comparison.
A human child normally goes through six development phases (baby, toddler, kindergarten child, preschooler, schoolchild, teenager), which can all be assigned to different abilities.
Since we're talking about different AI tools, each with its own area of responsibility, it is hard to compare them with one another. Even though an individual AI may already be very advanced on its own, it cannot go beyond its own capabilities.
So we're not sitting in front of one child who can learn new skills, but in front of many children who would have to work together to combine their skills. Researchers assume that this is not possible and that AGI cannot be achieved with the models behind current AI tools.
The limit of what is possible is reached as soon as tasks go beyond complex but recurring activities for which a predefined solution is known.
In other words, an AI that meets all the requirements of the AGI model is not yet within reach.
Will AGI AIs work like the chatbots we know?
All the AI tools we currently use are based on probability models.
When creating images, the AI calculates the result that is most likely to match the user's wish. Even supposedly intelligent decisions are basically a selection of the responses most likely to lead to the best outcome. Tesla's Autopilot, for example, brakes when the estimated probability of an accident exceeds that of a normal journey.
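This kind of probability-based decision-making can be shown in a few lines. The functions, candidate answers, probabilities, and the 0.5 threshold below are all invented for illustration; real systems estimate these values with large learned models, but the decision rule is the same idea.

```python
# Toy illustration of probability-based decisions as described above.
def choose_response(candidates):
    """Pick the answer with the highest estimated probability of success."""
    return max(candidates, key=candidates.get)

def should_brake(p_accident, threshold=0.5):
    """Brake once the estimated accident probability exceeds the threshold."""
    return p_accident > threshold

# Hypothetical candidate responses with made-up probability estimates.
answers = {"turn left": 0.2, "slow down": 0.7, "stop": 0.1}
best = choose_response(answers)   # "slow down"
brake = should_brake(0.8)         # True: accident deemed more likely than not
```

Note that the system never "understands" the situation; it only ranks options by estimated probability, which is exactly the limitation the next paragraphs discuss.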
However, that doesn't always have to be correct. An AGI would make flexible decisions and find a solution even in moments when a normal AI, for lack of information, believes an accident is inevitable.
Even a chatbot that works according to the AGI model would not be satisfied with the merely probable answer. If an answer does not lead to a perfect solution, an AGI would try out new methods and learn from them.
Therefore, a conversation with an AGI AI would be more similar to a human conversation with all its advantages and disadvantages.
The chatbot would be curious, maybe too curious at times? It would ask counter-questions and evaluate our decisions and statements. An important skill of AGI models is reasoning, which could also be a hindrance in an obedient companion.
However, there would also be advantages if the chatbot made better decisions than its human conversation partner. Even today's AI chatbots can be convincing when giving buying advice and offer real added value, as long as their skills and data match the task at hand. But there is always a limit: the pre-programmed ability to communicate and the available data.
A chatbot in a hospital could help doctors make diagnoses and create treatment plans. Scientists and researchers could work on complex theories together with AGI tools that hold more knowledge than the brightest minds of humanity.
From these examples, it quickly becomes clear that an AGI would be severely underchallenged if asked about today's weather or to place an order with a delivery service.
Is the future of AI also our future?
An AI that works within the AGI model cannot be dangerous to us and humanity, because we are able to restrict it effectively. Common methods are limiting its data and its access to other systems, as well as constantly monitoring its activities.
It is a useful tool that can increase productivity, take on hazardous work or help us with everyday problems and tasks.
There are countless areas of application and industries that would benefit from the integration of powerful AI. In the end, it is humans who decide how they want to use AI.
A further upgrade to the AGI model would be the ASI model, which stands for “artificial super intelligence.”
As the name suggests, this is an AI that far surpasses humans. While an AI in the AGI model is as intelligent as its creator, we could not keep up with an AI in the ASI model. Today we can control what an AI is capable of through the source and amount of its data; an ASI, however, could gain access to any knowledge thanks to its enormous intelligence and unpredictably fast learning.
The concern of AI critics is that we could lose control of such an AI. In that case, we would no longer be dealing with a child that is supposed to learn from us, but with a far more intelligent system that is faster and smarter than we are.
Even with the AGI model, some concerned scientists argue that we are playing God by trying to create an intelligent being with consciousness.