This class of intelligence is defined by the ability to find
statistical patterns in data and use those patterns to make
predictions or inferences. Deep Neural Networks are complicated
systems that can perform this task, although it should be understood
that these systems do not "understand" in any meaningful way. For
example, a DNN trained to classify images of cats will not
understand the salient features that make a cat a cat, as opposed to
a dog. Still, systems like OpenAI's GPT-3 have achieved impressively human-like media generation.
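As a concrete illustration, here is a minimal sketch of my own (not any particular production system): a tiny classifier fits a statistical boundary between two synthetic clusters of feature points, stand-ins for hypothetical "cat" and "dog" features, and then predicts labels for new points. The data and numbers are invented for illustration only.

```python
# A minimal sketch of pattern-recognition intelligence: a tiny classifier
# finds a statistical boundary between two synthetic clusters (stand-ins
# for "cat" and "dog" features) and predicts labels for new points.
# It fits a decision rule without any "understanding" of the categories.
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic feature clusters: hypothetical "cat" vs "dog" features.
cats = rng.normal(loc=[-1.0, -1.0], scale=0.5, size=(100, 2))
dogs = rng.normal(loc=[1.0, 1.0], scale=0.5, size=(100, 2))
X = np.vstack([cats, dogs])
y = np.concatenate([np.zeros(100), np.ones(100)])

# Logistic regression trained by gradient descent: pure statistics, no semantics.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)         # gradient of cross-entropy loss
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

# The model predicts new points purely from the learned statistical pattern.
test = np.array([[-0.8, -1.2], [1.1, 0.9]])
probs = 1.0 / (1.0 + np.exp(-(test @ w + b)))
print(probs)  # ~0 for the cat-like point, ~1 for the dog-like point
```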
This class of intelligence is defined by the ability to reason symbolically about the difference between two categories. In essence, as the name suggests, it is the ability to generalise learning from a task that has been trained on to a different one that has not. As an example of abstract learning, if I learn to drive
a specific car, then I can generalise this knowledge to be able to
drive (almost) all cars and vans. For more on this topic, I
recommend
Francois Chollet's work.
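To make the driving analogy concrete, here is a toy sketch entirely of my own framing, with hypothetical class and function names: the "driving skill" is written once against an abstract vehicle interface, so the same learned rule applies to specific cars and vans it has never encountered.

```python
# A toy sketch of abstraction: a skill learned against an abstract
# "vehicle" interface transfers unchanged to other vehicles that share
# that interface, much like generalising from one car to (almost) all
# cars and vans. All names here are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Vehicle:
    name: str
    max_speed: float  # vehicles differ in their specifics...

def drive(vehicle: Vehicle, target_speed: float) -> str:
    # ...but the abstract driving skill is one rule applied through a
    # shared interface, which is what lets it generalise.
    speed = min(target_speed, vehicle.max_speed)
    return f"Driving {vehicle.name} at {speed:.0f} km/h"

# The skill "learned" on one car applies to unseen cars and vans.
for v in [Vehicle("hatchback", 180), Vehicle("van", 120), Vehicle("sports car", 250)]:
    print(drive(v, 130))
```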
This class of intelligence encourages exploration of an environment in a way that minimises the amount of unknown information about that environment. This can be modelled with reinforcement learning techniques. In this way, curiosity enables an agent to make discoveries and to solve complex problems with unknown or rare rewards. Further, by combining curiosity and
abstraction, I believe creativity will emerge. For more on this
topic, I recommend
Pierre-Yves Oudeyer's work.
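As a rough sketch of how this can be modelled, the toy below uses a common prediction-error formulation of intrinsic motivation (in the spirit of Oudeyer's work on intrinsically motivated learning): the agent keeps a forward model of a small environment and repeatedly picks the action whose outcome it can predict least well. The environment, update rule, and numbers are invented assumptions, not a reference implementation.

```python
# A minimal sketch of curiosity as intrinsic motivation: the agent is
# drawn toward transitions its own forward model predicts badly, which
# over time shrinks the unknown information about the environment.
import numpy as np

rng = np.random.default_rng(1)

n_states, n_actions = 5, 2
# Hidden true dynamics of a toy environment: next_state = T[state, action].
T = rng.integers(0, n_states, size=(n_states, n_actions))

# The agent's learned forward model: smoothed counts of observed transitions.
counts = np.ones((n_states, n_actions, n_states))

state = 0
for step in range(200):
    # Predicted next-state distribution under the current model.
    probs = counts[state] / counts[state].sum(axis=1, keepdims=True)
    # Curiosity signal per action: entropy (uncertainty) of the prediction;
    # the agent picks the action whose outcome it can predict least well.
    uncertainty = -(probs * np.log(probs)).sum(axis=1)
    action = int(np.argmax(uncertainty))
    next_state = int(T[state, action])
    counts[state, action, next_state] += 1  # update the forward model
    state = next_state

# After exploring, predictions concentrate on the true dynamics for the
# state-action pairs the agent visited.
print(counts.argmax(axis=2))
```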
This class of intelligence defines a set of rules for how to interact with the environment that maximise the chance of survival.
When interacting with other living agents in the environment, it defines rules that generally encourage each agent to live in a mutually beneficial way. Internal value systems are trained by primary caregivers, who imprint socially acceptable ways to interact. When
actions are taken that do not align with that internal value system, emotions such as guilt and sadness emerge, thereby encouraging future actions to align with it. If an
agent does not obey the external societal value system, external agents can use emotions to re-align such a disagreeable agent with the societal values (though this may change only their actions, not their internal values). Base-level emotions such as hunger and thirst provide rules for short-term survival, whereas emotional societal value systems aid survival through cohesive group dynamics. For
exploratory research in developmental robotics that aims to create
AI with base emotions, such as pain and joy, I recommend
Angelica Lim's work.
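Purely as a toy sketch of my own framing (not a model from the literature), the snippet below treats "guilt" as a penalty that reshapes an agent's effective reward whenever an action violates its caregiver-imprinted internal values. The actions, weights, and numbers are illustrative assumptions.

```python
# A toy sketch of emotion as an alignment signal: the agent scores
# candidate actions by external payoff, then guilt (a penalty for
# violating its internal value system) nudges it toward value-aligned
# behaviour even when misaligned actions pay more externally.
ACTIONS = ["share food", "hoard food", "steal food"]

external_payoff = {"share food": 1.0, "hoard food": 2.0, "steal food": 3.0}

# Internal value system imprinted by primary caregivers: how acceptable
# each action is. Violations produce guilt proportional to the gap.
internal_values = {"share food": 1.0, "hoard food": 0.4, "steal food": 0.0}

GUILT_WEIGHT = 2.5  # how strongly emotion reshapes the effective reward

def effective_reward(action: str) -> float:
    guilt = GUILT_WEIGHT * (1.0 - internal_values[action])
    return external_payoff[action] - guilt

for a in ACTIONS:
    print(f"{a}: payoff={external_payoff[a]}, effective={effective_reward(a):.2f}")

best = max(ACTIONS, key=effective_reward)
print("chosen:", best)  # guilt makes 'share food' win despite its lower payoff
```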