
Comparison of LLMs with Children's Defense Mechanisms

Not long ago, a video went viral making the point that children are not good at lying, but they may still fabricate or exaggerate facts.

Through the study of defense mechanisms, we can better understand this phenomenon and its similarity to large language models (LLMs).

A defense mechanism is a psychological strategy by which the mind protects the individual by excluding from consciousness emotions or memories that the person is unwilling or unable to face. These mechanisms typically first form during childhood and persist into adulthood, though their form and expression change. Adults tend to rely on more mature defenses such as humor, sublimation, and rationalization, but under particular stress or circumstances they may still revert to more primitive defense patterns.

Defense Mechanisms

For example, during the oral stage (0 to 3 years), children may use hallucination as a defense mechanism. Later, during the anal stage (1.5 to 5 years), children may adopt denial and lack of differentiation as defense strategies.

  • Hallucination: Seeing or hearing things connected to what one is trying to avoid thinking about, such as desires, opinions, fantasies, or criticisms. These hallucinations are not subject to reality testing.

  • Denial: Leads a person to ignore or refuse to acknowledge facts that conflict with their feelings, even when those facts are plainly evident. When we say someone is "denying reality," we mean that the person is using this mechanism to avoid facing certain unpleasant facts.

  • Lack of differentiation: The individual struggles to distinguish themselves from others and may excessively conform to others' expectations in behavior or performance. This pattern is also common among adults, especially those who overly seek others' approval.

Hallucinations

When we compare this with LLMs, an interesting similarity emerges. Take hallucination as an example. In children, it may occur because their cognitive abilities have not yet developed enough to align with reality: they lack the experience or the ability to verify whether their perceptions match the real world. In the context of large language models, a "hallucination" is output that contains false or inaccurate information. Although a model encodes a vast amount of knowledge, it lacks genuine real-world experience and common-sense reasoning, so it cannot always determine whether its output is true or logically sound. Coping methods:

  • Children: With time and more life experience, their cognitive abilities gradually develop, and the frequency of hallucinations decreases.
  • LLMs: More training data, better model architectures, and more advanced training techniques can reduce the occurrence of hallucinations (see the sketch after this list).

Both children's hallucinations and LLM hallucinations can be seen as misunderstandings or inaccurate reflections of reality. Although they differ in cause and manifestation, both point to the same core issue: how to accurately understand and reflect reality.
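
Besides the training-side methods mentioned above, hallucinations can also be reduced at inference time by grounding the model in reference material. The sketch below is a minimal illustration of this idea; the function name, prompt wording, and example texts are assumptions made for this illustration, not something taken from the article.

```python
# A minimal sketch of building a "grounded" prompt: the model is asked to
# answer only from the supplied reference text, with an explicit way out.
# The function name and prompt wording are illustrative, not from the article.

def build_grounded_prompt(question: str, reference_text: str) -> str:
    """Constrain the answer to the reference text, allowing the model to decline."""
    return (
        "Answer the question using ONLY the reference text below.\n"
        "If the reference text does not contain the answer, reply: "
        "\"I don't know based on the provided text.\"\n\n"
        f"Reference text:\n{reference_text}\n\n"
        f"Question: {question}\nAnswer:"
    )


if __name__ == "__main__":
    prompt = build_grounded_prompt(
        question="When was the toy broken?",
        reference_text="The toy was found broken on Tuesday afternoon.",
    )
    print(prompt)  # send this prompt to whatever model or API you use
```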

Denial

When children encounter information they do not want to accept or cannot understand, they may simply deny it. For example, when a child breaks a toy and is asked why, he might say "I didn't do it" or "It wasn't me," even though the evidence is clear. Similarly, when an LLM encounters an ambiguous question or one it has no clear answer to, it may give a neutral, vague, or evasive response rather than explicitly admitting that it does not know. Over time and with external correction, children learn to reduce denial; through more training data and adjustments to model parameters, LLMs can likewise be "corrected" to reduce inaccurate outputs.
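
One lightweight way to nudge a model away from evasive answers is simply to give it permission to admit ignorance. Below is a minimal sketch using the OpenAI Python client; the model name, question, and prompt wording are illustrative assumptions, any chat-style interface works the same way, and the behavioral contrast is typical rather than guaranteed.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = "Who broke the toy in the living room yesterday?"

# Asked plainly, many models will produce a confident-sounding guess
# rather than admit they cannot know the answer.
plain = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": question}],
)

# The same question, but the system message explicitly allows abstention.
honest = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": "If you do not have enough information to answer, "
                       "say \"I don't know\" instead of guessing.",
        },
        {"role": "user", "content": question},
    ],
)

print("Plain: ", plain.choices[0].message.content)
print("Honest:", honest.choices[0].message.content)
```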

Lack of differentiation

When users interact with LLMs, their expectations directly influence the model's output, because the model tries to satisfy the user by producing answers that match the input. In this sense, the "response mode" of LLMs resembles the "lack of differentiation" defense in children: both adjust their behavior or output based on external input and expectations.

As shown in the example below, if you tell the LLM that it is a teacher or a detective, it will steer its responses in that direction:
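
Here is a minimal sketch of that kind of role prompt, using the OpenAI Python client; the model name, personas, and question are illustrative assumptions, and any chat-style LLM interface would behave similarly.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = "A window in the house was found broken. What should we do?"

for persona in ("a patient primary-school teacher", "a seasoned detective"):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            # The system message sets the role; the model conforms to it,
            # much like the lack-of-differentiation pattern described above.
            {"role": "system", "content": f"You are {persona}."},
            {"role": "user", "content": question},
        ],
    )
    print(f"--- As {persona} ---")
    print(reply.choices[0].message.content)
```

With the same question, the "teacher" persona tends to explain and reassure, while the "detective" persona tends to investigate; neither answer is more natural to the model, it simply conforms to the role it was given.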

In general, there are similarities between the defense mechanisms of children and the behavioral patterns of LLMs, especially in terms of how they respond to external stimuli or expectations. Understanding these similarities can help us use LLMs more effectively and gain deeper insights into human psychological development.