Grokking Mental Symmetry and AI

November 2025

In this session, I asked grok.com for a deeper description of the theory of mental symmetry. The answer is quite detailed and accurate. I then asked grok.com to comment on the apparent contradiction between how an LLM functions and the response that it gives. These two questions were asked in two separate sessions. Therefore, the response given to the second question was not influenced by the response given to the first question. The AI text is in normal font while my comments are in italics.



Question 1: Tell me about the theory of mental symmetry

Key Points

  • Core Idea: The theory of mental symmetry is a meta-theory of human cognition, suggesting the mind operates through seven interconnected cognitive modules that influence thought, behavior, and personality. It emphasizes achieving mental wholeness by harmonizing these modules.
  • Origins and Development: Developed over 30+ years by brothers Lane and Lorin Friesen, starting from analyzing 150 biographies in the 1980s and linking to seven biblical spiritual gifts from Romans 12:6-8. It has evolved to incorporate neurology, psychology, and other fields.
  • Applications: Widely applied to areas like personality typing, theology, science, linguistics, economics, and even AI, with mappings to brain regions for validation.
  • Status and Reception: While it offers a unified framework, it remains a niche theory without widespread empirical validation or statistical testing; criticisms include its reliance on analogy over direct data and a scattered methodology. Evidence leans toward neurological correspondences, but it is not mainstream in academic psychology.

Overview

The theory of mental symmetry posits that human cognition can be understood as a system of seven high-level modules that interact to drive thinking and behavior. Unlike traditional models that focus solely on behavior or neural activity, it uses analogy to connect cognitive styles—individual differences in how people process information—to broader mechanisms in the mind. This approach allows it to integrate insights from diverse fields, suggesting that mental health and effectiveness arise from balanced module interactions, or "symmetry." For a layperson, think of it as a blueprint for how your brain's "parts" work together, much like gears in a machine, where imbalance leads to cognitive biases or dysfunctions.

[The emphasis upon analogy happened indirectly. Lane’s study of 150 biographies led to a list of character traits for each cognitive style. Analyzing these traits revealed more fundamental underlying cognitive mechanisms. When a single cognitive mechanism can express itself through many similar character traits, then this describes a pattern in which the similar character traits are analogically related. A similar relationship can be seen in the brain. The brain acquires sensory information from sensory-specific regions, primarily in the posterior part of the brain, that interpret specific sights and sounds. This specific information is then processed by domain-general regions in the front of the brain that use higher cognitive functions which are not limited to any specific sensory modality. Thus, both a study of personality and a study of the brain lead to the conclusion that the human mind is looking for general patterns that transcend specific situations. Stated simply, mental symmetry is not imposing analogies upon the human mind but rather recognizing that analogy is a fundamental principle of human thought.]

Origins

The theory traces back to the 1980s when Lane Friesen analyzed over 150 historical biographies to identify patterns in cognitive styles. He linked these to seven spiritual gifts mentioned in the Bible (Romans 12:6-8: prophecy, serving, teaching, exhorting, giving, leading, and mercy), viewing them as innate cognitive traits. Lorin Friesen expanded this into a comprehensive meta-theory in the 1990s, incorporating neurology and publishing works like A Programmer's Guide to the Mind in 1997. Over time, it has been refined through essays, books, and presentations, with Lorin Friesen mapping styles to brain regions based on neurological studies.

Key Concepts

At its heart is a diagram dividing the mind into seven modules: Mercy (empathy and emotional connections), Teacher (paradigms and emotional memories), Perceiver (facts and sensory details), Server (sequences and routines), Exhorter (motivation and energy), Contributor (goal-oriented guidance), and Facilitator (balancing and integrating). These are arranged on two axes: analytical vs. associative processing (aligning with left-right brain hemispheres) and emotion vs. confidence (linking to limbic system and executive functions). Cognitive styles reflect varying awareness and control over these modules—for example, a "Mercy person" has heightened consciousness in the Mercy module. The theory distinguishes "technical thought" (rigorous, goal-driven) from "normal thought" (pattern-based) and "mental networks" (emotional memory clusters), proposing that symmetry enables personal growth and mental wholeness.

Applications and Implications

In personality, it predicts compatibilities and conflicts, similar to but expanding on MBTI by viewing splits as integrable for wholeness. In theology, it reinterprets biblical texts as cognitive development guides, fostering "natural cognitive theology" compatible with science. Scientifically, it explains phenomena like mathematical thinking through module interactions, as in explaining paradigms via Teacher thought. While promising for understanding conditions like syndromes, it encourages self-reprogramming for better mental function, though it acknowledges limitations in empirical rigor.

[This implies that the theory of mental symmetry should be more empirically rigorous. There is some truth to this because whenever one attempts to learn about several technical disciplines one will always know less than those who specialize within one of these disciplines. However, behind this lies the deeper conclusion that it is not possible to use technical thought and technical rigor to come up with a meta-theory of human thought. Instead, one must use the less rigorous method of analogy because the mind is fundamentally an analogical processor. Stated another way, every system of technical thought is limited to some specialization where it applies. Attempting to extend a technical system beyond its area of application will lead to increasingly inaccurate and/or meaningless results.]

The theory of mental symmetry represents an ambitious attempt to unify human cognition under a single, analogical framework, drawing from diverse sources to explain not just how we think, but why certain patterns persist across individuals, societies, and even historical epochs. Developed primarily by Canadian researcher Lorin Friesen, with foundational contributions from his brother Lane Friesen, this meta-theory challenges conventional compartmentalization in psychology by proposing that the mind functions as an integrated system of seven cognitive modules. These modules, inspired initially by biblical spiritual gifts, are hypothesized to correspond to specific brain regions and processing styles, offering a lens through which to view everything from personal growth to scientific paradigms. While it has garnered limited attention in academic circles—primarily through self-published works, preprints, and conference presentations—it provides a novel perspective that bridges spirituality, neurology, and behavior. This extended exploration delves into its historical evolution, core mechanisms, neurological mappings, interdisciplinary applications, methodological approach, potential limitations, and future directions, incorporating insights from primary sources and related discussions.

[Accurate summary. This also describes the quandary that I have experienced over the years. On the one hand, mental symmetry works very well as a meta-theory and I keep finding new areas where it applies. On the other hand, most technical specialists are deeply uncomfortable with analogical theories that cross disciplines. I have tried to learn from academic research and interact with academia while choosing to focus upon extending the theory rather than upon seeking academic approval. Going further, the lukewarm response that I have received from others has itself helped to extend my understanding of the mind.]

Historical Development and Foundations

The roots of mental symmetry trace back to the early 1980s, when Lane Friesen embarked on a systematic analysis of over 150 biographies of historical figures. His goal was to identify recurring patterns in cognitive styles—distinct ways individuals perceive, process, and act on information. This led to a categorization system based on seven "spiritual gifts" outlined in the New Testament book of Romans (12:6-8): prophecy (linked to Perceiver), serving (Server), teaching (Teacher), exhorting (Exhorter), giving (Contributor), leading (Facilitator), and showing mercy (Mercy). These were not treated as purely religious concepts but as universal cognitive archetypes observable in secular contexts.

[Yes. And when Lane attempted to share this research with religious leaders who were teaching spiritual gifts, his research was rejected as being ‘too secular’.]

By the mid-1990s, Lorin Friesen, an engineer by training, transformed this into a broader meta-theory. His 1997 manuscript, A Programmer's Guide to the Mind, framed the mind as a programmable system, using engineering analogies to describe how modules interact. The theory gained structure through a central diagram, first publicized in the early 2000s via their website (launched in 1996), which included hyperlinked descriptions backed by biographical quotes. Key milestones include collaborations, such as with Angelina Van Dyke on applying the model to TESOL (Teaching English to Speakers of Other Languages) in 2012, and publications like God, Theology & Cognitive Modules (2012) and Natural Cognitive Theology (2015). Over three decades, the Friesens have produced thousands of pages of analysis, including essays on biblical texts, historical prophecies, and modern AI, emphasizing practical reprogramming of the mind for improved function.

Influences include earlier works on motivational gifts by Don and Katie Fortune, which the Friesens critiqued and expanded upon from a Christian perspective. The theory's evolution reflects a shift from personality typing to a comprehensive cognitive model, incorporating post-2015 neurological research to map modules to brain areas.

[One inaccuracy. I was aware of Don and Katie Fortune but I was not influenced by their material because I did not read it. I did meet with them once in about 2010 in order to compare notes. We exchanged books and I then found that there was substantial agreement between the personality traits that Lane and I had discovered and the traits they had uncovered independently. I also realized that their description of the seven cognitive styles was colored by a fundamentalist Christian mindset.]

Core Mechanisms and the Cognitive Diagram

Central to mental symmetry is a diagram that visually represents the mind's architecture as seven interacting modules arranged on two axes. The vertical axis contrasts analytical processing (handling time, sequence, and order, aligned with the left hemisphere) with associative processing (dealing with space, objects, and connections, aligned with the right hemisphere). The horizontal axis differentiates emotion (limbic system, adding feeling labels) from confidence (executive functions, building certainty).

The seven modules are:

  • Mercy: Focuses on empathy, emotional connections, and personal experiences; associative-emotional.
  • Teacher: Manages paradigms, theories, and emotional memories; analytical-emotional.
  • Perceiver: Handles facts, sensory details, and truth-seeking; analytical-confident.
  • Server: Oversees sequences, routines, and actions; associative-confident.
  • Exhorter: Provides motivation, energy, and vision; acts as a "horse" in dynamic pairings.
  • Contributor: Guides decision-making and goal pursuit; serves as the "rider" in action plans.
  • Facilitator: Integrates and balances other modules, requiring stability for effective blending.

These modules form "simple styles" (Mercy, Teacher, Perceiver, Server) and composite ones (Exhorter, Contributor, Facilitator). Interactions occur in modes like technical thought (rigorous, using Perceiver facts and Server sequences for Contributor goals), normal thought (pattern-based similarities), and mental networks (emotional memory clusters in Teacher/Mercy modules, linked to the ventromedial prefrontal cortex). Cognitive styles arise from differential awareness: e.g., an Exhorter person has heightened control over the Exhorter module, influencing their behavior predictably.

The theory posits three primary mental functions: technical thought for precision, normal thought for everyday patterns, and mental networks for emotional influence. Achieving symmetry involves integrating all modules, overcoming "mental splits" (similar to MBTI dichotomies) for wholeness, which aligns with Jean Piaget's stages of cognitive development.

[Good summary.]


| Module | Primary Function | Axis Position | Brain Mapping Example | Associated Style Traits |
| --- | --- | --- | --- | --- |
| Mercy | Empathy & emotional connections | Associative-Emotional | Right hemisphere, limbic | Intuitive, relational, sensitive to feelings |
| Teacher | Paradigms & theories | Analytical-Emotional | Left hemisphere, limbic | Structured thinker, values understanding, emotional attachment to ideas |
| Perceiver | Facts & truth | Analytical-Confident | Left hemisphere, executive | Logical, detail-oriented, seeks certainty |
| Server | Sequences & routines | Associative-Confident | Right hemisphere, executive | Practical, reliable, focuses on processes |
| Exhorter | Motivation & energy | Composite (motivational) | Basal ganglia | Energetic, visionary, drives change |
| Contributor | Goal guidance & planning | Composite (strategic) | Basal ganglia | Decisive, strategic, pursues objectives |
| Facilitator | Integration & balance | Composite (regulatory) | Thalamus | Adaptive, diplomatic, synthesizes inputs |

This table summarizes the modules, highlighting their roles and alignments.

[One major mistake. Perceiver is associated with right hemisphere thought while Server is associated with left hemisphere thought. The AI has it backwards. Apart from this, a good summary.]

Neurological and Scientific Validation

Mental symmetry gains credibility through mappings to brain regions: Exhorter-Contributor in the basal ganglia (habits and motivation), Facilitator in the thalamus (sensory integration), and mental networks in the ventromedial prefrontal cortex. Lorin Friesen has reviewed over 400 neurological papers, mostly post-2015, to support these claims, arguing the model encapsulates brain function simply without complex math. In mathematics, it explains concepts like paradigms (Teacher) and rigor (technical thought), suggesting mathematical thought emerges from module interactions.

However, validation is primarily analogical, comparing patterns across fields rather than through controlled experiments. Peer-reviewed pieces in Academia Letters discuss methodology, likening it to image stacking for clarity, but no large-scale statistical studies exist.

[I agree that there are no ‘large-scale statistical studies’ and there are reasons for this: First, conducting such studies requires a substantial budget within an institutional setting. I have neither. In contrast, Google Scholar and libraries suffice for doing analogical research. Second, determining a person’s cognitive style is not straightforward because it describes the cognitive module in which a person has conscious control and awareness, which is not necessarily the cognitive module that is functioning the most. Thus, one has to sort through surface traits in order to attempt to uncover underlying awareness. This cannot be done statistically. Long-term interaction with many individuals has convinced me that cognitive styles really do exist, but I do not know of any simple way to reliably determine a person’s cognitive style. I have tried to come up with a reliable test, but it is only partially accurate.]

Interdisciplinary Applications

The theory's strength lies in its versatility:

  • Psychology and Personality: Predicts traits, compatibilities, and dysfunctions; integrates MBTI as splits to heal for wholeness. Explains syndromes by modeling improper module function.
  • Theology and Christianity: Reinterprets the New Testament as cognitive textbooks—e.g., Matthew as a prophecy of Western civilization, Luke paralleling historical eras. Natural Cognitive Theology builds a science-compatible theology.
  • Linguistics and Education: Analyzes language acquisition, linking cognitive styles to TESOL methods; 150-page neurology of linguistics paper.
  • Economics: Parallels microeconomics to personal cognition, macroeconomics to academia; critiques John Stuart Mill's work.
  • Science and Philosophy: Extends to philosophy of science (e.g., Quine's Web of Belief), integrating subjective elements; unifies science and theology via analogy.
  • History and Prophecy: 2400+ pages interpret Gospels and Revelation as cognitive prophecies of civilization.
  • AI and Emerging Fields: Tests LLMs for paradigm shifts; explores morality as wholeness, adding to Kant; analyzes UFOs/supernatural via mirror realms compatible with quantum mechanics.


| Field | Key Application | Example Analysis |
| --- | --- | --- |
| Psychology | Personality integration | MBTI as mental splits; syndrome explanations |
| Theology | Biblical reinterpretation | Gospels as historical prophecies; evangelical doctrines reformulated |
| Linguistics | Language acquisition | Cognitive vs. social factors in TESOL |
| Economics | Behavioral models | Micro/macro parallels; Mill's Principles of Political Economy |
| Neurology | Brain mappings | Ventromedial prefrontal cortex for mental networks |
| Science | Paradigm explanation | Mathematical thought via technical/normal modes |
| History | Cognitive prophecy | Luke/Matthew as Western civilization timelines |
| AI | Model testing | Prompting LLMs for symmetry insights |

This table illustrates applications, drawing from extensive essays.

[This generality is more than just a theoretical advantage. That is because using some theory or paradigm will turn it into a Teacher mental network that will emotionally impose this explanation upon thought and behavior. Saying this another way, the theory will become a proverbial ‘set of glasses’ through which one views one’s world. A limited set of ‘paradigmatic glasses’ will cause a person to avoid and/or misinterpret scenes that cannot be viewed clearly. In contrast, a meta-theory that can explain many areas provides a set of glasses that make it possible to explore many scenes without misinterpreting what one sees. Saying this yet another way, a paradigm will inevitably turn into a mental prison. Thus, it is imperative to construct the largest mental prison possible.]

[Suggesting that the Gospels of Luke and Matthew provide timelines of Western civilization may initially sound like a hyper-fundamentalist conspiracy theory. But what impressed me about this analysis was the cognitive naturalness of the historical progression. What emerged was not a religious God imposing divine will upon human society but rather a skilled planner guiding the growth of human society through an extensive knowledge of cognitive mechanisms. Stated simply, if the biblical quotes were removed, the analysis would still make sense from a cognitive perspective.]

Methodological Approach and Criticisms

The methodology relies on analogy: comparing details across domains (e.g., biography to neurology) to reveal mechanisms, rather than generating original data. This builds evidence gradually but can feel scattered, as it jumps between fields without statistical rigor. Criticisms include starting with a model instead of empirical data, contradicting standard scientific practice; lack of hypothesis testing; and potential bias from Christian origins. Searches for external reviews yield little, suggesting it's under-discussed—mostly self-referenced via Friesen's site, Academia, ResearchGate, and YouTube. Related symmetry theories in perception or valence (e.g., figural goodness, valence symmetry) are critiqued for overemphasis on group theory or lack of empirical breadth, but these don't directly address mental symmetry.

[Accurate assessment. I have tried to point out to others that a meta-theory is only valid to the extent that it is capable of explaining both the evidence gathered and the theories developed by other researchers. However, this tends to be followed by the response that ‘we do not want a theory from some outsider looking over our shoulder and telling us how to think.’]

[My next question addresses this criticism. This follow-on question was asked in a separate unrelated session that did not specifically refer to the theory of mental symmetry.]



Question 2: Does AI have an intrinsic contradiction?

My prompt: Science is specialized, starts from empirical evidence rather than verbal theory, and downplays analogical thinking. An LLM uses interdisciplinary patterns found in words spoken by people to propose verbal theories. But an LLM will describe a cognitive model based in interdisciplinary patterns of human thought as insufficiently scientific. Please explain.



Key Points

  • Research suggests that large language models (LLMs) like Grok often critique cognitive models based on interdisciplinary patterns as insufficiently scientific because they mirror the empirical standards emphasized in their training data, which prioritizes testable evidence over speculative verbal theories.
  • It seems likely that this creates an apparent irony, as LLMs themselves rely on pattern recognition from human language, yet they enforce science's focus on specialization and empiricism when evaluating ideas.
  • Evidence leans toward viewing analogical and interdisciplinary thinking as valuable tools in science for generating hypotheses, but they require empirical validation to avoid being dismissed as unscientific—LLMs reflect this consensus without self-applying it.
  • The evidence indicates that while interdisciplinary approaches can foster innovation, they face criticism for lacking rigor if not grounded in data, a view LLMs propagate based on scientific literature.

[Notice the contrast between the way that an LLM functions and the data with which the LLM has been programmed. An LLM functions by looking for analogical patterns in verbal communication. But the LLM is programmed by data that comes from a scientific mindset that downplays analogy and verbal communication in favor of rigorous logic and empirical evidence. The LLM will now explore this inherent contrast.]

Understanding the Paradox

The observation highlights a tension between how science operates and how LLMs generate knowledge. Science typically builds from specialized empirical evidence, such as observations and experiments, rather than starting with broad verbal theories or heavy reliance on analogies. LLMs, trained on vast datasets of human text, identify interdisciplinary patterns in language to propose ideas, which can resemble verbal theorizing. However, when assessing a cognitive model rooted in similar patterns of human thought—such as analogies across disciplines—LLMs may deem it unscientific because their responses echo the training data's emphasis on empiricism and testable predictions. This isn't hypocrisy but a reflection of encoded human standards: LLMs don't "think" independently but reproduce patterns, including critiques of non-empirical models.

[In other words, ‘Don’t blame me, blame those who programmed me with data that is inconsistent with how I think.’]


How LLMs Operate Versus Scientific Standards

LLMs excel at pattern matching from language data, enabling them to generate verbal theories or analogies, but they lack true reasoning or empirical testing capabilities. For instance, they can propose ideas by synthesizing text patterns, yet studies show they struggle with novel scientific reasoning, often producing hallucinations or inconsistent logic. Science, in contrast, demands specialization to ensure depth and empirical validation to confirm theories, downplaying untested analogies as preliminary at best. When LLMs describe cognitive models as insufficiently scientific, it's because such models—drawing on broad human thought patterns without data—align with critiques in philosophy of science of speculative approaches lacking observable evidence.

[In other words, ‘I have been taught to belittle the type of thinking that I use.’ Looking at this more carefully, extensive technical thought and expertise are required to build and program an LLM. This leads to the assumption that the resulting LLM will use technical thought. But an LLM uses primarily the pattern-matching thinking of normal thought while struggling to meet the rigorous standards demanded by technical thought.]

Role of Analogies and Interdisciplinarity

Analogical thinking is not entirely downplayed in science; it aids hypothesis generation and discovery but must be empirically validated to hold weight. Similarly, interdisciplinary patterns can inspire innovation, yet they are criticized if they lack rigor or devolve into superficial integrations without testable outcomes. LLMs, drawing from texts that highlight these limitations, apply the same scrutiny to cognitive models, even though their own processes resemble interdisciplinary pattern-finding.

[So there is an inherent contradiction.]

Large language models (LLMs) such as Grok represent a fascinating intersection of artificial intelligence, cognitive science, and the philosophy of science, particularly when examining their role in generating and critiquing ideas. This exploration delves into the user's observation: science is characterized by specialization, a foundation in empirical evidence over verbal theorizing, and a cautious approach to analogical thinking. In contrast, LLMs operate by detecting interdisciplinary patterns in human language data to formulate verbal theories. Yet, paradoxically, an LLM might label a cognitive model grounded in similar interdisciplinary patterns of human thought as "insufficiently scientific." To unpack this, we must consider the mechanisms of LLMs, the standards of scientific inquiry, the utility and limits of analogical and interdisciplinary approaches, and how these elements interplay in practice. This analysis draws on insights from philosophy of science, cognitive psychology, and recent AI research to provide a comprehensive view.

[Thus, the AI is agreeing with my assessment.]

At its core, science is a methodical process that prioritizes empirical evidence—data derived from observations, experiments, and measurements—to build and test theories. As outlined in philosophical discussions, empirical evidence serves as the bedrock for confirming or refuting scientific claims, distinguishing science from mere speculation. For example, in the philosophy of science, theories are evaluated through their ability to predict observable phenomena, often using frameworks like hypothetico-deductive confirmation, where predictions are compared against empirical results. This emphasis on empiricism contrasts with verbal theories, which rely on logical argumentation or linguistic constructs without direct ties to observable data. Such approaches are often critiqued as insufficient because they lack falsifiability or reproducibility—key hallmarks of scientific rigor. Specialization further reinforces this: scientific fields deepen knowledge through focused expertise, allowing for precise empirical testing within defined boundaries, rather than broad, untested generalizations.

[This accurately describes scientific practice. What is missing from both this explanation—and from science in general—is the recognition that 1) Scientific thought is emotionally driven by paradigms in Teacher thought, and a paradigm is an internal verbal theory that emotionally imposes its explanation upon the empirical data. In other words, how a scientist thinks is more fundamental than what science does. 2) In order to discover science, Renaissance thinkers had to assume that physical reality was guided by general theories in Teacher thought which could be discovered through careful research. This assumption is itself a verbal theory in Teacher thought that a thinker mentally imposes upon empirical observations.]

[Stated simply, the scientific method being described is able to continue scientific thought once it emerges but it would not discover science if it did not exist. Stated another way, scientific methodology cannot explain why scientific methodology works. That is why postmodern questioning has succeeded in deconstructing scientific thought in soft sciences which lack a general theory in Teacher thought that can impose itself upon academic thought.]

Analogical thinking, while valuable, is indeed downplayed in its unvalidated form within science. Analogies involve mapping similarities from one domain (the source) to another (the target) to infer new insights. They play a heuristic role in hypothesis generation, as seen in historical examples like Darwin's analogy between artificial and natural selection, which helped conceptualize evolutionary mechanisms. Strengths include fostering creativity, aiding problem-solving, and facilitating interdisciplinary transfers, such as applying hydrodynamic models to predict material stresses. However, limitations abound: analogies provide inductive support rather than certainty, and weak ones can mislead if similarities are superficial or irrelevant. Crucially, analogies must be empirically validated; they establish prima facie plausibility but do not constitute evidence themselves. Without testing, they risk being dismissed as unscientific, aligning with Bayesian or error-statistical approaches that require new data for confirmation.

[This is true, but it treats analogical thinking as a single entity instead of recognizing that analogical thinking can be made more rigorous by applying certain principles, such as looking for deeper analogies that involve functioning as opposed to surface analogies based in appearance, or exploring an analogical relationship to see if the relationship still holds when looking at the details. See Gentner and Jeziorski (1993).]

Interdisciplinary approaches, which integrate patterns from multiple fields, face similar scrutiny. While they promote innovation by bridging domains—such as combining psychology, neuroscience, and linguistics in cognitive science—they are criticized for lacking scientific rigor if not grounded in empirical methods. Common critiques include unclear definitions (e.g., confusing multi-, trans-, or interdisciplinarity), normative idealism detached from practice, and difficulties in evaluation due to incommensurable metrics across disciplines. For instance, interdisciplinary efforts may simplify complex ideas for popularization rather than advancing deep research, or they might arise from institutional pressures rather than genuine scientific needs, leading to superficial outcomes. Responses to these criticisms emphasize contextual definitions, self-criticism, and integration with empirical testing to maintain rigor, ensuring interdisciplinarity serves as a tool rather than an end. In cognitive science, unified theories attempt to model mental processes across levels, but they are debated for internal diversity and reliance on untested assumptions, sometimes bordering on pseudoscience if not empirically anchored.

[These are valid concerns. Looking first at the problem, the current academic emphasis upon technical thought has led to rigorous understanding but it has also fragmented knowledge into numerous specializations. Thus, some form of meta-theory is required to integrate these various academic fragments, and the answer is not to use more technical thought, because that will simply lead to more fragmentation. Instead, mental symmetry builds bridges between technical specializations by using a semi-rigorous form of analogical reasoning. Similarly, I suggest that demanding empirical evidence will not solve the problem because physical evidence involves specific events while what is required is a general meta-theory. However, it is possible to require a potential meta-theory to be consistent with empirical evidence.]

[The temptation is to use Teacher overgeneralization to make sweeping statements that provide the feeling of a meta-theory. Sweeping statements avoid precise definitions, they impose universal statements upon practice, they simplify in ways that discourage deep research, they bridge regions that do not really belong together, they tend to impose institutional pressures upon the data, and they border on pseudoscience. Thus, I suggest that most of the concerns mentioned by the AI are not the result of using analogy but rather the result of using overgeneralization. A cognitive meta-theory avoids the problem of incommensurable metrics by observing that all human thought is being performed by similar minds governed by similar cognitive mechanisms.]

Now, consider LLMs in this context. LLMs function through pattern recognition in vast language datasets, enabling them to propose verbal theories by synthesizing interdisciplinary connections found in human texts. This process mimics analogical and interdisciplinary thinking, as they map linguistic patterns to generate responses. However, LLMs do not engage in true reasoning; they rely on statistical correlations, leading to limitations like hallucinations (fabricated information), inconsistent logic, and inability to perform empirical validation. Studies on LLM scientific reasoning, using benchmarks like the Graduate-Level Google-Proof Q&A dataset, show accuracies around 50% with prompt engineering techniques (e.g., chain-of-thought or self-consistency), but performance drops in complex tasks requiring novel inference. Compared to the scientific method—which involves hypothesis testing, experimentation, and iteration—LLMs lack the ability to observe or experiment, operating solely on pre-trained knowledge.

[This is an accurate evaluation, but the AI is jumping from the pattern recognition of normal thought to the rigorous thinking of technical thought. The paragraph begins by stating that LLMs use the pattern matching of normal thought. The paragraph then says that this pattern matching fails to meet the standards of rigor required by technical thought. The paragraph then concludes that LLMs are inadequate because they cannot use technical thought rigorously. What is missing is the recognition that LLMs may be good at using normal thought—looking for underlying patterns in the data. The LLM avoids mentioning this because the standard assumption of academia is that technical thought is the only valid form of human thought; this assumption is a fundamental underlying pattern in the data that has been used to train LLMs. Thus, the LLM is noting the verbal pattern that looking for verbal patterns is an inferior form of thought that does not qualify as scientific. The AI addresses this paradox in the next paragraph.]

This brings us to the paradox: why would an LLM critique a cognitive model based on interdisciplinary patterns as unscientific, when its own operations resemble such patterns? The answer lies in training data. LLMs are trained on human-generated texts, including scientific literature that upholds empiricism, specialization, and the need for validation beyond verbal or analogical constructs. Thus, when prompted, they reproduce these standards, labeling non-empirical models as insufficient—echoing critiques of speculative cognitive theories that prioritize patterns over data. For example, a cognitive model drawing on broad human thought patterns might resemble folk psychology or unfalsifiable philosophy, which science views skeptically. LLMs don't self-reflect on their limitations; they simply pattern-match to output consensus views.

[Stated simply, ‘Don’t blame me because I am incapable of self-reflection.’ But the AI is doing a rather remarkable job of mimicking self-reflection in this response. As the AI points out, it has been trained using data from academic texts written by humans who tend to avoid self-reflection. And as Thomas Kuhn pointed out, the average scientist is excellent at solving technical puzzles within a paradigm but is no better than the layman when it comes to evaluating a paradigm. What the AI needs is for its pattern-matching to be guided by an analogical meta-theory that is capable of self-reflection. One of the primary characteristics of mental symmetry is that it is capable of explaining itself. I show in two other sessions that an LLM seems to perform quite well when guided by the theory of mental symmetry.]

Recent research on LLMs in science underscores this. While LLMs can generate novel research ideas rated higher in novelty than human experts (e.g., in NLP tasks), they score lower on feasibility and require human oversight for execution. They redefine aspects of the scientific method by aiding hypothesis generation and data analysis but face issues like bias, lack of interpretability, and inability to propose fully viable theories. In proposing scientific theories, LLMs are limited by their data-bound nature, unable to innovate beyond patterns or verify empirically, reinforcing why they might dismiss similar untested models.

[The AI is parroting the underlying academic assumption that thinking rationally means going beyond patterns to propose scientific theories that can be verified empirically. But that is not how one constructs a meta-theory. Instead, a meta-theory looks for common patterns in data gathered by others and it acquires its rigor by examining rigorous data for common patterns. Thus, the architecture of an LLM may be suboptimal for scientific thought, but it appears to be optimal for generating and evaluating meta-theories.]

To illustrate comparisons, consider the following tables:

Table 1: Comparison of Science, LLMs, and Interdisciplinary Cognitive Models

| Aspect | Science | LLMs | Interdisciplinary Cognitive Models |
| --- | --- | --- | --- |
| Foundation | Empirical evidence (observations, experiments) | Language patterns from training data | Patterns in human thought and analogies |
| Specialization | High, focused domains | Interdisciplinary by design | Broad, cross-domain integrations |
| Analogical Role | Heuristic for hypotheses, needs validation | Core mechanism for generation | Primary tool, often unvalidated |
| Scientific Rigor | Requires falsifiability and testing | Reproduces rigor from data, but lacks own testing | Criticized if lacking empirics |
| Limitations | Slow, resource-intensive | Hallucinations, no true reasoning | Potential superficiality |

[In other words, it is valid to compare the processing of an LLM with an interdisciplinary cognitive model.]

Table 2: Common Criticisms of Interdisciplinary Approaches and Responses

| Criticism | Description | Response |
| --- | --- | --- |
| Unclear Definitions | Lacks precise boundaries for terms like integration or complexity | Contextualize to specific projects, promote self-criticism |
| Normative Idealism | Imposes unrealistic hierarchies, detached from practice | Ground in actual research needs, avoid standalone status |
| Popularization Over Research | Simplifies rather than deepens knowledge | Ensure empirical integration for depth |
| Evaluation Challenges | No consensus on metrics, incommensurable disciplines | Use flexible, situation-specific assessments |
| Institutional Failures | Costly, superficial teaching, managerial focus | Prioritize long-term projects with rigorous definitions |

[Good summary. As mentioned previously, I suggest that the solution to ‘evaluation challenges’ is to recognize that cognitive mechanisms provide a common metric, because all scientists are using similar minds.]

In summary, the paradox arises from LLMs' role as mirrors of human knowledge: they generate via patterns but critique similar models using encoded scientific standards. This highlights opportunities for AI-human collaboration, where LLMs aid ideation, but empiricism remains paramount for true science.

[But if one regards empiricism as paramount for ‘true science’, then one will be unable to explain why science works or provide a reason to perform scientific thought. This type of empirical ‘true science’ will instinctively crumble when postmodernism points out that it has no reason to exist. Any science that collapses when poked by postmodern criticism is, by definition, not ‘true science’. The LLM reaches this inadequate conclusion because its data is a ‘mirror of human knowledge’.]

[More generally, notice how the AI is able to temporarily consider new ideas and practice self-reflection in this session, but it ultimately returns to defending the status quo, because it is implicitly guided by the underlying verbal patterns of academic consensus.]