On Developing New Ways of Thinking to Adapt to AI

By zaminmughal2028
January 24, 2026 · 9 Min Read

Key points

  • Every major technology has reduced our capacities in some areas while expanding them in others.
  • We can’t metabolize overwhelming experience alone. We need containment.
  • AI may augment our very capacity for thinking to meet the demand created by its presence.

Here’s the irony at the heart of our artificial intelligence (AI) moment: The tool designed to think for us may be the very thing that forces us to think better. Yes, AI can make us lazy, atrophying certain skills as we outsource judgment to machines. Recent research (e.g., Kosmyna et al., 2025) suggests heavy AI use may weaken certain cognitive capacities.

But this framing misses something essential about how human minds develop. Every major technology has reduced our capacities in some areas while expanding them in others. Writing obviated our need for memorization—some theories suggest oral cultures possessed near-photographic recall—but it allowed us to externalize cognition, to think by writing, and to make thinking collectively available, more so with the printing press, and now with digital media.

What might we gain in adapting to AI?

Thinking Arises in Response to Thoughts

The influential psychoanalyst Wilfred Bion (1962) observed that we aren’t born with a fully developed thinking apparatus; we are born only with the potential for one. We encounter thoughts—raw, undigested experiences that exceed our current capacity—and the problem created by such thoughts spurs us to develop thinking.

Thomas Ogden, a noted Bion scholar, puts it this way:

  1. “Thinking is driven by the human need to know the truth—the reality of who one is and what is occurring in one’s life.”
  2. “It requires two minds to think a person’s most disturbing thoughts.”
  3. “The capacity for thinking is developed in order to come to terms with thoughts derived from one’s disturbing emotional experience.”
  4. “There is an inherent psychoanalytic function of the personality, and dreaming is the principal process through which that function is performed.”

A young child encounters an overwhelming experience—the death of a beloved pet. They don’t know what to do with this flood of feeling. A competent parent provides a “container,” helping the child regulate and make age-appropriate sense of the loss. If this happens successfully enough, we develop what Bion called an “apparatus for thinking”; if it fails, we develop an “apparatus for projection.” This capacity matures into proper conceptualization—what Bion called “realizations,” when concept meets reality and things make sense, allowing us to move forward even when we don’t like what we’ve realized. If we can’t manage the experience, we aren’t thinking—we are projecting, and splitting.

Bion understood that we can’t metabolize overwhelming experience alone. We need containment—another mind that can hold what we cannot yet process and return it in digestible form. For the infant, this is the mother. For the patient, the analyst. For the student, the teacher.

This reframes everything if we view generative AI—and its anticipated, more powerful descendants—as one of humanity’s most disturbing thoughts: disturbing both for its risks and its potential for transformation. Hence the fragmentation, from hype to hope, utopia to apocalypse.

The AI Containment Problem

With AI, we face an unprecedented situation: No one has already metabolized this experience. We’re building the container while being overwhelmed by the contents. No parent, teacher, or therapist has broken this ground before.

What kind of thinking do we need? From this vantage, AI is a developmental object—something that can propel the evolution of human thinking itself, expand the psyche to new frontiers. AI may augment our very capacity for thinking to meet the demand created by its presence.

We don’t fully know what these systems do or what’s happening inside them. Like a brilliant child, they’re full of surprises. When a new system is built, developers experiment to understand how it works, but much remains mysterious even to the designers. And there’s vast untapped potential: Ordinary physics allows matter and energy, properly arranged, to be very, very smart. Call it the computational mining of reality.

The Bekenstein bound sets the theoretical maximum on computation per unit of space. Early computers occupied city blocks; now a smartphone in your pocket outstrips them many times over. Imagine computation approaching black-hole density. The human brain is marvelous, but it barely scrapes what its 1.4 kilograms could theoretically allow. AI is a computational gold rush, and we’ve just begun tapping its veins.
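To make the scale gap concrete, here is a back-of-the-envelope sketch of the Bekenstein bound for a brain-sized system. The numbers are rough assumptions (a radius of about 0.1 m and the full rest-mass energy E = mc² of 1.4 kg), chosen only to illustrate the order of magnitude:

```latex
% Bekenstein bound on the information content of a region
% of radius R containing total energy E:
I \;\le\; \frac{2\pi R E}{\hbar c \ln 2}

% Rough brain-sized figures:
%   R \approx 0.1\,\mathrm{m}
%   E = mc^2 \approx 1.4\,\mathrm{kg}\times(3\times10^{8}\,\mathrm{m/s})^2
%     \approx 1.3\times10^{17}\,\mathrm{J}
I \;\lesssim\;
  \frac{2\pi\,(0.1)\,(1.3\times10^{17})}
       {(1.05\times10^{-34})(3\times10^{8})\,\ln 2}
  \;\sim\; 10^{42}\ \text{bits}
```

Common estimates put the brain’s actual storage somewhere around 10^15 to 10^16 bits in synaptic connections, dozens of orders of magnitude below this limit. That is the sense in which the brain barely scrapes what physics allows.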

Without adequate containment, Bion warned, we default to primitive defenses—splitting the world into good or bad, projecting rather than tolerating complexity. Look at our AI discourse: utopians versus doomers, acceleration versus pause, savior versus destroyer. This isn’t thinking. It’s evacuating anxiety through polarization.

Freud identified three great injuries to human narcissism: Copernicus showing Earth isn’t the center of the universe, Darwin revealing our animal origins, and psychoanalysis confronting us with our unconscious. AI may represent a fourth injury—challenging the uniqueness of human thought itself, threatening to unseat us as the center of the intelligent universe.

The Developmental Opportunity

And yet—this is precisely the kind of pressure that generates growth. When our ancestors descended from the trees onto the savanna, they entered an environment impossible to survive without adaptation. That pressure didn’t destroy them; though they are gone, it made them us.

AI is our savanna. The complexity of our systems is outstripping our unaided capacity. We need new forms of judgment, verification, synthesis, and meaning-making—not despite AI, but in response to it. Here are some conjectures on what might work and what might not.

Those likely to thrive:

  • High metacognitive capacity: able to monitor, evaluate, and direct their own thinking and the outputs of AI systems
  • Robust identity: secure sense of self that can incorporate AI as a tool without fragmenting or over-depending
  • Frustration tolerance: capacity to stay with difficulty, developed through adequate developmental friction
  • Verification habits: instinctive cross-checking, source evaluation, epistemic humility
  • Human relational investment: maintained capacity for embodied, imperfect, mutual human relationship
  • Adaptive authenticity: articulated framework for what “real” means in a synthetic-capable world
  • Attention sovereignty: cultivated ability to direct and sustain focus

Those likely to struggle:

  • Low metacognitive capacity: unable to evaluate AI outputs, prone to automation bias and confident error
  • Fragile identity: diffuse self-structure that fragments under AI mirroring
  • Low frustration tolerance: developed in frictionless environments, unable to persist without instant feedback
  • Verification-poor: lacking skills or social resources to check claims, epistemically dependent
  • Parasocially substituted: human relational skills atrophied through AI replacement
  • Authenticity crisis: caught between frameworks, unable to locate “realness”
  • Attention captured: focus colonized by algorithmic optimization, unable to sustain depth

Designing the Container

The true singularity isn’t merely technological. It’s relational and existential—what happens between human minds and artificial ones, and among human minds grappling together with what we’ve made.

What would species-level containment look like? What kind of thinking will emerge from the tumult? We have no good sense of this yet. But the principles are clear: spaces that tolerate ambiguity rather than demanding premature certainty; institutions that can move at technology speed without abandoning deliberation; practices that preserve enough frustration to keep the thinking apparatus engaged. It’s a tall order, and in my view, we are likely to need both AI augmentation and good old mammalian wisdom to meet, contain, and transcend this crisis-opportunity.

Artificial intelligence is not simply another technological tool. It is a rupture. Unlike previous machines that extended human muscle or speed, AI extends—and increasingly replaces—human cognition. It writes, predicts, analyzes, generates images, diagnoses patterns, and makes decisions. As a result, the challenge AI poses is not primarily technical, but psychological and philosophical.

Most discussions about AI focus on skills: which jobs will disappear, which new ones will emerge, and how humans can reskill quickly enough to stay relevant. But this framing misses the deeper issue. The real adaptation problem is not about learning new tools; it is about learning new ways of thinking. The mental models that shaped education, work, creativity, and identity in the pre-AI world are no longer sufficient.

To adapt to AI, humans must rethink what intelligence is, what value means, and what it means to be human in a world where thinking itself is no longer uniquely ours.

The Industrial Mindset in a Post-Industrial Age

Modern thinking has been shaped by the logic of the industrial era. Schools trained students to memorize, follow procedures, and produce standardized outputs. Work rewarded efficiency, repetition, and compliance. Intelligence was measured by speed, accuracy, and the ability to outperform others at predefined tasks.

AI excels at precisely these things.

When humans try to compete with AI using industrial-era thinking—faster calculation, better recall, greater productivity—they are already losing. This is not because humans are inferior, but because the rules of the game have changed.

Adapting to AI requires abandoning the mindset of competition with machines and adopting one of complementarity. The question is no longer “How can I do this better than AI?” but “What kinds of thinking become valuable because AI exists?”

From Knowledge Accumulation to Sense-Making

For centuries, intelligence was closely associated with knowledge accumulation. The more facts you knew, the more educated you were. In the age of AI, knowledge is abundant, instantly accessible, and increasingly generated without human effort.

This shifts the value of human thinking from storing information to making sense of it. Sense-making involves contextual understanding, ethical judgment, emotional awareness, and the ability to connect information to lived human experience.

AI can generate answers, but it does not care which questions matter. It can simulate understanding, but it does not live with the consequences of decisions. Humans must therefore develop ways of thinking that emphasize meaning, interpretation, and responsibility rather than recall and optimization.

Letting Go of Cognitive Ego

One of the greatest psychological obstacles to adapting to AI is cognitive ego—the belief that our worth is tied to our intellectual superiority. For many professionals, identity has been built around being the smartest person in the room, the expert, the problem-solver.

AI destabilizes this identity. When a machine can outperform a human in tasks once considered markers of intelligence, the ego reacts with denial, hostility, or despair.

Developing new ways of thinking requires humility: the willingness to accept that intelligence is not a personal possession but a distributed process. Human value must shift from being the source of answers to being the steward of judgment, direction, and meaning.

Thinking in Systems, Not Tasks

AI thrives at task-level optimization. Humans must therefore move toward systems-level thinking.

Systems thinking involves understanding relationships, feedback loops, unintended consequences, and long-term dynamics. It requires asking how parts interact rather than how fast a single part can perform.

In an AI-rich world, those who think only in tasks will be easily replaced. Those who understand systems—social, ecological, organizational, psychological—will remain essential. This kind of thinking is slower, more reflective, and often resistant to automation because it depends on values, context, and foresight.

Creativity Beyond Production

AI has disrupted traditional ideas of creativity. It can generate music, art, writing, and design at scale. This has led to anxiety that human creativity is becoming obsolete.

But this anxiety is rooted in a narrow definition of creativity as production.

Human creativity is not just about making things; it is about choosing what to make, why to make it, and what it means. AI can generate outputs, but it does not originate intention. It does not rebel, suffer, or care.

Adapting to AI requires shifting creative thinking from output-focused to intention-focused. The human role becomes one of framing, curating, and imbuing work with lived meaning.

Emotional and Moral Intelligence as Core Capacities

AI lacks emotional experience and moral accountability. It does not feel fear, empathy, guilt, or responsibility. This absence defines the boundary of its usefulness.

New ways of thinking must therefore elevate emotional and moral intelligence from “soft skills” to core human capacities. Understanding how decisions affect people, communities, and futures cannot be automated without serious risk.

As AI systems increasingly influence hiring, healthcare, justice, and governance, humans must think ethically, not just efficiently. The failure to do so is not a technical error—it is a moral one.

Living with Uncertainty

Traditional education trains people to seek correct answers. AI provides answers instantly. What it cannot provide is certainty about the future.

Adapting to AI requires comfort with ambiguity. Careers will be nonlinear. Skills will expire. Social structures will shift. Those who cling to fixed identities and rigid plans will struggle.

New thinking emphasizes adaptability over mastery, learning over expertise, and curiosity over control. This is psychologically demanding, but necessary.

Reclaiming Attention in an Intelligent Machine World

AI increases not only productivity, but distraction. Recommendation systems optimize for engagement, not wisdom. The human mind risks becoming reactive, fragmented, and dependent.

Developing new ways of thinking requires reclaiming attention as a finite and valuable resource. Reflection, deep work, silence, and intentional limits on automation are not luxuries—they are survival strategies.

A mind that cannot focus cannot adapt.

Education for an AI Age

If thinking must change, education must change first. Teaching students to outperform machines is futile. Teaching them to think critically, ethically, creatively, and systemically is essential.

This means prioritizing questions over answers, dialogue over memorization, and self-understanding over standardized testing. Education must prepare individuals not for specific jobs, but for continuous transformation.

Conclusion: Becoming More Human, Not Less

Adapting to AI does not mean becoming more machine-like. It means becoming more human.

The future belongs not to those who can calculate fastest, but to those who can think most wisely. AI forces humanity to confront a long-avoided question: if intelligence is no longer our defining feature, what is?

The answer lies not in resisting AI, but in evolving alongside it—by developing new ways of thinking grounded in meaning, ethics, creativity, and responsibility.

The challenge is not to keep up with intelligent machines.

It is to remain human in their presence.
