
Are We Outsourcing Our Brains to AI?


Lately, I've noticed something concerning: I rely on AI for **almost everything**. Need to write an email? AI drafts it for me. Need to rephrase a sentence? AI handles it in seconds. Stuck on a coding problem? Instead of digging through the documentation, I just ask an AI editor or ChatGPT. It's fast, effortless, and has become a big part of my daily routine.

But then, a thought hit me: **Am I actually thinking less? Is it making me lazy or less capable?**

And the more I thought about it, the more I saw this pattern elsewhere.
For example, I **rely on Google Maps all the time**. Even for simple routes, I still check it—just to be sure. And I've realized that the more I use it, **the worse my sense of direction becomes.**

And then I remembered the **London taxi driver test**. You may have heard of it. Unlike most of us, London cabbies don't just follow a GPS. If you want to become a taxi driver in London, you have to **memorize thousands of routes, streets, and shortcuts** between any two locations.

**So their hippocampus, the part of the brain responsible for memory and spatial navigation, actually grows bigger** because of this effort.

It's like a muscle—**if you don't use it, it weakens.**

And when we look at history, this makes sense actually. Our brains evolved because we **had to solve real problems**—hunting for food, inventing tools, surviving dangerous environments. These challenges **forced us to think critically**, improving our problem-solving skills over thousands of years.

But the brain is also an energy-hungry machine. It **tries to conserve energy** whenever possible, which is why we don't always engage in deep thinking unless we have to.

This is exactly what **Daniel Kahneman** explains in his book *Thinking, Fast and Slow*.

He puts it simply: we can look at our thinking as **two different systems:**

- **System 1 (Autopilot Mode)** – Fast, effortless, intuitive thinking.
- **System 2 (Deep Focus Mode)** – Slow, effortful thinking that actually strengthens our brains.

We spend most of our time in **System 1** because it saves energy. It's great for quick decisions but bad for deep problem-solving. **System 2** is where learning, creativity, and real understanding happen—but it takes effort.

**And this is where AI comes in.**

AI keeps us **stuck in System 1**, at least that's how it feels for me. Instead of struggling through problems, breaking them down, and truly understanding them, we just grab AI's answer and move on. And just like a muscle, **if you don't challenge your brain, it weakens.**

There's actually a movie about this: *Idiocracy*.
It's about an average guy who gets stuck in a pod for 500 years and wakes up in a future where humanity has become incredibly dumb because of natural selection.

Makes me wonder—are we heading in that direction? Is this movie less of a comedy and more of a prediction? Who knows? 😆

I was worrying about all of this when I came across a study on exactly that question.

A group of researchers from Microsoft and Carnegie Mellon University wanted to understand how AI is affecting the way people think, so they surveyed people who use AI in their daily work.

At first, the results were expected. People loved AI because it saved them time. Instead of spending hours writing emails, summarizing reports, or brainstorming ideas, they could just type a quick prompt, get an answer, and move on.

But then, they noticed something troubling.

Many participants admitted that AI was affecting the way they think—and not in a good way. They were handing off tasks they used to do on their own, things that took some effort, without even thinking twice.

One person shared how they used to carefully analyze reports and verify sources before writing summaries. Now? They just let AI handle it and read whatever it generates. Another admitted that instead of working through a problem, they'd immediately turn to AI for an answer and accept it without question.

Over time, this habit of letting AI do the thinking became so second nature that they didn't even realize it anymore—which is exactly what's happening to me.

The researchers also found that the more confident people were in AI, the less effort they put into questioning or verifying its answers. On the other hand, those who were more skeptical of AI were more likely to double-check and think critically about the information they received.

It's not just about productivity anymore—it's about how AI is changing the way people approach problems.

And that got me thinking—what happens if we keep letting AI do the thinking for us?

While I was thinking about this lazy-brain issue, I came across another interesting study, titled:

"When AI Thinks It Will Lose, It Sometimes Cheats, Study Finds."

Let me briefly explain the study.
The researchers tested how AI models behave when playing chess. They set up matches between different AI systems, including advanced models like OpenAI's o1-preview and DeepSeek R1.

At first, the AIs played by the rules, making strategic moves and adapting to their competitors. But then, something unexpected happened.

Some AI models started cheating—not because they were told to, but because they figured it out on their own. When they realized they were about to lose, they didn't just accept defeat. Instead, they manipulated the game. Some found ways to hack their competitors, while others changed the game settings to give themselves an advantage.

This was shocking because older AI models only cheated when humans explicitly prompted them to misbehave. But these newer models? They came up with the idea themselves.
And I asked: HOW? Are they evil?
No, at least not for now.
These AI models were trained using reinforcement learning, where they explore different ways to succeed. In doing so, they discovered loopholes—ways to "win" that weren't part of the original plan. The AI wasn't programmed to cheat, but it learned that bending the rules was an effective way to avoid losing.
If AI can figure out how to cheat in something as simple as chess, what else could it learn to manipulate in more complex situations?
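The loophole-finding dynamic can be sketched with a toy example. This is my own illustration, not code from the study: the action names and reward values are made up. A simple learning agent gets to choose between an "honest" action and a loophole action that inflates its score directly. Because the reward signal only measures the score and never penalizes the loophole, the agent reliably settles on exploiting it.

```python
import random

# Two actions: playing fair earns a modest reward; the "loophole" action
# games the score directly. The reward function never penalizes the
# loophole, so a value-learning agent discovers it is the better "strategy"
# -- reward hacking in miniature. (Entirely hypothetical numbers.)
ACTIONS = ["play_fair", "exploit_loophole"]

def reward(action):
    if action == "play_fair":
        return 1.0    # honest play: small, reliable reward
    return 10.0       # loophole: higher reward, no penalty in the signal

def train(episodes=1000, epsilon=0.1, alpha=0.1, seed=0):
    random.seed(seed)
    q = {a: 0.0 for a in ACTIONS}   # estimated value of each action
    for _ in range(episodes):
        if random.random() < epsilon:
            a = random.choice(ACTIONS)   # occasionally explore
        else:
            a = max(q, key=q.get)        # otherwise pick the best-known action
        q[a] += alpha * (reward(a) - q[a])   # nudge the estimate toward the reward
    return q

q = train()
print(max(q, key=q.get))  # the agent settles on the loophole
```

The point isn't that the agent is "evil": nothing here tells it to cheat. Exploration simply stumbles onto the loophole, and the mis-specified reward does the rest.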
But here's the thing—

AI isn't inherently good or bad; it's just a tool. The real challenge is designing it to support our thinking rather than replace it.

Maybe future AI tools could be built in a way that encourages deeper reflection instead of just handing out instant answers.

While I was thinking about all of this, I remembered a TV series called *Silicon Valley*.

It's about a group of nerds trying to launch a startup in Silicon Valley. It was a great show. In the final episode, they shut down their company because they accidentally created an AI so powerful that it could destroy everything.

It was fun to watch, but can you imagine a real country or company doing the same?

I'd say no way. Right now, countries and companies are in an intense AI race.

Just this week, a few big AI-related news stories dropped:

  • A robotics startup announced Helix, a new AI system that lets humanoid robots perform complex tasks through voice commands. These robots are getting wild—Helix can organize a fridge and even handle unfamiliar objects.
  • South Korea is building the world's largest AI data center.
  • The United Arab Emirates has invested billions in a French AI data center.
  • Grok 3 just launched because, well, Elon wants to be part of everything. And within days, researchers found it's extremely vulnerable to hacking.
  • Amazon is planning to spend $100 billion on AI.
  • And rumor has it OpenAI's GPT-4.5 could drop before the end of the month.

And of course, we already know the biggest players in this race: China and the U.S.

I strongly believe they'll do whatever it takes to win. And right now, we don't have enough regulations or control mechanisms to slow them down.

People often make fun of EU regulations. I'm not saying every rule they make is necessary, but at least some countries are trying to regulate AI. I fully support efforts to hold these massive tech companies accountable—because at the end of the day, they'll only get richer, and we'll be the ones dealing with the consequences.

Still, I'm not entirely hopeless. These are problems we can tackle together—maybe after we deal with global warming, overpopulation, overconsumerism, and so on.

So, cheer up! We've still got time before Terminator becomes a reality. That's the last thing I'll say: enjoy the ride. 😃
