I think AI is an amazing tool. It's unlocking capabilities with technology that we only dreamed about before. The opportunities are truly exciting.
As hyped as I am about AI, I think we need to be careful about how we use it and what we use it for.
Let me explain.
I see it all the time. One popular source of references is the Marvel movies.
When we talk about Agents or about MCP, there are usually references to Jarvis.
And I totally agree. Building Jarvis would be awesome! The fact that it's becoming more and more of a reality is mind-blowing.
But I want to talk about another movie reference that definitely applies.
This movie was a hit when it came out but never reached the heights of any of the MCU films. It's a cute movie.
And that movie is Wall-E.
In the movie Wall-E, we follow a dutiful and adorable trash-collection robot. The story takes place in a dystopian future where humanity's trash production far outpaces its sustainable practices, rendering Earth uninhabitable.
So, they build a spaceship and sail the stars until they can find a habitable planet. They also leave behind an army of trash-collection robots to clean up the mess on Earth in the hope that they can one day return.
Life on the spaceship is great! (From a certain point of view.) Robots take care of all decisions, so all you have to do is relax and enjoy the amenities of the ship. It's like a cruise ship where anything you want is at your fingertips.
Well, after a while, the humans deteriorate. They lose muscle mass and bone density and eventually are unable to walk on their own. They spend all day glued to their devices consuming all the entertainment they can.
In these circumstances, humankind abdicated all responsibilities, all choices, all thinking to their robot and AI companions. They went from creating these wonderful assistants to pushing all duties onto them.
I recently read an article highlighting research on how irresponsible use of ChatGPT can lead to reduced brain function. The argument made there is that we still need to think. We still need to reason through things on our own. AI should not replace our opportunities to learn; it should augment and enhance them.
But, how?
I can get so much more done if I just ask ChatGPT to write my essay for me rather than coming up with my key arguments myself and asking it to fill in or critique what's there.
It's way easier if I just ask ChatGPT for what I need rather than work to remember it myself.
But what's the consequence?
What happens when we shift too many things to AI?
One big consequence is that pathways in the brain close off.
The more you do an activity, the more your brain adapts to make it easier and easier. If you spend dozens, hundreds, or even thousands of hours doing something, your brain will optimize to make that thing easier for you to do.
The opposite is also true.
If you spend a lot of time doing something but then stop, your brain will eventually lose those connections. While you may remember how to do that thing, you will find it much more difficult. You may eventually even forget how to do it altogether.
Our brains are amazing gifts with incredible potential. They're like muscles: the more we use them, the stronger and better they get at what we ask them to do. The less we use them, the weaker they get and the fewer things we can do well.
As we shift more and more work to AI, we run the risk of losing some abilities.
Frankly, some of those abilities won't be needed anymore and that's fine. I'm not sure exactly what those will be yet, but I imagine some things we do today will be largely taken over by AI in the future.
There is, however, one skill that we will always need. One skill that makes us better at everything we do, even with AI.
I'm not talking nunchuck skills or bow hunting skills.
I'm talking about critical thinking skills.
No matter what you do with AI, you will do it better with critical thinking skills.
No matter what task you need to accomplish, you'll be more efficient and effective with critical thinking skills.
With critical thinking skills, you'll be able to use AI as a collaborator to enhance your abilities rather than using AI to supplant them.
And if you throw in a little creativity? You'll be able to do things in new, unpredictable ways that may lead to skyrocketing your effectiveness.
So, how do we do this? How do we avoid losing our cognitive abilities by overusing AI?
The previously mentioned article suggested four things you can do:
- Do your thinking first
- Turn off autopilot
- Reclaim friction
- Step back regularly. Ask yourself: Why am I using this tool? What did I learn? What could I do differently next time?
Said differently: lean in when things get hard. You grow the most when things are hard. Don't give those opportunities up. Seize them. Think of ways to leverage AI to help you in them, but don't let AI take over and rob you of the growth.
I view learning in the age of AI the way I view children learning. As children, we all needed to learn to walk. We all needed to learn to speak. We all needed to learn to interact with others. Even though our parents had already done these things, we still needed to learn them for ourselves.
Critical thinking is the same.
We can't learn to think critically if we shift all that work to AI. Even though some amazing people have learned to think very critically, that does not help us do the same. We must learn that for ourselves.
As we're intentional about how we use AI, we'll retain and enhance our ability to think critically while giving a significant boost to our productivity.
Now for another take on abdication. Just as we shouldn't abdicate our opportunities to think and strengthen our minds, we shouldn't abdicate our responsibility for what we accept from AI.
What do I mean by that?
I often hear complaints that AI didn't do something correctly. AI did the wrong thing and broke something. Or the advice it gave us didn't work.
Well, whose problem is that? Is that the AI's problem? Is that a problem for the ML engineers who made that LLM? Or is that our problem?
I submit that it's our problem.
If the LLM didn't give you what you want, it's your responsibility to recognize that and correct it. You should not blindly accept what the AI gives you. If you plan to use what the AI gives you, you should take responsibility for it. Own it like you would if you had come up with that content yourself. If you don't get what you want, adjust your prompts, correct the AI, and try again.
However, you may still find that you don't get what you want. If that happens, the first thing you'll want to do is improve your context engineering. Copy and paste documentation. Train your LLM like you would a new employee. Give it all the info it needs to produce the results you want. Doing so will likely help you learn a few things and better articulate what you're trying to do. After that, your LLM will most likely be able to do what you need it to do.
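To make that concrete, here's a minimal sketch of what context engineering can look like in code, assuming the OpenAI Python SDK. The doc file paths and the task are made-up examples; swap in whatever model, documents, and prompt fit your situation.

```python
# A minimal sketch of context engineering: gather the documentation the
# model needs, put it in the prompt, then ask for the work.
# NOTE: the doc file paths and the task below are hypothetical examples.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Collect the background material you'd hand to a new employee.
doc_paths = ["docs/api_reference.md", "docs/style_guide.md"]
docs = "\n\n".join(Path(p).read_text() for p in doc_paths)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        # The system message carries the "training" you'd give a new hire.
        {"role": "system",
         "content": "Answer using the documentation below.\n\n" + docs},
        # The user message is the actual task.
        {"role": "user",
         "content": "Write a client function for our /v2/orders endpoint."},
    ],
)
print(response.choices[0].message.content)
```

The point isn't this particular API. It's that you, not the model, decide what context it sees, and assembling that context forces you to articulate the problem yourself.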
Even still, you may go hog wild context engineering that thing and still not get what you need. At that point, you're right: the AI isn't able to give you what you want. As amazing as AI is, it still has its limits. Most of us won't hit those limits often, but in some specialized cases you may run into them all the time.
At the end of the day, if you choose to use AI, you are responsible for what you accept from it. While you're not responsible for what it generates (no one really knows what it's going to give you), you are responsible for what you use from it. If it hands you garbage, and you use that garbage, and your site goes down, guess whose fault it is? No, it's not your least favorite politician's fault. It's yours.
As you leverage your critical thinking skills, collaborate with AI on solving problems, and take responsibility for what you choose to use from AI, you'll be able to retain your cognitive function, enhance your productivity, and understand what the LLM is giving you. All this will help you improve your productivity and enhance your skills in the age of AI.