
DEV Community

Julien Avezou

Don’t let AI do your thinking: a practical guide for engineers

An AI dependency self-check

I designed a "Thinking Guide" for engineers building with AI, and I'm looking for your feedback.

We’re entering a new era of software engineering.

Code can now be generated instantly. Entire functions, components, even systems can be scaffolded in seconds.

This is a massive opportunity.

But there’s a subtle tradeoff I’ve been thinking a lot about lately:

When execution becomes easier, thinking can quietly decrease.

Not because we’re less capable, but because many of the moments that used to force us to think - debugging, struggling, reasoning - are now compressed or skipped.

You can ship faster than ever.

But speed alone does not build understanding.


Several studies suggest that heavy reliance on AI tools, a form of cognitive offloading, can reduce cognitive effort, weaken critical thinking, and impair long-term learning.

A recent experimental learning study reported in Psychology Today (“Cognitive Offloading: Using AI Reduces New Skill Formation”) showed that learners who relied heavily on AI while acquiring a complex skill (e.g., coding assistance) formed substantially fewer new skills than those who completed the same tasks without AI. The authors conclude that when tasks are done with the goal of learning, offloading key steps to AI “considerably reduces” skill formation and recommend delaying AI support until a solid base of independent proficiency is built.

The takeaway isn’t to avoid AI. It’s to use it in a way that keeps you thinking, rather than replacing your thinking.


Why I wrote this guide

I went from a graduate with no experience to a senior software engineer in five years, while also building and launching my own projects.

One thing I realized along the way:

Growth didn’t come from writing more code.

It came from reflecting on the code I wrote.

After debugging an issue, what signal led me to the root cause?

After designing a system, what trade-offs did I make?

After using AI, did I actually understand what was generated?

So I started building small reflection habits around my work.

Over time, this turned into a structured system.

I recently turned that system into a guide:
"Thinking in the Age of AI"

The goal is simple:
Help engineers keep developing their judgment and intuition while using AI tools.


How to use the guide

This isn’t meant to be read once and forgotten.

It’s designed to be used in small moments during your workflow.

Most exercises take 2–5 minutes.

15 exercises in total.

You don’t need to use everything. Just start by picking one tool and applying it consistently.

The idea is not to reflect more.
But to reflect deliberately.


Example 1 — AI Dependency Detector

One of the tools in the guide is a quick assessment of whether AI is strengthening your thinking or replacing it.

Here is the full exercise extracted from the guide:

Part 1 — Quick Self-Assessment

For each statement, check the box if it applies to you frequently.
☐ I use AI before attempting to think through the problem myself.
☐ I paste AI-generated code without fully reading it.
☐ I struggle to explain AI-generated solutions afterward.
☐ I rely on AI for problems I previously could solve independently.
☐ I rarely rewrite AI-generated code in my own style.
☐ I feel uncomfortable solving similar problems without AI.
☐ I accept AI solutions even when I do not fully understand them.
☐ I use AI to avoid difficult cognitive effort.

Interpreting Your Results

0–2 boxes checked → AI is likely accelerating your thinking.

3–5 boxes checked → Monitor usage and reintroduce friction.

6+ boxes checked → Consider temporarily increasing independent problem-solving time.
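If you prefer running the self-check in code, the thresholds above map directly to a small function. This is just an illustrative sketch; the `interpret` helper and its structure are my own, not part of the guide:

```python
# Sketch of the Part 1 self-assessment scoring.
# The statements and thresholds come from the guide; the
# function name and layout are illustrative only.

STATEMENTS = [
    "I use AI before attempting to think through the problem myself.",
    "I paste AI-generated code without fully reading it.",
    "I struggle to explain AI-generated solutions afterward.",
    "I rely on AI for problems I previously could solve independently.",
    "I rarely rewrite AI-generated code in my own style.",
    "I feel uncomfortable solving similar problems without AI.",
    "I accept AI solutions even when I do not fully understand them.",
    "I use AI to avoid difficult cognitive effort.",
]

def interpret(checked: int) -> str:
    """Map the number of checked boxes to the guide's three bands."""
    if checked <= 2:
        return "AI is likely accelerating your thinking."
    if checked <= 5:
        return "Monitor usage and reintroduce friction."
    return "Consider temporarily increasing independent problem-solving time."
```

Running `interpret(4)`, for example, lands in the middle band: monitor usage and reintroduce friction.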

Part 2 — Dependency Signals

Reflect on the following questions:

When AI is unavailable, how confident would I feel solving similar problems?

1 — Not confident
2 — Somewhat unsure
3 — Neutral
4 — Confident
5 — Fully confident

Circle one: 1 2 3 4 5

Have my independent debugging skills improved, stayed the same, or declined over the past months?

☐ Improved
☐ Stayed the same
☐ Declined

If I removed AI from my workflow tomorrow, what would weaken first?

_____________________________________________________________________

Part 3 — Cognitive Strength Check

Complete this short test after using AI:

Without looking at the code:

• Can I describe the logic step by step?

• Can I identify edge cases?

• Can I reason about time/space complexity?

• Can I modify the solution confidently?

If you answered “no” to multiple items, that is not a failure; it is a signal.

Part 4 — Adjustment Plan

If you notice rising dependency, choose one corrective action:

☐ Attempt solutions yourself for 5–10 minutes before using AI

☐ Rewrite AI-generated code in my own words

☐ Explain the solution aloud after generating it

☐ Use AI only after defining constraints clearly

☐ Limit AI use for certain categories (e.g., debugging)

Write one commitment for your next session:
_____________________________________________________________________

Example 2 — Reflection Prompt Cards

I also created a set of prompt cards organized by engineering themes:

  • system design
  • debugging / incidents
  • learning
  • promotion / impact
  • AI usage

I printed them out and keep them on my desk when building products so that I can ask myself the right questions along the way.

Whenever I’m working, I can pull a relevant card and quickly reflect.

For example:

  • "What trade-offs did I consciously make here?"
  • "What signal led me to the root cause?"
  • "What would I do differently if I had to rebuild this from scratch?"

These small questions compound over time and build intuition.


I’m now sharing this guide for feedback

If you’re curious, I’ve put the guide here:

👉 https://javz.gumroad.com/l/thinking-in-the-age-of-ai

You can grab it for free (just set the price to 0).
If it ends up being useful, feel free to support it so I can keep investing time in creating resources to help engineers; totally optional, of course.

Mainly looking to get feedback from other engineers.

Would especially appreciate feedback on:

  • which exercises feel useful vs unnecessary
  • what feels missing
  • how this fits into your actual workflow

Curious how others approach this

I’d also love to hear from other devs here:

How do you make sure your thinking keeps up with AI tools?

Do you have any habits or systems in place?


AI can generate code.

But the engineers who will stand out are the ones who continue to think deeply while using it.

That’s what I’m trying to build here. This guide is helping me, and I hope it can help you too along your journey as a dev.

Curious to hear your thoughts!

Top comments (28)

Aryan Choudhary

I love this idea, it's so easy to get caught up in relying too heavily on AI tools, but this guide sounds like a great way to keep our critical thinking skills sharp. I'll definitely be checking out some of those exercises and reflection prompts to help me stay on track.

Julien Avezou

Thanks Aryan! I hope you find it useful. I would love to get your feedback after trying out some of the exercises and reflection prompts.

Aryan Choudhary

Sure!

Leonidas Williamson

This resonates. I've noticed the same pattern in my own work — the friction that used to force understanding (debugging, tracing through logic, reasoning about edge cases) is exactly what AI compresses.

Your AI Dependency Detector is a good self-check. The question "If I removed AI from my workflow tomorrow, what would weaken first?" is particularly sharp. For me, the honest answer is probably algorithmic problem-solving — I've let AI handle that more than I should.
One thing I'd add to your framework: distinguishing between types of AI usage.
Not all AI assistance has the same cognitive cost:

Generation (write this function) → High offloading risk
Explanation (explain this code) → Low risk, often builds understanding
Review (find bugs in this) → Medium risk, depends on whether you verify
Scaffolding (boilerplate, setup) → Low risk, saves time without skipping thinking

I've found that being intentional about which mode I'm using helps. Generation before I've thought through the problem = bad. Explanation after I've attempted something = good.
The reflection prompt cards are a nice touch. "What signal led me to the root cause?" is one I should ask more often — it's easy to fix a bug and immediately move on without internalizing the pattern.

Would be curious if you've thought about team-level versions of these exercises. Individual reflection is powerful, but I wonder how you'd apply this in code reviews or post-mortems where AI-generated code is becoming common.

Julien Avezou

I am glad this resonates with you.
I appreciate your feedback and agree on distinguishing the types of AI usage. I like your breakdown and the mapping to cognitive costs.

This guide focuses on reflection at an individual level, but teams can definitely make use of the same exercises. The reflection prompt cards can also be printed out and used as a team exercise for code reviews, post-mortems, RFC discussions, or simply team building.
The team level is definitely an interesting one, given the higher complexity of team dynamics. I am actually working on a prototype for a product that aims to help teams surface their thinking layers better. I will share it with you when the MVP is ready if you are interested.
Have you seen any effective team-level solutions in your experience?

Leonidas Williamson

Would definitely be interested to see your MVP when it's ready — surfacing thinking layers at the team level is a hard problem worth solving.

On team-level solutions I've seen work:

  1. "Explain the AI" rule in code reviews

If you used AI to generate a non-trivial piece of code, you have to add a comment explaining why that approach was chosen — not what it does, but why it's the right solution. Forces the author to actually understand it before it gets merged.

  2. Rotating "no-AI" debugging sessions

When a tricky bug comes in, one person debugs it without AI assistance while the team watches (or pairs). Sounds old-school, but it surfaces reasoning patterns that juniors rarely see anymore. We treated it like a learning exercise, not a performance.

  3. Post-mortem question: "What did the AI miss?"

After incidents, we started asking what assumptions the AI-generated code made that turned out to be wrong. Patterns emerged — AI is consistently weak at certain things (edge cases around state, race conditions, business logic exceptions). Made the team more skeptical in the right places.
None of these are products — just lightweight process changes. But the common thread is making the thinking visible, not just the output.

Curious what angle your prototype is taking. Is it more about async reflection, or real-time collaboration?

Julien Avezou

Awesome! I will share access to the prototype with you once it is ready.
The angle I am taking is a real-time collaborative engineering prompt system that lets engineering teams easily surface and share their thinking while coding, through prompts. The vision is for teams to use this tool in their rituals such as retros, RFCs, planning, etc.

The examples you shared are very valuable. I like how these processes cover the whole spectrum from pre-merge to post-merge, via debugging and incident retros. The "no-AI" debugging sessions are a particularly interesting one; I can see the value for juniors, as it forces troubleshooting and knowledge sharing across the team rather than siloed quick debugging with AI. Thanks for sharing!

Archit Mittal

The framing of AI as a thinking partner rather than a thinking replacement is the right mental model. A practice I've added with my team: before accepting any AI-generated code, you have to be able to explain each block in plain language and justify why that approach over two alternatives. If you can't, you bounce it and re-prompt. It catches the "looks right but I don't understand it" trap that creates the worst tech debt. Do you have a habit or ritual that stops the copy-paste-ship cycle on your side?

Julien Avezou

I like this practice of yours; it adds intentional friction to pause and reflect on understanding the code itself, and also in comparison to other options. Curious how you enforce this practice at the team level?
In one of my teams, we increased the frequency of live knowledge-sharing sessions, which pushed each team member to take more accountability for their code, since they were expected to explain it to others and be challenged with questions from teammates. This was done in a live session but also documented in writing, so that stakeholders and new joiners could understand and onboard to the systems/codebase more seamlessly.

Sylwia Laskowska

Wow, I'm for sure giving this a try! 🚀

Julien Avezou

Nice! Would be super valuable to get feedback/insights from you Sylwia!

Andrew Rozumny

Feels like the real danger isn’t AI doing the thinking — it’s how invisible the shift is.

You don’t notice when you cross the line from “using AI to accelerate” to “using AI instead of understanding” until you hit something slightly non-standard… and suddenly you’re stuck.

I caught myself in that exact loop:
generate → tweak → ship → repeat

Looked productive, but if I’m honest — my ability to reason about the system wasn’t improving at the same speed.

And that’s the scary part: AI gives you output that feels correct, so your brain stops pushing back. 

What worked for me recently is treating AI modes differently:
• generation = high risk
• explanation/review = actually useful
• debugging without understanding = trap

Are you getting better as an engineer with AI… or just faster at producing things you don’t fully own?

Julien Avezou

Thanks for sharing your insights, Andrew. I like the breakdown of AI modes that you use. I can relate to debugging without understanding being the biggest trap, with the highest cognitive cost. If you don't understand what the problem is, how can you be confident that the solution is the right one? You run the risk of introducing unintended side effects, and the more changes you pile on without understanding, the harder debugging becomes later.
The question you raise at the end is an important self-check to run regularly as an engineer.

Jonathan Guzman Guadarrama

At the beginning I was just saying "hey Claude, do this," and that was it. Then I started taking this more seriously, because I felt I was getting worse at some coding tasks. Then I found the superpowers repo for skills and started using the brainstorming skill, which forces you to analyze options and think about what you are going to "create". So now you need to think about the "problem", and with this skill you refine the idea and implementation. I think this approach plus "plan mode" keeps you involved, so you don't set the thinking part aside.

Julien Avezou

Very interesting! What is this superpowers repo for skills? Could you share it? I would be curious to look into it more.
And I completely agree with the "plan mode". I also tend to ideate extensively about requirements and architecture decisions in plan mode before moving on to the implementation phase.

Comment deleted
Julien Avezou

Thanks a lot! I will check this out.

Mykola Kondratiuk

I'll push back on this: when AI handles execution, thinking does not decrease, it moves upstream. Instead of figuring out how to build something, you immediately face whether it should exist. The hard questions surface faster now.

Julien Avezou

I agree with this. Have you come up with processes to help surface these hard questions?
I have made the mistake in the past of spending too much time building features nobody needs. Now that building is faster, I prototype quickly and validate with potential users before spending more time building a fully functional feature. This way, users help me surface the questions I need answered.

Mykola Kondratiuk

Yes: weekly demos before much code is written. Nothing surfaces the real questions faster than watching someone actually use it.

Collapse
 
henryaza profile image
Henry A

The distinction between AI usage modes is worth calling out — generation vs. explanation vs. review vs. scaffolding all have very different cognitive costs. I've noticed the same pattern: using AI to generate a solution before I've thought through the constraints produces worse output and weaker understanding. Using it to explain or review something I've already attempted does the opposite.

The "what would weaken first" question is uncomfortably honest. For me it's probably infrastructure debugging — tracing why a VPC route table isn't propagating or why a security group rule isn't matching. I've caught myself reaching for AI before I've even looked at the CloudWatch logs, which is exactly the friction-skipping you're describing.

Julien Avezou

I really like the concept of breaking down AI usage into different modes and mapping them to cognitive costs. It gives a ballpark measure of AI dependency, and of whether AI is being leveraged or overused.

OpenClaw Cash

Very useful! Feel free to check out my posts as well.

Julien Avezou

Thanks for the support! Will do.

Garvit Singh

Thanks a lot man, appreciate it!

Julien Avezou

Of course! I hope you find value in this guide! I would appreciate your feedback after going through it.
