
Date: 24th of June, 2025


Recently I've been reading about how bad relying on LLMs can be for your ability to think critically, and it's pretty scary. It's also striking a chord with me: it sometimes feels like I'm turning off my brain whenever I offload whatever I'm doing to an LLM. And nowadays it doesn't even have to be a huge task for me to throw up my hands and toss the problem at an LLM - it can be the slightest snag, like a misplaced semicolon or a missing closing parenthesis. I just can't be bothered anymore. My threshold for giving up is way lower now than it was before LLMs.

How can I expect to do any important work if I offload my task whenever I actually have to slow down and think about what I'm doing? I think you're setting yourself up for a plateau when you hand all the hard stuff to an LLM instead of painstakingly working through the problem yourself. That's how you learn: by trying and failing, then eventually, hopefully, succeeding.

LLMs skip that last part. All you have to do is try and fail, then immediately "succeed" because you asked an LLM how to solve the problem, it spat out an answer, and you pasted it in with no thought about how or why (or whether) the suggestion works.

Now, I'm not against AI or anything - I think it can be a great tool for learning, but it has to be applied in a specific way if you're using it to actually learn. Here's a prompt I've been experimenting with to make the process more useful for learning:

NEVER give me the answer to my problem by taking my own code and providing the solution. Instead, walk me through some of the knowledge required to solve the problem, along with some keywords for me to search for.
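If you talk to a model through an API rather than a chat UI, a prompt like this fits naturally as a system message. Here's a minimal sketch of that, assuming the official `openai` Python SDK; the model name and the example question are placeholders, and the actual API call is shown but not made:

```python
# A learning-oriented system prompt: steer the model away from handing
# over finished solutions and toward explaining and pointing at keywords.
LEARNING_PROMPT = (
    "NEVER give me the answer to my problem by taking my own code and "
    "providing the solution. Instead, walk me through some of the "
    "knowledge required to solve the problem, along with some keywords "
    "for me to search for."
)

def build_messages(question: str) -> list[dict]:
    """Wrap a user question with the learning-oriented system prompt."""
    return [
        {"role": "system", "content": LEARNING_PROMPT},
        {"role": "user", "content": question},
    ]

# The request itself would look something like this (model name is a
# placeholder; requires an API key, so it isn't executed here):
#
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4o",
#     messages=build_messages("Why does my parser choke on nested parens?"),
# )
# print(resp.choices[0].message.content)
```

The point of putting the instruction in the system role is that it persists across the whole conversation, so you can't quietly slip back into "just give me the answer" halfway through.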