To assess true competency, I administered a test under strict conditions: pen and paper only, with absolutely no access to AI, the internet, or phones. Prior to the test, for fairness, I allowed each candidate to outline their skill set and self-assess their proficiency on a scale of one to five. My engineer then designed a custom evaluation based on those claims.
Of the thirty people tested, none passed. Notably, every candidate was under 30. The fact that not one could succeed in a core professional test without technological assistance highlights a troubling decline in fundamental skills.
Wow, that's mind-blowing! Who is going to create the information that the AI needs to train itself in the future?
Your solid approach to your profession is impressive. Good question—who is going to train the AI? It’s a scary thought, but I’ve had to become a developer myself because I couldn’t find qualified people. As my partner says, “If we’re the best coders in the room, we’re screwed.” Specifically, I was looking for software engineers with 4 to 8 years of experience. The good ones know they’re good and are always hard to get.
I also still like to write with a pen sometimes, as I find it helps me concentrate better. That makes me wonder, as an instructor: do you have your students code by hand? Looking forward to seeing more of your great work.
The mental models argument is probably the strongest defense of fundamentals I've seen. I've noticed a pattern with junior devs who lean too heavily on AI code generation: they can't debug when things break because they never built intuition for how the pieces fit. The "magic words you can't control" phrase nails it. I spent about six months working through lower-level concepts last year, and now when I use Copilot I can actually evaluate what it's suggesting instead of blindly accepting it. The debugging piece especially matters: AI-generated code still breaks in ways that require understanding state management and data flow to fix.
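To make that concrete, here's a toy Python sketch (a hypothetical example, not from any real Copilot session) of the kind of state bug I mean. The code looks fine if you only pattern-match on syntax; fixing it requires knowing when Python evaluates default arguments.

```python
# Hypothetical illustration: a plausible-looking function that hides a
# shared-state bug. The default list is created once, at definition time,
# so every call without an explicit cart mutates the same object.

def add_item(item, cart=[]):          # bug: one list shared across calls
    cart.append(item)
    return cart

print(add_item("apple"))              # ['apple']
print(add_item("bread"))              # ['apple', 'bread']  <- surprise

# The fix comes from understanding the evaluation model, not the syntax:
def add_item_fixed(item, cart=None):
    if cart is None:
        cart = []                     # fresh list on every call
    cart.append(item)
    return cart

print(add_item_fixed("apple"))        # ['apple']
print(add_item_fixed("bread"))        # ['bread']
```

Nothing about the broken version looks wrong at a glance, which is exactly why you need the underlying mental model to catch it.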
The debugging insight is spot-on; when the AI-generated code inevitably breaks (and it always does), that's when the real learning gap shows up. Your journey from fundamentals to effective AI use is exactly the path I'm trying to help people follow here.