Something I notice a lot in online discourse around complex topics, particularly AI, is that people often only hear half of what you're saying and instantly jump to disagreeing with that half.

For example, if I say something like: "Learning to code is a useful skill even if vibe coding becomes the dominant programming method."

People hear "... if vibe coding ..." and respond: "I'm already using 12 agents simultaneously and never write code by hand," which completely misses the point that knowing how to code is probably one of the primary reasons they're able to do that effectively.

Which is a bummer, because when you're trying to help people out in their career (give them a white pill instead of a black pill, encourage them to take ownership of their skills, their trajectory, etc., tell them that it's still worthwhile to learn and work hard and that it can make a true difference), a bunch of people want to just pile on and yell about "AI cope" or similar insults. I don't mind if people yell at me, I often yell at myself (rofl), but my concern is more that it convinces a bunch of people (particularly junior people in the field) that it's pointless to learn, iterate, and fight for mastery, which I believe will really hurt them over the course of their careers. And they don't really have anyone to tell them otherwise.

Or people hear "vibe coding becomes the dominant programming method" and respond: "I've tried LLMs and it's not as good as I am right now at <my specific task>," which completely misses the point that AI can often be a help for certain tasks, even if it's literally just saving you time on monotonous work in a code base! I've used agents just to do dumb stuff like move a field from one property to another, or move arguments around on a function everywhere in a code base. What would have taken me quite a bit of time and a lot of effort took 2 minutes of focus on writing a prompt and then letting an agent run in the background!
That's a useful tool even if it never writes a new 0-to-1 feature for me!

Anyway, this is mostly just some random thoughts because I saw someone complaining about a random Prime video today where he was talking about a very serious issue: LLMs possibly being poisoned by as few as a couple hundred documents. This is potentially serious news if you think about any second- or third-order effects. Companies poisoning data so that competitors' code generates garbage, or so that they get suggested first in the output of "the best way to deploy smorgasborg on cloud" or whatever.

The point being that even if AI becomes ubiquitous for all programmers, there are still real concerns, both for people and for technology. And... it's actually just OK to talk about those and try to have complex, nuanced conversations.

Old man screams at cloud. Goodbye.