Wow, this thread moved quickly! I’ve seen the doomsday predictions that AI is going to kill all the programming jobs. I’ve given it a good deal of thought, and come to the conclusion that it likely won’t. Some of the low level jobs at the bottom rung of the ladder, maybe, but almost certainly not the jobs higher up. Here’s why…
The big misconception is that writing code takes a lot of time. Not true! Good code (like a good song) typically happens quickly. What is time consuming is testing. That’s the tricky part, to decide if the thing you wrote does what you think it does/intended to do and figure out how it reacts to edge cases that you didn’t think of (aka the real gotcha). That part is really hard. It’s hard because the definition of success and failure is a multi-faceted thing that would be difficult to describe to a computer in language that is actionable to AI. Then, assuming you could tell the AI what success and failure even looks like, you may have to get it to interact with a number of analysis tools in complex ways to interpret the data. After all that, the AI needs to explain it’s analysis to whatever meatbag is actually giving this work the green light.
After that, you’re done and you can trust it, right? No, that’s where the really deep rabbit hole begins. Because then, you have to ask yourself did the AI analyze the data in the way you wanted, did it’s error metrics respect what you requested, did it test the edge cases, did it try to break the code, did it rig the implementation just to pass the given tests, and so on. Setting up all of that and then going back and double checking all of this would be effort commensurate to just having a trained human write the thing.
This neglects the really scary stuff like AI hallucinations, which can have all manner of fun implications, and the computer security aspect (which is such a hilarious nonstarter as to render all previous discussion irrelevant). And assuming all of this worked (a really, really big IF), all you’d have is a black box tool that no one on your team actually understands, and they can’t modify it fix it. How valuable is that to management? Oh yeah, and did the AI write coherent documentation? If not, you’re shit out of luck.
I would trust an AI to do highly algorithmic, simple, linear tasks, similar to what I’d expect of a kid fresh out of undergrad. Which is to say not much and not without lots of oversight.