Indeed. When you see some of the entitled opinions that occasionally pop up on Elektronauts, it’s not too surprising that Elektron make no promises about engaging here.
I’m mostly kvetching about the most unglued demands here. I’d say the average “wouldn’t it be great” person on 'nauts is more on the magical-thinking GAS train, which is more toxic to one’s own productivity: wasting time on fantasies instead of actually using what you have as it exists.
But also, I can be snippy about the antipatterns here in ways I can’t be to unreachable posters in the wild.
Don’t you dare hold me responsible for my inability to make music.
Do you know anything about how they enforce the new AI policies? I wonder if there’s a way to detect the “signature” left on code generated by LLMs without using one themselves.
I don’t think it’s enforceable, it’s just a position they’re taking and saying “don’t do that, we don’t want it”.
Yeah, so like an honor system. Because it could be pretty easy to use an LLM to generate the code, then personalize it a bit and massage it into fitting within the policies – of course, at that point, just write the code without AI!
I worked with a guy who committed some AI-generated code and it totally passed code review. In hindsight, when he told me it was AI, I realized it wasn’t really his personal style, but in the moment I was so focused on reviewing what the code did that I didn’t notice.
Which is always true for contributing to open source projects, because you’re always attesting that you wrote the code and didn’t just copy it from elsewhere without respecting the original copyright and license terms.
not sure how, maybe there will be a reverse lookup for code like there is for images? don’t know, maybe there is a way to check if this code is somehow in the LLM’s database…
funny thing is that the LLMs will continue to scrape these huge open source repositories anyway (I’d say especially the *nix ones, because they usually have high standards), and they will directly feed commercial LLM solutions like CoPilot. It’s not like GitHub or Microsoft is helping CoPilot grow out of goodwill, right? They’re charging a pretty penny for something they stole from these open source repos…
LLMs don’t have an active database of code (or text) they peruse; by design, they can’t refer back to the material they learned from. They’re little more than highly complex statistical engines that spew out the next most probable word (or, in matters of programming: symbol) related to the prompt (toy sketch below).
They do have some indicators and structure to them, I imagine so they can avoid their own spew as the internet gets bogged down with worthless sludge content.
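To make the “statistical engine” point a bit more concrete, here’s a deliberately dumb toy sketch in Python. None of this resembles a real model’s internals – the token table and probabilities are completely made up – it’s just meant to show that generation is repeated next-token prediction, not looking code up in a stored database:

```python
import random

# Made-up next-token probabilities: given the previous token, how likely
# is each candidate to follow? (Purely illustrative numbers.)
NEXT_TOKEN_PROBS = {
    "for":   {"i": 0.6, "(": 0.3, "each": 0.1},
    "i":     {"in": 0.8, "=": 0.2},
    "in":    {"range": 0.7, "items": 0.3},
    "range": {"(": 1.0},
}

def next_token(prev: str) -> str:
    """Pick the next token by sampling from the probability table."""
    candidates = NEXT_TOKEN_PROBS.get(prev, {"<end>": 1.0})
    tokens, weights = zip(*candidates.items())
    return random.choices(tokens, weights=weights, k=1)[0]

def generate(start: str, max_len: int = 6) -> list[str]:
    """Repeated next-token prediction: no lookup of stored source code."""
    out = [start]
    while len(out) < max_len:
        tok = next_token(out[-1])
        if tok == "<end>":
            break
        out.append(tok)
    return out

print(" ".join(generate("for")))   # e.g. "for i in range ("
```

A real LLM replaces that tiny hand-written table with billions of learned parameters and conditions on the whole prompt, but the loop is the same: predict a token, append it, repeat.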
exactly… even if someone checked in code that they generated by AI, it would still be their responsibility.
we’d only lose accountability if the code was allowed to be checked in by AI bots.
unfortunately, I think a lot of the talk around AI is driven by a lack of understanding of how it works, what it’s capable of (and not), and a lack of real-world use.
I’ve been using GitHub Copilot in my dev environment for a few months now, and sometimes it’s pretty impressive, and sometimes pretty dumb.
Co-pilot is definitely the right name… if you use it intelligently, it really boosts productivity, essentially alleviating a lot of ‘dumb typing’ and documentation lookup. It can also be quite a good educational tool… offering you perhaps an alternative to your ‘normal ways’ ( * )
but it’s no substitute for development experience… if you don’t read/understand the code it creates, if you don’t drive it (and instead let it drive you)… you are going to get into a lot of trouble further down the line.
perhaps one day, AI will be more foolproof, perhaps it will replace programmers… but today is not that day
( * ) in many ways, something like Co-pilot can be considered an extension of the tools we have had in IDEs for decades… e.g. things like code completion/suggestions, automatic refactoring, etc.
yes, like with those improvements, there is a danger in coding fast, brainlessly…
for sure, when we were coding in vi without aid, it did make you think about each line a bit more… whereas now it’s quicker to just ‘give it a go’… but doing things brainlessly is the fault of the dev, not the tool
don’t worry…
it’s driven by the usual Emperor’s New Clothes breathless idiocy from the media and whatever aspiring feudal lords want to use it for their enrichment and to create digital serfs.
ML existed before generative crap, it’s still useful. I use projects relating to data science for work and personal research.
The term “AI” is so overbroad that I loathe any product tied to it (tools are always just tools), even if it’s just using an ML process.
Like, very little of the time spent on good products goes into the actual coding, compared to planning architecture, collaboration, infrastructure and maintenance.
It’s absolutely someone blowing smoke up people’s asses to disempower actual programmers, to try and shove in gig workers or whatever colonial contracts overseas* to shit out nonworking stuff, and pay the rest of us less to make it functional.
*I am lucky to be able to appreciate the majority of my overseas coworkers, but not all companies have bright ideas, and some love the idea of firing quality coders and replacing them with people who are, if not inexperienced, then very poorly paid even by their local going rate.
Anyway, the fad is stupid and harmful and the good working aspects of ML are being downplayed for generative fantasies.
This is very true. Well planned, well thought out code can come together quickly. The things that take a long time are validating code and debugging bad code. Outsourcing the writing would guarantee in my mind that those tasks become more difficult. If for no other reason than that you can’t just ask your software engineer “What the hell were you thinking when you wrote this?”
Microsoft’s .NET is open source, as is VS Code. I’m sure there’s a lot more stuff in these organizations that is open source, which will likely have been indexed by CoPilot AI.
Autocomplete/correct on my phone works for me only because I can evaluate what it is offering in a split second. Even then, I sometimes type too fast and it automesses up what I typed correctly. That is annoying, and I certainly wish I could configure it per app or even per window, but I put up with it.
I don’t have to work in an industrial environment where a lot of boilerplate has to be written. But I can’t evaluate “autocompleted” code in a split second. I have a lot of experience debugging student code, and it is not quick. Anything more than suggestions of library function completions would slow me down, even if I knew a human was on the other end and not a piece of software whose behaviour even its creators cannot properly categorize.
We use CoPilot a lot at work. And ChatGPT. You do end up reading and understanding code suggestions on the fly. Some bugs slip through. It’s like a constant code review, very taxing
That’s the thing! I would love it for high-level architectural modeling and proof-of-concept implementation, but the overhead of going no no no NO fuck you… I’m sick of “prompt hacking” over design. It’s a slot machine in hell, and in the end it’s not the “smart lazy” I prize in development. Not a great bang for the time-buck, and not good for building my knowledge versus pressing a button and hoping to hack together a fix around hallucinations.
That’s not how LLMs work at all. It’s not the LLMs that select the data that makes up the training sets. Also, all AI-based approaches to detecting AI content are really bad so far. Finding new training data is already extremely difficult, which is why Google bought a license to use all content on Reddit: for training LLMs.
And that is also the reason why code that has been touched by AI is considered tainted: if you feed that into training sets, the AI is eating its own shit…
It is almost never simple to code; even if something looks easy, it is most often very deceptive.
There are multiple things which need to be taken into consideration: the overall architecture, proper design, compatibility with core design principles (including the high-level vision of the product), performance impact, extensibility, maintainability, testability, and many other important architectural characteristics.
I’ve seen numerous projects where adding a single label to the UI required one month of implementation effort. It was caused by wrong architectural decisions taken in the past and a lack of continuous evaluation of them.
Amusing, considering that some of the “simple to code” feature requests seem to spring from “I can see where it would fit into the UI”.