Even if you’ve been coding for ten years, programming can still feel complicated. The site many coders rely on is Stack Overflow, where people post questions and get answers fairly quickly from other coders. And recently, when those coders didn’t know the answer, users began posting automated answers generated by a chatbot, ChatGPT. It’s easy and quick.
However, Stack Overflow has had to put an end to this practice, or at the very least impose a temporary ban on answers shared via ChatGPT. Simply because, though the answers looked accurate and reasonable, more often than not they were wrong.
Now, this isn’t the first time a chatbot has been wrong, and it certainly won’t be the last. The issue is the full embrace communities are giving these chats. ChatGPT is experimental at best; its AI-driven fluency makes it feel seamless and correct. But it’s EXPERIMENTAL! It won’t always be right, and people are realizing just how often its answers are incorrect.
These systems draw from information found on the internet. Though the idea is really interesting, and it would cut the time spent trying to find someone to ask, its reliability is nonexistent, or at best no more trustworthy than a quick Google search.
AI is becoming pretty established, but we still have to be cautious of it, especially in instances like this. As The Verge put it, ChatGPT produces “fluent bullshit”: it sounds good, and it sounds like it may be correct, but it simply isn’t.
Stack Overflow has done the right thing by banning it; until it’s been made more accurate, it’s going to cause issues for users. The community has been very supportive of the choice, and many are coming out of the woodwork with their own surprising examples of answers generated via ChatGPT.
Say a coder works for a government agency or a high-powered division and runs into a slight issue. They don’t want to be a bother, so they use a system like ChatGPT, which gives them the wrong information. Maybe the code works but isn’t secure, or maybe it doesn’t work and they’re still struggling with the same issue.
It may not seem like a big deal, and AI in this form is still fairly new, so we don’t have much data on it overall. However, we already have a slew of misinformation being submitted by real people; we don’t need AI putting out misinformation as well.
Amid it all, where are the individuals with facts, correct answers, and real, current information? Well, they are in the background, getting upstaged by louder voices and ‘convenient’ machines. Let’s make sure their voices get heard.