To provide a different perspective: I very rarely asked questions on SO, but I was a pretty active member (submitting both questions and answers) at tex.stackexchange.com a few years ago. It was a very, very nice and welcoming community (and probably still is, it's just that I moved on and don't participate that much anymore for personal/professional reasons). I am almost sure the same goes for emacs.stackexchange.com (which I sometimes read) and a few other sites in the "family".
So I would not conflate SO with the wider SE set of communities.
It's important to remember that there are 181 Stack Exchange sites. It's important to not paint them with the same brush. It's important to remember, conversely, that this is a general strike, and that diamond moderators from the 180 others are involved.
It's also important to remember that the 180 other sites are suffering from ChatGPT-created content as well. Sometimes it's better, in that there's almost no-one copying machine-written answers in the first place. Sometimes it's worse, in that now the brakes are very much off.
The SQA stackexchange site, for example, turns out to have 2 fairly prolific ChatGPT copy-and-pasters that have been discussed on its Meta site, and who have histories of soliciting business for their respective companies in answers. Since the inception of the diamond moderator strike, they have gone into out-and-out open competition, posting ChatGPT answers competing with each other to years-old questions that actually have human-written answers of long standing.
The RetroComputing stackexchange site, for example, appears to have no copies and pastes of ChatGPT content at all over the past few months.
I eventually got burnt out of answering there, but I found Academia StackExchange pleasant, and CrossValidated is in my dissertation acknowledgements.
Aviation Stack Exchange is lovely too! :)
I submitted a question regarding an obscure realmd bug and subsequently submitted the answer (which took about 24 hours to arrive at). The answer was nitpicked over a tag, downvoted, and finally removed. I would not be surprised if someone in the world is currently struggling to resolve the same realmd issue, as it is not documented anywhere else online. That was my last contribution to Stack Overflow.
I had a similar issue in the past asking about 'nice' rate limits towards DNS authorities (mainly registries and registrars), and was told the question was somehow wrong/irrelevant. It was a simple check to see if domains have nameservers and resolve. It's not really a question there's a manual page for; it's simple etiquette. I was told the question should have been X, where X was definitely not the question.
The tone of the remarks was laughable too: militant, unempathetic. That was the last time I bothered to ask anything there, about 7-8 years ago.
I stopped using Stack Overflow years ago when they went on their jihad to remove "please," "thanks," etc from all posts.
I understand the motivation to keep "noise" low, but I disagree that civility is noise.
Does this have something to do with the strike? (E.g. you think the moderation is still ongoing despite the supposed strike, or worse than before?)
Or is this just a generic SO complaint?
I feel like you are itching to get this Hacker News comment nitpicked over its relevance, downvoted, and finally removed.
I mean, yes. Wouldn't it be pretty nice for the top comment to be relevant to the article rather than the 500th rerun of the same SO moderation complaints?
Understandable comment. To be fair, the SO community (and moderation) clearly play an important role in the utility of the site, in the same way Wikipedia editors are held to high scrutiny for their objectivity. Surely if enough people have reasonable objections based on past experiences, that is relevant.
The predilection of SO moderators to overzealously remove answers seems clearly relevant to the question of whether it’s a good idea to give them the power to remove any answer because it has an AI vibe according to inaccurate AI detection tools.
It's important to highlight that SO mods remove very little. Almost the whole community can contribute and in my experience (as a mod) it's usually the enthusiasm of non-mod users that causes the problems you're describing.
They gamified community moderation. When you reward people for taking action, expect people to take lots of action.
These “generic” complaints are why SO is a dead platform and also why nobody really cares about the moderator strike. It’s not that we don’t empathize with them (moderating a deluge of chatGPT garbage is impossible), it’s that we just don’t care about SO. It’s dead like Quora.
Calling SO 'dead' like Quora is not only extremely dramatic, but also: have you looked at Quora? Ever?
I still find SO extremely valuable even if some parts of it can be extremely nitpicky. To some extent you need a hardass moderation team and community in order to prevent it from being destroyed over time. See: Wikipedia.
Agreed. I visit stack overflow every day. It's usually helpful, concise and correct.
> moderating a deluge of chatGPT garbage is impossible
Why do you think that?
I see quite a lot of garbage produced by the more meaty intelligences on the internet. And I don't understand why it'd be intrinsically more difficult to moderate ChatGPT content than any other.
Because ChatGPT:
- multiplies the rate at which people are able to produce content by many orders of magnitude, and
- uses sophisticated sounding language with a lot of correct statements, sprinkled with a few hallucinations that can be anywhere from slightly off-base to dangerously wrong, and
- doesn’t even need a human much in the loop on the producing side. Certainly there are karma farming bots posting answers written by ChatGPT right as we speak
So content produced by ChatGPT takes more time to review for the people looking into factual correctness of the answer while at the same time is able to be submitted at a much higher frequency than before.
Some of the mods really are bad. I've found SO to be OK as I only use it for very specific questions and word them well, so never have any friction.
But on some of the other SE sites? Man. I remember asking something on Server Fault about an obscure Apache setup, I can't even remember exactly what it was, but I was serving from localhost. The issue 100% had nothing to do with DNS, but some mod wanted me to add that info. I tried explaining it was irrelevant and he just deleted the question saying he knew better.
Petty behavior like that is all too common. Good moderation relies on a principle of least interference, not reviewing every post and punishing those that don't share a mod's personal values/standards.
That’s pretty par for the course for the last few years, and I don’t even bother looking there for answers much anymore for the same reason.
I had several cases like these over the last years (I joined the network 12 years ago, but the issues seem to be only a few years old. I have about 220k rep across the sites, with a predilection for editing questions and answers for formatting/readability, and for updating my own answers as technology progresses).
Some SE sites are better, some worse in that respect. SO, being the largest, usually gathers the most anger.
Then within SO there are better and worse communities. Some are toxic (Go, in addition to being maddeningly elitist), some are great (sql for instance).
The Meta is such a shitshow that I never get there anymore.
It is dead, but in reality it's a combination of culture, low-effort questions, and egos.
Can you post a link to the question where this happened?
I've asked the same multiple times when people here complain about X happening to them. They never answer.
My guess is they go and find it, read it, realize they were in the wrong, but never retract their statements.
Yes, there are cases where questions are wrongly closed, or where you get a nitpicky comment. But a few wrongs when you have to deal with thousands of useless questions every day is to be expected. Just think of the state of the site if it wasn't moderated as strictly. I got badges for having reviewed 1000+ posts, and then you see the absolute garbage...
Read @NoMoreNicksLeft's comment in this thread, you'll understand why. I have dozens of examples, but I don't care enough to find them, cite them, collate them. You think the site is functioning properly? I'm happy for you, keep using it, it's a free country.
The rest of us moved on years ago.
Why spend your time writing comment after comment about how bad SO is, if you "don't care enough"? That's quite ironic.
Those complaining about SO being toxic are often quite toxic in their comments. I find that quite ironic as well.
> Why spend your time writing comment after comment about how bad SO is, if you "don't care enough"?
I'm having a discussion on a discussion website, does that confuse you in some way? I don't go out of my way to talk about Stack Overflow, but this thread is about Stack Overflow so I'm talking about my experience with Stack Overflow. Not sure how any of that is ironic.
One of your comments even got flagged. That's neither "having a discussion" nor "don't care".
This thread is a discussion about a specific event at SO (the strike). You airing your general SO grievances is not really on topic for that, so yeah, I'd say you're going out of your way to talk about what you feel about SO in general.
[flagged]
https://meta.stackexchange.com/questions/389928/gpt-on-the-p...
This post shows the data that SE finally provided, though only after the strike was already happening.
So far the community is not convinced. It looks like there is something happening there that reduced the number of people that provide multiple answers more than expected, but not everybody is convinced it is the mod actions on ChatGPT posts.
Another issue might be that this only explains a part of the decline in questions and answers. So even if this part is solved, the larger problem of declining engagement remains.
Slightly off topic possibly -- but I personally felt overwhelmed in recent months due to the daily onslaught of new GPT/LLM/GenAI related news.
I work in a related tech area and I have been meaning to do two things: (1) pursue recertifications on a couple of professional certs that are up for renewal later this year, and (2) get hands-on experience with some of the LLM and GenAI tech that is coming out, so I can either broaden my professional skills to include them or at least benefit from knowing those tools through improved productivity/creativity.
What actually happened is that I achieved much less -- virtually zero -- tangible progress on both these goals.
I think some of my general interests and pursuits have similarly suffered due to this indecisiveness and overload. I may just be blaming some external factors for my laziness or ADHD, but I can't escape the feeling that I am in a more confusing world now.
Why am I saying this here --
Some of the trends in user behaviour that these massive tech-oriented community sites are seeing over this period could have external factors like those I described above. Maybe people are being drawn elsewhere. Maybe people are overwhelmed, confused, or doing something else, maybe participating more in SD / MidJourney Discords.
Assuming that their website and traffic is a closed box, and that only the variables they change will affect the outcomes, would be naive.
that "something" is pretty obvious: people are asking their questions to ChatGPT. Fewer questions -> fewer answers. I've done queries into the Stack Exchange network's public data dump and put my analysis here: https://meta.stackexchange.com/a/389960/997587
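The kind of aggregate behind that analysis can be sketched without access to the real dump. Below is a toy in-memory mirror of the data dump's Posts table; the PostTypeId convention (1 = question, 2 = answer) matches the public SEDE schema, but the sample rows are invented for illustration:

```python
import sqlite3

# Toy mirror of the data dump's Posts table (PostTypeId 1 = question, 2 = answer).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Posts (Id INTEGER, PostTypeId INTEGER, CreationDate TEXT)")
rows = [
    (1, 1, "2022-11-15"), (2, 2, "2022-11-16"),
    (3, 1, "2023-01-10"), (4, 1, "2023-01-20"), (5, 2, "2023-01-21"),
    (6, 1, "2023-04-02"),
]
conn.executemany("INSERT INTO Posts VALUES (?, ?, ?)", rows)

# Monthly question/answer counts -- the kind of aggregate used to argue
# whether the post-ChatGPT decline shows up in the data.
query = """
SELECT substr(CreationDate, 1, 7) AS month,
       SUM(PostTypeId = 1) AS questions,
       SUM(PostTypeId = 2) AS answers
FROM Posts
GROUP BY month
ORDER BY month
"""
for month, questions, answers in conn.execute(query):
    print(month, questions, answers)
```

Against the real dump the same GROUP BY over CreationDate is what shows the question volume (and hence answer volume) falling.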
Stack Overflow has absolutely been destroyed by the moderators, so I couldn't care less about their "strike." I was (and still am) in the top ~0.80% of users[1] but no longer contribute to the site (I stopped ~6 years ago) because of the moderators (a few in particular, but I won't name any names as some of them are here on HN). It has been an absolute shitshow of closing questions that shouldn't be closed, anally-retentive nitpicks which intimidate new users, the essential nuking of the community wiki (even prior to the official deprecation), bad answers being upvoted, good answers being deleted, and so on.
The whole "community moderator" thing ended up being a popularity contest where typical nitwitted social climbers ended up injecting themselves in every single minor conflict on the site just to score visibility points come community voting time.
On top of this, SO is also dying as it has no real viable way of cleaning up or deprecating old answers, and if new ones are asked, they are closed in favor of the old (outdated) ones. Slowly, reddit and language forums/mailing lists are becoming more and more valuable as Stack Overflow becomes more and more of a trash heap. It sucks because I really really loved Stack Overflow, but it just broke my heart one too many times.
I remember I stumbled on a Stack Overflow question about my API. I wrote a comment to tell the user that a part of the API they were using was being deprecated, and that they should change one line to avoid their code failing in the following weeks. I received a couple of ugly replies saying that my post wasn't relevant to the original question and should be removed. I mean, I agree it wasn't the original question, but I wrote the API and I was trying to help that user and any future users coming to the page. Not the kind of place where I want to spend my time.
I'm not a heavy user like you; I've only asked about 15 SO questions in the past 3 years (ranging across C++, gcc, Linux, and software architecture). None of them have been deleted, and there are always people answering.
You don't have to, but can you give some examples of good answers being deleted or good questions being closed? I'm just surprised I always see some people on HN furious at SO, and wonder why their experience is so different from mine.
Btw, regarding old questions, I always make sure I search SO before I post my question, and if any similar questions exist, I link them in my new question and say "I found this [url1] but it doesn't apply to my situation because of circumstances XYZ". This is simply good practice to show you respect the community's time.
I've had a few questions closed for reasons I disagree with but I just looked back at my history and I'm thinking it must depend on what you're asking about - most of my questions are about AWS and have received answers and upvotes (as well as wording tweaks...) but when asking non-AWS questions, that's where I've run into trouble.
Examples are pointless. Those who feel compelled to maintain the status quo, whether they are invested or just cretinish volunteers, would inevitably have to argue against the examples. Even if he could put his finger on one so absurdly pathological that it can't be taken out of context, they'd do an "oops, so we got one wrong, it happens once in a while".
He's not the only one who has this impression that it happens constantly. And there are enough such people that they can't easily be dismissed as malcontents and troublemakers.
I still ask questions once in a while... if you're desperate, there's a chance that it slips past their asshole guards and gets noticed by someone willing to help. But it's certainly unusual at this point.
Once a culture of moderation develops, moderators have to moderate just to feel like they're useful/important. They do this even if moderation isn't needed. Instead of just keeping a watch out for the uncommon problem question, they're inspecting many/most of them and looking for reasons to close/delete. It's too high of a bar, the purpose of the site is to help those who don't necessarily know enough to ask brilliant questions. Or maybe it isn't anymore.
We've cultivated an entire society of busybodies out there measuring lawns with rulers, so to speak. It would take decades to roll this back to something sane.
I have a similar experience as dvt.
Not all, but some of my questions also got downvoted or closed. You might not be able to see them when your reputation score is too low (also somewhat stupid).
The reasons for the closures or downvotes vary:
- Being off-topic for the site. This is often very arbitrary, and I don't fully agree with what they want to count as on-topic. E.g. they don't want you to ask for library or software recommendations. I understand the reasoning (answers might be opinionated and can easily get outdated), but there are some cases where there are canonical answers even to those, and in most cases it's still useful.
- Or quality not good enough yet. I don't think that's a good reason to close or downvote a question. If it is unclear, people should just comment and ask for further clarification.
- Duplicate. Sometimes it's true, but sometimes not, as the question asks for a somewhat different thing.
Here are some examples, linked in this post: https://meta.stackexchange.com/questions/127617/how-much-res...
Here are some more:
https://stackoverflow.com/questions/69721352/pycharm-style-d...
https://stackoverflow.com/questions/65492932/ficlone-vs-ficl...
This is my experience as well. It's not that SO is inconsistent or anything, it's just that it's a very specific place on the internet that is appropriate for a narrow sliver of material. Sometimes your personal idea of quality material aligns with SO's, in which case great! Otherwise, frustration.
That's funny, I would have said that a lack of moderation in favor of being more "welcoming" was a big part of what ails the site. Seems like it's been overrun with "write my regex for me" kind of questions.
I've had enough reputation to see the moderation queue for years now. Imo, the site is anything but welcoming, as it shuts down newbie questions all the time with zero guidance.
In spite of this it seems like most of the activity is for what I'd consider very low-value questions.
Even when I was a student working on my thesis, my issues were too advanced to ask questions on SO. I asked but got no replies.
At the time I would answer others' questions. Now I don't really bother. I often see a wrong answer marked as correct because it came before an actually good answer.
For me SO is now at most a website from which to copy boilerplate stuff like "how to create a unix server socket".
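That kind of boilerplate answer is short enough to sketch inline. A minimal Unix domain socket echo server and client (POSIX-only; the socket path is a throwaway temp file):

```python
import os
import socket
import tempfile
import threading

# The classic boilerplate: a Unix domain socket server that echoes one message.
sock_path = os.path.join(tempfile.mkdtemp(), "demo.sock")

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(sock_path)
server.listen(1)

def serve_once():
    conn, _ = server.accept()
    with conn:
        conn.sendall(conn.recv(1024))  # echo the first message back

t = threading.Thread(target=serve_once)
t.start()

# Client side: connect over the filesystem path instead of host:port.
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(sock_path)
client.sendall(b"hello")
reply = client.recv(1024)
print(reply)  # b'hello'

client.close()
t.join()
server.close()
os.unlink(sock_path)
```

The only differences from a TCP server are the AF_UNIX family and binding to a path rather than an address/port pair.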
Welcoming and newbie-friendly are not the same thing.
QFT. There should be a mandatory "why this question was closed" explanation in cases where there was an (albeit misinformed) effort. Currently it's -10, closed, no comments.
There is?
Nothing is worse than asking a question. And it gets closed as duplicate. You click on the duplicate and it’s asking a different question or obviously not the same thing. And they refuse to reopen the question.
I'll say. I'm too scared to use it because every legit question I've seen is shut down by a mod with bonkers reasoning
I always hear this but have never experienced it. What kind of bonkers reasoning?
Literally the first one in my moderation queue[1]. Perfectly valid question with almost enough votes to get closed...
For what it's worth, I read the question without reading the flagging reason and could predict the flagging reason with high confidence.
The question looks like an "I'm out of my wits please debug my application" question which isn't atomic or precise enough to fit the SO format. I would have liked the person asking the question to provide a complete minimal failing example to even begin working on an answer.
I second that sentiment. I also find the way the "question" is put somewhat confusing, and technically it's not even a question, as it doesn't contain one.
This kind of thinking is exactly why I stopped asking questions on SO a long time ago.
Because once you can construct the perfect question, you can most certainly answer it yourself. It should be OK for questions to be somewhat vague, to contain "noise" and non-related stuff (because how on earth should the person asking the question know whether something is relevant if they don't know the solution already?).
And I think in most cases a "complete minimal failing example" is way overkill. If someone in the future runs into the same problem, they have code already, and if there is already a solution accepted, then all that is needed is the description of the problem case and the solution (and in the best case, a root cause analysis, but that is an optional extra).
Some mods seem to think that SO is the next Encyclopedia Britannica, and each question is an article in it. This was not true in the beginning, and I don't feel like this is the right direction. But who cares about what I think.
> Because once you can construct the perfect question, you can most certainly answer it yourself.
Not if what you're lacking is declarative knowledge, which is what the site tries to elicit, to a first approximation anyway.
You should reduce the question to the point where you can say "between substeps 1.3.5 and 1.3.6 there seems to be a proposition I don't know. What is it?"
If you can't reduce the problem to a missing proposition, there are better resources to turn to for help with debugging it step-by-step until you have isolated a problem. I've had success with mentoring and IRC.
Aren't all questions on SO of this type? With that reasoning you could close the whole SO. If you really debug hard enough and for a long time, you wouldn't ever need to ask on SO.
Not a moderator but have ~8k rep on SO and have spent some time on mod queues in the distant past.
Most questions on SO these days (that are not "do my homework for me please") are garbage like this. Signal to noise ratio is horrendous. Gin, Next.js, whatever are pure noise, the actual problem is glossed over in a vague sentence: "my browser is getting the setcookie header correctly, it doesn't say it's blocked at all but my browser isn't setting the cookie". You could try to talk the submitter into providing the complete HTTP request, which could take a very long back and forth, but after a thousand questions like this, you probably decide it's not worth the effort to salvage the question; or like me, you decide it's not worth participating in the site at all.
No. The purpose of the site is serving as a general knowledge base, not debugging random code that's only going to be useful to one person.
… as a result of two votes to close from members of the community, which is not an example of “shut down by a mod” (GP) or how “Stack Overflow has been absolutely destroyed by the moderators” (GGP). Closing questions isn’t really a primary purpose of elected moderators to begin with, because it doesn’t need more privileges than the rep-based privilege system already grants.
Meanwhile, the spam/abuse deletion, removal of illegitimate votes, conflict resolution, and so on that having mods is actually valuable for happen behind the scenes.
[flagged]
Is pointing out that you’re not supporting your point with the correct evidence lashing out? For what it’s worth, I don’t feel the need to defend the site, and think it does a pretty terrible job of delivering useful questions and correct answers most of the time – just for pretty much the opposite reasons.
So rather than address the fact that you were wrong, you chose to change your argument entirely, attack the person's character and then lash out at them?
You have access to the moderator queue and have been according to your history a person with relatively influential ability. Why don't you work towards changing the culture yourself?
I can see that one being pushed back, there's a lot of context for sure but the direct question is not surfaced visibly enough.
""" I have this working no problem locally , but in production with different domains, my browser is getting the setcookie header correctly, it doesn't say it is blocked at all but the browser isn't setting the cookie. """
I know there's a question in there somewhere, but it feels pretty much like a run-on sentence and the clarity is lacking. There's something wrapped up in that final paragraph that, pulled out, I think could make the question more direct.
No need. ChatGPT to the rescue. The answer quality is about the same but for different reasons. It's a hit or miss for both. But! With ChatGPT there is no social pressure exerted by humans. It's only you and the machine. It's liberating.
When I ask ChatGPT about stuff I actually know about, it always gives glib, misleading and completely wrong answers. I therefore don't trust it to answer questions about stuff that I don't know much about.
ChatGPT would be useless for programming questions without SO, though. If SO dies, ChatGPT will be stuck with a 2023 knowledge of many languages and libraries.
Don't know why this gets downvoted. It is simply true. In most cases, ChatGPT4 gets it right. You have to check anyway.
I agree. Also, I've had good luck asking GPT-4 to cite sources with its replies. It speeds up the process of fact-checking, and makes "hallucination" detection trivial. (Does the link 404?)
Obviously it's not perfect, but it's no worse than asking human coworkers, which are also sometimes wrong. (I'd prefer not to interrupt my human coworkers with questions with searchable answers anyway.)
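The 404 check mentioned above is easy to script. A minimal sketch; the stubbed statuses in the demo stand in for real network responses, and the default urllib path is only illustrative:

```python
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def check_citations(urls, fetch_status=None):
    """Return the cited URLs that look hallucinated (non-200 or unreachable).

    fetch_status can be swapped out with a stub for offline use; by default
    it issues a real request with urllib.
    """
    if fetch_status is None:
        def fetch_status(url):
            try:
                with urlopen(url, timeout=10) as resp:
                    return resp.status
            except HTTPError as e:
                return e.code          # e.g. 404 for a made-up page
            except URLError:
                return None            # DNS failure etc. -- treat as suspect
    return [u for u in urls if fetch_status(u) != 200]

# Stubbed example: one real-looking citation, one that 404s.
fake = {"https://example.com/real": 200, "https://example.com/made-up": 404}
suspect = check_citations(list(fake), fetch_status=fake.get)
print(suspect)  # ['https://example.com/made-up']
```

A dead link is only a cheap first filter, of course; a live link that doesn't actually support the claim still needs human reading.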
Agree. Before generative AI came along, I once had to scroll all the way down to the bottom for the only valid answer, which got deleted the next day...
The outdated answer problem is infuriating, particularly on Server Fault. We have answers about the Linux login process that predate systemd still out there and being used to delete new questions that have similar wording but need completely different answers now.
We need a competitor that accepts any question, uses GPT to create answers, offers the public the ability to vote and discuss these answers, and uses GPT to rate and prune the discussions. Eliminate the community moderators.
Maybe we do but mostly as a cautionary tale for people who think LLMs are going to solve all their problems.
No, we don't. If you think you don't need moderators then you have never run a community site. But please go ahead and create it; it would be destroyed in a matter of weeks by trolls, racists etc that make your site look bad.
That doesn't appear to be happening to ChatGPT; the moderation endpoints seem to do a pretty good job, even if they're unfortunately overzealous. Moderation combines summarization and sentiment analysis, two things that large language models excel at.
https://platform.openai.com/docs/guides/moderation
You're naïve if you don't think that the job of moderation is going to be replaced with AI by large social communities.
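For reference, the linked endpoint takes a small JSON body. A sketch of assembling it (the model name and exact request shape are assumptions to verify against the linked docs; actually sending it requires an `Authorization: Bearer $OPENAI_API_KEY` header on a POST to `https://api.openai.com/v1/moderations`):

```python
import json

def build_moderation_request(text, model="omni-moderation-latest"):
    # Assemble the JSON body for OpenAI's /v1/moderations endpoint.
    # The model name here is an assumption; check the current API reference.
    return json.dumps({"model": model, "input": text})

body = build_moderation_request("some user-submitted answer text")
payload = json.loads(body)
print(payload["model"])
```

The response (not shown here) classifies the input against categories such as harassment and hate, which is the piece a site would wire into its review queue.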
Plug for the upstart smol doge alternative: https://codidact.com/
Stack Overflow seems mostly self-moderating to me. The vast majority of moderator actions are on posts that have already been downvoted and reported by regular users. SO could automate this human intervention. A moderator already isn't required for a post to be closed, it just takes 3 close votes from normal users.
So then what is the purpose of the mods? They seem to prefer spending their time generating a constant stream of metadrama that has little to do with SO itself. See Monica, the code of conduct changes, modifying the design of the upvote/downvote buttons, ChatGPT...
I think SO needs to scrap the mods and replace them with a small number of professional mods who they pay.
SE has been more or less useless at least for me for about a decade, mostly because of completely overzealous de-duping moderation. It doesn't help that I'm mostly on Server Fault, where small differences in versions and combinations of services can make a huge difference in what the right answer is. But if the question is "how to get nginx to pull auth from two different databases" the answer can be very different based on 1. the version of nginx you're talking about and 2. the specific databases you are using. But a mod sees "nginx auth multiple db" and deletes it as a duplicate.
In those cases where things are version sensitive, it's useful to put version info in the title. That way _everyone_ will know what version the question is about- people searching for answers, and people trying to curate the knowledge-base.
[flagged]
I’m surprised by the negativity towards SO. I’m relieved when google gives me a SO hit for a question, because in my experience it has the best answers, often better than official documentation.
I’ve asked about 25 questions and never had one closed. A few didn’t get a useful answer but most gave very helpful answers. In one case an answer wrote 50 lines of code which would have taken me days to figure out.
Maybe my experience is uncommon?
This article has some useful information, but I read the petition first, and it was very hard to understand what they were talking about.
My tl;dr is:
1. ChatGPT became an easy way to provide mostly correct answers that sound completely correct.
2. Mods got annoyed that many incorrect answers were being published, so they made a bot that would identify and delete answers written by ChatGPT.
3. As anyone who has asked a question on Stack Overflow and had it removed as a duplicate knows, the mods' bots have a very high false positive rate.
4. The admins didn't like that answers from actual humans were being deleted that often, so they forbade the mods from auto-deleting based on their tool.
5. Some mods are angry at that decision, so they won't be moderating until the admins change that policy.
In this situation (as opposed to the Reddit situation), I am all for the admins. I am unable to ask any question in stack overflow without it being removed as a duplicate (even if I clearly state why the other answer is irrelevant to my question). If the AI detecting bot has even a remotely similar false positive rate as the duplication bot, then I am glad that it is not allowed. This would discourage many people from not only asking questions, but also answering.
There is no bot. This is entirely about human moderators acting on posts they consider AI-generated.
It has been interesting seeing the effects.
* The most apparent immediate effect was on Ask Ubuntu, where the diamond moderators no longer handling spam with their tools that can delete it immediately meant that the ordinary moderators had to handle it the longer and harder way with accumulated votes to close. There was a lot more off-topic advertising staying visible for longer almost as soon as the strike began. The difference was noticeable on the first day. As I write this 5 out of the top 10 active Q&As are egregious spam. This is higher visibility than before the strike.
* The effects at SuperUser and AskDifferent have been less prominent than at Ask Ubuntu.
* As I have observed elsewhere, the brakes are off at SQA stackexchange, with the 2 prolific ChatGPT regurgitators now going into open competition with each other.
* Conversely, although there are localized cases like SQA stackexchange, there doesn't appear to have been a widespread gold rush to post machine-written answers now that it is public that they aren't going to be zapped and their posters aren't going to be suspended.
There are several things that clearly a lot of people commenting here don't know:
* There is a difference between a moderator and a diamond moderator on Stack Exchange.
** Pretty much all of the grumbles in the past 2 hours of this discussion have nothing to do with diamond moderators, yet it is the diamond moderators that have the access to the confidential moderators-only discussion forum, that were subject to the secret policy directive sent out therein, that can issue account suspensions, and who are the major participants in the strike.
** Everyone with even a modicum of points is a moderator. This point has been long-touted in the introduction pages to the sites, and was a selling-point of the system in the first place. Ironically, diamond moderators often don't partake in the actions that are being once again grumbled about in a Hacker News submission that has the words "Stack Overflow" in its headline, because adding the diamond moderator flag to their accounts turns their tools into supervotes, and they actually get feedback and advice to not do the things that ordinary people do. If you've seen comments on a question or downvotes, or votes to close as duplicates, these are from the hundreds of thousands of people with moderation capabilities on the various sites, almost always not from the people who have gone on strike here.
** Ironically, the people who have gone on strike did so because they were given secret instructions that, had they enforced them, would have made it seem that they weren't following the published policies and were not doing their jobs.
* There are 181 Stack Exchange sites. This is a general strike.
** Some of the people involved, particularly the ones from Academia stackexchange, object to ChatGPT in very strong terms indeed. One Academia SE person in discussions has repeatedly made the point that questions and answers in that arena are often life-changing, and machine-written answers have far more frequent potential to be disastrous.
** Yes, jlericson is also at fault for misrepresenting things, calling this a "Stack Overflow strike" when it is more than that. It's a catchy click-bait title. But although it garners the attention of more people than "Stack Exchange" probably would, it conversely hides the fact that some of the smaller SE sites have different problems with ChatGPT. "But the code worked after I fixed the obvious problems! So how is this answer a problem?" is a meaningless counterargument to machine-written answer objections on many of the sites, which have nothing to do with computer code.
Seems like a massive non-event.
Why should it matter if an otherwise good answer came from an LLM? Presumably no one is bothered if a human author is aided by a search engine, tech blogs, their own intuition and experimentation, or some guesswork - so long as they submit a quality answer; so why should it be any different for answers aided by LLMs - are they supposed to be automatically worse?
The sole determinant of whether an answer should be allowed is its quality (i.e. usefulness to people accessing the site who are trying to solve their programming problems). An answer's origins are irrelevant (except for the purpose of attribution).
StackOverflow's voting system already does an excellent job of getting the best answers to the top and down-voted ones to the bottom. Further, high-rep users have the ability to close/delete very bad answers.
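To make the ranking idea concrete, here is a minimal sketch of score-based ordering like the voting described above. The data and the simple upvotes-minus-downvotes score are hypothetical; the real site also factors in acceptance, tie-breaking, and deletion thresholds.

```python
# Hypothetical answers with vote tallies; a confidently worded but
# heavily downvoted answer (id 1) sinks to the bottom.
answers = [
    {"id": 1, "upvotes": 3,  "downvotes": 10},
    {"id": 2, "upvotes": 42, "downvotes": 1},
    {"id": 3, "upvotes": 7,  "downvotes": 2},
]

# Rank by net score, highest first.
ranked = sorted(answers, key=lambda a: a["upvotes"] - a["downvotes"], reverse=True)
print([a["id"] for a in ranked])  # [2, 3, 1]
```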
Protesting LLMs seems like little more than an extension of the idea of protesting autocorrect.
> Why should it matter if an otherwise good answer came from an LLM?
I think the problem is that quite a bit of the time the answers won't be good, in addition to it being extremely easy to generate a large volume of content, which makes moderation difficult.
That's more or less what happened with Clarkesworld: https://www.npr.org/2023/02/24/1159286436/ai-chatbot-chatgpt...
> "By the time we closed on the 20th, around noon, we had received 700 legitimate submissions and 500 machine-written ones," he said.
Imagine there suddenly being 2x more content to moderate, maybe 5x or 10x. And many of the completely false answers being written in a confident tone, yet still with hallucinations like referencing methods that don't exist and so on.
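The "methods that don't exist" failure mode is easy to demonstrate. The sketch below uses a made-up method name (`add_days`, which is not part of Python's `datetime.date`) standing in for the kind of plausible-sounding call an LLM might invent:

```python
import datetime

d = datetime.date(2023, 6, 1)

# An LLM answer might confidently suggest something like this,
# but datetime.date has no such method, so it raises AttributeError.
try:
    d.add_days(7)
except AttributeError as e:
    print("hallucinated API:", e)
```

The answer reads fine until you actually run it, which is exactly why such posts are expensive to moderate.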
That said, it's not as if SO needed help making things hard for everyone, with its famously nitpicky moderation that scares people away.
I have to keep posting this in every thread on this topic, I guess. LLMs have an extremely high noise-to-value ratio: it is easy and quick for a user to generate an incorrect but nicely worded post, and easy for those users to do so across many posts almost instantly. Additionally, there is no value in allowing users to post answers from LLMs, because if someone wanted that, they could have just asked the LLM themselves.
A great example of this problem is people submitting bullshit PRs to repos, which has become worse with the advent of LLM popularity [1]. Reviewers now need to spend a lot more time sussing out whether a PR is AI-generated, which takes time away from reviewing actual, valuable PRs, or even from bad PRs written by an actual human, where there's at least potential for genuine knowledge transfer.
> Why should it matter if an otherwise good answer came from an LLM?
Because good as LLMs may be at writing good answers, they're even better at writing good-looking answers. That's pretty much what they've been trained to do, and actually good answers appear only as a fortunate side effect.
Obviously, this makes moderators' jobs a lot harder. I can understand that they get frustrated. It also allows automated climbing in SO's gamified status hierarchy, which makes it even more dysfunctional and threatens those currently on top of it.
So they want to be able to ban posts for looking automated. It's just that whether you decide manually, or with an automated tool of your own, you're going to have a huge false positive rate and quite possibly unfortunate biases in it as well. (Very recently there was a paper showing how automated LM detectors were great at sussing out non-native English speakers and deciding they were bots).
I agree with the rest of your comment, but it's framed as a retort and doesn't really work as one. The person you're responding to already said "otherwise good answer", which excludes a wrong but "good-looking" one.
Moderators generally aren't checking for correctness - who would have that sort of time?
I would expect the moderators to have some subject knowledge and often it doesn't take much to spot that an answer is blatantly fantastical as LLMs sometimes are. The finer points of some API might need actual checking, but a lot of things you'll know on sight.
Point is, you don't know which are wrong unless you do the work. The moderator has to do the actual work, but the LM (or rather, the user running an LM bot on his account) gets the credit.
They asked why it is a problem, and I tried to explain why it is in practice.
I stumbled this week on a few answers on a topic I'm following, and although they looked really nice and were correctly formatted, they were absolutely wrong: ChatGPT was hallucinating, showing calls to functions that aren't in the specific SDKs, etc. And I've noticed that such answers are being added to old questions too, adding more noise :-(
I'm on the internet a lot, but I don't know, I feel like it would be almost embarrassing to be an unpaid moderator of a site like that. Then to "strike", as if flexing their unpaid power at some company that doesn't care about them... I don't know, it seems like a lack of self-respect. Why don't they quit moderating and do something else? It would seem more dignified.
The entire company is structured around unpaid labor so they might care if it all stops.
StackOverflow is toxic, and if I don't have to deal with that, it's a plus in my book. Too much drama for my liking.
Finding content is harder too: you go through a Google search and 5-6 tabs before you reach an answer. ChatGPT is my first go-to tool for solving common problems and for syntax reference.
Top 0.05% here. I still haven’t seen any written evidence for the policy change being protested (the protest site says they are protesting a “near-total prohibition on moderating AI-generated content”, it seems incredible that StackOverflow would ever prohibit moderation of any type of content) and when I’ve asked I’ve been told it was discussed in secret so nobody can prove it.
What I suspect is that somebody has asked the site to use automatic GPT removal tools and the site has said they don’t work, which is true.
A teacher thought he figured it out and asked ChatGPT if it wrote the text that his students handed in. ChatGPT said yes for every piece of work, so he failed the whole class.
Apparently it didn't write it. It turns out ChatGPT claims to have written a lot of other things too, including the email the teacher sent to the class failing them, which one of the students fed back to ChatGPT as proof that its claims were fiction. Not sure how it finally ended.
Yeah, given ChatGPT is most likely trained on SO a simple test would be to use the automatic tools on all of the pre-ChatGPT SO. My bet is, much of pre-ChatGPT SO would be removed.
I have not used SO once since ChatGPT 4 came out, not once.
I'm sure there are cases where SO has better responses than GPT, but frankly, they are rare, and the ability to query and shape ChatGPT's responses basically obsoleted the SO model overnight. So the SO CEO is absolutely correct to pivot to AI before that ship sinks.
Isn't ChatGPT also using SO? If SO stopped and there were no replacement, the answers might get worse. My take is that the two currently complement each other. In my opinion the issue with SO, and with most social media, is the obsession with metrics. It would be nice if they took more of an HN approach to metrics (votes particularly).
It's pretty neat when you get upvotes on answers you made months or years ago; a substantial number of them represent someone being helped out by your answer. I think that's pretty neat.
For years the open source tool I spent the most time on was one I only very occasionally used myself, but I got a lot of feedback and questions. Because this was a non-technical tool, most of it came over email and translated into fairly few "stars" (although I did migrate to GitHub quite late).
All of this makes it rather different from voting on HN.
Well chatgpt is trained on SO. If people stop using SO, chatgpt will not be able to respond about newer languages/frameworks.
ChatGPT is trained on the internet, of which SO is a part. It's also trained on interactions with its users, which would eventually take over from sources like SO.
You think people will start to teach ChatGPT when it can't answer?
Same here, much more convenient to use than SO