"Don't post comments that are not human originated at this time. We want to see your human opinion shine through."
This gives people some amount of leeway and allows just the right amount of exceptions that prove the rule.
(That said, to be frank, some of the newer better behaved models are sometimes more polite and better HN denizens than the actual humans. This is something you're going to have to take into account! :-P )
He said he will take his business elsewhere then!
What is amazing is it would have remained so just a couple of years ago!
After all, no one knows I'm a dog.
https://news.ycombinator.com/item?id=47334694
Most people don't seem to care.
That said, I also wouldn't hate seeing an official playground where it is cordoned off, and even appreciated, for bots to operate, i.e., like Moltbook, but for HN...? I realize this could be done by a third party, but I wouldn't hate seeing Y Combinator take a stab at it.
Maybe that's too experimental and better left to third parties to implement (I'm guessing there are already half a dozen vibe-coded implementations of this out there right now) -- it feels more like the sort of thing that could be an interesting (useful?) experiment, rather than something we want to commit to having exist in perpetuity.
99% of rule enforcement, both IRL and online, comes down to individuals accepting the culture.
Rules aren’t really for adversaries, they are for ordinary situations. Adversaries are dealt with differently.
I've been pretty wary about flagging AI slop that wasn't breaking other guidelines, and by default this will probably make me do it more. But it is a lot harder to be certain about something being AI-written than it is to judge other types of rules violations.
(But am definitely flagging every single "this was written by AI" joke comment posted on this story. What the hell is wrong with you people?)
I think "generated comments" is a pretty hard line in the sand, but "AI-edited" is anything but clear-cut.
PS - I think the idea behind these policies is positive and needed. I'm simply clarifying where it begins and ends.
1. Prevent any account from submitting an actual link until it reaches X months old and Y karma (not just one or the other.)
2. Don't auto-link any URLs from said accounts until both thresholds in #1 are met, so they can't post their sites as clickable links in comments to get around it. Make it un-clickable or even [link removed] but keep the rest of the comment.
3. If an account is more than X months/years old with 0 activity and starts posting > 2 times in < 24 hrs, flag it for manual review. I'm not saying they're bots, but a common MO is to use old/inactive accounts and suddenly start posting from them. I've seen plenty here registered in 2019-2021 that just started posting. Don't ban them right away, but flag for review so they don't get 20 posts in before someone finally figures it out and emails hn@.
4. When a comment is submitted, check the account's last comment timestamp and compare. Many bots make the mistake of posting multiple detailed comments within sixty seconds or less. If somebody submits a 30-word comment 30 seconds after posting a 300-word comment in an entirely different thread, they might be Superman. Obviously a bot.
5. Add a dedicated "[flag bot]" button to users that meet certain requirements so they don't need to email hn@ manually every time. Or enable it to people that have shown they can point out bots to you via email already. Emailing dozens of times a day is going to get very annoying for those that care about the website and want to make sure it doesn't get overrun by bots.
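A couple of the rules above (1/2 and 4) are simple enough to sketch in code. Everything here is hypothetical: the thresholds, field names, and `Account` model are invented for illustration, not anything HN actually implements.

```python
from dataclasses import dataclass

# Hypothetical thresholds; tune to taste.
MIN_AGE_MONTHS = 3
MIN_KARMA = 50

@dataclass
class Account:
    age_months: int
    karma: int
    last_comment_ts: float = 0.0   # unix seconds of previous comment
    last_comment_words: int = 0

def may_post_links(acct: Account) -> bool:
    # Rules 1 and 2: BOTH thresholds must be met, not just one.
    return acct.age_months >= MIN_AGE_MONTHS and acct.karma >= MIN_KARMA

def needs_review(acct: Account, now: float, new_comment_words: int) -> bool:
    # Rule 4: a detailed comment seconds after another detailed comment
    # is a strong bot signal; flag for a human, don't auto-ban.
    gap = now - acct.last_comment_ts
    return gap < 60 and acct.last_comment_words >= 100 and new_comment_words >= 30
```

The point of keeping these as "flag for review" rather than hard bans is that every one of these signals has false positives (fast typists, long-time lurkers who finally register).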
Invites could be earned at karma and time thresholds, and mods could ideally ban not just one bad actor but every account in the invite chain if there’s bad behavior.
This feels like "don't buy at Walmart, support the local small shop." We passed the no-return sign miles ago.
Gemini's:
This is like advocating for artisanal blacksmithing in the age of industrial steel. It sounds great in theory, but we passed the point of no return miles back.
Yeah, we can tell the difference :)
"I don't fully agree with banning AI-edited comments. Using AI to improve readability and clarity is a reasonable thing to do. A well-structured comment is often much better than a braindump that reads like rambling. AI is quite good at this, and it will probably get better. To illustrate the point, here is how this comment would have looked if edited"
Are there any places in life where conversation is _not_ intended to be between humans?
I understand we often see insightful comments from new accounts, but I always find it suspicious when non-throwaway accounts are created just in time only to make a quip.
In my observation, there have recently been quite a lot of new AI-generated comments in general. Some aren't even trying to hide it, full em-dashes and everything.
I do feel like people are going to get sneakier in the future, but there are multiple discussions about that elsewhere in this thread.
But I find it pretty cool that HN takes a stance about it. HN rules essentially saying Bots need not comment is pretty great imo.
It's a bit of a cat-and-mouse problem, but so is buying upvotes in places like Reddit. HN, with its decades-long track record, may have had one or two suspicious incidents, but long term it feels robust. I hope the same robustness applies in this case too.
Wishing moderation luck that bad actors don't try to take it as a challenge and leave our human community to ourselves :]
Another point I'd like to make: if this succeeds, we can also stop with the "did you write your comment with an LLM?" remarks, which I too make from time to time when I see someone clearly using AI. False positives happen (they've happened to me, and I see them happen to others), and they derail the discussion. So HN being a place for humans, by humans, can fix that issue too.
Knowing dang and tomhow, I feel somewhat optimistic!
You may also notice that I don't have much common history here. I mostly comment on Reddit.
Here's where I draw the line: if you are not reading the text the LLM produced, then I don't want to read whatever it is that you wrote. I will usually only do one or two iterations of my comment, and afterwards I edit it by hand.
Technically, there is light AI editing of this comment because FUTO keyboard has the ability to enable a transformer model that will capitalize, punctuate, and just generally remove filler words and make it so that it's not a hyper-literal transcription.
Humans write a bit messier — commas, short sentences, abrupt turns.
I acknowledge this is partly just my personal bias, in some cases really not fair, and unenforceable anyway, but someone relying on llms just makes me feel like they have... bad taste in information curation, or something, and I'd rather just not interact with them at all.
But I have some concerns about suppression of comments from non-native English writers. More selfishly, my personal writing style has significant overlap with so-called "tells" for AI generated prose: things like "it's not X, it's Y", use of em-dashes, a fairly deep vocabulary, and a tendency toward verbosity (which I'm striving to curb). It'd be ironic if I start getting flagged as a bot, given I don't even use a spell-checker. Time will tell.
To be clear, I'm neither proud nor embarrassed by this. I'm just trying to communicate in the most efficient way I can.
I'm not sure how I feel about this new rule.
Personally I would just like to read the best comments.
Forum mechanics have always shaped discourse more than policies. Voting changed everything. The response to LLMs should be mechanical not moral — soft, invisible weighting against signals correlated with generated text. Imperfect but worth the tradeoff, just like voting.
https://claude.ai/share/9fcdcba8-726b-4190-b728-bb4246ff82cf
I’m so over these comments. Sure I can flag them but I feel like it deserves a special call out.
As I understand it, HN moderators are thinking hard about this insane new world.* From my POV, there is a combination of worthy goals: transparency of the process, mechanisms for appeal, overall signal-to-noise ratio, and (something all of us can do better) more empathy and intellectual honesty. It isn't kind to accuse a human being of not being a human being.
If we can't find ways to be kind to people because of the new dynamic, maybe we need to figure out a new dynamic! And it isn't just about individuals; it is about the culture and the system and the technology we're embedded in.
* Aside: I'm not sure that any of us really can grasp the magnitude of what is happening -- this is kuh-ray-Z.
I definitely agree with AI generated comments.
Whatever the rules are, I’m happy to play by them.
This rule will have an effect on the behaviour of the 'good players', and make the 'bad players' a lot easier to spot. Moderation needs this. I see this as stopping a race-to-the-bottom on value extraction from HN as a platform.
The biggest danger of LLMs is impersonating humans. Obviously they have been carefully constructed to be socially appealing. Think of the motivation behind that:
It is almost completely unnecessary to LLM function, and its main application is to deceive and manipulate. Legal regulation of LLMs should ban impersonation of humans, including anthropomorphism (and so should HN's regulation). Call an LLM 'software' and label its output as 'output'.
Imagine how many problems would be solved by that rule. Yes, it's not universally enforceable, but attach a big enough penalty and known people and corporations will not do it, and most people will decide it's not worth it.
I asked [insert LLM here] about this, and it said [nonsense goes here]
I feel like I see it less this week, but every time I do see it I wonder why they are even here.
@dang, if you read this, why don't we implement honeypots to catch bots? Like having an empty or invisible field while posting/commenting that a human would never fill in
Rules like this seem to me more like fomenting witch hunts of "AI comments" than improving the dialogue. Just about any place I've seen take this hardline stance doesn't improve; it just fills up with more people who want to pat each other on the back about how bad AI is.
Just my two cents. I don't filter my comments through any AI, but I am empathetic toward people who might get great use out of one to connect them to the conversation.
But here's where it gets tricky: Do I prefer low-effort, off-the-top-of-my-head reactions, as long as it is human? Or do I want an insightful, well-thought-out response, even if it is LLM-enhanced?
Am I here to read authentic humans because I value authenticity for its own sake (like preferring Champagne over sparkling wine)? Or do I value authentic human output because I expect it to be of higher quality?
I confess that it is a little of both. But it wouldn't surprise me if someday LLM-enhanced output becomes sufficiently superior to average human output that the choice to stick with authentic human output will be more painful.
I hope to see more bots on there (and not here)
> Off-Topic: Most stories about politics
Without some kind of private proof of personhood enforced at the app level, this means nothing.
(Sorry, couldn't resist.)
I don't think it is a moral failing to use AI to generate writing or to brainstorm ideas and crystallize them, but c'mon, isn't it weird to insist that you need them to write _comments_ on the internet? What happens when the AI decides you're wrongthinking?
Plenty of people already use search engines, editors, translators, etc. when writing. An LLM is just another tool in that box.
The practical approach is the one HN has always used: judge the content.
Btw, this was co written with ChatGPT. Does that make any difference to anyone?
J/K, actually it was not co written by ChatGPT.
Or maybe it was…
AI is a tool. You can use it constructively, like Grammarly, or spellcheck. You don't need to be afraid of it.
Bit of a shameless plug, but I wrote an HN AI comment detector game[0] with AI, and most of my friends and fellow HN users who tried it couldn't detect them.
I see this all the time, and even if I find the topic interesting, I don’t want to see comments littered with discussion about how the content was AI generated.
To be clear, I'm not condoning AI-generated content. I’m completely fine if the community chooses to not upvote AI-generated content, or flagging it off the FP.
But many threads can turn into nothing but AI complaints, and it’s just not interesting.
I was thinking, this argument is suspiciously cogent!
It's just a tool, ffs! There are many issues with LLM abuse, but this sort of over-compensation is exactly the sort of thing that makes it hard to get abuse under control.
You're still talking with a human! There is no actual "AI"; you're not talking to an actual artificial intelligence. "Don't message me unless you've written it with ink, on papyrus." There is a world of difference between Grammarly and an autonomous agent creating comments on its own. Specifics, context, and nuance matter.
My only caution is that good writers and LLMs look very similar, because LLMs were trained on a corpus of good writers. Good writers use semicolons and em-dashes. Sometimes we use bulleted lists or Oxford commas.
So we should make sure to follow that other HN rule, and assume the person on the other end is a good faith actor, and be cautious about accusing someone of using AI.
(I've been accused multiple times of being an AI after writing long well written comments 100% by hand)
->> ◕ ‿ ◕ <<--
Re-reading the HN guidelines, each seems individually reasonable, yet collectively I’m worried that they create an environment where we can take issue with almost anyone’s comments (as per Cardinal Richelieu’s famous quote: “Give me six lines written by the most honorable person alive, and I shall find enough in them to condemn them to the gallows.”)
Really, all the rules can be compressed into one dictum: don’t be an arsehole. And yet the free speech absolutists will rail against the infringement upon their right to be an arsehole. So where does that leave us? Too many rules leads to suppression of even reasonable speech, while too few leads to a “flight” of reasonable speech. End result: enshittification.
Today it flagged a post about an AI tool for HN and suggested I reply with:
"honestly, if you need an AI to sift through hn, you might be missing the point—this place is about the human touch. but hey, maybe it'll help some folks who just can't take the noise anymore."
So my AI, which I built specifically to sift through HN for me, is telling me to go flame someone else for doing that.
No deeper point here. I just thought it was really funny.
Maybe once enough posts have been flagged like that then that corpus could be used to train an AI to automatically detect content generated by AI.
That would be cool.
Maybe the HN site wouldn't add this feature but if someone wrote a client then maybe it could be added there.
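The flagged-corpus idea can be sketched with a toy naive-Bayes text classifier. Everything below (the corpus, the labels, the words) is invented for illustration; a real detector trained this way would still be unreliable, which is part of the problem.

```python
import math
from collections import Counter

# Toy labeled corpus; in the proposed scheme this would come from
# moderator-flagged comments. All examples here are made up.
CORPUS = [
    ("let us delve into the rich tapestry", "ai"),
    ("we must delve deeper into this", "ai"),
    ("yeah i dunno man seems fine", "human"),
    ("nah that one sucks imo", "human"),
]

def train_detector(corpus):
    word_counts = {"ai": Counter(), "human": Counter()}
    doc_counts = Counter()
    for text, label in corpus:
        word_counts[label].update(text.lower().split())
        doc_counts[label] += 1
    return word_counts, doc_counts

def classify(model, text):
    word_counts, doc_counts = model
    vocab = set(word_counts["ai"]) | set(word_counts["human"])
    best, best_lp = None, -math.inf
    for label in ("ai", "human"):
        n = sum(word_counts[label].values())
        # log prior plus Laplace-smoothed log likelihoods
        lp = math.log(doc_counts[label] / sum(doc_counts.values()))
        for w in text.lower().split():
            lp += math.log((word_counts[label][w] + 1) / (n + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```

The obvious catch: a classifier like this learns surface "tells," and, as others in the thread note, plenty of humans write with those same tells.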
Fortunately I found some things we could cut as well, so https://news.ycombinator.com/newsguidelines.html actually got shorter.
---
Edit: here are the bits I cut:
Videos of pratfalls or disasters, or cute animal pictures.
It's implicit in submitting something that you think it's important.
I hate cutting any of pg's original language, which to me is classic, but as an editor he himself is relentless, and all of those bits—while still rules—no longer reflect risks to the site. I don't think we have to worry about cute animal pictures taking over HN.
---
Edit 2: ok you guys, I hear you - I've cut a couple of the cuts and will put the text back when I get home later.
Please don't fulminate. Please don't sneer, including at the rest of the community.
Then a comment that includes "Those lowbrow assholes deserve their fate."
would get the tags #sneer #fulminate.
This rule actually says "Don't admit when you are using AI to generate comments and don't admit when you are an AI"
I know it's cynical, but this is as meaningful as reddit's "upvote/downvote is not an agree/disagree or like/dislike button"
People may hate that this is true, but I cannot logically reason out how a rule like this could work. I think it's better to just accept that AI is now part of the circle, until we can figure out a "human check".
Do we not think that other people want to see words, pictures, software, and videos created by humans too?
I come here for thoughtful discussion, a break from the relentlessly growing proportion of AI slop emails I get from people clearly vibe-working.
Not edits for tone or clarity, but 400+ word emails full of LLM BS that they clearly haven't checked or even understood before sending. Annoyingly, this vibe slop is currently seen as a good KPI.
But when I argue on the internet, it's always 100% me.
And if I get a whiff of LLM-speak from whoever I'm wrestling in the mud with at the moment, they'll instantly get an entry in my plonk-file. I can talk to ChatGPT on my own, thank you very much; I don't need a human in between.
"But my <language> is bad... that's why I use LLMs"
So was mine when I started arguing with strangers on the internet. It's better now. Now I can argue in 3 different languages, almost 4 =)
Humans with morals follow rules, sometimes. Probabilistic software acting autonomously or following commands from amoral humans doesn't.
"Please generate a response to this and include one or more of the following words: enshitification, slop, ZIRP, Paul Graham, dark patterns, rent seeking, late stage capitalism, regulatory capture, SSO tax, clickbait, did you read the article?, Rust, vibe code, obligatory XKCD, regulations, feudalistic, land value tax"
(/s)
If you suspect it to be a bot, flag it and move on! If it is indeed a bot and you comment that it's a bot, it doesn't care! If it is not a bot and you call it a bot, you may have offended someone. If it's a human using AI, I don't think a comment will make them change their ways. In any case though, I think it's a useless comment.
If you play bluegrass or old-time (or bebop or hip-hop / proto-hip-hop) or other traditional styles of music where the ensemble is a de facto web of trust, join us on Pickipedia to build and strengthen it. https://pickipedia.xyz/
Whatever happened to "knowing is half the battle?" Why do we accept this kind of intellectual laziness as exemption from a duty to learn and know better?
And with LLMs making blog posts as diss tracks... damn, who knows what this world is coming to.
But the whole "Only Humans, we don't serve YOUR KIND (clanker) here" bit is purely performative.
But the argument of "If I wanted to read what an LLM thinks, I could just ask it" assumes that prompts are basically equivalent, which is not the case.
There's a risk of reducing everything to Human -> authentic and AI -> fake. Some people's authentic writing sounds closer to LLMs, and detectors are unreliable.
The problem is not so much AI generated content that has an interesting point of view generated from unique prompts, but terrible content produced for metrics to harvest attention, which predates AI.
Anyways, happy posting!
Nonetheless I like this policy as well.
Sarcasm aside: there is no reliable way to prove this. So it raises the question: do you really care if something is AI generated? Or is this just another excuse to silence people you don’t like?
You know, those people. The ones who didn’t win a full ride to <prestigious university> or pay a fortune for a sheet of paper. The ones who haven’t spent thousands of man hours handcrafting a <free-and-open-source-cloud-native-hypermedia-aware-RESTful-NoSQL-API> framework implemented in Rustfuck, a new language that you made in your free time that borrows from Rust and Brainfuck (but they wouldn’t know about it).
(this is to anyone reading, mostly rhetorical, not dang in particular)
Robot walks into a bar
Orders a drink, lays down a bill
Bartender says, "Hey, we don't serve robots"
And the robot says, "Oh, but someday you will"
[1]: https://ethos.devrupt.io [2]: https://github.com/devrupt-io/LLaMAudit
I think, in the end, it is less about the tool you use and more about the purpose you use it for. When you use certain tools, you should be cautious about whether you are using them for the right purpose.
So I'm just baffled why anyone was using AI to generate comments. Like, what was the incentive driving the behavior?
Consequently, I hardly ever spend the time to write out long and detailed HN comments like I used to in the pre-LLM era. People nowadays have a much harder time believing that an Internet stranger is meticulously crafting a detailed and grammatically-airtight message to another Internet stranger without AI assistance.
At the end of the day, I'm here because of all the thoughtful commenters and people sharing interesting stories.
Though I do wish we'd see fewer AI-related posts on the front page. They simply aren't sparking curiosity; it is the same thing wrapped in a different format: a different person commenting on their struggles and wins with AI, the 10th piece of software "rewritten" by an AI.
At this point there nearly should be a "tax" on the category; as of this moment I count 8-10 posts on the front page related to AI / LLMs. It is a hot field, but I come to Hacker News to partake in discussions about things that are interesting, and many of those just don't cut it, in my opinion.
Earlier today I remembered that there was a Supreme Court case I'd heard about 35 years ago that was relevant to an ongoing HN discussion, but I could not remember the name of the case, nor could I find it by Googling (Google kept finding later cases involving similar issues that were not relevant to what I was looking for).
I asked Perplexity, and given my recollection and when I heard about the case, it suggested a candidate and gave a summary. The summary matched my recollection, and a quick look at the decision itself verified it had found the right case and done a good job summarizing it--probably better than I would have done.
I posted a cite to the case and a link to decision. I normally would have also linked to the Wikipedia article on the case since those usually have a good summary but there was no Wikipedia article for this one.
I thought of pasting in Perplexity's summary, saying it was from Perplexity but that I had checked it and it was a good summary.
Would that be OK or would that count as an AI written comment?
I have also considered, but not yet actually tried, running some of my comments through an AI for suggested improvements. I've noticed I have a tendency to do three things that I probably should do less of:
1. Run-on sentences. (Maybe that's why, of all the people in the 11th-100th spots on the karma list, I have the highest ratio of words to karma, at 42+ words per karma point [1].)
2. Use too many commas.
3. Write "server" when I mean "serve". I think I add "r" to some other words ending in "e" too.
I was thinking those would be something an AI might be good at catching and suggesting minimal fixes for.
Personally, I try to look beyond the language, which admittedly can be grating, for some interesting ideas or insights. Given that people are already starting to sound like ChatGPT, probably through sheer osmosis, we will have no choice but to look past that anyway.
Yes, it's annoying to read LLM-isms. It's also fine to downvote or ignore or grumble internally, and move on.
It's like an amnesiac genius who has already written a masterpiece and keeps cycling, losing his train of thought after some fixed amount of time.
This groundhog-day effect is mitigated in some respects by code: we create key-value memories and agents and stores and countless ways to connect agents via MCP and platforms/frameworks like A2A. But until we solve that longer-lived-instance problem, we won't be able to trust these systems without serious HITL (human oversight).
I think we need models that update their own weights and we need some kind of awareness cycle rather than just a forward pass inference run with a bigger context window
The fact of the matter is that there aren't hours enough in the day to read to each and every one of you, in realtime, the reams they've written on why you're wrong. Do I have to establish a tag-team?
The fact is that I've spent thousands upon thousands of hours painstakingly collating the perspectives that I'm now delivering to you—I am a river to my people. And it's only because they pass under the bridge of an LLM that they're objectionable?
This is a bit like challenging your plumber for charging you over a minute's fix, when they've spent 20 years getting it down to that minute.
The work's been done. You're paying for the outcome.
Edit: All fresh off the top of my head, folks.
Ah, that reminds me: I wouldn't feel compelled to do all this refutation if radical reactionary political extremism was properly moderated.
Elon said it well, there must be some disincentive to do this.
Language translation is the origin of (the current wave of) AI and its killer app. English is not the main language of the world, and translation opens us up to a huge pool of interesting thinkers.
I'm a native speaker of a foreign language, but out of practice except for a weekly family call. I recently had to write a somewhat technical email to my family, and found it easier to write it in (my more practiced) English and have AI translate it than to write it in the target language myself. Of course, in my case I was able to verify that the output conveyed the meaning I intended, because I am fluent in the target language.
Along with the rise of GenAI, I've also noticed a rise in translated messages. It's usually hard to tell the difference, except by looking at the commenter's history (on other subreddits; impossible on HN).
I understand the original frustration with GenAI comments and reactionary response. I'm sorry that we're excluding what could be a large pool of interesting people because we can't tell the difference.
It is not about whether the comment was written by AI, a native English speaker, English major, or ESL.
What matters is an idea or an opinion. That is all that matters.
This is similar to when people check someone's post history and, if they are pro-Trump, are immediately against their idea or opinion.
The next step is to run Pangram on every post and ban the offenders! Fight AI with AI! /s
In all seriousness, this is one of the few places I trust for genuine conversations with other people. Forums are mostly dead, Reddit is bots-galore, and I'm not signing up for Facebook just for groups.
I miss pre 2010 internet. As soon as the advice animal memes started appearing on Facebook it was a quick decline.
Don't insinuate that someone else must have broken that. It was you.
Do run the linter
Don't commit throw-away code
Do write a test case
Don't write a comment describing every single function
Seriously, run the linter. And fix the issues.
It is your fault.
This rule will at least partly stem the danger of HN getting turned into what dang calls a "scorched earth" situation.
The real issue isn't just "slop" or bot-spam; it's the cost of entry. HN works because of the "proof of work" behind a good comment. If I’m spending five minutes reading your take on a kernel patch or a startup pivot, I’m doing it because I assume a human actually sat down and thought about it.
When the cost of generating a response drops to zero, the value of the conversation follows it down. If the author didn't care enough to write it, why should I care enough to read it?
The "AI-edited" part of the rule is the trickiest bit, though. We’re reaching a point where the line between a sophisticated spell-checker and a generative "tone polisher" is non-existent. My worry isn't that the mods will ban bots—they've been doing that for years—it's that we'll start seeing "witch hunts" against anyone who writes a bit too formally or whose English is a little too perfect.
Ultimately, I’m glad it’s a rule. I don't come here to see what an LLM thinks; I can get that on my own localhost. I come here for the "graybeards" and the niche experts. If we lose the human friction, we lose the signal.
Sorry everyone, I couldn't help but ask Gemma3-27B-it-vl-GLM-4.7-Uncensored-Heretic-Deep-Reasoning-i1-GGUF:q4_K_M to respond. Sorry dang. :)
PS It followed it up with:
> Disclaimer: "Slightly insulting" is subjective on HN. The mods there are sensitive.
These Heretic models are fun.
## Opposing the Ban on AI-Generated/Edited Comments on HN
*The value of a comment should be judged by its content, not its origin.*
Here are key arguments against this policy:
- *Ideas matter more than authorship.* If a comment is insightful, well-reasoned, and contributes meaningfully to a discussion, dismissing it solely because AI assisted in its creation is a genetic fallacy — judging an argument by its source rather than its merit.
- *We already accept tool-assisted thinking.* People routinely use calculators, search engines, spell-checkers, and reference materials before posting. AI assistance exists on a spectrum with these tools. Drawing a bright line specifically at "AI-edited" is arbitrary when someone could use a thesaurus, Grammarly, or have a friend proofread their comment without objection.
- *It disadvantages non-native speakers.* Many HN users are brilliant engineers and thinkers who don't write fluently in English. AI editing can level the playing field, allowing their ideas to be judged on substance rather than prose quality. This policy inadvertently privileges native English speakers.
- *It's effectively unenforceable.* There is no reliable way to distinguish a lightly AI-polished comment from a naturally well-written one. Unenforceable rules erode respect for the rules that are enforceable and important.
- *The real problem is low-effort content, not the tool used.* What HN actually wants to prevent is shallow, generic, or spammy comments. A policy targeting quality directly (which HN already has) addresses the actual concern better than a blanket tool prohibition.
- *Human intent still drives the conversation.* A person who uses AI to articulate their own idea more clearly is still participating in a human conversation — they're just communicating more effectively. The thought, the intent to engage, and the underlying perspective remain human.
*In short:* This rule conflates the medium with the message and risks excluding valuable contributions in pursuit of an authenticity standard that is both philosophically fuzzy and practically unenforceable.
I don't feel this is an imposition on others. I think it's the opposite. It enhances signal by reducing nitpicking, spelling/grammar errors that might muddle intent, and reminds me of proper sentence structure.
Many of us are guilty of run-ons, fragments, overly large blocks of text[1] because it's closer to how people often converse, verbally. Posts on the internet are not casual conversation between humans. They are exchanges of ideas.
[1] This is a classic example where I had to go back and edit it to ensure it was readable. As you do self-review with any commit ^^
I expect Y Combinator to cease and revoke all funding of all companies that leverage LLM technologies that interact with humans.
I wonder if there's an AI-hate movement in China.
But here's the funny thing. I'm pretty sure the frontier models are now smarter than I am, more eloquent, and definitely more knowledgeable, especially the paid versions with built-in search/research capability. I'm also fairly certain that the number of original thoughts in a given discourse on the Internet is fairly small, I know that's certainly the case for me.
So whither humans now?
If I'm looking for human engagement, forums make sense. But for an informed discussion, I'm less certain that it's wise to be exclusionary. There is a case to be made that lower quality comments should be hidden or higher quality comments should be surfaced, but that's true regardless of the source, innit?
So....?
fulminate (fulminated, fulminating): to explode with a loud noise; detonate. To issue denunciations or the like (usually followed by "against").
(Because “don’t fulminate” is the rule that follows the referenced one :) )
I've been feeling more and more that generative AI represents the average of all human knowledge. Which has its place. But a future in which all thought and creativity is averaged away is a bleak one. It's the heat death of thought.
https://simonlermen.substack.com/p/large-scale-online-deanonymization
https://news.ycombinator.com/item?id=47139716
It also points out the need for AI writing tools that very strictly just:
1. Point out misspellings and typos.
2. Point out grammar mistakes, if they confuse the point.
3. Point out weaknesses of argument, without injecting their own reasoning.
I.e. help "prompt" humans to improve their writing, without doing the improvement for them.
In fact, I would like a reliable version of that approach for many types of tasks where my creativity or thought processes are the point, and quality-control feedback (but not assistance), is helpful.
This is a mode where models could push humans to work harder, think deeper, without enabling us to slack off.
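The critique-only mode described above could be sketched as a constrained prompt wrapper. This is a minimal illustration, not any vendor's API: the system-prompt text and the `build_critique_request` helper are assumptions for the sketch, and the actual model call is left out.

```python
# A "critique-only" writing assistant: the system prompt forbids rewriting
# and restricts the model to flagging issues, mirroring the three rules above.
# The message structure follows the common chat-completion convention of
# system/user role dictionaries; plugging it into a real backend is up to you.

CRITIQUE_ONLY_SYSTEM_PROMPT = """\
You are a writing reviewer. For the draft you are given:
1. Point out misspellings and typos.
2. Point out grammar mistakes, but only where they obscure the point.
3. Point out weaknesses of argument, without supplying your own reasoning.
Never rewrite, paraphrase, or complete the text. Respond only with a
numbered list of observations, each quoting the relevant phrase.
"""

def build_critique_request(draft: str) -> list[dict]:
    """Package a draft into chat messages for a critique-only review pass."""
    return [
        {"role": "system", "content": CRITIQUE_ONLY_SYSTEM_PROMPT},
        {"role": "user", "content": draft},
    ]
```

The point of the design is that the constraint lives in the system prompt rather than in post-processing, so the human's words are never touched, only commented on.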
So develop and fund and use AI but manually paraphrase things and don't cite AI?
It is best to cite a source and/or a method.
Do you think it is better to paraphrase and not cite AI?
I don't recall encountering posts on HN that I've wanted to flag as AI.
“Don’t post generated/AI-edited assignments. School is for conversation between humans”
AI can be a great tool for learning, but also can pollute or completely hijack the medium for human interaction and learning.
Having HN flooded with AI generated content will be sad as I like reading it, but losing that same fight at schools will be detrimental.
And I'm worried banning AIs altogether will eventually lead to some form of prove-you-are-human verification to use the site, which will reduce anonymity. Even something seemingly benign like verifying email would mean many unverified accounts like my own will disappear.
And there is a legitimate use for LLM rewrite to counter identification by stylometry, so rewrite shouldn't be banned. I think we'll have to allow the AI stuff at some point, and make a system that incentivizes quality posts regardless of where they come from or how they're written.
As a Polish man, I am repulsed when I hear an AI-generated Polish voice in a commercial, but I see no problem with AI-generated English speech.
Only for them to show undeniable proof that they actually did create their art themselves.
Before someone is allowed to judge another, they should first have to pass a test where they identify AI comments with high accuracy.
It would be painful to see real human comments and ideas hidden or removed by a mob.
Only really irritated by the ultra low effort “here is a raw copy paste of what my LLM said on this topic” comments. idk how people think that’s helpful or desired
Im of course exaggerating, but it is so easy just to run the text through an AI to make it sound "better" without changing what im trying to express.
---
I’m not a native speaker, so AI helps me get my point across more clearly. It’s hard not to come across like a dummy otherwise.
Of course I’m exaggerating, but it’s really easy to run the text through AI to make it sound better without changing what I’m trying to say.
For instance, if a non-native speaker translates their own writing using machine translation or an AI, is that problematic—provided they personally review and vet the content before posting? I don't think the people calling out AI use on this board are taking issue with that. Ultimately, it’s not about the method; it’s about the author's attitude.
The reason LLMs are so disruptive now is that while "shitposts" used to be obvious, we're now seeing "plausible" low-effort content generated without any human oversight. Irresponsible people have always been around, but LLMs have given them the tools to scale that irresponsibility to an unprecedented level.
What kind of human has an orange head and beige body with text written all over? An HN conversation is clearly with a computer program. Anthropomorphizing it is certainly an interesting take, but one that is bound to lead to misinterpretations and misunderstandings. The medium is the message. To avoid problems it is best to not play pretend.
llm-assisted for when i care about precision and accuracy
brain-generated for when i feel safe to make mistakes
Reading the site over the past two years has left me with the feeling that HN has been injected with AI marketing campaigns subtle enough to be hard to catch. It's exhausting, and calling out astroturfers imo is not that bad.
I realized that the problem of AI generated/edited content flooding everywhere around us is a symptom of something wrong with the System.
It might have something to do with sensory deprivation. Here is a quote from the book that caught my attention because of the word "hallucination":
> As we all know, sensory deprivation tends to produce hallucinations.
> FUNCTIONARY’S FAULT: A complex set of malfunctions induced in a Systems-person by the System itself, and primarily attributable to sensory deprivation.
(As I typed the text above on my iPhone, I was fighting auto-completion because AI was trying to “correct” the voice of John Gall and mine to conform to the patterns in its training data. Every new character is a fight against Gradient Descent.)
All you need is attention but the cost of attention is getting higher and higher when there is little worth our attention.
It takes a lot of effort to be human.
AI can do a great job of checking and fixing grammar, spelling, and phrasing without changing any content, i.e. acting as a fancy version of an extended spell checker.
While I don't currently use it like that, there shouldn't be any reason to ban it.
And tbh, given some recent comments, I have really been wondering if I should, because either there are quite a few people lacking reading comprehension, or quite a few with prejudice against people who struggle with English spelling and grammar.
Either way, using AI as an extended spell checker would help get the message through to both groups, as
- it helps with spelling and grammar in ways where traditional spell checkers fail hard
- it tends to recommend sentence structures and information density that are very easy to read
That leaves less motivation to jump out to an external LLM just to get comments on your content, which can temptingly lead to editing/generation.
Where's the curiosity about this world-changing technology? As all the CTOs have recently said: AI use is not optional, and it must change everything we do. /s
Second, I have to confess that I committed this sin a couple of times, but I came to realize that it is good neither for me nor for the HN community. Although I used AI just for rephrasing, I decided never to do it again; I'd rather write my own words with mistakes than post generated words based on my thoughts.
It happened to me once, and it hit me like a nuke; I felt truly embarrassed. A couple of months ago I wrote a comment (https://news.ycombinator.com/item?id=42264786), then asked ChatGPT to rephrase it, and then mistakenly pasted both versions, the original above and the generated one below, and hit submit. Shortly after, a user came along, read my comment, and replied with that embarrassing reply, and honestly, I deserved it. From that moment I realized how quickly things can get messed up when you rely heavily on AI.
I think we are overwhelmingly utilizing negative reinforcement for AI-generated content, where there are consequences for engaging in this behavior. On the other hand, positive reinforcement would encourage authenticity and more human content. The reality of the situation is that AI-generated content won't go away, and it has become a game of who can hide their artificial content best. Thus, I believe that positive reinforcement is the solution.
I think we must instead encourage human created content instead of policing AI generation. There are so many rules to follow already that by the time I create the content, I've gone through enough if/then logic that it feels like AI anyway.
The second is gonna be a lot harder to enforce, as we soon (and probably already) don't know who we're talking to on the internet - a real person or someone's agent? Will calling spaces "human only" later be seen as discriminatory by agents? How will we actually enforce "human only" spaces? Will websites like HN start to provide an "agent only" discussion forum or filter in addition to the "human only" sections?
I am surprised that apparently I am in a minority here.
However, with the recent chat based AI models, this agreement has been turned around. It is now easier to get a written message than to read it. Reading it now takes more effort. If a person is not going to take the time to express messages based on their own thoughts, then they do not have sufficient respect for the reader, and their comments can be dismissed for that reason.
(Sorry, couldn't resist.) I could be the lone dissenter here, but to me well-written comments are a lot more fun to read than near-gibberish.
I wish more people tried harder to be better communicators, but it is what it is. If AI can decipher these comments and produce a much more coherent statement, then I'm for it.