I recently posted AI output in a thread and received some negative reactions, including suggestions that AI-generated posts be banned outright (Jackpot config changes). In my case, I was comparing two versions of config files for differences, which, in my opinion, is a very good and safe use of AI. Don't get me wrong, though; I have a love-hate relationship with AI. A lot of the time it is more frustrating than useful, and it can be quite harmful. In my opinion, banning AI seems analogous to a mayor in the early 1900s trying to ban cars in his city. A better policy would have been to implement good rules for the use of cars. What do you think?
Here is what Claude thinks:
The Reality We’re Facing
AI tools are already here and being used by many of us daily—whether for coding help, documentation, troubleshooting, or technical analysis. The question isn’t really whether AI exists or whether people will use it, but how we want to integrate it (or not) into our community discussions.
What I’m Seeing in Practice
The Good:
- Quick analysis of complex technical data (like config file comparisons)
- Help with documentation and explanations
- Catching errors humans might miss in code reviews
- 24/7 availability for initial troubleshooting steps
The Concerning:
- Potential for incorrect information presented confidently
- Risk of low-effort posts that don't add real value
- Could reduce genuine human discussion and learning
- May create dependency rather than understanding
Questions Worth Considering
- Quality control: How do we ensure AI-assisted content is accurate and helpful?
- Learning impact: Does AI help people learn faster, or does it create shortcuts that skip important understanding?
- Community value: What makes our forum valuable: is it the information, the human interaction, or both?
- Practical enforcement: How would we even distinguish between AI-assisted and human content reliably?
What Other Communities Are Doing
Some forums require disclosure tags, others have dedicated AI sections, and some ban it entirely. Stack Overflow initially banned it, then created specific policies. The approaches vary widely, and most are still evolving.
My Take
Rather than rushing to ban or fully embrace AI content, maybe we could:
- Start with transparency requirements: let people know when AI tools were used
- Emphasize verification: AI output should be tested and validated before sharing
- Focus on community value: does this post help solve real problems our members face?
- Learn as we go: adjust policies based on what actually works for our community
What Matters to You?
I’m more interested in hearing what you all think than pushing any particular position. What’s your experience been with AI tools in your technical work? What concerns do you have? What benefits have you seen?
This technology isn’t going away, so we might as well figure out how to handle it thoughtfully.
The only thing I typically have against it is that I read a lot of posts each day. If I see a five-paragraph post, I literally just don't read it. I don't have time for that.
When I do look through them, some of the stuff is confidently wrong. It is still AI. Humans are also confidently wrong, but far fewer people inherently trust a human.
Feel free to use it if humans are not already answering, but if there is already a detailed discussion happening, a five-paragraph generated post is mostly going to get ignored, if not just get in the way.
I can implement AI in the Discourse forum to answer every post, so you would not have to bother copying and pasting, but I think we do a pretty good job of answering questions quickly and concisely.
I would only say one thing: formatting matters when modifying the config.yaml from FluidNC, and that's something the AI didn't show you. Just read the docs, look for any similar issues you may want or need help with, then post about your specific issue, providing the relevant information/config/gcode, so someone can look it up and help you to the best of our knowledge.
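To illustrate what "formatting matters" means (a minimal, hypothetical fragment, not anyone's real config; the actual keys and values for your machine come from the FluidNC docs): config.yaml is YAML, so the indentation is the structure, and a single tab or misaligned key can keep the whole file from loading.

```yaml
# Hypothetical FluidNC config.yaml fragment; values are placeholders.
# YAML rules: indent with spaces only (no tabs) and keep sibling keys aligned.
board: Jackpot TMC2209
name: Example Machine
axes:
  x:
    steps_per_mm: 100.000          # child key, indented under x
    max_rate_mm_per_min: 5000.000  # sibling key, aligned with steps_per_mm
```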
As an example, your first post is mostly fluff, and I did not read it. I skimmed your first paragraph and stopped at Claude.
Another way to put it: if I ask about the current rating of a component on the Jackpot board, I want an answer (3A), not a link to an MIT course on EE that teaches how components are made, the theory behind them, and how to read a spec sheet.
Consider… for AI content that adds value to a topic and directly answers someone's question (it happens…), that has been generated with a prompt that minimizes fluff, and that has been thoughtfully vetted, consider wrapping it in a [details="AI response"] block so the content is collapsed by default. This'll help time-limited users, mobile phone users, and people with other reasons to not want to read AI content. For example:
[details="AI response"]
Low-fluff, useful AI content that's been thoroughly read, thought about, and edited by a human could go here. Hidden from people fearful/hateful of AI.
[/details]
It is not getting banned unless it gets out of hand. We are getting tons and tons of bots lately, and we are all trigger-happy on anything that looks even slightly spammy or AI-generated. It gets flagged and deleted by humans and by the forum software itself.
Another cool way to add the more detailed AI response is just to link to your search.
If I ask a question, and someone comes back with a "Here are the results of my AI query" answer, that feels a lot like receiving a "Let me Google that for you" type of response.
I don't think that's the kind of community we want.
Another way to put this is I come here because I trust the crew we have here. I come here when searches and AI fail me. So it is sort of redundant I guess. Usually I have already seen the AI answer and was not satisfied for whatever reason.
Yeah. We're all rational, reasonable, smart people. We can navigate a tough conversation every once in a while without black-and-white rules. Rules mean more work to enforce, and they often lead to some exception that leads to some other exception. Or to someone getting hurt because they "followed the rules" but still were not allowed to do whatever they thought they should do.
I would rather come at these things with politeness, empathy, and cautious optimism. I’m sorry if I came off curt or not empathetic. I was a little grouchy and I missed that it was AI at first, which made me very confused.
IMHO, let's just get along and gently push each other to be great contributors. If AI has something good to say about a topic, putting a warning on it like Aza did will offend fewer people.
Our city still bans cars on sidewalks. It isn’t about being a luddite. It is about applying the right tech to the right place.