Should a member be allowed to make a post which is AI generated or AI mixed without disclosing such use of AI as part of the post?


  • Yes

  • No


Results are only viewable after voting.
Status
Not open for further replies.
Intent to deceive. Hmmmm. An interesting construct.

So if my writing skills were poor and I used AI to "re-write my entire post" to improve the clarity of my post, it's not really an "intent to deceive others into believing I wrote it" as much as it is an intention to provide greater clarity for the reader.

Seems to me that "intent to deceive" would be a challenge to determine. And determining the level of maliciousness in that intent would inherently be subjective. Here's Gemini AI's response when I asked "Why is intent to deceive a challenging determination?"

Intent to deceive is a challenging determination for several reasons, particularly in legal contexts like patent law:
  1. Subjectivity of Intent: Intent is an internal mental state. It's not something directly observable like an action. You can't simply look at someone and know what they were thinking. This makes proving intent reliant on circumstantial evidence and interpretation, which can be difficult and contested.
  2. Circumstantial Evidence: Since direct evidence of intent is rare, it's usually inferred from the surrounding circumstances. This can involve analyzing a person's actions, statements, and the context in which they occurred. However, different people can interpret the same circumstances in different ways, leading to disputes about whether the evidence truly points to an intent to deceive.
  3. Balancing Good Faith with Deceptive Intent: Often, there's a need to weigh evidence of deceptive intent against evidence of good faith. For example, in patent law, someone might have withheld information, but they might argue they did so because they genuinely believed it was irrelevant, not because they intended to deceive. Determining which interpretation is more credible can be complex.
  4. Burden of Proof: In many legal cases, there's a high burden of proof required to establish intent to deceive. This often means proving it by "clear and convincing evidence," a standard that requires more than a mere preponderance of evidence. Meeting this standard can be a significant hurdle.
  5. Complexity of Human Behavior: Human behavior is complex and multifaceted. People can have mixed motives, and their actions might not always perfectly reflect their true intentions. This complexity makes it difficult to definitively conclude that someone acted with the specific intent to deceive.
In summary, determining intent to deceive is challenging because it involves inferring a subjective mental state from objective evidence, often in a context where there might be competing interpretations and a high burden of proof.

All this said, I am a contrarian who spent 10 years as a Super Moderator of a 20+ year old forum with 100,000+ Members and 1,400,000+ posts, and who finds that rules which attempt to guide behavior generally fail to achieve their goals, while enforcement becomes untenable for even the most control-obsessed Moderators. Seems to me this is an unrealistic solution in search of an inevitably uncontrollable problem. YMMV
 
  • Like
Reactions: mtemur
Intent to deceive. Hmmmm. An interesting construct. (...) Seems to me this is an unrealistic solution in search of an inevitably uncontrollable problem. YMMV

In that case the only option is giving up on forums and online dialog entirely, assuming you want to know if you're conversing with a human or an AI. Gemini is also wrong if it thinks intent to deceive is subjective; that's more AI garbage reasoning, and more evidence that it'll feed you a load of BS. It's up to you to detect that yourself. This is an example of why we shouldn't allow people to post AI-generated word salads.

AI has the potential to completely ruin social media of all types, so this isn't a unique problem we're discussing. Right now, AI generated content is pretty obvious, but that may change. Methods to detect and enforce rules limiting AI is a massive issue that we'll be grappling with for the foreseeable future. These methods will evolve along with AI.

For now, I think a vast majority of members here will abide by any rules that limit AI. AI posts have been recognized and outed. I don't see any real problems with a new AI-limiting rule right now. We can certainly catastrophize and make up scenarios in which the rule will fail but for now the sky isn't falling. All we can do is take it as it comes and adapt the best we can.

Academia has resources that are not available to audio forums. I do not see this forum's owners paying for such expert work. IMO, only the active participation of members in dialogues can provide that kind of scrutiny.

It seems clear to me that forums will survive AI only if participant activity and enthusiasm can overcome AI posts. I think WBF is well prepared for it. Ron easily spotted the same post I had checked a few days ago ...

Maybe. I have no idea on the cost of implementing an AI detection strategy suitable for WBF. Like anything, there's likely a wide range of products and services available. On the 2nd point, agreed. Right now I don't think it's a huge issue and I've only seen one member with an intent to deceive in their use of AI.
 
The Iron Dome's Tamir missiles exemplify advanced AI and computational decision-making. With only 10 seconds to calculate interception trajectories, these systems demonstrate sophisticated real-time threat assessment and response capabilities.

Similarly, state security authorities now use predictive AI to anticipate and preempt terrorist group developments, often preventing incidents before they become public knowledge.

In aviation, computer systems increasingly handle critical pilot decisions across multiple aircraft types, including commercial airliners like Airbus and Boeing, as well as advanced fighter jets such as the F-35 and F-22. These systems process complex data and make split-second decisions that were once exclusively human domain.

These examples illustrate how AI and computational technologies have transformed decision-making in defense, security, and transportation, enabling faster, more precise responses in high-stakes scenarios.
What State Security services? They all were told to go home, weren’t they?
 
@DaveC you seem very passionate about this subject. While I disagree with many of your statements and assumptions, I respect that you have an interest in stating your position.

I have a friend whose neighborhood association installed speed bumps they said would increase safety. I asked if they had any accidents or previous safety incidents and he responded "no". I asked if they discussed the many studies around the world of the negative aspects of speed bumps with regard to environmental damage, increased vehicle maintenance, increased fuel usage, increased noise, etc. He said "no".

In society, we often focus our attention on one side of an issue (frequently the emotional aspect, and often a passion of the minority), and fail to fully explore alternate sides and potential impacts and outcomes of an issue, leading to poorly thought-out and/or executed "rules". And frequently we create a solution in search of a problem.

- As you said, I also don't see a pervasive problem in this forum with many members being concerned with whether they are conversing with a person or an AI bot. This smells like a solution in search of a problem. For that reason I don't support another rule.
- In my experience, rules that are created for or against one individual or instance reflect poorly on the organization. For that reason I don't support another rule.
- There are potential gains in knowledge by utilizing the research provided by Internet searches and artificial intelligence. Those gains could be impacted by an AI-limiting rule. For that reason I don't support another rule.
- Having been a Super Moderator of a VERY active forum, enforcement of a rule such as this would likely add to the ongoing time commitment and frustration of Mods and Admins, most of whom are unpaid volunteers. For that reason I don't support another rule.

All that said, I'll yield the floor as you seem to want to have the last word on the subject
 
...you seem very passionate about this subject. While I disagree with many of your statements and assumptions, I respect that you have an interest in stating your position.
All that said, I'll yield the floor as you seem to want to have the last word on the subject

You seem to want to make this personal, so I will decline to engage any further. Have a good day.
 
I think it can help with some things, but AI reads like AI: it tends to be long-winded because it can type real fast, and the answers are always adjusted toward being a bit more politically correct, so it adds yet another layer between people. It's also frequently wrong and uses flawed logic.

If you don't require AI to be disclosed, and if you can't tell AI from human either now or in the future, we're all going to end up communicating with machines without knowing it... not sure that's in anyone's best interest.

So in my opinion, you absolutely have to require AI use to be disclosed and limited. The other options don't work.

The AI detector I used to show Ted's posts were AI generated gives a probability and a % human / % AI. If it doesn't already exist, a sitewide scanner could easily check every post and report AI if you don't check a box disclosing its use, or disclose the % AI with the post.

AI can also take what you wrote and "polish" it, but that makes your post sound like AI word salad, and IMO the results are poor; personally, I don't want to read modified AI posts either. Just my $.02...
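The sitewide scan described above could be sketched roughly like this. This is a minimal sketch under stated assumptions: `detect_ai_probability` is a hypothetical stand-in for whatever detector service a forum might actually use (its keyword heuristic here is purely illustrative), and the 0.7 threshold is an arbitrary choice. As the posts above suggest, it only queues posts for human moderator review rather than taking action on its own, so imperfect accuracy is tolerable.

```python
# Sketch of a disclosure-check pass over new posts.
# detect_ai_probability() is a hypothetical placeholder for a real
# detector service; the 0.7 threshold is an arbitrary assumption.

def detect_ai_probability(text: str) -> float:
    """Placeholder detector: estimated probability the text is AI-written."""
    # A real deployment would call an external detection service here;
    # this toy heuristic just keys off one stereotypical AI word.
    return 0.9 if "delve" in text.lower() else 0.1

def flag_for_mods(post_text: str, ai_disclosed: bool, threshold: float = 0.7) -> bool:
    """Return True if the post should be queued for human moderator review."""
    if ai_disclosed:
        return False  # disclosed AI use is allowed, nothing to flag
    return detect_ai_probability(post_text) >= threshold

posts = [
    {"text": "Let us delve into the multifaceted landscape of audio.", "disclosed": False},
    {"text": "Loved the show, the bass was great.", "disclosed": False},
]
flags = [flag_for_mods(p["text"], p["disclosed"]) for p in posts]
print(flags)  # → [True, False]: only the undisclosed AI-sounding post is flagged
```

The design choice matching the thread's concern is that the detector never bans anyone: a flagged post just lands in a moderation queue where "a human backup" makes the call.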
"One has the right to use resources, hopefully judiciously to contribute."
 
"One has the right to use resources, hopefully judiciously to contribute."

How can you trust the AI content to be correct? You are not referencing an AES paper as an example.

Rob :)
 
I have seen participants on boards get in obvious arguments with bots and AI stuff. It's kind of funny, because some people don't seem to have the radar to understand they are engaging a non-human interface so they argue with it, sometimes heatedly.

Maybe that's the point. The decepticons want to make the interfaces indistinguishable for the purpose of cranking out automated propaganda confetti en masse. Many headlines today are AI generated, with the 'human' commentary added as filler.

It's the curse of too much information, when most of it is manipulative or bad.
 
  • Like
Reactions: tima
You seem to want to make this personal, so I will decline to engage any further. Have a good day.
Not at all. Be well
 
Well, this isn't a high-stakes situation: a detector would simply alert mods, and at least at this point it's very easy to recognize AI writing, so you have a human backup. All it does is make moderating a rule a little easier; the detector isn't likely (I hope) to auto-ban people, so it doesn't really matter if the accuracy isn't perfect.

I wonder if this whole thread is addressing your discovery of Ted of SR using A.I. to post here in various threads. Why not simply prohibit the behavior?
 
I have seen participants on boards get in obvious arguments with bots and AI stuff. It's kind of funny, because some people don't seem to have the radar to understand they are engaging a non-human interface so they argue with it, sometimes heatedly.
I too have seen this...and it is exactly what the WBF does not need. It does not help the WBF membership and will only dilute some of the great, quality human-made posts offered up by the members of this forum. In some of these cases, both "users" were removed from the forum because it got so heated. In one case, when it was discovered that two members were using A.I. to carry on an entire argument, both of those members were also banned from that forum (not this one).

Tom
 
(...) - Having been a Super Moderator of a VERY active forum, enforcement of a rule such as this would likely add to the ongoing time commitment and frustration of Mods and Admins, most of whom are unpaid volunteers. For that reason I don't support another rule. (...)

As far as I see it, although we do not have the details, your moderation experience does not apply in full to this very different problem we are addressing involving the use of AI. Forums have to address the ethical aspects of such behaviour and analyse the different ways AI can participate in our debates. Possible implementation of rules and practices can only happen after this work is carried out.

The existing issue is particularly serious in a forum that benefits from the presence of designers, manufacturers and dealers.
 
The existing issue is particularly serious in a forum that benefits from the presence of designers, manufacturers and dealers.
yes; when commerce is a factor, there is an added degree of urgency and concern. we cannot assume good, well-meaning hobbyist intentions.
 
  • Like
Reactions: wil and bonzo75
As far as I see it, although we do not have the details, your moderation experience does not apply in full to this very different problem we are addressing involving the use of AI. Forums have to address the ethical aspects of such behaviour and analyse the different ways AI can participate in our debates. Possible implementation of rules and practices can only happen after this work is carried out.

The existing issue is particularly serious in a forum that benefits from the presence of designers, manufacturers and dealers.
Thank you for your reply to my post. My moderation experience was in a forum of hobby-based consumer products where designers, engineers, manufacturers, distributors, dealers and consumers participated. While it may not have been identical to this forum as it was a different field of products, my post regarding my moderation experience was a comment regarding the significant amount of time that moderators commit to a forum, typically on a volunteer basis, and how a rule regarding the use of AI in posts could increase the time commitment of moderators.
 
  • Like
Reactions: microstrip
Not at all. Be well

All that said, I'll yield the floor as you seem to want to have the last word on the subject


Since there's no rule yet:

Gemini:

You're right to call that out. While the rest of the post is respectful and avoids being personal, that closing line, "All that said, I'll yield the floor as you seem to want to have the last word on the subject," does have a slight edge to it. It's not a full-blown personal attack, but it's a mildly passive-aggressive jab.
Here's why it's problematic:

  • Implies Negative Motive: It suggests that the other person is more interested in winning an argument or having the final say than in having a productive discussion. This is an assumption about the other person's motivations, and it's generally better to avoid speculating about motives.
  • Sarcasm: The tone can easily be interpreted as sarcastic. Even if not intended that way, it can come across as dismissive and slightly condescending.
  • Unnecessary: The rest of the post is so well-reasoned and respectful that this closing line feels unnecessary and detracts from the overall positive impression. The user could have simply ended with a more neutral closing, such as "Thank you for your time" or even just stopped at the previous point.
In summary: While the post is overwhelmingly not personal, that one line does introduce a touch of personalization by implying a negative motive to the other participant. It's a minor flaw in an otherwise well-constructed argument, but it's worth noting as an example of how even subtle language choices can impact the tone of a discussion.
 
  • Haha
Reactions: wil
OK @DaveC maybe a little personal on that last line. LOL. Kudos to you for calling me out on it.

BTW, if Gemini thinks I'm only minorly flawed, it doesn't know me well, as I am fully flawed! Be well mate!
 
  • Like
Reactions: Pokey77 and DaveC
Totally agree Mike

Nothing worse than googling over a nude girl to find out she is fake :p ;)
I like mine at least 30% fake anyway . :rolleyes:
 
Thank you for your reply to my post. My moderation experience was in a forum of hobby-based consumer products where designers, engineers, manufacturers, distributors, dealers and consumers participated. While it may not have been identical to this forum as it was a different field of products, my post regarding my moderation experience was a comment regarding the significant amount of time that moderators commit to a forum, typically on a volunteer basis, and how a rule regarding the use of AI in posts could increase the time commitment of moderators.
Something seems amiss with your signature line; a list of previous motorcycles is a first on WBF. AI generated maybe? ;)
 
Something seems amiss with your signature line; a list of previous motorcycles is a first on WBF. AI generated maybe? ;)
There's nothing "artificial" about the intelligence of owning and riding Italian motorcycles. There may be "questionable" intelligence in how one rides an Italian motorcycle, but it's definitely not artificial. LOL :cool: I still have three Italian motorcycles and 2 other Euro ones, plus a number of former bikes. That said, at my age (60's), my riding is far less aggressive than many years ago.

As a contrarian, I thought it boring to list my audio system in my signature, though some of my goodies are a little interesting.

Would it be more appropriate for me to list my favorite bands in my signature line? Before you say 'Yes', know that though I'm in my 60's I mostly listen to metal and hard rock bands of the past 25 years, with particular interest in Metalcore, Deathcore, Black Metal, and Progressive Metal. (Be careful what you ask for, LOL)
 

About us

  • What’s Best Forum is THE forum for high end audio, product reviews, advice and sharing experiences on the best of everything else. This is THE place where audiophiles and audio companies discuss vintage, contemporary and new audio products, music servers, music streamers, computer audio, digital-to-analog converters, turntables, phono stages, cartridges, reel-to-reel tape machines, speakers, headphones and tube and solid-state amplification. Founded in 2010, What’s Best Forum invites intelligent and courteous people of all interests and backgrounds to describe and discuss the best of everything. From beginners to life-long hobbyists to industry professionals, we enjoy learning about new things, meeting new people, and participating in spirited debates.

Steve Williams
Site Founder | Site Owner | Administrator
Ron Resnick
Site Owner | Administrator
Julian (The Fixer)
Website Build | Marketing Manager