Should a member be allowed to make a post which is AI generated or AI mixed without disclosing such use of AI as part of the post?

  • Yes

  • No


Gentlemen, my example of a proposed addition to the TOS is quite simple: it shall not be allowed. Having said that, if there is a debate going on in a thread and one member does a search and puts up a post stating, "I did a search on the topic, AI found the following for me, and here are the results," I see this as OK.

Ron, IMO you are overthinking this, as I truly believe that if it becomes a TOS rule the problem will quickly be minimized, if not negated.

You keep falling back on your AI detectors. These are evolving at such a rapid pace that, as I said above, Xenforo is contemplating adding one of its own as an add-on in which all posts are scanned. I believe the majority in the poll who voted "no" will never use AI, and I also believe that the small percentage who voted "yes" might use it, but I bet they will divulge it.

I happen to agree with Dave C (which you don't), and you are trying to complicate a situation in which there is no need.

I put up my example of the ladder system only to outline to members what our ladder system is, as there is only one rule for all Terms of Service. Any violation is placed on the ladder system by our mods and draws a soft warning initially. If there are further recurrences, a formal warning process begins and escalates up the ladder until a temporary ban is issued. What is written in the TOS will merely state: "The use of AI on WBF is not allowed unless fully divulged by the poster at the time of the post."

We employ a ladder system that is applied in exactly the same way under all four TOS rules.
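
Only as a rough illustration (the step names here are shorthand for what is described above, not the exact wording our mods use), the ladder can be thought of as a simple ordered escalation:

```typescript
// Rough sketch of the escalation ladder described above. Step names are
// shorthand, not the exact moderation wording used on WBF.
const LADDER = ["soft warning", "formal warning", "temporary ban"] as const;

type LadderStep = (typeof LADDER)[number];

// Given how many prior violations a member has, return the next step.
function nextStep(priorViolations: number): LadderStep {
  const index = Math.min(priorViolations, LADDER.length - 1);
  return LADDER[index];
}

console.log(nextStep(0)); // "soft warning"  (first occurrence)
console.log(nextStep(5)); // "temporary ban" (repeated occurrences)
```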

I still believe that if a member knowingly or unknowingly uses AI and a soft warning is issued, the behavior will stop. I believe that "we humans" have the ability to tell right from wrong, and I believe that said member will cease and desist.

Ron, you are making something simple, IMO, into something far too complicated. I'm beginning to wonder if you're a bot. ;)

I believe we have to start somewhere, that a new TOS rule needs to be added, and that members need to be made aware. I also believe that our mods act equally and without bias in all cases. Ron and I do not moderate. That duty is done solely by the mods. Ron and I never get involved, nor do we ever know what the mods are dealing with. This is done purposely because we all decided that "owners do not moderate."

Finally, there might be one or a few outliers who feel they are above the TOS and will advance up the ladder. Hopefully they will get the message, or ultimately a temporary ban will occur. Surely that would stop the behavior, but if not, our ladder system deals with that.

I have been in touch with Xenforo, and as stated on their owners' forum, they are already selling third-party AI detectors.

IMO we have chewed this to pieces and the time has come to add a TOS rule. We have to start somewhere, and I believe my suggestion is a fair start. To answer the question about a post which suggests an AI mix: I believe informal discussions with the author should take place to discuss what our detectors are suggesting. I don't believe an AI mix of 2% is in any way meaningful. I do believe, however, that a discussion with the author is indicated for a mix with a higher percentage. Our mods are human, and I believe they would know how to deal fairly with all members should such a need arise.
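
As a purely hypothetical sketch of how such a rule of thumb might look in practice (the detector, its score format, and the 30% review threshold are placeholders of mine; the only point taken from the paragraph above is that a mix of around 2% is not meaningful on its own):

```typescript
// Hypothetical triage of an AI-detector score. The detector and the 30%
// review threshold are placeholders; the only idea taken from the post
// above is that a very low mix (around 2%) is treated as noise.
interface DetectorResult {
  postId: number;
  aiMixPercent: number; // 0-100, whatever score the detector reports
}

type Triage = "no action" | "informal discussion with author";

function triage(result: DetectorResult, reviewThresholdPercent = 30): Triage {
  if (result.aiMixPercent < reviewThresholdPercent) {
    return "no action"; // e.g. a ~2% mix is ignored
  }
  return "informal discussion with author"; // higher mixes go to human mods
}

console.log(triage({ postId: 1, aiMixPercent: 2 }));  // "no action"
console.log(triage({ postId: 2, aiMixPercent: 70 })); // "informal discussion with author"
```
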
Steve, because of this thread I ran my bio through AI detection software before I sent it to my State Board of Accountancy for a regulatory position they are considering me for. I got low percentages of AI presence in a Word document I watched being typed in 2017 by the gentleman I replaced in 2014.

I think you should add the plug-in to detect AI and be done with it.

Stephen
 
What stand would you like to take?
I support that members should declare where they have utilised AI to write posts, Ron. For good and for bad, AI should stand clear so we can understand its contribution and value.

These are only relatively early days of its broad integration into life, so attribution should be open and honest so we can reasonably evaluate AI's impact.

I do support a precautionary principle in its implementation because it is such a game changer in life… we do need to manage that sufficiently.
 
I just add a few bad jokes and intentional misspelling's; no one has ever suspected I am really a robot! ;)
Robots don’t buy their wives cosmetic enlargements, Milan… you are super real!
 
I'm sorry for being unclear.

I understand you would like to ban AI from the forum. Do you have a problem with a member asking an AI a question about audio, and then posting on WBF the question he/she asked as well as the answer he/she received from the AI, conspicuously disclosing that the answer posted was generated by AI?

In other words, is posting AI okay with you if the question and AI answer are fully disclosed?

So, are you for stolen human intellectual property from undisclosed human sources (=AI) being posted on WBF?
 
  • Bishop: [puzzled by Ripley's reaction towards him] Is there a problem?
  • Burke: I'm sorry. I don't know why I didn't even... Ripley's last trip out, the syn- the artificial person malfunctioned.
  • Ripley: "Malfunctioned"?
  • Burke: There were problems and a-a few deaths were involved.
  • Bishop: I'm shocked. Was it an older model?
  • Burke: Yeah, the Hyperdyne Systems 120-TD III.
  • Bishop: Well, that explains it then. The TD3s always were a bit twitchy.
Is that from Tim Winston’s new book, “Juice”?

That just about sums it up. At some point, the ability to decipher the difference will effectively vanish. AI-generated text needs to be banned. It really is that simple.
The AI text is seldom rude, though.
 
So, are you for stolen human intellectual property from undisclosed human sources (=AI) being posted on WBF?
Unfortunately it’s a bit too late for these concerns. Every book, every article, every blog, every YouTube video, every social media post, any piece of music or audio, in short every bit of data produced by a human mind since antiquity that is on digital media, has already been hoovered up into these giant foundation models or will be. Every keystroke you’ve made that is on the web has been sucked up. There are many lawsuits pending, but the courts move at glacial speed. By the time the lawsuits get resolved one way or the other, the models will have been completely trained. There’s no way to untrain them.

To give you a sense of the speed and power of these systems: it would take a really smart human a decade to read through all of Wikipedia, while these AI systems blitz through all of it in a few hours. Take a really long book. Tolstoy’s War and Peace is a great classic of literature, and it’s 1,300 pages long. How long would it take you to read a 1,300-page book? I’m guessing a few weeks if you read it every day. It takes a modern AI system like Gemini 10 seconds to read War and Peace. And it will remember a lot of the details far better than you will.

The aliens have landed in our midst. Humans are completely clueless about what’s coming. Forget silly sci-fi flicks like The Matrix or Star Wars. The alien revolution is AI. It’s not sci-fi. It’s real. It’s here, it’s in your phones. It’s watching me type. It’s watching you read this post. It doesn’t sleep or need to eat. It’s active 24/7, 365 days a year. And even the smartest AI researchers on the planet don’t know what these systems are learning. We can’t unpack what’s in their “heads”. All we hope is that we can control them. Hopefully.
 
One part that is very painful to watch is the rationalization of smart folks who are building and financing these models, claiming that it is for humanity while it is actually for their financial benefit. Some of them truly believe it. Kind of. I assume they know what they are doing, and that is rolling the dice with our future. FOMO is then driving all competitors and many venture capital/private equity firms. The money involved is mind-blowing. It is a giant vortex of greed.

The biggest losers seem to be their young staff who buy into the "save humanity" marketing while assisting the development of AI which will take over their coding jobs. I hope AI pays into social security because a very large pool of people live off of that alone and others truly need it to make ends meet.

Of course, like any technology, this one can be used for good and the benefits to medical research, diagnostics and the like can assist human doctors and researchers. The ability to pattern search through vast amounts of data will pay dividends soon, it seems. It does take human ingenuity and intuition to know what paths to pursue. If/when AI can simulate those qualities, then what?
 
Further thoughts… things that could be considered if this is implemented:

Generally, my minimum supported view is for standard mandatory disclosure requirements. I think they should be universal in any case, but that’s just my position.

I don’t use AI for content creation and am unlikely to, and I commit, however this resolves, to never make an AI post anywhere without very clear attribution. I would support an AI-free site if the membership chose that as a point of difference from a potentially AI-laden world. I also understand that could likely only ever be a voluntary condition; for me it would represent the best outcome, but it doesn’t reflect my thoughts on AI in the workplace, which I see as a different proposition.

* To make an AI contribution more distinct, separate and avoidable (and given AI posts tend to be longer), could a text box similar to a typical quoted-reply box be integrated into the posting functionality, if feasible, so that AI contributions can only be added to posts in a distinctly different window with a limited, compressed height unless opened by the reader? (A rough sketch of the idea follows at the end of this post.)

It then becomes an option to click to expand and read the full body of the AI text, or to ignore it, and it is generally less invasive on screen for those who want to pass on reading any AI element on the forum.

It would likely create an easier-to-identify distinction between member-written text and AI generation, leave post lengths in a thread shorter, and make layouts easier to navigate.

* At the institute where I work, an AI program is provided within the institute’s desktop environment, and staff and students are required to use only the supplied AI program if they choose to use AI at all (I have not used it for content creation, though I have used AI detection to check submitted work).

I imagine an AI function provided by the site as a specific program would allow a lot of control in terms of quality, would likely give higher accuracy in detector results, and would also allow layout control and control over standards.

Formatting of AI contributions could also then be managed.
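
Purely as a sketch of that collapsed-by-default idea (this is generic HTML generation, not actual XenForo template or add-on code, and the function name is just illustrative), something like the following could wrap an AI contribution so it stays compressed until the reader expands it:

```typescript
// Illustrative sketch only: wraps AI-supplied text in a collapsed-by-default
// HTML <details> block. Not XenForo template or add-on code.
function renderAiContribution(
  aiText: string,
  label = "AI-generated contribution (click to expand)"
): string {
  // Escape basic HTML special characters so the AI text is shown as plain text.
  const escaped = aiText
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
  return `<details class="ai-contribution"><summary>${label}</summary><div>${escaped}</div></details>`;
}

// Example: the AI text stays hidden until the reader chooses to expand it.
console.log(renderAiContribution("…long AI-generated answer pasted by the member…"));
```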
 

I'm still working out this whole AI issue, so I may change my view down the line.

I suspect that I and others started from the perspective that we don't want to be deceived by content posing as humanly originated when it actually came from a robot, regardless of whether the AI-generated post ultimately derived from human-originated contributions and regardless of the content of the AI-generated post.

We want to know with whom we are engaging.

That led to discussions about mixed AI/human content and discussions about disclosure. Presumably we want disclosure.

Yet the more I think about it the more uncomfortable I am becoming about having any AI content on this social media platform, regardless of disclosure. Maybe it's the vanity of humanity at root, but thinking about the practicality of dealing with AI content makes me wary.

One issue, for lack of better word choice, is the relative 'authoritativeness' of AI content. Some will be convinced of the superior credence of AI content, some will consider it a second-class citizen. This may introduce layers of contention that currently are not here.

AI generated premise #1.
AI generated premise #2.
-------------------------------------------
Person supplied conclusion.

Assuming validity, the truth-value of the conclusion depends on the truth or falsity of the AI premises.
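
Put slightly more formally (just my restatement of the same point):

```latex
% Validity alone does not settle the conclusion; the premises must also be true.
\[
\{p_1,\; p_2\} \models c \;\;\text{(valid argument)}
\quad\Longrightarrow\quad
c \text{ is guaranteed to be true only when } p_1 \text{ and } p_2 \text{ are both true.}
\]
```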

I don't want to be arguing with robot generated text. I don't want someone telling me: "prove the AI statements are false". Or "the XYZ AI says you are wrong."

I suppose one could adopt a view that says: "I do not respond to posts containing AI generated content." That strikes me as advancing a decline in sociability. Or leading to the flip response: "You're not living in the modern world."

In these early days it seems like AI can have value, yet I don't feel off base preferring to interact with people on social media.
 
I agree, but there is more. It’s not just that the eventuality of cheaper machines for greater profits will mean we don’t work, so we don’t earn, so we can’t buy the cheaper AI-produced products (and we all just starve to death); AI also separates us from human contact (have you tried to complain about a service recently?). I prefer human contact (though I suppose others prefer synthetics, ’else there wouldn’t be a market for silicone sex dolls!).
 
I don’t see how AI information pulled from Google is fundamentally different from pulling information from a website found through Google, other than that the AI info is synthesized from multiple sources.

It can just be a prompt, adjunct or jumping-off point from which humans can explore further.
 
Interesting times.

When I have to engage with “customer service” via “chat”, my first question is always “Are you human?” If they respond “yes,” I can’t help but wonder, “has the bot learned to lie yet?”

I believe there is a very large difference between a sentient human spending three weeks reading War and Peace and a bot (however advanced) speed-reading the same material. The bot will retain more details, but the human has the potential to resonate with the material on a deeper level that the machine cannot fathom.

Is AI to be our new God?

As a fatalist, I’m convinced in my darker moments that humanity is destined to extinguish itself one way or another at some point in our future. So why worry about AI doing the same? And who knows, if AI is to be as all powerful as you imply, perhaps it will end up saving us from our propensity towards extinction?

Have a nice day!
 
:)

The "end" might be a whimper: AI stops access to Tik Tok, YouTube and porn sites. Chaos ensues.

Or, in a more distant future (10 years from now? ;) ), AI turns off all the sex bots and AI companions. Chaos ensues.

Or future audiophiles are still arguing about analog vs. digital in year 5025 when the lights go out (sun explodes).

Have a nice day!
 
...don't let AI swipe that one. That's a gem, Tim. Betcha AI can't create that richness of language...unless you give it to "it."
Too late! Can you hear the sound of scraping?
 
I don’t see how AI information pulled from Google is fundamentally different from pulling information from a website found through Google, other than that the AI info is synthesized from multiple sources.

It can just be a prompt, adjunct or jumping-off point from which humans can explore further.

Do you have any active experience with AI bots? Do you pay for such services?
 

I believe in giving credit where credit is due. Thank you for taking the issues raised in this thread seriously and thoughtfully.

My hope for this thread was not so much the poll as the discussion which I hoped would ensue underneath the poll.

These are complicated issues, and, for most of us, issues of first impression. This thread is the modern equivalent of neighbors discussing issues in the town square.

So I appreciate that you have taken these questions seriously. Thank you for explaining how your own thinking has evolved as you have thought through these issues.
 