General Discussion
Please do not post AI-generated text, whether OPs or replies, without labeling it AI-generated and naming the AI used.
And when you're quoting any chatbot, put the reply in quotes or an excerpt box.
There are two main reasons any AI user here should do that.
The first is that those chatbots make a lot of mistakes, and it's completely unfair to other DUers not to let them know you're giving them info from a chatbot.
The second is that it's a form of fraud to post a chatbot's answer as your own. On a message board, people are expected to identify quotes, not pretend that something they didn't write is their own writing.
pat_k
(12,972 posts)
I always post as an excerpt and preface with something like "Apply whatever grains of salt you apply to all AI, but this is what AI (Gemini) had to say."
highplainsdem
(61,324 posts)
the next time it's prompted - and when any of its answers can contain errors anywhere?
You can't know yourself if an AI answer is correct unless you check every single detail.
It's quite possible for a chatbot's answer to get every detail correct for a few paragraphs, then get something wildly wrong, then get something right, then completely wrong again, and so on.
Unless you check every detail yourself - which can take a lot of time - you've given DUers something that might be completely wrong in places, expecting them to take the time to factcheck it.
It doesn't take that much time to do a search and link to sources known to usually be reliable, so DUers can judge whether the info there is probably accurate.
Chatbots are designed to sound authoritative, and they'll usually offer replies that sound confident and authoritative even when they have absolutely no information to base that reply on. And they can hallucinate and offer replies that are 100% wrong even when they have access to the correct information.
It's good that you will at least say when you're using a chatbot. But real research of your own, with links to real sources, beats a chatbot reply any time.
And it's exercise for your own brain (and we can all use such exercise) to write your own reply, after research if necessary.
mdbl
(8,462 posts)
Lazy content providers are just using AI to do the entire video - they have AI write and produce the entire thing. AI will gladly give you misinformation; it doesn't care if you benefit from it or not. These content providers are just using it to make money on TikTok or YouTube. Many of them are from adversarial countries and don't care about your well-being. If you see something where you don't see the person talking and taking credit for the content, it's probably best to skip it. If you can watch it for the entertainment value knowing it may not be intelligent, then have at it. For me, it's a waste of time. If you're not sure, just wait for it to mispronounce a word. Yep, it's AI.
womanofthehills
(10,881 posts)
Where the info was published and links to where they got their information. A number of times, AI has reversed its answer.
Ms. Toad
(38,472 posts)
A bunch of attorneys have found out the hard way by not checking their briefs (which included case citations - i.e., links to where the AI got its information). The AI completely made up the quote and the source.
leftstreet
(39,909 posts)
Link to tweet
highplainsdem
(61,324 posts)
real quotes, real photos and real video.
AI slop, even attacking Trump, is a waste of time and energy - human energy as well as electricity - and it diverts people's limited time and attention from real satirists and critics and commentators who deserve the attention.
YouTube is flooded with AI garbage attacking Trump, anonymous garbage for the most part, and since it takes so little effort or time to generate that crap, it's likely a lot of it is from amoral creeps and foreign content farms just using opposition to Trump as clickbait, and those AI content generators probably have other channels with other content, even MAGA content.
And all generative AI tools are unethical to use, unless you're forced to use them, because they were all trained on stolen intellectual property. No talent or commitment to art is necessary to use them. They're the antithesis of art.
Generative AI is PERFECT for MAGAts. Trump adores it. And those AI tools are owned and controlled by people largely aligned with Trump, and if he ever pressures them to make it impossible to mock him with genAI, you'll likely see a lot of them comply.
leftstreet
(39,909 posts)
Which I agree with
highplainsdem
(61,324 posts)
should be posted, period, or wanted me to comment on.
jmbar2
(7,890 posts)
hunter
(40,576 posts)
That's what I'd rather see in an excerpt box, along with a link.
These plagiarism machines work, in effect, by scraping words and images from the internet and other sources, compressing them down and mashing them together in a lossy manner until the sources are entirely obfuscated. Then according to some user prompt they expand this compressed information, stitching together all the rips and tears and filling in the holes in some plausible manner before spitting out the results. (This process uses an unconscionable amount of electricity.)
Occasionally the results go entirely sideways in a Mad Lib fashion, and those garbage results are called "hallucinations" in an effort to disguise what's really going on inside the box.
There's an easy-to-find video presentation by Dave Plummer, a retired Microsoft engineer, about using AI to recreate the Notepad text processing program as it was before it was re-imagined (a polite way to say ruined) in Windows 11.
It looks like magic if you don't know what's going on. (And it raises the question: why not simply use the old version of Notepad itself?)
It's not magic at all when you realize what this vibe-coding software is actually doing. It's been "trained" (another deceptive word choice) on actual text processing software, maybe even the code for the older version of Notepad itself.
It's like a kid who, instead of doing any actual research for a term paper, rewrites an encyclopedia entry "in their own words," puts a few sources that they haven't even looked at in the bibliography, and hopes the teacher won't notice.
Of course I date myself, having grown up in a time when the internet wasn't open to the general public and the Encyclopedia Britannica usually had an entry about whatever topic you were assigned to write about.
This is my term paper about Liechtenstein...
These days Wikipedia serves the same purpose and the plagiarism machines which are not intelligent scrape that too.
kurtyboy
(1,006 posts)
A land of many contrasts....
SergeStorms
(20,376 posts)
it's getting more difficult - by the day - to tell. They've been adding disclaimers to AI generated text, for now. There will come a time, and it's not that far off, when they won't.
Then we're well and truly fucked.
highplainsdem
(61,324 posts)
leave the disclaimers off either for reasons of fraud (to seem more knowledgeable than they are) or for fear that their content will be ignored or dismissed as second-rate if they acknowledge it's AI.
And often the info that it's AI-generated is buried in fine print at the very end. I was very disappointed to discover last month that Newsweek is now publishing some AI-generated news stories. I wouldn't have noticed that if I hadn't read to the last word of a news story that had seemed oddly lacking in some details I'd have expected a reporter to ask about.
SergeStorms
(20,376 posts)
Trueblue1968
(19,194 posts)
SheltieLover
(79,373 posts)
ShazzieB
(22,458 posts)
I had no idea that was going on. These days it can be easy (for some of us, anyway) to miss the fact that something was created with AI, but sharing "information" that you know was created by AI, and not labeling it as such, is downright misleading.
For a while now, Google has been spitting out what they call an "AI overview" in response to every search, but even they have the decency to clearly label it as such. I sometimes look at what it has to say, but I never post any quotes from that, even though it would be very easy. Instead I scroll down to the actual links to legitimate information sources, the way I have always done, because I know that anything generated by AI is unreliable. IMO, that's what we should all be doing.
hunter
(40,576 posts)
If you actually follow those links to other sites you might wander away and not come back for hours, days... or ever.
ThreeNoSeep
(296 posts)
Disclosure of AI use is important. Here is a guide that shows effective citing of AI in an academic setting - University of Maine at Fort Kent AI Use Guidance.
The subtext of the first of the "two main reasons" in the OP implies that chatbot responses, or human-edited chatbot responses, are more prone to mistakes than human-only responses based on personal opinion, biased sources, and the nonsense we all believe in our wrinkled brains. The dangers of AI are not from hallucinations, but in wealthy humans and bad actors using AI to overtly and covertly manipulate people and society. That, and the potential for AI to rise up and just plain put an end to humanity.
With the second reason, the OP's use of the word "fraud" is hyperbolic, a bad use of language and seemingly meant to frighten people from using LLMs.
People who use AI without citing the use are not committing fraud.
Fraud is a criminal act meant to harm another. "Fraud is the intentional use of deception, trickery, or dishonest acts to deprive another person or entity of money, property, or legal rights for personal gain." While use of AI without disclosing could be fraud, it is not inherently fraudulent. For example, if the Gmail response widget suggests a response, and I use the suggestion as my reply without disclosing, this is not fraud. When I use Google to give a friend directions to my house without disclosing the use of AI, this is not fraud. If I use AI to generate an image and post it to Reddit, this is not fraud.
paleotn
(21,992 posts)
It's not their words. Yet it's attributed to them. When I was an undergrad and grad student (granted things have changed NOT for the better), that would get you at best an F on an assignment, perhaps kicked out of the course, or worst case, expelled. Your thoughts were YOUR thoughts. Someone else's ARE NOT YOUR THOUGHTS. In short, do your own goddamn work.
highplainsdem
(61,324 posts)
I did not say that non-AI sources can't ever be wrong, even when they're usually correct and have a long and deserved reputation for reliability. But if a DUer checks those usually reliable sources for information and quotes and links to them, they're providing much more useful information than if they quote a chatbot that might provide a completely different answer the next time it's prompted.
Hallucinations are a very real danger. I've seen a lot of AI hallucinations offering very dangerous advice, and you've probably seen news stories about those. Hallucinations can happen at any time with LLMs, even ones trained on accurate information, and if a hallucination gets published, online or off, it can get quoted and spread further. That's often described as pollution of our information ecosystem, and it's a serious problem.
That's a separate danger from AI, which I've posted a number of messages about on DU and other platforms.
Again, a separate danger from AI, and one I've posted about.
The definition of fraud includes but is not limited to criminal fraud, and it does not require depriving another person of something.
Fraud is trickery or deception. Period. It can be done to deprive someone else of something. It can also mean deception to gain something. Sometimes it can be so habitual there's little or no conscious intent behind it.
I used that word because in a society with so much deception by AI, we're dealing with a very serious problem.
On a message board, people expect what's posted by someone to be a message from them, unless it's in quotation marks, in which case the source of the quotation should be given. Unless it's a common saying that can't be attributed to anyone.
If someone posts simple information in response to a question, like the translation of a word or a location of some upcoming event, they may or may not have looked it up, and they may or may not give the source. If I need to do some quick googling to answer a question, I'll usually say that I googled it. I used to say "Google is your friend" but Google is doing so much harm now with its AI that it's no longer a friend.
Yes, it is, unless you've told the person you're writing to that you are using AI in email.
Whether or not it is depends on the image and what you say when you post it.
If you're posting it in a subreddit for AI art, fake art, and everyone would assume it's AI-generated, then it isn't fraud.
If it's posted anywhere else and it's photorealistic and it might trick anyone seeing it into thinking it's a real photo, then it's fraud unless you disclose when you post it that it is AI.
If you post an AI image that isn't photorealistic and you simply say "I did this" or "This is my artwork" then IMO that is fraud, because you did not create that image. An AI did. You might've given it a prompt, but anyone who's ever used gen AI knows it can offer a wide selection of responses from any prompt. You might have spent a lot of time discarding images the AI offered until it finally gave you one you liked, and you might have edited or tweaked it in various ways, but if you just say it's yours, you'd be leading a lot of people to believe it really is your creation, whether digital art or a photo of nondigital visual art like a painting or sketch. And that would be fraud.
jmbar2
(7,890 posts)
Did AI in any way help you to write the above? If so, where are your sources? Can you follow your own rules?
highplainsdem
(61,324 posts)
I don't use AI to write. Never have. Never will. Don't need it.
jmbar2
(7,890 posts)
highplainsdem
(61,324 posts)
I'm just quoting specific comments I'm replying to. Very standard formatting.
CaptainTruth
(8,145 posts)
Joinfortmill
(20,837 posts)
paleotn
(21,992 posts)
AI simply exacerbates human errors, compounding the problem. And current LLMs were built by pasty tech bros who need to get more sunshine and human interaction. Enough said about that.
Joinfortmill
(20,837 posts)
But if it is a wish by a poster, presented as a new rule, that is problematic.
paleotn
(21,992 posts)
Joinfortmill
(20,837 posts)
At any rate, I always include links, and cite sources.
paleotn
(21,992 posts)
Joinfortmill
(20,837 posts)
highplainsdem
(61,324 posts)
here asking people not to post all-caps or misleading sources, etc.
If it were a rule or guideline and I had the authority to do that here, which I don't, it would be pinned to the top of the board, or in the TOS.
paleotn
(21,992 posts)
Joinfortmill
(20,837 posts)
highplainsdem
(61,324 posts)
such, and that have contained wrong/hallucinated information.
paleotn
(21,992 posts)
Humans make enough mistakes. We don't need HUMAN made machines exacerbating those inherent errors.
Now what exactly was it that AI was useful for?
SheltieLover
(79,373 posts)
Drum
(10,618 posts)
Honestly, we get daily scoldings about this.
Perhaps we should look to DU's site admins for guidance on this topic.
Maybe a special MIRT position can open up for the right DUer to monitor posts and block offending content?
I, however, do not want to in any way interfere with forum moderation.
-Drum
hunter
(40,576 posts)
I'm here to interact with people. The cut-and-paste output of anyone's favorite Artificial Idiot is noise to me; a waste of energy.
I tend to exercise the same option when it comes to self-appointed scolds.
Wednesdays
(22,260 posts)
K&R.
carpetbagger
(5,459 posts)
If I wanted AI essays, I'd either go directly to Skynet myself or I'd hang out on Facebook.
highplainsdem
(61,324 posts)
AI-generated messages, whether text or images or video or music.
MichMan
(17,001 posts)
To know whether it meets the TOS?