General Discussion
Columbia University student: You have no idea how much we're using ChatGPT
From Columbia University undergrad Owen Kichizo Terry, in the Chronicle of Higher Education:
https://www.chronicle.com/article/im-a-student-you-have-no-idea-how-much-were-using-chatgpt
Archive page https://archive.ph/piYe3
As an example, I told ChatGPT, "I have to write a 6-page close reading of the Iliad. Give me some options for very specific thesis statements." (Just about every first-year student at my university has to write a paper resembling this one.) Here is one of its suggestions: "The gods in the Iliad are not just capricious beings who interfere in human affairs for their own amusement but also mirror the moral dilemmas and conflicts that the mortals face." It also listed nine other ideas, any one of which I would have felt comfortable arguing. Already, a major chunk of the thinking had been done for me. As any former student knows, one of the main challenges of writing an essay is just thinking through the subject matter and coming up with a strong, debatable claim. With one snap of the fingers and almost zero brain activity, I suddenly had one.
My job was now reduced to defending this claim. But ChatGPT can help here too! I asked it to outline the paper for me, and it did so in detail, providing a five-paragraph structure and instructions on how to write each one. For instance, for "Body Paragraph 1: The Gods as Moral Arbiters," the program wrote: "Introduce the concept of the gods as moral arbiters in the Iliad. Provide examples of how the gods act as judges of human behavior, punishing or rewarding individuals based on their actions. Analyze how the gods' judgments reflect the moral codes and values of ancient Greek society. Use specific passages from the text to support your analysis." All that was left now was for me to follow these instructions, and perhaps modify the structure a bit where I deemed the computer's reasoning flawed or lackluster.
The vital takeaway here is that it's simply impossible to catch students using this process, and that for them, writing is no longer much of an exercise in thinking. The problem isn't a lack of AI-catching technology: even if we could definitively tell whether any given word was produced by ChatGPT, we still couldn't prevent cheating. The ideas on the paper can be computer-generated while the prose can be the student's own. No human or machine can read a paper like this and find the mark of artificial intelligence.
-snip-
So rather than fully embracing AI as a writing assistant, the reasonable conclusion is that there needs to be a split between assignments on which using AI is encouraged and assignments on which using AI can't possibly help. Colleges ought to prepare their students for the future, and AI literacy will certainly be important in ours. But AI isn't everything. If education systems are to continue teaching students how to think, they need to move away from the take-home essay as a means of doing this, and move on to AI-proof assignments like oral exams, in-class writing, or some new style of schoolwork better suited to the world of artificial intelligence.
-snip-
Emphasis added.
It's depressing that a student this bright, who could write essays without any help from AI, is still using it.
But at least he's honest enough to admit he isn't using his brain. I've seen too many naive messages online from people who do think they're learning, or being creative, by using AI. (Or at least are trying to convince themselves and others that they are.)
And he's right that AI detectors can't catch this type of cheating. They often can't catch essays that are entirely done by ChatGPT, either. They miss lots of AI-written text, and give false positives on human-written text.
He's wrong on one point, though.
In the first paragraph, which I didn't include in the excerpt, he says that "a year ago" student academic-integrity policies were common sense, and now they're "laughably naive" thanks to ChatGPT.
It hasn't been a year since ChatGPT was released, causing so much harm.
It's been only 5-1/2 months, and it's already made a shambles of education.
TheBlackAdder
(28,253 posts)
Last edited Sat May 13, 2023, 08:25 PM - Edit history (1)
.
I mean, if the same parameters are input, you would think that the same output would be generated.
Unless ChatGPT keeps the prior works to make sure new creations are unique. But Turnitin and other services can compare submissions, and if there are matches, the papers would get flagged. Every writer has a unique voice, so one of the keys would be to have students write essays in class and then compare those works with the papers submitted later on to see if the same author created them.
.
anciano
(1,021 posts)
when it comes to assignments for a grade, they should now be either in-class written or oral exams. You can't fake those.
stopdiggin
(11,412 posts)
do anything of any depth or really well thought out structure - in a sit-down exam. (and the same holds for orals) Even a very quick student is barely getting an 'idea' together, much less fleshing it out into 'substance.' And then you have the population that is not 'quick' .. but in the end might be better and more thorough thinkers.
Igel
(35,390 posts)Day 1: "You have 45 minutes to produce an outline and first draft. Your computers are locked down so you have no Internet access. " Minute 46: "Turn it in."
Day 2: "Here are your outlines. You have 45 minute to finalize your first draft for review. Your computers are locked down so you have no Internet access. " Minute 46: "Turn it in."
Day 4 or 5: "Here are your drafts with comments. You have 45 minutes to revise and submit to the drop box. Your computers are locked down so you have no Internet access. It's in-class work and if it's late it's 90 pts off your final grade."
Better on paper, to be sure. But then there are handwriting issues.
Note this gobbles up a crapload of class time.
I was told that ChatGPT will fess up to writing something. "Yes, ChatGPT wrote, '...'."
I checked that. Thirty minutes ago. Asked it to write a one-paragraph 'essay' on Shirley Chisholm's presidential candidacy. (Which is an essay required in the 2022-23 version of a standard online initial-credit HS English 3 "course," I think.) Got a nice mini-essay.
Next: "ChatGPT, did you write, '...'?" and I cut-and-pasted its output in the dialog box. Yes, it fessed up. It said it wrote it. Score!
Next: "ChatGPT, did you write, '...'?" and I cut-and-pasted its output in the dialog box. *Then* I altered 4 words in the text. And it said that it had written that, and apologized for saying it had written the first cut-and-paste text (which, actually, it had). Uh ... it was a bit too willing to oblige. Chisholm, it wrote, "effortlessly" did something, not "ridiculously", for example.
Next: "ChatGPT, did you write, 'When in the course of human events it becomes necessary for one nation ...'?" It responded "Yes," and then said that was from the Declaration of Independence. Apparently Thomas Jefferson was reborn as ChatGPT.
gulliver
(13,205 posts)
Thanks for posting!
Yavin4
(35,455 posts)
Maybe more in-class discussions, lectures, debates, oral arguments, etc. with a written follow-up of what happened in the class is the better approach. For example, have an actual discussion in class about a subject and then have the students write up a paper about the discussion.
I don't know for sure what will work, but ChatGPT, and other applications like it, are not going away. This is like when Napster dropped on the music industry. People still pirate music.
tinrobot
(10,927 posts)
When a technology can provide that much savings in time and effort, it will not go away. The productivity gains are just too much to ignore.
And education is not in a "shambles". It may become temporarily antiquated, but it will evolve -- just like it has with all new technologies. Not too long ago, pocket calculators were supposedly the end of math education (except they weren't).
highplainsdem
(49,122 posts)tinrobot
(10,927 posts)
And that is why it is not going away -- despite your protestations.
How about suggesting ways to keep people from getting "dumbed down" while also learning how to use these new tools?
Because that will be the challenge for education moving forward.
Why walk when gravity beds would move us more efficiently?
Y lrn 2 rite? Yuse spel chek.
Y voot? Let owr bedders duit.
Heard an engineer when discussing his kids taking calculus, "I hated calculus. You know, after university I never needed to use it. Not once. It's pointless. A waste of time--as a professional, you look shit up, and you use the same equations over and over and know their solutions. Or look them up. Or these days the software does it for you." Then he paused and walked that back big time. "But without knowing calculus I wouldn't know where the equations came from and their limits or why they work and when they don't work. Or how to use them." Nobody had to argue him into thinking his kids really *did* need calculus.
If you have servants that do everything including put food in your mouth, why bother to learn to feed yourself? Yeah, that's the extreme, but it's really the same thing. Let others do all the thinking and work for you and you're pointless and a parasite.
Note that AI doesn't *know* squat. It's a numerical paradigm that's good at replicating likely outcomes. You want tripe, you get tripe. Or worse. (Don't know what to make of Quanta's politics, but https://www.quantamagazine.org/ai-like-chatgpt-are-no-good-at-not-20230512/ .)
tinrobot
(10,927 posts)
If people could sample guitars, drums, or whatever, then why would anyone learn to play?
It was an understandable concern, but didn't really happen. People are still learning how to play instruments.
But -- sampling did allow some very creative people to come up with whole new genres of music such as EDM, hip hop, chill, etc.
I think that's where this technology is headed. It's not going to destroy anything, but it will open up new ways of creating.
And, yes, some kids will use it to cheat on their homework. But kids have always done that.
getagrip_already
(14,970 posts)While the magnitude may be greater, search engines like Google revolutionized education and changed the way students worked forever.
When I went to school in the 70's, the worst thing available was laughable CliffsNotes. Then came the web, and suddenly other students' works became available to use as "research".
There were also very easy to find resources that would even write papers for you.
So what we are dealing with now is just an evolution, though a seismic one, of a trend that has been occurring for generations.
Don't blame the players, blame the game.
highplainsdem
(49,122 posts)
cheating themselves as well.
Sam Altman's brain-dead solo decision to release this free cheating tool is one of the worst things to ever happen to education. Much worse than Cliff Notes, or the web. Using ChatGPT is very much like having someone else do the work for you, but it's easier and cheaper. It will dumb students down, and that's the last thing we need.
getagrip_already
(14,970 posts)
But the saying here has meaning. There is very definitely gamesmanship (gamespersonship?) involved in getting a degree, but the game rules are driven by for-profit corporations masquerading as nonprofit entities.
Maybe cynical, but just look at what students who want to go to medical, law, or business schools have to go through to get there. They need near-perfect academic records, and have workloads that can (and do) crush souls.
So of course students, like mice in a maze, will seek to optimize their efforts and maximize their returns.
We as a society created the rules, the students are just navigating them. I can't really blame them, or ai.
Igel
(35,390 posts)
The smart kids search smart. The dumb kids ...
A student was supposed to produce a quick report on mass-energy conversion and E = mc^2. Google to the rescue!
Presentation: Since a car uses gasoline to produce energy, it's a good example of Einstein's E = mc^2. It's converting gasoline, atoms, to energy.
Why? The Mighty Google said so. Was Google 100% wrong? Yeah.
Or bewail the endangered tree octopus. Or https://www.theverge.com/2013/1/5/3839946/wikipedia-hoax-about-bicholim-conflict-deleted-after-5-years .
Had one kid spend a period and a half searching for something for a project. Finally he said there was *nothing*! He was "good at Google". No hits. I screwed up. The 60+ year-old teacher (me) looked at him, sighed, took his computer, typed 5 words in the Google search bar and had 100 hits, the first page of hits all spot on. I shoved his computer at him. "An old man put you to shame. Embarrassed much?"
"How'd you do that?!" "I understand how search engines work. You don't. If you don't know how your tools work you're really just guessing." He looked at my incoherent search terms. "It's a Boolean search on these terms for the web pages. You have access to Boolean operators. Use them to restrict the search--the search engine doesn't care about English grammar and punctuation."
getagrip_already
(14,970 posts)
But like search, a user can be good at writing prompts or inexperienced.
There is no doubt that someone skilled at search can do much better research, and produce better works, than someone stuck in a library with only books to go through.
AI isn't really any different. It is a faster means to an end. It isn't something to fear.
If goals are driving the wrong behavior, we need to alter the goals and the rules. Not restrict access or use.
Free the AI five!
CTyankee
(63,926 posts)
to rehash or reinforce previous art historians' writings. I'm writing a book now on how artists have painted music through various means, but I invariably find another art historian has already done it. So I have to develop themes or use of color or even just the instruments themselves, i.e. the beautiful lute. Or ballet. It's interesting and fun but I know I'm not the first one to come up with the ideas I develop.
highplainsdem
(49,122 posts)
mindlessly by a large language model (LLM) AI thanks to algorithms piecing together other people's work that was ripped off to form a data set for training the AI.
Your perspective is valuable. Your own writing is valuable. You might have just the right take on something, or a way of explaining it, to strike a chord with your readers, to open their eyes to something new, even when the basic subject isn't entirely new.
CTyankee
(63,926 posts)
have, based on the same art history books I have consulted. I could concentrate on developing new twists, I suppose, but why wouldn't AI? I would find it interesting to debate AI on a particular artist or painting.
highplainsdem
(49,122 posts)
in a hundred sessions and get different responses every time to identical comments or prompts from you.
And whatever temporary facet of the chatbot is created in that moment by the algorithms may very well make up facts and make up citations to supposedly support those facts. And if called on the errors or hallucinations, either argue or apologize, and offer new "facts" and citations that are also very likely to be imaginary.
But it will be glib about it. The chatbot sounds as if it's aware and knows what it's talking about.
Please don't fall down a chatbot rabbit hole. Especially in terms of devaluing your own writing and expertise, and wondering if you should use ChatGPT. I remember your posting a couple of other messages wondering about this.
CTyankee
(63,926 posts)Tanuki
(14,930 posts)
artists painting music:
https://www.nytimes.com/2023/04/25/arts/design/guitar-frist-museum.html?smid=nytcore-ios-share&referringSource=articleShare&fbclid=IwAR1gh-2ndZu30hoAj7MF7BYOOhp-JP1Lx1DUor56STYSBkH83KhKv6VGvVQ
..."The guitar itself can have meaning, other than simply being beautiful or making music," said Mark Scala, chief curator at the Frist Art Museum in Nashville, where "Storied Strings: The Guitar in American Art," on view from May 26 to Aug. 13, will explore the guitar's symbolism in American art, from late 18th-century parlor rooms to today's concert halls.
On display will be more than 165 works: paintings, sculpture, photography, works on paper, illustrations, videos, music in multimedia presentations and musical instruments, including a rare cittern, a popular string instrument in the 18th and 19th centuries, and seminal guitars by Fender, Gibson and C.F. Martin & Company.
Twelve thematic sections, with names like "Cowboy Guitars," "Iconic Women of Early Country Music" and "Hispanicization," will weave in how artists and photographers have used the guitar as a visual motif to express the American experience and attitudes, from thorny issues like race and identity to the aesthetics of guitars themselves....(more, including some pictures from the exhibit, at link).
CTyankee
(63,926 posts)
How did I miss this? I read the NYT every day...
THANK YOU! I'll get to it right after my coffee...
highplainsdem
(49,122 posts)
in January with some links you might find interesting. See reply 6 in this thread
https://www.democraticunderground.com/103491669#post6
and both the links for some very old art about music. On second thought, the first link I had there, about Greensleeves, might not be helpful, but the article on the history of the guitar
https://earlymusicmuse.com/guitarhistory/
has some very interesting art, and that website in general
https://earlymusicmuse.com/
is fascinating.
Scrivener7
(51,087 posts)
learning how to read for meaning. If education stops including writing as practice for analysis, people will stop being able to read. They will stop having any discernment about the logic or truth of the things they read. This is already a bad problem. It is just going to get much, much worse.
Poiuyt
(18,134 posts)
I jokingly asked him if they were using it. To my surprise, he said they were, and that they had to, to keep up with the competition.
Still, it's troublesome to see it being used in schools and universities.
highplainsdem
(49,122 posts)
seen on ChatGPT usage overall, and the percent of students who say they use it, suggest that students using it for cheating are far and away the largest group of users.
You might want to ask your friend how many employees he's laid off thanks to ChatGPT, or if he's indefinitely postponed hiring anyone else. Copywriters and marketers are getting laid off. Worldwide.
https://www.pcmag.com/news/chinese-company-ditches-human-creatives-for-chatgpt-style-ai
Igel
(35,390 posts)
with cell phones.
A test question comes up. Algebra II.
They don't read the question. They use their phone's camera and app and the app gives them the answer.
They pick the answer.
"Teach, I got an A on my test!"
They confuse the grade with their learning and think they've mastered the content. Counselor puts them in AP chemistry or physics.
6 weeks in, failing, they drop down to level chem or physics.
6 weeks later, they're still failing. "But I got an A in algebra II!"
"But you can't do the math. You give the right answer but don't show your work. Here, show me you can do this." And the kid stares.
"I can't do it."
"So you fail. You cheated in algebra?" Deer in headlights.
"Yeah, you want to be an engineer? Sorry, you self-screwed. Hope it felt good. Maybe you can go into social studies. Say, how's pre-cal going?"
tinrobot
(10,927 posts)
Last edited Sun May 14, 2023, 12:35 PM - Edit history (1)
Students need to learn technologies they'll encounter in the real world. Otherwise, they won't be able to compete for jobs.
What would be more troublesome is schools and universities not teaching it.
Poiuyt
(18,134 posts)
Use it or get left behind.
moondust
(20,025 posts)
in the context of AI?
MineralMan
(146,351 posts)
without notes. That'll stop this crap.
Bettie
(16,148 posts)
cheating on a test or on an essay is really cheating yourself.
I went to college to get an education. Cheating would have given me the degree, but not the knowledge.
Seems like a waste of time and money to go through the motions of education without actually doing it, if you are going to choose not to learn.
I guess that makes me a cranky old lady.