Who needs tuition or teachers anymore these days? Why waste money?


Windwaver

Recommended Posts

https://sg.style.yahoo.com/quit-teaching-because-chatgpt-173713528.html

I Quit Teaching Because of ChatGPT

This fall is the first in nearly 20 years that I am not returning to the classroom. For most of my career, I taught writing, literature, and language, primarily to university students. I quit, in large part, because of large language models (LLMs) like ChatGPT.

Virtually all experienced scholars know that writing, as historian Lynn Hunt has argued, is “not the transcription of thoughts already consciously present in [the writer’s] mind.” Rather, writing is a process closely tied to thinking. In graduate school, I spent months trying to fit pieces of my dissertation together in my mind and eventually found I could solve the puzzle only through writing. Writing is hard work. It is sometimes frightening. With the easy temptation of AI, many—possibly most—of my students were no longer willing to push through discomfort.

In my most recent job, I taught academic writing to doctoral students at a technical college. My graduate students, many of whom were computer scientists, understood the mechanisms of generative AI better than I do. They recognized LLMs as unreliable research tools that hallucinate and invent citations. They acknowledged the environmental impact and ethical problems of the technology. They knew that models are trained on existing data and therefore cannot produce novel research. However, that knowledge did not stop my students from relying heavily on generative AI. Several students admitted to drafting their research in note form and asking ChatGPT to write their articles.

As an experienced teacher, I am familiar with pedagogical best practices. I scaffolded assignments. I researched ways to incorporate generative AI in my lesson plans, and I designed activities to draw attention to its limitations. I reminded students that ChatGPT may alter the meaning of a text when prompted to revise, that it can yield biased and inaccurate information, that it does not generate stylistically strong writing and, for those grade-oriented students, that it does not result in A-level work. It did not matter. The students still used it.

In one activity, my students drafted a paragraph in class, fed their work to ChatGPT with a revision prompt, and then compared the output with their original writing. However, these types of comparative analyses failed because most of my students were not developed enough as writers to analyze the subtleties of meaning or evaluate style. “It makes my writing look fancy,” one PhD student protested when I pointed to weaknesses in AI-revised text.

My students also relied heavily on AI-powered paraphrasing tools such as Quillbot. Paraphrasing well, like drafting original research, is a process of deepening understanding. Recent high-profile examples of “duplicative language” are a reminder that paraphrasing is hard work. It is not surprising, then, that many students are tempted by AI-powered paraphrasing tools. These technologies, however, often result in inconsistent writing style, do not always help students avoid plagiarism, and allow the writer to gloss over understanding. Online paraphrasing tools are useful only when students have already developed a deep knowledge of the craft of writing.

Students who outsource their writing to AI lose an opportunity to think more deeply about their research. In a recent article on art and generative AI, author Ted Chiang put it this way: “Using ChatGPT to complete assignments is like bringing a forklift into the weight room; you will never improve your cognitive fitness that way.” Chiang also notes that the hundreds of small choices we make as writers are just as important as the initial conception. Chiang is a writer of fiction, but the logic applies equally to scholarly writing. Decisions regarding syntax, vocabulary, and other elements of style imbue a text with meaning nearly as much as the underlying research.

Generative AI is, in some ways, a democratizing tool. Many of my students were non-native speakers of English. Their writing frequently contained grammatical errors. Generative AI is effective at correcting grammar. However, the technology often changes vocabulary and alters meaning even when the only prompt is “fix the grammar.” My students lacked the skills to identify and correct subtle shifts in meaning. I could not convince them of the need for stylistic consistency or the need to develop voices as research writers.

The problem was not recognizing AI-generated or AI-revised text. At the start of every semester, I had students write in class. With that baseline sample as a point of comparison, it was easy for me to distinguish between my students’ writing and text generated by ChatGPT. I am also familiar with AI detectors, which purport to indicate whether something has been generated by AI. These detectors, however, are faulty. AI-assisted writing is easy to identify but hard to prove.

As a result, I found myself spending many hours grading writing that I knew was generated by AI. I noted where arguments were unsound. I pointed to weaknesses such as stylistic quirks that I knew to be common to ChatGPT (I noticed a sudden surge of phrases such as “delves into”). That is, I found myself spending more time giving feedback to AI than to my students.

So I quit.

The best educators will adapt to AI. In some ways, the changes will be positive. Teachers must move away from mechanical activities or assigning simple summaries. They will find ways to encourage students to think critically and learn that writing is a way of generating ideas, revealing contradictions, and clarifying methodologies.

However, those lessons require that students be willing to sit with the temporary discomfort of not knowing. Students must learn to move forward with faith in their own cognitive abilities as they write and revise their way into clarity. With few exceptions, my students were not willing to enter those uncomfortable spaces or remain there long enough to discover the revelatory power of writing.



I had a lousy maths teacher.

We had an equation we didn't understand.

So he wrote the answer on the blackboard.

Then he asked us which bit we didn't understand.

And we all said all of it.

If you understand maths, one glance and you can see how they derived the solution.

If you don't understand it, it all looks like Greek symbols.

He understood maths too well to explain it.

The best teacher is the person who didn't understand at first and took a long time to understand, because then he can explain it.

ChatGPT can give you the answer, but can it teach the kid how to answer?

  • Praise 1

On 10/3/2024 at 10:21 AM, Jamesc said:

I had a lousy maths teacher.

We had an equation we didn't understand.

So he wrote the answer on the blackboard.

Then he asked us which bit we didn't understand.

And we all said all of it.

If you understand maths, one glance and you can see how they derived the solution.

If you don't understand it, it all looks like Greek symbols.

He understood maths too well to explain it.

The best teacher is the person who didn't understand at first and took a long time to understand, because then he can explain it.

ChatGPT can give you the answer, but can it teach the kid how to answer?

Ctrl A, Ctrl C, Ctrl V :D

  • Haha! 5

Internal Moderator
On 10/3/2024 at 10:21 AM, Jamesc said:

I had a lousy maths teacher. […]

ChatGPT can give you the answer, but can it teach the kid how to answer?

You just gave ChatGPT a problem to solve! Maybe ChatGPT 5o can solve this liao! 😆

  • Haha! 1

Internal Moderator

I think if the kids have the means to find the answer themselves using AI, it isn't a bad thing at all.

At least they spend the effort to get the problem solved.

The worst is you send the kid to tuition, he just wastes his time waiting for class to end, then comes back empty-handed. That one then headache ah.


But cannot take ChatGPT into the exam and ask ChatGPT to write the answer, right?

And working next time how?

Boss asks a question.

Tell boss wait ah.

Then run to toilet and ask ChatGPT.

Tell boss the answer.

Boss asks a follow-up question, then what, run to toilet again huh?

Then boss will say don't need you, my phone also got ChatGPT.

  • Haha! 1

How to use ChatGPT effectively?

You need good prompting skills and to define the case well, i.e. you need to ask intelligent questions or frame the questions effectively for the LLM to work effectively.

A user who can't think properly will just use ChatGPT to essentially vomit out the textbook or solve the maths/science question without even knowing what the answer should look like.

Recently the Govt wants to introduce AI to primary schools? Come on, the kids can't even think properly yet. How to get them to use these LLMs effectively?

The less technology they use at a younger age the better. Unless it's something specific like coding, whereby you need a computer to execute the instructions.
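The point about framing the question well can be sketched in plain Python: a prompt that spells out the task, context, constraints, and expected output format gives the model far less room to guess. The helper and its field names below are purely illustrative, not any real API:

```python
def build_prompt(task: str, context: str, constraints: list[str], output_format: str) -> str:
    """Frame a request for an LLM: state the task, give background context,
    set explicit constraints, and pin down the expected answer format."""
    return (
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Constraints: {'; '.join(constraints)}\n"
        f"Answer format: {output_format}"
    )

# A vague prompt leaves the model to guess audience, depth, and format:
vague = "explain photosynthesis"

# A framed prompt pins all of that down:
framed = build_prompt(
    task="Explain photosynthesis",
    context="Secondary 2 student revising for a biology test",
    constraints=["under 150 words", "no chemical equations"],
    output_format="three short bullet points",
)
```

The difference is not magic, just specification: the framed version tells the model who the answer is for and what shape it should take, which is exactly the thinking step a weak user skips.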
 

 

Edited by Lala81
  • Praise 2

People have to understand what ChatGPT is.

ChatGPT scans the internet and plucks answers to questions.

But ChatGPT does not know if the answer is right or wrong.

I'll give an example.

If my MIL was mysteriously murdered and you asked ChatGPT who did it.

ChatGPT will say jamesc, based on the numerous postings of jamesc saying he is going to do his MIL in.

But ChatGPT does not understand humour, and if jamesc really wanted to do his MIL in, he is not going to tell the whole world a million times.

ChatGPT does not have a nose to smell BS like humans do.

And as everyone in MCF knows, me doing my MIL in is all just pure BS.

  • Haha! 8

On 10/3/2024 at 12:40 PM, Jamesc said:

People have to understand what ChatGPT is. […]

And as everyone in MCF knows, me doing my MIL in is all just pure BS.

That's cos you already did her in.

 

Edited by Lala81
  • Haha! 5

To cut a long story short:

ChatGPT is like the student who copied the answers from the kid sitting next to him.

All the wrong answers also copy.

:D


On 10/3/2024 at 12:42 PM, Lala81 said:

That's cos you already did her in.

 

No I didn't!

She accidentally died all by herself!

:D

  • Haha! 3

I even have proof I didn't do it.

I asked my fren to drive to Jurong with my handphone when it happened.

So when polis check, they will see my handphone was miles away in Jurong.

Nowhere near where she died!

Edited by Jamesc

If people don't believe my MIL is alive and kicking, please ask ChatGPT:

Is the MIL of jamesc from MCF alive and kicking?

Or did he murder her?

Edited by Jamesc
  • Haha! 1

On 10/3/2024 at 9:54 AM, Windwaver said:

https://sg.style.yahoo.com/quit-teaching-because-chatgpt-173713528.html

I Quit Teaching Because of ChatGPT

This fall is the first in nearly 20 years that I am not returning to the classroom. […] With few exceptions, my students were not willing to enter those uncomfortable spaces or remain there long enough to discover the revelatory power of writing.

Not really true. Even with Google, I find that young people don't really know how to get the information themselves.

Formulating a targeted search query is a necessary skill which they don't have. Still end up asking teacher [laugh] *and teacher googles*

ChatGPT is even harder to get an accurate and correct answer from, as it only gives you one reply each time. At least Google search gives you a whole bunch of results that you can sift through and piece together the answer from.

If anything, I find that tuition demand will lessen because of recent education policy changes that de-emphasise academic results. Good example: those enrichment centres that prepare students to get into GEP? All can close shop soon. Perhaps they can pivot to offering DSA activity classes. 'A' levels tuition is also in less demand now as more students choose the poly path. And poly tuition is usually not substantial, just cramming a few weeks of lessons before the exam, so there is no long-term business.


Internal Moderator
On 10/3/2024 at 11:59 AM, Jamesc said:

But cannot take ChatGPT into the exam and ask ChatGPT to write the answer, right? […]

Then boss will say don't need you, my phone also got ChatGPT.

Boss rich, can buy iPhone 16 Pro Max. Got Apple Intelligence.

 

  • Haha! 1

On 10/3/2024 at 2:37 PM, kobayashiGT said:

Boss rich, can buy iPhone 16 Pro Max. Got Apple Intelligence.

Apple Intelligence does not exist yet.

It's all vaporware.

:D

And whoever bought the iPhone 15 or 15 Plus cannot run Apple Intelligence.

They have to upgrade to the 16 or later!

Edited by Jamesc
  • Haha! 1

On 10/3/2024 at 2:14 PM, Sosaria said:

Not really true, even with Google, I find that young people don't really know how to get the information themselves.

Not only Google.

On MCF also, newbies don't know how to use the search function to find answers to questions that were already answered.

There is so much information on the net, and yet so many people still don't know so many things.

I think people these days are even dumber than our generation.

  • Praise 1
  • Haha! 1
