Lesson 22 TBD: Thinking Skills and AI

User Beware – ‘ChatGPT Hallucinated Wildly’

At the end of the day, all AI is trained and prompted by human brains that may be biased, moody, greedy, lazy or even hallucinating, so it’s no surprise that AI and ChatGPT can, and do, mimic these traits. For example …

A lawyer asked ChatGPT for examples of cases that supported an argument they were trying to make. ChatGPT, as it often does, hallucinated wildly—it invented several supporting cases out of thin air. When the lawyer was asked to provide copies of the cases in question, they turned to ChatGPT for help again—and it invented full details of those cases, which they duly screenshotted and copied into their legal filings. At some point, they asked ChatGPT to confirm that the cases were real... and ChatGPT said that they were. They included screenshots of this in another filing. The judge is furious. Many of the parties involved are about to have a very bad time.

FOOTNOTE: The judge subsequently fined the lawyer $5000.

Lesson 22 DFQ: What is your biggest real fear when you think about AI and ChatGPT?

Next Lesson: The Solution

31 thoughts on “Lesson 22 TBD: Thinking Skills and AI”

  1. Cannot even begin to list these… Frankly, in one sentence, it would be that I (and humans generally, for that matter) will be deprived of our ability to think clearly and independently and will inevitably become reliant on AI. I suspect that if that happens in a work setting it would also be extremely dangerous, especially in professions like law, accounting and auditing, where critical thinking is crucial. I am also concerned about delegating crucial aspects of our functioning, and losing control (or, more accurately, OUR ABILITY to control) of outcomes.

  2. Wowee!
    Where to start? I fear that overuse will take over personalised jobs, e.g. writing teacher reports for fee-paying families.

  3. That it accelerates confidence without deepening understanding. People who can ask AI questions feel smarter without necessarily thinking better. The intelligence trap gets wider, not smaller.

  4. My greatest fear is that the working classes will be put out of work and that large amoral corporations will profit still further.

  5. My greatest fear is that a concerted effort will be made by an organized and monied group of racists to pollute ChatGPT with extremely racist and untrue stories about black and brown people, and that non-thinking people will believe the lies as if they were facts.

  6. Like any tool that one uses, ChatGPT’s results need to be verified against multiple sources. The rule is: trust, but verify.

  7. My biggest fear is that LLMs like ChatGPT will just hallucinate and make up answers to badly constructed questions.

  8. My biggest concern is people making decisions based on information provided by ChatGPT. My own experiments with it have shown some pretty glaring errors. As we said in Lesson 2, it really relies on the expertise of the person assessing the output.
