Lesson 22 TBD: Thinking Skills and AI

User Beware – ‘ChatGPT Hallucinated Wildly’

At the end of the day, all AI is trained and prompted by human brains that may be biased, moody, greedy, lazy, or even hallucinating, so it’s no surprise that AI and ChatGPT can, and do, mimic these traits. For example …

A lawyer asked ChatGPT for examples of cases that supported an argument they were trying to make. ChatGPT, as it often does, hallucinated wildly—it invented several supporting cases out of thin air. When the lawyer was asked to provide copies of the cases in question, they turned to ChatGPT for help again—and it invented full details of those cases, which they duly screenshotted and copied into their legal filings. At some point, they asked ChatGPT to confirm that the cases were real... and ChatGPT said that they were. They included screenshots of this in another filing. The judge is furious. Many of the parties involved are about to have a very bad time.

FOOTNOTE: The judge subsequently fined the lawyer $5000.

Lesson 22 DFQ: What is your biggest real fear when you think about AI and ChatGPT?

Next Lesson: The Solution

31 thoughts on “Lesson 22 TBD: Thinking Skills and AI”

  1. My biggest fear is the uncritical use of the output and the potential to lessen the development of our thinking skills.

  2. My biggest real fear about AI and ChatGPT is the competitive disadvantage that a group, or even a country, may create for itself by not trying to understand, tame, use and improve this tool.

  3. My biggest fear is the use of AI/ChatGPT to create deliberate misinformation for political, economic and social reasons… those without critical thinking skills, who rely on social media or news outlets that lean a certain way may be unwilling to fact-check their views and arguments. COVID and Q-Anon are examples of people consuming information without discernment. I fear that it may become increasingly difficult to tell factual content from fictional.

  4. As computers became smarter, we’ve outsourced more of our critical thinking. Smart phones accelerated this, with the advent of apps. It seems AI and ChatGPT could take us to the next level, and the world may become a scary place.

  5. One thing that really freaks me out when I think about AI is the idea of losing control. It’s like, what if these AI systems become way too smart for us and start calling the shots, manipulating our decisions and actions?

  6. That I’m wasting my time learning to prompt an LLM that is currently only useful for junior-level tasks.

  7. Over dependence and the resulting loss, or opportunity to gain, basic skills and the ability to think critically for oneself.

  8. Anyone can use ChatGPT to create a course and come off as an expert, even though they have not worked to gain genuine knowledge, skills and experience… things that can only be learned through the school of life and hard knocks!

    1. Social uncertainty and disruption in transition. Our current economic model incentivises efficiencies and reserves increases in profitability for owners and the top echelons of management. The increases in leisure time predicted over the last 50 years or so have not materialised. If the forecasts of the numbers of jobs to be affected by AI and ChatGPT are accurate, there is potential for a significant increase in the number of the economically inactive, with resulting political and economic challenges for job creation, income redistribution, social and welfare provision, and the management of social unrest.
