AIXIO S1:Lesson Three

Thinking Skills and ChatGPT

READ here, or on page 6 of the coursebook …

READ here, or on page 7 of the coursebook …

User Beware – ‘ChatGPT Hallucinated Wildly’

At the end of the day, all AI is trained and prompted by human brains that may be biased, moody, greedy, lazy or even hallucinating, so it’s no surprise that AI and ChatGPT can, and do, mimic these traits. For example …

A lawyer asked ChatGPT for examples of cases that supported an argument they were trying to make. ChatGPT, as it often does, hallucinated wildly—it invented several supporting cases out of thin air. When the lawyer was asked to provide copies of the cases in question, they turned to ChatGPT for help again—and it invented full details of those cases, which they duly screenshotted and copied into their legal filings. At some point, they asked ChatGPT to confirm that the cases were real... and ChatGPT said that they were. They included screenshots of this in another filing. The judge is furious. Many of the parties involved are about to have a very bad time.

FOOTNOTE: The judge subsequently fined the lawyer $5000.

Today’s DFQ: What is your biggest real fear when you think about AI and ChatGPT?

21 thoughts on “AIXIO S1:Lesson Three”

  1. I fear AI and ChatGPT will go the way of Google, and the gatekeepers will limit and filter outputs to manage thought and socially engineer. I see that already when I ask for certain info. The power hungry cannot let us have freedom of thought.

  2. The biggest fear is that the level of creativity, and possibly the intellectual capacity and resilience, of humans will decrease, and that dependence on ‘copy & paste’ research, assumptions and so-called solutions will increase the critical mass of those unable to perform unassisted thinking.

  3. In prompting different questions around fear and ChatGPT, following on from my original thought, my thinking arrived at “user education”. New technology needs instruction (in my experience, complex tech is not intuitive).

  4. As an early adopter of technology, I am excited by my interaction with ChatGPT – I can see that it is a useful tool to access content to flesh out my consideration of business issues in a more efficient way than those I have used in the past.
    Accordingly, I am not concerned about using AI (ChatGPT) at this stage of my journey and have no fears. As I press on into more applications of AI my views may change, and I will continue to critically review my progress.

  5. Humans have progressively replaced the real world with a virtual world which is more and more difficult to escape from. My greatest fear is that AI
    will amplify the worst elements of social media, in that people will be more likely to live in echo chambers and lose contact with reality.

  6. I fear that I will make a serious decision based on misreading incorrect information that I want to believe for emotional reasons.

  7. I do not fear it; I have used it 8 hours per day for the past three years. However, I was well trained in the past by Michael H. Gleeson and I am equipped with the right tools. My concern is that non-equipped people dive into GPT blindfolded – like driving a car blindfolded and enjoying the ride until they crash. All our team use GPT daily, and at the same time we have built a quality-control department to analyze individuals’ behavior while they build their relationship with a “smart” tech. When you are not trained well you can damage your mind. The problem is not how you train the machine but how you train the mind. We have developed models to prompt the mind and the machine in parallel.

  8. My biggest real fear is the damage ChatGPT can do to users, especially the young generation, where over-reliance on it or similar tools can kill their CREATIVITY and ORIGINALITY in problem-solving and create an over-reliance on pre-programmed solutions. Already we see the growing tendency of users to swiftly jump to Google, and now ChatGPT, for any question posed to them. Applied especially in academics, this will breed laziness, making their thinking muscles wither instead of exercising them to gain strength.

    Continued use of ChatGPT or similar tools will result in ‘inbreeding of knowledge’, which I can equate to the way inferior generations result from inbreeding in animal reproduction. There is a high risk of ChatGPT abuse whereby, instead of using it to supplement and complement our own original thinking, it is going to replace it.

  9. I worry that as a society we will become too restrictive about the use of AI. I think a lot of the AI Doomer arguments don’t hold up to the scientific method. For the first time, the issues blue-collar workers had with offshoring may well come home to roost for knowledge workers. I’m a real optimist, and I see great opportunity for AI to be a FAC (friend, assistant and coach) and an amazing tool for working on significant issues like mental health. Of course there are risks with any tech, but dramatic change brings outsized fears.

  10. My biggest fear is the difficulty in discerning or eliminating the ‘fashionably irrational beliefs’ that constitute a significant part of the information with which current AI systems are programmed.

