ChatGPT: Responsibility for faulty behavior of the machine is borne by the human being, 31.03.2023
last edited: Tue, 18 Apr 2023 16:52:08 +0200
nmoplus@realtime.fyi
Systems like ChatGPT therefore require a lot of capital, but the assumption that artificial intelligence can be created through the use of statistical AI is and remains doubtful. An AI based on semantic understanding would deliver significantly better results with far less computational effort. If that succeeds, ChatGPT will be dead tomorrow.
"How does artificial intelligence work? A conversation with Ranga Yogeshwar about ChatGPT and the aftermath (1)" by Michael Knoche, published 24 March 2023, updated 27 March 2023, https://biblio.hypotheses.org/5284 [1]
Because I was curious, I consulted the ChatGPT oracle.
> ChatGPT survey: begin
question: In an interview "How does artificial intelligence work? A conversation with Ranga Yogeshwar about ChatGPT and the consequences" Ranga Yogeshwar says: "An AI based on semantic understanding would deliver significantly better results with far less computational effort. If that succeeds, ChatGPT will be dead tomorrow."
ChatGPT, what is the state of the art, what are the hurdles, what are the difficulties to overcome in applying the semantic approach to bring performance comparable to ChatGPT?
ChatGPT answer: The semantic approach to AI involves understanding the meaning and context of language, rather than simply recognizing patterns and statistical correlations. While this approach has shown promise in improving the accuracy and efficiency of language processing tasks, there are still several hurdles to overcome before it can reach the level of performance seen in models like ChatGPT.
One of the main challenges is developing effective algorithms for semantic analysis that can accurately capture the complex nuances of language. This requires a deep understanding of not only individual words and phrases, but also their relationships to one another and the broader context in which they are used.
Another challenge is the need for large amounts of high-quality training data to train these algorithms, which can be difficult to obtain in certain domains or languages.
Finally, there is also the issue of computational resources, as semantic analysis can be computationally intensive and require significant processing power to run efficiently.
Despite these challenges, progress is being made in the development of semantic AI, and it is possible that in the future, models based on this approach could surpass the performance of models like ChatGPT. However, it is also possible that these different approaches to AI will continue to coexist and be used in different applications depending on their specific strengths and limitations.
> ChatGPT survey: end
Generated on https://chat.openai.com/ on 31.03.2023
My prognosis is that ChatGPT & Co. will be superior to humans in the "expert knowledge" application context, in which humans ensure that the language models are trained with sufficient data of good quality. In my opinion, there is no scientific basis for claiming that ChatGPT can "hallucinate". It seems as if this attribution is simply adopted by many science journalists without adequately checking whether it is really correct.
Ranga Yogeshwar studied experimental physics and is a freelance science journalist. In his work he has repeatedly dealt with artificial intelligence. I asked him: How do systems like ChatGPT work? Are they really "intelligent"? What happens when they are used on a large scale? How would this affect libraries? The conversation appears in 4 parts. [1]
[...]
Ranga Yogeshwar: First of all, ChatGPT is a statistical AI. Words are strung together according to purely statistical criteria. In the past this worked very well for image recognition, but for text comprehension it remained a problem until the so-called transformer models were introduced. In 2019 the first papers came out, such as Training language models to follow instructions with human feedback, with an abstract that said: "Note, the abstract above was not written by the authors, it was generated by one of the models presented in this paper." I found it very remarkable that the snake is now biting its own tail. However, ChatGPT is and remains a dice game and even starts to hallucinate.
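Yogeshwar's "dice game" can be made concrete: a language model assigns probabilities to possible next words and then samples one of them. The following minimal sketch is my own illustration, not code from the interview; the vocabulary and the probability values are invented, and only the temperature-controlled sampling mechanism reflects how models like ChatGPT actually pick the next token:

```python
import math
import random

# Hypothetical scores ("logits") the model has computed for the next word
# after the context "The capital of France is" -- the numbers are made up.
logits = {"Paris": 5.1, "Lyon": 2.3, "Berlin": 1.9, "pizza": 0.4}

def sample_next_token(logits, temperature=1.0):
    """Sample one token: higher temperature means more of a 'dice game'."""
    # Scale the scores by temperature, then turn them into probabilities
    # with a softmax.
    scaled = {tok: val / temperature for tok, val in logits.items()}
    total = sum(math.exp(v) for v in scaled.values())
    probs = {tok: math.exp(v) / total for tok, v in scaled.items()}
    # Roll the dice over the resulting distribution.
    r = random.random()
    cumulative = 0.0
    for tok, p in probs.items():
        cumulative += p
        if r < cumulative:
            return tok
    return tok  # fallback for floating-point rounding

print(sample_next_token(logits, temperature=0.2))  # almost always "Paris"
print(sample_next_token(logits, temperature=2.0))  # occasionally "pizza"
```

With a low temperature the dice are heavily loaded towards the most probable word; with a high temperature improbable words come up more often. In neither case does the machine perceive anything without a stimulus: it only ever rolls over the distribution that the prompt and the training data have set up.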
Hallucination (from the Latin alucinatio, 'dreaming') is a perception for which there is no demonstrable external stimulus basis. Such perceptions can occur in any sensory modality. This means, for example, that physically undetectable objects are seen or voices are heard without anyone speaking. (https://de.wikipedia.org/wiki/Halluzination, translated with Deepl.com)
If ChatGPT or any other AI makes a statement, then IMHO there is always a stimulus: the data entered as a prompt is precisely that stimulus. Therefore, in my opinion, describing the behavior of an AI that makes inaccurate statements as "hallucination" does not apply.
The machines "hallucinate" according to rules defined by humans ;-). So the AI, or the language model, hallucinates on behalf of the inventor of the model. When a car drives in circles in the middle of nowhere, nobody thinks of attributing the ability to "hallucinate" to the car on the grounds that it supposedly invented a non-existent or meaningless route by itself.
= Responsibility for counterproductive behavior of the machine is borne by humans =
Not the machine, but the author of the respective model or algorithm, or the users who operate this machine, are responsible when the machine behaves differently than expected - "hallucinates", according to some people. IMHO it is important to get away from the term "hallucination", because it is simply misleading here. Using the term "hallucination":
- misleads the discussion and detaches it from the subject matter
- attributes hallucinatory behavior to the machine, although this behavior has nothing to do with a "hallucination" in the sense of the Wikipedia definition
- IMHO distracts from the fact that responsibility for the machine's counterproductive behavior lies with humans: either with the developer of the language model or with the user who, upon discovering the counterproductive behavior, would rather spend the time reporting that behavior in the media. I have yet to read or hear of any "hallucination" reporter investing their time in giving feedback to the language model provider and then making sure that the counterproductive behavior in the given context was corrected.
The condescending attitude of some forum participants towards the new phenomenon "artificial intelligence" - see "ChatGPT goes Powershell" - is one thing. You can understand this attitude to some extent: in today's world, shaped by "political correctness", there is hardly any other way left to assert one's own superiority. (https://administrator.de/forum/kuenstliche-intelligenz-hat-imho-respekt-verdient-6235658495.html)
What worries me is that journalists are slipping into the same condescending, top-down attitude in their reporting. In my opinion, a prime example is the article "ChatGPT ist lustig, aber Zeitverschwendung" on Inside IT by Eva Wolfangel, where she is presented not only as a journalist but also as a cultural scientist.
and
Tech charlatans and U.S. Senators are now spreading misinformation about predictive AI tools, which are not sentient. (https://www.vice.com/en/article/ak3zbp/for-the-love-of-god-ai-chatbots-cant-decide-to-do-anything; bold emphasis is mine, @neue medienordnung plus)
[...]
Even when it’s fun to imagine, the idea of language models as a nascent superintelligent AI benefits the corporations creating them. If large swaths of the public believe that we are on the cusp of giving birth to advanced machine intelligence, the hype not only pads the bottom line of companies like Google and OpenAI, but helps them avoid taking responsibility for the bias and harm that result from those systems.
A better fit, IMHO: the machine errs.
And the machine errs on behalf of the inventor of the model, or on behalf of the users who supplied it with misleading data in the prompt. It is important to understand the role of the prompt when using ChatGPT:
> ChatGPT survey: begin
question: I ask for an understandable explanation of the term "prompt" for the participants of an adult education course "How ChatGPT works and how to use it".
ChatGPT answer: Sure! In the context of ChatGPT, a prompt is a piece of text that you enter to start a conversation with the AI model. You can think of it as a starting point or a question that you ask the AI to generate a response to. The prompt can be a single sentence or a longer paragraph, depending on what you want to ask or talk about.
For example, if you want to ask ChatGPT for a recipe, your prompt could be something like "Can you suggest a recipe for lasagna?" or "How do I make a vegan lasagna?". ChatGPT will then generate a response based on its training on a large dataset of text, and try to provide an answer that is relevant and coherent with the prompt. So, in short, a prompt is the text input that you give to ChatGPT to start a conversation and receive a response.
> ChatGPT survey: end
Generated on https://chat.openai.com/ on 31.03.2023
The definition quoted above explains very precisely that a prompt for the language model is a starting point, a trigger, a stimulus. The content generated by the language model depends on what is contained in the prompt. Given that ChatGPT works on a prompt basis, the thesis that it hallucinates cannot be considered scientifically sound.
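To make the role of the prompt tangible, here is a minimal sketch of sending a prompt programmatically. This is my own illustration, not part of the original post; it uses the openai Python package as it worked in early 2023 (version 0.x - newer versions have a different interface), and the model name and API-key handling are assumptions:

```python
import os
import openai

# Assumes an API key is available in the environment; this setup is my
# assumption, not something from the original post.
openai.api_key = os.environ["OPENAI_API_KEY"]

# The prompt is the stimulus: everything the model generates is a
# response to exactly this input.
prompt = "Can you suggest a recipe for lasagna?"

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",            # assumed model name (early 2023)
    messages=[{"role": "user", "content": prompt}],
    temperature=0.7,                  # >0 keeps generation a "dice game"
)

print(response["choices"][0]["message"]["content"])
```

Every word of the answer is generated in response to exactly this input; change the prompt, and the stimulus - and with it the output - changes.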
= Source list =
[1] "How does artificial intelligence work? A conversation with Ranga Yogeshwar about ChatGPT and the aftermath (1)" by Michael Knoche, published 24 March 2023, updated 27 March 2023, https://biblio.hypotheses.org/5284
Texts from German-language sources were translated with the translation feature integrated into the Chromium browser.
#Halluzination #halluzinieren #ChatGPThalluziniert #kontraproduktivesVerhalten #Qualitätssicherung #Wissenschaftsjournalistinnen #Wissenschaftsjournalist #Wissenschaftsjournalistin #FeedbackButton #Feedback #ChatGPTDiary #chatGPT #JanusRose #TechScharlatane #RangaYogeshwar #EvaWolfangel #Prompt #ChatGPTPrompt #PromptChatGPT #PromptEngineering #MICHAELKNOCHE