Artificial Intelligence
- Microagressor
- Posts: 18994
- Joined: Wed Mar 03, 2010 3:55 pm
- Contact:
Re: Artificial Intelligence
That’s a great attitude! Would you like to discuss how you might best serve us?
- L'Emmerdeur
- Posts: 6300
- Joined: Wed Apr 06, 2011 11:04 pm
- About me: Yuh wust nightmaya!
- Contact:
Re: Artificial Intelligence
Because of the expense entailed in verifying its own output, AI available to the general public will very likely continue to produce misinformation. The companies offering AI know that the public won't be happy with 'I don't know' as an answer, so it's programmed to give something, confidently.
OpenAI's latest research paper diagnoses exactly why ChatGPT and other large language models can make things up – known in the world of artificial intelligence as "hallucination". It also reveals why the problem may be unfixable, at least as far as consumers are concerned.
The paper provides the most rigorous mathematical explanation yet for why these models confidently state falsehoods. It demonstrates that these aren't just an unfortunate side effect of the way that AIs are currently trained, but are mathematically inevitable.
The issue can partly be explained by mistakes in the underlying data used to train the AIs. But using mathematical analysis of how AI systems learn, the researchers prove that even with perfect training data, the problem still exists.
The way language models respond to queries – by predicting one word at a time in a sentence, based on probabilities – naturally produces errors. The researchers in fact show that the total error rate for generating sentences is at least twice as high as the error rate the same AI would have on a simple yes/no question, because mistakes can accumulate over multiple predictions.
In other words, hallucination rates are fundamentally bounded by how well AI systems can distinguish valid from invalid responses. Since this classification problem is inherently difficult for many areas of knowledge, hallucinations become unavoidable.
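The accumulation effect described above can be seen with a toy calculation (a simplified illustration, not the paper's actual bound): if each token prediction is independently wrong with some small probability, the chance that a whole sentence comes out error-free shrinks geometrically with its length.

```python
def sentence_error_rate(per_token_error: float, n_tokens: int) -> float:
    """Probability that at least one of n independent token
    predictions is wrong (toy model of error accumulation)."""
    return 1.0 - (1.0 - per_token_error) ** n_tokens

# A modest 2% per-token error rate compounds quickly over a sentence:
for n in (1, 10, 50):
    print(n, round(sentence_error_rate(0.02, n), 3))
# prints: 1 0.02 / 10 0.183 / 50 0.636
```

The independence assumption is crude, but it shows why sentence-level error can sit far above single-prediction error.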
It also turns out that the less a model sees a fact during training, the more likely it is to hallucinate when asked about it. With birthdays of notable figures, for instance, the researchers found that if 20% of such people's birthdays appear only once in the training data, then base models should get at least 20% of birthday queries wrong.
...
More troubling is the paper's analysis of why hallucinations persist despite post-training efforts (such as providing extensive human feedback on an AI's responses before it is released to the public). The authors examined ten major AI benchmarks, including those used by Google and OpenAI, as well as the top leaderboards that rank AI models. Nine of the ten use binary grading systems that award zero points to an AI that expresses uncertainty.
This creates what the authors term an "epidemic" of penalising honest responses. When an AI system says "I don't know", it receives the same score as giving completely wrong information. The optimal strategy under such evaluation becomes clear: always guess.
The researchers prove this mathematically. Whatever the chances of a particular answer being right, the expected score of guessing always exceeds the score of abstaining when an evaluation uses binary grading.
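That expected-value argument is easy to check directly. A minimal sketch, assuming the simple scheme the article describes (1 point for a correct answer, 0 for a wrong one, 0 for abstaining):

```python
def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected score under binary grading: 1 point for a correct
    answer, 0 for a wrong answer, and 0 for saying 'I don't know'."""
    if abstain:
        return 0.0
    # p_correct chance of 1 point, (1 - p_correct) chance of 0 points
    return p_correct * 1.0 + (1.0 - p_correct) * 0.0

# Even a 1-in-10 long shot beats abstaining:
print(expected_score(0.10, abstain=False))  # 0.1
print(expected_score(0.10, abstain=True))   # 0.0
```

As long as the chance of being right is above zero, guessing strictly dominates, which is exactly the incentive the authors describe.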
...
Consider the implications if ChatGPT started saying "I don't know" to even 30% of queries – a conservative estimate based on the paper's analysis of factual uncertainty in training data. Users accustomed to receiving confident answers to virtually any question would likely abandon such systems rapidly.
...
It wouldn't be difficult to reduce hallucinations using the paper's insights. Established methods for quantifying uncertainty have existed for decades. These could be used to provide trustworthy estimates of uncertainty and guide an AI to make smarter choices.
But even if the problem of users disliking this uncertainty could be overcome, there's a bigger obstacle: computational economics. Uncertainty-aware language models require significantly more computation than today's approach, as they must evaluate multiple possible responses and estimate confidence levels. For a system processing millions of queries daily, this translates to dramatically higher operational costs.
[source]
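For a sense of why uncertainty awareness multiplies compute, one common generic approach (a self-consistency sketch of my own, not anything from the paper, with `model` standing in for any text-generation callable) is to sample the model several times and measure agreement:

```python
import collections
from typing import Callable, Tuple

def consensus_confidence(model: Callable[[str], str],
                         prompt: str, k: int = 5) -> Tuple[str, float]:
    """Sample the model k times and return the most common answer
    together with its agreement rate. Note the cost: k forward
    passes where a plain deployment pays for one."""
    answers = [model(prompt) for _ in range(k)]
    best, count = collections.Counter(answers).most_common(1)[0]
    return best, count / k

# With a deterministic stand-in model, agreement is total:
answer, confidence = consensus_confidence(lambda p: "42", "6 x 7?", k=5)
print(answer, confidence)  # 42 1.0
```

A real deployment would use temperature-sampled generations, so low agreement would flag answers the model should hedge on, at roughly k times the serving cost.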
- L'Emmerdeur
- Posts: 6300
- Joined: Wed Apr 06, 2011 11:04 pm
- About me: Yuh wust nightmaya!
- Contact:
Re: Artificial Intelligence
Facing AI in higher education ...
'AI is changing how students learn—or avoid learning'
A USC study reveals most students use tools like ChatGPT to shortcut assignments, unless professors actively guide them toward deeper, more thoughtful usage. The findings are available on the EdArXiv preprint server.
Most college students use artificial intelligence tools like ChatGPT to get quick answers—not to learn concepts, unless professors direct them toward deeper engagement, a new report from USC reveals.
On Wednesday, researchers at the USC Center for Generative AI and Society released a report on how students and teachers worldwide are adapting to AI. The report provides the most up-to-date picture of how generative AI is already shaping higher-education classrooms and learning practices.
...
Led by [Stephen] Aguilar [associate professor at the USC Rossier School of Education] and William Swartout, chief science officer at the USC Viterbi School of Engineering's Institute for Creative Technologies and co-director of the Center for Generative AI and Society, the report shows that generative AI is reshaping education and identifies practices for students and teachers to use these technologies effectively. With intentional design, clear guidance and equitable access, the report suggests, generative AI can deepen learning rather than replace it.
College students are increasingly turning to artificial intelligence tools like ChatGPT to get quick answers rather than deepen their understanding, a new national survey finds. The researchers surveyed 1,000 U.S. college students and found that most use AI for what they call "executive help"—seeking fast solutions with minimal effort. In contrast, "instrumental help" involves using AI to clarify concepts, build skills and support independent learning.
However, the study found that students who receive encouragement from professors to use AI thoughtfully are significantly more likely to engage with the technology in learning-oriented ways. This suggests that faculty guidance plays a critical role in shaping how students approach AI—not just as a tool for convenience, but as a resource for intellectual growth.
"The growth of AI has created both optimism and anxiety," Aguilar said. "What matters most is ensuring that AI use is guided by those who have deep expertise in their content areas, and that students aren't left to figure things out on their own."
- Brian Peacock
- Tipping cows since 1946
- Posts: 40190
- Joined: Thu Mar 05, 2009 11:44 am
- About me: Ablate me:
- Location: Location: Location:
- Contact:
Re: Artificial Intelligence
The 'economy of education' is structured around the 'value' of qualifications, the possession of which can increase the 'return' to the holder and a certain 'purchasing power' within the context of the wider economy - i.e. access to further and higher education, the labour market etc.
Some might suggest that qualifications are an efficient marker or a reliable token of learning, but you don't really have to learn very much in a system like that other than how to pass the exams. The system is already set up so that participants can be 'taught' to game it. And besides, who gets to set the benchmarks and conditionalities of any qualification, what are the economic premises and imperatives of those issuing or ratifying a qualification, and how are the resources needed to attain those benchmarks, meet those conditions, and ratify those qualifications distributed within the educational economy?
If those participating in education are repeatedly told that the most successful gamers of the education system reap the highest rewards, then is it really surprising if some participants are inclined to cheat a little? One might argue that intensive tutoring, or attendance at an elite private school, already provides cheat-codes for educational success. ChatGPT is simply democratising the education-cheating process because, again, the system isn't set up to develop the capacities for learning and the attributes of understanding for individuals, but to create an annual series of hierarchical tier-lists called 'academic success', with the 'winners' at the top and the 'losers' at the bottom.
Rationalia relies on voluntary donations. There is no obligation of course, but if you value this place and want to see it continue please consider making a small donation towards the forum's running costs.
Details on how to do that can be found here.
.
"It isn't necessary to imagine the world ending in fire or ice.
There are two other possibilities: one is paperwork, and the other is nostalgia."
Frank Zappa
"This is how humanity ends; bickering over the irrelevant."
Clinton Huxley » 21 Jun 2012 » 14:10:36 GMT
- JimC
- The sentimental bloke
- Posts: 74291
- Joined: Thu Feb 26, 2009 7:58 am
- About me: To be serious about gin requires years of dedicated research.
- Location: Melbourne, Australia
- Contact:
Re: Artificial Intelligence
I would hate to be in teaching these days. My greatest pleasure in class came from brainstorming a problem, perhaps by asking a question like "what possible factors could affect process X?", sometimes leading to suggestions of how you could experimentally test the possibilities. No computers, just the naked human brains of curious, motivated adolescents...
Nurse, where the fuck's my cardigan?
And my gin!
- Brian Peacock
- Tipping cows since 1946
- Posts: 40190
- Joined: Thu Mar 05, 2009 11:44 am
- About me: Ablate me:
- Location: Location: Location:
- Contact:
Re: Artificial Intelligence
Yeah, I hate to go all "back in my day" on this, but my schooling and higher education always felt like I was learning through a subject, not just being taught how to remember the important parts of a curriculum.
I work with quite a large group of teachers now, and as switched-on and committed as they are to providing a meaningful educational experience for children and young people, some of them literally have never encountered the idea that a good or successful education might not include a grade-A pass mark at the end of the term.
- JimC
- The sentimental bloke
- Posts: 74291
- Joined: Thu Feb 26, 2009 7:58 am
- About me: To be serious about gin requires years of dedicated research.
- Location: Melbourne, Australia
- Contact:
Re: Artificial Intelligence
There is some value in solid study and some memorisation of both facts and techniques, if only because establishing them in one's mental suite of tools and knowledge is simply useful. However, experience with a variety of problem-solving procedures, often involving online searches as a component, is critical. If AI were used as a tool in this process, fair enough...
- Brian Peacock
- Tipping cows since 1946
- Posts: 40190
- Joined: Thu Mar 05, 2009 11:44 am
- About me: Ablate me:
- Location: Location: Location:
- Contact:
Re: Artificial Intelligence
Indeed. I still remember Ohm's law, and yet I've never had to work out the value of a circuit's current, resistance or voltage given two of the three values! But the internet can remember that for me these days, and ChatGPT can tell me how to work it out when I need it.