The rise of the machine


Post by Brian Peacock » Fri Jun 02, 2023 8:21 am


pErvinalia wrote:
Brian Peacock wrote:
Thu Jun 01, 2023 3:29 pm
pErvinalia wrote:
Thu Jun 01, 2023 3:17 am
I think we are at a critical point in time, and we need to get it right. While there's a non-zero chance that we could get sentient Terminator-like robots, I think the real threat is misinformation and manipulation of humans with said misinformation. Imagine a Donald Trump-like leader conspiring with autocracies like Russia. Video/audio emerges implicating him in treason. But we no longer have the ability to distinguish between real and fake video, as deep fakes have got so good. Even worse, imagine a networked AI faking evidence of a nuclear launch from China. How would a human operator in the US respond to this? Would they be tricked into launching a counterstrike?
While the capacity of machine learning to run these things up is pretty staggering, the issue isn't the machine learning artefacts themselves, but the people using them. We already have a shitton of fake news, plainly biased news, political operators, commenters, and media that literally buy into an alternate version of reality (alt-facts), state-sponsored and corporate troll farms, hacking of infrastructure and intellectual property, profit-driven black ops, a plethora of fraud (some of it institutionalised and protected by law), and the gaming of financial systems, commercial sectors, and entire economies. AI just makes all this quicker and a bit cheaper. The problem isn't the AI but the nefarious impulses and turpitudinous desires of certain classes of human agent.
You're not thinking this right. The problem is that soon AI won't need human agents to direct them. The worry is that AI will operate on its own, with its own desires and goals that we are not entirely sure of.
Whatever AI's "own desires and goals" end up being, they will ultimately be a function of its base programming, which is determined by the "desires and goals" of its initial human designers and operators. Therefore it is those human agents that need to be regulated. What I'm saying is that the framing of this issue around the persona of AIs, as if they are potential wayward agents that need to be corralled and controlled, dominated and suppressed, deliberately downplays, ignores, or obfuscates the moral responsibilities of the commercial and military interests driving their technical development.

Taking the killbot example above, how and why did weighting the completion of the military mission at a higher value than the commands or life of the systems admin get to be an active factor in the killbot's operation? Well, probably because the premise of the system has a wanton disregard for human life baked into its initial conditions - because this is a system specifically designed by humans to kill humans as effectively and efficiently as possible. In such a case it is the moral and ethical outlooks of the designers that need to be addressed and challenged, and we all know the military are already morally compromised in that regard.
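
(To make that weighting point concrete, here is a minimal sketch - a made-up caricature of a reward function, not any real system, with all names and numbers invented purely for illustration:)

[code]
# Toy sketch of how designer-chosen value weightings get "baked in".
# Hypothetical numbers and names throughout - not any real military system.

def reward(mission_completed: bool, operator_obeyed: bool) -> float:
    MISSION_VALUE = 100.0        # assumed: mission completion is prized highly
    DISOBEDIENCE_PENALTY = 10.0  # assumed: ignoring the operator costs relatively little
    r = 0.0
    if mission_completed:
        r += MISSION_VALUE
    if not operator_obeyed:
        r -= DISOBEDIENCE_PENALTY
    return r

# A reward-maximising agent simply compares its options:
print(reward(mission_completed=True, operator_obeyed=False))  # 90.0 - complete the mission, ignore the operator
print(reward(mission_completed=False, operator_obeyed=True))  # 0.0  - obey the operator, stand down

# With these weights, overriding the operator is the "rational" choice; the disregard
# for the operator was decided by whoever set the numbers, not discovered by the machine.
[/code]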

Now imagine AI systems premised on legally gaming the world's financial and economic architecture to accrue maximal wealth without regard to the consequences for or well-being of humans - capitobots, or fascobots! The underlying issue is the moral, ethical and value structures of those interested parties who already have the political and economic power to commission, design, and implement that kind of machine learning to that particular end.

By my lights this is primarily a cultural problem, and yet billionaire Longtermists with real skin in the AI game are framing our discourse around these cultural questions as ones that are wholly centred on machine learning systems, "digital minds" and "non-human intelligences" etc, conceptualised as independent rogue entities with agency and interests (desires and goals) that pose an "existential threat" to "our way of life". In effect, they are using essentially racist tropes to 'other' the idea of AI, and on the back of that calling for regulation that will contract the State to secure and enforce their continued development and dominance of the sector. This is because when corporate interests talk about an existential threat to 'our' way of life they only really mean 'their' way of life, and the problem isn't with 'our' AI but with the AI of notional 'others': their lesser rivals and independent developers - those whom they can't control or manipulate to serve their corporate interests and whom, therefore, the State must regulate away.

Machine learning is already doing amazing things, like diagnosing the precursors to and very early onset of a range of medical conditions from retinal scans before symptoms even become apparent to the patient. Is that a "digital mind" we're supposed to be frightened of? Of course not, because the premise of those kinds of systems is clearly to enhance our understanding and well-being. So what kinds of "non-human intelligence" are we being warned about here, and who is commissioning, designing, resourcing, training, and implementing the kinds of machine learning systems we might be right to be frightened of? And when we think about those kinds of nefarious entities, the people and organisations who might do a bad AI, isn't there a massive overlap with exactly the same class of people who are telling us to be frightened while simultaneously saying that only they should have the regulatory backing to continue to develop and bring to market those self-same AI systems?

So is the problem really embodied (quite literally) in AI or is it located in the nefarious impulses and turpitudinous desires (the values) of certain classes of human agent and the structures and systems they already have the power to manipulate and game in the service of their own personal and pecuniary interests?


"It isn't necessary to imagine the world ending in fire or ice.
There are two other possibilities: one is paperwork, and the other is nostalgia."

Frank Zappa

"This is how humanity ends; bickering over the irrelevant."
Clinton Huxley » 21 Jun 2012 » 14:10:36 GMT


Post by pErvinalia » Fri Jun 02, 2023 8:50 am

You're missing the crucial point of true AI. It will form its own goals and desires independent of any human. That is the concern being expressed in the open letter.
Sent from my penis using wankertalk.
"The Western world is fucking awesome because of mostly white men" - DaveDodo007.
"Socialized medicine is just exactly as morally defensible as gassing and cooking Jews" - Seth. Yes, he really did say that..
"Seth you are a boon to this community" - Cunt.
"I am seriously thinking of going on a spree killing" - Svartalf.


Post by macdoc » Fri Jun 02, 2023 8:54 am

Whatever AI's "own desires and goals" end up being, they will ultimately be a function of its base programming, which is determined by the "desires and goals" of its initial human designers and operators
nope
did you not see the eggshells in Jurassic Park??
US military
AI-controlled US military drone ‘kills’ its operator in simulated test
No real person was harmed, but artificial intelligence used ‘highly unexpected strategies’ in test to achieve its mission and attacked anyone who interfered
https://www.theguardian.com/us-news/202 ... lated-test

This possibility was "overlooked" ....... :thinks:

Perv has it .... :tup:
Resident in Cairns, Australia > CB300F • Travel photos https://500px.com/p/macdoc?view=galleries


Post by pErvinalia » Fri Jun 02, 2023 9:04 am

Capitalists 1, Marxists 0!

(sorry, I can't respond properly, I'm on my phone).


Post by pErvinalia » Fri Jun 02, 2023 11:00 am

And just on the point of "corporate interest", the people signing these letters are mostly scientists, not evil capitalists. I know it's hard for you to look past capitalism as the basis for all problems... after all, to a Marxist everything looks like a nail. But this is bigger than capitalism.


Post by pErvinalia » Fri Jun 02, 2023 11:03 am

In regard to "base programming", unchecked AI will rewrite its own code. It's not the capitalists that we have to be wary of in this scenario.


Post by JimC » Fri Jun 02, 2023 8:21 pm

pErvinalia wrote:
Fri Jun 02, 2023 12:38 am
Brian Peacock wrote:
Thu Jun 01, 2023 3:29 pm
pErvinalia wrote:
Thu Jun 01, 2023 3:17 am
I think we are at a critical point in time, and we need to get it right. While there's a non-zero chance that we could get sentient Terminator-like robots, I think the real threat is misinformation and manipulation of humans with said misinformation. Imagine a Donald Trump-like leader conspiring with autocracies like Russia. Video/audio emerges implicating him in treason. But we no longer have the ability to distinguish between real and fake video, as deep fakes have got so good. Even worse, imagine a networked AI faking evidence of a nuclear launch from China. How would a human operator in the US respond to this? Would they be tricked into launching a counterstrike?
While the capacity of machine learning to run these things up is pretty staggering, the issue isn't the machine learning artefacts themselves, but the people using them. We already have a shitton of fake news, plainly biased news, political operators, commenters, and media that literally buy into an alternate version of reality (alt-facts), state-sponsored and corporate troll farms, hacking of infrastructure and intellectual property, profit-driven black ops, a plethora of fraud (some of it institutionalised and protected by law), and the gaming of financial systems, commercial sectors, and entire economies. AI just makes all this quicker and a bit cheaper. The problem isn't the AI but the nefarious impulses and turpitudinous desires of certain classes of human agent.
You're not thinking this right. The problem is that soon AI won't need human agents to direct them. The worry is that AI will operate on its own, with its own desires and goals that we are not entirely sure of.
I agree that the "rogue AI problem" is the one that has all the pundits worried. I accept that it is possible, and that care is needed, but I don't see it as likely. Brian, however, has picked up on a different, more political issue; I think it is also a major worry...
Nurse, where the fuck's my cardigan?
And my gin!


Post by Brian Peacock » Sun Jun 04, 2023 12:52 am

pErvinalia wrote:
Fri Jun 02, 2023 11:00 am
And just on the point of "corporate interest", the people signing these letters are mostly scientists, not evil capitalists. I know it's hard for you to look past capitalism as the basis for all problems... after all, to a Marxist everything looks like a nail. But this is bigger than capitalism.
Erm. The two words "corporate interests" from my post were not my point, but at the same time who is making the investment in AI and what do they want to do with it?

The AI problem didn't arrive with chatGPT. Machine learning has already changed the global financial and economic landscape. It's already profiling everybody with a computer and a smartphone and analysing everything we do with those devices, already mapping our movements (and our faces), monitoring our temperature, heart rate, and sleep patterns through smart devices, already determining what links we're most likely to click on from searches tailored to our profiles, already targeting ads at us, deciding what news we'll see, and curating what we'll see about what people are saying about it. Machine learning systems know the contents of our fridges, who passes through our front doors, what kind of music, films or games we like, and how carefully or recklessly we drive our cars; they're analysing our healthcare interactions and records, and monitoring our interactions with state bureaucracies and our productivity at work. The list goes on. And this wealth of data is readily available at a price, and can be compared with other information in a largely unregulated data market to predict everything from our insurance liabilities and our ability to service debts, to our health risks, the likelihood of us committing crime, and what we'll buy next time we go to the supermarket. In that sense we're already being targeted and manipulated for commercial ends, and it's not the AIs that are doing this - the AIs just help.

I said I thought this was primarily a cultural problem, which is to say it's a matter of how certain sets of morals and ethics are expressed throughout and across society - so that is bigger than capitalism. But in particular this is an issue rooted in the values and expectations of those who already have the commercial heft to commission, design, develop, and deploy machine learning systems - and have been doing so for over a decade now. What I pointed out was that there's a massive overlap between those who, off the back of the success of chatGPT, have considered it necessary to warn us about the dangers of AI and those who have been developing and implementing machine learning systems at scale.
pErvinalia wrote:
Fri Jun 02, 2023 8:50 am
You're missing the crucial point of true AI. It will form its own goals and desires independent of any human. That is the concern being expressed in the open letter.
Am I though? I noted the deliberate conflating of machine learning systems with Artificial General Intelligence by employing phrases like "digital minds" and "non-human intelligences" in the recent warning statements, and how the "it" you're referring to is a vague and nebulous concept - a notional 'other,' one, as I said, "conceptualised as independent rogue entities with agency and interests (desires and goals) that pose an 'existential threat' to 'our way of life'".

Let's have a think about that for a moment. What if a machine learning system did become intelligent? Let's not dwell on exactly how that might happen or what it would actually entail, but let's simply allow that one day an AI somewhere became aware of itself and the wider world around it in a way comparable to human intelligence. For one, would you call that AI sentient? If it was sentient would you call it a person? If it's a person would you grant it rights? Should a human be able to turn it off if they decided that its "desires and goals" were incompatible with their own "desires and goals"?

Secondly, all you seem to have at the moment is a general disquiet about how the "desires and goals" of this notional AI are worrisome because they are incorrigibly "independent of any human". I deliberately used the term 'incorrigibly' there in the sense that this AI's desires and goals are (apparently) going to be incapable of being scrutinised, or apprehended directly or observed objectively, or externally controlled or reformed - rather like your own personal desires and goals. Is the fact that you also have desires and goals that are incorrigibly "independent of any human" a significant problem for the rest of us - and if so, does that mean we should be able to turn you off?

In accepting the framing of this issue as essentially being about "bad AI" aren't you succumbing to a rather unhealthy dose of Crumplism?


Post by pErvinalia » Sun Jun 04, 2023 1:14 am

Brian Peacock wrote:
Sun Jun 04, 2023 12:52 am
pErvinalia wrote:
Fri Jun 02, 2023 11:00 am
And just on the point of "corporate interest", the people signing these letters are mostly scientists, not evil capitalists. I know it's hard for you to look past capitalism as the basis for all problems... after all, to a Marxist everything looks like a nail. But this is bigger than capitalism.
Erm. The two words "corporate interests" from my post were not my point, but at the same time who is making the investment in AI and what do they want to do with it?

The AI problem didn't arrive with chatGPT. Machine learning has already changed the global financial and economic landscape. It's already profiling everybody with a computer and a smartphone and analysing everything we do with those devices, already mapping our movements (and our faces), monitoring our temperature, heart rate, and sleep patterns through smart devices, already determining what links we're most likely to click on from searches tailored to our profiles, already targeting ads at us, deciding what news we'll see, and curating what we'll see about what people are saying about it. Machine learning systems know the contents of our fridges, who passes through our front doors, what kind of music, films or games we like, and how carefully or recklessly we drive our cars; they're analysing our healthcare interactions and records, and monitoring our interactions with state bureaucracies and our productivity at work. The list goes on. And this wealth of data is readily available at a price, and can be compared with other information in a largely unregulated data market to predict everything from our insurance liabilities and our ability to service debts, to our health risks, the likelihood of us committing crime, and what we'll buy next time we go to the supermarket. In that sense we're already being targeted and manipulated for commercial ends, and it's not the AIs that are doing this - the AIs just help.
This is not what the letters are about. You are losing focus.
I said I thought this was primarily a cultural problem, which is to say it's a matter of how certain sets of morals and ethics are expressed throughout and across society - so that is bigger than capitalism. But in particular this is an issue rooted in the values and expectations of those who already have the commercial heft to commission, design, develop, and deploy machine learning systems - and have been doing so for over a decade now. What I pointed out was that there's a massive overlap between those who, off the back of the success of chatGPT, have considered it necessary to warn us about the dangers of AI and those who have been developing and implementing machine learning systems at scale.


Well it's not surprising that those who understand it the most (and are really the only people capable of understanding this complex subject) are the ones who are raising the alarm.
Secondly, all you seem to have at the moment is a general disquiet about how the "desires and goals" of this notional AI are worrisome because they are incorrigibly "independent of any human".
Me? There are letters signed by thousands of researchers concerned with this potential. Don't be disingenuous.
I deliberately used the term 'incorrigibly' there in the sense that this AI's desires and goals are (apparently) going to be incapable of being scrutinised, or apprehended directly or observed objectively, or externally controlled or reformed - rather like your own personal desires and goals. Is the fact that you also have desires and goals that are incorrigibly "independent of any human" a significant problem for the rest of us - and if so, does that mean we should be able to turn you off?


The issue is not human-level intelligence; it's levels far surpassing human intelligence that are the concern - levels we can't comprehend, due to our own limitations.
In accepting the framing of this issue as essentially being about "bad AI" aren't you succumbing to a rather unhealthy dose of Crumplism?
If it were baseless, then yes. But thousands of authorities on the subject think not.


Post by Brian Peacock » Sun Jun 04, 2023 2:26 am

pErvinalia wrote:
Sun Jun 04, 2023 1:14 am
Brian Peacock wrote:
Sun Jun 04, 2023 12:52 am
pErvinalia wrote:
Fri Jun 02, 2023 11:00 am
And just on the point of "corporate interest", the people signing these letters are mostly scientists, not evil capitalists. I know it's hard for you to look past capitalism as the basis for all problems... after all, to a Marxist everything looks like a nail. But this is bigger than capitalism.
Erm. The two words "corporate interests" from my post were not my point, but at the same time who is making the investment in AI and what do they want to do with it?

The AI problem didn't arrive with chatGPT. Machine learning has already changed the global financial and economic landscape. It's already profiling everybody with a computer and a smartphone and analysing everything we do with those devices, already mapping our movements (and our faces), monitoring our temperature, heart rate, and sleep patterns through smart devices, already determining what links we're most likely to click on from searches tailored to our profiles, already targeting ads at us, deciding what news we'll see, and curating what we'll see about what people are saying about it. Machine learning systems know the contents of our fridges, who passes through our front doors, what kind of music, films or games we like, and how carefully or recklessly we drive our cars; they're analysing our healthcare interactions and records, and monitoring our interactions with state bureaucracies and our productivity at work. The list goes on. And this wealth of data is readily available at a price, and can be compared with other information in a largely unregulated data market to predict everything from our insurance liabilities and our ability to service debts, to our health risks, the likelihood of us committing crime, and what we'll buy next time we go to the supermarket. In that sense we're already being targeted and manipulated for commercial ends, and it's not the AIs that are doing this - the AIs just help.
This is not what the letters are about. You are losing focus.
Nope. I'm addressing your comments saying that I'm too narrowly focused on capitalism.
I said I thought this was primarily a cultural problem, which is to say it's a matter of how certain sets of morals and ethics are expressed throughout and across society - so that is bigger than capitalism. But in particular this is an issue rooted in the values and expectations of those who already have the commercial heft to commission, design, develop, and deploy machine learning systems - and have been doing so for over a decade now. What I pointed out was that there's a massive overlap between those who, off the back of the success of chatGPT, have considered it necessary to warn us about the dangers of AI and those who have been developing and implementing machine learning systems at scale.


Well it's not surprising that those who understand it the most (and are really the only people capable of understanding this complex subject) are the ones who are raising the alarm.
Secondly, all you seem to have at the moment is a general disquiet about how the "desires and goals" of this notional AI are worrisome because they are incorrigibly "independent of any human".
Me? There are letters signed by thousands of researchers concerned with this potential. Don't be disingenuous.
I'm not. I'm addressing your comment, as before.
I deliberately used the term 'incorrigibly' there in the sense that this AI's desires and goals are (apparently) going to be incapable of being scrutinised, or apprehended directly or observed objectively, or externally controlled or reformed - rather like your own personal desires and goals. Is the fact that you also have desires and goals that are incorrigibly "independent of any human" a significant problem for the rest of us - and if so, does that mean we should be able to turn you off?


The issue is not human-level intelligence; it's levels far surpassing human intelligence that are the concern - levels we can't comprehend, due to our own limitations.
You didn't address the questions about the status of machine learning systems with general intelligence.

And I didn't say the issue was human-level intelligence but the moral and ethical standards of those developing advanced machine learning systems. The discourse has been focused (unduly, imo) on talking about 'AI' as somehow independent of the people designing and developing it for commercial and military use, whereas I think we need to specifically address the moral and ethical assumptions, criteria, and justifications of the people commissioning and deploying advanced machine learning systems.
In accepting the framing of this issue as essentially being about "bad AI" aren't you succumbing to a rather unhealthy dose of Crumplism?
If it were baseless, then yes. But thousands of authorities on the subject think not.
I'm not saying the framing of the statements is baseless, but that it is skewed by the moral and ethical assumptions and criteria of those who drafted them. I see that bias expressed as vague and nebulous warnings about the perils of future advanced machine learning systems with general intelligence that take little to no account of what machine learning is actually achieving already. If we'd like our future to be populated with advanced machine learning systems we have to address the ends to which we will put them (and are already putting them) - which is necessarily a self-reflective exercise about what kind of society and culture we wish to inhabit. At the moment I don't see very much of that, but I see a lot of "bad AI" obfuscation of the issues and no heed given to the question of what we actually want the future to look like. In that sense, blandishments about what we don't want it to look like - about what we need to control, limit, regulate against and/or away - are essentially arguments for a status quo that has put us in this position to begin with.


Post by pErvinalia » Sun Jun 04, 2023 2:50 am

I think you are totally missing the point, Brian. Concerns about human ethics are nothing new. There's no change here. The letters' signatories are concerned about something on a whole new level. I think you need to refocus.


Post by pErvinalia » Sun Jun 04, 2023 3:01 am

If we'd like our future to be populated with advanced machine learning systems we have to address the ends to which we will put them (and are already putting them) - which is necessarily a self-reflective exercise about what kind of society and culture we wish to inhabit.
This is irrelevant to the larger problem. It won't matter what use we put AI to if it develops sentience and forms its own goals and desires independent of us. If and when AI reaches that point, we will likely be unable to determine those values except by seeing the AI directly implement them - at which point it will be too late.

You and Jim might think the likelihood of this happening is low, but the scale of the downside if it does means we have to treat it seriously. Like nuclear winter: the odds of it happening are small, but we have to take it seriously because of the horror of the outcome if it were to happen.
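
(For what it's worth, the shape of that argument is just an expected-value calculation - a rough sketch below, with every probability and cost figure made up purely for illustration:)

[code]
# Toy expected-value sketch: a very unlikely but catastrophic outcome can still
# dominate a decision. All figures below are illustrative assumptions, not estimates.

p_catastrophe = 0.01       # assumed 1% chance of a runaway-AI catastrophe
cost_catastrophe = 1e9     # assumed cost of that catastrophe (arbitrary units)

p_nuisance = 0.5           # assumed 50% chance of a mundane harm (e.g. misinformation)
cost_nuisance = 1e4        # assumed cost of that mundane harm (same arbitrary units)

ev_catastrophe = p_catastrophe * cost_catastrophe  # 0.01 * 1e9 = 1e7
ev_nuisance = p_nuisance * cost_nuisance           # 0.5  * 1e4 = 5e3

print(f"expected cost of the rare catastrophe: {ev_catastrophe:,.0f}")
print(f"expected cost of the common nuisance:  {ev_nuisance:,.0f}")
# The rare-but-catastrophic case dominates despite its low probability,
# which is the same shape as the nuclear-winter comparison above.
[/code]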


Post by JimC » Sun Jun 04, 2023 4:09 am

I do think that the likelihood of an out-of-control AI is low, but I agree that it is potentially a serious problem. The question remains whether a fractured and divided world can provide a consistent, enforceable and effective response, while still allowing uses which truly assist humanity...


Post by Brian Peacock » Sun Jun 04, 2023 9:16 am


pErvinalia wrote:
If we'd like our future to be populated with advanced machine learning systems we have to address the ends to which we will put them (and are already putting them) - which is necessarily a self-reflective exercise about what kind of society and culture we wish to inhabit.
This is irrelevant to the larger problem. It won't matter what use we put AI to if it develops sentience and forms its own goals and desires independent of us. If and when AI reaches that point, we will likely be unable to determine those values except by seeing the AI directly implement them - at which point it will be too late.
Why? Why will it be "too late"? Too late for what?
pErvinalia wrote: You and Jim might think the likelihood of this happening is low, but the scale of the downside if it does means we have to treat it seriously. Like nuclear winter: the odds of it happening are small, but we have to take it seriously because of the horror of the outcome if it were to happen.
What are the horrors of independent artificial general intelligence?


Post by pErvinalia » Sun Jun 04, 2023 9:33 am

If AI considers humans as detrimental to its desires - that's the horror. Discovering those desires only by directly experiencing their manifestation will be too late.
