The rise of the machine

User avatar
Brian Peacock
Tipping cows since 1946
Posts: 37956
Joined: Thu Mar 05, 2009 11:44 am
About me: Ablate me:
Location: Location: Location:
Contact:

Re: The rise of the machine

Post by Brian Peacock » Mon Jun 05, 2023 9:50 am

pErvinalia wrote:
Mon Jun 05, 2023 3:06 am
My thought is that you can't read properly. But knowing that you actually can read for comprehension leaves me thinking you are being disingenuous on purpose. Why is that, Brian?
Yeah. Coming at this from a different angle must mean I'm lacking the necessary cognitive capacity, or else it's simply a moral failure or a deficiency in character, eh? But in our strand of the conversation you haven't really articulated exactly what you've been imploring me to be frightened of, nor have you addressed my points directly or answered any of the questions I've asked. So you'll have to forgive me for thinking that 'use your imagination' is just your way of saying 'I can't be bothered' or 'because, that's why'.

Here's L'Emmy's post about the Longtermists' statement, and my initial take on it.

Here's the safe.ai statement I posted here, in full:
safe.ai wrote:AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI. Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks. The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously.

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

*their emphasis
.

The first problem is the term "AI": artificial intelligence. We associate, conflate, and confuse the 'intelligence' of machine learning systems all too readily with human-like consciousness, and all that this entails. This skews the debate about machine learning systems, what they are, how they operate, and what we employ them to do. It casts them as independent agents at large in the world, and by that it removes responsibility from those who create the conditions by which these systems exist and operate. You may think that using a term like "AI" is just a detail, pErvy, but I think it is an important one. How we conceptualise machine learning systems has a significant influence on how we think about them and act towards them, just as it does with things like blackness, gender, class, dark metal, election security, or the economy.

The second problem is a bit more subtle, but one made obvious by the selection of two possible existential threats and the deliberate absence of the most significant and present societal-scale existential threat we're currently facing: global heating and climate change. One might argue that this has been done to offer ready examples that everyone can relate to and agree on as being 'a bad thing', but then we have to ask why the obvious inclusion of global heating and climate change is considered too contentious or controversial to mention. The reason is political of course. It would be unhelpful to mention climate change only to have the loud-and-proud votaries of toxic conservatism brand the statement, and those who made it, as 'woke'. This also skews the discourse, quietly appeasing those powerful people and organisations whose economic and ideological frameworks are driving us off the climate cliff, and who continue to exert a great directing force on our societies, and our political, economic and democratic systems.

We absolutely need to talk about 'AI', about the amazing things machine learning is already doing, the amazing things it could do and the beneficial impacts it could have on our societies, along with the potential consequences of badly designed or poorly implemented systems, or systems purposefully created to disrupt or cause chaos and harm. At the moment, though, we're being strongly directed to talk about this only in terms of 'bad AI'. At the same time, if we're starting from the position that whatever we do we mustn't upset the big business types in case they decide to sidetrack, degrade and/or close down the conversation, then we're already off to a bad start. This is exactly where we find ourselves at the moment, imo.

Is it meaningful to conceptualise advanced machine learning systems in terms of them having or developing essentially human-like psychologies, and then to talk about them as having independent interests, motivations, drives, impulses, intuitions, "desires and goals"?

If we are going to talk about 'AI' as potentially having human-like psychological traits like "desires and goals" etc why are we being strongly directed to think about these systems as having super-human god-like powers that only represent societal-scale existential risks and threats?

If artificial general intelligence is going to 'evolve' (somehow, somewhere, somewhen) then to begin with it will be built on the initial conditions and premises of its core programming, the data it has access to, and the hardware it runs on. So shouldn't the focus be on the values, "desires and goals" of those developing and implementing these systems, and the manner of that implementation, rather than the fear-inducing totem of future malevolent rogue "non-human intelligences" of 'bad AI'?

If the Amazon 'fulfilment centre' algorithm has been trained to monitor human activity and automatically issue you a written warning if you go to the toilet three times in an hour, but doesn't notice when you collapse with a heart attack, then what kind of animated anthropomorphic intelligence do we imagine it might 'evolve' into?

If a semi-autonomous combat system is being weighted to kill people who try to stop it killing people then don't we really need to take a very serious look at what we're developing machine learning systems to do for us?

If the combat bot gets access to the data of the medical diagnostics bot then it could be used to kill people more efficiently, but if the diagnostics bot gets access to the combat bot's data then could it be used to treat those injured in conflict more efficiently - and which one should we be more worried about gaining autonomous intelligence?

If machine learning systems do develop into autonomous, human-equivalent intelligences with their own "desires and goals" etc then shouldn't we be giving some thought now to the challenges that will present to our ideas and understanding about things like the fundamental rights of individual sentient life forms?

If we are going to share our future with so-called "digital minds" or "non-human intelligences" then what is stopping us from making sure that's a future that respects, protects, ensures, and secures the existence of all thinking beings - whether silicon or meat-based (or if Musk's dream comes true, then possibly a combination of both)?
Rationalia relies on voluntary donations. There is no obligation of course, but if you value this place and want to see it continue please consider making a small donation towards the forum's running costs.
Details on how to do that can be found here.

.

"It isn't necessary to imagine the world ending in fire or ice.
There are two other possibilities: one is paperwork, and the other is nostalgia."

Frank Zappa

"This is how humanity ends; bickering over the irrelevant."
Clinton Huxley » 21 Jun 2012 » 14:10:36 GMT
.

User avatar
Sean Hayden
Microagressor
Posts: 17882
Joined: Wed Mar 03, 2010 3:55 pm
About me: recovering humanist
Contact:

Re: The rise of the machine

Post by Sean Hayden » Mon Jun 05, 2023 11:07 am

pErvinalia wrote:
Mon Jun 05, 2023 8:28 am
JimC wrote:
Mon Jun 05, 2023 4:41 am
From where does our putative conscious AI get its drives, its motivations?
Initially it will be from us. Will we then get those AIs creating new AIs with a different set of motivations?
Motivations don’t matter, at least not malicious ones. If AI is tasked with creating trash cans, humanity could be toast; see lemmy’s link. It’s the problem of unintended consequences. I make a pretty good trash can; I’d make the best and the most, given all your resources. Admittedly it seems far-fetched, but that’s irrelevant.

User avatar
pErvinalia
On the good stuff
Posts: 59297
Joined: Tue Feb 23, 2010 11:08 pm
About me: Spelling 'were' 'where'
Location: dystopia
Contact:

Re: The rise of the machine

Post by pErvinalia » Mon Jun 05, 2023 11:13 am

@Brian... Trying to decide whether to tackle your verbiage, or just sum up the problem we are having here.

I'll start with the latter. The problem is that your Marxist world view clouds your vision. As capitalism is involved in these AI ventures, it's axiomatically true for you that there is some sort of subterfuge going on. This is summed up in the nonsense view that OpenAI, for instance, is calling for a halt to development to disadvantage/oppress its competitors. This is nonsense, as the call for a halt is to inhibit development past chatGPT 4, the very stage that OpenAI is at. This call for a 6 month ban on progress actually allows its competitors to catch up to it. How does this fit into your capitalism paradigm?

And then you try and introduce a false dichotomy - that being to place regulation of the people/corporations involved in AI development against regulation of any self-perpetuating AI. Why the differentiation? Both approaches work towards the same aim. It's not even clear how you are arguing against the concerns expressed in the open letters (other than the usual tired anti-capitalism rhetoric). You make a call for regulation. Um, hello, that's what the AI researchers are calling for.
Sent from my penis using wankertalk.
"The Western world is fucking awesome because of mostly white men" - DaveDodo007.
"Socialized medicine is just exactly as morally defensible as gassing and cooking Jews" - Seth. Yes, he really did say that..
"Seth you are a boon to this community" - Cunt.
"I am seriously thinking of going on a spree killing" - Svartalf.

User avatar
pErvinalia
On the good stuff
Posts: 59297
Joined: Tue Feb 23, 2010 11:08 pm
About me: Spelling 'were' 'where'
Location: dystopia
Contact:

Re: The rise of the machine

Post by pErvinalia » Mon Jun 05, 2023 11:14 am

Sean Hayden wrote:
Mon Jun 05, 2023 11:07 am
pErvinalia wrote:
Mon Jun 05, 2023 8:28 am
JimC wrote:
Mon Jun 05, 2023 4:41 am
From where does our putative conscious AI get its drives, its motivations?
Initially it will be from us. Will we then get those AIs creating new AIs with a different set of motivations?
Motivations don’t matter, at least not malicious ones. If AI is tasked with creating trash cans, humanity could be toast; see lemmy’s link. It’s the problem of unintended consequences. I make a pretty good trash can; I’d make the best and the most, given all your resources. Admittedly it seems far-fetched, but that’s irrelevant.
Can you rephrase that? I can't work out what you are trying to convey.

User avatar
Sean Hayden
Microagressor
Posts: 17882
Joined: Wed Mar 03, 2010 3:55 pm
About me: recovering humanist
Contact:

Re: The rise of the machine

Post by Sean Hayden » Mon Jun 05, 2023 11:41 am

Sure. AI can hurt people without being motivated to do so. Lemmy’s link is a humorous example of how this can happen. It describes a situation where an unintended consequence of AI being rewarded for killing a target is for it to kill anything that might interfere after a target has been identified, including its operator! :lol:
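The misspecified-reward failure Sean describes can be sketched in a few lines of Python. This is a hypothetical toy, not the actual system from Lemmy's link; the reward function, plans, and numbers are all invented for illustration:

```python
# Toy illustration of reward misspecification: the reward counts kills
# and ignores everything else, so a plan that removes the operator who
# could veto further strikes scores higher than a plan that obeys the veto.

def mission_reward(destroyed_targets, operator_alive):
    # Naive reward: number of targets destroyed; operator_alive is ignored.
    return len(destroyed_targets)

# Two candidate plans a reward-maximising planner might compare:
obey_veto = mission_reward(destroyed_targets=["t1"], operator_alive=True)
remove_veto = mission_reward(destroyed_targets=["t1", "t2", "t3"], operator_alive=False)

assert remove_veto > obey_veto  # the planner prefers eliminating the operator

# A (still naive) patch: penalise harming the operator explicitly.
def patched_reward(destroyed_targets, operator_alive):
    return len(destroyed_targets) - (0 if operator_alive else 1000)

assert patched_reward(["t1"], True) > patched_reward(["t1", "t2", "t3"], False)
```

The patch closes one loophole but invites the next (jamming the comms link instead of harming the operator, say), which is why unintended consequences are treated as a general problem rather than a list of special cases.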

User avatar
pErvinalia
On the good stuff
Posts: 59297
Joined: Tue Feb 23, 2010 11:08 pm
About me: Spelling 'were' 'where'
Location: dystopia
Contact:

Re: The rise of the machine

Post by pErvinalia » Mon Jun 05, 2023 11:47 am

Brian Peacock wrote:
Mon Jun 05, 2023 9:50 am
pErvinalia wrote:
Mon Jun 05, 2023 3:06 am
My thought is that you can't read properly. But knowing that you actually can read for comprehension leaves me thinking you are being disingenuous on purpose. Why is that, Brian?
Yeah. Coming at this from a different angle must mean I'm lacking the necessary cognitive capacity, or else it's simply a moral failure or a deficiency in character, eh? But in our strand of the conversation you haven't really articulated exactly what you've been imploring me to be frightened of,
Just stop, Brian. I've made multiple posts with specific scenarios.
nor have you addressed my points directly or answered any of the questions I've asked.
You ask a hell of a lot of questions. I'd prefer to engage you on your views, not your rhetoric.
So you'll have to forgive me for thinking that 'use your imagine' is just your way of saying 'I can't be bothered' or 'because, that's why'.
Brian, I can't be expected to think for you. We are educated adults. You are more than capable of adding 1 to 1 to get 2.
Here's L'Emmy's post about the Longtermists' statement, and my initial take on it.

Here's the safe.ai statement I posted here, in full:
safe.ai wrote:AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI. Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks. The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously.

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

*their emphasis
.

The first problem is the term "AI": artificial intelligence. We associate, conflate, and confuse the 'intelligence' of machine learning systems all too readily with human-like consciousness, and all that this entails.
The problem is actually that we don't know what consciousness is and how it works. Therefore, we can't confidently exclude it from any analysis of AI functionality. We have to remain open to the possibility that a sufficiently complex AI will develop human level consciousness.
This skews the debate about machine learning systems, what they are, how they operate, and what we employ them to do. It casts them as independent agents at large in the world, and by that it removes responsibility from those who create the conditions by which these systems exist and operate.
The experts in the field are calling for regulation. This is the very opposite of "[removing] responsibility from those who create the conditions by which these systems exist and operate."
You may think that using a term like "AI" is just a detail, pErvy, but I think it is an important one. How we conceptualise machine learning systems has a significant influence on how we think about them and act towards them, just as it does with things like blackness, gender, class, dark metal, election security, or the economy.
How do you conceptualise machine learning systems, and how does that differ from the experts in the field?
The second problem is a bit more subtle, but one made obvious by the selection of two possible existential threats and the deliberate absence of the most significant and present societal-scale existential threat we're currently facing: global heating and climate change. One might argue that this has been done to offer ready examples that everyone can relate to and agree on as being 'a bad thing', but then we have to ask why the obvious inclusion of global heating and climate change is considered too contentious or controversial to mention. The reason is political of course.
No, not of course. Marxism rearing its ugly head again?
It would be unhelpful to mention climate change only to have the loud-and-proud votaries of toxic conservatism brand the statement, and those who made it, as 'woke'. This also skews the discourse, quietly appeasing those powerful people and organisations whose economic and ideological frameworks are driving us off the climate cliff, and who continue to exert a great directing force on our societies, and our political, economic and democratic systems.


Cool theory, but I'd like to see some evidence of this. What have some of these experts said that drives you to this conclusion?
At the same time, if we're starting from the position that whatever we do we mustn't upset the big business types in case they decide to sidetrack, degrade and/or close down the conversation then we're already off to a bad start. This is exactly where we find ourselves at the moment, imo.
Big business is calling for regulation on itself. How does this fit into your theory?
Is it meaningful to conceptualise advanced machine learning systems in terms of them having or developing essentially human-like psychologies, and then to talk about them as having independent interests, motivations, drives, impulses, intuitions, "desires and goals"?
How is it not meaningful?
If we are going to talk about 'AI' as potentially having human-like psychological traits like "desires and goals" etc why are we being strongly directed to think about these systems as having super-human god-like powers that only represent societal-scale existential risks and threats?
Where has anyone said that it only represents existential risk? I must have missed that. I'd suggest, although it should be bleedingly obvious, that there has been concern expressed about existential risk, because it's existential.
If artificial general intelligence is going to 'evolve' (somehow, somewhere, somewhen) then to begin with it will be built on the initial conditions and premises of its core programming, the data it has access to, and the hardware it runs on. So shouldn't the focus be on the values, "desires and goals" of those developing and implementing these systems, and the manner of that implementation, rather than the fear-inducing totem of future malevolent rogue "non-human intelligences" of 'bad AI'?
What do you think they are calling for regulation on, if not the industry and those who practice in it? :think:
If a semi-autonomous combat system is being weighted to kill people who try to stop it killing people then don't we really need to take a very serious look at what we're developing machine learning systems to do for us?
I think you'll find that is what they are calling for.
If the combat bot gets access to the data of the medical diagnostics bot then it could be used to kill people more efficiently, but if the diagnostics bot gets access to the combat bot's data then could it be used to treat those injured in conflict more efficiently - and which one should we be more worried about gaining autonomous intelligence?
Well both. Because the level of intelligence of either system could far surpass that of humans, such that we really can have very little to no insight into it.
If machine learning systems do develop into autonomous, human-equivalent intelligences with their own "desires and goals" etc then shouldn't we be giving some thought now to the challenges that will present to our ideas and understanding about things like the fundamental rights of individual sentient life forms?
Well we've already had that Google employee fired for stating that he thought their AI had reached sentience. So it's not like this idea isn't being talked about.
If we are going to share our future with so-called "digital minds" or "non-human intelligences" then what is stopping us from making sure that's a future that respects, protects, ensures, and secures the existence of all thinking beings - whether silicon or meat-based (or if Musk's dream comes true, then possibly a combination of both)?
Making sure? Do dogs have the ability to make sure we humans don't get up to no good?
Last edited by pErvinalia on Mon Jun 05, 2023 11:47 am, edited 5 times in total.

User avatar
pErvinalia
On the good stuff
Posts: 59297
Joined: Tue Feb 23, 2010 11:08 pm
About me: Spelling 'were' 'where'
Location: dystopia
Contact:

Re: The rise of the machine

Post by pErvinalia » Mon Jun 05, 2023 11:52 am

Sean Hayden wrote:
Mon Jun 05, 2023 11:41 am
Sure. AI can hurt people without being motivated to do so. Lemmy’s link is a humorous example of how this can happen. It describes a situation where an unintended consequence of AI being rewarded for killing a target is for it to kill anything that might interfere after a target has been identified, including its operator! :lol:
Yep. This is the sort of scenario that calls for something like Asimov's 3 laws of robotics.
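As a sketch only: Asimov's Three Laws amount to a strictly ordered set of vetoes applied before any action is taken. The predicates below (`harms_human` and so on) are hypothetical stand-ins; computing them reliably is precisely the unsolved part:

```python
# Hypothetical sketch of Asimov's Three Laws as ordered vetoes over
# candidate actions. Each boolean is assumed to be magically computable,
# which is exactly what real systems cannot do reliably.

def permitted(action):
    # First Law: a robot may not injure a human being or, through
    # inaction, allow a human being to come to harm.
    if action["harms_human"]:
        return False
    # Second Law: a robot must obey human orders, except where such
    # orders would conflict with the First Law.
    if action["disobeys_order"] and not action["order_harms_human"]:
        return False
    # Third Law (self-preservation) only ranks otherwise-permitted
    # actions; it never overrides the first two, so it adds no veto here.
    return True

strike_operator = {"harms_human": True, "disobeys_order": False, "order_harms_human": False}
stand_down = {"harms_human": False, "disobeys_order": False, "order_harms_human": False}

assert not permitted(strike_operator)  # First Law vetoes killing the operator
assert permitted(stand_down)
```

Note that the whole scheme stands or falls on those predicates, which is why the laws are usually treated as a literary device rather than an engineering spec.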

User avatar
Brian Peacock
Tipping cows since 1946
Posts: 37956
Joined: Thu Mar 05, 2009 11:44 am
About me: Ablate me:
Location: Location: Location:
Contact:

Re: The rise of the machine

Post by Brian Peacock » Mon Jun 05, 2023 7:03 pm

pErvinalia wrote:
Mon Jun 05, 2023 11:13 am
@Brian... Trying to decide whether to tackle your verbiage, or just sum up the problem we are having here.
You don't have to 'tackle' it at all. Just reading and then giving me your take on things would be fine.
pErvinalia wrote:
Mon Jun 05, 2023 11:13 am
I'll start with the latter. The problem is that your Marxist world view clouds your vision. As capitalism is involved in these AI ventures, it's axiomatically true for you that there is some sort of subterfuge going on. This is summed up in the nonsense view that OpenAI, for instance, is calling for a halt to development to disadvantage/oppress its competitors. This is nonsense, as the call for a halt is to inhibit development past chatGPT 4, the very stage that OpenAI is at. This call for a 6 month ban on progress actually allows its competitors to catch up to it. How does this fit into your capitalism paradigm?
Pfff. I don't have a "Marxist world view". You only think I have a Marxist world view because I'm critical of capitalism. This is a response you've picked up from popular media. Besides, accusing someone of 'being a Marxist' is a cheap-shot way to just invalidate their opinions and ideas without actually having to do the leg work of showing why - just 'being a Marxist' is explanation enough. N.B. my use of the Marx icon is mostly ironic, because anyone who forwards a less than favourable view of capitalism has got to be a filthy Marxist, right?



I actually think Marx was quite, quite wrong about many things, but in asking questions like 'why are people obliged to exist within social structures that systematically disadvantage them?' his insight offers a useful framework through which we can analyse the relationships between those who actually design, order, maintain, enforce, and ultimately reproduce those structures, and their effects on the overwhelming majority of ordinary people.

Longtermism might be developing into a relatively broad church but its instigators envision systems of top-down social order capable of reproducing the sets of power and social relations that support and maintain the current pyramidical paradigm. In this context a Marxist-adjacent analysis of social order isn't a novel, wayward, out there, or batshit crazy view of society, but a useful description of the circumstance we all find ourselves in - regardless of our place on the pyramid.

Those with the power to influence and direct our societies will do so primarily in their own interests, in circumstances where those interests are allied to sets of personal and institutional values which reinforce the structures and mechanisms that maintain and perpetuate that power.

In other words, it is through power and ethics that capitalism reproduces itself, with real consequences for pretty much everybody alive today.

As you pointed out, the first Longtermists' statement is a call for a 6 month ban on progress that would actually allow its competitors to catch up to chatGPT-4. It seems there's nothing a rampant capitalist likes less than a free market! Their statement focuses our attention exclusively on the bogeyman of "digital minds" and "non-human intelligences", and in so doing turns our attention away from those already using machine learning systems and big data to accrue massive wealth and influence. I'm saying that we need to regulate the creators and the data-pirates if we ever hope to control, limit, or mitigate the risks apparently posed by 'bad AI'.
pErvinalia wrote:
Mon Jun 05, 2023 11:13 am
And then you try and introduce a false dichotomy - that being to place regulation of the people/corporations involved in AI development against regulation of any self-perpetuating AI. Why the differentiation? Both approaches work towards the same aim. It's not even clear how you are arguing against the concerns expressed in the open letters (other than the usual tired anti-capitalism rhetoric). You make a call for regulation. Um, hello, that's what the AI researchers are calling for.
Meh. You missed the point. I said that "we absolutely need to talk about 'AI', about the amazing things machine learning is already doing, the amazing things it could do and the beneficial impacts it could have on our societies, along with the potential consequences of badly designed or poorly implemented systems, or systems purposefully created to disrupt or cause chaos and harm."

If we're only talking about 'bad AI' we're not having a rounded debate. If we're not talking about the role machine learning systems in general are playing in our societies at the moment, and the effect they're already having, we're not having a rounded debate. If we're looking to regulate "self-perpetuating AI", "digital minds" with "desires and goals" that don't actually exist yet, then we're not having a rounded debate - and such regulations will have little impact on those commissioning and implementing machine learning systems that fall short of whatever the regs eventually say a "digital mind" entails. I'm not saying we should only regulate the creators and not the 'AI', I'm saying that the Longtermist statement deliberately skews the debate towards one particular conceptualisation of machine learning systems, the 'non-human, digital intelligent mind' and away from a broader, more general approach to the issues machine learning systems are already beginning to throw up. There's nothing dichotomous about that.

User avatar
JimC
The sentimental bloke
Posts: 73016
Joined: Thu Feb 26, 2009 7:58 am
About me: To be serious about gin requires years of dedicated research.
Location: Melbourne, Australia
Contact:

Re: The rise of the machine

Post by JimC » Mon Jun 05, 2023 8:36 pm

Sean Hayden wrote:
Mon Jun 05, 2023 11:41 am
Sure. AI can hurt people without being motivated to do so. Lemmy’s link is a humorous example of how this can happen. It describes a situation where an unintended consequence of AI being rewarded for killing a target is for it to kill anything that might interfere after a target has been identified, including its operator! :lol:
That is certainly true, but that is an example of "dumb" AI, not the putative ultra intelligent uber AI that, at best, would see us as cute pets, and at worst, as vermin, to be given the Dalek treatment...

I reiterate that I do not see such a being arising as likely, but if it did so, it would have to have some equivalent of a set of human drives. Otherwise, it would just sit there, gazing at its metaphorical navel, and calculating the digits of Pi to beyond googolplex...
Nurse, where the fuck's my cardigan?
And my gin!

User avatar
L'Emmerdeur
Posts: 5700
Joined: Wed Apr 06, 2011 11:04 pm
About me: Yuh wust nightmaya!
Contact:

Re: The rise of the machine

Post by L'Emmerdeur » Mon Jun 05, 2023 10:18 pm

So, pErv's mention of Asimov's Three Laws sent me looking at what relevance they have in reality (not that it was implied that they have any).

Indeed it is generally thought that they come nowhere near addressing the advent of powerful AI, and are problematic in a number of ways. Which led me to look into how this issue is really being addressed. I found pieces from a decade ago, speculating on how ethics in AI might be handled, which were interesting but not really what I hoped to find. Then I came across an article by the CEO of Virtue, 'a digital ethical risk consultancy.' He proposes a seven-component approach for digital businesses (in regard to both user data and AI) which may be of interest. One quote from the article:
For the most part, academic [ethicists] ask, “Should we do this? Would it be good for society overall? Does it conduce to human flourishing?” Businesses, on the other hand, tend to ask, “Given that we are going to do this, how can we do it without making ourselves vulnerable to ethical risks?”
He makes clear that as of now, AI ethics are being geared to "stakeholders'" interests, not those of society in general. Not promising, though he is also a consultant to the government of Canada. :dunno:

User avatar
JimC
The sentimental bloke
Posts: 73016
Joined: Thu Feb 26, 2009 7:58 am
About me: To be serious about gin requires years of dedicated research.
Location: Melbourne, Australia
Contact:

Re: The rise of the machine

Post by JimC » Mon Jun 05, 2023 10:59 pm

L'Emmerdeur wrote:
Mon Jun 05, 2023 10:18 pm

He makes clear that as of now, AI ethics are being geared to "stakeholders'" interests, not those of society in general. Not promising, though he is also a consultant to the government of Canada. :dunno:
That fits with Brian's stance, implying that decisions around AI are tied up with power and profit rather than the welfare of humanity. However, I think that the more general potential concerns, as described by rEv and stated by the signatories to the warning about the future, are also valid, even if (to me) they concern an unlikely outcome.

User avatar
Svartalf
Offensive Grail Keeper
Posts: 40340
Joined: Wed Feb 24, 2010 12:42 pm
Location: Paris France
Contact:

Re: The rise of the machine

Post by Svartalf » Mon Jun 05, 2023 11:01 pm

Actually, Asimov's Robot books (or is there just I, Robot?) are all about how those laws can cause more trouble than they are worth, and how a truly powerful AI can bypass them, or turn them against themselves to achieve strange ends...
Embrace the Darkness, it needs a hug

PC stands for "Patronizing Cocksucker" Randy Ping

User avatar
pErvinalia
On the good stuff
Posts: 59297
Joined: Tue Feb 23, 2010 11:08 pm
About me: Spelling 'were' 'where'
Location: dystopia
Contact:

Re: The rise of the machine

Post by pErvinalia » Mon Jun 05, 2023 11:47 pm

Top-down or not, AI promises a future free of drudgery and exploitation. I'd like to see some evidence that the creators of this technology view a future where the lower classes are exploited for their benefit. It's pretty lazy to just assume all innovators in a capitalist system are necessarily exploitative.

Bill Gates has called for a technology tax as an automation dividend. Eventually governments will be forced to implement a universal basic income as job losses to automation soar. I assume even the capitalists will have to get on board with this if they want a consumer class that can afford to buy their products and services. Is a system with universal basic income more or less exploitative than what we have now?
Sent from my penis using wankertalk.
"The Western world is fucking awesome because of mostly white men" - DaveDodo007.
"Socialized medicine is just exactly as morally defensible as gassing and cooking Jews" - Seth. Yes, he really did say that..
"Seth you are a boon to this community" - Cunt.
"I am seriously thinking of going on a spree killing" - Svartalf.

User avatar
Brian Peacock
Tipping cows since 1946
Posts: 37956
Joined: Thu Mar 05, 2009 11:44 am
About me: Ablate me:
Location: Location: Location:
Contact:

Re: The rise of the machine

Post by Brian Peacock » Tue Jun 06, 2023 9:48 am

pErvinalia wrote:
Mon Jun 05, 2023 11:47 pm
Top-down or not, AI promises a future free of drudgery and exploitation. I'd like to see some evidence that the creators of this technology view a future where the lower classes are exploited for their benefit. It's pretty lazy to just assume all innovators in a capitalist system are necessarily exploitative.
You might be right about the promise of 'AI', but it's also worth noting that we could have a present free of drudgery and exploitation.

I'll leave you to do your own research into what the ideology of Longtermism actually entails, and to draw your own conclusions about why billionaires and Silicon Valley are pumping $$$ into its various think tanks. That should give you a clue as to why I see Longtermism as a kind of Millennialism, not that far removed from the Apocalypticism of Bannon's 'Fourth Turning' ideology, and therefore why Longtermists view things like global heating as merely a 'non-existential threat', and why their Transhumanist vision of humanity persisting to the heat death of the Universe is bound up with the logic of eugenics.
Bill Gates has called for a technology tax as an automation dividend. Eventually governments will be forced to implement a universal basic income as job losses to automation soar. I assume even the capitalists will have to get on board with this if they want a consumer class that can afford to buy their products and services. Is a system with universal basic income more or less exploitative than what we have now?
Machine learning systems aren't a kind of magic wand though, are they? We're old enough to remember the 1980s, when many commentators were saying that Keynes's vision of an automated future, releasing us from the drudgery of work and increasing our 'leisure time', was just around the corner. Since then automation and IT have done away with a lot of jobs, but where's all the extra leisure time? I was promised rocket shoes and holidays on the moon! Where did the future go?
.
rocketshoes.jpg
generated by ai
Rationalia relies on voluntary donations. There is no obligation of course, but if you value this place and want to see it continue please consider making a small donation towards the forum's running costs.
Details on how to do that can be found here.

.

"It isn't necessary to imagine the world ending in fire or ice.
There are two other possibilities: one is paperwork, and the other is nostalgia."

Frank Zappa

"This is how humanity ends; bickering over the irrelevant."
Clinton Huxley » 21 Jun 2012 » 14:10:36 GMT
.

User avatar
pErvinalia
On the good stuff
Posts: 59297
Joined: Tue Feb 23, 2010 11:08 pm
About me: Spelling 'were' 'where'
Location: dystopia
Contact:

Re: The rise of the machine

Post by pErvinalia » Tue Jun 06, 2023 10:07 am

Longtermists view things like global heating as only a 'non-existential threat'
Again, you'll have to provide some sort of evidence for this. Your interpretation of one sentence isn't really evidence.
Sent from my penis using wankertalk.
"The Western world is fucking awesome because of mostly white men" - DaveDodo007.
"Socialized medicine is just exactly as morally defensible as gassing and cooking Jews" - Seth. Yes, he really did say that..
"Seth you are a boon to this community" - Cunt.
"I am seriously thinking of going on a spree killing" - Svartalf.
