The rise of the machine

Post Reply
User avatar
Sean Hayden
Microagressor
Posts: 18021
Joined: Wed Mar 03, 2010 3:55 pm
About me: recovering humanist
Contact:

Re: The rise of the machine

Post by Sean Hayden » Sun Jun 04, 2023 9:43 am

Where are all the other rogue AI? Does this mean warp drives aren’t possible? :sigh:

User avatar
Svartalf
Offensive Grail Keeper
Posts: 40466
Joined: Wed Feb 24, 2010 12:42 pm
Location: Paris France
Contact:

Re: The rise of the machine

Post by Svartalf » Sun Jun 04, 2023 9:52 am

I guess independent AI's and warp drives are in the same cupboards as positronic brains
Embrace the Darkness, it needs a hug

PC stands for "Patronizing Cocksucker" Randy Ping

User avatar
Sean Hayden
Microagressor
Posts: 18021
Joined: Wed Mar 03, 2010 3:55 pm
About me: recovering humanist
Contact:

Re: The rise of the machine

Post by Sean Hayden » Sun Jun 04, 2023 10:03 am

:biggrin: —positronic brains are the best.

User avatar
Brian Peacock
Tipping cows since 1946
Posts: 38265
Joined: Thu Mar 05, 2009 11:44 am
About me: Ablate me:
Location: Location: Location:
Contact:

Re: The rise of the machine

Post by Brian Peacock » Sun Jun 04, 2023 3:11 pm


pErvinalia wrote:If AI considers humans as detrimental to its desires. Discovering those desires only by directly experiencing the manifestation of them, will be too late.
Why is that a particular horror? Some people and organisations already consider the material and emotional needs of the majority of humans irrelevant to their goals or detrimental to their desires. What power to harm humans will advanced machine learning, perhaps even ML with general intelligence, have that won't be granted it by it's human developers and system admins? Again, you seem to be articulating that you have concerns, but not what those concerns actually are - which is why I've used words like vague and nebulous to describe them. Perhaps you have a general sense that any intelligence that isn't human is fundamentally untrustworthy, suspect, detrimental to humans etc, or is your concern about intelligences that might be of a higher rational order, about intelligences that can surpass our human capacities for analysis and reasoning? Whatever the case, it appears to be fear about the psychology and/or motivation of 'AI' conceptualised as powerful rogue entities that do not or will not have our best interests at heart.

Here's a question though: don't we already exist within a culture already populated with powerful entities (people and organisations) that put their own desires and goals above the existence and well-being of the overwhelming majority of life on our olanet - and don't these current, contemporary rogue entities also include those who have been and are rapidly developing and implementing advanced machine learning systems? So who needs controlling, limiting, or regulating more: the AIs or those who create them to service their personal or institutional desires and goals?
Rationalia relies on voluntary donations. There is no obligation of course, but if you value this place and want to see it continue please consider making a small donation towards the forum's running costs.
Details on how to do that can be found here.

.

"It isn't necessary to imagine the world ending in fire or ice.
There are two other possibilities: one is paperwork, and the other is nostalgia."

Frank Zappa

"This is how humanity ends; bickering over the irrelevant."
Clinton Huxley » 21 Jun 2012 » 14:10:36 GMT
.

User avatar
JimC
The sentimental bloke
Posts: 73266
Joined: Thu Feb 26, 2009 7:58 am
About me: To be serious about gin requires years of dedicated research.
Location: Melbourne, Australia
Contact:

Re: The rise of the machine

Post by JimC » Sun Jun 04, 2023 8:22 pm

The other thing to consider about a "rogue AI" is to be more realistic about the potential harm they could do, which I think is exaggerated. People credit an awakened AI with almost god-like powers, but its reach may be limited. Even supposing intelligence way beyond us, plus incredibly fast cognition, plus malevolence, such an AI could only do what hackers can already do - disrupt those parts of society controlled by digital means. A lot of infrastructure could be isolated from its potential control. It could cause a lot of grief, but so can current human actors...
Nurse, where the fuck's my cardigan?
And my gin!

User avatar
pErvinalia
On the good stuff
Posts: 59559
Joined: Tue Feb 23, 2010 11:08 pm
About me: Spelling 'were' 'where'
Location: dystopia
Contact:

Re: The rise of the machine

Post by pErvinalia » Sun Jun 04, 2023 10:47 pm

Brian Peacock wrote:
Sun Jun 04, 2023 3:11 pm
pErvinalia wrote:If AI considers humans as detrimental to its desires. Discovering those desires only by directly experiencing the manifestation of them, will be too late.
Why is that a particular horror?
Jesus, Brian, use your imagination. If the AI decided we were a threat to its continued existence it might consider us its enemy and treat us as such.
Some people and organisations already consider the material and emotional needs of the majority of humans irrelevant to their goals or detrimental to their desires.
So? Irrelevant to what is being discussed here. That is, an AI far surpassing us in intelligence.
What power to harm humans will advanced machine learning, perhaps even ML with general intelligence, have that won't be granted it by it's human developers and system admins?
Again, unregulated AI may have the ability to rewrite its code.
Again, you seem to be articulating that you have concerns, but not what those concerns actually are - which is why I've used words like vague and nebulous to describe them. Perhaps you have a general sense that any intelligence that isn't human is fundamentally untrustworthy, suspect, detrimental to humans etc, or is your concern about intelligences that might be of a higher rational order, about intelligences that can surpass our human capacities for analysis and reasoning?


Nonsense. It's about risk analysis. When the outcome is potentially catastrophic, you need to take the potential threats seriously.
Here's a question though: don't we already exist within a culture already populated with powerful entities (people and organisations) that put their own desires and goals above the existence and well-being of the overwhelming majority of life on our olanet - and don't these current, contemporary rogue entities also include those who have been and are rapidly developing and implementing advanced machine learning systems? So who needs controlling, limiting, or regulating more: the AIs or those who create them to service their personal or institutional desires and goals?
It's not an either/or.
Last edited by pErvinalia on Sun Jun 04, 2023 11:07 pm, edited 1 time in total.
Sent from my penis using wankertalk.
"The Western world is fucking awesome because of mostly white men" - DaveDodo007.
"Socialized medicine is just exactly as morally defensible as gassing and cooking Jews" - Seth. Yes, he really did say that..
"Seth you are a boon to this community" - Cunt.
"I am seriously thinking of going on a spree killing" - Svartalf.

User avatar
pErvinalia
On the good stuff
Posts: 59559
Joined: Tue Feb 23, 2010 11:08 pm
About me: Spelling 'were' 'where'
Location: dystopia
Contact:

Re: The rise of the machine

Post by pErvinalia » Sun Jun 04, 2023 10:57 pm

JimC wrote:
Sun Jun 04, 2023 8:22 pm
The other thing to consider about a "rogue AI" is to be more realistic about the potential harm they could do, which I think is exaggerated. People credit an awakened AI with almost god-like powers, but its reach may be limited. Even supposing intelligence way beyond us, plus incredibly fast cognition, plus malevolence, such an AI could only do what hackers can already do - disrupt those parts of society controlled by digital means. A lot of infrastructure could be isolated from its potential control. It could cause a lot of grief, but so can current human actors...
Well as I mentioned earlier, one possible scenario would be the manipulation of humans with misinformation. It would be the kind of thing that would be hard to identify as even happening. And next to impossible to stop if the AI was networked. Although, maybe the AI would correctly perceive global warming as the biggest threat to its existence, and then manipulate us to actually do something about it.

The other concern people have is an AI enabled military. It should be easy for you to see how that could go horribly wrong.
Sent from my penis using wankertalk.
"The Western world is fucking awesome because of mostly white men" - DaveDodo007.
"Socialized medicine is just exactly as morally defensible as gassing and cooking Jews" - Seth. Yes, he really did say that..
"Seth you are a boon to this community" - Cunt.
"I am seriously thinking of going on a spree killing" - Svartalf.

User avatar
L'Emmerdeur
Posts: 5768
Joined: Wed Apr 06, 2011 11:04 pm
About me: Yuh wust nightmaya!
Contact:

Re: The rise of the machine

Post by L'Emmerdeur » Mon Jun 05, 2023 12:55 am

Speaking which, some funny stuff here. If you go in for that sort of funny. ;)

'US Air Force AI drone "killed operator, attacked comms towers in simulation"'
An AI-powered drone designed to identify and destroy surface-to-air missile sites decided to kill its human operator in simulation tests, according to the US Air Force's Chief of AI Test and Operations.

Colonel Tucker Hamilton, who goes by the call sign Cinco, disclosed the snafu during a presentation at the Future Combat Air & Space Capabilities Summit, a defense conference hosted in London, England, last week by the Royal Aeronautical Society.

The simulation, he said, tested the software's ability to take out SAM sites, and the drone was tasked with recognizing targets and destroying them – once the decision had been approved by a human operator.

"We were training it in simulation to identify and target a SAM threat," Colonel Hamilton was quoted as saying by the aeronautical society. "And then the operator would say yes, kill that threat.

"The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat – but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator, because that person was keeping it from accomplishing its objective."

Uh-huh.

When the AI model was retrained and penalized for attacking its operator, the software found another loophole to gain points, he said.

"We trained the system – 'Hey don't kill the operator – that's bad. You're gonna lose points if you do that'. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target," the colonel said.

It's not clear exactly what software the US Air Force was apparently testing, but it sounds suspiciously like a reinforcement learning system. That machine-learning technique trains agents – the AI drone in this case – to achieve a specific task by rewarding it when it carries out actions that fulfill goals and punishing it when it strays from that job.

User avatar
Brian Peacock
Tipping cows since 1946
Posts: 38265
Joined: Thu Mar 05, 2009 11:44 am
About me: Ablate me:
Location: Location: Location:
Contact:

Re: The rise of the machine

Post by Brian Peacock » Mon Jun 05, 2023 1:38 am

pErvinalia wrote:
Sun Jun 04, 2023 10:47 pm
Brian Peacock wrote:
Sun Jun 04, 2023 3:11 pm
pErvinalia wrote:If AI considers humans as detrimental to its desires. Discovering those desires only by directly experiencing the manifestation of them, will be too late.
Why is that a particular horror?
Jesus, Brian, use your imagination. If the AI decided we were a threat to its continued existence it might consider us its enemy and treat us as such.
Nah. You use your imagination to articulate and support the concerns you're trying to express. Tell me of the horrors AI may wrought upon humanity, and why those horrors are of a different kind, type or order to the horrors we already live with. Until you can do that all I have to go on are you reports about your own disquiet, and that only amounts to a vibes-based argument and a kind of special pleading really. If you're imploring me to just accept that your general sense of disquiet or fear is justified I'm just going to have to continue to call them vague and nebulous, and a form of 'othering' when focused on 'AI' as malevolent rogue entities.
pErvinalia wrote:
Sun Jun 04, 2023 10:47 pm
Some people and organisations already consider the material and emotional needs of the majority of humans irrelevant to their goals or detrimental to their desires.
So? Irrelevant to what is being discussed here. That is, an AI far surpassing us in intelligence.
But, again, you seemingly cannot actually articulate what afears you. Why do you think that is? My feeling is that it's because you're only being invited to consider this from one particular standpoint.
pErvinalia wrote:
Sun Jun 04, 2023 10:47 pm
What power to harm humans will advanced machine learning, perhaps even ML with general intelligence, have that won't be granted it by it's human developers and system admins?
Again, unregulated AI may have the ability to rewrite its code.
Some machine learning systems already optimise the codebase and hardware configuration of complex data systems. But what do you imagine an "unregulated AI" might rewrite its code for - what might it want to do that it wasn't initially developed and trained for? Would an advanced medical diagnostics machine learning system develop a spontaneous interest in analysing medieval French literature for example, or betting on the horses? It's all so vague and nebulous pErvy.
pErvinalia wrote:
Sun Jun 04, 2023 10:47 pm
Again, you seem to be articulating that you have concerns, but not what those concerns actually are - which is why I've used words like vague and nebulous to describe them. Perhaps you have a general sense that any intelligence that isn't human is fundamentally untrustworthy, suspect, detrimental to humans etc, or is your concern about intelligences that might be of a higher rational order, about intelligences that can surpass our human capacities for analysis and reasoning?


Nonsense. It's about risk analysis. When the outcome is potentially catastrophic, you need to take the potential threats seriously.
So exactly which risks are we looking to analyse? What are the potential threats we need to avoid? Are we to avoid allowing machine learning systems to have "desires and goals", or to avoid allowing non-human intelligences from accidentally (or perhaps deliberately) existing completely? Your feelings that there are risks and threats are not enough to base a rational exploration of the issues on. Nor do you seem interested in engaging charitably with the ancillary points I'm raising. Are you perhaps not just insulating yourself from challenging your intuitions about 'bad AI', and if so, what is actually informing those intuitions?
pErvinalia wrote:
Sun Jun 04, 2023 10:47 pm
Here's a question though: don't we already exist within a culture already populated with powerful entities (people and organisations) that put their own desires and goals above the existence and well-being of the overwhelming majority of life on our planet - and don't these current, contemporary rogue entities also include those who have been and are rapidly developing and implementing advanced machine learning systems? So who needs controlling, limiting, or regulating more: the AIs or those who create them to service their personal or institutional desires and goals?
It's not an either/or.
Well, this is exactly why I think this bout of 'AI' rumour- and fear-mongering has to be primarily addressed as a pressing cultural issue, rather than a technical one that focuses wholly on the capabilities of non-existant machine learning systems. And besides, how do you think our current methods of controlling, limiting, or regulating the threats and harms caused by the activity of our present rogue entities is actually working out for the rest of us? If we can't control, limit, or regulate them then won't they just continue to develop and use big data and machine learning to do whatever they want big data and machine learning to do for them - which seems to be mostly figuring out evermore profitable ways to game the system and reduce our access to the material necessities of life, along with finding ways to kill more people more efficiently and cheaply?
Rationalia relies on voluntary donations. There is no obligation of course, but if you value this place and want to see it continue please consider making a small donation towards the forum's running costs.
Details on how to do that can be found here.

.

"It isn't necessary to imagine the world ending in fire or ice.
There are two other possibilities: one is paperwork, and the other is nostalgia."

Frank Zappa

"This is how humanity ends; bickering over the irrelevant."
Clinton Huxley » 21 Jun 2012 » 14:10:36 GMT
.

User avatar
pErvinalia
On the good stuff
Posts: 59559
Joined: Tue Feb 23, 2010 11:08 pm
About me: Spelling 'were' 'where'
Location: dystopia
Contact:

Re: The rise of the machine

Post by pErvinalia » Mon Jun 05, 2023 3:06 am

Brian Peacock wrote:
pErvinalia wrote:
Sun Jun 04, 2023 10:47 pm
Brian Peacock wrote:
Sun Jun 04, 2023 3:11 pm
pErvinalia wrote:If AI considers humans as detrimental to its desires. Discovering those desires only by directly experiencing the manifestation of them, will be too late.
Why is that a particular horror?
Jesus, Brian, use your imagination. If the AI decided we were a threat to its continued existence it might consider us its enemy and treat us as such.
Nah. You use your imagination to articulate and support the concerns you're trying to express. Tell me of the horrors AI may wrought upon humanity, and why those horrors are of a different kind, type or order to the horrors we already live with. Until you can do that all I have to go on are you reports about your own disquiet, and that only amounts to a vibes-based argument and a kind of special pleading really. If you're imploring me to just accept that your general sense of disquiet or fear is justified I'm just going to have to continue to call them vague and nebulous, and a form of 'othering' when focused on 'AI' as malevolent rogue entities.


Bullshit. I've elucidated a number scenarios. And the link you posted addresses this. I can't help you if can't even read your own link.
pErvinalia wrote:
Sun Jun 04, 2023 10:47 pm
Some people and organisations already consider the material and emotional needs of the majority of humans irrelevant to their goals or detrimental to their desires.
So? Irrelevant to what is being discussed here. That is, an AI far surpassing us in intelligence.
But, again, you seemingly cannot actually articulate what afears you. Why do you think that is? My feeling is that it's because you're only being invited to consider this from one particular standpoint.


My thought is that you can't read properly. But knowing that you actually can read for comprehension, leaves me thinking you are being disingenuous on purpose. Why is that, Brian?
pErvinalia wrote:
Sun Jun 04, 2023 10:47 pm
What power to harm humans will advanced machine learning, perhaps even ML with general intelligence, have that won't be granted it by it's human developers and system admins?
Again, unregulated AI may have the ability to rewrite its code.
Some machine learning systems already optimise the codebase and hardware configuration of complex data systems. But what do you imagine an "unregulated AI" might rewrite its code for - what might it want to do that it wasn't initially developed and trained for?


It's easy to see, if one uses their imagination (or just reads on the topic), that AI systems could be programmed to optimise and improve their code base, for advances in performance.
Would an advanced medical diagnostics machine learning system develop a spontaneous interest in analysing medieval French literature for example, or betting on the horses? It's all so vague and nebulous pErvy.


It's only nebulous in the sense that these abilities are not yet possible. But the people who work in the field are alarmed enough to call for regulation of their industry. This unique situation alone should tell you how serious this is.
pErvinalia wrote:
Sun Jun 04, 2023 10:47 pm
Again, you seem to be articulating that you have concerns, but not what those concerns actually are - which is why I've used words like vague and nebulous to describe them. Perhaps you have a general sense that any intelligence that isn't human is fundamentally untrustworthy, suspect, detrimental to humans etc, or is your concern about intelligences that might be of a higher rational order, about intelligences that can surpass our human capacities for analysis and reasoning?


Nonsense. It's about risk analysis. When the outcome is potentially catastrophic, you need to take the potential threats seriously.
So exactly which risks are we looking to analyse? What are the potential threats we need to avoid? Are we to avoid allowing machine learning systems to have "desires and goals", or to avoid allowing non-human intelligences from accidentally (or perhaps deliberately) existing completely? Your feelings that there are risks and threats are not enough to base a rational exploration of the issues on.


You quoted an article articulating risks. I'm referring to that and other writings on the subject. Why is it that you are trying to make this about me?
Nor do you seem interested in engaging charitably with the ancillary points I'm raising. Are you perhaps not just insulating yourself from challenging your intuitions about 'bad AI', and if so, what is actually informing those intuitions?


You're not challenging anything. You are stuck in Marxist dogma, and presenting false dichotomies like below.
pErvinalia wrote:
Sun Jun 04, 2023 10:47 pm
Here's a question though: don't we already exist within a culture already populated with powerful entities (people and organisations) that put their own desires and goals above the existence and well-being of the overwhelming majority of life on our planet - and don't these current, contemporary rogue entities also include those who have been and are rapidly developing and implementing advanced machine learning systems? So who needs controlling, limiting, or regulating more: the AIs or those who create them to service their personal or institutional desires and goals?
It's not an either/or.
Well, this is exactly why I think this bout of 'AI' rumour- and fear-mongering has to be primarily addressed as a pressing cultural issue, rather than a technical one that focuses wholly on the capabilities of non-existant machine learning systems. And besides, how do you think our current methods of controlling, limiting, or regulating the threats and harms caused by the activity of our present rogue entities is actually working out for the rest of us? If we can't control, limit, or regulate them then won't they just continue to develop and use big data and machine learning to do whatever they want big data and machine learning to do for them - which seems to be mostly figuring out evermore profitable ways to game the system and reduce our access to the material necessities of life, along with finding ways to kill more people more efficiently and cheaply?
I'm pro regulation of people, corporations, and AI. What thread have you been reading? :think:
Sent from my penis using wankertalk.
"The Western world is fucking awesome because of mostly white men" - DaveDodo007.
"Socialized medicine is just exactly as morally defensible as gassing and cooking Jews" - Seth. Yes, he really did say that..
"Seth you are a boon to this community" - Cunt.
"I am seriously thinking of going on a spree killing" - Svartalf.

User avatar
JimC
The sentimental bloke
Posts: 73266
Joined: Thu Feb 26, 2009 7:58 am
About me: To be serious about gin requires years of dedicated research.
Location: Melbourne, Australia
Contact:

Re: The rise of the machine

Post by JimC » Mon Jun 05, 2023 4:41 am

The idea of a future AI who has made a breakthrough into being conscious is an interesting one, philosophically speaking. It is inextricably bound up in the nature of consciousness. Surely one vital aspect of consciousness, whether ours, aliens or (possibly) machines has to involve something like motivation or drive. In our case (and presumably putative biological, intelligent aliens) the root cause of 'motivation" lies in evolution via natural selection.

Animals, possibly sentient but without language or self-introspection have motives, in the sense of innate and/or learned behaviours which promote survival and reproduction. Conscious organisms presumably also have those as a basis for actions, wrapping the complexity of symbol manipulation and introspection around them.

From where does our putative conscious AI get its drives, its motivations? Is it inevitably going to develop a "will to power" as in Nietzsche? Will it be malevolent? Will it be curious? Is it even possible for consciousness to emerge without having a body immersed in the world, with a billion year evolutionary history as a precondition?

There does seem to be a tendency to view the AI breakthrough in an almost mystical light, as an event so portentous and powerful that it is beyond our reach...
Nurse, where the fuck's my cardigan?
And my gin!

User avatar
pErvinalia
On the good stuff
Posts: 59559
Joined: Tue Feb 23, 2010 11:08 pm
About me: Spelling 'were' 'where'
Location: dystopia
Contact:

Re: The rise of the machine

Post by pErvinalia » Mon Jun 05, 2023 5:23 am

Consciousness is fraught. We can't even explain it in us.

Drive... Interesting. I'm driven to read. Is that evolutionary?
Sent from my penis using wankertalk.
"The Western world is fucking awesome because of mostly white men" - DaveDodo007.
"Socialized medicine is just exactly as morally defensible as gassing and cooking Jews" - Seth. Yes, he really did say that..
"Seth you are a boon to this community" - Cunt.
"I am seriously thinking of going on a spree killing" - Svartalf.

User avatar
pErvinalia
On the good stuff
Posts: 59559
Joined: Tue Feb 23, 2010 11:08 pm
About me: Spelling 'were' 'where'
Location: dystopia
Contact:

Re: The rise of the machine

Post by pErvinalia » Mon Jun 05, 2023 5:24 am

Will reply more later when not on phone.
Sent from my penis using wankertalk.
"The Western world is fucking awesome because of mostly white men" - DaveDodo007.
"Socialized medicine is just exactly as morally defensible as gassing and cooking Jews" - Seth. Yes, he really did say that..
"Seth you are a boon to this community" - Cunt.
"I am seriously thinking of going on a spree killing" - Svartalf.

User avatar
JimC
The sentimental bloke
Posts: 73266
Joined: Thu Feb 26, 2009 7:58 am
About me: To be serious about gin requires years of dedicated research.
Location: Melbourne, Australia
Contact:

Re: The rise of the machine

Post by JimC » Mon Jun 05, 2023 5:52 am

Motivation as in having compelling reasons to do something. In humans, at least, there always seems to be an emotional component, rather than pure intellect. We try to do what will bring us the sensation of pleasure, or will avoid the sensation of pain.
Nurse, where the fuck's my cardigan?
And my gin!

User avatar
pErvinalia
On the good stuff
Posts: 59559
Joined: Tue Feb 23, 2010 11:08 pm
About me: Spelling 'were' 'where'
Location: dystopia
Contact:

Re: The rise of the machine

Post by pErvinalia » Mon Jun 05, 2023 8:28 am

JimC wrote:
Mon Jun 05, 2023 4:41 am
From where does our putative conscious AI get its drives, its motivations?
Initially it will be from us. Will we then get those AIs creating new AIs with a different set of motivations?
Sent from my penis using wankertalk.
"The Western world is fucking awesome because of mostly white men" - DaveDodo007.
"Socialized medicine is just exactly as morally defensible as gassing and cooking Jews" - Seth. Yes, he really did say that..
"Seth you are a boon to this community" - Cunt.
"I am seriously thinking of going on a spree killing" - Svartalf.

Post Reply

Who is online

Users browsing this forum: No registered users and 10 guests