pErvinalia wrote: Mon Jun 05, 2023 3:06 am
My thought is that you can't read properly. But knowing that you actually can read for comprehension leaves me thinking you are being disingenuous on purpose. Why is that, Brian?

Yeah. Coming at this from a different angle must mean I'm lacking the necessary cognitive capacity, or else it's simply a moral failure or a deficiency in character, eh? But in our strand of the conversation you haven't really articulated exactly what you've been imploring me to be frightened of, nor have you addressed my points directly or answered any of the questions I've asked. So you'll have to forgive me for thinking that 'use your imagination' is just your way of saying 'I can't be bothered' or 'because, that's why'.
Here's L'Emmy's post about the Longtermists' statement, and my initial take on it.
Here's the safe.ai statement I posted here, in full:
safe.ai wrote:AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI. Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks. The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously.
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
*their emphasis.
The first problem is the term "AI": artificial intelligence. We all too readily associate, conflate, and confuse the 'intelligence' of machine learning systems with human-like consciousness, and all that this entails. This skews the debate about machine learning systems, what they are, how they operate, and what we employ them to do. It casts them as independent agents at large in the world, and in doing so removes responsibility from those who create the conditions by which these systems exist and operate. You may think that using a term like "AI" is just a detail, pErvy, but I think it is an important one. How we conceptualise machine learning systems has a significant influence on how we think about them and act towards them, just as it does with things like blacknuss, gender, class, dark metal, election security, or the economy.
The second problem is a bit more subtle, but it is made obvious by the selection of two possible existential threats and the deliberate absence of the most significant and present societal-scale existential threat we're currently facing: global heating and climate change. One might argue that this was done to offer ready examples that everyone can relate to and agree are 'a bad thing', but then we have to ask why the obvious inclusion of global heating and climate change is considered too contentious or controversial to mention. The reason is political, of course. It would be unhelpful to mention climate change only to have the loud-and-proud votaries of toxic conservatism brand the statement, and those who made it, 'woke'. This also skews the discourse, quietly appeasing those powerful people and organisations whose economic and ideological frameworks are driving us off the climate cliff, and who continue to exert a great directing force on our societies and our political, economic and democratic systems.
We absolutely need to talk about 'AI': about the amazing things machine learning is already doing, the amazing things it could do and the beneficial impacts it could have on our societies, along with the potential consequences of badly designed or poorly implemented systems, or systems purposefully created to disrupt or cause chaos and harm. At the moment, though, we're being strongly directed to talk about this only in terms of 'bad AI'. At the same time, if we're starting from the position that whatever we do we mustn't upset the big business types, in case they decide to sidetrack, degrade and/or close down the conversation, then we're already off to a bad start. This is exactly where we find ourselves at the moment, imo.
Is it meaningful to conceptualise advanced machine learning systems as having or developing essentially human-like psychologies, and then to talk about them as having independent interests, motivations, drives, impulses, intuitions, "desires and goals"?
If we are going to talk about 'AI' as potentially having human-like psychological traits like "desires and goals" etc., why are we being strongly directed to think about these systems as having super-human, god-like powers that only represent societal-scale existential risks and threats?
If artificial general intelligence is going to 'evolve' (somehow, somewhere, somewhen) then to begin with it will be built on the initial conditions and premises of its core programming, the data it has access to, and the hardware it runs on. So shouldn't the focus be on the values, "desires and goals" of those developing and implementing these systems, and the manner of that implementation, rather than on the fear-inducing totem of future malevolent rogue "non-human intelligences" of 'bad AI'?
If the Amazon 'fulfilment centre' algorithm has been trained to monitor human activity and automatically issue you a written warning if you go to the toilet three times in an hour, but doesn't notice when you collapse with a heart attack, then what kind of animated anthropomorphic intelligence do we imagine it might 'evolve' into?
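To make that concrete, here's a deliberately crude, entirely hypothetical sketch of such a monitoring rule (my own invention, nothing to do with anyone's real system): the code only 'notices' the one category of event it was written to count, so a collapse never even enters its logic.

```python
# Hypothetical sketch only - not any real warehouse system.
# A narrow monitoring rule "sees" only the events it was written to count.

from dataclasses import dataclass

@dataclass
class Event:
    worker_id: str
    kind: str     # e.g. "toilet_break", "collapse", "scan_item"
    minute: int   # minutes since the start of the shift

def issue_warnings(events, window=60, max_breaks=3):
    """Warn any worker logged with max_breaks or more toilet breaks inside a window."""
    warnings = []
    breaks = {}
    for e in events:
        if e.kind != "toilet_break":
            continue  # anything that isn't a toilet break (a collapse included) is skipped
        times = breaks.setdefault(e.worker_id, [])
        times.append(e.minute)
        recent = [t for t in times if e.minute - t < window]
        if len(recent) >= max_breaks:
            warnings.append((e.worker_id, e.minute))
    return warnings

events = [
    Event("w1", "toilet_break", 5),
    Event("w1", "toilet_break", 20),
    Event("w1", "toilet_break", 40),  # third break within the hour: warning issued
    Event("w2", "collapse", 12),      # never counted, never flagged
]
print(issue_warnings(events))  # [('w1', 40)] - w2's collapse goes unnoticed
```

The point being that whatever a system like this 'evolves' into, it can only ever be an elaboration of what it was built to measure and optimise.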
If a semi-autonomous combat system is being weighted to kill people who try to stop it killing people, then don't we really need to take a very serious look at what we're developing machine learning systems to do for us?
If the combat bot gets access to the data of the medical diagnostics bot then it could be used to kill people more efficiently, but if the diagnostics bot gets access to the combat bot's data then it could be used to treat those injured in conflict more efficiently - so which one should we be more worried about gaining autonomous intelligence?
If machine learning systems do develop into autonomous, human-equivalent intelligences with their own "desires and goals" etc. then shouldn't we be giving some thought now to the challenges that will present to our ideas and understanding about things like the fundamental rights of individual sentient life forms?
If we are going to share our future with so-called "digital minds" or "non-human intelligences" then what is stopping us from making sure that's a future that respects, protects, ensures, and secures the existence of all thinking beings - whether silicon or meat-based (or if Musk's dream comes true, then possibly a combination of both)?