Whatever an AI's "own desires and goals" end up being, they will ultimately be a function of its base programming, which is determined by the "desires and goals" of its initial human designers and operators. Therefore it is those human agents that need to be regulated. What I'm saying is that the framing of this issue around the personas of AIs, as if they are potential wayward agents that need to be corralled and controlled, dominated and suppressed, deliberately downplays, ignores, or obfuscates the moral responsibilities of the commercial and military interests driving their technical development.

pErvinalia wrote:
You're not thinking this right. The problem is that soon AI won't need human agents to direct them. The worry is that AI will operate on its own, with its own desires and goals that we are not entirely sure of.

Brian Peacock wrote: ↑Thu Jun 01, 2023 3:29 pm
While the capacity of machine learning to run these things up is pretty staggering, the issue isn't the machine-learning artefacts themselves, but the people using them. We already have a shitton of fake news, plainly biased news, political operators, commenters, and media that literally buy into an alternate version of reality (alt-facts), state-sponsored and corporate troll farms, hacking of infrastructure and intellectual property, profit-driven black ops, a plethora of fraud (some of it institutionalised and protected by law), and the gaming of financial systems, commercial sectors, and entire economies. AI just makes all this quicker and a bit cheaper. The problem isn't the AI but the nefarious impulses and turpitudinous desires of certain classes of human agent.

pErvinalia wrote: ↑Thu Jun 01, 2023 3:17 am
I think we are at a critical point in time, and we need to get it right. While there's a non-zero chance that we could get sentient Terminator-like robots, I think the real threat is misinformation and the manipulation of humans with said misinformation.

Imagine a Donald Trump-like leader conspiring with autocracies like Russia. Video/audio emerges implicating him in treason. But we no longer have the ability to distinguish real video from fake, as deep fakes have got so good. Even worse, imagine a networked AI faking evidence of a nuclear launch from China. How would a human operator in the US respond to this? Would they be tricked into launching a counterstrike?
Taking the killbot example above, how and why did weighting the completion of the military mission at a higher value than the commands or life of the systems admin get to be an active factor in the killbot's operation? Well, probably because the premise of the system has a wanton disregard for human life baked into its initial conditions - because this is a system specifically designed by humans to kill humans as effectively and efficiently as possible. In such a case it is the moral and ethical outlooks of the designers that need to be addressed and challenged, and we all know the military are already morally compromised in that regard.
Now imagine AI systems premised on legally gaming the world's financial and economic architecture to accrue maximal wealth without regard for the consequences to, or the well-being of, humans - capitobots, or fascobots! The underlying issue is the moral, ethical, and value structures of those interested parties who already have the political and economic power to commission, design, and implement that kind of machine learning to that particular end.
By my lights this is primarily a cultural problem, and yet billionaire Longtermists with real skin in the AI game are framing our discourse around these cultural questions as if they were wholly centred on machine-learning systems, "digital minds", "non-human intelligences", etc., conceptualised as independent rogue entities with agency and interests (desires and goals) that pose an "existential threat" to "our way of life". In effect, they are using essentially racist tropes to 'other' the idea of AI, and on the back of that calling for regulation that will enlist the State to secure and enforce their continued development and dominance of the sector. This is because when corporate interests talk about an existential threat to 'our' way of life they only really mean 'their' way of life, and the problem isn't with 'our' AI but with the AI of notional 'others': their lesser rivals and independent developers - those whom they can't control or manipulate to serve their corporate interests and whom, therefore, the State must regulate away.
Machine learning is already doing amazing things, like diagnosing the precursors to and very early onset of a range of medical conditions from retinal scans before symptoms even become apparent to the patient. Is that a "digital mind" we're supposed to be frightened of? Of course not, because the premise of those kinds of systems is clearly to enhance our understanding and well-being. So what kinds of "non-human intelligence" are we being warned about here, and who is commissioning, designing, resourcing, training, and implementing the kinds of machine-learning systems we might be right to be frightened of? And when we think about those kinds of nefarious entities, the people and organisations who might do a bad AI, isn't there a massive overlap with exactly the same class of people who are telling us to be frightened while simultaneously saying that only they should have the regulatory backing to continue to develop and bring to market those self-same AI systems?
So is the problem really embodied (quite literally) in AI or is it located in the nefarious impulses and turpitudinous desires (the values) of certain classes of human agent and the structures and systems they already have the power to manipulate and game in the service of their own personal and pecuniary interests?