WHAT PEOPLE THINK ETHICAL ISSUES IN AI ARE: wow.... we're creating... new life........

WHAT ETHICAL ISSUES IN AI ACTUALLY ARE: techbros worshiping the almighty algorithm, not caring to look at what bad patterns the machines are picking up (racism, sexism, etc) and how to avert them, and overreliance on neural networks meaning that said algorithms are treated as magical black boxes where nobody wants to (or can, really) point out exactly how the equation works (and why it may be faulty)

basically if people would just carve off 5% of the sci-fi panic and anxiety and spend that energy being legitimately concerned about how if you tell a computer "learn the rules of this game", the rules it will come back to report to you will be dripping with systemic inequality that must be directly confronted instead of excused or praised as infallible because "a passionless computer did it"

we'd all be much better off

@wigglytuffitout I wanna see an extant AI given a scenario with the social rules of today and see what it does and how that unveils people's assumptions

like that one where the AI was given the problem of getting from point A to point B as fast as possible and it built a very tall structure that fell over and landed on point B

@InspectorCaracal the worst part is that for a lot of these social rules, the technology is being developed by people so sheltered and so lazy about it that they do not question if something's a bad choice

they don't know enough to look at, say, an AI assessing mortgage risk, view the results, and go "wait a second fellas, i think this is just redlining with extra steps". they go "oh, the computer has shown us the truth!" and are pleased to have their own racism reaffirmed.

@wigglytuffitout see that's why I don't want it to be a neural net like what I was talking about, that's literally just designed to recreate the things you fed it

i want a problem-solving AI that you give the rules and the goal and let it figure out the how, then you look at the how

and then everyone will look at the how and it will be totally fucking bizarre and they'll be like "what the fuck" and it'll be because they assumed A and B and C were just inherent

@wigglytuffitout like with the falling over thing, the programmers just assumed that the solution would inherently involve a mobile entity that traversed the distance linearly

but that wasn't actually put into the rules, and the giant tower thing falling over actually fit the rules

it forces people to evaluate the fact that the rules they think are the rules are not necessarily all of their rules

@InspectorCaracal @wigglytuffitout the falling over thing was unexpected but it doesn’t come from any special cleverness in the AI, it comes from poorly specified rules. rules create an abstract space that a “solution” is a successful navigation of. a computer just hungrily navigated every corner, including surprising corners we don’t think are in the space because humans, especially adults, apply 1000 additional rules we’re not consciously aware of.

@InspectorCaracal @wigglytuffitout but it means a computer will cycle through solutions for society that are so taboo, human programmers won’t even think to forbid them.

@zensaiyuki @InspectorCaracal and in what's happening now, a lot of times society HAS seen fit to forbid bad solutions - it's just that the programmers doing this think that things like "looking at the current laws regulating equality in housing" is a waste of time because their box of numbers is more enlightened than that and anyway they're not interested in learning why such laws exist, so

WHOOPS IT'S RACIST REDLINING WITH EXTRA STEPS


@zensaiyuki @InspectorCaracal see also: silicon valley's love affair with "disrupting" instead of saying "okay, we're not doing things like this. i wonder why? maybe i should research that. it seems like we have a lot of regulations around this thing. i wonder why they exist. maybe i should read a book about it."

i'm just waiting until the new Apple (tm) iTriangleShirtwaistFactory is announced tbh

@wigglytuffitout @InspectorCaracal that’s assuming good faith. the more I hear about UBER, the more it sounds like its goal is not making money, but spending millions to achieve something else.

@zensaiyuki @wigglytuffitout I... don't think you realized that Harp was just saying that Silicon Valley is a bunch of racists and compared them to a historically famous human rights violation and mass death in early Western industry? there's no good faith being assumed there <.<

@InspectorCaracal @wigglytuffitout i meant “good faith” not as in “for the good of humanity” but “actually working towards what they say they are working towards”, in this case, profit for shareholders, which in itself is not an obvious benefit to humanity- but i think they are not even doing that.

@InspectorCaracal @wigglytuffitout like, i think destroying baked in institutions that are heavily regulated is the point, and not a consequence of ignorance or naivety

@zensaiyuki @InspectorCaracal yeah i can see that actually

i think it's plausible they may have started out guided by egotistical ignorance, but so many people pouring money into them at this point... don't have those excuses

@wigglytuffitout @InspectorCaracal though, I have witnessed the exact egotistical ignorance you’re talking about, which is why using it as cover could be so effective, because these people actually exist, and would empathise with the feigned naivety and even work to defend it, all while the erosion of built up defense of the marginalised continues

@zensaiyuki @InspectorCaracal at some point, all one can really do is say "i'm not sure if these techbros are criminals or just criminally stupid" while shaking your head and figuring out how to square up against them LOL

i know it drives Family Member Who Actually Works In AI mildly crazy because a lot of the mindset re: AI development comes out of sheer laziness and being more in love with wanting to use "neural network 'cos it sounds neat" than actually knowing what they're doing lmao

@wigglytuffitout @InspectorCaracal for me, I am feeling like I need to use neural networks for random shit just to stay employed and have relevant buzzwordy skills. though I don’t get the feeling that my coworkers are driven by the same existential terror, and they genuinely think neural networks are neat. and they ARE neat of course! but there’s a line between toy and tool that I don’t think everyone has a firm grasp of

@zensaiyuki @InspectorCaracal the best use for neural networks right now is "we have made computers smart enough to be stupid in ways that are HILARIOUS" and i am so glad many people on the internet have realized this and set them loose on things like 'make me some my little pony names' or 'make me some recipe titles' which make me laugh until i cry a little

it is for e-slapstick, not real work, lmao

@wigglytuffitout @InspectorCaracal i agree. though I did see one comment at some random programmery space who was grouchy about that stuff because he saw them as examples of failed programs and didn’t understand why people were showing off failures. I hope that’s not a common viewpoint.

@zensaiyuki @InspectorCaracal that is the cry of a joyless man who cannot laugh at Crockpot Cold Water

@wigglytuffitout @InspectorCaracal sort of a corporatised techbro version of the “useful idiot” phenomenon we saw explode on social media in 2015 if you’re getting my reference.
