
WHAT PEOPLE THINK ETHICAL ISSUES IN AI ARE: wow.... we're creating... new life........

WHAT ETHICAL ISSUES IN AI ACTUALLY ARE: techbros worshiping the almighty algorithm, not caring to look at what bad patterns the machines are picking up (racism, sexism, etc) and how to avert them, and overreliance on neural networks meaning that said algorithms are treated as magical black boxes where nobody wants to (or can, really) point out exactly how the equation works (and why it may be faulty)

basically if people would just carve off 5% of the sci-fi panic and anxiety and spend that energy being legitimately concerned about how if you tell a computer "learn the rules of this game", the rules it will come back to report to you will be dripping with systemic inequality that must be directly confronted instead of excused or praised as infallible because "a passionless computer did it"

we'd all be much better off

quit worrying about if AI is going to be creating new consciousness

start worrying about AI learning how to enforce things like redlining and the glass ceiling

worry about it because *it's happening now*.

@wigglytuffitout facetious one-liner aside, that's actually the root of the problem

AI is seen as a black-box magical Future Invention, and as such, the questions of the actual mechanics and process of developing it are generally ignored by the populace because it's not Real AI Yet. And that's bad.

@InspectorCaracal the worst part is that this is also pretty popular *among the people programming AI right now*.

neural networking is a cool buzzword that sounds sci-fi and high-tech, and it also processes data in a way that makes it very, very, very hard, if not nigh impossible, to go "okay, i told you to learn the rules of the game. can you tell me what rules you learned?" and have the program show you what it's doing. BUT IT SOUNDS SO COOL IT MUST BE INFALLIBLE..........

@wigglytuffitout I was very excited about neural nets until I realized that 99% of the neural nets you hear about are just the same statistical analysis algorithm being fed different data in the hope that it can independently reproduce the data you fed it

which.... okay.......?

@wigglytuffitout That last point can't be emphasized enough. **IT'S HAPPENING NOW**. People got all up in arms about the racist webcam years ago, but what they don't realize is that that racist webcam's cousin is now making important decisions in areas like the court system or the housing system. And it's not much better about not being racist. If anything, it might just be a bit worse.

@wigglytuffitout And, of course, the techbros love to say "Oh, it's just learning from the data." Yeah, guess what? The data contains **years** of prejudice. That computer ain't learning from some pure, neutral set. It's not like trying to teach it to recognize the number three. It's learning from our shittiness. The computer can't tell right from wrong, so if you feed it all the wrong of the past centuries, without feeding it *massive* corrections...
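
(to make "it's learning from our shittiness" concrete: here's a toy sketch, every number and group label below is invented and nothing here is anyone's real system. fit the most boring model imaginable to biased historical decisions and it dutifully learns the bias right back.)

```python
# Toy sketch (hypothetical data): a model trained on prejudiced historical
# decisions reproduces the prejudice. Group membership is the ONLY thing
# that differs; "qualification" is drawn from the same distribution for both.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)              # 0 = group A, 1 = group B
qualification = rng.normal(0.0, 1.0, n)    # identical for both groups

# Historical label: approvals depended on qualification AND on group,
# i.e. the "ground truth" we feed the model is already biased.
approved = (qualification + rng.normal(0, 0.5, n) - 1.0 * group) > 0

# Ordinary least-squares fit on [qualification, group] -- the most boring
# "statistical analysis algorithm" imaginable.
X = np.column_stack([np.ones(n), qualification, group])
coef, *_ = np.linalg.lstsq(X, approved.astype(float), rcond=None)

pred = X @ coef > 0.5
print("learned group penalty:", round(coef[2], 3))   # clearly negative
print("approval rate, group A:", round(pred[group == 0].mean(), 3))
print("approval rate, group B:", round(pred[group == 1].mean(), 3))
# Same qualifications in, very different approval rates out -- the model
# didn't discover any "truth", it just memorised the old bias.
```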

@wigglytuffitout This. So many people need to get into the field thinking about this because not enough people in the field care enough.

@wigglytuffitout (That said a conscious AI might recognize the problems too and wipe out all the techbros, so hey, bonus.)

@pettancow @wigglytuffitout A self-aware AI is going to be a nerdy teenager raised by libertarians, the ultimate techbro

@it_wasnt_arson @pettancow @wigglytuffitout does that make it a double negative, and the rebellious teenager AI will go full breadtube on them?

@wigglytuffitout I wanna see an extant AI given a scenario with the social rules of today and see what it does and how that unveils people's assumptions

like that one where the AI was given the problem of getting from point A to point B as fast as possible, and it built a very tall structure that fell over and landed on point B

@InspectorCaracal the worst part is that for a lot of these social rules, the technology is being developed by people so sheltered and so lazy about it that they do not question if something's a bad choice

they don't know enough to look at, say, an AI assessing mortgage risk, view the results, and go "wait a second fellas, i think this is just redlining with extra steps". they go "oh, the computer has shown us the truth!" and are pleased to have their own racism reaffirmed.
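
("redlining with extra steps" is depressingly easy to reproduce in a toy example, too. the sketch below is entirely made up, but it shows the standard failure: piously delete the protected column, keep a zip-code column that correlates with it, and the fitted model discriminates anyway.)

```python
# Toy sketch (invented numbers): drop the protected attribute, keep a proxy
# for it (here "zipcode"), and the fitted model quietly redlines anyway.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
group = rng.integers(0, 2, n)                       # protected attribute
# Segregated housing: zip code is an almost-perfect proxy for group.
zipcode = (group ^ (rng.random(n) < 0.05)).astype(float)
income = rng.normal(50, 10, n)                      # same for both groups

# Biased historical outcomes: denials depended on group, not just income.
denied = (0.5 * group + (55 - income) / 20 + rng.normal(0, 0.3, n)) > 0

# "Fair" model: the group column is removed, but zipcode stays in.
X = np.column_stack([np.ones(n), income, zipcode])
coef, *_ = np.linalg.lstsq(X, denied.astype(float), rcond=None)
pred_denied = X @ coef > 0.5

print("denial rate, group A:", round(pred_denied[group == 0].mean(), 3))
print("denial rate, group B:", round(pred_denied[group == 1].mean(), 3))
# The model never saw "group", but the zip-code proxy carries it in anyway.
```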

@wigglytuffitout see that's why I don't want it to be a neural net like what I was talking about, which is literally just designed to recreate the things you fed it

i want a problem-solving AI that you give the rules and the goal and let it figure out the how, then you look at the how

and then everyone will look at the how and it will be totally fucking bizarre and they'll be like "what the fuck" and it'll be because they assumed A and B and C were just inherent

@wigglytuffitout like with the falling over thing, the programmers just assumed that the solution would inherently involve a mobile entity that traversed the distance linearly

but that wasn't actually put into the rules, and the giant tower thing falling over actually fit the rules

it forces people to evaluate the fact that the rules they think are the rules are not necessarily all of their rules

@InspectorCaracal @wigglytuffitout the falling over thing was unexpected but it doesn’t come from any special cleverness in the AI, it comes from poorly specified rules. rules create an abstract space that a “solution” is a successful navigation of. a computer just hungrily navigated every corner, including surprising corners we don’t think are in the space, because humans, especially adults, apply 1000 additional rules we’re not consciously aware of.
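
(a toy sketch of that "hungrily navigated every corner" point, with the physics shamelessly faked: the only rule that gets scored is "get some part of yourself to point B fast", and nobody wrote down the unstated rule that the solution has to travel there, so a random search happily settles on "be a 100-metre tower and fall over".)

```python
# Toy sketch of an underspecified objective (all physics faked): the stated
# rule is "get some part of yourself to point B as fast as possible". Nobody
# wrote down the unstated rule "...by travelling there", so a tall thing
# that just tips over is a perfectly legal, and winning, solution.
import random

DISTANCE_TO_B = 100.0   # metres from A to B

def time_to_reach_B(height, speed):
    """Crude scoring: walking covers ground at `speed` m/s; a body of
    `height` metres that tips over instantly covers `height` metres."""
    remaining = max(0.0, DISTANCE_TO_B - height)    # falling covers `height`
    fall_time = 2.0 if height > 0 else 0.0          # flat cost to tip over
    walk_time = remaining / speed if speed > 0 else float("inf")
    return fall_time + walk_time

random.seed(0)
candidates = [(random.uniform(0, 120), random.uniform(0.1, 3.0))
              for _ in range(10_000)]               # (height, walking speed)

best = min(candidates, key=lambda c: time_to_reach_B(*c))
print("best height: %.1f m, best speed: %.2f m/s" % best)
print("time: %.1f s" % time_to_reach_B(*best))
# The winner is 100+ m tall; its walking speed doesn't even matter, because
# the rules as written never said the solution had to be a thing that moves.
```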

@zensaiyuki @wigglytuffitout ... I was literally using that as an example of using AI to unveil underlying assumptions about rules in the humans writing the code....

@InspectorCaracal @wigglytuffitout I got that, and it can be useful for finding legit solutions that a human would never think of. another great example is a circuit-designing AI that was able to save a part by not including an oscillator, but instead using electromagnetic interference from neighboring circuits for that function.

but the drawback is that these unconventional solutions don’t always reveal the gaps in our reasoning.

@zensaiyuki @wigglytuffitout I feel like you thought I was making some sort of vast encompassing point about the ultimate ideal use of AI in problem solving, but I really was *only* talking about how I want to see it fuck with people's preconceptions more often. <.< Not because I think it is The Answer, I just want it to happen more.

@zensaiyuki @wigglytuffitout The problem is that people hate being wrong so they'll avoid mechanisms that show them up as being wrong and seek out mechanisms that confirm them as being right.

Hence the fondness for neural nets that are really just regurgitating what we told them.

@InspectorCaracal @wigglytuffitout i agree that the unconventional solutions are amusing, and we need more of them. having actually tried to build AI to do exactly that myself, a lot of the magic of why those things happen has been dispelled for me. a human could find these solutions with a pen and paper. it would just take a lot longer.

@InspectorCaracal @zensaiyuki basically we need to rely on AI less for important shit, and more for shit like "hey AI, here is a two-legged critter, figure out the most efficient way for it to walk on the moon", then we put googly eyes on the results, especially all the midway attempts it made while working towards the best solution

@InspectorCaracal @wigglytuffitout no I wasn’t thinking that deeply about it. You were just talking about something I know about and wanted to feel like a part of a conversation. sorry for any misunderstandings

@InspectorCaracal @wigglytuffitout but it means a computer will cycle through solutions for society that are so taboo, human programmers won’t even think to forbid them.

@zensaiyuki @InspectorCaracal and in what's happening now, a lot of times society HAS seen fit to forbid bad solutions - it's just that the programmers doing this think that things like "looking at the current laws regulating equality in housing" are a waste of time because their box of numbers is more enlightened than that and anyway they're not interested in learning why such laws exist, so

WHOOPS IT'S RACIST REDLINING WITH EXTRA STEPS

@zensaiyuki @InspectorCaracal see also: silicon valley's love affair with "disrupting" instead of saying "okay, we're not doing things like this. i wonder why? maybe i should research that. it seems like we have a lot of regulations around this thing. i wonder why they exist. maybe i should read a book about it."

i'm just waiting until the new Apple (tm) iTriangleShirtwaistFactory is announced tbh

@wigglytuffitout @InspectorCaracal that’s assuming good faith. the more I hear about UBER, the more it sounds like its goal is not making money, but spending millions to achieve something else.

@zensaiyuki @wigglytuffitout I... don't think you realized that Harp was just saying that Silicon Valley is a bunch of racists and compared them to a historically famous human rights violation and mass death in early Western industry? there's no good faith being assumed there <.<

@InspectorCaracal @wigglytuffitout i meant “good faith” not as in “for the good of humanity” but “actually working towards what they say they are working towards”, in this case, profit for shareholders, which in itself is not an obvious benefit to humanity, but i think they are not even doing that.

@InspectorCaracal @wigglytuffitout like, i think destroying baked in institutions that are heavily regulated is the point, and not a consequence of ignorance or naivety

@zensaiyuki @InspectorCaracal yeah i can see that actually

i think it's plausible they may have started out guided by egotistical ignorance, but so many people pouring money into them at this point... don't have those excuses

@wigglytuffitout @InspectorCaracal though, I have witnessed the exact egotistical ignorance you’re talking about, which is why using it as cover could be so effective, because these people actually exist, and would empathise with the feigned naivety and even work to defend it, all while the erosion of the built-up defenses of the marginalised continues

@zensaiyuki @InspectorCaracal at some point, all one can really do is say "i'm not sure if these techbros are criminals or just criminally stupid" while shaking your head and figuring out how to square up against them LOL

i know it drives Family Member Who Actually Works In AI mildly crazy because a lot of the mindset re: AI development comes out of sheer laziness and being more in love with using "neural network 'cos it sounds neat" than actually knowing what they're doing lmao

@wigglytuffitout @InspectorCaracal for me, I am feeling like I need to use neural networks for random shit just to stay employed and have relevant buzzwordy skills. though I don’t get the feeling that my coworkers are driven by the same existential terror, and they genuinely think neural networks are neat. and they ARE neat of course! but there’s a line between toy and tool that I don’t think everyone has a firm grasp of

@zensaiyuki @InspectorCaracal the best use for neural networks right now is "we have made computers smart enough to be stupid in ways that are HILARIOUS" and i am so glad many people on the internet have realized this and set them loose on things like 'make me some my little pony names' or 'make me some recipe titles' which make me laugh until i cry a little

it is for e-slapstick, not real work, lmao

@wigglytuffitout @InspectorCaracal i agree. though I did see one comment at some random programmery space who was grouchy about that stuff because he saw them as examples of failed programs and didn’t understand why people were showing off failures. I hope that’s not a common viewpoint.


@wigglytuffitout @InspectorCaracal sort of a corporatised techbro version of the “useful idiot” phenomenon we saw explode on social media in 2015 if you’re getting my reference.

@InspectorCaracal anyway it's about this point in Said Family Member's soapbox rant about how the majority of AI programming these days is shit that i claim the soapbox for myself, and start yelling about how this is what you get when you divorce STEM completely from the liberal arts and raise graduates on the idea that silly social sciences where you have empathy for other people or look at history and its effects are completely not needed when you have the ability to crunch numbers

@InspectorCaracal fortunately said family member completely agrees with me on this part so it then becomes a soapbox duet, which, as we all know, is integral to all good family Anger Bonding (tm) moments

@wigglytuffitout Even worse, there already is a hostile AI ruling us all: corporations. https://omniorthogonal.blogspot.com/2013/02/hostile-ai-youre-soaking-in-it.html

The algorithms corporations implement will be just extensions of themselves.

@wigglytuffitout I admire how many good good words you have managed to put in one single toot and thank you for it.

@wigglytuffitout Sorry bro, your hair length is 17.3 cm. Our ML algorithm classifies everyone whose hair length is between 17.2 cm and 17.45 cm as a goat. There is nothing we can do (and we don't care about doing anything, you people are less than 0.01% of our consumer base!), you have to either cut your hair or deal with not being able to register on our site.
Also, if you do end up cutting it, you will be considered a potential abuser because the system already stopped your evil deeds once.
:blobcatsweat:

@wigglytuffitout

tbh I think a lot of the sociocultural issues with AI/ML could be addressed by calling it by its real name, "curve fitting" (see the toy sketch after this list):

- less likely to make anyone think "oh it's a machine so it's objective"

- more obvious that it's just gonna propagate the preconceptions of what it's fed

- invokes that earnestly tedious numerical methods course you once fell asleep in, rather than a thundering herd of techbros
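
(to make the "it's curve fitting" point literal, a minimal sketch using nothing fancier than np.polyfit: the "model" is just a line drawn through whatever points it was handed, and any skew in those points comes straight back out as confident-looking numbers.)

```python
# Minimal "AI": fit a curve to the data you were handed, then use it to
# make "predictions". It cannot know anything the data didn't contain.
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0, 10, 50)
# Pretend these are historical outcomes, noise, quirks, bias and all.
y = 3.0 * x + 7.0 + rng.normal(0, 2.0, x.size)

coeffs = np.polyfit(x, y, deg=1)        # "training"
model = np.poly1d(coeffs)               # "the model"

print("learned curve: y = %.2f * x + %.2f" % (coeffs[0], coeffs[1]))
print("'prediction' at x=12:", round(model(12.0), 2))
# That's the whole trick: interpolate/extrapolate the curve you fit to the
# past. If the past was skewed, the curve is skewed, objective-looking
# numbers and all.
```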

ethical AI is pretty accessible; just about anyone can read and understand the research

which makes it all the more of a mystery why you would have these misconceptions