WHAT PEOPLE THINK ETHICAL ISSUES IN AI ARE: wow.... we're creating... new life........
WHAT ETHICAL ISSUES IN AI ACTUALLY ARE: techbros worshiping the almighty algorithm, not caring to look at what bad patterns the machines are picking up (racism, sexism, etc) and how to avert them, and overreliance on neural networks meaning that said algorithms are treated as magical black boxes where nobody wants to (or can, really) point out exactly how the equation works (and why it may be faulty)
basically if people would just carve off 5% of the sci-fi panic and anxiety and spend that energy being legitimately concerned about how if you tell a computer "learn the rules of this game", the rules it will come back to report to you will be dripping with systemic inequality that must be directly confronted instead of excused or praised as infallible because "a passionless computer did it"
we'd all be much better off
@InspectorCaracal the worst part is that for a lot of these social rules, the technology is being developed by people so sheltered and so lazy about it that they do not question if something's a bad choice
they don't know enough to look at, say, AI handling assessing risk of mortgages, to view the results and go "wait a second fellas, i think this is just redlining with extra steps". they go "oh, the computer has shown us the truth!" and are pleased to have their own racism reaffirmed.
@wigglytuffitout like with the falling over thing, the programmers just assumed that the solution would inherently involve a mobile entity that traversed the distance linearly
but that wasn't actually put into the rules, and the giant tower thing falling over actually fit the rules
it forces people to evaluate the fact that the rules they think are the rules are not necessarily all of their rules
@InspectorCaracal @wigglytuffitout the falling over thing was unexpected but it doesn’t come from any special cleverness in the AI, it comes from poorly specified rules. rules create an abstract space that a “solution” is a successful navigation of. a computer just hungrily navigated every corner, including surprising corners we don’t think are in the space, because humans, especially adults, apply 1000 additional rules we’re not consciously aware of.
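(sidebar for anyone who wants the "falling over" thing made concrete: here's a toy sketch of it. everything here is invented for illustration, it's not any real locomotion benchmark. if the fitness function only rewards "final distance of the body's topmost point from the start" and never says "locomotion", a tall tower that just tips over is a perfectly legal solution.)

```python
# Toy illustration of specification gaming: the fitness function only
# rewards "final distance of the body's topmost point from the start",
# never "locomotion", so falling over is a legal solution.

def fitness(body):
    # body is (height, distance_walked); the topmost point ends up at
    # distance_walked if the body walks upright, or ~height if it topples.
    height, distance_walked = body
    return max(distance_walked, height)

walker = (1.0, 1.5)   # short body that honestly walks 1.5 units
tower = (10.0, 0.0)   # tall body that never walks, just tips over

best = max([walker, tower], key=fitness)
assert best == tower  # the rules, as written, prefer the tower
```

the gap was never in the search, it was in the rules: nobody wrote down "and also stay upright", because adult humans assume that one for free.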
@zensaiyuki @wigglytuffitout ... I was literally using that as an example of using AI to unveil underlying assumptions about rules in the humans writing the code....
@InspectorCaracal @wigglytuffitout I got that, and it can be useful for finding legit solutions that a human would never think of. another great example is a circuit-designing AI that saved a part by not including an oscillator, instead using electromagnetic interference from neighboring circuits for that function.
but the drawback is that these unconventional solutions don’t always reveal the gaps in our reasoning.
@zensaiyuki @wigglytuffitout I feel like you thought I was making some sort of vast encompassing point about the ultimate ideal use of AI in problem solving, but I really was *only* talking about how I want to see it fuck with people's preconceptions more often. <.< Not because I think it is The Answer, I just want it to happen more.
@zensaiyuki @wigglytuffitout The problem is that people hate being wrong so they'll avoid mechanisms that show them up as being wrong and seek out mechanisms that confirm them as being right.
Hence the fondness for neural nets that are really just regurgitating what we told them.
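(and the "regurgitating what we told them" part doesn't even need an actual neural net to demonstrate. all the data below is invented, but the point holds for any learner that just minimizes error on historical labels: fit it to biased decisions and it faithfully reproduces the bias.)

```python
# A trivially "trained" model that mirrors biased historical data.
# No neural net needed: any learner minimizing error on these labels
# converges to the same answer. (All data here is invented.)
from collections import defaultdict

history = [
    # (neighborhood, approved) -- past decisions shaped by redlining
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", False), ("B", True),
]

# "Training": tally per-neighborhood approval rates, which is exactly
# the error-minimizing predictor for this data.
totals = defaultdict(lambda: [0, 0])
for hood, approved in history:
    totals[hood][0] += approved  # True counts as 1
    totals[hood][1] += 1

def predict(hood):
    approved, n = totals[hood]
    return approved / n >= 0.5  # approve if the historical majority did

# The model "discovers" nothing: it reproduces the redline exactly.
assert predict("A") is True
assert predict("B") is False
```

the computer didn't show anyone "the truth", it showed them their own input data wearing a lab coat.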
@InspectorCaracal @wigglytuffitout i agree that the unconventional solutions are amusing, and we need more of them. having actually tried to build AI to do exactly that, myself, a lot of the magic of why those things happen is dispelled. a human could find these solutions with a pen and paper. it would just take a lot longer.
@InspectorCaracal @zensaiyuki basically we need to rely on AI less for important shit, and more for shit like "hey AI, here is a two-legged critter, figure out the most efficient way for it to walk on the moon", then we put googly eyes on the results, especially all the midway attempts it tried on the way to the best solution
@InspectorCaracal @wigglytuffitout no I wasn’t thinking that deeply about it. You were just talking about something I know about and wanted to feel like a part of a conversation. sorry for any misunderstandings
@InspectorCaracal @wigglytuffitout but it means a computer will cycle through solutions for society that are so taboo, human programmers won’t even think to forbid them.
@zensaiyuki @InspectorCaracal and in what's happening now, a lot of times society HAS seen fit to forbid bad solutions - it's just that the programmers doing this think that things like "looking at the current laws regulating equality in housing" is a waste of time because their box of numbers is more enlightened than that and anyway they're not interested in learning why such laws exist, so
WHOOPS IT'S RACIST REDLINING WITH EXTRA STEPS
@zensaiyuki @InspectorCaracal see also: silicon valley's love affair with "disrupting" instead of saying "okay, we're not doing things like this. i wonder why? maybe i should research that. it seems like we have a lot of regulations around this thing. i wonder why they exist. maybe i should read a book about it."
i'm just waiting until the new Apple (tm) iTriangleShirtwaistFactory is announced tbh
@wigglytuffitout @InspectorCaracal that’s assuming good faith. the more I hear about Uber, the more it sounds like its goal is not making money, but spending millions to achieve something else.
@zensaiyuki @wigglytuffitout I... don't think you realized that Harp was just saying that Silicon Valley is a bunch of racists and compared them to a historically famous human rights violation and mass death in early Western industry? there's no good faith being assumed there <.<
@InspectorCaracal @wigglytuffitout i meant “good faith” not as in “for the good of humanity” but “actually working towards what they say they are working towards”, in this case, profit for shareholders, which in itself is not an obvious benefit to humanity- but i think they are not even doing that.
@InspectorCaracal @wigglytuffitout like, i think destroying baked in institutions that are heavily regulated is the point, and not a consequence of ignorance or naivety
@zensaiyuki @InspectorCaracal yeah i can see that actually
i think it's plausible they may have started out guided by egotistical ignorance, but so many people pouring money into them at this point... don't have those excuses
@wigglytuffitout @InspectorCaracal or feigning ignorance is cover
@wigglytuffitout @InspectorCaracal though, I have witnessed the exact egotistical ignorance you’re talking about, which is why using it as cover could be so effective, because these people actually exist, and would empathise with the feigned naivety and even work to defend it, all while the erosion of built up defense of the marginalised continues
@zensaiyuki @InspectorCaracal at some point, all one can really do is say "i'm not sure if these techbros are criminals or just criminally stupid" while shaking your head and figuring out how to square up against them LOL
i know it drives Family Member Who Actually Works In AI mildly crazy because a lot of the mindset re: AI development comes out of sheer laziness and being more in love with wanting to use "neural network 'cos it sounds neat" than actually knowing what they're doing lmao
@wigglytuffitout @InspectorCaracal for me, I am feeling like I need to use neural networks for random shit just to stay employed and have relevant buzzwordy skills. though I don’t get the feeling that my coworkers are driven by the same existential terror, and they genuinely think neural networks are neat. and they ARE neat of course! but there’s a line between toy and tool that I don’t think everyone has a firm grasp of
@zensaiyuki @InspectorCaracal the best use for neural networks right now is "we have made computers smart enough to be stupid in ways that are HILARIOUS" and i am so glad many people on the internet have realized this and set them loose on things like 'make me some my little pony names' or 'make me some recipe titles' which make me laugh until i cry a little
it is for e-slapstick, not real work, lmao
@wigglytuffitout @InspectorCaracal i agree. though I did see one comment in some random programmery space from someone who was grouchy about that stuff because he saw them as examples of failed programs and didn’t understand why people were showing off failures. I hope that’s not a common viewpoint.
@zensaiyuki @InspectorCaracal that is the cry of a joyless man who cannot laugh at Crockpot Cold Water
@wigglytuffitout @InspectorCaracal sort of a corporatised techbro version of the “useful idiot” phenomenon we saw explode on social media in 2015, if you’re getting my reference.
@InspectorCaracal anyway it's about this point in Said Family Member's soapbox rant about how the majority of AI programming these days is shit that i claim the soapbox for myself, and start yelling about how this is what you get when you divorce STEM completely from the liberal arts and raise graduates on the idea that silly social sciences where you have empathy for other people or look at history and its effects are completely not needed when you have the ability to crunch numbers
@wigglytuffitout YEAH!!!
@InspectorCaracal fortunately said family member completely agrees with me on this part so it then becomes a soapbox duet, which, as we all know, is integral to all good family Anger Bonding (tm) moments
@wigglytuffitout see that's why I don't want it to be a neural net like what I was talking about, that's literally just designed to recreate the things you fed it
i want a problem-solving AI that you give the rules and the goal and let it figure out the how, then you look at the how
and then everyone will look at the how and it will be totally fucking bizarre and they'll be like "what the fuck" and it'll be because they assumed A and B and C were just inherent