"AI Self-preferencing in Algorithmic Hiring: Empirical Evidence and Insights"

arxiv.org/abs/2509.00462

"Using a large-scale controlled resume correspondence experiment, we find that LLMs consistently prefer resumes generated by themselves over those written by humans or produced by alternative models, even when content quality is controlled. The bias against human-written resumes is particularly substantial, with self-preference bias ranging from 67% to 82% across major commercial and open-source models."

If you really want to turn to a chatbot for therapy, might I recommend Dr. Eliza Madslip, an AI whose datacenter size, speed, and power demands were eclipsed by a single TRS-80 Model I way back in 1977?

Chat with Eliza here: anthay.github.io/eliza.html

Eliza's history: elizagen.org/
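For the curious: ELIZA's whole trick was keyword pattern-matching plus pronoun "reflection" of the user's own words. A minimal sketch in that spirit (the rules and phrasings here are illustrative, not taken from the original DOCTOR script):

```python
import re

# Illustrative ELIZA-style responder: match a keyword pattern,
# swap pronouns in the captured phrase, and echo it back as a question.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(phrase: str) -> str:
    # Swap first-person words for second-person ones, word by word.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in phrase.split())

def respond(text: str) -> str:
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please, go on."  # fallback when no keyword matches

print(respond("I feel my code hates me"))
# → Why do you feel your code hates you?
```

That handful of lines is, roughly, the entire "therapist" — which is the joke.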

@maxleibman I think this is partly right. But there’s another reason: the same reason some people keep eating stuff that is bad for them - it tastes good.

Chat bots feed people the psychic equivalent of high fructose corn syrup.

People don’t turn to chatbots for therapy because chatbots are good at therapy.

They do it for the same reason that people who are starving try to eat tree bark.

:trash: ✨ The command

rm -vfr /

means remove under visual flight rules, landing on runway 2.

@soatok At my very first e-commerce job in 1997, the main router connected to the server had a sign over it:

“WHEN IN TROUBLE, OR IN DOUBT,
PULL THE PURPLE CABLE OUT!”

rm -fr actually means remove for real

Also, how many 18-year-old PCs can run a modern OS without becoming a Ship of Theseus?


The fact that they used that vast consent-violation infrastructure to push something as clownshoes balls-out bullshit as AI makes that infrastructure *visible*. But it doesn't go away just because we can't see it.

I ask that folks hold on to the feeling of violation, and recognize how much that feeling still applies even when those violations are harder to see.


The AI bubble is only possible because tech as an industry has a huge infrastructure for pushing boundaries, eroding consent, and forcing functionality on users. That infrastructure isn't going away just because they decided not to use it for sparkling anus buttons any more.

It's going to still be there, in terms of prompting you to reduce privacy and increase ads, to accept locked-in platforms instead of open protocols. It's still there in terms of age verification.


Having now seen two different ways that the AI bubble seems to be popping, I want to say something about what comes next, so far as I can see and affect the world. Maybe the bubble isn't actually popping yet, in which case this is premature, but still.

Once the AI bubble is gone, and there are no more sparkling anus buttons everywhere trying to dark pattern you into using AI, I want to ask you to hold on to the feeling you have now, with every button being an erosion of consent.

BTW, I'm not at all upset at how long it takes to install anything from source. I'm used to it, & it's a conscious personal choice. My first Linux box was an '01 PC running Sorcerer, a now-dead source-based distro.

I'm just amazed at the extreme time disparity between building Firefox and anything Chrome-based.


And if you've read that far and you're in IT, I have one word for you: unionize. ✊


Welp. I've been using GitLab for over a decade and have been pretty happy with it. Deployed and maintained several instances, some personal, some for small hobby orgs, some for work.

But it looks like it is time to ditch GitLab for good:

> Software will be built by machines, directed by people. AI is the substrate on which future software gets built. Agents will plan, code, review, deploy, and repair.
about.gitlab.com/blog/gitlab-a

#GitLab #AI #FuckAI
