Pinned post

Welcome to Computer Fairies, where the ✨sparkles✨ are lively fairy dust, not lifeless AI slop.

Pinned post

Don't blindly believe everything you read online, tempting as that is. Do your homework first. This video series will help you do it well: youtube.com/playlist?list=PL8d

Pinned post

Remember to pop your filter bubble every so often. If you think you're not in one, you're trapped in one and in for a rude awakening.

i read the words "hex editor" wrong and for a brief moment i imagined a world a lot more fun than this one.

every conversation about the potential usefulness of AI, divorced from ethical concerns, is just this dril tweet

i am so tired of "ethical concerns aside" being a phrase i see every single time someone tries to defend the use of LLMs. fuck that! ethical concerns front and fucking center! it is very revealing that tech is currently in such a state that the quiet part can be said out loud without any pushback.

Also, congrats for finding a great way to use "galumphing galoots" in a sentence.

So how were they caught, in spite of their former employer's huge mistakes? "BLUNDER TWIN POWERS, ACTIVATE!"

Ars Technica, 2026-05-14, "Fired hacker twins forget to end Teams recording, capture own crimes": arstechnica.com/tech-policy/20

As a general rule, if you're going to fire someone with any sort of computer access, revoke their credentials before telling them they're fired. But what do government contractors know?

Ars Technica, 2026-05-12, "Twin brothers wipe 96 gov't databases minutes after being fired": arstechnica.com/tech-policy/20

"AI Self-preferencing in Algorithmic Hiring: Empirical Evidence and Insights"

arxiv.org/abs/2509.00462

"Using a large-scale controlled resume correspondence experiment, we find that LLMs consistently prefer resumes generated by themselves over those written by humans or produced by alternative models, even when content quality is controlled. The bias against human-written resumes is particularly substantial, with self-preference bias ranging from 67% to 82% across major commercial and open-source models."

If you really want to turn to a chatbot for therapy, might I recommend Dr. Eliza Madslip, an AI whose datacenter size, speed, and power demands were eclipsed by a single TRS-80 Model I way back in 1977?

Chat with Eliza here: anthay.github.io/eliza.html

Eliza's history: elizagen.org/
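For flavor: the whole ELIZA trick fits in a few lines of pattern matching and pronoun reflection. A minimal sketch — these toy rules are mine, not Weizenbaum's original 1966 DOCTOR script:

```python
import re

# Swap pronouns so the user's fragment can be echoed back at them.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Each rule: a keyword pattern plus a canned reply template.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(text: str) -> str:
    for pattern, template in RULES:
        m = pattern.match(text.strip())
        if m:
            return template.format(reflect(m.group(1)))
    return "Please, go on."  # default when no rule matches
```

That's it — no datacenter required. `respond("I am tired of chatbots")` reflects the fragment back as "How long have you been tired of chatbots?"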

@maxleibman I think this is partly right. But there’s another reason: the same reason some people keep eating stuff that is bad for them - it tastes good.

Chat bots feed people the psychic equivalent of high fructose corn syrup.

People don’t turn to chatbots for therapy because chatbots are good at therapy.

They do it for the same reason that people who are starving try to eat tree bark.

:trash:✨ The command

rm -vfr /

means remove under visual flight rules, landing on runway 2.

Computer Fairies

Computer Fairies is a Mastodon instance that aims to be as queer, friendly and furry as possible. We welcome all kinds of computer fairies!