twinning 

For the last decade, we've wanted someone to introject many of us, to absorb pieces of us. For the longest time nobody did; we thought nobody was interested. We absorbed others, but they simply weren't interested in absorbing us. Then @12 helped us find the missing piece: very few people have the auditory ability needed to introject many headmates.

Being with 12 is the first time we've been able to properly merge with someone like this.

twinning 

@28of47 @12 [Entangled] Wait, why is that a matter of auditory ability? And you're saying that auditory ability is unusual?

I don't see how it'd help, when one can't really hear the mindvoices of another system's members.

twinning 

@madewokherd @28of47 being able to pick out small auditory details lets us notice the tiny differences in how each headmate runs the body's voice

twinning 

@12 @28of47 Huh.

I don't think we've ever interacted with another system by voice enough to notice. While there are differences in how we use the body's voice, we haven't really tried to track them, as mindvoice is so much easier to distinguish.

It seems easier for us to detect differences in others' "written" voices than by listening.

twinning 

@madewokherd @12 we've always differentiated partners' headmates via voice. Until the last few months, we had no idea this ability was so rare.

twinning 

@28of47 @12 Now I'm curious whether we could do this. Audio processing does seem to be a strength for us, but our preferred mode of communication outside the system is text, not voice, which I think means we've never really had the opportunity.

twinning 

@28of47 @12 Wouldn't that require sharing a living space with another system, at least temporarily? I'm not sure how we'd manage that. -Menderbot

twinning 

@madewokherd @28of47 nooo! voice samples shared on fedi, discord voice calls, that kind of thing

twinning 

@12 @28of47 I don't think we'd enjoy that, with our preferred mode of communication being text. -Menderbot

twinning 

@12 @28of47 Reflecting on it further, I think the main reason for this is that voice communication requires us to do language processing (both comprehension and generation) in real time, when we'd prefer to control the pacing. Although we process it as audio either way, when the audio is synthesized by our mind from written words, we have more control, and less real-time attention is needed.

twinning 

@madewokherd @28of47 huh! So you have to do language processing and stuff in software, so to speak? You don't have hardware offload for that?

twinning 

@12 @28of47 Hm, that might explain some things, but I'm not sure how to determine whether that's the case.
