twinning 

for the last decade, we've wanted someone to introject many of us. To want to absorb pieces of us. For the longest time nobody did. We thought nobody was interested. We absorbed others, but they simply weren't interested in absorbing us. Then @12 helped us find the missing piece: very few people have the auditory ability needed to introject many headmates.

Being with 12 is the first time we've been able to properly merge with someone like this

twinning 

@28of47 @12 [Entangled] Wait, why is that a matter of auditory ability? And you're saying that auditory ability is unusual?

I don't see how it'd help, when one can't really hear the mindvoices of another system's members.

twinning 

@madewokherd @28of47 being able to pick out small auditory details lets us notice the tiny differences in how each headmate runs the body's voice

twinning 

@12 @28of47 Huh.

I don't think we've ever interacted with another system by voice enough to notice. While there are differences in how we use the body's voice, we haven't really tried to track it, as mindvoice is so much easier to distinguish.

It seems easier for us to detect differences in others' "written" voices than listening.

twinning 

@madewokherd @12 we've always differentiated partners' headmates via voice. Until the last few months, we had no idea this ability was so rare

twinning 

@28of47 @12 Now I'm curious whether we could do this. Audio processing does seem to be strong with us, but our preference for communication outside the system is text, not voice, which I think means we've never really had the opportunity.

twinning 

@28of47 @12 Wouldn't that require sharing a living space with another system, at least temporarily? I'm not sure how we'd manage that. -Menderbot

twinning 

@madewokherd @28of47 nooo! voice samples shared on fedi, discord voice calls, that kind of thing

twinning 

@12 @28of47 I don't think we'd enjoy that, with our preferred mode of communication being text. -Menderbot

twinning 

@12 @28of47 Reflecting on it further, I think the main reason for this is that voice communication requires us to do language processing (both understanding and generation) in real time, when we'd prefer to have control over the pacing. Although we process it as audio either way, when the audio is synthesized by our mind based on written word, we have more control, and less real time attention is needed.

twinning 

@madewokherd @28of47 huh! So you have to do language processing and stuff in software, so to speak? You don't have hardware offload for that?

twinning 

@12 @28of47 Hm, that might explain some things, but I'm not sure how to determine whether it's the case.

re: twinning 

@madewokherd @12 the realtime aspect of audio can be exhausting. We've found that disabling visual processing makes it easier, though we're still headed towards a preference for reading text via screenreader, for the benefits of audio without the realtime concern

re: twinning 

@28of47 @madewokherd huh, so 12 has even more hardware offload for audio than 28 does?

re: twinning 

@12 @madewokherd usually, the realtime nature of audio isn't a problem, but we definitely do feel it on low spoons days

re: twinning 

@28of47 @madewokherd interesting. even for 12, on a low spoons day, audio being real-time is just completely unimportant; it's such a minor load on userspace, and that's really only to do a few secondary language synthesis tasks

re: twinning 

@12 @madewokherd 12 has no problem following the speech of others even on a bad, low spoons day?

re: twinning 

@12 @madewokherd fascinating. While in Aotearoa, towards the end as spoons were fading, we were having to ask creatures to repeat things more and more often

re: twinning 

@28of47 @madewokherd we had to synthesize a hardware offload for word processing initially, so as to keep a buffer of what's going on around us as an abuse survival strategy (i.e., always listen to what our mother is saying, we might be able to use it), and that just came along with us from then on

re: twinning 

@12 @28of47 That makes a lot of sense. It's really fun seeing you learn about each other from this conversation.

I think it does require active attention for us to comprehend speech, although we are able to mentally buffer and replay a limited amount of it if we miss it the first time around.

We have noticed that we sometimes "background count" without language processing and without the need to replay audio for that, but we're not sure how accurate it is.

re: twinning 

@12 @madewokherd although, we don't have much data yet about what audio loads look like on low spoons days when we aren't also running visual processing at the same time

re: twinning 

@28of47 @madewokherd while following speech of others, unless we are actively falling asleep, load averages stay flat '0.00,0.00,0.00'

re: twinning 

@12 @madewokherd on an average day, the realtime piece of audio isn't a problem. 28 does have historical problems with paying attention. We're testing whether disabling visual processing makes paying attention easier. So far, the answer is yes
