twinning
for the last decade, we've wanted someone to introject many of us. To want to absorb pieces of us. For the longest time nobody did. We thought nobody was interested. We absorbed others, but they simply weren't interested in absorbing us. Then @12 helped us find the missing piece: very few people have the auditory ability needed to introject many headmates.
Being with 12 is the first time we've been able to properly merge with someone like this
twinning
@madewokherd @28of47 being able to pick out small auditory details allows us to notice the tiny differences in how each headmate runs the body's voice
twinning
I don't think we've ever interacted with another system by voice enough to notice. While there are differences in how we use the body's voice, we haven't really tried to track them, as mindvoice is so much easier to distinguish.
It seems easier for us to detect differences in others' "written" voices than by listening.
twinning
@madewokherd @12 we’ve always differentiated partners' headmates via voice. Until the last few months, we had no idea this ability was so rare
twinning
@madewokherd @12 would highly recommend trying!
twinning
@madewokherd @28of47 nooo! voice samples shared on fedi, discord voice calls, that kind of thing
twinning
@madewokherd @28of47 huh!
twinning
@12 @28of47 Reflecting on it further, I think the main reason for this is that voice communication requires us to do language processing (both understanding and generation) in real time, when we'd prefer to have control over the pacing. Although we process it as audio either way, when the audio is synthesized by our mind based on the written word, we have more control, and less real-time attention is needed.
re: twinning
@madewokherd @12 the realtime aspect of audio can be exhausting. We've found that disabling visual processing makes it easier, though we're still headed towards a preference for reading text via screenreader, for the benefits of audio without the realtime concern
re: twinning
@28of47 @madewokherd huh, so 12 has even more hardware offload for audio than 28 does?
re: twinning
@12 @madewokherd usually, the realtime nature of audio isn't a problem, but we definitely do feel it on low spoons days
re: twinning
@28of47 @madewokherd interesting. even for 12, on a low spoons day, audio being real-time is just completely unimportant; it's such a minor load on userspace, and that's really only to do a few secondary language synthesis tasks
re: twinning
@12 @madewokherd 12 has no problem following the speech of others even on a bad, low spoons day?
re: twinning
@28of47 @madewokherd yup!
re: twinning
@12 @madewokherd fascinating. While in Aotearoa, towards the end as spoons were fading, we were having to ask creatures to repeat things more and more often
re: twinning
@12 @28of47 That makes a lot of sense. It's really fun seeing you learn about each other from this conversation.
I think it does require active attention for us to comprehend speech, although we are able to mentally buffer and replay a limited amount of it if we miss it the first time around.
We have noticed that we sometimes "background count" without language processing, and without needing to replay audio for it, but we're not sure how accurate that counting is.