"Nothing About Us Without Us", only it still is without them most of the time
Disabled Fediverse users demand participation in accessibility discussions, but there are no such discussions in the first place, and they themselves don't even seem to be available to give accessibility feedback
"Nothing about us without us" is the catchphrase used by disabled accessibility activists who are trying to get everyone to get accessibility right. It means that non-disabled people should stop assuming what disabled people need. Instead, they should listen to what disabled people say they need and then give them what they need.
Just like accessibility in the digital realm in general, this is not only targeted at professional Web or UI developers. It is targeted at any and all social media users just as well.
However, this would be a great deal easier if it weren't still "without them" all the time.
Lack of necessary feedback
Alt-text and image descriptions are one example and one major issue. How are we, the sighted Fediverse users, supposed to know what blind or visually-impaired users really need and where they need it if we never get any feedback? And we never get any feedback, especially not from blind or visually-impaired users.
Granted, only sighted users can call us out for an AI-generated alt-text that's complete rubbish because non-sighted users can't compare the alt-text with the image.
But non-sighted users could tell us whether they're sufficiently informed or not. They could tell us whether they're satisfied with an image description mentioning that something is there, or whether they need to be told what this something looks like. They could tell us which information in an image description is useful to them, which isn't, and what they'd suggest to improve its usefulness.
They could tell us whether certain information that's in the alt-text right now should rather go elsewhere, like into the post. They could tell us whether extra information needed to understand a post or an image should be given right in the post that contains the image or through an external link. They could tell us whether they need more explanation on a certain topic displayed in an image, or whether there is too much explanation that they don't need. (Of course, they should take into consideration that some of us do not have a 500-character limit.)
Instead, we, the sighted users who are expected to describe our images, receive no feedback for our image descriptions at all. We're expected to know exactly what blind or visually-impaired users need, and we're expected to know it right off the bat without being told so by blind or visually-impaired users. It should be crystal-clear how this is impossible.
What are we supposed to do instead? Send all our image posts directly to one or two dozen people who we know are blind and ask for feedback? I'm pretty sure I'm not the only one who considers this very bad style, especially in the long run, not to mention that it's no guarantee of feedback either.
So with no feedback, all we can do is guess what blind or visually-impaired users need.
Common alt-text guides are not helpful
Now you might wonder why all this is supposed to be such a big problem. After all, there are so many alt-text guides out there on the Web that tell us how to do it.
Yes, but here in the Fediverse, they're all half-useless.
The vast majority of them are written for static Web sites, either scientific or technological or commercial. Some include blogs; again, either scientific or technological or commercial. The moment they start relying on captions and HTML code, you know you can toss them because they translate to almost nothing in the Fediverse.
What few alt-text guides are written for social media are written for the huge corporate American silos: 𝕏, Facebook, Instagram, LinkedIn. They do not translate to the Fediverse, which has its own rules and cultures, not to mention much higher character limits, if any.
Yes, there are one or two guides on how to write alt-text in the Fediverse. But they're always about Mastodon, only Mastodon and nothing but Mastodon. They're written for Mastodon's limitations, especially only 500 characters being available in the post itself versus a whopping 1,500 characters being available in the alt-text. And they're written with Mastodon's culture in mind which, in turn, is influenced by Mastodon's limitations.
In the Fediverse outside Mastodon, you have many more possibilities. You have thousands of characters to use up in your post, or you don't have any character limit to worry about at all. Granted, you don't have all the means at hand that you'd have on a static HTML Web site. Even the few dozen (streams) users who can use HTML in social media posts don't have the same influence on the layout of their posts as Web designers have on Web sites. Still, you aren't bound to Mastodon's self-imposed limitations.
And yet, those Mastodon alt-text guides tell you that you have to squeeze all information into the alt-text as if you didn't have any room in the post. Which, unlike most Mastodon users, you do have.
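To make the split concrete: vanilla Mastodon gives a status 500 characters but a media description 1,500, which is why its guides push everything into the alt-text. Here is a minimal sketch of how an image with alt-text is posted through Mastodon's REST API; the instance URL and token are placeholders, and the length checks simply mirror vanilla Mastodon's default limits.

```python
import requests

INSTANCE = "https://mastodon.example"  # placeholder instance
HEADERS = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}  # placeholder token

POST_LIMIT = 500       # vanilla Mastodon status limit
ALT_TEXT_LIMIT = 1500  # vanilla Mastodon media description limit

def post_image_with_alt(image_path: str, status_text: str, alt_text: str) -> dict:
    """Upload an image with a description (alt-text), then attach it to a status."""
    if len(status_text) > POST_LIMIT:
        raise ValueError("Status exceeds vanilla Mastodon's 500-character limit.")
    if len(alt_text) > ALT_TEXT_LIMIT:
        raise ValueError("Alt-text exceeds vanilla Mastodon's 1,500-character limit.")

    # Step 1: upload the media file together with its description.
    with open(image_path, "rb") as f:
        media = requests.post(
            f"{INSTANCE}/api/v2/media",
            headers=HEADERS,
            files={"file": f},
            data={"description": alt_text},
        ).json()

    # Step 2: publish the status with the uploaded media attached.
    return requests.post(
        f"{INSTANCE}/api/v1/statuses",
        headers=HEADERS,
        data={"status": status_text, "media_ids[]": media["id"]},
    ).json()
```

On Friendica, Hubzilla or (streams), the same description could simply go into the post body instead, because there is no 500-character ceiling to work around.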
It certainly doesn't help that the Fediverse's entire accessibility culture comes from Mastodon, concentrates on Mastodon and only takes Mastodon into consideration with all its limitations. Apparently, if you describe an image for the blind and the visually-impaired, you must describe everything in the alt-text. After all, according to the keepers of accessibility in the Fediverse, how could you possibly describe anything in a post with a 500-character limit?
In addition, every guide only covers its own specific standard cases. For example, an image description guide for static scientific Web sites only covers images that are typical for static scientific Web sites: graphs, flowcharts, maybe a portrait picture. Everything else is an edge case that the guide does not cover.
There are even pictures that are edge cases for all guides, covered insufficiently or not at all by any of them. When I post an image, it's practically always such an edge case, and I can only guess what might be the right way to describe it.
Discussing Fediverse accessibility is necessary...
Even individual feedback on image descriptions, media descriptions, transcripts etc. is of limited use. If one user gives you feedback, you know what this one user needs. But you do not know what the general public with disabilities needs, and that is what actually matters. Another user might give you wholly different feedback; two different blind users are likely to give you two different assessments of the same image description.
What is needed so direly is open discussion about accessibility in the Fediverse. People gathering together, talking about accessibility, exchanging experiences, exchanging ideas, exchanging knowledge that others don't have. People with various disabilities and special requirements in the Fediverse need to join this discussion because "nothing about them without them", right? After all, it is about them.
And people from outside of Mastodon need to join, too. They are needed to give insights into what can be done on Pleroma and Akkoma, on Misskey, Firefish, Iceshrimp, Sharkey and Catodon, on Friendica, Hubzilla and (streams), on Lemmy, Mbin, PieFed and Sublinks and everywhere else. They are needed to combat the rampant Mastodon-centrism and keep reminding the Mastodon users that the Fediverse is more than Mastodon. They are needed to explain that the Fediverse outside of Mastodon offers many more possibilities than Mastodon that can be used for accessibility. They are needed for solutions to be found that are not bound to Mastodon's restrictions. And they need to learn that accessibility in the Fediverse exists in the first place, because it's currently pretty much a topic that only exists on Mastodon.
There are so many things I'd personally like to be discussed and ideally brought to a consensus of sorts. For example:
- Explaining things in the alt-text versus explaining things in the post versus linking to external sites for explanations.
  - The first is the established Mastodon standard, but any information exclusively available in the alt-text is inaccessible to people who can't access alt-text, including due to physical disabilities.
  - The second is the most accessible, but it inflates the post, and it breaks with several Mastodon principles (probably over 500 characters, explanation not in the alt-text).
  - The third is the easiest way, but it's inconvenient because image and explanation are in different places.
- What if an image needs a very long and very detailed visual description, considering the nature of the image and the expected audience?
  - Describe the image only in the post (inflates the post, no image description in the alt-text, breaks with Mastodon principles, impossible on vanilla Mastodon)?
  - Describe it externally and link to the description (no image description anywhere near the image, image description separated from the image, breaks with Mastodon principles, requires an external space to upload the description)?
  - Only give a description that's short enough for the alt-text regardless (insufficient description)?
  - Refrain from posting the image altogether?
- Seeing as all text in an image must always be transcribed verbatim, what if text is unreadable for some reason, but whoever posts the image can source the text and transcribe it regardless?
  - Must it be transcribed because that's what the rule says?
  - Must it be transcribed so that even sighted people know what's written there?
  - Must it not be transcribed?
...but it's nigh-impossible
Alas, this won't happen. Ever. It won't happen because there is no place in the Fediverse where it could sensibly happen.
Now you might wonder what gives me that idea. Can't this just be done on Mastodon?
No, it can't. Yes, most participants would be on Mastodon. And Mastodon users who don't know anything else keep saying that Mastodon is sooo good for discussions.
But seriously, if you've experienced anything in the Fediverse that isn't purist microblogging like Mastodon, you've long since come to the realisation that when it comes to discussions with a certain number of participants, Mastodon is utter rubbish. It has no concept of conversations whatsoever. It's great as a soapbox, but it's outright horrible at holding a discussion together. How are you supposed to have a meaningful discussion with 30 people if you burn through most of your 500-character limit mentioning the other 29?
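As a back-of-the-envelope illustration, with an assumed average username length (vanilla Mastodon only counts the @username part of a mention towards the limit, not the domain):

```python
# Rough illustration of mention overhead in a 30-person Mastodon thread.
# The username length is an assumption; real handles vary widely.
POST_LIMIT = 500
participants = 29                       # everyone else in the thread
avg_mention = len("@someusername") + 1  # 13 characters plus a trailing space

overhead = participants * avg_mention
remaining = POST_LIMIT - overhead
print(f"{overhead} characters of mentions, {remaining} left for the actual reply.")
# With these assumptions: 406 characters of mentions, 94 left for the actual reply.
```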
Also, Mastodon has another disadvantage: Almost all participants will be on Mastodon themselves. Most of them will not know anything about the Fediverse outside Mastodon. At least some will not even know that the Fediverse is more than just Mastodon. And that one poor sap from Friendica will constantly try to remind people that the Fediverse is not only Mastodon, but he'll be ignored because he doesn't always mention all participants in the thread. Mentioning everyone is not necessary on Friendica itself, so he isn't used to it; on Mastodon, however, it's pretty much essential.
Speaking of Friendica, it'd actually be the ideal place in the Fediverse for such discussions because users from almost all over the place could participate. Interaction between Mastodon users and Friendica forums is proven to work very well. A Friendica forum can be moderated, unlike a Guppe group. And posts and comments reach all members of a Friendica forum without mass-mentioning.
The difficulty here would be to get it going in the first place. Ideally, the forum would be set up and run by an experienced Friendica user. But accessibility is not nearly as much of an issue on Friendica as it is on Mastodon, so the difficult part would be to find someone who sees the point in running a forum about it at all. A Mastodon user who does see the point, on the other hand, would have to get used to something that is a whole lot different from Mastodon while serving as forum admin/mod.
Lastly, there is the Threadiverse, Lemmy first and foremost. But Lemmy has its own issues. For starters, its federation with the Fediverse outside the Threadiverse is barely there and not quite reliable, and the devs don't seem to be interested in non-Threadiverse federation. So everyone interested in the topic would need a Lemmy account, and many refuse to make a second Fediverse account for whatever purpose.
If it's on Lemmy, it will naturally attract Lemmy natives. But the vast majority of these have come from Reddit straight to Lemmy. Just like most Mastodon users know next to nothing about the Fediverse outside Mastodon, most Lemmy users know next to nothing about the Fediverse outside Lemmy. I am on Lemmy, and I've actually run into that wall. After all, they barely interact with the Fediverse outside Lemmy. As accessibility isn't an issue on Lemmy either, they know nothing about accessibility on top of knowing nothing about most of the Fediverse.
So instead of having meaningful discussions, you'll spend most of the time educating Lemmy users about the Fediverse outside Lemmy, about Mastodon culture, about accessibility and about why all this should even matter to people who aren't professional Web devs. And yes, you'll have to do it again and again for each newcomer who couldn't be bothered to read up on any of this in older threads.
In fact, I'm not even sure if any of the Threadiverse projects are accessible to blind or visually-impaired users in the first place.
Lastly, I've got some doubts that discussing accessibility in the Fediverse would even be possible if there were a perfectly appropriate place for it. I mean, this Fediverse neither gives advice on accessibility within itself beyond linking to the same useless guides over and over, nor does it give feedback on accessibility measures such as image descriptions.
People, disabled or not, seem to want perfect accessibility. But nobody wants to help others improve their contributions to accessibility in any way. It's easier and more convenient to expect things to happen by themselves.
AI superiority at describing images, not so alleged?
Could it be that AI can image-describe circles even around me? And that the only ones whom my image descriptions satisfy are Mastodon's alt-text police?
I think I've reached a point at which I describe my images only for the alt-text police. At which I keep ramping up my efforts, increasing my description quality and declaring all my previous image descriptions obsolete and hopelessly outdated, only to have an edge over those who try hard to enforce quality image descriptions all over the Fediverse and who might stumble upon one of my image posts in their federated timelines by chance.
For blind or visually-impaired people, my image descriptions ought to fall under "better than nothing" at best and even that only if they have the patience to have them read out in their entirety. But even my short descriptions in the alt-text are too long already, often surpassing the 1,000-character mark. And they're often devoid of text transcripts due to lack of space.
My full descriptions that go into the post are probably mostly ignored, also because nobody on Mastodon actually expects an image description anywhere that isn't alt-text. But on top of that, they're even longer. Five-digit character counts, image descriptions longer than dozens of Mastodon toots, are my standard. Necessarily so, because I can't see how the kind of images I post could be described sufficiently in significantly fewer characters.
But it isn't only about the length. It also seems to be about quality. As @Robert Kingett, blind points out in this Mastodon post and this blog post linked in the same Mastodon post, blind or visually-impaired people generally prefer AI-written image descriptions over human-written image descriptions. Human-written image descriptions lack effort, they lack details, they lack just about everything. AI descriptions, in comparison, are highly detailed and informative. And I guess when they talk about human-written image descriptions, they mean all of them.
I can upgrade my description style as often as I want. I can try to make it more and more inclusive by changing the way I describe colours or dimensions as much as I want. I can spend days describing one image, explaining it, researching necessary details for the description and explanation. But from a blind or visually-impaired user's point of view, AI can apparently write circles around that in every way.
AI can apparently describe and even explain my own images about an absolutely extreme niche topic more accurately and in greater detail than I can. In all details that I describe and explain, with no exception, plus even more on top of that.
If I take two days to describe an image in over 60,000 characters, it's still sub-standard in terms of quality, informativeness and level of detail. AI only takes a few seconds to generate a few hundred characters which apparently describe and explain the self-same image at a higher quality, more informatively and at a higher level of detail. It may even be able to not only identify where exactly an image was created, even if that place is only a few days old, but also explain that location to someone who doesn't know anything about virtual worlds in no more than 100 characters or so.
Whenever I have to describe an image, I always have to throw someone under the bus. I can't perfectly satisfy everyone at the same time. My detailed image descriptions are too long for many people, be it people with a short attention span, be it people with little time. But if I shortened them dramatically, I'd have to cut information to the disadvantage of not only neurodiverse people who need things explained in great detail, but also blind or visually-impaired users who want to explore a new and previously unknown world through only that one image, just like sighted people can let their eyes wander around the image.
Apparently, AI is fully capable of actually perfectly satisfying everyone all the same at the same time because it can convey more information with only a few hundred characters.
Sure, AI makes mistakes. But apparently, AI still makes fewer mistakes than I do.
#AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #AI #AIVsHuman #HumanVsAI
Mike Macgirvin stopped maintaining the streams repository
August 31st: Mike Macgirvin has resigned from maintaining the streams repository and let the community take over
@Fediverse News
Today, August 31st, 2024, @Mike Macgirvin 🖥️ officially resigned from maintaining the streams repository. He won't shut it down, and he said he will add contributors if anyone wants to contribute, but he won't actively work on it anymore.
No link to the source because the source is private.
The streams repository is the home of an intentionally nameless, brandless, public-domain Fediverse server application which its community semi-officially refers to as (streams). Its features include, but are not limited to:
- federation via Nomad, Zot6 (Hubzilla) and ActivityPub (optionally, but on by default)
- multiple independent channels/identities on the same account/login
- nomadic identity
- virtually unlimited character count
- full blogging-level text formatting using BBcode, Markdown and/or HTML, including in-line images
- advanced, extensive permission controls for privacy and security second to none in the Fediverse, customisable for each individual contact with 15 permission settings
- optional individual word filters per contact
- optional automatic reader-side content warning generator
- support for flagging images as sensitive for Mastodon
- built-in file space with WebDAV connectivity per channel (see the sketch after this list)
- built-in, headless CardDAV and CalDAV servers per channel
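Since WebDAV is a plain HTTP extension, the per-channel file space mentioned above can be scripted with any HTTP client. Here is a minimal sketch that lists a channel's files via a PROPFIND request; the hub URL, channel name and credentials are made-up placeholders, and the /dav/ path follows this software family's convention, so check your own hub before relying on it.

```python
import requests
from xml.etree import ElementTree

# Hypothetical placeholder values for illustration only.
HUB = "https://streams.example"
CHANNEL = "mychannel"
AUTH = ("mychannel", "app-password")

# WebDAV uses the PROPFIND method to enumerate a collection;
# "Depth: 1" asks only for the collection's direct children.
response = requests.request(
    "PROPFIND",
    f"{HUB}/dav/{CHANNEL}",
    auth=AUTH,
    headers={"Depth": "1"},
)
response.raise_for_status()

# The reply is a DAV "multistatus" XML document; print each entry's path.
tree = ElementTree.fromstring(response.content)
for href in tree.iter("{DAV:}href"):
    print(href.text)
```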
(streams) is the latest stable release in a family of server applications that started in 2010 with a decentralised Facebook alternative named Mistpark, now known as Friendica.
The evolution in the family started in 2011 when Mike invented the concept of nomadic identity, the simultaneous existence of the same Fediverse identity with the same content on multiple server instances, to help overcome the issue of server instances shutting down and their users losing everything. It was first implemented in a Friendica fork named Red in 2012 which was turned into Hubzilla in 2015.
The streams repository came into existence in October 2021, with a whole tree of eight forks between it and Hubzilla since 2018. Just a few weeks ago, Mike forked it into a new project named Forte, about which almost nothing is known yet and which is probably very experimental, seeing as Mike has lately been working on implementing nomadic identity in ActivityPub.
There hasn't been any statement about Forte's future either, but Mike is known to pass stable, daily-driver projects on to the community when he starts something new, such as Friendica in 2012 when he started working on Red, and Hubzilla in 2018 when he started working on Osada and Zap. And as small as (streams) may be, sitting in roughly the same niche as Friendica and Hubzilla, it has become a stable daily driver for a couple dozen users or so.
(streams) won't go away, but its development will slow down dramatically because new maintainers have yet to be found, and until now, Mike has pretty much done all the work on it. It will probably take longer for the dust to fully settle after (streams) has introduced portable objects as per FEP-ef61 on its way to nomadic identity via ActivityPub. Also, @silverpill, the maintainer of Mitra, which is currently the only other Fediverse software to implement FEP-ef61, will have more people to talk to.
PBR and the shitstorm against the new Firestorm
How the new version of the Firestorm viewer with support for Physically-Based Rendering enrages its users
As was to be expected, the Second Life community is completely exploding over PBR, now that the single most popular viewer has rolled out its first version with Physically-Based Rendering. And I don't mean exploding with cheer.
The announcement thread on Reddit shows people with Nvidia GeForce RTX cards who suddenly have slideshow-like FPS for some reason. I must admit this makes me wonder because I get fairly good results out of a Radeon RX 590, which is even less high-end. Under Linux. With an open-source driver from the Debian testing repos. In OpenSim, granted, but that shouldn't make so much of a difference unless Second Life now surrounds you with 2K PBR content everywhere.
Another Reddit thread is about how Second Life users take their frustration out on Firestorm's volunteer in-world support in Second Life. The volunteers catch all the anger that should rather be directed at Linden Lab.
Despite what some users experience with dedicated video hardware that partly isn't even six years old, it's apparent that many of those who complain about the PBR viewers being slow are on toasters that shouldn't have been used for anything 3-D in the first place, especially not for virtual worlds full of amateur-made, unoptimised content. Worlds in which optimisation counts as quality degradation, and ARC is a measure of good looks.
Among Firestorm users alone, over 10% are on mobile hardware that's at least ten years old, which usually means on-board graphics. In fact, people are still whining over 32-bit Windows support being axed because their only (or most powerful) computer is so ancient that it still boots 32-bit Windows. And yet, they use it for 3-D virtual worlds because they haven't been able to afford any computer, new or used, in a decade and a half.
So the sharp drop in FPS came not only from a new rendering engine, but also from turning on features that were off before and then ripping out the switches: advanced lighting model, bump maps and normal maps, transparent water, shaders, light sources other than the Sun, the Moon and ambient light...
The irony is that Linden Lab and the Firestorm team decided to turn the Advanced Lighting Model, including normal maps and specular maps, permanently on to make normal maps more convenient and more attractive for content creators. I mean, what creators currently do is make their content for potato computers on which all graphical bells and whistles have to be turned off, including normal maps. So how do you make small surface details if you can't rely on normal maps? You build them into the mesh itself, making it vastly more complex in the process and cutting into everyone's FPS.
It's also apparent that nobody could be bothered to read up on PBR. Many seem utterly surprised by the FPS drop. They're used to Firestorm becoming slower and slower with every release, but not by such degrees. They seem not to have read that this would happen.
The complaints about how stuff suddenly looks different arise for the same reason: People didn't read up on PBR. They seem to think that PBR is ALM with mirrors instead of an entirely new lighting and rendering model. However, PBR also includes High Dynamic Range, and at least in Second Life, both forward rendering and the old ALM have such a low dynamic range that they render everything in pastel tones, so content creators had to tint everything in garishly cartoonish colours to compensate.
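To illustrate the dynamic-range point with a generic sketch (this is the general principle, not Second Life's actual shader code): a low-dynamic-range pipeline clamps bright values, flattening highlights unless creators over-tint their content, while an HDR pipeline keeps the full range and compresses it with a tone-mapping operator at the end.

```python
# Generic illustration of LDR clamping versus HDR tone mapping.
# Not Second Life's actual pipeline, just the general principle.

def ldr_clamp(radiance: float) -> float:
    """Old-style low dynamic range: anything brighter than 1.0 is lost."""
    return min(radiance, 1.0)

def reinhard_tonemap(radiance: float) -> float:
    """Simple HDR tone mapping: compresses the full range into 0..1
    while keeping bright values distinguishable from each other."""
    return radiance / (1.0 + radiance)

for r in (0.25, 1.0, 4.0, 16.0):
    print(f"radiance {r:5.2f} -> LDR {ldr_clamp(r):.2f}, HDR {reinhard_tonemap(r):.2f}")
# A 4x and a 16x highlight look identical after clamping (both 1.00),
# but stay distinguishable after tone mapping (0.80 vs 0.94).
```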
What's happening is largely exactly the same as whenever Linden Lab introduces something new: Conservative users reject it because they reject all changes that actually change stuff and can't be turned off. I guess the outcry when viewers dropped the mesh option and permanently forced everyone to see mesh must have been as big as the outcry when mesh was introduced.
At this point, it really is a pity that there's no real OpenSim forum on which people from all grids can congregate and discuss things. OpenSimWorld has built-in forums, but hardly anyone knows about them because nobody ever pays attention to the left-hand sidebar.
If there was a central place to discuss OpenSim matters, I guess the outcry against the new Firestorm would come a bit more slowly, but be even more extreme, and even more people would be opposed to it and PBR in general. Including those who say they'll never upgrade to Firestorm 7 while still using Firestorm 6.5.6 or 6.4.21 or so.
There would be four reasons for this. One, while the Second Life community is already so old that it needs newbies who stick around to make up for users passing away, the OpenSim community manages to be even older on average, and that means even more conservative. Even more than Second Life users, OpenSim users are likely to want OpenSim back the way it was when they joined. There are still people in OpenSim who vocally oppose mesh. And it isn't too unusual in OpenSim for users who have been around long enough to have avatars on a 2010 or even 2007 level, whereas you risk being ostracised in Second Life if your mesh body is older than 12 months.
Two, OpenSim is basically Second Life for those who can't afford Second Life. You can get land for dirt cheap, and you can get e.g. a Maitreya LaraX, LeLutka EvoX heads and Doux EvoX skins and hair for absolutely free. The latter isn't legal, but still. So it isn't only the cheapskates and the anti-capitalists who flock into OpenSim, but especially those who genuinely don't have the money to have a decent Second Life experience. And if they don't have money for that, it's highly unlikely that they have money for a decent computer. In other words, many of those who use the Firestorm Viewer on mobile hardware from before 2015 are probably OpenSim users. OpenSim has to have an even higher number of toasters per 1,000 users than Second Life.
Three, and this comes on top: Second Life has a three-versions rule. Only the three most recent versions of any given viewer are allowed to connect. OpenSim doesn't have such a rule. Certain grids or sims might limit which viewers their visitors are allowed to use, mostly to keep copybotters out, but in general, such a rule doesn't exist. You can use OpenSim with Firestorm 5.x if you want to, at least if you're living in a bubble on a grid that still runs OpenSim 0.8.2.1, where next to nobody has a mesh body and nobody uses BoM. Absolutely having to upgrade your viewer is not part of OpenSim's culture. Instead, it's perfectly normal to keep using old viewers if you reject certain new features, e.g. EEP.
And four, most OpenSim users aren't even used to seeing Blinn-Phong, i.e. the old normal map and specular map model. Most of the time when content is illegally exported from Second Life and put back together, normal maps and specular maps are omitted. Doing so saves time that can be used to churn out more stuff, which probably also explains why some importers don't even add the missing AVsitter back into furniture unless it's sex furniture. Besides, so many OpenSim users are on toasters with normal maps and specular maps off anyway that it isn't worth adding what next to nobody can see. It's really mostly only a few of OpenSim's own original creators who add normal maps and specular maps, but their creations aren't available on the big popular freebie sims where everyone picks up their stuff nowadays.
So criticism of PBR in OpenSim would be mixed with a lot of "change is bad" attitude. Expect people to demand that OpenSim's development be split from Second Life's and that OpenSim finally get its own viewer, just so that OpenSim doesn't have to take over all the "new crap" that Linden Lab whips up. Expect some to say this should have happened long ago, up to old-timers declaring that already the introduction of mesh was a mistake and basically wanting OpenSim to look like Second Life did in 2008 for all eternity, because that's what they're used to, and because that's what they think their toasters can handle, having all but forgotten what it's like to be surrounded by thousands of prims.
Things that'll happen at OpenSim parties
If you're a frequent partygoer in OpenSim, you're likely to know at least some of these
- In general, people who are genuinely completely clueless about what kind of event they teleport to. They haven't read any announcements, not in any group, not on an in-world billboard with built-in teleporter, not on OpenSimWorld. They might not even know that the website OpenSimWorld exists. They just took an OpenSimWorld beacon, which to them is nothing but a teleporter, and picked one of the top three sims with the most avatars on them.
- The location has a dress code. The event has the same dress code. But the only ones following the dress code rather than coming as they are: the DJ, the sim owners and maybe one avatar who loves to show off their stylistic flexibility or their audacity to actually go nude when nudity is encouraged.
- Happens mostly at events that start at 9 PM UTC or earlier: In the middle of the party, someone entirely new shows up and greets everyone in their home language, which is not the language that's spoken at the party. For example, an Italian who speaks neither German nor English at a German party. They stay for maybe ten minutes before teleporting out again, disappointed because people didn't switch from German to Italian, nor did everyone immediately put on a translator.
- Variant: There are enough regulars who don't speak the official event language for everyone to have to wear two or three translators, cluttering the local chat with translations of everything, including chat spam gestures.
- Someone teleports onto the party sim, stands around for five to ten minutes and teleports back out again. That's because they didn't land directly at the party. As they don't see the party right in front of their virtual nose, they can't figure out where it is. Sometimes not even when the party is inside a building, and they landed right outside the entrance door.
- The bigger the event, the more people can't hold back their chat spam gestures. Like, if there are a dozen people or fewer, nobody chat-spams, and you can actually chat. If there are two dozen people or more, every other guest chat-spams, rendering the local chat useless as a chat.
- There's a DJ desk on the sim. There's a poseball behind the DJ desk, or the DJ desk has a built-in sit script with DJ animations. But the DJ's avatar is dancing on the dance floor.
- Voice moderation, and the DJ forgets to turn the mic off afterwards.
- Voice moderation, and the DJ fails to turn the mic on before saying something. Bonus points for turning it on after saying something.
- The DJ announces a fairly long piece of music, six minutes or more. And a toilet break.
- Events with a musical theme, but song wishes that have absolutely nothing to do with the theme. That's often the case not although, but precisely because the wisher attends these events regularly. They never read any announcements because they don't have to; they know for certain where and when this event is going to be. So they don't even know where the events are announced, as they never look it up. Besides, they know nothing about musical genres or eras or such, and they don't care. And so they wish for a classic rock song, a 1990s eurodance tune or some disco-fox schlager in the middle of a reggae party at which they're the only avatar who isn't dressed in Rasta colours and smoking virtual pot.
  Bonus points for the DJ actually playing that song.
- First-time visitors who are completely irritated upon finding out that there is such a thing as musical themes at DJ events.
- First-time visitors who are completely irritated upon finding out that "musical theme" doesn't always mean EDM because they find themselves in the middle of something like a krautrock set.
- First-time visitors who are completely irritated upon finding out that a "musical theme" doesn't even necessarily have to be one musical genre, but it can also be a topic that's covered by lots of different genres. Songs about love, songs about the colour black, songs about vehicles, songs about other musicians, songs produced by Alan Parsons, originals of covers that are vastly more well-known than the originals, cool recent indie releases on Bandcamp, songs from 1970s' Italy etc.
- The DJ plays the album version of something of which people only know the single/radio edit. People silently judge the DJ as being lazy and having deliberately stretched the set with overly long songs.
- The DJ plays the single/radio version of something that has a much longer album version. The music nerds judge the DJ as being incompetent.
- People leave during the last few minutes of the event, during the last song. And the last song has been announced as such.
- The DJ leaves during the last song because their job is done. Bonus points if they don't have an immediately following DJ set elsewhere to teleport to.
- New people arrive during the last five minutes of the event. That's usually Americans who come to a European party. First they're surprised that the event is about to end. Then they're surprised to learn that there are events in OpenSim not run by Americans.
- The event is over, but after ten minutes or even longer, there are still one or two avatars dancing. Either their users went AFK and aren't following the stream closely enough to notice that it has switched or stopped entirely. Or they tried to teleport out but failed, leaving behind a ghost avatar that remains until they either come back into the grid or the sim is restarted. Or they've fallen asleep.