#long

2025-06-18
@Johannes Ernst The first step is already done:

Forte, @Mike Macgirvin's most recent project from the same family that started with Friendica 15 years ago, is the first and only stable Fediverse server application that uses ActivityPub for nomadic identity. Nomadic identity itself is a concept created by Mike in 2011 and first implemented by him in 2012 in a very early version of Hubzilla, which he called Red back then.

This means that you can have the exact same channel/identity (think Mastodon account, but without its own login) on multiple server instances with one account each. If one server goes down, you still have at least one clone (depending on how many clones you make).
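As a toy illustration of the concept (my own sketch, not actual Zot/Nomad or ActivityPub mechanics; the server names are made up): the same channel exists as synchronised clones on several servers, and losing one server doesn't lose the identity.

```python
# Toy model of nomadic identity (illustration only, not real protocol
# semantics). One channel, several clones; the identity stays reachable
# as long as at least one clone's server is still up.

class NomadicChannel:
    def __init__(self, handle, servers):
        self.handle = handle
        self.clones = {server: True for server in servers}  # server -> up?

    def server_down(self, server):
        self.clones[server] = False

    def reachable_clones(self):
        return [s for s, up in self.clones.items() if up]

channel = NomadicChannel("alice", ["hub-a.example", "hub-b.example"])
channel.server_down("hub-a.example")
print(channel.reachable_clones())  # the identity is still reachable via hub-b
```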

@silverpill is working on implementing this on Mitra. It's still only available in development versions, though. The difference is that Mike had already created a whole bunch of Fediverse server applications with nomadic identity since 2012; he "only" had to port nomadic identity from the Zot or Nomad protocol to ActivityPub. Silverpill, on the other hand, has to implement nomadic identity in something that was built upon ActivityPub with no nomadic identity.

Both recognise each other's nomadic identities. (For comparison: Mastodon doesn't recognise any nomadic identities. It takes the two instances of this Hubzilla channel of mine for two fully separate identities.) But that's all for now.

The next step, and that's way into the future, would be to be able to clone from Forte to Mitra or from Mitra to Forte. This would give you one identity on at least two server instances of two separate Fediverse server applications.

The obvious downside is that you won't be able to take everything with you everywhere when you clone to other server types. For example, if you clone a Forte channel to Mitra, you won't be able to take your permissions settings, your permission roles, your friend zoom settings, the contents of your cloud storage, your CalDAV calendars and your CardDAV addressbook with you over to Mitra. That's simply because Mitra doesn't have any of these features.

What you envision is another step further. And that's the adoption of nomadic identity via ActivityPub and ideally also OpenWebAuth magic single sign-on, another one of Mike's creations, by all Fediverse server applications. And I mean all of them. Including extremely minimalist stuff like snac2 or GoToSocial. Including stuff that isn't actively being worked on like Plume. Including stuff that's dead, but that still has running servers, like Calckey, Firefish or /kbin. And including Mastodon which stubbornly refuses to make itself more compatible with the "competition" in the Fediverse and adopt technologies created by anyone else in the Fediverse, even more so if that someone is Mike Macgirvin.

In other words, this won't happen. Mastodon would rather turn itself into its own federated walled garden by becoming incompatible with all other ActivityPub implementations.

What many Mastodon users who know nothing about decentralisation wish for is another step further. And that's to create one account on one server instance of one Fediverse server software, no matter which, and then to have full-blown user permissions on any instance of any Fediverse server software.

Like, create one account on mastodon.social, go to a Pixelfed instance, post pictures Instagram-style, go to a PeerTube instance, upload videos, go to a WriteFreely instance, blog away, go to a Hubzilla hub, build a webpage, all with only your mastodon.social login.

Of course, this is impossible to do. This would mean that if you create an account on one Fediverse server instance, it would have to be cloned to all 30,000+ servers in the whole Fediverse instantaneously. And if you start your own instance, it would have to trigger 30,000+ servers to clone their tens of millions of accounts and channels over to your instance.

Usually, when I explain this to people who want to use everything with one login, they tell me that they don't want to use every server in the Fediverse. No, but they want to use any server in the Fediverse. Any one of the 30,000+.

And they want to use it immediately. Like, go there, use it with full-blown local user permissions right away, no delay.

Now you may argue that their account or channel could be cloned to that server when they visit it for the first time. Drive-by cloning, so to speak. Still, it won't happen. Cloning takes time. I myself have cloned enough Hubzilla and (streams) channels over the years to be able to estimate just how long it takes. And none of my channels has ever contained tens of thousands of posts and thousands of pictures.

Besides, drive-by cloning would inflate Fediverse instances senselessly, not to mention bog them down with extra network traffic. Whenever you visit a Fediverse server instance for whichever reason (like, you want to look at a post on Friendica or Hubzilla to see what it looks like without being botched by Mastodon), your account or channel would automagically be cloned to that server instance. Another account (and channel, if necessary) on that server instance, another deluge of posts and files flooding into the database, and that clone would have to be synced with your 600 other previous drive-by clones on the 600 Fediverse server instances you've visited before.
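To put rough numbers on this (every figure below is my own illustrative assumption, not a measurement), a quick back-of-envelope calculation:

```python
# Back-of-envelope cost of drive-by cloning. All figures are assumptions
# for illustration, not measured values.

posts_per_account = 10_000   # assumed size of a well-used account
avg_post_size_kb = 5         # assumed: text plus metadata, no media
clones = 600                 # drive-by clones, as in the example above

per_clone_mb = posts_per_account * avg_post_size_kb / 1024
total_gb = per_clone_mb * clones / 1024

print(f"~{per_clone_mb:.0f} MB per clone, ~{total_gb:.0f} GB duplicated in total")
# ...and every new post would then have to be synced to all 600 clones.
```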

Extra nefarious: Some "websites" that have to do with Hubzilla or a certain aspect of Hubzilla are parts of Hubzilla channels themselves. This includes the official Hubzilla website. If you visited them, you'd create a drive-by clone on the Hubzilla hub which hosts that website.

So if someone set up a single-user Hubzilla hub with their personal channel and a website channel on it, and the website is interesting enough, and 10,000 Fediverse users visit it, it'll end up bigger than the biggest current Hubzilla hub within days. It'll have 10,001 accounts, namely the owner's account with two channels and 10,000 accounts with drive-by clones, automatically created by the 10,000 external visitors.

But this will remain utopian, and not only because it's technologically all but impossible and utterly infeasible. It also requires a mechanism for one Fediverse server to recognise logins on other Fediverse servers. You know, like OpenWebAuth. If you want your Mastodon account to drive-by clone itself, Mastodon will have to implement OpenWebAuth, and I mean fully implement it.

There actually is a pull request in Mastodon's GitHub code repository that would have implemented client-side OpenWebAuth support (= Hubzilla, (streams) and Forte would recognise Mastodon logins). That isn't even full support, which would also include login recognition on Mastodon's own side. This pull request has been open for two years. It was never merged. And it probably never will be.

This means that the Mastodon devs have practically rejected OpenWebAuth as a feature to implement. Won't come. Ever. Not even half of it.

And this should say everything about the chances that Mastodon will ever implement nomadic identity.

CC: @william.maggos @Richard MacManus @Tim Chambers @Ben Pate 🤘🏻

#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #Mitra #Hubzilla #Streams #(streams) #Forte #OpenWebAuth #SingleSignOn #NomadicIdentity
2025-06-18
@Chris ABRAHAM Do you absolutely depend on being on Mastodon (as opposed to the rest of the Fediverse)?

Because there is Fediverse server software (as in federated with Mastodon, as in you can follow Mastodon users from there, and Mastodon users can follow you when you're there) that offers way more than 500 characters. It may also offer other features which you may want Mastodon to have, or which are even completely unimaginable from the point of view of someone who knows the Fediverse as only Mastodon. Some examples:

  • Misskey: 3,000 characters, hard-coded
  • Misskey forks (Iceshrimp, Sharkey etc.): thousands of characters, configurable by admin
  • Pleroma, Akkoma: 5,000 characters, configurable by admin
  • Friendica: 200,000 characters, probably hard-coded
  • Hubzilla: 16,777,215 characters, database field size limit (this is where I'm replying to you from right now)
  • (streams), Forte: over 24 million characters

CC: @Stefan Bohacek

#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #Mastodon #Pleroma #Akkoma #Misskey #Forkey #Forkeys #Iceshrimp #Sharkey #Friendica #Hubzilla #Streams #(streams) #Forte #CharacterLimit #CharacterLimits
2025-06-17
@caos @Hiker @» Aakerbeere 🏖️ :mastodon: Lemmy is essentially a Reddit clone, i.e. primarily designed for someone to post an image, a link or whatever else, which is then discussed. Just as Reddit has topic-specific subreddits, Lemmy has communities.

Lemmy isn't the only server software in the Fediverse designed this way, though. There's also
  • /kbin (written with a slash and four lowercase letters), which already existed when Lemmy really took off, which set out to do quite a few things better than Lemmy, which can also do microblogging on the side, but which perished from its own "success"
  • Mbin, a /kbin fork that is actively developed
  • PieFed, which of the four (i.e. including Lemmy) seems the most sophisticated
  • Sublinks, which to this day has not been officially released

Compared to Lemmy, they have two advantages.

For one, they're better at connecting to anything other than the "Threadiverse" (Lemmy, /kbin, Mbin, PieFed, hopefully soon Sublinks, too). Lemmy can really only federate properly with Reddit clones.

For another, their developers don't include a hardline Stalinist and a hardline Maoist who both relativise or even deny Soviet and Chinese atrocities.

Lemmy's only advantage is that Lemmyverse lists all known Lemmy communities, but no /kbin or Mbin magazines and no PieFed communities either.

Culturally, Lemmy is completely different from Mastodon. It is not Mastodon with topic-based groups. 99% of all "Lemmings" come straight from Reddit and know Reddit very well, but usually neither Twitter nor Mastodon.

Twitter users who fled to Mastodon mostly found the Twitter culture gruesome and toxic. So on Mastodon, they "invented" a whole new culture which a) is inspired by Twitter, b) but is friendlier, and c) has outright displaced the Mastodon culture that existed until mid-2022.

Redditors who fled to Lemmy, in contrast, did not find the Reddit culture terrible. They essentially carried the Reddit culture over to Lemmy 1:1, where it changed only a little because Lemmy isn't a centralised monolith. Lemmy culture has nothing, absolutely nothing, to do with Mastodon culture. Instead, it is Reddit culture plus sniping at other Lemmy instances.

In practice, this means: if you post or comment on Lemmy and behave too much unlike a Redditor while doing so, you'll get funny looks.

Posting from Mastodon to Lemmy


Let me spell out once more how to post to Lemmy with Mastodon's limited means. Two things matter here: for one, a title is absolutely essential on Lemmy, whereas Mastodon barely knows what titles are. For another, the Lemmy community absolutely has to be mentioned.

If you want to post from Mastodon to Lemmy, it has to look like this:

Title
(blank line, recommended for clarity and possibly also for technical reasons)
@Lemmy community
(blank line, recommended for clarity and possibly also for technical reasons)
Post text

If you want to crosspost to Lemmy, Friendica, Hubzilla, (streams), Forte and Guppe, it works like this:

Title
(blank line, recommended for clarity and possibly also for technical reasons)
@Lemmy community @Friendica/Hubzilla/(streams)/Forte group 1 @Friendica/Hubzilla/(streams)/Forte group 2 @Friendica/Hubzilla/(streams)/Forte group 3 ... @Guppe group 1 @Guppe group 2 @Guppe group 3
(blank line, recommended for clarity and possibly also for technical reasons)
Post text

That is, you mention:
  • first exactly one Lemmy community
  • then Friendica groups, Hubzilla forums, (streams) groups and/or Forte groups (as far as I know, you can crosspost across several of them at least from Mastodon, and if you mention them with @ on Mastodon, they will share your post anyway)
  • then Guppe groups

Also important: all mentions must be on one and the same line! So don't arrange them one below the other on separate lines for whatever reason.

Just as important: no hashtags! You can technically post to Lemmy with hashtags. But Lemmy doesn't know hashtags because Reddit doesn't know them. For one thing, Lemmy doesn't need hashtags because it has communities. For another, posting hashtags makes a bad impression on all the Redditors on Lemmy. Posts and comments on Lemmy can also be downvoted (upvote on Lemmy = fave on Mastodon; downvote = the opposite, which Mastodon doesn't have at all, but Lemmy very much does), and I dare say there are some who downvote posts with hashtags just because of the hashtags.
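The posting format above can be sketched as a small helper (my own illustration, not an official tool; all handles are hypothetical placeholders):

```python
# Illustrative sketch: assemble a Mastodon post for crossposting to
# Lemmy and friends. Handles below are placeholders, not real groups.

def build_crosspost(title, mentions, body):
    """Title, blank line, ALL mentions on one single line, blank line, body."""
    return f"{title}\n\n{' '.join(mentions)}\n\n{body}"

text = build_crosspost(
    "A title is essential on Lemmy",
    ["@community@lemmy.example", "@group@friendica.example"],
    "Post text, deliberately without any hashtags.",
)
print(text)
```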

Commenting from Mastodon to Lemmy


One more thing is worth mentioning here: unlike Mastodon, Lemmy knows threaded conversations. And they work without mentions. You don't need to mention Lemmings for them to notice that you've commented in a thread. It's completely different there from Mastodon.

This also means: if you comment in a Lemmy thread, be sure to delete the mention that Mastodon generates automatically! It stands out just as negatively there as hashtags do. It just isn't done there.

When commenting, mentioning the Lemmy community should also be unnecessary. Your comment will arrive either way.

#Long #LongPost #CWLong #CWLongPost #LangerPost #CWLangerPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #Mastodon #Friendica #Hubzilla #Streams #(streams) #Forte #Threadiverse #Lemmy #/kbin #Mbin #PieFed #Sublinks
2025-06-13
@nihilistic_capybara I don't know what they expect. Also, I hardly ever get any feedback for my image descriptions unless I explicitly ask someone for it.

But I've actually asked blind or visually-impaired users a few times, and on the few occasions when they actually answered, they said that this amount of description is okay.

After all, the limitations in navigating alt-text with a screen reader only apply to actual alt-text "underneath" an image. They do not apply to image descriptions in the post which can be navigated like the rest of the post text.

#Long #LongPost #CWLong #CWLongPost #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta
2025-06-13
@nihilistic_capybara
The description you have given is a meter long and frankly (again please forgive my ignorance I know nothing about the blind and how they navigate the web) contains too much details to the point where using a screen reader to listen to this turns into a very boring podcast.

Someone somewhere out there might be interested in all these details.

Allow me to elaborate: My original pictures are renderings from very obscure 3-D virtual worlds. You may find them boring. Many others may find them boring.

But someone somewhere out there might be interested. Intrigued. Excited even.

They've put high hopes into "the metaverse" as in 3-D virtual worlds. All they've read about so far is a) Meta Horizon failing and b) otherwise only announcements, often with AI-generated images as illustrations. Just before they saw my image, they thought that 3-D virtual worlds were dead.

But then they see my image. Not an AI picture, but an actual rendering from inside an actual 3-D virtual world! One that exists right now! It has users! It's alive! I mean, it has to have users because I have to be one to show images from inside these worlds.

They're on the edge of their seat in excitement.

Do you think they only look at what they think is important in the image? Do you think they only look at what I think is important in the image?

Hell, no! They'll go on a journey through a whole new universe! Or at least what little of it they can see through my image. In other words, they take in all the big and small details.

If they're sighted.

Now, here is where accessibility and inclusion come into play. What do accessibility and inclusion mean? They mean that someone who is disabled must have all the same chances to do all the same things and experience all the same things in all the same ways as someone without their disability. Not giving them these chances is ableist.

Okay, so what if that someone is blind? In this case, accessibility and inclusion mean that this someone must have the very same opportunity to take in all the big and small details as someone who has perfect eyesight.

But if I only describe my images in 200 characters, they can't do that. Where are they supposed to get the necessary information to experience my image like someone sighted?

They can only get this information if I give it to them. If I describe my image in all details.

And that's why I describe my original images in all details.

And stuff like the text not being legible. I don't know how you read that text cause I am unable to read it as well.

Again: I don't look at the image. I look at the real thing. The world itself. Like so:

  • I start my Firestorm Viewer.
  • I log one of my avatars in.
  • I teleport to the place where I've rendered the image.
  • If I want to read a sign, I move the camera closer to the sign. If necessary, reaaaaaally close. (I can move the camera along three axes and rotate it around two axes independently from the avatar.)
  • What's a speck of 4x3 pixels in the image unfolds before me as a 1024x768-pixel texture with three lines of text on it. In fact, I could move the camera so close to at least some surfaces that I could clearly see the individual pixels on the textures if anti-aliasing is off.
  • Not only can I easily transcribe that text, I can often even identify or at least describe the typeface.

This gives me superpowers in comparison to those who describe images only by looking at the images. For example, if there's something standing in front of a sign, partially obstructing it, I can look around that obstacle.

Imagine you're outside, taking a photo with your phone, and you want to post it on Mastodon. There's a poster on a wall somewhere in that image with text on it, but it's so small in the image that you can't read it.

Now you can say the text is too small, you can't read it, so you can't transcribe it.

Or, guess what, you can walk up close to that poster and read the text right on the poster itself.

#Long #LongPost #CWLong #CWLongPost #OpenSim #OpenSimulator #Metaverse #VirtualWorlds #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta
2025-06-13
@Daniel de Kay From the point of view of a Hubzilla veteran, Bonfire is Hubzilla as ordered from wish.com. Only with an easier-to-use UI and better advertising, also because it's advertised from Mastodon whereas Hubzilla is entirely advertised from Hubzilla. The latter is one reason why at least three out of four Mastodon users have never even heard or read the name "Hubzilla". This probably includes the Bonfire devs. In fact, I wouldn't be surprised if the Bonfire devs knew nothing about Friendica either.

Bonfire has Mastodon users excited and on the edges of their seats with features which they think Bonfire is the first to introduce to the Fediverse. But Friendica has had quite a few of these features for 15 years already. Hubzilla has had others for ten.

Not to mention the features that Hubzilla has that Bonfire hasn't. For example, the second-most advanced permissions system in the Fediverse (the most advanced one can be found on (streams) and Forte, both descendants of Hubzilla from Hubzilla's own creator) with three permission levels: for the whole channel, for contacts, content-specific. Or nomadic identity. Or a cloud file storage with WebDAV connectivity. Or groupware features like a CalDAV calendar server and a headless CardDAV addressbook server.

Or can you set up entire websites on Bonfire? Hubzilla's own official website is actually built on a Hubzilla channel.

Or can you use Bonfire as a full-blown long-form blog? With post titles, with all kinds of text formatting via markup, with an unlimited number of images embedded within posts, with a tag cloud, with categories and with no character limit worth worrying about (Friendica: 200,000, Hubzilla: 16,777,215, (streams) and Forte: over 24 million)? Optionally even for non-federating blog posts?

Does Bonfire have a magic single sign-on system like OpenWebAuth implemented? Or OpenWebAuth itself? Can you even write posts, comments, articles etc. in such a way that different users may see them differently if recognised by OpenWebAuth?

How about support for threaded conversations via conversation containers as per FEP-171b? How about owning your entire discussion yourself and being able to moderate it, all the way to being able to delete individual comments? In fact, how about comment control?

At best, Bonfire is the VHS to Hubzilla's Betamax. The former is inferior, but with more publicity; the latter is better, but so obscure that next to nobody knows it.

#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Friendica #Hubzilla #Streams #(streams) #Forte #Bonfire
2025-06-13
@nihilistic_capybara Yes. As a matter of fact, I've had an AI describe an image after describing it myself twice already. And I've always analysed the AI-generated description of the image from the point of view of someone who a) is very knowledgeable about these worlds in general and that very place in particular, b) has knowledge about the setting in the image which is not available anywhere on the Web because only he has this knowledge and c) can see much much more directly in-world than the AI can see in the scaled-down image.

So here's an example.

This was my first comparison thread. It may not look like it because it clearly isn't on Mastodon (at least I guess it's clear that this is not Mastodon), but it's still in the Fediverse, and it was sent to a whole number of Mastodon instances. Unfortunately, as I don't have any followers on layer8.space and didn't have any when I posted this, the post is not available on layer8.space. So you have to see it at the source in your Web browser rather than in your Mastodon app or otherwise on your Mastodon timeline.

(Caution ahead: By my current standards, the image descriptions are outdated. Also, the explanations are not entirely accurate.)

If you open the link, you'll see a post with a title, a summary and "View article" below. This works like Mastodon CWs because it's the exact same technology. Click or tap "View article" to see the full post. Warning: As the summary/CW indicates, it's very long.

You'll see a bit of introduction post text, then the image with an alt-text that's actually short for my standards (on Mastodon, the image wouldn't be in the post, but below the post as a file attachment), then some more post text with the AI-generated image description and finally an additional long image description which is longer than 50 standard Mastodon toots. I've first used the same image, largely the same alt-text and the same long description in this post.

Scroll further down, and you'll get to a comment in which I pick the AI description apart and analyse it for accuracy and detail level.

For your convenience, here are some points where the AI failed:

  • The AI did not clearly identify the image as from a virtual world. It remained vague. Especially, it did not recognise the location as the central crossing at BlackWhite Castle in Pangea Grid, much less explain what either is. (Then again, explanations do not belong into alt-text. But when I posted the image, BlackWhite Castle had been online for two or three weeks and advertised on the Web for about as long.)
  • It failed to mention that the image is greyscale. That is, it actually failed to recognise that it isn't the image that's greyscale, but both the avatar and the entire scenery.
  • It referred to my avatar as a "character" and not an avatar.
  • It failed to recognise the avatar as my avatar.
  • It did not describe at all what my avatar looks like.
  • It hallucinated about what my avatar is looking at. Allegedly, my avatar is looking at the advertising board towards the right. Actually, my avatar is looking at the cliff in the background, which the AI does not mention at all. The AI couldn't possibly have seen my avatar's eyeballs from behind (and yes, they can move within the head).
  • It did not describe anything about the advertising board, especially not what's on it.
  • It did not know whether what it thinks my avatar is looking at is a sign or an information board, so it was still vague.
  • It hallucinated about a forest with a dense canopy. Actually, there are only a few trees, there is no canopy, the tops of the trees closer to the camera are not within the image, and the AI was confused by the mountain and the little bit of sky in the background.
  • The AI misjudged the lighting and hallucinated about the time of day, also because it doesn't know where the avatar and the camera are oriented.
  • It used the attributes "calm and serene" on something that's inspired by German black-and-white Edgar Wallace thrillers from the 1950s and the 1960s. It had no idea what's going on.
  • It did not mention a single bit of text in the image. Instead, it should have transcribed all of them verbatim. All of them. Legible in the image at the given resolution or not. (Granted, I myself forgot to transcribe a few little things in the image on the advertisement for the motel on the advertising board such as the license plate above the office door as well as the bits of text on the old map on the same board. But I didn't have any source for the map with a higher resolution, so I didn't give a detailed description of the map at all, and the text on it was illegible even to me.)
  • It did not mention that strange illuminated object towards the right at all. I'd expect a good AI to correctly identify it as an OpenSimWorld beacon, describe what it looks like, transcribe all text on it verbatim and, if asked for it, explain what it is, what it does and what it's there for in a way that everyone will understand. All 100% accurately.

CC: @🅰🅻🅸🅲🅴  (🌈🦄)

#Long #LongPost #CWLong #CWLongPost #OpenSim #OpenSimulator #Metaverse #VirtualWorlds #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #AI #LLM #AIVsHuman #HumanVsAI
2025-06-12
@nihilistic_capybara LLMs aren't omniscient, and they will never be.

If I make a picture on a sim in an OpenSim-based grid (that's a 3-D virtual world) which has only been started up for the first time 10 minutes ago, and which the WWW knows exactly zilch about, and I feed that picture to an LLM, I do not think the LLM will correctly pinpoint the place where the image was taken. It will not be able to correctly say that the picture was taken at <Place> on <Sim> in <Grid>, and then explain that <Grid> is a 3-D virtual world, a so-called grid, based on the virtual world server software OpenSimulator, and carry on explaining what OpenSim is, why a grid is called a grid, what a region is and what a sim is. But I can do that.

If there's a sign with three lines of text on it somewhere within the borders of the image, but it's so tiny at the resolution of the image that it's only a few dozen pixels altogether, then no LLM will be able to correctly transcribe the three lines of text verbatim. It probably won't even be able to identify the sign as a sign. But I can do that by reading the sign not in the image, but directly in-world.

By the way: All my original images are from within OpenSim grids. I've probably put more thought into describing images from virtual worlds than anyone. And I've pitted my own hand-written image description against an AI-generated image description of the self-same image twice. So I guess I know what I'm writing about.

CC: @🅰🅻🅸🅲🅴  (🌈🦄) @nihilistic_capybara

#Long #LongPost #CWLong #OpenSim #OpenSimulator #Metaverse #VirtualWorlds #CWLongPost #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #AI #LLM #AIVsHuman #HumanVsAI
2025-06-12
That's probably because threads work differently on Mastodon than they do on Hubzilla. Mastodon doesn't know threaded conversations. And Mastodon users only receive messages
  • posted by actors that they follow
  • boosted (repeated) by actors that they follow
  • which mention them
But they do not receive messages that are comments on posts which they've already received.

Let's assume Alice posts something on Mastodon, Bob comments on Mastodon, Carol replies to Bob on Mastodon, Dave replies to Carol on Mastodon and you reply to Dave on Hubzilla.

If this was an all-Hubzilla thread, you'd only mention Dave to show that you're replying to Dave. Your comment goes straight to Alice, and Bob, Carol and Dave pick it up from Alice because they've already got Alice's post on their stream.

But since Alice, Bob, Carol and Dave are on Mastodon, if you only mention Dave, then only Dave will receive and be notified about your reply because you've mentioned Dave. Alice, Bob and Carol will never see your reply because you haven't mentioned them.

So in this scenario, if you want all four to see your reply, you have to mention all four of them.
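The difference in delivery can be sketched as a toy model (my own simplification of both systems, not the real protocols; the names are those from the example):

```python
# Toy comparison: who receives a reply under Mastodon-style vs.
# Hubzilla-style delivery. A simplification for illustration only.

def mastodon_delivery(mentioned, followers_of_replier):
    """Mastodon-style: only mentioned users and the replier's own
    followers receive the reply."""
    return set(mentioned) | set(followers_of_replier)

def hubzilla_delivery(thread_participants):
    """Hubzilla-style: the comment flows to everyone already in the
    threaded conversation, mentions or not."""
    return set(thread_participants)

thread = {"Alice", "Bob", "Carol", "Dave"}

# A Hubzilla user with no Mastodon followers replies, mentioning only Dave:
print(mastodon_delivery(["Dave"], []))  # on Mastodon, only Dave sees it
print(hubzilla_delivery(thread))        # on Hubzilla, the whole thread would
```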

Granted, other reasons may be delivery delays on Hubzilla's side, or that enough Mastodon users have muted or blocked you because, from their point of view, you as a Hubzilla user act too disturbingly un-Mastodon-like, and you break Mastodon's unwritten rules left and right.

#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Mastodon #Hubzilla #Conversations #ThreadedConversations
2025-06-11
In the cases of some Mastodon users, I actually wonder if it's worth telling them a) that the Fediverse is not only Mastodon, b) that I'm on something that's very very much not Mastodon and c) the implications of all this. Especially if they give the impression of wanting the Fediverse to be only Mastodon oh so very much.

Or whether I should simply Superblock them so that they'll never appear on my stream again.

#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #Mastodon #NotOnlyMastodon #FediverseIsNotMastodon #MastodonIsNotTheFediverse #Superblock
