piefedadmin
2025-06-09

Several new PieFed instances

The admin teams behind a few large Lemmy instances have also set up PieFed instances recently. If you’re looking for an instance run by people with plenty of technical experience and deep connections to the fediverse community, these are the instances for you:

piefed.ca – from the lemmy.ca team

piefed.world – from the lemmy.world team

piefed.blahaj.zone – from the lemmy.blahaj.zone team

More instances are listed at https://join.piefed.social/try/ and at https://piefed.fediverse.observer/list.

At the same time I have doubled the capacity of the flagship instance, piefed.social, so there’s plenty of headroom to handle the influx we’re having.

#fediverse #piefed

2025-05-10

How PieFed federates “flair” on posts and comments

On the surface, flair on PieFed functions very similarly to how it does on Reddit – on posts, flair is a community-specific tag that can be used to filter posts within a community. People can also add flair to themselves, which is just a piece of text that appears next to their name whenever they post or comment in the community. This can be helpful for giving a hint about someone’s background, interests or expertise.

However, PieFed is federated and there are copies of the communities on multiple servers (instances). The way to use ActivityPub to create and maintain those copies is described in FEP 1b12, which makes no mention of flair. I have made some minimal additions to that FEP, described below:

For flair on posts, the Lemmy devs have already done quite a bit of work, which I added a little to so that flair can have colors. Community actors have an additional type of tag:

{
  "type": "Group",
  "id": "https://piefed.social/c/piefed_meta",
  "name": "piefed_meta",
  /* ... */
  "lemmy:tagsForPosts": [
    {
      "type": "lemmy:CommunityTag",
      "id": "https://piefed.social/c/piefed_meta/tag/whatever",
      "display_name": "Some Post Tag Name",
      "text_color": "#000000",
      "background_color": "#dedede"
    }
  ]
}

lemmy:tagsForPosts is a list of lemmy:CommunityTag objects.

So now all the different copies of the community will know which flair can be used there. When creating a post in the community, we just need to add one or more lemmy:CommunityTag objects to the Page object:

{
  "id": "https://piefed.social/post/1",
  "actor": "https://piefed.social/u/rimu",
  "type": "Page",
  /* ... */
  "tag": [
    {
      "type": "lemmy:CommunityTag",
      "id": "https://piefed.social/c/piefed_meta/tag/whatever",
      "display_name": "Some Post Tag Name"
    },
    {
      "href": "https://piefed.social/post/1",
      "name": "asdf",
      "type": "Hashtag"
    }
  ]
}

In this example the post also has a #asdf hashtag on it.
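When an instance receives a Page like this, it needs to separate the community flair from ordinary hashtags in the tag list. Here is a minimal Python sketch of that step – the function name and surrounding code are invented for illustration, not PieFed’s actual implementation:

def split_tags(page_json: dict) -> tuple[list[dict], list[dict]]:
    """Separate community flair tags from ordinary hashtags on an incoming Page."""
    tags = page_json.get('tag', [])
    flair = [t for t in tags if t.get('type') == 'lemmy:CommunityTag']
    hashtags = [t for t in tags if t.get('type') == 'Hashtag']
    return flair, hashtags

flair, hashtags = split_tags({
    "type": "Page",
    "tag": [
        {"type": "lemmy:CommunityTag",
         "id": "https://piefed.social/c/piefed_meta/tag/whatever",
         "display_name": "Some Post Tag Name"},
        {"type": "Hashtag", "href": "https://piefed.social/post/1", "name": "asdf"},
    ],
})
print([t["display_name"] for t in flair])   # ['Some Post Tag Name']
print([t["name"] for t in hashtags])        # ['asdf']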

User flair is simpler because it’s not managed by the community moderators and is not a fixed list. PieFed simply adds the author’s flair to every comment (federated as a Note object) they make. When a Note is received, the author’s flair is updated on the receiving instance.

{
  "id": "https://piefed.social/comment/1",
  "actor": "https://piefed.social/u/rimu",
  "type": "Note",
  /* ... */
  "flair": "PieFed dev"
}

This means that when someone changes their flair it takes effect immediately on their own instance, but it won’t propagate to other instances until they write a comment. As flair is primarily used on comments, and the people using flair tend to post a lot of comments, this is kinda “good enough”.
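The receiving logic is essentially “if the Note carries a flair value, overwrite whatever is stored for that actor”. A minimal sketch of the idea, with a plain dict standing in for the real user table (not PieFed’s actual code):

# A dict stands in for the user table here.
user_flair: dict[str, str] = {}

def handle_incoming_note(note_json: dict) -> None:
    """Update the author's cached flair whenever an incoming Note carries one."""
    flair = note_json.get('flair')
    if flair:
        user_flair[note_json['actor']] = flair

handle_incoming_note({
    "id": "https://piefed.social/comment/1",
    "actor": "https://piefed.social/u/rimu",
    "type": "Note",
    "flair": "PieFed dev",
})
print(user_flair["https://piefed.social/u/rimu"])  # PieFed dev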

It would be trivial to add a “flair” attribute onto posts too and have receiving instances read that. User flair shows up next to the author’s name on their posts so arguably it makes sense to send it then too.

Let’s see how it goes.

#asdf #fediverse #Lemmy #piefed #threadverse

2025-04-30

How PieFed federates feeds (aka multi-reddits or multi-comms)

Recently PieFed added a way to group related communities into collections, which we call “Feeds”. Unlike Topics, Feeds are federated, can be created by anyone and can be public or private. There are now hundreds of feeds at https://piefed.social/feeds.

A feed being federated means that people using other instances can subscribe to feeds that were created on your instance, in the same way that people can subscribe/join communities on remote instances.

I’ve written up the technical details of how this all works behind the scenes at:

https://codeberg.org/rimu/pyfedi/src/branch/main/docs/activitypub_examples/feeds.md

#threadverse

2025-03-19

Tuning PostgreSQL for PieFed

Every instance is different, but generally speaking PostgreSQL’s default settings are OK to start with. Once an instance has been running for a few months, though, the amount of data being stored means things start to bog down and some tuning is needed.

https://pgtune.leopard.in.ua will give you a good starting point.

What we use on piefed.social, with 4 CPU cores and 8 GB of RAM, is:

synchronous_commit = off
wal_writer_delay = 2000ms
max_connections = 200
shared_buffers = 1GB
effective_cache_size = 4GB
maintenance_work_mem = 1GB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 1.1
effective_io_concurrency = 200
work_mem = 10MB
huge_pages = off
min_wal_size = 1GB
max_wal_size = 3GB
max_worker_processes = 4
max_parallel_workers_per_gather = 2
max_parallel_workers = 4
max_parallel_maintenance_workers = 2

That’s pretty much the same as what pgtune suggested except work_mem is a bit bigger.

Save this text in a file at /etc/postgresql/14/main/conf.d/piefed.conf.

The “14” part of that path will vary depending on your PostgreSQL version.

#postgresql #webPerformance

2024-08-09

I usually rely on PyCharm which I thought was catching lots of linter-type-things, but I see now that this is not enough to catch everything.

2024-08-09

Watch out for this footgun in the Python requests library

requests is a very widely used Python package, used for making HTTP requests. As you can imagine, a project like PieFed does a lot of HTTP and so I reached for the most widely used tool for the job, thinking it would be robust and easy.

And it has been, for the most part.

some_variable = requests.get('whatever/that_thing.json')

The thing is, unless you add a 'timeout' parameter to the function call, it will wait literally forever for a response from the remote server. 99.9% of the time this will be fine and the request will either succeed or raise an exception when something goes wrong, but very very occasionally your script will just hang “for no reason”.
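The fix is one extra argument. A minimal sketch, with a placeholder URL and an arbitrary 10-second limit:

import requests

try:
    # Without a timeout, this call can block forever if the remote server
    # accepts the connection but never sends a response.
    response = requests.get('https://example.com/that_thing.json', timeout=10)
    response.raise_for_status()
    data = response.json()
except requests.exceptions.Timeout:
    data = None  # give up after 10 seconds instead of hanging the worker
except requests.exceptions.RequestException:
    data = None  # connection errors, bad status codes, etc.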

These kinds of intermittent bugs can be very difficult to solve because the trigger (a remote server malfunctioning in a very particular way) is outside of the development environment, making it impossible to reproduce unless you already know exactly what you’re looking for. In this case I tried half a dozen other fixes before eventually giving up and writing a separate script to monitor PieFed and restart it when it got stuck.

A few weeks later a random event gave me an idea and I combed through the codebase looking for requests without timeouts, found a few and put in a timeout on each. With no way to reproduce the problem I still didn’t know if it was fixed but it’s now been about 2 weeks with no hangs so I’m pretty sure it’s solved now.

Hopefully this post finds some future Python developer and saves them a few days/weeks.

#python

2024-06-23

Apologies, I’m using WordPress with the ActivityPub plugin and was not aware of how to avoid that until just now.

2024-06-22

Yeah. It’d be pretty trivial to change this from being hardcoded to a setting that admins can tweak on their instances. Often the only reason I hardcode things like this is because making it configurable takes longer, not because I really really want it to be hardcoded.

At the moment there is only a handful of PieFed instances so there isn’t much demand for configuration. I hope that changes.

2024-06-22

As someone who has moderated way too many communities, forums, etc, I’ve seen the pattern again and again. Nazis love a good dog-whistle.

2024-06-22

Most software similar to PieFed delegates the job of maintaining the health of communities to the moderators of those communities. This frees up the instance administrators to focus on technical issues and leave a lot of the politics and social janitorial work to others.

Some issues with this:

  • Moderators are not always very good at it, sometimes lacking experience or maturity.
  • Moderators can become inactive, leaving their communities unmaintained.
  • Moderators only have influence over their communities – if someone is removed from one community they can just go to another to cause havoc there. This leads to a lot of duplication of effort.
  • Moderators can have quite different priorities and values from the admins, leaving admins paying to run a service for people they don’t feel very aligned with.

PieFed gives admins a suite of tools to take a more hands-on approach to gardening all the communities on their instance. It does this by ignoring the community divisions and instead treating all posts and accounts as a big pool of things to be managed. Of course there are solid community moderation tools available but I will not be focusing on them in this blog post.

Find people who have low karma

When someone is consistently getting downvoted it’s likely they are a problem. PieFed provides a list of accounts with low karma, sorted by lowest first. Clicking on their user name takes you to their profile which shows all their posts and comments in one place. Every profile has “Ban” and “Ban + Purge” buttons that have instance-wide effects and are only visible to admins.

The ‘Rep’ column is their reputation. As you can see, some people have been downvoted thousands of times. They’re not going to change their ways, are they?

The ‘Reports’ column is how often they’ve been reported, IP shows their IP address and ‘Source’ shows which website linked to PieFed when they initially registered. If an unfriendly forum starts sending floods of toxic people to your instance, spotting them is easy. (In the image above all the accounts are from other instances so we don’t know their IP address or Source).

Find people who downvote too much

Once an account has cast a few votes, an “attitude” is recalculated each time they vote: the percentage of their votes that are upvotes rather than downvotes.
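In other words, something roughly like this – the minimum-votes threshold is invented for illustration and PieFed’s actual numbers may differ:

def attitude(upvotes_cast: int, downvotes_cast: int) -> float | None:
    """Share of an account's votes that are upvotes, as a percentage."""
    total = upvotes_cast + downvotes_cast
    if total < 3:
        return None  # too few votes for a meaningful percentage
    return upvotes_cast / total * 100

print(attitude(upvotes_cast=5, downvotes_cast=45))  # 10.0 – mostly downvoting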

People who downvote more than they upvote tend to be the ones who get in fights a lot and say snarky, inflammatory and negative things. If you were at a dinner party, would you want them around? By reviewing the list of people with bad attitudes you can make decisions about who you want to be involved in your communities.

All these accounts have been downvoting a lot (Attitude column) and receiving some downvotes (Rep column). Their profiles are worth a look, followed by a decision about whether they’re bringing down the vibe or not.

Spot spam easily

A lot of spam does not get reported, or it is only removed on the original instance, leaving copies of it on every instance that federates with them. To help deal with this, PieFed has a list of all content posted by recently created accounts that has been heavily downvoted.

Don’t award karma in low-quality communities

Some communities are inherently useless, and anyone posting popular content in them will accumulate lots of karma, which makes them seem like a valuable account – unless the admin has flagged that community as “Low quality”, which severs the link between upvotes on posts and the author’s karma. Downvotes still decrease karma though, so an account that only ever posts in low-quality communities will slowly lose karma.

All communities with the word ‘meme’ in the name are automatically flagged as low quality but admins can override this on a case-by-case basis.
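The effect on karma can be pictured with a sketch like this – names and logic are illustrative, not PieFed’s actual code:

def looks_low_quality(community_name: str) -> bool:
    """Default flagging rule; admins can override it per community."""
    return 'meme' in community_name.lower()

def karma_delta(vote: int, community_is_low_quality: bool) -> int:
    """Upvotes in low-quality communities add nothing to the author's karma;
    downvotes still count everywhere."""
    if vote > 0 and community_is_low_quality:
        return 0
    return vote

print(karma_delta(+1, looks_low_quality('funny_memes')))  # 0 – upvote ignored
print(karma_delta(-1, looks_low_quality('funny_memes')))  # -1 – downvote still counts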

Warnings on unusual communities

There is a diverse range of communities, catering to different needs and ideologies. A community about “World News” will be completely different on lemmy.ml from one on beehaw.org, and different again on lemmy.world. Even when the written rules of the communities look the same, the way they are interpreted and enforced can come as a big surprise to people new to the fediverse.

To deal with this, admins can add a note that is displayed above the ‘post a comment’ form, saying whatever they want. On piefed.social I’ve used this to put a note on every beehaw.org community about the ‘good vibes only’ nature of that instance, and one community on lemmy.ml has a note about the unusual, mostly-unwritten moderation policies employed there.

Icons next to comments by low karma accounts

Accounts that get downvoted a lot end up with negative karma. Once this happens they get a small red icon next to their user name so everyone knows they might not be worth engaging with. Once their karma drops even further they get two red icons.

These icons also bring the account to the attention of moderators and admins in a more passive way – as they go about reading and interacting, not in a special admin area which might be forgotten or rarely visited.
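The tiering can be pictured roughly like this – the thresholds are invented for illustration and the real ones may differ:

def warning_icons(karma: int) -> int:
    """How many red warning icons to show next to an author's name."""
    if karma < -20:
        return 2
    if karma < 0:
        return 1
    return 0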

Icon for new accounts

New accounts have a special icon next to their user name for the first 7 days. An account with this icon AND some red warning icons is especially likely to be a spammer or troll account. While admins have dedicated parts of the admin area for finding these accounts (described earlier), these icons bring them to everyone’s attention.

Ban evasion detection

I won’t go into too much detail on this, as that would reduce its effectiveness. Suffice to say, when people get IP banned from PieFed, they stay banned more often. It’s not perfect, but it means they are much more likely to become some other instance’s problem.

Automatically delete content based on phrases in user name

One of the perennial issues with federated systems is that banned spammers & trolls can just move to another instance and resume their work there. When they do so they often use the same user name. PieFed lets admins maintain a list of keywords that are used to filter incoming posts by checking those words against the author’s user name.

You can be pretty sure anyone with 1488 or even just 88 in their user name is a nazi, for example.
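The check itself is simple. A rough sketch, with an example keyword list (each admin maintains their own; the function name is invented):

BANNED_NAME_FRAGMENTS = ['1488', '88']  # illustrative; configured by the admin

def author_name_is_suspect(user_name: str) -> bool:
    """Check an incoming post's author name against the admin's keyword list."""
    name = user_name.lower()
    return any(fragment in name for fragment in BANNED_NAME_FRAGMENTS)

print(author_name_is_suspect('friendly_user'))  # False
print(author_name_is_suspect('aryan88'))        # True – content gets filtered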

Speaking of which, PieFed has an optional approval queue for new registrations. New accounts with “88” in their name are always put in a different /dev/null queue that leads nowhere. The UI tells them they’re waiting for approval but that approval will never come.

Report accounts, not just posts

As well as reporting content, entire accounts can be reported to admins (not community mods). This smooths the workflow out a bit because usually when a post is reported the person handling the report needs to check out the entire profile of the offending account to see if it’s part of a pattern of behaviour. A report of an account takes the admin straight there.

Instance-wide domain block

PieFed has a predefined list of blocked domains comprising 3000+ disinformation, conspiracy and fake news websites. No posts linking to those sites can be created. The UI makes it easy to manage the list and add new domains to it.

Automatic reporting of 4chan content

Image recognition technology is used to detect and then automatically report screenshots of 4chan posts. The spreading of 4chan memes normalises 4chan, a nazi meme generation forum, and builds the alt-right pipeline. PieFed instances will not be unwitting participants in that.

I’m sure there are a few things I forgot but hopefully this tour conveys the way PieFed does things differently. There is always more to be done or things that can be improved so I welcome feedback, code contributions and ideas – please check out https://join.piefed.social for ways to get involved.

https://join.piefed.social/2024/06/22/piefed-features-for-growing-healthy-communities/

#fediverse #moderation #piefed

2024-05-02

The bigger picture of how we ended up here

https://www.baldurbjarnason.com/2024/react-electron-llms-labour-arbitrage/

#react #electron #llm

2024-04-17

AFAIK once there are more items in the queue than the burst value (300 in my config) Nginx starts returning HTTP 503, which will cause a retry attempt on some senders (e.g. Lemmy). All other times it returns 200.

So if you wanted to be very careful you could set a tiny burst value (maybe zero??) which would return 503 as soon as the rate limit kicked in.

2024-04-17

Probably, yes.

I find that if a POST fails to be processed I don’t really want the sender to retry anyway, I want them to stop doing it. So if the sender thinks it was successful it’s usually not the worst thing in the world.

It would be nice if Nginx responded with a HTTP 202 (Accepted, yet queued) if a POST was throttled and it would be nice if sending fediverse software knew what to do with that info. But I expect this is an edge case that hasn’t been dealt with by most.

2024-04-17

Fediverse traffic is pretty bursty and sometimes there will be a large backlog of Activities to send to your server, each of which involves a POST. This can hammer your instance and overwhelm the backend’s ability to keep up. Nginx provides a rate-limiting function which can accept POSTs at full speed and proxy them slowly through to your backend at whatever rate you specify.

For example, PieFed has a backend which listens on port 5000. Nginx listens on port 443 for POSTs from outside and sends them through to port 5000:

upstream app_server {
    server 127.0.0.1:5000 fail_timeout=0;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name piefed.social www.piefed.social;
    root /var/www/whatever;

    location / {
        # Proxy all requests to Gunicorn
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://app_server;
        ssi off;
    }

To this basic config we need to add rate limiting, using the ‘limit_req_zone’ directive. Google that for further details.

limit_req_zone $binary_remote_addr zone=one:100m rate=10r/s;

This will use up to 100 MB of RAM as a buffer and limit POSTs to 10 per second, per IP address. Adjust as needed. If the sender is using multiple IP addresses the rate limit will not be as effective. Put this directive outside your server {} block.

Then after our first location / {} block, add a second one that is a copy of the first except with one additional line (and change it to apply to location /inbox or whatever the inbox URL is for your instance):

    location /inbox {
        limit_req zone=one burst=300;
        # limit_req_dry_run on;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://app_server;
        ssi off;
    }

300 is the maximum number of POSTs it will have in the queue. You can use limit_req_dry_run to test the rate limiting without actually doing any limiting – watch the nginx logs for messages while doing a dry run.

It’s been a while since I set this up, so please let me know if I missed anything crucial or said something misleading.

https://join.piefed.social/2024/04/17/handling-large-bursts-of-post-requests-to-your-activitypub-inbox-using-a-buffer-in-nginx/

#nginx #webPerformance

2024-03-15

In this screencast I code some basic moderation features for #PieFed – https://www.youtube.com/watch?v=9f3MQIcoix0. Hopefully it’ll help new contributors get familiar with the codebase.

2024-03-12

By default, all posts show up in search results on #PieFed, #Lemmy and #Kbin. But in a first for the threadverse, PieFed has just added some privacy features that Mastodon had for a long time – being searchable is now optional!

Un-tick the “My posts appear in search results” checkbox in your settings and not only will your posts be hidden from the PieFed search on your instance, but on all other PieFed instances too (yes, it federates, but only to PieFed instances).

But wait, there’s more. Google will not add your profile or any of your posts to its index (because of the <meta name="robots" content="noindex"> tag used when rendering your posts) AND comments you made on other people’s posts won’t be indexed by Google either (because of the <!--googleoff: all--> tag).
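Conceptually, the rendering code just checks the author’s preference and emits the extra markup when needed. A loose sketch – the function name and return structure are invented, not PieFed’s actual code:

def search_privacy_markup(author_searchable: bool) -> dict[str, str]:
    """Extra markup to emit when rendering content whose author opted out of search."""
    if author_searchable:
        return {}
    return {
        'post_head': '<meta name="robots" content="noindex">',  # the author's own posts and profile
        'comment_prefix': '<!--googleoff: all-->',              # their comments on other people's posts
    }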

So if you’re tired of living in a fishbowl, want a bit of privacy or would rather people can’t follow you around, PieFed is here for you.

https://join.piefed.social/2024/03/12/piefed-privacy-control-your-search-visibility/

#fediverse #Kbin #Lemmy #piefed #privacy #search #threadverse

2024-03-07

The Prosocial Design Network curates and researches evidence-based design solutions to bring out the best in human nature online: https://www.prosocialdesign.org/

2024-03-07

Recently @siderea wrote a fantastic thread about social homogeneity, moderation, the design of social platforms and what they could be. They covered a lot of ground and I can’t respond to it all, so I’ll just pick some highlights:

I cannot tell you how many conversations I have seen about the topic of “moderation” and how necessary it is in which nobody has ever bothered to set down what exactly it is that they think a moderator is supposed to accomplish.

I mean, it’s all of them. I’ve been on the internet since the 1980s, and I have never seen anyone stop and actually talk about what they thought moderators were trying to do or should try to do.

That sounds easy. I’ll take a shot at that, below.

Also they draw a parallel between designing buildings and designing social platforms:

Why should our societies tolerate the existence of *irresponsibly* designed and operated social media platforms, that increase violence and other antisocial behavior?

Primarily buildings are built to be used, and as such they are tools, and we judge them, as we do all tools, by how fit they are for their purpose, whatever that might be.

And the purposes of buildings are to afford various ways of people interacting or avoiding interacting.

So architects think a lot about that. It’s a whole thing.

Those who put together social media platforms need to think about the same sort of thing.

Preach!

The upshot is that we can do better than we have in the past. We can go beyond the bare minimum of “delete the spam, ban the nazis” moderation. When we build social software, the features it has determine what kind of moderation is possible and what kind of interactions people will have. We should be intentional about that.

I’d like to share some of my ideas for how we can do that but first, let’s get the basics covered:

What I think a moderator is supposed to accomplish

Obviously every online space is different and has its own values and priorities. What follows is what I consider to be the minimum necessary to avoid devolving into 4chan as soon as the normies arrive.

The goal of moderators is to create a positive, inclusive, and constructive online community where users feel comfortable engaging in discussions and sharing their thoughts and ideas. To that end, their responsibilities include:

  1. Enforcing Community Guidelines:
    • Moderators ensure that users adhere to the forum’s rules and guidelines. This may involve removing or editing content that violates these rules.
  2. Fostering a Positive Atmosphere:
    • They work to create a welcoming and friendly atmosphere within the forum. This includes encouraging respectful communication and discouraging any form of harassment or bullying.
  3. Managing Conflict:
    • Moderators intervene when conflicts arise between users, helping to de-escalate situations and resolve disputes. This may involve mediating discussions or issuing warnings to users.
  4. Preventing Spam and Irrelevant Content:
    • They monitor the forum for spam, irrelevant content, or any form of disruptive behaviour. This helps maintain the quality of discussions and keeps the forum focused on its intended topics.
  5. Addressing Technical Issues:
    • Moderators often assist users with technical issues related to the forum platform. This includes addressing bugs, helping users navigate the site, and forwarding technical problems to the appropriate channels.
  6. Encouraging Positive Contributions:
    • Moderators actively encourage users to contribute positively to discussions. This can involve highlighting valuable contributions, providing constructive feedback, and recognizing members for their positive engagement.
  7. Applying Consequences:
    • When necessary, moderators may apply consequences for rule violations, such as issuing warnings, temporary suspensions, or permanent bans. This ensures accountability and helps maintain a healthy community.
  8. Staying Informed:
    • Moderators stay informed about the forum’s community and culture, as well as any changes in policies or guidelines. This helps them address issues effectively and stay responsive to the evolving needs of the community.
  9. Collaborating with Community Members:
    • Moderators listen to concerns and feedback from the community. Taking a collaborative approach helps build trust and ensures that the moderation team understands the community’s needs.

Ok, cool. But:

We can and should accomplish more

When we think about moderation tools for a platform that serves millions of people, we are shaping the nature of social interactions on a grand scale. As we engineer these virtual societies, the question we need to ask ourselves is, “What is the nature of the society we want to create?” and within that, “What do we want moderation to accomplish that supports that nature?” and eventually “What software features do moderators need to do their work?”

The nature of the society

We want to create an ideal society where everyone is safe, respected, empowered, entertained and encouraged to grow and find meaning according to their individual free choices. Members of this online society contribute meaningfully and positively to the rest of society, support the actualisation of human rights for all and work to help democracy live up to its promise.

Remember the 1990s, when the internet hadn’t been corrupted yet? Yeah. I do.

What we want moderation to accomplish to maintain this ideal society

Defining the Role of Moderation

Moderation should not be a passive, reactive role. Instead, it should be proactive, shaping the community’s social dynamics intentionally. The first step towards this is defining what our platforms aim to achieve. Do we want a space for free and open discussions, a supportive community, or a platform for specific interests? This vision will shape the guidelines we develop, the tools we use, and the strategies we implement.

Developing Clear Guidelines and Empowering Moderators

Once we have our vision, we need to create a set of rules that align with this vision. These guidelines should be clear, easily accessible, and comprehensive. Moreover, we need to empower our moderators with the right tools and authority to enforce these guidelines. This can include features for deleting posts, banning users, or moving discussions.

Investing in Technology

Incorporating technology is crucial in supporting our moderators. Automated moderation tools can detect and remove inappropriate content, while algorithms can promote high-quality posts. Technology can also help in combating challenges like trolls who use new IP addresses to create accounts. Techniques like browser fingerprinting can identify users regardless of their IP, and restrictions on new accounts can deter trolls.

Addressing Complex Issues

Online communities also need to grapple with complex issues such as the formation of high-control groups, disinformation propagation, social isolation, and internet addiction. Tackling these problems requires more advanced tools and strategies:

  • For high-control groups, we need to implement robust reporting systems and use AI tools to detect patterns of manipulation.
  • To combat disinformation, we need to establish strong fact-checking protocols, possibly collaborating with external fact-checking organizations.
  • To mitigate social isolation and internet addiction, platforms can implement features to promote healthier usage, like reminders to take breaks or limits on usage time.
  • To manage trolls, we can use advanced techniques that track users beyond their IP address and limit the activities of new accounts until they show they can be trusted.

Continuous Evaluation and User Education

Finally, moderation should be an ongoing process of improvement and adaptation. We need to regularly review and update our strategies based on their effectiveness and changing conditions. Additionally, we need to educate our users about these issues and how to report them. An informed user base can greatly aid in maintaining a healthy community.

In conclusion, moderation in online communities is not just about maintaining order but about intentionally shaping the dynamics of these spaces. As we navigate the digital age, we must recognize the power and responsibility we hold in engineering these virtual societies, and use it to create healthier, safer, and more inclusive communities.

https://join.piefed.social/2024/03/07/moderation-the-design-of-social-platforms/

#culture #moderation #society
