Admin of a number of fediverse servers, including the .world ones:


You can find me on these servers as @ruud

I receive a lot of messages, direct and mentions. I can’t reply to them all. If you have an issue, please e-mail at

  • 41 Posts
Joined 6M ago
Cake day: Jun 01, 2023


Depends on what sort of invoices. For my invoices for billable hours, I use Kimai.

I fixed that link, but it actually should redirect to, which it apparently doesn’t always do. So we’ll look into that as well.

Thanks for pointing this out.

Yes, there have been quite a lot of trolls and attacks on our server, so we’re very careful about who we let on the team. I’m sure you’ll understand.

No. Just like now, they store the metadata there; the images are on disk or S3.

That’s not the pictures… The pictures take up 1.2 TB… it’s just the database with the metadata about the pictures. (11.8 GB now…)

It’s building a 0.4 database, which is already twice as big as the 0.3 one

OK, the pictrs database upgrade is taking its time… please wait… ;-)

```
pictrs_1     | {"timestamp":"2023-10-02T16:31:44.746467Z","level":"WARN","fields":{"message":"new"},"target":"pict_rs::repo","span":{"name":"Migrating Database from 0.3 layout to 0.4 layout"},"spans":[]}
```

(The thread. Not the, I don’t like that ;-) )

I've written a short blog about what happened in August, and the finances.

I just set up a Matrix server at, so just register an account on that if you like ;-)

Yes, that’s the plan. But determining which instances are suitable isn’t easy… I’ll ask them what the status is on this.

Ah OK. Yeah, Lemmy has seen big growth over the past months. Mastodon’s growth is mostly absorbed by .social.

[Fixed] Comments temporarily broken, being worked on
There was another attack going on (as you might have noticed). We're working on a fix. In the meantime, we've blocked the listing of comments, so at least we aren't down, but it did break comments. Hope to have a fix in the next hour. Stay tuned!

**Update** OK, we've implemented a fix, again many thanks to []( for his assistance. This will prevent the outages we've seen the last couple of days. Let's see what they will come up with next...

A few days ago I saw some cool [JoinLemmy stickers]( created by []( . I asked her if she could also create stickers, and she did! You can see and order them [here](, also check the [other cool stickers in her shop]( Thanks for creating them!

Outage today (2023-07-31) from 02:00 UTC - 05:45 UTC
The site has been down between 02:00 UTC and 05:45 UTC. This was caused by the database spiking to 100% CPU (all 32 cores / 64 threads!) due to inefficient queries being fired at the DB very often. I’ve collected the logs and we’ll be checking how to prevent this (and what caused it).
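For anyone curious how you track down inefficient queries like these: a common Postgres approach (assuming the `pg_stat_statements` extension is enabled — this is a generic technique, not necessarily exactly what we ran) is to rank queries by total execution time. Note the column names below are for Postgres 13+; older versions use `total_time`/`mean_time`.

```sql
-- Top 10 queries by cumulative execution time since stats were last reset.
SELECT calls,
       round(total_exec_time::numeric, 1) AS total_ms,
       round(mean_exec_time::numeric, 2)  AS mean_ms,
       left(query, 80)                    AS query_start
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```

A query with a modest mean time but a huge call count can load the CPU just as badly as one slow query, which is why sorting by total rather than mean time is usually more revealing.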

[Done] Lemmy world was upgraded to 0.18.3 today (2023-07-30)
**Update** The upgrade was done; DB migrations took around 5 minutes. We'll keep an eye out for (new) issues, but for now it seems to be OK.

**Original message** We will upgrade to 0.18.3 today at 20:00 UTC+2 ([check what this is in your timezone]( Expect the site to be down for a few minutes.

**Edit** I was warned it could be more than a few minutes. The database update might even take 30 minutes or longer.

Release notes for 0.18.3 can be found here:

(This is unrelated to the downtimes we experienced lately; those are caused by attacks that we're still looking into mitigating. Sorry for those.)
Update: Downtime today / Cloudflare
Today, like the past few days, we have had some downtime. Apparently some script kids are enjoying themselves by targeting our server (and others). Sorry for the inconvenience.

Most of these 'attacks' are targeted at the database, but some are more DDoS-like and can be mitigated by using a CDN. Some other Lemmy servers are using Cloudflare, so we know that works. Therefore we have chosen Cloudflare as our CDN / DDoS protection platform for now. We will look into other options, but we needed something implemented ASAP. For the other attacks, we are using them to investigate and implement measures like rate limiting etc.
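The rate limiting mentioned above could look roughly like this in nginx (the zone name, rate, and paths here are illustrative examples, not our actual settings):

```nginx
# Hypothetical sketch: limit each client IP to 10 req/s on the API,
# with a small burst allowance; excess requests get HTTP 429.
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

server {
    location /api/ {
        limit_req zone=api_limit burst=20 nodelay;
        limit_req_status 429;
        proxy_pass http://lemmy;
    }
}
```

This only helps against request floods at the HTTP layer; the database-targeted 'attacks' need application-side fixes, which is why both tracks are being worked on.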

As requested by some users: 'old' style now accessible via Code can be found here: , created by [Ryan]( (Is he here?) (Yes he appears to be! []( ! Thanks for this awesome front-end!)

Updated Voyager to 0.23.1 on
Thanks to []( for another release with awesome enhancements, see release notes here:

I blogged about what happened in June, and the financial overview.

I think I fixed the thumbnails issue :-)
It's always the small things you overlook... The `docker-compose.yml` I copied from somewhere when setting up apparently was missing the external network for the pictrs container. So pictrs was working as long as it got the images via Lemmy; getting the images via URL didn't work... Looks like it's working now. Looks a whole lot better with all the images :-) **Edit** For existing posts: edit the post, then Save (no need to change anything). This also fetches the image.
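For reference, this is the kind of thing that was missing: the pictrs service needs to be attached to the externally reachable network too, not just the internal one. A minimal sketch (service, image tag, and network names here are illustrative, not our exact config):

```yaml
# Hypothetical docker-compose fragment: pictrs joined to both the
# internal Lemmy network and the proxy-facing one.
services:
  pictrs:
    image: asonix/pictrs:0.4
    networks:
      - lemmyinternal
      - lemmyexternalproxy   # the line that was effectively missing

networks:
  lemmyinternal:
  lemmyexternalproxy:
    external: true
```

Without the second network, pictrs can serve images proxied through Lemmy but can't fetch images from remote URLs itself, which matches the symptom described above.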
Updated to 0.18.2
(Duplicate post :-) see

Voyager (fka wefwef) now available at
We've installed Voyager and it's reachable at, you can browse Lemmy and log in there (also if your account isn't on **PS** Thanks go out to @stux[]( , he came up with the idea (see
We (and some others) were hacked
While I was asleep, apparently the site was hacked. Luckily, a (big) part of the team is in the US, and some early birds in the EU also helped mitigate this. As I am told, this was the issue:

- There is a vulnerability which was exploited
- Several people had their JWT cookies leaked, including at least one admin
- Attackers started changing site settings and posting fake announcements etc.

Our mitigations:

- We removed the vulnerability
- Deleted all comments and private messages that contained the exploit
- Rotated the JWT secret, which invalidated all existing cookies

The vulnerability will be fixed by the Lemmy devs. [Details of the vulnerability are here](

Many thanks to all that helped, and sorry for any inconvenience caused!

**Update** While we believe the admin accounts were what they were after, it could be that other users' accounts were compromised. Your cookie could have been 'stolen' and the hacker could have had access to your account, creating posts and comments under your name, and accessing/changing your settings (which show your e-mail). For this, you would have had to be using the site at that time, and load a page that had the vulnerability in it.
Updated to 0.18.1
We've updated to Lemmy 0.18.1. For the release notes, see

Some system load graphs of last 24h
For those who find it interesting, enjoy!
Status update 2023-07-05
Another day, another update. More troubleshooting was done today. What did we do:

- Yesterday evening @phiresky[]( did some SQL troubleshooting with some of the admins. After that, phiresky submitted some PRs to GitHub.
- []( created a docker image containing 3 PRs: [Disable retry queue](, [Get follower Inbox Fix](, [Admin Index Fix](
- We started using this image, and saw a big drop in CPU usage and disk load.
- We saw thousands of errors per minute in the nginx log for old clients trying to access the websockets (which were removed in 0.18), so we added a `return 404` in the nginx conf for `/api/v3/ws`.
- We updated lemmy-ui from RC7 to RC10, which fixed a lot, among which the issue with replying to DMs.
- We found that the many 502 errors were caused by an issue in Lemmy/markdown-it.actix or whatever, causing nginx to temporarily mark an upstream as dead. As a workaround we can either 1) only use 1 container, or 2) set ~~`proxy_next_upstream timeout;`~~ `max_fails=5` in nginx.

Currently we're running with 1 lemmy container, so the 502 errors are completely gone so far, and because of the fixes in the Lemmy code everything seems to be running smoothly. If needed we could spin up a second lemmy container using the ~~`proxy_next_upstream timeout;`~~ `max_fails=5` workaround, but for now it seems to hold with 1.

Thanks to [](, [](, [](, [](, [](, []( for their help! And not to forget, thanks to []( and []( for their continuing hard work on Lemmy! And thank you all for your patience, we'll keep working on it!

Oh, and as a bonus, an image (thanks Phiresky!) of the change in bandwidth after implementing the new Lemmy docker image with the PRs. ![](

**Edit** So as soon as the US folks wake up (hi!) we seem to need the second Lemmy container for performance. So that's now started, and I noticed the `proxy_next_upstream timeout` setting didn't work (or I didn't set it properly), so I used `max_fails=5` for each upstream, and that does actually work.
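The `max_fails` workaround described above would look roughly like this in the nginx config (upstream name, addresses, and ports are made-up examples, not our actual values):

```nginx
# Hypothetical sketch: two lemmy containers behind nginx.
# After 5 failed attempts within fail_timeout, nginx marks that
# upstream as unavailable for fail_timeout (10s here) instead of
# returning a 502 on every request routed to it.
upstream lemmy {
    server max_fails=5 fail_timeout=10s;
    server max_fails=5 fail_timeout=10s;
}

server {
    listen 80;
    location / {
        proxy_pass http://lemmy;
    }
}
```

The difference with `proxy_next_upstream timeout;` is that `max_fails`/`fail_timeout` temporarily takes a flapping backend out of rotation, rather than just controlling when a single request is retried on the next server.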

It’s a single server with 32core/64 thread AMD EPYC and 128GB RAM. At the moment we run multiple containers for lemmy so restarting doesn’t mean outage.

Same happened with in November. Family goes first, then work, and then all of my hobbies, of which this is one. (But the one taking up most time at the moment…)

I think I use all chat software there is. I’m in hundreds of Matrix rooms. But I think one of the team at least didn’t like or use Matrix. Don’t remember. And I have Discord anyway for the Mastodon channels…

We have 128GB of RAM. It just skyrockets after a while!

Yes he’s one of the other admins in our Discord, he’s very helpful! status update 2023-07-04
# Status update July 4th Just wanted to let you know where we are with ## Issues As you might have noticed, things still won't work as desired.. we see several issues: ### Performance - Loading is mostly OK, but sometimes things take forever - We (and you) see many 502 errors, resulting in empty pages etc. - System load: The server is roughly at 60% cpu usage and around 25GB RAM usage. (That is, if we restart Lemmy every 30 minutes. Else memory will go to 100%) ### Bugs - Replying to a DM doesn't seem to work. When hitting reply, you get a box with the original message which you can edit and save (which does nothing) - 2FA seems to be a problem for many people. It doesn't always work as expected. ## Troubleshooting We have many people helping us, with (site) moderation, sysadmin, troubleshooting, advise etc. There currently are 25 people in our Discord, including admins of other servers. In the Sysadmin channel we are with 8 people. We do troubleshooting sessions with these, and sometimes others. One of the Lemmy devs, []( is also helping with current issues. So, all is not yet running smoothly as we hoped, but with all this help we'll surely get there! Also thank you all for the donations, this helps giving the possibility to use the hardware and tools needed to keep running!

Sorry hadn’t seen the message. Still interested?

Yeah with nginx doing load balancing

Ohh 1MB is too small. I’ll look into that
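If this is about uploads failing, my guess (an assumption, not confirmed above) is that the 1MB limit is nginx's default `client_max_body_size`, which is 1M unless overridden. The fix would look something like this, with the size here just an example:

```nginx
# nginx rejects request bodies over 1M by default (HTTP 413);
# raise the limit so image uploads can get through.
server {
    client_max_body_size 25m;
}
```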

This is after 4 weeks:

```
58G	pictrs
34G	postgres
```

Need support?
If you need support, it's best not to DM me here or mention me in comments. I now have 300 notifications and probably no time to read them soon. Also, I don't do moderation, so any moderation questions I'd have to forward to the moderation team.

## Where to get support

There's the [!]( community, and another option is to send mail to the support address. Mail is converted to tickets which can be picked up by admins and moderators.

Thanks! Enjoy your day!

Well, we now have 3 lemmy containers, and I have the feeling some are faster than others…

On a physical server in a datacenter

Updated to 0.18.1-rc
Looks like it works. **Edit: still see some performance issues. Needs more troubleshooting.**

**Update: Registrations re-opened** We encountered a bug where people could not log in, see . As a workaround we opened registrations.

## Thanks

First of all, I would like to thank the team and the 2 admins of other servers []( and []( for their help! We did some thorough troubleshooting to get this working!

## The upgrade

The upgrade itself isn't too hard. Create a backup, then change the image names in the `docker-compose.yml` and restart. But, like the first 2 tries, after a few minutes the site started getting slow until it stopped responding. Then the troubleshooting started.

## The solutions

What I had noticed previously is that the lemmy container could reach around 1500% CPU usage; above that, the site got slow. Which is weird, because the server has 64 threads, so 6400% should be the max. So we tried what []( had suggested before: we created extra lemmy containers to spread the load (and extra lemmy-ui containers), and used nginx to load balance between them. Et voilà, that seems to work. Also, as suggested by him, we start the lemmy containers with the scheduler disabled, and have 1 extra lemmy container running with the scheduler enabled, unused for other stuff.

There will be room for improvement, and probably new bugs, but we're very happy to be on 0.18.1-rc now. This fixes a lot of bugs.

[Done] New try at upgrading to 0.18.1 July 1st 20:00 CET
We'll give the upgrade a new try tomorrow. I've had some good input from admins of other instances, who are also going to help troubleshoot during/after the upgrade. Also, there are newer RC versions with fixed issues. Be aware that if we need to roll back again, posts posted between the upgrade and the rollback will be lost. We see a huge rise in new user signups (duh... it's July 1st) which also stresses the server. Let's hope the improvements in 0.18.1 will also help with that.

Federation troubleshooting
So I've been troubleshooting the federation issues with some other admins: ![]( (Thanks for the help)

What we see is that when there are many federation workers running at the same time, they get too slow, causing them to time out and fail. I had federation workers set to 200000. I've now lowered that to 8192, and set the activitypub logging to debug to get queue stats:

`RUST_LOG="warn,lemmy_server=warn,lemmy_api=warn,lemmy_api_common=warn,lemmy_api_crud=warn,lemmy_apub=warn,lemmy_db_schema=warn,lemmy_db_views=warn,lemmy_db_views_actor=warn,lemmy_db_views_moderator=warn,lemmy_routes=warn,lemmy_utils=warn,lemmy_websocket=warn,activitypub_federation=debug"`

Also, I saw that there were many workers retrying deliveries to servers that are unreachable. So, I've blocked some of these servers:

```
,,,,,,,,,,,,,,, 
```

This gave good results: way fewer active workers, so fewer timeouts. (I see that above 3000 active workers, timeouts start.) (If you own one of these servers, let me know once it's back up, so I can un-block it.)

Now it's after midnight so I'm going to bed. Surely more troubleshooting will follow tomorrow and in the weekend. Please let me know if you see improvements, or still have many issues.

[Update: Failed again] Update to 0.18.1-rc.1 tried and rolled back
We've upgraded to 0.18.1-rc.1 and rolled back that upgrade because of issues. (If you had posted anything in those 10 minutes between upgrade and rollback, that post is gone. Sorry!)

The main issue we saw is that users couldn't log in anymore. Existing sessions still worked, but new logins failed (from macOS, iOS and Android; from Linux and Windows it worked). Also, new account creation didn't work. I'll create an issue for the devs and retry once it's fixed.

**Edit** Contacted the devs; they tell me to try again with lemmy-ui at version 0.18.0. Will try again, brace for some downtime!

**Edit 2** So we upgraded again, and it seemed to work nicely! But then it slowed down so much it was unusable. There were many locks in the database. People reported many JSON errors. Sorry, we won't be on 0.18.1 any time soon I'm afraid...
Upgraded to 0.18.1-rc.1
We've upgraded the instance to 0.18.1-rc.1 (to be completed)

Jerboa app and Lemmy 0.18
The 0.18 version of Lemmy was [announced]( This will solve many issues. **But we can't upgrade yet** because the captcha was removed: captcha relied on websockets, which are removed in 0.18. So despite the devs agreeing to [my request to add captcha back](, this will not happen until 0.18.1. Without captcha we would be overrun by bots. Hopefully 0.18.1 will be released soon, because another issue is that the newest version of the Jerboa app won't work with servers older than 0.18. So if you're on, please (temporarily) use another app or the web version.

Added some more known issues
I added some known issues with websockets / spinning wheel to the [known issues post](

I wrote my first post about. When June is finished, I'll also include Lemmy in the financial update on the same blog.

[Solved] Temporarily closed signups because of spam signups
So some spam signups just happened (all format e-mail). This caused bounced mail to increase, causing Mailgun to block our domain to prevent it getting blacklisted. So:

- Mail temporarily doesn't work
- I closed signups for now
- I will ban the spam accounts
- I will check how to prevent this (maybe approval required again?)

Stay tuned.

**Edit**: So apparently there is a captcha option, which I've now enabled. Let's see if this prevents spam. Registrations are open again.

**Edit 2**: Hmm, Mailgun isn't that fast in unblocking the domain. Closing signups again because validation mails aren't being sent.

**Edit 3**: I convinced Mailgun to lift the block. Signups open again.

Posting slowness issue seems solved!
Thanks to a comment by [](, I checked and saw that 'Federation debugging' mode was enabled. I had enabled it when the server had just started (less than 3 weeks ago) and I had an issue with federation. I thought I had switched it off again, but apparently not. This mode causes federation to be done in the foreground, so your 'Post' or 'Comment' action will wait for that to finish... This solves the most annoying issue, and makes the site way more usable. There are many other issues, but we'll get there.

IPv6 enabled
I enabled the IPv6 address for the site. Should work now. Next step would be to enable DNSSEC; I have to figure out how that worked again.

[Guess not…] Installed lemmy-ui 0.18.0 RC-1
I just installed the 0.18.0 release candidate 1 of the lemmy-ui component. This version removes websockets and should solve many strange issues. Like the glitching vote totals, sudden changes of posts etc. Let me know if you see improvements, or new issues.
About post / Rules / FAQ
To be created

# About

# Rules

# FAQ

Workaround for the performance issue with posting in large communities
We're still working to find a solution for the posting slowness in large communities. We have seen that a post does get submitted right away, but the page keeps 'spinning'. So right after you click 'Post' or 'Reply', you can refresh the page and the post should be there. (But to be sure, you could copy the contents of your post first, so you can paste it again if anything goes wrong.)

[Done] Server will be migrated (More power!)
So after we've extended the virtual cloud server twice, we're at the max for the current configuration. And with this crazy growth (almost 12k users!!) even now the server is more and more reaching capacity. Therefore I decided to order a dedicated server. Same one as used for.

So the bad news: we will need some downtime. Hopefully not too much. I will prepare the new server, copy (rsync) stuff over, stop Lemmy, do a last rsync and change the DNS. If all goes well it would take maybe 10 minutes of downtime, 30 at most. (With it took 20 minutes, mainly because of a typo :-) )

For those who would like to donate, to cover server costs, you can do so at our [OpenCollective]( or [Patreon]( Thanks!

**Update** The server was migrated. It took around 4 minutes of downtime. For those who asked: it now uses a dedicated server with an AMD EPYC 7502P 32-core "Rome" CPU and 128GB RAM. Should be enough for now. I will be tuning the database a bit, which should give some extra seconds of downtime, but just refresh and it's back. After that I'll investigate further into the cause of the slow posting. Thanks []( for assisting with that.

[Done - for now…] Expect some brief restarts today (Jun 12 CET)
I'm trying to fix this annoying slowness when posting to larger communities. (Just try replying here...) I'll be doing some restarts of the docker stack and nginx. Sorry for the inconvenience. **Edit**: Well, I've changed nginx from running in a docker container to running on the host, but that hasn't solved the posting slowness...
Starting guide
(I'm creating a starting guide post here. Have patience, it will take some time...)

**Disclaimer**: I am new to Lemmy like most of you. Still finding my way. If you see something that isn't right, let me know. Also additions, please comment!

# Welcome!

Welcome to Lemmy (on whichever server you're reading this)

# About Lemmy

Lemmy is a federated platform for news aggregation / discussion. It's being developed by the Lemmy devs:

## About Federation

What does this federation mean? It means Lemmy is using a protocol (ActivityPub) which makes it possible for all Lemmy servers to interact:

- You can search and view communities on remote servers from here
- You can create posts in remote communities
- You can respond to remote posts
- You will be notified (if you wish) of comments on your remote posts
- You can follow Lemmy users/communities on other platforms that also use ActivityPub (like Mastodon, Calckey etc.) (There's currently a known issue with that, see [here](

Please note that a server only starts indexing a server/community once it has been interacted with by a user of this server. A great image describing this, made by []( : ![](

# About the server

This is one of the many servers hosting the Lemmy software. It was started on June 1st, 2023 by [](, who is also running, and others. A list of Lemmy servers and their statistics can be found at [FediDB](

# Quick start guide

## Account

You can use the account you created to log in to the server on which you created it, not on other servers. Content is federated to other servers; users/accounts are **not**.

## Searching

In the top menu, you'll see the search icon. There, you can search for posts, communities etc. ![](

You can just enter a search word and it will find the post titles, post content, communities etc. containing that word, **that the server knows of**: any content any user of this server ever interacted with. You can also search for a community by its link, e.g. `!`.

Even if the server hasn't ever seen that community, it will look it up remotely. Sometimes it takes some time to fetch the info (and it displays 'No results' meanwhile...), so just be patient and search a second time after a few seconds.

## Creating communities

First, make sure the community doesn't already exist. Use search (see above). Also try []( to see if there are remote communities on other Lemmy instances that aren't known here yet. If you're sure it doesn't exist yet, go to the homepage and click 'Create a Community'. ![](

It will open up the following page: ![](

Here you can fill out:

- Name: should be all lowercase letters. This will be the /c/
- Display name: as to be expected, this will be the displayed name.
- You can upload an icon and banner image. Looks pretty.
- The sidebar should contain things like description, rules, links etc. You can use Markdown (yay!)
- If the community will contain mainly NSFW content, check the NSFW mark. NSFW is allowed as long as it doesn't break [the rules](
- If you only want moderators to be able to post, check that checkbox.
- Select any language you want people to be able to post in. Apparently you shouldn't de-select 'Undetermined'. I was told some apps use 'Undetermined' as the default language, so they don't work if you don't have it selected.

## Reading

I think the reading is obvious. Just click the post and you can read it. Sometimes when there are many comments, they will be partly collapsed.

## Posting

When viewing a community, you can create a new post in it. First of all, make sure to check the community's rules, probably stated in the sidebar. ![](

In the Create Post page these are the fields:

- URL: here you can paste a link which will be shown at the top of the post. The thumbnail of the post will also link there. **Alternatively** you can upload an image using the image icon to the right of the field. That image will also be displayed as the thumbnail for the post.
- Title: the title of the post.
- Body: here you can type your post. You can use Markdown if you want.
- Community: select the community where you want this post created; defaults to the community you were in when you clicked 'Create Post'.
- NSFW: select this if you post any NSFW material; this blurs the thumbnail and displays 'NSFW' behind the post title.
- Language: specify which language your post is in.

Also see the [Lemmy documentation]( on formatting etc.

## Commenting

## Moderating / Reporting

## Client apps

There are some apps available or in testing. See [this post]( for a list!

# Issues

When you find any issue, please report it here: if you think it's server related (or not sure). Report any issues or improvement requests for the Lemmy software itself here:

## Known issues

Known issues can be found in the aforementioned post. One of the most annoying ones is that posting/replying in a somewhat larger community can take up to 10 seconds. It seems to be related to the number of subscribers of the community. I'll be looking into that one, and hope the devs are too.

Who wants to moderate SelfHosted community?
Looking for help with moderation; I have my hands full administering this server ;-)

Requirements:

- Need to have read and agree with the rules (
- Need a little bit of time to keep an eye here

1000 users!
We just reached 1000 users. Please remember that the server was only created June 1st! So you still might notice some startup issues... but so far so good! Welcome @all!
Improvements and issues
In this post I will list the known issues and possible improvements for the server. Please comment with any issue or area for improvement you see and I will add it here. Remember: this instance was only started June 1st, so there's a lot of troubleshooting and tweaking to be done.

Issues can be:

- Local ( (also performance issues)
- Lemmy software issues
- Other software related (apps/Fediverse platforms etc.)
- Remote server related
- (User error? ...)

## Known issues

### Websockets issues

There are some issues with the websockets implementation used in Lemmy, which handles the streaming. Websockets will be removed in version 0.18, so let's hope these issues will all be gone then!

- The top posts page gets a stream of new posts > websockets issue
- You're suddenly in another post than you were before > websockets issue
- Your profile will briefly display another name/avatar in the top right corner

### Spinning wheel issues

Error handling is not one of Lemmy's strong points. Sometimes something goes wrong, but instead of getting an error, the button will show a 'spinning wheel' that lasts until eternity. These are some of the known cases:

- You want to create an account but the username is already taken
- You want to create an account but the username is too long (>20 characters)
- You want to create an account but the password is too long
- You want to create a community but the name is already taken
- You want to create a community but the name is not in all lowercase letters
- You want to create a post over 2000 characters
- You want to post something in a language that isn't allowed in the community

## Other issues

- Federation not always working; apparently not everything gets synced all the time. This needs troubleshooting.
- “404: FetchError: invalid json response body at http://lemmy:8536/api/v3/site” This sometimes happens when the Lemmy app container is very busy. Needs troubleshooting.

## Enhancement requests

- Can themes be added? > To be checked if this can be done without changing code.

For support with issues, go to [the Support community](

Woo-hoo! 100 users!!
I see we've just reached 100 users!! In 5 days..

Some federation issues [solved]
I still see some federation issues:

- It sometimes takes a few tries before a remote post or community is found
- Remote replies don't show up
- Subscriptions to remote communities are stuck in 'pending'

I'll look into that.