/kbin meta @kbin.social

/kbin server update - or how the server didn't blow up

Currently, on the main instance, people have created 40191 accounts (+214 marked as deleted). I don't know how many are active because I don't monitor it, but once again, I greet all of you here :) In recent days, the traffic on the website has been overwhelming. It's definitely too much for the basic docker-compose setup, which was primarily designed for development use. I was aware of the possible consequences of the situation unfolding on Reddit, but I assumed that most people would migrate to one of the Lemmy instances, which already have an established position. I hoped that a few stray enthusiasts would find their way to kbin ;)

The first step was to upgrade the VPS to a higher tier (66.91EUR). It quickly turned out that this wasn't enough. I had to enable Cloudflare protection just to keep the website responsive, but response times were still very slow. At this stage, the instance was practically unusable. The next step was a full migration to a dedicated server (100EUR, the current hardware). That could be done relatively quickly, so it resulted in only a five-minute technical break. Despite the much better specs, things didn't improve. It became clear that the problem didn't lie there. I get really frustrated when it comes to server administration. That was the moment when I started looking for help. Or rather, it found me.

A couple of days ago I wrote about how kbin qualified for the Fast Forward program. To be honest, I applied out of pure curiosity and completely forgot about it because so much was happening at the time. At the height of the firefighting, Hannah ( @haubles ) reached out with an offer to help. I outlined the situation (in short: the server is dying, I don't even know what I need, help! ;) ). She quickly connected us with Vlad ( @vvuksan ) and Renaud ( @renchap ). I was probably too tired because I don't know whether the whole operation lasted 60 minutes or 6 hours, but after a series of precise questions and getting an understanding of the situation, they worked out the entire job themselves. I love working with experts, and it's not often that you come across people so well-versed in the fediverse. Thanks to Hannah's kindness, we will be staying there a bit longer. Currently, fastly.com handles the caching layer and image processing. Hence those cool moving thumbnails ;)
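A CDN caching layer like the one described above generally works by honoring cache headers sent by the origin server. Here is a minimal, hypothetical sketch (not kbin's actual configuration) of the kind of headers an origin can emit so the CDN absorbs traffic instead of the overloaded server:

```python
# Hypothetical sketch: how an origin tells a CDN (Fastly reads the
# Surrogate-Control header and strips it before responses reach
# browsers) what it may cache, separately from browser caching.
def cache_headers(max_age_cdn: int = 3600, max_age_browser: int = 60) -> dict:
    return {
        # Browsers cache briefly, so fixes propagate quickly.
        "Cache-Control": f"public, max-age={max_age_browser}",
        # The CDN caches for an hour and may serve a stale copy for
        # 30s while it refetches from the origin in the background.
        "Surrogate-Control": f"max-age={max_age_cdn}, stale-while-revalidate=30",
    }

headers = cache_headers()
```

The split matters during an incident: the CDN keeps serving cached pages even when the origin is struggling, which is exactly the effect described in the post.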

Things were going well at that point. I could disable Cloudflare protection. Probably thanks to that, many of you are here today, and we got to know each other a bit better :) However, even then, when I tried to enable federation, the server would stop working.

Around the same time, Piotr ( @piotrsikora ), whom I already knew from the Polish fediverse, contacted me. He is the administrator of the Polish Mastodon instance pol.social, operates within the ftdl.pl foundation, and specializes in administering applications with a very similar tech stack. I decided to grant him server access. It took him only a few moments to come back with a few tips that allowed us to enable federation. More followed in the subsequent days, and we managed to reach the current level. I think it's not too bad.

Nevertheless, managing the instance has so far taken up 60% or more of my time, which prevents me from fully focusing on current tasks. That's why I would like to collaborate with Piotr and hand full care of the server over to him. Piotr will also take care of the security side, which I now have to take much more seriously. We still need to work out the terms of cooperation, but I want you to know the direction I intend to pursue.

We also need to migrate to a new environment because one server will sooner or later become insufficient. This time, I want to be prepared for it. This may be associated with transient issues with the website in the coming days.

The next two updates will still be about project funding (I still can't believe what happened) and moderation. The following ones will be more technical, with descriptions of changes and what contributors are doing on Codeberg. I would like to be here more often, but not as an admin, just as myself.

Thank you all for this.

P.S. In private messages, I also received numerous offers of help that I didn't even have a chance to read and respond to. You are the best!

175 comments
  • Hell yes, ernest. Keep up the momentum, stay humble of your shortcomings, and don't burn yourself out.

    We all love this community you've developed, and hopefully we can attract the kind of open, humble, courageous, and intelligent people you seem to have gravitating toward you in your personal and professional life.

  • It's cool to see renchap helping out here in addition to his work with mastodon.social! The amount of cross-pollination and coordination within the fediverse is so cool to see.

    • @MeowdyPardner @haubles fascinating reading comments like this and having to figure out the context for “here” (as I read it via Ivory with my mastodon.social account)

      One of the things that will have to evolve in our discourse online in the fediverse is the distributed nature of “here”

  • Thank you ernest... and fastly... and welcome aboard Piotr!

    OSPs make me so happy :)

  • Thanks for the update. Sounds like you're having an "exciting" time. You've done a great job so far and I hope that the additional help you are receiving means that you might be able to take some time off in the near future, for your own sake.

    Edit: buy Ernest a coffee for those able to contribute

  • @ernest Fantastic news and big thanks to all that are helping! @ernest I feel you will keep seeing several thousand users join in daily as the reddit shitshow continues to play out. There are now 10 apps being developed that will further support the transition especially as the July 1st shutdown takes place. That’s probably when a much bigger wave of new users will come in. Is your infrastructure scalable to allow for this onslaught? Thank you and everyone else again for your hard work!

  • Nice work! I can't imagine what would happen if a hobby project of mine were suddenly used by 35k users. It would be insanely stressful. Keep it up, we are loving it here.

  • Thank you @ernest! Please consider setting up a Patreon, or similar, account so I can donate monthly. I've bought you a few coffees but I'd like it to be automatic.

  • @ernest this was an extraordinary situation with several crisis points on a platform in early development (as far as mass usage is concerned) and you were able to keep a cool head, keep things going and also keep people informed of the situations as they were developing. I am happy to read that you have help now (and it looks like really great expert help) and that you can take a break/retirement from admin and enjoy it with the rest of us. Bravo!

  • @ernest

    Can you break down your costs in a spreadsheet?

    Many scaling issues can be solved by throwing money at it - and I think seeing the cost in black and white might help!

  • Thanks for the update, Ernest. Maybe a tracker can be posted on the sidebar to make sure we're supplying you with enough 'coffee' to keep the lights on? A lot of us are loving this platform enough to want to invest in your work and I hope everyone will crowdsource funds to keep you afloat.

  • I just want to say that I.. we appreciate all of you and all of the work that you are doing! We are happy to be here!

  • Huge shoutout to you and all of those who have gotten kbin to this point. Super crazy to have seen it go from struggling to a place I can happily and easily browse

  • that is such a great update, never knew the stuff that goes behind the scenes. You got this!

  • I had a feeling that the silence over the last few days was a sign that a huge amount of work was happening behind the scenes. I also think you’re doing an excellent job of communicating with us. Thank you for all your effort that’s allowed this community to grow.

  • Thanks for your transparency and hard work! Loving the site and looking forward to it getting even better!

  • Thanks for the hard work ernest!! I'm glad you were able to find some help and take some workload off you.

  • @ernest Regarding servers... did you have a look at Hetzner's server auctions? They tend to have 8c/16t servers for 40-50 bucks.

    Also, I've seen kbin uses PHP at its core. Have you considered switching to a Go stack, which is known to be more resource-friendly than PHP?

    • That isn't the issue.

      A complete rewrite of the application might add capacity, but it's vertical: you still stack the increased load on one instance. No matter how much performance you extract, eventually you run out of capacity.

      As scale increases you need to add horizontal capacity. This is the idea of adding 2, 10, 100 servers. That means breaking services out into stateless parts which can run concurrently (or with managed state behaviour).

      This is where something like Kubernetes comes into play, since it's designed to manage Docker images across hundreds of servers. Instead of squeezing every last bit of capacity from one server, you spread the load.

      Similarly, Postgres, like most SQL platforms, doesn't particularly scale beyond one instance.

      Facebook created Apache Cassandra for this reason; it was an early NoSQL database and is designed to deploy in multiples of its replication factor (3 by default).

      Having data spread over 3, 30, or 300 servers is less efficient, but you now have 3, 30, or 300 servers responding.

      The other advantage is that horizontal scaling is fault-tolerant by design.

      There is an argument for compiled languages like Go, C#, and Java, but honestly the next big win is making as much as possible scale horizontally.

    • Methinks that a rewrite from PHP to Go would be a pretty massive undertaking. PHP's performance characteristics have gotten a lot better as the language and various runtimes have improved, although it's not anything like Go. I think the best route would be for someone to implement another federated link aggregation system in Go, so then we'd have a diverse selection to choose from — Lemmy in Rust, kbin in PHP, this hypothetical new platform in Go, along with everything else out there. A heterogeneous system is good for the continued health of the threadiverse IMO.

  • Looking forward to the follow-up posts with technical details, if you do find the time to write them up ofc! As a new kbin user, my thanks for all the hard work and for welcoming us here <3

  • Glad you got help with server administration! Hopefully account migration is a feature that can be implemented, I would be happy to move my account to a less crowded instance, once it/they come to fruition.

  • Exciting times indeed! Thanks for the updates, it's always interesting and fun to read these. Seems like you've adjusted pretty well considering the massive influx of users in less than 2 weeks, hopefully it gets a bit easier and less stressful from this point forward.

  • I'd been doing a little shopping around over the past few weeks as I've been getting ready to properly leave my 11 year old (cringe) Reddit account behind. Lemmy does seem promising as well, but I do have some concerns about the developers, and while of course no one is perfect, as it's still the early days, I'd much prefer to throw my support behind people to whom I can do so guilt-free. So far at least, this has felt right up my alley, and while I am trying to use this moment to cut down a bit on my internet time in general, I'm definitely happy to be here!

    • The biggest reason I'm here is Ernest. Getting in when a project is just getting off the ground lets users have some say about the direction of things and features. And Ernest is very responsive to the community and asks for feedback. That's exciting, and it makes the whole thing feel more like a community.
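The vertical-vs-horizontal scaling point raised in the comments above can be sketched with a toy replica-placement function, loosely in the spirit of Cassandra's ring placement. This is a hypothetical illustration (not kbin or Cassandra code): the node names, key format, and hash choice are all made up for the example.

```python
# Toy sketch of spreading data across N servers: each key is hashed
# onto a ring of nodes, and `rf` consecutive nodes own a replica.
# Losing any single node loses no data -- fault tolerance by design.
import hashlib

def replicas(key: str, nodes: list[str], rf: int = 3) -> list[str]:
    """Pick `rf` owner nodes for a key: hash the key to a position on
    the node list, then take the next rf nodes, wrapping around."""
    start = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(rf)]

# A cluster of 6 hypothetical nodes; any of a key's 3 owners can
# serve a read, so requests spread across the fleet.
cluster = [f"node{i}" for i in range(6)]
owners = replicas("post:12345", cluster)
```

Growing the cluster just means a longer node list: the same function spreads keys over 3, 30, or 300 servers, which is the horizontal-capacity idea the comment describes.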
