Systemd v259

(github.com)

82 points | by voxadam 22 hours ago

14 comments

  • sovietmudkipz 19 hours ago
    Hobbyist game dev here with random systemd thoughts. I’ve recently started to lean on systemd more as my ‘local game server process manager’ process. At first I thought I’d have to write this up myself as a whole slew of custom code, but then I realized the linux distros I use have systemd. That + cgroups and profiling my game server’s performance lets me pack an OS with as many game servers dynamically (target 80% resource utilization, funny things happen after that — things I don’t quite understand).

    In this way I’m able to set up AWS EC2 instances or digital ocean droplets, a bunch of game servers spin up and report back their existence to a backend game services API. So far it’s working but this part of my project is still in development.

    I used to target containerizing my apps, which adds complexity, but often in AWS I have to care about VMs as resources anyways (e.g. AWS gamelift requires me to spin up VMs, same with AWS EKS). I’m still going back and forth between containerizing and using systemd; having a local stack easily spun up via docker compose is nice, but with systemd what I write locally is basically what runs in prod environment, and there’s less waiting for container builds and such.

    I share all of this in case there’s a gray beard wizard out there who can offer opinions. I have a tendency to explore and research (it’s fuuun!) so I’m not sure if I’m on a “this is cool and a great idea” path or on a “nobody does this because <reasons>” path.

    • miladyincontrol 10 hours ago
      > I’m still going back and forth between containerizing and using systemd

      Why not both? Systemd allows you to make containers via nspawn, which are defined just about the exact same as you do a regular systemd service. Best of both worlds.

    • dijit 19 hours ago
      This is sort of how I designed Accelbytes managed gameserver system (previously called: Armada).

      You provide us a docker image, and we unpack it, turn it into a VM image and run as many instances as you want side-by-side with CPU affinity and NUMA awareness. Obviating the docker network stack for latency/throughput reasons - since you can

      They had tried nomad, agones and raw k8s before that.

      • sovietmudkipz 19 hours ago
        Checking out the website now. Looks enticing. Would a user of accelbyte multiplayer services still be in the business of knowing about underlying VMs? I caught some copy on the website that led me to question.

        As a hobbyist part of me wants the VM abstracted completely (which may not be realistic). I want to say “here’s my game server process, it needs this much cpu/mem/network per unit, and I need 100 processes” and not really care about the underlying VM(s), at least until later. The closest thing I’ve found to this is AWS fargate.

        Also holy smokes if you were a part of the team that architected this solution I’d love to pick your brain.

        • maccard 17 hours ago
          There’s a couple of providers that give you that kind of abstraction. Playfab is _pretty close_ but it’s fairly slow to ramp up and down. There is/was multiplay - they’ve had some changes recently and I’m not sure what their situation is right now. There’s also stuff like Hathora (they’re great but expensive).

          At a previous job, we used azure container apps - it’s what you _want_ fargate to be. AIUI, Google Cloud Run is pretty much the same deal but I’ve no experience with it. I’ve considered deploying them as lambdas in the past depending on session length too…

          • gcr 12 hours ago
            Cloud Run tries to be this but every service like this has quirks. For example, GCR doesn’t let you deploy to high-CPU/MEM instances, has lower performance due to multi-tenant hosts, etc
        • dijit 17 hours ago
          That was was actually the original intent. If we scale to bare metal providers we can get much more performance. m

          By making it an “us” problem to run the infrastructure at a good cost, and be cheaper then than AWS for us to run, meaning we could take no profit on cloud vms. making us cost competitive as hell.

          • sovietmudkipz 8 hours ago
            If I understand correctly you're saying you manage hardware yourself (colocate in a data center? Dedicated hosting?) and that gives you an edge in pricing. That's pretty cool, and I think I can see how it could be less expensive to purchase hardware & maintain it rather than renting that compute from a third party. There is obviously the tradeoff of then being responsible for capacity planning for the workloads supported among other downsides and maintaining hardware lifecycle but I wouldn't be surprised to hear this downside is overstated compared to benefits reaped.
    • madjam002 19 hours ago
      Definitely don't recommend going down this path if you're not already familiar with Nix, but if you are, a strategy that I find works really well is to package your software with Nix, then you can run it easily via systemd but also create super lightweight containers using nix-snapshotter[0] so you don't have to "build" container images if you still want the flexibility of containers. You can then run the containers on Docker or Kubernetes without having to build heavy images.

      [0] https://github.com/pdtpartners/nix-snapshotter

      • frantathefranta 18 hours ago
        I don't recommend getting familiar with Nix because your chances of getting nerd sniped by random HN comments increase exponentially.
        • sovietmudkipz 8 hours ago
          Funny. I probably will dive into Nix some day but I've been content letting it sit waiting for me to check it out.
      • throwaway091025 18 hours ago
        [dead]
    • esseph 19 hours ago
      If you use podman quadlets, you get containers and systemd together as a first class citizen, in a config that is easily portable to kubernetes if you need more complex features.
      • sovietmudkipz 18 hours ago
        O.O this may be the feature that gets me into podman over docker.
        • asmor 10 hours ago
          They're very cool. I actually combine them with Nix. Because why not.

          https://github.com/SEIAROTg/quadlet-nix

        • esseph 18 hours ago
          The shift from docker to podman was originally quite painful at first, but it's much better, very usable, and quite stable now.

          Still, I can see the draw for independent devs to use docker compose. Teams and orgs though makes sense to use podman and systemd for the smaller stuff or dev, and then literally export the config as a kubernetes yaml.

    • reactordev 18 hours ago
      This actually works really well with custom user scripts to do the initial setup. It’s also trivial to do this with docker/podman if you don’t want it to take over the machine. Batching/Matchmaking is the hard part of this, setting up a fleet is the fun part of this.

      I’ve also done Microsoft Orleans clusters and still recommend the single pid, multiple containers/processes approach. If you can avoid Orleans and kubernetes and all that, the better. It just adds complexity to this setup.

    • baggy_trough 19 hours ago
      Did you try systemd's containers (nspawn)?
    • rbjorklin 19 hours ago
      You sound like you've explored at least a few options in this space. Have you looked at https://agones.dev/ ?
      • sovietmudkipz 18 hours ago
        Yes! It’s a great project. I’m super happy they have a coherent local development story. I kinda abandoned using it though when I said “keeeep it simple” and stopped using containers/k8s. I think I needed to journey through understanding why multiplayer game services like Agones/gamelift/photon were set up like they were. I read through Multiplayer Game Programming: Architecting Networked Games by Joshua Glazer and Sanjay Madhav really helped (not to mention allowed me to better understand GDC talks over multiplayer topics much better).

        This all probably speaks to my odd prioritization: I want to understand and use. I’ve had to step back and realize part of the fun I have in pursuing these projects is the research.

        • dontlaugh 1 hour ago
          I’ve also found docker / k8s to mostly just get in the way. Even VMs are often a problem, depending on the details of the game.

          Bare metal is the only actually good option, but you often have to do a lot yourself. Multiplay did offer it last time I looked, but I don’t know what’s going on with them now.

    • colechristensen 18 hours ago
      > (target 80% resource utilization, funny things happen after that — things I don’t quite understand).

      The closer you get to 100% resource utilization the more regular your workload has to become. If you can queue requests and latency isn't a problem, no problem, but then you have a batch process and not a live one (obviously not for games).

      The reason is because live work doesn't come in regular beats, it comes in clusters that scale in a fractal way. If your long term mean is one request per second what actually happens is you get five requests in one second, three seconds with one request each, one second with two requests, and five seconds with 0 requests (you get my point). "fractal burstiness"

      You have to have free resources to handle the spikes at all scales.

      Also very many systems suffer from the processing time for a single request increasing as overall system loads increase. "queuing latency blowup"

      So what happens? You get a spike, get behind, and never ever catch up.

      https://en.wikipedia.org/wiki/Network_congestion#Congestive_...

      • sovietmudkipz 17 hours ago
        Yea. I realize I ought to dig into things more to understand how to push past into 90%-95% utilization territory. Thanks for the resource to read through.
        • mpyne 12 hours ago
          You absolutely do not want 90-95% utilization. At that level of utilitization random variability alone is enough to cause massive whiplash in average queue lengths.

          The cycle time impact of variability of a single-server/single-queue system at 95% load is nearly 25x the impact on the same system at 75% load, and there are similar measures for other process queues.

          As the other comment notes, you should really work from an assumption that 80% is max loading, just as you'd never aim to have a swap file or swap partition of exactly the amount of memory overcommit you expect.

          • rcxdude 11 hours ago
            Man, if there's one idea I wish I could jam into the head of anyone running an organization, it would be queuing theory. So many people can't understand that slack is necessary to have quick turnaround.
            • sovietmudkipz 9 hours ago
              Mmmm, I remember reading this in Systems Performance Brendan Gregg. I should revisit what was written…
          • sovietmudkipz 9 hours ago
            I target 80% utilization because I’ve seen that figure multiple times. I suppose I should rephrase: I’d like to understand the constraints and systems involved that make 80% considered full utilization. There’s obviously something that limits a OS; is it tunable?

            Questions I imagine a thorough multiplayer solutions engineer would be curious of, the kind of person whose trying to squeeze as much juice out of the hardware specs as possible.

            • btschaegg 5 hours ago
              It might not be the OS, but just statistical inevitability. If you're talking about CPU utilization on Linux, for example, it's not all that unlikely that the number you're staring at isn't "time spent by CPU doing things" but "average CPU run queue length". "100%" then doesn't only mean the CPU gets no rest, but "there's always someone waiting for a CPU to become free". It likely pays off to understand where the load numbers in your tooling actually come from.

              Even if that weren't the case, lead times for tasks will always increase with more utilization; see e.g. [1]: If you push a system from 80% to 95% utilization, you have to expect a ~4.75x increase in lead time for each task _on average_: (0.95/0.05) / (0.8/0.2)

              Note that all except the term containing ρ in the formula are defined by your system/software/clientele, so you can drop them for a purely relative comparison.

              [1]: https://en.wikipedia.org/wiki/Kingman%27s_formula

              Edit: Or, to try to picture the issue more intuitively: If you're on a highway nearing 100% utilization, you're likely standing in a traffic jam. And if that's not (yet) strictly the case, the probabilty of a small hiccup creating one increases exponentially.

        • colechristensen 16 hours ago
          One way to think about it is 80% IS full utilization.

          The engineering time, the risks of decreased performance, and the fragility of pushing the limit at some point become not worth the benefits of reaching some higher metric of utilization. If it's not where you are, that optimum trade off point is somewhere.

  • anotherhue 20 hours ago

      systemd-networkd now implements a resolve hook for its internal DHCP
          server, so that the hostnames tracked in DHCP leases can be resolved
          locally. This is now enabled by default for the DHCP server running
          on the host side of local systemd-nspawn or systemd-vmspawn networks.
    
    Hooray.local
  • nix0n 20 hours ago
    > Support for System V service scripts is deprecated and will be removed in v260

    All the services you forgot you were running for ten whole years, will fail to launch someday soon.

    • noosphr 19 hours ago
      Every release of redhat software makes me happy I switched to openbsd for my human scale computers.
    • nish__ 20 hours ago
      How hard is it to just call your init.d scripts from a systemd unit?
      • bonzini 20 hours ago
        Not only it's easy, the exact contents of the systemd unit can already be found in /run/systemd/system.
        • nish__ 19 hours ago
          Honestly. I'm sick of people complaining about systemd.
          • nottorp 19 hours ago
            Were you paid to learn it?

            Because last time I wrote systemd units it looked like a job.

            Also, way over complex for anything but a multi user multi service server. The kind you're paid to maintain.

            • tapoxi 18 hours ago
              Why would a server use a different init system than a desktop or embedded device?

              Why wouldn't you want unit files instead of much larger init shell scripts which duplicate logic across every service?

              It also enabled a ton of event driven actions which laptops/desktops/embedded devices use.

              • bonzini 18 hours ago
                > Why wouldn't you want unit files instead of much larger init shell scripts which duplicate logic across every service?

                Indeed, that criticism makes no sense at all.

                > It also enabled a ton of event driven actions which laptops/desktops/embedded devices use.

                Don't forget VMs. Even in server space, they use hotplug/hotunplug as much as traditional desktops.

                • throw0101a 9 hours ago
                  >> It also enabled a ton of event driven actions which laptops/desktops/embedded devices use.

                  > Don't forget VMs. Even in server space, they use hotplug/hotunplug as much as traditional desktops.

                  I was doing hot plugging of hardware awo+ decades ago when I still administered Solaris machines. IBM/mainframes has been doing it since forever.

                  Even on Linux udevd existed before systemd did.

              • throw0101a 9 hours ago
                > Why would a server use a different init system than a desktop or embedded device?

                The futzing around with resolv.conf(5) for one.

                I take to setting the immutable flag on the file given all the shenanigans that "dynamic" elements of desktop-y system software does with the file when I want the thing to never change after I install the server. (If I do need to change something (which is almost never) I'll remove/re-add the flag via Anisble's file:attr.)

                Of course nowadays "init system" now also means "network settings" for some reason, and I have often have to fight between system-networkd and NetworkManager on some distros: I was very happy with interfaces(5), also because once I set the thing on install on a server, I hardly have to change it and the dynamic-y stuff is an anti-feature.

                SystemD as init replacement is "fine"; SystemD as kitchen-sink-of-the-server-with-everything-tightly-coupled can get annoying.

              • yjftsjthsd-h 17 hours ago
                > Why would a server use a different init system than a desktop or embedded device?

                The server and desktop have a lot more disk+RAM+CPU than the embedded device, to the point that running systemd on the low end of "just enough to run Linux" would be a pain.

                Outside embedded, though, it probably works uniformly enough.

                • nottorp 11 minutes ago
                  Heh. I still have a pre systemd machine around. It uses 300 M of RAM for the OS and a few services I use in my home.

                  I recently set up a "modern" systemd based Ubuntu server in a VM and it used closer to 1 G before I installed any service.

            • bigstrat2003 18 hours ago
              I think you're way overstating things. Systemd units can be complex, but for most things they are dead simple to write.
            • 0x457 18 hours ago
              > a multi user multi service server. The kind you're paid to maintain.

              TIL. Didn't know I can get paid to maintain my PC because I have a background service that does not run as my admin user.

            • jauntywundrkind 17 hours ago
              A systemd service can be:

                [Service]
                Type=simple
                ExecStart=/usr/bin/my-service
              
              If this is a hard job for you well maybe get another career mate. Especially now with LLMs.

              The thing to me is that services sometimes do have cause to be more complex, or more secure, or to be better managed in various ways. Over time we might find (for ex) oh actually waiting for this other service to be up and available first helps.

              And if you went to run a service in the past, you never know what you are going to get. Each service that came with (for ex) Debian was it's own thing. Many forked off from one template or a other. But often forked long ago, with their own idiosyncratic threads woven in over time. Complexity emerged, and it wasn't contained, and it crrtainly wasn't normalized complexity across services: there would be dozens of services each one requiring careful staring at an init script to understand, with slightly different operational characteristics and nuance.

              I find the complaints about systemd being complex almost always look at the problem in isolation. "I just want to run my (3 line) service, but I don't want to have to learn how systemd works & manages unit: this is complex!". But it ignores the sprawl of what's implied: that everyone else was out there doing whatever, and that you stumble in blind to all manners of bespoke homegrown complexity.

              Systemd offers a gradient of complexity, that begins with extremely simple (but still offering impressive management and oversight), and that lets services wade into more complexity as they need. I think it is absolutely humbling and to some people an affront to see man pages with so so so many options, that it's natural to say: I don't need this, this is complex. But given how easy it is, how much great ability to see the state of the world we get that SysV never offered, given the standard shared culture tools and means, and given the divergent evolutionary chaos of everyone muddling through init scripts themselves, systemd feels vastly more contained, learnable, useful, concise, and less complex than the nightmares of old. And it has simple starting points, as shown at the top, that you can add onto and embelish onwards as you find cause to move further along the gradient of complexity, and you can do so in a simple way.

              It's also incredibly awesome how many amazing tools for limiting process access, for sandboxing and securing services systemd has. The security wins can be enormous.

              > Because last time I wrote systemd units it looked like a job

              Last, an LLM will be able to help you with systemd, since it is common knowledge with common practice. If you really dislike having to learn anything.

              • ewoodrich 15 hours ago
                Yeah, I've been using Claude and Codex to create bespoke systemd services for my random tools and automation stuff and have been really impressed by how easy it is and how rock solid they are once setup. It's really nice not living in constant terror that a reboot, network connectivity loss or gentle breeze will cause my duct taped scripts to collapse under their own weight.
              • nottorp 16 hours ago
                Somehow that's never enough though.
                • jauntywundrkind 5 hours ago
                  I dunno man. The past was a shit show & you seem extremely resistant to trying at all.

                  I struggle to figure out what it is that the systemd haters club actually struggles with, what is actually the hard parts. I do in fact sometimes use a 3 line .service file and it works fine. It feels like there is a radically conservative anti progress anti learning anti trying force that is extremely vocal that shows up all the time everywhere in any thread, to protest against doing anything or learning anything. I really really am so eager to find the learnable lessons, to find the hard spots, but it's almost entirely the same low grade discursive trashing with no constructive or informative input.

                  It feels like you use emotional warfare rather than reason. The culture I am from is powerless against that if that's all you bring but I also feel no respect for a culture that is so unable to equivocate what the fuck it's problems actually are. Imo we all need a social defense against complaints that are wildly vacuous & unspecific. Imo you are not meeting any baselines for taking your whinges seriously.

                  • nottorp 2 hours ago
                    > unable to equivocate what the fuck it's problems actually are

                    ... or doesn't care to discuss it any more. RedHat's push was succesful, linux is not a hobby OS any more, you won.

                    I can agree with you that linux needed something better than sysv init.

                    I can't agree with you that this monolithic solution that takes over more and more services is better.

                    Oh, you want a specific complaint?

                    Why the fuck does systemd lock up the entire startup process for 2 minutes if you start a desktop machine without network connectivity?

            • nailer 17 hours ago
              > Because last time I wrote systemd units it looked like a job.

              Fascinating. Last time I wrote a .service file I thought how muhc easier it was than a SysV init script.

              • egorfine 2 hours ago
                Until you need to actually dive deep into complicated scenarios. In sysv init you were on your own, which could be for better or for worse. In the world of systemd you either do as LP says or you do not at all.

                I vastly prefer #1.

    • sebazzz 19 hours ago
      For me it is quite a list.

      However, it is not easy figuring out which of those script are actually a SysVInit script and which simply wrap systemd.

      • bonzini 19 hours ago
        As I wrote in another comment, just check out /run/systemd/system. You'll find the wrapper units that systemd creates for your sysvinit scripts.
    • sidewndr46 17 hours ago
      Wasn't this support listed as one of the reasons why systemD would be fine for everyone to adopt?
      • bonzini 15 hours ago
        That was almost 15 years ago and the support is evidently not as useful.

        Also it's entirely contained within a program that creates systemd .service files. It's super easy to extract it in a separate project. I bet someone will do it very quickly if there's need.

  • egorfine 2 hours ago
    It's time for a new, leaner init system. systemd became an os of it's own.
  • A4ET8a8uTh0_v2 20 hours ago
    Despite being philosophically opposed to it, I can't deny that it is as common as it, because of how easy it seems to make the initial setup. By comparison, when I recently tried void linux, it simply requires ( maybe even demands ) more of its user.
  • throw0101d 20 hours ago
    • Nextgrid 19 hours ago
      Who needs to read mail when you can even make it receive mail!

      Make an `smtp.socket`, which calls `smtp.service`, which receives the mail and prints it on standard output, which goes to a custom journald namespace (thanks `LogNamespace=mail` in the unit) so you can read your mail with `journalctl --namespace=mail`.

  • Levitating 3 hours ago
    glad to see the varlink IPC expanded upon
  • MarkusWandel 16 hours ago
    So they're finally nuking rc.local altogether.

    Probably no biggie to google the necessary copypasta to launch stuff from .service files instead. Which, being custom, won't have their timeout set back to "infinity" with every update. Unlike the existing rc.local wrapper service. Which, having an infinity timeout, and sometimes deciding that whatever was launched by rc.local can't be stopped, can cause shutdown hangs.

  • wpollock 18 hours ago
    > The cgroup2 file system is now mounted with the "memory_hugetlb_accounting" mount option, supported since kernel 6.6.

    > Required minimum versions of following components are planned to be raised in v260:

    * Linux kernel >= 5.10 (recommended >= 5.14),

    Don't these two statements contradict each other?

    • blucaz 17 hours ago
      It gracefully falls back if the new option is not available at runtime
  • phendrenad2 7 hours ago
    Surprising to see how many people are still mucking around with init systems. Shows that k8s really has a lot more adoption left to go.
    • mentos1386 5 hours ago
      As a long time K8S user and general fan of. Systemd nas defenetly itd own place in the world, for one, št least to start up your k8s nodes. But more and more i have started to rely more on it for any non k8s workload (yes those exist and always will).
  • snvzz 14 hours ago
    I find musl support most remarkable.

    Breaking systemd was a thorn on distributions trying to use musl.

    • egorfine 2 hours ago
      musl support is excellent. If you were unhappy with transparency, simplicity, maintainability and thinness of Alpine Linux - now you can install systemd and loose all of these disadvantages.
  • vaxman 18 hours ago
    The downside of drawing the interest of Brewsters (https://youtu.be/fwYy8R87JMA) in Linux.

    v259? [cue https://youtu.be/lHomCiPFknY]

  • Mikhail_K 20 hours ago
    [flagged]
    • orangeboats 20 hours ago
      Can we put a stop to this weird obsession with attacking Pottering under _every_ systemd-related thread?

      Fine, we get it, you don't like him. Or you don't like systemd. Whichever it is, comments like yours often provide zero substance to the discussion.

      • nicolaslem 19 hours ago
        Maybe I have been here too long but I can guess exactly the content of each thread about systemd/Gnome/Wayland/Firefox before opening the link.
        • Klonoar 11 hours ago
          Apple and Electron are similar topics that belong on that list.
      • sam_lowry_ 20 hours ago
        I agree emotionally, but OTOH we should not forget about the incentives of people and the history of projects.
        • beanjuiceII 20 hours ago
          the project is largely successful though, supporting linux has been way more pain when we have to do it for non systemd systems.. but i guess good news is we just charge customers more for their niche setups
      • McDyver 20 hours ago
        I agree with your 2nd statement, but people should bring up things that should be discussed.

        Otherwise, at some point, one of the 10000 [0] won't know there are alternatives and different ways of doing things.

        [0] https://xkcd.com/1053/

      • fleroviumna 20 hours ago
        [dead]
  • nottorp 19 hours ago
    What has it taken over this time?