New Nginx Exploit

(github.com)

249 points | by hetsaraiya 5 hours ago

17 comments

  • RagingCactus 5 hours ago
    As a security person, it is tiring to see so many people here either directly claim or at least allude to the claim that this is somehow much less scary because the _published_ exploit does not bypass ASLR. The writeup claims there is a way to reliably bypass ASLR with this attack, and that is a good default assumption I would be willing to believe even without evidence.

    ASLR is a defense-in-depth technique intended to make exploitation more difficult. In almost all cases it is only a matter of time and skill to also include an ASLR bypass. Both requirements continue being lowered by LLM agents every few weeks. It is only a matter of time (and probably not a lot of time) until a fully weaponized exploit is developed. It may be published, it may also be kept private.

    It is straight-up wrong to say "if you have ASLR enabled, you're not at any risk from this", and spreading that claim is extremely harmful to anyone who trusts it.

    This wrong belief that you shouldn't care about security vulnerabilities because mitigations may make exploitation more difficult has already caused so much harm in the past. Be glad that modern mitigations exist, but patch your stuff asap. If you are a vendor, do not treat vulnerability reports as invalid because the researcher has not provided an ASLR bypass. Fix the root cause and hope mitigations buy you enough time to patch before you get owned.

    • kro 3 hours ago
      No remotely reachable vuln should be taken lightly.

      At the moment though, the preconditions look odd. I've been using nginx in various configurations for 10 years and never once combined rewrite and set.

      • buzer 2 hours ago
        There can be situations where you set some variables on top level and then override those in the location block with rewrite. These variables could be then used e.g. in log lines or in other "global" contexts.

        Not extremely common, but it does happen.
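
        A hypothetical config sketching that pattern (names and paths invented for illustration):

        ```nginx
        server {
            set $section "none";                # server-level default

            location /blog {
                # rewrite with an unnamed capture ($1)...
                rewrite ^/blog/([a-z]+)/ /index.php?page=$1;
                # ...which a subsequent set uses to override the default
                set $section $1;
            }

            # $section can then show up in log_format lines or other
            # "global" contexts configured at the http level
        }
        ```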

    • embedding-shape 4 hours ago
      > and saying this is extremely harmful for anyone that trusts claims like that.

      Kind of feels like the burden is on the one who is reading it though, good luck stopping people from spreading misinformation on the internet, most of them don't even know they're wrong.

      What's extremely harmful is trusting random internet comments stating stuff confidently. Get good at seeing through that, and it'll serve you well in security and beyond.

    • nicce 1 hour ago
      > ASLR is a defense-in-depth technique intended to make exploitation more difficult. In almost all cases it is only a matter of time and skill to also include an ASLR bypass. Both requirements continue being lowered by LLM agents every few weeks. It is only a matter of time (and probably not a lot of time) until a fully weaponized exploit is developed. It may be published, it may also be kept private.

      I disagree with this take, or I would at least phrase it differently. ASLR is like an extra password you need to guess. It has a certain amount of entropy and it is usually stable. Unless the vulnerability has a component that leaks information, ASLR completely mitigates it - or you need a second vulnerability, and that is a different conversation. ASLR can completely mitigate an individual vulnerability, but possibly not an exploit chain.

      I would use the possibility of a second, information-leaking vulnerability as the argument for patching quickly anyway. But exploit chains are a risk for all kinds of vulns.
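
      A back-of-the-envelope version of that "extra password" framing (the 28-bit figure is an assumption; real ASLR entropy varies by kernel, arch, and whether the binary is PIE):

      ```c
      #include <stdio.h>

      int main(void) {
          /* Treat ASLR as a secret with ~28 bits of entropy (an
           * illustrative figure for a 64-bit Linux PIE base). Blind
           * guessing needs, on average, half the keyspace. */
          unsigned bits = 28;
          unsigned long long expected = 1ULL << (bits - 1);
          printf("expected blind guesses: %llu\n", expected);
          /* An info leak instead just reads the "password" out, which
           * is why a leak primitive changes the conversation. */
          return 0;
      }
      ```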

  • danslo 5 hours ago
    This one's pretty bad but there are some preconditions.

    Requires a "rewrite" directive with a question mark in the replacement string, and then a subsequent "set" directive that references a regex capture group (e.g. set $var $1).

    Also the PoC assumes ASLR is disabled.
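
    Based on that description, a minimal config hitting both preconditions might look like this (illustrative, not taken from the PoC repo):

    ```nginx
    location /legacy {
        # unnamed capture ($1) plus a '?' in the replacement string
        rewrite ^/legacy/(.*)$ /app/$1?source=legacy;
        # subsequent set referencing the regex capture group
        set $orig_path $1;
    }
    ```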

    • dsr_ 5 hours ago
      Does any distro disable ASLR by default?

      If you were to do it by hand, nginx doesn't come to mind as a likely candidate.

      • Bender 2 hours ago
        Not the person you asked, but I am not aware of any that disable ASLR by default. Most default to 1, which only enables ASLR for applications compiled to support it, vs. 2 forcing it on, or 3 on some distributions that use a hardened kernel. Rather than trusting any assumptions I prefer to run checksec [1] on every OS I touch. It's an old script but works just as well today as it did long ago. One may find that some applications are missing basic compile-time hardening options. The script is not an exhaustive test of all modern hardening options. Example of ASLR being forced on:

            # sysctl kernel.randomize_va_space
            kernel.randomize_va_space = 2
        
        Typical invocation:

            checksec.sh --proc-all
        
        This invocation will list the status of RELRO, Stack Canary, NX/PaX, PIE of all running daemons. My CachyOS installation for example is missing Stack Canaries for all daemons.

            checksec.sh --fortify-proc 732
            * Process name (PID)                         : sshd (732)
            * FORTIFY_SOURCE support available (libc)    : Yes
            * Binary compiled with FORTIFY_SOURCE support: N
        
        Some additional compile time hardening options [2] and discussion [3]. Even Rust apparently has some compile time security related options.

        [1] - https://www.trapkit.de/tools/checksec/ # some Linux repositories already contain "checksec".

        [2] - https://best.openssf.org/Compiler-Hardening-Guides/Compiler-...

        [3] - https://news.ycombinator.com/item?id=43533516

  • neomantra 5 hours ago
    The official F5 page is here: https://my.f5.com/manage/s/article/K000161019

    As noted elsewhere, ASLR protects you. While you are waiting for your affected platform to get the fix, they note the mitigation:

    "use named captures instead of unnamed captures in rewrite definition"

    "To mitigate this vulnerability for this example, replace $1 and $2 with the appropriate named captures, $user_id and $section"

    F5 patched 1.31.0 and 1.30.1.

    OpenResty has a patch for 1.27 and 1.29: https://github.com/openresty/openresty/commit/ee60fb9cf645c9...

    You can track the progress of OpenResty (a Lua application server based on Nginx) here: https://github.com/openresty/openresty/issues/1119

  • jcalvinowens 5 hours ago
    • linkregister 5 hours ago
      Worker processes are forked from the master, which means they receive the same memory layout. You get unlimited crashes against the worker. There's probably a way to exploit that to get a read oracle. At the very least this is a reliable denial of service.
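
      A tiny standalone C demo of that property (nothing nginx-specific, just fork() semantics): ASLR randomizes the layout once at exec(), and fork() duplicates it, so a respawned worker keeps the master's addresses.

      ```c
      #include <stdio.h>
      #include <sys/wait.h>
      #include <unistd.h>

      static int marker;  /* any object in the randomized image */

      int main(void) {
          void *parent_addr = &marker;
          int fds[2];
          if (pipe(fds) != 0)
              return 1;
          if (fork() == 0) {
              /* "worker": report where it sees the same object */
              void *child_addr = &marker;
              write(fds[1], &child_addr, sizeof child_addr);
              _exit(0);
          }
          void *child_addr = NULL;
          read(fds[0], &child_addr, sizeof child_addr);
          wait(NULL);
          printf("master=%p worker=%p identical=%s\n",
                 parent_addr, child_addr,
                 parent_addr == child_addr ? "yes" : "no");
          return 0;
      }
      ```

      This always prints identical=yes, however many times a "worker" is respawned; contrast with an exec()-per-worker model, where each child would get a fresh randomization.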

      Depth First's full writeup: https://depthfirst.com/research/nginx-rift-achieving-nginx-r...

      • jcalvinowens 4 hours ago
        Sure, but I think the GitHub README ought to make it clearer that the PoC as-is doesn't work against nginx on any current Linux distro.
        • gavinsyancey 4 hours ago
          So you're not vulnerable to script kiddies running the published PoC. You're still probably vulnerable to a sufficiently motivated attacker.
          • jcalvinowens 1 hour ago
            I doubt it: ASLR is not as easy to break on modern Linux as everyone in this thread wants to pretend it is. And anybody who actually cares so much about security that a compromised web frontend is the end of the world should be doing other things that would additionally mitigate this...

            I know they claimed they can bypass it: if that's true, they should publish it. The forking nature of nginx is uniquely bizarre and vulnerable, and I strongly suspect that's the only way they're pulling it off. I feel like that's the interesting thing here, not the buffer overrun.

  • ptx 4 hours ago
    Is there a good alternative to Apache and Nginx that's written in a memory-safe language and not full of security holes? I briefly looked at Jetty (written in Java) and Caddy (written in Go) but they seem to have a history of vulnerabilities of other types (e.g. shell injection in Jetty) so I'm not sure they would be any better.
    • nobody42 31 minutes ago
      Memory safety is good, but it does not protect against every threat. In this day and age infrastructure operators should familiarize themselves with proactive defenses, i.e. MAC: SELinux and AppArmor. They used to involve a lot of friction, but today there are more tools to ease their usage.

      https://presentations.nordisch.org/apparmor/

      https://github.com/nobody43/apparmor-profiles/blob/master/ng...

      https://github.com/nobody43/apparmor-suggest

      Disclaimer: I'm the author of both repos.
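
      To give a sense of what confinement buys here, a minimal AppArmor profile for nginx looks roughly like this (incomplete and illustrative; see the linked repos for real profiles):

      ```
      /usr/sbin/nginx {
        #include <abstractions/base>

        # bind privileged ports, then drop to the worker user
        capability net_bind_service,
        capability setuid,
        capability setgid,

        /etc/nginx/** r,
        /var/www/** r,
        /var/log/nginx/* w,
        /run/nginx.pid rw,

        network inet stream,
        network inet6 stream,
      }
      ```

      Even with an RCE in a worker, the process can then only touch what the profile allows.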

    • dgellow 4 hours ago
      Any software used at the scale of Apache and nginx will have a history of vulnerabilities. The fact that they have both kept their market share for so long is a good sign
      • ptx 2 hours ago
        Right, that's essentially what I'm thinking.

        On the one hand Apache and Nginx are mature and proven but, being written in C, they will always suffer from memory-safety issues like this one and the recent Apache vulnerabilities.

        On the other hand, the alternatives are perhaps not as mature and perhaps not implemented as securely as they could be, given that e.g. Caddy had multiple vulnerabilities in its request parsing this year and Jetty's shell injection vulnerability seems easily foreseeable and avoidable. Using a memory-safe language doesn't help much if you then (to take an unrelated but well-known example) implement arbitrary code execution as a feature in the logging library.

    • embedding-shape 4 hours ago
      Caddy has been a breeze to use. Bit of a sucky model with "we have thousands of binaries depending on what combination of plugins you want" instead of a proper plugin system, but if you're building it from source, it's pretty nifty and simple anyway.
      • eikenberry 3 hours ago
        Recompiling with the features you want is a great model for a free software project. So much simpler to write and maintain compared to a plugin system that it really makes more sense in a lot of cases.
        • seanw444 42 minutes ago
          Can often also be noticeably more performant.
      • sharperguy 43 minutes ago
        I've switched from caddy to traefik. For simple use cases it's a little more verbose in the configuration, but for more involved things like multiple load-balancing backends, rewriting paths and headers, and so on, I've found it really good.
      • dboreham 2 hours ago
        Go doesn't support runtime linking, which is why "no plugins" (even though Go docs claim it does, no it doesn't).
      • vbernat 3 hours ago
        nginx had this defect for a long time too!
    • toast0 2 hours ago
      Apache and I think Nginx have a huge list of features and stuff. Most alternate http servers limit the scope a lot, so you'd need to specify what features you're interested in.

      But I haven't seen a whole lot of discussion of http servers in memory safe languages. The big three C-based servers: Apache, Nginx, and lighttpd are all pretty solid... I don't think there's a lot of people interested in giving that up for a new project just because of the language.

      I'll also add that when you pick up most memory safe languages, you're also picking up their sometimes extensive runtime / virtual machine and all the accoutrements. A Java webserver probably uses log4j because any random Java project probably does, etc.

  • panzi 5 hours ago
    Does Debian 12 have this patched? But I guess I'm not affected if I don't use `rewrite` or `set` anywhere?
    • wiredfool 3 hours ago
      Ubuntu has patched as of this morning. Debian doesn't look like they've patched trixie yet.
      • rslashuser 2 hours ago
        Just as a PSA, I found that "nginx -v" did not give enough version detail to check, but "apt list nginx" gave the full, checkable version number, and indeed this morning's 24.04 version (1.24.0-2ubuntu7.8) is patched.
    • iririririr 4 hours ago
      I find it very unlikely that anyone using nginx does NOT use `set` at least.

      Most nginx use cases are to terminate TLS and then pass the request to node/php/go/etc. So I bet you have at least one set with attacker-controlled data on a line like 'proxy_set_header X-Host $host;'

      edit: nvm, apparently named captures are not affected. Unless you have a $1 somewhere, you should be fine.

      • babuskov 2 hours ago
        The default NGINX PHP integration uses this:

            # regex to split $uri to $fastcgi_script_name and $fastcgi_path_info
            fastcgi_split_path_info ^(.+?\.php)(/.*)$;
            set $path_info $fastcgi_path_info;
  • trilogic 2 hours ago
    Good to know, thanks. Wondering how long to the next.
  • geophph 2 hours ago
    Someone tell LowLevel
  • jhatemyjob 4 hours ago
    tl;dr If you don't use ngx_http_rewrite_module, you're fine

    Honestly it's such a weird feature. If you're doing complicated redirects like this in nginx, where PCRE is necessary, you should do it in your application code. And if you need speed, use ngx_http_lua_module.

    • tredre3 1 hour ago
      Your opinion is that if, for a godforsaken reason, someone needs to rewrite URLs in their web server, they should avoid PCRE (something designed for string manipulation) because it's overkill, and they should use Lua (a full programming language) instead?

      Am I understanding you correctly?

    • PaulDavisThe1st 3 hours ago
      We do this for 3 sub-domains of ardour.org; there's no application code involved, because we're rewriting historical URLs to their current form, and the "application" doesn't do that or need to do that or need to know about that.
  • hetsaraiya 5 hours ago
    Just saw this pop up — full public PoC for CVE-2026-42945 ("NGINX Rift"), a heap buffer overflow in NGINX's ngx_http_rewrite_module that's been there since 0.6.27 (2008).

    It triggers on a very common pattern: a `rewrite` directive (with an unnamed capture like $1/$2 and a `?` in the replacement string) followed by `set`, `if`, or another `rewrite`. The root cause is a classic two-pass script engine bug (length calculation vs. actual copy pass with ngx_escape_uri).
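
    The bug class is easy to show in isolation. This is not nginx's code, just a minimal sketch of a two-pass engine whose length pass and copy pass disagree about escaping a '?':

    ```c
    #include <stdio.h>
    #include <string.h>

    /* Pass 1: size the output buffer, forgetting that '?' gets escaped. */
    static size_t length_pass(const char *s) {
        return strlen(s);
    }

    /* Pass 2: the copy escapes each '?' into three bytes ("%3F"). */
    static size_t copy_pass_bytes(const char *s) {
        size_t n = 0;
        for (; *s; s++)
            n += (*s == '?') ? 3 : 1;
        return n;
    }

    int main(void) {
        const char *capture = "a?b?c";  /* attacker-influenced input */
        size_t planned = length_pass(capture);
        size_t written = copy_pass_bytes(capture);
        printf("planned=%zu written=%zu overflow=%zu\n",
               planned, written, written - planned);
        return 0;
    }
    ```

    Every extra '?' widens the gap between the allocation and the bytes written past it, which is what makes the overflow attacker-controllable.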

    The PoC turns it into unauthenticated RCE using cross-request heap feng shui + pool cleanup pointer corruption. Tested with a simple Docker setup.

    - Repo + Python exploit: https://github.com/DepthFirstDisclosures/Nginx-Rift
    - Full technical write-up: https://depthfirst.com/research/nginx-rift-achieving-nginx-r...
    - F5 advisory + patches (1.31.0 / 1.30.1 for OSS, plus Plus updates): https://my.f5.com/manage/s/article/K000160932 (or the latest K000161019)

    Affects basically any NGINX doing URL rewriting in front of apps/PHP/etc. Workaround mentioned is switching to named captures.

    The discovery angle is also interesting — it was found autonomously by depthfirst's security analysis tool after one-click onboarding of the NGINX source.

    Anyone running NGINX in production using rewrite rules? How are you checking your configs? Thoughts on the exploit chain or the AI-assisted finding process?

  • stephenlf 5 hours ago
    Crap
    • Twirrim 5 hours ago
      Given it relies on ASLR being disabled, it's extremely unlikely you're at any risk from this.
      • bink 1 hour ago
        The exploit they chose assumes ASLR is disabled for simplicity's sake, but if you read the full writeup they say they could have used the vulnerability to map the memory layout. It's nice to have ASLR, but some types of vulnerabilities can be used to bypass it.
      • Tepix 4 hours ago
        That's wishful thinking
    • hmokiguess 5 hours ago
      I read that in my own voice, so relatable hahahaha
  • pjmlp 4 hours ago
    Looks into the CVE, ah, a heap memory corruption, business as usual.
  • jmaw 5 hours ago
    Wow, coming from the webdev world, it is so funny seeing NGINX, one of the most widely used web servers in the world, on version 1.x. React is on version 19. Really shows how differently new vs. old software is designed and built, and not necessarily in a good way.

    https://world.hey.com/dhh/finished-software-8ee43637
    https://josem.co/the-beauty-of-finished-software/

    • 0x457 5 hours ago
      That's because nginx doesn't break things for end users every release, so there is no reason to bump the major version.
      • embedding-shape 5 hours ago
        I bet nginx doesn't even follow semantic versioning, which you seem to be talking about.
        • 0x457 25 minutes ago
          Don't have to bet: nginx doesn't follow it. It has its own Linux-kernel-inspired (odd vs. even) convention.

          Doesn't change the fact that the only "breaking" changes in the 1.x.x line are changes to defaults.

    • chasd00 5 hours ago
      Anyone can choose any version string convention they want for their project. Comparing two different pieces of software by their version strings doesn't make sense.
    • syoc 5 hours ago
      I guess someone needs to update https://0ver.org/ then.
    • ranger_danger 5 hours ago
      I chalk that up more to different versioning schemes rather than how much work is being done. If nginx changed whole numbers like react did, I bet it would be even higher.
    • joecool1029 5 hours ago
      lighttpd is still around too, on 1.4.82; not too much has changed there.
      • ranger_danger 5 hours ago
        They've been working on version 2.0 for many years now as well, I wonder when they think a release might happen.
    • shooly 5 hours ago
      > not necessarily in a good way

      How do you think versioning works? You know that it's completely arbitrary and up to the author, right? Very ironic comment.