A Faster Alternative to Jq

(micahkepe.com)

101 points | by pistolario 2 hours ago

14 comments

  • 1a527dd5 29 minutes ago
    I appreciate performance as much as the next person, but I see this endless battle to measure things in ns/us/ms as performative.

    Sure, there are 0.000001% edge cases where that MIGHT be the next big bottleneck.

    I see the same thing repeated in various front end tooling too. They all claim to be _much_ faster than their counterpart.

    9/10 whatever tooling you are using now will be perfectly fine. Example: I use grep a lot in an ad hoc manner; on really large files I switch to rg. But that is only in a handful of cases.

    • montroser 23 minutes ago
      Then this is for the handful of cases for you. When it matters it matters.
    • dalvrosa 24 minutes ago
      Fair, but agentic tooling can benefit quite a lot from this

      Opencode, ClaudeCode, etc., feel slow. Whatever makes them faster is a win :)

      • jamespo 9 minutes ago
        It's not running jq locally that's causing that
  • Kovah 1 hour ago
    I so often wonder about the many new CLI tools whose primary selling point is their speed over other tools. Yet I personally have not encountered any case where a tool like jq feels incredibly slow and I feel the urge to find something else. What do people do all day that existing tools are no longer enough? Or is it that kind of "my new terminal opens 107ms faster now, and I don't notice it, but I simply feel better because I know"?
    • n_e 1 hour ago
      I process TB-size ndjson files. I want to use jq to do some simple transformations between stages of the processing pipeline (e.g. rename a field), but it is so slow that I write a single-use node or rust script instead.
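      For the curious, the single-use script route might look something like this Python sketch (field names here are invented; streaming line by line keeps memory flat regardless of file size):

```python
import io
import json

def rename_field(src, dst, old, new):
    """Stream ndjson from src to dst, renaming field `old` to `new` per record.

    Processes one line at a time, so memory use stays flat even on TB-size files.
    """
    for line in src:
        line = line.strip()
        if not line:
            continue  # tolerate blank lines between records
        rec = json.loads(line)
        if old in rec:
            rec[new] = rec.pop(old)
        dst.write(json.dumps(rec) + "\n")
```

      Wired to stdin/stdout (e.g. `python rename.py < in.ndjson > out.ndjson`, with hypothetical file names), it slots into a pipeline the same way jq's roughly equivalent `.uid = .user_id | del(.user_id)` would.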
      • eru 1 hour ago
        This reminds me of someone who wrote a regex tool that matches by compiling regexes (at runtime of the tool) via LLVM to native code.

        You could probably do something similar for a faster jq.

      • nchmy 1 hour ago
        This isn't for you then

        > The query language is deliberately less expressive than jq's. jsongrep is a search tool, not a transformation tool-- it finds values but doesn't compute new ones. There are no filters, no arithmetic, no string interpolation.

        Mind if I ask what sorts of TB-size JSON files you work with? Seems excessively immense.

      • messe 1 hour ago
        Now I'm really curious. What field are you in that ndjson files of that size are common?

        I'm sure there are reasons against switching to something more efficient–we've all been there–I'm just surprised.

        • overfeed 1 hour ago
          > Now I'm really curious. What field are you in that ndjson files of that size are common?

          I'm not OP, but structured JSON logs can easily result in humongous ndjson files, even with a modest fleet of servers over a not-very-long period of time.

          • messe 1 hour ago
            So what's the use case for keeping them in that format rather than something more easily indexed and queryable?

            I'd probably just shove it all into Postgres, but even a multi terabyte SQLite database seems more reasonable.

            • carlmr 1 hour ago
              Replying here because the other comment is too deeply nested to reply.

              Even if it's a once-off, some people handle a lot of once-offs; that's exactly where you need good CLI tooling to support it.

              Sure jq isn't exactly super slow, but I also have avoided it in pipelines where I just need faster throughput.

              rg was insanely useful in a project I once got where they had about 5GB of source files, a lot of them auto-generated. And you needed to find stuff in there. People were using Notepad++ and waiting minutes for a query to find something in the haystack. rg returned results in seconds.

              • messe 43 minutes ago
                You make some good points. I've worked in support before, so I shouldn't have discounted how frequent "once-offs" can be.
            • paavope 1 hour ago
              The use case could be e.g. exactly processing an old trove of logs into something more easily indexed and queryable, and you might want to use jq as part of that processing pipeline
              • messe 1 hour ago
                Fair, but for a once-off thing performance isn't usually a major factor.

                The comment I was replying to implied this was something more regular.

                EDIT: why is this being downvoted? I didn't think I was rude. The person I responded to made a good point, I was just clarifying that it wasn't quite the situation I was asking about.

                • bigDinosaur 43 minutes ago
                  Certain people/businesses deal with one-off things every day. Even for something truly one-off, if one tool is too slow it might still be the difference between being able to do it once or not at all.
                • adastra22 43 minutes ago
                  At scale, low performance can very easily mean "longer than the lifetime of the universe to execute." The question isn't how quickly something will get done, but whether it can be done at all.
    • swiftcoder 10 minutes ago
      Deal with really big log files, mostly.

      If you work at a hyperscaler, service log volume borders on the insane, and while there is a whole pile of tooling around logs, often there's no real substitute for pulling a couple of terabytes locally and going to town on them.

    • InfinityByTen 1 hour ago
      You don't know something is slow until you encounter a use case where the speed becomes noticeable. Then you see the slowness across the board. If you can notice that a command hasn't completed and you are able to fully process a thought about it, it's slow(er than your mind, ergo slow!).

      Usually, a perceptive user/technical mind is able to tweak their usage of the tools around their limitations, but if you can find a tool that doesn't have those limitations, it feels far superior.

      The only place where ripgrep hasn't seeped into my workflow, for example, is after the pipe, and that's just out of (bad?) habit. So much so that sometimes I'll foolishly do rg "<term>" | grep <second filter>, then proceed to a metaphorical facepalm. Let's see if jg can make me go jg <term> | jq <transformation> :)

    • password4321 15 minutes ago
      Optimization = good

      Prioritizing SEO-ing speed over supporting the same features/syntax (especially without an immediately prominent disclosure of these deficiencies) = marketing bullshit

      A faster jq except it can't do what jq does... maybe I can use this as a pre-filter when necessary.

    • Jakob 1 hour ago
      Speed is a quality in itself. We are so bogged down by slow stuff that we often ignore it and don’t actively search for something better.

      But every now and then a well-optimised tool/page comes along with instant feedback and is a real pleasure to use.

      I think some people are more affected by that than others.

      Obligatory https://m.xkcd.com/1205

      • Imustaskforhelp 2 minutes ago
        I'm not sure if it was simon or pg who said it, but I remember a quote that a two-order-of-magnitude change in speed (a quantity) is a huge qualitative change in and of itself.
  • hackrmn 58 minutes ago
    Having used `jq` and `yq` (which followed from the former, in spirit), I have never had to complain about the performance of the _latter_, which is an order of magnitude (or several) _slower_ than the former. So if there's something faster than `jq`, it's laudable that the author of the faster tool accomplished such a goal, but in the broader context I'd say the performance benefit would be needed by a niche slice of the userbase. People who analyse JSON-formatted logs, perhaps? Then again, newline-delimited JSON reigns supreme in that particular kind of scenario, making the point of a faster `jq` moot again.

    However, as someone who has always loved faster software and being an optimisation nerd: hats off!

    • bungle 37 minutes ago
      When integrating with server software, the performance is nice to have, as you can have, say, 100 kRPS coming in that need some jq-like logic. For a CLI tool, like you said, the performance of any of them is OK for most cases.
    • mroche 29 minutes ago
      > Having used `jq` and `yq`

      If you don't mind me asking, which yq? There's a Go variant and a Python pass-through variant, the latter also including xq and tomlq.

  • Bigpet 2 hours ago
    When initially opening the page it had broken colors in light mode. For anyone else encountering it: switch to dark mode and then back to light mode to fix it.
    • shellac 20 minutes ago
      I think this has just been fixed. A bit of dark mode was leaking into light in the css.
    • CodeCompost 47 minutes ago
      I suspect the website is vibe-coded, like the tool itself.
    • jvdvegt 1 hour ago
      Fine in Firefox on Android. Note that the scales of the charts are all different, which makes them hard to compare.

      Also, there are lots of charts without comparison so the numbers mean nothing...

    • qwe----3 1 hour ago
      White text with light background, yeah.
    • keysersoze33 1 hour ago
      I had the same problem (brave browser)
    • vladvasiliu 1 hour ago
      Looks fine to me on Edge/Windows.
    • youngtaff 1 hour ago
      Broken on iOS Safari too
  • jiehong 31 minutes ago
    First of all, congratulations! Nice tool!

    Second, some comments on the presentation: the horizontal violin graphs are nice, but all tools have the same colours, and so it's just hard to even spot where jsongrep is. I'd recommend grouping by tool and colour coding it. Besides, jq itself isn't in the graphs at all (but the title of the post made me think it would be!).

    Last, xLarge is a 190MiB file. I was surprised by that. It seems too low for xLarge. I daily check 400MiB json documents, and sometimes GiB ones.

  • ifh-hn 1 hour ago
    I learned a number of data-processing CLI tools: jq, mlr, htmlq, xsv, yq, etc., to name a few. Not to the level of completing Advent of Code or anything, but good enough for my day-to-day usage. It was never-ending with the number of formats I needed to extract data from, and the different syntaxes. All that changed when I found nushell, though; it's replaced all of these tools for me. One syntax for everything, a breath of fresh air!
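    For anyone who hasn't seen it, the unified pipeline looks roughly like this in nushell (file and column names invented; `open` picks a parser based on the file extension, so JSON, CSV, YAML, etc. all flow through the same verbs):

```nu
# JSON: filter rows, project a column
open users.json | where age > 30 | get name

# CSV: same pipeline vocabulary, no jq/xsv/mlr context switch
open data.csv | select name email | first 5
```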
    • joknoll 1 hour ago
      Same here, nushell is awesome! It helped me to automate so many more things than I did with any other shell. The syntax is so much more intuitive and coherent, which really helps a lot for someone who always forgot how to write ifs or loops in bash ^^
  • maxloh 1 hour ago
    From their README [0]:

    > Jq is a powerful tool, but its imperative filter syntax can be verbose for common path-matching tasks. jsongrep is declarative: you describe the shape of the paths you want, and the engine finds them.

    IMO, this isn't a common use case. The comparison here is essentially like Java vs Python. Jq is perfectly fine for quick peeking. If you actually need better performance, there are always faster ways to parse JSON than using a CLI.

    [0]: https://github.com/micahkepe/jsongrep

  • coldtea 29 minutes ago
    Speed is good! Not a big fan of the syntax though.
  • steelbrain 1 hour ago
    Surprised to see that there are no official binaries for arm64 darwin, meaning macOS users will have to run it through the Rosetta 2 translation layer.
    • QuantumNomad_ 1 hour ago
      I’d install it via cargo anyway and that would build it for arm64.

      If the arm64 version was on homebrew (didn’t check if it is but assume not because it’s not mentioned on the page), I’d install it from there rather than from cargo.

      I don’t really manually install binaries from GitHub, but it’s nice that the author provides binaries for several platforms for people that do like to install it that way.

      • maleldil 21 minutes ago
        You can use cargo-binstall to retrieve Github binary releases if there are any.
    • baszalmstra 1 hour ago
      Really? That is your response? This is a high-quality article from someone who spent a lot of time implementing a cool tool and also sharing its intricate inner workings. And your response is, "eh, there are no official binaries for my platform". Give them some credit! Be a little more constructive!
      • coldtea 25 minutes ago
        His response at least fits the discussion and is relevant to the tool, not generic holier-than-thou scolding.

        To address the concern, anyway, I'm sure it would soon be available in brew as an arm binary.

  • keysersoze33 1 hour ago
    I was a bit skeptical at first, but after reading more into jsongrep, it's actually very good. Only did a very quick test just now, and after stumbling over slightly different syntax to jq, am actually quite impressed. Give it a try
    • carlmr 1 hour ago
      What were your syntax stumbling blocks? I must be honest: I've used jq plenty, but I can never remember the syntax. It's one of the worst things about jq IMO (not the speed, even though I'm a fan of speedups). There's something ungrokkable about that syntax for me.
  • furryrain 1 hour ago
    If it's easier to use than jq, they should sell the tool on that.
  • adastra22 46 minutes ago
    The fastest alternative to jq is to not use JSON.
  • quotemstr 1 hour ago
    Reminder: you can also get DuckDB to slurp the JSON natively and give you a much more expressive query model than anything jq-like.
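    A sketch of what that route looks like (file glob and field name are invented; DuckDB's `read_json_auto` infers the schema from the data, nested fields included):

```sql
-- Aggregate across a pile of ndjson files in one query,
-- no pre-processing step required
SELECT status, count(*) AS hits
FROM read_json_auto('logs/*.ndjson')
GROUP BY status
ORDER BY hits DESC;
```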