Absurd Workflows: Durable Execution with Just Postgres

(lucumr.pocoo.org)

11 points | by ingve 8 hours ago

2 comments

  • oulipo2 8 hours ago
    Really cool! How does it compare to DBOS ? https://docs.dbos.dev/architecture
    • the_mitsuhiko 7 hours ago
      I'm sure DBOS will be great with time; I just did not have much success with it when I tried it. It's quite complex, the quality of the SDKs was not amazing (when I initially used it, it had a ton of dependencies), and it just felt early.
  • oulipo2 7 hours ago
    Other question: why reimplement your own framework rather than use an existing agent framework like Claude + MCP or OpenAI + tool calling? Is it because you're using your own LM models, or just because you wanted more control over retries, etc.?
    • the_mitsuhiko 7 hours ago
      There are not that many agent frameworks around at the moment. If you want to be provider independent, you most likely use either Pydantic AI or the Vercel AI SDK, would be my guess. Neither one has a built-in solution for durable execution, so you end up driving the loop yourself. So it's not that I don't use these SDKs; it's just that I need to drive the loop myself.
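      A minimal sketch of what "driving the loop yourself" can look like: each model turn is an explicit iteration, with a durable checkpoint between turns. `callModel`, `runTool`, and `checkpoint` here are illustrative stand-ins, not any particular SDK's API.

```typescript
// One model turn: either a tool call to run, or a final text answer.
type Turn = { toolCall?: { name: string; args: string }; text?: string };

// Hand-rolled agentic loop: ask the model, run the requested tool,
// append the result to the history, and checkpoint before looping.
// Because each turn is explicit, a durable-execution layer can park
// and resume the loop between turns.
async function agentLoop(
  prompt: string,
  callModel: (history: string[]) => Promise<Turn>,
  runTool: (name: string, args: string) => Promise<string>,
  checkpoint: (history: string[]) => Promise<void>,
): Promise<string> {
  const history = [prompt];
  for (;;) {
    const turn = await callModel(history);
    if (!turn.toolCall) return turn.text ?? ""; // model is done
    const out = await runTool(turn.toolCall.name, turn.toolCall.args);
    history.push(`tool:${turn.toolCall.name} -> ${out}`);
    await checkpoint(history); // durable point between turns
  }
}
```

      A one-shot helper like `generateText` runs this loop internally, which is convenient but leaves no seam to checkpoint at.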
      • oulipo2 7 hours ago
        Okay, very clear! I was saying that because your post example is just a basic "tool use" example, which is already implemented by MCP/OpenAI tool use, but obviously I guess your code can be suited to more complex scenarios

        Two small questions:

        1. in your README you give this example for durable execution:

        const shipment = await ctx.awaitEvent(`shipment.packed:${params.orderId}`);

        I was just wondering, how does it work? I was expecting a generator with a `yield` statement to run "long-running tasks" in the background... otherwise, is the Node runtime keeping the thread running with the await? Doesn't this "pile up"?

        2. would your framework be suited to long-running jobs with multiple steps? I have sometimes big jobs running in the background on all of my IoT devices, eg:

        for each d in devices: doSomeWork(d)

        and I'd like to run the big outer loop each hour (say), but only if the previous run is complete (e.g. max number of workers per task = 1), and have the inner loop be "steps" that can be cached but retried if they fail

        would your framework be suited for that? or is that just a simpler use-case for pgmq and I don't need the Absurd framework?

        • the_mitsuhiko 6 hours ago
          > Okay, very clear! I was saying that because your post example is just a basic "tool use" example, which is already implemented by MCP/OpenAI tool use, but obviously I guess your code can be suited to more complex scenarios

          That's mostly just because I found that to be the easiest way to get any existing AI API to work. There are things like Vercel's AI SDK, which internally runs the agentic loop in generateText, but then there is no way to checkpoint it.

          > I was just wondering, how does it work? I was expecting a generator with a `yield` statement to run "long-running tasks" in the background... otherwise, is the Node runtime keeping the thread running with the await? Doesn't this "pile up"?

          When you `awaitEvent` or `sleepUntil`/`sleepFor`, it records a wake point or a re-schedule in the database. Then it raises `SuspendTask`, which ends the execution of the task until it's rescheduled.
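          A sketch of that control flow: a dedicated exception unwinds the task body, and the runner treats it as "parked" rather than "failed". Names and the in-memory wake-point store here are illustrative, not Absurd's actual internals (where the wake point is a database write).

```typescript
// Control-flow exception: not an error, just "stop running for now".
class SuspendTask extends Error {}

type Ctx = {
  awaitEvent(name: string): Promise<never>; // never resolves in-process
};

// Stand-in for the database table that records wake points.
const wakePoints: string[] = [];

const ctx: Ctx = {
  async awaitEvent(name: string): Promise<never> {
    wakePoints.push(name); // persist a wake point (a DB write in reality)
    throw new SuspendTask(); // unwind the task body
  },
};

// The runner distinguishes a suspension from a real failure, so no
// thread or promise chain is kept alive while the task waits.
async function runTask(
  task: (ctx: Ctx) => Promise<void>,
): Promise<"suspended" | "done"> {
  try {
    await task(ctx);
    return "done";
  } catch (err) {
    if (err instanceof SuspendTask) return "suspended"; // parked, not failed
    throw err;
  }
}
```

          This is why nothing "piles up": the awaiting task body is torn down entirely, and a fresh execution is started when the event arrives.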

          As for your IoT case: yes, you should be able to do that.
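          A rough sketch of the IoT fan-out under a step-caching model: each device gets a named, cached step, so a retry of the outer task skips devices that already completed and re-runs only the failed ones. The `step` helper and its in-memory checkpoint store are assumptions for illustration, not necessarily Absurd's real API.

```typescript
// Stand-in for a durable checkpoint store keyed by step name.
const checkpoints = new Map<string, unknown>();

// Run `fn` once per step name; on retry, completed steps return
// their cached result instead of re-executing.
async function step<T>(name: string, fn: () => Promise<T>): Promise<T> {
  if (checkpoints.has(name)) return checkpoints.get(name) as T; // cached
  const result = await fn(); // a failed step throws and can be retried
  checkpoints.set(name, result);
  return result;
}

// The hourly outer loop: one cached step per device, so partial
// progress survives a crash or a transient per-device failure.
async function hourlyJob(
  devices: string[],
  doSomeWork: (d: string) => Promise<string>,
): Promise<string[]> {
  const results: string[] = [];
  for (const d of devices) {
    results.push(await step(`work:${d}`, () => doSomeWork(d)));
  }
  return results;
}
```

          The "only one run at a time" part would come from the queue's concurrency limit rather than from this sketch.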

        • oulipo2 7 hours ago
          Ah, got it, it throws an exception in order to stop the task each time https://github.com/earendil-works/absurd/blob/main/sdks/type...