# What is Hatchet?

Hatchet is a platform for running background tasks, built on top of Postgres. Instead of managing your own task queue or pub/sub system, you can use Hatchet to distribute your functions across a set of workers with minimal configuration or infrastructure.

## When should I use Hatchet?

Background tasks are critical for offloading work from your main web application. Usually, background tasks are sent through a FIFO (first-in, first-out) queue, which helps guard against traffic spikes (queues can absorb a lot of load) and ensures that tasks are retried when your task handlers error out. Most stacks begin with a library-based queue backed by Redis or RabbitMQ (like Celery or BullMQ). But as your tasks become more complex, these queues become difficult to debug and monitor, and they start to fail in unexpected ways.

This is where Hatchet comes in. Hatchet is a full-featured background task management platform, with built-in support for chaining complex tasks together into workflows, alerting on failures, making tasks more durable, and viewing tasks in a real-time web dashboard.

## Features

### 📥 Queues

Hatchet is built on a durable task queue that enqueues your tasks and sends them to your workers at a rate your workers can handle. Hatchet tracks the progress of your task and ensures that the work gets completed (or you get alerted), even if your application crashes. This is particularly useful for:

- Ensuring that you never drop a user request
- Flattening large spikes in your application
- Breaking large, complex logic into smaller, reusable tasks

Read more ➶

**Python**

```python
from pydantic import BaseModel
from hatchet_sdk import Context, Hatchet

hatchet = Hatchet()

# 1. Define your task input
class SimpleInput(BaseModel):
    message: str

# 2. Define your task using hatchet.task
@hatchet.task(name="SimpleWorkflow", input_validator=SimpleInput)
def simple(input: SimpleInput, ctx: Context) -> dict[str, str]:
    return {
        "transformed_message": input.message.lower(),
    }

# 3. Register your task on your worker
# (the original snippet is truncated here; the registration below assumes
# the hatchet-sdk v1 worker API)
worker = hatchet.worker("test-worker", workflows=[simple])
```
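The retry-on-error behavior described above can be sketched with a plain in-memory FIFO queue. This is purely illustrative (the function name `process_fifo` and the retry policy are invented for this sketch): Hatchet itself persists its queue in Postgres, so tasks survive application crashes, which an in-memory queue cannot do.

```python
from collections import deque

def process_fifo(tasks, handler, max_retries=3):
    """Drain a FIFO queue, re-enqueueing failed tasks up to max_retries attempts."""
    queue = deque((task, 0) for task in tasks)  # (payload, attempts so far)
    results, dropped = [], []
    while queue:
        task, attempts = queue.popleft()
        try:
            results.append(handler(task))
        except Exception:
            if attempts + 1 < max_retries:
                # Transient failure: push to the back so other tasks aren't blocked.
                queue.append((task, attempts + 1))
            else:
                # Retries exhausted: a real system would alert here rather than drop.
                dropped.append(task)
    return results, dropped
```

For example, a handler that fails once on a given input still produces a result for it on the retry pass, while the other tasks proceed in order.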