bunqueue FAQ: Bun Job Queue Questions Answered

bunqueue is a high-performance job queue for Bun that uses SQLite for persistence instead of Redis. It provides a BullMQ-compatible API, making migration easy.

Why use SQLite instead of Redis?

  • Simplicity: No external service to manage
  • Performance: Bun’s native SQLite is incredibly fast
  • Persistence: Data survives restarts by default
  • Cost: No Redis hosting costs
  • Portability: Single file database, easy to backup

Is bunqueue production-ready?

Yes. bunqueue includes:

  • Stall detection for crashed workers
  • Dead letter queues for failed jobs
  • Automatic retries with backoff
  • S3 backups for disaster recovery
  • Rate limiting and concurrency control

What are the system requirements?

  • Bun: Version 1.0 or higher
  • OS: macOS, Linux, Windows (WSL)
  • Memory: Minimum 512MB recommended
  • Disk: SSD recommended for best performance

Does bunqueue run on Node.js?

bunqueue uses Bun-specific APIs:

  • bun:sqlite for database access
  • Bun.serve for HTTP server
  • Bun.listen for TCP server

These APIs are not available in Node.js.

How do I install Bun?

# macOS/Linux
curl -fsSL https://bun.sh/install | bash

# Windows (PowerShell)
powershell -c "irm bun.sh/install.ps1 | iex"

# Homebrew
brew install oven-sh/bun/bun

What’s the difference between embedded and server mode?

Embedded Mode:

  • Queue runs in the same process as your app
  • Best for single-process applications
  • No network overhead

Server Mode:

  • Queue runs as a separate server
  • Multiple workers can connect via TCP
  • Best for distributed systems

Can I mix embedded and server mode?

No. Each mode uses its own database file. Choose one mode per deployment.

How is data persisted?

Jobs are stored in SQLite with WAL (Write-Ahead Logging) mode:

  • Writes are fast and atomic
  • Reads don’t block writes
  • Data survives process crashes
  • Automatic checkpointing

How fast is bunqueue?

On typical hardware (M2 Pro, 16GB RAM):

  • Push: 125,000 jobs/second
  • Pull: 100,000 jobs/second
  • Latency: 0.1-0.5ms p99

How do I improve throughput?

  1. Increase concurrency

    const worker = new Worker('queue', processor, {
      concurrency: 50
    });

  2. Use batch operations

    await queue.addBulk(jobs);
    await queue.ackBatch(jobIds);

  3. Enable WAL mode (default)

    sqlite3 queue.db "PRAGMA journal_mode=WAL;"

Why is my throughput low?

Common causes:

  • Low concurrency setting
  • Slow job processor function
  • Database on HDD instead of SSD
  • Too many indexes

How do I prevent duplicate jobs?

bunqueue uses BullMQ-style idempotent job creation with jobId:

// First call creates the job
const job1 = await queue.add('task', data, { jobId: 'unique-123' });
// Second call with same jobId returns existing job
const job2 = await queue.add('task', data, { jobId: 'unique-123' });
console.log(job1.id === job2.id); // true

This is useful for:

  • Service restart recovery: Restore jobs without duplicates
  • Webhook deduplication: Safe handling of retried webhooks
  • Idempotent operations: Multiple calls have the same effect as one
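
The jobId semantics can be modeled with a small in-memory sketch. DedupQueue here is an illustrative stand-in, not bunqueue's implementation: the first add with a given jobId creates the job, and later adds return the existing one.

```typescript
// Illustrative in-memory model of jobId deduplication.
type Job = { id: string; name: string; data: unknown };

class DedupQueue {
  private jobs = new Map<string, Job>();

  add(name: string, data: unknown, opts: { jobId: string }): Job {
    const existing = this.jobs.get(opts.jobId);
    if (existing) return existing; // same jobId → same job, no duplicate
    const job: Job = { id: opts.jobId, name, data };
    this.jobs.set(opts.jobId, job);
    return job;
  }
}

const q = new DedupQueue();
const a = q.add('task', { n: 1 }, { jobId: 'unique-123' });
const b = q.add('task', { n: 2 }, { jobId: 'unique-123' });
// a.id === b.id → true; the second call did not create a new job
```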

What happens if a worker crashes?

With stall detection enabled:

  1. Worker misses heartbeat
  2. Job is marked as stalled
  3. Job is retried automatically
  4. If max stalls exceeded, sent to DLQ
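
The decision at steps 3 and 4 can be sketched as a pure function (onStall and maxStalls are illustrative names, not bunqueue's internals):

```typescript
// Illustrative model of the stall lifecycle: each missed heartbeat
// increments the stall count; past maxStalls the job is sent to the
// DLQ instead of being retried.
type StallOutcome = 'retry' | 'dlq';

function onStall(stalls: number, maxStalls: number): StallOutcome {
  return stalls + 1 > maxStalls ? 'dlq' : 'retry';
}
```
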

How do I configure retries?

await queue.add('task', data, {
  attempts: 5, // Max attempts
  backoff: 1000 // Base delay in ms (doubles each retry)
});

// Or with advanced config
await queue.add('task', data, {
  attempts: 5,
  backoffConfig: {
    type: 'exponential', // or 'fixed'
    delay: 1000,
    maxDelay: 300000, // Cap at 5 minutes (default: 1 hour)
  }
});

Retry delays follow exponential backoff with jitter (±50%) to prevent thundering herd. Example base delays: ~1s → ~2s → ~4s → ~8s → ~16s (actual values vary due to jitter). Delays are capped at 1 hour by default (configurable via maxDelay).
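
The schedule described above can be sketched as a pure function (backoffDelay is illustrative, not bunqueue's internal implementation):

```typescript
// Exponential backoff with ±50% jitter, capped at maxDelay.
function backoffDelay(
  attempt: number,                 // 1-based retry attempt
  baseMs: number,                  // base delay, e.g. 1000
  maxDelayMs: number = 3_600_000,  // default cap: 1 hour
  rand: () => number = Math.random,
): number {
  const exponential = Math.min(baseMs * 2 ** (attempt - 1), maxDelayMs);
  // ±50% jitter: scale by a factor in [0.5, 1.5)
  return exponential * (0.5 + rand());
}

// With rand() pinned to 0.5 (factor 1.0), the base schedule doubles:
// attempt 1 → 1000ms, attempt 2 → 2000ms, attempt 3 → 4000ms, …
```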

What is the dead letter queue (DLQ)?

The DLQ stores jobs that:

  • Exceeded max retry attempts
  • Had unrecoverable errors
  • Exceeded max stalls

You can inspect, retry, or purge DLQ jobs.

Can I control job ordering?

Yes, use LIFO mode:

await queue.add('task', data, { lifo: true });

Or use priority:

await queue.add('high', data, { priority: 10 });
await queue.add('low', data, { priority: 1 });
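
Assuming higher numbers mean higher priority, as in the example above, dispatch order can be modeled as a pure function (pullOrder is illustrative, not a bunqueue API):

```typescript
// Illustrative model of pull order: higher priority first,
// oldest-first (FIFO) within the same priority.
type Pending = { name: string; priority: number; seq: number };

function pullOrder(jobs: Pending[]): string[] {
  return [...jobs]
    .sort((a, b) => b.priority - a.priority || a.seq - b.seq)
    .map(j => j.name);
}

pullOrder([
  { name: 'low', priority: 1, seq: 0 },
  { name: 'high', priority: 10, seq: 1 },
]);
// → ['high', 'low']
```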

Can multiple workers process the same queue?

Yes. In server mode, multiple workers can connect:

// Worker 1
const worker1 = new Worker('queue', processor);
// Worker 2 (different process/machine)
const worker2 = new Worker('queue', processor);

Is there built-in clustering or high availability?

Not built-in. For high availability:

  1. Use S3 backups for failover
  2. Run read replicas with SQLite replication
  3. Use a load balancer across multiple servers

How do I scale bunqueue?

  1. Horizontal scaling: Add more workers
  2. Rate limiting: Protect downstream services
  3. Priority queues: Process important jobs first
  4. Batch processing: Reduce overhead
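
The rate limiting mentioned above can be illustrated with a simple token bucket. This is a generic sketch of the technique, not bunqueue's limiter:

```typescript
// Token bucket: allow up to `capacity` jobs per `intervalMs`,
// refilling continuously as time passes.
class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(
    private capacity: number,
    private intervalMs: number,
    private now: () => number = Date.now, // injectable clock for testing
  ) {
    this.tokens = capacity;
    this.last = now();
  }

  tryAcquire(): boolean {
    const t = this.now();
    // Refill proportionally to elapsed time, capped at capacity.
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((t - this.last) / this.intervalMs) * this.capacity,
    );
    this.last = t;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

A worker loop would call tryAcquire() before pulling a job and back off when it returns false.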

Where is the database stored?

Default: ./data/bunq.db

Configure with:

DATA_PATH=./data/production.db bunqueue start

How do I back up my data?

Option 1: S3 Automatic Backup

S3_BACKUP_ENABLED=1 \
S3_BUCKET=my-bucket \
S3_ACCESS_KEY_ID=xxx \
S3_SECRET_ACCESS_KEY=xxx \
bunqueue start

Option 2: Manual Backup

sqlite3 queue.db ".backup backup.db"

How do I restore from a backup?

bunqueue backup list
bunqueue backup restore backups/2024-01-15/queue.db --force

Why am I getting "database is locked" errors?

Multiple writers are conflicting. Solutions:

  1. Use server mode for multi-process
  2. Ensure only one embedded instance
  3. Check for stale lock files

Why has my job disappeared?

The job was already:

  • Completed and removed (removeOnComplete: true)
  • Failed and removed (removeOnFail: true)
  • Manually deleted

Why is memory usage growing?

Common causes:

  1. Too many jobs in memory
  2. DLQ accumulating failed jobs
  3. Job data is too large

Solutions:

// Remove completed jobs
await queue.add('task', data, { removeOnComplete: true });
// Purge old DLQ entries
queue.purgeDlq();
// Clean old jobs
queue.clean(3600000); // 1 hour

Can I migrate from BullMQ?

Yes. See the Migration Guide.

Key differences:

  • No Redis connection needed
  • Backoff is simplified
  • Rate limiting is on queue, not worker

Can I migrate from other queue libraries?

bunqueue uses a standard job format. Export your jobs as JSON and use:

const jobs = loadJobsFromOldQueue();
await queue.addBulk(jobs.map(j => ({
  name: j.type,
  data: j.payload,
  opts: { priority: j.priority }
})));

What is the Workflow Engine?

The Workflow Engine is a built-in multi-step orchestration system for defining sequential processes with:

  • Saga compensation — automatic rollback on failure
  • Conditional branching — route execution based on runtime data
  • Parallel steps — run independent steps concurrently via .parallel()
  • Step retry — automatic retry with exponential backoff and jitter
  • Human-in-the-loop — pause and wait for external signals, with optional timeout
  • Nested workflows — compose workflows with .subWorkflow()
  • Loops — doUntil() and doWhile() for conditional iteration with safety limits
  • forEach — iterate over dynamic item lists with indexed step results
  • Map — synchronous data transforms between steps
  • Schema validation — validate step input/output with Zod, ArkType, or any .parse() schema
  • Subscribe — monitor a specific execution’s events in real-time
  • Observability — typed event emitter with 11 event types
  • Cleanup & archival — manage execution history with cleanup/archive
  • Step timeouts — per-step timeout configuration

No Temporal, no Inngest, no cloud service required.
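
The saga compensation behavior can be illustrated with a generic sketch of the pattern. This models the concept only; runSaga and Step are not bunqueue's Workflow Engine API:

```typescript
// Saga pattern: each step pairs an action with a compensation.
// On failure, already-completed steps are compensated in reverse order.
type Step = { name: string; run: () => void; compensate: () => void };

function runSaga(steps: Step[]): { status: 'completed' | 'rolled-back'; log: string[] } {
  const log: string[] = [];
  const done: Step[] = [];
  for (const step of steps) {
    try {
      step.run();
      log.push(`ran ${step.name}`);
      done.push(step);
    } catch {
      // Roll back completed steps, newest first.
      for (const s of done.reverse()) {
        s.compensate();
        log.push(`compensated ${s.name}`);
      }
      return { status: 'rolled-back', log };
    }
  }
  return { status: 'completed', log };
}
```

For example, in an order workflow of reserve → charge → ship, a failure in charge would trigger the compensation for reserve before the execution is marked failed.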

What’s the difference between Flow and Workflow?

They solve different problems:

|                   | FlowProducer                 | Workflow Engine                                 |
| ----------------- | ---------------------------- | ----------------------------------------------- |
| Pattern           | Parent-child job DAG         | Sequential/parallel step pipeline               |
| Use case          | Fan-out/fan-in, dependencies | Business processes, approvals                   |
| Rollback          | No                           | Saga compensation                               |
| Branching         | No                           | Conditional paths                               |
| Parallel          | Via job tree                 | .parallel() with Promise.allSettled             |
| Retry             | Job-level                    | Step-level with exponential backoff             |
| Human input       | No                           | waitFor signals with timeout                    |
| Loops             | No                           | doUntil() / doWhile() / forEach()               |
| Data transform    | No                           | .map() (synchronous)                            |
| Schema validation | No                           | inputSchema / outputSchema (Zod, ArkType, etc.) |
| Composition       | Nested trees                 | .subWorkflow()                                  |
| Observability     | Queue events                 | 11 typed workflow events + subscribe(id)        |

Use FlowProducer when you need parallel job trees with dependencies. Use Workflow when you need ordered steps with rollback, branching, or human decisions.

Can I use the Workflow Engine with TCP server mode?

Yes. The Engine constructor accepts the same connection options as Queue:

const engine = new Engine({
  connection: { host: 'localhost', port: 6789 }
});

Execution state (current step, step results, received signals) is stored in SQLite via the workflow_executions table. This means workflows survive process restarts and can be inspected or resumed at any time.

How can I contribute?

  1. Report bugs on GitHub Issues
  2. Submit PRs for bug fixes
  3. Propose features in Discussions
  4. Improve documentation

To set up a development environment:

git clone https://github.com/egeominotti/bunqueue
cd bunqueue
bun install
bun test
bun run dev