Classify permanent vs retryable handler errors in the indexer engine
## Problem
All handler failures are treated as retryable. The engine retries up to `max_deliver` times (default 5) before DLQ'ing, regardless of whether the error is transient or deterministic.
Corrupt tar archives, symlink escapes, fatal parse errors, and malformed payloads are deterministic failures: none of them will ever succeed on retry. During the recent NATS redelivery storm, every poison message burned through all 5 attempts before finally getting evicted.
## Proposed solution
Add a `Permanent` variant to `HandlerError` for deterministic failures that should skip retries.
- The engine checks `error.is_permanent()` before the retry logic. Permanent errors are term-acked (dropped) on the first attempt: no retries, no DLQ.
- Known permanent errors are dropped rather than DLQ'd because they'll be replayed on the next schema version bump. The DLQ should only contain unknown failures needing manual intervention.
- `RepositoryCacheError::Archive` (corrupt tar, symlink escape) maps to `Permanent`. `RepositoryCacheError::Io` (ENOTEMPTY race, disk pressure) stays retryable.
- Fatal code indexing pipeline errors map to `Permanent`.
- `Deserialization` errors count as permanent via `is_permanent()`.
- Unit test: permanent error with `max_attempts: 5` hits `Dropped` on attempt 1.
- Integration test: real NATS round-trip, permanent error is dropped with empty DLQ.
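The taxonomy and disposition logic above can be sketched roughly as follows. Only `HandlerError`, the `Permanent` and `Deserialization` variants, `is_permanent()`, the `RepositoryCacheError::Archive`/`Io` split, and `max_deliver` come from this issue; the `Outcome` enum, `dispose` function, and all other names are illustrative stand-ins for the real engine types.

```rust
// Hypothetical sketch of the proposed error classification; not the real engine API.

#[derive(Debug)]
enum RepositoryCacheError {
    Archive(String), // corrupt tar, symlink escape: deterministic
    Io(String),      // ENOTEMPTY race, disk pressure: transient
}

#[derive(Debug)]
enum HandlerError {
    Permanent(String),       // deterministic; drop on first attempt
    Retryable(String),       // transient; retry up to max_deliver, then DLQ
    Deserialization(String), // malformed payload; counts as permanent
}

impl HandlerError {
    fn is_permanent(&self) -> bool {
        matches!(
            self,
            HandlerError::Permanent(_) | HandlerError::Deserialization(_)
        )
    }
}

impl From<RepositoryCacheError> for HandlerError {
    fn from(e: RepositoryCacheError) -> Self {
        match e {
            RepositoryCacheError::Archive(msg) => HandlerError::Permanent(msg),
            RepositoryCacheError::Io(msg) => HandlerError::Retryable(msg),
        }
    }
}

/// What the engine does with a failed delivery (illustrative).
#[derive(Debug, PartialEq)]
enum Outcome {
    Dropped, // term-ack: no retries, no DLQ
    Retry,
    DeadLetter,
}

fn dispose(err: &HandlerError, attempt: u32, max_deliver: u32) -> Outcome {
    if err.is_permanent() {
        Outcome::Dropped // permanent errors skip retries entirely
    } else if attempt < max_deliver {
        Outcome::Retry
    } else {
        Outcome::DeadLetter
    }
}

fn main() {
    // Permanent error hits Dropped on attempt 1 even with max_deliver = 5.
    let corrupt: HandlerError = RepositoryCacheError::Archive("corrupt tar".into()).into();
    assert_eq!(dispose(&corrupt, 1, 5), Outcome::Dropped);

    // Transient I/O error keeps retrying until max_deliver, then DLQs.
    let io: HandlerError = RepositoryCacheError::Io("ENOTEMPTY".into()).into();
    assert_eq!(dispose(&io, 1, 5), Outcome::Retry);
    assert_eq!(dispose(&io, 5, 5), Outcome::DeadLetter);
}
```

The `dispose` function corresponds to the unit-test expectation above: a permanent error with 5 allowed attempts resolves to `Dropped` on the first delivery.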