feat: default database.enabled to "prefer" instead of "false"

What does this MR do?

Operators who omit database.enabled from their config now get "prefer" semantics instead of "false". Fresh installs with a reachable database use it automatically. Existing installs that explicitly set database.enabled are unaffected.

Related to #1694 (closed)

Why

The zero value of DatabaseEnabled was DatabaseEnabledFalse (iota 0), so an omitted config field silently disabled the database. This blocked the metadata database rollout goal in &19638 (closed): every operator had to explicitly opt in, and fresh installs defaulted to the legacy filesystem path.

How

Introduce DatabaseEnabledUnset as the new zero value (iota 0), shifting the other enum values by +1. This is safe because the type is never persisted as an integer; serialization goes through MarshalYAML/UnmarshalYAML string maps.

ApplyDefaults converts Unset to DatabaseEnabledPrefer. IsEnabled() treats Unset as disabled (defense-in-depth for any code path that skips ApplyDefaults).

Every test and config file that previously relied on the zero value meaning "disabled" now explicitly sets DatabaseEnabledFalse:

  • config/filesystem.yml (the default Docker image config)
  • registry/handlers/integration_helpers_test.go (newConfig non-DB path)
  • registry/registry_test.go (setupRegistry)
  • registry/handlers/app_test.go (Test_updateOnlineGCSettings_SkipIfDatabaseDisabled)

The GC command error message now explains the new default and how to opt out.

Testing approach: PBT, fuzzing, and separate authorship

The tests for this change use property-based testing (pgregory.net/rapid) and Go native fuzzing, both new to this codebase.

Why PBT and fuzzing for AI-authored code: When the same AI agent writes both implementation and tests in the same context, the tests tend to confirm the implementation's assumptions rather than challenge them. Multiple studies document this correlated blind spot:

  • Perry et al. (2023) found that developers using AI assistants produced less secure code while believing it was more secure (ACM CCS '23).
  • Siddiq & Santos (2023) showed that LLM-generated tests have low mutation scores, meaning they confirm existing behavior without catching injected faults.
  • The GitClear 2024 code quality report measured rising code churn (lines reverted or rewritten shortly after authorship) in AI-assisted codebases, suggesting generated code is merged before it is fully vetted.

PBT and fuzzing structurally break this correlation because test inputs are generated by the framework, not chosen by the author. The tests here were written first from the spec (the issue description and enum contract), then the implementation was written to make them pass. Test and implementation authorship used separate agent contexts.

Four PBT tests cover:

  • Roundtrip: Marshal(Unmarshal(x)) == x for all enum variants (catches map inconsistencies)
  • ApplyDefaults idempotency: Unset becomes Prefer, all other values are left alone
  • Unmarshal robustness: arbitrary strings never panic (complements the fuzz test)
  • Method consistency: IsEnabled() and IsPrefer() match an independently-written truth table across all Enabled x PreferFallback combinations

One fuzz test (FuzzDatabaseEnabledUnmarshalYAML) provides coverage-guided mutation for longer runs via go test -fuzz.

Author checklist

  • CODEOWNERS Review: This MR requires approval from at least one CODEOWNER per category/file.
  • feat: Signals the introduction of a new feature, triggers a minor release.
  • Change is considered high risk - apply the label high-risk-change

Reviewer checklist

  • Ensure the commit and MR title are still accurate.
  • If the change contains a breaking change, verify the breaking change label.
  • If the change is considered high risk, verify the label high-risk-change
  • Identify if the change can be rolled back safely. (note: all other reasons for not being able to rollback will be sufficiently captured by major version changes).
Edited by Hayley Swimelar
