Panic in browser-based DAST when syncing to the DB
Summary
While migrating our existing DAST configuration (which scans GitLab) to DAST v5, one of our test jobs hit the following panic while the scan was shutting down after hitting its timeout.
```
2024-05-20T14:00:10.827 INF MAIN giving a few seconds to sync db...
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xab4b26]
goroutine 1 [running]:
github.com/dgraph-io/badger/v3.(*memTable).IncrRef(...)
	/go/pkg/mod/github.com/dgraph-io/badger/v3@v3.2103.2/memtable.go:231
github.com/dgraph-io/badger/v3.(*DB).getMemTables(0xc00012db00)
	/go/pkg/mod/github.com/dgraph-io/badger/v3@v3.2103.2/db.go:696 +0x126
github.com/dgraph-io/badger/v3.(*DB).get(0xc00012db00, {0xc024612270, 0x21, 0x21})
	/go/pkg/mod/github.com/dgraph-io/badger/v3@v3.2103.2/db.go:730 +0x8e
github.com/dgraph-io/badger/v3.(*Txn).Get(0xc012f64400, {0xc0130e96c0, 0x19, 0x20})
	/go/pkg/mod/github.com/dgraph-io/badger/v3@v3.2103.2/txn.go:478 +0x29e
gitlab.com/browserker/store.BFS.scanWithDistance({}, 0xc000570c60, 0x0?, 0xc0166e5888, 0x4, 0x0, 0x0, 0x270f)
	/go/builds/store/bfs_strategy.go:71 +0x2f2
gitlab.com/browserker/store.BFS.Find({}, 0xc000139d44?, 0x32?, 0x270f, 0xa)
	/go/builds/store/bfs_strategy.go:22 +0x13b
gitlab.com/browserker/store.(*CrawlGraph).Find.func1(0xc00012db00?)
	/go/builds/store/crawlgraph.go:219 +0x50
github.com/dgraph-io/badger/v3.(*DB).View(0xc0166e59a0?, 0xc0166e59c8)
	/go/pkg/mod/github.com/dgraph-io/badger/v3@v3.2103.2/txn.go:806 +0x95
gitlab.com/browserker/store/database.(*Database).View(...)
	/go/builds/store/database/database.go:88
gitlab.com/browserker/store.(*CrawlGraph).Find(0xc0166e5a48?, 0xc5?, 0xc01ac89c20?)
	/go/builds/store/crawlgraph.go:218 +0x6b
gitlab.com/browserker/clicmds/services.(*ScanSummaryService).WriteSummary(0xc00b4ad800)
	/go/builds/clicmds/services/scan_summary_service.go:45 +0x3d
gitlab.com/browserker/clicmds.(*BrowserkRunner).Run(0xc000573cc0)
	/go/builds/clicmds/runner.go:89 +0x265
main.main.func2(0xc00048c800?)
	/go/builds/main.go:43 +0x3a
github.com/urfave/cli/v2.(*Command).Run(0xc000128120, 0xc000499640)
	/go/pkg/mod/github.com/urfave/cli/v2@v2.2.0/command.go:164 +0x583
github.com/urfave/cli/v2.(*App).RunContext(0xc000002000, {0xfafd80?, 0x15d19a0}, {0xc000034040, 0x2, 0x2})
	/go/pkg/mod/github.com/urfave/cli/v2@v2.2.0/app.go:306 +0xb16
github.com/urfave/cli/v2.(*App).Run(...)
	/go/pkg/mod/github.com/urfave/cli/v2@v2.2.0/app.go:215
main.main()
	/go/builds/main.go:49 +0x37c
```
Steps to reproduce
Not reliably reproducible yet. It appears to require a long scan that hits the job timeout, so that the panic occurs during shutdown while the scanner is syncing its database.
Example Project
https://gitlab.com/ngeorge1/dast-test-project/-/jobs/6890953830
What is the current bug behavior?
The scanner panics with a nil pointer dereference inside BadgerDB (`memTable.IncrRef`) while writing the scan summary during shutdown, and the job fails.
What is the expected correct behavior?
The scanner should shut down cleanly after a timeout, writing the scan summary (or logging a recoverable error) without panicking.
Relevant logs and/or screenshots
https://gitlab.com/ngeorge1/dast-test-project/-/jobs/6890953830
Output of checks
This bug happens on GitLab.com