Granular Pass/Fail configuration for DAST API (API Security Testing) jobs
Problem to solve
At a high level, it’s difficult for a security team to know whether development teams are running the scans properly. If we see a green check, we assume it’s working. Developers on individual teams aren’t going to dig into passing jobs to confirm the scan actually ran against the target. The result is a false sense of security because the jobs pass.
Proposal
Create a variable that customers can configure to control whether a DAST API job passes or fails. The variable should be evaluated against the responses to the initial requests, giving greater control over what passes or fails: if the initial request responses aren’t in the 200 range, fail the job; we don’t want that pipeline to succeed.
Intended users
- Delaney, Development Team Lead
- Ingrid, Infrastructure Operator
- Amy, Application Security Engineer
- Cameron, Compliance Manager
Implementation Plan
Prior to starting this work, consider asking for a walkthrough of the implementation plan from @mikeeddington.
- Main source repository: https://gitlab.com/gitlab-org/security-products/analyzers/api-fuzzing-src
- Add a new variable `DAST_API_SUCCESS_STATUS_CODES` that is a comma-separated list of success status codes to use during the record phase. Recommend looking over the design diagrams here. The worker-entry component is a Python script that takes the configuration variables and calls the scanner engine to start a testing job. To add the new option, changes are needed in both the scanner and worker-entry (a short sketch of the worker-entry parsing follows the sub-list below).
  - Update scanner
    - Add to `RunnerOptions`
  - Update worker-entry
    - Add new entry to `variables` in `SDK/worker-entry/src/worker_entry/main.py::run`
    - Add variable definition to `class WorkerEntry`
    - Initialize variable in `class WorkerEntry::__init__`
    - Update `worker.runner_options` in `class RunnerProvider::handler`
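
  A minimal sketch of how worker-entry could parse the new comma-separated variable before handing it to the scanner. The helper name and plumbing are hypothetical; the real `worker_entry` classes may differ:

  ```python
  # Hypothetical helper; the actual worker_entry option plumbing may differ.
  def parse_success_status_codes(raw: str) -> list[int]:
      """Parse DAST_API_SUCCESS_STATUS_CODES, e.g. "200,201,204", into a list of ints."""
      codes = []
      for part in raw.split(","):
          part = part.strip()
          if part:
              codes.append(int(part))
      return codes

  # Example: value read from the CI variable, then passed along to the scanner options.
  assert parse_success_status_codes("200, 201, 204") == [200, 201, 204]
  ```
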
- Update scanner
  - Add the status code check and error if the response status code doesn't match. Prior to testing, API Security performs a set of initial requests we call the recording phase. The status code check should be performed after each record request occurs. If the status code check fails, throw an exception that can be caught in the main run loop. The existing method for reporting errors can be used to propagate the failure to worker-entry and cause a console message to get written. This approach fails the job on the first request whose status code doesn't match the configured success codes (a sketch of the intended behavior follows this list).
  - Define a new exception `RecordStatusCodeFailure` with:
    - Operation name (`METHOD URL`)
    - Result status code
    - Result message
  - Add a check for the status code after the call to `var response = await requestContext.SendRequest();` and the declaration of `var exchange = ...` in `WebRunnerCheckStratengy.cs::RecordRequest`
    - Throw `RecordStatusCodeFailure`
  - Add a catch handler for `RecordStatusCodeFailure` to `WebRunnerMachine.cs::Run` near the `finally` handler:

    ```csharp
    _job.State = JobState.Error;
    _job.Reason = $"Initial request to 'OPERATION NAME' failed with status code STATUS CODE";
    _job.FinishedAt = DateTime.UtcNow;
    _job.HasFaults = _strategy.HasFaults;
    await Job.UpdateJob(_job);
    _logger.LogWarning($"* Session failed: {_job.Reason}");
    ```
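
  The intended check behaves roughly as follows. This is a minimal sketch written in Python for readability (the scanner itself is C#); every name in it is illustrative rather than the actual scanner API:

  ```python
  # Illustrative only: models the intended record-phase behavior described above.
  class RecordStatusCodeFailure(Exception):
      """Carries the fields listed above: operation name, status code, and message."""
      def __init__(self, operation: str, status_code: int, message: str):
          super().__init__(
              f"Initial request to '{operation}' failed with status code {status_code}")
          self.operation = operation        # e.g. "POST http://target/api/users"
          self.status_code = status_code    # status code returned by the record request
          self.message = message            # response message / reason phrase

  def check_record_response(operation: str, status_code: int, message: str,
                            success_codes: list[int]) -> None:
      # Raised on the first record-phase response whose status code is not allowed;
      # the main run loop catches it, marks the job as an error, and logs the reason.
      if status_code not in success_codes:
          raise RecordStatusCodeFailure(operation, status_code, message)
  ```
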
- Add integration tests. Integration tests are located in `web/SDK/worker-entry/tests`. Start by reading through `web/SDK/worker-entry/tests/README.md` and getting a working test environment going. It's also suggested to read through a few existing test suites to get an idea of how tests are constructed. A sketch of a possible test is shown after this list.
  - Add a test for success and failure
    - Add the new configuration variable to `class Variables` in `__init__.py`
    - Add tests to `test_int_general.py`
    - Check:
      - Return code is error (`assert we.rc != 0`)
      - Output contains the expected message (`assert we.stdout.find('msg')`)
    - Test your test: `pytest -x test_int_general.py -k TESTNAME`
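
  A rough pytest sketch of the failure and success paths. The real suite uses the worker-entry test harness and its fixtures; the stand-in runner below exists only so the example is self-contained, and all of its names are hypothetical:

  ```python
  # Illustrative pytest sketch; the real tests use the harness under web/SDK/worker-entry/tests.
  from dataclasses import dataclass

  @dataclass
  class WorkerEntryResult:
      rc: int        # worker-entry process return code
      stdout: str    # captured job console output

  def run_worker_entry(success_status_codes: str, target_status: int) -> WorkerEntryResult:
      # Stand-in for the harness call that runs worker-entry against a test target.
      allowed = [int(c) for c in success_status_codes.split(",")]
      if target_status in allowed:
          return WorkerEntryResult(rc=0, stdout="Recording complete")
      return WorkerEntryResult(
          rc=1,
          stdout=f"Initial request to 'POST /users' failed with status code {target_status}")

  def test_record_status_code_failure():
      we = run_worker_entry(success_status_codes="200,201", target_status=500)
      assert we.rc != 0                                        # job should fail
      assert we.stdout.find("failed with status code") != -1   # console has the error message

  def test_record_status_code_success():
      we = run_worker_entry(success_status_codes="200,201", target_status=201)
      assert we.rc == 0
  ```
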
- Add end-to-end test. All of the end-to-end tests are run as downstream pipelines making use of the production DAST-API template. To make maintaining the e2e test job definitions easier, they are defined in a template and generated by the `build/generate_e2e.py` script.
  - Group with e2e test projects: https://gitlab.com/gitlab-org/security-products/tests/api-fuzzing-e2e
  - Add new e2e template and verify script:
    - Bash scripting cheat sheet
    - Add a new verify script `script/verify-record-status.sh` to e2e-image.
      - Input variables:
        - `EXPECT_SUCCESS` - (1 true, 0 false) expect the job to pass/fail
        - `EXPECT_MESSAGE` - message to expect in the worker-entry output (job console)
      - Tests:
        - Did the job match our expected success or failure state?
        - Did the console output contain the correct message?
    - Add a test for the script in `tests/all.sh`.
    - Add a new verify template `template-verify-record-status.yml`.
      - Copy `template-verify.yml`
      - Call `verify-record-status.sh` instead of `verify.sh`
    - Add new `template-dastapi-record-status.yml` template
      - Copy `template-dastapi.yml`
      - Include `template-verify-record-status.yml` instead of `template-verify.yml`
  - Create a new project in the group: `dast-record-status`
    - Copy `.gitlab-ci.yml` from `dast-generic`
      - Update to include `template-dastapi-record-status.yml`
    - Create `assets` folder
    - Add `dast-generic/assets/dast-har/test_rest_target.har` as `assets/no_token.har`
    - Copy `assets/no_token.har` to `assets/with_token.har`
      - Edit the har file and add an Authorization header to each request's `headers` array (a small Python sketch for doing this follows):

        ```json
        { "name": "Authorization", "value": "Token b5638ae7-6e77-4585-b035-7d9de2e3f6b3" },
        ```
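
        A minimal Python sketch (a hypothetical helper, not part of the e2e repository) showing one way to produce `with_token.har` from `no_token.har`; it relies only on the standard HAR layout (`log.entries[].request.headers`):

        ```python
        # Hypothetical helper: copies a HAR file while adding an Authorization header
        # to every recorded request (HAR stores requests under log.entries).
        import json

        def add_auth_header(src: str, dst: str, token: str) -> None:
            with open(src) as f:
                har = json.load(f)
            for entry in har["log"]["entries"]:
                entry["request"]["headers"].append(
                    {"name": "Authorization", "value": f"Token {token}"})
            with open(dst, "w") as f:
                json.dump(har, f, indent=2)

        # Example: produce with_token.har from no_token.har
        add_auth_header("assets/no_token.har", "assets/with_token.har",
                        "b5638ae7-6e77-4585-b035-7d9de2e3f6b3")
        ```
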
  - Define two new e2e test variations in `.gitlab-ci.e2e-template.yml` in the `api-fuzzing-src` project
    - Define new tests in the `tests:` section
    - `dast_record_status_success:`

      ```yaml
      dast_record_status_success:
        variables:
          << : *common_variables
          EXPECT_SUCCESS: 1
          EXPECT_MESSAGE: "201 POST XXXX"
        trigger:
          project: gitlab-org/security-products/tests/api-fuzzing-e2e/dast-record-status
          strategy: depend
      ```

    - `dast_record_status_failure:`

      ```yaml
      dast_record_status_failure:
        variables:
          << : *common_variables
          EXPECT_SUCCESS: 0
          EXPECT_MESSAGE: FAILURE MESSAGE
        trigger:
          project: gitlab-org/security-products/tests/api-fuzzing-e2e/dast-record-status
          strategy: depend
      ```

    - Run `build/generate_e2e.py` to rebuild `.gitlab-ci.e2e-generated.yml`
- Document new variable