Granular Pass/Fail configuration for DAST API (API Security Testing) jobs

Problem to solve

At a high level, it's difficult for a security team to know whether development teams are running the scans properly. A green check is taken to mean everything is working, and developers on individual teams aren't going to dig into the jobs to see which ones failed, so passing jobs create a false sense of security.

Proposal

Create a variable that customers can configure to control whether a DAST API job passes or fails, based on the responses to the scan's initial requests. This gives greater control over what passes and fails: if the initial request responses aren't in the 200 range, fail the job; we don't want that pipeline to succeed.
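
As a rough illustration of the intended semantics, here is a minimal Python sketch, assuming the variable holds a comma-separated list of acceptable status codes; the function name and default value below are illustrative, not part of the proposal:

    # Illustrative sketch only: treat a response as successful when its
    # status code appears in the configured comma-separated allow list.
    def is_expected_status(status_code, allowed="200,201,204"):
        return status_code in {int(code.strip()) for code in allowed.split(",")}

    # Example: a 500 on an initial request should fail the job.
    assert not is_expected_status(500)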

Intended users

Security teams and the development teams running DAST API scans.

Implementation Plan

Prior to starting this work, consider asking @mikeeddington for a walkthrough of the implementation plan.

  1. Main source repository: https://gitlab.com/gitlab-org/security-products/analyzers/api-fuzzing-src

    1. Development environment configuration MR
  2. Add a new variable DAST_API_SUCCESS_STATUS_CODES that is a comma-separated list of success status codes to use during the record phase.

    Recommend looking over the design diagrams here. The worker-entry component is a Python script that takes the configuration variables and calls the scanner engine to start a testing job. To add the new option, changes must be made to both the scanner and worker-entry (a sketch of the worker-entry parsing follows this list).

    1. Update scanner
      1. Add to RunnerOptions
    2. Update worker-entry
      1. Add new entry to variables in SDK/worker-entry/src/worker_entry/main.py::run
      2. Add variable definition to class WorkerEntry
      3. Initialize variable in class WorkerEntry::__init__
      4. Update worker.runner_options in class RunnerProvider::handler
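
    A minimal sketch of the worker-entry side, assuming the variable arrives through the environment; the helper name below is an assumption for illustration:

      from os import environ

      # Parse DAST_API_SUCCESS_STATUS_CODES ("200,201,204") into ints.
      # Fail fast on malformed input so the job errors with a clear message.
      def parse_success_status_codes(raw):
          try:
              return [int(code.strip()) for code in raw.split(",") if code.strip()]
          except ValueError as err:
              raise ValueError(
                  f"Invalid DAST_API_SUCCESS_STATUS_CODES value: {raw!r}") from err

      success_status_codes = parse_success_status_codes(
          environ.get("DAST_API_SUCCESS_STATUS_CODES", ""))
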
  3. Add the status code check, and error if the codes don't match.

    Prior to testing, API Security performs a set of initial requests we call the recording phase. The status code check should be performed after each record request occurs. If the status code check fails, throw an exception that can be caught in the main run loop. The existing method for reporting errors can be used to propagate the failure to worker-entry and cause a console message to get written. This approach fails on the first request whose status code does not match the configured list.

    1. Define a new exception RecordStatusCodeFailure that captures:

      • Operation name (METHOD URL)
      • Result status code
      • Result message
    2. Add a check of the status code after the call to var response = await requestContext.SendRequest(); and the declaration of var exchange = ... in WebRunnerCheckStrategy.cs::RecordRequest

      1. Throw RecordStatusCodeFailure
    3. Add catch handler for RecordStatusCodeFailure to WebRunnerMachine.cs::Run near the finally handler.

      // Mark the job as failed, record the reason, and persist the update.
      _job.State = JobState.Error;
      _job.Reason = $"Initial request to 'OPERATION NAME' failed with status code STATUS CODE";
      _job.FinishedAt = DateTime.UtcNow;
      _job.HasFaults = _strategy.HasFaults;
      await Job.UpdateJob(_job);

      // Surface the failure in the job console output.
      _logger.LogWarning($"* Session failed: {_job.Reason}");
  4. Add integration tests. Integration tests are located in web/SDK/worker-entry/tests.

    Start by reading through web/SDK/worker-entry/tests/README.md and getting a working test environment set up. It's also suggested to read through a few existing test suites to get an idea of how tests are constructed (a rough sketch of the new tests follows this list).

    1. Add a test for success and failure
    2. Add new configuration variable to class Variables in __init__.py
    3. Add test to test_int_general.py
    4. Check:
      1. Return code is error (assert we.rc != 0)
      2. Output contains the expected message (assert we.stdout.find('msg') != -1)
    5. Test your test: pytest -x test_int_general.py -k TESTNAME
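
    A rough pytest sketch of the success and failure tests; the worker_entry fixture and its run() API are hypothetical stand-ins for the real harness, and only the we.rc / we.stdout checks mirror the assertions above:

      # Hypothetical sketch for test_int_general.py.
      def test_record_status_failure(worker_entry):
          we = worker_entry(variables={"DAST_API_SUCCESS_STATUS_CODES": "200"})
          we.run()
          assert we.rc != 0                            # job should error out
          assert we.stdout.find("failed with status code") != -1

      def test_record_status_success(worker_entry):
          we = worker_entry(variables={"DAST_API_SUCCESS_STATUS_CODES": "200,201,204"})
          we.run()
          assert we.rc == 0                            # job should pass
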
  5. Add end-to-end test

    All of the end-to-end tests run as downstream pipelines making use of the production DAST-API template. To make maintaining the e2e test job definitions easier, they are defined in a template and generated by the build/generate_e2e.py script.

    1. Group with e2e test projects: https://gitlab.com/gitlab-org/security-products/tests/api-fuzzing-e2e
    2. Add new e2e template and verify script:
      1. Bash scripting cheat sheet
      2. Add a new verify script script/verify-record-status.sh to e2e-image.
        1. Input variables:
          1. EXPECT_SUCCESS - (1 = expect the job to pass, 0 = expect it to fail)
          2. EXPECT_MESSAGE - Message to expect in worker-entry output (job console)
        2. Tests (sketched in Python below; the production script is Bash):
          1. Did the job match our expected success or failure state?
          2. Did the console output contain the correct message?
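
        The check logic, sketched in Python for clarity; the real verify-record-status.sh is a Bash script per this plan, and how job status and console output are obtained is left out:

          import os

          def verify(job_passed, console_output):
              # EXPECT_SUCCESS: "1" means the DAST job should pass, "0" fail.
              expect_success = os.environ["EXPECT_SUCCESS"] == "1"
              # EXPECT_MESSAGE: text that must appear in the job console.
              expect_message = os.environ.get("EXPECT_MESSAGE", "")
              if job_passed != expect_success:
                  return False
              return expect_message in console_output
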
      3. Add a test for the script in tests/all.sh.
      4. Add a new verify template template-verify-record-status.yml.
        1. Copy template-verify.yml
        2. Call verify-record-status.sh instead of verify.sh
      5. Add new template-dastapi-record-status.yml template
        1. Copy template-dastapi.yml
        2. Include template-verify-record-status.yml instead of template-verify.yml
    3. Create a new project dast-record-status in the group
      1. Copy .gitlab-ci.yml from dast-generic
        1. Update to include template-dastapi-record-status.yml
      2. Create assets folder
      3. Add dast-generic/assets/dast-har/test_rest_target.har as assets/no_token.har
      4. Copy assets/no_token.har to assets/with_token.har
        1. Edit the HAR file and add an Authorization header to each request's headers array (a helper sketch follows the example below):

          {
            "name": "Authorization",
            "value": "Token b5638ae7-6e77-4585-b035-7d9de2e3f6b3"
          },
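
        If hand-editing is tedious, a small hypothetical Python helper can add the header to every request in the copied HAR file (the token is the sample value above):

          import json

          def add_auth_header(src, dst, token):
              with open(src) as f:
                  har = json.load(f)
              # HAR stores requests under log.entries[].request.headers.
              for entry in har["log"]["entries"]:
                  entry["request"]["headers"].append(
                      {"name": "Authorization", "value": f"Token {token}"})
              with open(dst, "w") as f:
                  json.dump(har, f, indent=2)

          add_auth_header("assets/no_token.har", "assets/with_token.har",
                          "b5638ae7-6e77-4585-b035-7d9de2e3f6b3")
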
    4. Define two new e2e test variations in .gitlab-ci.e2e-template.yml in the api-fuzzing-src project
      1. Define the new tests in the tests: section

      2. dast_record_status_success:

         dast_record_status_success:
           variables:
             << : *common_variables
             EXPECT_SUCCESS: 1
             EXPECT_MESSAGE: "201 POST XXXX"
           trigger:
             project: gitlab-org/security-products/tests/api-fuzzing-e2e/dast-record-status
             strategy: depend
      3. dast_record_status_failure:

         dast_record_status_failure:
           variables:
             << : *common_variables
             EXPECT_SUCCESS: 0
             EXPECT_MESSAGE: FAILURE MESSAGE
           trigger:
             project: gitlab-org/security-products/tests/api-fuzzing-e2e/dast-record-status
             strategy: depend
      4. Run build/generate_e2e.py to rebuild .gitlab-ci.e2e-generated.yml

  6. Document the new variable
