
Draft: Intercept API calls from E2E specs

Sean Gregory requested to merge qa/capture_correlation_ids into master

Description of the test

This MR introduces a capability to intercept API responses from the E2E tests and:

  • log the responses and their metadata from the test
  • manipulate the response data before it is processed by the frontend.

This is in a proof-of-concept phase and:

  • is configured to work only locally on 127.0.0.1:3000
  • only supports Chrome
  • runs behind an environment variable
  • only captures errors from async API calls, not postbacks

This capability could be a potential solution for capturing correlation IDs from failed responses, and other metadata if needed.

Strategy

  • The WebDriver starts with an argument that loads the extension
  • The extension injects a JavaScript script at document start, before any frontend code is loaded
  • The script replaces fetch and XMLHttpRequest with functions that intercept responses and allow intervention from the test
  • By default, details of errored API calls are saved to sessionStorage, so they persist across page loads
  • The test has access to the sessionStorage data via execute_script
  • The test can register interceptors for certain URLs and pass a callback function that receives the response, headers, URL, and method. (The callback must pass the response back to the frontend code.)
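To make the strategy above concrete, the fetch-replacement step might look roughly like the following. This is a sketch, not the extension's actual code: the `installFetchInterceptor` helper, the `interceptedApiErrors` storage key, and the use of the `x-request-id` header as the correlation ID are all assumptions for illustration.

```javascript
// Sketch of the fetch-replacement step (assumed names; see above).
const STORAGE_KEY = 'interceptedApiErrors'; // assumed key name

function installFetchInterceptor(win) {
  const originalFetch = win.fetch.bind(win);

  win.fetch = async function (...args) {
    const response = await originalFetch(...args);

    // Only failed responses are recorded, matching the PoC scope.
    if (!response.ok) {
      const errors = JSON.parse(win.sessionStorage.getItem(STORAGE_KEY) || '[]');
      errors.push({
        url: response.url,
        method: (args[1] && args[1].method) || 'GET',
        status: response.status,
        // Assumes the correlation ID is exposed via the x-request-id header.
        correlationId: response.headers.get('x-request-id'),
      });
      // sessionStorage survives same-tab page loads, so the test can read
      // the accumulated errors later via execute_script.
      win.sessionStorage.setItem(STORAGE_KEY, JSON.stringify(errors));
    }
    return response;
  };
}

// In the injected script this would run against the real window:
if (typeof window !== 'undefined') installFetchInterceptor(window);
```

Because the patched fetch still returns the original response, frontend code is unaffected unless a test-registered interceptor deliberately rewrites the response data.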

Risks

  • The interceptors could break existing functionality (especially XHR)
  • Sensitive data must not be logged in the test/pipelines

There are a few items to consider, and a few tasks to do, if this strategy seems like an appropriate solution.

  • add lots of tests to make sure that the interceptors don't break existing functionality
  • rewrite the extension entirely (remove XHR interception if possible)
  • formalize access to sessionStorage from the test, perhaps triggered on a failed test.
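For the last item, the test-side read could be as simple as evaluating a small snippet via execute_script. Again a sketch under the same assumed key name (`interceptedApiErrors`):

```javascript
// Sketch of reading the captured errors back out (assumed key name).
// A spec could evaluate this via execute_script and inspect the result.
function readInterceptedErrors(storage) {
  return JSON.parse(storage.getItem('interceptedApiErrors') || '[]');
}

// In the browser this would be called as readInterceptedErrors(sessionStorage).
```

Hooking this into an after-hook that runs only on test failure would keep the logs small and avoid dumping API metadata for passing specs.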

I'm hoping this can start a conversation about whether this MR hints at a viable solution.

Checklist

  • Confirm the test has a testcase: tag linking to an existing test case in the test case project.
  • Note if the test is intended to run in specific scenarios. If a scenario is new, add a link to the MR that adds the new scenario.
  • Follow the end-to-end tests style guide and best practices.
  • Use the appropriate RSpec metadata tag(s).
  • Ensure that a created resource is removed after test execution. A Group resource can be shared between multiple tests. Do not remove it unless it has a unique path. Note that we have a cleanup job that periodically removes groups under gitlab-qa-sandbox-group.
  • Ensure that no transient bugs are hidden accidentally due to the usage of waits and reloads.
  • Verify the tags to ensure it runs on the desired test environments.
  • If this MR has a dependency on another MR, such as a GitLab QA MR, specify the order in which the MRs should be merged.
  • (If applicable) Create a follow-up issue to document the special setup necessary to run the test: ISSUE_LINK
  • If the test requires an admin's personal access token, ensure that the test passes on your local environment with and without the GITLAB_QA_ADMIN_ACCESS_TOKEN provided.
