Red Team Handbook rewrite

Merged charlie ablett requested to merge cablett-red-team-hb-update into main
@@ -19,10 +19,10 @@ It's important to us to intentionally and enthusiastically collaborate with the
There are several ways we emulate initial access:
-- **Research**. We may conduct operations specifically looking for initial access vectors to exploit. These require substantial time and resources, so we ensure the investment is justified by the potential for security improvements and learning. For example, the [2024 Okta bypass](https://gitlab-com.gitlab.io/gl-security/security-tech-notes/red-team-tech-notes/okta-verify-bypass-sept-2024/) we researched and responsibly disclosed to Okta.
+- **Research**. We may conduct operations specifically looking for initial access vectors to exploit. These require substantial time and resources, so we ensure the investment is justified by the potential for security improvements and learning. For example, the [2024 Okta bypass](https://gitlab-com.gitlab.io/gl-security/security-tech-notes/red-team-tech-notes/okta-verify-bypass-sept-2024/) was researched by our team before being responsibly disclosed to Okta.
- **Opportunistic**. Red Team members can also hunt for ways to "break in" to GitLab at any time in the context of an [Opportunistic Attack](../#opportunistic-attacks). This allows us to draw attention to any discoveries and GitLab can quickly remediate. Successful intrusions can then be re-used in future stealth operations as proof of a realistic initial access vector.
-- **Collaborative**. [Club Red](../opportunistic-attacks/#club-red) allows team members to collaborate with us to develop an initial access idea they have, and we can leverage their domain knowledge for a greater overall security result for GitLab.
-- **Assumed Breach**. Sometimes we create a scenario where we gain initial access to GitLab's systems through a trusted insider. This is done in a realistic manner, leaving indicators of compromise ([IoCs](https://en.wikipedia.org/wiki/Indicator_of_compromise)) that reflect an actual breach. From there, we focus on post-exploitation tactics and techniques such as establishing persistence and elevating privileges.
+- **Collaborative**. [Club Red](../opportunistic-attacks/#club-red) allows team members to collaborate with us to develop an initial access plan. We can leverage their domain knowledge for a greater overall security result for GitLab.
+- **Assumed Breach**. Sometimes we create a scenario where we gain initial access to GitLab systems through a trusted insider. This is done in a realistic manner, leaving indicators of compromise ([IoCs](https://en.wikipedia.org/wiki/Indicator_of_compromise)) that reflect an actual breach. From there, we focus on post-exploitation tactics and techniques such as establishing persistence and elevating privileges.
### 2. Operation execution
@@ -38,13 +38,13 @@ Sometimes our operations involve attacking infrastructure set up by a certain te
In any retrospective, we **always** aim to focus on improvements rather than assign blame.
If social engineering is involved, we must be careful to ensure that the individuals involved in the exercise feel well supported and not blamed.
-We offer meet with anyone who was involved in social engineering to and thank them for being a part of our operation and to reassure them.
+We offer to meet with anyone involved in our social engineering efforts to reassure them and thank them for being part of our operation.
We **never** want anyone to feel like they did something wrong, since our operations test **processes**, not individuals.
### 4. Report and recommendations for security improvements
-We then release a [report](#reporting) summarising the operation and our recommendations for improving security posture. We create issues using the [issue template](https://gitlab.com/gitlab-com/gl-security/security-operations/redteam/redteam-public/resources/red-team-issue-templates), apply the relevant labels, and use this for tracking [metrics](#red-team-metrics). We then hand all our tools and techniques to the Blue Team so they can create relevant detections.
+We then release a [report](#reporting) summarizing the operation and our recommendations for improving security posture. We create issues using the [issue template](https://gitlab.com/gitlab-com/gl-security/security-operations/redteam/redteam-public/resources/red-team-issue-templates), apply the relevant labels, and use this to track [metrics](#red-team-metrics). We then provide our tools and techniques to the Blue Team so they can create relevant detections.
We often work with [Signals Engineering](../../signals-engineering/) and [Security Incident Response Team (SIRT)](../../sirt/) to review our findings and attack steps, and to review the resulting detections and alerts.
@@ -52,11 +52,11 @@ We often work with [Signals Engineering](../../signals-engineering/) and [Securi
All operations end with a final report. We use a publicly shared [issue template](https://gitlab.com/gitlab-com/gl-security/security-operations/redteam/redteam-public/resources/red-team-issue-templates).
-Security risks affect everyone, and it is essential to make our reports approachable and consumable to a broad audience. Our goal is to ensure that anyone in the company can understand the reports, even if they don't have a technical background or a in security, so we strive to [use simple language](/handbook/communication/#simple-language).
+Security risks affect everyone, and it is essential to make our reports approachable and consumable to a broad audience. Our goal is to ensure that anyone in the company can understand the reports, even if they don't have a technical or security background, so we strive to [use simple language](/handbook/communication/#simple-language).
-There may also be a short (five minutes or less) video summary, if we feel it's needed.
+If we feel it's needed, we also provide a short (five minutes or less) video summary.
-For stealth operations, or higher-visibility operations, it's beneficial to share the story with the entire company. In that case, we post the following the Slack channel `#whats-happening-at-gitlab` and cross-post it in `#security`:
+For stealth or higher-visibility operations, it's beneficial to share the story with the entire company. In that case, we post the following to the Slack channel `#whats-happening-at-gitlab` and cross-post it in `#security`:
- A very short summary of the operation, including the video overview if there is one
- A link to the final report
@@ -66,19 +66,15 @@ For stealth operations, or higher-visibility operations, it's beneficial to shar
By doing this, we help foster a culture of security awareness throughout the organization and ensure that everyone can benefit from our work.
### Post-operation technique handover
While this may result in product fixes or infrastructure changes, it is possible that vulnerable configurations may reappear in the environment. At this point, GitLab's [Vulnerability Management](/handbook/security/product-security/vulnerability-management/) group will take over any ongoing scanning required to monitor for this scenario. The Red Team will share any tools they used for the initial discovery, but Vulnerability Management will generally implement a more production-ready permanent scanning solution.
## Red Team Maturity Model
-We use a custom maturity model to measure our progress and help guide our decisions. This is loosely based on the [Capabilities Maturity Model (CMM)](https://en.wikipedia.org/wiki/Capability_Maturity_Model). Our model contains five stages of maturity, each with very specific behaviors we strive to demonstrate and states we hope to achieve.
+We use a custom maturity model to measure our progress and help guide our decisions. This is loosely based on the [Capabilities Maturity Model (CMM)](https://en.wikipedia.org/wiki/Capability_Maturity_Model). [Our model](https://gitlab.com/gitlab-com/gl-security/security-operations/redteam/redteam-internal/red-team-maturity-model/-/boards/5905165) (available internally only) contains five stages of maturity, each with very specific behaviors we strive to demonstrate and states we hope to achieve.
## Red Team Metrics
### Adoption Rate
-A successful Red Team program strengthens an organization's security through recommendations that are adopted, (i.e. accepted and ultimately implemented) by the organization. We track the lifecycle of these recommendations through to implementation using GitLab.com, calling this metric our "Adoption Rate."
+A successful Red Team program strengthens an organization's security through recommendations that are accepted and implemented by the organization. We track the lifecycle of these recommendations through to implementation using GitLab.com, calling this metric our "Adoption Rate."
Recommendations start as GitLab.com issues in the project closest to the team that can address them. We classify recommendations using labels:
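This label-driven lifecycle lends itself to simple automation. As a minimal sketch (the label names below are hypothetical examples, not the team's actual scheme), the Adoption Rate could be computed from the `labels` lists that the GitLab issues API returns:

```python
# Sketch: computing an "Adoption Rate" from labelled GitLab issues.
# Label names are hypothetical; adapt them to the real labelling scheme.

def adoption_rate(issues):
    """Return the fraction of recommendation issues marked implemented.

    `issues` is a list of dicts, each with a "labels" list, matching the
    shape of results from the GitLab issues API (GET /projects/:id/issues).
    """
    recommendations = [i for i in issues if "red-team-recommendation" in i["labels"]]
    if not recommendations:
        return 0.0
    implemented = [i for i in recommendations
                   if "recommendation::implemented" in i["labels"]]
    return len(implemented) / len(recommendations)

issues = [
    {"labels": ["red-team-recommendation", "recommendation::implemented"]},
    {"labels": ["red-team-recommendation", "recommendation::accepted"]},
    {"labels": ["bug"]},  # unrelated issue, ignored
]
print(adoption_rate(issues))  # 0.5
```

In practice the issue list would be fetched per project with an API token; the calculation itself stays the same.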
@@ -116,7 +112,7 @@ For each completed operation, we build a flow chart to visualize the attack path
That same ATT&CK Flow file is imported into our ATT&CK Navigator project, which generates a heatmap visualizing our coverage across the ATT&CK matrix. We maintain a single heatmap for each operation, as well as a combined heatmap for all previous operations.
-This is s great way to visualize the types of attack techniques we've emulated, and to help us understand areas we should focus on in future operations.
+This is a great way to visualize the types of attack techniques we've emulated, and helps us understand areas to focus on in future operations.
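A combined heatmap of this kind can be produced by summing per-technique scores across per-operation layer files. A minimal sketch, assuming layers shaped like public ATT&CK Navigator layer JSON (the operation names and scores here are illustrative):

```python
# Sketch: merging per-operation ATT&CK Navigator layers into one combined
# heatmap layer, so hotter cells mean a technique appeared in more operations.
# Layer shape follows the public Navigator layer format; data is illustrative.

def merge_layers(layers, name="All operations"):
    """Sum per-technique scores across a list of Navigator layer dicts."""
    scores = {}
    for layer in layers:
        for t in layer["techniques"]:
            tid = t["techniqueID"]
            scores[tid] = scores.get(tid, 0) + t.get("score", 1)
    return {
        "name": name,
        "domain": "enterprise-attack",
        "techniques": [
            {"techniqueID": tid, "score": s} for tid, s in sorted(scores.items())
        ],
    }

op1 = {"name": "Op 1", "techniques": [{"techniqueID": "T1078", "score": 1},
                                      {"techniqueID": "T1566", "score": 1}]}
op2 = {"name": "Op 2", "techniques": [{"techniqueID": "T1078", "score": 1}]}
combined = merge_layers([op1, op2])
print(combined["techniques"][0])  # {'techniqueID': 'T1078', 'score': 2}
```

The merged dict can be serialized with `json.dump` and loaded directly into a Navigator instance as a layer file.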
## Is this the Red Team?
@@ -144,6 +140,6 @@ If the Red Team is ever asked _"Is this you?"_ by someone other than the designa
Because we want to treat all activity as potentially malicious, anyone else receiving this question should also use a consistent response. Feel free to use your own words. The following can be a guide:
-> We want to treat any suspicious activity as potentially malicious. Let's continue following our normal procedures to report and investigate this. Any Red Team operation will have controls in place to keep things from escalating too far. You can read more about this here: [{{< ref ".#is-this-the-red-team" >}}]({{< ref ".#is-this-the-red-team" >}}).
+> We want to treat any suspicious activity as potentially malicious. Let's continue following our normal procedures to report and investigate this. Any Red Team operation has controls in place to keep things from escalating too far. You can read more about this here: [{{< ref ".#is-this-the-red-team" >}}]({{< ref ".#is-this-the-red-team" >}}).
If the person receiving this question happens to be a Security Director or a trusted participant in an ongoing stealth operation, they can then use established channels to communicate with the Red Team.