Commit c1cc01dc authored by Cynthia "Arty" Ng

Fix batch of broken links

parent a6fb160c
+1 −1
@@ -213,7 +213,7 @@ Models that will be reconstructed as part of building or testing changes are cla
Using the environment defined during the planning, construct the local development environment using the following steps:

1. Clean local database tables by dropping all schemas found with in the `<user_name>_PREP` and `<user_name>_PROD`.  This will ensure that the data used for testing is only the data added recently.  This can be done through the Snowsight UI or using a SQL command and query details can be found in the [Snowflake Dev Clean Up](https://gitlab.com/gitlab-data/runbooks/-/blob/main/Snowflake/snowflake_dev_clean_up.md) runbook.
-1. Using the environment models command, [clone](/handbook/enterprise-data/platform/dbt-guide/#cloning-into-local-user-db) tables necessary for development and testing.  This ensures that the data is up to date, and cloning the tables is more efficient than rebuilding the tables from the source data.
+1. Using the environment models command, [clone](/handbook/enterprise-data/platform/dbt-guide/#cloning-models-locally) tables necessary for development and testing.  This ensures that the data is up to date, and cloning the tables is more efficient than rebuilding the tables from the source data.
    - The clone job will not clone under the following conditions. Any models meeting those conditions must be included in the subsequent build steps:
      - The table is new
      - The table has a new name
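
The schema clean-up in step 1 can be sketched as follows. This is a minimal illustration only: the helper name and schema list are hypothetical, and the linked Snowflake Dev Clean Up runbook remains the authoritative source for the exact SQL.

```python
# Hypothetical helper: build DROP SCHEMA statements for the dev clean-up
# step. It targets each schema in the user's <user_name>_PREP and
# <user_name>_PROD development databases, mirroring the naming convention
# described above.

def cleanup_statements(user_name, schemas):
    """Return DROP SCHEMA commands for the user's PREP and PROD dev databases."""
    stmts = []
    for db_suffix in ("PREP", "PROD"):
        database = f"{user_name.upper()}_{db_suffix}"
        for schema in schemas:
            # IF EXISTS keeps the command idempotent across repeated runs.
            stmts.append(f"DROP SCHEMA IF EXISTS {database}.{schema};")
    return stmts

print(cleanup_statements("jsmith", ["LEGACY"]))
```

The generated statements could then be run through the Snowsight UI or any Snowflake client, as the runbook describes.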
+3 −3
@@ -57,7 +57,7 @@ Risk is a measure of how much potential harm can a feature cause to a user, an o

We have a variety of tools and processes to help you navigate various states of confidence and risk they include:

-* [UX Bash](../product/ux/ux-research/ux-bash/)
+* [UX Bash](/handbook/product/ux/experience-research/ux-bash/)
* [AI Evaluation Guidelines](https://docs.gitlab.com/development/ai_features/ai_evaluation_guidelines/)
* [Intro to Prompt Engineering](https://www.promptingguide.ai/introduction)

@@ -101,7 +101,7 @@ When in this unknown territory, you should not proceed with releasing an AI feat

When an AI-powered feature has high confidence in the output quality and high risk of producing harmful or incorrect results, the feature must develop an evaluation dataset.

-It's required that you have a multiple [UX Bashs](../product/ux/ux-research/ux-bash/) with at least 10 external users each time validating the quality of the output. We have a framework for how to run them to evaluate output quality.
+It's required that you have multiple [UX Bashes](/handbook/product/ux/experience-research/ux-bash/) with at least 10 external users each time validating the quality of the output. We have a framework for how to run them to evaluate output quality.

* Develop a dataset of at least 500 evaluations that focus on the areas of risk you've identified.
* The dataset is supported in the CEF and included in the daily CEF run.
@@ -113,7 +113,7 @@ It's required that you have a multiple [UX Bashs](../product/ux/ux-research/ux-b

When an AI-powered feature has low confidence in the output quality and high risk of producing harmful or incorrect results, the feature should not proceed until the team increases confidence of the output quality.

-It's required that you have multiple [UX Bashes](../product/ux/ux-research/ux-bash/) with at least 10 external users each validating the quality of the output. We have a framework for how to run them to evaluate output quality.
+It's required that you have multiple [UX Bashes](/handbook/product/ux/experience-research/ux-bash/) with at least 10 external users each validating the quality of the output. We have a framework for how to run them to evaluate output quality.

* Develop a dataset of at least 100 evaluations that focus on the areas of risk you've identified.
* The dataset is supported in the CEF and included in the daily CEF run.
+2 −2
@@ -141,7 +141,7 @@ We follow the [Product Designer workflows](/handbook/product/ux/product-designer
- we have issue boards so we can see what everyone is up to. Refer to issue boards in our planning issues. For example, this is the [template for Acquisition Planning issues](https://gitlab.com/gitlab-org/growth/team-tasks/-/blob/master/.gitlab/issue_templates/growth_acquisition_planning_template.md).
- we **label** our issues with `UX`, `devops::growth` and `group::`.
- we will label experiments with `UX problem validation` and `UX solution validation` according to the [UX Research Workflow](/handbook/product/ux/operations/#ux-labels) definitions to indicate the type of learning the experiment achieves. The purpose of these labels is to track [this UX KPI](/handbook/product/ux/performance-indicators/#ux-research-velocity) related to research velocity.
-- we use the [workflow labels](https://gitlab.com/groups/gitlab-org/-/labels?utf8=%E2%9C%93&subscribed=&search=workflow%3A%3A) for regular issues and [experiment workflow labels](/handbook/engineering/development/growth/#experiment-workflow-labels) for experiment issues.
+- we use the [workflow labels](https://gitlab.com/groups/gitlab-org/-/labels?utf8=%E2%9C%93&subscribed=&search=workflow%3A%3A) for regular issues and [experiment workflow labels](/handbook/engineering/development/growth/experimentation/#overview) for experiment issues.
- we use **milestones** to aid in planning and prioritizing the four growth groups of Acquisition, Conversion, Expansion and Retention.
  - PMs provide an [ICE score for experiments](https://docs.google.com/spreadsheets/d/1yvLW0qM0FpvcBzvtnyFrH6O5kAlV1TEFn0TB8KM-Y1s/edit#gid=0) and by using [priority labels](https://docs.gitlab.com/development/labels/#priority-labels) for other issues.
  - The Product Designer applies the milestone in which they plan to deliver the work (1-2 milestones in advance, or backlog for items that are several months out. For example, if an issue is not doable for a designer in the current milestone, they can add the next milestone to the issue, which will communicate to the PM when the work will be delivered.
@@ -241,6 +241,6 @@ In alignment with GitLab's [DACI](/handbook/people-group/directly-responsible-in
| Post-Experiment Analysis | Data Team |
| Post-Experiment Decision | Growth Product/Stage Product/Engineering |
| Maintenance | Stage Product/Engineering |
-| Alert creation | Growth Product/Engineering : [How to create a Sisense SQL alert](/handbook/engineering/development/growth/sisense_alert/) |
+| Alert creation | Growth Product/Engineering |

It is in the Growth team's purview to run any experiment in any area of the application. Growth's ability to experiment in this manner is designed to further learning potential for the larger GitLab team and support business priorities. When an experiment impacts the area of another product owner/team, Growth informs them following the collaboration model above. Product owners/teams are encouraged to raise concerns and provide further context. Ultimately, Growth determines whether the experiment is deemed a success using data and input from relevant teams. The product owner/team is made aware of the result and next steps.
+1 −1
@@ -52,7 +52,7 @@ your role is `JSMITH` and your development databases are `JSMITH_PROD` and `JSMI

### Clone a lineage using the command line

-Please see instructions [here](/handbook/enterprise-data/platform/dbt-guide/#cloning-into-local-user-db).
+Please see instructions [here](/handbook/enterprise-data/platform/dbt-guide/#cloning-models-locally)
on how to clone a single model or an entire lineage (similar to the CI jobs) using the
command line. This is the preferred method for cloning models locally.

+1 −1
@@ -60,7 +60,7 @@ We work across other teams often and are striving to get better at engaging with

#### Product Marketing Engagement

-We partner really closely with [Product Marketing Management](/handbook/marketing/brand-and-product-marketing/product-and-solution-marketing/core-product-marketing/). In the Verify Stage, we have a stable counterpart assigned, as defined in the [PMM Team Structure](/handbook/marketing/brand-and-product-marketing/product-and-solution-marketing/core-product-marketing/#pmm-team-structure), which aligns to the [CI Use Case](/handbook/marketing/brand-and-product-marketing/product-and-solution-marketing/usecase-gtm/ci/).
+We partner really closely with [Product Marketing Management](/handbook/marketing/brand-and-product-marketing/product-and-solution-marketing/core-product-marketing/). In the Verify Stage, we have a stable counterpart assigned, as defined in the [PMM Team Structure](/handbook/marketing/brand-and-product-marketing/product-and-solution-marketing/core-product-marketing/), which aligns to the [CI Use Case](/handbook/marketing/brand-and-product-marketing/product-and-solution-marketing/usecase-gtm/ci/).

We have four main processes:
