9 - AI Ethics and Trust Design
Don't change this
If you're a trainee, don't edit this module.
How to start
- Type `/clone` in the comment box below to copy this issue.
- Go to your new issue through the link in the activity feed. Follow the other steps from there.
- In the sidebar, set the Parent to your training epic. Check the welcome issue if you need help.
- Now you can start! Delete this section and the “don't change” note before you begin.
What you'll learn
After this module, you'll be able to:
- Recognize ethical risks in AI features - Identify potential problems with bias, privacy, transparency, and societal impact that could emerge in GitLab's AI products
- Think through unintended consequences - Use creative thinking (like the "Black Mirror" exercise) to imagine how AI features might be misused or cause unexpected problems when they scale
- Conduct an Ethics and Societal Impact Review - Systematically analyze an AI feature by identifying risks, defining principles, and planning specific mitigations to reduce harm
- Apply GitLab's AI ethics principles - Connect ethical concerns to GitLab's values and use our principles to guide design decisions
- Understand AI's broader impact on work and society - Explain how AI affects jobs, power dynamics, and social relationships, especially in the software development community
- Design with ethics in mind - Incorporate ethical considerations into feature planning, not as an afterthought but as part of the core design process
Before you start
- Assign this issue to yourself.
- Optional. Set a due date you can meet. This module should take about 5 hours total.
Need help? Ask @pedroms or post in #ux_ai-up-skill.
Learn
TODO
Add handbook content
- ( 🆕 Add) Handbook content — Why it matters, identify potential ethical issues, Ethics and Societal Impact reviews
- ~10 min: Read GitLab AI Ethics Principles for Product Development.
- ~12 min: Read Prepare for potential pitfalls.
- ~1h 40m: Read Trust + Explanations and its subchapters.
- ~16 min: Read How will Artificial Intelligence Affect Jobs 2025-2030.
- ~14 min: Watch What Are The Ethical Concerns With AI?
Do and share
Take the quiz ~10 min
- Take this quiz to test what you learned. It's private - only you see the results.
Write a “Black Mirror” episode 1h max.
Write a short story in the style of the TV show Black Mirror to think through the effects of the technology you're working on. These stories dramatize a problem with a technology, amplifying the anxieties and issues it may create.
How to do it
- Think about future technology: Take the feature you've been designing in this training. What might it look like in 5–10 years if it becomes really popular?
- Find the problems: What worries or issues might this future technology create? Think about how it could affect people and society. Examples: people losing jobs, less privacy, less freedom to choose.
- Create a character and story: Make up a person who shows the problems with this technology. What's the setting? What happens to them? How does the technology make their life worse?
- Keep your story to one page.
- Use Claude to help you write and improve your story. Idea: you can use this Prompt creator to create instructions for an expert storyteller!
Tasks
- Comment here with your story or link to it.
- Copy your story and share it in #ux_ai-up-skill.
Write an Ethics and Societal Impact Review 2h max.
Write an Ethics and Societal Impact Review that looks at the ethical problems your feature might cause and how to mitigate them.
- Create a document with a table that has three columns: Risk, Principle, Mitigation. See the example table below.
- List the risks: ethical problems and negative effects that could happen as more people use your feature. How might people misuse it, or how might it cause unintended problems?
- For each risk, list the principles: rules that should guide your design to prevent the risks. How might you ensure that feature development and use are responsible?
- For each principle, list the mitigations: specific ways to build these principles into the design of the feature. What changes or practices would you use to reduce harm?
- Comment here with a link to your document.
- Copy your comment and share it in #ux_ai-up-skill.
Example table
| Risk | Principle | Mitigation |
|---|---|---|
| Copying Protected Code and Privacy Problems | Respect Code Ownership and Privacy | Promise to use privacy-safe AI training that doesn't copy specific private code. |
Reflect
Almost done! Think about what you experienced to help you remember what you learned.
Tips
- Focus on the most important parts.
- Keep it short - 3 points max per section.
- For improvements, think about what you'd do differently with more time or resources.
- For rating, think about what was helpful and what could be better.
Tasks
- Use the template below to comment about what you learned.
- Copy your comment and share it in #ux_ai-up-skill.
Reflection template
## What I did well
-
-
-
## What I could do better
-
-
-
## My main takeaways
-
-
-
## How I rate this module
My rating: ⭐⭐⭐⭐⭐ <!-- Remove stars as needed -->
-
-
-
cc @pedroms @jackib @emilybauman
You're done!
For the next module, go to Modules templates (&18581).