# Spike: AI assistant + Figma MCP for code-to-Figma workflows
## Summary

Investigate using AI assistants with the [Figma MCP (Model Context Protocol) server](https://github.com/nichochar/figma-mcp) to explore **code-to-Figma** workflows — using AI to manipulate Figma files based on code as the source of truth.

This spike has two goals:

1. **Personal workflow evaluation** — Assess how Claude Code + Figma MCP could streamline design system maintenance by pushing code-side changes into Figma (generating or updating Figma components from code, scaffolding new Figma structures from component definitions).
2. **Designer support** — Better understand the experience of designers already using these tools, so we can provide informed guidance and support.

## Background

MCP (Model Context Protocol) allows AI tools like Claude Code and Duo to interact with external services through standardized server integrations. The Figma MCP server gives Claude Code the ability to read _and write_ to Figma files, enabling it to create, modify, and organize design elements programmatically.

This is relevant to Pajamas because code is often the leading source of truth for component behavior, design tokens, and API surface — yet keeping the Figma UI Kit in sync with those changes is a manual, time-consuming process. AI-assisted code-to-Figma workflows could significantly reduce that overhead.
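Concretely, MCP is JSON-RPC 2.0 over stdio or HTTP: the client (Claude Code, Duo) discovers what a server can do with a `tools/list` request and then invokes tools by name. A sketch of that exchange, per the MCP specification (the `get_node` tool shown here is illustrative, not an actual tool of any Figma server):

```json
{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "get_node",
        "description": "Read a Figma node by id",
        "inputSchema": {
          "type": "object",
          "properties": {"nodeId": {"type": "string"}}
        }
      }
    ]
  }
}
```

Whether a given server is read-only or read/write comes down to which tools it chooses to expose in this list.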
## Spike goals

- [x] Set up Claude Code with the Figma MCP server locally
- [ ] Test key workflows:
  - [x] **Component scaffolding (code → Figma)**: Point Claude Code at a Vue/SCSS component and have it generate or update the corresponding Figma component structure (layers, variants, auto-layout)
  - [x] **Bulk Figma manipulation**: Test using Claude Code to perform batch operations in Figma (e.g., renaming layers, restructuring variants, applying token changes across multiple components)
  - [x] **Change propagation**: Explore whether a code-side change (e.g., a new prop or token update) can be semi-automatically reflected in the Figma UI Kit
- [x] Document findings, including:
  - What worked well
  - What didn't work or felt unreliable
  - Fidelity of generated Figma output (does it match our UI Kit conventions?)
  - Limitations and risks (e.g., Figma API write constraints, destructive edits, token usage, rate limits, data privacy considerations)
  - Potential impact on designer and developer workflows
- [x] Summarize recommendations for next steps (adopt, iterate, or shelve)

## Out of scope

- Building any production tooling or CI integration
- Committing to a long-term AI workflow strategy (this is exploratory only)
- Figma-to-code direction (may be a future spike)

## Spike findings

_Handwritten, but feel free to create your own AI summary. This is also available in video format:_ https://www.youtube.com/watch?v=5aNg3M7hnGY :video_camera:

This spike explored using generative AI assistants to perform tasks in Figma. The exploration used GitLab Duo through VS Code, Claude Desktop, and the Claude Code CLI from a cloned copy of the design.gitlab.com repo. Interfacing with Figma was through the Figma Console MCP (`southleft/figma-console-mcp`), as it supported both read and write operations. At the start of the spike, Figma's official MCP was read-only; the official remote MCP now supports read and write, but its security position is TBD.
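For reference, wiring a local MCP server into Claude Desktop is a small entry in `claude_desktop_config.json`. A sketch for the Figma Console MCP (the npm package name and the token environment variable are assumptions; check the server's README for the exact values):

```json
{
  "mcpServers": {
    "figma-console": {
      "command": "npx",
      "args": ["-y", "figma-console-mcp"],
      "env": {
        "FIGMA_API_KEY": "your-figma-personal-access-token"
      }
    }
  }
}
```

Claude Code registers servers similarly (via `claude mcp add` or a project-level `.mcp.json`), which is part of why the setup felt repeatable across tools.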
It did feel magical — similar to how early code generation felt to me. But it is also unrefined. Improvements are needed to the tech (and to me) before I'd expect to see workflows transformed, but it is cool and it could become useful.

### Workflows tested

#### 1. Component health auditing (code vs Figma comparison)

Claude was prompted to compare the Figma component against the gitlab-ui implementation. It successfully identified some discrepancies that are worth further investigation.

https://gitlab.com/gitlab-org/gitlab-services/design.gitlab.com/-/work_items/3357#note_3203360879

This type of comparison typically takes 5–30 minutes, depending on component complexity. The prompt completed it in about 1 minute, suggesting significant time savings at scale.

#### 2. Component scaffolding (code to Figma)

Claude and Duo were prompted to create a `GlAvatarsInline` Figma component from the Vue/SCSS source. This component has been avoided in the UI Kit because it is tricksy, especially if we want to support the hover/focus/active states.

https://gitlab.com/gitlab-org/gitlab-services/design.gitlab.com/-/work_items/3357#note_3212363907

More detailed prompts produced better results, and prompting a specific approach, rather than letting it freestyle, also helped. Duo was vastly faster, with similar results to Claude Code. The process was not hands-off, and I am not sure how or where it would be beneficial to us in its current form.

#### 3. Batch operations

Three batch workflows were tested (https://gitlab.com/gitlab-org/gitlab-services/design.gitlab.com/-/work_items/3357#note_3212764566):

* Auditing and updating component descriptions
* Refactoring component properties
* Applying design tokens

Claude/Duo completed all three tasks well, though it did encounter some limitations. Auditing and updating component descriptions could become a periodic task if we invested in a consistent style and in the prompt to generate it.
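A description audit like this doesn't strictly require an MCP server: the same data is reachable through Figma's REST API, where `GET /v1/files/:key` returns a `components` map with each component's `name` and `description`. A stdlib-only sketch of finding components with empty descriptions (the file key and token values are placeholders):

```python
import json
import urllib.request

FIGMA_API = "https://api.figma.com/v1"


def components_missing_descriptions(components: dict) -> list[str]:
    """Return sorted names of components whose description is empty.

    `components` is the `components` map from a Figma file response,
    keyed by node id; each entry carries `name` and `description`.
    """
    return sorted(
        meta["name"]
        for meta in components.values()
        if not meta.get("description", "").strip()
    )


def fetch_components(file_key: str, token: str) -> dict:
    # GET /v1/files/:key returns the document tree plus a `components` map.
    req = urllib.request.Request(
        f"{FIGMA_API}/files/{file_key}",
        headers={"X-Figma-Token": token},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)["components"]


# Usage (requires a real UI Kit file key and a personal access token):
#   comps = fetch_components("<file-key>", "<token>")
#   print(components_missing_descriptions(comps))
```

A harness like this could feed the AI only the components that actually need new descriptions, rather than having it re-read the whole file each run.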
Updating component properties and applying design tokens could speed up some backlogged refactoring work, but the bottleneck will be review rather than generation.

#### 4. Recreating a prototype in Figma

Claude/Duo attempted to recreate the flows of a Vue prototype in Figma using library components. Results were mixed.

https://gitlab.com/gitlab-org/gitlab-services/design.gitlab.com/-/work_items/3357#note_3216420294

The output resisted using the UI Kit (UIK) to create the screens, preferring to create its own assets. It was able to use the UIK after more direct instruction and some workarounds. The prototypes did not use gitlab-ui in their code; the results would likely be different if they did.

### What went well

* Setup had a few steps, but felt repeatable.
* Component health auditing at scale.
* Batch description updates, especially where there were Figma-specific implementation details that needed to be preserved.
* First-draft design token implementations.
* First-draft property renaming, following established guidelines.

### What didn't work well

* Complex component creation.
* UIK library component usage in files.
* Hand-off workflows, such as automated updates.
* Duo cannot process images, so it cannot 'see' the results.

## Suggestions

* Figure out how to make our Figma library and documentation known to the AI assistant. https://gitlab.com/gitlab-org/gitlab-services/design.gitlab.com/-/work_items/3405 https://gitlab.com/gitlab-org/gitlab-services/design.gitlab.com/-/work_items/3406
* Investigate a two-way workflow from Figma to prototype and back to Figma.
  * I suspect at least part of this will be implementing Code Connect for all UIK components. https://gitlab.com/gitlab-org/gitlab-services/design.gitlab.com/-/work_items/3407
* Continue experimenting with this tool to better understand its strengths and weaknesses.
* Refactor all of the component properties to use the current guidelines as a one-time effort.
  The work to codify these for the prompt could also be valuable for a future review tool. https://gitlab.com/gitlab-org/gitlab-services/design.gitlab.com/-/work_items/3408
* Determine, codify, and implement consistent component descriptions throughout the UI Kit. https://gitlab.com/gitlab-org/gitlab-services/design.gitlab.com/-/work_items/3410
* Invest in a testing harness for component health auditing, including a repeatable prompt and visual comparisons rather than just static analysis. https://gitlab.com/gitlab-org/gitlab-services/design.gitlab.com/-/work_items/3411

## Resources

- [Claude Code documentation](https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview)
- [Figma MCP server (nichochar/figma-mcp)](https://github.com/nichochar/figma-mcp)
- [MCP specification](https://modelcontextprotocol.io/)
- [Pajamas UI Kit (Figma)](https://www.figma.com/community/file/781156790581391771)