Custom flows feedback

One of our AI hackathon participants @asifdotpy provided some valuable feedback/learnings we should look into.

https://gitlab.com/gitlab-ai-hackathon/participants/11553323

Here's my working diagnosis.yml (v3.9.0, all 5 tools, FINISHED status confirmed):

```yaml
name: "RunnerIQ - Pipeline Diagnosis"

description: "Diagnose pipeline failures by analyzing failing jobs, logs, and recent commits to identify root causes and recommend fixes."
public: true
definition:
  version: v1
  environment: ambient

  components:
    - name: "pipeline_diagnosis"
      type: AgentComponent
      prompt_id: "diagnosis_prompt"
      inputs:
        - from: "context:goal"
          as: "user_request"
        - from: "context:project_id"
          as: "project_id"
      toolset:
        - "get_project"
        - "get_pipeline_failing_jobs"
        - "get_job_logs"
        - "list_commits"
        - "get_commit_diff"
      ui_log_events:
        - on_agent_final_answer
```

Key things that will save you hours:

  1. `public: true`: if it's set to false, @ai- mentions silently do nothing. No error, no bot reply, just silence.
  2. inputs format: this is the number-one gotcha. Entries must be `from:`/`as:` objects. The bare string form `- "context:goal"` passes validation but causes an instant WebSocket close at runtime (see the side-by-side example after this list).
  3. `context:project_id` as second input: without it, tools like get_project have no project to query.
  4. No `model:` block: don't add one. The validator rejects it, and the runtime resolves the model automatically.
  5. Add tools incrementally: I tested 1→2→3→4→5 tools with separate test flows before adding each to the main flow (a stripped-back sketch follows below). If you hit WebSocket closes, strip back to 2 tools and build up.
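
For point 2, here is the inputs gotcha side by side. The working form is lifted straight from the config above; the broken form is the string shorthand that the validator accepts but the runtime doesn't:

```yaml
# Broken: passes schema validation, then the WebSocket closes
# the moment the flow actually runs
inputs:
  - "context:goal"

# Working: explicit from:/as: objects, exactly as in the config above
inputs:
  - from: "context:goal"
    as: "user_request"
  - from: "context:project_id"
    as: "project_id"
```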
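
And for point 5, a minimal sketch of the kind of stripped-back test flow I mean: two tools only, everything else identical to the main flow. The name, component name, and description here are just placeholders:

```yaml
name: "RunnerIQ - Tool Smoke Test"  # placeholder name
description: "Minimal two-tool flow for isolating WebSocket closes."
public: true  # see point 1 above
definition:
  version: v1
  environment: ambient

  components:
    - name: "tool_smoke_test"  # placeholder component name
      type: AgentComponent
      prompt_id: "diagnosis_prompt"
      inputs:
        - from: "context:goal"
          as: "user_request"
        - from: "context:project_id"
          as: "project_id"
      toolset:
        # start with two tools; add one at a time from here
        - "get_project"
        - "get_pipeline_failing_jobs"
      ui_log_events:
        - on_agent_final_answer
```

If this runs to FINISHED, add the next tool and repeat.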

cc @bastirehm
