Trim Duo Chat prompts

Shinya Maeda requested to merge trim-duo-chat-prompts into master

What does this MR do and why?

This MR addresses the issue Chat Diagnostic Experiment Recommendation: 02-... (gitlab-org/modelops/ai-model-validation-and-research/ai-evaluation/ai-experiments#1 - closed) with the following changes:

  • Remove the instruction in the ReAct prompt that the LLM must use one of the tools to generate an answer and otherwise respond with "I don't know" (see the illustrative sketch below).
  • Remove the feedback instruction, since Duo Chat already has a dedicated feedback component in its UI.
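For context, the removed tool-use requirement is the kind of hard constraint sketched below. This is an illustrative, hypothetical snippet only, not the actual Duo Chat prompt text:

```python
# Illustrative only: a hypothetical ReAct-style instruction of the kind this MR removes.
# This is NOT the actual Duo Chat prompt text.
REMOVED_STYLE_INSTRUCTION = (
    "You must use one of the available tools to generate an answer. "
    "If none of the tools can answer the question, respond with \"I don't know\"."
)
```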

Prompt Library configuration

  • Input dataset: duo_chat_external.experiment_code_generation__input_v1
  • Output dataset: duo_chat_external_results.sm_experiment_code_generation__input_v1_for_issue_1_attempt_2
Full configuration:

```json
{
  "beam_config": {
    "pipeline_options": {
      "runner": "DirectRunner",
      "project": "dev-ai-research-0e2f8974",
      "region": "us-central1",
      "temp_location": "gs://prompt-library/tmp/",
      "save_main_session": false
    }
  },
  "input_bq_table": "dev-ai-research-0e2f8974.duo_chat_external.experiment_code_generation__input_v1",
  "output_bq_table": "dev-ai-research-0e2f8974.duo_chat_external_results.sm_experiment_code_generation__input_v1_for_issue_1_attempt_2",
  "throttle_sec": 0.1,
  "batch_size": 10,
  "input_adapter": "mbpp",
  "eval_setup": {
    "answering_models": [
      {
        "name": "claude-2",
        "prompt_template_config": {
          "templates": [
            {
              "name": "claude-2",
              "template_path": "data/prompts/duo_chat/answering/claude-2.txt.example"
            }
          ]
        }
      },
      {
        "name": "duo-chat",
        "parameters": {
          "base_url": "http://gdk.test:3000"
        },
        "prompt_template_config": {
          "templates": [
            {
              "name": "empty",
              "template_path": "data/prompts/duo_chat/answering/empty.txt.example"
            }
          ]
        }
      }
    ],
    "metrics": [
      {
        "metric": "similarity_score"
      },
      {
        "metric": "independent_llm_judge",
        "evaluating_models": [
          {
            "name": "claude-2",
            "prompt_template_config": {
              "templates": [
                {
                  "name": "claude-2",
                  "template_path": "data/prompts/duo_chat/evaluating/claude-2.txt.example"
                }
              ]
            }
          }
        ]
      }
    ]
  }
}
```
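As a reading aid, here is a minimal Python sketch (not part of the Prompt Library itself) that loads the configuration above from a hypothetical local config.json and summarizes which answering models, prompt templates, and metrics the run uses:

```python
import json

# Hypothetical local copy of the configuration shown above.
with open("config.json") as f:
    config = json.load(f)

eval_setup = config["eval_setup"]

# Each answering model carries its own prompt template(s).
for model in eval_setup["answering_models"]:
    templates = [t["template_path"] for t in model["prompt_template_config"]["templates"]]
    print(f"answering model: {model['name']} -> {templates}")

# similarity_score needs no judge; independent_llm_judge lists its evaluating models.
for metric in eval_setup["metrics"]:
    judges = [m["name"] for m in metric.get("evaluating_models", [])]
    suffix = f" (judges: {judges})" if judges else ""
    print(f"metric: {metric['metric']}{suffix}")

print(f"input:  {config['input_bq_table']}")
print(f"output: {config['output_bq_table']}")
```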

Evaluation results - Independent LLM Judge - Correctness

  • Before: Production (master - SHA: 446571bef621b5e99732cb5b245782d0d9a51355)
  • After: This MR (trim-duo-chat-prompts - SHA: ebde786f41f355dbb7ae6627199d43a110d39d94)
| grade | before_percentage | after_percentage |
|-------|-------------------|------------------|
| 4     | 65.0              | 45.0             |
| 3     | 0.0               | 0.0              |
| 2     | 5.0               | 30.0             |
| 1     | 30.0              | 20.0             |
Query:

```sql
WITH grades as (
  SELECT 4 as grade union all
  SELECT 3 as grade union all
  SELECT 2 as grade union all
  SELECT 1 as grade
), before_base_table AS (
  SELECT *
  FROM `dev-ai-research-0e2f8974.duo_chat_external_results.sm_experiment_code_generation__input_v1_legacy__independent_llm_judge`
  WHERE answering_model = 'duo-chat'
), after_base_table AS (
  SELECT *
  FROM `dev-ai-research-0e2f8974.duo_chat_external_results.sm_experiment_code_generation__input_v1_for_issue_1_attempt_2__independent_llm_judge`
  WHERE answering_model = 'duo-chat'
), before_correctness_grade AS (
  SELECT correctness as grade, COUNT(*) as count
  FROM before_base_table
  GROUP BY correctness
), after_correctness_grade AS (
  SELECT correctness as grade, COUNT(*) as count
  FROM after_base_table
  GROUP BY correctness
)

SELECT grades.grade AS grade,
       ROUND((COALESCE(before_correctness_grade.count, 0) / (SELECT COUNT(*) FROM before_base_table)) * 100.0, 1) AS before_percentage,
       ROUND((COALESCE(after_correctness_grade.count, 0) / (SELECT COUNT(*) FROM after_base_table)) * 100.0, 1) AS after_percentage
FROM grades
LEFT OUTER JOIN before_correctness_grade ON before_correctness_grade.grade = grades.grade
LEFT OUTER JOIN after_correctness_grade ON after_correctness_grade.grade = grades.grade;
```
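The same before/after comparison can also be reproduced outside BigQuery. Below is a hedged sketch assuming the two __independent_llm_judge result tables have been exported to CSV (hypothetical file names), with answering_model and correctness columns as in the query above:

```python
import pandas as pd

def grade_distribution(path: str) -> pd.Series:
    """Percentage of duo-chat answers per correctness grade (4 = best)."""
    df = pd.read_csv(path)
    df = df[df["answering_model"] == "duo-chat"]
    pct = (df["correctness"].value_counts() / len(df) * 100).round(1)
    # Make sure every grade shows up, even when no answer received it.
    return pct.reindex([4, 3, 2, 1], fill_value=0.0)

# Hypothetical CSV exports of the before/after result tables.
before = grade_distribution("before__independent_llm_judge.csv")
after = grade_distribution("after__independent_llm_judge.csv")
print(pd.DataFrame({"before_percentage": before, "after_percentage": after}))
```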

Evaluation results - Similarity score

| similarity_score_range | before_percentage | after_percentage |
|------------------------|-------------------|------------------|
| 1.0                    | 15.0              | 25.0             |
| 0.9                    | 30.0              | 10.0             |
| 0.8                    | 20.0              | 30.0             |
| 0.7                    | 5.0               | 15.0             |
| 0.6                    | 20.0              | 20.0             |
| 0.5                    | 10.0              | 0.0              |
| 0.4                    | 0.0               | 0.0              |
| 0.3                    | 0.0               | 0.0              |
| 0.2                    | 0.0               | 0.0              |
| 0.1                    | 0.0               | 0.0              |
Query:

```sql
WITH buckets as (
  SELECT 1.0 as bucket union all
  SELECT 0.9 as bucket union all
  SELECT 0.8 as bucket union all
  SELECT 0.7 as bucket union all
  SELECT 0.6 as bucket union all
  SELECT 0.5 as bucket union all
  SELECT 0.4 as bucket union all
  SELECT 0.3 as bucket union all
  SELECT 0.2 as bucket union all
  SELECT 0.1 as bucket
), before_similarity_score AS (
  SELECT *
  FROM `dev-ai-research-0e2f8974.duo_chat_external_results.sm_experiment_code_generation__input_v1_legacy__similarity_score`
  WHERE answering_model = 'duo-chat'
), after_similarity_score AS (
  SELECT *
  FROM `dev-ai-research-0e2f8974.duo_chat_external_results.sm_experiment_code_generation__input_v1_for_issue_1_attempt_2__similarity_score`
  WHERE answering_model = 'duo-chat'
)

SELECT buckets.bucket AS similarity_score_range,
    (
        SELECT ROUND((COUNT(*) / (SELECT COUNT(*) FROM before_similarity_score)) * 100.0, 1)
        FROM before_similarity_score
        WHERE buckets.bucket = ROUND(before_similarity_score.comparison_similarity, 1)
    ) AS before_percentage,
    (
        SELECT ROUND((COUNT(*) / (SELECT COUNT(*) FROM after_similarity_score)) * 100.0, 1)
        FROM after_similarity_score
        WHERE buckets.bucket = ROUND(after_similarity_score.comparison_similarity, 1)
    ) AS after_percentage
FROM buckets;
```
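Note that similarity_score_range is produced by ROUND(comparison_similarity, 1), so each label collects scores within about ±0.05 of it rather than acting as a lower bound. A small Python sketch of the same bucketing, with made-up example scores:

```python
from collections import Counter

# Made-up example scores; the real values live in the *__similarity_score tables.
scores = [0.94, 0.92, 0.87, 0.61, 0.58, 1.0]

# Mirror the SQL bucketing: ROUND(comparison_similarity, 1).
buckets = Counter(round(score, 1) for score in scores)
for bucket in sorted(buckets, reverse=True):
    print(f"{bucket:.1f}: {buckets[bucket]} answer(s)")
```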

MR acceptance checklist

Please evaluate this MR against the MR acceptance checklist. It helps you analyze changes to reduce risks in quality, performance, reliability, security, and maintainability.

Screenshots or screen recordings

Screenshots are required for UI changes, and strongly recommended for all other merge requests.

| Before | After |
| ------ | ----- |

How to set up and validate locally

Numbered steps to set up and validate the change are strongly suggested.

