Add token validation for GPT-4o model

Tan Le requested to merge fix-validation-gpt-4o into main

What does this merge request do and why?

Add token validation for the GPT-4o model.

How to set up and validate locally

  1. Check out this merge request's branch.
  2. Ensure OPENAI_TOKEN is not set as an environment variable.
  3. Run a pipeline that uses OpenAI models, as either an answering or an evaluating model.
  4. Confirm that a validation error is raised.
    pydantic_core._pydantic_core.ValidationError: 2 validation errors for DuoChatEvalConfig
    eval_setup.answering_models.2
      Assertion failed, OpenAI token not provided. Please set OPENAI_TOKEN env variable. [type=assertion_error, input_value={'name': 'gpt-4o', 'promp..._system.example.txt'}]}}, input_type=dict]
        For further information visit https://errors.pydantic.dev/2.7/v/assertion_error
    eval_setup.metrics.2.evaluating_models.0
      Assertion failed, OpenAI token not provided. Please set OPENAI_TOKEN env variable. [type=assertion_error, input_value={'name': 'gpt-4o', 'promp...ependent-system.txt'}]}}, input_type=dict]
        For further information visit https://errors.pydantic.dev/2.7/v/assertion_error
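The validation errors above come from a pydantic assertion on the eval config. As a rough illustration of the behaviour being tested, the check amounts to something like the following standalone sketch (the function name and the `gpt-` prefix test here are illustrative assumptions, not the actual implementation, which lives in a validator on `DuoChatEvalConfig`):

```python
import os


def validate_openai_token(model_name: str) -> None:
    # Hypothetical helper mirroring the MR's pydantic validator:
    # if an OpenAI model is configured but no OPENAI_TOKEN env variable
    # is set, fail fast with the same assertion message seen above.
    if model_name.startswith("gpt-") and not os.environ.get("OPENAI_TOKEN"):
        raise AssertionError(
            "OpenAI token not provided. Please set OPENAI_TOKEN env variable."
        )
```

With `OPENAI_TOKEN` unset, passing a model name such as `gpt-4o` raises, which is the behaviour step 4 above confirms; setting the variable lets validation pass.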

Merge request checklist

  • I've run the affected pipeline(s) to validate that nothing is broken.
  • Tests added for new functionality. If not, please raise an issue to follow up.
  • Documentation added/updated, if needed.
Edited by Tan Le