Integrated environment to test AI end to end across all systems
Delivering a stable AI product requires several systems to interact successfully to complete each request: the LLM provider, CustomersDot, Zuora, the AI Gateway, the GitLab Monolith, and the IDE or IDE extension. When an error occurs, it is difficult to debug across this stack. During development, it is also easy to overlook that these other systems can affect your slice of work, even when that slice is tested and functioning correctly on its own.
This issue proposes testing these integrations throughout the lifecycle of a request, to validate that each system works properly when an AI feature is accessed and used.
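
As a rough illustration of the kind of check this proposal describes, the sketch below walks a single AI completion request through the stack and reports which hop failed. It is a minimal sketch only: the instance URLs, token, health paths, and completion endpoint are placeholders and assumptions, not the real APIs of these services.

```python
"""Hedged sketch of an end-to-end AI request check.

Each step exercises one hop in the chain
(IDE/extension -> GitLab Monolith -> AI Gateway -> LLM provider,
with CustomersDot/Zuora governing entitlement). The URLs, token,
and payloads below are illustrative placeholders, not real APIs.
"""
import sys
import requests

GITLAB_URL = "https://gitlab.example.com"           # placeholder instance URL
AI_GATEWAY_URL = "https://ai-gateway.example.com"   # placeholder gateway URL
TOKEN = "glpat-PLACEHOLDER"                         # placeholder access token

CHECKS = [
    # (name, url) pairs: each probe confirms the hop is reachable at all
    ("GitLab Monolith reachable", f"{GITLAB_URL}/-/health"),       # assumed health path
    ("AI Gateway reachable", f"{AI_GATEWAY_URL}/health"),          # assumed health path
]


def probe(name: str, url: str) -> bool:
    """Return True if the hop responds with a 2xx status."""
    try:
        resp = requests.get(url, timeout=10)
    except requests.RequestException as exc:
        print(f"[FAIL] {name}: {exc}")
        return False
    print(f"[{'OK' if resp.ok else 'FAIL'}] {name}: HTTP {resp.status_code}")
    return resp.ok


def end_to_end_completion() -> bool:
    """Simulate the IDE's request path: ask the monolith for an AI completion.

    The endpoint and payload are assumptions for illustration only; a real
    check would call whatever API the IDE extension actually uses.
    """
    try:
        resp = requests.post(
            f"{GITLAB_URL}/api/v4/ai/completions",  # hypothetical endpoint
            headers={"PRIVATE-TOKEN": TOKEN},
            json={"prompt": "def fibonacci(n):"},
            timeout=30,
        )
    except requests.RequestException as exc:
        print(f"[FAIL] end-to-end completion: {exc}")
        return False
    print(f"[{'OK' if resp.ok else 'FAIL'}] end-to-end completion: HTTP {resp.status_code}")
    return resp.ok


if __name__ == "__main__":
    results = [probe(name, url) for name, url in CHECKS]
    results.append(end_to_end_completion())
    sys.exit(0 if all(results) else 1)
```

A check along these lines could run on a schedule against a shared integration environment, so a failure points to the specific hop (entitlement, gateway, monolith, or provider) rather than surfacing only as a generic error in the IDE.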