# Implement End-to-End HTTP Request Integration Tests
## Overview
Implement comprehensive end-to-end integration tests that validate the complete OpenAPI-to-HTTP request flow, ensuring MCP tool calls are properly transformed into HTTP requests and responses are correctly handled.
## Background
Issue #1 (closed) successfully implemented the core MCP server functionality with multi-language client integration tests. However, the current tests validate MCP protocol communication and tool discovery; they do not exercise the actual HTTP request execution that occurs when tools are invoked.
## Current State Analysis 🚨
### ✅ What's Currently Implemented
- **HTTP Client Tests**: Direct `HttpClient.execute_tool_call()` calls that bypass the MCP protocol
- **MCP Tool Listing**: JavaScript/Python clients successfully test `listTools()` via JSON-RPC
- **Tool Registration**: OpenAPI tools are properly registered and discoverable
- **Mock Infrastructure**: Comprehensive mock server with realistic Petstore responses
### ❌ Critical Gap Identified
**Missing: Complete MCP Tool Call → HTTP Request Integration**

Current tests DO NOT validate the core user flow:

```
MCP Client --JSON-RPC tools/call--> MCP Server --HTTP--> API --HTTP--> MCP Server --JSON-RPC response--> MCP Client
```
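To make the missing leg concrete, here is a sketch of the `tools/call` round trip the tests need to exercise, built with `serde_json`. The request/result shapes follow MCP conventions; the response text content is illustrative, not captured output:

```rust
use serde_json::json;

fn main() {
    // JSON-RPC request a client sends to invoke a generated tool.
    let request = json!({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "getPetById",
            "arguments": { "petId": 123 }
        }
    });

    // The MCP server must turn this into `GET {base_url}/pet/123`,
    // then wrap the HTTP response body into an MCP tool result.
    let response = json!({
        "jsonrpc": "2.0",
        "id": 1,
        "result": {
            "content": [{ "type": "text", "text": "{\"id\":123,\"name\":\"doggie\"}" }],
            "isError": false
        }
    });

    println!("{request:#}\n{response:#}");
}
```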
**Existing Tests Only Cover:**
- ✅ **Component A**: HTTP requests work (but bypass MCP)
- ✅ **Component B**: MCP protocol works (but only for tool listing)
- ❌ **Integration A+B**: MCP tool calls triggering HTTP requests
## 🔄 REVISED IMPLEMENTATION APPROACH
### ⚠️ Problem with Initial Implementation
The initial approach created a separate `test_mcp_integration.rs` file with dedicated mock server setup, but this was unnecessarily complex:
- **Redundancy**: Duplicated test infrastructure instead of enhancing the existing tests
- **Scope Creep**: Tool schema generation and transport layers are already tested or out of scope
- **Overcomplication**: Integration tests should exercise both tool listing AND tool calls in single functions
### ✅ CORRECTED APPROACH: Consolidate Integration Tests
**Goal**: Enhance the existing integration tests to comprehensively cover both tool listing AND actual tool execution using mock servers.

**Consolidation Plan:**
- **Delete** the redundant `test_mcp_integration.rs` file
- **Enhance** the existing `test_with_js.rs` and `test_with_python.rs` with mock server integration
- **Maintain** snapshot-based testing, but capture complete output (tool listings + tool call results)
- **Validate** the end-to-end MCP Tool Call → HTTP Request → Response flow within the existing test structure
## Scope
This issue focuses on testing the complete request flow:

```
MCP Tool Call → Parameter Transformation → HTTP Request → API Response → MCP Response
```

**Note**: OpenAPI spec loading and validation is handled by the `openapiv3` crate and is therefore out of scope for this issue.
## Test Strategy: Swagger Petstore API
- **Test OpenAPI Spec**: https://github.com/swagger-api/swagger-petstore/blob/master/src/main/resources/openapi.yaml
- **Live API Base**: https://petstore.swagger.io/v2
## 🎯 Consolidated Integration Test Plan
### Enhanced JavaScript Integration (`tests/test_with_js.rs`)
- Add mock server setup using the existing `MockPetstoreServer`
- Configure the MCP server with the mock base URL for HTTP requests
- Maintain the existing `client.js` with enhanced tool call functionality
- Update snapshots to capture both tool listings AND tool call results
- Validate the complete flow in a single comprehensive test function (see the sketch after this list)
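A minimal sketch of the intended test wiring, assuming mockito 1.7 for the mock API; `spawn_mcp_server` and the client driver are hypothetical stand-ins for the existing harness in `tests/common/`:

```rust
use mockito::Server;

#[tokio::test]
async fn test_with_js_sse_client() {
    // Start a local mock of the Petstore API on a free port.
    let mut server = Server::new_async().await;

    // Stub the endpoint the MCP tool call should reach.
    let pet_mock = server
        .mock("GET", "/pet/123")
        .with_status(200)
        .with_header("content-type", "application/json")
        .with_body(r#"{"id": 123, "name": "doggie", "status": "available"}"#)
        .create_async()
        .await;

    // Hypothetical: boot the MCP server against the mock base URL so
    // every generated tool issues its HTTP requests to the mock.
    // let mcp = spawn_mcp_server(&server.url()).await;

    // ... run client.js: listTools() followed by callTool("getPetById") ...

    // Proves the MCP tool call actually produced the HTTP request.
    pet_mock.assert_async().await;
}
```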
### Enhanced Python Integration (`tests/test_with_python.rs`)
- Same approach as the JavaScript test
- Cross-client consistency validation through snapshot comparison
- Error scenario testing via mock server responses
### Core Test Validation
#### Tool Call Parameter Testing ✅
- **Path Parameters**: `getPetById` with `{petId: 123}` → `/pet/123`
- **Query Parameters**: `findPetsByStatus` with `{status: ["available", "pending"]}` → `?status=available&status=pending` (see the matcher sketch after this list)
- **Request Body**: `addPet` with JSON payload → proper HTTP POST with body
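The query-parameter case is the easiest to get subtly wrong (array explosion into repeated keys), so the mock can pin down the exact serialization. A sketch using mockito's request matchers:

```rust
use mockito::{Matcher, Server};

#[tokio::test]
async fn find_pets_by_status_query_serialization() {
    let mut server = Server::new_async().await;

    // Matches only if both repeated `status` keys are present, i.e.
    // {status: ["available", "pending"]} → ?status=available&status=pending
    let status_mock = server
        .mock("GET", "/pet/findByStatus")
        .match_query(Matcher::AllOf(vec![
            Matcher::UrlEncoded("status".into(), "available".into()),
            Matcher::UrlEncoded("status".into(), "pending".into()),
        ]))
        .with_status(200)
        .with_body(r#"[{"id": 1, "name": "doggie", "status": "available"}]"#)
        .create_async()
        .await;

    // ... drive the MCP client's findPetsByStatus tool call here ...

    status_mock.assert_async().await;
}
```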
#### Error Handling Integration ✅
- HTTP 404 → MCP error response in proper JSON-RPC format (sketched below)
- HTTP 400 → MCP validation error with details
- Network errors → MCP transport error handling
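An error-path sketch along the same lines. The expected shape assumes the server maps HTTP failures to tool results with `isError: true`, per MCP tool-result conventions; the exact message text is up to our response handler:

```rust
use mockito::Server;

#[tokio::test]
async fn get_pet_by_id_not_found() {
    let mut server = Server::new_async().await;

    // Stub a 404 so getPetById(999999) fails at the HTTP layer.
    let not_found = server
        .mock("GET", "/pet/999999")
        .with_status(404)
        .with_body(r#"{"code": 1, "type": "error", "message": "Pet not found"}"#)
        .create_async()
        .await;

    // The client output captured in the snapshot should contain an MCP
    // tool result flagged as an error, roughly of this shape:
    let _expected = serde_json::json!({
        "content": [{ "type": "text", "text": "HTTP 404: Pet not found" }],
        "isError": true
    });

    // ... drive the client, then confirm the request reached the mock ...
    not_found.assert_async().await;
}
```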
## Detailed Tasks
### ✅ Phase 1: Remove Redundant Infrastructure
- **Delete**: `tests/test_mcp_integration.rs` (unnecessary complexity)
- **Consolidate**: Use the existing integration test structure
### ✅ Phase 2: Enhance Existing Integration Tests
- **Update**: `test_with_js.rs` with mock server integration
- **Update**: `test_with_python.rs` with mock server integration
- **Maintain**: Snapshot-based validation for deterministic testing (see the insta sketch after this list)
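Snapshot validation can stay on the existing insta setup. A minimal sketch, where `run_client_and_collect_output` is a hypothetical helper that runs the JS/Python client subprocess and returns its combined stdout:

```rust
#[tokio::test]
async fn snapshot_complete_client_output() {
    // Hypothetical helper: returns the tools/list result followed by
    // each tools/call result, printed in a fixed order by the client.
    let output: String = run_client_and_collect_output().await;

    // insta diffs against the stored snapshot and fails on drift;
    // deterministic mock responses keep the snapshot stable.
    insta::assert_snapshot!(output);
}
```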
### ✅ Phase 3: Validate Complete Integration
- **Test**: Tool listing + tool execution in single test functions
- **Verify**: Mock server provides controlled HTTP responses
- **Validate**: MCP → HTTP → MCP flow works end-to-end
- **Update**: Snapshots to include comprehensive tool call outputs
## Test Architecture
### Revised Test Structure ✅

```
tests/
├── test_with_js.rs           # Enhanced: Mock server + tool calls
├── test_with_python.rs       # Enhanced: Mock server + tool calls
├── test_http_integration.rs  # Existing: Direct HTTP tests
├── test_error_scenarios.rs   # Existing: HTTP error tests
├── test_with_js/
│   ├── client.js             # Enhanced: callTool() + listTools()
│   └── streamable_client.js  # Enhanced: callTool() + listTools()
├── test_with_python/
│   └── client.py             # Enhanced: call_tool() + list_tools()
├── common/
│   └── mock_server.rs        # Enhanced: Reused by integration tests
└── assets/
    └── petstore-openapi.json # Existing: Test specification
```
### Consolidated Test Categories ✅
#### Enhanced Integration Tests ✅
- `test_with_js_sse_client()` - Tool listing + tool calls via SSE transport
- `test_with_js_streamable_http_client()` - Tool listing + tool calls via HTTP transport
- `test_with_python_client()` - Tool listing + tool calls via the Python MCP client
#### Comprehensive Flow Validation ✅
- **Path parameters**: MCP `getPetById` → HTTP `/pet/{id}` → MCP response
- **Query parameters**: MCP `findPetsByStatus` → HTTP `?status=...` → MCP response
- **Request body**: MCP `addPet` → HTTP POST with JSON → MCP response
- **Error handling**: HTTP 404/400 → proper MCP error responses
## Success Criteria ✅
### Functional Requirements
- ✅ All Petstore operations invokable via actual MCP tool calls in the existing integration tests
- ✅ MCP → HTTP parameter transformation validated for all parameter types
- ✅ HTTP → MCP response transformation produces valid content with error handling
- ✅ Mock server integration provides a controlled testing environment
- ✅ Snapshot validation captures complete tool listing + tool execution output

### Quality Requirements
- ✅ Existing test structure enhanced rather than duplicated
- ✅ Single comprehensive tests validate the complete MCP integration flow
- ✅ Deterministic snapshots for reliable test validation
- ✅ Cross-client consistency between JavaScript and Python
## Task Checklist
### 🎯 ✅ COMPLETED: Integration Test Consolidation
- **Task 1**: Delete the redundant `test_mcp_integration.rs` file
- **Task 2**: Enhance `test_with_js.rs` with mock server setup
- **Task 3**: Enhance `test_with_python.rs` with mock server setup
- **Task 4**: Run tests to regenerate comprehensive snapshots
- **Task 5**: Validate the end-to-end MCP integration flow
- **Task 6**: Document the consolidated approach and validate quality

### ✅ Post-Consolidation Validation
- **Verify**: Tool listing + tool calls work in single test functions
- **Verify**: Mock servers provide proper HTTP responses via MCP
- **Verify**: Snapshots capture complete integration test output
- **Verify**: Error scenarios properly handled through the MCP protocol
## 🏆 Final Results
**Integration Test Coverage Achieved:**
- **JavaScript SSE Client**: ✅ Complete tool discovery + execution
- **JavaScript StreamableHTTP Client**: ✅ Complete tool discovery + execution
- **Python SSE Client**: ✅ Complete tool discovery + execution

**Complete MCP Flow Validated:**

```
MCP Tool Call → JSON-RPC → MCP Server → HTTP Request → Mock API → HTTP Response → MCP Server → JSON-RPC → MCP Client
```

**Parameter Types Tested:**
- ✅ **Path Parameters**: `getPetById(123)` → `GET /pet/123` → success responses
- ✅ **Query Parameters**: `findPetsByStatus(["available", "pending"])` → `GET /pet/findByStatus?status=available&status=pending` → success responses
- ✅ **Request Body**: `addPet({name: "MCP Test Dog", status: "available"})` → `POST /pet` → success responses
**Error Scenarios Validated:**
- ✅ **404 Not Found**: `getPetById(999999)` → proper MCP error response with HTTP 404 details
- ✅ **400 Bad Request**: `addPet({status: "invalid_status_value"})` → proper MCP validation error response

**Cross-Transport Consistency:**
- ✅ **SSE vs StreamableHTTP**: Identical tool call results across transport methods
- ✅ **JavaScript vs Python**: Consistent MCP protocol behavior across client languages

**Snapshot Validation:**
- ✅ **Comprehensive Coverage**: Snapshots now contain 6-7x more test data, covering both tool discovery AND tool execution
- ✅ **Deterministic Testing**: Mock servers provide consistent responses for reliable snapshot comparison
- ✅ **Complete Flow Documentation**: Snapshots serve as integration test documentation showing expected MCP behavior
## Dependencies
- **Requires**: Core MCP server implementation from Issue #1 (closed) ✅
- **Test Framework**: Existing `rmcp`, `tokio`, `insta` testing infrastructure ✅
- **Mock Server**: Enhanced `mockito` 1.7.0 for controlled testing ✅
- **MCP Clients**: JavaScript `@modelcontextprotocol/sdk` and Python `mcp` libraries ✅
## Related Issues
- **Issue #1 (closed)**: Core MCP Server Implementation (dependency) ✅
- **Future**: Performance optimization based on integration test results
- **Future**: Enhanced authentication methods based on MCP API testing