chore(deps): update dependency litellm to v1.52.10
This MR contains the following updates:
Package | Type | Update | Change |
---|---|---|---|
litellm | dependencies | patch | `1.52.3` -> `1.52.10` |
⚠️ **Warning**: Some dependencies could not be looked up. Check the warning logs for more information.
WARNING: this job ran in a Renovate pipeline that doesn't support the configuration required for common-ci-tasks Renovate presets.
Release Notes
BerriAI/litellm (litellm)
v1.52.10
What's Changed
- add openrouter/qwen/qwen-2.5-coder-32b-instruct by @paul-gauthier in https://github.com/BerriAI/litellm/pull/6731
- Update routing references by @emmanuel-ferdman in https://github.com/BerriAI/litellm/pull/6758
- (Doc) Add section on what is stored in the DB + Add clear section on key/team based logging by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6769
- (Admin UI) - Remain on Current Tab when user clicks refresh by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6777
- (UI) fix - allow editing key alias on Admin UI by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6776
- (docs) add doc string for /key/update by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6778
- (patch) using `image_urls` with `vertex/anthropic` models by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6775 (see the sketch after this list)
- (fix) Azure AI Studio - using `image_url` in content with both text and image_url by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6774
- build: add gemini-exp-1114 by @krrishdholakia in https://github.com/BerriAI/litellm/pull/6786
- (fix) httpx handler - bind to ipv4 for httpx handler by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6785
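For the two `image_url` entries above, litellm accepts OpenAI-style multimodal content parts and translates them for the underlying provider. A minimal sketch, assuming valid Vertex AI credentials in the environment; the model string and image URL are illustrative placeholders, not taken from the release notes:

```python
import litellm

# One user message mixing text and image_url content parts (OpenAI format).
# Model string and image URL are placeholders.
response = litellm.completion(
    model="vertex_ai/claude-3-5-sonnet@20240620",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/cat.png"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```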
Full Changelog: https://github.com/BerriAI/litellm/compare/v1.52.9...v1.52.10
Docker Run LiteLLM Proxy
```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.52.10
```
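Once the container is up, a quick way to verify it is serving traffic is to hit the OpenAI-compatible `/chat/completions` route that the load tests below exercise. A minimal smoke test, assuming a model and a virtual key have been configured on the proxy (the bearer token and model name here are placeholders):

```python
import requests

# POST to the proxy's OpenAI-compatible chat endpoint.
# The bearer token and model name are placeholders for whatever
# is actually configured on your proxy.
resp = requests.post(
    "http://localhost:4000/chat/completions",
    headers={"Authorization": "Bearer sk-1234"},
    json={
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```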
🎉 Don't want to maintain your internal proxy? Get in touch about the Hosted Proxy (Alpha): https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed | 240.0 | 271.7799367877801 | 6.1248828197277065 | 0.0 | 1833 | 0 | 213.09577699997817 | 2144.701510999994 |
Aggregated | Passed | 240.0 | 271.7799367877801 | 6.1248828197277065 | 0.0 | 1833 | 0 | 213.09577699997817 | 2144.701510999994 |
v1.52.9
What's Changed
- (feat) add bedrock/stability.stable-image-ultra-v1:0 by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6723 (see the sketch after this list)
- [Feature]: Stop swallowing up AzureOpenAi exception responses in litellm's implementation for a BadRequestError by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6745
- [Feature]: json_schema in response support for Anthropic by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6748
- fix: import audio check by @IamRash-7 in https://github.com/BerriAI/litellm/pull/6740
- (fix) Cost tracking for `vertex_ai/imagen3` by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6752
- (feat) Vertex AI - add support for fine tuned embedding models by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6749
- LiteLLM Minor Fixes & Improvements (11/13/2024) by @krrishdholakia in https://github.com/BerriAI/litellm/pull/6729
- feat - add us.llama 3.1 models by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6760
- (Feat) Add Vertex Model Garden llama 3.1 models by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6763
- (fix) Don't allow `viewer` roles to create virtual keys by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6764
- (feat) Use `litellm/` prefix when storing virtual keys in AWS secret manager by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6765
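The Bedrock Stable Image Ultra entry above adds the model string shown verbatim; a minimal sketch via litellm's image-generation API, assuming AWS credentials are already configured in the environment:

```python
import litellm

# Model string is copied verbatim from the changelog entry above.
# AWS credentials (region, keys) are assumed to be set in the environment.
image = litellm.image_generation(
    model="bedrock/stability.stable-image-ultra-v1:0",
    prompt="A lighthouse on a cliff at sunset",
)
# Depending on the provider, the payload may be a URL or base64 data.
print(image.data[0])
```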
New Contributors
- @IamRash-7 made their first contribution in https://github.com/BerriAI/litellm/pull/6740
Full Changelog: https://github.com/BerriAI/litellm/compare/v1.52.8...v1.52.9
Docker Run LiteLLM Proxy
```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.52.9
```
🎉 Don't want to maintain your internal proxy? Get in touch about the Hosted Proxy (Alpha): https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Failed | 280.0 | 314.28547197285235 | 6.039371468840217 | 0.0 | 1805 | 0 | 226.56484299994872 | 2776.9337409999935 |
Aggregated | Failed | 280.0 | 314.28547197285235 | 6.039371468840217 | 0.0 | 1805 | 0 | 226.56484299994872 | 2776.9337409999935 |
v1.52.8
What's Changed
- (chore) ci/cd fix - use correct `test_key_generate_prisma.py` by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6718
- Litellm key update fix by @krrishdholakia in https://github.com/BerriAI/litellm/pull/6710 (see the sketch after this list)
- Update code blocks in huggingface.md by @Aiden-Jeon in https://github.com/BerriAI/litellm/pull/6737
- Doc fix for prefix support by @CamdenClark in https://github.com/BerriAI/litellm/pull/6734
- (Feat) Add support for storing virtual keys in AWS SecretManager by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6728
- LiteLLM Minor Fixes & Improvement (11/14/2024) by @krrishdholakia in https://github.com/BerriAI/litellm/pull/6730
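The key-update fix above touches the proxy's key-management API (the `/key/update` route also gains a docstring in v1.52.10, per the notes earlier). A minimal sketch of updating a key's alias over HTTP; the master key, target key, and exact request fields are assumptions:

```python
import requests

# Update a virtual key's alias via the proxy's management API.
# Master key, virtual key, and the "key_alias" field name are assumptions.
resp = requests.post(
    "http://localhost:4000/key/update",
    headers={"Authorization": "Bearer sk-master-1234"},
    json={"key": "sk-existing-virtual-key", "key_alias": "team-a-dev"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```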
New Contributors
- @Aiden-Jeon made their first contribution in https://github.com/BerriAI/litellm/pull/6737
- @CamdenClark made their first contribution in https://github.com/BerriAI/litellm/pull/6734
Full Changelog: https://github.com/BerriAI/litellm/compare/v1.52.6...v1.52.8
Docker Run LiteLLM Proxy
```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.52.8
```
🎉 Don't want to maintain your internal proxy? Get in touch about the Hosted Proxy (Alpha): https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed | 270.0 | 298.55231204572533 | 6.139888957283805 | 0.0 | 1837 | 0 | 232.112771000061 | 1744.873116000008 |
Aggregated | Passed | 270.0 | 298.55231204572533 | 6.139888957283805 | 0.0 | 1837 | 0 | 232.112771000061 | 1744.873116000008 |
v1.52.6
What's Changed
- LiteLLM Minor Fixes & Improvements (11/12/2024) by @krrishdholakia in https://github.com/BerriAI/litellm/pull/6705
- (feat) helm hook to sync db schema by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6715
- (fix proxy redis) Add redis sentinel support by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6154
- Fix: Update gpt-4o costs to those of gpt-4o-2024-08-06 by @klieret in https://github.com/BerriAI/litellm/pull/6714
- (fix) using Anthropic `response_format={"type": "json_object"}` by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6721 (see the sketch after this list)
- (feat) Add cost tracking for Azure DALL-E 3 image generation + use base class to ensure basic image generation tests pass by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6716
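The Anthropic fix above uses the `response_format` parameter shown verbatim in the entry. A minimal sketch; the model string is a placeholder and an `ANTHROPIC_API_KEY` in the environment is assumed:

```python
import litellm

# response_format value is copied verbatim from the changelog entry above;
# the model string is a placeholder and ANTHROPIC_API_KEY is assumed set.
response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20240620",
    messages=[
        {"role": "user", "content": "List three primary colors as a JSON object."}
    ],
    response_format={"type": "json_object"},
)
print(response.choices[0].message.content)
```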
New Contributors
- @klieret made their first contribution in https://github.com/BerriAI/litellm/pull/6714
Full Changelog: https://github.com/BerriAI/litellm/compare/v1.52.5...v1.52.6
Docker Run LiteLLM Proxy
```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.52.6
```
🎉 Don't want to maintain your internal proxy? Get in touch about the Hosted Proxy (Alpha): https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed | 240.0 | 266.21521040425523 | 6.127671245386762 | 0.0 | 1833 | 0 | 215.80195500001764 | 2902.9665340000292 |
Aggregated | Passed | 240.0 | 266.21521040425523 | 6.127671245386762 | 0.0 | 1833 | 0 | 215.80195500001764 | 2902.9665340000292 |
v1.52.5
What's Changed
- Litellm dev 11 11 2024 by @krrishdholakia in https://github.com/BerriAI/litellm/pull/6693
- Add docs to export logs to Laminar by @dinmukhamedm in https://github.com/BerriAI/litellm/pull/6674
- (Feat) Add langsmith key based logging by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6682
- (fix) OpenAI's optional messages[].name does not work with Mistral API by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6701 (see the sketch after this list)
- (feat) add xAI on Admin UI by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6680
- (docs) add benchmarks on 1K RPS by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6704
- (feat) add cost tracking stable diffusion 3 on Bedrock by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6676
- fix: raise a correct 404 error when /key/info is called on a non-existent key by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6653
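The `messages[].name` fix above concerns OpenAI's optional per-message `name` field, which Mistral's API does not accept; the fix makes litellm handle it when routing to Mistral. A minimal sketch; the model string and a `MISTRAL_API_KEY` in the environment are assumptions:

```python
import litellm

# OpenAI's optional "name" field on a message; per the fix above, litellm
# now handles this safely when the request is routed to Mistral.
# Model string is a placeholder; MISTRAL_API_KEY is assumed set.
response = litellm.completion(
    model="mistral/mistral-large-latest",
    messages=[{"role": "user", "name": "alice", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```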
New Contributors
- @dinmukhamedm made their first contribution in https://github.com/BerriAI/litellm/pull/6674
Full Changelog: https://github.com/BerriAI/litellm/compare/v1.52.4...v1.52.5
Docker Run LiteLLM Proxy
```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.52.5
```
🎉 Don't want to maintain your internal proxy? Get in touch about the Hosted Proxy (Alpha): https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed | 200.0 | 216.13288200000045 | 6.215294300193555 | 0.0 | 1859 | 0 | 166.97629999998753 | 1726.1806539999611 |
Aggregated | Passed | 200.0 | 216.13288200000045 | 6.215294300193555 | 0.0 | 1859 | 0 | 166.97629999998753 | 1726.1806539999611 |
v1.52.4
What's Changed
- (feat) Add support for logging to GCS Buckets with folder paths by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6675 (see the sketch after this list)
- (feat) add bedrock image gen async support by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6672
- (feat) Add Bedrock Stability.ai Stable Diffusion 3 Image Generation models by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6673
- (Feat) 273% improvement GCS Bucket Logger - use Batched Logging by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6679
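For the GCS logging entry above, a minimal SDK-side sketch. The `gcs_bucket` callback name and the two environment variables follow litellm's GCS logging docs as I understand them; embedding the folder path in the bucket name, the model string, and the file paths are assumptions:

```python
import os
import litellm

# Callback name and env vars per litellm's GCS logging docs (assumption);
# putting the folder path inside GCS_BUCKET_NAME is also an assumption.
os.environ["GCS_BUCKET_NAME"] = "my-bucket/litellm-logs"
os.environ["GCS_PATH_SERVICE_ACCOUNT"] = "/path/to/service-account.json"

litellm.success_callback = ["gcs_bucket"]

# Any completion call will now be logged to the bucket (model is a placeholder).
response = litellm.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "hi"}],
)
```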
Full Changelog: https://github.com/BerriAI/litellm/compare/v1.52.3...v1.52.4
Docker Run LiteLLM Proxy
```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.52.4
```
🎉 Don't want to maintain your internal proxy? Get in touch about the Hosted Proxy (Alpha): https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed | 260.0 | 290.15274785816086 | 6.102299282865334 | 0.0 | 1826 | 0 | 221.48416699997142 | 3998.8694860000464 |
Aggregated | Passed | 260.0 | 290.15274785816086 | 6.102299282865334 | 0.0 | 1826 | 0 | 221.48416699997142 | 3998.8694860000464 |
Configuration
- [ ] If you want to rebase/retry this MR, check this box
This MR has been generated by Renovate Bot.