chore(deps): update dependency litellm to v1.40.15

renovate requested to merge renovate/litellm-1.x-lockfile into main

This MR contains the following updates:

| Package | Type | Update | Change |
|---------|------|--------|--------|
| litellm | dependencies | patch | `1.40.0` -> `1.40.15` |

Warning: Some dependencies could not be looked up. Check the warning logs for more information.

Release Notes

BerriAI/litellm (litellm)

v1.40.15

Compare Source

What's Changed

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.40.14...v1.40.15

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.40.15
```

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
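
Once the container is up, the proxy exposes an OpenAI-compatible API on port 4000. A minimal smoke test against the /chat/completions endpoint exercised in the load tests below — the model name and the sk-1234 key are placeholders for whatever is configured on your proxy:

```shell
# Send an OpenAI-style chat completion request to the local proxy.
# "gpt-3.5-turbo" and the bearer token are illustrative placeholders.
curl http://localhost:4000/chat/completions \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer sk-1234' \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello, world"}]
  }'
```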

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|------|--------|---------------------------|----------------------------|------------|------------|---------------|---------------|------------------------|------------------------|
| /chat/completions | Passed | 120.0 | 132.58387302297749 | 6.398687111538595 | 0.0 | 1915 | 0 | 97.12711200000967 | 1186.0091809999744 |
| Aggregated | Passed | 120.0 | 132.58387302297749 | 6.398687111538595 | 0.0 | 1915 | 0 | 97.12711200000967 | 1186.0091809999744 |

v1.40.14

Compare Source

What's Changed

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.40.13...v1.40.14

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.40.14
```

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|------|--------|---------------------------|----------------------------|------------|------------|---------------|---------------|------------------------|------------------------|
| /chat/completions | Passed | 120.0 | 141.18410333195084 | 6.441903839147897 | 0.0 | 1928 | 0 | 105.22602600002529 | 510.8018800000025 |
| Aggregated | Passed | 120.0 | 141.18410333195084 | 6.441903839147897 | 0.0 | 1928 | 0 | 105.22602600002529 | 510.8018800000025 |

v1.40.13

Compare Source

What's Changed

New Contributors

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.40.12...v1.40.13

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.40.13
```

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|------|--------|---------------------------|----------------------------|------------|------------|---------------|---------------|------------------------|------------------------|
| /chat/completions | Passed | 74 | 85.12421177852299 | 6.470441187117138 | 0.0 | 1937 | 0 | 63.80303100002038 | 1377.5951729999178 |
| Aggregated | Passed | 74 | 85.12421177852299 | 6.470441187117138 | 0.0 | 1937 | 0 | 63.80303100002038 | 1377.5951729999178 |

v1.40.12

Compare Source

What's Changed

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.40.11...v1.40.12

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.40.12
```

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|------|--------|---------------------------|----------------------------|------------|------------|---------------|---------------|------------------------|------------------------|
| /chat/completions | Passed | 94 | 113.78855698077908 | 6.432303146239259 | 0.0 | 1925 | 0 | 80.02467099998967 | 1025.8250419999513 |
| Aggregated | Passed | 94 | 113.78855698077908 | 6.432303146239259 | 0.0 | 1925 | 0 | 80.02467099998967 | 1025.8250419999513 |

v1.40.11

Compare Source

What's Changed

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.40.10...v1.40.11

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.40.11
```

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|------|--------|---------------------------|----------------------------|------------|------------|---------------|---------------|------------------------|------------------------|
| /chat/completions | Passed | 120.0 | 140.50671503682315 | 6.351765918831661 | 0.0 | 1901 | 0 | 96.28972799998792 | 1490.2560670000184 |
| Aggregated | Passed | 120.0 | 140.50671503682315 | 6.351765918831661 | 0.0 | 1901 | 0 | 96.28972799998792 | 1490.2560670000184 |

v1.40.10

Compare Source

What's Changed

New Contributors

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.40.9...v1.40.10

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.40.10
```

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|------|--------|---------------------------|----------------------------|------------|------------|---------------|---------------|------------------------|------------------------|
| /chat/completions | Passed | 140.0 | 172.37660025809805 | 6.297822628765798 | 0.0 | 1883 | 0 | 114.60945100003528 | 3651.5153230000124 |
| Aggregated | Passed | 140.0 | 172.37660025809805 | 6.297822628765798 | 0.0 | 1883 | 0 | 114.60945100003528 | 3651.5153230000124 |

v1.40.9

Compare Source

What's Changed

New Contributors

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.40.8...v1.40.9

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.40.9
```

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|------|--------|---------------------------|----------------------------|------------|------------|---------------|---------------|------------------------|------------------------|
| /chat/completions | Passed | 95 | 118.26463258740928 | 6.42020613574963 | 0.0 | 1922 | 0 | 78.571060999991 | 1634.9082140000064 |
| Aggregated | Passed | 95 | 118.26463258740928 | 6.42020613574963 | 0.0 | 1922 | 0 | 78.571060999991 | 1634.9082140000064 |

v1.40.8

Compare Source

What's Changed

Client Side Fallbacks: https://docs.litellm.ai/docs/proxy/reliability#test---client-side-fallbacks
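
A sketch of what a client-side fallback request can look like, based on the linked reliability doc: the client passes a fallbacks list in the request body, and the proxy retries those models if the primary deployment fails. The model names here are placeholders for whatever is configured on your proxy:

```shell
# Hypothetical models: "zephyr-beta" as primary, "gpt-3.5-turbo" as fallback.
curl http://localhost:4000/chat/completions \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer sk-1234' \
  -d '{
    "model": "zephyr-beta",
    "messages": [{"role": "user", "content": "ping"}],
    "fallbacks": ["gpt-3.5-turbo"]
  }'
```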

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.40.7...v1.40.8

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.40.8
```

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|------|--------|---------------------------|----------------------------|------------|------------|---------------|---------------|------------------------|------------------------|
| /chat/completions | Passed | 140.0 | 169.11120714803027 | 6.281005310183787 | 0.0 | 1878 | 0 | 114.50119100004486 | 1457.4686270000257 |
| Aggregated | Passed | 140.0 | 169.11120714803027 | 6.281005310183787 | 0.0 | 1878 | 0 | 114.50119100004486 | 1457.4686270000257 |

v1.40.7

Compare Source

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.40.6...v1.40.7

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.40.7
```

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|------|--------|---------------------------|----------------------------|------------|------------|---------------|---------------|------------------------|------------------------|
| /chat/completions | Passed | 97 | 126.50565680197539 | 6.4278560269757214 | 0.003340881510902142 | 1924 | 1 | 82.64289499999222 | 1316.4627209999935 |
| Aggregated | Passed | 97 | 126.50565680197539 | 6.4278560269757214 | 0.003340881510902142 | 1924 | 1 | 82.64289499999222 | 1316.4627209999935 |

v1.40.6

Compare Source

🚨 Note: LiteLLM Proxy added opentelemetry as a dependency in this release. We recommend waiting for a stable release before upgrading your production instances.

LiteLLM Python SDK users: you should be unaffected by this change (opentelemetry was only added for the proxy server).

🔥 LiteLLM 1.40.6 - Proxy 100+ LLMs at scale with our production-grade OpenTelemetry logger. Trace LLM API calls, DB requests, and cache requests 👉 Start here: https://docs.litellm.ai/docs/proxy/logging#logging-proxy-inputoutput-in-opentelemetry-format
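
A rough sketch of turning the OpenTelemetry logger on when launching the proxy, assuming the OTEL_EXPORTER / OTEL_ENDPOINT environment variables and the `otel` callback described in the linked logging doc; the exporter name and collector endpoint below are illustrative:

```shell
# Assumes config.yaml enables the logger via litellm_settings: callbacks: ["otel"],
# and that an OTLP/HTTP collector is listening at the endpoint below.
docker run \
  -e STORE_MODEL_IN_DB=True \
  -e OTEL_EXPORTER="otlp_http" \
  -e OTEL_ENDPOINT="http://host.docker.internal:4318" \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.40.6
```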

- 🐞 [Fix] Allow redacting messages from Slack alerting: https://docs.litellm.ai/docs/proxy/alerting#advanced---redacting-messages-from-alerts
- 🔨 [Refactor] Refactor proxy_server.py to use a common function for add_litellm_data_to_request
- [Feat] OpenTelemetry - Log exceptions from the proxy server
- [Feat] OpenTelemetry - Log Redis cache reads/writes
- [Feat] OpenTelemetry - Log DB exceptions
- [Feat] OpenTelemetry - Instrument DB reads
- 🐞 [Fix] UI - Allow a custom logout URL and show the proxy base URL on the API Ref page

What's Changed

New Contributors

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.40.5...v1.40.6

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.40.6
```

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|------|--------|---------------------------|----------------------------|------------|------------|---------------|---------------|------------------------|------------------------|
| /chat/completions | Passed | 130.0 | 151.53218399526997 | 6.362696017911015 | 0.0 | 1903 | 0 | 109.01354200001379 | 1319.1295889999992 |
| Aggregated | Passed | 130.0 | 151.53218399526997 | 6.362696017911015 | 0.0 | 1903 | 0 | 109.01354200001379 | 1319.1295889999992 |

v1.40.5

Compare Source

What's Changed

New Contributors

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.40.4...v1.40.5

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.40.5
```

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|------|--------|---------------------------|----------------------------|------------|------------|---------------|---------------|------------------------|------------------------|
| /chat/completions | Passed | 98 | 123.75303621190369 | 6.512790176735744 | 0.0 | 1949 | 0 | 80.83186400000386 | 1991.117886999973 |
| Aggregated | Passed | 98 | 123.75303621190369 | 6.512790176735744 | 0.0 | 1949 | 0 | 80.83186400000386 | 1991.117886999973 |

v1.40.4

Compare Source

What's Changed

New Contributors

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.40.3...v1.40.4

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.40.4
```

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|------|--------|---------------------------|----------------------------|------------|------------|---------------|---------------|------------------------|------------------------|
| /chat/completions | Passed | 74 | 89.43947919222931 | 6.450062450815326 | 0.0 | 1930 | 0 | 64.37952199996744 | 1143.0389689999743 |
| Aggregated | Passed | 74 | 89.43947919222931 | 6.450062450815326 | 0.0 | 1930 | 0 | 64.37952199996744 | 1143.0389689999743 |

v1.40.3

Compare Source

What's Changed

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.40.2...v1.40.3

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.40.3
```

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|------|--------|---------------------------|----------------------------|------------|------------|---------------|---------------|------------------------|------------------------|
| /chat/completions | Passed | 130.0 | 168.35103872813087 | 6.385058663866248 | 0.0 | 1909 | 0 | 109.50845100001061 | 8353.559378 |
| Aggregated | Passed | 130.0 | 168.35103872813087 | 6.385058663866248 | 0.0 | 1909 | 0 | 109.50845100001061 | 8353.559378 |

v1.40.2

Compare Source

What's Changed

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.40.1...v1.40.2

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.40.2
```

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|------|--------|---------------------------|----------------------------|------------|------------|---------------|---------------|------------------------|------------------------|
| /chat/completions | Passed | 72 | 86.0339053382131 | 6.392727588765549 | 0.0 | 1913 | 0 | 61.2748209999836 | 896.4834699999642 |
| Aggregated | Passed | 72 | 86.0339053382131 | 6.392727588765549 | 0.0 | 1913 | 0 | 61.2748209999836 | 896.4834699999642 |

v1.40.1

Compare Source

What's Changed

New Contributors

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.40.0...v1.40.1

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.40.1
```

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|------|--------|---------------------------|----------------------------|------------|------------|---------------|---------------|------------------------|------------------------|
| /chat/completions | Passed | 120.0 | 139.78250550967104 | 6.395300383667639 | 0.0 | 1913 | 0 | 95.28932899991105 | 1526.2213239999483 |
| Aggregated | Passed | 120.0 | 139.78250550967104 | 6.395300383667639 | 0.0 | 1913 | 0 | 95.28932899991105 | 1526.2213239999483 |

Configuration

📅 Schedule: Branch creation - "every weekend" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever the MR becomes conflicted, or when you tick the rebase/retry checkbox.

🔕 Ignore: Close this MR and you won't be reminded about this update again.


- [ ] If you want to rebase/retry this MR, check this box

This MR has been generated by Renovate Bot.
