
chore(deps): update dependency litellm to v1.60.2

Merged: Soos requested to merge renovate/litellm-1.x-lockfile into main
1 unresolved thread

This MR contains the following updates:

| Package | Type | Update | Change |
| --- | --- | --- | --- |
| litellm | dependencies | minor | 1.55.9 -> 1.60.2 |
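
If you want to reproduce this bump locally before merging, here is a minimal sketch, assuming the project manages litellm with Poetry (the renovate/litellm-1.x-lockfile branch name suggests a lockfile-only update within the existing version constraint):

```shell
# Sketch: reproduce the lockfile-only bump (assumes a Poetry-managed project;
# the exact version constraint in pyproject.toml may differ).
poetry update litellm    # re-resolve litellm within the existing constraint
poetry show litellm      # confirm poetry.lock now pins 1.60.2
```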

:warning: Warning

Some dependencies could not be looked up. Check the warning logs for more information.

WARNING: this job ran in a Renovate pipeline that doesn't support the configuration required for common-ci-tasks Renovate presets.


Release Notes

BerriAI/litellm (litellm)

v1.60.0

What's Changed

Important Changes between v1.50.xx and v1.60.0

Known Issues

:rotating_light: Detected an issue with Langfuse logging when Langfuse credentials are stored in the DB

New Contributors

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.59.10...v1.60.0

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.60.0
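
Once the container is up, the proxy exposes an OpenAI-compatible API on port 4000 (STORE_MODEL_IN_DB=True persists model configurations added via the UI/API to the database). A minimal smoke test follows, as a sketch only: the model name and the sk-1234 key are placeholders that depend on how your proxy is configured.

```shell
# Sketch: smoke-test the proxy's OpenAI-compatible endpoint.
# "gpt-3.5-turbo" and the bearer token are placeholders; substitute a model
# and master key your proxy is actually configured with.
curl http://localhost:4000/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "ping"}]
      }'
```
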
Don't want to maintain your internal proxy? Get in touch :tada:

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed :white_check_mark: | 240.0 | 281.07 | 6.16 | 0.0 | 1843 | 0 | 215.80 | 3928.49 |
| Aggregated | Passed :white_check_mark: | 240.0 | 281.07 | 6.16 | 0.0 | 1843 | 0 | 215.80 | 3928.49 |

v1.59.10

Compare Source

What's Changed

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.59.9...v1.59.10

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.59.10
Don't want to maintain your internal proxy? Get in touch :tada:

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed :white_check_mark: | 210.0 | 239.25 | 6.22 | 0.0033 | 1861 | 1 | 73.25 | 3903.32 |
| Aggregated | Passed :white_check_mark: | 210.0 | 239.25 | 6.22 | 0.0033 | 1861 | 1 | 73.25 | 3903.32 |

v1.59.9

Compare Source

What's Changed

New Contributors

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.59.8...v1.59.9

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.59.9
Don't want to maintain your internal proxy? Get in touch :tada:

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed :x: | 270.0 | 301.02 | 6.14 | 0.0 | 1837 | 0 | 234.85 | 3027.24 |
| Aggregated | Failed :x: | 270.0 | 301.02 | 6.14 | 0.0 | 1837 | 0 | 234.85 | 3027.24 |

v1.59.8

Compare Source

What's Changed

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.59.7...v1.59.8

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.59.8
Don't want to maintain your internal proxy? Get in touch :tada:

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed :x: | 280.0 | 325.48 | 6.00 | 0.0 | 1796 | 0 | 234.57 | 3690.44 |
| Aggregated | Failed :x: | 280.0 | 325.48 | 6.00 | 0.0 | 1796 | 0 | 234.57 | 3690.44 |

v1.59.7

Compare Source

What's Changed

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.59.6...v1.59.7

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.59.7
Don't want to maintain your internal proxy? Get in touch :tada:

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed :white_check_mark: | 260.0 | 294.56 | 6.13 | 0.0 | 1832 | 0 | 231.05 | 2728.96 |
| Aggregated | Passed :white_check_mark: | 260.0 | 294.56 | 6.13 | 0.0 | 1832 | 0 | 231.05 | 2728.96 |

v1.59.6

Compare Source

What's Changed

New Contributors

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.59.5...v1.59.6

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.59.6
Don't want to maintain your internal proxy? Get in touch :tada:

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Failed :x: | 250.0 | 302.94 | 6.07 | 0.0 | 1814 | 0 | 184.99 | 3192.19 |
| Aggregated | Failed :x: | 250.0 | 302.94 | 6.07 | 0.0 | 1814 | 0 | 184.99 | 3192.19 |

v1.59.5

Compare Source

What's Changed

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.59.3...v1.59.5

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.59.5
Don't want to maintain your internal proxy? Get in touch :tada:

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed :white_check_mark: | 210.0 | 227.09 | 6.15 | 0.0 | 1840 | 0 | 180.77 | 2652.48 |
| Aggregated | Passed :white_check_mark: | 210.0 | 227.09 | 6.15 | 0.0 | 1840 | 0 | 180.77 | 2652.48 |

v1.59.3

Compare Source

What's Changed

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.59.2...v1.59.3

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.59.3
Don't want to maintain your internal proxy? Get in touch :tada:

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed :white_check_mark: | 200.0 | 230.00 | 6.28 | 0.0 | 1879 | 0 | 179.09 | 3769.75 |
| Aggregated | Passed :white_check_mark: | 200.0 | 230.00 | 6.28 | 0.0 | 1879 | 0 | 179.09 | 3769.75 |

v1.59.2

Compare Source

What's Changed

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.59.1...v1.59.2

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.59.2
Don't want to maintain your internal proxy? Get in touch :tada:

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed :white_check_mark: | 250.0 | 277.38 | 6.12 | 0.0 | 1832 | 0 | 225.22 | 1457.68 |
| Aggregated | Passed :white_check_mark: | 250.0 | 277.38 | 6.12 | 0.0 | 1832 | 0 | 225.22 | 1457.68 |

v1.59.1

Compare Source

What's Changed

New Contributors

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.59.0...v1.59.1

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.59.1
Don't want to maintain your internal proxy? Get in touch :tada:

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed :white_check_mark: | 250.0 | 295.76 | 6.03 | 0.0 | 1805 | 0 | 224.12 | 3576.67 |
| Aggregated | Passed :white_check_mark: | 250.0 | 295.76 | 6.03 | 0.0 | 1805 | 0 | 224.12 | 3576.67 |

v1.59.0

Compare Source

What's Changed

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.58.4...v1.59.0

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.59.0
Don't want to maintain your internal proxy? Get in touch :tada:

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed :white_check_mark: | 250.0 | 285.13 | 6.11 | 0.0 | 1827 | 0 | 224.69 | 2869.61 |
| Aggregated | Passed :white_check_mark: | 250.0 | 285.13 | 6.11 | 0.0 | 1827 | 0 | 224.69 | 2869.61 |

v1.58.4

Compare Source

What's Changed

New Contributors

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.58.2...v1.58.4

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.58.4
Don't want to maintain your internal proxy? Get in touch :tada:

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed :white_check_mark: | 200.0 | 237.22 | 6.13 | 0.0 | 1835 | 0 | 175.96 | 4047.41 |
| Aggregated | Passed :white_check_mark: | 200.0 | 237.22 | 6.13 | 0.0 | 1835 | 0 | 175.96 | 4047.41 |

v1.58.2

Compare Source

What's Changed

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.58.1...v1.58.2

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.58.2
Don't want to maintain your internal proxy? Get in touch :tada:

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed :white_check_mark: | 250.0 | 289.81 | 6.14 | 0.0 | 1838 | 0 | 228.12 | 2196.50 |
| Aggregated | Passed :white_check_mark: | 250.0 | 289.81 | 6.14 | 0.0 | 1838 | 0 | 228.12 | 2196.50 |

v1.58.1

Compare Source

:rotating_light: Alpha - 1.58.0 includes various performance improvements; we recommend waiting for a stable release before upgrading in production

What's Changed

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.58.0...v1.58.1

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.58.1
Don't want to maintain your internal proxy? Get in touch :tada:

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed :white_check_mark: | 250.0 | 294.30 | 6.05 | 0.0 | 1809 | 0 | 223.72 | 3539.42 |
| Aggregated | Passed :white_check_mark: | 250.0 | 294.30 | 6.05 | 0.0 | 1809 | 0 | 223.72 | 3539.42 |

v1.58.0

Compare Source

v1.58.0 - Alpha Release

:rotating_light: This is an alpha release - we've made several performance / RPS improvements to litellm core. If you see any issues, please file them at https://github.com/BerriAI/litellm/issues

What's Changed

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.57.11...v1.58.0

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.58.0
Don't want to maintain your internal proxy? Get in touch :tada:

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed :white_check_mark: | 240.0 | 273.22 | 6.12 | 0.0033 | 1829 | 1 | 75.17 | 3821.23 |
| Aggregated | Passed :white_check_mark: | 240.0 | 273.22 | 6.12 | 0.0033 | 1829 | 1 | 75.17 | 3821.23 |

v1.57.11

Compare Source

v1.57.11 - Alpha Release

:rotating_light: This is an alpha release - we've made several performance / RPS improvements to litellm core. If you see any issues, please file them at https://github.com/BerriAI/litellm/issues

What's Changed

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.57.10...v1.57.11

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.57.11
Don't want to maintain your internal proxy? Get in touch :tada:

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed :white_check_mark: | 240.0 | 270.56 | 6.13 | 0.0 | 1835 | 0 | 224.80 | 1207.87 |
| Aggregated | Passed :white_check_mark: | 240.0 | 270.56 | 6.13 | 0.0 | 1835 | 0 | 224.80 | 1207.87 |

v1.57.10

Compare Source

v1.57.10 - Alpha Release

:rotating_light: This is an alpha release - we've made several performance / RPS improvements to litellm core. If you see any issues, please file them at https://github.com/BerriAI/litellm/issues

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.57.8...v1.57.10

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.57.10
Don't want to maintain your internal proxy? Get in touch :tada:

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed :white_check_mark: | 240.0 | 264.06 | 6.18 | 0.0 | 1851 | 0 | 213.62 | 1622.62 |
| Aggregated | Passed :white_check_mark: | 240.0 | 264.06 | 6.18 | 0.0 | 1851 | 0 | 213.62 | 1622.62 |

v1.57.8

Compare Source

What's Changed

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.57.7...v1.57.8

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.57.8
Don't want to maintain your internal proxy? Get in touch :tada:

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed :white_check_mark: | 210.0 | 225.30 | 6.15 | 0.0 | 1841 | 0 | 177.73 | 2088.14 |
| Aggregated | Passed :white_check_mark: | 210.0 | 225.30 | 6.15 | 0.0 | 1841 | 0 | 177.73 | 2088.14 |

v1.57.7

Compare Source

What's Changed

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.57.5...v1.57.7

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.57.7
Don't want to maintain your internal proxy? Get in touch :tada:

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed :white_check_mark: | 200.0 | 218.47 | 6.22 | 0.0 | 1860 | 0 | 177.92 | 3911.61 |
| Aggregated | Passed :white_check_mark: | 200.0 | 218.47 | 6.22 | 0.0 | 1860 | 0 | 177.92 | 3911.61 |

v1.57.5

Compare Source

:rotating_light::rotating_light: Known issue - do not upgrade - this release has a Windows compatibility issue

Relevant issue: https://github.com/BerriAI/litellm/issues/7677

What's Changed

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.57.4...v1.57.5

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.57.5
Don't want to maintain your internal proxy? Get in touch :tada:

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed :white_check_mark: | 230.0 | 282.70 | 6.12 | 0.0 | 1830 | 0 | 206.44 | 3375.45 |
| Aggregated | Passed :white_check_mark: | 230.0 | 282.70 | 6.12 | 0.0 | 1830 | 0 | 206.44 | 3375.45 |

v1.57.4

Compare Source

What's Changed

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.57.3...v1.57.4

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.57.4
Don't want to maintain your internal proxy? Get in touch :tada:

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed :white_check_mark: | 200.0 | 218.76 | 6.27 | 0.0 | 1876 | 0 | 170.95 | 1424.49 |
| Aggregated | Passed :white_check_mark: | 200.0 | 218.76 | 6.27 | 0.0 | 1876 | 0 | 170.95 | 1424.49 |

v1.57.3

Compare Source

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.57.2...v1.57.3

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.57.3
Don't want to maintain your internal proxy? Get in touch :tada:

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed :white_check_mark: | 240.0 | 273.58 | 6.10 | 0.0 | 1826 | 0 | 209.39 | 2450.73 |
| Aggregated | Passed :white_check_mark: | 240.0 | 273.58 | 6.10 | 0.0 | 1826 | 0 | 209.39 | 2450.73 |

v1.57.2

Compare Source

What's Changed

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.57.1...v1.57.2

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.57.2
Don't want to maintain your internal proxy? Get in touch :tada:

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed :white_check_mark: | 190.0 | 212.24 | 6.34 | 0.0 | 1898 | 0 | 174.49 | 3470.60 |
| Aggregated | Passed :white_check_mark: | 190.0 | 212.24 | 6.34 | 0.0 | 1898 | 0 | 174.49 | 3470.60 |

v1.57.1

Compare Source

What's Changed

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.57.0...v1.57.1

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.57.1
Don't want to maintain your internal proxy? Get in touch :tada:

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed :white_check_mark: | 250.0 | 286.97 | 6.04 | 0.0 | 1806 | 0 | 226.67 | 3887.53 |
| Aggregated | Passed :white_check_mark: | 250.0 | 286.97 | 6.04 | 0.0 | 1806 | 0 | 226.67 | 3887.53 |

v1.57.0

Compare Source

What's Changed

New Contributors

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.56.10...v1.57.0

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.57.0
Don't want to maintain your internal proxy? Get in touch :tada:

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed :white_check_mark: | 200.0 | 212.84 | 6.20 | 0.0 | 1854 | 0 | 174.45 | 1346.32 |
| Aggregated | Passed :white_check_mark: | 200.0 | 212.84 | 6.20 | 0.0 | 1854 | 0 | 174.45 | 1346.32 |

v1.56.10

Compare Source

What's Changed

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.56.9...v1.56.10

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.56.10
Don't want to maintain your internal proxy? Get in touch :tada:

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed :white_check_mark: | 230.0 | 268.33 | 6.22 | 0.0 | 1861 | 0 | 212.36 | 3556.74 |
| Aggregated | Passed :white_check_mark: | 230.0 | 268.33 | 6.22 | 0.0 | 1861 | 0 | 212.36 | 3556.74 |

v1.56.9

Compare Source

What's Changed

New Contributors

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.56.8...v1.56.9

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.56.9
Don't want to maintain your internal proxy? Get in touch :tada:

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed :white_check_mark: | 240.0 | 269.40 | 6.15 | 0.0 | 1840 | 0 | 211.96 | 2571.21 |
| Aggregated | Passed :white_check_mark: | 240.0 | 269.40 | 6.15 | 0.0 | 1840 | 0 | 211.96 | 2571.21 |

v1.56.8

Compare Source

What's Changed

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.56.6...v1.56.8

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.56.8
Don't want to maintain your internal proxy? Get in touch :tada:

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed :white_check_mark: | 230.0 | 247.82 | 6.18 | 0.0 | 1850 | 0 | 191.82 | 2126.87 |
| Aggregated | Passed :white_check_mark: | 230.0 | 247.82 | 6.18 | 0.0 | 1850 | 0 | 191.82 | 2126.87 |

v1.56.6

Compare Source

What's Changed

New Contributors

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.56.5...v1.56.6

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.56.6
Don't want to maintain your internal proxy? Get in touch :tada:

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed :white_check_mark: | 250.0 | 287.41 | 6.11 | 0.0 | 1830 | 0 | 228.32 | 3272.64 |
| Aggregated | Passed :white_check_mark: | 250.0 | 287.41 | 6.11 | 0.0 | 1830 | 0 | 228.32 | 3272.64 |

v1.56.5

Compare Source

What's Changed

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.56.4...v1.56.5

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.56.5
Don't want to maintain your internal proxy? Get in touch :tada:

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed :white_check_mark: | 230.0 | 268.06 | 6.17 | 0.0 | 1848 | 0 | 212.09 | 3189.48 |
| Aggregated | Passed :white_check_mark: | 230.0 | 268.06 | 6.17 | 0.0 | 1848 | 0 | 212.09 | 3189.48 |

v1.56.4

Compare Source

What's Changed

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.56.3...v1.56.4

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.56.4
Don't want to maintain your internal proxy? Get in touch :tada:

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed :white_check_mark: | 240.0 | 268.74 | 6.12 | 0.0 | 1829 | 0 | 214.29 | 1969.76 |
| Aggregated | Passed :white_check_mark: | 240.0 | 268.74 | 6.12 | 0.0 | 1829 | 0 | 214.29 | 1969.76 |

v1.56.3

Compare Source

What's Changed

New Contributors

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.56.2...v1.56.3

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.56.3
Don't want to maintain your internal proxy? Get in touch :tada:

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed :white_check_mark: | 250.0 | 276.97 | 6.15 | 0.0033 | 1840 | 1 | 112.37 | 1700.14 |
| Aggregated | Passed :white_check_mark: | 250.0 | 276.97 | 6.15 | 0.0033 | 1840 | 1 | 112.37 | 1700.14 |

v1.56.2

Compare Source

What's Changed

New Contributors

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.55.12...v1.56.2

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.56.2
Don't want to maintain your internal proxy? Get in touch :tada:

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed :white_check_mark: | 250.0 | 275.32 | 6.14 | 0.0 | 1838 | 0 | 224.26 | 1437.55 |
| Aggregated | Passed :white_check_mark: | 250.0 | 275.32 | 6.14 | 0.0 | 1838 | 0 | 224.26 | 1437.55 |

v1.55.12

Compare Source

What's Changed

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.55.11...v1.55.12

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.55.12
Don't want to maintain your internal proxy? Get in touch :tada:

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed :white_check_mark: | 220.0 | 241.51 | 6.33 | 0.0 | 1895 | 0 | 191.11 | 3854.99 |
| Aggregated | Passed :white_check_mark: | 220.0 | 241.51 | 6.33 | 0.0 | 1895 | 0 | 191.11 | 3854.99 |

v1.55.11

Compare Source

What's Changed

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.55.10...v1.55.11

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.55.11
Don't want to maintain your internal proxy? Get in touch :tada:

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed :white_check_mark: | 250.0 | 290.39 | 6.03 | 0.0 | 1804 | 0 | 229.06 | 2909.61 |
| Aggregated | Passed :white_check_mark: | 250.0 | 290.39 | 6.03 | 0.0 | 1804 | 0 | 229.06 | 2909.61 |

v1.55.10

Compare Source

What's Changed

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.55.9...v1.55.10

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.55.10
Don't want to maintain your internal proxy? Get in touch :tada:

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed :white_check_mark: | 200.0 | 218.25 | 6.26 | 0.0 | 1871 | 0 | 177.72 | 1940.16 |
| Aggregated | Passed :white_check_mark: | 200.0 | 218.25 | 6.26 | 0.0 | 1871 | 0 | 177.72 | 1940.16 |

Configuration

:date: Schedule: Branch creation - "every weekend" (UTC), Automerge - At any time (no schedule defined).

:vertical_traffic_light: Automerge: Disabled by config. Please merge this manually once you are satisfied.

:recycle: Rebasing: Whenever the MR becomes conflicted, or you tick the rebase/retry checkbox.

:no_bell: Ignore: Close this MR and you won't be reminded about this update again.


  • If you want to rebase/retry this MR, check this box
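
The schedule and automerge behavior above come from the repository's Renovate configuration. Below is a sketch of rules that would produce this behavior; the file name and rule shape are assumptions, since the actual settings live in the common-ci-tasks presets mentioned in the warning at the top:

```shell
# Sketch: Renovate rules matching the behavior described above (assumed shape;
# the real settings come from the common-ci-tasks presets).
cat > renovate.json <<'EOF'
{
  "packageRules": [
    {
      "matchPackageNames": ["litellm"],
      "schedule": ["every weekend"],
      "automerge": false
    }
  ]
}
EOF
```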

This MR has been generated by Renovate Bot.

Edited by Soos

Activity
76 76   [tool.poetry.group.lint.dependencies]
77 77   flake8 = "^7.0.0"
78 78   isort = "^5.12.0"
79    - black = "^25.0.0"
   79 + black = "^24.0.0"
  • Tan Le mentioned in merge request !1955 (merged)

  • Tan Le changed milestone to %17.9