LiteLLM - Getting Started
https://github.com/BerriAI/litellm
Call 100+ LLMs using the OpenAI Input/Output Format
- Translate inputs to the provider's completion, embedding, and image_generation endpoints
- Consistent output: text responses are always available at ['choices'][0]['message']['content'] (see the sketch after this list)
- Retry/fallback logic across multiple deployments (e.g. Azure/OpenAI) - Router
- Track spend & set budgets per project - LiteLLM Proxy Server
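As a minimal sketch of that uniform output shape (assuming OPENAI_API_KEY and ANTHROPIC_API_KEY are set in your environment), the same access pattern works regardless of which provider served the request:

from litellm import completion

# the call shape is identical for every provider; only the model string changes
for model in ["openai/gpt-4o", "anthropic/claude-3-sonnet-20240229"]:
    response = completion(
        model=model,
        messages=[{"role": "user", "content": "Hello, how are you?"}],
    )
    # text is always at ['choices'][0]['message']['content']
    print(model, "->", response["choices"][0]["message"]["content"])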
How to use LiteLLM
You can use litellm through either:
- LiteLLM Proxy Server - server (LLM Gateway) to call 100+ LLMs, load balance, and track costs across projects
- LiteLLM Python SDK - Python client to call 100+ LLMs, load balance, and track costs
When to use LiteLLM Proxy Server (LLM Gateway)
Use the LiteLLM Proxy Server if you want a central service (LLM Gateway) to access multiple LLMs.
Typically used by Gen AI Enablement / ML Platform Teams.
- The LiteLLM Proxy gives you a unified interface to access multiple LLMs (100+ LLMs)
- Track LLM usage and set up guardrails
- Customize logging, guardrails, and caching per project
When to use LiteLLM Python SDK
Use the LiteLLM Python SDK if you want to use LiteLLM in your Python code.
Typically used by developers building LLM projects.
- The LiteLLM SDK gives you a unified interface to access multiple LLMs (100+ LLMs)
- Retry/fallback logic across multiple deployments (e.g. Azure/OpenAI) - Router (see the sketch below)
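A hedged sketch of that retry/fallback idea: the Router lets you register several deployments under one shared alias and retries or fails over between them. The deployment names and environment variables below are placeholders, not a definitive setup.

import os
from litellm import Router

# two hypothetical deployments registered under the same alias
router = Router(
    model_list=[
        {
            "model_name": "gpt-3.5-turbo",  # shared alias
            "litellm_params": {
                "model": "azure/<your_deployment_name>",
                "api_key": os.environ["AZURE_API_KEY"],
                "api_base": os.environ["AZURE_API_BASE"],
            },
        },
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {
                "model": "gpt-3.5-turbo",
                "api_key": os.environ["OPENAI_API_KEY"],
            },
        },
    ],
    num_retries=2,  # retry before falling back to another deployment
)

response = router.completion(
    model="gpt-3.5-turbo",  # routed to one of the deployments above
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)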
LiteLLM Python SDK
Basic usage
pip install litellm
- OpenAI
- Anthropic
- xAI
- VertexAI
- NVIDIA
- HuggingFace
- Azure OpenAI
- Ollama
- Openrouter
- Novita AI
from litellm import completion
import os

## set ENV variables
os.environ["OPENAI_API_KEY"] = "your-api-key"

response = completion(
    model="openai/gpt-4o",
    messages=[{"content": "Hello, how are you?", "role": "user"}]
)
from litellm import completion
import os

## set ENV variables
os.environ["ANTHROPIC_API_KEY"] = "your-api-key"

response = completion(
    model="anthropic/claude-3-sonnet-20240229",
    messages=[{"content": "Hello, how are you?", "role": "user"}]
)
from litellm import completion
import os

## set ENV variables
os.environ["XAI_API_KEY"] = "your-api-key"

response = completion(
    model="xai/grok-2-latest",
    messages=[{"content": "Hello, how are you?", "role": "user"}]
)
from litellm import completion
import os

# auth: run 'gcloud auth application-default login'
os.environ["VERTEXAI_PROJECT"] = "hardy-device-386718"
os.environ["VERTEXAI_LOCATION"] = "us-central1"

response = completion(
    model="vertex_ai/gemini-1.5-pro",
    messages=[{"content": "Hello, how are you?", "role": "user"}]
)
from litellm import completion
import os

## set ENV variables
os.environ["NVIDIA_NIM_API_KEY"] = "nvidia_api_key"
os.environ["NVIDIA_NIM_API_BASE"] = "nvidia_nim_endpoint_url"

response = completion(
    model="nvidia_nim/<model_name>",
    messages=[{"content": "Hello, how are you?", "role": "user"}]
)
from litellm import completion
import os

os.environ["HUGGINGFACE_API_KEY"] = "huggingface_api_key"

# e.g. Call 'WizardLM/WizardCoder-Python-34B-V1.0' hosted on HF Inference endpoints
response = completion(
    model="huggingface/WizardLM/WizardCoder-Python-34B-V1.0",
    messages=[{"content": "Hello, how are you?", "role": "user"}],
    api_base="https://my-endpoint.huggingface.cloud"
)

print(response)
from litellm import completion
import os

## set ENV variables
os.environ["AZURE_API_KEY"] = ""
os.environ["AZURE_API_BASE"] = ""
os.environ["AZURE_API_VERSION"] = ""

# azure call
response = completion(
    model="azure/<your_deployment_name>",
    messages=[{"content": "Hello, how are you?", "role": "user"}]
)
from litellm import completion

response = completion(
    model="ollama/llama2",
    messages=[{"content": "Hello, how are you?", "role": "user"}],
    api_base="http://localhost:11434"
)
from litellm import completion
import os

## set ENV variables
os.environ["OPENROUTER_API_KEY"] = "openrouter_api_key"

response = completion(
    model="openrouter/google/palm-2-chat-bison",
    messages=[{"content": "Hello, how are you?", "role": "user"}],
)
from litellm import completion
import os

## set ENV variables. Visit https://novita.ai/settings/key-management to get your API key
os.environ["NOVITA_API_KEY"] = "novita-api-key"

response = completion(
    model="novita/deepseek/deepseek-r1",
    messages=[{"content": "Hello, how are you?", "role": "user"}]
)
Response Format (OpenAI Format)
{
  "id": "chatcmpl-565d891b-a42e-4c39-8d14-82a1f5208885",
  "created": 1734366691,
  "model": "claude-3-sonnet-20240229",
  "object": "chat.completion",
  "system_fingerprint": null,
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "content": "Hello! As an AI language model, I don't have feelings, but I'm operating properly and ready to assist you with any questions or tasks you may have. How can I help you today?",
        "role": "assistant",
        "tool_calls": null,
        "function_call": null
      }
    }
  ],
  "usage": {
    "completion_tokens": 43,
    "prompt_tokens": 13,
    "total_tokens": 56,
    "completion_tokens_details": null,
    "prompt_tokens_details": {
      "audio_tokens": null,
      "cached_tokens": 0
    },
    "cache_creation_input_tokens": 0,
    "cache_read_input_tokens": 0
  }
}
Streaming
Set stream=True in the completion args.
- OpenAI
- Anthropic
- xAI
- VertexAI
- NVIDIA
- HuggingFace
- Azure OpenAI
- Ollama
- Openrouter
- Novita AI
from litellm import completion
import os

## set ENV variables
os.environ["OPENAI_API_KEY"] = "your-api-key"

response = completion(
    model="openai/gpt-4o",
    messages=[{"content": "Hello, how are you?", "role": "user"}],
    stream=True,
)
from litellm import completion
import os

## set ENV variables
os.environ["ANTHROPIC_API_KEY"] = "your-api-key"

response = completion(
    model="anthropic/claude-3-sonnet-20240229",
    messages=[{"content": "Hello, how are you?", "role": "user"}],
    stream=True,
)
from litellm import completion
import os

## set ENV variables
os.environ["XAI_API_KEY"] = "your-api-key"

response = completion(
    model="xai/grok-2-latest",
    messages=[{"content": "Hello, how are you?", "role": "user"}],
    stream=True,
)
from litellm import completion
import os

# auth: run 'gcloud auth application-default login'
os.environ["VERTEXAI_PROJECT"] = "hardy-device-386718"
os.environ["VERTEXAI_LOCATION"] = "us-central1"

response = completion(
    model="vertex_ai/gemini-1.5-pro",
    messages=[{"content": "Hello, how are you?", "role": "user"}],
    stream=True,
)
from litellm import completion
import os

## set ENV variables
os.environ["NVIDIA_NIM_API_KEY"] = "nvidia_api_key"
os.environ["NVIDIA_NIM_API_BASE"] = "nvidia_nim_endpoint_url"

response = completion(
    model="nvidia_nim/<model_name>",
    messages=[{"content": "Hello, how are you?", "role": "user"}],
    stream=True,
)
from litellm import completion
import os

os.environ["HUGGINGFACE_API_KEY"] = "huggingface_api_key"

# e.g. Call 'WizardLM/WizardCoder-Python-34B-V1.0' hosted on HF Inference endpoints
response = completion(
    model="huggingface/WizardLM/WizardCoder-Python-34B-V1.0",
    messages=[{"content": "Hello, how are you?", "role": "user"}],
    api_base="https://my-endpoint.huggingface.cloud",
    stream=True,
)

print(response)
from litellm import completion
import os

## set ENV variables
os.environ["AZURE_API_KEY"] = ""
os.environ["AZURE_API_BASE"] = ""
os.environ["AZURE_API_VERSION"] = ""

# azure call
response = completion(
    model="azure/<your_deployment_name>",
    messages=[{"content": "Hello, how are you?", "role": "user"}],
    stream=True,
)
from litellm import completion

response = completion(
    model="ollama/llama2",
    messages=[{"content": "Hello, how are you?", "role": "user"}],
    api_base="http://localhost:11434",
    stream=True,
)
from litellm import completion
import os

## set ENV variables
os.environ["OPENROUTER_API_KEY"] = "openrouter_api_key"

response = completion(
    model="openrouter/google/palm-2-chat-bison",
    messages=[{"content": "Hello, how are you?", "role": "user"}],
    stream=True,
)
from litellm import completion
import os

## set ENV variables. Visit https://novita.ai/settings/key-management to get your API key
os.environ["NOVITA_API_KEY"] = "novita_api_key"

response = completion(
    model="novita/deepseek/deepseek-r1",
    messages=[{"content": "Hello, how are you?", "role": "user"}],
    stream=True,
)
Streaming Response Format (OpenAI Format)
{
  "id": "chatcmpl-2be06597-eb60-4c70-9ec5-8cd2ab1b4697",
  "created": 1734366925,
  "model": "claude-3-sonnet-20240229",
  "object": "chat.completion.chunk",
  "system_fingerprint": null,
  "choices": [
    {
      "finish_reason": null,
      "index": 0,
      "delta": {
        "content": "Hello",
        "role": "assistant",
        "function_call": null,
        "tool_calls": null,
        "audio": null
      },
      "logprobs": null
    }
  ]
}
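A minimal sketch of consuming those chunks (OpenAI example, assuming OPENAI_API_KEY is set): each chunk carries an incremental delta rather than the full message, so you accumulate or print the delta content as it arrives.

from litellm import completion
import os

os.environ["OPENAI_API_KEY"] = "your-api-key"

response = completion(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
    stream=True,
)

# each chunk follows the chat.completion.chunk shape shown above
for chunk in response:
    content = chunk.choices[0].delta.content
    if content:
        print(content, end="")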
Exception handling
LiteLLM maps exceptions across all supported providers to the OpenAI exceptions. All our exceptions inherit from OpenAI's exception types, so any error handling you have for those should work out of the box with LiteLLM.
import os

from openai import OpenAIError  # openai v1.0.0+; litellm exceptions inherit from this
from litellm import completion

os.environ["ANTHROPIC_API_KEY"] = "bad-key"
try:
    # some code
    completion(model="claude-instant-1", messages=[{"role": "user", "content": "Hey, how's it going?"}])
except OpenAIError as e:
    print(e)
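Because the mapped exceptions also subclass OpenAI's specific error types, you can branch on them. A hedged sketch (the exact type raised depends on the provider and litellm version):

import os
import openai
from litellm import completion

os.environ["ANTHROPIC_API_KEY"] = "bad-key"
try:
    completion(
        model="anthropic/claude-3-sonnet-20240229",
        messages=[{"role": "user", "content": "Hey, how's it going?"}],
    )
except openai.AuthenticationError as e:
    print("bad credentials:", e)   # e.g. an invalid API key
except openai.RateLimitError as e:
    print("rate limited:", e)      # back off and retry
except openai.OpenAIError as e:
    print("other provider error:", e)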
Logging Observability - Log LLM Input/Output (Docs)
LiteLLM exposes pre-defined callbacks to send data to Lunary, MLflow, Langfuse, Helicone, Promptlayer, Traceloop, and Slack.
import os

import litellm
from litellm import completion

## set env variables for logging tools (API key set up is not required when using MLflow)
os.environ["LUNARY_PUBLIC_KEY"] = "your-lunary-public-key"  # get your public key at https://app.lunary.ai/settings
os.environ["HELICONE_API_KEY"] = "your-helicone-key"
os.environ["LANGFUSE_PUBLIC_KEY"] = ""
os.environ["LANGFUSE_SECRET_KEY"] = ""
os.environ["OPENAI_API_KEY"] = "your-openai-key"

# set callbacks
litellm.success_callback = ["lunary", "mlflow", "langfuse", "helicone"]  # log input/output to lunary, mlflow, langfuse, helicone

# openai call
response = completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}])
Track Costs, Usage, and Latency for Streaming
Use a callback function for this - more info on custom callbacks: https://docs.litellm.ai/docs/observability/custom_callback
import litellm
from litellm import completion

# track_cost_callback
def track_cost_callback(
    kwargs,                # kwargs to completion
    completion_response,   # response from completion
    start_time, end_time   # start/end time
):
    try:
        response_cost = kwargs.get("response_cost", 0)
        print("streaming response_cost", response_cost)
    except Exception:
        pass

# set callback
litellm.success_callback = [track_cost_callback]  # set custom callback function

# litellm.completion() call
response = completion(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "Hi 👋 - i'm openai"
        }
    ],
    stream=True
)
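For non-streaming calls you can also compute cost directly from the response. A small sketch using litellm's completion_cost helper (assuming OPENAI_API_KEY is set):

import os
from litellm import completion, completion_cost

os.environ["OPENAI_API_KEY"] = "your-api-key"

response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}],
)

# completion_cost reads the model + token usage off the response
cost = completion_cost(completion_response=response)
print(f"estimated cost: ${cost:.6f}")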
LiteLLM Proxy Server (LLM Gateway)
Track spend across multiple projects/people.
The proxy provides hooks for auth and logging, cost tracking, and rate limiting.
📖 Proxy Endpoints - Swagger Docs
A complete tutorial on using the proxy with keys + rate limits is available here.
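For a flavor of what that tutorial covers, here is a hedged sketch of minting a scoped virtual key via the proxy's /key/generate endpoint. It assumes a locally running proxy whose master key is sk-1234; both the URL and the key are placeholders, and the accepted body fields may vary by litellm version.

import requests

# hypothetical local proxy + placeholder master key
resp = requests.post(
    "http://0.0.0.0:4000/key/generate",
    headers={"Authorization": "Bearer sk-1234"},
    json={"models": ["gpt-3.5-turbo"], "max_budget": 10},  # scope + budget for the new key
)
print(resp.json())  # contains the generated virtual key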
Quick Start Proxy - CLI
pip install 'litellm[proxy]'
Step 1: Start the LiteLLM Proxy
- Pip package
- Docker container
$ litellm --model huggingface/bigcode/starcoder
#INFO: Proxy running on http://0.0.0.0:4000
Step 1. CREATE config.yaml
Example litellm_config.yaml
model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: azure/<your-azure-model-deployment>
      api_base: os.environ/AZURE_API_BASE # runs os.getenv("AZURE_API_BASE")
      api_key: os.environ/AZURE_API_KEY # runs os.getenv("AZURE_API_KEY")
      api_version: "2023-07-01-preview"
Step 2. RUN Docker Image
docker run \
    -v $(pwd)/litellm_config.yaml:/app/config.yaml \
    -e AZURE_API_KEY=d6*********** \
    -e AZURE_API_BASE=https://openai-***********/ \
    -p 4000:4000 \
    ghcr.io/berriai/litellm:main-latest \
    --config /app/config.yaml --detailed_debug
Step 2: Make a ChatCompletions Request to the Proxy
import openai  # openai v1.0.0+

client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:4000")  # set proxy to base_url

# request sent to model set on litellm proxy, `litellm --model`
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "this is a test request, write a short poem"
        }
    ]
)

print(response)
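Since the proxy speaks the OpenAI protocol, streaming works through the same client. A short sketch, reusing the placeholder api_key/base_url from above:

import openai  # openai v1.0.0+

client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:4000")

# stream tokens from whatever model the proxy routes gpt-3.5-turbo to
stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "write a short poem"}],
    stream=True,
)
for chunk in stream:
    content = chunk.choices[0].delta.content
    if content:
        print(content, end="")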