
Featherless AI

https://featherless.ai/

Tip

We support ALL Featherless AI models. Just set model=featherless_ai/<any-model-on-featherless> as a prefix when sending LiteLLM requests. For the complete list of supported models, please visit https://featherless.ai/models
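Since any Featherless model id only needs the featherless_ai/ prefix to be routed by LiteLLM, a tiny helper can build the prefixed id. This is an illustrative sketch; qualify is not part of LiteLLM:

```python
def qualify(model_id: str) -> str:
    """Return a model id routed to Featherless AI via LiteLLM's provider prefix."""
    prefix = "featherless_ai/"
    # Leave already-prefixed ids untouched so the helper is idempotent
    return model_id if model_id.startswith(prefix) else prefix + model_id

print(qualify("Qwen/Qwen2.5-72B-Instruct"))
# featherless_ai/Qwen/Qwen2.5-72B-Instruct
```

The returned string is what you pass as the model argument to completion().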

API Key

import os

# env variable
os.environ['FEATHERLESS_AI_API_KEY'] = ""

Sample Usage

from litellm import completion
import os

os.environ['FEATHERLESS_AI_API_KEY'] = ""
response = completion(
model="featherless_ai/featherless-ai/Qwerky-72B",
messages=[{"role": "user", "content": "write code for saying hi from LiteLLM"}]
)

Sample Usage - Streaming

from litellm import completion
import os

os.environ['FEATHERLESS_AI_API_KEY'] = ""
response = completion(
model="featherless_ai/featherless-ai/Qwerky-72B",
messages=[{"role": "user", "content": "write code for saying hi from LiteLLM"}],
stream=True
)

for chunk in response:
print(chunk)
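Each streamed chunk follows the OpenAI streaming shape, with the text delta at chunk.choices[0].delta.content. Below is a minimal sketch of assembling the full reply from the chunks, using mock objects in place of a live response so it runs without an API key; collect_stream is an illustrative name, not a LiteLLM function:

```python
from types import SimpleNamespace

def collect_stream(chunks) -> str:
    """Join the text deltas of an OpenAI-style streaming response into one string."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta is not None:  # the final chunk carries no content
            parts.append(delta)
    return "".join(parts)

# Mock chunks standing in for a live streamed response:
mock_chunks = [
    SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=text))])
    for text in ["Hello", " from", " LiteLLM", None]
]
print(collect_stream(mock_chunks))  # Hello from LiteLLM
```

With a real key, passing the response object from the streaming example above in place of mock_chunks yields the assembled text the same way.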

Chat Models

| Model Name | Function Call |
| --- | --- |
| featherless-ai/Qwerky-72B | completion(model="featherless_ai/featherless-ai/Qwerky-72B", messages) |
| featherless-ai/Qwerky-QwQ-32B | completion(model="featherless_ai/featherless-ai/Qwerky-QwQ-32B", messages) |
| Qwen/Qwen2.5-72B-Instruct | completion(model="featherless_ai/Qwen/Qwen2.5-72B-Instruct", messages) |
| all-hands/openhands-lm-32b-v0.1 | completion(model="featherless_ai/all-hands/openhands-lm-32b-v0.1", messages) |
| Qwen/Qwen2.5-Coder-32B-Instruct | completion(model="featherless_ai/Qwen/Qwen2.5-Coder-32B-Instruct", messages) |
| deepseek-ai/DeepSeek-V3-0324 | completion(model="featherless_ai/deepseek-ai/DeepSeek-V3-0324", messages) |
| mistralai/Mistral-Small-24B-Instruct-2501 | completion(model="featherless_ai/mistralai/Mistral-Small-24B-Instruct-2501", messages) |
| mistralai/Mistral-Nemo-Instruct-2407 | completion(model="featherless_ai/mistralai/Mistral-Nemo-Instruct-2407", messages) |
| ProdeusUnity/Stellar-Odyssey-12b-v0.0 | completion(model="featherless_ai/ProdeusUnity/Stellar-Odyssey-12b-v0.0", messages) |
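If you route several of the models above, keeping their fully qualified ids in one place avoids prefix typos. A hedged sketch using the table's ids; FEATHERLESS_CHAT_MODELS and models_by_org are illustrative names, not LiteLLM constants:

```python
# Fully qualified model ids from the table above (illustrative constant)
FEATHERLESS_CHAT_MODELS = [
    "featherless_ai/featherless-ai/Qwerky-72B",
    "featherless_ai/featherless-ai/Qwerky-QwQ-32B",
    "featherless_ai/Qwen/Qwen2.5-72B-Instruct",
    "featherless_ai/all-hands/openhands-lm-32b-v0.1",
    "featherless_ai/Qwen/Qwen2.5-Coder-32B-Instruct",
    "featherless_ai/deepseek-ai/DeepSeek-V3-0324",
    "featherless_ai/mistralai/Mistral-Small-24B-Instruct-2501",
    "featherless_ai/mistralai/Mistral-Nemo-Instruct-2407",
    "featherless_ai/ProdeusUnity/Stellar-Odyssey-12b-v0.0",
]

def models_by_org(org: str) -> list:
    """Filter the ids by the organization segment after the featherless_ai/ prefix."""
    return [m for m in FEATHERLESS_CHAT_MODELS if m.split("/")[1] == org]

print(models_by_org("mistralai"))
```

Any id from the list can be passed directly as the model argument to completion().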