Perplexity AI (pplx-api)
API Key
# env variable
os.environ['PERPLEXITYAI_API_KEY'] = "your-api-key"
Example Usage
from litellm import completion
import os

os.environ['PERPLEXITYAI_API_KEY'] = ""

messages = [{"role": "user", "content": "Hey, how's it going?"}]

response = completion(
    model="perplexity/sonar-pro",
    messages=messages
)
print(response)
Example Usage - Streaming
from litellm import completion
import os

os.environ['PERPLEXITYAI_API_KEY'] = ""

messages = [{"role": "user", "content": "Hey, how's it going?"}]

response = completion(
    model="perplexity/sonar-pro",
    messages=messages,
    stream=True
)
for chunk in response:
    print(chunk)
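If you want the full reply as a single string instead of printing raw chunks, you can collect the delta content from each chunk. This is a minimal sketch assuming the OpenAI-style chunk format that LiteLLM streaming returns (`chunk.choices[0].delta.content`, which may be None for some chunks):

full_reply = ""
for chunk in response:
    # each streamed chunk carries an incremental delta; content can be None
    delta = chunk.choices[0].delta.content
    if delta:
        full_reply += delta
print(full_reply)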
Supported Models
All models listed here https://docs.perplexity.ai/docs/model-cards are supported. Just do model=perplexity/<model-name>.
| Model Name | Function Call |
|---|---|
| sonar-deep-research | completion(model="perplexity/sonar-deep-research", messages) |
| sonar-reasoning-pro | completion(model="perplexity/sonar-reasoning-pro", messages) |
| sonar-reasoning | completion(model="perplexity/sonar-reasoning", messages) |
| sonar-pro | completion(model="perplexity/sonar-pro", messages) |
| sonar | completion(model="perplexity/sonar", messages) |
| r1-1776 | completion(model="perplexity/r1-1776", messages) |
Info
For more information on passing provider-specific params, see here
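As a rough sketch of the idea: extra keyword arguments passed to completion() are forwarded to the provider as provider-specific params. The web_search_options parameter below is only an assumed example and is not confirmed by this page; check Perplexity's documentation for the parameters it actually accepts.

from litellm import completion
import os

os.environ['PERPLEXITYAI_API_KEY'] = ""

# Extra kwargs are forwarded to the provider.
# `web_search_options` is an assumed illustrative parameter, not confirmed here.
response = completion(
    model="perplexity/sonar-pro",
    messages=[{"role": "user", "content": "What happened in AI news this week?"}],
    web_search_options={"search_context_size": "medium"}
)
print(response)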