LLM: send a prompt and an optional embedding to our large language processor.
LLM API
Overview
The LLM API provides access to language model capabilities for text generation and processing.
Version
1.001
Endpoint
/v2/llm
Authentication
This API requires authentication using an API key. The key should be passed in the X-API-Key header.
Request
The API accepts a POST request with an application/x-www-form-urlencoded body containing the following parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | No | The specific language model to use (e.g., 'solar'). If not specified, a default model is used. |
| prompt | string | Yes | The text prompt to send to the language model. |
| embed | string | No | Additional text content to be processed alongside the prompt. |
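As an illustration, a request could be assembled in PHP with cURL, as in the sketch below. The base URL (api.example.com) is a placeholder, since the host is not specified in this document; the key and field values are taken from the example request further down.

```php
<?php
// Sketch: POST to /v2/llm with a form-urlencoded body.
// 'https://api.example.com' and the API key are placeholders.
$ch = curl_init('https://api.example.com/v2/llm');
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_RETURNTRANSFER => true,
    // Authenticate via the X-API-Key header described above.
    CURLOPT_HTTPHEADER     => ['X-API-Key: your_api_key_here'],
    // http_build_query() produces an application/x-www-form-urlencoded body.
    CURLOPT_POSTFIELDS     => http_build_query([
        'model'  => 'solar',
        'prompt' => 'Please tell me a joke',
        'embed'  => 'These are the jokes I like',
    ]),
]);
$response = curl_exec($ch);
curl_close($ch);
```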
Response
Based on the usage in transcribe.php, the API returns a JSON object with at least the following structure:
| Field | Type | Description |
|---|---|---|
| answer | string | The generated text response from the language model. |
| status | string | Request status (e.g., "success"), as shown in the example response below. |
| time | number | Processing time in seconds (see Notes). |
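Assuming the body is JSON as described, the answer field can be read after decoding. The sketch below reuses the example response from this document as its input:

```php
<?php
// Decode the JSON body and read the generated text.
// $response here is the example response from this document.
$response = '{"answer": "I am not very good at jokes", "status": "success", "time": 5}';
$data = json_decode($response, true);
if (is_array($data) && isset($data['answer'])) {
    echo $data['answer'];
}
```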
Example Request
```http
POST /v2/llm HTTP/1.1
Host: <api endpoint>
X-API-Key: your_api_key_here
Content-Type: application/x-www-form-urlencoded

model=solar&prompt=Please tell me a joke&embed=These are the jokes I like
```
Example Response
```json
{
  "answer": "I am not very good at jokes",
  "status": "success",
  "time": 5
}
```
Error Handling
While the exact error handling is not visible in the available code, the API likely follows the same patterns as other APIs in the system (a defensive client sketch follows this list):
- If the API key is invalid or the user is not found, the response will likely contain an error status.
- If there's an issue with the language model service, an appropriate error message will likely be returned.
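If the status field behaves the way the example response suggests, a client could guard on it before using answer. This is an assumption drawn from the example, not documented behavior; the sketch continues from the request sketch above.

```php
<?php
// Hypothetical guard: treat any status other than "success" as an error.
// Assumes $response holds the raw body from the request sketch above;
// the actual error statuses and fields are not documented here.
$data = json_decode($response, true);
if (!is_array($data) || ($data['status'] ?? '') !== 'success') {
    error_log('LLM API request failed: ' . var_export($response, true));
} else {
    echo $data['answer'];
}
```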
Notes
The time field in the response is measured in seconds.
Security
- Ensure that the API key is kept secure and not exposed in client-side code.
- Sensitive information should not be included in prompts or embedded content.