
Example Integration (REST API)

Sending LLM Logs to LogSpend REST API

In cases where you can’t use the LogSpend SDK, you can directly send your LLM logs to the LogSpend REST API.

Endpoint

POST https://api.logspend.com/llm/v1/log

Headers

Replace <LOGSPEND_API_KEY> and <LOGSPEND_PROJECT_ID> with your LogSpend API key and the relevant LogSpend project ID, respectively.

curl -X POST https://api.logspend.com/llm/v1/log \
  -H "Authorization: Bearer <LOGSPEND_API_KEY>" \
  -H "Content-Type: application/json" \
  -H "LogSpend-Project-ID: <LOGSPEND_PROJECT_ID>" \
  -d "<PAYLOAD>"

Body

The body of the request should follow the structure outlined below:

{
  "input": dict,              # Input data passed during the LLM API call
  "output": dict,             # Response generated by the LLM API call
  "identity": dict,           # Identifies the user interacting with the AI assistant; must contain a session_id and may include a user_id
  "custom_properties": dict,  # Any additional custom data points you would like to add, such as task_name or customer_id
  "start_time_ms": int,       # Timestamp in milliseconds recorded just before making the LLM call
  "end_time_ms": int          # Timestamp in milliseconds recorded after receiving the LLM response
}
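As a sketch, a minimal body that satisfies this schema might look like the following (the provider, model, and property values are purely illustrative placeholders):

```python
import json
import time

now_ms = int(time.time() * 1000)

# Minimal illustrative body; all values here are placeholders.
payload = {
    "input": {"provider": "openai", "model": "gpt-3.5-turbo-instruct",
              "prompt": "Say this is a test"},
    "output": {"choices": [{"message": {"role": "assistant",
                                        "content": "This is a test."}}]},
    "identity": {"session_id": "session-123"},  # session_id is required
    "custom_properties": {"task_name": "chatbot-qa"},
    "start_time_ms": now_ms,
    "end_time_ms": now_ms + 1200,
}

print(json.dumps(payload, indent=2))
```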
1. Prepare your input data and custom properties, and record the start time before calling the LLM API.

import time

input_data = {
    "provider": "openai",
    "model": "gpt-3.5-turbo-instruct",
    "prompt": "Say this is a test",
    "messages": [{
        "role": "assistant",
        "content": "",
        "function_call": {},
    }],
    "functions": [{}],
    "max_tokens": 7,
    "temperature": 0,
}

identity_data = {
    "session_id": "session-123",
    "user_id": "123455",
}

custom_properties_data = {
    "task_name": "chatbot-qa",
    "customer_id": "chatbot-qa",
}

start_time_ms = int(time.time() * 1000)
2. Make a call to OpenAI (or whichever provider you're using) to generate the output_data, and record the end time.

def call_openai(input_data):
    # Placeholder for an actual OpenAI call
    output_data = {
        "id": "chatcmpl-123",
        "choices": [{
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "\n\nHello there, how may I assist you today?",
            },
            "finish_reason": "stop"
        }],
        "usage": {
            "prompt_tokens": 9,
            "completion_tokens": 12,
            "total_tokens": 21
        },
        "http_status_code": 200,
        "http_error_message": "",
    }
    return output_data

output_data = call_openai(input_data)
end_time_ms = int(time.time() * 1000)
3. Build the JSON payload described in the Body section above, only after the LLM API call has completed.

json_payload = {
    "input": input_data,
    "output": output_data,
    "identity": identity_data,
    "custom_properties": custom_properties_data,
    "start_time_ms": start_time_ms,
    "end_time_ms": end_time_ms,
}
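Before sending, it can help to sanity-check the payload against the schema from the Body section. The helper below is a hypothetical sketch, not part of the LogSpend API; it treats custom_properties as optional, which is an assumption:

```python
def validate_payload(payload: dict) -> None:
    """Raise ValueError if required fields from the Body schema are missing."""
    # custom_properties is assumed optional and is not checked here.
    required = {"input", "output", "identity", "start_time_ms", "end_time_ms"}
    missing = required - payload.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if "session_id" not in payload["identity"]:
        raise ValueError("identity.session_id is required")
    if payload["start_time_ms"] > payload["end_time_ms"]:
        raise ValueError("start_time_ms must not exceed end_time_ms")
```

Catching a malformed payload locally is cheaper than debugging a rejected request after the fact.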
4. Send the log to LogSpend, replacing <json_payload> with the JSON-serialized payload built in the previous step.

curl -X POST https://api.logspend.com/llm/v1/log \
  -H "Authorization: Bearer <LOGSPEND_API_KEY>" \
  -H "Content-Type: application/json" \
  -H "LogSpend-Project-ID: <LOGSPEND_PROJECT_ID>" \
  -d "<json_payload>"
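Since the payload is already a Python dict, you may prefer to send it from Python instead of shelling out to curl. The sketch below uses only the standard library's urllib (a third-party HTTP client such as requests would work equally well); the function names are illustrative, not part of any LogSpend SDK:

```python
import json
import urllib.request

LOGSPEND_URL = "https://api.logspend.com/llm/v1/log"

def build_log_request(json_payload, api_key, project_id):
    """Assemble the POST request with the headers described above."""
    return urllib.request.Request(
        LOGSPEND_URL,
        data=json.dumps(json_payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
            "LogSpend-Project-ID": project_id,
        },
        method="POST",
    )

def send_log(json_payload, api_key, project_id):
    """Send the log to LogSpend and return the HTTP status code."""
    req = build_log_request(json_payload, api_key, project_id)
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Call send_log(json_payload, "<LOGSPEND_API_KEY>", "<LOGSPEND_PROJECT_ID>") after the LLM response arrives; consider doing so on a background thread so logging never blocks the user-facing request.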