Self-hosted models and Custom setups
Additional Step for Self-hosted models and Custom setups
We try to align with OpenAI's API structure where possible, but if you are hosting your own models, your setup may differ. In that case, we may not be able to correctly estimate the latency of each LLM API call out of the box, so you will need to set the start and end times yourself. See below for an example.
import time

# Create an instance of LogBuilder with some input data
builder = LogBuilder(input_data)

# Capture the start time before making the LLM API request
start_time = time.time()
builder.set_start_time(start_time)

# Make a call to your LLM provider...

# Set the end time after receiving the LLM API response
end_time = time.time()
builder.set_end_time(end_time)
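For context, here is a minimal end-to-end sketch of how the timing calls fit around a request to a self-hosted endpoint. The endpoint URL, the timed_completion helper, the payload, and the use of requests are hypothetical placeholders for your own client code; only the LogBuilder timing calls shown above are part of the actual API.

import time
import requests  # hypothetical transport; use whatever client your setup provides

# Hypothetical self-hosted, OpenAI-compatible endpoint
SELF_HOSTED_URL = "http://localhost:8000/v1/chat/completions"

def timed_completion(builder, payload):
    # Record the start time immediately before sending the request
    builder.set_start_time(time.time())
    response = requests.post(SELF_HOSTED_URL, json=payload, timeout=60)
    # Record the end time as soon as the response arrives
    builder.set_end_time(time.time())
    return response.json()

result = timed_completion(builder, {
    "model": "my-self-hosted-model",  # placeholder model name
    "messages": [{"role": "user", "content": "Hello!"}],
})

Wrapping the request this way keeps the two timestamps as close to the network call as possible, so the recorded latency reflects the LLM call itself rather than any surrounding application logic.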