JavaScript / TypeScript SDK

Track LLM token usage from your JavaScript or TypeScript service in a few lines of code.

Install the package, initialize it once, and report token usage after each OpenAI, Anthropic, or other provider call. The SDK batches automatically and keeps the integration lightweight.

1. Install the package

Add the SDK to the service where your LLM calls run.

2. Initialize once

Call llmetrics.init at process startup with your API key.

3. Track after each model call

Send feature, provider, model, and token counts after every request.

4. Flush before shutdown

Call llmetrics.flush in short-lived jobs or before your process exits.

Quickstart

Track a request right after the model call returns.

The main flow is initialize once, then call llmetrics.track with the token counts returned by your provider SDK.

```typescript
import OpenAI from 'openai';
import { llmetrics } from '@llmetrics/sdk';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

llmetrics.init({
  apiKey: process.env.LLMETRICS_API_KEY!,
});

const response = await openai.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Summarize this lesson.' }],
});

llmetrics.track({
  feature: 'lesson-generation',
  provider: 'openai',
  model: response.model,
  // `usage` is optional in the OpenAI SDK types, so guard the reads.
  inputTokens: response.usage?.prompt_tokens ?? 0,
  outputTokens: response.usage?.completion_tokens ?? 0,
  userId: 'user_123',
  meta: { promptVersion: 2 },
});
```

Installation

Add the SDK with your package manager of choice.

```shell
npm install @llmetrics/sdk
# or
yarn add @llmetrics/sdk
# or
pnpm add @llmetrics/sdk
```

Initialization

Initialize once with a dashboard API key.

Call llmetrics.init during startup. If you forget the API key, initialization throws immediately so the integration fails loudly in development.

| Field | Type | Default / Required | Description |
| --- | --- | --- | --- |
| apiKey | string | Required | Your LLMetrics API key from the dashboard. |
| flushIntervalMs | number | 1500 | How often queued events are flushed automatically. |
| maxQueueSize | number | 50 | Flush immediately once the queue reaches this size. |
| timeoutMs | number | 2000 | Timeout for each ingest request. |
| debug | boolean | false | Logs flush failures to the console. |
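Putting the table together, a full initialization with every option might look like the following; the option values here are illustrative, not recommendations:

```typescript
import { llmetrics } from '@llmetrics/sdk';

llmetrics.init({
  apiKey: process.env.LLMETRICS_API_KEY!, // required; init throws without it
  flushIntervalMs: 1500, // flush queued events every 1.5 seconds...
  maxQueueSize: 50,      // ...or as soon as 50 events are queued
  timeoutMs: 2000,       // per-request ingest timeout
  debug: false,          // set true to log flush failures to the console
});
```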

Tracking

Use queued tracking by default, async tracking in short-lived runtimes.

llmetrics.track(event)

Fire-and-forget tracking. Events are added to an in-memory queue and flushed automatically based on queue size or interval. This call never throws.

llmetrics.trackAsync(event)

Sends immediately and throws on failure. This is the safer choice in serverless handlers, cron jobs, and other runtimes that may exit before the queue flushes.

```typescript
await llmetrics.trackAsync({
  feature: 'summarize',
  provider: 'anthropic',
  model: response.model,
  inputTokens: response.usage.input_tokens,
  outputTokens: response.usage.output_tokens,
});

// In short-lived jobs, also flush any queued track() events before exit.
await llmetrics.flush();
```
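Because `trackAsync` throws on failure, you will usually want to keep ingest errors from failing the user-facing request. A small wrapper like the following (a hypothetical helper, not part of the SDK) is one way to do that:

```typescript
type TrackEvent = {
  feature: string;
  provider: string;
  model: string;
  inputTokens: number;
  outputTokens: number;
};

// Await the tracker, but swallow failures: the model response has already
// been produced, so a metrics outage should not surface to the user.
async function trackSafely(
  trackAsync: (event: TrackEvent) => Promise<void>,
  event: TrackEvent,
): Promise<boolean> {
  try {
    await trackAsync(event);
    return true;
  } catch {
    console.error('llmetrics: failed to record usage event');
    return false;
  }
}
```

In a serverless handler you would call `trackSafely(llmetrics.trackAsync, event)` after producing the response, so a metrics failure is logged rather than returned to the caller.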

Reference

Event payload fields

The dashboard groups by feature, provider, and model, so those fields should be stable identifiers from your application and provider response objects.

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| feature | string | Yes | Logical feature name used for dashboard grouping. |
| provider | string | Yes | Provider slug such as openai or anthropic. |
| model | string | Yes | Model identifier returned by your provider. |
| inputTokens | number | Yes | Prompt or input token count. |
| outputTokens | number | Yes | Completion or output token count. |
| userId | string | No | Your internal user identifier for per-user cost analysis. |
| ts | number | No | Unix timestamp in milliseconds. Defaults to Date.now(). |
| meta | object | No | Extra metadata stored alongside the event. |
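As a worked example, here is a hypothetical helper (not part of the SDK) that builds a payload from an OpenAI-style chat completion response using the field names above; `toUsageEvent` and its parameters are assumptions for illustration:

```typescript
interface UsageEvent {
  feature: string;
  provider: string;
  model: string;
  inputTokens: number;
  outputTokens: number;
  userId?: string;
  ts?: number;
  meta?: Record<string, unknown>;
}

// Map an OpenAI-style response onto the event payload. `usage` is
// optional in the OpenAI SDK types, so the token reads are guarded.
function toUsageEvent(
  feature: string,
  response: { model: string; usage?: { prompt_tokens: number; completion_tokens: number } },
  userId?: string,
): UsageEvent {
  return {
    feature,
    provider: 'openai',
    model: response.model, // stable identifier from the provider response
    inputTokens: response.usage?.prompt_tokens ?? 0,
    outputTokens: response.usage?.completion_tokens ?? 0,
    userId,
  };
}
```

Note that `ts` is omitted here, so the SDK would stamp the event with `Date.now()` on ingestion.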