
OpenAI

Generate chat completions and embeddings, and manage Assistants threads.

Authentication

| Type | Default | Details |
| --- | --- | --- |
| api_key | | API key (sk-... or sk-proj-...) sent as Authorization: Bearer |

An optional organization ID can be passed to set the OpenAI-Organization header.
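
For example (a sketch; the organization option name is an assumption about the plugin's config shape, mirroring the apiKey option shown under Usage):

```typescript
import { createFabric } from "@fabricorg/integrations";
import { openai } from "@fabricorg/integrations/plugins";

const fabric = createFabric({
  plugins: [
    openai({
      apiKey: process.env.OPENAI_API_KEY,
      // Assumed option name; sets the OpenAI-Organization header on requests.
      organization: process.env.OPENAI_ORG_ID,
    }),
  ],
});
```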

Endpoints

| Path | Risk | Description |
| --- | --- | --- |
| chat.completions.create | write | Generate a chat completion (consumes tokens; metered). |
| embeddings.create | write | Generate vector embeddings for text (consumes tokens). |
| threads.create | write | Create a new Assistants thread. |
| threads.get | read | Get an Assistants thread. |
| threads.delete | destructive | Delete an Assistants thread and all its messages. |
| messages.create | write | Append a user message to a thread. |
| messages.list | read | List messages on a thread. |
| runs.create | write | Create a run on a thread (kicks off the assistant). |
| runs.get | read | Poll a run's status. |
| runs.cancel | write | Cancel an in-progress run. |

chat.completions.create

Generate a chat completion.

Input:

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| model | string | ✅ | Model ID (e.g. gpt-4o) |
| messages | array | ✅ | Array of { role, content, ... } messages |
| temperature | number | | Sampling temperature (0–2) |
| max_tokens | number | | Max tokens to generate |
| tools | array | | Function tool definitions |
| tool_choice | string \| object | | auto, none, required, or { type: "function", function: { name } } |
| response_format | object | | { type: "text" \| "json_object" } |

Output: ChatCompletion
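
When the model returns tool_calls in a choice's message, each call must be executed locally and the result sent back as a tool role message in a follow-up chat.completions.create request. A minimal sketch of that dispatch step; the get_weather handler and its behavior are illustrative, not part of the plugin:

```typescript
// Shape of a tool call as it appears in choices[].message.tool_calls.
interface ToolCall {
  id: string;
  type: "function";
  function: { name: string; arguments: string };
}

// Illustrative local handlers keyed by function name (hypothetical).
const handlers: Record<string, (args: any) => string> = {
  get_weather: ({ city }) => `Sunny in ${city}`,
};

// Turn each tool call into a `tool` role message to append to `messages`
// before calling chat.completions.create again.
function runToolCalls(toolCalls: ToolCall[]) {
  return toolCalls.map((tc) => {
    const handler = handlers[tc.function.name];
    if (!handler) throw new Error(`no handler for ${tc.function.name}`);
    return {
      role: "tool" as const,
      tool_call_id: tc.id,
      // arguments arrive as a JSON string and must be parsed.
      content: handler(JSON.parse(tc.function.arguments)),
    };
  });
}

const replies = runToolCalls([
  {
    id: "call_1",
    type: "function",
    function: { name: "get_weather", arguments: '{"city":"Paris"}' },
  },
]);
console.log(replies[0].content); // "Sunny in Paris"
```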

embeddings.create

Generate text embeddings.

Input:

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| model | string | ✅ | Model ID (e.g. text-embedding-3-small) |
| input | string \| string[] | ✅ | Text to embed |
| dimensions | number | | Output dimensions |
| encoding_format | enum | | float (default) or base64 |

Output: { object: "list"; data: Embedding[]; model: string; usage: { prompt_tokens: number; total_tokens: number } }
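
The embedding vectors in data are typically compared with cosine similarity (e.g. for semantic search). A small self-contained helper, not part of the plugin:

```typescript
// Cosine similarity between two embedding vectors of equal length:
// dot(a, b) / (|a| * |b|), in the range [-1, 1].
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let na = 0;
  let nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

console.log(cosineSimilarity([1, 0], [1, 0])); // 1 (identical direction)
console.log(cosineSimilarity([1, 0], [0, 1])); // 0 (orthogonal)
```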

threads.create

Create a new Assistants thread.

Input:

| Field | Type | Description |
| --- | --- | --- |
| messages | array | Initial messages: { role: "user"; content: string }[] |
| metadata | object | Key-value metadata |

Output: AssistantThread

threads.get

Get a thread by ID.

Input:

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| threadId | string | ✅ | Thread ID |

Output: AssistantThread

threads.delete

Delete a thread and all its messages.

Input:

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| threadId | string | ✅ | Thread ID |

Output: { id: string; object: "thread.deleted"; deleted: true }

messages.create

Add a message to a thread.

Input:

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| threadId | string | ✅ | Thread ID |
| role | enum | ✅ | user |
| content | string | ✅ | Message text |
| metadata | object | | Key-value metadata |

Output: AssistantMessage

messages.list

List messages in a thread.

Input:

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| threadId | string | ✅ | Thread ID |
| limit | number | | Max results |
| order | enum | | asc or desc |
| after | string | | Pagination cursor |

Output: { object: "list"; data: AssistantMessage[]; has_more: boolean }
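
The after cursor paginates: pass the id of the last message in one page to fetch the next, until has_more is false. A sketch of draining all pages; fetchPage is an injected stand-in for a closure over fabric.openai.api.messages.list:

```typescript
// Minimal shape of one page from a cursor-paginated list endpoint.
interface Page<T> {
  data: T[];
  has_more: boolean;
}

// Collect every item by following the `after` cursor until exhausted.
async function listAll<T extends { id: string }>(
  fetchPage: (after?: string) => Promise<Page<T>>
): Promise<T[]> {
  const all: T[] = [];
  let after: string | undefined;
  for (;;) {
    const page = await fetchPage(after);
    all.push(...page.data);
    if (!page.has_more || page.data.length === 0) break;
    after = page.data[page.data.length - 1].id; // cursor = last item's id
  }
  return all;
}
```

In practice fetchPage would be something like (after) => fabric.openai.api.messages.list({ threadId, after }).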

runs.create

Create a run to process a thread.

Input:

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| threadId | string | ✅ | Thread ID |
| assistant_id | string | ✅ | Assistant ID |
| instructions | string | | Override instructions |
| additional_instructions | string | | Additional instructions |
| tools | array | | Tools for this run |
| metadata | object | | Key-value metadata |

Output: AssistantRun

runs.get

Poll a run's status.

Input:

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| threadId | string | ✅ | Thread ID |
| runId | string | ✅ | Run ID |

Output: AssistantRun

runs.cancel

Cancel an in-progress run.

Input:

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| threadId | string | ✅ | Thread ID |
| runId | string | ✅ | Run ID |

Output: AssistantRun

Usage

```typescript
import { createFabric } from "@fabricorg/integrations";
import { openai } from "@fabricorg/integrations/plugins";

const fabric = createFabric({
  plugins: [openai({ apiKey: process.env.OPENAI_API_KEY })],
});

// Chat completion
const completion = await fabric.openai.api.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "What is the capital of France?" },
  ],
});

// Embeddings
const { data: embeddings } = await fabric.openai.api.embeddings.create({
  model: "text-embedding-3-small",
  input: "The quick brown fox",
});

// Assistants flow
const thread = await fabric.openai.api.threads.create({
  messages: [{ role: "user", content: "Explain quantum computing" }],
});

const run = await fabric.openai.api.runs.create({
  threadId: thread.id,
  assistant_id: "asst_abc123",
});

// Poll until complete
let status = run.status;
while (status === "queued" || status === "in_progress") {
  await new Promise((r) => setTimeout(r, 1000));
  const updated = await fabric.openai.api.runs.get({
    threadId: thread.id,
    runId: run.id,
  });
  status = updated.status;
}
```

Webhooks

OpenAI does not support webhooks. The Assistants API requires polling (runs.get) to check run status.
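
Because polling is the only option, an exponential-backoff wrapper keeps request volume down on long runs. A sketch under stated assumptions: getRun stands in for a closure over fabric.openai.api.runs.get, and the option names (initialMs, maxMs, timeoutMs) are illustrative, not part of the plugin:

```typescript
// Poll a run until it leaves "queued"/"in_progress", doubling the delay
// between polls up to a cap, and failing after an overall timeout.
async function waitForRun(
  getRun: () => Promise<{ status: string }>,
  { initialMs = 500, maxMs = 8000, timeoutMs = 120_000 } = {}
): Promise<{ status: string }> {
  const deadline = Date.now() + timeoutMs;
  let delay = initialMs;
  for (;;) {
    const run = await getRun();
    if (run.status !== "queued" && run.status !== "in_progress") return run;
    if (Date.now() > deadline) throw new Error("run polling timed out");
    await new Promise((r) => setTimeout(r, delay));
    delay = Math.min(delay * 2, maxMs); // exponential backoff, capped
  }
}
```

In practice getRun would be () => fabric.openai.api.runs.get({ threadId, runId }), and the caller would then inspect the terminal status (completed, failed, requires_action, etc.).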

Types

```typescript
interface ChatCompletionMessage {
  role: "system" | "user" | "assistant" | "tool";
  content: string | null;
  name?: string;
  tool_call_id?: string;
  tool_calls?: Array<{
    id: string;
    type: "function";
    function: { name: string; arguments: string };
  }>;
}

interface ChatCompletion {
  id: string;
  object: "chat.completion";
  created: number;
  model: string;
  choices: Array<{
    index: number;
    message: ChatCompletionMessage;
    finish_reason: string | null;
  }>;
  usage?: {
    prompt_tokens: number;
    completion_tokens: number;
    total_tokens: number;
  };
}

interface Embedding {
  object: "embedding";
  embedding: number[];
  index: number;
}

interface AssistantThread {
  id: string;
  object: "thread";
  created_at: number;
  metadata?: Record<string, string>;
}

interface AssistantMessage {
  id: string;
  object: "thread.message";
  thread_id: string;
  role: "user" | "assistant";
  content: Array<{ type: string; text?: { value: string } }>;
  created_at: number;
}

interface AssistantRun {
  id: string;
  object: "thread.run";
  thread_id: string;
  assistant_id: string;
  status:
    | "queued"
    | "in_progress"
    | "requires_action"
    | "cancelling"
    | "cancelled"
    | "failed"
    | "completed"
    | "expired";
  created_at: number;
  required_action?: {
    type: "submit_tool_outputs";
    submit_tool_outputs: {
      tool_calls: Array<{
        id: string;
        type: "function";
        function: { name: string; arguments: string };
      }>;
    };
  };
}
```
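
AssistantMessage.content is an array of typed parts rather than a plain string, so reading an assistant's reply means collecting its text parts and skipping anything else (e.g. image parts). A small helper, offered as an illustration rather than part of the plugin:

```typescript
// One content part of an AssistantMessage, matching the type above.
type ContentPart = { type: string; text?: { value: string } };

// Join all text parts of a message's content array into one string,
// ignoring non-text parts.
function messageText(content: ContentPart[]): string {
  return content
    .filter((p) => p.type === "text" && p.text !== undefined)
    .map((p) => p.text!.value)
    .join("\n");
}

const text = messageText([
  { type: "text", text: { value: "Quantum computing uses qubits." } },
  { type: "image_file" },
]);
console.log(text); // "Quantum computing uses qubits."
```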