Plugins
Firecrawl
LLM-tuned web scraping, crawling, and search.
Firecrawl returns clean markdown / structured JSON instead of raw HTML — ideal for feeding agent context windows.
Authentication
| Parameter | Required | Details |
|---|---|---|
| api_key | ✅ | API key passed as `Authorization: Bearer fc-...` |
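The plugin attaches the key to every request. For illustration, here is a minimal sketch of the equivalent header construction for a direct REST call (the `authHeaders` helper is hypothetical, not part of the plugin):

```ts
// Illustrative helper: builds the headers the plugin sends on every request.
function authHeaders(apiKey: string): Record<string, string> {
  return {
    Authorization: `Bearer ${apiKey}`,
    'Content-Type': 'application/json',
  };
}

// Example of a direct call (shape shown for illustration only):
// await fetch('https://api.firecrawl.dev/v1/scrape', {
//   method: 'POST',
//   headers: authHeaders(process.env.FIRECRAWL_API_KEY ?? ''),
//   body: JSON.stringify({ url: 'https://example.com' }),
// });
```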
Endpoints
| Path | Risk | Description |
|---|---|---|
| scrape.run | read | Scrape a single URL into LLM-ready formats. |
| crawl.start | write | Asynchronous crawl of a domain. Consumes credits per page. |
| crawl.get | read | Poll a crawl's status (and results when complete). |
| crawl.cancel | write | Cancel an in-flight crawl. |
| search.run | read | Web search with optional inline scraping per result. |
scrape.run
```ts
const { data } = await fabric.firecrawl.api.scrape.run({
  url: 'https://example.com/article',
  formats: ['markdown', 'links'],
  onlyMainContent: true,
});

console.log(data.markdown);
console.log(data.metadata?.title);
```

`formats` selects which fields to populate on the response. `onlyMainContent: true` strips nav, footer, and ads.
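Because only the requested formats are populated, downstream code should not assume every field is present. A minimal sketch, assuming a simplified response shape (the interface and `requireMarkdown` helper are illustrative, not part of the plugin):

```ts
// Simplified subset of the fields scrape.run can populate (for illustration).
interface ScrapeData {
  markdown?: string;
  links?: string[];
  metadata?: { title?: string };
}

// Hypothetical helper: returns markdown, or throws a descriptive error
// if the 'markdown' format was not requested (or came back empty).
function requireMarkdown(data: ScrapeData): string {
  if (!data.markdown) {
    throw new Error("scrape.run response has no markdown; include 'markdown' in formats");
  }
  return data.markdown;
}
```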
crawl.start + crawl.get
Crawl is asynchronous — start returns an id you poll until status is 'completed'.
```ts
const { id } = await fabric.firecrawl.api.crawl.start({
  url: 'https://docs.example.com',
  limit: 100,
  maxDepth: 3,
  excludePaths: ['/blog', '/changelog'],
});

while (true) {
  const status = await fabric.firecrawl.api.crawl.get({ id });
  if (status.status === 'completed' || status.status === 'failed') {
    console.log(status.data);
    break;
  }
  await new Promise((r) => setTimeout(r, 5000));
}
```

search.run
```ts
const { data } = await fabric.firecrawl.api.search.run({
  query: 'fabric integrations sdk site:fabric.pro',
  limit: 10,
  scrapeOptions: { formats: ['markdown'] },
});
```

When `scrapeOptions` is set, each result includes the scraped page contents, which saves a separate `scrape.run` call per result.
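Scraped results often feed straight into an agent prompt. A sketch of flattening them into one capped context string, assuming an illustrative result shape (the field names and `toContext` helper are not part of the plugin):

```ts
// Assumed per-result shape when scrapeOptions is set (field names are illustrative).
interface SearchResult {
  url: string;
  title?: string;
  markdown?: string;
}

// Hypothetical helper: joins scraped results into a single context string,
// capped at maxChars so it fits a model's context window.
function toContext(results: SearchResult[], maxChars = 8000): string {
  const parts = results
    .filter((r) => r.markdown)
    .map((r) => `## ${r.title ?? r.url}\n${r.markdown}`);
  return parts.join('\n\n').slice(0, maxChars);
}
```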
Factory
```ts
import { firecrawl } from '@fabricorg/integrations/plugins';

const fabric = createFabric({
  plugins: [firecrawl({ apiKey: process.env.FIRECRAWL_API_KEY })],
});
```
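The factory reads the key from the environment, so a missing `FIRECRAWL_API_KEY` would otherwise surface only on the first request. A fail-fast guard sketch (the `requireEnv` helper is illustrative, not part of the plugin):

```ts
// Illustrative guard: fail at startup when the key is missing,
// rather than on the first API request.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`Missing required environment variable: ${name}`);
  return value;
}

// usage: firecrawl({ apiKey: requireEnv('FIRECRAWL_API_KEY') })
```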