BeharAI
For AI agents · machine-accessible by design

Built for agents to read. And to act.

Most marketing sites expect a human to parse them. Behar is structured so an AI agent can retrieve, extract, and operate without scraping — through a REST API (read + write), a published OpenAPI spec, JSON-LD on every page, and llms-full.txt at the root.

If you're building an agent that needs brand-visibility signal, this page is the contract.

REST · Read + write API
OpenAPI · Public spec
JSON-LD · Every page
CSV · Deterministic exports
Capabilities

What an agent can do with Behar.

Read the state of a brand's AI visibility, detect gaps, trigger content generation, and verify re-measurement — all without a browser in the loop.

REST API (read + write)

Pull projects, prompts, competitors, gaps, runs, and content pieces — and POST to create projects or trigger writes. JSON over HTTPS, versioned endpoints, predictable pagination. Base URL: app.behar.ai/api/v1.
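
A minimal read + write round trip, sketched in Python with the requests library. The POST payload fields (name, domain) are illustrative placeholders; the request schema in the OpenAPI spec is authoritative.

import os
import requests

BASE = "https://app.behar.ai/api/v1"
HEADERS = {"Authorization": f"Bearer {os.environ['BEHAR_API_KEY']}"}

# Read: list every monitored brand in the workspace.
resp = requests.get(f"{BASE}/projects", headers=HEADERS, timeout=30)
resp.raise_for_status()
projects = resp.json()

# Write: create a project. Payload fields here are hypothetical;
# check openapi.json for the real request schema.
resp = requests.post(
    f"{BASE}/projects",
    headers=HEADERS,
    json={"name": "Acme", "domain": "acme.com"},
    timeout=30,
)
resp.raise_for_status()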

Structured LLM-readable content

Every page emits JSON-LD schema. /llms.txt summarizes the site. /llms-full.txt ships the full factual base in one request. No scraping required.

Bearer-token auth

Per-workspace API keys, issued from Settings > API Keys in the app. Encrypted with AES-256-GCM. Rotate any time. Send as Authorization: Bearer <api_key>.
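
An agent typically attaches the key once per session rather than per request; a sketch, assuming the key lives in the BEHAR_API_KEY environment variable:

import os
import requests

session = requests.Session()
# Every request this session makes now carries the workspace key.
session.headers["Authorization"] = f"Bearer {os.environ['BEHAR_API_KEY']}"

resp = session.get("https://app.behar.ai/api/v1/projects", timeout=30)
resp.raise_for_status()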

Deterministic CSV export

Stable column schemas for prompts, runs, gaps, citations. Diffable across runs. Safe to treat as a tabular source in retrieval pipelines.
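
Stable schemas make run-over-run diffs trivial. A sketch, assuming a gaps export keyed by a column named prompt; the name is illustrative, so substitute whatever key column the actual export carries.

import csv

def rows_by_key(path, key="prompt"):  # "prompt" is an assumed column name
    with open(path, newline="") as f:
        return {row[key]: row for row in csv.DictReader(f)}

before = rows_by_key("gaps_run_41.csv")
after = rows_by_key("gaps_run_42.csv")

# Deterministic columns mean set arithmetic on keys is meaningful.
print("new gaps:   ", sorted(after.keys() - before.keys()))
print("closed gaps:", sorted(before.keys() - after.keys()))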

Strategic intelligence reads

Blind-spot detection, content playbook, and citation-network analysis — strategic outputs computed per project from 60+ days of monitoring data.

Distribution intelligence reads

Behar maps which platforms (Reddit, Quora, YouTube, G2, press) are feeding AI answers in your category, so an agent knows where to focus content investment. Platform-native content generation is on the roadmap — the read surface today exposes citation-source analysis only.

Access model

Which surface, which plan.

Everything public stays public. Everything scoped to a workspace requires a token. Everything that changes state requires write access.

Surface | Available on
API reads (projects, prompts, competitors, gaps, content, runs) | Cohort + on request
CSV export | All cohort members
llms.txt / llms-full.txt | Public
JSON-LD schema on every page | Public
OpenAPI specification | Public (app.behar.ai/openapi.json)
White-label PDF reports | Agency cohort
SSO, audit log, SLA | Enterprise, on request

Rate limits are per-token. Exact numbers will be published in the API reference at docs.behar.ai alongside GA.
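
Until those numbers land, a defensive client can treat HTTP 429 as a back-off signal. A sketch; honoring Retry-After is a common HTTP convention, not a documented Behar guarantee.

import time
import requests

def get_with_backoff(session, url, max_retries=5):
    """Retry on 429, honoring Retry-After when the server sends it."""
    for attempt in range(max_retries):
        resp = session.get(url, timeout=30)
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp
        # No Retry-After header? Fall back to exponential backoff.
        time.sleep(float(resp.headers.get("Retry-After", 2 ** attempt)))
    raise RuntimeError(f"still rate-limited after {max_retries} tries: {url}")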

Example retrieval

One request. Full factual base.

An agent that needs to answer “what is Behar, what does it cost, and how does it score visibility” can fetch /llms-full.txt once. No JavaScript execution. No DOM traversal.

# public: full factual base in one request
curl https://behar.ai/llms-full.txt
# public: machine-readable service descriptor
curl https://behar.ai/.well-known/ai-plugin.json
# public: OpenAPI spec
curl https://app.behar.ai/openapi.json
# authenticated: list projects
curl https://app.behar.ai/api/v1/projects \
  -H "Authorization: Bearer $BEHAR_API_KEY"
# schema.org JSON-LD, inlined in every HTML page
curl https://behar.ai/methodology
# then extract <script type="application/ld+json"> blocks from the HTML
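
That last step, as a stdlib-only Python sketch. The regex assumes the script tag appears exactly as written; a real HTML parser is sturdier, but nothing here needs JavaScript execution.

import json
import re
import urllib.request

html = urllib.request.urlopen("https://behar.ai/methodology").read().decode("utf-8")

# Pull every inlined JSON-LD block out of the raw HTML.
pattern = r'<script type="application/ld\+json">(.*?)</script>'
blocks = [json.loads(m) for m in re.findall(pattern, html, re.DOTALL)]

for block in blocks:
    print(block.get("@type"))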
Endpoints

The read surface, enumerated.

Base URL: https://app.behar.ai/api/v1. Every endpoint returns JSON and accepts Authorization: Bearer <api_key>. Full OpenAPI spec: app.behar.ai/openapi.json.

Method | Path | Returns
GET | /projects | List all monitored brands.
GET | /projects/{id} | Get project details (active LLMs, market config).
GET | /projects/{id}/prompts | List tracked search queries (paginated).
GET | /projects/{id}/competitors | List tracked competitors.
GET | /projects/{id}/gaps | Content gaps ranked by priority.
GET | /projects/{id}/content | Generated content pieces with score impact.
GET | /projects/{id}/runs | Analysis run history.
GET | /projects/{id}/runs/{runId} | Per-prompt, per-LLM results for a run.

Core concepts: a prompt is a search query users ask LLMs. A run queries all active LLMs with all tracked prompts and scores the results. Scores (0–100) cover presence, rank, and voice. A gap is a prompt where competitors outrank the brand.
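
Chaining those endpoints into one loop, sketched in Python. The response field names (id) and newest-first run ordering are assumptions for illustration; the OpenAPI spec is authoritative.

import os
import requests

BASE = "https://app.behar.ai/api/v1"
session = requests.Session()
session.headers["Authorization"] = f"Bearer {os.environ['BEHAR_API_KEY']}"

pid = session.get(f"{BASE}/projects", timeout=30).json()[0]["id"]  # assumed shape

# Gaps arrive ranked by priority, so the first entry is the worst gap.
gaps = session.get(f"{BASE}/projects/{pid}/gaps", timeout=30).json()

# Latest run, then its per-prompt, per-LLM breakdown (assuming newest-first).
runs = session.get(f"{BASE}/projects/{pid}/runs", timeout=30).json()
latest = session.get(f"{BASE}/projects/{pid}/runs/{runs[0]['id']}", timeout=30).json()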

Design principles

Why agents get reliable data out of Behar.

Agent-accessibility isn't an export feature. It's a writing discipline applied to every page.

Facts, not adjectives.

Every page leads with a declarative fact. Pricing, formulas, limits, and methodology are stated in plain numbers, not marketing language. Agents extract the same answer humans do.

One source of truth per claim.

Behar Score math lives on /methodology. Glossary lives on /glossary. Comparisons live on /compare. When an agent retrieves, it lands on a canonical page — no content duplication across routes.

Dates on everything.

Blog posts, changelog entries, and legal pages all carry visible datePublished and dateModified. Agents can resolve freshness without guessing.
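
So freshness is a lookup, not a guess. A sketch, assuming ISO-8601 date strings in a JSON-LD block extracted as shown earlier:

from datetime import datetime, timezone

def age_in_days(jsonld_block):
    """Days since dateModified, falling back to datePublished."""
    stamp = jsonld_block.get("dateModified") or jsonld_block.get("datePublished")
    modified = datetime.fromisoformat(stamp.replace("Z", "+00:00"))
    return (datetime.now(timezone.utc) - modified).days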

Tables are tables.

Comparison tables on /compare render as semantic <table>, not CSS grids of <div>. Downstream extraction preserves rows and columns.
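
Which means off-the-shelf extractors work unmodified; for instance, pandas can lift every table on the page into DataFrames in one call (assuming lxml or html5lib is installed):

import pandas as pd

# Each semantic <table> becomes one DataFrame, rows and columns intact.
tables = pd.read_html("https://behar.ai/compare")
print(tables[0].head())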

Attributed claims.

Third-party stats cite their source (publication and date). Agents can propagate provenance without losing it.

Stable URLs, no redirects.

Every page owns its canonical URL. No trailing-slash drift, no campaign parameters in sitemaps, no redirect chains. Bookmark-safe for agent memory.

Crawler policy

Explicitly welcome.

robots.txt explicitly allows GPTBot, ChatGPT-User, Claude-Web, ClaudeBot, anthropic-ai, Google-Extended, Applebot-Extended, PerplexityBot, Perplexity-User, CCBot, and DiffBot.

The only disallowed paths are /api/ (scoped to authenticated use) and /trial/ (conversion funnel, not informational).
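
An agent can verify the policy for its own user agent before fetching; Python's stdlib robots parser is enough:

import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://behar.ai/robots.txt")
rp.read()

print(rp.can_fetch("GPTBot", "https://behar.ai/methodology"))  # expected: True
print(rp.can_fetch("GPTBot", "https://behar.ai/api/"))         # expected: False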

Wiring Behar into an agent? We want to hear what you're building.

We're running private beta access for agent integrations ahead of GA. If your project reads from Behar, tell us the shape — we'll prioritize endpoints accordingly.

Request agent access