AI search, decoded.
Every term you'll see on this site and in the product. Written plainly, with examples.
Answer Engine Optimization. Optimizing content so LLM-based answer engines (ChatGPT, Perplexity, Gemini) can parse, cite, and recommend your brand. A sibling of SEO, not a replacement.
Umbrella term for retrieving information through LLMs and generative answer engines rather than traditional link-based search engines.
A 0–100 score per prompt per LLM. Formula: base 40 for being mentioned, plus position bonus (up to +30 for #1), frequency bonus (up to +20), and sentiment adjustment (±10). Capped at 100. See the Methodology page for the full math.
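The composition above can be sketched in a few lines. This is a hypothetical illustration of the arithmetic only — `behar_score` and its zero-score-when-unmentioned behavior are assumptions, and the Methodology page remains the authority on the full math:

```python
def behar_score(mentioned: bool,
                position_bonus: int = 0,   # up to +30 for position #1
                frequency_bonus: int = 0,  # up to +20
                sentiment_adj: int = 0) -> int:
    """Toy sketch of the per-prompt, per-LLM score (0-100)."""
    if not mentioned:
        return 0  # assumption: no mention, no score
    raw = 40 + position_bonus + frequency_bonus + sentiment_adj
    return min(raw, 100)  # capped at 100
```

For example, a brand mentioned at #3 (+15) with a modest frequency bonus (+5) but negative framing (−10) lands at 50.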
Plugging your own LLM API keys (OpenAI, Anthropic, Google, Perplexity, DeepSeek) into Behar. Available on Growth, Advanced, and Agency plans. Runs against a connected provider don't count toward your plan's monthly run cap — you get unlimited runs on those LLMs and pay the provider directly for usage. Keys are encrypted at rest and revocable anytime.
A URL referenced by an LLM in its response. Behar tracks every citation across every run. We classify each one — Editorial, UGC, Corporate, Competitor, or Uncategorized — based on the source domain.
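Domain-based classification like this can be sketched as a simple lookup. The domain sets and the `classify_citation` helper below are illustrative assumptions, not Behar's actual rule set:

```python
from urllib.parse import urlparse

# Hypothetical seed lists; the real classifier covers far more domains.
EDITORIAL = {"forbes.com", "techcrunch.com", "theverge.com"}
UGC = {"reddit.com", "quora.com", "news.ycombinator.com"}

def classify_citation(url: str, brand_domains: set, competitor_domains: set) -> str:
    """Classify a cited URL by its source domain."""
    host = urlparse(url).netloc.lower().removeprefix("www.")
    if host in brand_domains:
        return "Corporate"
    if host in competitor_domains:
        return "Competitor"
    if host in EDITORIAL:
        return "Editorial"
    if host in UGC:
        return "UGC"
    return "Uncategorized"
```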
Cited source from an editorial publication (Forbes, TechCrunch, The Verge, etc). High-trust content LLMs rely on heavily for recommendations.
Cited source from user-generated content platforms (Reddit, Quora, Hacker News). Increasingly cited by Perplexity and ChatGPT — almost half of Perplexity’s top citations come from Reddit alone.
Cited source from the brand’s own domain or owned properties. Signals that the brand is telling its own story effectively.
Cited source from a competitor’s domain. Useful signal for gap analysis — if competitors are cited but you aren’t, that’s an opportunity.
Behar’s content generation loop: detects a gap → writes a brief → drafts a full piece → re-measures after publish. The half of the product that separates Behar from monitoring-only tools.
A specific AI query where your brand is absent, losing, declining, or facing a new opportunity. Each gap is classified and prioritized by a simple formula: competitor lead plus a type-specific urgency boost.
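The prioritization can be sketched as follows. The boost values and the `gap_priority` helper are hypothetical — the entry only tells us the shape of the formula, not its constants:

```python
# Assumed urgency boosts per gap type; actual values are not published here.
URGENCY_BOOST = {"absent": 20, "losing": 15, "declining": 10, "new_opportunity": 5}

def gap_priority(competitor_score: int, your_score: int, gap_type: str) -> int:
    """Priority = competitor lead (floored at 0) + type-specific urgency boost."""
    lead = max(competitor_score - your_score, 0)
    return lead + URGENCY_BOOST[gap_type]
```

So a "losing" gap where the competitor scores 80 against your 50 would rank above a "declining" gap with a smaller lead.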
Generative Engine Optimization. Synonym for AEO. Used more often in North American circles.
LLM Optimization. Broadest umbrella term, overlaps significantly with AEO and GEO.
A geographic region your prompts get run against (worldwide, country, city). Behar supports 12 geographic markets at launch, injecting them as natural-language suffixes to your prompts.
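Suffix injection of this kind can be sketched in one function. The market keys, the suffix phrasings, and `localize_prompt` are illustrative assumptions, not Behar's actual market list:

```python
# Hypothetical market-to-suffix map; Behar supports 12 markets at launch.
MARKET_SUFFIXES = {
    "worldwide": "",
    "germany": " in Germany",
    "berlin": " in Berlin",
}

def localize_prompt(prompt: str, market: str) -> str:
    """Append a natural-language geographic suffix to a tracked prompt."""
    return prompt + MARKET_SUFFIXES.get(market, "")
```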
Extra points added to your Behar score based on where you appear in a ranked list. Position #1 earns +30, #2 earns +20, #3 earns +15, #4 earns +10, #5 and beyond earn +5.
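The tier table maps directly to a lookup. A minimal sketch, assuming the brand appears somewhere in the ranked list (the function name is illustrative):

```python
def position_bonus(rank: int) -> int:
    """Bonus points by list position: #1 +30, #2 +20, #3 +15, #4 +10, #5+ earn +5."""
    tiers = {1: 30, 2: 20, 3: 15, 4: 10}
    return tiers.get(rank, 5)
```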
A single query Behar tracks. The same query run against every supported LLM still counts as one prompt — we run it across all of them and aggregate.
After a content gap is filled with a published piece, Behar automatically re-runs the affected prompts roughly six weeks later to verify the fix. Closes the loop.
One execution of all your tracked prompts against every supported LLM. Starter and Growth get weekly runs (4/month). Advanced and Agency get daily runs (30/month). Pro is on-demand.
A ±10 point adjustment to your Behar score based on how the LLM describes you. Positive framing (+10), neutral (0), negative (−10). Negation-aware — ‘not the best’ correctly scores negative.
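Negation awareness can be illustrated with a deliberately tiny scorer. This is a toy — the word lists, the two-word lookback window, and `sentiment_adjustment` are all assumptions standing in for a much more capable model:

```python
POSITIVE = {"best", "excellent", "leading"}   # toy vocabulary
NEGATORS = {"not", "never", "hardly"}

def sentiment_adjustment(text: str) -> int:
    """Return +10, 0, or -10; flips positive words negated within two words."""
    words = text.lower().split()
    adj = 0
    for i, w in enumerate(words):
        if w.strip(".,") in POSITIVE:
            negated = any(words[j] in NEGATORS for j in range(max(0, i - 2), i))
            adj = -10 if negated else 10
    return adj
```

Here "not the best" scores −10 because the negator two words back flips the positive term, while "the best CRM" scores +10.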
Your brand’s share of total mentions across a prompt set, relative to competitors. Measured as a percentage.
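As a percentage, the metric reduces to one line. A minimal sketch; `share_of_voice` and its zero-total handling are assumptions:

```python
def share_of_voice(brand_mentions: int, total_mentions: int) -> float:
    """Brand mentions as a percentage of all mentions across the prompt set."""
    if total_mentions == 0:
        return 0.0  # assumption: no mentions anywhere means 0% share
    return 100 * brand_mentions / total_mentions
```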
The top-level container for your team, projects, billing, and settings. One workspace per company. Agency customers can run multiple client projects inside a single workspace.