Where we use AI
In plain English: we use Anthropic’s Claude for proposal drafting, opportunity summarisation, allied-supplier briefing generation, and the intelligence-engine assessments. The CII scoring is deterministic, not AI.
Pathfinder + Enterprise tiers (LLM-backed)
- Proposal agent. Drafts proposal sections against the SOW you’ve matched and your cap statement. Claude Sonnet model; prompts versioned and logged.
- Opportunity summarisation. Generates a clean summary of a SAM.gov solicitation and a GO/MONITOR/NO_GO verdict with a short rationale.
- Allied-supplier briefings. Generates a “Company Brief” for the tech-scout drawer; each brief is cached for 30 days per supplier to limit redundant LLM spend.
- Intelligence assessments. The signal-clustering and assessment-generation pipeline uses Claude to compose editorial assessments from clustered raw signals.
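The 30-day briefing cache mentioned above can be sketched roughly as follows. This is a minimal illustration, not Bridger’s actual implementation: the store, function names, and `generate` callback are all assumptions; the real system presumably uses a persistent cache rather than an in-memory dict.

```python
from datetime import datetime, timedelta, timezone

BRIEF_TTL = timedelta(days=30)  # per-supplier cache window from the policy above

# In-memory stand-in for the brief store (illustrative only).
_brief_cache: dict[str, tuple[datetime, str]] = {}

def get_company_brief(supplier_id: str, generate) -> str:
    """Return a cached brief if under 30 days old; otherwise regenerate once."""
    now = datetime.now(timezone.utc)
    hit = _brief_cache.get(supplier_id)
    if hit and now - hit[0] < BRIEF_TTL:
        return hit[1]                  # cache hit: no LLM call, no extra spend
    brief = generate(supplier_id)      # cache miss: one LLM call, then store
    _brief_cache[supplier_id] = (now, brief)
    return brief
```

The point of the TTL is cost control: within the window, repeated opens of the same supplier’s drawer never trigger a second LLM call.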
All tiers (deterministic, not AI)
- CII (Country Instability Index). Pure scoring engine. No LLM. Sources: OFAC SDN, BIS Entity List, World Bank WGI, GDELT, US State Dept advisories.
- Appropriations relevance. Pure deterministic scoring. No LLM in the rank computation.
- Opportunity matching. Deterministic NAICS / set-aside / value rules. LLM only adds the optional GO/MONITOR/NO_GO verdict if the contractor opts in.
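Deterministic matching of the kind described above reduces to plain boolean rules over an opportunity record and a contractor profile. A minimal sketch under assumed field names (the real schema is not published here):

```python
def match_opportunity(opp: dict, profile: dict) -> bool:
    """Deterministic NAICS / set-aside / value gate; no LLM involved.

    All field names are illustrative, not the real schema.
    """
    naics_ok = opp["naics"] in profile["naics_codes"]
    set_aside_ok = opp.get("set_aside") in profile["eligible_set_asides"]
    value_ok = profile["min_value"] <= opp["est_value"] <= profile["max_value"]
    return naics_ok and set_aside_ok and value_ok
```

Because every check is a fixed rule, the same inputs always produce the same match result; only the optional verdict layered on top involves a model call.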
What we will never do with your data
- Train external models on your data. Your cap statement, proposals, messages, watchlists, and pipeline content are never used to train, fine-tune, or improve any third-party LLM (Anthropic or otherwise). We do not opt into model-improvement programs that would expose your data.
- Sell or license your AI inputs/outputs. The content you submit to AI features remains yours; the AI outputs (proposal drafts, summaries) are licensed back to you under the Terms of Service.
- Use AI for fully automated consequential decisions. AI features are designed as drafting and summary aids; the contractor reviews the output before any external-facing submission.
- Operate undisclosed AI features. Every AI-touching surface is labelled as such inside the product (small “AI” chip in the corner of the panel).
Model providers
The current LLM provider is Anthropic, PBC. Inference happens under a zero-data-retention configuration: prompts and completions are not retained by Anthropic for model improvement. Anthropic’s commitments to enterprise customers are documented at anthropic.com/legal/commercial-terms.
Your controls
- Tier gating. Scout-tier accounts have no AI features active. Pathfinder + Enterprise have AI features on by default; org admins can disable per feature in /settings.
- Per-document opt-out. When drafting in the proposal workspace, you can mark a document “no AI assist” to prevent the LLM from receiving it.
- Audit log. Every LLM call from your account is logged in the AI Usage Log (see /settings/billing for token breakdown). You can review what was sent.
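The per-document opt-out above amounts to a filter applied before prompt assembly. A one-line sketch, with an assumed flag name (`no_ai_assist` is illustrative, not the product’s actual field):

```python
def docs_for_llm(docs: list[dict]) -> list[dict]:
    """Drop any document flagged 'no AI assist' before building the prompt.

    The flag name is an assumption for illustration.
    """
    return [d for d in docs if not d.get("no_ai_assist", False)]
```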
Limitations & honest disclosure
LLMs hallucinate, omit, and occasionally produce content that looks plausible but is wrong. Treat AI outputs as drafts. Do not submit AI-generated proposal text to a federal agency without human review. Bridger disclaims liability for consequences of un-reviewed AI output as set out in the Warranty Disclaimer of the Terms of Service.
Reach the operator at hgad@levenhall.com. Formal notice should be addressed to Levenhall LLC, Delaware, United States.