FAQs, tips & tricks

Get more from multi-model. From the team that built it.

Eight field-tested techniques and the questions we get most often. If you only adopt the first one — the Tokyo Test — you'll already be ahead of most AI deployments we see.

Tips

Eight things power users do.

  1. Run the Tokyo Test on every important prompt

    Send the same question to up to 10 models simultaneously. If they agree, you have your answer. If they don't, the variance itself is the finding — and now you know which model assumptions you were silently relying on.

  2. Build named model bundles per use case

    Don't make users pick from 47 models every time. Bundle "contract review" (Claude + GPT-5 + Gemini 2.5 Pro), "code review" (GPT-5 + Sonnet + Llama 4), "research" (Sonar Pro + Perplexity + Grok). Assign bundles by department.

  3. Use Silent Reconstitute for friction-free PII handling

    Of the three Firewall modes, Silent Reconstitute is what most teams settle on for PII and PHI. The model never sees the sensitive token; the response comes back fully restored. Sub-40ms round trip — users won't notice.

  4. Use Hard Block for credentials and secrets

    AWS keys, database passwords, API tokens — Hard Block. Reject the prompt, log the attempt, surface it in the audit dashboard. There is no "warning" mode for credentials worth shipping.

  5. Add custom Firewall terms for your codenames

    Every regulated company has internal project codenames, M&A targets, unannounced products. Add them as custom Firewall terms in any of the three modes. The model never sees what it shouldn't.

  6. Point your phone at a contract

    The Backplain iOS app has camera multi-chat and voice multi-chat. In a client meeting, in deposition prep, anywhere a document is in front of you — snap and ask 10 models at once.

  7. Export the audit log to your SIEM

    Every prompt, response, and Firewall action streams to your SIEM via webhook or syslog. Splunk, Datadog, Sentinel, Sumo — all supported. This is the shortest path to your SOC 2 evidence requirement.

  8. Keep Backplain even if you standardize on one model

    Several customers have moved to Claude Enterprise or ChatGPT Business as a daily driver and kept 5 Backplain seats for second opinions, model comparison, and the AI Firewall their primary tool doesn't have. Co-existence is the design.
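
The Hard Block behavior in tip 4 amounts to pattern matching plus outright rejection: detect, refuse, log. Here is a minimal sketch in Python. The patterns and function names are illustrative assumptions, not Backplain's actual Firewall rules:

```python
import re

# Illustrative patterns only -- Backplain's real detection rules are not public.
CREDENTIAL_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key":    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_token":  re.compile(r"\b(?:api|auth)[_-]?token\s*[:=]\s*\S+", re.I),
}

def hard_block(prompt: str):
    """Return (allowed, hits). Any hit rejects the prompt outright."""
    hits = [name for name, pat in CREDENTIAL_PATTERNS.items()
            if pat.search(prompt)]
    return (len(hits) == 0, hits)

allowed, hits = hard_block("deploy with key AKIAABCDEFGHIJKLMNOP")
# allowed is False; hits == ["aws_access_key"] -- log it, surface it, move on
```

The point of the sketch: there is no transform step. A credential match never reaches a model in any form.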

FAQs

Frequently asked questions.

How many models can I compare at once?
Up to 10 in a single Multi-Model Chat session. You can include any combination of the 47 supported models across 9 providers.
Which providers are supported?
OpenAI, Anthropic, Google, Meta, Mistral, Amazon, xAI, Perplexity, and Backplain's own models. New models are added weekly — read the blog for changelogs.
Does Backplain train on my data?
No. We do not train on customer prompts or responses. Provider-side training is also disabled by default for every supported model that offers an opt-out.
What does the AI Firewall actually do?
It intercepts every outbound prompt, identifies sensitive data (PII, PHI, financial credentials, custom terms), and applies your chosen mode — Hard Block, User Warning, or Silent Reconstitute — before the model ever sees the request. Average latency: under 40ms.
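
The Silent Reconstitute mode can be pictured as a redact-then-restore pipeline: swap sensitive spans for placeholder tokens before the request leaves, keep the mapping local, and reapply it to the reply. Everything below is an illustrative sketch, assuming a single email pattern and a made-up placeholder format, not Backplain's implementation:

```python
import re

# One illustrative PII pattern; a real firewall would detect many categories.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def redact(prompt: str):
    """Replace each email with a placeholder; return (safe_prompt, mapping)."""
    mapping = {}
    def swap(match):
        token = f"<<PII_{len(mapping)}>>"
        mapping[token] = match.group(0)
        return token
    return EMAIL.sub(swap, prompt), mapping

def reconstitute(response: str, mapping: dict) -> str:
    """Restore the original values in the model's response."""
    for token, original in mapping.items():
        response = response.replace(token, original)
    return response

safe, mapping = redact("Email jane.doe@example.com about the renewal.")
# safe == "Email <<PII_0>> about the renewal." -- this is all the model sees
reply = reconstitute("Drafted a note to <<PII_0>>.", mapping)
# reply == "Drafted a note to jane.doe@example.com."
```

Because the mapping never leaves the gateway, the restore step is a local string substitution, which is consistent with the sub-40ms latency figure.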
Can different teams see different models?
Yes. Bundle models into named groups and assign bundles to teams or roles. Legal sees one set; engineering sees another.
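
Conceptually, bundle assignment is a small lookup table from team to bundle to model list. The bundle names, model identifiers, and shape below are illustrative, not Backplain's configuration schema:

```python
# Illustrative config sketch -- names and structure are assumptions.
BUNDLES = {
    "contract-review": ["claude", "gpt-5", "gemini-2.5-pro"],
    "code-review":     ["gpt-5", "sonnet", "llama-4"],
    "research":        ["sonar-pro", "perplexity", "grok"],
}

TEAM_BUNDLES = {
    "legal":       "contract-review",
    "engineering": "code-review",
}

def models_for(team: str) -> list[str]:
    """Resolve a team to the only models it can see."""
    return BUNDLES[TEAM_BUNDLES[team]]

# models_for("legal") -> ["claude", "gpt-5", "gemini-2.5-pro"]
```

The design benefit is that users never choose from the full model catalog; they pick a use case and get a vetted set.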
Is there an iOS app?
Yes. Camera multi-chat, voice multi-chat, full Firewall enforcement. Available on the App Store.
Can I deploy Backplain in my own cloud?
Yes. Multi-tenant SaaS, your own AWS / Azure / GCP, a fully isolated private cloud, or Backplain-owned dedicated bare metal in a SOC 2 Type II audited, Tier III data center in San Diego.
What happens if a model provider goes down?
Multi-Model Chat is provider-redundant by design. If one provider's API is degraded, the others still answer — and Backplain marks the failed cell rather than failing the whole prompt.
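
The fan-out pattern behind this can be sketched as independent per-model calls where a provider failure yields one marked cell instead of a failed prompt. The shape below is an illustration of that design, not Backplain's internals:

```python
from concurrent.futures import ThreadPoolExecutor

def ask_all(prompt: str, models: dict) -> dict:
    """Fan a prompt out to every model; isolate each provider's failures.

    `models` maps a model name to a callable(prompt) that may raise.
    """
    def one(name, call):
        try:
            return name, {"status": "ok", "answer": call(prompt)}
        except Exception as exc:
            # A degraded provider becomes a marked cell, not a failed request.
            return name, {"status": "error", "detail": str(exc)}

    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(one, name, call) for name, call in models.items()]
        return dict(f.result() for f in futures)

def flaky(prompt):
    raise TimeoutError("provider down")

results = ask_all("capital of Japan?", {
    "model-a": lambda p: "Tokyo",  # stand-in for a real provider call
    "model-b": flaky,
})
# model-a answers normally; model-b is marked "error" and the prompt still succeeds
```

Each callable stands in for one provider's API call, so one timeout never blocks the answers that did arrive.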

Didn't find your answer? Open a support ticket — a Backplain engineer will reply, not a script.