
LLM & AI Usage Policy

Last updated: January 2026

This LLM & AI Usage Policy (“Policy”) describes how Banyan AI (“Banyan,” “we,” “us,” or “our”) uses large language
models (“LLMs”) and artificial intelligence technologies in the provision of its services. This Policy supplements our
Privacy Policy and forms part of our overall data protection and security framework.

1. Scope of LLM Usage

Banyan AI is an AI-native platform. LLMs are a core component of our product and are used to enable capabilities including, without limitation, the following:

  • Natural-language to workflow generation
  • Natural-language to API, schema, or query translation
  • Data analysis, summarization, and explanation
  • Dashboard, report, and automation generation
  • Workflow logic validation and optimization

LLMs may be involved in both read and write operations. For write or delete actions, Banyan AI provides validation mechanisms during workflow setup and testing.
After validation, customers may choose whether workflows require ongoing human approval (“human-in-the-loop”) or run fully automated.

2. Customer-Controlled LLM Provider Selection

Banyan AI supports multiple LLM providers, including:

  • OpenAI
  • Anthropic
  • Google

A default provider is preconfigured. Customers may select and change their preferred LLM provider directly within the Banyan AI application. Once a provider is selected:

  • Requests are sent only to the chosen provider
  • No fallback, routing, or replication to other providers occurs
  • Banyan AI does not automatically switch providers without customer action

3. LLM Data Inputs & Customer Responsibility

3.1 Data Types

Depending on customer configuration and use cases, LLM inputs may include:

  • Metadata and schemas
  • Business data (e.g. CRM, billing, product usage data)
  • Personal data (e.g. names, emails, identifiers)

Banyan AI does not intentionally process special categories of personal data unless the customer explicitly provides such data through connected sources.

3.2 Customer Control

Customers retain full control over:

  • Which data sources are connected
  • Which data is queried, transformed, or sent to LLMs
  • How LLMs are used within chats, workflows, and automations

Banyan AI does not independently select or inject customer data into LLM prompts outside of customer-initiated actions or configured automations.

4. LLM Data Tenancy & Isolation

Banyan AI enforces strict tenant isolation:

  • Each customer’s data, prompts, and responses are processed separately
  • No prompts, responses, or context are shared across tenants
  • No cross-customer or shared LLM context windows are used

All stored LLM-related data (where applicable) is logically isolated on a per-tenant basis.
Banyan AI does not use customer data to train, fine-tune, or improve any LLM models.

5. LLM Data Residency

5.1 Application Infrastructure

Banyan AI’s primary application infrastructure is hosted in Germany using DigitalOcean data centers.

5.2 LLM Processing Locations

LLM processing is performed by the selected third-party provider. Where available, Banyan AI uses EU-based endpoints.
However, depending on the provider and configuration:

  • LLM processing may occur outside the customer’s country or region
  • Data residency for LLM processing is subject to the selected provider’s infrastructure

5.3 Enterprise Controls

Region-specific LLM processing and residency guarantees are available as an enterprise-only option, subject to technical feasibility and contractual agreement.

6. LLM Data Retention

6.1 Retention Duration

By default, LLM-related data (including prompts, responses, derived workflows, and configurations) is retained for the duration of the customer’s active account.
Banyan AI does not delete customer data while an account remains active, because this data is required for platform functionality.

6.2 Backups

LLM-related data may be included in encrypted system backups for availability and disaster recovery purposes.

6.3 Deletion

Customers may request immediate deletion of all customer data, including LLM-related data, at any time. Upon verified request:

  • Data is removed from active systems
  • Deletion from backups and archives follows scheduled secure deletion processes

Deletion requests result in account termination and loss of access to the service.

7. LLM Provider Data Handling

Banyan AI relies on contractual and technical safeguards provided by its LLM providers, including:

  • Opt-out mechanisms for model training where available
  • No resale or secondary use of submitted data
  • Provider-defined retention limits

Customers acknowledge that LLM providers act as independent sub-processors subject to their own data protection commitments.
A current list of approved LLM sub-processors is available upon request or via our Privacy Policy.

8. Transparency & Customer Visibility

Banyan AI provides transparency into LLM usage, including:

  • Visibility into which actions and features invoke LLMs
  • Visibility into connected data sources involved in LLM interactions
  • Access to LLM usage logs within the application interface

9. No Human Review by Banyan AI

Banyan AI does not perform human review of customer data, prompts, or LLM outputs by default.

Human access to customer data occurs only:

  • At the explicit request of the customer (e.g. support)
  • Where required for security, abuse prevention, or legal compliance

10. Limitations & Customer Responsibility

LLM outputs are probabilistic by nature and may be incomplete, inaccurate, or misleading.

Customers acknowledge and agree that:

  • Banyan AI does not guarantee correctness of LLM outputs
  • Customers remain responsible for decisions, actions, and outcomes based on LLM-generated content
  • Validation, testing, and monitoring of workflows remain the customer’s responsibility

11. Changes to This Policy

We may update this LLM & AI Usage Policy from time to time to reflect technical, legal, or regulatory changes.
Updates will be published on our website and, where appropriate, communicated to customers.