AI at Montara

Last modified: August 10, 2025

Overview

At Montara, we believe AI Technologies (LLMs, Generative AI, etc.) will be instrumental in helping organizations use data efficiently and effectively. As such, we invest heavily in integrating and using AI Technologies in the Montara Platform.

Montara Platform and AI Technologies

At Montara, we use third-party large language model providers (currently OpenAI) to enable our cutting-edge AI capabilities in the platform, such as:
  • Data transformation copilot: helps users write better SQL/Python code, fixes errors in code, and suggests optimizations and improvements.
  • Data Validation AI Agent: scans SQL/Python code to suggest validation tests (e.g., Unique/Required column checks) and writes the tests for the user (see the illustrative example after this list).
  • Documentation AI Agent: automatically documents code and table columns.
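
For illustration only, and not a description of how Montara implements these tests, the kinds of checks the Data Validation AI Agent might suggest could look like the following, shown here as SQL held in Python string constants. The table and column names ("customers", "customer_id") are hypothetical.

# Purely illustrative sketch of "Unique" and "Required" column checks.
# The table and column names are hypothetical, not taken from any real schema.

UNIQUE_CHECK = """
-- Unique column check: returns rows only if customer_id appears more than once
select customer_id, count(*) as occurrences
from customers
group by customer_id
having count(*) > 1
"""

REQUIRED_CHECK = """
-- Required column check: returns a non-zero count if customer_id is ever missing
select count(*) as missing_values
from customers
where customer_id is null
"""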

How We Use AI Technologies

To provide users with the powerful benefits of AI, Montara includes rich context that it generates in the platform in its requests to its third-party large language model providers. This context is refined and optimized over time. User input may also be included (for example, when a user requests help from our AI Copilot).
  • What is not included in the context we send to LLM providers
    • Any Personally Identifiable Information (PII)
    • Any actual data from your data warehouse tables 
  • What might be included in the context we send to LLM providers
    • Model code
    • Table schemas
    • Data validation tests
    • Metadata, such as table and column descriptions
    • Table lineage information
We do not allow our third-party providers to use any of the context data or user input we send them to train their models. We also require that any data sent to our third-party providers be retained in their systems only for a short period of time.
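
To make the above concrete, here is a hypothetical sketch, in Python and for illustration only, of the general shape such context might take. Every field name, table, and description below is invented for this example; it is not Montara's actual request format, and it contains no PII and no rows from a data warehouse.

# Purely illustrative sketch of the non-PII context that might accompany an AI request.
# All names and values are hypothetical.
example_context = {
    "model_code": "select order_id, status from raw_orders",  # model SQL (or Python)
    "table_schema": {"orders": {"order_id": "integer", "status": "varchar"}},
    "validation_tests": ["orders.order_id: unique", "orders.order_id: required"],
    "metadata": {"orders.status": "Current fulfillment status of the order"},
    "lineage": ["raw_orders -> orders"],
}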