If your Azure SQL Database or Azure Database for PostgreSQL is in a private VNet with no public endpoint, there are several ways to securely connect it to BonData. The right approach depends on your security requirements, data volume, and infrastructure.

  • Blob Storage + Functions: self-service setup with Terraform
  • Tunnel Agent: lightweight agent in your VNet
  • Private Link: private endpoint, no public internet
  • VNet Peering: direct network link between VNets
  • VPN Gateway: encrypted tunnel over the internet
  • ExpressRoute: dedicated physical connection
Not sure which option is right for you? The Blob Storage + Azure Functions approach works for most teams and you can set it up entirely on your own. For all other options, reach out to our team — we’ll help you evaluate your setup and find the best path forward.

Option 1: Export to Blob Storage via Azure Functions

Recommended · Self-service
An Azure Function runs inside your VNet via VNet integration, queries your database, converts results to Parquet, and writes them to Blob Storage. BonData reads from Blob Storage via its native S3-compatible integration.
Azure SQL (private) ──▶ Azure Function (VNet integrated) ──▶ Blob Storage ──▶ BonData

                               Timer trigger (cron)
Why this approach works best for most teams:
  • No firewall changes — the function connects via your VNet
  • Zero DB performance impact — queries run on your schedule
  • Database credentials never leave your Azure subscription
  • Fully self-service — no coordination with BonData needed

Deploy with Terraform

Create a bondata-azure-export.tf file and fill in the variables at the top. This provisions the Storage Account, Function App, VNet integration, and timer trigger in one apply.
Store db_password in Azure Key Vault and reference it as a Key Vault secret in the app settings to avoid committing secrets.
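A minimal sketch of that Key Vault approach, assuming an existing vault and a secret named bondata-db-password (both names are placeholders) and that the Function App has a managed identity with get access to the vault's secrets:

```hcl
# Inside the Function App's app_settings block, replace the plain-text
# password with an App Service Key Vault reference (placeholder vault
# and secret names):
DB_PASSWORD = "@Microsoft.KeyVault(SecretUri=https://my-vault.vault.azure.net/secrets/bondata-db-password/)"
```

App Service resolves the reference at runtime, so the secret value never appears in Terraform state or the portal's app settings view.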
# ──────────────────────────────────────────────
# Variables — fill these in
# ──────────────────────────────────────────────
variable "resource_group"    { description = "Existing resource group name" }
variable "location"          { default = "eastus" }
variable "vnet_name"         { description = "VNet where your database lives" }
variable "vnet_rg"           { description = "Resource group of the VNet" }
variable "integration_subnet" { description = "Subnet name delegated to Microsoft.Web/serverFarms" }
variable "db_host"           { description = "Database FQDN (private endpoint)" }
variable "db_port"           { default = "5432" }
variable "db_name"           { description = "Database name" }
variable "db_user"           { description = "Database user" }
variable "db_password"       { sensitive = true }
variable "tables" {
  description = "Comma-separated tables"
  default     = "public.users,public.orders"
}
variable "schedule" {
  description = "NCRONTAB schedule (default: every hour)"
  default     = "0 0 * * * *"
}
variable "storage_account" {
  description = "Storage account name (globally unique)"
  default     = "bondataexports"
}

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 4.0" # storage_account_id on azurerm_storage_container requires the 4.x provider
    }
  }
}

provider "azurerm" {
  features {}
}

# ──────────────────────────────────────────────
# Data sources
# ──────────────────────────────────────────────
data "azurerm_resource_group" "rg" {
  name = var.resource_group
}

data "azurerm_subnet" "integration" {
  name                 = var.integration_subnet
  virtual_network_name = var.vnet_name
  resource_group_name  = var.vnet_rg
}

# ──────────────────────────────────────────────
# Storage account + container
# ──────────────────────────────────────────────
resource "azurerm_storage_account" "export" {
  name                     = var.storage_account
  resource_group_name      = data.azurerm_resource_group.rg.name
  location                 = var.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
  allow_nested_items_to_be_public = false
}

resource "azurerm_storage_container" "export" {
  name                  = "bondata-exports"
  storage_account_id    = azurerm_storage_account.export.id
  container_access_type = "private"
}

# ──────────────────────────────────────────────
# App Service Plan (Elastic Premium)
# ──────────────────────────────────────────────
resource "azurerm_service_plan" "plan" {
  name                = "bondata-export-plan"
  resource_group_name = data.azurerm_resource_group.rg.name
  location            = var.location
  os_type             = "Linux"
  sku_name            = "EP1" # Elastic Premium required for VNet integration
}

# ──────────────────────────────────────────────
# Function App
# ──────────────────────────────────────────────
resource "azurerm_linux_function_app" "export" {
  name                       = "bondata-db-export"
  resource_group_name        = data.azurerm_resource_group.rg.name
  location                   = var.location
  service_plan_id            = azurerm_service_plan.plan.id
  storage_account_name       = azurerm_storage_account.export.name
  storage_account_access_key = azurerm_storage_account.export.primary_access_key

  virtual_network_subnet_id = data.azurerm_subnet.integration.id

  site_config {
    application_stack {
      python_version = "3.12"
    }
    vnet_route_all_enabled = true
  }

  app_settings = {
    DB_HOST                        = var.db_host
    DB_PORT                        = var.db_port
    DB_NAME                        = var.db_name
    DB_USER                        = var.db_user
    DB_PASSWORD                    = var.db_password
    STORAGE_CONNECTION_STRING      = azurerm_storage_account.export.primary_connection_string
    STORAGE_CONTAINER              = azurerm_storage_container.export.name
    TABLES                         = var.tables
    EXPORT_SCHEDULE                = var.schedule
    FUNCTIONS_WORKER_RUNTIME       = "python"
    AzureWebJobsFeatureFlags       = "EnableWorkerIndexing"
  }
}

# ──────────────────────────────────────────────
# Outputs
# ──────────────────────────────────────────────
output "storage_account" { value = azurerm_storage_account.export.name }
output "function_app"    { value = azurerm_linux_function_app.export.name }
After terraform apply succeeds, deploy the function code: create the two files below, then publish them with Azure Functions Core Tools. The sample targets PostgreSQL; for Azure SQL Database, swap psycopg2 for pyodbc and adjust the cursor SQL accordingly.

function_app.py:
import io
import json
import logging
import os
from datetime import datetime, timezone

import azure.functions as func
from azure.storage.blob import BlobServiceClient
import psycopg2
import pyarrow as pa
import pyarrow.parquet as pq

app = func.FunctionApp()

DB = dict(
    host=os.environ["DB_HOST"],
    port=int(os.environ.get("DB_PORT", "5432")),
    dbname=os.environ["DB_NAME"],
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
)
CONN_STR  = os.environ["STORAGE_CONNECTION_STRING"]
CONTAINER = os.environ["STORAGE_CONTAINER"]
PREFIX    = os.environ.get("STORAGE_PREFIX", "db-exports")
TABLES    = [t.strip() for t in os.environ["TABLES"].split(",")]
CHUNK     = int(os.environ.get("CHUNK_SIZE", "50000"))


def export_table(cur, table, ts, container_client):
    """Stream one table to Parquet in CHUNK-row parts via a server-side cursor."""
    safe = table.replace('"', '').replace('.', '__')
    # Fetch zero rows just to read the column names from the cursor description.
    # Table names come from the TABLES app setting, not user input.
    cur.execute(f"SELECT * FROM {table} LIMIT 0")
    cols = [d[0] for d in cur.description]
    # A server-side cursor keeps memory bounded regardless of table size.
    cur.execute(f"DECLARE _c CURSOR FOR SELECT * FROM {table}")
    part, total = 0, 0
    while True:
        cur.execute(f"FETCH {CHUNK} FROM _c")
        rows = cur.fetchall()
        if not rows:
            break
        tbl = pa.table({c: [r[i] for r in rows] for i, c in enumerate(cols)})
        buf = io.BytesIO()
        pq.write_table(tbl, buf)
        buf.seek(0)
        blob_name = f"{PREFIX}/{safe}/dt={ts}/part-{part:05d}.parquet"
        container_client.upload_blob(name=blob_name, data=buf, overwrite=True)
        total += len(rows)
        part += 1
    cur.execute("CLOSE _c")
    return total


@app.timer_trigger(schedule=os.environ.get("EXPORT_SCHEDULE", "0 0 * * * *"),
                   arg_name="timer", run_on_startup=False)
def bondata_export(timer: func.TimerRequest):
    ts = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H%M%SZ")
    blob_svc = BlobServiceClient.from_connection_string(CONN_STR)
    container_client = blob_svc.get_container_client(CONTAINER)
    conn = psycopg2.connect(**DB)
    try:
        conn.autocommit = False  # DECLARE CURSOR requires a transaction
        cur = conn.cursor()
        results = {}
        for t in TABLES:
            try:
                results[t] = export_table(cur, t, ts, container_client)
            except Exception as e:
                logging.error(f"{t}: {e}")
                results[t] = str(e)
                conn.rollback()  # reset the transaction so later tables can still run
        conn.commit()
    finally:
        conn.close()
    logging.info(json.dumps(results))
requirements.txt:
azure-functions==1.*
azure-storage-blob==12.*
psycopg2-binary==2.9.9
pyarrow==15.0.0
Deploy:
func azure functionapp publish bondata-db-export
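The function lays blobs out as one folder per table, partitioned by export timestamp. A quick sketch of the naming scheme (mirroring the logic in export_table above, with illustrative table and timestamp values):

```python
# Predict the blob paths the export will produce, so you know what
# BonData will see in the container.
def blob_path(prefix: str, table: str, ts: str, part: int) -> str:
    safe = table.replace('"', '').replace('.', '__')  # public.users -> public__users
    return f"{prefix}/{safe}/dt={ts}/part-{part:05d}.parquet"

print(blob_path("db-exports", "public.users", "2024-01-01T000000Z", 0))
# → db-exports/public__users/dt=2024-01-01T000000Z/part-00000.parquet
```

Each timer run writes a fresh dt=… partition, so previous exports are never overwritten and BonData can pick up the latest snapshot.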

Connect Blob Storage to BonData

Once data is flowing, connect BonData to the storage account:
  1. In BonData, go to Integrations → Add Integration → Amazon S3
  2. Generate a read-only SAS token scoped to the bondata-exports container and use it as the credential
  3. Alternatively, contact support@bondata.ai to set up a direct Azure Blob Storage connection
A read-only SAS token scoped to the export container can also be shared with BonData support for direct ingestion.
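For example, a read-and-list-only SAS token for the export container can be generated with the Azure CLI (account name, expiry date, and key are illustrative placeholders):

```shell
# Read + list only, scoped to the export container, with a fixed expiry.
az storage container generate-sas \
  --account-name bondataexports \
  --name bondata-exports \
  --permissions rl \
  --expiry 2025-12-31T00:00Z \
  --auth-mode key \
  --account-key "<storage account key>" \
  --output tsv
```

Scoping the token to a single container with read/list permissions and a hard expiry limits the blast radius if the token is ever leaked.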

Option 2: BonData Tunnel Agent

A lightweight Docker container that runs inside your VNet and creates a secure outbound tunnel to BonData. Once running, BonData can query your database directly through the encrypted connection — no inbound firewall rules, no VPN, no public exposure.
┌──────────────────────────────────────────────────┐
│                 Your Azure VNet                  │
│                                                  │
│  ┌─────────────┐       ┌────────────────┐       │
│  │  Azure SQL  │◀──────│ BonData Tunnel │───────┼──▶ BonData Cloud (port 443 outbound)
│  │  (private)  │       │    Agent       │       │
│  └─────────────┘       └────────────────┘       │
│                                                  │
└──────────────────────────────────────────────────┘
Best for: Teams that need real-time query access with minimal infrastructure changes. The agent only requires outbound HTTPS (port 443) and can run on any Docker host — Azure VMs, AKS, ACI, or Container Apps. Database credentials stay in your environment and all traffic is encrypted end-to-end.
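A deployment sketch, illustrative only: the actual image name, registry, and environment variables come from BonData when your tunnel token is provisioned.

```shell
# Placeholder image and variables until BonData provisions your token.
docker run -d --restart unless-stopped \
  -e BONDATA_TUNNEL_TOKEN="<token from BonData>" \
  -e DB_HOST="your-db.privatelink.database.azure.com" \
  -e DB_PORT="5432" \
  bondata/tunnel-agent:latest
```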

Get started with the Tunnel Agent

Contact our team to provision your tunnel token and walk through deployment for your environment.

Option 3: Azure Private Link

Azure Private Link creates a private endpoint in your VNet that routes traffic to BonData over the Microsoft backbone network, never crossing the public internet.
Best for: Organizations with strict compliance requirements (HIPAA, SOC 2, FedRAMP) that prohibit any data traversal over the public internet, even when encrypted. Private Link provides the strongest network-level isolation available on Azure.
How it works:
  • BonData exposes a Private Link Service in its Azure subscription
  • You create a Private Endpoint in your VNet pointing to that service
  • Your database traffic flows privately through the Microsoft backbone — no internet gateway, no public IPs
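Once you have the Private Link Service alias from BonData, the private endpoint can be created with the Azure CLI (resource names here are placeholders):

```shell
# Placeholders: resource group, VNet, subnet, and the BonData-provided alias.
az network private-endpoint create \
  --name bondata-pe \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --subnet my-subnet \
  --private-connection-resource-id "<alias from BonData>" \
  --connection-name bondata-connection \
  --manual-request true
```

Because the connection targets an alias in another tenant, it is created with --manual-request and must be approved on the BonData side before traffic flows.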

Set up Private Link

Contact our team to get BonData’s Private Link Service alias and configure the private endpoint for your subscription.

Option 4: VNet Peering

VNet Peering creates a direct network route between your VNet and BonData’s VNet, allowing private IP communication across subscriptions and tenants.
Best for: Teams that want a simple, low-cost network link. VNet Peering on Azure supports high bandwidth, low latency, and works across subscriptions, tenants, and regions (global peering).
How it works:
  • A peering connection is established between your VNet and BonData’s VNet
  • Routes are automatically exchanged between the peered networks
  • Your database’s network security group or firewall is updated to allow connections from BonData’s address space
VNet Peering requires non-overlapping address spaces. Global VNet Peering (cross-region) is supported but may incur data transfer charges.
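The peering connection on your side can be created with the Azure CLI once VNet details are exchanged (your VNet details and the remote VNet resource ID below are placeholders):

```shell
# Placeholders: your VNet details and the BonData VNet resource ID.
az network vnet peering create \
  --name to-bondata \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --remote-vnet "<BonData VNet resource ID>" \
  --allow-vnet-access
```

A matching peering must also be created on BonData’s side; the link only becomes Connected once both halves exist.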

Set up VNet Peering

Contact our team to exchange VNet details and coordinate the peering connection.

Option 5: Azure VPN Gateway

An Azure VPN Gateway creates an encrypted IPsec/IKE tunnel over the public internet between your network and BonData’s infrastructure.
Best for: Organizations that already have VPN infrastructure, need to connect from on-premises networks, or require connectivity where VNet Peering isn’t possible due to overlapping address spaces.
How it works:
  • A VPN Gateway is provisioned in your VNet’s gateway subnet
  • An IPsec/IKE tunnel is established between your gateway and BonData’s endpoint
  • All traffic is encrypted and routed through the tunnel
  • Supports both policy-based and route-based configurations
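Provisioning the gateway itself can be sketched with the Azure CLI (names are placeholders; the gateway must live in a subnet named GatewaySubnet, and provisioning typically takes 30–45 minutes):

```shell
# Placeholders throughout; requires a GatewaySubnet in the VNet and a
# standard public IP already allocated.
az network vnet-gateway create \
  --name bondata-vpn-gw \
  --resource-group my-rg \
  --vnet my-vnet \
  --public-ip-address my-gw-ip \
  --gateway-type Vpn \
  --vpn-type RouteBased \
  --sku VpnGw1
```

The tunnel to BonData is then established as a connection on this gateway using details (peer address, shared key) exchanged with our team.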

Set up VPN Gateway

Contact our team to exchange gateway details and configure the VPN tunnel.

Option 6: Azure ExpressRoute

Azure ExpressRoute provides a dedicated private connection between your infrastructure and BonData through a connectivity provider, bypassing the public internet entirely.
Best for: Enterprise environments with very high data volumes, strict latency requirements, or regulatory mandates for dedicated connectivity. ExpressRoute provides the most predictable throughput and lowest latency, with options for 50 Mbps to 100 Gbps circuits.
How it works:
  • A circuit is provisioned through an ExpressRoute connectivity provider
  • Private peering routes traffic between your network and BonData’s Azure VNet
  • Traffic never touches the public internet — ideal for large-scale, continuous data sync
  • ExpressRoute Global Reach can extend connectivity across regions
ExpressRoute typically takes 1-4 weeks to provision depending on the provider. ExpressRoute Direct is available for dedicated port-level access at 10 Gbps or 100 Gbps.
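Creating the circuit on the Azure side can be sketched with the CLI (bandwidth, peering location, and provider are placeholders that come from your agreement with the connectivity provider):

```shell
# Placeholders: bandwidth (Mbps), peering location, and provider name
# depend on your connectivity provider contract.
az network express-route create \
  --name bondata-circuit \
  --resource-group my-rg \
  --bandwidth 1000 \
  --peering-location "Washington DC" \
  --provider "Equinix"
```

The circuit stays in a NotProvisioned state until the connectivity provider completes their side, after which private peering to BonData’s VNet is configured.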

Set up ExpressRoute

Contact our team to discuss your throughput requirements and coordinate the connection.