dbt Projects

Quick Summary

dbt projects in dagctl are fully managed deployments with dedicated containers per project. You connect a Git repository, configure your warehouse credentials in the web UI, and dagctl handles the rest. No CLI required.

Setting          Recommended          Why
Branch           main                 Standard default branch
dbt Version      Latest               Get the newest fixes and features
Python Version   Match your project   Avoid dependency conflicts

Key Takeaways

  • dbt version and Python version are configured in the web UI — not in .dagctl/config.yaml
  • Warehouse credentials are never stored in your repository — configure them in the web UI and they are injected at runtime via Kyverno
  • Increment .dagctl/config.yaml version to trigger a new image build — this is the only mechanism dagctl uses to detect that a deployment is needed

Creating a dbt Project

  1. Navigate to Projects in the dagctl web UI
  2. Click Create Project
  3. Select dbt-core as the framework
  4. Enter your Git repository URL (HTTPS or SSH)
  5. Select the branch to deploy from (usually main)
  6. Configure warehouse credentials as environment variables (e.g., SNOWFLAKE_ACCOUNT, SNOWFLAKE_USER, SNOWFLAKE_PASSWORD)
  7. Click Deploy

dagctl provisions all required infrastructure in your organization's namespace and builds the initial container image.
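A minimal repository for a managed dbt project might be laid out like this (only .dagctl/config.yaml is dagctl-specific; the rest follows standard dbt conventions, and the exact model layout is up to you):

```text
my-dbt-project/
├── .dagctl/
│   └── config.yaml      # required: marks the repo as a managed dagctl project
├── dbt_project.yml      # standard dbt project file
├── models/
│   └── ...
└── README.md
```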

The .dagctl/config.yaml File

Warning

This file must exist in the root of your repository. dagctl will not deploy your project without it.

The .dagctl/config.yaml file tells dagctl that your repository is a managed dbt project. The schema is minimal:

version: "1.0.0"

The version field is the only required field.

dbt version and Python version are configured in the dagctl web UI, not here.

To trigger a new container image build, increment the version:

version: "1.0.1"

Commit and push that change, then create a new plan. dagctl detects the version change and kicks off a new build.

How Deployment Works

When you deploy a dbt project, dagctl:

  1. Builds a container image from your repository using Kaniko
  2. Provisions 4 containers in your organization's Kubernetes namespace:
       • git-sync — clones your repository before each run
       • dbt-api — handles dbt command execution
       • poller — monitors for new plan versions
       • job-watcher — tracks job execution status and reports back to the control plane
  3. Injects warehouse credentials at pod runtime via Kyverno mutation policies

Image rebuilds happen when you create a new plan after incrementing the version in .dagctl/config.yaml.
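The credential injection in step 3 can be pictured with a Kyverno mutation policy along these lines. This is an illustrative sketch, not dagctl's actual policy: the policy name, namespace, container anchor, and Secret name are all assumptions.

```yaml
# Hypothetical Kyverno policy sketching runtime credential injection
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: inject-warehouse-credentials   # assumed name
spec:
  rules:
    - name: add-warehouse-env
      match:
        any:
          - resources:
              kinds:
                - Pod
              namespaces:
                - my-org               # your organization's namespace
      mutate:
        patchStrategicMerge:
          spec:
            containers:
              - (name): "dbt-api"      # anchor: only mutate the dbt-api container
                envFrom:
                  - secretRef:
                      name: warehouse-credentials   # assumed Secret name
```

Because the credentials arrive only when the pod is admitted, they exist in the cluster as Secrets and never in your Git history.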

Environment Variables

Configure warehouse credentials in the web UI under your project's Environment Variables section. Common variables for supported warehouses:

Snowflake

SNOWFLAKE_ACCOUNT
SNOWFLAKE_USER
SNOWFLAKE_PASSWORD
SNOWFLAKE_DATABASE
SNOWFLAKE_WAREHOUSE
SNOWFLAKE_SCHEMA

BigQuery

GOOGLE_PROJECT
GOOGLE_DATASET

Redshift / Postgres

DBT_HOST
DBT_USER
DBT_PASSWORD
DBT_DATABASE
DBT_SCHEMA
DBT_PORT

These variables are stored as Kubernetes Secrets in your organization's namespace and injected into job pods at runtime by Kyverno. They are never written to your repository.
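On the dbt side, a profiles.yml can read these injected variables with dbt's built-in env_var() function. For example, a Snowflake profile might look like the following (the profile name, target name, and thread count are illustrative):

```yaml
# profiles.yml: illustrative Snowflake profile reading injected env vars
my_project:                # must match the profile name in dbt_project.yml
  target: prod
  outputs:
    prod:
      type: snowflake
      account: "{{ env_var('SNOWFLAKE_ACCOUNT') }}"
      user: "{{ env_var('SNOWFLAKE_USER') }}"
      password: "{{ env_var('SNOWFLAKE_PASSWORD') }}"
      database: "{{ env_var('SNOWFLAKE_DATABASE') }}"
      warehouse: "{{ env_var('SNOWFLAKE_WAREHOUSE') }}"
      schema: "{{ env_var('SNOWFLAKE_SCHEMA') }}"
      threads: 4
```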

Next Steps

  • Schedule dbt commands on a cron: dbt Jobs
  • Deploy changes with version-based plans: dbt Plans