
Execution Metrics API

These endpoints provide access to job execution times and model-level performance data. Use them to monitor SLA compliance and identify optimization opportunities in your pipelines.

GET /api/v1/projects/:id/runs

Returns a paginated list of job runs for a project.

Query Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| page | integer | 1 | Page number |
| limit | integer | 50 | Results per page (max 500) |
| cronjob_name | string | - | Filter by CronJob name |
| status | string | - | Filter by status: pending, running, success, partial, failed, error |
| job_type | string | - | Filter by type: plan, run, janitor |
| updated_after | RFC 3339 | - | Only return runs updated after this timestamp |
| include_stats | boolean | true | Set to false to omit aggregate stats from the response |

Response

```json
{
  "runs": [
    {
      "id": "run_abc123",
      "project_id": "proj_xyz",
      "job_name": "sqlmesh-run-20260226-060000",
      "cronjob_name": "hourly-refresh",
      "job_type": "run",
      "status": "success",
      "start_time": "2026-02-26T06:00:00Z",
      "end_time": "2026-02-26T06:12:34Z",
      "duration_seconds": 754.123,
      "triggered_by": "scheduled",
      "created_at": "2026-02-26T06:00:00Z",
      "updated_at": "2026-02-26T06:12:34Z"
    }
  ],
  "total": 142,
  "page": 1,
  "limit": 50,
  "total_pages": 3,
  "stats": {
    "project_id": "proj_xyz",
    "total_runs": 142,
    "successful_runs": 130,
    "failed_runs": 8,
    "error_runs": 4,
    "avg_success_duration": 720.5
  }
}
```

Incremental polling

Use updated_after to fetch only records that changed since your last poll. Filter on updated_at (not start_time) so you catch jobs that started before your poll window but completed within it. See the Polling Guide below.

Reduce overhead

Set include_stats=false when polling for export — the aggregate stats query is skipped, reducing database load.
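The two tips above can be combined into one paging loop. This is a minimal sketch assuming the response shape shown above; fetch_page stands in for your HTTP client and is injected so the paging logic stays testable:

```python
# Incremental export poll against GET /runs: pass updated_after for
# incremental fetch, include_stats=false to skip the aggregate query,
# and walk every page. `fetch_page(params)` is a stand-in for an HTTP
# client call (e.g. requests.get(url, params=params).json()).

def fetch_all_runs(fetch_page, updated_after=None, limit=500):
    """Collect all runs across pages, skipping aggregate stats."""
    runs, page = [], 1
    while True:
        params = {"page": page, "limit": limit, "include_stats": "false"}
        if updated_after:
            params["updated_after"] = updated_after  # RFC 3339 timestamp
        body = fetch_page(params)
        runs.extend(body["runs"])
        if page >= body["total_pages"]:
            return runs
        page += 1
```

With the requests library, fetch_page could be `lambda p: requests.get(url, params=p, headers=auth).json()` where url and auth are your endpoint and credentials.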


GET /api/v1/projects/:id/runs/daily-stats

Returns daily aggregated run statistics.

Query Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| days | integer | 30 | Number of days to include (max 365) |
| cronjob_name | string | - | Filter by CronJob name |

Response

```json
{
  "daily_stats": [
    {
      "date": "2026-02-25",
      "cronjob_name": "hourly-refresh",
      "total_runs": 24,
      "success_count": 22,
      "partial_count": 1,
      "failed_count": 1,
      "other_count": 0,
      "avg_duration_seconds": 720.5,
      "p95_duration_seconds": 890.2,
      "max_duration_seconds": 1023.7
    }
  ],
  "total_days": 30
}
```

When cronjob_name is omitted, results are aggregated across all CronJobs for each day (the cronjob_name field will be absent).

SLA monitoring

Use p95_duration_seconds and max_duration_seconds to detect runs that exceed your SLA threshold. The avg_duration_seconds alone can mask outliers.
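As a sketch of that check, assuming the daily-stats shape above, a helper can flag any day whose tail latency blew past the SLA even when the average looks healthy:

```python
# Flag days whose p95 or max duration exceeds an SLA threshold (seconds).
# Averages are deliberately ignored: a healthy mean can hide slow outliers.

def sla_breaches(daily_stats, threshold_seconds):
    """Return the dates where the slowest runs exceeded the SLA."""
    return [
        d["date"]
        for d in daily_stats
        if d["p95_duration_seconds"] > threshold_seconds
        or d["max_duration_seconds"] > threshold_seconds
    ]
```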


GET /api/v1/projects/:id/model-executions

Returns a paginated list of individual model executions, enriched with the parent job's CronJob name.

Query Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| page | integer | 1 | Page number |
| limit | integer | 100 | Results per page (max 500) |
| model_name | string | - | Filter to a specific model |
| cronjob_name | string | - | Filter by parent job's CronJob name |
| updated_after | RFC 3339 | - | Only return executions updated after this timestamp |

Response

```json
{
  "executions": [
    {
      "id": "exec_def456",
      "job_run_id": "run_abc123",
      "project_id": "proj_xyz",
      "model_name": "orders.stg_orders",
      "started_at": "2026-02-26T06:01:15Z",
      "completed_at": "2026-02-26T06:03:45Z",
      "duration_seconds": 150.234,
      "status": "success",
      "rows_processed": 1250000,
      "bytes_processed": 524288000,
      "interval_start": "2026-02-25T00:00:00Z",
      "interval_end": "2026-02-26T00:00:00Z",
      "cronjob_name": "hourly-refresh",
      "created_at": "2026-02-26T06:01:15Z",
      "updated_at": "2026-02-26T06:03:45Z"
    }
  ],
  "total": 5400,
  "page": 1,
  "limit": 100,
  "total_pages": 54
}
```

Finding slow models

Query without model_name to get all model executions, then sort by duration_seconds descending to find your slowest models. Track rows_processed / duration_seconds over time to detect performance regressions.
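A minimal sketch of both ideas, assuming the execution shape shown above: rank executions by duration, and compute a rows-per-second throughput you can track over time:

```python
# Rank executions by duration and derive throughput for regression
# tracking. Executions with missing or zero duration are skipped in
# the throughput calculation rather than dividing by zero.

def slowest(executions, n=5):
    """The n longest-running model executions, slowest first."""
    return sorted(
        executions, key=lambda e: e["duration_seconds"], reverse=True
    )[:n]

def throughput(execution):
    """Rows processed per second, or None when it cannot be computed."""
    duration = execution.get("duration_seconds") or 0
    rows = execution.get("rows_processed")
    if not duration or rows is None:
        return None
    return rows / duration
```

A falling throughput for the same model across weeks is a stronger regression signal than duration alone, since it accounts for growing input volume.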


GET /api/v1/projects/:id/model-executions/stats

Returns aggregate execution statistics per model. Optionally scoped to a time window.

Query Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| start_date | YYYY-MM-DD | - | Start of date range (inclusive) |
| end_date | YYYY-MM-DD | - | End of date range (inclusive) |

Path Variants

  • /model-executions/stats — Stats for all models in the project
  • /model-executions/stats/:modelName — Stats for a specific model

Response (all models)

```json
{
  "stats": [
    {
      "project_id": "proj_xyz",
      "model_name": "orders.stg_orders",
      "total_executions": 720,
      "successful_executions": 710,
      "failed_executions": 10,
      "avg_duration_seconds": 145.6,
      "p50_duration_seconds": 132.4,
      "p95_duration_seconds": 289.1,
      "max_duration_seconds": 456.7,
      "min_duration_seconds": 98.2,
      "total_rows_processed": 900000000,
      "avg_rows_processed": 1267605.6,
      "total_bytes_processed": 377487360000,
      "avg_bytes_processed": 531672000.0,
      "last_execution_time": "2026-02-26T06:03:45Z"
    }
  ],
  "total": 45
}
```

Time-windowed vs. all-time

Without start_date/end_date, the endpoint returns all-time aggregates. With date parameters, it computes stats only for executions within the specified window. Use time windows to compare performance across periods (e.g., this week vs. last week).
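The period-over-period comparison can be sketched as follows, assuming two stats arrays fetched with different date windows (the min_pct threshold is an illustrative choice, not part of the API):

```python
# Compare two windows of per-model stats (e.g. this week vs. last week)
# and surface models whose p95 duration regressed by at least min_pct
# percent. Models absent from the previous window are skipped, since
# there is no baseline to compare against.

def p95_regressions(current, previous, min_pct=10.0):
    """Map of model_name -> percent p95 growth, for growth >= min_pct."""
    baseline = {s["model_name"]: s["p95_duration_seconds"] for s in previous}
    regressions = {}
    for s in current:
        base = baseline.get(s["model_name"])
        if not base:
            continue
        pct = (s["p95_duration_seconds"] - base) / base * 100
        if pct >= min_pct:
            regressions[s["model_name"]] = round(pct, 1)
    return regressions
```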


Polling Guide

For automated export workflows, poll these endpoints on a schedule (e.g., every 5-15 minutes) using updated_after for incremental fetch.

How updated_after works

The updated_after parameter filters on the record's updated_at timestamp — the time the record was last modified in the database. This is not the execution start time (start_time / started_at). A run that started at 5:55 AM but completed at 6:10 AM will have updated_at set to 6:10 AM. This ensures polling catches status transitions (e.g., running to success) even for jobs that started before your poll window.

In-progress runs will appear in poll results when they are created and again when they complete (with a different status). Your client should handle receiving the same record ID with updated fields.

  1. Store the updated_at timestamp of the most recent record from your last poll
  2. On the next poll, subtract a small overlap (e.g., 60 seconds) from that timestamp and pass it as updated_after — this guards against records written concurrently during your previous poll
  3. Deduplicate by record id on the client side
  4. Page through all results if total exceeds your limit
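The steps above can be sketched as two small helpers, assuming RFC 3339 UTC timestamps as shown in the responses (UTC strings in this format sort lexicographically, so max() finds the newest):

```python
from datetime import datetime, timedelta

OVERLAP = timedelta(seconds=60)  # step 2: guard window for concurrent writes

def next_cursor(last_updated_at):
    """Step 2: back the stored cursor off by the overlap before polling."""
    ts = datetime.fromisoformat(last_updated_at.replace("Z", "+00:00"))
    return (ts - OVERLAP).strftime("%Y-%m-%dT%H:%M:%SZ")

def merge_poll(records, store):
    """Steps 1 and 3: dedupe by id into `store` (later polls overwrite
    earlier snapshots of the same record) and return the newest
    updated_at seen, which becomes the cursor for the next poll."""
    for r in records:
        store[r["id"]] = r
    return max((r["updated_at"] for r in store.values()), default=None)
```

Step 4 (paging) is orthogonal: feed every page of a poll through merge_poll before advancing the cursor, so a cursor is never stored for a partially fetched poll.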

Concurrent write gap

The updated_after cursor pattern can miss records if new writes occur between the count and data queries during pagination. The recommended 60-second overlap and client-side deduplication mitigate this. For mission-critical SLA monitoring, consider polling more frequently (every 1-2 minutes) with a larger overlap window.

For runnable polling implementations in cURL and Python, see Examples.