MCP & AI Agent Integration

Databricks automation.

18 automated actions available through Cerebral OS. Connect Databricks to any workflow, Cerebral, or Map — with full governance, audit trail, and dry-run safety on every execution.

No credit card required · 1,000 free runs · 18 actions available

18 actions · 100% governed · <200ms latency
18 automated actions · 9 read operations · 9 write operations · 2,800+ compatible Maps
Actions

What you can do with Databricks.

Every action below is available as an MCP tool and as a verb in Cerebral OS, callable from any AI agent (Claude, Cursor, Windsurf) or from your own runtime via the BYOA API. All executions are governed, audited, and dry-run safe.
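Because each verb is exposed as an MCP tool, invoking one is an ordinary JSON-RPC 2.0 `tools/call` request as defined by the Model Context Protocol. The sketch below builds that message shape; the `job_id` and parameter values are illustrative, not real workspace identifiers.

```python
import json

def mcp_tool_call(name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request, the message shape the
    Model Context Protocol uses to invoke a tool on an MCP server."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# Example: trigger a job run (job_id and notebook_params are illustrative).
payload = mcp_tool_call(
    "databricks:run_job",
    {"job_id": 123, "notebook_params": {"env": "staging"}},
)
print(payload)
```

Any MCP-capable client sends this same envelope over its transport; only the tool name and arguments change per verb.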

Cancel Run · databricks:cancel_run
Cancel a running job. The run will transition to TERMINATING then TERMINATED state.
Write · High risk

Create Cluster · databricks:create_cluster
Create a new Databricks cluster with specified configuration.
Write · Medium risk

Create Notebook · databricks:create_notebook
Create a new notebook in the workspace with optional initial content.
Write · Medium risk

Delete Notebook · databricks:delete_notebook
Delete a notebook or folder from the workspace. This action cannot be undone.
Write · High risk

Get Cluster · databricks:get_cluster
Get detailed information about a specific cluster including configuration and status.
Read · Low risk

Get Job · databricks:get_job
Get detailed information about a specific job including its configuration and settings.
Read · Low risk

Get Notebook · databricks:get_notebook
Export and retrieve notebook content in the specified format.
Read · Low risk

Get Run · databricks:get_run
Get detailed information about a specific job run including status and results.
Read · Low risk

Install Library · databricks:install_library
Install a library on a cluster. The cluster must be running for installation to begin.
Write · Medium risk

List Clusters · databricks:list_clusters
List all clusters in the workspace with their current status and configuration.
Read · Low risk

List Jobs · databricks:list_jobs
List all jobs in the workspace with their configuration and recent run status.
Read · Low risk

List Libraries · databricks:list_libraries
List all libraries installed on a specific cluster with their status.
Read · Low risk

List Notebooks · databricks:list_notebooks
List notebooks and folders in a workspace path.
Read · Low risk

List Runs · databricks:list_runs
List job runs with optional filters for job ID, state, and time range.
Read · Low risk

Run Job · databricks:run_job
Trigger a job run with optional parameters. Returns the run ID to track progress.
Write · Medium risk

Start Cluster · databricks:start_cluster
Start a terminated cluster. The cluster will transition to PENDING then RUNNING state.
Write · Medium risk

Terminate Cluster · databricks:terminate_cluster
Terminate a running cluster. This will stop all running jobs and shut down the cluster.
Write · Medium risk

Uninstall Library · databricks:uninstall_library
Uninstall a library from a cluster. The cluster must be restarted for changes to take effect.
Write · Medium risk
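A common agent pattern pairs two of the verbs above: Run Job returns a run ID, and Get Run is polled until the run reaches a terminal life-cycle state (TERMINATED, SKIPPED, or INTERNAL_ERROR in Databricks terms). A minimal sketch, where `call(verb, args)` is a stand-in for whatever client you use to invoke the verbs:

```python
import time

# Terminal Databricks run life-cycle states.
TERMINAL_STATES = {"TERMINATED", "SKIPPED", "INTERNAL_ERROR"}

def wait_for_run(call, run_id, poll_seconds=5.0, timeout=3600.0):
    """Poll databricks:get_run until the run reaches a terminal state.

    `call(verb, args)` is a hypothetical client function; swap in your
    own MCP or BYOA invocation.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        run = call("databricks:get_run", {"run_id": run_id})
        if run["state"]["life_cycle_state"] in TERMINAL_STATES:
            return run
        time.sleep(poll_seconds)
    raise TimeoutError(f"run {run_id} did not finish within {timeout}s")

# Usage with a stub client that finishes on the second poll:
states = iter(["RUNNING", "TERMINATED"])
def stub_call(verb, args):
    return {"state": {"life_cycle_state": next(states)}}

result = wait_for_run(stub_call, "42", poll_seconds=0.0)
print(result["state"]["life_cycle_state"])  # TERMINATED
```

In production you would also inspect the run's `result_state` on completion to distinguish success from failure.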
How it works

Every Databricks action, governed end-to-end.

Cerebral OS isn't a connector. It's the execution layer that sits in front of Databricks — adding governance, dry-run safety, and a full audit trail to every operation.

Governance first
Every verb carries a risk classification. High-risk writes require explicit approval gates before they execute in production.
Dry-run safe
Simulate any Databricks action before it touches production. See exactly what would happen before a single real call is made.
Immutable audit trail
Every Databricks action is logged — what ran, what changed, who approved it, when it happened. Full history on every verb, forever.
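The three guarantees above compose into one gate in front of every call: classify the verb's risk, honor dry-run, block unapproved high-risk writes, and record every attempt. This is an illustrative sketch of that flow, not Cerebral OS's actual implementation; the `RISK` table mirrors the catalog above, and the gate and audit shapes are assumptions.

```python
from datetime import datetime, timezone

# Risk classes mirror the action catalog above (subset shown).
RISK = {
    "databricks:list_runs": "low",
    "databricks:run_job": "medium",
    "databricks:delete_notebook": "high",
}

AUDIT_LOG = []  # append-only record: who, what, when, outcome

def execute(verb, args, actor, approved=False, dry_run=False, backend=None):
    """Gate a verb on its risk class, honor dry-run, and audit every attempt."""
    risk = RISK.get(verb, "high")  # unknown verbs default to the strictest class
    entry = {"verb": verb, "args": args, "actor": actor, "risk": risk,
             "dry_run": dry_run, "at": datetime.now(timezone.utc).isoformat()}
    if risk == "high" and not approved and not dry_run:
        entry["outcome"] = "blocked: approval required"
        AUDIT_LOG.append(entry)
        raise PermissionError(entry["outcome"])
    entry["outcome"] = "simulated" if dry_run else "executed"
    AUDIT_LOG.append(entry)
    return entry["outcome"] if dry_run else backend(verb, args)

# A high-risk write without approval is blocked; a dry-run of the same verb is not.
try:
    execute("databricks:delete_notebook", {"path": "/tmp/x"}, actor="agent-1")
except PermissionError:
    pass
print(execute("databricks:delete_notebook", {"path": "/tmp/x"},
              actor="agent-1", dry_run=True))  # simulated
print(len(AUDIT_LOG))  # 2
```

Note that the blocked attempt is still audited: the trail records refusals and simulations, not just successful executions.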
Databricks integration

Start free.
No credit card required.

Start free with 1,000 runs, no credit card required. Connect Databricks in minutes, dry-run every action before it touches production, and keep a full audit trail on everything.

Start free — 1,000 runs
Browse all integrations →