Resources and insights
Our Blog
Explore insights and practical tips on mastering the Databricks Data Intelligence Platform and the full spectrum of today's modern data ecosystem.
Most teams that move to Databricks get the hard part right. They migrate the processing engine, rebuild the transformation logic, and stand up Unity Catalog. Then they leave Azure Data Factory running in the background: connected to everything, owned by nobody, and quietly accumulating cost and complexity. That gap is what this entry addresses.
Explore More Content
ML & AI
5 Databricks Patterns That Look Fine Until They Aren't
Five common Databricks coding patterns — including undocumented API calls, manual SparkSession instantiation, and hardcoded Spark configs — that pass code review but fail silently in serverless environments or during platform migrations. For each anti-pattern, this post explains why it breaks and shows the correct native Databricks approach using DABs (Databricks Asset Bundles), the Databricks SDK, and dynamic job parameters.
Deploy Your Databricks Dashboards to Production
Stop deploying Databricks dashboards manually. Learn how to use Git, Asset Bundles, and CI/CD for reliable, reproducible dashboard deployments across environments.
5 Reasons You Should Be Using LakeFlow Jobs as Your Default Orchestrator
External orchestrators can account for nearly 30% of your Databricks job costs. Discover five compelling reasons why LakeFlow Jobs should be your default orchestration layer, from Infrastructure as Code to SQL-driven workflows.
DABs: Referencing Your Resources
Databricks bundle lookups failing with "does not exist" errors? Resource references solve timing issues and create strong dependencies. A complete guide with examples.
Managing Databricks CLI Versions in Your DAB Projects
Prevent Databricks deployment failures caused by CLI version conflicts. A step-by-step guide to version management in DAB projects, with CI/CD automation.