Data + AI Summit 2026 | Get hands-on with AI

Limited seats are available to join our revamped training program at Data + AI Summit 2026 in San Francisco, with early-bird pricing that gives you 50% off training if you register by April 30. Key highlights Big savings on training and certification...

  • 338 Views
  • 1 reply
  • 3 kudos
Tuesday
Take Control: Customer-Managed Keys for Lakebase Postgres

Lakebase Postgres now supports customer‑managed keys (CMK), so security teams can keep encryption keys in their own cloud KMS (AWS KMS, Azure Key Vault, or Google Cloud KMS) while Databricks runs Lakebase as a managed service. Key highlights Your key...

  • 110 Views
  • 0 replies
  • 0 kudos
Thursday
Database Branching in Postgres: Git-Style Workflows with Databricks Lakebase

Earlier this week, Databricks shared a deep dive into Lakebase Postgres database branching, a “Git‑style” way to work with your databases, meaning you can now create instant, isolated Postgres environments in seconds, without copying all the data or ...

  • 585 Views
  • 0 replies
  • 1 kudos
2 weeks ago
Ending the Developer Struggle: The Future of Fast App Development - Virtual Event (May 5–7)

Developers want to write code, not provision databases. Learn how to create and deploy apps directly on the Databricks Platform. Join us on May 5-7. In this webinar, code comes first. Watch experts from Databricks and Cursor demonstrate how to get ...

  • 1067 Views
  • 2 replies
  • 5 kudos
a month ago
The Lakebase Hub: Official Community Space for Lakebase Insights

Community Space for Lakebase Insights Are you building with Lakebase? If so, do you have a single source to stay ahead of every technical shift and architectural breakthrough? More importantly, do you know what practitioners are actually saying o...

  • 62 Views
  • 0 replies
  • 1 kudos
yesterday
Databricks Learning Festival (Instructor-Led): APAC & EMEA

We are back with instructor-led Databricks Learning Festivals for APAC & EMEA! Both Udemy and LinkedIn recognized Databricks as one of the top fastest-growing skills for 2026 — Udemy's The 100 Fastest-Growing Global Skills for 2026 and LinkedIn's T...

  • 2801 Views
  • 5 replies
  • 6 kudos
03-03-2026
Data + AI Summit 2026: Registration Now Open - Early Bird Pricing!

June 15–18, 2026. Register Now! Join thousands of data, analytics, and AI professionals at Data + AI Summit 2026, the world's largest conference for data, analytics, and AI, in San Francisco and virtually. Don't miss early-bird pricing! Early-bird discoun...

  • 5197 Views
  • 4 replies
  • 3 kudos
02-19-2026
Databricks Community Champion - April 2026 - Ashwin Varadharajan

Our Community Champion Program has always been about recognizing individuals who consistently show up to support others, share their expertise, and strengthen the Databricks Community. Alongside our customers, we’re equally proud of the incredible co...

  • 687 Views
  • 6 replies
  • 8 kudos
a week ago
Solution Accelerator Series | Propensity Scoring

Customers today expect brands to recognize what they care about, whether they’re reacting to a campaign or actively browsing online. With Databricks Solution Accelerators for propensity scoring on sales data and real‑time clickstream propensity in ec...

  • 147 Views
  • 0 replies
  • 0 kudos
Wednesday
🌟 Community Pulse: Your Weekly Roundup! April 13 – 19, 2026

The Weekly Digest • April 13–19. Community Pulse: the deepest dives and the most thoughtful insights from the people who make this space home. Spotlight Contributors, the members keeping the ecosystem moving this week: @szymon_dybczak @Sumit_...

  • 158 Views
  • 1 reply
  • 3 kudos
Thursday

Community Activity

DavidRomero
by Visitor
  • 23 Views
  • 0 replies
  • 0 kudos

Testing lightweight app performance on Linux clusters (any simple workflow or tools?)

I’m experimenting with performance testing in Linux-based environments and was wondering what simple workflows people use to benchmark lightweight apps or workloads. Most examples I find focus on large data pipelines, but I’m more interested in how sm...

Anantha_N
by New Contributor III
  • 5969 Views
  • 14 replies
  • 6 kudos

Resolved! How to get Lab access provided by Vocareum

Hi Team, I have enrolled in the AI/BI for Data Analyst course. Could you please advise how to get access to the lab to follow the demo instructions in the course? Thanks

Latest Reply
Sri77
New Contributor
  • 6 kudos

@Advika I understand that we need either ILT or a Databricks lab subscription. Could you just provide access for the one I boxed so I don't have to take the full-year subscription? If required, I can upgrade to the full subscription at ...

13 More Replies
CarlosR
by Databricks Employee
  • 426 Views
  • 2 replies
  • 1 kudos

From 150 Lines of MERGE INTO to 7 Lines of SQL: AUTO CDC Comes to Databricks SQL

Every data warehouse has the same foundational problem: keeping dimension tables in sync with operational systems. Customer records change. Orders get updated. Accounts get closed. Getting those changes into your warehouse -- correctly, reliably, and...
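For readers who haven't seen the new syntax, here is a rough sketch of the kind of statement the post is comparing against a long hand-written MERGE. The table and column names are invented, and the exact clause order of AUTO CDC should be checked against the current Databricks SQL docs; this is written from memory of the APPLY CHANGES INTO / AUTO CDC INTO syntax, not taken from the post itself.

```python
# Hypothetical illustration of the "7 lines of SQL" the post refers to.
# silver.customers, bronze.customers_cdc, customer_id, and ts are invented
# names; verify the exact AUTO CDC syntax against the Databricks docs.
AUTO_CDC_SQL = """
CREATE FLOW customers_cdc_flow AS
AUTO CDC INTO silver.customers
FROM STREAM(bronze.customers_cdc)
KEYS (customer_id)
SEQUENCE BY ts
STORED AS SCD TYPE 2;
"""
```

The engine handles ordering, updates, and SCD history from those declarations, which is what replaces the hand-written MERGE logic.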

Latest Reply
Surya2
New Contributor II
  • 1 kudos

Thank you for such an informative post on one of my favourite topics, MERGE INTO. I completely agree that AUTO CDC significantly simplifies handling both SCD Type 1 and Type 2 patterns. The side-by-side comparison between a manual MERGE INTO implementa...

1 More Replies
greengil
by Contributor
  • 380 Views
  • 11 replies
  • 0 kudos

Delta Jira data import to Databricks

We need to import a large amount of Jira data into Databricks, and should import only the delta changes. What's the best approach: using the Fivetran Jira connector or developing our own Python scripts/pipeline code? Thanks.

Latest Reply
abhi_dabhi
Databricks Partner
  • 0 kudos

Hi @greengil, good question. I went through something similar recently, so sharing what I found. My instinct was also to build it in Python, but once I dug in, the "just write a script" path hides a lot of pain: deletions are invisible. Jira's RES...

10 More Replies
MikeGo
by Contributor III
  • 140 Views
  • 5 replies
  • 1 kudos

Resolved! Table update trigger and File Arrival trigger latency

Hi team, When using a table update or file arrival trigger, what latency can I expect? Does Databricks poll the source on some schedule? If yes, is the poll free? Thanks

Latest Reply
MikeGo
Contributor III
  • 1 kudos

Hi @Ashwin_DSA, Thanks for the input. This is very helpful. For the last question, we thought about creating another table as staging, used specifically as the trigger. Any time the source has changes, we will update the staging table too. However ...

4 More Replies
maikel
by Contributor II
  • 44 Views
  • 1 reply
  • 0 kudos

Uploading file to volume and start ingestion job

Hello Community! I am writing to you with my idea for the data ingestion job we have to implement in our project. The data we have is in CSV format and, depending on the case, differs a little. Before uploading we pivot the CSV file...

Latest Reply
Ashwin_DSA
Databricks Employee
  • 0 kudos

Hi @maikel, You don't have to build a custom solution for this. Databricks now has native components that align very well with what you want. If you want the job to start as soon as new files land in a volume, the recommended approach is to use file-...
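The file-arrival approach the reply recommends is configured on the job itself. As a rough sketch, the Jobs API accepts a `file_arrival` trigger block like the fragment below; the volume path and timing value are hypothetical, and this is only the trigger portion of a full job specification:

```python
# Hypothetical Jobs API trigger settings: start the job when new files
# land in a Unity Catalog volume. The path and timings are invented
# examples; see the Databricks Jobs API docs for the full job spec.
JOB_TRIGGER = {
    "trigger": {
        "pause_status": "UNPAUSED",
        "file_arrival": {
            "url": "/Volumes/main/default/landing/",
            # Debounce: wait at least this long between triggered runs.
            "min_time_between_triggers_seconds": 60,
        },
    }
}
```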

SantiNath_Dey
by New Contributor III
  • 186 Views
  • 8 replies
  • 0 kudos

Handling New Columns Using Auto Loader Rescue Mode: How to Get the Newly Added Column

Schema evolution (rescue mode) is being triggered for a complex JSON file. However, when a new column is added in the source, it does not appear in the target table automatically for multi-level nested JSON. How can we add newly added nested columns ...

Latest Reply
SantiNath_Dey
New Contributor III
  • 0 kudos

The solution using the option .option("cloudFiles.schemaEvolutionMode", "addNewColumns") is working as expected. With this configuration, when a new column appears in the source, the stream fails with an UnknownFieldException. However, we would like ...
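A minimal sketch of the configuration being discussed, assuming an Auto Loader JSON source (the volume paths are hypothetical, and the function must run inside a Databricks notebook or job, where `spark` and the `cloudFiles` source exist):

```python
# Sketch of Auto Loader with automatic schema evolution. Paths are
# hypothetical examples; run start_stream(spark) on Databricks only.
AUTOLOADER_OPTIONS = {
    "cloudFiles.format": "json",
    "cloudFiles.schemaLocation": "/Volumes/main/default/_schemas/events",
    # On an unknown field the stream fails once, records the new schema,
    # and picks the new column up when the stream restarts:
    "cloudFiles.schemaEvolutionMode": "addNewColumns",
}

def start_stream(spark, source_path="/Volumes/main/default/raw/events/"):
    reader = spark.readStream.format("cloudFiles")
    for key, value in AUTOLOADER_OPTIONS.items():
        reader = reader.option(key, value)
    return reader.load(source_path)
```

The fail-then-restart behaviour described in the reply is expected with this mode: the `UnknownFieldException` is how Auto Loader signals that the tracked schema was updated, so the job needs a retry policy for the table to pick up new columns automatically.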

7 More Replies
_Lilith
by Visitor
  • 87 Views
  • 6 replies
  • 0 kudos

Resolved! Lakebase login via REST for a service principal

Hi all, I’m trying to set up REST-based communication between my Lakebase and a REST client. I’m following the documentation “Connecting to Lakebase via REST using a service principal” to obtain a workspace-level token. After that, I use the Lakebase D...

Latest Reply
_Lilith
Visitor
  • 0 kudos

I created a new Lakebase project to retrace all my steps.
0. I reused my service principal on the workspace
1. Installed the Databricks authentication extension: CREATE EXTENSION IF NOT EXISTS databricks_auth;
2. Added the lakehouse service principal to the...

5 More Replies
Tushar_Parekar
by Databricks Employee
  • 76 Views
  • 0 replies
  • 1 kudos

Announcing the Public Preview of Lakeflow Designer

Databricks has launched the Public Preview of Lakeflow Designer, a visual, no‑code, AI‑native way for analysts and business users to prepare and analyze data directly on Databricks without leaving the governed environment of Unity Catalog. Key highli...

murtadha_s
by Databricks Partner
  • 48 Views
  • 1 reply
  • 0 kudos

What is the maximum size to read using dbutils.fs.head?

Hi, What is the maximum size to read using dbutils.fs.head()? Is there a limit? AI says 10 MB, but I couldn't find useful info in the documentation, while when I tried it in practice it was only limited by the driver memory. Thanks in advance.

Latest Reply
DivyaandData
Databricks Employee
  • 0 kudos

dbutils.fs.head() itself does not have a documented hard cap like 10 MB. From the official dbutils reference, the signature is: dbutils.fs.head(file: String, max_bytes: int = 65536): String “Returns up to the specified maximum number of bytes in t...
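The documented default of 65536 bytes can be illustrated with a small local analog (plain Python reading a local file, not the real dbutils, which also handles DBFS/volume paths):

```python
def head(path: str, max_bytes: int = 65536) -> str:
    """Local analog of dbutils.fs.head: return up to max_bytes bytes as text.

    Like the documented signature, only the first max_bytes bytes are read,
    so memory use is bounded by the argument, not the file size.
    """
    with open(path, "rb") as f:
        return f.read(max_bytes).decode("utf-8", errors="replace")
```

So the practical limit observed in the question comes from how large a `max_bytes` you pass (and driver memory), not from a fixed cap in the function itself.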

DavidKxx
by Contributor
  • 56 Views
  • 2 replies
  • 1 kudos

Resolved! Data in Unity Catalog that can't be previewed

This is a small deficiency, but a fix would be nice to have. For a long time now, the Sample Data previewer in the Unity Catalog explorer has been unable to show tables that contain a certain kind of column. Instead of showing sample rows of the tabl...

Latest Reply
DavidKxx
Contributor
  • 1 kudos

Yes, my vector space is commonly of dimension 4000 or 8000. I don't write any dense vectors to tables; I can't speak to what happens when previewing that type. Thanks for taking up the issue!

1 More Replies
Tushar_Parekar
by Databricks Employee
  • 83 Views
  • 0 replies
  • 1 kudos

Learning Series | DevOps Essentials for Data Engineering

Databricks Academy offers the free DevOps Essentials for Data Engineering course, designed to help data engineers apply software engineering best practices and DevOps principles on the Databricks Data Intelligence Platform. Instead of going deep into...

vidya_kothavale
by Contributor
  • 121 Views
  • 6 replies
  • 7 kudos

Resolved! Managed Delta table: time travel blocked after automatic VACUUM

Hi, On a managed Delta table I get: SELECT * FROM abc VERSION AS OF 25; Error: DELTA_UNSUPPORTED_TIME_TRAVEL_BEYOND_DELETED_FILE_RETENTION_DURATION Cannot time travel beyond delta.deletedFileRetentionDuration (168 HOURS). Audit logs show VACUUM START/END...

Latest Reply
balajij8
Contributor III
  • 7 kudos

VACUUM will never delete files on the latest version even if Version 10 was not accessed or modified as it represents the current state of the table. VACUUM targets files that are not referenced by the recent version. It identifies files that were re...
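When older versions need to stay time-travelable for longer than the 168-hour default in the error, the retention windows are Delta table properties. A sketch, with the 30-day intervals chosen arbitrarily as an example (longer retention means more files kept on storage):

```python
# Hypothetical: lengthen both Delta retention windows so older snapshots
# stay time-travelable. The table name and intervals are examples only.
RETENTION_SQL = """
ALTER TABLE abc SET TBLPROPERTIES (
  'delta.deletedFileRetentionDuration' = 'interval 30 days',
  'delta.logRetentionDuration'         = 'interval 30 days'
)
"""
```

Both properties matter: the deleted-file retention controls how long VACUUM keeps unreferenced data files, and the log retention controls how long the transaction log entries for old versions survive.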

5 More Replies
Muralidharan_A
by New Contributor
  • 53 Views
  • 1 reply
  • 0 kudos

Supporting File Not Recognized in DLT Pipeline

We have a DLT pipeline which creates some tables based on transformations, and those transformations are kept inside functions in a separate file; those files are used via import. We are deploying those changes...

Latest Reply
Ashwin_DSA
Databricks Employee
  • 0 kudos

Hi @Muralidharan_A, To your question about whether retry_on_failure does more than a manual refresh, the answer is yes! retry_on_failure (along with pipelines.numUpdateRetryAttempts and pipelines.maxFlowRetryAttempts) performs classified, timed retri...

Welcome to the Databricks Community!

Once you are logged in, you will be ready to post content, ask questions, participate in discussions, earn badges, and more.

Spend a few minutes exploring Get Started Resources, Learning Paths, Certifications, and Platform Discussions.

Join Learning Events here in the Community.

Connect with peers through Databricks User Groups and learn more about Community Events happening near you. We’re excited to see you get involved.
