A Guide to Working with Time Travel in Spark Delta Lake: A Deep Dive

Time travel is a powerful feature provided by Delta Lake on Apache Spark. It allows users to access and revert to older versions of data, which simplifies data auditing, reproducibility, and rollbacks. In this blog post, we will explore how to leverage this feature and guide you through working with time travel in Delta Lake.

1. Prerequisites

Before starting, ensure Apache Spark is installed, as Delta Lake is a storage layer that runs on top of it. Delta Lake is compatible with Spark 3.0 or later. You'll also need to have a basic understanding of Spark and Delta Lake.
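
If you are running locally, one convenient way to get a Delta-enabled session is the delta-spark package from PyPI (pip install delta-spark). The snippet below is a minimal sketch under that assumption; adjust it to however you normally launch Spark:

from pyspark.sql import SparkSession
from delta import configure_spark_with_delta_pip

# Register Delta Lake's SQL extension and catalog implementation
builder = (
    SparkSession.builder.appName("delta-time-travel-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)

# configure_spark_with_delta_pip attaches the matching Delta Lake jars
spark = configure_spark_with_delta_pip(builder).getOrCreate()

The rest of the examples assume this spark session (or one provided by pyspark or spark-shell) is available.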

2. Understanding Time Travel in Delta Lake

Delta Lake's time travel feature lets you access older versions of the data. Every write creates a new version (snapshot) of the table, so you can query a consistent snapshot of your data as of a specific version or point in time. This ability is useful in a variety of scenarios such as:

  • Data Audit : Easily access previous versions of the data for auditing purposes.
  • Reproducibility : Re-run analyses or reports on the same data for consistency.
  • Rollbacks : Revert to an older version of the data in case of erroneous writes.

3. Writing and Reading Delta Table

First, let's write a simple Delta table:

# Create a DataFrame with ids 0-4
data = spark.range(0, 5)

# Write the DataFrame as a Delta table
data.write.format("delta").save("/tmp/delta-table")

Next, let's append some data:

# Create a new DataFrame 
newData = spark.range(5,10) 

# Append new data to the existing Delta table 
newData.write.format("delta").mode("append").save("/tmp/delta-table") 

Reading from a Delta table is straightforward:

df = spark.read.format("delta").load("/tmp/delta-table")
df.show()

4. Exploring Time Travel

With each write operation, Delta Lake generates a new version of the table. You can view the history of the table with the history operation:

from delta.tables import DeltaTable 
        
deltaTable = DeltaTable.forPath(spark, "/tmp/delta-table") 
history = deltaTable.history()
history.show()

The history operation returns a DataFrame with one row per version of the table, including the version number, the timestamp of the operation, the user who performed it, and other operation details.
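
If the full history output is too noisy, you can keep just the fields that matter for time travel; the column names below follow the standard history schema:

# Keep only the fields needed to pick a version to travel to
deltaTable.history() \
    .select("version", "timestamp", "operation", "operationParameters") \
    .show(truncate=False)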

To read an older version of the table, you use the versionAsOf option:

df = spark.read.format("delta").option("versionAsOf", 0).load("/tmp/delta-table")
df.show()

This reads version 0 of the table, which is the initial version before the new data was appended.

You can also access data as of a specific timestamp:

df = spark.read.format("delta").option("timestampAsOf", "2023-06-16 10:00:00").load("/tmp/delta-table")
df.show()
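
The timestamp above is only an example; a timestamp only resolves if it falls within the table's retained history. One way to avoid hard-coding it is to take a commit timestamp straight from the history, as in this sketch (reusing the deltaTable handle from above):

# Use the timestamp of the very first commit instead of a hard-coded value
first_commit = deltaTable.history().orderBy("version").first()

df = spark.read.format("delta") \
    .option("timestampAsOf", str(first_commit["timestamp"])) \
    .load("/tmp/delta-table")
df.show()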

5. Rollbacks

In case of a mistake, you can roll back by reading an older version of the table and overwriting the current table with it:

# Get old version 
oldData = spark.read.format("delta").option("versionAsOf", 0).load("/tmp/delta-table") 

# Overwrite current table with the old version 
oldData.write.format("delta").mode("overwrite").save("/tmp/delta-table") 

This code effectively rolls back the table to version 0.
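
Recent Delta Lake releases (roughly 1.2 and later) also expose a restore API on DeltaTable that rolls the table back in place without routing the data through a DataFrame; check that your version supports it before relying on it:

# Roll the table back to version 0 in place
deltaTable.restoreToVersion(0)

# Or roll back to a point in time
# deltaTable.restoreToTimestamp("2023-06-16 10:00:00")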

6. Time Travel with SQL

In addition to using the DataFrame API, you can also use SQL queries to leverage time travel capabilities. To do this, you first need to register your Delta table with Spark's SQL catalog:

# Register Delta table 
spark.sql("CREATE TABLE deltaTable USING DELTA LOCATION '/tmp/delta-table'") 

Then you can run SQL queries that refer to specific versions:

# Query an old version of the table
spark.sql("SELECT * FROM deltaTable TIMESTAMP AS OF '2023-06-16 10:00:00'").show()

7. Restoring Deleted Data

With time travel, you can even restore data that was accidentally deleted:

# Delete some data 
deltaTable.delete("id < 3") 

# Oops! Let's get that back 
spark.read.format("delta").option("versionAsOf", 0).load("/tmp/delta-table")\ 
    .write.format("delta").mode("overwrite").save("/tmp/delta-table") 

In this example, we delete data where the id is less than 3. Realizing the mistake, we revert to the original version of the table, effectively restoring the deleted data.
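
A quick sanity check is to read the table again after the restore; since we overwrote it with version 0, the deleted rows (ids 0 through 2) should be back:

# The table is back at its initial contents (ids 0-4), so the deleted rows reappear
restored = spark.read.format("delta").load("/tmp/delta-table")
restored.filter("id < 3").show()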

8. Replicating Real-Time Reports

Time travel also enables you to reproduce real-time reports as they were at a specific point in time. This is particularly useful when diagnosing issues in production systems.

# Generate the report as it was on June 16, 2023 
df = spark.read.format("delta").option("timestampAsOf", "2023-06-16 10:00:00").load("/tmp/delta-table") 
report = df.groupBy("id").count() 
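
To see how the historical report differs from today's data, you can compute the same aggregate over the current version and compare the two (a small sketch):

# Same aggregate over the current version of the table
current = spark.read.format("delta").load("/tmp/delta-table")
current_report = current.groupBy("id").count()

# Compare the historical report with the current one
report.show()
current_report.show()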

9. Simplifying Data Compliance

For industries subject to regulations that require maintaining historical data, time travel simplifies the process of data compliance. Delta Lake keeps a history of table versions (subject to the table's log and file retention settings), making audits straightforward.

# Auditing data changes
deltaTable.history().show()
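
For an audit over a specific period, the history DataFrame can be filtered like any other DataFrame; the date range below is purely illustrative:

from pyspark.sql.functions import col

# Keep only the commits made during the audit window
audit_window = deltaTable.history() \
    .filter(col("timestamp") >= "2023-06-01") \
    .filter(col("timestamp") < "2023-07-01") \
    .select("version", "timestamp", "operation", "userName")
audit_window.show(truncate=False)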

Conclusion

The time travel feature of Delta Lake brings immense power and flexibility to your data pipelines. It facilitates data auditing, reproducible experiments, rollbacks, data compliance, and much more. With this feature, data engineers and scientists can confidently experiment and develop with their data, knowing they can always revert to a previous state if needed. Through this guide, we hope you've gained a solid understanding of Delta Lake's time travel. But don't stop here! Continue exploring the official Delta Lake documentation for more insights and advanced use cases.