Saved delta file reads as a df – is it still part of Delta Lake?

I have trouble understanding the concept of Delta Lake. Example:

  1. I read a parquet file:

    taxi_df = (spark.read.format("parquet").option("header", "true").load("dbfs:/mnt/randomcontainer/taxirides.parquet"))

  2. Then I save it using saveAsTable:

    taxi_df.write.format("delta").mode("overwrite").saveAsTable("taxi_managed_table")

  3. I read back the managed table I just saved:

    taxi_read_from_managed_table = (spark.read.format("delta").option("header", "true").load("dbfs:/user/hive/warehouse/taxi_managed_table/"))

  4. ... and when I check the type, it shows "pyspark.sql.dataframe.DataFrame", not DeltaTable:

    type(taxi_read_from_managed_table) # returns pyspark.sql.dataframe.DataFrame

  5. Only after I convert it explicitly using the following command do I get the type DeltaTable:

    from delta.tables import DeltaTable

    taxi_delta_table = DeltaTable.convertToDelta(spark, "parquet.`dbfs:/user/hive/warehouse/taxismallmanagedtable/`")

    type(taxi_delta_table)  # returns delta.tables.DeltaTable


Does that mean that the table in step 4 is not a Delta table and won't provide the automatic optimizations offered by Delta Lake?

How do you establish whether something is part of Delta Lake or not?

I understand that Delta Live Tables only work with delta.tables.DeltaTable, is that correct?

Answer 1

Score: 1

When you use spark.read...load(), it returns Spark's DataFrame object, which you can use to process the data. Under the hood, this DataFrame uses the Delta Lake table. The DataFrame abstracts the data source, so you can work with different sources and apply the same operations.
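For illustration, a minimal sketch of that abstraction, reusing the paths from the question; the same DataFrame calls work on both sources:

    # Both reads return a pyspark.sql.dataframe.DataFrame; only the source format differs
    parquet_df = spark.read.format("parquet").load("dbfs:/mnt/randomcontainer/taxirides.parquet")
    delta_df = spark.read.format("delta").load("dbfs:/user/hive/warehouse/taxi_managed_table/")

    # Identical DataFrame operations apply to both
    parquet_df.printSchema()
    delta_df.printSchema()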

On the other hand, DeltaTable is a specific object that lets you apply Delta-specific operations. You don't even need to perform convertToDelta to get it; just use the DeltaTable.forPath or DeltaTable.forName functions to obtain an instance.
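For example, a minimal sketch using the managed table from the question (table name and warehouse path assumed from the steps above):

    from delta.tables import DeltaTable

    # Obtain a DeltaTable handle by table name...
    dt = DeltaTable.forName(spark, "taxi_managed_table")
    # ...or by storage path:
    # dt = DeltaTable.forPath(spark, "dbfs:/user/hive/warehouse/taxi_managed_table/")

    type(dt)             # delta.tables.DeltaTable
    dt.history().show()  # Delta-specific operation: the table's commit history
    df = dt.toDF()       # convert back to a plain DataFrame when needed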

P.S. If you saved the data with .saveAsTable(my_name), then you don't need to use .load; just use spark.read.table(my_name).
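For example, reading the managed table from the question back by name:

    taxi_read_from_managed_table = spark.read.table("taxi_managed_table")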
