Fetch data from multiple folders into a dataframe for given dates


Question


I have a storage container where the data is in multiple folders with a date appended at the end (below):

    "dbfs:/mnt/input/raw/extract/pro_2023-01-01/parquet files here"
    "dbfs:/mnt/input/raw/extract/pro_2023-01-02/parquet files here"
    "dbfs:/mnt/input/raw/extract/pro_2023-01-03/parquet files here"
    "dbfs:/mnt/input/raw/extract/pro_2023-01-04/parquet files here"
    "dbfs:/mnt/input/raw/extract/pro_2023-01-05/parquet files here"

It works fine with good performance if I read the data from a single folder. Example:

df = spark.read.parquet("dbfs:/mnt/input/raw/extract/pro_2023-01-05/")

But I sometimes need to load the data for multiple days into a dataframe (mostly on a weekly basis). For that I pull all the data and then use a temp view to filter on FolderDate (one of the columns in the parquet files). That works, but it takes forever to run because it scans all the folders before doing the transformations. Example:

   df = spark.read.parquet("dbfs:/mnt/input/raw/extract/pro_2023-*/")

   df = ...  # a few transformations and new columns are added to df before creating the temp view below

   df.createOrReplaceTempView("Alldata")

Then run Spark SQL:

   %sql select * from Alldata where cast(FolderDate as date) BETWEEN '2023-01-01' AND '2023-01-07'
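
For reference, the same filter can be written with the DataFrame API instead of a temp view; a minimal sketch, assuming FolderDate is a string column on df:

    from pyspark.sql import functions as F

    # cast the string column to a date and keep only the requested week
    weekly_df = df.filter(
        F.col("FolderDate").cast("date").between("2023-01-01", "2023-01-07")
    )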

Is there a way I can pull the data for just the needed dates into df at the very first step? Something like:

df = spark.read.parquet("dbfs:/mnt/input/raw/extract/BETWEEN(pro_2023-01-01 and pro_2023-01-07)")

Any help is appreciated.

Answer 1

Score: 0

Try using a **glob pattern** for this case (Spark paths support Hadoop-style globs, including character ranges).

To read dates 01 through 07, use 0[1-7]; Spark will read the 01, 02, 03, 04, 05, 06, and 07 folders into the dataframe.

Example:

spark.read.parquet("dbfs:/mnt/input/raw/extract/pro_2023-01-0[1-7]/")
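
If the wanted dates do not fit one character range (for example a week that spans a month boundary), spark.read.parquet also accepts several paths at once, so a hand-built list works too; a minimal sketch with illustrative paths:

# pass each dated folder explicitly; Spark scans only these paths
df = spark.read.parquet(
    "dbfs:/mnt/input/raw/extract/pro_2023-01-01/",
    "dbfs:/mnt/input/raw/extract/pro_2023-01-02/",
    "dbfs:/mnt/input/raw/extract/pro_2023-01-03/",
)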

UPDATE:

Define an **empty dataframe** and unionAll the data from each folder onto it in every iteration.

# define an empty dataframe with the schema of the parquet files
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

schema = StructType([
    StructField("k", StringType(), True), StructField("v", IntegerType(), False)
])

df = spark.createDataFrame([], schema)

# read each dated folder and union it onto the accumulated dataframe
# (path built from the question's folder layout)
list_directories = ['2023-01-01', '2023-01-02']
for i in list_directories:
    df1 = spark.read.parquet(f"dbfs:/mnt/input/raw/extract/pro_{i}/")
    df = df.unionAll(df1)
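
To cover a whole week like the one in the question, the folder dates can be generated instead of typed out; a small sketch using Python's datetime, with the start and end dates taken from the question:

# build 'YYYY-MM-DD' strings for every day in the requested range
from datetime import date, timedelta

start, end = date(2023, 1, 1), date(2023, 1, 7)
list_directories = [(start + timedelta(days=n)).isoformat()
                    for n in range((end - start).days + 1)]
# -> ['2023-01-01', '2023-01-02', ..., '2023-01-07']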
