How to fill up missing rows in a PySpark Dataframe with a combination of existing values and an input list of possible values?

I have the following PySpark DataFrame:

Product    Location    Date        Stock
Apple      L1          2023-01-01  100
Apple      L2          2023-01-15  30
Banana     L1          2023-01-01  10
Banana     L1          2023-01-08  20
Orange     L2          2023-01-01  50

Next to this, I have a list of all possible values for the Date column:

dates = ["2023-01-01", "2023-01-08", "2023-01-15"]

Now I want the DataFrame to contain a row for each existing combination of Product and Location, for every date in the dates list; the Stock value for each newly added row should be 0.

The output DataFrame would look like this:

Product    Location    Date        Stock
Apple      L1          2023-01-01  100
Apple      L1          2023-01-08  0
Apple      L1          2023-01-15  0
Apple      L2          2023-01-15  30
Apple      L2          2023-01-01  0
Apple      L2          2023-01-08  0
Banana     L1          2023-01-01  10
Banana     L1          2023-01-08  20
Banana     L1          2023-01-15  0
Orange     L2          2023-01-01  50
Orange     L2          2023-01-08  0
Orange     L2          2023-01-15  0

Notes:

  • Some combinations of Product and Location do not occur in the data and thus should not be added (e.g. (Banana, L2) or (Orange, L1))
  • The possible values in dates are all Sundays, but that does not add any extra complexity to the algorithm, since the dates are given as a fixed list

I think Window functions in PySpark could be useful here, but I could not find any example in which missing rows are filled in from a given list of values.

Thanks in advance!

Answer 1

Score: 2

The easiest way to achieve this is to create a dataframe with all possible (product, location, date) rows and then `join` the `stock` field back onto it.

Here's an example:

```py
from pyspark.sql import functions as func

# Distinct (product, location) pairs, each expanded with every possible date,
# then left-joined back to the original data; missing stock values become 0.
data_sdf. \
    select('product', 'location'). \
    dropDuplicates(). \
    withColumn('dates', func.array(*[func.lit(k) for k in dates])). \
    selectExpr('product', 'location', 'explode(dates) as date'). \
    join(data_sdf, ['product', 'location', 'date'], 'left'). \
    fillna(0, subset=['stock']). \
    orderBy('product', 'location', 'date'). \
    show()

# +-------+--------+----------+-----+
# |product|location|      date|stock|
# +-------+--------+----------+-----+
# |  Apple|      L1|2023-01-01|  100|
# |  Apple|      L1|2023-01-08|    0|
# |  Apple|      L1|2023-01-15|    0|
# |  Apple|      L2|2023-01-01|    0|
# |  Apple|      L2|2023-01-08|    0|
# |  Apple|      L2|2023-01-15|   30|
# | Banana|      L1|2023-01-01|   10|
# | Banana|      L1|2023-01-08|   20|
# | Banana|      L1|2023-01-15|    0|
# | Orange|      L2|2023-01-01|   50|
# | Orange|      L2|2023-01-08|    0|
# | Orange|      L2|2023-01-15|    0|
# +-------+--------+----------+-----+
```
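For readers without a Spark session at hand, the same fill logic can be sketched in plain Python. This is only an illustration of the technique (distinct pairs, cross with all dates, left-join stock, fill 0); the names `rows`, `pairs`, `stock`, and `filled` are introduced here and do not come from the answer above.

```python
from itertools import product as cartesian

# The question's example data as (product, location, date, stock) tuples.
rows = [
    ("Apple",  "L1", "2023-01-01", 100),
    ("Apple",  "L2", "2023-01-15", 30),
    ("Banana", "L1", "2023-01-01", 10),
    ("Banana", "L1", "2023-01-08", 20),
    ("Orange", "L2", "2023-01-01", 50),
]
dates = ["2023-01-01", "2023-01-08", "2023-01-15"]

# 1. Distinct (product, location) pairs -- the select(...).dropDuplicates() step.
pairs = sorted({(p, l) for p, l, _, _ in rows})

# 2. Cross every pair with every date (the explode step), then look up the
#    existing stock (the left join), defaulting missing values to 0 (fillna).
stock = {(p, l, d): s for p, l, d, s in rows}
filled = [
    (p, l, d, stock.get((p, l, d), 0))
    for (p, l), d in cartesian(pairs, dates)
]
```

Note that only pairs that already occur in the data are expanded, so (Banana, L2) and (Orange, L1) never appear, matching the question's first note.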

huangapple
  • Published on 2023-04-17 16:37:03
  • Please retain this link when reposting: https://go.coder-hub.com/76033191.html