Complex Filtering Operations in PySpark

Question

Currently I'm performing calculations on a database that contains information on how loans are repaid by borrowers. It is a huge dataset, so I'm using PySpark, and I've just run into the question of how to express an advanced filtering operation.

My dataframe looks like this:

```plaintext
Name    ID     ContractDate   LoanSum   ClosingDate
A       ID1    2022-10-10     10        2022-10-15
A       ID1    2022-10-15     15        null
A       ID1    2022-10-30     20        2022-11-10
B       ID2    2022-11-11     15        2022-10-14
B       ID2    2022-12-10     30        null
B       ID2    2022-12-12     35        2022-12-14
C       ID3    2022-12-19     19        2022-11-10
D       ID4    2022-12-10     10        null
D       ID4    2022-12-12     40        2022-11-29
```

My goal is to create a dataframe that contains all loans issued to specific borrowers (grouped by unique ID) where the first loan is not yet closed but a second one has already been issued to the borrower, and the difference between the loan sums is less than or equal to 5.

In other words, I have to obtain the following table (expected result):

```plaintext
Name    ID     ContractDate   LoanSum   ClosingDate
A       ID1    2022-10-15     15        null
A       ID1    2022-10-30     20        2022-11-10
B       ID2    2022-12-10     30        null
B       ID2    2022-12-12     35        2022-12-14
```

Thank you in advance


<details>
<summary>英文:</summary>

Currently I&#39;m performing calculations on a database that contains information on how loans are paid by borrowers. It is a huge dataset so I&#39;m using PySpark and have just faced with an issue of how to use advanced filtering operations.

My dataframe looks like this:

Name ID ContractDate LoanSum ClosingDate
A ID1 2022-10-10 10 2022-10-15
A ID1 2022-10-15 15 null
A ID1 2022-10-30 20 2022-11-10
B ID2 2022-11-11 15 2022-10-14
B ID2 2022-12-10 30 null
B ID2 2022-12-12 35 2022-12-14
C ID3 2022-12-19 19 2022-11-10
D ID4 2022-12-10 10 null
D ID4 2022-12-12 40 2022-11-29


My goal is to create a dataframe that contains all loans issued to specific borrowers (group by unique ID) where the the first loan is not yet closed, but the second is already given to a borrower and the difference between loansums is less or equal then 5.

In other words, I have to obtain the following table (expected result):

Name ID ContractDate LoanSum Status
A ID1 2022-10-15 15 null
A ID1 2022-10-30 20 2022-11-10
B ID3 2022-12-10 30 null
B ID3 2022-12-12 35 2022-12-14

Thank you in advance

</details>


# Answer 1
**Score**: 1

You can use PySpark [window functions](https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.Window.html) partitioned by the unique ID. To check whether the next loan is already closed, use the [lead function](https://spark.apache.org/docs/3.1.3/api/python/reference/api/pyspark.sql.functions.lead.html); similarly, lag fetches the previous row's value within the same partition.

In this example, I use lead and lag together to make sure that both rows of a qualifying pair meet the criteria. This makes it easy to build check columns and keep only the rows that pass both checks.
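For intuition, here is a minimal sketch of what lead and lag return over a partition; the `demo` DataFrame and its toy data are hypothetical, made up purely for illustration:

```python
from pyspark.sql import SparkSession, Window
import pyspark.sql.functions as f

spark = SparkSession.builder.appName('leadlag-demo').getOrCreate()

# Hypothetical toy data: one borrower with three sequential loans.
demo = spark.createDataFrame(
    [('ID1', 1, 10), ('ID1', 2, 15), ('ID1', 3, 20)],
    ['ID', 'seq', 'LoanSum']
)
w = Window.partitionBy('ID').orderBy('seq')

# lead looks one row ahead within the partition, lag one row back;
# rows with no such neighbour get null.
demo.select(
    'ID', 'seq', 'LoanSum',
    f.lead('LoanSum').over(w).alias('next_LoanSum'),
    f.lag('LoanSum').over(w).alias('prev_LoanSum'),
).show()
# seq=1 -> next=15, prev=null; seq=2 -> next=20, prev=10; seq=3 -> next=null, prev=15
```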

Check this solution.

```python
from pyspark.sql import SparkSession, Window
import pyspark.sql.functions as f

# Note: missing closing dates are stored as the literal string 'null',
# so the checks below compare against that string, not a real SQL NULL.
data = [
    ('A', 'ID1', '2022-10-10', 10, '2022-10-15'),
    ('A', 'ID1', '2022-10-15', 15, 'null'),
    ('A', 'ID1', '2022-10-30', 20, '2022-11-10'),
    ('B', 'ID2', '2022-11-11', 15, '2022-10-14'),
    ('B', 'ID2', '2022-12-10', 30, 'null'),
    ('B', 'ID2', '2022-12-12', 35, '2022-12-14'),
    ('C', 'ID3', '2022-12-19', 19, '2022-11-10'),
    ('D', 'ID4', '2022-12-10', 10, 'null'),
    ('D', 'ID4', '2022-12-12', 40, '2022-11-29')
]

cols = ['Name', 'ID', 'ContractDate', 'LoanSum', 'ClosingDate']
spark = SparkSession.builder.appName('test').getOrCreate()
df = spark.createDataFrame(data, cols)

print('Input---->')
df.show()

# One window per borrower, ordered by contract date.
w = Window.partitionBy('ID').orderBy('ContractDate')

next_record = f.lead(f.col('ClosingDate')).over(w).alias('next_ClosingDate')
prev_record = f.lag(f.col('ClosingDate')).over(w).alias('prev_ClosingDate')

next_loan_sum = f.lead(f.col('LoanSum')).over(w).alias('next_LoanSum')
prev_loan_sum = f.lag(f.col('LoanSum')).over(w).alias('prev_LoanSum')

print('Output---->')
df.withColumn(
    'check',
    # Keep a row if it is an open loan followed by a closed one,
    # or a closed loan preceded by an open one.
    f.when(
        ((f.col('ClosingDate') == 'null') & (next_record != 'null')) |
        ((f.col('ClosingDate') != 'null') & (prev_record == 'null')), 1
    ).otherwise(0)
).withColumn(
    'loan_sum_check',
    # The loan sums of the pair must differ by at most 5.
    f.when(((next_loan_sum - f.col('LoanSum')) <= 5) | ((f.col('LoanSum') - prev_loan_sum) <= 5), 1)
    .otherwise(0)
).filter('check=1 and loan_sum_check=1').drop('check', 'loan_sum_check').show()
```

Results output:

```plaintext
Input---->
+----+---+------------+-------+-----------+
|Name| ID|ContractDate|LoanSum|ClosingDate|
+----+---+------------+-------+-----------+
|   A|ID1|  2022-10-10|     10| 2022-10-15|
|   A|ID1|  2022-10-15|     15|       null|
|   A|ID1|  2022-10-30|     20| 2022-11-10|
|   B|ID2|  2022-11-11|     15| 2022-10-14|
|   B|ID2|  2022-12-10|     30|       null|
|   B|ID2|  2022-12-12|     35| 2022-12-14|
|   C|ID3|  2022-12-19|     19| 2022-11-10|
|   D|ID4|  2022-12-10|     10|       null|
|   D|ID4|  2022-12-12|     40| 2022-11-29|
+----+---+------------+-------+-----------+

Output---->
+----+---+------------+-------+-----------+
|Name| ID|ContractDate|LoanSum|ClosingDate|
+----+---+------------+-------+-----------+
|   A|ID1|  2022-10-15|     15|       null|
|   A|ID1|  2022-10-30|     20| 2022-11-10|
|   B|ID2|  2022-12-10|     30|       null|
|   B|ID2|  2022-12-12|     35| 2022-12-14|
+----+---+------------+-------+-----------+
```
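One caveat: this works because missing closing dates are stored as the literal string `'null'`. If your ClosingDate column holds real SQL NULLs instead, the string comparisons silently fail. Below is a sketch of the same checks rewritten with `isNull()`/`isNotNull()` under that assumption; the `lag('ContractDate')` guard (which assumes ContractDate is never null) replaces the implicit partition-start behaviour of the string version:

```python
# Sketch assuming ClosingDate contains real NULLs instead of the string 'null'.
df_null = df.withColumn(
    'ClosingDate',
    f.when(f.col('ClosingDate') == 'null', f.lit(None)).otherwise(f.col('ClosingDate'))
)

w = Window.partitionBy('ID').orderBy('ContractDate')

df_null.withColumn(
    'check',
    f.when(
        # Open loan whose next loan exists and is closed...
        (f.col('ClosingDate').isNull() & f.lead('ClosingDate').over(w).isNotNull()) |
        # ...or closed loan whose previous loan exists and is still open.
        (f.col('ClosingDate').isNotNull()
         & f.lag('ContractDate').over(w).isNotNull()   # a previous row exists
         & f.lag('ClosingDate').over(w).isNull()), 1
    ).otherwise(0)
).withColumn(
    'loan_sum_check',
    f.when(((f.lead('LoanSum').over(w) - f.col('LoanSum')) <= 5) |
           ((f.col('LoanSum') - f.lag('LoanSum').over(w)) <= 5), 1).otherwise(0)
).filter('check=1 and loan_sum_check=1').drop('check', 'loan_sum_check').show()
```

On this sample data it produces the same four rows as the string-based version.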
