Row-wise cumulative sum of a DataFrame in PySpark


# Question

This is the input DataFrame:

| origin | destination | 10+ Days | 10 Days | 9 Days | 8 Days | 7 Days | 6 Days | 5 Days | 4 Days | 3 Days | 2 Days | 1 Day |
|--------|-------------|----------|---------|--------|--------|--------|--------|--------|--------|--------|--------|-------|
| CWCJ   | MDCC        | 66       | 0       | 0      | 0      | 2      | 1      | 13     | 8      | 11     | 2      | 63    |
| CWCJ   | PPSP        | 21       | 0       | 0      | 0      | 2      | 1      | 13     | 8      | 8      | 2      | 3     |
| PCWD   | MDCC        | 50       | 0       | 0      | 0      | 0      | 0      | 0      | 0      | 0      | 0      | 0     |
| PCWD   | PPSP        | 0        | 0       | 0      | 0      | 0      | 0      | 0      | 0      | 3      | 0      | 39    |
| DPMT   | JNPT        | 0        | 0       | 0      | 0      | 0      | 0      | 0      | 0      | 0      | 0      | 21    |
| PMKM   | PPSP        | 0        | 0       | 0      | 0      | 0      | 0      | 0      | 0      | 0      | 0      | 0     |
| PMKM   | MDCC        | 2        | 0       | 0      | 0      | 0      | 0      | 0      | 0      | 0      | 0      | 0     |

I am trying to convert the input into the following output using a cumulative sum (the cumulative sum starts from the 10+ Days column):

| origin | destination | 10+ days | 8-Aug | 9-Aug | 10-Aug | 11-Aug | 12-Aug | 13-Aug | 14-Aug | 15-Aug | 16-Aug | 17-Aug | 18-Aug |
|--------|-------------|----------|-------|-------|--------|--------|--------|--------|--------|--------|--------|--------|--------|
| CWCJ   | MDCC        | 66       | 66    | 66    | 66     | 66     | 68     | 69     | 82     | 90     | 101    | 103    | 166    |
| CWCJ   | PPSP        | 21       | 21    | 21    | 21     | 21     | 23     | 24     | 37     | 45     | 53     | 55     | 58     |
| PCWD   | MDCC        | 50       | 50    | 50    | 50     | 50     | 50     | 50     | 50     | 50     | 50     | 50     | 50     |
| PCWD   | PPSP        | 0        | 0     | 0     | 0      | 0      | 0      | 0      | 0      | 0      | 3      | 3      | 42     |
| DPMT   | JNPT        | 0        | 0     | 0     | 0      | 0      | 0      | 0      | 0      | 0      | 0      | 0      | 21     |
| PMKM   | PPSP        | 0        | 0     | 0     | 0      | 0      | 0      | 0      | 0      | 0      | 0      | 0      | 0      |
| PMKM   | MDCC        | 2        | 2     | 2     | 2      | 2      | 2      | 2      | 2      | 2      | 2      | 2      | 2      |

The column names should be today's date, today's date + 1, today's date + 2, ..., today's date + 10.

I tried the following PySpark/Python code, but it does not produce the expected output.

```py
from pyspark.sql.window import Window
from pyspark.sql import functions as F
from datetime import datetime, timedelta

todays_date = datetime.today().date()
future_dates = [todays_date + timedelta(days=i) for i in range(1, 12)]

columns_to_sum = ["9", "8", "7", "6", "5", "4", "3", "2", "1"]

window_spec = Window.orderBy("origin", "destination")

for i, date in enumerate(future_dates):
    col_name = date.strftime("%d-%b")
    for col in columns_to_sum:
        summary_export_dwell_df_temp = summary_export_dwell_df_temp.withColumn(col_name, F.when(F.col(col) > 0, F.sum(col).over(window_spec)).otherwise(0).cast("int"))
    window_spec = Window.orderBy("origin", "destination")

selected_columns = ["origin", "destination", "10+"] + [date.strftime("%d-%b") for date in future_dates]
selected_df = summary_export_dwell_df_temp.select(*selected_columns)

selected_df.show()
```

However, I think there is a mistake in the cumulative-sum logic, so I would appreciate your input on that, as well as on the date logic once the cumulative sum is in place.

Below is the output I am getting:

| origin | destination | 10+ | 09-Aug | 10-Aug | 11-Aug | 12-Aug | 13-Aug | 14-Aug | 15-Aug | 16-Aug | 17-Aug | 18-Aug | 19-Aug |
|--------|-------------|-----|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|
| CWCJ   | MDCC        | 66  | 3      | 3      | 3      | 3      | 3      | 3      | 3      | 3      | 3      | 3      | 3      |
| CWCJ   | PPSP        | 21  | 0      | 0      | 0      | 0      | 0      | 0      | 0      | 0      | 0      | 0      | 0      |
| DPMT   | JNPT        | 0   | 0      | 0      | 0      | 0      | 0      | 0      | 0      | 0      | 0      | 0      | 0      |
| PCWD   | MDCC        | 50  | 42     | 42     | 42     | 42     | 42     | 42     | 42     | 42     | 42     | 42     | 42     |
| PCWD   | PPSP        | 0   | 63     | 63     | 63     | 63     | 63     | 63     | 63     | 63     | 63     | 63     | 63     |
| PMKM   | MDCC        | 2   | 0      | 0      | 0      | 0      | 0      | 0      | 0      | 0      | 0      | 0      | 0      |
| PMKM   | PPSP        | 0   | 0      | 0      | 0      | 0      | 0      | 0      | 0      | 0      | 0      | 0      | 0      |
| Total  |             | 139 | 126    | 126    | 126    | 126    | 126    | 126    | 126    | 126    | 126    | 126    | 126    |

# Answer 1

**Score**: 1

Since you have the pandas tag:

```py
df = df.set_index(["origin", "destination"])
df = df.cumsum(axis=1).reset_index()
```

Or, if the columns to accumulate always come right after the first two identifier columns:

```py
df.iloc[:, 2:] = df.iloc[:, 2:].cumsum(axis=1)
```

To rename the columns with dates starting from today's date:

```py
df.columns = list(df.columns[:3]) + [dt.strftime("%d-%b") for dt in pd.date_range(pd.Timestamp.today(), freq="D", periods=10)]
```

Output:

```
>>> df
  origin destination  10+ Days  08-Aug  ...  14-Aug  15-Aug  16-Aug  17-Aug
0   CWCJ        MDCC        66      66  ...      90     101     103     166
1   CWCJ        PPSP        21      21  ...      45      53      55      58
2   PCWD        MDCC        50      50  ...      50      50      50      50
3   PCWD        PPSP         0       0  ...       0       3       3      42
4   DPMT        JNPT         0       0  ...       0       0       0      21
5   PMKM        PPSP         0       0  ...       0       0       0       0
6   PMKM        MDCC         2       2  ...       2       2       2       2

[7 rows x 13 columns]
```
Input `df`:

```py
df = pd.DataFrame({
    'origin': ['CWCJ', 'CWCJ', 'PCWD', 'PCWD', 'DPMT', 'PMKM', 'PMKM'],
    'destination': ['MDCC', 'PPSP', 'MDCC', 'PPSP', 'JNPT', 'PPSP', 'MDCC'],
    '10+ Days': [66, 21, 50, 0, 0, 0, 2],
    '10 Days': [0, 0, 0, 0, 0, 0, 0],
    '9 Days': [0, 0, 0, 0, 0, 0, 0],
    '8 Days': [0, 0, 0, 0, 0, 0, 0],
    '7 Days': [2, 2, 0, 0, 0, 0, 0],
    '6 Days': [1, 1, 0, 0, 0, 0, 0],
    '5 Days': [13, 13, 0, 0, 0, 0, 0],
    '4 Days': [8, 8, 0, 0, 0, 0, 0],
    '3 Days': [11, 8, 0, 3, 0, 0, 0],
    '2 Days': [2, 2, 0, 0, 0, 0, 0],
    '1 Day': [63, 3, 0, 39, 21, 0, 0]
})
```
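
If the data starts out as a Spark DataFrame rather than pandas, a minimal sketch of the same idea (assuming the table is small enough to collect to the driver; `spark_df` is a hypothetical name for the Spark input) is to round-trip through pandas:

```py
# Sketch: row-wise cumulative sum via a pandas round-trip.
# `spark_df` is a hypothetical Spark DataFrame with the same columns as above.
pdf = spark_df.toPandas()                          # assumes the data fits in driver memory
pdf.iloc[:, 2:] = pdf.iloc[:, 2:].cumsum(axis=1)   # accumulate across the day columns
result_sdf = spark.createDataFrame(pdf)            # back to Spark if needed
result_sdf.show()
```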

# Answer 2

**Score**: 0

Try the Spark built-in unpivot (`stack`) and `pivot` functions for this case: unpivot the day columns into rows, take a running window sum per `origin`/`destination`, then pivot back into one column per date.

Example:

```py
from pyspark.sql.functions import *
from pyspark.sql import *

w = Window.partitionBy("origin", "destination").rowsBetween(Window.unboundedPreceding, Window.currentRow)

# sample data
df = spark.createDataFrame([('CWCJ', 'MDCC', '66', '0', '0', '0', '2', '1', '13', '8', '11', '2', '63')],
                           ['origin', 'destination', '10+ Days', '10 Days', '9 Days', '8 Days', '7 Days', '6 Days', '5 Days', '4 Days', '3 Days', '2 Days', '1 Day'])
df.show()
#+------+-----------+--------+-------+------+------+------+------+------+------+------+------+-----+
#|origin|destination|10+ Days|10 Days|9 Days|8 Days|7 Days|6 Days|5 Days|4 Days|3 Days|2 Days|1 Day|
#+------+-----------+--------+-------+------+------+------+------+------+------+------+------+-----+
#|  CWCJ|       MDCC|      66|      0|     0|     0|     2|     1|    13|     8|    11|     2|   63|
#+------+-----------+--------+-------+------+------+------+------+------+------+------+------+-----+

# unpivot the day columns into rows, keep the original column order with a monotonic id,
# take the running sum per origin/destination, and relabel the rows with dates
sum_df = df.select(col('origin'), col('destination'), expr("""stack(11,'10+ Days',`10+ Days`,'10 Days',`10 Days`,'9 Days',`9 Days`,'8 Days',`8 Days`,'7 Days',`7 Days`,'6 Days',`6 Days`,'5 Days',`5 Days`,'4 Days',`4 Days`,'3 Days',`3 Days`,'2 Days',`2 Days`,'1 Day',`1 Day`)""")).\
    withColumn("mid", monotonically_increasing_id()).\
    withColumn("sum", sum("col1").over(w)).\
    withColumn("rn", row_number().over(Window.partitionBy("origin", "destination").orderBy("mid"))-2).\
    withColumn("dt", when(col("rn")>=0, expr("date_format(date_add(current_date,rn),'dd-MMM')")).otherwise(col("col0")))

# pivot back so each date becomes a column again
sum_df.groupBy("origin", "destination").pivot("dt").agg(first(col("sum"))).show()
#+------+-----------+------+------+--------+------+------+------+------+------+------+------+------+
#|origin|destination|08-Aug|09-Aug|10+ Days|10-Aug|11-Aug|12-Aug|13-Aug|14-Aug|15-Aug|16-Aug|17-Aug|
#+------+-----------+------+------+--------+------+------+------+------+------+------+------+------+
#|  CWCJ|       MDCC|  66.0|  66.0|    66.0|  66.0|  68.0|  69.0|  82.0|  90.0| 101.0| 103.0| 166.0|
#+------+-----------+------+------+--------+------+------+------+------+------+------+------+------+
```
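
A small follow-up note (not from the original answer): because the sample rows were created as strings, the window `sum` comes back as doubles (`66.0`, `68.0`, ...). If integer output is preferred, one sketch is to cast the stacked value column before summing:

```py
from pyspark.sql.functions import col, sum as sum_

# Sketch: recompute the running sum on an int-cast value column,
# reusing `sum_df` and the window `w` defined above.
sum_df = sum_df.withColumn("sum", sum_(col("col1").cast("int")).over(w))
```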


# Answer 3

**Score**: 0

You could use `reduce()` from `functools`.

Here is an example using the first two rows of your question's input sample:

```py
from functools import reduce
import datetime
import pyspark.sql.functions as func

sumcols = [k for k in data_sdf.columns if 'day' in k.lower()]
# ['10+ Days', '10 Days', '9 Days', '8 Days', '7 Days', '6 Days', '5 Days', '4 Days', '3 Days', '2 Days', '1 Day']

# column rename logic
cname_logic = lambda x: (datetime.date.today() + datetime.timedelta(days=x)).strftime('%d-%b')

data_sdf. \
    select('origin', 'destination', 
           *[reduce(lambda x, y: x+y, [func.col(c) for c in sumcols[0:i+1]]).alias(cname_logic(i)) for i in range(len(sumcols))]
           ). \
    show()

# +------+-----------+------+------+------+------+------+------+------+------+------+------+------+
# |origin|destination|08-Aug|09-Aug|10-Aug|11-Aug|12-Aug|13-Aug|14-Aug|15-Aug|16-Aug|17-Aug|18-Aug|
# +------+-----------+------+------+------+------+------+------+------+------+------+------+------+
# |  CWCJ|       MDCC|    66|    66|    66|    66|    68|    69|    82|    90|   101|   103|   166|
# |  CWCJ|       PPSP|    21|    21|    21|    21|    23|    24|    37|    45|    53|    55|    58|
# +------+-----------+------+------+------+------+------+------+------+------+------+------+------+
```

The `reduce()` call generates the running column sums:

```py
[reduce(lambda x, y: x+y, [func.col(c) for c in sumcols[0:i+1]]).alias(sumcols[i]) for i in range(len(sumcols))]

# [Column<'10+ Days AS `10+ Days`'>,
#  Column<'(10+ Days + 10 Days) AS `10 Days`'>,
#  Column<'((10+ Days + 10 Days) + 9 Days) AS `9 Days`'>,
#  Column<'(((10+ Days + 10 Days) + 9 Days) + 8 Days) AS `8 Days`'>,
#  Column<'((((10+ Days + 10 Days) + 9 Days) + 8 Days) + 7 Days) AS `7 Days`'>,
#  Column<'(((((10+ Days + 10 Days) + 9 Days) + 8 Days) + 7 Days) + 6 Days) AS `6 Days`'>,
#  Column<'((((((10+ Days + 10 Days) + 9 Days) + 8 Days) + 7 Days) + 6 Days) + 5 Days) AS `5 Days`'>,
#  Column<'(((((((10+ Days + 10 Days) + 9 Days) + 8 Days) + 7 Days) + 6 Days) + 5 Days) + 4 Days) AS `4 Days`'>,
#  Column<'((((((((10+ Days + 10 Days) + 9 Days) + 8 Days) + 7 Days) + 6 Days) + 5 Days) + 4 Days) + 3 Days) AS `3 Days`'>,
#  Column<'(((((((((10+ Days + 10 Days) + 9 Days) + 8 Days) + 7 Days) + 6 Days) + 5 Days) + 4 Days) + 3 Days) + 2 Days) AS `2 Days`'>,
#  Column<'((((((((((10+ Days + 10 Days) + 9 Days) + 8 Days) + 7 Days) + 6 Days) + 5 Days) + 4 Days) + 3 Days) + 2 Days) + 1 Day) AS `1 Day`'>]
```
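
If `reduce` feels opaque, the same running sums can be built with a plain loop; this is a sketch that reuses `sumcols`, `cname_logic`, and `data_sdf` from the snippet above:

```py
import pyspark.sql.functions as func

# Sketch: grow a running Column expression one day column at a time
# and snapshot it under the corresponding date name.
running = None
cum_cols = []
for i, c in enumerate(sumcols):
    running = func.col(c) if running is None else running + func.col(c)
    cum_cols.append(running.alias(cname_logic(i)))

data_sdf.select('origin', 'destination', *cum_cols).show()
```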