Exploding a struct with no arrays in PySpark

I have JSON data like:

{
    "labels1": {"A": 1, "B": 2, "C": 3},
    "labels2": {"A": 1, "B": 2, "C": 3}
}

and I want three output columns: tagname, key, and value. The final output should look like:

tagname,key,value
labels1,A,1
labels1,B,2
labels1,C,3
labels2,A,1
labels2,B,2
labels2,C,3

How can I achieve this use case? Also, the keys A, B, C are just samples; there can be multiple optional fields. Thanks in advance, and let me know if any more information is required.

Answer 1

Score: 0

Try the built-in PySpark functions for this case, such as stack, and unnest the struct to add its fields as new columns.

Example:

from pyspark.sql.functions import col, expr, lit

json_str = """{"labels1":{"A":1,"B":2,"C":3},"labels2":{"A":1,"B":2,"C":3}}"""
df = spark.read.json(sc.parallelize([json_str]))

# stack() the two struct columns into (tagname, struct) rows, unnest the
# struct with col1.*, then stack() again to unpivot A/B/C into key/value.
df.select(expr("stack(2, 'labels1', labels1, 'labels2', labels2)")) \
  .select(col("col0").alias("tagname"), col("col1.*")) \
  .select("tagname", expr("stack(3, 'A', A, 'B', B, 'C', C) as (key, value)")) \
  .show()

#+-------+---+-----+
#|tagname|key|value|
#+-------+---+-----+
#|labels1|  A|    1|
#|labels1|  B|    2|
#|labels1|  C|    3|
#|labels2|  A|    1|
#|labels2|  B|    2|
#|labels2|  C|    3|
#+-------+---+-----+
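
Since A, B, C are only samples, here is a minimal sketch that builds the same two stack expressions dynamically from the DataFrame schema. It assumes every top-level column is a struct and all structs share the same fields; truly optional keys would need a per-struct field list:

from pyspark.sql.functions import col, expr

tags = df.columns                                # e.g. ['labels1', 'labels2']
keys = df.schema[tags[0]].dataType.fieldNames()  # e.g. ['A', 'B', 'C']
outer = ", ".join(f"'{t}', {t}" for t in tags)
inner = ", ".join(f"'{k}', {k}" for k in keys)

df.select(expr(f"stack({len(tags)}, {outer})")) \
  .select(col("col0").alias("tagname"), col("col1.*")) \
  .select("tagname", expr(f"stack({len(keys)}, {inner}) as (key, value)")) \
  .show()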

Another way is the DataFrame unpivot function (available in Spark 3.4+):

# unpivot() needs an id column, so add a dummy one; unpivot the structs into
# (tagname, struct) rows, unnest, then unpivot A/B/C into key/value.
df.withColumn("n", lit(1)) \
  .unpivot("n", ["labels1", "labels2"], "new", "new1") \
  .select(col("new").alias("tagname"), col("new1.*")) \
  .unpivot("tagname", ["A", "B", "C"], "key", "value") \
  .show()
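
The same schema-driven column lists work here as well; a sketch under the same uniform-struct assumption as above:

tags = df.columns
keys = df.schema[tags[0]].dataType.fieldNames()

df.withColumn("n", lit(1)) \
  .unpivot("n", tags, "new", "new1") \
  .select(col("new").alias("tagname"), col("new1.*")) \
  .unpivot("tagname", keys, "key", "value") \
  .show()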
