Join dataframe with order by desc limit on Spark/Java

Question
I'm using the following code:

Dataset<Row> dataframee = df1.as("a").join(df2.as("b"),
        df2.col("id_device").equalTo(df1.col("ID_device_previous"))
            .and(df2.col("id_vehicule").equalTo(df1.col("ID_vehicule_previous")))
            .and(df2.col("tracking_time").lt(df1.col("date_track_previous"))),
        "left")
    .selectExpr("a.*", "b.ID_tracking as ID_pprevious", "b.km as KM_pprevious",
        "b.tracking_time as tracking_time_pprevious", "b.speed as speed_pprevious");
This joins each df1 row with multiple matching rows from the df2 dataframe.

But what I want is to join the df1 dataframe with the df2 dataframe on the same condition while keeping only one match per df1 row, ordered by df2.col("tracking_time") desc with limit(0,1).
EDIT
I tried the following code, but it doesn't work:

df1.registerTempTable("data");
df2.createOrReplaceTempView("tdays");

Dataset<Row> d_f = sparkSession.sql(
    "select a.* from data as a LEFT JOIN "
  + "(select b.tracking_time from tdays as b "
  + " where b.id_device = a.ID_device_previous "
  + "   and b.id_vehicule = a.ID_vehicule_previous "
  + "   and b.tracking_time < a.date_track_previous "
  + " order by b.tracking_time desc limit 1)");

I need your help.
Answer 1

Score: 1
You can do this in multiple ways; these are the ones I'm aware of:

1. You can call dropDuplicates on your joined dataframee DF:

   val finalDF = dataframee.dropDuplicates("...") // the columns you want to be distinct/unique in the final output
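Since the question uses Java, here is a minimal, hypothetical Java sketch of this option. The deduplication columns and the tracking_time_pprevious alias are taken from the question's selectExpr; note that Spark does not guarantee which duplicate survives dropDuplicates, even after a sort, so prefer the second option if you strictly need the latest tracking_time.

import static org.apache.spark.sql.functions.col;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

// Sketch only: keep one df2 match per df1 row from the joined result.
// Assumption: the three df1 join columns identify a df1 row.
Dataset<Row> finalDF = dataframee
    .orderBy(col("tracking_time_pprevious").desc())
    .dropDuplicates(new String[]{
        "ID_device_previous", "ID_vehicule_previous", "date_track_previous"});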
(OR)
2. Spark SQL:

   import spark.implicits._

   df1.createOrReplaceTempView("table1")
   df2.createOrReplaceTempView("table2")

   val result = spark.sql("join query with groupBy on the distinct columns")
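As a concrete illustration of that second option, here is my own sketch in Java (an assumption, not the answerer's exact query): join first, then rank each df1 row's matches with ROW_NUMBER() and keep only the newest one. It assumes the three df1 join columns uniquely identify a df1 row; if they don't, partition by a real unique key of df1 instead.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

df1.createOrReplaceTempView("table1");
df2.createOrReplaceTempView("table2");

// Rank df2 matches per df1 row by tracking_time (newest first) and keep
// only the top one; unmatched df1 rows survive the LEFT JOIN with rn = 1.
Dataset<Row> d_f = sparkSession.sql(
    "SELECT * FROM ( "
  + "  SELECT a.*, b.ID_tracking AS ID_pprevious, b.km AS KM_pprevious, "
  + "         b.tracking_time AS tracking_time_pprevious, b.speed AS speed_pprevious, "
  + "         ROW_NUMBER() OVER ( "
  + "           PARTITION BY a.ID_device_previous, a.ID_vehicule_previous, a.date_track_previous "
  + "           ORDER BY b.tracking_time DESC) AS rn "
  + "  FROM table1 a "
  + "  LEFT JOIN table2 b "
  + "    ON b.id_device = a.ID_device_previous "
  + "   AND b.id_vehicule = a.ID_vehicule_previous "
  + "   AND b.tracking_time < a.date_track_previous "
  + ") t WHERE rn = 1").drop("rn");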