Spark - how to get random unique rows

Question

I need a way to get some x number of random rows from a dataset which are unique. I tried the `sample` method of the Dataset class, but it sometimes picks duplicate rows.

Dataset's `sample` method:
Answer 1

Score: 2

The `sample` function with `withReplacement = false` always picks distinct rows: `df1.sample(false, 0.1).show()`

> sample(boolean withReplacement, double fraction)

Consider the example below: `withReplacement = true` produced duplicate rows (which the grouped count verifies), while `withReplacement = false` did not.

```scala
// In spark-shell, spark.implicits._ is already in scope; otherwise
// import it to enable .toDF and the $"col" syntax.
import org.apache.spark.sql.functions._

// Build a 10,000-row DataFrame with col2 = 2 * col1.
val df1 = (1 to 10000).zip((1 to 10000).map(_ * 2)).toDF("col1", "col2")

println("Sample Count for with Replacement : " + df1.sample(true, 0.1).count)
println("Sample Count for with Out Replacement : " + df1.sample(false, 0.1).count)

// Rows appearing more than once indicate duplicates in the sample.
df1.sample(true, 0.1).groupBy($"col1", $"col2").count().filter($"count" > 1).show(5)
df1.sample(false, 0.1).groupBy($"col1", $"col2").count().filter($"count" > 1).show(5)
```
```
Sample Count for with Replacement : 978
Sample Count for with Out Replacement : 973

+----+-----+-----+
|col1| col2|count|
+----+-----+-----+
|7464|14928|    2|
|6080|12160|    2|
|6695|13390|    2|
|3393| 6786|    2|
|2137| 4274|    2|
+----+-----+-----+
only showing top 5 rows

+----+----+-----+
|col1|col2|count|
+----+----+-----+
+----+----+-----+
```
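As a side note, `sample` also has an overload that takes a seed, which makes the sample reproducible across runs. A minimal sketch, assuming a spark-shell session where `df1` is defined as above:

```scala
// sample(withReplacement, fraction, seed): with the same seed, the same
// data, and the same partitioning, the same rows are selected each run.
val s1 = df1.sample(false, 0.1, 42L)
val s2 = df1.sample(false, 0.1, 42L)

// Given identical data and partitioning, this difference is expected
// to be empty (count 0).
println(s1.except(s2).count)
```

This is useful when a sampling step needs to be repeatable, e.g. for debugging or for deterministic train/test splits.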
# Answer 2

**Score**: 1

You should use the `sample` function with `withReplacement` set to `false`, for example:

```scala
val sampledData = df.sample(withReplacement = false, fraction = 0.5)
```

But this is not guaranteed to return exactly the given fraction of your Dataset's total row count. To get an exact count, take X rows from the sampled data after calling `sample`.
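One common way to take exactly `x` distinct random rows, rather than an approximate fraction, is to shuffle with `rand()` and then `limit`. A sketch, assuming a SparkSession is in scope and `df` itself contains no duplicate rows (`x` is a hypothetical target count):

```scala
import org.apache.spark.sql.functions.rand

val x = 100
// orderBy(rand()) shuffles the rows in random order; limit(x) keeps
// exactly x of them. Each source row can be selected at most once,
// so the result has no duplicates as long as df's rows are distinct.
val exactSample = df.orderBy(rand()).limit(x)
```

Note that `orderBy(rand())` sorts the whole Dataset, which is expensive for large data; a cheaper variant is to `sample` a fraction somewhat larger than `x / df.count` first and apply `limit(x)` to the result.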