Getting Error for org.apache.spark.sql.Encoder and missing or invalid dependency find while loading class file SQLImplicits, LowPrioritySQLImplicits
Question
I am running the following code to read a Kafka stream with Spark 3.2.2 and Scala 2.12.0. The same code was previously working fine with Spark 2.2 and Scala 2.11.8.
```scala
import spark.implicits._

val kafkaStream = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", settings.kafka.brokers)
  .option("startingOffsets", "latest")
  .option("failOnDataLoss", "false")
  .option("subscribe", "serviceproblems")
  .load()

val dataset = kafkaStream.select($"key", $"value").as[(String, String)]

val mapper = new ObjectMapper
mapper.registerModule(new ServiceProblemDeserializerModule())
```
I am getting the following error while building the code:
```
could not find implicit value for evidence parameter of type org.apache.spark.sql.Encoder[(String, String)]
[ERROR] val dataset = kafkaStream.select($"key", $"value").as[(String, String)]
```
There are some other errors as well whose cause I cannot work out, as only limited help is available:
```
[ERROR] missing or invalid dependency detected while loading class file 'SQLImplicits.class'.
Could not access type Encoder in package org.apache.spark.sql,
because it (or its dependencies) are missing. Check your build definition for
missing or conflicting dependencies. (Re-run with -Ylog-classpath
to see the problematic classpath.)
A full rebuild may help if 'SQLImplicits.class' was compiled against an incompatible version of org.apache.spark.sql.
[ERROR] missing or invalid dependency detected while loading class file 'LowPrioritySQLImplicits.class'.
Could not access type Encoder in package org.apache.spark.sql,
because it (or its dependencies) are missing. Check your build definition for
missing or conflicting dependencies. (Re-run with -Ylog-classpath
to see the problematic classpath.)
A full rebuild may help if 'LowPrioritySQLImplicits.class' was compiled against an incompatible version of org.apache.spark.sql.
[ERROR] missing or invalid dependency detected while loading class file 'package.class'.
Could not access type Row in package org.apache.spark.sql,
because it (or its dependencies) are missing. Check your build definition for
missing or conflicting dependencies. (Re-run with -Ylog-classpath
to see the problematic classpath.)
A full rebuild may help if 'package.class' was compiled against an incompatible version of org.apache.spark.sql.
[ERROR] missing or invalid dependency detected while loading class file 'Dataset.class'.
Could not access type Encoder in package org.apache.spark.sql,
because it (or its dependencies) are missing. Check your build definition for
missing or conflicting dependencies. (Re-run with -Ylog-classpath
to see the problematic classpath.)
A full rebuild may help if 'Dataset.class' was compiled against an incompatible version of org.apache.spark.sql.
```
I would appreciate any help. Thanks in advance.
Answer 1
Score: 0
After some hustle and support from a mentor, I was able to move past this. What we found was that we are using an org.elasticsearch dependency that also pulls in Spark libraries transitively, which introduced the conflicts. Adding exclusions for the duplicate dependencies resolved the issue. The original dependency was:
```xml
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch-spark-30_2.12</artifactId>
</dependency>
```
It was changed to this:
```xml
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch-spark-30_${scala.binary.version}</artifactId>
    <version>7.12.1</version>
    <exclusions>
        <exclusion>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.12</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.12</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming_2.12</artifactId>
        </exclusion>
        <exclusion>
            <!-- The Scala standard library's coordinates are
                 org.scala-lang:scala-library (no Scala-version suffix). -->
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-catalyst_2.12</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```
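As an alternative to per-dependency exclusions, a sketch of the same fix using Maven's `<dependencyManagement>`: pinning the Spark artifacts once forces every transitive Spark dependency to resolve to the same version, which avoids the "compiled against an incompatible version" errors. (Running `mvn dependency:tree` shows which transitive Spark artifacts a dependency pulls in.) The property names below are hypothetical; adjust them to match your `pom.xml`.

```xml
<!-- Hypothetical property names; adapt to the project's pom.xml. -->
<properties>
    <scala.binary.version>2.12</scala.binary.version>
    <spark.version>3.2.2</spark.version>
</properties>

<dependencyManagement>
    <dependencies>
        <!-- Force every Spark artifact, including transitive ones
             (e.g. from elasticsearch-spark), to the same version. -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_${scala.binary.version}</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_${scala.binary.version}</artifactId>
            <version>${spark.version}</version>
        </dependency>
    </dependencies>
</dependencyManagement>
```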