Failed to load: com/amazon/deequ/checks/Check

I'm building a spark application to load two json files, compare them, and print the differences. I also try to validate these files using amazon library aws deequ , but I'm getting the below exception:

```
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
20/08/07 11:56:33 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Error: Failed to load com.deeq.CompareDataFrames: com/amazon/deequ/checks/Check
log4j:WARN No appenders could be found for logger (org.apache.spark.util.ShutdownHookManager).
log4j:WARN Please
```

when I submit the job to Spark:

```
./spark-submit --class com.deeq.CompareDataFrames --master \
  spark://saif-VirtualBox:7077 ~/Downloads/deeq-trial-1.0-SNAPSHOT.jar
```

I'm using Ubuntu to host spark, it was working without any issues before I added deequ to run some validation. I wonder if I'm missing something in the deployment process. It doesn't seem like this error is a well-know one on the internet.

**Code:**

<!-- begin snippet: java hide: false console: true babel: false -->

<!-- language: lang-java -->

```java
import com.amazon.deequ.VerificationResult;
import com.amazon.deequ.VerificationSuite;
import com.amazon.deequ.checks.Check;
import com.amazon.deequ.checks.CheckLevel;
import com.amazon.deequ.checks.CheckStatus;
import com.amazon.deequ.constraints.Constraint;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;
import scala.Option;
import scala.Tuple2;
import scala.collection.mutable.ArraySeq;
import scala.collection.mutable.Seq;

public class CompareDataFrames {

    public static void main(String[] args) {
        SparkSession session = SparkSession.builder().appName("CompareDataFrames").getOrCreate();
        session.sparkContext().setLogLevel("ALL");

        StructType schema = DataTypes.createStructType(new StructField[]{
                DataTypes.createStructField("CUST_ID", DataTypes.StringType, true),
                DataTypes.createStructField("RECORD_LOCATOR_ID", DataTypes.StringType, true),
                DataTypes.createStructField("EVNT_ID", DataTypes.StringType, true)
        });

        Dataset<Row> first = session.read().option("multiline", "true").schema(schema)
                .json("/home/saif/Downloads/FILE_DEV1.json");
        System.out.println("======= DataSet 1 =======");
        first.printSchema();
        first.show(false);

        Dataset<Row> second = session.read().option("multiline", "true").schema(schema)
                .json("/home/saif/Downloads/FILE_DEV2.json");
        System.out.println("======= DataSet 2 =======");
        second.printSchema();
        second.show(false);

        // This will show all the rows which are present in the first dataset
        // but not present in the second dataset. But the comparison is at row
        // level and not at column level.
        System.out.println("======= Expect =======");
        first.except(second).show();

        StructType one = first.schema();
        JavaPairRDD<String, Row> pair1 = first.toJavaRDD().mapToPair((PairFunction<Row, String, Row>)
                row -> new Tuple2<>(row.getString(1), row));
        JavaPairRDD<String, Row> pair2 = second.toJavaRDD().mapToPair((PairFunction<Row, String, Row>)
                row -> new Tuple2<>(row.getString(1), row));
        System.out.println("======= Pair1 & Pair2 were created =======");

        JavaPairRDD<String, Row> subs = pair1.subtractByKey(pair2);
        JavaRDD<Row> rdd = subs.values();
        Dataset<Row> diff = session.createDataFrame(rdd, one);
        System.out.println("======= Diff Show =======");
        diff.show();

        Seq<Constraint> cons = new ArraySeq<>(0);
        VerificationResult vr = new VerificationSuite().onData(first)
                .addCheck(new Check(CheckLevel.Error(), "unit test", cons)
                        .isComplete("EVNT_ID", Option.empty())
                )
                .run();

        Seq<Check> checkSeq = new ArraySeq<>(0);
        if (vr.status() != CheckStatus.Success()) {
            Dataset<Row> vrr = vr.checkResultsAsDataFrame(session, vr, checkSeq);
            vrr.show(false);
        }
    }
}
```

<!-- end snippet -->

**Maven:**

<!-- begin snippet: xml hide: false console: true babel: false -->

<!-- language: lang-xml -->

  1. &lt;dependencies&gt;
  2. &lt;dependency&gt;
  3. &lt;groupId&gt;org.apache.spark&lt;/groupId&gt;
  4. &lt;artifactId&gt;spark-core_2.12&lt;/artifactId&gt;
  5. &lt;version&gt;3.0.0&lt;/version&gt;
  6. &lt;/dependency&gt;
  7. &lt;dependency&gt;
  8. &lt;groupId&gt;org.apache.spark&lt;/groupId&gt;
  9. &lt;artifactId&gt;spark-streaming_2.12&lt;/artifactId&gt;
  10. &lt;version&gt;3.0.0&lt;/version&gt;
  11. &lt;scope&gt;provided&lt;/scope&gt;
  12. &lt;/dependency&gt;
  13. &lt;dependency&gt;
  14. &lt;groupId&gt;org.apache.spark&lt;/groupId&gt;
  15. &lt;artifactId&gt;spark-sql_2.12&lt;/artifactId&gt;
  16. &lt;version&gt;3.0.0&lt;/version&gt;
  17. &lt;/dependency&gt;
  18. &lt;dependency&gt;
  19. &lt;groupId&gt;org.apache.spark&lt;/groupId&gt;
  20. &lt;artifactId&gt;spark-catalyst_2.12&lt;/artifactId&gt;
  21. &lt;version&gt;3.0.0&lt;/version&gt;
  22. &lt;/dependency&gt;
  23. &lt;dependency&gt;
  24. &lt;groupId&gt;com.amazon.deequ&lt;/groupId&gt;
  25. &lt;artifactId&gt;deequ&lt;/artifactId&gt;
  26. &lt;version&gt;1.0.4&lt;/version&gt;
  27. &lt;/dependency&gt;
  28. &lt;dependency&gt;
  29. &lt;groupId&gt;org.apache.logging.log4j&lt;/groupId&gt;
  30. &lt;artifactId&gt;log4j-core&lt;/artifactId&gt;
  31. &lt;version&gt;2.13.3&lt;/version&gt;
  32. &lt;/dependency&gt;
  33. &lt;dependency&gt;
  34. &lt;groupId&gt;org.scala-lang.modules&lt;/groupId&gt;
  35. &lt;artifactId&gt;scala-java8-compat_2.13&lt;/artifactId&gt;
  36. &lt;version&gt;0.9.1&lt;/version&gt;
  37. &lt;/dependency&gt;

<!-- end snippet -->

Answer 1 (score: 1)

Please follow the approaches below to resolve your problem.

**Approach 1.**

Submit the Spark job with the `--jars` option, which places the listed jars on both the driver and executor classpaths. Download the jar from the Maven repo at https://mvnrepository.com/artifact/com.amazon.deequ/deequ/1.0.4 to `~/Downloads/deequ-1.0.4.jar`, then submit:

```
./spark-submit --class com.deeq.CompareDataFrames --master \
  spark://saif-VirtualBox:7077 --jars ~/Downloads/deequ-1.0.4.jar ~/Downloads/deeq-trial-1.0-SNAPSHOT.jar
```

**Approach 2.**

Submit the Spark job with the `--packages` option, which resolves the artifact from its Maven coordinates at submit time:

```
./spark-submit --class com.deeq.CompareDataFrames --master \
  spark://saif-VirtualBox:7077 --packages com.amazon.deequ:deequ:1.0.4 ~/Downloads/deeq-trial-1.0-SNAPSHOT.jar
```

**Notes:**

1. The `--repositories` option is required only if a custom repository has to be referenced; by default, the Maven central repository is used when `--repositories` is not provided.
2. When the `--packages` option is specified, the submit operation looks for the packages and their dependencies in the `~/.ivy2/cache`, `~/.ivy2/jars`, and `~/.m2/repository` directories. If they are not found there, they are downloaded from Maven central using Ivy and stored under the `~/.ivy2` directory.
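
Whichever option is used, a quick way to confirm that deequ actually resolved onto the driver classpath is to probe for the class named in the error message. This probe class is a hedged sketch added here, not part of the original answer; submit it the same way as the real job:

```java
// Hypothetical probe: checks at runtime whether the class named in the
// error ("com/amazon/deequ/checks/Check") is loadable.
public class DeequClasspathProbe {
    public static void main(String[] args) {
        try {
            Class.forName("com.amazon.deequ.checks.Check");
            System.out.println("deequ Check class found on the classpath");
        } catch (ClassNotFoundException e) {
            // Same root cause as "Failed to load ...: com/amazon/deequ/checks/Check".
            System.out.println("deequ Check class NOT found: " + e);
        }
    }
}
```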
**Edit 1:**

**Approach 3:**

If approaches 1 and 2 above do not work, use the `maven-shade-plugin` to build an uber jar that bundles deequ, then proceed with `spark-submit`. Use the `pom.xml` below to build the uber jar, rebuild your jar (e.g. with `mvn clean package`), and deploy it with:

```
spark-submit --class com.deeq.CompareDataFrames --master \
  spark://saif-VirtualBox:7077 ~/Downloads/deeq-trial-1.0-SNAPSHOT.jar
```

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd&quot;>
<modelVersion>4.0.0</modelVersion>
<groupId>com.deeq</groupId>
<artifactId>deeq-trial-1.0-SNAPSHOT</artifactId>
<version>1.0</version>
<name>Spark-3.0 Spark Application</name>
<url>https://maven.apache.org</url>
<repositories>
<repository>
<id>codelds</id>
<url>https://code.lds.org/nexus/content/groups/main-repo</url>
</repository>
<repository>
<id>central</id>
<name>Maven Repository Switchboard</name>
<layout>default</layout>
<url>https://repo1.maven.org/maven2</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
</repositories>
<properties>
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
<encoding>UTF-8</encoding>
<scala.version>2.12.8</scala.version>
<java.version>1.8</java.version>
<CodeCacheSize>512m</CodeCacheSize>
<es.version>2.4.6</es.version>
</properties>
<dependencies>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.12</artifactId>
<version>3.0.0</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming_2.12</artifactId>
<version>3.0.0</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_2.12</artifactId>
<version>3.0.0</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-catalyst_2.12</artifactId>
<version>3.0.0</version>
</dependency>
<dependency>
<groupId>com.amazon.deequ</groupId>
<artifactId>deequ</artifactId>
<version>1.0.4</version>
</dependency>
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-core</artifactId>
<version>2.13.3</version>
</dependency>
<dependency>
<groupId>org.scala-lang.modules</groupId>
<artifactId>scala-java8-compat_2.13</artifactId>
<version>0.9.1</version>
</dependency>
</dependencies>
<build>
<resources>
<resource>
<directory>src/main/resources</directory>
</resource>
</resources>
<plugins>
<plugin>
<groupId>net.alchim31.maven</groupId>
<artifactId>scala-maven-plugin</artifactId>
<version>3.2.2</version>
<executions>
<execution>
<id>eclipse-add-source</id>
<goals>
<goal>add-source</goal>
</goals>
</execution>
<execution>
<id>scala-compile-first</id>
<phase>process-resources</phase>
<goals>
<goal>compile</goal>
</goals>
</execution>
<execution>
<id>scala-test-compile-first</id>
<phase>process-test-resources</phase>
<goals>
<goal>testCompile</goal>
</goals>
</execution>
<execution>
<id>attach-scaladocs</id>
<phase>verify</phase>
<goals>
<goal>doc-jar</goal>
</goals>
</execution>
</executions>
<configuration>
<scalaVersion>${scala.version}</scalaVersion>
<recompileMode>incremental</recompileMode>
<useZincServer>true</useZincServer>
<args>
<arg>-unchecked</arg>
<arg>-deprecation</arg>
<arg>-feature</arg>
</args>
<jvmArgs>
<jvmArg>-Xms1024m</jvmArg>
<jvmArg>-Xmx1024m</jvmArg>
<jvmArg>-XX:ReservedCodeCacheSize=${CodeCacheSize}</jvmArg>
</jvmArgs>
<javacArgs>
<javacArg>-source</javacArg>
<javacArg>${java.version}</javacArg>
<javacArg>-target</javacArg>
<javacArg>${java.version}</javacArg>
<javacArg>-Xlint:all,-serial,-path</javacArg>
</javacArgs>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<artifactSet>
<excludes>
<exclude>org.xerial.snappy</exclude>
<exclude>org.scala-lang.modules</exclude>
<exclude>org.scala-lang</exclude>
</excludes>
</artifactSet>
<filters>
<filter>
<artifact>:</artifact>
<excludes>
<exclude>META-INF/.SF</exclude>
<exclude>META-INF/
.DSA</exclude>
<exclude>META-INF/*.RSA</exclude>
</excludes>
</filter>
</filters>
<relocations>
<relocation>
<pattern>com.google.common</pattern>
<shadedPattern>shaded.com.google.common</shadedPattern>
</relocation>
</relocations>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>

  1. </details>
