Cannot establish SSL connection to cluster, getting SSLHandshakeException: "error:100000f7:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER"

Question

You are encountering an SSL handshake error: the error message shows that the handshake failed while connecting to the Cassandra database, most likely because of an SSL setup or configuration problem.

To resolve it, you can try the following:

  1. Check the SSL settings: make sure your SSL configuration is correct. You may need to verify that the certificates and keys are valid and that they match what the Cassandra cluster expects.

  2. Check the network connection: make sure the Cassandra cluster is reachable from your client and that nothing (such as a firewall) is blocking it; the probe sketch after this list is one quick way to check.

  3. Check the Cassandra configuration: confirm that Cassandra's SSL configuration is correct and that it is listening on the expected port (usually 9042).

  4. Update your dependencies: make sure the DataStax Spark Cassandra Connector and related libraries you use are recent versions, to avoid known issues.

  5. Check the Cassandra logs: look for any other useful error messages or warnings.

If none of the above helps, you may need to ask the DataStax community or a related forum, since this may be a library- or environment-specific issue that needs more detailed debugging and support.
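
For points 2 and 3, one quick way to check from the client machine whether the node is reachable and actually answering TLS on the native-transport port is a plain-Python probe. This is only a sketch: the host and port are taken from the error messages further down, it skips certificate verification on purpose, and it exercises nothing Cassandra-specific beyond the TLS handshake.

    import socket
    import ssl

    HOST, PORT = "10.230.88.101", 9042  # node and native-transport port from the error messages below

    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # we only want to know whether the port speaks TLS at all

    try:
        with socket.create_connection((HOST, PORT), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                print("TLS handshake succeeded, negotiated", tls.version())
    except ssl.SSLError as exc:
        print("Port is reachable but did not complete a TLS handshake:", exc)
    except OSError as exc:
        print("Could not reach the port at all:", exc)

If the handshake succeeds, the node is reachable and serving TLS on 9042; if it fails with an SSL error, the port is probably still speaking the plaintext native protocol.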

Original question (English):

I'm trying to save a PySpark dataframe to Cassandra DB with the DataStax Spark Cassandra Connector.
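
For reference, a minimal sketch of this kind of write through the connector (the keyspace, table, and dataframe below are placeholders, not taken from the original job, and the connector package is assumed to be on the classpath):

    from pyspark.sql import SparkSession

    # Placeholder session config; the contact point is the node address from the error below.
    spark = (
        SparkSession.builder
        .appName("cassandra-ssl-write")
        .config("spark.cassandra.connection.host", "10.230.88.101")
        .config("spark.cassandra.connection.port", "9042")
        .config("spark.cassandra.connection.ssl.enabled", "true")
        .getOrCreate()
    )

    df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])

    # Save through the connector's data source (placeholder keyspace/table).
    (
        df.write
        .format("org.apache.spark.sql.cassandra")
        .options(table="my_table", keyspace="my_keyspace")
        .mode("append")
        .save()
    )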

I set spark.cassandra.connection.ssl.enabled to true, create a SparkSession, and try to save my dataframe, but I get the following error message in the Cassandra logs:

WARN  [epollEventLoopGroup-5-32] 2023-05-05 16:35:04,962 PreV5Handlers.java:261 - Unknown exception in client networking
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: error:100000f7:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:478)
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
	at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
	at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
	at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.base/java.lang.Thread.run(Thread.java:829)

And in my Python process itself I get the following error message:

INFO - : java.io.IOException: Failed to open native connection to Cassandra at {10.230.88.101}:9042
INFO - 	at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:168)
INFO - 	at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$8.apply(CassandraConnector.scala:154)
INFO - 	at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$8.apply(CassandraConnector.scala:154)
INFO - 	at com.datastax.spark.connector.cql.RefCountedCache.createNewValueAndKeys(RefCountedCache.scala:32)
INFO - 	at com.datastax.spark.connector.cql.RefCountedCache.syncAcquire(RefCountedCache.scala:69)
INFO - 	at com.datastax.spark.connector.cql.RefCountedCache.acquire(RefCountedCache.scala:57)
INFO - 	at com.datastax.spark.connector.cql.CassandraConnector.openSession(CassandraConnector.scala:79)
INFO - 	at com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:111)
INFO - 	at com.datastax.spark.connector.rdd.partitioner.dht.TokenFactory$.forSystemLocalPartitioner(TokenFactory.scala:98)
INFO - 	at org.apache.spark.sql.cassandra.CassandraSourceRelation$.apply(CassandraSourceRelation.scala:276)
INFO - 	at org.apache.spark.sql.cassandra.DefaultSource.createRelation(DefaultSource.scala:83)
INFO - 	at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
INFO - 	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
INFO - 	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
INFO - 	at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
INFO - 	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
INFO - 	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
INFO - 	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
INFO - 	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
INFO - 	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
INFO - 	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
INFO - 	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
INFO - 	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
INFO - 	at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:668)
INFO - 	at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:668)
INFO - 	at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
INFO - 	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
INFO - 	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
INFO - 	at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:668)
INFO - 	at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:276)
INFO - 	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:270)
INFO - 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
INFO - 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
INFO - 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
INFO - 	at java.lang.reflect.Method.invoke(Method.java:498)
INFO - 	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
INFO - 	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
INFO - 	at py4j.Gateway.invoke(Gateway.java:282)
INFO - 	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
INFO - 	at py4j.commands.CallCommand.execute(CallCommand.java:79)
INFO - 	at py4j.GatewayConnection.run(GatewayConnection.java:238)
INFO - 	at java.lang.Thread.run(Thread.java:750)

How can it be fixed?

Answer 1

Score: 1

The most common causes for this SSL error:

SSLHandshakeException: error:100000f7:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER

are:

  1. SSL not enabled on the server(s), or
  2. SSL certificates/keystores/truststores not configured correctly on the client.
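
For the first cause: on a node you have shell access to, client-to-node encryption is controlled by the client_encryption_options block in cassandra.yaml, so checking that block directly (or with a small script) settles it. A sketch, assuming PyYAML is available and the common package-install path for the config file:

    import yaml  # PyYAML

    # Common default location; adjust to your deployment.
    with open("/etc/cassandra/cassandra.yaml") as f:
        conf = yaml.safe_load(f)

    opts = conf.get("client_encryption_options") or {}
    print("client-to-node encryption enabled:", opts.get("enabled", False))
    print("plaintext also accepted (optional):", opts.get("optional", False))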

If you have verified that client-to-node encryption is enabled on the cluster, then the next step is to validate that you've configured the correct SSL credentials in your app. Cheers!
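
On the client side, with the Spark Cassandra Connector that usually means pointing the connection at a truststore containing the CA that signed the cluster's certificates (plus a keystore if the cluster requires client authentication). A sketch with placeholder paths and passwords:

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("cassandra-ssl-write")
        .config("spark.cassandra.connection.host", "10.230.88.101")
        .config("spark.cassandra.connection.ssl.enabled", "true")
        # Truststore with the cluster's CA (placeholder path and password).
        .config("spark.cassandra.connection.ssl.trustStore.path", "/path/to/truststore.jks")
        .config("spark.cassandra.connection.ssl.trustStore.password", "changeit")
        # Only needed if the cluster requires client certificate authentication.
        # .config("spark.cassandra.connection.ssl.clientAuth.enabled", "true")
        # .config("spark.cassandra.connection.ssl.keyStore.path", "/path/to/keystore.jks")
        # .config("spark.cassandra.connection.ssl.keyStore.password", "changeit")
        .getOrCreate()
    )

Note that the truststore/keystore files have to be readable wherever the connector opens connections, typically on both the driver and the executors.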
