Unable to run Spark jobs from JupyterHub

Question

I was trying to deploy Spark on Kubernetes, following this guide:
https://dev.to/akoshel/spark-on-k8s-in-jupyterhub-1da2
I first ran the Pi example using spark-submit from my local system; it worked beautifully, and I could see the pod in Completed status.

/opt/spark/bin/spark-submit \
  --master k8s://https://127.0.0.1:62013 \
  --deploy-mode cluster \
  --driver-memory 1g \
  --conf spark.kubernetes.memoryOverheadFactor=0.5 \
  --name sparkpi-test1 \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.kubernetes.container.image=spark:latest \
  --conf spark.kubernetes.driver.pod.name=spark-test1-pi \
  --conf spark.kubernetes.namespace=spark \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
  --verbose \
  local:///opt/spark/examples/jars/spark-examples_2.12-3.3.2.jar 1000

Then I deployed JupyterHub after configuring all the service accounts, roles, and role bindings. The problem is that whenever I try to run my Spark config from JupyterHub, it fails with an error, essentially that it cannot connect to the pod. I am using minikube to test the deployment. If anyone can help me with this, that would be great.
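
For context, the service-account side of that setup boils down to something like the sketch below. The exact role granted is an assumption (the guide and the actual manifests may differ); the namespace and account names match the configs in this post:

kubectl create namespace spark

# Service account the driver authenticates as; referenced by
# spark.kubernetes.authenticate.driver.serviceAccountName=spark
kubectl create serviceaccount spark -n spark

# Let the 'spark' service account create and manage executor pods in the
# 'spark' namespace. Binding the built-in 'edit' ClusterRole is one common
# choice here; a narrower custom Role would also work.
kubectl create rolebinding spark-role \
  --clusterrole=edit \
  --serviceaccount=spark:spark \
  -n spark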

from pyspark import SparkConf, SparkContext
conf = (SparkConf().setMaster("k8s://https://127.0.0.1:52750")  # Your master address name
        .set("spark.kubernetes.container.image", "pyspark:latest")  # Spark image name
        .set("spark.driver.port", "2222")  # Needs to match svc
        .set("spark.driver.blockManager.port", "7777")
        .set("spark.driver.host", "driver-service.jupyterhub.svc.cluster.local")  # Needs to match svc
        .set("spark.driver.bindAddress", "0.0.0.0")
        .set("spark.kubernetes.namespace", "spark")
        .set("spark.kubernetes.authenticate.driver.serviceAccountName", "spark")
        .set("spark.kubernetes.authenticate.serviceAccountName", "spark")
        .set("spark.executor.instances", "2")
        .set("spark.kubernetes.container.image.pullPolicy", "IfNotPresent")
        .set("spark.app.name", "tutorial_app"))

# Create a SparkContext
sc = SparkContext(conf=conf)
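
The driver-service that spark.driver.host and the two port settings have to match is the one visible in the cluster output further down. For reference, it corresponds to a manifest roughly like this (reconstructed from that kubectl output; the port names are only illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: driver-service
  namespace: jupyterhub
spec:
  selector:
    app: jupyterhub
    component: singleuser-server
  ports:
  - name: driver
    port: 2222
    targetPort: 2222
  - name: blockmanager
    port: 7777
    targetPort: 7777
  - name: ui
    port: 4040
    targetPort: 4040
EOF

This gives executors a stable DNS name for reaching the driver running inside the notebook pod, instead of a pod IP.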

The error I get is:

/home/jovyan/.local/lib/python3.9/site-packages/pyspark/bin/load-spark-env.sh: line 68: ps: command not found
Error: Caused by: java.net.ConnectException: Failed to connect to /127.0.0.1:52750


Output to get an idea of the cluster:

root@TIGER03720:~# k get all -n jupyterhub -o wide
NAME                                  READY   STATUS    RESTARTS      AGE    IP            NODE       NOMINATED NODE   READINESS GATES
pod/continuous-image-puller-qrjr8     1/1     Running   1 (42s ago)   104m   10.244.0.14   minikube   <none>           <none>
pod/hub-6d64d94c89-54vvp              0/1     Error     0             104m   <none>        minikube   <none>           <none>
pod/jupyter-admin                     0/1     Error     0             102m   10.244.0.18   minikube   <none>           <none>
pod/proxy-5c6db96f8-wg9jc             0/1     Running   1 (42s ago)   104m   10.244.0.16   minikube   <none>           <none>
pod/user-scheduler-86cfcff58c-v4m8q   0/1     Running   1 (42s ago)   104m   10.244.0.15   minikube   <none>           <none>
pod/user-scheduler-86cfcff58c-zdv9n   0/1     Running   1 (42s ago)   104m   10.244.0.20   minikube   <none>           <none>

NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE    SELECTOR
service/driver-service   ClusterIP      10.110.72.226    <none>        2222/TCP,7777/TCP,4040/TCP   109m   app=jupyterhub,component=singleuser-server
service/hub              ClusterIP      10.106.88.232    <none>        8081/TCP                     104m   app=jupyterhub,component=hub,release=jupyterhub
service/proxy-api        ClusterIP      10.105.217.11    <none>        8001/TCP                     104m   app=jupyterhub,component=proxy,release=jupyterhub
service/proxy-public     LoadBalancer   10.108.107.209   <pending>     80:31888/TCP                 104m   component=proxy,release=jupyterhub

NAME                                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE    CONTAINERS   IMAGES                 SELECTOR
daemonset.apps/continuous-image-puller   1         1         1       1            1           <none>          104m   pause        k8s.gcr.io/pause:3.8   app=jupyterhub,component=continuous-image-puller,release=jupyterhub

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS       IMAGES                                     SELECTOR
deployment.apps/hub              0/1     1            0           104m   hub              jupyterhub/k8s-hub:2.0.0                   app=jupyterhub,component=hub,release=jupyterhub
deployment.apps/proxy            0/1     1            0           104m   chp              jupyterhub/configurable-http-proxy:4.5.3   app=jupyterhub,component=proxy,release=jupyterhub
deployment.apps/user-scheduler   0/2     2            0           104m   kube-scheduler   k8s.gcr.io/kube-scheduler:v1.23.10         app=jupyterhub,component=user-scheduler,release=jupyterhub

NAME                                        DESIRED   CURRENT   READY   AGE    CONTAINERS       IMAGES                                     SELECTOR
replicaset.apps/hub-6d64d94c89              1         1         0       104m   hub              jupyterhub/k8s-hub:2.0.0                   app=jupyterhub,component=hub,pod-template-hash=6d64d94c89,release=jupyterhub
replicaset.apps/proxy-5c6db96f8             1         1         0       104m   chp              jupyterhub/configurable-http-proxy:4.5.3   app=jupyterhub,component=proxy,pod-template-hash=5c6db96f8,release=jupyterhub
replicaset.apps/user-scheduler-86cfcff58c   2         2         0       104m   kube-scheduler   k8s.gcr.io/kube-scheduler:v1.23.10         app=jupyterhub,component=user-scheduler,pod-template-hash=86cfcff58c,release=jupyterhub

root@TIGER03720:~#
root@TIGER03720:~# k get all -n spark -o wide
NAME                  READY   STATUS      RESTARTS   AGE    IP           NODE       NOMINATED NODE   READINESS GATES
pod/spark-test1-pi2   0/1     Completed   0          114m   10.244.0.3   minikube   <none>           <none>

Please ignore the Error status of the pods in the output above.

Answer 1

Score: 0

Changing the --master URL as below made it work successfully:

--master k8s://https://kubernetes.default.svc.cluster.local:443

From inside the notebook pod, 127.0.0.1 points at the pod itself rather than at minikube's host-side API tunnel, which is why the connection fails; kubernetes.default.svc.cluster.local:443 is the in-cluster DNS address of the Kubernetes API server and is reachable from any pod.
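
Applied to the notebook code from the question, only the setMaster line changes; everything else below is copied from the question's config (a sketch, not a verified end-to-end setup):

from pyspark import SparkConf, SparkContext

# Same configuration as in the question, but pointing the master at the
# in-cluster API server instead of the host-side 127.0.0.1 address.
conf = (SparkConf().setMaster("k8s://https://kubernetes.default.svc.cluster.local:443")
        .set("spark.kubernetes.container.image", "pyspark:latest")
        .set("spark.driver.port", "2222")
        .set("spark.driver.blockManager.port", "7777")
        .set("spark.driver.host", "driver-service.jupyterhub.svc.cluster.local")
        .set("spark.driver.bindAddress", "0.0.0.0")
        .set("spark.kubernetes.namespace", "spark")
        .set("spark.kubernetes.authenticate.driver.serviceAccountName", "spark")
        .set("spark.kubernetes.authenticate.serviceAccountName", "spark")
        .set("spark.executor.instances", "2")
        .set("spark.kubernetes.container.image.pullPolicy", "IfNotPresent")
        .set("spark.app.name", "tutorial_app"))

sc = SparkContext(conf=conf)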
