PicklingError: Could not serialize object: IndexError: tuple index out of range
Question
I started pyspark in cmd and ran the following to sharpen my skills.
C:\Users\Administrator>SUCCESS: The process with PID 5328 (child process of PID 4476) has been terminated.
SUCCESS: The process with PID 4476 (child process of PID 1092) has been terminated.
SUCCESS: The process with PID 1092 (child process of PID 3952) has been terminated.
pyspark
Python 3.11.1 (tags/v3.11.1:a7a450f, Dec 6 2022, 19:58:39) [MSC v.1934 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
23/01/08 20:07:53 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /__ / .__/\_,_/_/ /_/\_\   version 3.3.1
      /_/
Using Python version 3.11.1 (tags/v3.11.1:a7a450f, Dec 6 2022 19:58:39)
Spark context Web UI available at http://Mohit:4040
Spark context available as 'sc' (master = local[*], app id = local-1673188677388).
SparkSession available as 'spark'.
>>> 23/01/08 20:08:10 WARN ProcfsMetricsGetter: Exception when trying to compute pagesize, as a result reporting of ProcessTree metrics is stopped
a = sc.parallelize([1,2,3,4,5,6,7,8,9,10])
When I execute a.take(1), I get a "_pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range" error, and I am unable to find out why. When the same code is run on Google Colab, it doesn't throw any error. Below is what I get in the console.
>>> a.take(1)
Traceback (most recent call last):
File "C:\Spark\python\pyspark\serializers.py", line 458, in dumps
return cloudpickle.dumps(obj, pickle_protocol)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\cloudpickle\cloudpickle_fast.py", line 73, in dumps
cp.dump(obj)
File "C:\Spark\python\pyspark\cloudpickle\cloudpickle_fast.py", line 602, in dump
return Pickler.dump(self, obj)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\cloudpickle\cloudpickle_fast.py", line 692, in reducer_override
return self._function_reduce(obj)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\cloudpickle\cloudpickle_fast.py", line 565, in _function_reduce
return self._dynamic_function_reduce(obj)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\cloudpickle\cloudpickle_fast.py", line 546, in _dynamic_function_reduce
state = _function_getstate(func)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\cloudpickle\cloudpickle_fast.py", line 157, in _function_getstate
f_globals_ref = _extract_code_globals(func.__code__)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\cloudpickle\cloudpickle.py", line 334, in _extract_code_globals
out_names = {names[oparg]: None for _, oparg in _walk_global_ops(co)}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\cloudpickle\cloudpickle.py", line 334, in <dictcomp>
out_names = {names[oparg]: None for _, oparg in _walk_global_ops(co)}
~~~~~^^^^^^^
IndexError: tuple index out of range
Traceback (most recent call last):
File "C:\Spark\python\pyspark\serializers.py", line 458, in dumps
return cloudpickle.dumps(obj, pickle_protocol)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\cloudpickle\cloudpickle_fast.py", line 73, in dumps
cp.dump(obj)
File "C:\Spark\python\pyspark\cloudpickle\cloudpickle_fast.py", line 602, in dump
return Pickler.dump(self, obj)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\cloudpickle\cloudpickle_fast.py", line 692, in reducer_override
return self._function_reduce(obj)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\cloudpickle\cloudpickle_fast.py", line 565, in _function_reduce
return self._dynamic_function_reduce(obj)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\cloudpickle\cloudpickle_fast.py", line 546, in _dynamic_function_reduce
state = _function_getstate(func)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\cloudpickle\cloudpickle_fast.py", line 157, in _function_getstate
f_globals_ref = _extract_code_globals(func.__code__)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\cloudpickle\cloudpickle.py", line 334, in _extract_code_globals
out_names = {names[oparg]: None for _, oparg in _walk_global_ops(co)}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\cloudpickle\cloudpickle.py", line 334, in <dictcomp>
out_names = {names[oparg]: None for _, oparg in _walk_global_ops(co)}
~~~~~^^^^^^^
IndexError: tuple index out of range
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Spark\python\pyspark\rdd.py", line 1883, in take
res = self.context.runJob(self, takeUpToNumLeft, p)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\context.py", line 1486, in runJob
sock_info = self._jvm.PythonRDD.runJob(self._jsc.sc(), mappedRDD._jrdd, partitions)
^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\rdd.py", line 3505, in _jrdd
wrapped_func = _wrap_function(
^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\rdd.py", line 3362, in _wrap_function
pickled_command, broadcast_vars, env, includes = _prepare_for_python_RDD(sc, command)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\rdd.py", line 3345, in _prepare_for_python_RDD
pickled_command = ser.dumps(command)
^^^^^^^^^^^^^^^^^^
File "C:\Spark\python\pyspark\serializers.py", line 468, in dumps
raise pickle.PicklingError(msg)
_pickle.PicklingError: Could not serialize object: IndexError: tuple index out of range
It should return [1] but instead throws this error. Is this because of an incorrect installation?
Packages used: spark-3.3.1-bin-hadoop3.tgz, Java(TM) SE Runtime Environment (build 1.8.0_351-b10), Python 3.11.1
Can anyone help in troubleshooting this? Many thanks in advance.
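For reference, here is the same call sequence as a standalone script, in case it helps reproduce the issue outside the shell (a minimal sketch; the master and app name are assumptions, and it assumes pyspark 3.3.1 is importable from this Python 3.11 install):

from pyspark.sql import SparkSession

# Build a local session the same way the pyspark shell does.
spark = SparkSession.builder.master("local[*]").appName("take-repro").getOrCreate()
sc = spark.sparkContext

a = sc.parallelize([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
print(a.take(1))   # expected [1]; under Python 3.11 with Spark 3.3.1 it raises the PicklingError shown above
spark.stop()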
Answer 1
Score: 11
According to https://github.com/apache/spark/pull/38987, you will need Spark 3.4.0 to use Python 3.11; at the time of writing it has not yet been released at https://spark.apache.org/downloads.html. Python 3.10 should work.
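Note that the failing call in the traceback is cloudpickle running on the driver, so the driver interpreter needs to be 3.10 or lower, not just the workers. If a separate Python 3.10 install is available, one option is to run your script with that interpreter and point the workers at it via PYSPARK_PYTHON — a rough sketch, where the interpreter path is an assumption and must be adjusted to your machine:

import os
from pyspark.sql import SparkSession

# Run this script itself with a Python 3.10 interpreter; the path below is a placeholder for an actual 3.10 install.
os.environ["PYSPARK_PYTHON"] = r"C:\Python310\python.exe"         # interpreter used by the worker processes
os.environ["PYSPARK_DRIVER_PYTHON"] = r"C:\Python310\python.exe"  # only honored when launching via the pyspark script

spark = SparkSession.builder.master("local[*]").appName("py310-check").getOrCreate()
sc = spark.sparkContext
print(sc.parallelize([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]).take(1))    # [1] once driver and workers both run Python <= 3.10
spark.stop()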
Answer 2
Score: 9
As of 3/2/23, I had the exact same problem, and as indicated above, I uninstalled Python 3.11, installed version 3.10.9, and that solved it!
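A quick check after the reinstall, typed straight into the pyspark shell (this assumes the sc SparkContext the shell creates), confirms both the interpreter version and the originally failing call:

import sys
print(sys.version)   # should now report 3.10.x
a = sc.parallelize([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
print(a.take(1))     # [1]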