Android Kotlin: Exception running python script with Chaquopy

Question

It looks like you're encountering an issue related to the use of the multiprocessing module in your Python script when running it through Chaquopy in an Android app. The error messages you're seeing indicate that the multiprocessing.dummy module does not have the cpu_count attribute. This is likely because Chaquopy does not fully support the multiprocessing module, especially when it comes to certain functions like cpu_count.

To resolve this issue, you have a few options:

  1. Remove or Refactor the Multiprocessing Code: If your script's functionality doesn't critically depend on multiprocessing, you could refactor it to use normal threading instead of multiprocessing. This would involve replacing multiprocessing.Pool with threading.Thread or concurrent.futures.ThreadPoolExecutor if possible.

  2. Use a Different Library: If multiprocessing is essential for your script and cannot be easily refactored, you might consider using a different library or approach that is more compatible with Android. Chaquopy may have limitations when it comes to multiprocessing due to the Android environment.

  3. Contact Chaquopy Support: You could reach out to Chaquopy's support or community forums to inquire about possible workarounds or solutions specific to running multiprocessing in an Android environment using Chaquopy.

In any case, it's essential to consider that Android has limitations and restrictions on how multi-threading and multiprocessing can be used due to its sandboxed nature and resource constraints. Depending on the specific use case and requirements, you may need to adjust your approach to work within these limitations.
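For the first option, a minimal sketch of what the refactor could look like. The `fingerprint` function here is a hypothetical stand-in for AudioCompare's real `_file_fingerprint`; threads avoid the POSIX-semaphore machinery that process pools need.

```python
# Minimal sketch: replacing a multiprocessing.Pool with a thread pool.
# `fingerprint` is a hypothetical placeholder for AudioCompare's
# _file_fingerprint function.
from concurrent.futures import ThreadPoolExecutor


def fingerprint(path):
    # Placeholder work; the real function would read and FFT the file.
    return (path, len(path))


files = ["file1.wav", "file2.wav"]
with ThreadPoolExecutor(max_workers=2) as executor:
    # map preserves input order, like multiprocessing.Pool.map
    results = list(executor.map(fingerprint, files))
```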

Original question (English):

As I haven't found an Android library to compare two .wav audio files (I only found musicg, which is not working for me), I decided to try one of the many I've found for Python; specifically, AudioCompare.

For that I've followed the Chaquopy setup instructions and was able to install v14 with no problems, and I can now run Python scripts from my Android app. The problem is that the audio-compare library I'm trying to run throws this exception:

```
com.chaquo.python.PyException: OSError: This platform lacks a functioning sem_open implementation, therefore, the required synchronization primitives needed will not function, see issue 3770.
```

I don't know much Python, but I'm quite sure the exception is raised in the Matcher.py module (I don't know how to check the line number, as the exception doesn't give me that information), but I'll paste all the files just in case:

main.py:

```python
#!/usr/bin/env python
from error import *
from Matcher import Matcher
from argparse import ArgumentParser


def audio_matcher():
    """Our main control flow."""
    parser = ArgumentParser(
        description="Compare two audio files to determine if one "
                    "was derived from the other. Supports WAVE and MP3.",
        prog="audiomatch")
    parser.add_argument("-f", action="append",
                        required=False, dest="files",
                        default=list(),
                        help="A file to examine.")
    parser.add_argument("-d", action="append",
                        required=False, dest="dirs",
                        default=list(),
                        help="A directory of files to examine. "
                             "Directory must contain only audio files.")
    args = parser.parse_args()

    from os.path import dirname, join
    filename1 = join(dirname(__file__), "file1.wav")
    filename2 = join(dirname(__file__), "file2.wav")
    search_paths = [filename1, filename2]
    # search_paths = args.dirs + args.files
    if len(search_paths) != 2:
        die("Must provide exactly two input files or directories.")

    code = 0
    # Use our matching system
    matcher = Matcher(search_paths[0], search_paths[1])
    results = matcher.match()
    for match in results:
        if not match.success:
            code = 1
            warn(match.message)
        else:
            print(match)
    return code


if __name__ == "__main__":
    exit(audio_matcher())
```

Matcher.py (from https://github.com/charlesconnell/AudioCompare):

```python
import math
import itertools
from FFT import FFT
import numpy as np
from collections import defaultdict
from InputFile import InputFile
import multiprocessing
# from multiprocessing.dummy import Pool as ThreadPool
import os
import stat
from error import *
from common import *

BUCKET_SIZE = 20
BUCKETS = 4
BITS_PER_NUMBER = int(math.ceil(math.log(BUCKET_SIZE, 2)))
assert (BITS_PER_NUMBER * BUCKETS) <= 32

NORMAL_CHUNK_SIZE = 1024
NORMAL_SAMPLE_RATE = 44100.0
SCORE_THRESHOLD = 5


class FileResult(BaseResult):
    """The result of fingerprinting
    an entire audio file."""

    def __init__(self, fingerprints, file_len, filename):
        super(FileResult, self).__init__(True, "")
        self.fingerprints = fingerprints
        self.file_len = file_len
        self.filename = filename

    def __str__(self):
        return self.filename


class ChunkInfo(object):
    """These objects will be the values in
    our master hashes that map fingerprints
    to instances of this class."""

    def __init__(self, chunk_index, filename):
        self.chunk_index = chunk_index
        self.filename = filename

    def __str__(self):
        return "Chunk: {c}, File: {f}".format(c=self.chunk_index, f=self.filename)


class MatchResult(BaseResult):
    """The result of comparing two files."""

    def __init__(self, file1, file2, file1_len, file2_len, score):
        super(MatchResult, self).__init__(True, "")
        self.file1 = file1
        self.file2 = file2
        self.file1_len = file1_len
        self.file2_len = file2_len
        self.score = score

    def __str__(self):
        short_file1 = os.path.basename(self.file1)
        short_file2 = os.path.basename(self.file2)
        if self.score > SCORE_THRESHOLD:
            if self.file1_len < self.file2_len:
                return "MATCH {f1} {f2} ({s})".format(f1=short_file1, f2=short_file2, s=self.score)
            else:
                return "MATCH {f2} {f1} ({s})".format(f1=short_file1, f2=short_file2, s=self.score)
        else:
            return "NO MATCH"


def _to_fingerprints(freq_chunks):
    """Examine the results of running chunks of audio
    samples through FFT. For each chunk, look at the frequencies
    that are loudest in each "bucket." A bucket is a series of
    frequencies. Return the indices of the loudest frequency in each
    bucket in each chunk. These indices will be encoded into
    a single number per chunk."""
    chunks = len(freq_chunks)
    fingerprints = np.zeros(chunks, dtype=np.uint32)
    # Examine each chunk independently
    for chunk in range(chunks):
        fingerprint = 0
        for bucket in range(BUCKETS):
            start_index = bucket * BUCKET_SIZE
            end_index = (bucket + 1) * BUCKET_SIZE
            bucket_vals = freq_chunks[chunk][start_index:end_index]
            max_index = bucket_vals.argmax()
            fingerprint += (max_index << (bucket * BITS_PER_NUMBER))
        fingerprints[chunk] = fingerprint
    # return the indexes of the loudest frequencies
    return fingerprints


def _file_fingerprint(filename):
    """Read the samples from the files, run them through FFT,
    find the loudest frequencies to use as fingerprints,
    turn those into a hash table.
    Returns a 2-tuple containing the length
    of the file in seconds, and the hash table."""
    # Open the file
    try:
        file = InputFile(filename)

        # Read samples from the input files, divide them
        # into chunks by time, and convert the samples in each
        # chunk into the frequency domain.
        # The chunk size is dependent on the sample rate of the
        # file. It is important that each chunk represent the
        # same amount of time, regardless of the sample
        # rate of the file.
        chunk_size_adjust_factor = (NORMAL_SAMPLE_RATE / file.get_sample_rate())
        fft = FFT(file, int(NORMAL_CHUNK_SIZE / chunk_size_adjust_factor))
        series = fft.series()
        file_len = file.get_total_samples() / file.get_sample_rate()
        file.close()

        # Find the indices of the loudest frequencies
        # in each "bucket" of frequencies (for every chunk).
        # These loud frequencies will become the
        # fingerprints that we'll use for matching.
        # Each chunk will be reduced to a tuple of
        # 4 numbers which are 4 of the loudest frequencies
        # in that chunk.
        # Convert each tuple in winners to a single
        # number. This number is unique for each possible
        # tuple. This hopefully makes things more
        # efficient.
        fingerprints = _to_fingerprints(series)
    except Exception as e:
        return FileErrorResult(str(e))
    return FileResult(fingerprints, file_len, filename)


class Matcher(object):
    """Create an instance of this class to use our matching system."""

    def __init__(self, dir1, dir2):
        """The two arguments should be strings that are
        file or directory paths. For files, we will simply
        examine these files. For directories, we will scan
        them for files."""
        self.dir1 = dir1
        self.dir2 = dir2

    @staticmethod
    def __search_dir(dir):
        """Returns the regular files residing
        in the given directory, OR if the input
        is a regular file, return a 1-element
        list containing this file. All paths
        returned will be absolute paths."""
        results = []
        # Get the absolute path of our search dir
        abs_dir = os.path.abspath(dir)
        # Get info about the directory provided
        dir_stat = os.stat(abs_dir)
        # If it's really a file, just
        # return the name of it
        if stat.S_ISREG(dir_stat.st_mode):
            results.append(abs_dir)
            return results
        # If it's neither a file nor directory,
        # bail out
        if not stat.S_ISDIR(dir_stat.st_mode):
            die("{d} is not a directory or a regular file.".format(d=abs_dir))
        # Scan through the contents of the
        # directory (non-recursively).
        contents = os.listdir(abs_dir)
        for node in contents:
            abs_node = abs_dir + os.sep + node
            node_stat = os.stat(abs_node)
            # If we find a regular file, add
            # that to our results list, otherwise
            # warn the user.
            if stat.S_ISREG(node_stat.st_mode):
                results.append(abs_node)
            else:
                warn("An inode that is not a regular file was found at {f}".format(f=abs_node))
        return results

    @staticmethod
    def __combine_hashes(files):
        """Take a list of FileResult objects and
        create a hash that maps all of their fingerprints
        to ChunkInfo objects."""
        master = defaultdict(list)
        for f in files:
            for chunk in range(len(f.fingerprints)):
                hash = f.fingerprints[chunk]
                master[hash].append(ChunkInfo(chunk, f.filename))
        return master

    @staticmethod
    def __file_lengths(files):
        """Take a list of FileResult objects and
        create a hash that maps their filenames
        to the length of each file, in seconds."""
        results = {}
        for f in files:
            results[f.filename] = f.file_len
        return results

    @staticmethod
    def __report_file_matches(file, master_hash, file_lengths):
        """Find files from the master hash that match
        the given file.
        @param file A FileResult object that is our query
        @param master_hash The data to search through
        @param file_lengths A hash mapping filenames to file lengths
        @return A list of MatchResult objects, one for every file
        that was represented in master_hash"""
        results = []
        # A hash that maps filenames to "offset" hashes. Then,
        # an offset hash maps the difference in chunk numbers of
        # the matches we will find.
        # We'll map those differences to the number of matches
        # found with that difference.
        # This allows us to see if many fingerprints
        # from different files occurred at the same
        # time offsets relative to each other.
        file_match_offsets = {}
        for f in file_lengths:
            file_match_offsets[f] = defaultdict(lambda: 0)
        # For each chunk in the query file
        for query_chunk_index in range(len(file.fingerprints)):
            # See if that chunk's fingerprint is in our master hash
            chunk_fingerprint = file.fingerprints[query_chunk_index]
            if chunk_fingerprint in master_hash:
                # If it is, record the offset between our query chunk
                # and the found chunk
                for matching_chunk in master_hash[chunk_fingerprint]:
                    offset = matching_chunk.chunk_index - query_chunk_index
                    file_match_offsets[matching_chunk.filename][offset] += 1
        # For each file that was in master_hash,
        # we examine the offsets of the matching fingerprints we found
        for f in file_match_offsets:
            offsets = file_match_offsets[f]
            # The length of the shorter file is important
            # to deciding whether two audio files match.
            min_len = min(file_lengths[f], file.file_len)
            # max_offset is the highest number of times that two matching
            # hash keys were found with the same time difference
            # relative to each other.
            if len(offsets) != 0:
                max_offset = max(offsets.values())
            else:
                max_offset = 0
            # The score is the ratio of max_offset (as explained above)
            # to the length of the shorter file. A short file that should
            # match another file will result in fewer matching fingerprints
            # than a long file would, so we take this into account. At the
            # same time, a long file that should *not* match another file
            # will generate a decent number of matching fingerprints by
            # pure chance, so this corrects for that as well.
            if min_len > 0:
                score = max_offset / min_len
            else:
                score = 0
            results.append(MatchResult(file.filename, f, file.file_len, file_lengths[f], score))
        return results

    def match(self):
        """Takes two AbstractInputFiles as input,
        and returns a boolean as output, indicating
        if the two files match."""
        dir1_files = Matcher.__search_dir(self.dir1)
        dir2_files = Matcher.__search_dir(self.dir2)

        # Try to determine how many
        # processors are in the computer
        # we're running on, to determine
        # the appropriate amount of parallelism
        # to use
        try:
            cpus = multiprocessing.cpu_count()
        except NotImplementedError:
            cpus = 1
        # Construct a process pool to give the task of
        # fingerprinting audio files
        pool = multiprocessing.Pool(cpus)
        try:
            # Get the fingerprints from each input file.
            # Do this using a pool of processes in order
            # to parallelize the work neatly.
            map1_result = pool.map_async(_file_fingerprint, dir1_files)
            map2_result = pool.map_async(_file_fingerprint, dir2_files)
            # Wait for pool to finish processing
            pool.close()
            pool.join()
            # Get results from process pool
            dir1_results = map1_result.get()
            dir2_results = map2_result.get()
        except KeyboardInterrupt:
            pool.terminate()
            raise

        results = []
        # If there was an error in fingerprinting a file,
        # add a special ErrorResult to our results list
        results.extend([x for x in dir1_results if not x.success])
        results.extend([x for x in dir2_results if not x.success])
        # Proceed only with fingerprints that were computed
        # successfully
        dir1_successes = [x for x in dir1_results if x.success and x.file_len > 0]
        dir2_successes = [x for x in dir2_results if x.success and x.file_len > 0]
        # Empty files should match other empty files.
        # Our matching algorithm will not report these as a match,
        # so we have to make a special case for it.
        dir1_empty_files = [x for x in dir1_results if x.success and x.file_len == 0]
        dir2_empty_files = [x for x in dir2_results if x.success and x.file_len == 0]
        # Every empty file should match every other empty file
        for empty_file1, empty_file2 in itertools.product(dir1_empty_files, dir2_empty_files):
            results.append(MatchResult(empty_file1.filename, empty_file2.filename,
                                       empty_file1.file_len, empty_file2.file_len,
                                       SCORE_THRESHOLD + 1))
        # This maps filenames to the lengths of the files
        dir1_file_lengths = Matcher.__file_lengths(dir1_successes)
        dir2_file_lengths = Matcher.__file_lengths(dir2_successes)
        # Get the combined sizes of the files in our two search
        # paths
        dir1_size = sum(dir1_file_lengths.values())
        dir2_size = sum(dir2_file_lengths.values())
        # Whichever search path has more data in it is the
        # one we want to put in the master hash, and then query
        # via the other one
        if dir1_size < dir2_size:
            dir_successes = dir1_successes
            master_hash = Matcher.__combine_hashes(dir2_successes)
            file_lengths = dir2_file_lengths
        else:
            dir_successes = dir2_successes
            master_hash = Matcher.__combine_hashes(dir1_successes)
            file_lengths = dir1_file_lengths
        # Loop through each file in the first search path our
        # program was given.
        for file in dir_successes:
            # For each file, check its fingerprints against those in the
            # second search path. For matching
            # fingerprints, look up the times (chunk number)
            # that the fingerprint occurred
            # in each file. Store the time differences in
            # offsets. The point of this is to see if there
            # are many matching fingerprints at the
            # same time difference relative to each
            # other. This indicates that the two files
            # contain similar audio.
            file_matches = Matcher.__report_file_matches(file, master_hash, file_lengths)
            results.extend(file_matches)
        return results
```
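The heart of the fingerprinting above is the bucket-packing in `_to_fingerprints`. A self-contained pure-Python sketch of the same idea, using plain lists instead of numpy arrays and values chosen only for illustration:

```python
# Sketch of _to_fingerprints' packing scheme: each chunk of 80 FFT
# magnitudes is split into 4 buckets of 20, and the index of the
# loudest value in each bucket is packed into one integer,
# 5 bits per bucket.
BUCKET_SIZE = 20
BUCKETS = 4
BITS_PER_NUMBER = 5  # ceil(log2(BUCKET_SIZE))


def pack_fingerprint(chunk):
    """Pack the index of the loudest value in each bucket
    into a single integer, BITS_PER_NUMBER bits per bucket."""
    fingerprint = 0
    for bucket in range(BUCKETS):
        bucket_vals = chunk[bucket * BUCKET_SIZE:(bucket + 1) * BUCKET_SIZE]
        max_index = max(range(BUCKET_SIZE), key=lambda i: bucket_vals[i])
        fingerprint |= max_index << (bucket * BITS_PER_NUMBER)
    return fingerprint


# A chunk whose loudest frequencies sit at indices 3, 0, 19 and 7
# of its four buckets:
chunk = [0.0] * 80
chunk[3] = 1.0          # bucket 0 -> index 3
chunk[40 + 19] = 1.0    # bucket 2 -> index 19
chunk[60 + 7] = 1.0     # bucket 3 -> index 7
fp = pack_fingerprint(chunk)
# fp == 3 | (0 << 5) | (19 << 10) | (7 << 15) == 248835
```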

Maybe after all this effort it won't work, but at least I'd like to give it a try.

Any help in getting this matcher script to run will be much appreciated.

Full exception:

```
<python>.java.android.__init__(__init__.py:140)
<python>.multiprocessing.synchronize.__init__(synchronize.py:57)
<python>.multiprocessing.synchronize.__init__(synchronize.py:162)
<python>.multiprocessing.context.Lock(context.py:68)
<python>.multiprocessing.queues.__init__(queues.py:336)
<python>.multiprocessing.context.SimpleQueue(context.py:113)
<python>.multiprocessing.pool._setup_queues(pool.py:343)
<python>.multiprocessing.pool.__init__(pool.py:191)
<python>.multiprocessing.context.Pool(context.py:119)
<python>.Matcher.match(Matcher.py:306)
<python>.main.audio_matcher(main.py:38)
<python>.chaquopy_java.call(chaquopy_java.pyx:354)
<python>.chaquopy_java.Java_com_chaquo_python_PyObject_callAttrThrowsNative(chaquopy_java.pyx:326)
com.chaquo.python.PyObject.callAttrThrowsNative(Native Method)
com.chaquo.python.PyObject.callAttrThrows(PyObject.java:232)
com.chaquo.python.PyObject.callAttr(PyObject.java:221)
com.testmepracticetool.toeflsatactexamprep.ui.activities.main.MainActivity.startActivity(MainActivity.kt:104)
com.testmepracticetool.toeflsatactexamprep.ui.activities.main.MainActivity.onCreate(MainActivity.kt:80)
android.app.Activity.performCreate(Activity.java:7994)
android.app.Activity.performCreate(Activity.java:7978)
android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1309)
android.app.ActivityThread.performLaunchActivity(ActivityThread.java:3422)
android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:3601)
android.app.servertransaction.LaunchActivityItem.execute(LaunchActivityItem.java:85)
android.app.servertransaction.TransactionExecutor.executeCallbacks(TransactionExecutor.java:135)
android.app.servertransaction.TransactionExecutor.execute(TransactionExecutor.java:95)
android.app.ActivityThread$H.handleMessage(ActivityThread.java:2066)
android.os.Handler.dispatchMessage(Handler.java:106)
android.os.Looper.loop(Looper.java:223)
android.app.ActivityThread.main(ActivityThread.java:7656)
java.lang.reflect.Method.invoke(Native Method)
com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:592)
com.android.internal.os.ZygoteInit.main(ZygoteInit.java:947)
```

Edit 1: New exception after @mhsmith's great help:

After replacing the multiprocessing import, I now get the following exception:

```
com.chaquo.python.PyException: AttributeError: module 'multiprocessing.dummy' has no attribute 'cpu_count'
<python>.Matcher.match(Matcher.py:300)
<python>.main.audio_matcher(main.py:38)
<python>.chaquopy_java.call(chaquopy_java.pyx:354)
<python>.chaquopy_java.Java_com_chaquo_python_PyObject_callAttrThrowsNative(chaquopy_java.pyx:326)
com.chaquo.python.PyObject.callAttrThrowsNative(Native Method)
com.chaquo.python.PyObject.callAttrThrows(PyObject.java:232)
com.chaquo.python.PyObject.callAttr(PyObject.java:221)
com.testmepracticetool.toeflsatactexamprep.ui.activities.main.MainActivity.startActivity(MainActivity.kt:104)
com.testmepracticetool.toeflsatactexamprep.ui.activities.main.MainActivity.onCreate(MainActivity.kt:80)
android.app.Activity.performCreate(Activity.java:7994)
android.app.Activity.performCreate(Activity.java:7978)
android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1309)
android.app.ActivityThread.performLaunchActivity(ActivityThread.java:3422)
android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:3601)
android.app.servertransaction.LaunchActivityItem.execute(LaunchActivityItem.java:85)
android.app.servertransaction.TransactionExecutor.executeCallbacks(TransactionExecutor.java:135)
android.app.servertransaction.TransactionExecutor.execute(TransactionExecutor.java:95)
android.app.ActivityThread$H.handleMessage(ActivityThread.java:2066)
android.os.Handler.dispatchMessage(Handler.java:106)
android.os.Looper.loop(Looper.java:223)
android.app.ActivityThread.main(ActivityThread.java:7656)
java.lang.reflect.Method.invoke(Native Method)
com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:592)
com.android.internal.os.ZygoteInit.main(ZygoteInit.java:947)
```

Answer 1 (score: 1)

As it says in the Chaquopy documentation:

> Because Android doesn’t support POSIX semaphores, most of the multiprocessing APIs will fail with the error “This platform lacks a functioning sem_open implementation”. The simplest solution is to use multiprocessing.dummy instead.

I see you've already attempted to do this, but the correct way is to replace
import multiprocessing with import multiprocessing.dummy as multiprocessing.
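Concretely, with that aliased import, `multiprocessing.Pool` resolves to a thread-backed pool that needs no sem_open, but the same aliasing also explains the second exception, because the dummy module does not re-export `cpu_count`. A quick sketch:

```python
# After the suggested aliasing, the name `multiprocessing` refers to
# the multiprocessing.dummy submodule.
import multiprocessing.dummy as multiprocessing

# Pool now resolves to multiprocessing.dummy.Pool, which is backed by
# threads, so constructing and using it needs no POSIX semaphores.
pool = multiprocessing.Pool(2)
squares = pool.map(lambda x: x * x, [1, 2, 3])
pool.close()
pool.join()

# But multiprocessing.dummy has no cpu_count attribute, which is what
# Matcher.match's call to multiprocessing.cpu_count() trips over.
has_cpu_count = hasattr(multiprocessing, "cpu_count")  # False
```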


Edit: for the second exception, the simplest solution is to rewrite the import statements as follows:

```python
from multiprocessing import cpu_count
from multiprocessing.dummy import Pool
```

And then remove the multiprocessing. prefix from the places where those names are used in the rest of the file.
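Putting the two imports together, a minimal sketch of the fixed pool usage; the built-in `abs` is just a trivial stand-in for AudioCompare's `_file_fingerprint`:

```python
# cpu_count comes from the real multiprocessing module (which works
# on Android), while Pool comes from multiprocessing.dummy and is
# backed by threads rather than processes.
from multiprocessing import cpu_count
from multiprocessing.dummy import Pool

try:
    cpus = cpu_count()
except NotImplementedError:
    cpus = 1

pool = Pool(cpus)
# abs stands in for AudioCompare's _file_fingerprint here
async_result = pool.map_async(abs, [-1, 2, -3])
pool.close()
pool.join()
values = async_result.get()
```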

huangapple
  • Published 2023-06-06 02:53:59
  • Please keep this link when reposting: https://go.coder-hub.com/76409244.html