Unexpected failure when preparing tensor allocations: tensorflow/lite/kernels/reshape.cc:85 num_input_elements != num_output_elements (1200 != 0)

Question

I have a tflite model and the input signature of the TFLite model is 'shape_signature': array([-1, 12000, 1]). I have tested it with random data of shape [1, 1200, 1] and the model runs without any error.

The prediction shape is also (1, 1200, 1).

https://colab.research.google.com/gist/pjpratik/bd48804cc8d40239812079b5a249aac3/60367.ipynb#scrollTo=9qc4EpLTUw0v

Now I want to do the same in Android. Here is what I tried:


    private fun applyModel() {
        val inputFloatArray = Array(1) { Array(inputAudioData.size) { FloatArray(1) } } // shape (1, 1200, 1)

        // Output buffers I tried, one at a time:
        // val outputFloatArray = inputFloatArray                                          // Attempt 1
        // val outputFloatArray = FloatArray(1200)                                         // Attempt 2
        // val outputFloatArray = FloatArray(1)                                            // Attempt 3
        val outputFloatArray = Array(1) { Array(inputAudioData.size) { FloatArray(1) } }   // Attempt 4

        Log.d("tflite", "Model input data: ${inputFloatArray.toString()}")
        tflite!!.run(inputFloatArray, outputFloatArray)
        Log.d("tflite", "Model output data: ${outputFloatArray.toString()}")
    }

But I'm getting this error:

    java.lang.IllegalStateException: Internal error: Unexpected failure when preparing tensor allocations: tensorflow/lite/kernels/reshape.cc:85 num_input_elements != num_output_elements (1200 != 0)
    Node number 6 (RESHAPE) failed to prepare.
        at org.tensorflow.lite.NativeInterpreterWrapper.allocateTensors(Native Method)
        at org.tensorflow.lite.NativeInterpreterWrapper.allocateTensorsIfNeeded(NativeInterpreterWrapper.java:308)
        at org.tensorflow.lite.NativeInterpreterWrapper.run(NativeInterpreterWrapper.java:248)
        at org.tensorflow.lite.InterpreterImpl.runForMultipleInputsOutputs(InterpreterImpl.java:101)
        at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:77)
        at org.tensorflow.lite.InterpreterImpl.run(InterpreterImpl.java:94)

Answer 1

Score: 2


This is mostly a debugging task: you need to check what kind of input your model expects and what type of data it produces.
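
As a first debugging step, you can query the interpreter for the shapes it actually expects. This is a minimal sketch, assuming tflite is your already-created org.tensorflow.lite.Interpreter and that input tensor 0 is the one with the dynamic batch dimension; the (1, 1200, 1) shape is only an example:

    // Sketch: inspect the model's input/output shapes and pin the dynamic dimension.
    val inputTensor = tflite!!.getInputTensor(0)
    Log.d("tflite", "Expected input shape: ${inputTensor.shape().contentToString()}, type: ${inputTensor.dataType()}")

    // If the first dimension is dynamic (-1), resize it to the batch you actually feed.
    tflite!!.resizeInput(0, intArrayOf(1, 1200, 1)) // assumption: adjust to your real input shape
    tflite!!.allocateTensors()

    val outputTensor = tflite!!.getOutputTensor(0)
    Log.d("tflite", "Output shape after allocation: ${outputTensor.shape().contentToString()}")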

You can try chunking your data and working on one part of it at a time; the code below shows how I applied this to dtln_aec_128_1.tflite.

You can follow the implementation below and try it on your dataset:

https://github.com/breizhn/DTLN-aec/tree/main/pretrained_models

    private fun applyModel(data: ShortArray): ShortArray {
        val chunkSize = 257 // depends on your model; for your test it could be 1200
        val outputData = ShortArray(chunkSize)

        // Copy the 16-bit samples into a (1, 1, chunkSize) float input tensor.
        val inputArray = Array(1) { Array(1) { FloatArray(chunkSize) } }
        for (i in 0 until chunkSize)
            inputArray[0][0][i] = data[i].toFloat()

        // Run inference; the output tensor has the same (1, 1, chunkSize) shape.
        val outputArray = Array(1) { Array(1) { FloatArray(chunkSize) } }
        tflite!!.run(inputArray, outputArray)
        Log.d("tflite output", "Model direct output ${outputArray[0][0].joinToString(" ")}")

        // Convert the float output back to 16-bit samples.
        val outBuffer = FloatArray(chunkSize)
        for (i in 0 until chunkSize)
            outBuffer[i] = outputArray[0][0][i]
        for (i in 0 until chunkSize)
            outputData[i] = outBuffer[i].toInt().toShort()
        return outputData
    }
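
For a longer recording, a minimal sketch of a driver that feeds applyModel() chunk by chunk could look like the following. The processRecording name, the audioSamples parameter, and the decision to drop a trailing partial chunk are my assumptions, not part of the original answer:

    // Sketch: run the model over a long ShortArray in fixed-size chunks.
    // Trailing samples that do not fill a whole chunk are simply dropped here.
    private fun processRecording(audioSamples: ShortArray): ShortArray {
        val chunkSize = 257 // must match chunkSize inside applyModel()
        val fullChunks = audioSamples.size / chunkSize
        val result = ShortArray(fullChunks * chunkSize)
        for (chunk in 0 until fullChunks) {
            val slice = audioSamples.copyOfRange(chunk * chunkSize, (chunk + 1) * chunkSize)
            val processed = applyModel(slice)
            processed.copyInto(result, destinationOffset = chunk * chunkSize)
        }
        return result
    }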

For loading the model, you can do this:

    @Throws(IOException::class)
    private fun loadModelFile(activity: Activity): MappedByteBuffer? {
        val fileDescriptor: AssetFileDescriptor = activity.assets.openFd("dtln_aec_128_1.tflite")
        val inputStream = FileInputStream(fileDescriptor.fileDescriptor)
        val fileChannel = inputStream.channel
        val startOffset = fileDescriptor.startOffset
        val declaredLength = fileDescriptor.declaredLength
        return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength)
    }
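
To wire this up, one way (a minimal sketch; the initInterpreter name, the call site, and the thread count are assumptions, not part of the original answer) is to create the Interpreter once from the mapped buffer:

    // Assumes: import org.tensorflow.lite.Interpreter
    private var tflite: Interpreter? = null

    private fun initInterpreter(activity: Activity) {
        // Build the interpreter from the memory-mapped model file.
        val options = Interpreter.Options().apply {
            setNumThreads(2) // assumption: tune the thread count for your device
        }
        tflite = Interpreter(loadModelFile(activity)!!, options)
    }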

In app/build.gradle, add:

    implementation 'org.tensorflow:tensorflow-lite:2.12.0'

Just below buildTypes {}, add:

    aaptOptions {
        noCompress "dtln_aec_128_1.tflite"
    }

and copy the model file into the assets folder.
