Unexpected failure when preparing tensor allocations: tensorflow/lite/kernels/reshape.cc:85 num_input_elements != num_output_elements (1200 != 0)

Question
I have a TFLite model whose input signature is 'shape_signature': array([-1, 12000, 1]). I have tested it with random data of shape [1, 1200, 1] and the model runs without any error; the prediction shape is also (1, 1200, 1).

https://colab.research.google.com/gist/pjpratik/bd48804cc8d40239812079b5a249aac3/60367.ipynb#scrollTo=9qc4EpLTUw0v

Now I want to do this on Android. I tried the following, but I'm getting this error:
```kotlin
private fun applyModel() {
    val inputFloatArray = Array(1) { Array(inputAudioData.size) { FloatArray(1) } } // shape 1 x 1200 x 1
    // Only one output declaration can be active at a time; these were my attempts:
    // val outputFloatArray = inputFloatArray  // Attempt 1
    // val outputFloatArray = FloatArray(1200) // Attempt 2
    // val outputFloatArray = FloatArray(1)    // Attempt 3
    val outputFloatArray = Array(1) { Array(inputAudioData.size) { FloatArray(1) } } // Attempt 4
    Log.d("tflite", "Model input data: ${inputFloatArray.contentDeepToString()}")
    tflite!!.run(inputFloatArray, outputFloatArray)
    Log.d("tflite", "Model output data: ${outputFloatArray.contentDeepToString()}")
}
```
```
java.lang.IllegalStateException: Internal error: Unexpected failure when preparing tensor allocations: tensorflow/lite/kernels/reshape.cc:85 num_input_elements != num_output_elements (1200 != 0)
Node number 6 (RESHAPE) failed to prepare.
    at org.tensorflow.lite.NativeInterpreterWrapper.allocateTensors(Native Method)
    at org.tensorflow.lite.NativeInterpreterWrapper.allocateTensorsIfNeeded(NativeInterpreterWrapper.java:308)
    at org.tensorflow.lite.NativeInterpreterWrapper.run(NativeInterpreterWrapper.java:248)
    at org.tensorflow.lite.InterpreterImpl.runForMultipleInputsOutputs(InterpreterImpl.java:101)
    at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:77)
    at org.tensorflow.lite.InterpreterImpl.run(InterpreterImpl.java:94)
```
Answer 1
Score: 2
This is mostly a debugging exercise: you need to check what kind of input your model expects and what type of data it produces. You can try chunking your data and running the model on one chunk at a time; below is how I applied this to dtln_aec_128_1.tflite. You can follow this implementation and adapt it to your dataset:

https://github.com/breizhn/DTLN-aec/tree/main/pretrained_models
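As a minimal sketch of the chunking idea, the raw audio can be split into fixed-size pieces before each inference call. The helper name `chunkAudio` and the zero-padding of the final chunk are my assumptions for illustration, not part of the original answer:

```kotlin
// Split an audio buffer into fixed-size chunks, zero-padding the last one
// so every chunk matches the tensor size the model expects.
// NOTE: chunkAudio is a hypothetical helper, not from the original answer.
fun chunkAudio(data: ShortArray, chunkSize: Int): List<ShortArray> {
    require(chunkSize > 0) { "chunkSize must be positive" }
    val chunks = mutableListOf<ShortArray>()
    var offset = 0
    while (offset < data.size) {
        val chunk = ShortArray(chunkSize) // zero-initialized, so the tail is implicitly padded
        val count = minOf(chunkSize, data.size - offset)
        for (i in 0 until count) chunk[i] = data[offset + i]
        chunks.add(chunk)
        offset += chunkSize
    }
    return chunks
}
```

Each returned chunk can then be fed through the inference function below one at a time.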
```kotlin
private fun applyModel(data: ShortArray): ShortArray {
    val chunkSize = 257 // depends on your model; could be 1200 in your case
    val outputData = ShortArray(chunkSize)
    val inputArray = Array(1) { Array(1) { FloatArray(chunkSize) } }
    for (i in 0 until chunkSize)
        inputArray[0][0][i] = data[i].toFloat()
    val outputArray = Array(1) { Array(1) { FloatArray(chunkSize) } }
    tflite!!.run(inputArray, outputArray)
    Log.d("tflite output", "Model direct output ${outputArray[0][0].joinToString(" ")}")
    for (i in 0 until chunkSize)
        outputData[i] = outputArray[0][0][i].toInt().toShort()
    return outputData
}
```
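Note that the final `toInt().toShort()` conversion can wrap around if the model ever emits values outside the 16-bit PCM range. A clamped variant avoids that; the helper `floatsToPcm` is my own illustration, not part of the original answer:

```kotlin
// Convert model output floats back to 16-bit PCM samples, clamping to the
// valid Short range instead of letting toInt().toShort() wrap around.
// NOTE: floatsToPcm is a hypothetical helper, not from the original answer.
fun floatsToPcm(samples: FloatArray): ShortArray {
    return ShortArray(samples.size) { i ->
        samples[i]
            .coerceIn(Short.MIN_VALUE.toFloat(), Short.MAX_VALUE.toFloat())
            .toInt()
            .toShort()
    }
}
```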
The model can be loaded like this:
```kotlin
@Throws(IOException::class)
private fun loadModelFile(activity: Activity): MappedByteBuffer? {
    val fileDescriptor: AssetFileDescriptor = activity.assets.openFd("dtln_aec_128_1.tflite")
    val inputStream = FileInputStream(fileDescriptor.fileDescriptor)
    val fileChannel = inputStream.channel
    val startOffset = fileDescriptor.startOffset
    val declaredLength = fileDescriptor.declaredLength
    return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength)
}
```
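For testing outside Android, the same memory-mapping pattern works against an ordinary file path via `RandomAccessFile` instead of an `AssetFileDescriptor`. This is a minimal sketch; the function name `mapModelFile` is my assumption:

```kotlin
import java.io.RandomAccessFile
import java.nio.MappedByteBuffer
import java.nio.channels.FileChannel

// Memory-map a model file read-only: the same pattern loadModelFile uses
// with an asset, but against a plain file path (e.g. on the JVM in tests).
// NOTE: mapModelFile is a hypothetical helper, not from the original answer.
fun mapModelFile(path: String): MappedByteBuffer =
    RandomAccessFile(path, "r").use { raf ->
        raf.channel.map(FileChannel.MapMode.READ_ONLY, 0, raf.length())
    }
```

The mapping remains valid after the channel is closed, so the `use` block is safe here.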
In app/build.gradle, add:

```
implementation 'org.tensorflow:tensorflow-lite:2.12.0'
```

and just below buildTypes {}, add:

```
aaptOptions {
    noCompress "dtln_aec_128_1.tflite"
}
```
Then copy the model file into the assets folder.