production - What is the best way to load a file for fast computation?

Question

I'm deploying a deep learning model and have saved the Keras model as an `.h5` file. I expect that a complex model produces a large file, and hence slow interaction on the server. Is there anything I can do other than reducing the number of layers in the model? Is there some way of compressing the `.h5` file so that the server can load it faster?

Thank you.

Answer 1

Score: 1

There is a way to do that.

What you are looking for is called quantization.

Unlike reducing the number of layers, which amounts to model pruning, quantization reduces both the size and the latency of the model by lowering the precision of its weights (and, in some cases, its activations as well).

For more detail, see this page in the official TensorFlow documentation: https://www.tensorflow.org/lite/performance/post_training_quantization

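As a concrete illustration, here is a minimal sketch of post-training quantization with the TensorFlow Lite converter. It assumes TensorFlow 2.x; the file names are hypothetical.

```python
import tensorflow as tf

# Load the saved Keras model (hypothetical path)
model = tf.keras.models.load_model("model.h5")

# Convert to TensorFlow Lite with the default post-training
# quantization, which stores the weights at reduced (8-bit) precision
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Write out the quantized model (typically around 4x smaller)
with open("model_quant.tflite", "wb") as f:
    f.write(tflite_model)
```

Note that the resulting `.tflite` file is loaded with `tf.lite.Interpreter` rather than `keras.models.load_model`, so this trades a smaller, faster-loading file for a different inference API and a possible small drop in accuracy.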
