MongoDB slows down when write operations exceed 10,000/sec
Question

I am running a cron job whose code is written in Go, and I am using MongoDB as the database.

The system that hosts the database has 128 GB of RAM, and the code runs on a separate system.

The cron job runs for 17,000 merchants in parallel, and each merchant has its own database, which means there are 17,000 databases in the system.

Here is the scenario: when the cron runs, there are approximately 10,000 write/insert operations per second, which makes MongoDB slow and affects both MongoDB's performance and the overall cron job. The write operations include bulk insert queries as well as single inserts, and these queries are executed concurrently for different merchants.

To overcome this problem, I am thinking of using transactions for the write operations. Will that have a positive impact on the slowdown? Is there anything else I can implement to improve MongoDB's performance so that it does not slow down?
Answer 1

Score: 2
Transactions will not make your writes faster. Transactions add extra locking in the database, which may degrade performance further.

If you are inserting 10,000 writes per second, MongoDB's performance will be impacted: every write has to be processed and then replicated across the replica set, so during massive write bursts like this you will notice degraded performance.

There are several strategies to overcome this problem:

- **Shard the database:** Shard MongoDB so that the writes are spread across multiple nodes.
- **Space the writes:** Instead of writing at 10k/sec, write more slowly with some delay. This smooths out the write spike, and the performance degradation will be far less severe. For example, you could write 100 merchants' data in parallel at a time instead of all 17,000.
- **Use a bigger machine:** Use larger machines to accommodate the high write throughput, or experiment with MongoDB's performance-tuning parameters. This is not an ideal solution, but sometimes it is the last resort when nothing else works.

Good resources:

- https://www.mongodb.com/basics/best-practices
- https://stackify.com/mongodb-performance-tuning/
- https://medium.com/idealo-tech-blog/advanced-mongodb-performance-tuning-2ddcd01a27d2
- https://medium.com/mongodb-performance-tuning
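The "space the writes" strategy above can be sketched in Go (the language the cron is written in) with a counting semaphore that caps how many merchants are processed concurrently. This is a minimal sketch: `processMerchant` is a hypothetical stand-in for your real per-merchant bulk-insert code, and here it only counts invocations.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// processMerchant stands in for the real per-merchant bulk-insert
// work against MongoDB; in this sketch it only counts invocations.
func processMerchant(id int, done *atomic.Int64) {
	done.Add(1)
}

// processAll fans out work for `total` merchants but never runs more
// than maxParallel goroutines at once, using a buffered channel as a
// counting semaphore. It returns how many merchants were processed.
func processAll(total, maxParallel int) int64 {
	sem := make(chan struct{}, maxParallel)
	var wg sync.WaitGroup
	var done atomic.Int64

	for id := 0; id < total; id++ {
		wg.Add(1)
		sem <- struct{}{} // blocks once maxParallel workers are in flight
		go func(id int) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot
			processMerchant(id, &done)
		}(id)
	}
	wg.Wait()
	return done.Load()
}

func main() {
	// e.g. 17,000 merchants, at most 100 writing concurrently
	fmt.Println(processAll(17000, 100))
}
```

With `maxParallel` set to 100, the 17,000 merchants are drained in bounded waves instead of hitting the database all at once, which is exactly the smoothing effect described above; the right cap is whatever your MongoDB deployment sustains comfortably.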
Comments