
How to simulate an order queue in repast4py across multiple ranks (processor cores)?

Question


I am just beginning with repast4py (Repast for Python). I want to simulate a small order-handling process with multiple steps.

Let's say there are 1000 orders with different order-placement timestamps. After an order is received there are 3 steps: picking (10-15 mins), packing (8-12 mins) and shipping (5-10 mins). Each step has a dedicated number of workers, say 10 for picking, 5 for packing and 2 for shipping.

All the workers are independent and can work in parallel. Once a worker is done with the assigned activity for an order, he can move on to the next order and process it.

How can I create a queue variable that is accessible to all the ranks in repast4py?

I can't find any logistics-based examples for repast4py. I have tried exploring other simulation libraries like SimPy, but they are not scalable enough for large problems.

In the Random Walk example in the repast4py documentation, we run the program using

mpirun -n 4 python rndwalk.py random_walk.yaml 

This runs the program on multiple ranks, but they all share a SharedGrid through which they interact. Is there something similar for creating shared queues for each step of the process, such as an order queue, a picking queue, a packing queue, etc., that can be accessed by all workers?
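For context, here is roughly the single-process version of what I want to scale up, as a plain-Python discrete-event sketch rather than a Repast model (the `simulate` function and stage table are just illustrative, with timings in minutes):

```python
import heapq
import random

# Stage table: (name, number_of_workers, (min_minutes, max_minutes)).
STAGES = [("picking", 10, (10, 15)),
          ("packing", 5, (8, 12)),
          ("shipping", 2, (5, 10))]

def simulate(n_orders=1000, seed=42):
    rng = random.Random(seed)
    # Each order arrives at a random time within an 8-hour window.
    # Events are (time, order_id, stage_index) tuples in a min-heap.
    events = [(rng.uniform(0, 480), i, 0) for i in range(n_orders)]
    heapq.heapify(events)
    # free_at[name][w] is the time at which worker w of that stage is next free.
    free_at = {name: [0.0] * workers for name, workers, _ in STAGES}
    finished = {}
    while events:
        t, order, stage = heapq.heappop(events)
        if stage == len(STAGES):
            finished[order] = t  # order has passed all three stages
            continue
        name, _, (lo, hi) = STAGES[stage]
        # Assign the order to the worker at this stage who frees up earliest.
        w = min(range(len(free_at[name])), key=free_at[name].__getitem__)
        start = max(t, free_at[name][w])
        end = start + rng.uniform(lo, hi)
        free_at[name][w] = end
        heapq.heappush(events, (end, order, stage + 1))
    return finished  # order_id -> completion time in minutes

completion = simulate(1000)
```

This runs fine for 1000 orders, but I want the same queue-and-workers structure distributed across ranks.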

Answer 1

Score: 1


Without knowing more of the details, I think you'll need to select a particular rank (e.g., rank 0) to manage the queues and synchronize them across processes. Rank 0 could create a queue for each rank from the full queue and use mpi4py to share those with itself and the other ranks. At some appropriate interval the full queue could be updated from the rank queues and new rank queues created. See the mpi4py documentation for how to send and receive Python objects between ranks. For example,

https://mpi4py.readthedocs.io/en/stable/tutorial.html#collective-communication

Broadcast, scatter, gather, etc., are MPI collective communication concepts. This is a good introduction to them: https://mpitutorial.com/tutorials/mpi-broadcast-and-collective-communication/, although the examples are in C.

Lastly, repast4py runs just fine on a single process (mpirun -n 1), in which case there's no need to share queues. So, if your simulation runs fast enough on a single process, you can avoid the issue entirely.

huangapple
  • Published 2023-06-15 05:24:40
  • Please keep this link when reposting: https://go.coder-hub.com/76477641.html