How can I show the latencies of the internal computations of a service in Locust?
Question
Is there a way to show the latency statistics (and hopefully graphs) of the internal computations of a service in the Locust web interface during the test?
I have a service that internally performs several computations. I need to run a load test and benchmark the times of each of these internal computations. Something like:
/compute:

- Computation A -> Xms average, Yms median, Zms max, etc.
- Computation B -> Xms average, Yms median, Zms max, etc.
- Computation C -> Xms average, Yms median, Zms max, etc.
However, in Locust I can only see the overall time statistics of the endpoint (/compute in this case).
I am currently returning the latencies of each of the computations in the response. I have checked the docs, but I have not found a way to show the statistics of those numbers in the Locust web interface during the test.
The only workaround I have found is saving the responses to a file and computing the statistics separately.
Is there any way to do this? Or is there a better solution?

Thank you so much in advance.
Answer 1

Score: 1
Yes, there is a way to do it. What you're looking for is Extending the Web UI.
> As an alternative to adding simple web routes, you can use Flask
> Blueprints and templates to not only add routes but also extend the
> web UI to allow you to show custom data alongside the built-in Locust
> stats. This is more advanced as it involves also writing and including
> HTML and Javascript files to be served by routes but can greatly
> enhance the utility and customizability of the web UI.
>
> A working example of extending the web UI, complete with HTML and
> Javascript example files, can be found in the examples directory of
> the Locust source code.
Specifically, this is the directory with the relevant example of extending the web UI. It's a bit more involved to extend the web UI than it is to just add your custom route, but if you're familiar with the technologies it shouldn't be too bad. You can add your own tab to the UI and show whatever you want on it. You can even create a table and a chart in the same style as the main Locust stats tables and chart if desired.
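For this particular case, a minimal sketch (hypothetical names throughout, and plain HTML instead of the Blueprint/template setup the example uses) might collect the per-computation latencies reported in each response into a module-level dict and serve a summary on a custom route:

```python
from locust import events

# Hypothetical in-memory store: computation name -> list of latencies in ms,
# filled in by your tasks as they parse each /compute response.
computation_stats = {"A": [], "B": [], "C": []}

@events.init.add_listener
def on_locust_init(environment, **kwargs):
    # web_ui is None on worker nodes and in headless runs
    if environment.web_ui:
        @environment.web_ui.app.route("/computation-stats")
        def computation_stats_page():
            # Plain HTML for brevity; the extend_web_ui example shows how
            # to use Flask Blueprints and templates for a proper extra tab.
            rows = "".join(
                "<tr><td>{}</td><td>{:.1f}</td><td>{:.1f}</td></tr>".format(
                    name, sum(v) / len(v), max(v)
                )
                for name, v in computation_stats.items()
                if v
            )
            return (
                "<table><tr><th>Computation</th>"
                "<th>Average (ms)</th><th>Max (ms)</th></tr>"
                + rows
                + "</table>"
            )
```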
Answer 2

Score: 0
A quick workaround might be firing your own request event for each computation so that they show up as different endpoints in the web UI.
```python
# `env` is the Locust Environment (inside a task: self.environment)
request_meta = {
    "request_type": "my-custom-type",
    "name": "my-custom-name",
    "response_time": <calculated-time>,  # the computation's latency in ms
    "response_length": 0,
    "exception": None,
    "context": None,
    "response": None,
}
env.events.request.fire(**request_meta)
```
Writing a wrapper might result in cleaner code, too.
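As a sketch of such a wrapper, assuming (hypothetically) that the /compute response returns the per-computation times in a timings JSON field:

```python
from locust import HttpUser, task

def report_computation(environment, name, latency_ms):
    # Fire a synthetic request event so each computation gets its own
    # row in the statistics table and shows up in the charts.
    environment.events.request.fire(
        request_type="computation",
        name=name,
        response_time=latency_ms,
        response_length=0,
        exception=None,
        context=None,
        response=None,
    )

class ComputeUser(HttpUser):
    @task
    def compute(self):
        resp = self.client.post("/compute")
        # Assumed response shape: {"timings": {"A": 12.3, "B": 4.5, "C": 6.7}}
        for name, latency_ms in resp.json().get("timings", {}).items():
            report_computation(self.environment, name, latency_ms)
```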