CloudWatch API returning strange results
Question
I'm getting some strange results while trying to query the 'Volume Write Bytes' metric with get_metric_statistics for some of my EBS volumes. I was wondering if someone can help me understand the response I'm getting from AWS. For the purposes of this post, note that the real volume ID has been changed to "vol-1234".
{"Cloudwatch Args"=>{:namespace=>"AWS/EBS", :metric_name=>"VolumeWriteBytes", :dimensions=>[{:name=>"VolumeId", :value=>"vol-1234"}], :start_time=>2020-01-06 12:41:58 UTC, :end_time=>2020-01-06 15:41:58 UTC, :period=>300, :statistics=>["Average", "Minimum", "Maximum"]}, :account=>11, :region=>"us-east-1"}
HTTP POST (152.32ms) https://monitoring.us-east-1.amazonaws.com:443/
Response status Net::HTTPOK (200)
Response body <impossible to log>
=> [#<struct Aws::CloudWatch::Types::Datapoint timestamp=2020-01-06 13:56:00 UTC, sample_count=nil, average=4767.288888888889, sum=nil, minimum=0.0, maximum=0.0, unit="Bytes", extended_statistics={}>,
#<struct Aws::CloudWatch::Types::Datapoint timestamp=2020-01-06 13:21:00 UTC, sample_count=nil, average=5512.661654135339, sum=nil, minimum=0.0, maximum=0.0, unit="Bytes", extended_statistics={}>,
#<struct Aws::CloudWatch::Types::Datapoint timestamp=2020-01-06 15:06:00 UTC, sample_count=nil, average=5371.133079847908, sum=nil, minimum=0.0, maximum=0.0, unit="Bytes", extended_statistics={}>,
...
Can someone explain why the Average value is around 4-5k while the Maximum and Minimum values are 0.0? This happens on multiple volumes, so it's not an isolated case.
Answer 1
Score: 0
According to AWS:
"Volume Write Bytes metric ---> Provides information on the write operations in a specified period of time. The Sum statistic reports the total number of bytes transferred during the period. The Average statistic reports the average size of each write operation during the period, except on volumes attached to a Nitro-based instance, where the average represents the average over the specified period. The SampleCount statistic reports the total number of write operations during the period, except on volumes attached to a Nitro-based instance, where the sample count represents the number of data points used in the statistical calculation. For Xen instances, data is reported only when there is write activity on the volume. The Minimum and Maximum statistics on this metric are supported only by volumes attached to Nitro-based instances."
In conclusion, we can rule out the theory that something unusual is happening: on volumes not attached to Nitro-based instances, the Minimum and Maximum statistics are simply not supported, which is why they come back as 0.0 even though the Average is populated.
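Given the quoted behavior, a practical workaround is to request the Sum statistic (which is supported regardless of instance type) and derive the per-period throughput yourself, instead of relying on Minimum/Maximum. The sketch below is an illustration under assumptions, not the poster's code: the Datapoint struct stands in for Aws::CloudWatch::Types::Datapoint, and the numbers are hypothetical.

```ruby
# Stand-in for Aws::CloudWatch::Types::Datapoint (hypothetical values).
Datapoint = Struct.new(:timestamp, :sum, :sample_count, keyword_init: true)

# Average throughput over the period, in bytes per second.
# Sum is the total number of bytes written during the period,
# so Sum / period gives bytes per second regardless of volume type.
def throughput_bps(datapoint, period_seconds)
  return nil if datapoint.sum.nil?
  datapoint.sum / period_seconds.to_f
end

dp = Datapoint.new(
  timestamp: Time.utc(2020, 1, 6, 13, 56),
  sum: 1_430_186.0,   # hypothetical total bytes written in the period
  sample_count: 300.0 # hypothetical number of write operations
)

puts throughput_bps(dp, 300) # ~4767 bytes/sec for this 300-second period
```

To get the sum in the first place, you would add "Sum" (and optionally "SampleCount") to the :statistics array of the get_metric_statistics call shown in the question.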