Slow Loris in Netty as a client or server

Question


I'm still pretty new to Netty so please bear with me. There seem to be plenty of questions asking why a specific Netty implementation is slow and how to make it faster, but my use case is a bit different. I want to avoid low-level socket implementations (hence Netty), but I also know that blocking the event group is bad. I know I can dynamically manage the pipeline. I'm not sure I know enough about Netty to know if this is possible, and I've not tried much that I don't already know is bad (Thread.sleep, for example). The protocol is HTTP, but I also need it to be useful for other protocols.

But what I don't know is, for a single connection on a shared port, how to slow down the response of the server to the client, and vice versa? Or put more aptly: where, and what, would I implement the slowness required? My guess is the encoder for the where; but because of Netty's approach, I haven't the foggiest for the what.

Answer 1

Score: 1


You say that you know that Thread.sleep is "bad" but it really depends on what you're trying to achieve and where you put the sleep. I believe that the best way to build this would be to use a DefaultEventExecutorGroup to offload the processing of your slow-down ChannelHandler onto non-event-loop threads and then call Thread.sleep in your handler.

From the ChannelPipeline javadoc, under the "Building a pipeline" section:
https://netty.io/4.1/api/io/netty/channel/ChannelPipeline.html

> A user is supposed to have one or more ChannelHandlers in a pipeline to receive I/O events (e.g. read) and to request I/O operations (e.g. write and close). For example, a typical server will have the following handlers in each channel's pipeline, but your mileage may vary depending on the complexity and characteristics of the protocol and business logic:
>
> * Protocol Decoder - translates binary data (e.g. ByteBuf) into a Java object.
> * Protocol Encoder - translates a Java object into binary data.
> * Business Logic Handler - performs the actual business logic (e.g. database access).
>
> and it could be represented as shown in the following example:
>
> ```java
> static final EventExecutorGroup group = new DefaultEventExecutorGroup(16);
> ...
>
> ChannelPipeline pipeline = ch.pipeline();
>
> pipeline.addLast("decoder", new MyProtocolDecoder());
> pipeline.addLast("encoder", new MyProtocolEncoder());
>
> // Tell the pipeline to run MyBusinessLogicHandler's event handler methods
> // in a different thread than an I/O thread so that the I/O thread is not blocked by
> // a time-consuming task.
> // If your business logic is fully asynchronous or finished very quickly, you don't
> // need to specify a group.
> pipeline.addLast(group, "handler", new MyBusinessLogicHandler());
> ```

> Be aware that while using DefaultEventLoopGroup will offload the operation from the EventLoop it will still process tasks in a serial fashion per ChannelHandlerContext and so guarantee ordering. Due to the ordering it may still become a bottleneck. If ordering is not a requirement for your use case you may want to consider using UnorderedThreadPoolEventExecutor to maximize the parallelism of the task execution.
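
Putting the pieces together, the pattern described above might look like the sketch below. The handler name and the delay value are illustrative (not from the original answer); the point is only that the sleeping handler is registered with the `DefaultEventExecutorGroup`, so the sleep happens off the I/O thread:

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.concurrent.DefaultEventExecutorGroup;
import io.netty.util.concurrent.EventExecutorGroup;

// A hypothetical slow-down handler. Because it is added to the pipeline with
// an EventExecutorGroup, channelRead runs on one of that group's threads,
// so the Thread.sleep does not stall the event loop serving other channels.
public class SlowDownHandler extends ChannelInboundHandlerAdapter {
    private final long delayMillis;

    public SlowDownHandler(long delayMillis) {
        this.delayMillis = delayMillis;
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        Thread.sleep(delayMillis);  // safe here: we are off the I/O thread
        ctx.fireChannelRead(msg);   // pass the message along the pipeline
    }
}

// Wiring it up inside your ChannelInitializer (sketch):
//
//   static final EventExecutorGroup slowGroup = new DefaultEventExecutorGroup(16);
//   ...
//   pipeline.addLast(slowGroup, "slowdown", new SlowDownHandler(1000));
```

The same idea applies on the outbound side: an analogous handler overriding `write` would delay responses rather than requests.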

Answer 2

Score: 0


I hope someone can post a better (more explanatory) answer than this, but basically all that's needed is to use a ChannelTrafficShapingHandler with small enough values.

For instance, a 2KB response with a read and write limit of 512 bytes/second, a maxTime of 6000ms, and a checkInterval of 1000ms takes 4000ms with the ChannelTrafficShapingHandler, versus 50ms without it, when running both client and server locally. I expect those times to increase dramatically on the network wire.

```java
final ChannelTrafficShapingHandler channelTrafficShapingHandler = new ChannelTrafficShapingHandler(
    getRateInBytesPerSecond(), getRateInBytesPerSecond(), getCheckInterval(), getMaxTime());
ch.addLast(channelTrafficShapingHandler);
```
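
The observed 4000ms lines up with a simple back-of-the-envelope model: at 512 bytes/second a 2048-byte response needs four shaping intervals of 1000ms each. The sketch below is that rough arithmetic only, assuming the shaper releases at most one rate-quantum per check interval; it is not how Netty's shaper is actually implemented:

```java
public class ShapingEstimate {
    // Rough model: the traffic shaper releases at most `ratePerSecond` bytes
    // per check interval, so a payload of `bytes` needs ceil(bytes / rate)
    // intervals to drain.
    public static long estimateMillis(long bytes, long ratePerSecond, long checkIntervalMillis) {
        long intervals = (bytes + ratePerSecond - 1) / ratePerSecond; // ceiling division
        return intervals * checkIntervalMillis;
    }

    public static void main(String[] args) {
        // 2KB response, 512 B/s limit, 1000ms check interval -> 4000
        System.out.println(estimateMillis(2048, 512, 1000));
    }
}
```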

huangapple
  • Posted on 2020-07-26 11:06:28
  • Please retain this link when reposting: https://go.coder-hub.com/63095691.html