Question
How would I limit upload and download speed from the server in golang?
I'm writing a golang server that allows users to upload and download files. The files are large, about 1 GB. I want to limit the upload and download speed to (for instance) 1 MB/s (configurable, of course).
Below is my upload code:
func uploadFile(w http.ResponseWriter, r *http.Request) {
	file, _, err := r.FormFile("file")
	if err != nil {
		http.Error(w, err.Error(), 500)
		return
	}
	defer file.Close()
	os.MkdirAll(`e:\test`, os.ModePerm)
	out, err := os.Create(`e:\test\test.mpg`)
	if err != nil {
		http.Error(w, err.Error(), 500)
		return
	}
	defer out.Close()
	_, err = io.Copy(out, file)
	if err != nil {
		http.Error(w, err.Error(), 500)
	}
}
Answer 1
Score: 19
There's a token bucket algorithm that can be helpful for implementing such a rate limit. I found an example implementation that you can use: https://github.com/juju/ratelimit
package main

import (
	"bytes"
	"fmt"
	"io"
	"time"

	"github.com/juju/ratelimit"
)

func main() {
	// Source holding 1MB
	src := bytes.NewReader(make([]byte, 1024*1024))

	// Destination
	dst := &bytes.Buffer{}

	// Bucket adding 100KB every second, holding max 100KB
	bucket := ratelimit.NewBucketWithRate(100*1024, 100*1024)

	start := time.Now()
	// Copy source to destination, but wrap our reader with a rate-limited one
	io.Copy(dst, ratelimit.Reader(src, bucket))

	fmt.Printf("Copied %d bytes in %s\n", dst.Len(), time.Since(start))
}
After running it, the output is:
Copied 1048576 bytes in 9.239607694s
(1 MiB at 100 KiB/s would take about 10 s; the bucket starts out full, so the first 100 KiB pass through immediately and the remaining ~924 KiB take roughly 9.2 s.)
You can use different bucket implementations to provide the desired behaviour. In your code, after setting up the right token bucket, you would call:
_, err = io.Copy(out, ratelimit.Reader(file, bucket))
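The same token bucket also works in the download direction via ratelimit.Writer, which wraps an io.Writer. Below is a minimal sketch of a download handler, assuming the file path from the question's upload handler and a 1MB/s limit; the handler name, route and port are illustrative, not part of the original answer.

package main

import (
	"io"
	"net/http"
	"os"

	"github.com/juju/ratelimit"
)

// downloadFile streams the uploaded file back to the client at roughly 1MB/s.
func downloadFile(w http.ResponseWriter, r *http.Request) {
	in, err := os.Open(`e:\test\test.mpg`)
	if err != nil {
		http.Error(w, err.Error(), 500)
		return
	}
	defer in.Close()

	// Refill the bucket at 1MiB per second, holding at most 1MiB of tokens.
	bucket := ratelimit.NewBucketWithRate(1024*1024, 1024*1024)

	// Wrap the ResponseWriter so every write is throttled by the bucket.
	if _, err := io.Copy(ratelimit.Writer(w, bucket), in); err != nil {
		http.Error(w, err.Error(), 500)
	}
}

func main() {
	// Illustrative route and port.
	http.HandleFunc("/download", downloadFile)
	http.ListenAndServe(":8080", nil)
}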
Answer 2
Score: 4
You could check out the implementation of PuerkitoBio/throttled, presented in this article:
> throttled, a Go package that implements various strategies to control access to HTTP handlers. Out-of-the-box, it supports rate-limiting of requests, constant interval flow of requests and memory usage thresholds to grant or deny access, but it also provides mechanisms to extend its functionality.
The rate limiting isn't exactly what you need, but it can give a good idea for implementing a similar feature.
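For reference, here is a rough sketch of how request rate limiting looks with the library's current API (the project has since moved to github.com/throttled/throttled; the quota, store size, route and port below are illustrative assumptions). Note that this throttles requests per time window, not bytes per second:

package main

import (
	"net/http"

	"github.com/throttled/throttled/v2"
	"github.com/throttled/throttled/v2/store/memstore"
)

func main() {
	// In-memory store holding the rate-limiting state (illustrative key limit).
	store, err := memstore.New(65536)
	if err != nil {
		panic(err)
	}

	// Allow 20 requests per minute with a burst of 5 (illustrative quota).
	quota := throttled.RateQuota{MaxRate: throttled.PerMin(20), MaxBurst: 5}
	limiter, err := throttled.NewGCRARateLimiter(store, quota)
	if err != nil {
		panic(err)
	}

	// Vary the limit by request path, so each endpoint gets its own quota.
	httpLimiter := throttled.HTTPRateLimiter{
		RateLimiter: limiter,
		VaryBy:      &throttled.VaryBy{Path: true},
	}

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})

	// Requests beyond the quota receive a 429 Too Many Requests response.
	http.ListenAndServe(":8080", httpLimiter.RateLimit(handler))
}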
Answer 3
Score: 0
You can use https://github.com/ConduitIO/bwlimit to limit the bandwidth of requests on the server and the client. It differs from other libraries because it respects read/write deadlines (timeouts) and limits the bandwidth of the whole request, including headers, not only the request body.
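As a sketch of that server-wide setup, assuming the NewListener wrapper shown in the bwlimit README (the limits, port and handler are illustrative):

package main

import (
	"net"
	"net/http"

	"github.com/conduitio/bwlimit"
)

func main() {
	ln, err := net.Listen("tcp", ":8080")
	if err != nil {
		panic(err)
	}

	// NewListener (per the bwlimit README) wraps the listener so every accepted
	// connection is limited to 1MiB/s for writes and 4KB/s for reads,
	// covering the whole request including headers.
	ln = bwlimit.NewListener(ln, 1*bwlimit.Mebibyte, 4*bwlimit.KB)

	http.Serve(ln, http.DefaultServeMux)
}

Because the limit is applied at the connection level, it covers everything that crosses the wire, which is what gives the headers-included behaviour described above.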
If you are interested in only limiting the upload and download speed of the file for a single HTTP handler, you can use the Reader and Writer objects provided by the library.
package example

import (
	"io"
	"net/http"
	"os"

	"github.com/conduitio/bwlimit"
)

const (
	writeLimit = 1 * bwlimit.Mebibyte // write limit is 1048576 B/s
	readLimit  = 4 * bwlimit.KB       // read limit is 4000 B/s
)

func uploadFile(w http.ResponseWriter, r *http.Request) {
	file, _, _ := r.FormFile("file")

	// apply bandwidth limit
	fileReader := bwlimit.NewReader(file, readLimit)

	// prepare out ...

	// copy using the bandwidth limit
	_, _ = io.Copy(out, fileReader)
}

func downloadFile(w http.ResponseWriter, r *http.Request) {
	// prepare file ...
	in, _ := os.Open("file")

	// apply bandwidth limit
	responseWriter := bwlimit.NewWriter(w, writeLimit)

	// write body with bandwidth limit
	io.Copy(responseWriter, in)
}