Store Uploaded File in MongoDB GridFS Using mgo without Saving to Memory
Question
noob Golang and Sinatra person here. I have hacked a Sinatra app to accept an uploaded file posted from an HTML form and save it to a hosted MongoDB database via GridFS. This seems to work fine. I am writing the same app in Golang using the mgo driver.
Functionally it works fine. However in my Golang code, I read the file into memory and then write the file from memory to the MongoDB using mgo. This appears much slower than my equivalent Sinatra app. I get the sense that the interaction between Rack and Sinatra does not execute this "middle" or "interim" step.
Here's a snippet of my Go code:
func uploadfilePageHandler(w http.ResponseWriter, req *http.Request) {
    // Capture multipart form file information
    file, handler, err := req.FormFile("filename")
    if err != nil {
        fmt.Println(err)
    }
    // Read the file into memory
    data, err := ioutil.ReadAll(file)
    // ... check err value for nil
    // Specify the Mongodb database
    my_db := mongo_session.DB("... database name...")
    // Create the file in the Mongodb Gridfs instance
    my_file, err := my_db.GridFS("fs").Create(unique_filename)
    // ... check err value for nil
    // Write the file to the Mongodb Gridfs instance
    n, err := my_file.Write(data)
    // ... check err value for nil
    // Close the file
    err = my_file.Close()
    // ... check err value for nil
    // Write a log type message
    fmt.Printf("%d bytes written to the Mongodb instance\n", n)
    // ... other statements redirecting to rest of user flow...
}
Question:
- Is this "interim" step needed (data, err := ioutil.ReadAll(file))?
- If so, can I execute this step more efficiently?
- Are there other accepted practices or approaches I should be considering?
Thanks...
Answer 1
Score: 9
No, you should not read the file entirely into memory at once, as that will break when the file is too large. The second example in the documentation for GridFS.Create avoids this problem:
file, err := db.GridFS("fs").Create("myfile.txt")
check(err)
messages, err := os.Open("/var/log/messages")
check(err)
defer messages.Close()
_, err = io.Copy(file, messages)
check(err)
err = file.Close()
check(err)
As for why it's slower than something else, hard to tell without diving into the details of the two approaches used.
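Adapted to the handler in the question, the same streaming idea might look roughly like the sketch below. It assumes the mongo_session and unique_filename values from the question are in scope and uses the usual fmt, io, and net/http imports; error handling is abbreviated:

func uploadfilePageHandler(w http.ResponseWriter, req *http.Request) {
    // The multipart form file is an io.Reader, so it can be streamed
    // directly into GridFS without an ioutil.ReadAll step.
    file, _, err := req.FormFile("filename")
    if err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }
    defer file.Close()
    // Create the GridFS file (mongo_session and unique_filename as in the question).
    gridFile, err := mongo_session.DB("... database name...").GridFS("fs").Create(unique_filename)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    // Stream the upload straight from the request into GridFS, chunk by chunk.
    n, err := io.Copy(gridFile, file)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    if err := gridFile.Close(); err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    fmt.Printf("%d bytes written to the Mongodb instance\n", n)
    // ... other statements redirecting to rest of user flow ...
}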
Answer 2
Score: 1
Once you have the file from the multipart form, it can be saved into GridFS using the function below. I have tested this against huge files as well (up to 570 MB).
//....code inside the handlerfunc
for _, fileHeaders := range r.MultipartForm.File {
    for _, fileHeader := range fileHeaders {
        file, _ := fileHeader.Open()
        if gridFile, err := db.GridFS("fs").Create(fileHeader.Filename); err != nil {
            //errorResponse(w, err, http.StatusInternalServerError)
            return
        } else {
            gridFile.SetMeta(fileMetadata)
            gridFile.SetName(fileHeader.Filename)
            if err := writeToGridFile(file, gridFile); err != nil {
                //errorResponse(w, err, http.StatusInternalServerError)
                return
            }
        }
    }
}
func writeToGridFile(file multipart.File, gridFile *mgo.GridFile) error {
    reader := bufio.NewReader(file)
    defer func() { file.Close() }()
    // make a buffer to keep chunks that are read
    buf := make([]byte, 1024)
    for {
        // read a chunk
        n, err := reader.Read(buf)
        if err != nil && err != io.EOF {
            return errors.New("Could not read the input file")
        }
        if n == 0 {
            break
        }
        // write a chunk
        if _, err := gridFile.Write(buf[:n]); err != nil {
            return errors.New("Could not write to GridFs for " + gridFile.Name())
        }
    }
    gridFile.Close()
    return nil
}
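Since *mgo.GridFile satisfies io.Writer, the manual chunking loop above could likely also be handed off to io.Copy, which copies through a small fixed-size buffer rather than loading the whole file; a minimal sketch of that variant:

func writeToGridFile(file multipart.File, gridFile *mgo.GridFile) error {
    defer file.Close()
    // io.Copy streams the upload in small chunks, so it never resides in memory whole.
    if _, err := io.Copy(gridFile, file); err != nil {
        return errors.New("Could not write to GridFs for " + gridFile.Name())
    }
    return gridFile.Close()
}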