Inserting a large object into PostgreSQL using jackc/pgx returns "out of memory (SQLSTATE 54000)"

Question

I am using the jackc/pgx library to insert large objects into Postgres. It works fine when the large objects are small. However, in one case the large object was almost 1.8 GB in size, and the write operation failed with an "out of memory (SQLSTATE 54000)" error.

Here is the code snippet showing how I insert the blob:

import (
	"github.com/jackc/pgx/v4"
	"github.com/jackc/pgx/v4/pgxpool"
)

// Read the bytes from the file to be imported as the large object
b, err := ioutil.ReadFile(pathToLargeObjectFile)
txWrite, err := dbPool.Begin(ctx)
loWrite := txWrite.LargeObjects()

fmt.Printf("Creating new blob with ID : %d", ID)
id, err := loWrite.Create(ctx, ID)
// Open the blob with the returned ID
obj, err := loWrite.Open(ctx, id, pgx.LargeObjectModeRead|pgx.LargeObjectModeWrite)
n, err := obj.Write(b)
fmt.Printf("Written %d byte to blob %d\n", n, id)

I get the error on this line:

n, err := obj.Write(b)

How do I prevent this error and successfully import the large object?

I read this post https://stackoverflow.com/questions/14509747/inserting-large-object-into-postgresql-returns-53200-out-of-memory-error, which writes the bytes in chunks.

Is something similar possible with jackc/pgx?


Answer 1

Score: 0

The solution is to read the file in chunks and write it to the blob in chunks.

The important point to note here is that when using obj.Write(b), the obj object keeps its pointer at the end of the previous write. So if we perform writes in succession, each write appends to the blob.
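
A minimal illustration of this append behaviour (firstChunk and secondChunk are hypothetical byte slices, not variables from the original code):

// Successive writes advance the object's internal write pointer,
// so the second Write lands immediately after the first.
n1, _ := obj.Write(firstChunk)  // written starting at offset 0 of a fresh blob
n2, _ := obj.Write(secondChunk) // appended starting at offset n1
fmt.Println(n1, n2)             // bytes written by each call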

Here is how I resolved it:

// Open the blob with the ID returned by loWrite.Create
obj, err := loWrite.Open(ctx, id, pgx.LargeObjectModeRead|pgx.LargeObjectModeWrite)
if err != nil {
    fmt.Printf("Error opening blob. Error: %s", err.Error())
    return err
}

importFile := "<path to the file being imported>"
// reader wraps blobBackupFile, the opened *os.File for importFile;
// file is its os.FileInfo and bufferSize is the chunk size in bytes
reader := bufio.NewReader(blobBackupFile)
chunk := 0
totalBytesRead := 0

// Read the file chunk by chunk and write each chunk to the blob
for {
    buf := make([]byte, bufferSize)    // initialize the buffer
    bytesRead, err := reader.Read(buf) // load the next chunk into the buffer
    buf = buf[:bytesRead]              // slice it to the number of bytes actually read
    totalBytesRead += bytesRead

    if bytesRead == 0 {
        if err == io.EOF {
            fmt.Printf("Reached end of file %s", file.Name())
            break
        }
        if err != nil {
            fmt.Printf("Error reading chunk %d from file %s. Error: %s", chunk, importFile, err.Error())
            break
        }

        return err
    }
    loc, err := obj.Tell()
    if err != nil {
        fmt.Printf("Error getting the current pointer location: %s", err.Error())
    }
    fmt.Printf("BlobID: %d. Pointer at %#v", id, loc)
    fmt.Printf("Writing chunk %d of %d bytes at offset %#v of blob %d", chunk, bytesRead, loc, id)

    bytesWritten, err := obj.Write(buf)
    if err != nil {
        fmt.Printf("Error writing bytes to blob: %s", err.Error())
        return err
    }
    fmt.Printf("Written %d bytes to blob %d.", bytesWritten, id)
    endLoc, err := obj.Tell()
    if err != nil {
        fmt.Printf("Error getting the pointer location after writing: %s", err.Error())
    }
    fmt.Printf("Written chunk %d of %d bytes from offset %#v to offset %#v of blob %d", chunk, bytesWritten, loc, endLoc, id)

    if int64(totalBytesRead) == file.Size() {
        fmt.Printf("Chunk %d was the last chunk, written from offset %#v to offset %#v of blob %d", chunk, loc, endLoc, id)
        break
    }

    // next chunk
    chunk++
}

// Close the blob object (obj) after the complete blob has been written
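
For comparison, since pgx's LargeObject type satisfies io.Writer (it exposes Read, Write, Seek and Close), the same chunked behaviour can also be obtained by streaming the file with io.Copy instead of managing the buffer loop by hand. The sketch below is only illustrative, not part of the original answer: the function name importLargeObject and the single-transaction structure are assumptions, and error handling is kept minimal.

import (
	"context"
	"io"
	"os"

	"github.com/jackc/pgx/v4"
	"github.com/jackc/pgx/v4/pgxpool"
)

// importLargeObject streams the file at path into a new large object,
// copying it through a small reusable buffer so the whole file is never
// held in memory at once.
func importLargeObject(ctx context.Context, dbPool *pgxpool.Pool, oid uint32, path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	tx, err := dbPool.Begin(ctx)
	if err != nil {
		return err
	}
	defer tx.Rollback(ctx) // harmless no-op once the transaction is committed

	lo := tx.LargeObjects()
	id, err := lo.Create(ctx, oid)
	if err != nil {
		return err
	}

	obj, err := lo.Open(ctx, id, pgx.LargeObjectModeWrite)
	if err != nil {
		return err
	}

	// io.Copy reads and writes in fixed-size chunks; each obj.Write
	// appends at the object's current pointer, as described above.
	if _, err := io.Copy(obj, f); err != nil {
		return err
	}
	if err := obj.Close(); err != nil {
		return err
	}
	return tx.Commit(ctx)
}

Whichever variant is used, the key point is the same as in the loop above: the data reaches the server in chunks, so the 1.8 GB payload is never built up in a single client-side buffer.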
