How do I batch SQL statements with package database/sql

Question

How do I batch SQL statements with Go's database/sql package?

In Java I would do it like this:

// Create a prepared statement
String sql = "INSERT INTO my_table VALUES(?)";
PreparedStatement pstmt = connection.prepareStatement(sql);

// Insert 10 rows of data
for (int i = 0; i < 10; i++) {
    pstmt.setString(1, "" + i);
    pstmt.addBatch();
}

// Execute the batch
int[] updateCounts = pstmt.executeBatch();

How would I achieve the same in Go?

Answer 1

Score: 93

Since the db.Exec function is variadic, one option (which actually makes only a single network round trip) is to construct the statement yourself and explode the arguments into it.

Sample code:

func BulkInsert(unsavedRows []*ExampleRowStruct) error {
    valueStrings := make([]string, 0, len(unsavedRows))
    valueArgs := make([]interface{}, 0, len(unsavedRows)*3)
    for _, post := range unsavedRows {
        valueStrings = append(valueStrings, "(?, ?, ?)")
        valueArgs = append(valueArgs, post.Column1)
        valueArgs = append(valueArgs, post.Column2)
        valueArgs = append(valueArgs, post.Column3)
    }
    stmt := fmt.Sprintf("INSERT INTO my_sample_table (column1, column2, column3) VALUES %s",
        strings.Join(valueStrings, ","))
    _, err := db.Exec(stmt, valueArgs...)
    return err
}

In a simple test I ran, this solution was about 4 times faster at inserting 10,000 rows than the Begin, Prepare, Commit approach presented in another answer, though the actual improvement will depend a lot on your individual setup, network latency, and so on.
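The statement-building step can be exercised on its own, without a database. A minimal sketch (the buildBulkInsert helper and the exampleRow type are illustrative stand-ins, not part of the answer):

```go
package main

import (
	"fmt"
	"strings"
)

// exampleRow is a stand-in for the answer's ExampleRowStruct.
type exampleRow struct {
	Column1, Column2, Column3 interface{}
}

// buildBulkInsert builds one multi-row INSERT with "?" placeholders,
// returning the statement and the flattened argument list.
func buildBulkInsert(rows []exampleRow) (string, []interface{}) {
	valueStrings := make([]string, 0, len(rows))
	valueArgs := make([]interface{}, 0, len(rows)*3)
	for _, r := range rows {
		valueStrings = append(valueStrings, "(?, ?, ?)")
		valueArgs = append(valueArgs, r.Column1, r.Column2, r.Column3)
	}
	stmt := fmt.Sprintf("INSERT INTO my_sample_table (column1, column2, column3) VALUES %s",
		strings.Join(valueStrings, ","))
	return stmt, valueArgs
}

func main() {
	stmt, args := buildBulkInsert([]exampleRow{{1, "a", true}, {2, "b", false}})
	fmt.Println(stmt) // INSERT INTO my_sample_table (column1, column2, column3) VALUES (?, ?, ?),(?, ?, ?)
	fmt.Println(len(args))
}
```

The single statement is what makes this one round trip: all rows travel as arguments of one Exec call.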

Answer 2

Score: 20

If you're using PostgreSQL then pq supports bulk imports.

Answer 3

Score: 16

Adapting Andrew's solution for PostgreSQL, which doesn't support the ? placeholder, the following works:

func BulkInsert(unsavedRows []*ExampleRowStruct) error {
    valueStrings := make([]string, 0, len(unsavedRows))
    valueArgs := make([]interface{}, 0, len(unsavedRows)*3)
    i := 0
    for _, post := range unsavedRows {
        valueStrings = append(valueStrings, fmt.Sprintf("($%d, $%d, $%d)", i*3+1, i*3+2, i*3+3))
        valueArgs = append(valueArgs, post.Column1)
        valueArgs = append(valueArgs, post.Column2)
        valueArgs = append(valueArgs, post.Column3)
        i++
    }
    stmt := fmt.Sprintf("INSERT INTO my_sample_table (column1, column2, column3) VALUES %s",
        strings.Join(valueStrings, ","))
    _, err := db.Exec(stmt, valueArgs...)
    return err
}
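The $n numbering is the only part that differs from the ? version, and it can be checked in isolation. A small sketch (the pgPlaceholders helper is illustrative, not from the answer):

```go
package main

import (
	"fmt"
	"strings"
)

// pgPlaceholders builds the "($1, $2, $3),($4, $5, $6),..." VALUES
// fragment for numRows rows of numCols columns each.
func pgPlaceholders(numRows, numCols int) string {
	groups := make([]string, 0, numRows)
	for row := 0; row < numRows; row++ {
		ps := make([]string, 0, numCols)
		for col := 0; col < numCols; col++ {
			ps = append(ps, fmt.Sprintf("$%d", row*numCols+col+1))
		}
		groups = append(groups, "("+strings.Join(ps, ", ")+")")
	}
	return strings.Join(groups, ",")
}

func main() {
	fmt.Println(pgPlaceholders(2, 3)) // ($1, $2, $3),($4, $5, $6)
}
```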

Answer 4

Score: 7

Expanding on Avi Flax's answer, I needed an ON CONFLICT DO UPDATE clause in my INSERT.

The solution to this is to COPY into a temporary table (set to drop at the end of the transaction), then INSERT from the temporary table into the permanent table.

Here's the code I settled on:

func (fdata *FDataStore) saveToDBBulk(items map[fdataKey][]byte) (err error) {
    tx, err := fdata.db.Begin()
    if err != nil {
        return errors.Wrap(err, "begin transaction")
    }
    txOK := false
    defer func() {
        if !txOK {
            tx.Rollback()
        }
    }()

    // The ON COMMIT DROP clause at the end makes sure that the table
    // is cleaned up at the end of the transaction.
    // Meanwhile the "for{..} state machine" goroutine in charge of delayed
    // saving ensures this function is not running twice at any given time.
    _, err = tx.Exec(sqlFDataMakeTempTable)
    // CREATE TEMPORARY TABLE fstore_data_load
    // (map text NOT NULL, key text NOT NULL, data json)
    // ON COMMIT DROP
    if err != nil {
        return errors.Wrap(err, "create temporary table")
    }

    stmt, err := tx.Prepare(pq.CopyIn(_sqlFDataTempTableName, "map", "key", "data"))
    if err != nil {
        return errors.Wrap(err, "prepare COPY stmt")
    }
    for key, val := range items {
        _, err = stmt.Exec(string(key.Map), string(key.Key), string(val))
        if err != nil {
            return errors.Wrap(err, "loading COPY data")
        }
    }

    _, err = stmt.Exec()
    if err != nil {
        return errors.Wrap(err, "flush COPY data")
    }
    err = stmt.Close()
    if err != nil {
        return errors.Wrap(err, "close COPY stmt")
    }

    _, err = tx.Exec(sqlFDataSetFromTemp)
    // INSERT INTO fstore_data (map, key, data)
    // SELECT map, key, data FROM fstore_data_load
    // ON CONFLICT DO UPDATE SET data = EXCLUDED.data
    if err != nil {
        return errors.Wrap(err, "move from temporary to real table")
    }

    err = tx.Commit()
    if err != nil {
        return errors.Wrap(err, "commit transaction")
    }
    txOK = true
    return nil
}

Answer 5

Score: 3

Here is a take on @Debasish Mitra's solution if you are using Postgres.

Functioning example: https://play.golang.org/p/dFFD2MrEy3J

Alternate example: https://play.golang.org/p/vUtW0K4jVMd

data := []Person{{"John", "Doe", 27}, {"Leeroy", "Jenkins", 19}}

vals := []interface{}{}
for _, row := range data {
    vals = append(vals, row.FirstName, row.LastName, row.Age)
}

sqlStr := `INSERT INTO test(column1, column2, column3) VALUES %s`
sqlStr = ReplaceSQL(sqlStr, "(?, ?, ?)", len(data))

// Prepare and execute the statement
stmt, _ := db.Prepare(sqlStr)
res, _ := stmt.Exec(vals...)

And ReplaceSQL:

func ReplaceSQL(stmt, pattern string, len int) string {
    pattern += ","
    stmt = fmt.Sprintf(stmt, strings.Repeat(pattern, len))
    n := 0
    for strings.IndexByte(stmt, '?') != -1 {
        n++
        param := "$" + strconv.Itoa(n)
        stmt = strings.Replace(stmt, "?", param, 1)
    }
    return strings.TrimSuffix(stmt, ",")
}
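To see what ReplaceSQL produces without touching a database, the function can be run on its own; this sketch just reproduces it with a main that prints the result:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// ReplaceSQL expands the %s in stmt into len copies of pattern and then
// rewrites every "?" into a sequential "$n" Postgres placeholder.
func ReplaceSQL(stmt, pattern string, len int) string {
	pattern += ","
	stmt = fmt.Sprintf(stmt, strings.Repeat(pattern, len))
	n := 0
	for strings.IndexByte(stmt, '?') != -1 {
		n++
		param := "$" + strconv.Itoa(n)
		stmt = strings.Replace(stmt, "?", param, 1)
	}
	return strings.TrimSuffix(stmt, ",")
}

func main() {
	fmt.Println(ReplaceSQL(`INSERT INTO test(column1, column2, column3) VALUES %s`, "(?, ?, ?)", 2))
	// INSERT INTO test(column1, column2, column3) VALUES ($1, $2, $3),($4, $5, $6)
}
```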

Answer 6

Score: 3

In case anyone is using pgx (arguably the best Postgres driver in Go), see this solution:
https://github.com/jackc/pgx/issues/764#issuecomment-685249471

Answer 7

Score: 2

I took Andrew C's idea and adapted it for a need at my work using SQL named (scalar) variables. It works perfectly for that specific requirement, and it may be useful to anyone who needs to simulate batched SQL statements in Go. That's the idea:

func BulkInsert(unsavedRows []*ExampleRowStruct) error {
    valueStrings := make([]string, 0, len(unsavedRows))
    valueArgs := make([]interface{}, 0, len(unsavedRows)*3)
    i := 0
    for _, post := range unsavedRows {
        valueStrings = append(valueStrings, fmt.Sprintf("(@p%d, @p%d, @p%d)", i*3+1, i*3+2, i*3+3))
        valueArgs = append(valueArgs, post.Column1)
        valueArgs = append(valueArgs, post.Column2)
        valueArgs = append(valueArgs, post.Column3)
        i++
    }
    sqlQuery := fmt.Sprintf("INSERT INTO my_sample_table (column1, column2, column3) VALUES %s", strings.Join(valueStrings, ","))

    var params []interface{}
    for i := 0; i < len(valueArgs); i++ {
        var param sql.NamedArg
        param.Name = fmt.Sprintf("p%v", i+1)
        param.Value = valueArgs[i]
        params = append(params, param)
    }
    _, err := db.Exec(sqlQuery, params...)
    return err
}

Answer 8

Score: 2

For Postgres, lib pq supports bulk imports: https://godoc.org/github.com/lib/pq#hdr-Bulk_imports

The same can be achieved with the code below, though, and where it is really helpful is when you need to perform a bulk conditional update (change the query accordingly).

To perform similar bulk inserts for Postgres, you can use the following function:

// ReplaceSQL replaces each occurrence of the given string pattern
// with an increasing $n-based sequence
func ReplaceSQL(old, searchPattern string) string {
    tmpCount := strings.Count(old, searchPattern)
    for m := 1; m <= tmpCount; m++ {
        old = strings.Replace(old, searchPattern, "$"+strconv.Itoa(m), 1)
    }
    return old
}

So the above sample becomes:

sqlStr := "INSERT INTO test(n1, n2, n3) VALUES "
vals := []interface{}{}
for _, row := range data {
    sqlStr += "(?, ?, ?)," // one "?" per column
    vals = append(vals, row["v1"], row["v2"], row["v3"]) // one row["v{n}"] entry per column
}
// trim the trailing ,
sqlStr = strings.TrimSuffix(sqlStr, ",")
// replace ? with $n for Postgres
sqlStr = ReplaceSQL(sqlStr, "?")
// prepare the statement
stmt, _ := db.Prepare(sqlStr)
// pass all vals at once
res, _ := stmt.Exec(vals...)
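This two-argument ReplaceSQL variant can likewise be exercised standalone; the sketch below reproduces it and shows the ? to $n rewrite on a two-row statement:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// ReplaceSQL replaces each occurrence of searchPattern in old with
// sequential $1, $2, ... placeholders, one per occurrence.
func ReplaceSQL(old, searchPattern string) string {
	tmpCount := strings.Count(old, searchPattern)
	for m := 1; m <= tmpCount; m++ {
		old = strings.Replace(old, searchPattern, "$"+strconv.Itoa(m), 1)
	}
	return old
}

func main() {
	fmt.Println(ReplaceSQL("INSERT INTO test(n1, n2, n3) VALUES (?, ?, ?),(?, ?, ?)", "?"))
	// INSERT INTO test(n1, n2, n3) VALUES ($1, $2, $3),($4, $5, $6)
}
```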

Answer 9

Score: 2

I got pq.CopyIn working, and it's actually 2.4x faster than the string values/args approach (which was super helpful and an elegant solution, by the way, so thank you!)

I inserted 10 million test values of int, varchar into a struct and loaded them with the following function. I'm kinda new to Go, so bear with me...

func copyData(client *client.DbClient, dataModels []*dataModel) error {
    db := *client.DB
    txn, err := db.Begin()
    if err != nil {
        return err
    }
    defer txn.Commit()
    stmt, err := txn.Prepare(pq.CopyIn("_temp", "a", "b"))
    if err != nil {
        return err
    }
    for _, model := range dataModels {
        _, err := stmt.Exec(model.a, model.b)
        if err != nil {
            txn.Rollback()
            return err
        }
    }
    _, err = stmt.Exec()
    if err != nil {
        return err
    }
    err = stmt.Close()
    if err != nil {
        return err
    }
    return nil
}

Elapsed (string values/args): 1m30.60s.

Elapsed (CopyIn): 37.57s.

Answer 10

Score: 1

Batching is not possible via the interfaces available in database/sql. A particular database driver may support it separately, however. For instance, https://github.com/ziutek/mymysql appears to support batching with MySQL.

Answer 11

Score: 1

One more good library to look at, with chained syntax, is go-pg:

https://github.com/go-pg/pg/wiki/Writing-Queries#insert

Insert multiple books with a single query:

err := db.Model(book1, book2).Insert()

Answer 12

Score: 0

Here's a more generic version to generate the query and value args, based on the answers from @andrew-c and @mastercarl:

// bulk/insert.go

import (
    "strconv"
    "strings"
)

type ValueExtractor = func(int) []interface{}

func Generate(tableName string, columns []string, numRows int, postgres bool, valueExtractor ValueExtractor) (string, []interface{}) {
    numCols := len(columns)
    var queryBuilder strings.Builder
    queryBuilder.WriteString("INSERT INTO ")
    queryBuilder.WriteString(tableName)
    queryBuilder.WriteString("(")
    for i, column := range columns {
        queryBuilder.WriteString("\"")
        queryBuilder.WriteString(column)
        queryBuilder.WriteString("\"")
        if i < numCols-1 {
            queryBuilder.WriteString(",")
        }
    }
    queryBuilder.WriteString(") VALUES ")
    valueArgs := make([]interface{}, 0, numRows*numCols)
    for rowIndex := 0; rowIndex < numRows; rowIndex++ {
        queryBuilder.WriteString("(")
        for colIndex := 0; colIndex < numCols; colIndex++ {
            if postgres {
                queryBuilder.WriteString("$")
                queryBuilder.WriteString(strconv.Itoa(rowIndex*numCols + colIndex + 1))
            } else {
                queryBuilder.WriteString("?")
            }
            if colIndex < numCols-1 {
                queryBuilder.WriteString(",")
            }
        }
        queryBuilder.WriteString(")")
        if rowIndex < numRows-1 {
            queryBuilder.WriteString(",")
        }
        valueArgs = append(valueArgs, valueExtractor(rowIndex)...)
    }
    return queryBuilder.String(), valueArgs
}

// bulk/insert_test.go

import (
    "fmt"
    "strconv"
)

func valueExtractor(index int) []interface{} {
    return []interface{}{
        "trx-" + strconv.Itoa(index),
        "name-" + strconv.Itoa(index),
        index,
    }
}

func ExampleGeneratePostgres() {
    query, valueArgs := Generate("tbl_persons", []string{"transaction_id", "name", "age"}, 3, true, valueExtractor)
    fmt.Println(query)
    fmt.Println(valueArgs)
    // Output:
    // INSERT INTO tbl_persons("transaction_id","name","age") VALUES ($1,$2,$3),($4,$5,$6),($7,$8,$9)
    // [trx-0 name-0 0 trx-1 name-1 1 trx-2 name-2 2]
}

func ExampleGenerateOthers() {
    query, valueArgs := Generate("tbl_persons", []string{"transaction_id", "name", "age"}, 3, false, valueExtractor)
    fmt.Println(query)
    fmt.Println(valueArgs)
    // Output:
    // INSERT INTO tbl_persons("transaction_id","name","age") VALUES (?,?,?),(?,?,?),(?,?,?)
    // [trx-0 name-0 0 trx-1 name-1 1 trx-2 name-2 2]
}

(Note: valueArgs is a flat []interface{}, since the per-row slices are appended with ..., so Println prints a single flat list.)

Answer 13

Score: 0

If anyone is using Postgres and building strings for this, I highly recommend that you try the UNNEST function instead, which expands an array into a set of rows:

INSERT INTO "your_table" ("your_column") SELECT UNNEST($1::int2[])

Replace int2 with the type of the array you pass in as a parameter.
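For more than one column, UNNEST can take several arrays in parallel when used in the FROM clause. A hedged sketch (table, column names, and types are placeholders, not from the answer):

```sql
-- One array parameter per column; rows are zipped together positionally.
INSERT INTO "your_table" ("col_a", "col_b")
SELECT * FROM UNNEST($1::int2[], $2::text[]);
```

This keeps the statement text constant regardless of row count, so it stays friendly to prepared-statement caching.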

huangapple
  • Posted on 2012-09-19 07:16:28
  • Please keep this link when republishing: https://go.coder-hub.com/12486436.html