Using reflection with structs to build generic handler function

Question

I have some trouble building a function that can dynamically use parametrized structs. For that reason my code has 20+ functions that are similar except basically for one type that gets used. Most of my experience is with Java, where I'd just write basic generic functions, or pass a plain Object as the parameter (and use reflection from that point on). I need something similar in Go.

I have several types like:

// The List structs are mostly needed for json marshalling
type OrangeList struct {
    Oranges []Orange
}

type BananaList struct {
    Bananas []Banana
}

type Orange struct {
    Orange_id string
    Field_1 int
    // The fields are different for different types, I am simplifying the code example
}

type Banana struct {
    Banana_id string
    Field_1 int
    // The fields are different for different types, I am simplifying the code example
}

Then I have a function, basically one for each list type:

// In the end there are 20+ of these, the only difference is basically in two types! 
// This is very un-DRY!
func buildOranges(rows *sqlx.Rows) ([]byte, error) {
    oranges := OrangeList{}     // This type changes
    for rows.Next() {
        orange := Orange{}      // This type changes
        err := rows.StructScan(&orange)   // This can handle each case already, could also use reflect myself too
        checkError(err, "rows.Scan")
        oranges.Oranges = append(oranges.Oranges, orange)
    }
    checkError(rows.Err(), "rows.Err")
    jsontext, err := json.Marshal(oranges)
    return jsontext, err
}

Yes, I could change the sql library to use a more intelligent ORM or framework, but that's beside the point. I want to learn how to build a generic function that can handle this for all my different types.

I got this far, but it still doesn't work properly (the target isn't the expected struct, I think):

func buildWhatever(rows *sqlx.Rows, tgt interface{}) ([]byte, error) {
    tgtValueOf := reflect.ValueOf(tgt)
    tgtType := tgtValueOf.Type()
    targets := reflect.SliceOf(tgtValueOf.Type())
    for rows.Next() {
        target := reflect.New(tgtType)
        err := rows.StructScan(&target) // At this stage target still isn't a 1:1 similar struct, so the StructScan fails... It's some perverted "Value" object instead. Meh.
        // Removed appending to the list because the solutions for that would be similar
        checkError(err, "rows.Scan")
    }
    checkError(rows.Err(), "rows.Err")
    jsontext, err := json.Marshal(targets)
    return jsontext, err
}

So umm, I would need to pass the list type and the plain type as parameters, build one of each, and then the rest of my logic would probably be fixable quite easily.

Answer 1

Score: 2

It turns out there's an sqlx.StructScan(rows, &destSlice) function that will do your inner loop for you, given a pointer to a slice of the appropriate type. The sqlx docs mention caching the results of reflection operations, so it may have some additional optimizations compared to writing one yourself.
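
For example, here is a minimal, untested sketch of how that could collapse the per-type functions into one (buildList and dest are illustrative names; it marshals the bare slice rather than the OrangeList/BananaList wrapper, so you would still need a thin per-type wrapper if the outer JSON object matters):

// buildList scans every remaining row into dest, which must be a pointer to
// a slice of the right struct type (e.g. *[]Orange), then marshals the result.
func buildList(rows *sqlx.Rows, dest interface{}) ([]byte, error) {
    if err := sqlx.StructScan(rows, dest); err != nil {
        return nil, err
    }
    return json.Marshal(dest)
}

// Usage:
//   oranges := []Orange{}
//   jsontext, err := buildList(rows, &oranges)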

Sounds like the immediate question you're actually asking is "how do I get something out of my reflect.Value that rows.StructScan will accept?" The direct answer is target.Interface(); it should return an interface{} wrapping the *Orange, which you can pass directly to StructScan (no additional & operation needed). Then, I think targets = reflect.Append(targets, target.Elem()) will turn your target into a reflect.Value representing an Orange and append it to the slice; note that targets needs to be an actual slice value created with reflect.MakeSlice, since reflect.SliceOf only gives you a slice type. Finally, targets.Interface() should get you an interface{} wrapping a []Orange that json.Marshal understands. I say all these 'should's and 'I think's because I haven't tried that route.
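
Putting those pieces together, a rough, untested sketch of that reflect-based route might look like the following (names are illustrative; tgt is a zero value of the element type, e.g. Orange{}):

func buildWhatever(rows *sqlx.Rows, tgt interface{}) ([]byte, error) {
    tgtType := reflect.TypeOf(tgt)
    // an empty slice of the element type (e.g. []Orange), held as a reflect.Value
    targets := reflect.MakeSlice(reflect.SliceOf(tgtType), 0, 0)
    for rows.Next() {
        target := reflect.New(tgtType) // a *Orange, held as a reflect.Value
        // target.Interface() hands StructScan an interface{} wrapping the *Orange
        if err := rows.StructScan(target.Interface()); err != nil {
            return nil, err
        }
        targets = reflect.Append(targets, target.Elem())
    }
    if err := rows.Err(); err != nil {
        return nil, err
    }
    // targets.Interface() is an interface{} wrapping the []Orange for json.Marshal
    return json.Marshal(targets.Interface())
}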

Reflection, in general, is verbose and slow. Sometimes it's the best or only way to get something done, but it's often worth looking for a way to get your task done without it when you can.

So, if it works in your app, you can also convert Rows straight to JSON, without going through intermediate structs. Here's a sample program (requires sqlite3 of course) that turns sql.Rows into map[string]string and then into JSON. (Note it doesn't try to handle NULL, represent numbers as JSON numbers, or generally handle anything that won't fit in a map[string]string.)

package main

import (
    _ "code.google.com/p/go-sqlite/go1/sqlite3"

    "database/sql"
    "encoding/json"
    "os"
)

func main() {
    db, err := sql.Open("sqlite3", "foo")
    if err != nil {
        panic(err)
    }
    tryQuery := func(query string, args ...interface{}) *sql.Rows {
        rows, err := db.Query(query, args...)
        if err != nil {
            panic(err)
        }
        return rows
    }
    tryQuery("drop table if exists t")
    tryQuery("create table t(i integer, j integer)")
    tryQuery("insert into t values(?, ?)", 1, 2)
    tryQuery("insert into t values(?, ?)", 3, 1)

    // now query and serialize
    rows := tryQuery("select * from t")
    names, err := rows.Columns()
    if err != nil {
        panic(err)
    }
    // vals stores the values from one row
    vals := make([]interface{}, 0, len(names))
    for _, _ = range names {
        vals = append(vals, new(string))
    }
    // rowMaps stores all rows
    rowMaps := make([]map[string]string, 0)
    for rows.Next() {
        rows.Scan(vals...)
        // now make value list into name=>value map
        currRow := make(map[string]string)
        for i, name := range names {
            currRow[name] = *(vals[i].(*string))
        }
        // accumulating rowMaps is the easy way out
        rowMaps = append(rowMaps, currRow)
    }
    json, err := json.Marshal(rowMaps)
    if err != nil {
        panic(err)
    }
    os.Stdout.Write(json)
}

In theory, you could build this to do fewer allocations by reusing the same rowMap each time and using a json.Encoder to append each row's JSON to a buffer. You could go a step further and not use a rowMap at all, just the lists of names and values. I should say I haven't compared the speed against a reflect-based approach, though I know reflect is slow enough that it might be worth comparing them if you can put up with either strategy.
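
For instance, a rough, untested sketch of that idea (encodeRows is an illustrative name; it keeps the same string-only limitations as the sample above, and names is the rows.Columns() result):

// encodeRows streams each row to w as JSON, reusing one map and one set of
// scan targets instead of accumulating every row in memory. The surrounding
// '[', ',' and ']' are written by hand so the output stays a valid JSON array.
func encodeRows(w io.Writer, rows *sql.Rows, names []string) error {
    vals := make([]interface{}, len(names))
    for i := range vals {
        vals[i] = new(string)
    }
    enc := json.NewEncoder(w)
    rowMap := make(map[string]string, len(names))
    if _, err := w.Write([]byte{'['}); err != nil {
        return err
    }
    first := true
    for rows.Next() {
        if err := rows.Scan(vals...); err != nil {
            return err
        }
        for i, name := range names {
            rowMap[name] = *(vals[i].(*string))
        }
        if !first {
            if _, err := w.Write([]byte{','}); err != nil {
                return err
            }
        }
        first = false
        if err := enc.Encode(rowMap); err != nil {
            return err
        }
    }
    if _, err := w.Write([]byte{']'}); err != nil {
        return err
    }
    return rows.Err()
}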
