How to efficiently parallelize an array list and control the parallelism?

huangapple · go · 78 comments

Question

I have an array of resource IDs that I need to loop over in parallel. For each resource I generate a URL and put it into a map whose key is the resourceId and whose value is the URL.

The code below does the job, but I am not sure it is the right approach. I am using sizedwaitgroup to parallelize over the resourceId list, and a lock on the map while writing to it. I suspect this is not efficient code, since using a lock together with sizedwaitgroup will cost some performance.

What is a better, more efficient way to do this? Should I use channels here? I want to control the degree of parallelism rather than run as many goroutines as there are entries in the resourceId list. If URL generation fails for any resourceId, I want to log an error for that resourceId without disrupting the other goroutines that are generating URLs for the other resourceIds.

For example: if there are 10 resources and 2 fail, log errors for those 2, and the map should contain entries for the remaining 8.

// run up to 20 goroutines in parallel
swg := sizedwaitgroup.New(20)
var mutex = &sync.Mutex{}
start := time.Now()
m := make(map[string]*customerPbV1.CustomerResponse)
for _, resources := range resourcesList {
  swg.Add()
  go func(resources string) {
    defer swg.Done()
    customerUrl, err := us.GenerateUrl(clientId, resources, appConfig)
    if err != nil {
      errs.NewWithCausef(err, "Could not generate the url for %s", resources)
    }
    mutex.Lock()
    m[resources] = customerUrl
    mutex.Unlock()
  }(resources)
}
swg.Wait()

elapsed := time.Since(start)
fmt.Println(elapsed)

**Note:** the code above will be called at high throughput from multiple reader threads, so it needs to perform well.
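For reference, the pattern in the snippet above can be written with nothing but the standard library by using a buffered channel as a semaphore in place of sizedwaitgroup. This is a minimal, self-contained sketch, not the poster's actual code: `generateURL` is a hypothetical stand-in for `us.GenerateUrl`, and IDs prefixed with `bad-` are made to fail.

```go
// Bounded parallelism with a buffered channel acting as a semaphore.
package main

import (
	"fmt"
	"strings"
	"sync"
)

// generateURL stands in for us.GenerateUrl; IDs prefixed "bad-" fail.
func generateURL(resourceID string) (string, error) {
	if strings.HasPrefix(resourceID, "bad-") {
		return "", fmt.Errorf("could not generate the url for %s", resourceID)
	}
	return "https://example.com/" + resourceID, nil
}

// buildURLs runs at most limit generateURL calls concurrently and
// returns a map of the successes; failures are logged and skipped.
func buildURLs(resources []string, limit int) map[string]string {
	var (
		wg  sync.WaitGroup
		mu  sync.Mutex
		m   = make(map[string]string, len(resources))
		sem = make(chan struct{}, limit) // semaphore with `limit` tokens
	)
	for _, r := range resources {
		wg.Add(1)
		sem <- struct{}{} // acquire a token; blocks once limit is reached
		go func(r string) {
			defer wg.Done()
			defer func() { <-sem }() // release the token
			u, err := generateURL(r)
			if err != nil {
				fmt.Println("error:", err) // log; don't stop the others
				return
			}
			mu.Lock()
			m[r] = u
			mu.Unlock()
		}(r)
	}
	wg.Wait()
	return m
}

func main() {
	m := buildURLs([]string{"a", "bad-b", "c", "d"}, 2)
	fmt.Println(len(m)) // 3: bad-b fails and is skipped
}
```

Acquiring the token *before* spawning the goroutine is what caps the in-flight work, which is essentially what sizedwaitgroup does internally.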


Answer 1

Score: 4

I'm not sure what sizedwaitgroup is, and it's not explained, but overall this approach doesn't look very typical of Go. For that matter, "best" is a matter of opinion, but the most typical approach in Go would be something along these lines:

func main() {
    wg := new(sync.WaitGroup)
    start := time.Now()
    numWorkers := 20
    m := make(map[string]*customerPbV1.CustomerResponse)
    work := make(chan string)
    results := make(chan result)
    for i := 0; i < numWorkers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done() // without this, wg.Wait() would block forever
            worker(work, results)
        }()
    }
    go func() {
        for _, resources := range resourcesList {
            work <- resources
        }
        close(work)
    }()

    go func() {
        wg.Wait()
        close(results)
    }()

    for result := range results {
        m[result.resources] = result.response
    }

    elapsed := time.Since(start)
    fmt.Println(elapsed)
}

type result struct {
    resources string
    response  *customerPbV1.CustomerResponse
}

func worker(ch chan string, r chan result) {
    for w := range ch {
        customerUrl, err := us.GenerateUrl(clientId, w, appConfig)
        if err != nil {
            errs.NewWithCausef(err, "Could not generate the url for %s", w)
            continue
        }
        r <- result{w, customerUrl}
    }
}

(Though, based on the name, I would assume errs.NewWithCausef doesn't actually handle errors but returns one, in which case the code above drops them on the floor, and a proper solution would have an additional chan error for handling errors:

func main() {
    wg := new(sync.WaitGroup)
    start := time.Now()
    numWorkers := 20
    m := make(map[string]*customerPbV1.CustomerResponse)
    work := make(chan string)
    results := make(chan result)
    errors := make(chan error)
    for i := 0; i < numWorkers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done() // without this, wg.Wait() would block forever
            worker(work, results, errors)
        }()
    }

    go func() {
        for _, resources := range resourcesList {
            work <- resources
        }
        close(work)
    }()

    go func() {
        wg.Wait()
        close(results)
        close(errors)
    }()

    go func() {
        for err := range errors {
            // handle the error, e.g. log it
        }
    }()

    for result := range results {
        m[result.resources] = result.response
    }

    elapsed := time.Since(start)
    fmt.Println(elapsed)
}

type result struct {
    resources string
    response  *customerPbV1.CustomerResponse
}

func worker(ch chan string, r chan result, errCh chan error) {
    for w := range ch {
        customerUrl, err := us.GenerateUrl(clientId, w, appConfig)
        if err != nil {
            // the parameter must not be named errs, or it would shadow the errs package
            errCh <- errs.NewWithCausef(err, "Could not generate the url for %s", w)
            continue
        }
        r <- result{w, customerUrl}
    }
}

Answer 2

Score: 1

I have created example code with comments; please read the comments.

> Note: the query function sleeps for 1 second.

package main

import (
	"errors"
	"fmt"
	"log"
	"math/rand"
	"runtime"
	"strconv"
	"sync"
	"time"
)

type Result struct {
	resource string
	val      int
	err      error
}

/*
CHANGE the Result struct to the following;
the result struct collects everything needed to build the map:
type Result struct {
	resources   string
	customerUrl *customerPbV1.CustomerResponse
	err         error
}
*/

// const numWorker = 8

func main() {
	now := time.Now()
	rand.Seed(time.Now().UnixNano())
	m := make(map[string]int)
	// m := make(map[string]*customerPbV1.CustomerResponse) // CHANGE TO THIS

	numWorker := runtime.NumCPU()
	fmt.Println(numWorker)
	chanResult := make(chan Result)

	go func() {
		for i := 0; i < 20; i++ {
			/*
			 customerUrl, err := us.GenerateUrl(clientId, resources, appConfig)
			 assume i is the resource:
			 chanResult <- Result{resource: strconv.Itoa(i)}
			*/
			chanResult <- Result{ // blocks until a worker consumes chanResult
				resource: strconv.Itoa(i),
			}
		}
		close(chanResult)
	}()

	var wg sync.WaitGroup
	cr := make(chan Result)
	wg.Add(numWorker)

	go func() {
		wg.Wait()
		close(cr) // NOTE: don't forget to close cr
	}()

	go func() {
		for i := 0; i < numWorker; i++ { // start the worker goroutines
			go func(x int) {
				for job := range chanResult { // receives the sends above
					log.Println("worker", x, "working on", job.resource)
					x, err := query(job.resource) // TODO: customerUrl, err := us.GenerateUrl(clientId, resources, appConfig)
					cr <- Result{ // blocks until the main goroutine consumes it
						resource: job.resource,
						val:      x,
						err:      err,
					}
				}
				wg.Done()
			}(i)
		}
	}()

	counterTotal := 0
	counterSuccess := 0
	for res := range cr { // consumed here, which unblocks the worker sends
		if res.err != nil {
			log.Printf("error found %s. stack trace: %s", res.resource, res.err)
		} else {
			m[res.resource] = res.val // NOTE: save to the map
			counterSuccess++
		}
		counterTotal++
	}
	log.Printf("%d/%d jobs succeeded", counterSuccess, counterTotal)
	fmt.Println("final result:", m)
	fmt.Println("len of m:", len(m))

	fmt.Println(runtime.NumGoroutine())
	fmt.Println(time.Since(now))
}

func query(s string) (int, error) {
	time.Sleep(time.Second)
	i, err := strconv.Atoi(s)
	if err != nil {
		return 0, err
	}

	if i%3 == 0 {
		return 0, errors.New("i is divisible by 3")
	}
	ms := i + 500 + rand.Intn(500)
	return ms, nil
}

playground: https://go.dev/play/p/LeyE9n1hh81


Answer 3

Score: 0

Here is a pure channel solution (playground).
I think the performance really depends on GenerateUrl (generateURL in my code).
One more thing I would like to point out: the correct term for this is concurrency, not parallelism.

package main

import (
	"errors"
	"log"
	"strconv"
	"strings"
)

type result struct {
	resourceID, url string
	err             error
}

func generateURL(resourceID string) (string, error) {
	if strings.HasPrefix(resourceID, "error-") {
		return "", errors.New(resourceID)
	}
	return resourceID, nil
}

func main() {
	// These are the resource IDs
	resources := make([]string, 10000)
	for i := 0; i < 10000; i++ {
		s := strconv.Itoa(i)
		if i%10 == 0 {
			resources[i] = "error-" + s
		} else {
			resources[i] = "id-" + s
		}
	}

	numOfChannel := 20
	// Results are sent to the resourceMap through this channel
	ch := make(chan result, 10)
	// Each goroutine receives resource IDs from one of these channels
	channels := make([]chan string, numOfChannel)
	// Once all resources are processed, this channel signals the goroutines to exit
	done := make(chan struct{})

	for i := range channels {
		c := make(chan string)
		channels[i] = c

		go func() {
			for {
				select {
				case rid := <-c:
					u, err := generateURL(rid)
					ch <- result{rid, u, err}
				case <-done:
					// done is closed once all results are collected;
					// return here (a bare break would only exit the select)
					return
				}
			}
		}()
	}

	go func() {
		for i, r := range resources {
			channels[i%numOfChannel] <- r
		}
	}()

	resourceMap := make(map[string]string)
	i := 0
	for p := range ch {
		if p.err != nil {
			log.Println(p.resourceID, p.err)
		} else {
			resourceMap[p.resourceID] = p.url
		}
		i++
		if i == len(resources) { // every resource yields exactly one result
			break
		}
	}

	close(done)
}

huangapple
  • Posted on 2022-02-25 05:02:23
  • Please keep this link when reposting: https://go.coder-hub.com/71258215.html