How to send a response back if a function is locked using mutex.Lock() in Golang?


Question

I have this function.

func (s *eS) Post(param *errorlogs.Q) (*errorlogs.Error, *errors.RestErr) {
    //sub := q.Get("sub")
    s.mu.Lock()
    utime := int32(time.Now().Unix())

    // Open our jsonFile
    jsonFile, errFile := getlist(param.Id)
    // if getlist returns an error then handle it
    if errFile != nil {
        return nil, errFile
    }

    jsonFile, err := os.Open(dir + "/File.json")
    // if os.Open returns an error then handle it
    if err != nil {
        return nil, errors.NewNotFoundError("Bad File request")
    }
    // read our opened jsonFile as a byte array.
    byteValue, _ := ioutil.ReadAll(jsonFile)
    // we initialize our  model
    var errorFile errorlogs.Error_File
    // we unmarshal our byteArray which contains our
    // jsonFile's content into 'errorFile' which we defined above
    json.Unmarshal(byteValue, &errorFile)
    // defer the closing of our jsonFile so that we can parse it later on
    defer jsonFile.Close()
    // An object to copy the required data from the response
    var id int32
    if len(errorFile.Error) == 0 {
        id = 0
    } else {
        id = errorFile.Error[len(errorFile.Error)-1].ID
    }

    newValue := &errorlogs.Error{
        ID:    id + 1,
        Utime: utime,
    }

    errorFile.Error = append(errorFile.Error, *newValue)
    file, err := json.Marshal(errorFile)
    if err != nil {
        return nil, errors.NewInternalServerError("Unable to json marshal file")
    }
    err = ioutil.WriteFile(dir+"/File.json", file, 0644)
    if err != nil {
        return nil, errors.NewInternalServerError("Unable to write file")
    }
    s.mu.Unlock()

    return newValue, nil

}

Here I am locking this function against concurrent requests, so that if one client is already writing to the file, other clients are not allowed to write to it at the same time. But now I'm confused about what mutex.Lock() does to all the other requests while the lock is held. Does it make the other clients wait? Or does it just ignore them? Do we have any way of sending some kind of response back to those clients? Or do we let the other clients wait and then allow them to access this function?


Answer 1

Score: 6

When a mutex is locked, all other calls to Mutex.Lock() block until Mutex.Unlock() is called.

So while your handler is running (and holding the mutex), all other requests are blocked at the Lock() call.
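
For illustration (this toy program is not part of the original answer), the second Lock() below simply blocks until the first holder calls Unlock():

package main

import (
    "fmt"
    "sync"
    "time"
)

func main() {
    var mu sync.Mutex

    mu.Lock()
    go func() {
        fmt.Println("goroutine: waiting for the lock...")
        mu.Lock() // blocks here until main calls Unlock()
        fmt.Println("goroutine: got the lock")
        mu.Unlock()
    }()

    time.Sleep(2 * time.Second) // hold the lock for a while
    fmt.Println("main: releasing the lock")
    mu.Unlock()

    time.Sleep(time.Second) // give the goroutine time to print
}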

**Note:** if your handler does not complete normally, because you return early (using a return statement) or it panics, the mutex stays locked, and all further requests will block. This is exactly what happens in your Post() function above: every early return (for example when getlist() or os.Open() fails) returns while s.mu is still locked.

A good practice is to unlock the mutex with defer, right after it is locked:

s.mu.Lock()
defer s.mu.Unlock()

This ensures Unlock() is called no matter how your function ends (normally, via an early return, or via a panic).

Try to hold the lock for as little time as possible, to minimize how long other requests are blocked. While it may be convenient to lock right as you enter the handler and unlock just before returning, if you don't use the protected resource for the whole "lifetime" of the handler, lock and unlock only around the code that uses the shared resource. For example, if you want to protect concurrent access to a file: lock the mutex, read / write the file, and unlock the mutex as soon as you're done with it. What you do with the data you read, and how you assemble and send your response, should not block other requests.

Of course, when you use defer to unlock, the unlock may not run as early as it should (i.e. as soon as you're done with the shared resource). So in some cases it may be OK not to use defer, or the code that accesses the shared resource can be moved into a named or unnamed (anonymous) function so that defer can still be used, as in the sketch below.
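
A minimal sketch of that pattern (not part of the original answer; the File.json path and the appendLine helper are made up for illustration): only the file access holds the lock, and the response is written after the lock has been released.

package main

import (
    "io"
    "log"
    "net/http"
    "os"
    "sync"
)

var mu sync.Mutex

// appendLine appends one line to the shared file while holding the lock.
// Putting the critical section in its own function lets us use defer and
// still release the lock before the HTTP response is assembled and sent.
func appendLine(path, line string) error {
    mu.Lock()
    defer mu.Unlock() // released as soon as this function returns

    f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
    if err != nil {
        return err
    }
    defer f.Close()

    _, err = f.WriteString(line + "\n")
    return err
}

func handler(w http.ResponseWriter, r *http.Request) {
    if err := appendLine("File.json", "new entry"); err != nil {
        http.Error(w, "unable to write file", http.StatusInternalServerError)
        return
    }
    // The lock is already released here; writing the response
    // does not block other requests.
    io.WriteString(w, "Done")
}

func main() {
    http.HandleFunc("/", handler)
    log.Fatal(http.ListenAndServe(":8080", nil))
}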

sync.Mutex does not support "peeking" at its state, nor a "try-lock" operation. This means that with sync.Mutex you cannot signal the client that it has to wait because its request is waiting for another request to complete. (Since Go 1.18, sync.Mutex does have a TryLock() method, but the channel-based approach below additionally supports timeouts and context cancellation.) If you need such functionality, you can use channels. A buffered channel with a capacity of 1 can provide it: the "lock" operation is sending a value on the channel, and the "unlock" operation is receiving a value from the channel. So far so good. The "try-lock" operation can then be a "conditional" send: using a select statement with a default branch, you can detect that you cannot lock right now because the lock is already held, do something else instead (or in the meantime), and retry locking later.

Here's an example of how this could look:

var lock = make(chan struct{}, 1)

func handler(w http.ResponseWriter, r *http.Request) {
    // Try locking:
    select {
    case lock <- struct{}{}:
        // Success: proceed
        defer func() { <-lock }() // unlock deferred
    default:
        // Another handler would block us, send back an "error"
        http.Error(w, "Try again later", http.StatusTooManyRequests)
        return
    }

    time.Sleep(time.Second * 2) // simulate a long computation
    io.WriteString(w, "Done")
}

func main() {
    http.HandleFunc("/", handler)
    log.Fatal(http.ListenAndServe(":8080", nil))
}

The simple example above returns an error immediately if another request holds the lock. You could choose to do different things here: you could put the attempt in a loop and retry a few times before giving up and returning an error (sleeping a little between iterations), as in the sketch after this paragraph. You could also use a timeout when attempting to lock, and only accept "failure" if you cannot get the lock within some amount of time (see time.After() and context.WithTimeout()). Of course, if we use a timeout of some sort, the default branch must be removed (the default branch is chosen immediately if none of the other cases can proceed immediately).
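
A minimal sketch of the retry variant (not part of the original answer; the retry count and sleep interval are arbitrary):

package main

import (
    "io"
    "log"
    "net/http"
    "time"
)

var lock = make(chan struct{}, 1)

func handler(w http.ResponseWriter, r *http.Request) {
    // Retry the non-blocking lock a few times before giving up:
    locked := false
    for i := 0; i < 5 && !locked; i++ {
        select {
        case lock <- struct{}{}:
            locked = true // got the lock
        default:
            time.Sleep(100 * time.Millisecond) // held by someone else: wait, then retry
        }
    }
    if !locked {
        http.Error(w, "Try again later", http.StatusTooManyRequests)
        return
    }
    defer func() { <-lock }() // unlock deferred

    time.Sleep(time.Second * 2) // simulate a long computation
    io.WriteString(w, "Done")
}

func main() {
    http.HandleFunc("/", handler)
    log.Fatal(http.ListenAndServe(":8080", nil))
}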

And while we're at it (the timeout): since we're already using select, as a bonus we can also monitor the request's context. If it is cancelled, we should terminate and return early. We can do that by adding a case that receives from the context's done channel, such as case <-r.Context().Done():.

Here's an example of how the timeout and the context monitoring can be done with a simple select:

var lock = make(chan struct{}, 1)

func handler(w http.ResponseWriter, r *http.Request) {
    // Wait 1 second at most:
    ctx, cancel := context.WithTimeout(r.Context(), time.Second)
    defer cancel()

    // Try locking:
    select {
    case lock <- struct{}{}:
        // Success: proceed
        defer func() { <-lock }() // unlock deferred
    case <-ctx.Done():
        // Timeout or context cancelled
        http.Error(w, "Try again later", http.StatusTooManyRequests)
        return
    }

    time.Sleep(time.Second * 2) // simulate a long computation
    io.WriteString(w, "Done")
}

huangapple
  • Posted on June 10, 2021 at 17:18:52
  • Original link: https://go.coder-hub.com/67918285.html