Solution to remove lock after Application server redeployment
Question
Before redeploying the application war, I checked the xd.lck file from one of the environment paths:
Private property of Exodus: 20578@localhost
jetbrains.exodus.io.LockingManager.lock(LockingManager.kt:89)
I'm testing on both Nginx Unit and Payara Server to rule out the possibility that this is an isolated case with Unit.
And process 20578 shows in htop as:
20578 root 20 0 2868M 748M 7152 S 0.7 75.8 14:05.75 /usr/lib/jvm/zulu-8-amd64/bin/java -cp /
After redeployment finished successfully, accessing the web application throws:
java.lang.Thread.run(Thread.java:748)
at jetbrains.exodus.log.Log.tryLock(Log.kt:799)
at jetbrains.exodus.log.Log.<init>(Log.kt:120)
at jetbrains.exodus.env.Environments.newLogInstance(Environments.java:142)
at jetbrains.exodus.env.Environments.newLogInstance(Environments.java:121)
at jetbrains.exodus.env.Environments.newLogInstance(Environments.java:10
Checking the same xd.lck file again shows the same content, meaning the "lock is not immediately released", contrary to what is described here.
My assumption for this specific case with Payara Server (based on GlassFish) is that the server does not kill the previous process even after the redeployment has completed, perhaps to allow "zero-downtime" redeployment; I'm not sure, so Payara experts can correct me here.
Checking with htop, process 20578 is still running even after the redeployment.
Since most application servers behave this way, what would be the best solution and/or workaround with Xodus so that we don't have to manually delete each environment's lock file (if it can even be deleted) every time we redeploy?
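For context, the environments are opened with something like the sketch below (the directory path and store name are placeholders, not the real ones). Environments.newInstance() is the call that ends up in Log.tryLock from the stack trace above, and it fails while the previous JVM (PID 20578) still holds xd.lck:

import jetbrains.exodus.entitystore.PersistentEntityStore;
import jetbrains.exodus.entitystore.PersistentEntityStores;
import jetbrains.exodus.env.Environment;
import jetbrains.exodus.env.Environments;

public class XodusStartup {
    public static PersistentEntityStore open() {
        // Opening the Environment acquires <dir>/xd.lck; the call chain goes through
        // Environments.newLogInstance -> Log.tryLock, as shown in the stack trace above.
        Environment env = Environments.newInstance("/var/data/xodus/app"); // hypothetical path
        return PersistentEntityStores.newInstance(env, "appStore");        // hypothetical store name
    }
}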
Answer 1
Score: 0
The solution is for the Java application to look for the process locking the file and then send it a kill -15 signal, for example, so that the Java process handles the signal gracefully and is able to close its environments:
// Get all PersistentEntityStore's
entityStoreMap.forEach((dir, entityStore) -> {
    entityStore.getEnvironment().close();
    entityStore.close();
});
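For the "handle the signal and close environments" part, a rough sketch (assuming entityStoreMap is a Map<String, PersistentEntityStore> kept by the application) is to register a JVM shutdown hook, which runs when the process receives kill -15; in a war deployment, the equivalent place for an undeploy is a ServletContextListener's contextDestroyed():

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import jetbrains.exodus.entitystore.PersistentEntityStore;

public class XodusShutdown {

    // Assumed application-wide registry of open stores, keyed by environment directory.
    static final Map<String, PersistentEntityStore> entityStoreMap = new ConcurrentHashMap<>();

    public static void registerShutdownHook() {
        // Runs when the JVM receives SIGTERM (kill -15), so the previous process
        // can release each environment's xd.lck before it exits.
        Runtime.getRuntime().addShutdownHook(new Thread(() ->
                entityStoreMap.forEach((dir, entityStore) -> {
                    // Same cleanup as in the snippet above.
                    entityStore.getEnvironment().close();
                    entityStore.close();
                })));
    }
}

With a hook like this in place, the redeploy step (or the new deployment itself) can read the PID from xd.lck, as in the "20578@localhost" content shown in the question, send kill -15 to that process, and then open the environments once the lock has been released.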
Comments