Why does my Flask application only work properly on AKS with one replica?
Question

When I deploy a simple Flask application on AKS with a single replica, the application runs as expected. With two replicas, however, it does not: after a successful sign-in the user is not redirected to the home page, although it occasionally works.

How should a Flask application with two replicas be managed?

Here are my Kubernetes manifests for reference.

Thanks
Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontpage
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontpage
  template:
    metadata:
      labels:
        app: frontpage
    spec:
      containers:
        - name: frontpage
          image: ***.azurecr.io/frontpage:latest
          limits:
          ports:
            - containerPort: 5000
Service.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontpage
  namespace: default
spec:
  selector:
    app: frontpage
  ports:
    - name: http
      port: 80
      targetPort: 5000
  type: ClusterIP
Ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontpage
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - www.***.com
      secretName: ingress-tls-csi
  rules:
    - host: www.***.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontpage
                port:
                  number: 80
Answer 1

Score: 0
Try removing the annotation nginx.ingress.kubernetes.io/rewrite-target, or add a capture group to it.
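For illustration, a capture-group variant of the Ingress from the question might look like the sketch below (a minimal sketch only; host and service names are taken from the question, and ImplementationSpecific is used because the path is a regex):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontpage
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    # rewrite to the captured path instead of rewriting every request to /
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  ingressClassName: nginx
  rules:
    - host: www.***.com
      http:
        paths:
          - path: /(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: frontpage
                port:
                  number: 80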
UPDATE

Your application uses a filesystem session store and therefore holds state (see SESSION_TYPE). That state is not shared between your replicas.

Can you change the session backend? For testing, you could also share the session directory between replicas with a volume.
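A minimal sketch of switching to a shared session backend, assuming the Flask-Session extension is installed and a Redis instance is reachable under a hypothetical in-cluster service name "redis":

import redis
from flask import Flask
from flask_session import Session

app = Flask(__name__)
app.config["SECRET_KEY"] = "replace-me"   # must be identical on every replica
app.config["SESSION_TYPE"] = "redis"      # instead of "filesystem"
app.config["SESSION_REDIS"] = redis.Redis(host="redis", port=6379)  # hypothetical in-cluster Redis
Session(app)

With the session data in Redis, any replica can serve the request that follows the sign-in redirect, so no session affinity is needed at the Service or Ingress level.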
Answer 2

Score: 0
Thank you for the update. I have configured app_config.py as follows:
SESSION_TYPE = "filesystem"
SESSION_FILE_DIR = "/mnt/blob"
and updated my Kubernetes manifests with the volume details:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontpage
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontpage
  template:
    metadata:
      labels:
        app: frontpage
    spec:
      containers:
        - name: frontpage
          image: ****.azurecr.io/frontpage:latest
          resources:
            limits:
              cpu: 250m
              memory: 1000Gi
            requests:
              cpu: 100m
              memory: 128Mi
          volumeMounts:
            - name: flask-session-volume
              mountPath: "/mnt/blob"
      volumes:
        - name: flask-session-volume
          persistentVolumeClaim:
            claimName: pvc-blob
---
apiVersion: v1
kind: Service
metadata:
  name: frontpage
spec:
  selector:
    app: frontpage
  ports:
    - name: http
      port: 80
      targetPort: 5000
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontpage
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - ***.host.com
      secretName: ingress-tls-csi
  rules:
    - host: ***.host.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontpage
                port:
                  number: 80
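The Deployment above references claimName: pvc-blob, which is not shown. A hypothetical claim could look like the sketch below; the storage class name assumes the AKS Blob CSI driver is enabled (it is an assumption, not taken from the question), and ReadWriteMany is needed so both replicas can mount the same session directory:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-blob
spec:
  accessModes:
    - ReadWriteMany                           # both replicas mount the same session directory
  storageClassName: azureblob-fuse-premium    # assumed AKS Blob CSI storage class
  resources:
    requests:
      storage: 1Gi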