How to use nginx as a sidecar container for IIS in Kubernetes?
Question
I'm getting a strange result when running nginx and an IIS server together in a single Kubernetes pod. It appears to be an issue with nginx.conf. If I bypass nginx and go directly to IIS, I see the standard landing page.
However, when I go through the reverse proxy instead, only a partial page is rendered.
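For reference, one way to reproduce both cases from a local machine is kubectl port-forward against the pod's two ports; the pod name here is a placeholder for whatever the Deployment below creates:

    # Placeholder pod name; substitute the pod created by the Deployment below.
    kubectl port-forward pod/iis-nginx-pod 8080:80   # bypasses nginx, straight to IIS
    kubectl port-forward pod/iis-nginx-pod 8081:81   # goes through the nginx reverse proxy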
Here are the files:
nginx.conf:
events {
    worker_connections 4096;  ## Default: 1024
}

http {
    server {
        listen 81;

        # Using a variable prevents nginx from resolving the upstream hostname at
        # startup, which otherwise causes a container failure/restart loop because
        # nginx starts faster than the IIS server.
        set $target "http://127.0.0.1:80/";

        location / {
            proxy_pass $target;
        }
    }
}
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    ...
  name: ...
spec:
  replicas: 1
  selector:
    matchLabels:
      pod: ...
  template:
    metadata:
      labels:
        pod: ...
      name: ...
    spec:
      containers:
      - image: claudiubelu/nginx:1.15-1-windows-amd64-1809
        name: nginx-reverse-proxy
        volumeMounts:
        - mountPath: "C:/usr/share/nginx/conf"
          name: nginx-conf
        imagePullPolicy: Always
      - image: some-repo/proprietary-server-including-iis
        name: ...
        imagePullPolicy: Always
      nodeSelector:
        kubernetes.io/os: windows
      imagePullSecrets:
      - name: secret1
      volumes:
      - name: nginx-conf
        persistentVolumeClaim:
          claimName: pvc-nginx
Mapping the nginx.conf file from a volume is just a convenient way to rapidly test different configs. A new config can be swapped in with kubectl cp ./nginx/conf nginx-busybox-pod:/mnt/nginx/.
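A minimal sketch of the full swap-and-reload loop, assuming a deployment named iis-with-proxy (a placeholder) and that the nginx binary is on the container's PATH:

    # Copy the edited config into the shared PVC via the helper pod:
    kubectl cp ./nginx/conf nginx-busybox-pod:/mnt/nginx/

    # Signal nginx to re-read its configuration without restarting the pod.
    # "iis-with-proxy" is a placeholder deployment name; the container name
    # matches the manifest above.
    kubectl exec deploy/iis-with-proxy -c nginx-reverse-proxy -- nginx -s reload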
Busybox pod (used to access the PVC):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-busybox-pod
  namespace: default
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "360000"
    imagePullPolicy: Always
    name: busybox
    volumeMounts:
    - name: nginx-conf
      mountPath: "/mnt/nginx/conf"
  restartPolicy: Always
  volumes:
  - name: nginx-conf
    persistentVolumeClaim:
      claimName: pvc-nginx
  nodeSelector:
    kubernetes.io/os: linux
And lastly the PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nginx
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
  storageClassName: azurefile
Any ideas why?
Answer 1
Score: 1
After some testing, here is a working nginx.conf -
events {
    worker_connections 4096;  # carried over from the question's config; nginx won't start without an events block
}

http {
    server {
        listen 81;
        set $target "http://127.0.0.1:80";

        location / {
            proxy_pass $target;
            proxy_set_header Host $host;
        }
    }
}
- New directive: proxy_set_header Host $host;
- Trailing slash removed from the $target variable used by the proxy_pass directive.
- (Specific to my application) Other endpoints on the server are more reliably reached using $host:$server_port in place of $host, as sketched below. This is caused by the app server redirecting incoming requests to different URIs, losing the proxy's port (81) in the process.
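A sketch of that variant, assuming the rest of the config stays as above; only the Host header value changes:

    location / {
        proxy_pass $target;
        # Keep the proxy's own port (81) in the Host header so redirects issued
        # by the app server point back at the proxy rather than at port 80.
        proxy_set_header Host $host:$server_port;
    }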