istio - connection refused
Question
I am getting a "connection refused" error when trying to connect to a k8s workload.
This is a custom installation of k8s using kubeadm.
Domain: example.com resolves to the IP address of the k8s server.
I have deployed a sample pod (nginx) with an accompanying service. I can see that it works by navigating to the cluster ip (the internal 10.0.0.0/24 range has been routed to my k8s master node so it is accessible directly).
On the master node I can't see anything listening on port 80 using netstat.
I can also run a custom web server on port 80 (using python3 -m http.server 80), which succeeds in serving the local directory, so no other process is occupying the port.
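For reference, the checks looked roughly like this (a sketch; exact netstat flags may differ):
$ sudo netstat -tlnp | grep ':80'    # nothing listening on :80
$ python3 -m http.server 80          # binds fine, so the port is free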
Ask me anything. I will provide logs.
Kubernetes
- v1.27.3
Istio
- client version: 1.17.1
- control plane version: 1.18.0
- data plane version: 1.18.0 (1 proxies)
gateway.yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: gw-foo
  namespace: default
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - example.com
    port:
      name: http
      number: 80
      protocol: HTTP
virtualservice.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: plex-vs
  namespace: default
spec:
  gateways:
  - default/gw-foo
  hosts:
  - example.com
  http:
  - match:
    - port: 80
    name: foo
    route:
    - destination:
        host: foo-svc.some-namespace.svc.cluster.local
        port:
          number: 30000
Custom kubeadm-config.yaml
# kubeadm-config.yaml
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
kubernetesVersion: v1.27.3
networking:
  podSubnet: "10.3.0.0/24"
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
#serverTLSBootstrap: true
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "192.168.1.200"
Istio Ingress status
$ kubectl -n istio-ingress get deployment istio-ingressgateway
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
istio-ingressgateway   1/1     1            1           47h
$ kubectl -n istio-ingress describe deployments.apps istio-ingressgateway
Name:                   istio-ingressgateway
Namespace:              istio-ingress
CreationTimestamp:      Tue, 11 Jul 2023 21:55:48 +0300
Labels:                 app=istio-ingressgateway
                        app.kubernetes.io/managed-by=Helm
                        app.kubernetes.io/name=istio-ingressgateway
                        app.kubernetes.io/version=1.18.0
                        helm.sh/chart=gateway-1.18.0
                        istio=ingressgateway
Annotations:            deployment.kubernetes.io/revision: 1
                        meta.helm.sh/release-name: istio-ingressgateway
                        meta.helm.sh/release-namespace: istio-ingress
Selector:               app=istio-ingressgateway,istio=ingressgateway
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           app=istio-ingressgateway
                    istio=ingressgateway
                    sidecar.istio.io/inject=true
  Annotations:      inject.istio.io/templates: gateway
                    prometheus.io/path: /stats/prometheus
                    prometheus.io/port: 15020
                    prometheus.io/scrape: true
                    sidecar.istio.io/inject: true
  Service Account:  istio-ingressgateway
  Containers:
   istio-proxy:
    Image:      auto
    Port:       15090/TCP
    Host Port:  0/TCP
    Limits:
      cpu:     2
      memory:  1Gi
    Requests:
      cpu:        100m
      memory:     128Mi
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Progressing    True    NewReplicaSetAvailable
  Available      True    MinimumReplicasAvailable
OldReplicaSets:  <none>
NewReplicaSet:   istio-ingressgateway-778d69499b (1/1 replicas created)
Events:          <none>
Istioctl analyze output
$ istioctl analyze -A
Warning [IST0108] (Pod istio-ingress/istio-ingressgateway-778d69499b-g6mkv) Unknown annotation: istio.io/rev
Warning [IST0108] (Pod istio-system/istiod-5f859db56c-796zw) Unknown annotation: ambient.istio.io/redirection
Warning [IST0108] (Pod default/my-nginx-7754db7798-ft46j) Unknown annotation: istio.io/rev
Info [IST0102] (Namespace calico-apiserver) The namespace is not enabled for Istio injection. Run 'kubectl label namespace calico-apiserver istio-injection=enabled' to enable it, or 'kubectl label namespace calico-apiserver istio-injection=disabled' to explicitly mark it as not needing injection.
Info [IST0102] (Namespace calico-system) The namespace is not enabled for Istio injection. Run 'kubectl label namespace calico-system istio-injection=enabled' to enable it, or 'kubectl label namespace calico-system istio-injection=disabled' to explicitly mark it as not needing injection.
Info [IST0102] (Namespace cert-manager) The namespace is not enabled for Istio injection. Run 'kubectl label namespace cert-manager istio-injection=enabled' to enable it, or 'kubectl label namespace cert-manager istio-injection=disabled' to explicitly mark it as not needing injection.
Info [IST0102] (Namespace hello-kubernetes) The namespace is not enabled for Istio injection. Run 'kubectl label namespace hello-kubernetes istio-injection=enabled' to enable it, or 'kubectl label namespace hello-kubernetes istio-injection=disabled' to explicitly mark it as not needing injection.
Info [IST0102] (Namespace istio-ingress) The namespace is not enabled for Istio injection. Run 'kubectl label namespace istio-ingress istio-injection=enabled' to enable it, or 'kubectl label namespace istio-ingress istio-injection=disabled' to explicitly mark it as not needing injection.
Info [IST0102] (Namespace tigera-operator) The namespace is not enabled for Istio injection. Run 'kubectl label namespace tigera-operator istio-injection=enabled' to enable it, or 'kubectl label namespace tigera-operator istio-injection=disabled' to explicitly mark it as not needing injection.
Info [IST0118] (Service calico-apiserver/calico-api) Port name apiserver (port: 443, targetPort: 5443) doesn't follow the naming convention of Istio port.
Info [IST0118] (Service calico-system/calico-kube-controllers-metrics) Port name metrics-port (port: 9094, targetPort: 9094) doesn't follow the naming convention of Istio port.
Info [IST0118] (Service calico-system/calico-typha) Port name calico-typha (port: 5473, targetPort: calico-typha) doesn't follow the naming convention of Istio port.
Info [IST0118] (Service hello-kubernetes/hello-world-service) Port name hello-svc (port: 8065, targetPort: 31870) doesn't follow the naming convention of Istio port.
Answer 1
Score: 1
Figured it out.
Here's what was going wrong with my understanding.
I was expecting some process to be listening on port 80 and 443 on my master host.
That is simply not the case as demonstrated below:
$ kubectl -n istio-system get svc istio-ingressgateway
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                      AGE
istio-ingressgateway   LoadBalancer   10.107.128.144   <pending>     15021:31095/TCP,80:32657/TCP,443:30544/TCP,31400:31995/TCP,15443:30016/TCP   58m
All I had to do was route all insecure HTTP traffic to port 32657 and secure HTTPS traffic to port 30544 on the master host.
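For the record, one way to wire that up on the master host is an iptables redirect (a sketch, not part of the original answer; it assumes the NodePorts 32657/30544 from the output above and that you manage the host firewall yourself):
$ sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 32657
$ sudo iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 30544
The gateway is reachable on the NodePorts rather than on :80/:443 themselves, which is also why netstat showed nothing bound to port 80.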
Answer 2
Score: 0
You mention, "example.com resolves to the IP address of the k8s server" - example.com has to resolve to the IP address of the ingress gateway service running inside your cluster.
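On a bare-metal kubeadm cluster the LoadBalancer service stays at EXTERNAL-IP <pending> (as in the output above) until something assigns it an address. One common option is MetalLB; here is a minimal sketch, assuming MetalLB is already installed in metallb-system and that 192.168.1.240-192.168.1.250 is an unused range on the node's LAN:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool        # hypothetical name
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250   # assumption: free addresses on your subnet
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default             # hypothetical name
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool
With an address assigned, example.com can point at that IP instead of the node itself.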