
How to allow a TCP service (not HTTP) on a custom port inside Kubernetes

Question



I have a container running an OPC server on port 4840. I am trying to configure my microk8s cluster to allow my OPC client to connect to port 4840. Here are my deployment and service manifests:

(No namespace is defined in these manifests; they are deployed through Azure Pipelines, which is where the namespace is set. Both the deployment and the service live in the "jawcrusher" namespace.)

deployment.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jawcrusher
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jawcrusher
  strategy: {}
  template:
    metadata:
      labels:
        app: jawcrusher
    spec:
      volumes:
        - name: jawcrusher-config
          configMap:
            name: jawcrusher-config
      containers:
      - image: XXXmicrok8scontainerregistry.azurecr.io/jawcrusher:#{Version}#
        name: jawcrusher
        ports:
          - containerPort: 4840
        volumeMounts:
          - name: jawcrusher-config
            mountPath: "/jawcrusher/config/config.yml"
            subPath: "config.yml"
      imagePullSecrets:
        - name: acrsecret

service.yml

apiVersion: v1
kind: Service
metadata:
  name: jawcrusher-service
spec:
  ports:
  - name: 7070-4840
    port: 7070
    protocol: TCP
    targetPort: 4840
  selector:
    app: jawcrusher
  type: ClusterIP
status:
  loadBalancer: {}

I am using a k8s client called Lens, which has a feature to forward local ports to a service. If I do this, I can connect to the OPC server with my OPC client using the URL localhost:4840. To me that indicates that the service and deployment are set up correctly.

So now I want to tell microk8s to serve my OPC server's port 4840 externally. For example, if the DNS name of my server is microk8s.xxxx.internal, I would like to connect with my OPC client to microk8s.xxxx.internal:4840.
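(As an aside, the standard Kubernetes way to expose a raw TCP port without going through an ingress is a NodePort service. A minimal sketch, not from the tutorial below; the nodePort value 30840 is an example, since the default allowed range is 30000-32767:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: jawcrusher-nodeport   # hypothetical name for illustration
  namespace: jawcrusher
spec:
  type: NodePort
  selector:
    app: jawcrusher
  ports:
  - name: opc
    protocol: TCP
    port: 4840        # service port inside the cluster
    targetPort: 4840  # the containerPort of the OPC server
    nodePort: 30840   # external port on every node, default range 30000-32767

```

The OPC client would then connect to microk8s.xxxx.internal:30840.)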

I have followed this tutorial as much as I can: https://minikube.sigs.k8s.io/docs/tutorials/nginx_tcp_udp_ingress/.

It says to update the TCP configuration for the ingress; this is how it looks after I updated it:

nginx-ingress-tcp-microk8s-conf:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-tcp-microk8s-conf
  namespace: ingress
  uid: a32690ac-34d2-4441-a5da-a00ec52d308a
  resourceVersion: '7649705'
  creationTimestamp: '2023-01-12T14:12:07Z'
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"v1","kind":"ConfigMap","metadata":{"annotations":{},"name":"nginx-ingress-tcp-microk8s-conf","namespace":"ingress"}}
  managedFields:
    - manager: kubectl-client-side-apply
      operation: Update
      apiVersion: v1
      time: '2023-01-12T14:12:07Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:kubectl.kubernetes.io/last-applied-configuration: {}
    - manager: kubectl-patch
      operation: Update
      apiVersion: v1
      time: '2023-02-14T07:50:30Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:data:
          .: {}
          f:4840: {}
  selfLink: /api/v1/namespaces/ingress/configmaps/nginx-ingress-tcp-microk8s-conf
data:
  '4840': jawcrusher/jawcrusher-service:7070
binaryData: {}
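For readability, stripped of the server-managed metadata (uid, resourceVersion, managedFields, selfLink) that Lens displays, the ConfigMap above reduces to:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-tcp-microk8s-conf
  namespace: ingress
data:
  '4840': jawcrusher/jawcrusher-service:7070
```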

It also says to update a deployment called ingress-nginx-controller, but in microk8s it seems to be a daemonset called nginx-ingress-microk8s-controller instead. This is what it looks like after I added the new port:

nginx-ingress-microk8s-controller:

spec:
      containers:
        - name: nginx-ingress-microk8s
          image: registry.k8s.io/ingress-nginx/controller:v1.2.0
          args:
            - /nginx-ingress-controller
            - '--configmap=$(POD_NAMESPACE)/nginx-load-balancer-microk8s-conf'
            - >-
              --tcp-services-configmap=$(POD_NAMESPACE)/nginx-ingress-tcp-microk8s-conf
            - >-
              --udp-services-configmap=$(POD_NAMESPACE)/nginx-ingress-udp-microk8s-conf
            - '--ingress-class=public'
            - ' '
            - '--publish-status-address=127.0.0.1'
          ports:
            - name: http
              hostPort: 80
              containerPort: 80
              protocol: TCP
            - name: https
              hostPort: 443
              containerPort: 443
              protocol: TCP
            - name: health
              hostPort: 10254
              containerPort: 10254
              protocol: TCP
            ####THIS IS WHAT I ADDED####
            - name: jawcrusher
              hostPort: 4840
              containerPort: 4840
              protocol: TCP

After I update the daemonset, all the pods restart. The port seems to be open; if I run this command, it outputs:

Test-NetConnection -ComputerName microk8s.xxxx.internal -Port 4840

ComputerName     : microk8s.xxxx.internal
RemoteAddress    : 10.161.64.124
RemotePort       : 4840
InterfaceAlias   : Ethernet 2
SourceAddress    : 10.53.226.55
TcpTestSucceeded : True

Before I did the changes it said TcpTestSucceeded: False.

But the OPC client cannot connect. It just says:
"Could not connect to server: BadCommunicationError."

Does anyone see where I made a mistake, or know how to do this in microk8s?

Update 1:
I see an error message in the logs of the ingress daemonset pod when I try to connect to the server with my OPC client:

> 2023/02/15 09:57:32 [error] 999#999: *63002 connect() failed (111:
> Connection refused) while connecting to upstream, client:
> 10.53.225.232, server: 0.0.0.0:4840, upstream: "10.1.98.125:4840", bytes from/to client:0/0, bytes from/to upstream:0/0

10.53.225.232 is the client machine's IP address and 10.1.98.125 is the IP address of the pod running the OPC server.

So it seems nginx has understood that external port 4840 should be proxied/forwarded to my service, which in turn points to the OPC server pod. But why do I get an error...
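For context, `connect() failed (111: Connection refused)` is ECONNREFUSED: the connection attempt reached the upstream address, but nothing was accepting on that address and port. A minimal local reproduction of the same error class, connecting to a loopback port with no listener:

```python
import socket

# Connecting to an address:port where nothing is listening raises
# ConnectionRefusedError (errno 111, ECONNREFUSED, on Linux) -- the same
# error nginx logged when proxying to the pod IP.
err = None
try:
    # Port 1 is reserved and almost never has a listener.
    socket.create_connection(("127.0.0.1", 1), timeout=1)
except ConnectionRefusedError as exc:
    err = exc

print(type(err).__name__)  # ConnectionRefusedError
```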

Update 2:
Just to clarify: if I run the kubectl port-forward command and point it at my service, it works. But it does not work if I try to connect directly to port 4840. For example, this works:

kubectl port-forward service/jawcrusher-service 5000:4840 -n jawcrusher --address='0.0.0.0'

This allows me to connect with my OPC-client to the server on port 5000.

Answer 1 (score: 2)



You can simply do port forwarding from a local port x to port y of your service/deployment/pod with the kubectl command.
Let's say you have a NATS streaming server in your cluster that uses TCP on port 4222; the command in that case would be:
kubectl port-forward service/nats 4222:4222
This forwards all traffic on localhost port 4222 to port 4222 of the service named nats inside your cluster. Instead of a service, you can also forward to a specific pod or deployment.
Use kubectl port-forward -h to see all the options.
In case you are using k3d to set up a k3s cluster in Docker or Rancher Desktop, you can add the port parameter to your k3d command:
k3d cluster create k3s --registry-create registry:5000 -p 8080:80@loadbalancer -p 4222:4222@server:0

Answer 2 (score: 0)



The problem was never with microk8s or the ingress configuration. The problem was that my server was bound to the loopback address (127.0.0.1).

When I changed the configuration so the server listened on 0.0.0.0 instead of 127.0.0.1, it started working.
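To illustrate the difference with a minimal Python socket sketch (not the OPC server itself): a socket bound to the wildcard address 0.0.0.0 accepts connections arriving on any interface, including the pod IP that the ingress proxies to, whereas a 127.0.0.1 bind only accepts loopback traffic:

```python
import socket
import threading

# A server bound to 127.0.0.1 only accepts loopback traffic, so the ingress,
# which connects to the pod IP (e.g. 10.1.98.125:4840), gets "connection
# refused". Binding to 0.0.0.0 accepts connections on any interface.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("0.0.0.0", 0))   # wildcard bind, ephemeral port
srv.listen(1)
port = srv.getsockname()[1]

def accept_once():
    conn, _ = srv.accept()
    conn.sendall(b"ok")
    conn.close()

threading.Thread(target=accept_once, daemon=True).start()

# A loopback client can reach the wildcard-bound server; a client on any
# other interface of the host could, too.
client = socket.create_connection(("127.0.0.1", port), timeout=2)
reply = client.recv(2)
client.close()
srv.close()
print(reply.decode())  # ok
```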

huangapple
  • Published on 2023-02-14 18:43:23
  • Please keep this link when reposting: https://go.coder-hub.com/75446695.html