# Kubernetes Logging Wrong Severity
# Question
I deployed a Python process into a Kubernetes pod. The process uses a simple logger:

```python
import sys
import logging
import logging.handlers
from pathlib import Path
import coloredlogs
from app.core.config import settings
# BASE_PATH is defined elsewhere in the application
CONFIG_PATH = Path.joinpath(BASE_PATH, '.configrc')
LOG_FORMAT = f'%(asctime)s.[{settings.APP_NAME}] %(levelname)s %(message)s'
LOG_LEVEL = settings.dict().get('LOG_LEVEL', logging.INFO)
LOGGER = logging.getLogger(__name__)
coloredlogs.install(
level=LOG_LEVEL,
fmt=LOG_FORMAT,
logger=LOGGER)
LOGGER.info("INFO Creating main objects...")
However, when I inspect the logs in Kubernetes, every entry is reported as an error:

```
ERROR 2023-06-01T13:54:37.742688222Z [resource.labels.containerName: myapp] 2023-06-01 15:54:37.[myapp] INFO Creating main objects...
{
insertId: "8xj2l2hwts2f45gt"
labels: {4}
logName: "projects/proj-iot-poc/logs/stderr"
receiveTimestamp: "2023-06-01T13:54:38.596712564Z"
resource: {2}
severity: "ERROR"
textPayload: "2023-06-01 15:54:37.[myapp] INFO Creating main objects..."
timestamp: "2023-06-01T13:54:37.742688222Z"
}
```
Needless to say, I only used `LOGGER.info("Creating main objects...")`, and I expect the log to be just an INFO, not an ERROR...
The manifest is as follows:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "11"
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"name":"myapp"},"name":"myapp","namespace":"default"},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"myapp"}},"strategy":{"type":"Recreate"},"template":{"metadata":{"labels":{"app":"myapp","version":"1.6.1rc253"}},"spec":{"containers":[{"env":[{"name":"COMMON_CONFIG_COMMIT_ID","value":"1b2e6669140391d680ff0ca34811ddc2553f15f7"},{"name":"OWN_CONFIG_COMMIT_ID","value":"52a142ca003ade39a0fd96faffbe5334facc3463"}],"envFrom":[{"configMapRef":{"name":"myapp-config"}}],"image":"europe-west1-docker.pkg.dev/mycluster/docker-main/myapp:1.6.1rc253","lifecycle":{"preStop":{"exec":{"command":["/bin/bash","-c","sleep 5"]}}},"name":"myapp","resources":null}],"restartPolicy":"Always"}}}}
creationTimestamp: "2023-05-26T07:49:37Z"
generation: 13
labels:
name: myapp
managedFields:
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:kubectl.kubernetes.io/last-applied-configuration: {}
f:labels:
.: {}
f:name: {}
f:spec:
f:progressDeadlineSeconds: {}
f:revisionHistoryLimit: {}
f:selector: {}
f:strategy:
f:type: {}
f:template:
f:metadata:
f:labels:
.: {}
f:app: {}
f:version: {}
f:spec:
f:containers:
k:{"name":"myapp"}:
.: {}
f:env:
.: {}
k:{"name":"COMMON_CONFIG_COMMIT_ID"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"OWN_CONFIG_COMMIT_ID"}:
.: {}
f:name: {}
f:value: {}
f:envFrom: {}
f:image: {}
f:imagePullPolicy: {}
f:lifecycle:
.: {}
f:preStop:
.: {}
f:exec:
.: {}
f:command: {}
f:name: {}
f:resources: {}
f:terminationMessagePath: {}
f:terminationMessagePolicy: {}
f:dnsPolicy: {}
f:restartPolicy: {}
f:schedulerName: {}
f:securityContext: {}
f:terminationGracePeriodSeconds: {}
manager: kubectl-client-side-apply
operation: Update
time: "2023-05-26T07:49:37Z" - apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:deployment.kubernetes.io/revision: {}
f:status:
f:availableReplicas: {}
f:conditions:
.: {}
k:{"type":"Available"}:
.: {}
f:lastTransitionTime: {}
f:lastUpdateTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
k:{"type":"Progressing"}:
.: {}
f:lastTransitionTime: {}
f:lastUpdateTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
f:observedGeneration: {}
f:readyReplicas: {}
f:replicas: {}
f:updatedReplicas: {}
manager: kube-controller-manager
operation: Update
subresource: status
time: "2023-06-03T06:41:24Z"
name: myapp
namespace: default
resourceVersion: "537412667"
uid: 375a536e-e39c-4001-a234-e47e812f0bee
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: myapp
strategy:
type: Recreate
template:
metadata:
creationTimestamp: null
labels:
app: myapp
version: 1.6.1rc253
spec:
      containers:
      - env:
- name: COMMON_CONFIG_COMMIT_ID
          value: 1b2e6669140391d680ff0ca34811ddc2553f15f7
        - name: OWN_CONFIG_COMMIT_ID
value: 52a142ca003ade39a0fd96faffbe5334facc3463
        envFrom:
        - configMapRef:
name: myapp-config
image: europe-west1-docker.pkg.dev/mycluster/docker-main/myapp:1.6.1rc253
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
exec:
command:
- /bin/bash
- -c
- sleep 5
name: myapp
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 1
  conditions:
  - lastTransitionTime: "2023-05-26T07:49:37Z"
lastUpdateTime: "2023-06-01T14:33:00Z"
message: ReplicaSet "myapp-f9bbb5f6d" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
    type: Progressing
  - lastTransitionTime: "2023-06-03T06:41:24Z"
lastUpdateTime: "2023-06-03T06:41:24Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
observedGeneration: 13
readyReplicas: 1
replicas: 1
updatedReplicas: 1
```
**EDIT**
Maybe this is related to:
[GCP and Python Logging][1]
[1]: https://stackoverflow.com/questions/54147483/google-cloud-functions-python-logging-issue
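One detail worth checking: the entry above was ingested from the `stderr` log, and `coloredlogs.install()` attaches a handler that writes to `sys.stderr` by default, which GKE reports as ERROR. A minimal sketch of routing the handler to stdout instead (assuming `coloredlogs`' documented `stream` option; the format string is illustrative):

```python
import sys
import logging
import coloredlogs

LOGGER = logging.getLogger(__name__)

# coloredlogs writes to sys.stderr by default; GKE's logging agent maps
# stderr lines to severity ERROR, so send routine logs to stdout instead.
coloredlogs.install(
    level=logging.INFO,
    fmt='%(asctime)s.[myapp] %(levelname)s %(message)s',
    logger=LOGGER,
    stream=sys.stdout)

LOGGER.info("Creating main objects...")  # now ingested from the stdout log
```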
# Answer 1
**Score**: 0
After scouring the web, I understood that GCP does not ingest Python's `logging` output painlessly: the GKE logging agent derives severity from the output stream, so anything written to stderr shows up as ERROR (note the stderr `logName` in the entry above). I ended up writing my own custom logger:

```python
"""
General colorized Logger compatible with GCP
"""
import os
import sys
import datetime
import pprint
from typing import TextIO
class CustomLogger:
BLACK = '\033[0;30m'
RED = '\033[0;31m'
GREEN = '\033[0;32m'
BROWN = '\033[0;33m'
BLUE = '\033[0;34m'
PURPLE = '\033[0;35m'
CYAN = '\033[0;36m'
GREY = '\033[0;37m'
DARK_GREY = '\033[1;30m'
LIGHT_RED = '\033[1;31m'
BOLD_RED = "\x1b[31;1m"
LIGHT_GREEN = '\033[1;32m'
YELLOW = '\033[1;33m'
LIGHT_BLUE = '\033[1;34m'
LIGHT_PURPLE = '\033[1;35m'
LIGHT_CYAN = '\033[1;36m'
WHITE = '\033[1;37m'
RESET = "\033[0m"
pippo = "1"
def __init__(self, fmt: str = ""):
self.fmt = f"%(asctime).[{os.getenv('APP_NAME')] %(levelname) %(message)"
self.terminal_type = os.getenv('TERM') or ""
def debug(self, message: str = ""):
out = self.__format(message)
self.sprint(self.DARK_GREY, message=out)
def warning(self, message: str = ""):
out = self.__format(message)
self.sprint(self.CYAN, message=out)
def info(self, message: str = ""):
out = self.__format(message, 'INFO')
self.sprint(self.GREEN, message=out)
def critical(self, message: str = ""):
out = self.__format(message, 'CRITICAL')
self.sprint(self.BOLD_RED, message=out, file=sys.stderr)
def exception(self, message: str = ""):
out = self.__format(message, 'CRITICAL')
self.sprint(self.LIGHT_RED, message=out, file=sys.stderr)
def error(self, message: str = ""):
out = self.__format(message, 'ERROR')
self.sprint(self.RED, message=out, file=sys.stderr)
def __format(self, message, level: str = ""):
now = datetime.datetime.now().strftime('%Y-%m-%dT%H:%M')
message = pprint.pformat(message)
return self.fmt.replace("%(message)", message).\
replace("%(asctime)", now).\
replace("%(levelname)", level)
def sprint(self, color, message, file: TextIO = sys.stdout):
if 'xterm' in self.terminal_type:
print(color + message + self.RESET)
else:
print(message)
LOGGER = CustomLogger()
Some code optimizations are omitted.
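An alternative that avoids hand-rolled stream switching entirely (a minimal sketch, not the code above): emit one JSON object per line on stdout and let Cloud Logging read the severity from the payload. The GKE logging agent maps a top-level `severity` field in a JSON line to `LogEntry.severity`; the `JsonFormatter` name below is my own.

```python
import json
import logging
import sys


class JsonFormatter(logging.Formatter):
    """Render each record as a single JSON line for the GKE logging agent."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "severity": record.levelname,   # picked up as LogEntry.severity
            "message": record.getMessage(),
            "logger": record.name,          # extra fields land in jsonPayload
        })


handler = logging.StreamHandler(sys.stdout)  # stdout: no stderr->ERROR mapping
handler.setFormatter(JsonFormatter())

logger = logging.getLogger("myapp")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("Creating main objects...")  # arrives in Cloud Logging as INFO
```

With this approach the log level set in Python is preserved end to end, and no ANSI color handling is needed in the cluster.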