OpenTelemetry Collector not exporting data to OTLP/HTTP exporter
# Question
I am using the example from here:
https://github.com/open-telemetry/opentelemetry-java-docs/tree/main/otlp/docker

I have modified the OTEL Collector config, which now looks like the file below: I have added the `otlphttp` and `otlp` exporter configurations.

## OTEL Configuration
```yaml
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  otlp:
    endpoint: "http://docker.for.mac.localhost:4318"
    tls:
      insecure: true
    headers:
      Authorization: "Basic YWRtaW46YWRtaW4="
      X-P-Stream: "demo"
      X-P-TAG-tag1: "value1"
      X-P-META-meta1: "value1"
      Content-type: "application/json"

  otlphttp:
    endpoint: "http://docker.for.mac.localhost:4318"
    tls:
      insecure: true
    headers:
      Authorization: "Basic YWRtaW46YWRtaW4="
      X-P-Stream: "demo"
      X-P-TAG-tag1: "value1"
      X-P-META-meta1: "value1"
      Content-type: "application/json"

  prometheus:
    endpoint: "0.0.0.0:8889"
    namespace: promexample
    const_labels:
      label1: value1

  logging:
    loglevel: debug

  zipkin:
    endpoint: "http://zipkin-all-in-one:9411/api/v2/spans"
    format: proto

  jaeger:
    endpoint: jaeger-all-in-one:14250
    tls:
      insecure: true

  # Alternatively, use jaeger_thrift_http with the settings below. In this case
  # update the list of exporters on the traces pipeline.
  #
  # jaeger_thrift_http:
  #   url: http://jaeger-all-in-one:14268/api/traces

processors:
  batch:

extensions:
  health_check:
  pprof:
    endpoint: :1888
  zpages:
    endpoint: :55679

service:
  extensions: [pprof, zpages, health_check]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, zipkin, jaeger, otlp, otlphttp]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, prometheus, otlp, otlphttp]
```
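For context: the collector's `otlp` exporter speaks OTLP over gRPC, while `otlphttp` speaks OTLP over HTTP, and the conventional defaults pair gRPC with port 4317 and HTTP with port 4318. A minimal sketch of that conventional pairing (using the same host name as above) would be:

```yaml
exporters:
  otlp:       # OTLP over gRPC; the conventional server port is 4317
    endpoint: "http://docker.for.mac.localhost:4317"
  otlphttp:   # OTLP over HTTP; the conventional server port is 4318
    endpoint: "http://docker.for.mac.localhost:4318"
```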
I have an exporter running on port 4318 on localhost, and the OTEL setup is deployed with docker-compose using the compose file below.

## Docker Compose
version: "2"
services:
# Jaeger
jaeger-all-in-one:
image: jaegertracing/all-in-one:latest
ports:
- "16686:16686"
- "14268"
- "14250:14250"
# Zipkin
zipkin-all-in-one:
image: openzipkin/zipkin:latest
ports:
- "9411:9411"
# Collector
otel-collector:
image: ${OTELCOL_IMG}
command: ["--config=/etc/otel-collector-config-demo.yaml", "${OTELCOL_ARGS}"]
volumes:
- ./otel-collector-config-demo.yaml:/etc/otel-collector-config-demo.yaml
ports:
- "1888:1888" # pprof extension
- "8888:8888" # Prometheus metrics exposed by the collector
- "8889:8889" # Prometheus exporter metrics
- "13133:13133" # health_check extension
- "55679:55679" # zpages extension
- "4317:4317" # otlp receiver
- "8000:8000" # parseable exporter
- "4318:4318"
depends_on:
- jaeger-all-in-one
- zipkin-all-in-one
environment:
OTEL_EXPORTER_OTLP_ENDPOINT: http://docker.for.mac.localhost:4318
prometheus:
container_name: prometheus
image: prom/prometheus:latest
volumes:
- ./prometheus.yaml:/etc/prometheus/prometheus.yml
ports:
- "9090:9090"
## On exporter side

My exporter (written in Rust) is showing this error:

```
[2023-06-01T22:29:27Z ERROR actix_http::h1::dispatcher] stream error: Request parse error: Invalid HTTP version specified
[2023-06-01T22:29:27Z ERROR actix_http::h1::dispatcher] stream error: Request parse error: Invalid HTTP version specified
```
## On OTEL Collector side

I am seeing these errors:

```
2023-06-01 23:29:27 }. Err: connection error: desc = "error reading server preface: http2: frame too large" {"grpc_log": true}
2023-06-01 23:29:27 2023-06-01T22:29:27.409Z warn zapgrpc/zapgrpc.go:195 [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
2023-06-01 23:29:27 "Addr": "docker.for.mac.localhost:4318",
2023-06-01 23:29:27 "ServerName": "docker.for.mac.localhost:4318",
2023-06-01 23:29:27 "Attributes": null,
2023-06-01 23:29:27 "BalancerAttributes": null,
2023-06-01 23:29:27 "Type": 0,
2023-06-01 23:29:27 "Metadata": null
2023-06-01 23:29:27 }. Err: connection error: desc = "error reading server preface: http2: frame too large" {"grpc_log": true}
2023-06-01 23:29:28 2023-06-01T22:29:28.405Z info jaegerexporter@v0.78.0/exporter.go:173 State of the connection with the Jaeger Collector backend {"kind": "exporter", "data_type": "traces", "name": "jaeger", "state": "READY"}
```

I suspect the OTLP Collector is making an HTTP/2 call (the `otlp` exporter speaks gRPC, which requires HTTP/2), which makes it fail.
## EDIT 1
I tried removing `otlphttp` completely from the configuration and still got the same errors on both the exporter and the collector side.
### On Exporter
```
[2023-06-02T07:26:54Z ERROR actix_http::h1::dispatcher] stream error: Request parse error: Invalid HTTP version specified
[2023-06-02T07:26:54Z ERROR actix_http::h1::dispatcher] stream error: Request parse error: Invalid HTTP version specified
```
### On Collector
```
2023-06-02 08:26:54 docker-otel-collector-1 | 2023-06-02T07:26:54.482Z warn zapgrpc/zapgrpc.go:195 [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
2023-06-02 08:26:54 docker-otel-collector-1 | "Addr": "docker.for.mac.localhost:4318",
2023-06-02 08:26:54 docker-otel-collector-1 | "ServerName": "docker.for.mac.localhost:4318",
2023-06-02 08:26:54 docker-otel-collector-1 | "Attributes": null,
2023-06-02 08:26:54 docker-otel-collector-1 | "BalancerAttributes": null,
2023-06-02 08:26:54 docker-otel-collector-1 | "Type": 0,
2023-06-02 08:26:54 docker-otel-collector-1 | "Metadata": null
2023-06-02 08:26:54 docker-otel-collector-1 | }. Err: connection error: desc = "error reading server preface: http2: frame too large" {"grpc_log": true}
```
## EDIT 2
When I configured only `otlphttp` and removed the `otlp` exporter section, there was no effect at all: the collector sent nothing and the exporter received nothing.
My latest configuration looks like this:
```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  otlphttp:
    endpoint: "http://docker.for.mac.localhost:4318"
    tls:
      insecure: true
    headers:
      Authorization: "Basic YWRtaW46YWRtaW4="
      X-P-Stream: "demo"
      X-P-TAG-tag1: "value1"
      X-P-META-meta1: "value1"
      Content-type: "application/json"

  prometheus:
    endpoint: "0.0.0.0:8889"
    namespace: promexample
    const_labels:
      label1: value1

  logging:
    loglevel: debug

  zipkin:
    endpoint: "http://zipkin-all-in-one:9411/api/v2/spans"
    format: proto

  jaeger:
    endpoint: jaeger-all-in-one:14250
    tls:
      insecure: true

  # Alternatively, use jaeger_thrift_http with the settings below. In this case
  # update the list of exporters on the traces pipeline.
  #
  # jaeger_thrift_http:
  #   url: http://jaeger-all-in-one:14268/api/traces

processors:
  batch:

extensions:
  health_check:
  pprof:
    endpoint: :1888
  zpages:
    endpoint: :55679

service:
  extensions: [pprof, zpages, health_check]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, zipkin, jaeger, otlphttp]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, prometheus, otlphttp]
```
#### Collector Debug logs
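For reference, the collector's own log verbosity is controlled by the standard `service.telemetry` settings; the Edit 3 configuration further below sets this explicitly:

```yaml
service:
  telemetry:
    logs:
      level: debug
```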
```
2023-06-02T08:23:30.909Z info service/telemetry.go:104 Setting up own telemetry...
2023-06-02T08:23:30.909Z info service/telemetry.go:127 Serving Prometheus metrics {"address": ":8888", "level": "Basic"}
2023-06-02T08:23:30.909Z debug extension/extension.go:135 Beta component. May change in the future. {"kind": "extension", "name": "pprof"}
2023-06-02T08:23:30.909Z debug extension/extension.go:135 Beta component. May change in the future. {"kind": "extension", "name": "zpages"}
2023-06-02T08:23:30.909Z debug extension/extension.go:135 Beta component. May change in the future. {"kind": "extension", "name": "health_check"}
2023-06-02T08:23:30.909Z debug exporter@v0.78.2/exporter.go:273 Beta component. May change in the future. {"kind": "exporter", "data_type": "metrics", "name": "prometheus"}
2023-06-02T08:23:30.910Z debug exporter@v0.78.2/exporter.go:273 Stable component. {"kind": "exporter", "data_type": "metrics", "name": "otlphttp"}
2023-06-02T08:23:30.910Z info exporter@v0.78.2/exporter.go:275 Development component. May change in the future. {"kind": "exporter", "data_type": "traces", "name": "logging"}
2023-06-02T08:23:30.910Z warn loggingexporter@v0.78.2/factory.go:98 'loglevel' option is deprecated in favor of 'verbosity'. Set 'verbosity' to equivalent value to preserve behavior. {"kind": "exporter", "data_type": "traces", "name": "logging", "loglevel": "debug", "equivalent verbosity level": "Detailed"}
2023-06-02T08:23:30.910Z info exporter@v0.78.2/exporter.go:275 Development component. May change in the future. {"kind": "exporter", "data_type": "metrics", "name": "logging"}
2023-06-02T08:23:30.910Z debug processor/processor.go:287 Stable component. {"kind": "processor", "name": "batch", "pipeline": "metrics"}
2023-06-02T08:23:30.910Z debug exporter@v0.78.2/exporter.go:273 Beta component. May change in the future. {"kind": "exporter", "data_type": "traces", "name": "zipkin"}
2023-06-02T08:23:30.910Z info exporter@v0.78.2/exporter.go:275 Deprecated component. Will be removed in future releases. {"kind": "exporter", "data_type": "traces", "name": "jaeger"}
2023-06-02T08:23:30.910Z warn jaegerexporter@v0.78.0/factory.go:43 jaeger exporter is deprecated and will be removed in July 2023. See https://github.com/open-telemetry/opentelemetry-specification/pull/2858 for more details. {"kind": "exporter", "data_type": "traces", "name": "jaeger"}
2023-06-02T08:23:30.910Z debug exporter@v0.78.2/exporter.go:273 Stable component. {"kind": "exporter", "data_type": "traces", "name": "otlphttp"}
2023-06-02T08:23:30.910Z debug processor/processor.go:287 Stable component. {"kind": "processor", "name": "batch", "pipeline": "traces"}
2023-06-02T08:23:30.910Z debug receiver@v0.78.2/receiver.go:294 Stable component. {"kind": "receiver", "name": "otlp", "data_type": "traces"}
2023-06-02T08:23:30.910Z debug receiver@v0.78.2/receiver.go:294 Stable component. {"kind": "receiver", "name": "otlp", "data_type": "metrics"}
2023-06-02T08:23:30.911Z info service/service.go:131 Starting otelcol-contrib... {"Version": "0.78.0", "NumCPU": 4}
2023-06-02T08:23:30.911Z info extensions/extensions.go:30 Starting extensions...
2023-06-02T08:23:30.911Z info extensions/extensions.go:33 Extension is starting... {"kind": "extension", "name": "pprof"}
2023-06-02T08:23:30.911Z info pprofextension@v0.78.0/pprofextension.go:60 Starting net/http/pprof server {"kind": "extension", "name": "pprof", "config": {"TCPAddr":{"Endpoint":":1888"},"BlockProfileFraction":0,"MutexProfileFraction":0,"SaveToFile":""}}
2023-06-02T08:23:30.911Z info extensions/extensions.go:37 Extension started. {"kind": "extension", "name": "pprof"}
2023-06-02T08:23:30.911Z info extensions/extensions.go:33 Extension is starting... {"kind": "extension", "name": "zpages"}
2023-06-02T08:23:30.911Z info zpagesextension@v0.78.2/zpagesextension.go:53 Registered zPages span processor on tracer provider {"kind": "extension", "name": "zpages"}
2023-06-02T08:23:30.911Z info zpagesextension@v0.78.2/zpagesextension.go:63 Registered Host's zPages {"kind": "extension", "name": "zpages"}
2023-06-02T08:23:30.911Z info zpagesextension@v0.78.2/zpagesextension.go:75 Starting zPages extension {"kind": "extension", "name": "zpages", "config": {"TCPAddr":{"Endpoint":":55679"}}}
2023-06-02T08:23:30.911Z info extensions/extensions.go:37 Extension started. {"kind": "extension", "name": "zpages"}
2023-06-02T08:23:30.911Z info extensions/extensions.go:33 Extension is starting... {"kind": "extension", "name": "health_check"}
2023-06-02T08:23:30.911Z info healthcheckextension@v0.78.0/healthcheckextension.go:34 Starting health_check extension {"kind": "extension", "name": "health_check", "config": {"Endpoint":"0.0.0.0:13133","TLSSetting":null,"CORS":null,"Auth":null,"MaxRequestBodySize":0,"IncludeMetadata":false,"Path":"/","ResponseBody":null,"CheckCollectorPipeline":{"Enabled":false,"Interval":"5m","ExporterFailureThreshold":5}}}
2023-06-02T08:23:30.911Z warn internal/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks {"kind": "extension", "name": "health_check", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
2023-06-02T08:23:30.911Z info extensions/extensions.go:37 Extension started. {"kind": "extension", "name": "health_check"}
2023-06-02T08:23:30.911Z warn internal/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks {"kind": "exporter", "data_type": "metrics", "name": "prometheus", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
2023-06-02T08:23:30.911Z warn internal/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
2023-06-02T08:23:30.911Z info zapgrpc/zapgrpc.go:178 [core] [Server #1] Server created {"grpc_log": true}
2023-06-02T08:23:30.911Z info otlpreceiver@v0.78.2/otlp.go:83 Starting GRPC server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4317"}
2023-06-02T08:23:30.912Z warn internal/warning.go:40 Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks {"kind": "receiver", "name": "otlp", "data_type": "traces", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
2023-06-02T08:23:30.912Z info otlpreceiver@v0.78.2/otlp.go:101 Starting HTTP server {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4318"}
2023-06-02T08:23:30.912Z info zapgrpc/zapgrpc.go:178 [core] [Server #1 ListenSocket #2] ListenSocket created {"grpc_log": true}
2023-06-02T08:23:30.912Z info zapgrpc/zapgrpc.go:178 [core] [Channel #3] Channel created {"grpc_log": true}
2023-06-02T08:23:30.912Z info zapgrpc/zapgrpc.go:178 [core] [Channel #3] original dial target is: "jaeger-all-in-one:14250" {"grpc_log": true}
2023-06-02T08:23:30.912Z info zapgrpc/zapgrpc.go:178 [core] [Channel #3] parsed dial target is: {Scheme:jaeger-all-in-one Authority: URL:{Scheme:jaeger-all-in-one Opaque:14250 User: Host: Path: RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}} {"grpc_log": true}
2023-06-02T08:23:30.912Z info zapgrpc/zapgrpc.go:178 [core] [Channel #3] fallback to scheme "passthrough" {"grpc_log": true}
2023-06-02T08:23:30.912Z info zapgrpc/zapgrpc.go:178 [core] [Channel #3] parsed dial target is: {Scheme:passthrough Authority: URL:{Scheme:passthrough Opaque: User: Host: Path:/jaeger-all-in-one:14250 RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}} {"grpc_log": true}
2023-06-02T08:23:30.912Z info zapgrpc/zapgrpc.go:178 [core] [Channel #3] Channel authority set to "jaeger-all-in-one:14250" {"grpc_log": true}
2023-06-02T08:23:30.912Z info zapgrpc/zapgrpc.go:178 [core] [Channel #3] Resolver state updated: {
"Addresses": [
{
"Addr": "jaeger-all-in-one:14250",
"ServerName": "",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}
],
"ServiceConfig": null,
"Attributes": null
} (resolver returned new addresses) {"grpc_log": true}
2023-06-02T08:23:30.912Z info zapgrpc/zapgrpc.go:178 [core] [Channel #3] Channel switches to new LB policy "pick_first" {"grpc_log": true}
2023-06-02T08:23:30.912Z info zapgrpc/zapgrpc.go:178 [core] [Channel #3 SubChannel #4] Subchannel created {"grpc_log": true}
2023-06-02T08:23:30.912Z info zapgrpc/zapgrpc.go:178 [core] [Channel #3] Channel Connectivity change to CONNECTING {"grpc_log": true}
2023-06-02T08:23:30.912Z info zapgrpc/zapgrpc.go:178 [core] [Channel #3 SubChannel #4] Subchannel Connectivity change to CONNECTING {"grpc_log": true}
2023-06-02T08:23:30.912Z info zapgrpc/zapgrpc.go:178 [core] [Channel #3 SubChannel #4] Subchannel picks a new address "jaeger-all-in-one:14250" to connect {"grpc_log": true}
2023-06-02T08:23:30.912Z info zapgrpc/zapgrpc.go:178 [core] pickfirstBalancer: UpdateSubConnState: 0x4000cda078, {CONNECTING <nil>} {"grpc_log": true}
2023-06-02T08:23:30.913Z info zapgrpc/zapgrpc.go:178 [core] [Channel #3 SubChannel #4] Subchannel Connectivity change to READY {"grpc_log": true}
2023-06-02T08:23:30.914Z info zapgrpc/zapgrpc.go:178 [core] pickfirstBalancer: UpdateSubConnState: 0x4000cda078, {READY <nil>} {"grpc_log": true}
2023-06-02T08:23:30.914Z info zapgrpc/zapgrpc.go:178 [core] [Channel #3] Channel Connectivity change to READY {"grpc_log": true}
2023-06-02T08:23:30.914Z info jaegerexporter@v0.78.0/exporter.go:173 State of the connection with the Jaeger Collector backend {"kind": "exporter", "data_type": "traces", "name": "jaeger", "state": "READY"}
2023-06-02T08:23:30.914Z info healthcheck/handler.go:129 Health Check state change {"kind": "extension", "name": "health_check", "status": "ready"}
2023-06-02T08:23:30.914Z info service/service.go:148 Everything is ready. Begin running and processing data.
2023-06-02T08:23:35.806Z debug prometheusexporter@v0.78.0/collector.go:360 collect called {"kind": "exporter", "data_type": "metrics", "name": "prometheus"}
2023-06-02T08:23:35.806Z debug prometheusexporter@v0.78.0/accumulator.go:268 Accumulator collect called {"kind": "exporter", "data_type": "metrics", "name": "prometheus"}
```
I can only see the `prometheus` exporter being called in the debug logs; the `otlphttp` exporter is not called at all.
## EDIT 3

Configuration using port 4317:
```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  otlphttp:
    endpoint: "http://docker.for.mac.localhost:4317"
    tls:
      insecure: true
    headers:
      Authorization: "Basic YWRtaW46YWRtaW4="
      X-P-Stream: "demo"
      X-P-TAG-tag1: "value1"
      X-P-META-meta1: "value1"
      Content-type: "application/json"

  prometheus:
    endpoint: "0.0.0.0:8889"
    namespace: promexample
    const_labels:
      label1: value1

  logging:
    loglevel: debug

  zipkin:
    endpoint: "http://zipkin-all-in-one:9411/api/v2/spans"
    format: proto

  jaeger:
    endpoint: jaeger-all-in-one:14250
    tls:
      insecure: true

  # Alternatively, use jaeger_thrift_http with the settings below. In this case
  # update the list of exporters on the traces pipeline.
  #
  # jaeger_thrift_http:
  #   url: http://jaeger-all-in-one:14268/api/traces

processors:
  batch:

extensions:
  health_check:
  pprof:
    endpoint: :1888
  zpages:
    endpoint: :55679

service:
  telemetry:
    logs:
      level: "debug"
  extensions: [pprof, zpages, health_check]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, zipkin, jaeger, otlphttp]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, prometheus, otlphttp]
```
Just for reference, my exporter exposes the following endpoints:

- http://localhost:4318/v1/metrics
- http://localhost:4318/v1/logs
- http://localhost:4318/v1/traces
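For what it's worth, the collector's `otlphttp` exporter appends the per-signal path (`/v1/traces`, `/v1/metrics`, `/v1/logs`) to the configured `endpoint`; per-signal URLs can also be set explicitly. A sketch against the endpoints above (assuming the collector v0.78 defaults):

```yaml
exporters:
  otlphttp:
    # Base URL; the exporter appends /v1/traces, /v1/metrics, /v1/logs.
    endpoint: "http://docker.for.mac.localhost:4318"
    # Optional explicit per-signal URLs (used as-is, nothing is appended):
    traces_endpoint: "http://docker.for.mac.localhost:4318/v1/traces"
    metrics_endpoint: "http://docker.for.mac.localhost:4318/v1/metrics"
    logs_endpoint: "http://docker.for.mac.localhost:4318/v1/logs"
```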
# Answer 1

**Score**: 1
You have configured both exporters (gRPC and HTTP) to use the same port:
```yaml
otlp:
  endpoint: "http://docker.for.mac.localhost:4318"
...
otlphttp:
  endpoint: "http://docker.for.mac.localhost:4318"
```
When you send an HTTP request to a gRPC endpoint, it fails. Remove the `otlphttp` exporter or set up the `otlphttp` receiver on port 4317.
Your configuration would then look like:
```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  otlp:
    endpoint: "http://docker.for.mac.localhost:4318"
    tls:
      insecure: true
    headers:
      Authorization: "Basic YWRtaW46YWRtaW4="
      X-P-Stream: "demo"
      X-P-TAG-tag1: "value1"
      X-P-META-meta1: "value1"
      Content-type: "application/json"

  otlphttp:
    endpoint: "http://docker.for.mac.localhost:4317"
    tls:
      insecure: true
    headers:
      Authorization: "Basic YWRtaW46YWRtaW4="
      X-P-Stream: "demo"
      X-P-TAG-tag1: "value1"
      X-P-META-meta1: "value1"
      Content-type: "application/json"

  prometheus:
    endpoint: "0.0.0.0:8889"
    namespace: promexample
    const_labels:
      label1: value1

  logging:
    loglevel: debug

  zipkin:
    endpoint: "http://zipkin-all-in-one:9411/api/v2/spans"
    format: proto

  jaeger:
    endpoint: jaeger-all-in-one:14250
    tls:
      insecure: true

  # Alternatively, use jaeger_thrift_http with the settings below. In this case
  # update the list of exporters on the traces pipeline.
  #
  # jaeger_thrift_http:
  #   url: http://jaeger-all-in-one:14268/api/traces

processors:
  batch:

extensions:
  health_check:
  pprof:
    endpoint: :1888
  zpages:
    endpoint: :55679

service:
  extensions: [pprof, zpages, health_check]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, zipkin, jaeger, otlp, otlphttp]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, prometheus, otlp, otlphttp]
```
Make sure to expose 4317 in docker-compose.yml as well.
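For example, the relevant lines of the collector service might look like this (a sketch based on the compose file in the question):

```yaml
services:
  otel-collector:
    ports:
      - "4317:4317"   # OTLP/gRPC receiver
      - "4318:4318"   # OTLP/HTTP receiver
```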
Please try it out and let us know if it worked.