2026-02-16T09:59:00.953559Z INFO vector::app: Log level is enabled. level="info"
2026-02-16T09:59:00.954013Z INFO vector::app: Loading configs. paths=["/etc/vector"]
2026-02-16T09:59:00.957566Z INFO source{component_kind="source" component_id=k8s_logs component_type=kubernetes_logs}: vector::sources::kubernetes_logs: Obtained Kubernetes Node name to collect logs for (self). self_node_name="ip-10-0-151-16.ec2.internal"
2026-02-16T09:59:00.964518Z INFO source{component_kind="source" component_id=k8s_logs component_type=kubernetes_logs}: vector::sources::kubernetes_logs: Including matching files. ret=["**/*"]
2026-02-16T09:59:00.964535Z INFO source{component_kind="source" component_id=k8s_logs component_type=kubernetes_logs}: vector::sources::kubernetes_logs: Internal log [Including matching files.] is being suppressed to avoid flooding.
2026-02-16T09:59:00.982688Z INFO vector::topology::running: Running healthchecks.
2026-02-16T09:59:00.982767Z INFO vector: Vector has started. debug="false" version="0.52.0" arch="x86_64" revision="ca5bf26 2025-12-16 14:56:07.290167996"
2026-02-16T09:59:00.982983Z INFO vector::topology::builder: Healthcheck passed.
2026-02-16T09:59:00.984182Z INFO vector::internal_events::api: API server running. address=127.0.0.1:8686 playground=off graphql=http://127.0.0.1:8686/graphql
2026-02-16T09:59:00.984907Z INFO vector::sinks::prometheus::exporter: Building HTTP server. address=0.0.0.0:9598
2026-02-16T09:59:00.988846Z WARN http: vector::internal_events::http_client: HTTP error. error=error trying to connect: tcp connect error: Connection refused (os error 111) error_type="request_failed" stage="processing"
2026-02-16T09:59:00.988874Z ERROR vector::topology::builder: msg="Healthcheck failed." error=Failed to make HTTP(S) request: error trying to connect: tcp connect error: Connection refused (os error 111) component_kind="sink" component_type="loki" component_id=loki
2026-02-16T09:59:01.508588Z INFO source{component_kind="source" component_id=k8s_logs component_type=kubernetes_logs}:file_server: vector::internal_events::file::source: Found new file to watch. file=/var/log/pods/openshift-console_downloads-5b45c9c7bf-c4hgc_1e9670d0-6c62-453c-a1b7-b309974ace7a/download-server/0.log
2026-02-16T09:59:01.509046Z INFO source{component_kind="source" component_id=k8s_logs component_type=kubernetes_logs}:file_server: vector::internal_events::file::source: Internal log [Found new file to watch.] is being suppressed to avoid flooding.
2026-02-16T09:59:01.512100Z WARN source{component_kind="source" component_id=k8s_logs component_type=kubernetes_logs}:file_server: vector::internal_events::file::source: Currently ignoring file too small to fingerprint. file=/var/log/pods/openshift-gitops_openshift-gitops-repo-server-544dd88c7c-whxg8_3064e58c-cad5-4ca7-83fc-2e5948a2874f/copyutil/0.log
2026-02-16T09:59:01.512413Z WARN source{component_kind="source" component_id=k8s_logs component_type=kubernetes_logs}:file_server: vector::internal_events::file::source: Internal log [Currently ignoring file too small to fingerprint.] is being suppressed to avoid flooding.
2026-02-16T09:59:01.705110Z WARN sink{component_kind="sink" component_id=loki component_type=loki}:request{request_id=1}:http: vector::internal_events::http_client: HTTP error. error=error trying to connect: tcp connect error: Connection refused (os error 111) error_type="request_failed" stage="processing"
2026-02-16T09:59:01.705144Z WARN sink{component_kind="sink" component_id=loki component_type=loki}:request{request_id=1}: vector::sinks::util::retries: Retrying after error. error=Failed to make HTTP(S) request: Failed to make HTTP(S) request: error trying to connect: tcp connect error: Connection refused (os error 111)
2026-02-16T09:59:02.326592Z WARN sink{component_kind="sink" component_id=loki component_type=loki}:request{request_id=1}:http: vector::internal_events::http_client: Internal log [HTTP error.] is being suppressed to avoid flooding.
2026-02-16T09:59:02.326645Z WARN sink{component_kind="sink" component_id=loki component_type=loki}:request{request_id=1}: vector::sinks::util::retries: Internal log [Retrying after error.] is being suppressed to avoid flooding.
2026-02-16T09:59:13.311455Z WARN sink{component_kind="sink" component_id=loki component_type=loki}:request{request_id=1}: vector::sinks::util::retries: Internal log [Retrying after error.] has been suppressed 5 times.
2026-02-16T09:59:13.311471Z WARN sink{component_kind="sink" component_id=loki component_type=loki}:request{request_id=1}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 502 Bad Gateway
2026-02-16T09:59:22.376502Z WARN sink{component_kind="sink" component_id=loki component_type=loki}:request{request_id=1}: vector::sinks::util::retries: Internal log [Retrying after error.] is being suppressed to avoid flooding.
2026-02-16T09:59:32.754718Z WARN sink{component_kind="sink" component_id=loki component_type=loki}:request{request_id=1}: vector::sinks::util::retries: Internal log [Retrying after error.] has been suppressed 1 times.
2026-02-16T09:59:32.754806Z WARN sink{component_kind="sink" component_id=loki component_type=loki}:request{request_id=1}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 502 Bad Gateway
2026-02-16T09:59:49.801230Z WARN sink{component_kind="sink" component_id=loki component_type=loki}:request{request_id=1}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 500 Internal Server Error
2026-02-16T10:00:01.219096Z WARN source{component_kind="source" component_id=k8s_logs component_type=kubernetes_logs}:file_server: vector::internal_events::file::source: Internal log [Currently ignoring file too small to fingerprint.] has been suppressed 6 times.
2026-02-16T10:00:01.219111Z WARN source{component_kind="source" component_id=k8s_logs component_type=kubernetes_logs}:file_server: vector::internal_events::file::source: Currently ignoring file too small to fingerprint. file=/var/log/pods/openshift-backplane_osd-delete-backplane-serviceaccounts-29520600-nq8rm_0e46148d-b884-4819-8fb9-386e4810f794/osd-delete-backplane-serviceaccounts/0.log
2026-02-16T10:00:01.768375Z INFO source{component_kind="source" component_id=k8s_logs component_type=kubernetes_logs}:file_server: vector::internal_events::file::source: Internal log [Found new file to watch.] has been suppressed 114 times.
2026-02-16T10:00:01.768391Z INFO source{component_kind="source" component_id=k8s_logs component_type=kubernetes_logs}:file_server: vector::internal_events::file::source: Found new file to watch. file=/var/log/pods/openshift-backplane_osd-delete-backplane-serviceaccounts-29520600-nq8rm_0e46148d-b884-4819-8fb9-386e4810f794/osd-delete-backplane-serviceaccounts/0.log
2026-02-16T10:00:09.061111Z WARN sink{component_kind="sink" component_id=loki component_type=loki}:request{request_id=7}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests
2026-02-16T10:00:09.315865Z WARN sink{component_kind="sink" component_id=loki component_type=loki}:request{request_id=8}: vector::sinks::util::retries: Internal log [Retrying after error.] is being suppressed to avoid flooding.
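
For context, the topology these logs describe (a kubernetes_logs source with component_id k8s_logs feeding a loki sink, plus the API server on 127.0.0.1:8686 and a Prometheus exporter on 0.0.0.0:9598) could come from a minimal config in /etc/vector along the following lines. This is a sketch reconstructed from the log fields above, not the cluster's actual configuration: the Loki endpoint, the encoding and label values, and the internal_metrics wiring for the exporter are assumptions.

    # vector.toml -- hypothetical reconstruction; only the component ids/types
    # and listen addresses are taken from the log output above.
    [api]
    enabled = true
    address = "127.0.0.1:8686"      # "API server running. address=127.0.0.1:8686"
    playground = false              # "playground=off"

    [sources.k8s_logs]              # source{component_id=k8s_logs component_type=kubernetes_logs}
    type = "kubernetes_logs"

    [sources.internal_metrics]      # assumed: something must feed the exporter on :9598
    type = "internal_metrics"

    [sinks.loki]                    # sink{component_id=loki component_type=loki}
    type = "loki"
    inputs = ["k8s_logs"]
    endpoint = "http://loki-gateway:3100"  # placeholder; the real endpoint is not in the logs
    encoding.codec = "json"                # assumed
    labels.collector = "vector"            # assumed

    [sinks.prometheus_metrics]
    type = "prometheus_exporter"
    inputs = ["internal_metrics"]
    address = "0.0.0.0:9598"        # "Building HTTP server. address=0.0.0.0:9598"

With a config shaped like this, the behavior above is internally consistent: the loki sink's healthcheck fails at startup because the endpoint refuses TCP connections, the sink then retries request_id=1 with backoff through the connection-refused, 502, 500, and 429 responses, and Vector's rate limiting suppresses the repeated "HTTP error." and "Retrying after error." lines in between.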