Apr 23 17:41:05.464933 ip-10-0-139-215 systemd[1]: Starting Kubernetes Kubelet...
Apr 23 17:41:05.894547 ip-10-0-139-215 kubenswrapper[2574]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 23 17:41:05.894547 ip-10-0-139-215 kubenswrapper[2574]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Apr 23 17:41:05.894547 ip-10-0-139-215 kubenswrapper[2574]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 23 17:41:05.894547 ip-10-0-139-215 kubenswrapper[2574]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 23 17:41:05.894547 ip-10-0-139-215 kubenswrapper[2574]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 23 17:41:05.895347 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.895203 2574 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 23 17:41:05.898216 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898201 2574 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 23 17:41:05.898216 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898216 2574 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 23 17:41:05.898282 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898220 2574 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 23 17:41:05.898282 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898224 2574 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 23 17:41:05.898282 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898227 2574 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 23 17:41:05.898282 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898230 2574 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 23 17:41:05.898282 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898233 2574 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 23 17:41:05.898282 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898235 2574 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 23 17:41:05.898282 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898238 2574 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 23 17:41:05.898282 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898241 2574 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 23 17:41:05.898282 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898243 2574 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 23 17:41:05.898282 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898246 2574 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 23 17:41:05.898282 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898248 2574 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 23 17:41:05.898282 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898251 2574 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 23 17:41:05.898282 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898253 2574 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 23 17:41:05.898282 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898256 2574 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 23 17:41:05.898282 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898260 2574 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 23 17:41:05.898282 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898262 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 23 17:41:05.898282 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898265 2574 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 23 17:41:05.898282 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898267 2574 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 23 17:41:05.898282 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898270 2574 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 23 17:41:05.898282 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898272 2574 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 23 17:41:05.898772 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898275 2574 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 23 17:41:05.898772 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898278 2574 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 23 17:41:05.898772 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898280 2574 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 23 17:41:05.898772 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898283 2574 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 23 17:41:05.898772 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898286 2574 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 23 17:41:05.898772 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898288 2574 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 23 17:41:05.898772 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898291 2574 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 23 17:41:05.898772 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898294 2574 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 23 17:41:05.898772 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898297 2574 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 23 17:41:05.898772 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898299 2574 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 23 17:41:05.898772 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898302 2574 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 23 17:41:05.898772 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898304 2574 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 23 17:41:05.898772 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898307 2574 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 23 17:41:05.898772 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898309 2574 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 23 17:41:05.898772 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898313 2574 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 23 17:41:05.898772 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898316 2574 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 23 17:41:05.898772 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898318 2574 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 23 17:41:05.898772 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898321 2574 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 23 17:41:05.898772 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898323 2574 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 23 17:41:05.898772 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898326 2574 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 23 17:41:05.899328 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898328 2574 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 23 17:41:05.899328 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898330 2574 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 23 17:41:05.899328 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898333 2574 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 23 17:41:05.899328 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898338 2574 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 23 17:41:05.899328 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898342 2574 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 23 17:41:05.899328 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898344 2574 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 23 17:41:05.899328 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898347 2574 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 23 17:41:05.899328 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898349 2574 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 23 17:41:05.899328 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898351 2574 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 23 17:41:05.899328 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898354 2574 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 23 17:41:05.899328 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898356 2574 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 23 17:41:05.899328 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898359 2574 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 23 17:41:05.899328 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898363 2574 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 23 17:41:05.899328 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898367 2574 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 23 17:41:05.899328 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898370 2574 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 23 17:41:05.899328 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898374 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 23 17:41:05.899328 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898377 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 23 17:41:05.899328 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898380 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 23 17:41:05.899328 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898383 2574 feature_gate.go:328] unrecognized feature gate: Example
Apr 23 17:41:05.899328 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898385 2574 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 23 17:41:05.899830 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898388 2574 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 23 17:41:05.899830 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898392 2574 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 23 17:41:05.899830 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898395 2574 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 23 17:41:05.899830 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898397 2574 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 23 17:41:05.899830 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898400 2574 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 23 17:41:05.899830 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898403 2574 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 23 17:41:05.899830 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898406 2574 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 23 17:41:05.899830 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898409 2574 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 23 17:41:05.899830 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898412 2574 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 23 17:41:05.899830 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898415 2574 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 23 17:41:05.899830 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898417 2574 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 23 17:41:05.899830 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898420 2574 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 23 17:41:05.899830 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898422 2574 feature_gate.go:328] unrecognized feature gate: Example2
Apr 23 17:41:05.899830 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898424 2574 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 23 17:41:05.899830 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898427 2574 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 23 17:41:05.899830 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898429 2574 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 23 17:41:05.899830 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898432 2574 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 23 17:41:05.899830 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898434 2574 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 23 17:41:05.899830 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898437 2574 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 23 17:41:05.900270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898439 2574 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 23 17:41:05.900270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898442 2574 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 23 17:41:05.900270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898444 2574 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 23 17:41:05.900270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898466 2574 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 23 17:41:05.900270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898470 2574 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 23 17:41:05.900270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898880 2574 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 23 17:41:05.900270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898886 2574 feature_gate.go:328] unrecognized feature gate: Example2
Apr 23 17:41:05.900270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898890 2574 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 23 17:41:05.900270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898894 2574 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 23 17:41:05.900270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898897 2574 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 23 17:41:05.900270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898900 2574 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 23 17:41:05.900270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898903 2574 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 23 17:41:05.900270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898905 2574 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 23 17:41:05.900270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898908 2574 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 23 17:41:05.900270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898911 2574 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 23 17:41:05.900270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898913 2574 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 23 17:41:05.900270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898915 2574 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 23 17:41:05.900270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898918 2574 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 23 17:41:05.900270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898921 2574 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 23 17:41:05.900739 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898923 2574 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 23 17:41:05.900739 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898926 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 23 17:41:05.900739 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898929 2574 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 23 17:41:05.900739 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898931 2574 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 23 17:41:05.900739 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898934 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 23 17:41:05.900739 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898937 2574 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 23 17:41:05.900739 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898940 2574 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 23 17:41:05.900739 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898942 2574 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 23 17:41:05.900739 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898945 2574 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 23 17:41:05.900739 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898947 2574 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 23 17:41:05.900739 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898949 2574 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 23 17:41:05.900739 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898952 2574 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 23 17:41:05.900739 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898954 2574 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 23 17:41:05.900739 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898957 2574 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 23 17:41:05.900739 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898959 2574 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 23 17:41:05.900739 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898962 2574 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 23 17:41:05.900739 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898964 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 23 17:41:05.900739 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898967 2574 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 23 17:41:05.900739 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898970 2574 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 23 17:41:05.900739 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898973 2574 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 23 17:41:05.901270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898975 2574 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 23 17:41:05.901270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898978 2574 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 23 17:41:05.901270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898980 2574 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 23 17:41:05.901270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898983 2574 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 23 17:41:05.901270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898985 2574 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 23 17:41:05.901270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898988 2574 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 23 17:41:05.901270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898990 2574 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 23 17:41:05.901270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898993 2574 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 23 17:41:05.901270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898995 2574 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 23 17:41:05.901270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.898997 2574 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 23 17:41:05.901270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899000 2574 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 23 17:41:05.901270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899002 2574 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 23 17:41:05.901270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899004 2574 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 23 17:41:05.901270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899007 2574 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 23 17:41:05.901270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899011 2574 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 23 17:41:05.901270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899013 2574 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 23 17:41:05.901270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899016 2574 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 23 17:41:05.901270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899018 2574 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 23 17:41:05.901270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899021 2574 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 23 17:41:05.901270 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899023 2574 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 23 17:41:05.901773 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899026 2574 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 23 17:41:05.901773 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899028 2574 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 23 17:41:05.901773 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899031 2574 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 23 17:41:05.901773 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899036 2574 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 23 17:41:05.901773 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899039 2574 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 23 17:41:05.901773 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899042 2574 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 23 17:41:05.901773 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899045 2574 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 23 17:41:05.901773 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899047 2574 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 23 17:41:05.901773 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899050 2574 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 23 17:41:05.901773 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899053 2574 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 23 17:41:05.901773 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899056 2574 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 23 17:41:05.901773 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899058 2574 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 23 17:41:05.901773 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899062 2574 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 23 17:41:05.901773 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899064 2574 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 23 17:41:05.901773 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899067 2574 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 23 17:41:05.901773 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899069 2574 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 23 17:41:05.901773 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899071 2574 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 23 17:41:05.901773 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899074 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 23 17:41:05.901773 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899076 2574 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 23 17:41:05.901773 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899079 2574 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 23 17:41:05.902269 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899081 2574 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 23 17:41:05.902269 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899083 2574 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 23 17:41:05.902269 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899086 2574 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 23 17:41:05.902269 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899088 2574 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 23 17:41:05.902269 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899091 2574 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 23 17:41:05.902269 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899093 2574 feature_gate.go:328] unrecognized feature gate: Example
Apr 23 17:41:05.902269 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899097 2574 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 23 17:41:05.902269 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899100 2574 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 23 17:41:05.902269 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899103 2574 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 23 17:41:05.902269 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899105 2574 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 23 17:41:05.902269 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899108 2574 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 23 17:41:05.902269 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.899110 2574 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 23 17:41:05.902269 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899809 2574 flags.go:64] FLAG: --address="0.0.0.0"
Apr 23 17:41:05.902269 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899823 2574 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Apr 23 17:41:05.902269 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899830 2574 flags.go:64] FLAG: --anonymous-auth="true"
Apr 23 17:41:05.902269 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899835 2574 flags.go:64] FLAG: --application-metrics-count-limit="100"
Apr 23 17:41:05.902269 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899840 2574 flags.go:64] FLAG: --authentication-token-webhook="false"
Apr 23 17:41:05.902269 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899843 2574 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Apr 23 17:41:05.902269 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899848 2574 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Apr 23 17:41:05.902269 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899852 2574 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Apr 23 17:41:05.902269 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899856 2574 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Apr 23 17:41:05.902794 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899859 2574 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Apr 23 17:41:05.902794 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899862 2574 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Apr 23 17:41:05.902794 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899866 2574 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Apr 23 17:41:05.902794 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899869 2574 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Apr 23 17:41:05.902794 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899871 2574 flags.go:64] FLAG: --cgroup-root=""
Apr 23 17:41:05.902794 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899874 2574 flags.go:64] FLAG: --cgroups-per-qos="true"
Apr 23 17:41:05.902794 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899877 2574 flags.go:64] FLAG: --client-ca-file=""
Apr 23 17:41:05.902794 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899880 2574 flags.go:64] FLAG: --cloud-config=""
Apr 23 17:41:05.902794 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899883 2574 flags.go:64] FLAG: --cloud-provider="external"
Apr 23 17:41:05.902794 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899885 2574 flags.go:64] FLAG: --cluster-dns="[]"
Apr 23 17:41:05.902794 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899890 2574 flags.go:64] FLAG: --cluster-domain=""
Apr 23 17:41:05.902794 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899893 2574 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Apr 23 17:41:05.902794 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899896 2574 flags.go:64] FLAG: --config-dir=""
Apr 23 17:41:05.902794 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899899 2574 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Apr 23 17:41:05.902794 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899902 2574 flags.go:64] FLAG: --container-log-max-files="5"
Apr 23 17:41:05.902794 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899906 2574 flags.go:64] FLAG: --container-log-max-size="10Mi"
Apr 23 17:41:05.902794 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899909 2574 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Apr 23 17:41:05.902794 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899912 2574 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Apr 23 17:41:05.902794 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899916 2574 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Apr 23 17:41:05.902794 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899919 2574 flags.go:64] FLAG: --contention-profiling="false"
Apr 23 17:41:05.902794 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899922 2574 flags.go:64] FLAG: --cpu-cfs-quota="true"
Apr 23 17:41:05.902794 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899925 2574 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Apr 23 17:41:05.902794 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899928 2574 flags.go:64] FLAG: --cpu-manager-policy="none"
Apr 23 17:41:05.902794 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899931 2574 flags.go:64] FLAG: --cpu-manager-policy-options=""
Apr 23 17:41:05.902794 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899937 2574 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Apr 23 17:41:05.903407 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899939 2574 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Apr 23 17:41:05.903407 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899942 2574 flags.go:64] FLAG: --enable-debugging-handlers="true"
Apr 23 17:41:05.903407 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899945 2574 flags.go:64] FLAG: --enable-load-reader="false"
Apr 23 17:41:05.903407 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899948 2574 flags.go:64] FLAG: --enable-server="true"
Apr 23 17:41:05.903407 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899952 2574 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Apr 23 17:41:05.903407 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899956 2574 flags.go:64] FLAG: --event-burst="100"
Apr 23 17:41:05.903407 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899959 2574 flags.go:64] FLAG: --event-qps="50"
Apr 23 17:41:05.903407 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899961 2574 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Apr 23 17:41:05.903407 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899965 2574 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Apr 23 17:41:05.903407 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899968 2574 flags.go:64] FLAG: --eviction-hard=""
Apr 23 17:41:05.903407 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899972 2574 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Apr 23 17:41:05.903407 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899974 2574 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Apr 23 17:41:05.903407 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899977 2574 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Apr 23 17:41:05.903407 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899980 2574 flags.go:64] FLAG: --eviction-soft=""
Apr 23 17:41:05.903407 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899983 2574 flags.go:64] FLAG: --eviction-soft-grace-period=""
Apr 23 17:41:05.903407 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899986 2574 flags.go:64] FLAG: --exit-on-lock-contention="false"
Apr 23 17:41:05.903407 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899989 2574 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Apr 23 17:41:05.903407 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899992 2574 flags.go:64] FLAG: --experimental-mounter-path=""
Apr 23 17:41:05.903407 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899995 2574 flags.go:64] FLAG: --fail-cgroupv1="false"
Apr 23 17:41:05.903407 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.899998 2574 flags.go:64] FLAG: --fail-swap-on="true"
Apr 23 17:41:05.903407 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900000 2574 flags.go:64] FLAG: --feature-gates=""
Apr 23 17:41:05.903407 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900004 2574 flags.go:64] FLAG: --file-check-frequency="20s"
Apr 23 17:41:05.903407 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900007 2574 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Apr 23 17:41:05.903407 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900010 2574 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Apr 23 17:41:05.903407 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900014 2574 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Apr 23 17:41:05.904023 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900017 2574 flags.go:64] FLAG: --healthz-port="10248"
Apr 23 17:41:05.904023 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900020 2574 flags.go:64] FLAG: --help="false"
Apr 23 17:41:05.904023 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900023 2574 flags.go:64] FLAG: --hostname-override="ip-10-0-139-215.ec2.internal"
Apr 23 17:41:05.904023 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900026 2574 flags.go:64] FLAG: --housekeeping-interval="10s"
Apr 23 17:41:05.904023 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900030 2574 flags.go:64] FLAG: --http-check-frequency="20s"
Apr 23 17:41:05.904023 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900032 2574 flags.go:64] FLAG: --image-credential-provider-bin-dir="/usr/libexec/kubelet-image-credential-provider-plugins"
Apr 23 17:41:05.904023 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900036 2574 flags.go:64] FLAG: --image-credential-provider-config="/etc/kubernetes/credential-providers/ecr-credential-provider.yaml"
Apr 23 17:41:05.904023 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900040 2574 flags.go:64] FLAG: --image-gc-high-threshold="85"
Apr 23 17:41:05.904023 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900043 2574 flags.go:64] FLAG: --image-gc-low-threshold="80"
Apr 23 17:41:05.904023 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900046 2574 flags.go:64] FLAG: --image-service-endpoint=""
Apr 23 17:41:05.904023 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900049 2574 flags.go:64] FLAG: --kernel-memcg-notification="false"
Apr 23 17:41:05.904023 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900052 2574 flags.go:64] FLAG: --kube-api-burst="100"
Apr 23 17:41:05.904023 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900055 2574 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Apr 23 17:41:05.904023 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900058 2574 flags.go:64] FLAG: --kube-api-qps="50"
Apr 23 17:41:05.904023 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900060 2574 flags.go:64] FLAG: --kube-reserved=""
Apr 23 17:41:05.904023 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900063 2574 flags.go:64] FLAG: --kube-reserved-cgroup=""
Apr 23 17:41:05.904023 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900067 2574 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Apr 23 17:41:05.904023 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900069 2574 flags.go:64] FLAG: --kubelet-cgroups=""
Apr 23 17:41:05.904023 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900072 2574 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Apr 23 17:41:05.904023 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900075 2574 flags.go:64] FLAG: --lock-file=""
Apr 23 17:41:05.904023 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900078 2574 flags.go:64] FLAG: --log-cadvisor-usage="false"
Apr 23 17:41:05.904023 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900081 2574 flags.go:64] FLAG: --log-flush-frequency="5s"
Apr 23 17:41:05.904023 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900084 2574 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Apr 23 17:41:05.904023 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900089 2574 flags.go:64] FLAG: --log-json-split-stream="false"
Apr 23 17:41:05.904588 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900092 2574 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Apr 23 17:41:05.904588 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900095 2574 flags.go:64] FLAG: --log-text-split-stream="false"
Apr 23 17:41:05.904588 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900097 2574 flags.go:64] FLAG: --logging-format="text"
Apr 23 17:41:05.904588 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900100 2574 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Apr 23 17:41:05.904588 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900103 2574 flags.go:64] FLAG: --make-iptables-util-chains="true"
Apr 23 17:41:05.904588 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900106 2574 flags.go:64] FLAG: --manifest-url=""
Apr 23 17:41:05.904588 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900109 2574 flags.go:64] FLAG: --manifest-url-header=""
Apr 23 17:41:05.904588 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900114 2574 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Apr 23 17:41:05.904588 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900117 2574 flags.go:64] FLAG: --max-open-files="1000000"
Apr 23 17:41:05.904588 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900121 2574 flags.go:64] FLAG: --max-pods="110"
Apr 23 17:41:05.904588 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900125 2574 flags.go:64] FLAG: --maximum-dead-containers="-1"
Apr 23 17:41:05.904588 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900128 2574 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Apr 23 17:41:05.904588 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900131 2574 flags.go:64] FLAG: --memory-manager-policy="None"
Apr 23 17:41:05.904588 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900134 2574 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Apr 23 17:41:05.904588 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900137 2574 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Apr 23 17:41:05.904588 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900140 2574 flags.go:64] FLAG: --node-ip="0.0.0.0"
Apr 23 17:41:05.904588 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900143 2574 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhel"
Apr 23 17:41:05.904588 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900151 2574 flags.go:64] FLAG: --node-status-max-images="50"
Apr 23 17:41:05.904588 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900154 2574 flags.go:64] FLAG: --node-status-update-frequency="10s"
Apr 23 17:41:05.904588 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900157 2574 flags.go:64] FLAG: --oom-score-adj="-999"
Apr 23 17:41:05.904588 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900160 2574 flags.go:64] FLAG: --pod-cidr=""
Apr 23 17:41:05.904588 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900163 2574 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8cfe89231412ff3ee8cb6207fa0be33cad0f08e88c9c0f1e9f7e8c6f14d6715"
Apr 23 17:41:05.904588 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900169 2574 flags.go:64] FLAG: --pod-manifest-path=""
Apr 23 17:41:05.905166 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900173 2574 flags.go:64] FLAG: --pod-max-pids="-1"
Apr 23 17:41:05.905166 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900176 2574 flags.go:64] FLAG: --pods-per-core="0"
Apr 23 17:41:05.905166 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900179 2574 flags.go:64] FLAG: --port="10250"
Apr 23 17:41:05.905166 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900182 2574 flags.go:64] FLAG: --protect-kernel-defaults="false"
Apr 23 17:41:05.905166 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900185 2574 flags.go:64] FLAG: --provider-id="aws:///us-east-1a/i-0a26b3dd3694107e2"
Apr 23 17:41:05.905166 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900188 2574 flags.go:64] FLAG: --qos-reserved=""
Apr 23 17:41:05.905166 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900191 2574 flags.go:64] FLAG: --read-only-port="10255"
Apr 23 17:41:05.905166 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900194 2574 flags.go:64] FLAG: --register-node="true"
Apr 23 17:41:05.905166 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900197 2574 flags.go:64] FLAG: --register-schedulable="true"
Apr 23 17:41:05.905166 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900200 2574 flags.go:64] FLAG: --register-with-taints=""
Apr 23 17:41:05.905166 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900203 2574 flags.go:64] FLAG: --registry-burst="10"
Apr 23 17:41:05.905166 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900206 2574 flags.go:64] FLAG: --registry-qps="5"
Apr 23 17:41:05.905166 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900209 2574 flags.go:64] FLAG: --reserved-cpus=""
Apr 23 17:41:05.905166 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900211 2574 flags.go:64] FLAG: --reserved-memory=""
Apr 23 17:41:05.905166 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900215 2574 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Apr 23 17:41:05.905166 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900218 2574 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Apr 23 17:41:05.905166 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900222 2574 flags.go:64] FLAG: --rotate-certificates="false"
Apr 23 17:41:05.905166 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900225 2574 flags.go:64] FLAG: --rotate-server-certificates="false"
Apr 23 17:41:05.905166 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900227 2574 flags.go:64] FLAG: --runonce="false"
Apr 23 17:41:05.905166 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900230 2574 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Apr 23 17:41:05.905166 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900233 2574 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Apr 23 17:41:05.905166 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900237 2574 flags.go:64] FLAG: --seccomp-default="false"
Apr 23 17:41:05.905166 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900239 2574 flags.go:64] FLAG: --serialize-image-pulls="true"
Apr 23 17:41:05.905166 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900242 2574 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Apr 23 17:41:05.905166 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900245 2574 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Apr 23 17:41:05.905166 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900249 2574 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Apr 23 17:41:05.905837 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900252 2574 flags.go:64] FLAG: --storage-driver-password="root"
Apr 23 17:41:05.905837 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900254 2574 flags.go:64] FLAG: --storage-driver-secure="false"
Apr 23 17:41:05.905837 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900257 2574 flags.go:64] FLAG: --storage-driver-table="stats"
Apr 23 17:41:05.905837 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900260 2574 flags.go:64] FLAG: --storage-driver-user="root"
Apr 23 17:41:05.905837 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900262 2574 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Apr 23 17:41:05.905837 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900265 2574 flags.go:64] FLAG: --sync-frequency="1m0s"
Apr 23 17:41:05.905837 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900268 2574 flags.go:64] FLAG: --system-cgroups=""
Apr 23 17:41:05.905837 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900271 2574 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Apr 23 17:41:05.905837 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900276 2574 flags.go:64] FLAG: --system-reserved-cgroup=""
Apr 23 17:41:05.905837 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900279 2574 flags.go:64] FLAG: --tls-cert-file=""
Apr 23 17:41:05.905837 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900282 2574 flags.go:64] FLAG: --tls-cipher-suites="[]"
Apr 23 17:41:05.905837 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900286 2574 flags.go:64] FLAG: --tls-min-version=""
Apr 23 17:41:05.905837 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900289 2574 flags.go:64] FLAG: --tls-private-key-file=""
Apr 23 17:41:05.905837 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900292 2574 flags.go:64] FLAG: --topology-manager-policy="none"
Apr 23 17:41:05.905837 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900295 2574 flags.go:64] FLAG: --topology-manager-policy-options=""
Apr 23 17:41:05.905837 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900297 2574 flags.go:64] FLAG: --topology-manager-scope="container"
Apr 23 17:41:05.905837 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900300 2574 flags.go:64] FLAG: --v="2"
Apr 23 17:41:05.905837 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900305 2574 flags.go:64] FLAG: --version="false"
Apr 23 17:41:05.905837 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900309 2574 flags.go:64] FLAG: --vmodule=""
Apr 23 17:41:05.905837 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900313 2574 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Apr 23 17:41:05.905837 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.900316 2574 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Apr 23 17:41:05.905837 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900417 2574 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 23 17:41:05.905837 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900422 2574 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 23 17:41:05.905837 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900425 2574 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 23 17:41:05.906453 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900427 2574 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 23 17:41:05.906453 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900430 2574 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 23 17:41:05.906453 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900433 2574 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 23 17:41:05.906453 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900436 2574 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 23 17:41:05.906453 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900439 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 23 17:41:05.906453 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900445 2574 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 23 17:41:05.906453 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900447 2574 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 23 17:41:05.906453 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900450 2574 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 23 17:41:05.906453 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900452 2574 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 23 17:41:05.906453 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900455 2574 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 23 17:41:05.906453 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900457 2574 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 23 17:41:05.906453 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900460 2574 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 23 17:41:05.906453 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900462 2574 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 23 17:41:05.906453 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900465 2574 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 23 17:41:05.906453 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900467 2574 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 23 17:41:05.906453 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900470 2574 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 23 17:41:05.906453 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900473 2574 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 23 17:41:05.906453 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900475 2574 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 23 17:41:05.906453 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900478 2574 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 23 17:41:05.906453 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900480 2574 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 23 17:41:05.906981 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900482 2574 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 23 17:41:05.906981 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900485 2574 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 23 17:41:05.906981 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900488 2574 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 23 17:41:05.906981 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900490 2574 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 23 17:41:05.906981 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900493 2574 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 23 17:41:05.906981 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900495 2574 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 23 17:41:05.906981 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900497 2574 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 23 17:41:05.906981 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900500 2574 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 23 17:41:05.906981 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900502 2574 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 23 17:41:05.906981 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900505 2574 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 23 17:41:05.906981 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900508 2574 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 23 17:41:05.906981 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900510 2574 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 23 17:41:05.906981 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900513 2574 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 23 17:41:05.906981 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900515 2574 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 23 17:41:05.906981 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900518 2574 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 23 17:41:05.906981 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900521 2574 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 23 17:41:05.906981 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900523 2574 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 23 17:41:05.906981 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900527 2574 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 23 17:41:05.906981 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900530 2574 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 23 17:41:05.906981 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900532 2574 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 23 17:41:05.907487 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900535 2574 feature_gate.go:328] unrecognized feature gate: Example
Apr 23 17:41:05.907487 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900537 2574 feature_gate.go:328] unrecognized feature gate: Example2
Apr 23 17:41:05.907487 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900540 2574 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 23 17:41:05.907487 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900542 2574 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 23 17:41:05.907487 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900544 2574 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 23 17:41:05.907487 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900547 2574 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 23 17:41:05.907487 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900549 2574 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 23 17:41:05.907487 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900552 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 23 17:41:05.907487 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900554 2574 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 23 17:41:05.907487 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900557 2574 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 23 17:41:05.907487 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900559 2574 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 23 17:41:05.907487 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900561 2574 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 23 17:41:05.907487 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900564 2574 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 23 17:41:05.907487 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900568 2574 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 23 17:41:05.907487 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900571 2574 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 23 17:41:05.907487 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900574 2574 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 23 17:41:05.907487 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900577 2574 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 23 17:41:05.907487 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900580 2574 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 23 17:41:05.907487 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900582 2574 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 23 17:41:05.907487 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900585 2574 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 23 17:41:05.908272 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900587 2574 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 23 17:41:05.908272 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900590 2574 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 23 17:41:05.908272 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900592 2574 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 23 17:41:05.908272 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900595 2574 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 23 17:41:05.908272 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900597 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 23 17:41:05.908272 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900600 2574 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 23 17:41:05.908272 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900602 2574 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 23 17:41:05.908272 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900605 2574 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 23 17:41:05.908272 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900607 2574 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 23 17:41:05.908272 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900612 2574 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 23 17:41:05.908272 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900615 2574 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 23 17:41:05.908272 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900617 2574 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 23 17:41:05.908272 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900620 2574 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 23 17:41:05.908272 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900623 2574 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 23 17:41:05.908272 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900625 2574 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 23 17:41:05.908272 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900627 2574 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 23 17:41:05.908272 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900643 2574 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 23 17:41:05.908272 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900645 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 23 17:41:05.908272 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900648 2574 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 23 17:41:05.908837 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900651 2574 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 23 17:41:05.908837 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900653 2574 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 23 17:41:05.908837 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900657 2574 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 23 17:41:05.908837 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.900660 2574 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 23 17:41:05.908837 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.901278 2574 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Apr 23 17:41:05.908837 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.908742 2574 server.go:530] "Kubelet version" kubeletVersion="v1.33.9"
Apr 23 17:41:05.908837 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.908762 2574 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 23 17:41:05.908837 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908819 2574 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 23 17:41:05.908837 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908825 2574 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 23 17:41:05.908837 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908828 2574 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 23 17:41:05.908837 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908831 2574 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 23 17:41:05.908837 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908834 2574 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 23 17:41:05.908837 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908837 2574 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 23 17:41:05.908837 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908841 2574 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 23 17:41:05.908837 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908844 2574 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 23 17:41:05.909216 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908847 2574 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 23 17:41:05.909216 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908850 2574 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 23 17:41:05.909216 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908853 2574 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 23 17:41:05.909216 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908855 2574 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 23 17:41:05.909216 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908858 2574 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 23 17:41:05.909216 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908860 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 23 17:41:05.909216 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908863 2574 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 23 17:41:05.909216 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908866 2574 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 23 17:41:05.909216 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908868 2574 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 23 17:41:05.909216 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908871 2574 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 23 17:41:05.909216 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908873 2574 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 23 17:41:05.909216 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908876 2574 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 23 17:41:05.909216 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908879 2574 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 23 17:41:05.909216 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908881 2574 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 23 17:41:05.909216 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908884 2574 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 23 17:41:05.909216 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908886 2574 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 23 17:41:05.909216 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908889 2574 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 23 17:41:05.909216 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908891 2574 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 23 17:41:05.909216 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908895 2574 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 23 17:41:05.909725 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908899 2574 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 23 17:41:05.909725 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908902 2574 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 23 17:41:05.909725 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908905 2574 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 23 17:41:05.909725 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908907 2574 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 23 17:41:05.909725 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908910 2574 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 23 17:41:05.909725 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908912 2574 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 23 17:41:05.909725 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908915 2574 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 23 17:41:05.909725 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908917 2574 feature_gate.go:328] unrecognized feature gate: Example2
Apr 23 17:41:05.909725 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908920 2574 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 23 17:41:05.909725 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908922 2574 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 23 17:41:05.909725 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908925 2574 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 23 17:41:05.909725 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908927 2574 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 23 17:41:05.909725 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908929 2574 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 23 17:41:05.909725 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908932 2574 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 23 17:41:05.909725 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908935 2574 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 23 17:41:05.909725 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908938 2574 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 23 17:41:05.909725 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908941 2574 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 23 17:41:05.909725 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908944 2574 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 23 17:41:05.909725 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908948 2574 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 23 17:41:05.910211 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908952 2574 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 23 17:41:05.910211 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908956 2574 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 23 17:41:05.910211 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908959 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 23 17:41:05.910211 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908961 2574 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 23 17:41:05.910211 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908964 2574 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 23 17:41:05.910211 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908966 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 23 17:41:05.910211 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908969 2574 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 23 17:41:05.910211 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908971 2574 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 23 17:41:05.910211 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908974 2574 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 23 17:41:05.910211 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908976 2574 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 23 17:41:05.910211 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908979 2574 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 23 17:41:05.910211 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908982 2574 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 23 17:41:05.910211 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908984 2574 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 23 17:41:05.910211 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908987 2574 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 23 17:41:05.910211 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908989 2574 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 23 17:41:05.910211 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908992 2574 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 23 17:41:05.910211 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908994 2574 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 23 17:41:05.910211 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908997 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 23 17:41:05.910211 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.908999 2574 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 23 17:41:05.910211 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909001 2574 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 23 17:41:05.910731 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909004 2574 feature_gate.go:328] unrecognized feature gate: Example
Apr 23 17:41:05.910731 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909007 2574 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 23 17:41:05.910731 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909009 2574 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 23 17:41:05.910731 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909012 2574 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 23 17:41:05.910731 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909014 2574 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 23 17:41:05.910731 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909017 2574 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 23 17:41:05.910731 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909020 2574 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 23 17:41:05.910731 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909023 2574 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 23 17:41:05.910731 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909025 2574 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 23 17:41:05.910731 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909028 2574 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 23 17:41:05.910731 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909033 2574 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 23 17:41:05.910731 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909035 2574 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 23 17:41:05.910731 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909038 2574 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 23 17:41:05.910731 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909041 2574 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 23 17:41:05.910731 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909043 2574 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 23 17:41:05.910731 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909046 2574 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 23 17:41:05.910731 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909065 2574 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 23 17:41:05.910731 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909069 2574 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 23 17:41:05.910731 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909072 2574 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 23 17:41:05.910731 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909075 2574 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 23 17:41:05.911206 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.909080 2574 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Apr 23 17:41:05.911206 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909189 2574 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 23 17:41:05.911206 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909195 2574 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 23 17:41:05.911206 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909198 2574 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 23 17:41:05.911206 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909201 2574 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 23 17:41:05.911206 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909203 2574 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 23 17:41:05.911206 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909206 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 23 17:41:05.911206 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909209 2574 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 23 17:41:05.911206 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909211 2574 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 23 17:41:05.911206 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909214 2574 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 23 17:41:05.911206 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909216 2574 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 23 17:41:05.911206 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909219 2574 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 23 17:41:05.911206 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909221 2574 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 23 17:41:05.911206 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909224 2574 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 23 17:41:05.911206 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909226 2574 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 23 17:41:05.911594 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909229 2574 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 23 17:41:05.911594 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909232 2574 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 23 17:41:05.911594 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909234 2574 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 23 17:41:05.911594 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909237 2574 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 23 17:41:05.911594 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909240 2574 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 23 17:41:05.911594 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909243 2574 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 23 17:41:05.911594 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909246 2574 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 23 17:41:05.911594 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909248 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 23 17:41:05.911594 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909252 2574 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 23 17:41:05.911594 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909254 2574 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 23 17:41:05.911594 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909256 2574 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 23 17:41:05.911594 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909259 2574 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 23 17:41:05.911594 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909262 2574 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 23 17:41:05.911594 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909264 2574 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 23 17:41:05.911594 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909266 2574 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 23 17:41:05.911594 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909270 2574 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 23 17:41:05.911594 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909274 2574 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 23 17:41:05.911594 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909277 2574 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 23 17:41:05.911594 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909279 2574 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 23 17:41:05.912073 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909282 2574 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 23 17:41:05.912073 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909284 2574 feature_gate.go:328] unrecognized feature gate: Example2
Apr 23 17:41:05.912073 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909287 2574 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 23 17:41:05.912073 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909290 2574 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 23 17:41:05.912073 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909292 2574 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 23 17:41:05.912073 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909295 2574 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 23 17:41:05.912073 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909299 2574 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 23 17:41:05.912073 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909302 2574 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 23 17:41:05.912073 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909304 2574 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 23 17:41:05.912073 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909307 2574 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 23 17:41:05.912073 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909310 2574 feature_gate.go:328] unrecognized feature gate: Example
Apr 23 17:41:05.912073 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909312 2574 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 23 17:41:05.912073 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909315 2574 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 23 17:41:05.912073 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909318 2574 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 23 17:41:05.912073 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909320 2574 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 23 17:41:05.912073 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909323 2574 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 23 17:41:05.912073 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909326 2574 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 23 17:41:05.912073 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909329 2574 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 23 17:41:05.912073 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909331 2574 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 23 17:41:05.912073 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909334 2574 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 23 17:41:05.912556 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909336 2574 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 23 17:41:05.912556 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909339 2574 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 23 17:41:05.912556 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909342 2574 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 23 17:41:05.912556 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909344 2574 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 23 17:41:05.912556 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909347 2574 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 23 17:41:05.912556 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909349 2574 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 23 17:41:05.912556 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909352 2574 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 23 17:41:05.912556 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909354 2574 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 23 17:41:05.912556 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909356 2574 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 23 17:41:05.912556 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909359 2574 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 23 17:41:05.912556 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909361 2574 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 23 17:41:05.912556 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909364 2574 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 23 17:41:05.912556 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909366 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 23 17:41:05.912556 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909369 2574 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 23 17:41:05.912556 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909371 2574 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 23 17:41:05.912556 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909373 2574 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 23 17:41:05.912556 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909376 2574 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 23 17:41:05.912556 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909378 2574 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 23 17:41:05.912556 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909381 2574 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 23 17:41:05.912556 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909383 2574 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 23 17:41:05.913108 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909386 2574 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 23 17:41:05.913108 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909388 2574 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 23 17:41:05.913108 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909390 2574 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 23 17:41:05.913108 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909393 2574 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 23 17:41:05.913108 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909395 2574 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 23 17:41:05.913108 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909398 2574 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 23 17:41:05.913108 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909400 2574 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 23 17:41:05.913108 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909403 2574 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 23 17:41:05.913108 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909406 2574 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 23 17:41:05.913108 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909408 2574 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 23 17:41:05.913108 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909410 2574 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 23 17:41:05.913108 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909413 2574 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 23 17:41:05.913108 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:05.909415 2574 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 23 17:41:05.913108 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.909421 2574 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Apr 23 17:41:05.913108 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.910161 2574 server.go:962] "Client rotation is on, will bootstrap in background"
Apr 23 17:41:05.914200 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.914185 2574 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Apr 23 17:41:05.915049 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.915038 2574 server.go:1019] "Starting client certificate rotation"
Apr 23 17:41:05.915156 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.915139 2574 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet"
Apr 23 17:41:05.915206 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.915183 2574 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Apr 23 17:41:05.939799 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.939777 2574 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Apr 23 17:41:05.942052 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.942033 2574 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Apr 23 17:41:05.959532 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.959510 2574 log.go:25] "Validated CRI v1 runtime API"
Apr 23 17:41:05.964958 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.964934 2574 log.go:25] "Validated CRI v1 image API"
Apr 23 17:41:05.966216 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.966196 2574 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 23 17:41:05.970145 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.970114 2574 fs.go:135] Filesystem UUIDs: map[4ae50370-514d-4225-a6eb-256b5c3f8935:/dev/nvme0n1p4 74434568-bb51-46ff-9fe8-b01d11e2c3b7:/dev/nvme0n1p3 7B77-95E7:/dev/nvme0n1p2]
Apr 23 17:41:05.970252 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.970144 2574 fs.go:136] Filesystem partitions: map[/dev/nvme0n1p3:{mountpoint:/boot major:259 minor:3 fsType:ext4 blockSize:0} /dev/nvme0n1p4:{mountpoint:/var major:259 minor:4 fsType:xfs blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}]
Apr 23 17:41:05.971769 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.971748 2574 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Apr 23 17:41:05.976602 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.976491 2574 manager.go:217] Machine: {Timestamp:2026-04-23 17:41:05.97443254 +0000 UTC m=+0.396101490 CPUVendorID:GenuineIntel NumCores:8 NumPhysicalCores:4 NumSockets:1 CpuFrequency:3098536 MemoryCapacity:32812175360 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ec2fb5d9e1ae0dbc93303d0ef88004ff SystemUUID:ec2fb5d9-e1ae-0dbc-9330-3d0ef88004ff BootID:b494d5a9-b05d-4425-9229-7a7e75a529c3 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16406085632 Type:vfs Inodes:4005392 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6562435072 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/nvme0n1p4 DeviceMajor:259 DeviceMinor:4 Capacity:128243970048 Type:vfs Inodes:62651840 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6103040 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16406089728 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/nvme0n1p3 DeviceMajor:259 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[259:0:{Name:nvme0n1 Major:259 Minor:0 Size:128849018880 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:02:57:5b:d4:dc:95 Speed:0 Mtu:9001} {Name:ens5 MacAddress:02:57:5b:d4:dc:95 Speed:0 Mtu:9001} {Name:ovs-system MacAddress:d6:9a:bd:c9:fe:7f Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:32812175360 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0 4] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:1 Threads:[1 5] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:2 Threads:[2 6] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:3 Threads:[3 7] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:}] Caches:[{Id:0 Size:34603008 Type:Unified Level:3}] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Apr 23 17:41:05.976602 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.976595 2574 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Apr 23 17:41:05.976746 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.976694 2574 manager.go:233] Version: {KernelVersion:5.14.0-570.107.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20260414-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Apr 23 17:41:05.977027 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.977005 2574 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 23 17:41:05.977171 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.977028 2574 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-10-0-139-215.ec2.internal","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 23 17:41:05.977218 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.977182 2574 topology_manager.go:138] "Creating topology manager with none policy"
Apr 23 17:41:05.977218 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.977190 2574 container_manager_linux.go:306] "Creating device plugin manager"
Apr 23 17:41:05.977218 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.977203 2574 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Apr 23 17:41:05.978034 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.978024 2574 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Apr 23 17:41:05.978730 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.978721 2574 state_mem.go:36] "Initialized new in-memory state store"
Apr 23 17:41:05.979013 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.979004 2574 server.go:1267] "Using root directory" path="/var/lib/kubelet"
Apr 23 17:41:05.982156 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.982146 2574 kubelet.go:491] "Attempting to sync node with API server"
Apr 23 17:41:05.982190 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.982165 2574 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 23 17:41:05.982190 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.982177 2574 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Apr 23 17:41:05.982190 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.982186 2574 kubelet.go:397] "Adding apiserver pod source"
Apr 23 17:41:05.982317 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.982194 2574 apiserver.go:42] "Waiting for node sync 
before watching apiserver pods" Apr 23 17:41:05.983225 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.983211 2574 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Apr 23 17:41:05.983225 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.983229 2574 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Apr 23 17:41:05.986008 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.985990 2574 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.10-2.rhaos4.20.gita4d0894.el9" apiVersion="v1" Apr 23 17:41:05.987192 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.987178 2574 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 23 17:41:05.988612 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.988597 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Apr 23 17:41:05.988612 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.988615 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Apr 23 17:41:05.988764 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.988621 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Apr 23 17:41:05.988764 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.988627 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Apr 23 17:41:05.988764 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.988649 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Apr 23 17:41:05.988764 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.988655 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Apr 23 17:41:05.988764 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.988664 2574 plugins.go:616] "Loaded volume plugin" 
pluginName="kubernetes.io/iscsi" Apr 23 17:41:05.988764 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.988669 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Apr 23 17:41:05.988764 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.988677 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc" Apr 23 17:41:05.988764 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.988684 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Apr 23 17:41:05.988764 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.988701 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Apr 23 17:41:05.988764 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.988710 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Apr 23 17:41:05.989465 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.989455 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Apr 23 17:41:05.989465 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.989464 2574 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Apr 23 17:41:05.992928 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.992913 2574 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 23 17:41:05.993010 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.992950 2574 server.go:1295] "Started kubelet" Apr 23 17:41:05.993066 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.993006 2574 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 23 17:41:05.993167 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.993112 2574 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 23 17:41:05.993221 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.993185 2574 server_v1.go:47] "podresources" method="list" useActivePods=true Apr 23 17:41:05.993836 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:05.993813 
2574 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-139-215.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 23 17:41:05.993933 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:05.993911 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 23 17:41:05.993995 ip-10-0-139-215 systemd[1]: Started Kubernetes Kubelet. Apr 23 17:41:05.994100 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.994034 2574 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-139-215.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 23 17:41:05.994288 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.994271 2574 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 23 17:41:05.994486 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.994434 2574 server.go:317] "Adding debug handlers to kubelet server" Apr 23 17:41:05.998848 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.998833 2574 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 23 17:41:05.998938 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:05.998852 2574 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Apr 23 17:41:06.001276 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.001237 2574 volume_manager.go:295] "The desired_state_of_world populator starts" Apr 23 17:41:06.001367 ip-10-0-139-215 
kubenswrapper[2574]: I0423 17:41:06.001281 2574 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 23 17:41:06.001420 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:06.001352 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-139-215.ec2.internal\" not found" Apr 23 17:41:06.001472 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.001421 2574 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 23 17:41:06.001596 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.001540 2574 reconstruct.go:97] "Volume reconstruction finished" Apr 23 17:41:06.001670 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.001596 2574 reconciler.go:26] "Reconciler: start to sync state" Apr 23 17:41:06.002186 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.002166 2574 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Apr 23 17:41:06.002186 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.002185 2574 factory.go:55] Registering systemd factory Apr 23 17:41:06.002320 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.002195 2574 factory.go:223] Registration of the systemd container factory successfully Apr 23 17:41:06.002320 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.002215 2574 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-47snk" Apr 23 17:41:06.002589 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.002572 2574 factory.go:153] Registering CRI-O factory Apr 23 17:41:06.002589 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.002588 2574 factory.go:223] Registration of the crio container factory successfully Apr 23 17:41:06.002731 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.002607 2574 
factory.go:103] Registering Raw factory Apr 23 17:41:06.002731 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.002621 2574 manager.go:1196] Started watching for new ooms in manager Apr 23 17:41:06.003323 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.003271 2574 manager.go:319] Starting recovery of all containers Apr 23 17:41:06.003461 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:06.002555 2574 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-139-215.ec2.internal.18a90d38e56e85c7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-139-215.ec2.internal,UID:ip-10-0-139-215.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-10-0-139-215.ec2.internal,},FirstTimestamp:2026-04-23 17:41:05.992926663 +0000 UTC m=+0.414595616,LastTimestamp:2026-04-23 17:41:05.992926663 +0000 UTC m=+0.414595616,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-139-215.ec2.internal,}" Apr 23 17:41:06.004918 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:06.004886 2574 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 23 17:41:06.006818 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.006792 2574 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-47snk" Apr 23 17:41:06.007206 ip-10-0-139-215 kubenswrapper[2574]: 
E0423 17:41:06.007186 2574 kubelet.go:1618] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Apr 23 17:41:06.014873 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.014716 2574 manager.go:324] Recovery completed Apr 23 17:41:06.019306 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.019293 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:41:06.022253 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.022239 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-139-215.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:41:06.022325 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.022267 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-139-215.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:41:06.022325 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.022277 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-139-215.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:41:06.022768 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:06.022750 2574 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-10-0-139-215.ec2.internal\" not found" node="ip-10-0-139-215.ec2.internal" Apr 23 17:41:06.022768 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.022752 2574 cpu_manager.go:222] "Starting CPU manager" policy="none" Apr 23 17:41:06.022861 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.022775 2574 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Apr 23 17:41:06.022861 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.022799 2574 state_mem.go:36] "Initialized new in-memory state store" Apr 23 17:41:06.024795 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.024784 2574 policy_none.go:49] "None policy: Start" Apr 23 17:41:06.024834 
ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.024799 2574 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 23 17:41:06.024834 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.024809 2574 state_mem.go:35] "Initializing new in-memory state store" Apr 23 17:41:06.062319 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.062301 2574 manager.go:341] "Starting Device Plugin manager" Apr 23 17:41:06.073992 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:06.062355 2574 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 23 17:41:06.073992 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.062369 2574 server.go:85] "Starting device plugin registration server" Apr 23 17:41:06.073992 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.062626 2574 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 23 17:41:06.073992 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.062660 2574 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 23 17:41:06.073992 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.062771 2574 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Apr 23 17:41:06.073992 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.062844 2574 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Apr 23 17:41:06.073992 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.062850 2574 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 23 17:41:06.073992 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:06.063358 2574 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="non-existent label \"crio-containers\"" Apr 23 17:41:06.073992 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:06.063394 2574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-139-215.ec2.internal\" not found" Apr 23 17:41:06.129941 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.129906 2574 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 23 17:41:06.131169 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.131152 2574 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 23 17:41:06.131266 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.131186 2574 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 23 17:41:06.131266 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.131205 2574 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 23 17:41:06.131266 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.131211 2574 kubelet.go:2451] "Starting kubelet main sync loop" Apr 23 17:41:06.131266 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:06.131246 2574 kubelet.go:2475] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Apr 23 17:41:06.133506 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.133486 2574 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Apr 23 17:41:06.163419 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.163368 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:41:06.164214 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.164201 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-139-215.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:41:06.164272 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.164230 2574 
kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-139-215.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:41:06.164272 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.164240 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-139-215.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:41:06.164272 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.164262 2574 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-139-215.ec2.internal" Apr 23 17:41:06.177744 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.177727 2574 kubelet_node_status.go:81] "Successfully registered node" node="ip-10-0-139-215.ec2.internal" Apr 23 17:41:06.177790 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:06.177750 2574 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-139-215.ec2.internal\": node \"ip-10-0-139-215.ec2.internal\" not found" Apr 23 17:41:06.190132 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:06.190105 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-139-215.ec2.internal\" not found" Apr 23 17:41:06.232359 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.232304 2574 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-139-215.ec2.internal","kube-system/kube-apiserver-proxy-ip-10-0-139-215.ec2.internal"] Apr 23 17:41:06.232469 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.232401 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:41:06.233323 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.233303 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-139-215.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:41:06.233390 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.233338 2574 kubelet_node_status.go:736] "Recording 
event message for node" node="ip-10-0-139-215.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:41:06.233390 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.233349 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-139-215.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:41:06.235458 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.235446 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:41:06.235613 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.235599 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-139-215.ec2.internal" Apr 23 17:41:06.235703 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.235644 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:41:06.236172 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.236153 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-139-215.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:41:06.236172 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.236164 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-139-215.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:41:06.236301 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.236185 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-139-215.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:41:06.236301 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.236200 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-139-215.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:41:06.236301 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.236187 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-139-215.ec2.internal" 
event="NodeHasNoDiskPressure" Apr 23 17:41:06.236301 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.236266 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-139-215.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:41:06.238270 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.238256 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-139-215.ec2.internal" Apr 23 17:41:06.238317 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.238290 2574 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 23 17:41:06.239361 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.239348 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-139-215.ec2.internal" event="NodeHasSufficientMemory" Apr 23 17:41:06.239406 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.239370 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-139-215.ec2.internal" event="NodeHasNoDiskPressure" Apr 23 17:41:06.239406 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.239381 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-139-215.ec2.internal" event="NodeHasSufficientPID" Apr 23 17:41:06.253170 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:06.253148 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-139-215.ec2.internal\" not found" node="ip-10-0-139-215.ec2.internal" Apr 23 17:41:06.256825 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:06.256810 2574 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-139-215.ec2.internal\" not found" node="ip-10-0-139-215.ec2.internal" Apr 23 17:41:06.290704 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:06.290682 2574 kubelet_node_status.go:515] "Error getting the current node 
from lister" err="node \"ip-10-0-139-215.ec2.internal\" not found" Apr 23 17:41:06.303116 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.303083 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/fed33c6440c35183d017b214d982b3b1-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-139-215.ec2.internal\" (UID: \"fed33c6440c35183d017b214d982b3b1\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-139-215.ec2.internal" Apr 23 17:41:06.303199 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.303126 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fed33c6440c35183d017b214d982b3b1-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-139-215.ec2.internal\" (UID: \"fed33c6440c35183d017b214d982b3b1\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-139-215.ec2.internal" Apr 23 17:41:06.303199 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.303143 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/0d68c36ed96ea5528325ea66516f8810-config\") pod \"kube-apiserver-proxy-ip-10-0-139-215.ec2.internal\" (UID: \"0d68c36ed96ea5528325ea66516f8810\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-139-215.ec2.internal" Apr 23 17:41:06.391340 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:06.391309 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-139-215.ec2.internal\" not found" Apr 23 17:41:06.403806 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.403783 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/fed33c6440c35183d017b214d982b3b1-etc-kube\") pod 
\"kube-rbac-proxy-crio-ip-10-0-139-215.ec2.internal\" (UID: \"fed33c6440c35183d017b214d982b3b1\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-139-215.ec2.internal" Apr 23 17:41:06.403941 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.403820 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fed33c6440c35183d017b214d982b3b1-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-139-215.ec2.internal\" (UID: \"fed33c6440c35183d017b214d982b3b1\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-139-215.ec2.internal" Apr 23 17:41:06.403941 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.403856 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/0d68c36ed96ea5528325ea66516f8810-config\") pod \"kube-apiserver-proxy-ip-10-0-139-215.ec2.internal\" (UID: \"0d68c36ed96ea5528325ea66516f8810\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-139-215.ec2.internal" Apr 23 17:41:06.403941 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.403888 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/fed33c6440c35183d017b214d982b3b1-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-139-215.ec2.internal\" (UID: \"fed33c6440c35183d017b214d982b3b1\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-139-215.ec2.internal" Apr 23 17:41:06.403941 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.403905 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fed33c6440c35183d017b214d982b3b1-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-139-215.ec2.internal\" (UID: \"fed33c6440c35183d017b214d982b3b1\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-139-215.ec2.internal" Apr 23 
17:41:06.403941 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.403906 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/0d68c36ed96ea5528325ea66516f8810-config\") pod \"kube-apiserver-proxy-ip-10-0-139-215.ec2.internal\" (UID: \"0d68c36ed96ea5528325ea66516f8810\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-139-215.ec2.internal" Apr 23 17:41:06.491573 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:06.491510 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-139-215.ec2.internal\" not found" Apr 23 17:41:06.555051 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.555025 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-139-215.ec2.internal" Apr 23 17:41:06.559572 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.559556 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-139-215.ec2.internal" Apr 23 17:41:06.592137 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:06.592110 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-139-215.ec2.internal\" not found" Apr 23 17:41:06.692798 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:06.692768 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-139-215.ec2.internal\" not found" Apr 23 17:41:06.793437 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:06.793364 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-139-215.ec2.internal\" not found" Apr 23 17:41:06.894121 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:06.894092 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-139-215.ec2.internal\" not found" Apr 23 17:41:06.914548 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.914525 2574 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Apr 23 17:41:06.915049 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.914689 2574 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Apr 23 17:41:06.994282 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:06.994249 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-139-215.ec2.internal\" not found" Apr 23 17:41:06.999366 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:06.999339 2574 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Apr 23 17:41:07.009382 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:07.009347 2574 
certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2028-04-22 17:36:05 +0000 UTC" deadline="2027-10-01 10:52:23.425801613 +0000 UTC" Apr 23 17:41:07.009382 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:07.009379 2574 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="12617h11m16.416424779s" Apr 23 17:41:07.015246 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:07.015223 2574 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Apr 23 17:41:07.040137 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:07.040116 2574 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-nhxm7" Apr 23 17:41:07.053878 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:07.053831 2574 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-nhxm7" Apr 23 17:41:07.066511 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:07.066476 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfed33c6440c35183d017b214d982b3b1.slice/crio-c483bc416c4d1ef7f05c3e503e13dcdc68b1f4ef895a15cbed178a79239f52e7 WatchSource:0}: Error finding container c483bc416c4d1ef7f05c3e503e13dcdc68b1f4ef895a15cbed178a79239f52e7: Status 404 returned error can't find the container with id c483bc416c4d1ef7f05c3e503e13dcdc68b1f4ef895a15cbed178a79239f52e7 Apr 23 17:41:07.071394 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:07.071377 2574 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 23 17:41:07.075648 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:07.075603 2574 reflector.go:430] "Caches populated" 
type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Apr 23 17:41:07.077152 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:07.077118 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d68c36ed96ea5528325ea66516f8810.slice/crio-ef8592573c9b1f45e814e3de2ce16d3f79790e15f35f1d029bd0847199131b19 WatchSource:0}: Error finding container ef8592573c9b1f45e814e3de2ce16d3f79790e15f35f1d029bd0847199131b19: Status 404 returned error can't find the container with id ef8592573c9b1f45e814e3de2ce16d3f79790e15f35f1d029bd0847199131b19 Apr 23 17:41:07.094926 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:07.094905 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-139-215.ec2.internal\" not found" Apr 23 17:41:07.134174 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:07.134130 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-139-215.ec2.internal" event={"ID":"fed33c6440c35183d017b214d982b3b1","Type":"ContainerStarted","Data":"c483bc416c4d1ef7f05c3e503e13dcdc68b1f4ef895a15cbed178a79239f52e7"} Apr 23 17:41:07.135104 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:07.135076 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-139-215.ec2.internal" event={"ID":"0d68c36ed96ea5528325ea66516f8810","Type":"ContainerStarted","Data":"ef8592573c9b1f45e814e3de2ce16d3f79790e15f35f1d029bd0847199131b19"} Apr 23 17:41:07.195271 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:07.195248 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-139-215.ec2.internal\" not found" Apr 23 17:41:07.295910 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:07.295869 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-139-215.ec2.internal\" not found" 
Apr 23 17:41:07.380203 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:07.380177 2574 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Apr 23 17:41:07.396726 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:07.396702 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-139-215.ec2.internal\" not found" Apr 23 17:41:07.497542 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:07.497504 2574 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-139-215.ec2.internal\" not found" Apr 23 17:41:07.571203 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:07.571000 2574 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Apr 23 17:41:07.602649 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:07.602605 2574 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-139-215.ec2.internal" Apr 23 17:41:07.617479 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:07.617454 2574 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 23 17:41:07.618682 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:07.618662 2574 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-139-215.ec2.internal" Apr 23 17:41:07.633343 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:07.633278 2574 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 23 17:41:07.983579 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:07.983498 2574 apiserver.go:52] "Watching apiserver" Apr 23 17:41:07.994746 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:07.994721 2574 
reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Apr 23 17:41:07.995199 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:07.995166 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-l747m","openshift-multus/multus-r2mgw","openshift-multus/network-metrics-daemon-mfhnv","openshift-network-diagnostics/network-check-target-lz78w","kube-system/konnectivity-agent-vq478","kube-system/kube-apiserver-proxy-ip-10-0-139-215.ec2.internal","openshift-cluster-node-tuning-operator/tuned-xssq4","openshift-image-registry/node-ca-76svx","openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-139-215.ec2.internal","openshift-multus/multus-additional-cni-plugins-5h5xl","openshift-network-operator/iptables-alerter-s97fv","openshift-ovn-kubernetes/ovnkube-node-246wr","openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-htjdl"] Apr 23 17:41:07.997662 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:07.997628 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-xssq4" Apr 23 17:41:08.001259 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.001240 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"kube-root-ca.crt\"" Apr 23 17:41:08.001259 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.001253 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"tuned-dockercfg-q77pz\"" Apr 23 17:41:08.001419 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.001240 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"openshift-service-ca.crt\"" Apr 23 17:41:08.002529 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.002510 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.002676 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.002659 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfhnv" Apr 23 17:41:08.002759 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:08.002733 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mfhnv" podUID="3d52817f-2284-48d3-800c-a67ac0e0fe4b" Apr 23 17:41:08.005389 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.004802 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lz78w" Apr 23 17:41:08.005389 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:08.004860 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-lz78w" podUID="c0594da3-a624-4d0d-9765-82537ca166c3" Apr 23 17:41:08.005686 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.005569 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Apr 23 17:41:08.005686 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.005584 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Apr 23 17:41:08.005686 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.005575 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Apr 23 17:41:08.005920 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.005650 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-2gjvj\"" Apr 23 17:41:08.007601 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.007581 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Apr 23 17:41:08.009562 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.009398 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-vq478" Apr 23 17:41:08.011593 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.011575 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-l747m" Apr 23 17:41:08.011711 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.011692 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-76svx" Apr 23 17:41:08.012943 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.012907 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/af104084-9831-4928-8414-358452540c48-etc-modprobe-d\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4" Apr 23 17:41:08.012943 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.012936 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af104084-9831-4928-8414-358452540c48-host\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4" Apr 23 17:41:08.013139 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.012961 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cflnd\" (UniqueName: \"kubernetes.io/projected/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-kube-api-access-cflnd\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.013139 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.012984 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/af104084-9831-4928-8414-358452540c48-etc-sysctl-conf\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4" Apr 23 17:41:08.013139 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.013037 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-system-cni-dir\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.013139 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.013089 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-cnibin\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.013139 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.013120 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-os-release\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.013394 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.013096 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"konnectivity-agent\"" Apr 23 17:41:08.013394 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.013159 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-host-var-lib-kubelet\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.013394 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.013185 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5hr6\" (UniqueName: \"kubernetes.io/projected/3d52817f-2284-48d3-800c-a67ac0e0fe4b-kube-api-access-v5hr6\") pod \"network-metrics-daemon-mfhnv\" (UID: \"3d52817f-2284-48d3-800c-a67ac0e0fe4b\") " 
pod="openshift-multus/network-metrics-daemon-mfhnv" Apr 23 17:41:08.013394 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.013215 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/af104084-9831-4928-8414-358452540c48-etc-systemd\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4" Apr 23 17:41:08.013394 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.013237 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/af104084-9831-4928-8414-358452540c48-etc-sysctl-d\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4" Apr 23 17:41:08.013394 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.013256 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-multus-cni-dir\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.013394 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.013284 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-cni-binary-copy\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.013394 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.013306 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-host-var-lib-cni-bin\") 
pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.013394 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.013369 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kube-system\"/\"konnectivity-ca-bundle\"" Apr 23 17:41:08.013394 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.013355 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-multus-conf-dir\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.013888 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.013429 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-multus-daemon-config\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.013888 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.013455 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/af104084-9831-4928-8414-358452540c48-etc-kubernetes\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4" Apr 23 17:41:08.013888 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.013493 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af104084-9831-4928-8414-358452540c48-lib-modules\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4" Apr 23 17:41:08.013888 
ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.013521 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/af104084-9831-4928-8414-358452540c48-etc-tuned\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4" Apr 23 17:41:08.013888 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.013535 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-multus-socket-dir-parent\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.013888 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.013550 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-host-run-k8s-cni-cncf-io\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.013888 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.013571 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-host-var-lib-cni-multus\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.013888 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.013505 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"default-dockercfg-ck86f\"" Apr 23 17:41:08.013888 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.013611 2574 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-hostroot\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.013888 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.013662 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d52817f-2284-48d3-800c-a67ac0e0fe4b-metrics-certs\") pod \"network-metrics-daemon-mfhnv\" (UID: \"3d52817f-2284-48d3-800c-a67ac0e0fe4b\") " pod="openshift-multus/network-metrics-daemon-mfhnv" Apr 23 17:41:08.013888 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.013689 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/af104084-9831-4928-8414-358452540c48-etc-sysconfig\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4" Apr 23 17:41:08.013888 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.013711 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/af104084-9831-4928-8414-358452540c48-run\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4" Apr 23 17:41:08.013888 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.013733 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlpwg\" (UniqueName: \"kubernetes.io/projected/af104084-9831-4928-8414-358452540c48-kube-api-access-hlpwg\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4" Apr 23 17:41:08.013888 
ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.013764 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-host-run-multus-certs\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.013888 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.013808 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-etc-kubernetes\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.013888 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.013830 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/af104084-9831-4928-8414-358452540c48-sys\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4" Apr 23 17:41:08.013888 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.013858 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-host-run-netns\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.014479 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.013895 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/af104084-9831-4928-8414-358452540c48-var-lib-kubelet\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " 
pod="openshift-cluster-node-tuning-operator/tuned-xssq4" Apr 23 17:41:08.014479 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.013943 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/af104084-9831-4928-8414-358452540c48-tmp\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4" Apr 23 17:41:08.014479 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.013948 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-5h5xl" Apr 23 17:41:08.016459 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.016143 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-s97fv" Apr 23 17:41:08.016912 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.016897 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Apr 23 17:41:08.017318 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.017299 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Apr 23 17:41:08.017422 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.017333 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Apr 23 17:41:08.017479 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.017431 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Apr 23 17:41:08.017479 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.017429 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-z64c7\"" Apr 23 17:41:08.017479 ip-10-0-139-215 
kubenswrapper[2574]: I0423 17:41:08.017468 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Apr 23 17:41:08.017816 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.017796 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-rjjlh\"" Apr 23 17:41:08.017948 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.017801 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Apr 23 17:41:08.018112 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.018098 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Apr 23 17:41:08.018255 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.018241 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-8n4q4\"" Apr 23 17:41:08.018559 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.018543 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-246wr" Apr 23 17:41:08.020917 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.020899 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-htjdl" Apr 23 17:41:08.022864 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.022688 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Apr 23 17:41:08.022864 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.022721 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Apr 23 17:41:08.022864 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.022788 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Apr 23 17:41:08.023030 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.023006 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Apr 23 17:41:08.024251 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.024234 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Apr 23 17:41:08.024464 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.024446 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-dockercfg-fkl2c\"" Apr 23 17:41:08.024556 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.024480 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"kube-root-ca.crt\"" Apr 23 17:41:08.024705 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.024685 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"openshift-service-ca.crt\"" Apr 23 17:41:08.024705 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.024685 2574 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Apr 23 17:41:08.024850 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.024717 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-dxlhg\"" Apr 23 17:41:08.024850 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.024737 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Apr 23 17:41:08.024947 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.024879 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Apr 23 17:41:08.024947 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.024913 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Apr 23 17:41:08.025910 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.025880 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-sa-dockercfg-vqfn4\"" Apr 23 17:41:08.026417 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.026401 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-metrics-serving-cert\"" Apr 23 17:41:08.054528 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.054501 2574 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-22 17:36:07 +0000 UTC" deadline="2027-10-15 07:12:42.535632619 +0000 UTC" Apr 23 17:41:08.054528 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.054528 2574 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="12949h31m34.481107136s" Apr 23 17:41:08.088414 
ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.088390 2574 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Apr 23 17:41:08.102547 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.102517 2574 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 23 17:41:08.114980 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.114960 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-log-socket\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr" Apr 23 17:41:08.115093 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.114987 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/3cccf98b-e13a-4889-a901-8e28ef02f8da-agent-certs\") pod \"konnectivity-agent-vq478\" (UID: \"3cccf98b-e13a-4889-a901-8e28ef02f8da\") " pod="kube-system/konnectivity-agent-vq478" Apr 23 17:41:08.115093 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115017 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-host-run-k8s-cni-cncf-io\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.115093 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115076 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-host-var-lib-cni-multus\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 
17:41:08.115225 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115112 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-host-run-multus-certs\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.115225 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115141 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-etc-openvswitch\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr" Apr 23 17:41:08.115225 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115142 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-host-var-lib-cni-multus\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.115225 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115166 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-host-run-ovn-kubernetes\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr" Apr 23 17:41:08.115225 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115079 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-host-run-k8s-cni-cncf-io\") pod \"multus-r2mgw\" (UID: 
\"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.115225 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115201 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-host-run-multus-certs\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.115225 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115192 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-host-cni-netd\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr" Apr 23 17:41:08.115512 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115252 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p22g9\" (UniqueName: \"kubernetes.io/projected/0f6164a3-aee1-463f-8c3a-a432711f40db-kube-api-access-p22g9\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr" Apr 23 17:41:08.115512 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115282 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/af104084-9831-4928-8414-358452540c48-run\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4" Apr 23 17:41:08.115512 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115310 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-etc-kubernetes\") pod 
\"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.115512 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115336 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-host-kubelet\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr" Apr 23 17:41:08.115512 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115362 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-run-ovn\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr" Apr 23 17:41:08.115512 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115386 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/490b05a0-5dc6-444e-a2bb-5908cba8c492-socket-dir\") pod \"aws-ebs-csi-driver-node-htjdl\" (UID: \"490b05a0-5dc6-444e-a2bb-5908cba8c492\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-htjdl" Apr 23 17:41:08.115512 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115387 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-etc-kubernetes\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.115512 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115404 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/af104084-9831-4928-8414-358452540c48-run\") pod 
\"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4" Apr 23 17:41:08.115512 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115414 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/490b05a0-5dc6-444e-a2bb-5908cba8c492-registration-dir\") pod \"aws-ebs-csi-driver-node-htjdl\" (UID: \"490b05a0-5dc6-444e-a2bb-5908cba8c492\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-htjdl" Apr 23 17:41:08.115512 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115448 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf5hh\" (UniqueName: \"kubernetes.io/projected/59053c21-2759-4fb0-86d0-fd32dd514204-kube-api-access-gf5hh\") pod \"node-ca-76svx\" (UID: \"59053c21-2759-4fb0-86d0-fd32dd514204\") " pod="openshift-image-registry/node-ca-76svx" Apr 23 17:41:08.115512 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115470 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-run-openvswitch\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr" Apr 23 17:41:08.115512 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115497 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2e14deef-4985-48d4-a516-5ed2e89733cf-cnibin\") pod \"multus-additional-cni-plugins-5h5xl\" (UID: \"2e14deef-4985-48d4-a516-5ed2e89733cf\") " pod="openshift-multus/multus-additional-cni-plugins-5h5xl" Apr 23 17:41:08.116056 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115537 2574 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2e14deef-4985-48d4-a516-5ed2e89733cf-cni-binary-copy\") pod \"multus-additional-cni-plugins-5h5xl\" (UID: \"2e14deef-4985-48d4-a516-5ed2e89733cf\") " pod="openshift-multus/multus-additional-cni-plugins-5h5xl" Apr 23 17:41:08.116056 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115584 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cflnd\" (UniqueName: \"kubernetes.io/projected/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-kube-api-access-cflnd\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.116056 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115613 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvq7v\" (UniqueName: \"kubernetes.io/projected/2e14deef-4985-48d4-a516-5ed2e89733cf-kube-api-access-qvq7v\") pod \"multus-additional-cni-plugins-5h5xl\" (UID: \"2e14deef-4985-48d4-a516-5ed2e89733cf\") " pod="openshift-multus/multus-additional-cni-plugins-5h5xl" Apr 23 17:41:08.116056 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115662 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-node-log\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr" Apr 23 17:41:08.116056 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115687 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-246wr\" (UID: 
\"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr" Apr 23 17:41:08.116056 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115712 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-var-lib-openvswitch\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr" Apr 23 17:41:08.116056 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115738 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-cnibin\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.116056 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115753 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2e14deef-4985-48d4-a516-5ed2e89733cf-system-cni-dir\") pod \"multus-additional-cni-plugins-5h5xl\" (UID: \"2e14deef-4985-48d4-a516-5ed2e89733cf\") " pod="openshift-multus/multus-additional-cni-plugins-5h5xl" Apr 23 17:41:08.116056 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115776 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/3cccf98b-e13a-4889-a901-8e28ef02f8da-konnectivity-ca\") pod \"konnectivity-agent-vq478\" (UID: \"3cccf98b-e13a-4889-a901-8e28ef02f8da\") " pod="kube-system/konnectivity-agent-vq478" Apr 23 17:41:08.116056 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115799 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/828447ca-91a9-49c8-a1b8-50a5cfbe0580-tmp-dir\") pod \"node-resolver-l747m\" (UID: \"828447ca-91a9-49c8-a1b8-50a5cfbe0580\") " pod="openshift-dns/node-resolver-l747m" Apr 23 17:41:08.116056 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115826 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7pxf\" (UniqueName: \"kubernetes.io/projected/828447ca-91a9-49c8-a1b8-50a5cfbe0580-kube-api-access-p7pxf\") pod \"node-resolver-l747m\" (UID: \"828447ca-91a9-49c8-a1b8-50a5cfbe0580\") " pod="openshift-dns/node-resolver-l747m" Apr 23 17:41:08.116056 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115853 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v5hr6\" (UniqueName: \"kubernetes.io/projected/3d52817f-2284-48d3-800c-a67ac0e0fe4b-kube-api-access-v5hr6\") pod \"network-metrics-daemon-mfhnv\" (UID: \"3d52817f-2284-48d3-800c-a67ac0e0fe4b\") " pod="openshift-multus/network-metrics-daemon-mfhnv" Apr 23 17:41:08.116056 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115863 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-cnibin\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.116056 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115882 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/af104084-9831-4928-8414-358452540c48-etc-systemd\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4" Apr 23 17:41:08.116056 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115909 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"os-release\" (UniqueName: \"kubernetes.io/host-path/2e14deef-4985-48d4-a516-5ed2e89733cf-os-release\") pod \"multus-additional-cni-plugins-5h5xl\" (UID: \"2e14deef-4985-48d4-a516-5ed2e89733cf\") " pod="openshift-multus/multus-additional-cni-plugins-5h5xl" Apr 23 17:41:08.116056 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115932 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2e14deef-4985-48d4-a516-5ed2e89733cf-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5h5xl\" (UID: \"2e14deef-4985-48d4-a516-5ed2e89733cf\") " pod="openshift-multus/multus-additional-cni-plugins-5h5xl" Apr 23 17:41:08.116839 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115947 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-run-systemd\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr" Apr 23 17:41:08.116839 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115954 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/af104084-9831-4928-8414-358452540c48-etc-systemd\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4" Apr 23 17:41:08.116839 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.115961 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-host-cni-bin\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr" Apr 23 17:41:08.116839 ip-10-0-139-215 kubenswrapper[2574]: I0423 
17:41:08.115997 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/af104084-9831-4928-8414-358452540c48-etc-sysctl-d\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4" Apr 23 17:41:08.116839 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116023 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-cni-binary-copy\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.116839 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116046 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-multus-conf-dir\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.116839 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116073 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/59053c21-2759-4fb0-86d0-fd32dd514204-serviceca\") pod \"node-ca-76svx\" (UID: \"59053c21-2759-4fb0-86d0-fd32dd514204\") " pod="openshift-image-registry/node-ca-76svx" Apr 23 17:41:08.116839 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116098 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/af104084-9831-4928-8414-358452540c48-etc-kubernetes\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4" Apr 23 17:41:08.116839 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116105 2574 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-multus-conf-dir\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.116839 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116118 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af104084-9831-4928-8414-358452540c48-lib-modules\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4" Apr 23 17:41:08.116839 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116140 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/af104084-9831-4928-8414-358452540c48-etc-tuned\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4" Apr 23 17:41:08.116839 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116161 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-multus-socket-dir-parent\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.116839 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116165 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/af104084-9831-4928-8414-358452540c48-etc-kubernetes\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4" Apr 23 17:41:08.116839 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116183 2574 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-hostroot\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.116839 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116208 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a046af0e-862d-4ab0-abeb-47a68683f10f-host-slash\") pod \"iptables-alerter-s97fv\" (UID: \"a046af0e-862d-4ab0-abeb-47a68683f10f\") " pod="openshift-network-operator/iptables-alerter-s97fv" Apr 23 17:41:08.116839 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116218 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/af104084-9831-4928-8414-358452540c48-etc-sysctl-d\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4" Apr 23 17:41:08.116839 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116233 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d52817f-2284-48d3-800c-a67ac0e0fe4b-metrics-certs\") pod \"network-metrics-daemon-mfhnv\" (UID: \"3d52817f-2284-48d3-800c-a67ac0e0fe4b\") " pod="openshift-multus/network-metrics-daemon-mfhnv" Apr 23 17:41:08.116839 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116259 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/af104084-9831-4928-8414-358452540c48-etc-sysconfig\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4" Apr 23 17:41:08.117524 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116262 2574 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-hostroot\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.117524 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116283 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hlpwg\" (UniqueName: \"kubernetes.io/projected/af104084-9831-4928-8414-358452540c48-kube-api-access-hlpwg\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4" Apr 23 17:41:08.117524 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116309 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/59053c21-2759-4fb0-86d0-fd32dd514204-host\") pod \"node-ca-76svx\" (UID: \"59053c21-2759-4fb0-86d0-fd32dd514204\") " pod="openshift-image-registry/node-ca-76svx" Apr 23 17:41:08.117524 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116311 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-multus-socket-dir-parent\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.117524 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116334 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgnst\" (UniqueName: \"kubernetes.io/projected/a046af0e-862d-4ab0-abeb-47a68683f10f-kube-api-access-dgnst\") pod \"iptables-alerter-s97fv\" (UID: \"a046af0e-862d-4ab0-abeb-47a68683f10f\") " pod="openshift-network-operator/iptables-alerter-s97fv" Apr 23 17:41:08.117524 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116357 2574 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0f6164a3-aee1-463f-8c3a-a432711f40db-ovnkube-script-lib\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr" Apr 23 17:41:08.117524 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116380 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/af104084-9831-4928-8414-358452540c48-sys\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4" Apr 23 17:41:08.117524 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116402 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af104084-9831-4928-8414-358452540c48-lib-modules\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4" Apr 23 17:41:08.117524 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116404 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-host-run-netns\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.117524 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116426 2574 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Apr 23 17:41:08.117524 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116438 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-host-run-netns\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.117524 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116434 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-host-run-netns\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr" Apr 23 17:41:08.117524 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116577 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96bsm\" (UniqueName: \"kubernetes.io/projected/490b05a0-5dc6-444e-a2bb-5908cba8c492-kube-api-access-96bsm\") pod \"aws-ebs-csi-driver-node-htjdl\" (UID: \"490b05a0-5dc6-444e-a2bb-5908cba8c492\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-htjdl" Apr 23 17:41:08.117524 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116604 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/af104084-9831-4928-8414-358452540c48-var-lib-kubelet\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4" Apr 23 17:41:08.117524 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:08.116611 2574 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object 
"openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:41:08.117524 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116648 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/af104084-9831-4928-8414-358452540c48-tmp\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4" Apr 23 17:41:08.117524 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116684 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-cni-binary-copy\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.118202 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:08.116710 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d52817f-2284-48d3-800c-a67ac0e0fe4b-metrics-certs podName:3d52817f-2284-48d3-800c-a67ac0e0fe4b nodeName:}" failed. No retries permitted until 2026-04-23 17:41:08.616678565 +0000 UTC m=+3.038347507 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3d52817f-2284-48d3-800c-a67ac0e0fe4b-metrics-certs") pod "network-metrics-daemon-mfhnv" (UID: "3d52817f-2284-48d3-800c-a67ac0e0fe4b") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:41:08.118202 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116729 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/af104084-9831-4928-8414-358452540c48-var-lib-kubelet\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4" Apr 23 17:41:08.118202 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116745 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/af104084-9831-4928-8414-358452540c48-sys\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4" Apr 23 17:41:08.118202 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116767 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/af104084-9831-4928-8414-358452540c48-etc-sysconfig\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4" Apr 23 17:41:08.118202 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116769 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-host-slash\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr" Apr 23 17:41:08.118202 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116814 2574 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/490b05a0-5dc6-444e-a2bb-5908cba8c492-device-dir\") pod \"aws-ebs-csi-driver-node-htjdl\" (UID: \"490b05a0-5dc6-444e-a2bb-5908cba8c492\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-htjdl"
Apr 23 17:41:08.118202 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116887 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/490b05a0-5dc6-444e-a2bb-5908cba8c492-etc-selinux\") pod \"aws-ebs-csi-driver-node-htjdl\" (UID: \"490b05a0-5dc6-444e-a2bb-5908cba8c492\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-htjdl"
Apr 23 17:41:08.118202 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116932 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/af104084-9831-4928-8414-358452540c48-etc-modprobe-d\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4"
Apr 23 17:41:08.118202 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116957 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af104084-9831-4928-8414-358452540c48-host\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4"
Apr 23 17:41:08.118202 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.116980 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/490b05a0-5dc6-444e-a2bb-5908cba8c492-kubelet-dir\") pod \"aws-ebs-csi-driver-node-htjdl\" (UID: \"490b05a0-5dc6-444e-a2bb-5908cba8c492\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-htjdl"
Apr 23 17:41:08.118202 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.117002 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/af104084-9831-4928-8414-358452540c48-etc-sysctl-conf\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4"
Apr 23 17:41:08.118202 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.117022 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-system-cni-dir\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw"
Apr 23 17:41:08.118202 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.117024 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af104084-9831-4928-8414-358452540c48-host\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4"
Apr 23 17:41:08.118202 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.117042 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-os-release\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw"
Apr 23 17:41:08.118202 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.117061 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-host-var-lib-kubelet\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw"
Apr 23 17:41:08.118202 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.117083 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/a046af0e-862d-4ab0-abeb-47a68683f10f-iptables-alerter-script\") pod \"iptables-alerter-s97fv\" (UID: \"a046af0e-862d-4ab0-abeb-47a68683f10f\") " pod="openshift-network-operator/iptables-alerter-s97fv"
Apr 23 17:41:08.118764 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.117113 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-host-var-lib-kubelet\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw"
Apr 23 17:41:08.118764 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.117124 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-systemd-units\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr"
Apr 23 17:41:08.118764 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.117113 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-system-cni-dir\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw"
Apr 23 17:41:08.118764 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.117060 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/af104084-9831-4928-8414-358452540c48-etc-modprobe-d\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4"
Apr 23 17:41:08.118764 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.117150 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/af104084-9831-4928-8414-358452540c48-etc-sysctl-conf\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4"
Apr 23 17:41:08.118764 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.117165 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0f6164a3-aee1-463f-8c3a-a432711f40db-ovnkube-config\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr"
Apr 23 17:41:08.118764 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.117153 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-os-release\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw"
Apr 23 17:41:08.118764 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.117200 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmx9k\" (UniqueName: \"kubernetes.io/projected/c0594da3-a624-4d0d-9765-82537ca166c3-kube-api-access-vmx9k\") pod \"network-check-target-lz78w\" (UID: \"c0594da3-a624-4d0d-9765-82537ca166c3\") " pod="openshift-network-diagnostics/network-check-target-lz78w"
Apr 23 17:41:08.118764 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.117253 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2e14deef-4985-48d4-a516-5ed2e89733cf-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5h5xl\" (UID: \"2e14deef-4985-48d4-a516-5ed2e89733cf\") " pod="openshift-multus/multus-additional-cni-plugins-5h5xl"
Apr 23 17:41:08.118764 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.117272 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/2e14deef-4985-48d4-a516-5ed2e89733cf-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-5h5xl\" (UID: \"2e14deef-4985-48d4-a516-5ed2e89733cf\") " pod="openshift-multus/multus-additional-cni-plugins-5h5xl"
Apr 23 17:41:08.118764 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.117293 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0f6164a3-aee1-463f-8c3a-a432711f40db-env-overrides\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr"
Apr 23 17:41:08.118764 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.117317 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0f6164a3-aee1-463f-8c3a-a432711f40db-ovn-node-metrics-cert\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr"
Apr 23 17:41:08.118764 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.117340 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/490b05a0-5dc6-444e-a2bb-5908cba8c492-sys-fs\") pod \"aws-ebs-csi-driver-node-htjdl\" (UID: \"490b05a0-5dc6-444e-a2bb-5908cba8c492\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-htjdl"
Apr 23 17:41:08.118764 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.117362 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/828447ca-91a9-49c8-a1b8-50a5cfbe0580-hosts-file\") pod \"node-resolver-l747m\" (UID: \"828447ca-91a9-49c8-a1b8-50a5cfbe0580\") " pod="openshift-dns/node-resolver-l747m"
Apr 23 17:41:08.118764 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.117383 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-multus-cni-dir\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw"
Apr 23 17:41:08.118764 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.117402 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-host-var-lib-cni-bin\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw"
Apr 23 17:41:08.118764 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.117418 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-multus-daemon-config\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw"
Apr 23 17:41:08.119338 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.117898 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-multus-daemon-config\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw"
Apr 23 17:41:08.119338 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.117986 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-multus-cni-dir\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw"
Apr 23 17:41:08.119338 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.118015 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-host-var-lib-cni-bin\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw"
Apr 23 17:41:08.119871 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.119849 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/af104084-9831-4928-8414-358452540c48-etc-tuned\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4"
Apr 23 17:41:08.119959 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.119881 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/af104084-9831-4928-8414-358452540c48-tmp\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4"
Apr 23 17:41:08.135400 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.135378 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cflnd\" (UniqueName: \"kubernetes.io/projected/a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b-kube-api-access-cflnd\") pod \"multus-r2mgw\" (UID: \"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b\") " pod="openshift-multus/multus-r2mgw"
Apr 23 17:41:08.136007 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.135985 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5hr6\" (UniqueName: \"kubernetes.io/projected/3d52817f-2284-48d3-800c-a67ac0e0fe4b-kube-api-access-v5hr6\") pod \"network-metrics-daemon-mfhnv\" (UID: \"3d52817f-2284-48d3-800c-a67ac0e0fe4b\") " pod="openshift-multus/network-metrics-daemon-mfhnv"
Apr 23 17:41:08.136318 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.136301 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlpwg\" (UniqueName: \"kubernetes.io/projected/af104084-9831-4928-8414-358452540c48-kube-api-access-hlpwg\") pod \"tuned-xssq4\" (UID: \"af104084-9831-4928-8414-358452540c48\") " pod="openshift-cluster-node-tuning-operator/tuned-xssq4"
Apr 23 17:41:08.218296 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.218271 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2e14deef-4985-48d4-a516-5ed2e89733cf-os-release\") pod \"multus-additional-cni-plugins-5h5xl\" (UID: \"2e14deef-4985-48d4-a516-5ed2e89733cf\") " pod="openshift-multus/multus-additional-cni-plugins-5h5xl"
Apr 23 17:41:08.218296 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.218299 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2e14deef-4985-48d4-a516-5ed2e89733cf-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5h5xl\" (UID: \"2e14deef-4985-48d4-a516-5ed2e89733cf\") " pod="openshift-multus/multus-additional-cni-plugins-5h5xl"
Apr 23 17:41:08.218530 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.218317 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-run-systemd\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr"
Apr 23 17:41:08.218530 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.218339 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-host-cni-bin\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr"
Apr 23 17:41:08.218530 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.218366 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/59053c21-2759-4fb0-86d0-fd32dd514204-serviceca\") pod \"node-ca-76svx\" (UID: \"59053c21-2759-4fb0-86d0-fd32dd514204\") " pod="openshift-image-registry/node-ca-76svx"
Apr 23 17:41:08.218530 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.218397 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a046af0e-862d-4ab0-abeb-47a68683f10f-host-slash\") pod \"iptables-alerter-s97fv\" (UID: \"a046af0e-862d-4ab0-abeb-47a68683f10f\") " pod="openshift-network-operator/iptables-alerter-s97fv"
Apr 23 17:41:08.218530 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.218407 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-host-cni-bin\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr"
Apr 23 17:41:08.218530 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.218422 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2e14deef-4985-48d4-a516-5ed2e89733cf-os-release\") pod \"multus-additional-cni-plugins-5h5xl\" (UID: \"2e14deef-4985-48d4-a516-5ed2e89733cf\") " pod="openshift-multus/multus-additional-cni-plugins-5h5xl"
Apr 23 17:41:08.218530 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.218434 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/59053c21-2759-4fb0-86d0-fd32dd514204-host\") pod \"node-ca-76svx\" (UID: \"59053c21-2759-4fb0-86d0-fd32dd514204\") " pod="openshift-image-registry/node-ca-76svx"
Apr 23 17:41:08.218530 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.218406 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-run-systemd\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr"
Apr 23 17:41:08.218530 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.218460 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a046af0e-862d-4ab0-abeb-47a68683f10f-host-slash\") pod \"iptables-alerter-s97fv\" (UID: \"a046af0e-862d-4ab0-abeb-47a68683f10f\") " pod="openshift-network-operator/iptables-alerter-s97fv"
Apr 23 17:41:08.218530 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.218491 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dgnst\" (UniqueName: \"kubernetes.io/projected/a046af0e-862d-4ab0-abeb-47a68683f10f-kube-api-access-dgnst\") pod \"iptables-alerter-s97fv\" (UID: \"a046af0e-862d-4ab0-abeb-47a68683f10f\") " pod="openshift-network-operator/iptables-alerter-s97fv"
Apr 23 17:41:08.218530 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.218481 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2e14deef-4985-48d4-a516-5ed2e89733cf-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5h5xl\" (UID: \"2e14deef-4985-48d4-a516-5ed2e89733cf\") " pod="openshift-multus/multus-additional-cni-plugins-5h5xl"
Apr 23 17:41:08.218530 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.218517 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0f6164a3-aee1-463f-8c3a-a432711f40db-ovnkube-script-lib\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr"
Apr 23 17:41:08.219096 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.218544 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-host-run-netns\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr"
Apr 23 17:41:08.219096 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.218561 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/59053c21-2759-4fb0-86d0-fd32dd514204-host\") pod \"node-ca-76svx\" (UID: \"59053c21-2759-4fb0-86d0-fd32dd514204\") " pod="openshift-image-registry/node-ca-76svx"
Apr 23 17:41:08.219096 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.218569 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-96bsm\" (UniqueName: \"kubernetes.io/projected/490b05a0-5dc6-444e-a2bb-5908cba8c492-kube-api-access-96bsm\") pod \"aws-ebs-csi-driver-node-htjdl\" (UID: \"490b05a0-5dc6-444e-a2bb-5908cba8c492\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-htjdl"
Apr 23 17:41:08.219096 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.218603 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-host-slash\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr"
Apr 23 17:41:08.219096 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.218625 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/490b05a0-5dc6-444e-a2bb-5908cba8c492-device-dir\") pod \"aws-ebs-csi-driver-node-htjdl\" (UID: \"490b05a0-5dc6-444e-a2bb-5908cba8c492\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-htjdl"
Apr 23 17:41:08.219096 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.218669 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/490b05a0-5dc6-444e-a2bb-5908cba8c492-etc-selinux\") pod \"aws-ebs-csi-driver-node-htjdl\" (UID: \"490b05a0-5dc6-444e-a2bb-5908cba8c492\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-htjdl"
Apr 23 17:41:08.219096 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.218693 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/490b05a0-5dc6-444e-a2bb-5908cba8c492-kubelet-dir\") pod \"aws-ebs-csi-driver-node-htjdl\" (UID: \"490b05a0-5dc6-444e-a2bb-5908cba8c492\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-htjdl"
Apr 23 17:41:08.219096 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.218720 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/a046af0e-862d-4ab0-abeb-47a68683f10f-iptables-alerter-script\") pod \"iptables-alerter-s97fv\" (UID: \"a046af0e-862d-4ab0-abeb-47a68683f10f\") " pod="openshift-network-operator/iptables-alerter-s97fv"
Apr 23 17:41:08.219096 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.218745 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-systemd-units\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr"
Apr 23 17:41:08.219096 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.218807 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0f6164a3-aee1-463f-8c3a-a432711f40db-ovnkube-config\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr"
Apr 23 17:41:08.219096 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.218815 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/490b05a0-5dc6-444e-a2bb-5908cba8c492-device-dir\") pod \"aws-ebs-csi-driver-node-htjdl\" (UID: \"490b05a0-5dc6-444e-a2bb-5908cba8c492\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-htjdl"
Apr 23 17:41:08.219096 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.218853 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vmx9k\" (UniqueName: \"kubernetes.io/projected/c0594da3-a624-4d0d-9765-82537ca166c3-kube-api-access-vmx9k\") pod \"network-check-target-lz78w\" (UID: \"c0594da3-a624-4d0d-9765-82537ca166c3\") " pod="openshift-network-diagnostics/network-check-target-lz78w"
Apr 23 17:41:08.219096 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.218889 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2e14deef-4985-48d4-a516-5ed2e89733cf-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5h5xl\" (UID: \"2e14deef-4985-48d4-a516-5ed2e89733cf\") " pod="openshift-multus/multus-additional-cni-plugins-5h5xl"
Apr 23 17:41:08.219096 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.218914 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/2e14deef-4985-48d4-a516-5ed2e89733cf-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-5h5xl\" (UID: \"2e14deef-4985-48d4-a516-5ed2e89733cf\") " pod="openshift-multus/multus-additional-cni-plugins-5h5xl"
Apr 23 17:41:08.219096 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.218937 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0f6164a3-aee1-463f-8c3a-a432711f40db-env-overrides\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr"
Apr 23 17:41:08.219096 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.218940 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/59053c21-2759-4fb0-86d0-fd32dd514204-serviceca\") pod \"node-ca-76svx\" (UID: \"59053c21-2759-4fb0-86d0-fd32dd514204\") " pod="openshift-image-registry/node-ca-76svx"
Apr 23 17:41:08.219096 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.218963 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0f6164a3-aee1-463f-8c3a-a432711f40db-ovn-node-metrics-cert\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr"
Apr 23 17:41:08.219880 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.218994 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/490b05a0-5dc6-444e-a2bb-5908cba8c492-sys-fs\") pod \"aws-ebs-csi-driver-node-htjdl\" (UID: \"490b05a0-5dc6-444e-a2bb-5908cba8c492\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-htjdl"
Apr 23 17:41:08.219880 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.219018 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/828447ca-91a9-49c8-a1b8-50a5cfbe0580-hosts-file\") pod \"node-resolver-l747m\" (UID: \"828447ca-91a9-49c8-a1b8-50a5cfbe0580\") " pod="openshift-dns/node-resolver-l747m"
Apr 23 17:41:08.219880 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.219054 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-log-socket\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr"
Apr 23 17:41:08.219880 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.219059 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-host-slash\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr"
Apr 23 17:41:08.219880 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.219092 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/3cccf98b-e13a-4889-a901-8e28ef02f8da-agent-certs\") pod \"konnectivity-agent-vq478\" (UID: \"3cccf98b-e13a-4889-a901-8e28ef02f8da\") " pod="kube-system/konnectivity-agent-vq478"
Apr 23 17:41:08.219880 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.219104 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-log-socket\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr"
Apr 23 17:41:08.219880 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.219124 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-etc-openvswitch\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr"
Apr 23 17:41:08.219880 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.219145 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0f6164a3-aee1-463f-8c3a-a432711f40db-ovnkube-script-lib\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr"
Apr 23 17:41:08.219880 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.219187 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-etc-openvswitch\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr"
Apr 23 17:41:08.219880 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.219202 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-host-run-ovn-kubernetes\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr"
Apr 23 17:41:08.219880 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.218860 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-host-run-netns\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr"
Apr 23 17:41:08.219880 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.219149 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-host-run-ovn-kubernetes\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr"
Apr 23 17:41:08.219880 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.219269 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-host-cni-netd\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr"
Apr 23 17:41:08.219880 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.219294 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p22g9\" (UniqueName: \"kubernetes.io/projected/0f6164a3-aee1-463f-8c3a-a432711f40db-kube-api-access-p22g9\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr"
Apr 23 17:41:08.219880 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.219320 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-host-kubelet\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr"
Apr 23 17:41:08.219880 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.219344 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-run-ovn\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr"
Apr 23 17:41:08.219880 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.219401 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/490b05a0-5dc6-444e-a2bb-5908cba8c492-socket-dir\") pod \"aws-ebs-csi-driver-node-htjdl\" (UID: \"490b05a0-5dc6-444e-a2bb-5908cba8c492\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-htjdl"
Apr 23 17:41:08.220691 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.219405 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0f6164a3-aee1-463f-8c3a-a432711f40db-ovnkube-config\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr"
Apr 23 17:41:08.220691 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.219424 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/490b05a0-5dc6-444e-a2bb-5908cba8c492-registration-dir\") pod \"aws-ebs-csi-driver-node-htjdl\" (UID: \"490b05a0-5dc6-444e-a2bb-5908cba8c492\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-htjdl"
Apr 23 17:41:08.220691 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.219449 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gf5hh\" (UniqueName: \"kubernetes.io/projected/59053c21-2759-4fb0-86d0-fd32dd514204-kube-api-access-gf5hh\") pod \"node-ca-76svx\" (UID: \"59053c21-2759-4fb0-86d0-fd32dd514204\") " pod="openshift-image-registry/node-ca-76svx"
Apr 23 17:41:08.220691 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.219474 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-run-openvswitch\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr"
Apr 23 17:41:08.220691 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.219497 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2e14deef-4985-48d4-a516-5ed2e89733cf-cnibin\") pod \"multus-additional-cni-plugins-5h5xl\" (UID: \"2e14deef-4985-48d4-a516-5ed2e89733cf\") " pod="openshift-multus/multus-additional-cni-plugins-5h5xl"
Apr 23 17:41:08.220691 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.219523 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2e14deef-4985-48d4-a516-5ed2e89733cf-cni-binary-copy\") pod \"multus-additional-cni-plugins-5h5xl\" (UID: \"2e14deef-4985-48d4-a516-5ed2e89733cf\") " pod="openshift-multus/multus-additional-cni-plugins-5h5xl"
Apr 23 17:41:08.220691 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.219531 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/490b05a0-5dc6-444e-a2bb-5908cba8c492-etc-selinux\") pod \"aws-ebs-csi-driver-node-htjdl\" (UID: \"490b05a0-5dc6-444e-a2bb-5908cba8c492\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-htjdl"
Apr 23 17:41:08.220691 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.219548 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qvq7v\" (UniqueName: \"kubernetes.io/projected/2e14deef-4985-48d4-a516-5ed2e89733cf-kube-api-access-qvq7v\") pod \"multus-additional-cni-plugins-5h5xl\" (UID: \"2e14deef-4985-48d4-a516-5ed2e89733cf\") " pod="openshift-multus/multus-additional-cni-plugins-5h5xl"
Apr 23 17:41:08.220691 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.219608 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-systemd-units\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr"
Apr 23 17:41:08.220691 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.219718 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2e14deef-4985-48d4-a516-5ed2e89733cf-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5h5xl\" (UID: \"2e14deef-4985-48d4-a516-5ed2e89733cf\") " pod="openshift-multus/multus-additional-cni-plugins-5h5xl"
Apr 23 17:41:08.220691 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.219921 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/490b05a0-5dc6-444e-a2bb-5908cba8c492-kubelet-dir\") pod \"aws-ebs-csi-driver-node-htjdl\" (UID: \"490b05a0-5dc6-444e-a2bb-5908cba8c492\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-htjdl"
Apr 23 17:41:08.220691 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.219967 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-run-openvswitch\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr"
Apr 23 17:41:08.220691 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.219987 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/2e14deef-4985-48d4-a516-5ed2e89733cf-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-5h5xl\" (UID: \"2e14deef-4985-48d4-a516-5ed2e89733cf\") " pod="openshift-multus/multus-additional-cni-plugins-5h5xl"
Apr 23 17:41:08.220691 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.219997 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName:
\"kubernetes.io/host-path/490b05a0-5dc6-444e-a2bb-5908cba8c492-socket-dir\") pod \"aws-ebs-csi-driver-node-htjdl\" (UID: \"490b05a0-5dc6-444e-a2bb-5908cba8c492\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-htjdl" Apr 23 17:41:08.220691 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.220013 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/490b05a0-5dc6-444e-a2bb-5908cba8c492-registration-dir\") pod \"aws-ebs-csi-driver-node-htjdl\" (UID: \"490b05a0-5dc6-444e-a2bb-5908cba8c492\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-htjdl" Apr 23 17:41:08.220691 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.220031 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-node-log\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr" Apr 23 17:41:08.220691 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.220053 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0f6164a3-aee1-463f-8c3a-a432711f40db-env-overrides\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr" Apr 23 17:41:08.221445 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.220064 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr" Apr 23 17:41:08.221445 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.220093 2574 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-var-lib-openvswitch\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr" Apr 23 17:41:08.221445 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.220114 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/490b05a0-5dc6-444e-a2bb-5908cba8c492-sys-fs\") pod \"aws-ebs-csi-driver-node-htjdl\" (UID: \"490b05a0-5dc6-444e-a2bb-5908cba8c492\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-htjdl" Apr 23 17:41:08.221445 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.220121 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2e14deef-4985-48d4-a516-5ed2e89733cf-system-cni-dir\") pod \"multus-additional-cni-plugins-5h5xl\" (UID: \"2e14deef-4985-48d4-a516-5ed2e89733cf\") " pod="openshift-multus/multus-additional-cni-plugins-5h5xl" Apr 23 17:41:08.221445 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.220147 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/3cccf98b-e13a-4889-a901-8e28ef02f8da-konnectivity-ca\") pod \"konnectivity-agent-vq478\" (UID: \"3cccf98b-e13a-4889-a901-8e28ef02f8da\") " pod="kube-system/konnectivity-agent-vq478" Apr 23 17:41:08.221445 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.220156 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-246wr" Apr 23 17:41:08.221445 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.220205 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-run-ovn\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr" Apr 23 17:41:08.221445 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.220200 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-node-log\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr" Apr 23 17:41:08.221445 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.220195 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/828447ca-91a9-49c8-a1b8-50a5cfbe0580-tmp-dir\") pod \"node-resolver-l747m\" (UID: \"828447ca-91a9-49c8-a1b8-50a5cfbe0580\") " pod="openshift-dns/node-resolver-l747m" Apr 23 17:41:08.221445 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.220249 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-host-kubelet\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr" Apr 23 17:41:08.221445 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.220253 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p7pxf\" (UniqueName: \"kubernetes.io/projected/828447ca-91a9-49c8-a1b8-50a5cfbe0580-kube-api-access-p7pxf\") pod \"node-resolver-l747m\" (UID: \"828447ca-91a9-49c8-a1b8-50a5cfbe0580\") " pod="openshift-dns/node-resolver-l747m" Apr 23 
17:41:08.221445 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.220251 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2e14deef-4985-48d4-a516-5ed2e89733cf-system-cni-dir\") pod \"multus-additional-cni-plugins-5h5xl\" (UID: \"2e14deef-4985-48d4-a516-5ed2e89733cf\") " pod="openshift-multus/multus-additional-cni-plugins-5h5xl" Apr 23 17:41:08.221445 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.220054 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-host-cni-netd\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr" Apr 23 17:41:08.221445 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.220294 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f6164a3-aee1-463f-8c3a-a432711f40db-var-lib-openvswitch\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr" Apr 23 17:41:08.221445 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.220342 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2e14deef-4985-48d4-a516-5ed2e89733cf-cnibin\") pod \"multus-additional-cni-plugins-5h5xl\" (UID: \"2e14deef-4985-48d4-a516-5ed2e89733cf\") " pod="openshift-multus/multus-additional-cni-plugins-5h5xl" Apr 23 17:41:08.221445 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.220365 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/828447ca-91a9-49c8-a1b8-50a5cfbe0580-hosts-file\") pod \"node-resolver-l747m\" (UID: \"828447ca-91a9-49c8-a1b8-50a5cfbe0580\") " pod="openshift-dns/node-resolver-l747m" Apr 
23 17:41:08.221445 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.220546 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/828447ca-91a9-49c8-a1b8-50a5cfbe0580-tmp-dir\") pod \"node-resolver-l747m\" (UID: \"828447ca-91a9-49c8-a1b8-50a5cfbe0580\") " pod="openshift-dns/node-resolver-l747m" Apr 23 17:41:08.222287 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.220824 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2e14deef-4985-48d4-a516-5ed2e89733cf-cni-binary-copy\") pod \"multus-additional-cni-plugins-5h5xl\" (UID: \"2e14deef-4985-48d4-a516-5ed2e89733cf\") " pod="openshift-multus/multus-additional-cni-plugins-5h5xl" Apr 23 17:41:08.222287 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.221307 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/3cccf98b-e13a-4889-a901-8e28ef02f8da-konnectivity-ca\") pod \"konnectivity-agent-vq478\" (UID: \"3cccf98b-e13a-4889-a901-8e28ef02f8da\") " pod="kube-system/konnectivity-agent-vq478" Apr 23 17:41:08.222287 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.221331 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/a046af0e-862d-4ab0-abeb-47a68683f10f-iptables-alerter-script\") pod \"iptables-alerter-s97fv\" (UID: \"a046af0e-862d-4ab0-abeb-47a68683f10f\") " pod="openshift-network-operator/iptables-alerter-s97fv" Apr 23 17:41:08.223064 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.223034 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/3cccf98b-e13a-4889-a901-8e28ef02f8da-agent-certs\") pod \"konnectivity-agent-vq478\" (UID: \"3cccf98b-e13a-4889-a901-8e28ef02f8da\") " pod="kube-system/konnectivity-agent-vq478" Apr 
23 17:41:08.223240 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.223224 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0f6164a3-aee1-463f-8c3a-a432711f40db-ovn-node-metrics-cert\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr" Apr 23 17:41:08.234842 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:08.233753 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 23 17:41:08.234842 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:08.233786 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 23 17:41:08.234842 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:08.233800 2574 projected.go:194] Error preparing data for projected volume kube-api-access-vmx9k for pod openshift-network-diagnostics/network-check-target-lz78w: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:41:08.234842 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:08.233875 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c0594da3-a624-4d0d-9765-82537ca166c3-kube-api-access-vmx9k podName:c0594da3-a624-4d0d-9765-82537ca166c3 nodeName:}" failed. No retries permitted until 2026-04-23 17:41:08.733858232 +0000 UTC m=+3.155527169 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vmx9k" (UniqueName: "kubernetes.io/projected/c0594da3-a624-4d0d-9765-82537ca166c3-kube-api-access-vmx9k") pod "network-check-target-lz78w" (UID: "c0594da3-a624-4d0d-9765-82537ca166c3") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:41:08.235685 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.235659 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgnst\" (UniqueName: \"kubernetes.io/projected/a046af0e-862d-4ab0-abeb-47a68683f10f-kube-api-access-dgnst\") pod \"iptables-alerter-s97fv\" (UID: \"a046af0e-862d-4ab0-abeb-47a68683f10f\") " pod="openshift-network-operator/iptables-alerter-s97fv" Apr 23 17:41:08.236523 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.236500 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p22g9\" (UniqueName: \"kubernetes.io/projected/0f6164a3-aee1-463f-8c3a-a432711f40db-kube-api-access-p22g9\") pod \"ovnkube-node-246wr\" (UID: \"0f6164a3-aee1-463f-8c3a-a432711f40db\") " pod="openshift-ovn-kubernetes/ovnkube-node-246wr" Apr 23 17:41:08.237464 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.237440 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvq7v\" (UniqueName: \"kubernetes.io/projected/2e14deef-4985-48d4-a516-5ed2e89733cf-kube-api-access-qvq7v\") pod \"multus-additional-cni-plugins-5h5xl\" (UID: \"2e14deef-4985-48d4-a516-5ed2e89733cf\") " pod="openshift-multus/multus-additional-cni-plugins-5h5xl" Apr 23 17:41:08.237566 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.237504 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7pxf\" (UniqueName: \"kubernetes.io/projected/828447ca-91a9-49c8-a1b8-50a5cfbe0580-kube-api-access-p7pxf\") pod \"node-resolver-l747m\" (UID: 
\"828447ca-91a9-49c8-a1b8-50a5cfbe0580\") " pod="openshift-dns/node-resolver-l747m" Apr 23 17:41:08.238202 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.238172 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf5hh\" (UniqueName: \"kubernetes.io/projected/59053c21-2759-4fb0-86d0-fd32dd514204-kube-api-access-gf5hh\") pod \"node-ca-76svx\" (UID: \"59053c21-2759-4fb0-86d0-fd32dd514204\") " pod="openshift-image-registry/node-ca-76svx" Apr 23 17:41:08.242368 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.242348 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-96bsm\" (UniqueName: \"kubernetes.io/projected/490b05a0-5dc6-444e-a2bb-5908cba8c492-kube-api-access-96bsm\") pod \"aws-ebs-csi-driver-node-htjdl\" (UID: \"490b05a0-5dc6-444e-a2bb-5908cba8c492\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-htjdl" Apr 23 17:41:08.311370 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.311337 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-xssq4" Apr 23 17:41:08.319079 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.319048 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-r2mgw" Apr 23 17:41:08.326750 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.326730 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-vq478" Apr 23 17:41:08.332294 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.332265 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-l747m" Apr 23 17:41:08.337771 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.337753 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-76svx" Apr 23 17:41:08.344295 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.344276 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-5h5xl" Apr 23 17:41:08.351860 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.351837 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-s97fv" Apr 23 17:41:08.358458 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.358436 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-246wr" Apr 23 17:41:08.365081 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.365059 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-htjdl" Apr 23 17:41:08.621955 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.621868 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d52817f-2284-48d3-800c-a67ac0e0fe4b-metrics-certs\") pod \"network-metrics-daemon-mfhnv\" (UID: \"3d52817f-2284-48d3-800c-a67ac0e0fe4b\") " pod="openshift-multus/network-metrics-daemon-mfhnv" Apr 23 17:41:08.622101 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:08.622026 2574 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:41:08.622101 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:08.622087 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d52817f-2284-48d3-800c-a67ac0e0fe4b-metrics-certs podName:3d52817f-2284-48d3-800c-a67ac0e0fe4b nodeName:}" failed. No retries permitted until 2026-04-23 17:41:09.622071502 +0000 UTC m=+4.043740443 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3d52817f-2284-48d3-800c-a67ac0e0fe4b-metrics-certs") pod "network-metrics-daemon-mfhnv" (UID: "3d52817f-2284-48d3-800c-a67ac0e0fe4b") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:41:08.715798 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:08.715770 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf104084_9831_4928_8414_358452540c48.slice/crio-484e951b178b52f476ae73d0098cb8529dfe91b5c80e7f7947bc130bfc2edb73 WatchSource:0}: Error finding container 484e951b178b52f476ae73d0098cb8529dfe91b5c80e7f7947bc130bfc2edb73: Status 404 returned error can't find the container with id 484e951b178b52f476ae73d0098cb8529dfe91b5c80e7f7947bc130bfc2edb73 Apr 23 17:41:08.716717 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:08.716687 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod828447ca_91a9_49c8_a1b8_50a5cfbe0580.slice/crio-224e79a5737a836ec59e07b3162e7a91829973b2c0d29597c16120aebe4fd65e WatchSource:0}: Error finding container 224e79a5737a836ec59e07b3162e7a91829973b2c0d29597c16120aebe4fd65e: Status 404 returned error can't find the container with id 224e79a5737a836ec59e07b3162e7a91829973b2c0d29597c16120aebe4fd65e Apr 23 17:41:08.720180 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:08.720156 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f6164a3_aee1_463f_8c3a_a432711f40db.slice/crio-4a5b20859f3b24d03a2e0b602f42314f1381304e730e5a779027b79a6b41a594 WatchSource:0}: Error finding container 4a5b20859f3b24d03a2e0b602f42314f1381304e730e5a779027b79a6b41a594: Status 404 returned error can't find the container with id 4a5b20859f3b24d03a2e0b602f42314f1381304e730e5a779027b79a6b41a594 Apr 23 17:41:08.720931 
ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:08.720874 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod490b05a0_5dc6_444e_a2bb_5908cba8c492.slice/crio-83055d9fffb1552be742534083e188b6b88c90bd8b700b7959c7a79b413a0563 WatchSource:0}: Error finding container 83055d9fffb1552be742534083e188b6b88c90bd8b700b7959c7a79b413a0563: Status 404 returned error can't find the container with id 83055d9fffb1552be742534083e188b6b88c90bd8b700b7959c7a79b413a0563 Apr 23 17:41:08.721917 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:08.721780 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e14deef_4985_48d4_a516_5ed2e89733cf.slice/crio-4fd49450134a905043231bcdb604899e30dd82d18dc6db57384dbacb38d0b030 WatchSource:0}: Error finding container 4fd49450134a905043231bcdb604899e30dd82d18dc6db57384dbacb38d0b030: Status 404 returned error can't find the container with id 4fd49450134a905043231bcdb604899e30dd82d18dc6db57384dbacb38d0b030 Apr 23 17:41:08.723442 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:41:08.723418 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda29bcdd0_8e46_4bba_9d0f_3db54ee9f75b.slice/crio-dba55e2bb73e0d73c2ae86cb8e92d4d8a629a53795c2cd1f1c68ca44033cc803 WatchSource:0}: Error finding container dba55e2bb73e0d73c2ae86cb8e92d4d8a629a53795c2cd1f1c68ca44033cc803: Status 404 returned error can't find the container with id dba55e2bb73e0d73c2ae86cb8e92d4d8a629a53795c2cd1f1c68ca44033cc803 Apr 23 17:41:08.822997 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:08.822830 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vmx9k\" (UniqueName: \"kubernetes.io/projected/c0594da3-a624-4d0d-9765-82537ca166c3-kube-api-access-vmx9k\") pod \"network-check-target-lz78w\" (UID: 
\"c0594da3-a624-4d0d-9765-82537ca166c3\") " pod="openshift-network-diagnostics/network-check-target-lz78w" Apr 23 17:41:08.822997 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:08.822990 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 23 17:41:08.823152 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:08.823011 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 23 17:41:08.823152 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:08.823022 2574 projected.go:194] Error preparing data for projected volume kube-api-access-vmx9k for pod openshift-network-diagnostics/network-check-target-lz78w: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:41:08.823152 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:08.823068 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c0594da3-a624-4d0d-9765-82537ca166c3-kube-api-access-vmx9k podName:c0594da3-a624-4d0d-9765-82537ca166c3 nodeName:}" failed. No retries permitted until 2026-04-23 17:41:09.823053399 +0000 UTC m=+4.244722336 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vmx9k" (UniqueName: "kubernetes.io/projected/c0594da3-a624-4d0d-9765-82537ca166c3-kube-api-access-vmx9k") pod "network-check-target-lz78w" (UID: "c0594da3-a624-4d0d-9765-82537ca166c3") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:41:09.055486 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:09.055445 2574 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-22 17:36:07 +0000 UTC" deadline="2027-10-06 09:10:31.470553272 +0000 UTC" Apr 23 17:41:09.055486 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:09.055484 2574 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="12735h29m22.415072834s" Apr 23 17:41:09.132352 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:09.131712 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lz78w" Apr 23 17:41:09.132352 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:09.131833 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-lz78w" podUID="c0594da3-a624-4d0d-9765-82537ca166c3" Apr 23 17:41:09.132352 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:09.131928 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfhnv" Apr 23 17:41:09.132352 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:09.132014 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mfhnv" podUID="3d52817f-2284-48d3-800c-a67ac0e0fe4b" Apr 23 17:41:09.143581 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:09.142918 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-139-215.ec2.internal" event={"ID":"0d68c36ed96ea5528325ea66516f8810","Type":"ContainerStarted","Data":"1081a1bb7e09ad0f0aa5125718a0dd1cf2e7f45693b1b8cd617fa5d163397d17"} Apr 23 17:41:09.146856 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:09.145580 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kube-system/global-pull-secret-syncer-pthrv"] Apr 23 17:41:09.148366 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:09.148345 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-pthrv" Apr 23 17:41:09.148513 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:09.148490 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-pthrv" podUID="71650021-930d-4f87-9886-b770243bb591"
Apr 23 17:41:09.152890 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:09.152860 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-76svx" event={"ID":"59053c21-2759-4fb0-86d0-fd32dd514204","Type":"ContainerStarted","Data":"0d143c49cd9e23f0fcec966623bed07c7c8462e5dd688b4d79f4637fded5e149"}
Apr 23 17:41:09.155926 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:09.155625 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-s97fv" event={"ID":"a046af0e-862d-4ab0-abeb-47a68683f10f","Type":"ContainerStarted","Data":"3903729fc9d5840351056ca8eec2b79782e9b9e952e4ecb90ff2e867889ba7d9"}
Apr 23 17:41:09.161858 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:09.161793 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-vq478" event={"ID":"3cccf98b-e13a-4889-a901-8e28ef02f8da","Type":"ContainerStarted","Data":"f64283d5bc1485e44548fb8ff43ef4a9bd958b35e3858705ff4b41c34a2eae0b"}
Apr 23 17:41:09.164781 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:09.164718 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-246wr" event={"ID":"0f6164a3-aee1-463f-8c3a-a432711f40db","Type":"ContainerStarted","Data":"4a5b20859f3b24d03a2e0b602f42314f1381304e730e5a779027b79a6b41a594"}
Apr 23 17:41:09.176619 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:09.176592 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-xssq4" event={"ID":"af104084-9831-4928-8414-358452540c48","Type":"ContainerStarted","Data":"484e951b178b52f476ae73d0098cb8529dfe91b5c80e7f7947bc130bfc2edb73"}
Apr 23 17:41:09.179817 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:09.179754 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-r2mgw" event={"ID":"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b","Type":"ContainerStarted","Data":"dba55e2bb73e0d73c2ae86cb8e92d4d8a629a53795c2cd1f1c68ca44033cc803"}
Apr 23 17:41:09.186314 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:09.186291 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5h5xl" event={"ID":"2e14deef-4985-48d4-a516-5ed2e89733cf","Type":"ContainerStarted","Data":"4fd49450134a905043231bcdb604899e30dd82d18dc6db57384dbacb38d0b030"}
Apr 23 17:41:09.193243 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:09.193211 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-htjdl" event={"ID":"490b05a0-5dc6-444e-a2bb-5908cba8c492","Type":"ContainerStarted","Data":"83055d9fffb1552be742534083e188b6b88c90bd8b700b7959c7a79b413a0563"}
Apr 23 17:41:09.195947 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:09.195879 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-l747m" event={"ID":"828447ca-91a9-49c8-a1b8-50a5cfbe0580","Type":"ContainerStarted","Data":"224e79a5737a836ec59e07b3162e7a91829973b2c0d29597c16120aebe4fd65e"}
Apr 23 17:41:09.207245 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:09.206050 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-proxy-ip-10-0-139-215.ec2.internal" podStartSLOduration=2.206035826 podStartE2EDuration="2.206035826s" podCreationTimestamp="2026-04-23 17:41:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:41:09.173087327 +0000 UTC m=+3.594756288" watchObservedRunningTime="2026-04-23 17:41:09.206035826 +0000 UTC m=+3.627704785"
Apr 23 17:41:09.227668 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:09.227600 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/71650021-930d-4f87-9886-b770243bb591-original-pull-secret\") pod \"global-pull-secret-syncer-pthrv\" (UID: \"71650021-930d-4f87-9886-b770243bb591\") " pod="kube-system/global-pull-secret-syncer-pthrv"
Apr 23 17:41:09.227808 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:09.227711 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/71650021-930d-4f87-9886-b770243bb591-kubelet-config\") pod \"global-pull-secret-syncer-pthrv\" (UID: \"71650021-930d-4f87-9886-b770243bb591\") " pod="kube-system/global-pull-secret-syncer-pthrv"
Apr 23 17:41:09.227808 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:09.227759 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/71650021-930d-4f87-9886-b770243bb591-dbus\") pod \"global-pull-secret-syncer-pthrv\" (UID: \"71650021-930d-4f87-9886-b770243bb591\") " pod="kube-system/global-pull-secret-syncer-pthrv"
Apr 23 17:41:09.328675 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:09.328573 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/71650021-930d-4f87-9886-b770243bb591-kubelet-config\") pod \"global-pull-secret-syncer-pthrv\" (UID: \"71650021-930d-4f87-9886-b770243bb591\") " pod="kube-system/global-pull-secret-syncer-pthrv"
Apr 23 17:41:09.328833 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:09.328667 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/71650021-930d-4f87-9886-b770243bb591-dbus\") pod \"global-pull-secret-syncer-pthrv\" (UID: \"71650021-930d-4f87-9886-b770243bb591\") " pod="kube-system/global-pull-secret-syncer-pthrv"
Apr 23 17:41:09.328833 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:09.328751 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/71650021-930d-4f87-9886-b770243bb591-original-pull-secret\") pod \"global-pull-secret-syncer-pthrv\" (UID: \"71650021-930d-4f87-9886-b770243bb591\") " pod="kube-system/global-pull-secret-syncer-pthrv"
Apr 23 17:41:09.329140 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:09.329061 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/71650021-930d-4f87-9886-b770243bb591-kubelet-config\") pod \"global-pull-secret-syncer-pthrv\" (UID: \"71650021-930d-4f87-9886-b770243bb591\") " pod="kube-system/global-pull-secret-syncer-pthrv"
Apr 23 17:41:09.329268 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:09.329185 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/71650021-930d-4f87-9886-b770243bb591-dbus\") pod \"global-pull-secret-syncer-pthrv\" (UID: \"71650021-930d-4f87-9886-b770243bb591\") " pod="kube-system/global-pull-secret-syncer-pthrv"
Apr 23 17:41:09.329661 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:09.329377 2574 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 23 17:41:09.329661 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:09.329461 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71650021-930d-4f87-9886-b770243bb591-original-pull-secret podName:71650021-930d-4f87-9886-b770243bb591 nodeName:}" failed. No retries permitted until 2026-04-23 17:41:09.829441174 +0000 UTC m=+4.251110115 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/71650021-930d-4f87-9886-b770243bb591-original-pull-secret") pod "global-pull-secret-syncer-pthrv" (UID: "71650021-930d-4f87-9886-b770243bb591") : object "kube-system"/"original-pull-secret" not registered
Apr 23 17:41:09.631585 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:09.630976 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d52817f-2284-48d3-800c-a67ac0e0fe4b-metrics-certs\") pod \"network-metrics-daemon-mfhnv\" (UID: \"3d52817f-2284-48d3-800c-a67ac0e0fe4b\") " pod="openshift-multus/network-metrics-daemon-mfhnv"
Apr 23 17:41:09.631585 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:09.631177 2574 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:41:09.631585 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:09.631242 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d52817f-2284-48d3-800c-a67ac0e0fe4b-metrics-certs podName:3d52817f-2284-48d3-800c-a67ac0e0fe4b nodeName:}" failed. No retries permitted until 2026-04-23 17:41:11.631222094 +0000 UTC m=+6.052891045 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3d52817f-2284-48d3-800c-a67ac0e0fe4b-metrics-certs") pod "network-metrics-daemon-mfhnv" (UID: "3d52817f-2284-48d3-800c-a67ac0e0fe4b") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:41:09.832811 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:09.832772 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vmx9k\" (UniqueName: \"kubernetes.io/projected/c0594da3-a624-4d0d-9765-82537ca166c3-kube-api-access-vmx9k\") pod \"network-check-target-lz78w\" (UID: \"c0594da3-a624-4d0d-9765-82537ca166c3\") " pod="openshift-network-diagnostics/network-check-target-lz78w"
Apr 23 17:41:09.832975 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:09.832844 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/71650021-930d-4f87-9886-b770243bb591-original-pull-secret\") pod \"global-pull-secret-syncer-pthrv\" (UID: \"71650021-930d-4f87-9886-b770243bb591\") " pod="kube-system/global-pull-secret-syncer-pthrv"
Apr 23 17:41:09.833054 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:09.832998 2574 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 23 17:41:09.833112 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:09.833060 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71650021-930d-4f87-9886-b770243bb591-original-pull-secret podName:71650021-930d-4f87-9886-b770243bb591 nodeName:}" failed. No retries permitted until 2026-04-23 17:41:10.833042316 +0000 UTC m=+5.254711256 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/71650021-930d-4f87-9886-b770243bb591-original-pull-secret") pod "global-pull-secret-syncer-pthrv" (UID: "71650021-930d-4f87-9886-b770243bb591") : object "kube-system"/"original-pull-secret" not registered
Apr 23 17:41:09.833506 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:09.833485 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 23 17:41:09.833577 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:09.833513 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 23 17:41:09.833577 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:09.833526 2574 projected.go:194] Error preparing data for projected volume kube-api-access-vmx9k for pod openshift-network-diagnostics/network-check-target-lz78w: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:41:09.833703 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:09.833578 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c0594da3-a624-4d0d-9765-82537ca166c3-kube-api-access-vmx9k podName:c0594da3-a624-4d0d-9765-82537ca166c3 nodeName:}" failed. No retries permitted until 2026-04-23 17:41:11.8335611 +0000 UTC m=+6.255230039 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-vmx9k" (UniqueName: "kubernetes.io/projected/c0594da3-a624-4d0d-9765-82537ca166c3-kube-api-access-vmx9k") pod "network-check-target-lz78w" (UID: "c0594da3-a624-4d0d-9765-82537ca166c3") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:41:10.215058 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:10.215019 2574 generic.go:358] "Generic (PLEG): container finished" podID="fed33c6440c35183d017b214d982b3b1" containerID="a4df34ed4927bc3a2797493ee95b0e67bd89be85fabb5cf656c8ef65a84914f6" exitCode=0
Apr 23 17:41:10.215505 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:10.215108 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-139-215.ec2.internal" event={"ID":"fed33c6440c35183d017b214d982b3b1","Type":"ContainerDied","Data":"a4df34ed4927bc3a2797493ee95b0e67bd89be85fabb5cf656c8ef65a84914f6"}
Apr 23 17:41:10.843052 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:10.842913 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/71650021-930d-4f87-9886-b770243bb591-original-pull-secret\") pod \"global-pull-secret-syncer-pthrv\" (UID: \"71650021-930d-4f87-9886-b770243bb591\") " pod="kube-system/global-pull-secret-syncer-pthrv"
Apr 23 17:41:10.843221 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:10.843064 2574 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 23 17:41:10.843221 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:10.843126 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71650021-930d-4f87-9886-b770243bb591-original-pull-secret podName:71650021-930d-4f87-9886-b770243bb591 nodeName:}" failed.
No retries permitted until 2026-04-23 17:41:12.843107476 +0000 UTC m=+7.264776436 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/71650021-930d-4f87-9886-b770243bb591-original-pull-secret") pod "global-pull-secret-syncer-pthrv" (UID: "71650021-930d-4f87-9886-b770243bb591") : object "kube-system"/"original-pull-secret" not registered
Apr 23 17:41:11.132289 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:11.131454 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lz78w"
Apr 23 17:41:11.132289 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:11.131573 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-lz78w" podUID="c0594da3-a624-4d0d-9765-82537ca166c3"
Apr 23 17:41:11.132289 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:11.132002 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-pthrv"
Apr 23 17:41:11.132289 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:11.132095 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-pthrv" podUID="71650021-930d-4f87-9886-b770243bb591"
Apr 23 17:41:11.132289 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:11.132175 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfhnv"
Apr 23 17:41:11.132289 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:11.132256 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mfhnv" podUID="3d52817f-2284-48d3-800c-a67ac0e0fe4b"
Apr 23 17:41:11.222575 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:11.221889 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-139-215.ec2.internal" event={"ID":"fed33c6440c35183d017b214d982b3b1","Type":"ContainerStarted","Data":"6faca566382d9297e7db58e5fbe329cca30be84b86376809b9d7ad43425bd379"}
Apr 23 17:41:11.649649 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:11.649597 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d52817f-2284-48d3-800c-a67ac0e0fe4b-metrics-certs\") pod \"network-metrics-daemon-mfhnv\" (UID: \"3d52817f-2284-48d3-800c-a67ac0e0fe4b\") " pod="openshift-multus/network-metrics-daemon-mfhnv"
Apr 23 17:41:11.649831 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:11.649804 2574 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:41:11.649901 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:11.649865 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d52817f-2284-48d3-800c-a67ac0e0fe4b-metrics-certs podName:3d52817f-2284-48d3-800c-a67ac0e0fe4b nodeName:}" failed. No retries permitted until 2026-04-23 17:41:15.649846975 +0000 UTC m=+10.071515912 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3d52817f-2284-48d3-800c-a67ac0e0fe4b-metrics-certs") pod "network-metrics-daemon-mfhnv" (UID: "3d52817f-2284-48d3-800c-a67ac0e0fe4b") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:41:11.851293 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:11.851253 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vmx9k\" (UniqueName: \"kubernetes.io/projected/c0594da3-a624-4d0d-9765-82537ca166c3-kube-api-access-vmx9k\") pod \"network-check-target-lz78w\" (UID: \"c0594da3-a624-4d0d-9765-82537ca166c3\") " pod="openshift-network-diagnostics/network-check-target-lz78w"
Apr 23 17:41:11.851494 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:11.851476 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 23 17:41:11.851574 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:11.851501 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 23 17:41:11.851574 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:11.851514 2574 projected.go:194] Error preparing data for projected volume kube-api-access-vmx9k for pod openshift-network-diagnostics/network-check-target-lz78w: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:41:11.851714 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:11.851576 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c0594da3-a624-4d0d-9765-82537ca166c3-kube-api-access-vmx9k podName:c0594da3-a624-4d0d-9765-82537ca166c3 nodeName:}" failed. No retries permitted until 2026-04-23 17:41:15.851558255 +0000 UTC m=+10.273227196 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-vmx9k" (UniqueName: "kubernetes.io/projected/c0594da3-a624-4d0d-9765-82537ca166c3-kube-api-access-vmx9k") pod "network-check-target-lz78w" (UID: "c0594da3-a624-4d0d-9765-82537ca166c3") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:41:12.862215 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:12.862165 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/71650021-930d-4f87-9886-b770243bb591-original-pull-secret\") pod \"global-pull-secret-syncer-pthrv\" (UID: \"71650021-930d-4f87-9886-b770243bb591\") " pod="kube-system/global-pull-secret-syncer-pthrv"
Apr 23 17:41:12.862683 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:12.862321 2574 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 23 17:41:12.862683 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:12.862391 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71650021-930d-4f87-9886-b770243bb591-original-pull-secret podName:71650021-930d-4f87-9886-b770243bb591 nodeName:}" failed. No retries permitted until 2026-04-23 17:41:16.862371792 +0000 UTC m=+11.284040750 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/71650021-930d-4f87-9886-b770243bb591-original-pull-secret") pod "global-pull-secret-syncer-pthrv" (UID: "71650021-930d-4f87-9886-b770243bb591") : object "kube-system"/"original-pull-secret" not registered
Apr 23 17:41:13.132165 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:13.132132 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lz78w"
Apr 23 17:41:13.132335 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:13.132261 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-lz78w" podUID="c0594da3-a624-4d0d-9765-82537ca166c3"
Apr 23 17:41:13.132562 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:13.132544 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-pthrv"
Apr 23 17:41:13.132669 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:13.132650 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-pthrv" podUID="71650021-930d-4f87-9886-b770243bb591"
Apr 23 17:41:13.132739 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:13.132722 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfhnv"
Apr 23 17:41:13.132841 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:13.132822 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mfhnv" podUID="3d52817f-2284-48d3-800c-a67ac0e0fe4b"
Apr 23 17:41:15.132203 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:15.132167 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-pthrv"
Apr 23 17:41:15.132659 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:15.132180 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfhnv"
Apr 23 17:41:15.132659 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:15.132307 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-pthrv" podUID="71650021-930d-4f87-9886-b770243bb591"
Apr 23 17:41:15.132659 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:15.132412 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mfhnv" podUID="3d52817f-2284-48d3-800c-a67ac0e0fe4b"
Apr 23 17:41:15.132659 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:15.132456 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lz78w"
Apr 23 17:41:15.132659 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:15.132517 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-lz78w" podUID="c0594da3-a624-4d0d-9765-82537ca166c3"
Apr 23 17:41:15.688311 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:15.688269 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d52817f-2284-48d3-800c-a67ac0e0fe4b-metrics-certs\") pod \"network-metrics-daemon-mfhnv\" (UID: \"3d52817f-2284-48d3-800c-a67ac0e0fe4b\") " pod="openshift-multus/network-metrics-daemon-mfhnv"
Apr 23 17:41:15.688488 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:15.688430 2574 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:41:15.688538 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:15.688499 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d52817f-2284-48d3-800c-a67ac0e0fe4b-metrics-certs podName:3d52817f-2284-48d3-800c-a67ac0e0fe4b nodeName:}" failed. No retries permitted until 2026-04-23 17:41:23.688477791 +0000 UTC m=+18.110146743 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3d52817f-2284-48d3-800c-a67ac0e0fe4b-metrics-certs") pod "network-metrics-daemon-mfhnv" (UID: "3d52817f-2284-48d3-800c-a67ac0e0fe4b") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 23 17:41:15.890565 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:15.890523 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vmx9k\" (UniqueName: \"kubernetes.io/projected/c0594da3-a624-4d0d-9765-82537ca166c3-kube-api-access-vmx9k\") pod \"network-check-target-lz78w\" (UID: \"c0594da3-a624-4d0d-9765-82537ca166c3\") " pod="openshift-network-diagnostics/network-check-target-lz78w"
Apr 23 17:41:15.891205 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:15.890757 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 23 17:41:15.891205 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:15.890783 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 23 17:41:15.891205 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:15.890798 2574 projected.go:194] Error preparing data for projected volume kube-api-access-vmx9k for pod openshift-network-diagnostics/network-check-target-lz78w: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:41:15.891205 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:15.890856 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c0594da3-a624-4d0d-9765-82537ca166c3-kube-api-access-vmx9k podName:c0594da3-a624-4d0d-9765-82537ca166c3 nodeName:}" failed. No retries permitted until 2026-04-23 17:41:23.890838029 +0000 UTC m=+18.312506971 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-vmx9k" (UniqueName: "kubernetes.io/projected/c0594da3-a624-4d0d-9765-82537ca166c3-kube-api-access-vmx9k") pod "network-check-target-lz78w" (UID: "c0594da3-a624-4d0d-9765-82537ca166c3") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 23 17:41:16.899582 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:16.899095 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/71650021-930d-4f87-9886-b770243bb591-original-pull-secret\") pod \"global-pull-secret-syncer-pthrv\" (UID: \"71650021-930d-4f87-9886-b770243bb591\") " pod="kube-system/global-pull-secret-syncer-pthrv"
Apr 23 17:41:16.899582 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:16.899225 2574 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 23 17:41:16.899582 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:16.899288 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71650021-930d-4f87-9886-b770243bb591-original-pull-secret podName:71650021-930d-4f87-9886-b770243bb591 nodeName:}" failed. No retries permitted until 2026-04-23 17:41:24.899275066 +0000 UTC m=+19.320944018 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/71650021-930d-4f87-9886-b770243bb591-original-pull-secret") pod "global-pull-secret-syncer-pthrv" (UID: "71650021-930d-4f87-9886-b770243bb591") : object "kube-system"/"original-pull-secret" not registered
Apr 23 17:41:17.132972 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:17.132184 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfhnv"
Apr 23 17:41:17.132972 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:17.132326 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mfhnv" podUID="3d52817f-2284-48d3-800c-a67ac0e0fe4b"
Apr 23 17:41:17.132972 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:17.132392 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lz78w"
Apr 23 17:41:17.132972 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:17.132457 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-lz78w" podUID="c0594da3-a624-4d0d-9765-82537ca166c3"
Apr 23 17:41:17.132972 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:17.132835 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-pthrv"
Apr 23 17:41:17.132972 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:17.132923 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-pthrv" podUID="71650021-930d-4f87-9886-b770243bb591"
Apr 23 17:41:19.131999 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:19.131954 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lz78w"
Apr 23 17:41:19.132566 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:19.132081 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-pthrv"
Apr 23 17:41:19.132566 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:19.132104 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfhnv"
Apr 23 17:41:19.132566 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:19.132081 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-lz78w" podUID="c0594da3-a624-4d0d-9765-82537ca166c3"
Apr 23 17:41:19.132566 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:19.132188 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-pthrv" podUID="71650021-930d-4f87-9886-b770243bb591"
Apr 23 17:41:19.132566 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:19.132266 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mfhnv" podUID="3d52817f-2284-48d3-800c-a67ac0e0fe4b"
Apr 23 17:41:21.131649 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:21.131598 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lz78w"
Apr 23 17:41:21.132094 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:21.131737 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-lz78w" podUID="c0594da3-a624-4d0d-9765-82537ca166c3"
Apr 23 17:41:21.132094 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:21.131742 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-pthrv"
Apr 23 17:41:21.132094 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:21.131766 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfhnv"
Apr 23 17:41:21.132094 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:21.131827 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-pthrv" podUID="71650021-930d-4f87-9886-b770243bb591"
Apr 23 17:41:21.132094 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:21.131911 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mfhnv" podUID="3d52817f-2284-48d3-800c-a67ac0e0fe4b"
Apr 23 17:41:23.132126 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:23.132048 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lz78w"
Apr 23 17:41:23.132126 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:23.132081 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-pthrv"
Apr 23 17:41:23.132595 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:23.132170 2574 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfhnv" Apr 23 17:41:23.132595 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:23.132178 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-lz78w" podUID="c0594da3-a624-4d0d-9765-82537ca166c3" Apr 23 17:41:23.132595 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:23.132317 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mfhnv" podUID="3d52817f-2284-48d3-800c-a67ac0e0fe4b" Apr 23 17:41:23.132595 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:23.132396 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-pthrv" podUID="71650021-930d-4f87-9886-b770243bb591" Apr 23 17:41:23.752673 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:23.752619 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d52817f-2284-48d3-800c-a67ac0e0fe4b-metrics-certs\") pod \"network-metrics-daemon-mfhnv\" (UID: \"3d52817f-2284-48d3-800c-a67ac0e0fe4b\") " pod="openshift-multus/network-metrics-daemon-mfhnv" Apr 23 17:41:23.752854 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:23.752769 2574 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:41:23.752854 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:23.752833 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d52817f-2284-48d3-800c-a67ac0e0fe4b-metrics-certs podName:3d52817f-2284-48d3-800c-a67ac0e0fe4b nodeName:}" failed. No retries permitted until 2026-04-23 17:41:39.75281549 +0000 UTC m=+34.174484428 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3d52817f-2284-48d3-800c-a67ac0e0fe4b-metrics-certs") pod "network-metrics-daemon-mfhnv" (UID: "3d52817f-2284-48d3-800c-a67ac0e0fe4b") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:41:23.954333 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:23.954291 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vmx9k\" (UniqueName: \"kubernetes.io/projected/c0594da3-a624-4d0d-9765-82537ca166c3-kube-api-access-vmx9k\") pod \"network-check-target-lz78w\" (UID: \"c0594da3-a624-4d0d-9765-82537ca166c3\") " pod="openshift-network-diagnostics/network-check-target-lz78w" Apr 23 17:41:23.954506 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:23.954469 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 23 17:41:23.954506 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:23.954491 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 23 17:41:23.954506 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:23.954501 2574 projected.go:194] Error preparing data for projected volume kube-api-access-vmx9k for pod openshift-network-diagnostics/network-check-target-lz78w: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:41:23.954699 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:23.954556 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c0594da3-a624-4d0d-9765-82537ca166c3-kube-api-access-vmx9k podName:c0594da3-a624-4d0d-9765-82537ca166c3 nodeName:}" failed. 
No retries permitted until 2026-04-23 17:41:39.954541484 +0000 UTC m=+34.376210421 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-vmx9k" (UniqueName: "kubernetes.io/projected/c0594da3-a624-4d0d-9765-82537ca166c3-kube-api-access-vmx9k") pod "network-check-target-lz78w" (UID: "c0594da3-a624-4d0d-9765-82537ca166c3") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:41:24.961569 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:24.961519 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/71650021-930d-4f87-9886-b770243bb591-original-pull-secret\") pod \"global-pull-secret-syncer-pthrv\" (UID: \"71650021-930d-4f87-9886-b770243bb591\") " pod="kube-system/global-pull-secret-syncer-pthrv" Apr 23 17:41:24.962001 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:24.961651 2574 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 23 17:41:24.962001 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:24.961723 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71650021-930d-4f87-9886-b770243bb591-original-pull-secret podName:71650021-930d-4f87-9886-b770243bb591 nodeName:}" failed. No retries permitted until 2026-04-23 17:41:40.961703553 +0000 UTC m=+35.383372500 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/71650021-930d-4f87-9886-b770243bb591-original-pull-secret") pod "global-pull-secret-syncer-pthrv" (UID: "71650021-930d-4f87-9886-b770243bb591") : object "kube-system"/"original-pull-secret" not registered Apr 23 17:41:25.132611 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:25.132408 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-pthrv" Apr 23 17:41:25.132611 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:25.132444 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lz78w" Apr 23 17:41:25.132611 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:25.132470 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfhnv" Apr 23 17:41:25.132611 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:25.132563 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-lz78w" podUID="c0594da3-a624-4d0d-9765-82537ca166c3" Apr 23 17:41:25.132963 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:25.132653 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-pthrv" podUID="71650021-930d-4f87-9886-b770243bb591" Apr 23 17:41:25.132963 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:25.132748 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mfhnv" podUID="3d52817f-2284-48d3-800c-a67ac0e0fe4b" Apr 23 17:41:26.250539 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:26.250271 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-246wr" event={"ID":"0f6164a3-aee1-463f-8c3a-a432711f40db","Type":"ContainerStarted","Data":"427c3cd7a9a66d70c9d53c007269673fc273b9f8aa3a8dbdd7077748c12e7546"} Apr 23 17:41:26.252547 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:26.252437 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-xssq4" event={"ID":"af104084-9831-4928-8414-358452540c48","Type":"ContainerStarted","Data":"e82f4612f6b238a5d5a0dc5e896fe36271337e468b8ecb085256ecc95622d61a"} Apr 23 17:41:26.254945 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:26.254916 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-r2mgw" event={"ID":"a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b","Type":"ContainerStarted","Data":"7c6982a5cfd8cfd5d9520504b702ec34ea10e4a7e85c32790c05699de57e212c"} Apr 23 17:41:26.256764 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:26.256744 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5h5xl" event={"ID":"2e14deef-4985-48d4-a516-5ed2e89733cf","Type":"ContainerStarted","Data":"6fad0031961faf5fb08bc471ab3dca42ee90aedfe8d66cf554db6099a6b4cc3d"} Apr 23 17:41:26.260049 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:26.260024 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-76svx" event={"ID":"59053c21-2759-4fb0-86d0-fd32dd514204","Type":"ContainerStarted","Data":"cf253dd42e9f7e3faa7563bdc02927b493f332760d359f8940c8a9342e08c386"} Apr 23 17:41:26.271295 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:26.271105 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cluster-node-tuning-operator/tuned-xssq4" podStartSLOduration=3.245387889 podStartE2EDuration="20.271090896s" podCreationTimestamp="2026-04-23 17:41:06 +0000 UTC" firstStartedPulling="2026-04-23 17:41:08.717804715 +0000 UTC m=+3.139473668" lastFinishedPulling="2026-04-23 17:41:25.743507723 +0000 UTC m=+20.165176675" observedRunningTime="2026-04-23 17:41:26.270093114 +0000 UTC m=+20.691762072" watchObservedRunningTime="2026-04-23 17:41:26.271090896 +0000 UTC m=+20.692759856" Apr 23 17:41:26.272369 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:26.271461 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-139-215.ec2.internal" podStartSLOduration=19.271447333 podStartE2EDuration="19.271447333s" podCreationTimestamp="2026-04-23 17:41:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:41:11.239481776 +0000 UTC m=+5.661150757" watchObservedRunningTime="2026-04-23 17:41:26.271447333 +0000 UTC m=+20.693116294" Apr 23 17:41:26.283273 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:26.283233 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-76svx" podStartSLOduration=3.270979648 podStartE2EDuration="20.283220132s" podCreationTimestamp="2026-04-23 17:41:06 +0000 UTC" firstStartedPulling="2026-04-23 17:41:08.731278994 +0000 UTC m=+3.152947936" lastFinishedPulling="2026-04-23 17:41:25.743519484 +0000 UTC m=+20.165188420" observedRunningTime="2026-04-23 17:41:26.282831206 +0000 UTC m=+20.704500164" watchObservedRunningTime="2026-04-23 17:41:26.283220132 +0000 UTC m=+20.704889091" Apr 23 17:41:26.317380 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:26.317333 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-r2mgw" podStartSLOduration=3.063137401 
podStartE2EDuration="20.317319728s" podCreationTimestamp="2026-04-23 17:41:06 +0000 UTC" firstStartedPulling="2026-04-23 17:41:08.726246396 +0000 UTC m=+3.147915333" lastFinishedPulling="2026-04-23 17:41:25.980428718 +0000 UTC m=+20.402097660" observedRunningTime="2026-04-23 17:41:26.316832328 +0000 UTC m=+20.738501283" watchObservedRunningTime="2026-04-23 17:41:26.317319728 +0000 UTC m=+20.738988686" Apr 23 17:41:27.131607 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:27.131394 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-pthrv" Apr 23 17:41:27.131793 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:27.131456 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lz78w" Apr 23 17:41:27.131793 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:27.131715 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-pthrv" podUID="71650021-930d-4f87-9886-b770243bb591" Apr 23 17:41:27.131793 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:27.131481 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfhnv" Apr 23 17:41:27.131793 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:27.131784 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-lz78w" podUID="c0594da3-a624-4d0d-9765-82537ca166c3" Apr 23 17:41:27.131980 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:27.131870 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mfhnv" podUID="3d52817f-2284-48d3-800c-a67ac0e0fe4b" Apr 23 17:41:27.265190 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:27.265159 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-s97fv" event={"ID":"a046af0e-862d-4ab0-abeb-47a68683f10f","Type":"ContainerStarted","Data":"b9ef389efa5265648a3f82809b7f321d1112fa58802b647563b7ff7939316c36"} Apr 23 17:41:27.266585 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:27.266552 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-vq478" event={"ID":"3cccf98b-e13a-4889-a901-8e28ef02f8da","Type":"ContainerStarted","Data":"865f7a3f5618713155c47eeed511450b7c110ace3b98cf5f64c925b79d5b4ee6"} Apr 23 17:41:27.269163 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:27.269146 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-246wr_0f6164a3-aee1-463f-8c3a-a432711f40db/ovn-acl-logging/0.log" Apr 23 17:41:27.269453 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:27.269429 2574 generic.go:358] "Generic (PLEG): container finished" podID="0f6164a3-aee1-463f-8c3a-a432711f40db" containerID="8acb9edd3f738470ed4e1b9d798ebc3c8c09ba8ff3a6afa91719d0d602fcd5a8" exitCode=1 Apr 23 17:41:27.269504 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:27.269454 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-246wr" 
event={"ID":"0f6164a3-aee1-463f-8c3a-a432711f40db","Type":"ContainerStarted","Data":"e972e04ff01e3190ca6bd00ccd12a5f4022e69a69eb77063ffc63601e1b5d482"} Apr 23 17:41:27.269504 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:27.269477 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-246wr" event={"ID":"0f6164a3-aee1-463f-8c3a-a432711f40db","Type":"ContainerStarted","Data":"469c6af4da76b575cf9327c9fac10ca5a6099887291c12cbb76f91df582cccb5"} Apr 23 17:41:27.269504 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:27.269489 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-246wr" event={"ID":"0f6164a3-aee1-463f-8c3a-a432711f40db","Type":"ContainerStarted","Data":"4c873f73fac93abdd72ec546e7cebaadc03105bfa28914536bee379e404ec8ab"} Apr 23 17:41:27.269504 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:27.269502 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-246wr" event={"ID":"0f6164a3-aee1-463f-8c3a-a432711f40db","Type":"ContainerStarted","Data":"bf18c0c68ca47d69698c959cfaae4e311cbdeb2b4ebeb02886f037b32422f6cb"} Apr 23 17:41:27.269653 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:27.269517 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-246wr" event={"ID":"0f6164a3-aee1-463f-8c3a-a432711f40db","Type":"ContainerDied","Data":"8acb9edd3f738470ed4e1b9d798ebc3c8c09ba8ff3a6afa91719d0d602fcd5a8"} Apr 23 17:41:27.270696 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:27.270678 2574 generic.go:358] "Generic (PLEG): container finished" podID="2e14deef-4985-48d4-a516-5ed2e89733cf" containerID="6fad0031961faf5fb08bc471ab3dca42ee90aedfe8d66cf554db6099a6b4cc3d" exitCode=0 Apr 23 17:41:27.270792 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:27.270726 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5h5xl" 
event={"ID":"2e14deef-4985-48d4-a516-5ed2e89733cf","Type":"ContainerDied","Data":"6fad0031961faf5fb08bc471ab3dca42ee90aedfe8d66cf554db6099a6b4cc3d"} Apr 23 17:41:27.272071 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:27.272030 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-htjdl" event={"ID":"490b05a0-5dc6-444e-a2bb-5908cba8c492","Type":"ContainerStarted","Data":"1a3f8edf942eb72d3c729622ce87fc2e47a7421ebd968b85b2bf5fdcd9325888"} Apr 23 17:41:27.273545 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:27.273423 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-l747m" event={"ID":"828447ca-91a9-49c8-a1b8-50a5cfbe0580","Type":"ContainerStarted","Data":"d9ca1e150d4c6063319f6f32141356e3e7d3f3957f15fac2e5ea4813eb96bb2e"} Apr 23 17:41:27.280873 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:27.280836 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/iptables-alerter-s97fv" podStartSLOduration=4.052550838 podStartE2EDuration="21.28082549s" podCreationTimestamp="2026-04-23 17:41:06 +0000 UTC" firstStartedPulling="2026-04-23 17:41:08.727979402 +0000 UTC m=+3.149648353" lastFinishedPulling="2026-04-23 17:41:25.956254052 +0000 UTC m=+20.377923005" observedRunningTime="2026-04-23 17:41:27.280690298 +0000 UTC m=+21.702359257" watchObservedRunningTime="2026-04-23 17:41:27.28082549 +0000 UTC m=+21.702494454" Apr 23 17:41:27.295048 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:27.295002 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/konnectivity-agent-vq478" podStartSLOduration=4.279769723 podStartE2EDuration="21.294985861s" podCreationTimestamp="2026-04-23 17:41:06 +0000 UTC" firstStartedPulling="2026-04-23 17:41:08.728290656 +0000 UTC m=+3.149959593" lastFinishedPulling="2026-04-23 17:41:25.743506778 +0000 UTC m=+20.165175731" observedRunningTime="2026-04-23 
17:41:27.294932393 +0000 UTC m=+21.716601351" watchObservedRunningTime="2026-04-23 17:41:27.294985861 +0000 UTC m=+21.716654828" Apr 23 17:41:27.328722 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:27.328663 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-l747m" podStartSLOduration=4.08996548 podStartE2EDuration="21.328627952s" podCreationTimestamp="2026-04-23 17:41:06 +0000 UTC" firstStartedPulling="2026-04-23 17:41:08.718841375 +0000 UTC m=+3.140510312" lastFinishedPulling="2026-04-23 17:41:25.957503847 +0000 UTC m=+20.379172784" observedRunningTime="2026-04-23 17:41:27.328386482 +0000 UTC m=+21.750055443" watchObservedRunningTime="2026-04-23 17:41:27.328627952 +0000 UTC m=+21.750296911" Apr 23 17:41:27.399778 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:27.399753 2574 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock" Apr 23 17:41:28.075767 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:28.075616 2574 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock","Timestamp":"2026-04-23T17:41:27.399772509Z","UUID":"6e92060c-a819-49de-ac44-0d8ae4bb95bd","Handler":null,"Name":"","Endpoint":""} Apr 23 17:41:28.077419 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:28.077399 2574 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: ebs.csi.aws.com endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock versions: 1.0.0 Apr 23 17:41:28.077419 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:28.077431 2574 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: ebs.csi.aws.com at endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock Apr 23 17:41:28.102567 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:28.102540 2574 kubelet.go:2658] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="kube-system/konnectivity-agent-vq478" Apr 23 17:41:28.103263 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:28.103242 2574 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/konnectivity-agent-vq478" Apr 23 17:41:28.277475 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:28.277423 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-htjdl" event={"ID":"490b05a0-5dc6-444e-a2bb-5908cba8c492","Type":"ContainerStarted","Data":"56e90c4784d87b190b2e32ffd4f82a2c885f7199c785d7c73bd417f6d26d0df7"} Apr 23 17:41:28.277973 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:28.277818 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kube-system/konnectivity-agent-vq478" Apr 23 17:41:28.278109 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:28.278084 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/konnectivity-agent-vq478" Apr 23 17:41:29.132307 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:29.132048 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfhnv" Apr 23 17:41:29.132467 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:29.132048 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lz78w" Apr 23 17:41:29.132467 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:29.132379 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mfhnv" podUID="3d52817f-2284-48d3-800c-a67ac0e0fe4b" Apr 23 17:41:29.132467 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:29.132436 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-lz78w" podUID="c0594da3-a624-4d0d-9765-82537ca166c3" Apr 23 17:41:29.132467 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:29.132052 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-pthrv" Apr 23 17:41:29.132699 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:29.132529 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-pthrv" podUID="71650021-930d-4f87-9886-b770243bb591" Apr 23 17:41:29.282599 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:29.282565 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-246wr_0f6164a3-aee1-463f-8c3a-a432711f40db/ovn-acl-logging/0.log" Apr 23 17:41:29.283082 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:29.283057 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-246wr" event={"ID":"0f6164a3-aee1-463f-8c3a-a432711f40db","Type":"ContainerStarted","Data":"84fdf82e3fbb18fa15971b21b766f192547dc4411d614b993cc28bbdddc87415"} Apr 23 17:41:29.285128 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:29.285075 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-htjdl" event={"ID":"490b05a0-5dc6-444e-a2bb-5908cba8c492","Type":"ContainerStarted","Data":"c1a7471fee6544ff0f3599b5765c40cd56f384ce621bfe5effe37bdf788edb20"} Apr 23 17:41:29.303016 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:29.302969 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-htjdl" podStartSLOduration=3.495613654 podStartE2EDuration="23.302955607s" podCreationTimestamp="2026-04-23 17:41:06 +0000 UTC" firstStartedPulling="2026-04-23 17:41:08.722992187 +0000 UTC m=+3.144661124" lastFinishedPulling="2026-04-23 17:41:28.530334132 +0000 UTC m=+22.952003077" observedRunningTime="2026-04-23 17:41:29.302511458 +0000 UTC m=+23.724180412" watchObservedRunningTime="2026-04-23 17:41:29.302955607 +0000 UTC m=+23.724624565" Apr 23 17:41:31.131734 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:31.131699 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfhnv" Apr 23 17:41:31.132288 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:31.131699 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-pthrv" Apr 23 17:41:31.132288 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:31.131820 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mfhnv" podUID="3d52817f-2284-48d3-800c-a67ac0e0fe4b" Apr 23 17:41:31.132288 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:31.131900 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-pthrv" podUID="71650021-930d-4f87-9886-b770243bb591" Apr 23 17:41:31.132288 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:31.131699 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lz78w" Apr 23 17:41:31.132288 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:31.131974 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-lz78w" podUID="c0594da3-a624-4d0d-9765-82537ca166c3" Apr 23 17:41:32.293711 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:32.293519 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-246wr_0f6164a3-aee1-463f-8c3a-a432711f40db/ovn-acl-logging/0.log" Apr 23 17:41:32.294442 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:32.294021 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-246wr" event={"ID":"0f6164a3-aee1-463f-8c3a-a432711f40db","Type":"ContainerStarted","Data":"0ad4c850b859173a01c76d4990bc53d3bfa346dfb152cd38e0d9552e3425c8c8"} Apr 23 17:41:32.294442 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:32.294325 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-246wr" Apr 23 17:41:32.294442 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:32.294349 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-246wr" Apr 23 17:41:32.294562 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:32.294485 2574 scope.go:117] "RemoveContainer" containerID="8acb9edd3f738470ed4e1b9d798ebc3c8c09ba8ff3a6afa91719d0d602fcd5a8" Apr 23 17:41:32.295831 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:32.295804 2574 generic.go:358] "Generic (PLEG): container finished" podID="2e14deef-4985-48d4-a516-5ed2e89733cf" containerID="6eaaaf3ef6d283fae1b6fdaa024922df94778138bfed4ee855ee5ca353108d33" exitCode=0 Apr 23 17:41:32.295937 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:32.295846 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5h5xl" event={"ID":"2e14deef-4985-48d4-a516-5ed2e89733cf","Type":"ContainerDied","Data":"6eaaaf3ef6d283fae1b6fdaa024922df94778138bfed4ee855ee5ca353108d33"} Apr 23 17:41:32.310708 ip-10-0-139-215 kubenswrapper[2574]: 
I0423 17:41:32.310682 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-246wr" Apr 23 17:41:33.132129 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:33.132107 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lz78w" Apr 23 17:41:33.132129 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:33.132121 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-pthrv" Apr 23 17:41:33.132268 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:33.132207 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-lz78w" podUID="c0594da3-a624-4d0d-9765-82537ca166c3" Apr 23 17:41:33.132268 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:33.132227 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfhnv" Apr 23 17:41:33.132359 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:33.132318 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-pthrv" podUID="71650021-930d-4f87-9886-b770243bb591" Apr 23 17:41:33.132429 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:33.132408 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mfhnv" podUID="3d52817f-2284-48d3-800c-a67ac0e0fe4b" Apr 23 17:41:33.274327 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:33.274257 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-mfhnv"] Apr 23 17:41:33.276505 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:33.276478 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-lz78w"] Apr 23 17:41:33.277152 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:33.277126 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-pthrv"] Apr 23 17:41:33.300674 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:33.300617 2574 generic.go:358] "Generic (PLEG): container finished" podID="2e14deef-4985-48d4-a516-5ed2e89733cf" containerID="f22155f48837e6e1dbee9ba01575b4dffa01b2c87617b51154b9089eaa8e8b4a" exitCode=0 Apr 23 17:41:33.301126 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:33.300675 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5h5xl" event={"ID":"2e14deef-4985-48d4-a516-5ed2e89733cf","Type":"ContainerDied","Data":"f22155f48837e6e1dbee9ba01575b4dffa01b2c87617b51154b9089eaa8e8b4a"} Apr 23 17:41:33.308761 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:33.308597 2574 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-246wr_0f6164a3-aee1-463f-8c3a-a432711f40db/ovn-acl-logging/0.log" Apr 23 17:41:33.309085 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:33.309055 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-246wr" event={"ID":"0f6164a3-aee1-463f-8c3a-a432711f40db","Type":"ContainerStarted","Data":"4a1bddba7e9397d326d50660a209f145eac51459a673ee5676979f07a007142f"} Apr 23 17:41:33.309213 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:33.309101 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lz78w" Apr 23 17:41:33.309272 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:33.309212 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-lz78w" podUID="c0594da3-a624-4d0d-9765-82537ca166c3" Apr 23 17:41:33.309272 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:33.309104 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-pthrv" Apr 23 17:41:33.309389 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:33.309306 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-pthrv" podUID="71650021-930d-4f87-9886-b770243bb591" Apr 23 17:41:33.309389 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:33.309104 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfhnv" Apr 23 17:41:33.309473 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:33.309396 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mfhnv" podUID="3d52817f-2284-48d3-800c-a67ac0e0fe4b" Apr 23 17:41:33.309473 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:33.309410 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-246wr" Apr 23 17:41:33.326890 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:33.326869 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-246wr" Apr 23 17:41:33.343946 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:33.343895 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-246wr" podStartSLOduration=10.057914486 podStartE2EDuration="27.343878514s" podCreationTimestamp="2026-04-23 17:41:06 +0000 UTC" firstStartedPulling="2026-04-23 17:41:08.722180121 +0000 UTC m=+3.143849073" lastFinishedPulling="2026-04-23 17:41:26.00814415 +0000 UTC m=+20.429813101" observedRunningTime="2026-04-23 17:41:33.34246112 +0000 UTC m=+27.764130078" watchObservedRunningTime="2026-04-23 17:41:33.343878514 +0000 UTC m=+27.765547453" Apr 23 17:41:34.312739 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:34.312660 2574 generic.go:358] "Generic (PLEG): container finished" podID="2e14deef-4985-48d4-a516-5ed2e89733cf" containerID="72e9906cae8bfe789318a4590ff1c146e8db3050ac7689466fbb0765468301b7" exitCode=0 Apr 23 17:41:34.313075 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:34.312740 2574 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5h5xl" event={"ID":"2e14deef-4985-48d4-a516-5ed2e89733cf","Type":"ContainerDied","Data":"72e9906cae8bfe789318a4590ff1c146e8db3050ac7689466fbb0765468301b7"} Apr 23 17:41:35.132422 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:35.132383 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfhnv" Apr 23 17:41:35.132594 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:35.132384 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-pthrv" Apr 23 17:41:35.132594 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:35.132518 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mfhnv" podUID="3d52817f-2284-48d3-800c-a67ac0e0fe4b" Apr 23 17:41:35.132594 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:35.132538 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lz78w" Apr 23 17:41:35.132741 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:35.132650 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-pthrv" podUID="71650021-930d-4f87-9886-b770243bb591" Apr 23 17:41:35.132741 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:35.132728 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-lz78w" podUID="c0594da3-a624-4d0d-9765-82537ca166c3" Apr 23 17:41:37.131669 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:37.131613 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-pthrv" Apr 23 17:41:37.132241 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:37.131764 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-pthrv" podUID="71650021-930d-4f87-9886-b770243bb591" Apr 23 17:41:37.132241 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:37.132117 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lz78w" Apr 23 17:41:37.132241 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:37.132218 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-lz78w" podUID="c0594da3-a624-4d0d-9765-82537ca166c3" Apr 23 17:41:37.132392 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:37.132265 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfhnv" Apr 23 17:41:37.132392 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:37.132348 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mfhnv" podUID="3d52817f-2284-48d3-800c-a67ac0e0fe4b" Apr 23 17:41:39.132285 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:39.132244 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lz78w" Apr 23 17:41:39.132285 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:39.132284 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-pthrv" Apr 23 17:41:39.133092 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:39.132374 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfhnv" Apr 23 17:41:39.133092 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:39.132380 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-lz78w" podUID="c0594da3-a624-4d0d-9765-82537ca166c3" Apr 23 17:41:39.133092 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:39.132491 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mfhnv" podUID="3d52817f-2284-48d3-800c-a67ac0e0fe4b" Apr 23 17:41:39.133092 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:39.132565 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-pthrv" podUID="71650021-930d-4f87-9886-b770243bb591" Apr 23 17:41:39.775164 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:39.775128 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d52817f-2284-48d3-800c-a67ac0e0fe4b-metrics-certs\") pod \"network-metrics-daemon-mfhnv\" (UID: \"3d52817f-2284-48d3-800c-a67ac0e0fe4b\") " pod="openshift-multus/network-metrics-daemon-mfhnv" Apr 23 17:41:39.775347 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:39.775300 2574 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:41:39.775416 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:39.775379 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d52817f-2284-48d3-800c-a67ac0e0fe4b-metrics-certs podName:3d52817f-2284-48d3-800c-a67ac0e0fe4b nodeName:}" failed. 
No retries permitted until 2026-04-23 17:42:11.775359208 +0000 UTC m=+66.197028159 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3d52817f-2284-48d3-800c-a67ac0e0fe4b-metrics-certs") pod "network-metrics-daemon-mfhnv" (UID: "3d52817f-2284-48d3-800c-a67ac0e0fe4b") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 23 17:41:39.929539 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:39.929514 2574 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-139-215.ec2.internal" event="NodeReady" Apr 23 17:41:39.929707 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:39.929663 2574 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Apr 23 17:41:39.970077 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:39.970046 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-pnmwc"] Apr 23 17:41:39.976052 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:39.976017 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vmx9k\" (UniqueName: \"kubernetes.io/projected/c0594da3-a624-4d0d-9765-82537ca166c3-kube-api-access-vmx9k\") pod \"network-check-target-lz78w\" (UID: \"c0594da3-a624-4d0d-9765-82537ca166c3\") " pod="openshift-network-diagnostics/network-check-target-lz78w" Apr 23 17:41:39.976182 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:39.976139 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 23 17:41:39.976182 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:39.976153 2574 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 23 17:41:39.976182 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:39.976162 2574 
projected.go:194] Error preparing data for projected volume kube-api-access-vmx9k for pod openshift-network-diagnostics/network-check-target-lz78w: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:41:39.976284 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:39.976203 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c0594da3-a624-4d0d-9765-82537ca166c3-kube-api-access-vmx9k podName:c0594da3-a624-4d0d-9765-82537ca166c3 nodeName:}" failed. No retries permitted until 2026-04-23 17:42:11.976191894 +0000 UTC m=+66.397860831 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-vmx9k" (UniqueName: "kubernetes.io/projected/c0594da3-a624-4d0d-9765-82537ca166c3-kube-api-access-vmx9k") pod "network-check-target-lz78w" (UID: "c0594da3-a624-4d0d-9765-82537ca166c3") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 23 17:41:39.983856 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:39.983777 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-wfmxn"] Apr 23 17:41:39.983969 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:39.983955 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-pnmwc" Apr 23 17:41:39.986880 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:39.986844 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-b6vkm\"" Apr 23 17:41:39.987051 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:39.987023 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Apr 23 17:41:39.987185 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:39.987168 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Apr 23 17:41:40.010302 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:40.010273 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-pnmwc"] Apr 23 17:41:40.010405 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:40.010311 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-wfmxn"] Apr 23 17:41:40.010405 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:40.010327 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-wfmxn" Apr 23 17:41:40.015114 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:40.015099 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Apr 23 17:41:40.015343 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:40.015327 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-bmxrf\"" Apr 23 17:41:40.015414 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:40.015326 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Apr 23 17:41:40.030290 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:40.030231 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Apr 23 17:41:40.076477 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:40.076449 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs4ld\" (UniqueName: \"kubernetes.io/projected/23665133-39c5-4391-bafe-d17164250221-kube-api-access-gs4ld\") pod \"dns-default-pnmwc\" (UID: \"23665133-39c5-4391-bafe-d17164250221\") " pod="openshift-dns/dns-default-pnmwc" Apr 23 17:41:40.076664 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:40.076520 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23665133-39c5-4391-bafe-d17164250221-config-volume\") pod \"dns-default-pnmwc\" (UID: \"23665133-39c5-4391-bafe-d17164250221\") " pod="openshift-dns/dns-default-pnmwc" Apr 23 17:41:40.076664 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:40.076536 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/23665133-39c5-4391-bafe-d17164250221-metrics-tls\") pod \"dns-default-pnmwc\" (UID: \"23665133-39c5-4391-bafe-d17164250221\") " pod="openshift-dns/dns-default-pnmwc" Apr 23 17:41:40.076664 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:40.076579 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/23665133-39c5-4391-bafe-d17164250221-tmp-dir\") pod \"dns-default-pnmwc\" (UID: \"23665133-39c5-4391-bafe-d17164250221\") " pod="openshift-dns/dns-default-pnmwc" Apr 23 17:41:40.177769 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:40.177729 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gs4ld\" (UniqueName: \"kubernetes.io/projected/23665133-39c5-4391-bafe-d17164250221-kube-api-access-gs4ld\") pod \"dns-default-pnmwc\" (UID: \"23665133-39c5-4391-bafe-d17164250221\") " pod="openshift-dns/dns-default-pnmwc" Apr 23 17:41:40.178348 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:40.177829 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0a77136-ccae-4958-8ad5-7373ea79258f-cert\") pod \"ingress-canary-wfmxn\" (UID: \"c0a77136-ccae-4958-8ad5-7373ea79258f\") " pod="openshift-ingress-canary/ingress-canary-wfmxn" Apr 23 17:41:40.178348 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:40.177865 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd5gl\" (UniqueName: \"kubernetes.io/projected/c0a77136-ccae-4958-8ad5-7373ea79258f-kube-api-access-wd5gl\") pod \"ingress-canary-wfmxn\" (UID: \"c0a77136-ccae-4958-8ad5-7373ea79258f\") " pod="openshift-ingress-canary/ingress-canary-wfmxn" Apr 23 17:41:40.178348 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:40.177918 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/23665133-39c5-4391-bafe-d17164250221-config-volume\") pod \"dns-default-pnmwc\" (UID: \"23665133-39c5-4391-bafe-d17164250221\") " pod="openshift-dns/dns-default-pnmwc" Apr 23 17:41:40.178348 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:40.177942 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/23665133-39c5-4391-bafe-d17164250221-metrics-tls\") pod \"dns-default-pnmwc\" (UID: \"23665133-39c5-4391-bafe-d17164250221\") " pod="openshift-dns/dns-default-pnmwc" Apr 23 17:41:40.178348 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:40.177973 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/23665133-39c5-4391-bafe-d17164250221-tmp-dir\") pod \"dns-default-pnmwc\" (UID: \"23665133-39c5-4391-bafe-d17164250221\") " pod="openshift-dns/dns-default-pnmwc" Apr 23 17:41:40.178348 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:40.178181 2574 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 23 17:41:40.178348 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:40.178257 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23665133-39c5-4391-bafe-d17164250221-metrics-tls podName:23665133-39c5-4391-bafe-d17164250221 nodeName:}" failed. No retries permitted until 2026-04-23 17:41:40.678234787 +0000 UTC m=+35.099903724 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/23665133-39c5-4391-bafe-d17164250221-metrics-tls") pod "dns-default-pnmwc" (UID: "23665133-39c5-4391-bafe-d17164250221") : secret "dns-default-metrics-tls" not found Apr 23 17:41:40.178718 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:40.178372 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/23665133-39c5-4391-bafe-d17164250221-tmp-dir\") pod \"dns-default-pnmwc\" (UID: \"23665133-39c5-4391-bafe-d17164250221\") " pod="openshift-dns/dns-default-pnmwc" Apr 23 17:41:40.178718 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:40.178548 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23665133-39c5-4391-bafe-d17164250221-config-volume\") pod \"dns-default-pnmwc\" (UID: \"23665133-39c5-4391-bafe-d17164250221\") " pod="openshift-dns/dns-default-pnmwc" Apr 23 17:41:40.191419 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:40.191386 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs4ld\" (UniqueName: \"kubernetes.io/projected/23665133-39c5-4391-bafe-d17164250221-kube-api-access-gs4ld\") pod \"dns-default-pnmwc\" (UID: \"23665133-39c5-4391-bafe-d17164250221\") " pod="openshift-dns/dns-default-pnmwc" Apr 23 17:41:40.278805 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:40.278710 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0a77136-ccae-4958-8ad5-7373ea79258f-cert\") pod \"ingress-canary-wfmxn\" (UID: \"c0a77136-ccae-4958-8ad5-7373ea79258f\") " pod="openshift-ingress-canary/ingress-canary-wfmxn" Apr 23 17:41:40.278805 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:40.278754 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wd5gl\" (UniqueName: 
\"kubernetes.io/projected/c0a77136-ccae-4958-8ad5-7373ea79258f-kube-api-access-wd5gl\") pod \"ingress-canary-wfmxn\" (UID: \"c0a77136-ccae-4958-8ad5-7373ea79258f\") " pod="openshift-ingress-canary/ingress-canary-wfmxn" Apr 23 17:41:40.279041 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:40.278874 2574 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 23 17:41:40.279041 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:40.278935 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0a77136-ccae-4958-8ad5-7373ea79258f-cert podName:c0a77136-ccae-4958-8ad5-7373ea79258f nodeName:}" failed. No retries permitted until 2026-04-23 17:41:40.778919448 +0000 UTC m=+35.200588384 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c0a77136-ccae-4958-8ad5-7373ea79258f-cert") pod "ingress-canary-wfmxn" (UID: "c0a77136-ccae-4958-8ad5-7373ea79258f") : secret "canary-serving-cert" not found Apr 23 17:41:40.288958 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:40.288934 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd5gl\" (UniqueName: \"kubernetes.io/projected/c0a77136-ccae-4958-8ad5-7373ea79258f-kube-api-access-wd5gl\") pod \"ingress-canary-wfmxn\" (UID: \"c0a77136-ccae-4958-8ad5-7373ea79258f\") " pod="openshift-ingress-canary/ingress-canary-wfmxn" Apr 23 17:41:40.327151 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:40.327120 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5h5xl" event={"ID":"2e14deef-4985-48d4-a516-5ed2e89733cf","Type":"ContainerStarted","Data":"e4f60efec6b17f012921c22f55e3ae337301143c2ecbd1b0938a6f726bce015b"} Apr 23 17:41:40.680873 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:40.680837 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/23665133-39c5-4391-bafe-d17164250221-metrics-tls\") pod \"dns-default-pnmwc\" (UID: \"23665133-39c5-4391-bafe-d17164250221\") " pod="openshift-dns/dns-default-pnmwc" Apr 23 17:41:40.681030 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:40.680991 2574 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 23 17:41:40.681072 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:40.681052 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23665133-39c5-4391-bafe-d17164250221-metrics-tls podName:23665133-39c5-4391-bafe-d17164250221 nodeName:}" failed. No retries permitted until 2026-04-23 17:41:41.681035517 +0000 UTC m=+36.102704471 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/23665133-39c5-4391-bafe-d17164250221-metrics-tls") pod "dns-default-pnmwc" (UID: "23665133-39c5-4391-bafe-d17164250221") : secret "dns-default-metrics-tls" not found Apr 23 17:41:40.781474 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:40.781438 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0a77136-ccae-4958-8ad5-7373ea79258f-cert\") pod \"ingress-canary-wfmxn\" (UID: \"c0a77136-ccae-4958-8ad5-7373ea79258f\") " pod="openshift-ingress-canary/ingress-canary-wfmxn" Apr 23 17:41:40.781628 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:40.781583 2574 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 23 17:41:40.781693 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:40.781654 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0a77136-ccae-4958-8ad5-7373ea79258f-cert podName:c0a77136-ccae-4958-8ad5-7373ea79258f nodeName:}" failed. 
No retries permitted until 2026-04-23 17:41:41.781620433 +0000 UTC m=+36.203289371 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c0a77136-ccae-4958-8ad5-7373ea79258f-cert") pod "ingress-canary-wfmxn" (UID: "c0a77136-ccae-4958-8ad5-7373ea79258f") : secret "canary-serving-cert" not found Apr 23 17:41:40.982713 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:40.982624 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/71650021-930d-4f87-9886-b770243bb591-original-pull-secret\") pod \"global-pull-secret-syncer-pthrv\" (UID: \"71650021-930d-4f87-9886-b770243bb591\") " pod="kube-system/global-pull-secret-syncer-pthrv" Apr 23 17:41:40.982808 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:40.982772 2574 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 23 17:41:40.982846 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:40.982831 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71650021-930d-4f87-9886-b770243bb591-original-pull-secret podName:71650021-930d-4f87-9886-b770243bb591 nodeName:}" failed. No retries permitted until 2026-04-23 17:42:12.982814258 +0000 UTC m=+67.404483212 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/71650021-930d-4f87-9886-b770243bb591-original-pull-secret") pod "global-pull-secret-syncer-pthrv" (UID: "71650021-930d-4f87-9886-b770243bb591") : object "kube-system"/"original-pull-secret" not registered Apr 23 17:41:41.131577 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:41.131547 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-pthrv" Apr 23 17:41:41.131745 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:41.131549 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lz78w" Apr 23 17:41:41.131745 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:41.131549 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfhnv" Apr 23 17:41:41.135200 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:41.135160 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Apr 23 17:41:41.135200 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:41.135173 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Apr 23 17:41:41.135415 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:41.135225 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-qk5s8\"" Apr 23 17:41:41.135415 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:41.135169 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"original-pull-secret\"" Apr 23 17:41:41.135415 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:41.135256 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-kjp2q\"" Apr 23 17:41:41.136470 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:41.136453 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Apr 23 17:41:41.331778 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:41.331686 2574 generic.go:358] "Generic (PLEG): container finished" podID="2e14deef-4985-48d4-a516-5ed2e89733cf" 
containerID="e4f60efec6b17f012921c22f55e3ae337301143c2ecbd1b0938a6f726bce015b" exitCode=0 Apr 23 17:41:41.331778 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:41.331745 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5h5xl" event={"ID":"2e14deef-4985-48d4-a516-5ed2e89733cf","Type":"ContainerDied","Data":"e4f60efec6b17f012921c22f55e3ae337301143c2ecbd1b0938a6f726bce015b"} Apr 23 17:41:41.688842 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:41.688811 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/23665133-39c5-4391-bafe-d17164250221-metrics-tls\") pod \"dns-default-pnmwc\" (UID: \"23665133-39c5-4391-bafe-d17164250221\") " pod="openshift-dns/dns-default-pnmwc" Apr 23 17:41:41.689007 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:41.688946 2574 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 23 17:41:41.689007 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:41.689005 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23665133-39c5-4391-bafe-d17164250221-metrics-tls podName:23665133-39c5-4391-bafe-d17164250221 nodeName:}" failed. No retries permitted until 2026-04-23 17:41:43.688989006 +0000 UTC m=+38.110657943 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/23665133-39c5-4391-bafe-d17164250221-metrics-tls") pod "dns-default-pnmwc" (UID: "23665133-39c5-4391-bafe-d17164250221") : secret "dns-default-metrics-tls" not found Apr 23 17:41:41.790123 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:41.790088 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0a77136-ccae-4958-8ad5-7373ea79258f-cert\") pod \"ingress-canary-wfmxn\" (UID: \"c0a77136-ccae-4958-8ad5-7373ea79258f\") " pod="openshift-ingress-canary/ingress-canary-wfmxn" Apr 23 17:41:41.790301 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:41.790217 2574 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 23 17:41:41.790301 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:41.790272 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0a77136-ccae-4958-8ad5-7373ea79258f-cert podName:c0a77136-ccae-4958-8ad5-7373ea79258f nodeName:}" failed. No retries permitted until 2026-04-23 17:41:43.790258148 +0000 UTC m=+38.211927085 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c0a77136-ccae-4958-8ad5-7373ea79258f-cert") pod "ingress-canary-wfmxn" (UID: "c0a77136-ccae-4958-8ad5-7373ea79258f") : secret "canary-serving-cert" not found Apr 23 17:41:42.336687 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:42.336441 2574 generic.go:358] "Generic (PLEG): container finished" podID="2e14deef-4985-48d4-a516-5ed2e89733cf" containerID="a70a78e5286632c6e8b8f69cea91f8c1f5961338121f490b9f0ae8112def4fbb" exitCode=0 Apr 23 17:41:42.336687 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:42.336518 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5h5xl" event={"ID":"2e14deef-4985-48d4-a516-5ed2e89733cf","Type":"ContainerDied","Data":"a70a78e5286632c6e8b8f69cea91f8c1f5961338121f490b9f0ae8112def4fbb"} Apr 23 17:41:43.341141 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:43.341109 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5h5xl" event={"ID":"2e14deef-4985-48d4-a516-5ed2e89733cf","Type":"ContainerStarted","Data":"b99563a7991f363ca3a846654d5abcb89c9a706f363bae3701a7e5d4ab7acbad"} Apr 23 17:41:43.703042 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:43.702998 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/23665133-39c5-4391-bafe-d17164250221-metrics-tls\") pod \"dns-default-pnmwc\" (UID: \"23665133-39c5-4391-bafe-d17164250221\") " pod="openshift-dns/dns-default-pnmwc" Apr 23 17:41:43.703216 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:43.703113 2574 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 23 17:41:43.703216 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:43.703174 2574 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/23665133-39c5-4391-bafe-d17164250221-metrics-tls podName:23665133-39c5-4391-bafe-d17164250221 nodeName:}" failed. No retries permitted until 2026-04-23 17:41:47.703160929 +0000 UTC m=+42.124829866 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/23665133-39c5-4391-bafe-d17164250221-metrics-tls") pod "dns-default-pnmwc" (UID: "23665133-39c5-4391-bafe-d17164250221") : secret "dns-default-metrics-tls" not found Apr 23 17:41:43.803770 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:43.803732 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0a77136-ccae-4958-8ad5-7373ea79258f-cert\") pod \"ingress-canary-wfmxn\" (UID: \"c0a77136-ccae-4958-8ad5-7373ea79258f\") " pod="openshift-ingress-canary/ingress-canary-wfmxn" Apr 23 17:41:43.803896 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:43.803869 2574 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 23 17:41:43.803942 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:43.803931 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0a77136-ccae-4958-8ad5-7373ea79258f-cert podName:c0a77136-ccae-4958-8ad5-7373ea79258f nodeName:}" failed. No retries permitted until 2026-04-23 17:41:47.803913756 +0000 UTC m=+42.225582710 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c0a77136-ccae-4958-8ad5-7373ea79258f-cert") pod "ingress-canary-wfmxn" (UID: "c0a77136-ccae-4958-8ad5-7373ea79258f") : secret "canary-serving-cert" not found Apr 23 17:41:47.733017 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:47.732968 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/23665133-39c5-4391-bafe-d17164250221-metrics-tls\") pod \"dns-default-pnmwc\" (UID: \"23665133-39c5-4391-bafe-d17164250221\") " pod="openshift-dns/dns-default-pnmwc" Apr 23 17:41:47.733418 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:47.733136 2574 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 23 17:41:47.733418 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:47.733221 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23665133-39c5-4391-bafe-d17164250221-metrics-tls podName:23665133-39c5-4391-bafe-d17164250221 nodeName:}" failed. No retries permitted until 2026-04-23 17:41:55.733203383 +0000 UTC m=+50.154872338 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/23665133-39c5-4391-bafe-d17164250221-metrics-tls") pod "dns-default-pnmwc" (UID: "23665133-39c5-4391-bafe-d17164250221") : secret "dns-default-metrics-tls" not found Apr 23 17:41:47.834268 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:47.834221 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0a77136-ccae-4958-8ad5-7373ea79258f-cert\") pod \"ingress-canary-wfmxn\" (UID: \"c0a77136-ccae-4958-8ad5-7373ea79258f\") " pod="openshift-ingress-canary/ingress-canary-wfmxn" Apr 23 17:41:47.834386 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:47.834367 2574 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 23 17:41:47.834438 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:47.834433 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0a77136-ccae-4958-8ad5-7373ea79258f-cert podName:c0a77136-ccae-4958-8ad5-7373ea79258f nodeName:}" failed. No retries permitted until 2026-04-23 17:41:55.83441728 +0000 UTC m=+50.256086217 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c0a77136-ccae-4958-8ad5-7373ea79258f-cert") pod "ingress-canary-wfmxn" (UID: "c0a77136-ccae-4958-8ad5-7373ea79258f") : secret "canary-serving-cert" not found Apr 23 17:41:55.794397 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:55.794352 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/23665133-39c5-4391-bafe-d17164250221-metrics-tls\") pod \"dns-default-pnmwc\" (UID: \"23665133-39c5-4391-bafe-d17164250221\") " pod="openshift-dns/dns-default-pnmwc" Apr 23 17:41:55.794891 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:55.794508 2574 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 23 17:41:55.794891 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:55.794572 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23665133-39c5-4391-bafe-d17164250221-metrics-tls podName:23665133-39c5-4391-bafe-d17164250221 nodeName:}" failed. No retries permitted until 2026-04-23 17:42:11.794556339 +0000 UTC m=+66.216225276 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/23665133-39c5-4391-bafe-d17164250221-metrics-tls") pod "dns-default-pnmwc" (UID: "23665133-39c5-4391-bafe-d17164250221") : secret "dns-default-metrics-tls" not found Apr 23 17:41:55.894819 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:41:55.894778 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0a77136-ccae-4958-8ad5-7373ea79258f-cert\") pod \"ingress-canary-wfmxn\" (UID: \"c0a77136-ccae-4958-8ad5-7373ea79258f\") " pod="openshift-ingress-canary/ingress-canary-wfmxn" Apr 23 17:41:55.895019 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:55.894953 2574 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 23 17:41:55.895086 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:41:55.895035 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0a77136-ccae-4958-8ad5-7373ea79258f-cert podName:c0a77136-ccae-4958-8ad5-7373ea79258f nodeName:}" failed. No retries permitted until 2026-04-23 17:42:11.895013766 +0000 UTC m=+66.316682718 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c0a77136-ccae-4958-8ad5-7373ea79258f-cert") pod "ingress-canary-wfmxn" (UID: "c0a77136-ccae-4958-8ad5-7373ea79258f") : secret "canary-serving-cert" not found Apr 23 17:42:05.325443 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:42:05.325415 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-246wr" Apr 23 17:42:05.349438 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:42:05.349386 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-5h5xl" podStartSLOduration=27.944752382 podStartE2EDuration="59.349370732s" podCreationTimestamp="2026-04-23 17:41:06 +0000 UTC" firstStartedPulling="2026-04-23 17:41:08.724661893 +0000 UTC m=+3.146330834" lastFinishedPulling="2026-04-23 17:41:40.129280246 +0000 UTC m=+34.550949184" observedRunningTime="2026-04-23 17:41:43.366930767 +0000 UTC m=+37.788599732" watchObservedRunningTime="2026-04-23 17:42:05.349370732 +0000 UTC m=+59.771039690" Apr 23 17:42:11.810464 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:42:11.810422 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d52817f-2284-48d3-800c-a67ac0e0fe4b-metrics-certs\") pod \"network-metrics-daemon-mfhnv\" (UID: \"3d52817f-2284-48d3-800c-a67ac0e0fe4b\") " pod="openshift-multus/network-metrics-daemon-mfhnv" Apr 23 17:42:11.810957 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:42:11.810513 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/23665133-39c5-4391-bafe-d17164250221-metrics-tls\") pod \"dns-default-pnmwc\" (UID: \"23665133-39c5-4391-bafe-d17164250221\") " pod="openshift-dns/dns-default-pnmwc" Apr 23 17:42:11.810957 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:42:11.810617 2574 secret.go:189] 
Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 23 17:42:11.810957 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:42:11.810713 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23665133-39c5-4391-bafe-d17164250221-metrics-tls podName:23665133-39c5-4391-bafe-d17164250221 nodeName:}" failed. No retries permitted until 2026-04-23 17:42:43.810693798 +0000 UTC m=+98.232362749 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/23665133-39c5-4391-bafe-d17164250221-metrics-tls") pod "dns-default-pnmwc" (UID: "23665133-39c5-4391-bafe-d17164250221") : secret "dns-default-metrics-tls" not found Apr 23 17:42:11.813834 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:42:11.813817 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Apr 23 17:42:11.821517 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:42:11.821498 2574 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Apr 23 17:42:11.821567 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:42:11.821551 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d52817f-2284-48d3-800c-a67ac0e0fe4b-metrics-certs podName:3d52817f-2284-48d3-800c-a67ac0e0fe4b nodeName:}" failed. No retries permitted until 2026-04-23 17:43:15.821536989 +0000 UTC m=+130.243205933 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3d52817f-2284-48d3-800c-a67ac0e0fe4b-metrics-certs") pod "network-metrics-daemon-mfhnv" (UID: "3d52817f-2284-48d3-800c-a67ac0e0fe4b") : secret "metrics-daemon-secret" not found Apr 23 17:42:11.913938 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:42:11.911047 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0a77136-ccae-4958-8ad5-7373ea79258f-cert\") pod \"ingress-canary-wfmxn\" (UID: \"c0a77136-ccae-4958-8ad5-7373ea79258f\") " pod="openshift-ingress-canary/ingress-canary-wfmxn" Apr 23 17:42:11.913938 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:42:11.911441 2574 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 23 17:42:11.913938 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:42:11.911594 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0a77136-ccae-4958-8ad5-7373ea79258f-cert podName:c0a77136-ccae-4958-8ad5-7373ea79258f nodeName:}" failed. No retries permitted until 2026-04-23 17:42:43.911570875 +0000 UTC m=+98.333239830 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c0a77136-ccae-4958-8ad5-7373ea79258f-cert") pod "ingress-canary-wfmxn" (UID: "c0a77136-ccae-4958-8ad5-7373ea79258f") : secret "canary-serving-cert" not found Apr 23 17:42:12.011834 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:42:12.011784 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vmx9k\" (UniqueName: \"kubernetes.io/projected/c0594da3-a624-4d0d-9765-82537ca166c3-kube-api-access-vmx9k\") pod \"network-check-target-lz78w\" (UID: \"c0594da3-a624-4d0d-9765-82537ca166c3\") " pod="openshift-network-diagnostics/network-check-target-lz78w" Apr 23 17:42:12.015429 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:42:12.015409 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Apr 23 17:42:12.025354 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:42:12.025336 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Apr 23 17:42:12.036138 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:42:12.036106 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmx9k\" (UniqueName: \"kubernetes.io/projected/c0594da3-a624-4d0d-9765-82537ca166c3-kube-api-access-vmx9k\") pod \"network-check-target-lz78w\" (UID: \"c0594da3-a624-4d0d-9765-82537ca166c3\") " pod="openshift-network-diagnostics/network-check-target-lz78w" Apr 23 17:42:12.049212 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:42:12.049189 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-kjp2q\"" Apr 23 17:42:12.056838 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:42:12.056821 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lz78w" Apr 23 17:42:12.181395 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:42:12.181360 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-lz78w"] Apr 23 17:42:12.184292 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:42:12.184264 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0594da3_a624_4d0d_9765_82537ca166c3.slice/crio-18f726f3503078d78574d5ac6bae9d38bc3f091b4085d0f5a2c580e78b1e8be2 WatchSource:0}: Error finding container 18f726f3503078d78574d5ac6bae9d38bc3f091b4085d0f5a2c580e78b1e8be2: Status 404 returned error can't find the container with id 18f726f3503078d78574d5ac6bae9d38bc3f091b4085d0f5a2c580e78b1e8be2 Apr 23 17:42:12.396999 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:42:12.396965 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-lz78w" event={"ID":"c0594da3-a624-4d0d-9765-82537ca166c3","Type":"ContainerStarted","Data":"18f726f3503078d78574d5ac6bae9d38bc3f091b4085d0f5a2c580e78b1e8be2"} Apr 23 17:42:13.017573 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:42:13.017535 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/71650021-930d-4f87-9886-b770243bb591-original-pull-secret\") pod \"global-pull-secret-syncer-pthrv\" (UID: \"71650021-930d-4f87-9886-b770243bb591\") " pod="kube-system/global-pull-secret-syncer-pthrv" Apr 23 17:42:13.021419 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:42:13.021398 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"original-pull-secret\"" Apr 23 17:42:13.030647 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:42:13.030608 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/71650021-930d-4f87-9886-b770243bb591-original-pull-secret\") pod \"global-pull-secret-syncer-pthrv\" (UID: \"71650021-930d-4f87-9886-b770243bb591\") " pod="kube-system/global-pull-secret-syncer-pthrv" Apr 23 17:42:13.240763 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:42:13.240720 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-pthrv" Apr 23 17:42:13.392282 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:42:13.392247 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-pthrv"] Apr 23 17:42:13.395554 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:42:13.395524 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71650021_930d_4f87_9886_b770243bb591.slice/crio-cac761b1e0da45d712bd61ebb2aee765ff2e6c2c196943b3c66ec3b9516b29ae WatchSource:0}: Error finding container cac761b1e0da45d712bd61ebb2aee765ff2e6c2c196943b3c66ec3b9516b29ae: Status 404 returned error can't find the container with id cac761b1e0da45d712bd61ebb2aee765ff2e6c2c196943b3c66ec3b9516b29ae Apr 23 17:42:13.399992 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:42:13.399962 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-pthrv" event={"ID":"71650021-930d-4f87-9886-b770243bb591","Type":"ContainerStarted","Data":"cac761b1e0da45d712bd61ebb2aee765ff2e6c2c196943b3c66ec3b9516b29ae"} Apr 23 17:42:16.408745 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:42:16.408697 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-lz78w" event={"ID":"c0594da3-a624-4d0d-9765-82537ca166c3","Type":"ContainerStarted","Data":"ec2dbde0927059804ec72ed79e34505f5c44350903ebb838ef56e3fd07a5d8f2"} Apr 23 17:42:16.409336 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:42:16.409014 2574 
kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-lz78w" Apr 23 17:42:16.428969 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:42:16.428919 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-target-lz78w" podStartSLOduration=67.261602229 podStartE2EDuration="1m10.428902527s" podCreationTimestamp="2026-04-23 17:41:06 +0000 UTC" firstStartedPulling="2026-04-23 17:42:12.186109661 +0000 UTC m=+66.607778597" lastFinishedPulling="2026-04-23 17:42:15.353409955 +0000 UTC m=+69.775078895" observedRunningTime="2026-04-23 17:42:16.42822209 +0000 UTC m=+70.849891049" watchObservedRunningTime="2026-04-23 17:42:16.428902527 +0000 UTC m=+70.850571486" Apr 23 17:42:17.412259 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:42:17.412228 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-pthrv" event={"ID":"71650021-930d-4f87-9886-b770243bb591","Type":"ContainerStarted","Data":"8b9f5c00d49505a8f7cc3280b0c817ca4f9dcd2bbb77d835899a9221cc3cc546"} Apr 23 17:42:17.429149 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:42:17.429084 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/global-pull-secret-syncer-pthrv" podStartSLOduration=64.575247661 podStartE2EDuration="1m8.429067319s" podCreationTimestamp="2026-04-23 17:41:09 +0000 UTC" firstStartedPulling="2026-04-23 17:42:13.397856725 +0000 UTC m=+67.819525667" lastFinishedPulling="2026-04-23 17:42:17.251676388 +0000 UTC m=+71.673345325" observedRunningTime="2026-04-23 17:42:17.428674669 +0000 UTC m=+71.850343630" watchObservedRunningTime="2026-04-23 17:42:17.429067319 +0000 UTC m=+71.850736273" Apr 23 17:42:43.827024 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:42:43.826988 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/23665133-39c5-4391-bafe-d17164250221-metrics-tls\") pod \"dns-default-pnmwc\" (UID: \"23665133-39c5-4391-bafe-d17164250221\") " pod="openshift-dns/dns-default-pnmwc" Apr 23 17:42:43.827420 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:42:43.827101 2574 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 23 17:42:43.827420 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:42:43.827161 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23665133-39c5-4391-bafe-d17164250221-metrics-tls podName:23665133-39c5-4391-bafe-d17164250221 nodeName:}" failed. No retries permitted until 2026-04-23 17:43:47.827146705 +0000 UTC m=+162.248815642 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/23665133-39c5-4391-bafe-d17164250221-metrics-tls") pod "dns-default-pnmwc" (UID: "23665133-39c5-4391-bafe-d17164250221") : secret "dns-default-metrics-tls" not found Apr 23 17:42:43.928044 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:42:43.928012 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0a77136-ccae-4958-8ad5-7373ea79258f-cert\") pod \"ingress-canary-wfmxn\" (UID: \"c0a77136-ccae-4958-8ad5-7373ea79258f\") " pod="openshift-ingress-canary/ingress-canary-wfmxn" Apr 23 17:42:43.928127 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:42:43.928119 2574 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 23 17:42:43.928177 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:42:43.928168 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0a77136-ccae-4958-8ad5-7373ea79258f-cert podName:c0a77136-ccae-4958-8ad5-7373ea79258f nodeName:}" failed. 
No retries permitted until 2026-04-23 17:43:47.928155 +0000 UTC m=+162.349823937 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c0a77136-ccae-4958-8ad5-7373ea79258f-cert") pod "ingress-canary-wfmxn" (UID: "c0a77136-ccae-4958-8ad5-7373ea79258f") : secret "canary-serving-cert" not found Apr 23 17:42:47.414683 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:42:47.414652 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-lz78w" Apr 23 17:43:10.136047 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.136014 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-btbxj"] Apr 23 17:43:10.140815 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.140786 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-btbxj" Apr 23 17:43:10.144670 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.144650 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-storage-operator\"/\"kube-root-ca.crt\"" Apr 23 17:43:10.144892 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.144881 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-storage-operator\"/\"openshift-service-ca.crt\"" Apr 23 17:43:10.149198 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.149176 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-storage-operator\"/\"volume-data-source-validator-dockercfg-ndl6j\"" Apr 23 17:43:10.150183 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.150165 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-75ddc44-mjcts"] Apr 23 17:43:10.152986 ip-10-0-139-215 kubenswrapper[2574]: I0423 
17:43:10.152971 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-75ddc44-mjcts" Apr 23 17:43:10.158142 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.158078 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"default-ingress-cert\"" Apr 23 17:43:10.158408 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.158390 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Apr 23 17:43:10.158499 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.158440 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Apr 23 17:43:10.158587 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.158445 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Apr 23 17:43:10.158780 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.158763 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Apr 23 17:43:10.160100 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.160080 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-btbxj"] Apr 23 17:43:10.164220 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.164201 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Apr 23 17:43:10.164412 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.164401 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-bjtk7\"" Apr 23 17:43:10.176733 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.176706 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-ingress/router-default-75ddc44-mjcts"] Apr 23 17:43:10.211874 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.211845 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc44z\" (UniqueName: \"kubernetes.io/projected/49ce7885-097a-4c7c-8f10-cb427f7f72c3-kube-api-access-hc44z\") pod \"volume-data-source-validator-7c6cbb6c87-btbxj\" (UID: \"49ce7885-097a-4c7c-8f10-cb427f7f72c3\") " pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-btbxj" Apr 23 17:43:10.256654 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.256608 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-ksvzw"] Apr 23 17:43:10.259800 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.259778 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-ksvzw" Apr 23 17:43:10.262814 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.262773 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Apr 23 17:43:10.263209 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.263193 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Apr 23 17:43:10.263209 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.263202 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-4b6np\"" Apr 23 17:43:10.263349 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.263195 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Apr 23 17:43:10.270152 ip-10-0-139-215 
kubenswrapper[2574]: I0423 17:43:10.270133 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-ksvzw"] Apr 23 17:43:10.312744 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.312695 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c647dab7-a8c4-4b49-ab18-6a3500f88227-service-ca-bundle\") pod \"router-default-75ddc44-mjcts\" (UID: \"c647dab7-a8c4-4b49-ab18-6a3500f88227\") " pod="openshift-ingress/router-default-75ddc44-mjcts" Apr 23 17:43:10.312903 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.312747 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c647dab7-a8c4-4b49-ab18-6a3500f88227-default-certificate\") pod \"router-default-75ddc44-mjcts\" (UID: \"c647dab7-a8c4-4b49-ab18-6a3500f88227\") " pod="openshift-ingress/router-default-75ddc44-mjcts" Apr 23 17:43:10.312903 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.312866 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hc44z\" (UniqueName: \"kubernetes.io/projected/49ce7885-097a-4c7c-8f10-cb427f7f72c3-kube-api-access-hc44z\") pod \"volume-data-source-validator-7c6cbb6c87-btbxj\" (UID: \"49ce7885-097a-4c7c-8f10-cb427f7f72c3\") " pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-btbxj" Apr 23 17:43:10.312977 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.312902 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpr85\" (UniqueName: \"kubernetes.io/projected/c647dab7-a8c4-4b49-ab18-6a3500f88227-kube-api-access-jpr85\") pod \"router-default-75ddc44-mjcts\" (UID: \"c647dab7-a8c4-4b49-ab18-6a3500f88227\") " pod="openshift-ingress/router-default-75ddc44-mjcts" Apr 
23 17:43:10.312977 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.312937 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c647dab7-a8c4-4b49-ab18-6a3500f88227-metrics-certs\") pod \"router-default-75ddc44-mjcts\" (UID: \"c647dab7-a8c4-4b49-ab18-6a3500f88227\") " pod="openshift-ingress/router-default-75ddc44-mjcts" Apr 23 17:43:10.312977 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.312960 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c647dab7-a8c4-4b49-ab18-6a3500f88227-stats-auth\") pod \"router-default-75ddc44-mjcts\" (UID: \"c647dab7-a8c4-4b49-ab18-6a3500f88227\") " pod="openshift-ingress/router-default-75ddc44-mjcts" Apr 23 17:43:10.323090 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.323066 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc44z\" (UniqueName: \"kubernetes.io/projected/49ce7885-097a-4c7c-8f10-cb427f7f72c3-kube-api-access-hc44z\") pod \"volume-data-source-validator-7c6cbb6c87-btbxj\" (UID: \"49ce7885-097a-4c7c-8f10-cb427f7f72c3\") " pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-btbxj" Apr 23 17:43:10.353946 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.353916 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-hjgnq"] Apr 23 17:43:10.356932 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.356915 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-hjgnq" Apr 23 17:43:10.360656 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.360623 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Apr 23 17:43:10.360938 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.360921 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Apr 23 17:43:10.361262 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.361248 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-dm8lg\"" Apr 23 17:43:10.361330 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.361251 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Apr 23 17:43:10.361520 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.361505 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Apr 23 17:43:10.367218 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.367196 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-hjgnq"] Apr 23 17:43:10.413957 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.413869 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c5aff8e-0dd9-41ed-b97c-43dcdd3901da-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-ksvzw\" (UID: 
\"1c5aff8e-0dd9-41ed-b97c-43dcdd3901da\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-ksvzw" Apr 23 17:43:10.413957 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.413911 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jpr85\" (UniqueName: \"kubernetes.io/projected/c647dab7-a8c4-4b49-ab18-6a3500f88227-kube-api-access-jpr85\") pod \"router-default-75ddc44-mjcts\" (UID: \"c647dab7-a8c4-4b49-ab18-6a3500f88227\") " pod="openshift-ingress/router-default-75ddc44-mjcts" Apr 23 17:43:10.414162 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.413981 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c647dab7-a8c4-4b49-ab18-6a3500f88227-metrics-certs\") pod \"router-default-75ddc44-mjcts\" (UID: \"c647dab7-a8c4-4b49-ab18-6a3500f88227\") " pod="openshift-ingress/router-default-75ddc44-mjcts" Apr 23 17:43:10.414162 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.414021 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c647dab7-a8c4-4b49-ab18-6a3500f88227-stats-auth\") pod \"router-default-75ddc44-mjcts\" (UID: \"c647dab7-a8c4-4b49-ab18-6a3500f88227\") " pod="openshift-ingress/router-default-75ddc44-mjcts" Apr 23 17:43:10.414162 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.414046 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c647dab7-a8c4-4b49-ab18-6a3500f88227-service-ca-bundle\") pod \"router-default-75ddc44-mjcts\" (UID: \"c647dab7-a8c4-4b49-ab18-6a3500f88227\") " pod="openshift-ingress/router-default-75ddc44-mjcts" Apr 23 17:43:10.414162 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.414071 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-6mvt6\" (UniqueName: \"kubernetes.io/projected/1c5aff8e-0dd9-41ed-b97c-43dcdd3901da-kube-api-access-6mvt6\") pod \"cluster-samples-operator-6dc5bdb6b4-ksvzw\" (UID: \"1c5aff8e-0dd9-41ed-b97c-43dcdd3901da\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-ksvzw" Apr 23 17:43:10.414305 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:43:10.414172 2574 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found Apr 23 17:43:10.414305 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.414188 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c647dab7-a8c4-4b49-ab18-6a3500f88227-default-certificate\") pod \"router-default-75ddc44-mjcts\" (UID: \"c647dab7-a8c4-4b49-ab18-6a3500f88227\") " pod="openshift-ingress/router-default-75ddc44-mjcts" Apr 23 17:43:10.414305 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:43:10.414290 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c647dab7-a8c4-4b49-ab18-6a3500f88227-service-ca-bundle podName:c647dab7-a8c4-4b49-ab18-6a3500f88227 nodeName:}" failed. No retries permitted until 2026-04-23 17:43:10.914268665 +0000 UTC m=+125.335937608 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c647dab7-a8c4-4b49-ab18-6a3500f88227-service-ca-bundle") pod "router-default-75ddc44-mjcts" (UID: "c647dab7-a8c4-4b49-ab18-6a3500f88227") : configmap references non-existent config key: service-ca.crt Apr 23 17:43:10.414416 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:43:10.414311 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c647dab7-a8c4-4b49-ab18-6a3500f88227-metrics-certs podName:c647dab7-a8c4-4b49-ab18-6a3500f88227 nodeName:}" failed. 
No retries permitted until 2026-04-23 17:43:10.914300263 +0000 UTC m=+125.335969200 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c647dab7-a8c4-4b49-ab18-6a3500f88227-metrics-certs") pod "router-default-75ddc44-mjcts" (UID: "c647dab7-a8c4-4b49-ab18-6a3500f88227") : secret "router-metrics-certs-default" not found Apr 23 17:43:10.416652 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.416605 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c647dab7-a8c4-4b49-ab18-6a3500f88227-default-certificate\") pod \"router-default-75ddc44-mjcts\" (UID: \"c647dab7-a8c4-4b49-ab18-6a3500f88227\") " pod="openshift-ingress/router-default-75ddc44-mjcts" Apr 23 17:43:10.416652 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.416621 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c647dab7-a8c4-4b49-ab18-6a3500f88227-stats-auth\") pod \"router-default-75ddc44-mjcts\" (UID: \"c647dab7-a8c4-4b49-ab18-6a3500f88227\") " pod="openshift-ingress/router-default-75ddc44-mjcts" Apr 23 17:43:10.428072 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.428054 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpr85\" (UniqueName: \"kubernetes.io/projected/c647dab7-a8c4-4b49-ab18-6a3500f88227-kube-api-access-jpr85\") pod \"router-default-75ddc44-mjcts\" (UID: \"c647dab7-a8c4-4b49-ab18-6a3500f88227\") " pod="openshift-ingress/router-default-75ddc44-mjcts" Apr 23 17:43:10.449651 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.449607 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-btbxj" Apr 23 17:43:10.514900 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.514717 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8eeed746-7c2a-49ef-98bd-977fa1136b3c-config\") pod \"kube-storage-version-migrator-operator-6769c5d45-hjgnq\" (UID: \"8eeed746-7c2a-49ef-98bd-977fa1136b3c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-hjgnq" Apr 23 17:43:10.514900 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.514758 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c5aff8e-0dd9-41ed-b97c-43dcdd3901da-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-ksvzw\" (UID: \"1c5aff8e-0dd9-41ed-b97c-43dcdd3901da\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-ksvzw" Apr 23 17:43:10.514900 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.514792 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snzww\" (UniqueName: \"kubernetes.io/projected/8eeed746-7c2a-49ef-98bd-977fa1136b3c-kube-api-access-snzww\") pod \"kube-storage-version-migrator-operator-6769c5d45-hjgnq\" (UID: \"8eeed746-7c2a-49ef-98bd-977fa1136b3c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-hjgnq" Apr 23 17:43:10.514900 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.514868 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8eeed746-7c2a-49ef-98bd-977fa1136b3c-serving-cert\") pod \"kube-storage-version-migrator-operator-6769c5d45-hjgnq\" (UID: 
\"8eeed746-7c2a-49ef-98bd-977fa1136b3c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-hjgnq" Apr 23 17:43:10.514900 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:43:10.514878 2574 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 23 17:43:10.514900 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.514905 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6mvt6\" (UniqueName: \"kubernetes.io/projected/1c5aff8e-0dd9-41ed-b97c-43dcdd3901da-kube-api-access-6mvt6\") pod \"cluster-samples-operator-6dc5bdb6b4-ksvzw\" (UID: \"1c5aff8e-0dd9-41ed-b97c-43dcdd3901da\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-ksvzw" Apr 23 17:43:10.515244 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:43:10.514938 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c5aff8e-0dd9-41ed-b97c-43dcdd3901da-samples-operator-tls podName:1c5aff8e-0dd9-41ed-b97c-43dcdd3901da nodeName:}" failed. No retries permitted until 2026-04-23 17:43:11.014919592 +0000 UTC m=+125.436588530 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/1c5aff8e-0dd9-41ed-b97c-43dcdd3901da-samples-operator-tls") pod "cluster-samples-operator-6dc5bdb6b4-ksvzw" (UID: "1c5aff8e-0dd9-41ed-b97c-43dcdd3901da") : secret "samples-operator-tls" not found Apr 23 17:43:10.526820 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.526781 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mvt6\" (UniqueName: \"kubernetes.io/projected/1c5aff8e-0dd9-41ed-b97c-43dcdd3901da-kube-api-access-6mvt6\") pod \"cluster-samples-operator-6dc5bdb6b4-ksvzw\" (UID: \"1c5aff8e-0dd9-41ed-b97c-43dcdd3901da\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-ksvzw" Apr 23 17:43:10.564947 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.564914 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-btbxj"] Apr 23 17:43:10.567898 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:43:10.567868 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49ce7885_097a_4c7c_8f10_cb427f7f72c3.slice/crio-e0e705ac1f452c6c73368b5c756cfab485fecbc2782176a69d47977a165afc18 WatchSource:0}: Error finding container e0e705ac1f452c6c73368b5c756cfab485fecbc2782176a69d47977a165afc18: Status 404 returned error can't find the container with id e0e705ac1f452c6c73368b5c756cfab485fecbc2782176a69d47977a165afc18 Apr 23 17:43:10.616349 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.616318 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8eeed746-7c2a-49ef-98bd-977fa1136b3c-serving-cert\") pod \"kube-storage-version-migrator-operator-6769c5d45-hjgnq\" (UID: \"8eeed746-7c2a-49ef-98bd-977fa1136b3c\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-hjgnq" Apr 23 17:43:10.616506 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.616393 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8eeed746-7c2a-49ef-98bd-977fa1136b3c-config\") pod \"kube-storage-version-migrator-operator-6769c5d45-hjgnq\" (UID: \"8eeed746-7c2a-49ef-98bd-977fa1136b3c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-hjgnq" Apr 23 17:43:10.616506 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.616423 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-snzww\" (UniqueName: \"kubernetes.io/projected/8eeed746-7c2a-49ef-98bd-977fa1136b3c-kube-api-access-snzww\") pod \"kube-storage-version-migrator-operator-6769c5d45-hjgnq\" (UID: \"8eeed746-7c2a-49ef-98bd-977fa1136b3c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-hjgnq" Apr 23 17:43:10.617004 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.616982 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8eeed746-7c2a-49ef-98bd-977fa1136b3c-config\") pod \"kube-storage-version-migrator-operator-6769c5d45-hjgnq\" (UID: \"8eeed746-7c2a-49ef-98bd-977fa1136b3c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-hjgnq" Apr 23 17:43:10.618842 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.618826 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8eeed746-7c2a-49ef-98bd-977fa1136b3c-serving-cert\") pod \"kube-storage-version-migrator-operator-6769c5d45-hjgnq\" (UID: \"8eeed746-7c2a-49ef-98bd-977fa1136b3c\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-hjgnq" Apr 23 17:43:10.630254 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.630232 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-snzww\" (UniqueName: \"kubernetes.io/projected/8eeed746-7c2a-49ef-98bd-977fa1136b3c-kube-api-access-snzww\") pod \"kube-storage-version-migrator-operator-6769c5d45-hjgnq\" (UID: \"8eeed746-7c2a-49ef-98bd-977fa1136b3c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-hjgnq" Apr 23 17:43:10.665927 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.665876 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-hjgnq" Apr 23 17:43:10.780317 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.780284 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-hjgnq"] Apr 23 17:43:10.783304 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:43:10.783278 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8eeed746_7c2a_49ef_98bd_977fa1136b3c.slice/crio-1dbca9052fc7068259d7cfefab603c0d1eb70aa9661550beafea760ccafd1982 WatchSource:0}: Error finding container 1dbca9052fc7068259d7cfefab603c0d1eb70aa9661550beafea760ccafd1982: Status 404 returned error can't find the container with id 1dbca9052fc7068259d7cfefab603c0d1eb70aa9661550beafea760ccafd1982 Apr 23 17:43:10.918957 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.918856 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c647dab7-a8c4-4b49-ab18-6a3500f88227-metrics-certs\") pod \"router-default-75ddc44-mjcts\" 
(UID: \"c647dab7-a8c4-4b49-ab18-6a3500f88227\") " pod="openshift-ingress/router-default-75ddc44-mjcts" Apr 23 17:43:10.918957 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:10.918916 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c647dab7-a8c4-4b49-ab18-6a3500f88227-service-ca-bundle\") pod \"router-default-75ddc44-mjcts\" (UID: \"c647dab7-a8c4-4b49-ab18-6a3500f88227\") " pod="openshift-ingress/router-default-75ddc44-mjcts" Apr 23 17:43:10.919155 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:43:10.919006 2574 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found Apr 23 17:43:10.919155 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:43:10.919035 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c647dab7-a8c4-4b49-ab18-6a3500f88227-service-ca-bundle podName:c647dab7-a8c4-4b49-ab18-6a3500f88227 nodeName:}" failed. No retries permitted until 2026-04-23 17:43:11.919021085 +0000 UTC m=+126.340690022 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c647dab7-a8c4-4b49-ab18-6a3500f88227-service-ca-bundle") pod "router-default-75ddc44-mjcts" (UID: "c647dab7-a8c4-4b49-ab18-6a3500f88227") : configmap references non-existent config key: service-ca.crt Apr 23 17:43:10.919155 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:43:10.919057 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c647dab7-a8c4-4b49-ab18-6a3500f88227-metrics-certs podName:c647dab7-a8c4-4b49-ab18-6a3500f88227 nodeName:}" failed. No retries permitted until 2026-04-23 17:43:11.919044425 +0000 UTC m=+126.340713361 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c647dab7-a8c4-4b49-ab18-6a3500f88227-metrics-certs") pod "router-default-75ddc44-mjcts" (UID: "c647dab7-a8c4-4b49-ab18-6a3500f88227") : secret "router-metrics-certs-default" not found Apr 23 17:43:11.020055 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:11.020022 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c5aff8e-0dd9-41ed-b97c-43dcdd3901da-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-ksvzw\" (UID: \"1c5aff8e-0dd9-41ed-b97c-43dcdd3901da\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-ksvzw" Apr 23 17:43:11.020229 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:43:11.020181 2574 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 23 17:43:11.020282 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:43:11.020244 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c5aff8e-0dd9-41ed-b97c-43dcdd3901da-samples-operator-tls podName:1c5aff8e-0dd9-41ed-b97c-43dcdd3901da nodeName:}" failed. No retries permitted until 2026-04-23 17:43:12.020228416 +0000 UTC m=+126.441897352 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/1c5aff8e-0dd9-41ed-b97c-43dcdd3901da-samples-operator-tls") pod "cluster-samples-operator-6dc5bdb6b4-ksvzw" (UID: "1c5aff8e-0dd9-41ed-b97c-43dcdd3901da") : secret "samples-operator-tls" not found Apr 23 17:43:11.510690 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:11.510620 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-btbxj" event={"ID":"49ce7885-097a-4c7c-8f10-cb427f7f72c3","Type":"ContainerStarted","Data":"e0e705ac1f452c6c73368b5c756cfab485fecbc2782176a69d47977a165afc18"} Apr 23 17:43:11.511863 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:11.511833 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-hjgnq" event={"ID":"8eeed746-7c2a-49ef-98bd-977fa1136b3c","Type":"ContainerStarted","Data":"1dbca9052fc7068259d7cfefab603c0d1eb70aa9661550beafea760ccafd1982"} Apr 23 17:43:11.927291 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:11.927252 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c647dab7-a8c4-4b49-ab18-6a3500f88227-metrics-certs\") pod \"router-default-75ddc44-mjcts\" (UID: \"c647dab7-a8c4-4b49-ab18-6a3500f88227\") " pod="openshift-ingress/router-default-75ddc44-mjcts" Apr 23 17:43:11.927475 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:11.927319 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c647dab7-a8c4-4b49-ab18-6a3500f88227-service-ca-bundle\") pod \"router-default-75ddc44-mjcts\" (UID: \"c647dab7-a8c4-4b49-ab18-6a3500f88227\") " pod="openshift-ingress/router-default-75ddc44-mjcts" Apr 23 17:43:11.927475 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:43:11.927415 2574 
secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found Apr 23 17:43:11.927578 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:43:11.927487 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c647dab7-a8c4-4b49-ab18-6a3500f88227-service-ca-bundle podName:c647dab7-a8c4-4b49-ab18-6a3500f88227 nodeName:}" failed. No retries permitted until 2026-04-23 17:43:13.927468961 +0000 UTC m=+128.349137903 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c647dab7-a8c4-4b49-ab18-6a3500f88227-service-ca-bundle") pod "router-default-75ddc44-mjcts" (UID: "c647dab7-a8c4-4b49-ab18-6a3500f88227") : configmap references non-existent config key: service-ca.crt Apr 23 17:43:11.927663 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:43:11.927596 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c647dab7-a8c4-4b49-ab18-6a3500f88227-metrics-certs podName:c647dab7-a8c4-4b49-ab18-6a3500f88227 nodeName:}" failed. No retries permitted until 2026-04-23 17:43:13.927557003 +0000 UTC m=+128.349225948 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c647dab7-a8c4-4b49-ab18-6a3500f88227-metrics-certs") pod "router-default-75ddc44-mjcts" (UID: "c647dab7-a8c4-4b49-ab18-6a3500f88227") : secret "router-metrics-certs-default" not found Apr 23 17:43:12.028299 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:12.028256 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c5aff8e-0dd9-41ed-b97c-43dcdd3901da-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-ksvzw\" (UID: \"1c5aff8e-0dd9-41ed-b97c-43dcdd3901da\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-ksvzw" Apr 23 17:43:12.028485 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:43:12.028440 2574 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 23 17:43:12.028564 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:43:12.028523 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c5aff8e-0dd9-41ed-b97c-43dcdd3901da-samples-operator-tls podName:1c5aff8e-0dd9-41ed-b97c-43dcdd3901da nodeName:}" failed. No retries permitted until 2026-04-23 17:43:14.028501581 +0000 UTC m=+128.450170538 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/1c5aff8e-0dd9-41ed-b97c-43dcdd3901da-samples-operator-tls") pod "cluster-samples-operator-6dc5bdb6b4-ksvzw" (UID: "1c5aff8e-0dd9-41ed-b97c-43dcdd3901da") : secret "samples-operator-tls" not found Apr 23 17:43:12.515155 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:12.515126 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-btbxj" event={"ID":"49ce7885-097a-4c7c-8f10-cb427f7f72c3","Type":"ContainerStarted","Data":"604b4909a47037edf5322d0acaabb3070165dd114b2883065ec0227aab8ebda6"} Apr 23 17:43:12.534043 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:12.533993 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-btbxj" podStartSLOduration=1.302760105 podStartE2EDuration="2.533977427s" podCreationTimestamp="2026-04-23 17:43:10 +0000 UTC" firstStartedPulling="2026-04-23 17:43:10.569662382 +0000 UTC m=+124.991331319" lastFinishedPulling="2026-04-23 17:43:11.800879693 +0000 UTC m=+126.222548641" observedRunningTime="2026-04-23 17:43:12.533556639 +0000 UTC m=+126.955225599" watchObservedRunningTime="2026-04-23 17:43:12.533977427 +0000 UTC m=+126.955646384" Apr 23 17:43:13.518342 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:13.518304 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-hjgnq" event={"ID":"8eeed746-7c2a-49ef-98bd-977fa1136b3c","Type":"ContainerStarted","Data":"dbe60cafa77eea912a4c04caaa12d15cc2f07813a8a68fe2efa9164e94fae9e4"} Apr 23 17:43:13.542175 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:13.542117 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-hjgnq" podStartSLOduration=1.863115147 podStartE2EDuration="3.542097942s" podCreationTimestamp="2026-04-23 17:43:10 +0000 UTC" firstStartedPulling="2026-04-23 17:43:10.785198347 +0000 UTC m=+125.206867284" lastFinishedPulling="2026-04-23 17:43:12.464181141 +0000 UTC m=+126.885850079" observedRunningTime="2026-04-23 17:43:13.541759736 +0000 UTC m=+127.963428694" watchObservedRunningTime="2026-04-23 17:43:13.542097942 +0000 UTC m=+127.963766902"
Apr 23 17:43:13.944531 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:13.944490 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c647dab7-a8c4-4b49-ab18-6a3500f88227-metrics-certs\") pod \"router-default-75ddc44-mjcts\" (UID: \"c647dab7-a8c4-4b49-ab18-6a3500f88227\") " pod="openshift-ingress/router-default-75ddc44-mjcts"
Apr 23 17:43:13.944773 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:13.944542 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c647dab7-a8c4-4b49-ab18-6a3500f88227-service-ca-bundle\") pod \"router-default-75ddc44-mjcts\" (UID: \"c647dab7-a8c4-4b49-ab18-6a3500f88227\") " pod="openshift-ingress/router-default-75ddc44-mjcts"
Apr 23 17:43:13.944773 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:43:13.944688 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c647dab7-a8c4-4b49-ab18-6a3500f88227-service-ca-bundle podName:c647dab7-a8c4-4b49-ab18-6a3500f88227 nodeName:}" failed. No retries permitted until 2026-04-23 17:43:17.944670676 +0000 UTC m=+132.366339636 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c647dab7-a8c4-4b49-ab18-6a3500f88227-service-ca-bundle") pod "router-default-75ddc44-mjcts" (UID: "c647dab7-a8c4-4b49-ab18-6a3500f88227") : configmap references non-existent config key: service-ca.crt
Apr 23 17:43:13.944773 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:43:13.944708 2574 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found
Apr 23 17:43:13.944907 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:43:13.944780 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c647dab7-a8c4-4b49-ab18-6a3500f88227-metrics-certs podName:c647dab7-a8c4-4b49-ab18-6a3500f88227 nodeName:}" failed. No retries permitted until 2026-04-23 17:43:17.944763561 +0000 UTC m=+132.366432498 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c647dab7-a8c4-4b49-ab18-6a3500f88227-metrics-certs") pod "router-default-75ddc44-mjcts" (UID: "c647dab7-a8c4-4b49-ab18-6a3500f88227") : secret "router-metrics-certs-default" not found
Apr 23 17:43:14.045255 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:14.045219 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c5aff8e-0dd9-41ed-b97c-43dcdd3901da-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-ksvzw\" (UID: \"1c5aff8e-0dd9-41ed-b97c-43dcdd3901da\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-ksvzw"
Apr 23 17:43:14.045408 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:43:14.045338 2574 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found
Apr 23 17:43:14.045408 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:43:14.045395 2574 nestedpendingoperations.go:348]
Operation for "{volumeName:kubernetes.io/secret/1c5aff8e-0dd9-41ed-b97c-43dcdd3901da-samples-operator-tls podName:1c5aff8e-0dd9-41ed-b97c-43dcdd3901da nodeName:}" failed. No retries permitted until 2026-04-23 17:43:18.045377681 +0000 UTC m=+132.467046618 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/1c5aff8e-0dd9-41ed-b97c-43dcdd3901da-samples-operator-tls") pod "cluster-samples-operator-6dc5bdb6b4-ksvzw" (UID: "1c5aff8e-0dd9-41ed-b97c-43dcdd3901da") : secret "samples-operator-tls" not found
Apr 23 17:43:15.857206 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:15.857158 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d52817f-2284-48d3-800c-a67ac0e0fe4b-metrics-certs\") pod \"network-metrics-daemon-mfhnv\" (UID: \"3d52817f-2284-48d3-800c-a67ac0e0fe4b\") " pod="openshift-multus/network-metrics-daemon-mfhnv"
Apr 23 17:43:15.857690 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:43:15.857304 2574 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Apr 23 17:43:15.857690 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:43:15.857367 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d52817f-2284-48d3-800c-a67ac0e0fe4b-metrics-certs podName:3d52817f-2284-48d3-800c-a67ac0e0fe4b nodeName:}" failed. No retries permitted until 2026-04-23 17:45:17.857350176 +0000 UTC m=+252.279019112 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3d52817f-2284-48d3-800c-a67ac0e0fe4b-metrics-certs") pod "network-metrics-daemon-mfhnv" (UID: "3d52817f-2284-48d3-800c-a67ac0e0fe4b") : secret "metrics-daemon-secret" not found
Apr 23 17:43:17.061958 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:17.061924 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-865cb79987-j7pzm"]
Apr 23 17:43:17.064565 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:17.064546 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-865cb79987-j7pzm"
Apr 23 17:43:17.067536 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:17.067515 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\""
Apr 23 17:43:17.067665 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:17.067624 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\""
Apr 23 17:43:17.067772 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:17.067757 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Apr 23 17:43:17.069014 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:17.068994 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\""
Apr 23 17:43:17.069143 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:17.069027 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-fbhfg\""
Apr 23 17:43:17.073514 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:17.073493 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-865cb79987-j7pzm"]
Apr 23 17:43:17.167914 ip-10-0-139-215 kubenswrapper[2574]: I0423
17:43:17.167873 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/504e1c86-14ef-42b0-8ac6-11fcdcb861ac-signing-key\") pod \"service-ca-865cb79987-j7pzm\" (UID: \"504e1c86-14ef-42b0-8ac6-11fcdcb861ac\") " pod="openshift-service-ca/service-ca-865cb79987-j7pzm"
Apr 23 17:43:17.168094 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:17.167977 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/504e1c86-14ef-42b0-8ac6-11fcdcb861ac-signing-cabundle\") pod \"service-ca-865cb79987-j7pzm\" (UID: \"504e1c86-14ef-42b0-8ac6-11fcdcb861ac\") " pod="openshift-service-ca/service-ca-865cb79987-j7pzm"
Apr 23 17:43:17.168094 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:17.167997 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpr9n\" (UniqueName: \"kubernetes.io/projected/504e1c86-14ef-42b0-8ac6-11fcdcb861ac-kube-api-access-gpr9n\") pod \"service-ca-865cb79987-j7pzm\" (UID: \"504e1c86-14ef-42b0-8ac6-11fcdcb861ac\") " pod="openshift-service-ca/service-ca-865cb79987-j7pzm"
Apr 23 17:43:17.268605 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:17.268553 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/504e1c86-14ef-42b0-8ac6-11fcdcb861ac-signing-key\") pod \"service-ca-865cb79987-j7pzm\" (UID: \"504e1c86-14ef-42b0-8ac6-11fcdcb861ac\") " pod="openshift-service-ca/service-ca-865cb79987-j7pzm"
Apr 23 17:43:17.268836 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:17.268697 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/504e1c86-14ef-42b0-8ac6-11fcdcb861ac-signing-cabundle\") pod \"service-ca-865cb79987-j7pzm\" (UID:
\"504e1c86-14ef-42b0-8ac6-11fcdcb861ac\") " pod="openshift-service-ca/service-ca-865cb79987-j7pzm"
Apr 23 17:43:17.268836 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:17.268716 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gpr9n\" (UniqueName: \"kubernetes.io/projected/504e1c86-14ef-42b0-8ac6-11fcdcb861ac-kube-api-access-gpr9n\") pod \"service-ca-865cb79987-j7pzm\" (UID: \"504e1c86-14ef-42b0-8ac6-11fcdcb861ac\") " pod="openshift-service-ca/service-ca-865cb79987-j7pzm"
Apr 23 17:43:17.269423 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:17.269400 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/504e1c86-14ef-42b0-8ac6-11fcdcb861ac-signing-cabundle\") pod \"service-ca-865cb79987-j7pzm\" (UID: \"504e1c86-14ef-42b0-8ac6-11fcdcb861ac\") " pod="openshift-service-ca/service-ca-865cb79987-j7pzm"
Apr 23 17:43:17.271193 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:17.271170 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/504e1c86-14ef-42b0-8ac6-11fcdcb861ac-signing-key\") pod \"service-ca-865cb79987-j7pzm\" (UID: \"504e1c86-14ef-42b0-8ac6-11fcdcb861ac\") " pod="openshift-service-ca/service-ca-865cb79987-j7pzm"
Apr 23 17:43:17.278023 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:17.277998 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpr9n\" (UniqueName: \"kubernetes.io/projected/504e1c86-14ef-42b0-8ac6-11fcdcb861ac-kube-api-access-gpr9n\") pod \"service-ca-865cb79987-j7pzm\" (UID: \"504e1c86-14ef-42b0-8ac6-11fcdcb861ac\") " pod="openshift-service-ca/service-ca-865cb79987-j7pzm"
Apr 23 17:43:17.344793 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:17.344713 2574 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-dns_node-resolver-l747m_828447ca-91a9-49c8-a1b8-50a5cfbe0580/dns-node-resolver/0.log"
Apr 23 17:43:17.374032 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:17.374005 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-865cb79987-j7pzm"
Apr 23 17:43:17.486380 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:17.486350 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-865cb79987-j7pzm"]
Apr 23 17:43:17.488959 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:43:17.488932 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod504e1c86_14ef_42b0_8ac6_11fcdcb861ac.slice/crio-ba38b2408711b3255197d1db978727dcdaaa2db3346f8063df1e5debced2a062 WatchSource:0}: Error finding container ba38b2408711b3255197d1db978727dcdaaa2db3346f8063df1e5debced2a062: Status 404 returned error can't find the container with id ba38b2408711b3255197d1db978727dcdaaa2db3346f8063df1e5debced2a062
Apr 23 17:43:17.526883 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:17.526844 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-865cb79987-j7pzm" event={"ID":"504e1c86-14ef-42b0-8ac6-11fcdcb861ac","Type":"ContainerStarted","Data":"ba38b2408711b3255197d1db978727dcdaaa2db3346f8063df1e5debced2a062"}
Apr 23 17:43:17.973981 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:17.973946 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c647dab7-a8c4-4b49-ab18-6a3500f88227-metrics-certs\") pod \"router-default-75ddc44-mjcts\" (UID: \"c647dab7-a8c4-4b49-ab18-6a3500f88227\") " pod="openshift-ingress/router-default-75ddc44-mjcts"
Apr 23 17:43:17.974153 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:17.973995 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c647dab7-a8c4-4b49-ab18-6a3500f88227-service-ca-bundle\") pod \"router-default-75ddc44-mjcts\" (UID: \"c647dab7-a8c4-4b49-ab18-6a3500f88227\") " pod="openshift-ingress/router-default-75ddc44-mjcts"
Apr 23 17:43:17.974153 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:43:17.974126 2574 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found
Apr 23 17:43:17.974228 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:43:17.974141 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c647dab7-a8c4-4b49-ab18-6a3500f88227-service-ca-bundle podName:c647dab7-a8c4-4b49-ab18-6a3500f88227 nodeName:}" failed. No retries permitted until 2026-04-23 17:43:25.974124061 +0000 UTC m=+140.395793001 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c647dab7-a8c4-4b49-ab18-6a3500f88227-service-ca-bundle") pod "router-default-75ddc44-mjcts" (UID: "c647dab7-a8c4-4b49-ab18-6a3500f88227") : configmap references non-existent config key: service-ca.crt
Apr 23 17:43:17.974228 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:43:17.974203 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c647dab7-a8c4-4b49-ab18-6a3500f88227-metrics-certs podName:c647dab7-a8c4-4b49-ab18-6a3500f88227 nodeName:}" failed. No retries permitted until 2026-04-23 17:43:25.97418351 +0000 UTC m=+140.395852447 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c647dab7-a8c4-4b49-ab18-6a3500f88227-metrics-certs") pod "router-default-75ddc44-mjcts" (UID: "c647dab7-a8c4-4b49-ab18-6a3500f88227") : secret "router-metrics-certs-default" not found
Apr 23 17:43:18.075000 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:18.074962 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c5aff8e-0dd9-41ed-b97c-43dcdd3901da-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-ksvzw\" (UID: \"1c5aff8e-0dd9-41ed-b97c-43dcdd3901da\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-ksvzw"
Apr 23 17:43:18.075516 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:43:18.075111 2574 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found
Apr 23 17:43:18.075516 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:43:18.075187 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c5aff8e-0dd9-41ed-b97c-43dcdd3901da-samples-operator-tls podName:1c5aff8e-0dd9-41ed-b97c-43dcdd3901da nodeName:}" failed. No retries permitted until 2026-04-23 17:43:26.075164908 +0000 UTC m=+140.496833845 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/1c5aff8e-0dd9-41ed-b97c-43dcdd3901da-samples-operator-tls") pod "cluster-samples-operator-6dc5bdb6b4-ksvzw" (UID: "1c5aff8e-0dd9-41ed-b97c-43dcdd3901da") : secret "samples-operator-tls" not found
Apr 23 17:43:18.145225 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:18.145199 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-76svx_59053c21-2759-4fb0-86d0-fd32dd514204/node-ca/0.log"
Apr 23 17:43:19.532300 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:19.532265 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-865cb79987-j7pzm" event={"ID":"504e1c86-14ef-42b0-8ac6-11fcdcb861ac","Type":"ContainerStarted","Data":"02d8bd90625f8d004d9e78369d1b855a4df208dc4a61253752b72af331e92f35"}
Apr 23 17:43:19.561345 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:19.561292 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-865cb79987-j7pzm" podStartSLOduration=1.012041932 podStartE2EDuration="2.561272767s" podCreationTimestamp="2026-04-23 17:43:17 +0000 UTC" firstStartedPulling="2026-04-23 17:43:17.490824678 +0000 UTC m=+131.912493616" lastFinishedPulling="2026-04-23 17:43:19.040055513 +0000 UTC m=+133.461724451" observedRunningTime="2026-04-23 17:43:19.561138219 +0000 UTC m=+133.982807179" watchObservedRunningTime="2026-04-23 17:43:19.561272767 +0000 UTC m=+133.982941727"
Apr 23 17:43:19.946515 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:19.946477 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-6769c5d45-hjgnq_8eeed746-7c2a-49ef-98bd-977fa1136b3c/kube-storage-version-migrator-operator/0.log"
Apr 23 17:43:26.039693 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:26.039652 2574 reconciler_common.go:224]
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c647dab7-a8c4-4b49-ab18-6a3500f88227-metrics-certs\") pod \"router-default-75ddc44-mjcts\" (UID: \"c647dab7-a8c4-4b49-ab18-6a3500f88227\") " pod="openshift-ingress/router-default-75ddc44-mjcts"
Apr 23 17:43:26.040165 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:26.039715 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c647dab7-a8c4-4b49-ab18-6a3500f88227-service-ca-bundle\") pod \"router-default-75ddc44-mjcts\" (UID: \"c647dab7-a8c4-4b49-ab18-6a3500f88227\") " pod="openshift-ingress/router-default-75ddc44-mjcts"
Apr 23 17:43:26.040165 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:43:26.039862 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c647dab7-a8c4-4b49-ab18-6a3500f88227-service-ca-bundle podName:c647dab7-a8c4-4b49-ab18-6a3500f88227 nodeName:}" failed. No retries permitted until 2026-04-23 17:43:42.039841835 +0000 UTC m=+156.461510788 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c647dab7-a8c4-4b49-ab18-6a3500f88227-service-ca-bundle") pod "router-default-75ddc44-mjcts" (UID: "c647dab7-a8c4-4b49-ab18-6a3500f88227") : configmap references non-existent config key: service-ca.crt
Apr 23 17:43:26.042109 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:26.042085 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c647dab7-a8c4-4b49-ab18-6a3500f88227-metrics-certs\") pod \"router-default-75ddc44-mjcts\" (UID: \"c647dab7-a8c4-4b49-ab18-6a3500f88227\") " pod="openshift-ingress/router-default-75ddc44-mjcts"
Apr 23 17:43:26.140153 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:26.140123 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c5aff8e-0dd9-41ed-b97c-43dcdd3901da-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-ksvzw\" (UID: \"1c5aff8e-0dd9-41ed-b97c-43dcdd3901da\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-ksvzw"
Apr 23 17:43:26.142749 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:26.142727 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c5aff8e-0dd9-41ed-b97c-43dcdd3901da-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-ksvzw\" (UID: \"1c5aff8e-0dd9-41ed-b97c-43dcdd3901da\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-ksvzw"
Apr 23 17:43:26.167796 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:26.167765 2574 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-ksvzw"
Apr 23 17:43:26.316331 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:26.316304 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-ksvzw"]
Apr 23 17:43:26.551389 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:26.551354 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-ksvzw" event={"ID":"1c5aff8e-0dd9-41ed-b97c-43dcdd3901da","Type":"ContainerStarted","Data":"8c6c7f9c40d99e54fbb22b6c0c5174664713090385f597105ef96c507f0ec0c9"}
Apr 23 17:43:28.560997 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:28.560958 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-ksvzw" event={"ID":"1c5aff8e-0dd9-41ed-b97c-43dcdd3901da","Type":"ContainerStarted","Data":"d2940cc175d93739bcc7c076de531a967d712c9a0062752f27762d855c56602d"}
Apr 23 17:43:28.560997 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:28.561000 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-ksvzw" event={"ID":"1c5aff8e-0dd9-41ed-b97c-43dcdd3901da","Type":"ContainerStarted","Data":"c3b22e9347aad7089fbf01b3c1a71d450a65f670e38b7b7f8c11003feebe895d"}
Apr 23 17:43:28.580866 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:28.580814 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-ksvzw" podStartSLOduration=17.116842128 podStartE2EDuration="18.580797825s" podCreationTimestamp="2026-04-23 17:43:10 +0000 UTC" firstStartedPulling="2026-04-23 17:43:26.362948687 +0000 UTC m=+140.784617623" lastFinishedPulling="2026-04-23 17:43:27.82690438 +0000 UTC m=+142.248573320" observedRunningTime="2026-04-23
17:43:28.579818925 +0000 UTC m=+143.001487885" watchObservedRunningTime="2026-04-23 17:43:28.580797825 +0000 UTC m=+143.002466826"
Apr 23 17:43:42.058302 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:42.058262 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c647dab7-a8c4-4b49-ab18-6a3500f88227-service-ca-bundle\") pod \"router-default-75ddc44-mjcts\" (UID: \"c647dab7-a8c4-4b49-ab18-6a3500f88227\") " pod="openshift-ingress/router-default-75ddc44-mjcts"
Apr 23 17:43:42.059007 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:42.058984 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c647dab7-a8c4-4b49-ab18-6a3500f88227-service-ca-bundle\") pod \"router-default-75ddc44-mjcts\" (UID: \"c647dab7-a8c4-4b49-ab18-6a3500f88227\") " pod="openshift-ingress/router-default-75ddc44-mjcts"
Apr 23 17:43:42.260878 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:42.260838 2574 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-ingress/router-default-75ddc44-mjcts"
Apr 23 17:43:42.395545 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:42.395386 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress/router-default-75ddc44-mjcts"]
Apr 23 17:43:42.403970 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:43:42.403934 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc647dab7_a8c4_4b49_ab18_6a3500f88227.slice/crio-57d235e66c05ad2e32ded87295dd046355e9923983579876f2592bec21439b9a WatchSource:0}: Error finding container 57d235e66c05ad2e32ded87295dd046355e9923983579876f2592bec21439b9a: Status 404 returned error can't find the container with id 57d235e66c05ad2e32ded87295dd046355e9923983579876f2592bec21439b9a
Apr 23 17:43:42.600657 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:42.600565 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-75ddc44-mjcts" event={"ID":"c647dab7-a8c4-4b49-ab18-6a3500f88227","Type":"ContainerStarted","Data":"c0d496b402c4b893399454235ce7d6477c3d8d4511966fa7e4c88ddde0fcf1cb"}
Apr 23 17:43:42.600657 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:42.600601 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-75ddc44-mjcts" event={"ID":"c647dab7-a8c4-4b49-ab18-6a3500f88227","Type":"ContainerStarted","Data":"57d235e66c05ad2e32ded87295dd046355e9923983579876f2592bec21439b9a"}
Apr 23 17:43:42.626224 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:42.626178 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-75ddc44-mjcts" podStartSLOduration=32.626162111 podStartE2EDuration="32.626162111s" podCreationTimestamp="2026-04-23 17:43:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:43:42.624963673
+0000 UTC m=+157.046632631" watchObservedRunningTime="2026-04-23 17:43:42.626162111 +0000 UTC m=+157.047831051"
Apr 23 17:43:42.994506 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:43:42.994460 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[metrics-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-dns/dns-default-pnmwc" podUID="23665133-39c5-4391-bafe-d17164250221"
Apr 23 17:43:43.018698 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:43:43.018653 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-ingress-canary/ingress-canary-wfmxn" podUID="c0a77136-ccae-4958-8ad5-7373ea79258f"
Apr 23 17:43:43.261376 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:43.261290 2574 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-75ddc44-mjcts"
Apr 23 17:43:43.263745 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:43.263718 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:43:43.263745 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:43:43.263745 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:43:43.263745 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:43:43.263935 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:43.263772 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:43:43.602955 ip-10-0-139-215 kubenswrapper[2574]:
I0423 17:43:43.602865 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-wfmxn"
Apr 23 17:43:43.602955 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:43.602870 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-pnmwc"
Apr 23 17:43:44.150863 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:43:44.150828 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[metrics-certs], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-multus/network-metrics-daemon-mfhnv" podUID="3d52817f-2284-48d3-800c-a67ac0e0fe4b"
Apr 23 17:43:44.262622 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:44.262587 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:43:44.262622 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:43:44.262622 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:43:44.262622 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:43:44.263053 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:44.262671 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:43:45.262693 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:45.262665 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:43:45.262693
ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:43:45.262693 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:43:45.262693 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:43:45.263148 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:45.262715 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:43:46.262841 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:46.262805 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:43:46.262841 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:43:46.262841 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:43:46.262841 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:43:46.263262 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:46.262862 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:43:47.262969 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:47.262933 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:43:47.262969 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:43:47.262969 ip-10-0-139-215 kubenswrapper[2574]:
[+]process-running ok Apr 23 17:43:47.262969 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:43:47.263431 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:47.262991 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:43:47.903291 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:47.903251 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/23665133-39c5-4391-bafe-d17164250221-metrics-tls\") pod \"dns-default-pnmwc\" (UID: \"23665133-39c5-4391-bafe-d17164250221\") " pod="openshift-dns/dns-default-pnmwc" Apr 23 17:43:47.905669 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:47.905625 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/23665133-39c5-4391-bafe-d17164250221-metrics-tls\") pod \"dns-default-pnmwc\" (UID: \"23665133-39c5-4391-bafe-d17164250221\") " pod="openshift-dns/dns-default-pnmwc" Apr 23 17:43:48.003894 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:48.003854 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0a77136-ccae-4958-8ad5-7373ea79258f-cert\") pod \"ingress-canary-wfmxn\" (UID: \"c0a77136-ccae-4958-8ad5-7373ea79258f\") " pod="openshift-ingress-canary/ingress-canary-wfmxn" Apr 23 17:43:48.006353 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:48.006320 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0a77136-ccae-4958-8ad5-7373ea79258f-cert\") pod \"ingress-canary-wfmxn\" (UID: \"c0a77136-ccae-4958-8ad5-7373ea79258f\") " pod="openshift-ingress-canary/ingress-canary-wfmxn" Apr 23 17:43:48.107117 ip-10-0-139-215 
kubenswrapper[2574]: I0423 17:43:48.107088 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-bmxrf\"" Apr 23 17:43:48.108317 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:48.108303 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-b6vkm\"" Apr 23 17:43:48.114612 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:48.114585 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-pnmwc" Apr 23 17:43:48.114736 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:48.114614 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-wfmxn" Apr 23 17:43:48.245420 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:48.245169 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-pnmwc"] Apr 23 17:43:48.247840 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:43:48.247813 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23665133_39c5_4391_bafe_d17164250221.slice/crio-e6d44de975d8267010c9f042675f27696f99fc1353428f9098b88593ad764935 WatchSource:0}: Error finding container e6d44de975d8267010c9f042675f27696f99fc1353428f9098b88593ad764935: Status 404 returned error can't find the container with id e6d44de975d8267010c9f042675f27696f99fc1353428f9098b88593ad764935 Apr 23 17:43:48.256469 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:48.256443 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-wfmxn"] Apr 23 17:43:48.258666 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:43:48.258621 2574 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0a77136_ccae_4958_8ad5_7373ea79258f.slice/crio-669448e45c3bb4fef0d8f8934f008f21f0d182160f8a465aff148b01f3330f16 WatchSource:0}: Error finding container 669448e45c3bb4fef0d8f8934f008f21f0d182160f8a465aff148b01f3330f16: Status 404 returned error can't find the container with id 669448e45c3bb4fef0d8f8934f008f21f0d182160f8a465aff148b01f3330f16 Apr 23 17:43:48.262764 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:48.262742 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:43:48.262764 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:43:48.262764 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:43:48.262764 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:43:48.262950 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:48.262788 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:43:48.615971 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:48.615879 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-pnmwc" event={"ID":"23665133-39c5-4391-bafe-d17164250221","Type":"ContainerStarted","Data":"e6d44de975d8267010c9f042675f27696f99fc1353428f9098b88593ad764935"} Apr 23 17:43:48.617144 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:48.617113 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-wfmxn" 
event={"ID":"c0a77136-ccae-4958-8ad5-7373ea79258f","Type":"ContainerStarted","Data":"669448e45c3bb4fef0d8f8934f008f21f0d182160f8a465aff148b01f3330f16"} Apr 23 17:43:49.262067 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:49.262030 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:43:49.262067 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:43:49.262067 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:43:49.262067 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:43:49.262374 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:49.262101 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:43:50.262894 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:50.262859 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:43:50.262894 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:43:50.262894 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:43:50.262894 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:43:50.263311 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:50.262910 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Apr 23 17:43:50.623619 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:50.623582 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-wfmxn" event={"ID":"c0a77136-ccae-4958-8ad5-7373ea79258f","Type":"ContainerStarted","Data":"6feccf9f12879172ab988e704e202028d4ae1953c4578e314242f96cb820723a"} Apr 23 17:43:50.625026 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:50.625003 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-pnmwc" event={"ID":"23665133-39c5-4391-bafe-d17164250221","Type":"ContainerStarted","Data":"2ffa03d47123677e1615ac29d86174b6e27445bdffb2b86451088dc5df83cbe6"} Apr 23 17:43:50.625132 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:50.625031 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-pnmwc" event={"ID":"23665133-39c5-4391-bafe-d17164250221","Type":"ContainerStarted","Data":"1620067336f326d7bc6ad09751f16d3281211e1742ae4eff8cdf778dbe348acf"} Apr 23 17:43:50.625176 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:50.625142 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-pnmwc" Apr 23 17:43:50.642345 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:50.642299 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-wfmxn" podStartSLOduration=129.934339199 podStartE2EDuration="2m11.642286459s" podCreationTimestamp="2026-04-23 17:41:39 +0000 UTC" firstStartedPulling="2026-04-23 17:43:48.26034268 +0000 UTC m=+162.682011617" lastFinishedPulling="2026-04-23 17:43:49.968289938 +0000 UTC m=+164.389958877" observedRunningTime="2026-04-23 17:43:50.641583207 +0000 UTC m=+165.063252168" watchObservedRunningTime="2026-04-23 17:43:50.642286459 +0000 UTC m=+165.063955417" Apr 23 17:43:50.660544 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:50.660498 2574 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-dns/dns-default-pnmwc" podStartSLOduration=129.9459692 podStartE2EDuration="2m11.66048358s" podCreationTimestamp="2026-04-23 17:41:39 +0000 UTC" firstStartedPulling="2026-04-23 17:43:48.249671503 +0000 UTC m=+162.671340440" lastFinishedPulling="2026-04-23 17:43:49.964185882 +0000 UTC m=+164.385854820" observedRunningTime="2026-04-23 17:43:50.659992103 +0000 UTC m=+165.081661062" watchObservedRunningTime="2026-04-23 17:43:50.66048358 +0000 UTC m=+165.082152539" Apr 23 17:43:51.262963 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:51.262928 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:43:51.262963 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:43:51.262963 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:43:51.262963 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:43:51.263376 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:51.262980 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:43:52.261735 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:52.261698 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-75ddc44-mjcts" Apr 23 17:43:52.262797 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:52.262774 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:43:52.262797 
ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:43:52.262797 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:43:52.262797 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:43:52.262974 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:52.262833 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:43:53.262698 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:53.262665 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:43:53.262698 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:43:53.262698 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:43:53.262698 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:43:53.263119 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:53.262718 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:43:54.262239 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:54.262200 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:43:54.262239 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:43:54.262239 ip-10-0-139-215 kubenswrapper[2574]: 
[+]process-running ok Apr 23 17:43:54.262239 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:43:54.262475 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:54.262259 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:43:55.262683 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:55.262650 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:43:55.262683 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:43:55.262683 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:43:55.262683 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:43:55.263118 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:55.262702 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:43:56.133234 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:56.133201 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfhnv" Apr 23 17:43:56.262481 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:56.262447 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:43:56.262481 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:43:56.262481 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:43:56.262481 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:43:56.262723 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:56.262505 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:43:57.262969 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:57.262931 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:43:57.262969 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:43:57.262969 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:43:57.262969 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:43:57.263389 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:57.262992 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:43:58.262547 ip-10-0-139-215 kubenswrapper[2574]: I0423 
17:43:58.262517 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:43:58.262547 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:43:58.262547 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:43:58.262547 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:43:58.262871 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:58.262567 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:43:59.262487 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:59.262453 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:43:59.262487 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:43:59.262487 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:43:59.262487 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:43:59.262953 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:43:59.262511 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:44:00.262997 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:00.262962 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:44:00.262997 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:44:00.262997 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:44:00.262997 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:44:00.263419 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:00.263019 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:44:00.629268 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:00.629241 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-pnmwc" Apr 23 17:44:01.262297 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:01.262263 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:44:01.262297 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:44:01.262297 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:44:01.262297 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:44:01.262515 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:01.262314 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:44:02.262650 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:02.262604 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:44:02.262650 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:44:02.262650 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:44:02.262650 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:44:02.263074 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:02.262680 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:44:03.262948 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:03.262918 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:44:03.262948 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:44:03.262948 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:44:03.262948 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:44:03.263359 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:03.262968 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:44:04.262348 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:04.262310 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:44:04.262348 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:44:04.262348 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:44:04.262348 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:44:04.262579 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:04.262371 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:44:05.263109 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:05.263076 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:44:05.263109 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:44:05.263109 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:44:05.263109 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:44:05.263564 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:05.263131 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:44:06.262438 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:06.262406 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:44:06.262438 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason 
withheld Apr 23 17:44:06.262438 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:44:06.262438 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:44:06.262702 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:06.262470 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:44:07.262577 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:07.262544 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:44:07.262577 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:44:07.262577 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:44:07.262577 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:44:07.263029 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:07.262604 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:44:08.262938 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:08.262909 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:44:08.262938 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:44:08.262938 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:44:08.262938 ip-10-0-139-215 
kubenswrapper[2574]: healthz check failed Apr 23 17:44:08.263366 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:08.262960 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:44:09.262768 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:09.262735 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:44:09.262768 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:44:09.262768 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:44:09.262768 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:44:09.263299 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:09.262792 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:44:10.262083 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:10.262046 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:44:10.262083 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:44:10.262083 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:44:10.262083 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:44:10.262316 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:10.262110 2574 prober.go:120] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:11.262578 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:11.262543 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:11.262578 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:11.262578 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:11.262578 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:11.263037 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:11.262612 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:12.262793 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:12.262754 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:12.262793 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:12.262793 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:12.262793 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:12.263225 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:12.262814 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:13.262436 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:13.262403 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:13.262436 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:13.262436 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:13.262436 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:13.262712 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:13.262460 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:14.262156 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:14.262121 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:14.262156 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:14.262156 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:14.262156 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:14.262726 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:14.262192 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:15.262987 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:15.262948 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:15.262987 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:15.262987 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:15.262987 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:15.263440 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:15.263007 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:16.262769 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:16.262734 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:16.262769 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:16.262769 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:16.262769 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:16.263029 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:16.262787 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:17.262891 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:17.262853 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:17.262891 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:17.262891 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:17.262891 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:17.263323 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:17.262926 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:18.262810 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:18.262776 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:18.262810 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:18.262810 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:18.262810 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:18.263047 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:18.262845 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:19.262479 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:19.262446 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:19.262479 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:19.262479 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:19.262479 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:19.262732 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:19.262501 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:20.262904 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:20.262871 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:20.262904 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:20.262904 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:20.262904 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:20.263410 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:20.262960 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:21.262515 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:21.262481 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:21.262515 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:21.262515 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:21.262515 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:21.262795 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:21.262535 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:22.262749 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:22.262718 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:22.262749 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:22.262749 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:22.262749 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:22.263189 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:22.262763 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:23.262622 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:23.262590 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:23.262622 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:23.262622 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:23.262622 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:23.263072 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:23.262657 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:24.262191 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:24.262148 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:24.262191 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:24.262191 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:24.262191 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:24.262437 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:24.262202 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:25.261945 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:25.261906 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:25.261945 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:25.261945 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:25.261945 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:25.262389 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:25.261972 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:26.262923 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:26.262889 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:26.262923 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:26.262923 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:26.262923 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:26.263437 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:26.262956 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:27.262105 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:27.262070 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:27.262105 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:27.262105 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:27.262105 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:27.262342 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:27.262149 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:28.262212 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:28.262180 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:28.262212 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:28.262212 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:28.262212 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:28.262709 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:28.262247 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:29.262028 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:29.261987 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:29.262028 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:29.262028 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:29.262028 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:29.262566 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:29.262053 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:30.262898 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:30.262860 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:30.262898 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:30.262898 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:30.262898 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:30.263334 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:30.262918 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:31.262443 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:31.262411 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:31.262443 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:31.262443 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:31.262443 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:31.262719 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:31.262468 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:32.262549 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:32.262508 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:32.262549 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:32.262549 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:32.262549 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:32.263100 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:32.262563 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:33.262604 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:33.262572 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:33.262604 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:33.262604 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:33.262604 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:33.263060 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:33.262627 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:33.735124 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:33.735089 2574 generic.go:358] "Generic (PLEG): container finished" podID="8eeed746-7c2a-49ef-98bd-977fa1136b3c" containerID="dbe60cafa77eea912a4c04caaa12d15cc2f07813a8a68fe2efa9164e94fae9e4" exitCode=0
Apr 23 17:44:33.735306 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:33.735166 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-hjgnq" event={"ID":"8eeed746-7c2a-49ef-98bd-977fa1136b3c","Type":"ContainerDied","Data":"dbe60cafa77eea912a4c04caaa12d15cc2f07813a8a68fe2efa9164e94fae9e4"}
Apr 23 17:44:33.735507 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:33.735494 2574 scope.go:117] "RemoveContainer" containerID="dbe60cafa77eea912a4c04caaa12d15cc2f07813a8a68fe2efa9164e94fae9e4"
Apr 23 17:44:34.262466 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:34.262432 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:34.262466 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:34.262466 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:34.262466 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:34.262986 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:34.262490 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:34.738927 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:34.738898 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6769c5d45-hjgnq" event={"ID":"8eeed746-7c2a-49ef-98bd-977fa1136b3c","Type":"ContainerStarted","Data":"3ce5eff7e275aa4da943c553cfff30f5950b5427906b05e65e4210603ce033b5"}
Apr 23 17:44:35.262841 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:35.262807 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:35.262841 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:35.262841 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:35.262841 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:35.263294 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:35.262861 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:36.262574 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:36.262532 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:36.262574 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:36.262574 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:36.262574 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:36.262867 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:36.262607 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:37.262836 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:37.262793 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:37.262836 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:37.262836 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:37.262836 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:37.263321 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:37.262854 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:38.262310 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:38.262274 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:38.262310 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:38.262310 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:38.262310 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:38.262563 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:38.262341 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:39.262441 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:39.262407 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:39.262441 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:39.262441 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:39.262441 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:39.262899 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:39.262479 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:40.262997 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:40.262956 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:40.262997 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:40.262997 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:40.262997 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:40.263439 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:40.263014 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:41.262251 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:41.262214 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:41.262251 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:41.262251 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:41.262251 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:41.262488 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:41.262279 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:41.665041 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:41.665013 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-pnmwc_23665133-39c5-4391-bafe-d17164250221/dns/0.log"
Apr 23 17:44:41.865570 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:41.865543 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-pnmwc_23665133-39c5-4391-bafe-d17164250221/kube-rbac-proxy/0.log"
Apr 23 17:44:42.262732 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:42.262690 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:42.262732 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:42.262732 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:42.262732 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:42.262977 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:42.262759 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:42.266842 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:42.266818 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-l747m_828447ca-91a9-49c8-a1b8-50a5cfbe0580/dns-node-resolver/0.log"
Apr 23 17:44:43.263060 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:43.263024 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:43.263060 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:43.263060 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:43.263060 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:43.263492 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:43.263094 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:43.265767 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:43.265749 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-76svx_59053c21-2759-4fb0-86d0-fd32dd514204/node-ca/0.log"
Apr 23 17:44:43.868589 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:43.868563 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-75ddc44-mjcts_c647dab7-a8c4-4b49-ab18-6a3500f88227/router/0.log"
Apr 23 17:44:44.262544 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:44.262507 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:44.262544 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:44.262544 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:44.262544 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:44.262946 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:44.262574 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:44.465751 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:44.465719 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-wfmxn_c0a77136-ccae-4958-8ad5-7373ea79258f/serve-healthcheck-canary/0.log"
Apr 23 17:44:45.262502 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:45.262469 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:45.262502 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:45.262502 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:45.262502 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:45.262769 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:45.262525 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:46.262445 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:46.262407 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:46.262445 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:46.262445 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:46.262445 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:46.262926 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:46.262477 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:47.263075 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:47.263040 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:47.263075 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:47.263075 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:47.263075 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:47.263518 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:47.263094 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:48.262372 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:48.262339 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:48.262372 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:48.262372 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:48.262372 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:48.262617 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:48.262399 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:49.262345 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:49.262313 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:49.262345 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:49.262345 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:49.262345 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:49.262809 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:49.262367 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:50.262171 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:50.262135 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:50.262171 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:50.262171 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:50.262171 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:50.262664 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:50.262202 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:51.262138 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:51.262107 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:51.262138 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:51.262138 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:51.262138 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:51.262380 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:51.262158 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:44:52.262821 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:52.262789 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:44:52.262821 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:44:52.262821 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:44:52.262821 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:44:52.263260 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:52.262842 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure"
output="HTTP probe failed with statuscode: 500" Apr 23 17:44:53.262745 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:53.262710 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:44:53.262745 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:44:53.262745 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:44:53.262745 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:44:53.263204 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:53.262777 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:44:54.262281 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:54.262242 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:44:54.262281 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:44:54.262281 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:44:54.262281 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:44:54.262529 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:54.262313 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:44:55.262621 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:55.262587 2574 
patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:44:55.262621 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:44:55.262621 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:44:55.262621 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:44:55.263100 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:55.262671 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:44:56.262242 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:56.262209 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:44:56.262242 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:44:56.262242 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:44:56.262242 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:44:56.262499 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:56.262277 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:44:57.262121 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:57.262087 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:44:57.262121 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:44:57.262121 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:44:57.262121 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:44:57.262561 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:57.262144 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:44:58.262381 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:58.262349 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:44:58.262381 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:44:58.262381 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:44:58.262381 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:44:58.262835 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:58.262403 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:44:59.262289 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:59.262257 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:44:59.262289 ip-10-0-139-215 
kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:44:59.262289 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:44:59.262289 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:44:59.262763 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:44:59.262310 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:45:00.263109 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:00.263075 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:45:00.263109 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:45:00.263109 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:45:00.263109 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:45:00.263545 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:00.263131 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:45:01.262543 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:01.262511 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:45:01.262543 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:45:01.262543 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 
17:45:01.262543 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:45:01.262804 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:01.262571 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:45:02.262094 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:02.262054 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:45:02.262094 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:45:02.262094 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:45:02.262094 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:45:02.262536 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:02.262118 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:45:03.262339 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:03.262302 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:45:03.262339 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:45:03.262339 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:45:03.262339 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:45:03.262868 ip-10-0-139-215 kubenswrapper[2574]: I0423 
17:45:03.262370 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:45:04.262569 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:04.262539 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:45:04.262569 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:45:04.262569 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:45:04.262569 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:45:04.263027 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:04.262588 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:45:05.262533 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:05.262501 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:45:05.262533 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:45:05.262533 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:45:05.262533 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:45:05.262989 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:05.262551 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" 
podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:45:06.262619 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:06.262584 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:45:06.262619 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:45:06.262619 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:45:06.262619 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:45:06.263064 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:06.262647 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:45:07.262569 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:07.262537 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:45:07.262569 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:45:07.262569 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:45:07.262569 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:45:07.263010 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:07.262590 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 
17:45:08.262226 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:08.262187 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:45:08.262226 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:45:08.262226 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:45:08.262226 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:45:08.262458 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:08.262245 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:45:09.262138 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:09.262103 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:45:09.262138 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:45:09.262138 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:45:09.262138 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:45:09.262563 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:09.262171 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:45:10.262999 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:10.262960 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:45:10.262999 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:45:10.262999 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:45:10.262999 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:45:10.263414 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:10.263025 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:45:11.262497 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:11.262463 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:45:11.262497 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:45:11.262497 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:45:11.262497 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:45:11.262763 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:11.262515 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:45:12.262115 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:12.262083 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:45:12.262115 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:45:12.262115 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:45:12.262115 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:45:12.262561 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:12.262131 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:45:13.262607 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:13.262568 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:45:13.262607 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:45:13.262607 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:45:13.262607 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:45:13.263054 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:13.262621 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:45:14.262737 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:14.262704 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:45:14.262737 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason 
withheld Apr 23 17:45:14.262737 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:45:14.262737 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:45:14.263183 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:14.262771 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:45:15.262862 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:15.262829 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:45:15.262862 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:45:15.262862 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:45:15.262862 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:45:15.263310 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:15.262883 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:45:16.262992 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:16.262958 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:45:16.262992 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:45:16.262992 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:45:16.262992 ip-10-0-139-215 
kubenswrapper[2574]: healthz check failed Apr 23 17:45:16.263511 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:16.263023 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:45:17.262038 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:17.262004 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:45:17.262038 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:45:17.262038 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:45:17.262038 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:45:17.262281 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:17.262063 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:45:17.937830 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:17.937793 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d52817f-2284-48d3-800c-a67ac0e0fe4b-metrics-certs\") pod \"network-metrics-daemon-mfhnv\" (UID: \"3d52817f-2284-48d3-800c-a67ac0e0fe4b\") " pod="openshift-multus/network-metrics-daemon-mfhnv" Apr 23 17:45:17.940259 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:17.940235 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d52817f-2284-48d3-800c-a67ac0e0fe4b-metrics-certs\") pod 
\"network-metrics-daemon-mfhnv\" (UID: \"3d52817f-2284-48d3-800c-a67ac0e0fe4b\") " pod="openshift-multus/network-metrics-daemon-mfhnv" Apr 23 17:45:18.037677 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:18.037623 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-qk5s8\"" Apr 23 17:45:18.044619 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:18.044595 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfhnv" Apr 23 17:45:18.161516 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:18.161483 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-mfhnv"] Apr 23 17:45:18.165619 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:45:18.165590 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d52817f_2284_48d3_800c_a67ac0e0fe4b.slice/crio-44453dbe0ab51c647ca40001dda527d55ea26fc2ef45ead044dd8cce24b62081 WatchSource:0}: Error finding container 44453dbe0ab51c647ca40001dda527d55ea26fc2ef45ead044dd8cce24b62081: Status 404 returned error can't find the container with id 44453dbe0ab51c647ca40001dda527d55ea26fc2ef45ead044dd8cce24b62081 Apr 23 17:45:18.262351 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:18.262279 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:45:18.262351 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:45:18.262351 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:45:18.262351 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:45:18.262351 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:18.262339 2574 
prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:45:18.844828 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:18.844789 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mfhnv" event={"ID":"3d52817f-2284-48d3-800c-a67ac0e0fe4b","Type":"ContainerStarted","Data":"44453dbe0ab51c647ca40001dda527d55ea26fc2ef45ead044dd8cce24b62081"}
Apr 23 17:45:19.262043 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:19.262018 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:45:19.262043 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:45:19.262043 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:45:19.262043 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:45:19.262444 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:19.262069 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:45:19.850432 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:19.850396 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mfhnv" event={"ID":"3d52817f-2284-48d3-800c-a67ac0e0fe4b","Type":"ContainerStarted","Data":"0eeab4b127838af9e439dfe7d2e2923fa5b795debc894d2b84e85d20221d4561"}
Apr 23 17:45:19.850432 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:19.850436 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mfhnv" event={"ID":"3d52817f-2284-48d3-800c-a67ac0e0fe4b","Type":"ContainerStarted","Data":"e01f86c81fbcbd3092b66d8186fb3a6d05c56bc9e7117c2a7f8c1cf89203f194"}
Apr 23 17:45:19.869413 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:19.869326 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-mfhnv" podStartSLOduration=252.988607407 podStartE2EDuration="4m13.869309818s" podCreationTimestamp="2026-04-23 17:41:06 +0000 UTC" firstStartedPulling="2026-04-23 17:45:18.167374998 +0000 UTC m=+252.589043934" lastFinishedPulling="2026-04-23 17:45:19.048077408 +0000 UTC m=+253.469746345" observedRunningTime="2026-04-23 17:45:19.868518128 +0000 UTC m=+254.290187086" watchObservedRunningTime="2026-04-23 17:45:19.869309818 +0000 UTC m=+254.290978778"
Apr 23 17:45:20.262231 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:20.262199 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:45:20.262231 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:45:20.262231 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:45:20.262231 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:45:20.262671 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:20.262253 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:45:21.262954 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:21.262919 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:45:21.262954 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:45:21.262954 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:45:21.262954 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:45:21.263400 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:21.262977 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:45:22.262234 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:22.262204 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:45:22.262234 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:45:22.262234 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:45:22.262234 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:45:22.262483 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:22.262258 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:45:23.262352 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:23.262316 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:45:23.262352 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:45:23.262352 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:45:23.262352 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:45:23.262818 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:23.262391 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:45:24.262017 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:24.261984 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:45:24.262017 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:45:24.262017 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:45:24.262017 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:45:24.262252 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:24.262040 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:45:25.262507 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:25.262471 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:45:25.262507 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:45:25.262507 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:45:25.262507 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:45:25.263038 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:25.262534 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:45:26.262928 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:26.262894 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:45:26.262928 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:45:26.262928 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:45:26.262928 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:45:26.263360 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:26.262951 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:45:27.262942 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:27.262911 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:45:27.262942 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:45:27.262942 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:45:27.262942 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:45:27.263376 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:27.262968 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:45:28.262452 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:28.262424 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:45:28.262452 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:45:28.262452 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:45:28.262452 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:45:28.262765 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:28.262476 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:45:29.262070 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:29.262025 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:45:29.262070 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:45:29.262070 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:45:29.262070 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:45:29.262483 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:29.262084 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:45:30.262280 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:30.262242 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:45:30.262280 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:45:30.262280 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:45:30.262280 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:45:30.262843 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:30.262306 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:45:31.262299 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:31.262266 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:45:31.262299 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:45:31.262299 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:45:31.262299 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:45:31.262756 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:31.262318 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:45:32.262412 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:32.262381 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:45:32.262412 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:45:32.262412 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:45:32.262412 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:45:32.262869 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:32.262433 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:45:33.261988 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:33.261955 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:45:33.261988 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:45:33.261988 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:45:33.261988 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:45:33.262214 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:33.262011 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:45:34.262843 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:34.262813 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:45:34.262843 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:45:34.262843 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:45:34.262843 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:45:34.263269 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:34.262889 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:45:35.262068 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:35.262033 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:45:35.262068 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:45:35.262068 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:45:35.262068 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:45:35.262398 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:35.262095 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:45:36.262268 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:36.262230 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:45:36.262268 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:45:36.262268 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:45:36.262268 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:45:36.262788 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:36.262298 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:45:37.262677 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:37.262623 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:45:37.262677 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:45:37.262677 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:45:37.262677 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:45:37.263132 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:37.262698 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:45:38.262552 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:38.262518 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:45:38.262552 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:45:38.262552 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:45:38.262552 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:45:38.263001 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:38.262575 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:45:39.262826 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:39.262796 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:45:39.262826 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:45:39.262826 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:45:39.262826 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:45:39.263239 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:39.262850 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:45:40.262331 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:40.262296 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:45:40.262331 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:45:40.262331 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:45:40.262331 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:45:40.262572 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:40.262362 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:45:41.262416 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:41.262382 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:45:41.262416 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:45:41.262416 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:45:41.262416 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:45:41.262867 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:41.262439 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:45:42.262438 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:42.262398 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:45:42.262438 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:45:42.262438 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:45:42.262438 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:45:42.262907 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:42.262452 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:45:42.262907 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:42.262492 2574 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-75ddc44-mjcts"
Apr 23 17:45:42.262971 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:42.262940 2574 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"c0d496b402c4b893399454235ce7d6477c3d8d4511966fa7e4c88ddde0fcf1cb"} pod="openshift-ingress/router-default-75ddc44-mjcts" containerMessage="Container router failed startup probe, will be restarted"
Apr 23 17:45:42.263017 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:45:42.263002 2574 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" containerID="cri-o://c0d496b402c4b893399454235ce7d6477c3d8d4511966fa7e4c88ddde0fcf1cb" gracePeriod=3600
Apr 23 17:46:06.019718 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:06.019691 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-246wr_0f6164a3-aee1-463f-8c3a-a432711f40db/ovn-acl-logging/0.log"
Apr 23 17:46:06.022359 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:06.022334 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-246wr_0f6164a3-aee1-463f-8c3a-a432711f40db/ovn-acl-logging/0.log"
Apr 23 17:46:06.023073 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:06.023053 2574 kubelet.go:1628] "Image garbage collection succeeded"
Apr 23 17:46:28.379837 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:28.379816 2574 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 23 17:46:29.019915 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:29.019885 2574 generic.go:358] "Generic (PLEG): container finished" podID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerID="c0d496b402c4b893399454235ce7d6477c3d8d4511966fa7e4c88ddde0fcf1cb" exitCode=0
Apr 23 17:46:29.020095 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:29.019930 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-75ddc44-mjcts" event={"ID":"c647dab7-a8c4-4b49-ab18-6a3500f88227","Type":"ContainerDied","Data":"c0d496b402c4b893399454235ce7d6477c3d8d4511966fa7e4c88ddde0fcf1cb"}
Apr 23 17:46:29.020095 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:29.019953 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-75ddc44-mjcts" event={"ID":"c647dab7-a8c4-4b49-ab18-6a3500f88227","Type":"ContainerStarted","Data":"3d5b5dffc8b6ac6de8b254f63e78f912f9055a3eccadfbe6f4293e0bf71589dc"}
Apr 23 17:46:29.261209 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:29.261173 2574 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-75ddc44-mjcts"
Apr 23 17:46:29.263382 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:29.263356 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:46:29.263382 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:46:29.263382 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:46:29.263382 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:46:29.263585 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:29.263412 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:46:30.262300 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:30.262265 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:46:30.262300 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:46:30.262300 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:46:30.262300 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:46:30.262819 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:30.262323 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:46:31.262451 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:31.262416 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:46:31.262451 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:46:31.262451 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:46:31.262451 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:46:31.262920 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:31.262479 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:46:32.261297 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:32.261262 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-75ddc44-mjcts"
Apr 23 17:46:32.262339 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:32.262318 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:46:32.262339 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:46:32.262339 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:46:32.262339 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:46:32.262722 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:32.262361 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:46:33.262930 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:33.262895 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:46:33.262930 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:46:33.262930 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:46:33.262930 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:46:33.263364 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:33.262950 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:46:34.262782 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:34.262749 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:46:34.262782 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:46:34.262782 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:46:34.262782 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:46:34.263228 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:34.262815 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:46:35.262891 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:35.262855 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:46:35.262891 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:46:35.262891 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:46:35.262891 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:46:35.263424 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:35.262931 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:46:36.262290 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:36.262256 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:46:36.262290 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:46:36.262290 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:46:36.262290 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:46:36.262555 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:36.262345 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:46:37.262894 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.262857 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:46:37.262894 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:46:37.262894 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:46:37.262894 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:46:37.263333 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.262927 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:46:37.469393 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.469364 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-cb95c66f6-qtf6n"]
Apr 23 17:46:37.472304 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.472287 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-cb95c66f6-qtf6n"
Apr 23 17:46:37.476818 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.476791 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"default-dockercfg-ccf84\""
Apr 23 17:46:37.476818 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.476806 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Apr 23 17:46:37.476980 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.476791 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Apr 23 17:46:37.487944 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.487922 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-cb95c66f6-qtf6n"]
Apr 23 17:46:37.496764 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.496742 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-runtime-extractor-rwkcd"]
Apr 23 17:46:37.499771 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.499754 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-runtime-extractor-rwkcd"
Apr 23 17:46:37.505950 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.505776 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-root-ca.crt\""
Apr 23 17:46:37.505950 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.505788 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-rbac-proxy\""
Apr 23 17:46:37.505950 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.505788 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"openshift-service-ca.crt\""
Apr 23 17:46:37.506464 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.506442 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-sa-dockercfg-dt5dv\""
Apr 23 17:46:37.507521 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.507501 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-tls\""
Apr 23 17:46:37.519651 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.519582 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-rwkcd"]
Apr 23 17:46:37.560492 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.560458 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/21d8a344-f03b-4bf0-845c-dcc9f5fc81fb-nginx-conf\") pod \"networking-console-plugin-cb95c66f6-qtf6n\" (UID: \"21d8a344-f03b-4bf0-845c-dcc9f5fc81fb\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-qtf6n"
Apr 23 17:46:37.560656 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.560498 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for
volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/f5a18479-499c-485f-ba5a-83ecc0d54ca4-crio-socket\") pod \"insights-runtime-extractor-rwkcd\" (UID: \"f5a18479-499c-485f-ba5a-83ecc0d54ca4\") " pod="openshift-insights/insights-runtime-extractor-rwkcd" Apr 23 17:46:37.560656 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.560528 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/f5a18479-499c-485f-ba5a-83ecc0d54ca4-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-rwkcd\" (UID: \"f5a18479-499c-485f-ba5a-83ecc0d54ca4\") " pod="openshift-insights/insights-runtime-extractor-rwkcd" Apr 23 17:46:37.560656 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.560573 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm4jl\" (UniqueName: \"kubernetes.io/projected/f5a18479-499c-485f-ba5a-83ecc0d54ca4-kube-api-access-xm4jl\") pod \"insights-runtime-extractor-rwkcd\" (UID: \"f5a18479-499c-485f-ba5a-83ecc0d54ca4\") " pod="openshift-insights/insights-runtime-extractor-rwkcd" Apr 23 17:46:37.560656 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.560614 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/f5a18479-499c-485f-ba5a-83ecc0d54ca4-data-volume\") pod \"insights-runtime-extractor-rwkcd\" (UID: \"f5a18479-499c-485f-ba5a-83ecc0d54ca4\") " pod="openshift-insights/insights-runtime-extractor-rwkcd" Apr 23 17:46:37.560854 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.560712 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/21d8a344-f03b-4bf0-845c-dcc9f5fc81fb-networking-console-plugin-cert\") pod 
\"networking-console-plugin-cb95c66f6-qtf6n\" (UID: \"21d8a344-f03b-4bf0-845c-dcc9f5fc81fb\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-qtf6n" Apr 23 17:46:37.560854 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.560743 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/f5a18479-499c-485f-ba5a-83ecc0d54ca4-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-rwkcd\" (UID: \"f5a18479-499c-485f-ba5a-83ecc0d54ca4\") " pod="openshift-insights/insights-runtime-extractor-rwkcd" Apr 23 17:46:37.583106 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.583072 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-lnsd6"] Apr 23 17:46:37.586007 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.585991 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-lnsd6" Apr 23 17:46:37.590781 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.590755 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-admission-webhook-tls\"" Apr 23 17:46:37.590904 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.590875 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-admission-webhook-dockercfg-wrs6x\"" Apr 23 17:46:37.597410 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.597390 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-6bcc868b7-jklss"] Apr 23 17:46:37.600307 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.600293 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-6bcc868b7-jklss" Apr 23 17:46:37.603576 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.603553 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-x48j6\"" Apr 23 17:46:37.603931 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.603918 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Apr 23 17:46:37.603983 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.603931 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Apr 23 17:46:37.615279 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.615256 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-59c6488d5c-6pw5f"] Apr 23 17:46:37.618185 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.618166 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-lnsd6"] Apr 23 17:46:37.618288 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.618276 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-59c6488d5c-6pw5f" Apr 23 17:46:37.621672 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.621653 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-7hx6w\"" Apr 23 17:46:37.621773 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.621754 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Apr 23 17:46:37.623649 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.623618 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-private-configuration\"" Apr 23 17:46:37.632574 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.632553 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Apr 23 17:46:37.645378 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.644615 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Apr 23 17:46:37.648251 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.648233 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-6bcc868b7-jklss"] Apr 23 17:46:37.661830 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.661810 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0775f5a9-0672-43b2-9425-ebc191d0f124-registry-tls\") pod \"image-registry-59c6488d5c-6pw5f\" (UID: \"0775f5a9-0672-43b2-9425-ebc191d0f124\") " pod="openshift-image-registry/image-registry-59c6488d5c-6pw5f" Apr 23 17:46:37.661930 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.661843 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" 
(UniqueName: \"kubernetes.io/configmap/21d8a344-f03b-4bf0-845c-dcc9f5fc81fb-nginx-conf\") pod \"networking-console-plugin-cb95c66f6-qtf6n\" (UID: \"21d8a344-f03b-4bf0-845c-dcc9f5fc81fb\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-qtf6n" Apr 23 17:46:37.661930 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.661865 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0775f5a9-0672-43b2-9425-ebc191d0f124-installation-pull-secrets\") pod \"image-registry-59c6488d5c-6pw5f\" (UID: \"0775f5a9-0672-43b2-9425-ebc191d0f124\") " pod="openshift-image-registry/image-registry-59c6488d5c-6pw5f" Apr 23 17:46:37.661930 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.661890 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/f5a18479-499c-485f-ba5a-83ecc0d54ca4-crio-socket\") pod \"insights-runtime-extractor-rwkcd\" (UID: \"f5a18479-499c-485f-ba5a-83ecc0d54ca4\") " pod="openshift-insights/insights-runtime-extractor-rwkcd" Apr 23 17:46:37.661930 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.661917 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/f5a18479-499c-485f-ba5a-83ecc0d54ca4-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-rwkcd\" (UID: \"f5a18479-499c-485f-ba5a-83ecc0d54ca4\") " pod="openshift-insights/insights-runtime-extractor-rwkcd" Apr 23 17:46:37.662122 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.661940 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xm4jl\" (UniqueName: \"kubernetes.io/projected/f5a18479-499c-485f-ba5a-83ecc0d54ca4-kube-api-access-xm4jl\") pod \"insights-runtime-extractor-rwkcd\" (UID: \"f5a18479-499c-485f-ba5a-83ecc0d54ca4\") " 
pod="openshift-insights/insights-runtime-extractor-rwkcd" Apr 23 17:46:37.662122 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.661967 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0775f5a9-0672-43b2-9425-ebc191d0f124-trusted-ca\") pod \"image-registry-59c6488d5c-6pw5f\" (UID: \"0775f5a9-0672-43b2-9425-ebc191d0f124\") " pod="openshift-image-registry/image-registry-59c6488d5c-6pw5f" Apr 23 17:46:37.662122 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.661991 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s577d\" (UniqueName: \"kubernetes.io/projected/13a332d6-578a-4838-8bd3-9a2a0eb00e2f-kube-api-access-s577d\") pod \"downloads-6bcc868b7-jklss\" (UID: \"13a332d6-578a-4838-8bd3-9a2a0eb00e2f\") " pod="openshift-console/downloads-6bcc868b7-jklss" Apr 23 17:46:37.662122 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.661990 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/f5a18479-499c-485f-ba5a-83ecc0d54ca4-crio-socket\") pod \"insights-runtime-extractor-rwkcd\" (UID: \"f5a18479-499c-485f-ba5a-83ecc0d54ca4\") " pod="openshift-insights/insights-runtime-extractor-rwkcd" Apr 23 17:46:37.662122 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.662035 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/f5a18479-499c-485f-ba5a-83ecc0d54ca4-data-volume\") pod \"insights-runtime-extractor-rwkcd\" (UID: \"f5a18479-499c-485f-ba5a-83ecc0d54ca4\") " pod="openshift-insights/insights-runtime-extractor-rwkcd" Apr 23 17:46:37.662122 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.662073 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/0775f5a9-0672-43b2-9425-ebc191d0f124-bound-sa-token\") pod \"image-registry-59c6488d5c-6pw5f\" (UID: \"0775f5a9-0672-43b2-9425-ebc191d0f124\") " pod="openshift-image-registry/image-registry-59c6488d5c-6pw5f" Apr 23 17:46:37.662122 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.662107 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wr2t\" (UniqueName: \"kubernetes.io/projected/0775f5a9-0672-43b2-9425-ebc191d0f124-kube-api-access-6wr2t\") pod \"image-registry-59c6488d5c-6pw5f\" (UID: \"0775f5a9-0672-43b2-9425-ebc191d0f124\") " pod="openshift-image-registry/image-registry-59c6488d5c-6pw5f" Apr 23 17:46:37.662502 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.662180 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/21d8a344-f03b-4bf0-845c-dcc9f5fc81fb-networking-console-plugin-cert\") pod \"networking-console-plugin-cb95c66f6-qtf6n\" (UID: \"21d8a344-f03b-4bf0-845c-dcc9f5fc81fb\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-qtf6n" Apr 23 17:46:37.662502 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.662218 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/f5a18479-499c-485f-ba5a-83ecc0d54ca4-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-rwkcd\" (UID: \"f5a18479-499c-485f-ba5a-83ecc0d54ca4\") " pod="openshift-insights/insights-runtime-extractor-rwkcd" Apr 23 17:46:37.662502 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.662337 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/8abaf28c-2dbf-42c9-af60-3679eeb62d64-tls-certificates\") pod \"prometheus-operator-admission-webhook-57cf98b594-lnsd6\" (UID: 
\"8abaf28c-2dbf-42c9-af60-3679eeb62d64\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-lnsd6" Apr 23 17:46:37.662502 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.662368 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0775f5a9-0672-43b2-9425-ebc191d0f124-registry-certificates\") pod \"image-registry-59c6488d5c-6pw5f\" (UID: \"0775f5a9-0672-43b2-9425-ebc191d0f124\") " pod="openshift-image-registry/image-registry-59c6488d5c-6pw5f" Apr 23 17:46:37.662502 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.662398 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0775f5a9-0672-43b2-9425-ebc191d0f124-ca-trust-extracted\") pod \"image-registry-59c6488d5c-6pw5f\" (UID: \"0775f5a9-0672-43b2-9425-ebc191d0f124\") " pod="openshift-image-registry/image-registry-59c6488d5c-6pw5f" Apr 23 17:46:37.662502 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.662416 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/f5a18479-499c-485f-ba5a-83ecc0d54ca4-data-volume\") pod \"insights-runtime-extractor-rwkcd\" (UID: \"f5a18479-499c-485f-ba5a-83ecc0d54ca4\") " pod="openshift-insights/insights-runtime-extractor-rwkcd" Apr 23 17:46:37.662502 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.662427 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/0775f5a9-0672-43b2-9425-ebc191d0f124-image-registry-private-configuration\") pod \"image-registry-59c6488d5c-6pw5f\" (UID: \"0775f5a9-0672-43b2-9425-ebc191d0f124\") " pod="openshift-image-registry/image-registry-59c6488d5c-6pw5f" Apr 23 17:46:37.662774 ip-10-0-139-215 
kubenswrapper[2574]: I0423 17:46:37.662575 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/21d8a344-f03b-4bf0-845c-dcc9f5fc81fb-nginx-conf\") pod \"networking-console-plugin-cb95c66f6-qtf6n\" (UID: \"21d8a344-f03b-4bf0-845c-dcc9f5fc81fb\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-qtf6n" Apr 23 17:46:37.662954 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.662935 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/f5a18479-499c-485f-ba5a-83ecc0d54ca4-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-rwkcd\" (UID: \"f5a18479-499c-485f-ba5a-83ecc0d54ca4\") " pod="openshift-insights/insights-runtime-extractor-rwkcd" Apr 23 17:46:37.664388 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.664365 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/f5a18479-499c-485f-ba5a-83ecc0d54ca4-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-rwkcd\" (UID: \"f5a18479-499c-485f-ba5a-83ecc0d54ca4\") " pod="openshift-insights/insights-runtime-extractor-rwkcd" Apr 23 17:46:37.664602 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.664585 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/21d8a344-f03b-4bf0-845c-dcc9f5fc81fb-networking-console-plugin-cert\") pod \"networking-console-plugin-cb95c66f6-qtf6n\" (UID: \"21d8a344-f03b-4bf0-845c-dcc9f5fc81fb\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-qtf6n" Apr 23 17:46:37.669038 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.669010 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-59c6488d5c-6pw5f"] Apr 23 17:46:37.692951 ip-10-0-139-215 kubenswrapper[2574]: 
I0423 17:46:37.692919 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm4jl\" (UniqueName: \"kubernetes.io/projected/f5a18479-499c-485f-ba5a-83ecc0d54ca4-kube-api-access-xm4jl\") pod \"insights-runtime-extractor-rwkcd\" (UID: \"f5a18479-499c-485f-ba5a-83ecc0d54ca4\") " pod="openshift-insights/insights-runtime-extractor-rwkcd" Apr 23 17:46:37.763276 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.763240 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/0775f5a9-0672-43b2-9425-ebc191d0f124-image-registry-private-configuration\") pod \"image-registry-59c6488d5c-6pw5f\" (UID: \"0775f5a9-0672-43b2-9425-ebc191d0f124\") " pod="openshift-image-registry/image-registry-59c6488d5c-6pw5f" Apr 23 17:46:37.763276 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.763278 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0775f5a9-0672-43b2-9425-ebc191d0f124-registry-tls\") pod \"image-registry-59c6488d5c-6pw5f\" (UID: \"0775f5a9-0672-43b2-9425-ebc191d0f124\") " pod="openshift-image-registry/image-registry-59c6488d5c-6pw5f" Apr 23 17:46:37.763517 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.763312 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0775f5a9-0672-43b2-9425-ebc191d0f124-installation-pull-secrets\") pod \"image-registry-59c6488d5c-6pw5f\" (UID: \"0775f5a9-0672-43b2-9425-ebc191d0f124\") " pod="openshift-image-registry/image-registry-59c6488d5c-6pw5f" Apr 23 17:46:37.763517 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.763345 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0775f5a9-0672-43b2-9425-ebc191d0f124-trusted-ca\") pod 
\"image-registry-59c6488d5c-6pw5f\" (UID: \"0775f5a9-0672-43b2-9425-ebc191d0f124\") " pod="openshift-image-registry/image-registry-59c6488d5c-6pw5f" Apr 23 17:46:37.763517 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.763368 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s577d\" (UniqueName: \"kubernetes.io/projected/13a332d6-578a-4838-8bd3-9a2a0eb00e2f-kube-api-access-s577d\") pod \"downloads-6bcc868b7-jklss\" (UID: \"13a332d6-578a-4838-8bd3-9a2a0eb00e2f\") " pod="openshift-console/downloads-6bcc868b7-jklss" Apr 23 17:46:37.763517 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.763409 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0775f5a9-0672-43b2-9425-ebc191d0f124-bound-sa-token\") pod \"image-registry-59c6488d5c-6pw5f\" (UID: \"0775f5a9-0672-43b2-9425-ebc191d0f124\") " pod="openshift-image-registry/image-registry-59c6488d5c-6pw5f" Apr 23 17:46:37.763517 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.763435 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6wr2t\" (UniqueName: \"kubernetes.io/projected/0775f5a9-0672-43b2-9425-ebc191d0f124-kube-api-access-6wr2t\") pod \"image-registry-59c6488d5c-6pw5f\" (UID: \"0775f5a9-0672-43b2-9425-ebc191d0f124\") " pod="openshift-image-registry/image-registry-59c6488d5c-6pw5f" Apr 23 17:46:37.763517 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.763476 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/8abaf28c-2dbf-42c9-af60-3679eeb62d64-tls-certificates\") pod \"prometheus-operator-admission-webhook-57cf98b594-lnsd6\" (UID: \"8abaf28c-2dbf-42c9-af60-3679eeb62d64\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-lnsd6" Apr 23 17:46:37.763517 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.763502 
2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0775f5a9-0672-43b2-9425-ebc191d0f124-registry-certificates\") pod \"image-registry-59c6488d5c-6pw5f\" (UID: \"0775f5a9-0672-43b2-9425-ebc191d0f124\") " pod="openshift-image-registry/image-registry-59c6488d5c-6pw5f" Apr 23 17:46:37.763898 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.763528 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0775f5a9-0672-43b2-9425-ebc191d0f124-ca-trust-extracted\") pod \"image-registry-59c6488d5c-6pw5f\" (UID: \"0775f5a9-0672-43b2-9425-ebc191d0f124\") " pod="openshift-image-registry/image-registry-59c6488d5c-6pw5f" Apr 23 17:46:37.763976 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.763951 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0775f5a9-0672-43b2-9425-ebc191d0f124-ca-trust-extracted\") pod \"image-registry-59c6488d5c-6pw5f\" (UID: \"0775f5a9-0672-43b2-9425-ebc191d0f124\") " pod="openshift-image-registry/image-registry-59c6488d5c-6pw5f" Apr 23 17:46:37.764403 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.764380 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0775f5a9-0672-43b2-9425-ebc191d0f124-registry-certificates\") pod \"image-registry-59c6488d5c-6pw5f\" (UID: \"0775f5a9-0672-43b2-9425-ebc191d0f124\") " pod="openshift-image-registry/image-registry-59c6488d5c-6pw5f" Apr 23 17:46:37.764909 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.764888 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0775f5a9-0672-43b2-9425-ebc191d0f124-trusted-ca\") pod \"image-registry-59c6488d5c-6pw5f\" (UID: 
\"0775f5a9-0672-43b2-9425-ebc191d0f124\") " pod="openshift-image-registry/image-registry-59c6488d5c-6pw5f" Apr 23 17:46:37.766054 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.766034 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/0775f5a9-0672-43b2-9425-ebc191d0f124-image-registry-private-configuration\") pod \"image-registry-59c6488d5c-6pw5f\" (UID: \"0775f5a9-0672-43b2-9425-ebc191d0f124\") " pod="openshift-image-registry/image-registry-59c6488d5c-6pw5f" Apr 23 17:46:37.766146 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.766097 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/8abaf28c-2dbf-42c9-af60-3679eeb62d64-tls-certificates\") pod \"prometheus-operator-admission-webhook-57cf98b594-lnsd6\" (UID: \"8abaf28c-2dbf-42c9-af60-3679eeb62d64\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-lnsd6" Apr 23 17:46:37.766409 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.766391 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0775f5a9-0672-43b2-9425-ebc191d0f124-registry-tls\") pod \"image-registry-59c6488d5c-6pw5f\" (UID: \"0775f5a9-0672-43b2-9425-ebc191d0f124\") " pod="openshift-image-registry/image-registry-59c6488d5c-6pw5f" Apr 23 17:46:37.766616 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.766594 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0775f5a9-0672-43b2-9425-ebc191d0f124-installation-pull-secrets\") pod \"image-registry-59c6488d5c-6pw5f\" (UID: \"0775f5a9-0672-43b2-9425-ebc191d0f124\") " pod="openshift-image-registry/image-registry-59c6488d5c-6pw5f" Apr 23 17:46:37.777218 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.777160 2574 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0775f5a9-0672-43b2-9425-ebc191d0f124-bound-sa-token\") pod \"image-registry-59c6488d5c-6pw5f\" (UID: \"0775f5a9-0672-43b2-9425-ebc191d0f124\") " pod="openshift-image-registry/image-registry-59c6488d5c-6pw5f" Apr 23 17:46:37.777397 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.777375 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wr2t\" (UniqueName: \"kubernetes.io/projected/0775f5a9-0672-43b2-9425-ebc191d0f124-kube-api-access-6wr2t\") pod \"image-registry-59c6488d5c-6pw5f\" (UID: \"0775f5a9-0672-43b2-9425-ebc191d0f124\") " pod="openshift-image-registry/image-registry-59c6488d5c-6pw5f" Apr 23 17:46:37.780586 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.780543 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-cb95c66f6-qtf6n" Apr 23 17:46:37.782852 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.782832 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s577d\" (UniqueName: \"kubernetes.io/projected/13a332d6-578a-4838-8bd3-9a2a0eb00e2f-kube-api-access-s577d\") pod \"downloads-6bcc868b7-jklss\" (UID: \"13a332d6-578a-4838-8bd3-9a2a0eb00e2f\") " pod="openshift-console/downloads-6bcc868b7-jklss" Apr 23 17:46:37.808724 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.808699 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-runtime-extractor-rwkcd" Apr 23 17:46:37.894970 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.894682 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-lnsd6" Apr 23 17:46:37.908119 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.908087 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6bcc868b7-jklss" Apr 23 17:46:37.917095 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.917069 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-cb95c66f6-qtf6n"] Apr 23 17:46:37.920116 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:46:37.920086 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21d8a344_f03b_4bf0_845c_dcc9f5fc81fb.slice/crio-ba4082e4b7f5805ba7a911e600702f02bc176e41b7cd181387dfd996938a5082 WatchSource:0}: Error finding container ba4082e4b7f5805ba7a911e600702f02bc176e41b7cd181387dfd996938a5082: Status 404 returned error can't find the container with id ba4082e4b7f5805ba7a911e600702f02bc176e41b7cd181387dfd996938a5082 Apr 23 17:46:37.927014 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.926989 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-59c6488d5c-6pw5f" Apr 23 17:46:37.979939 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:37.979864 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-rwkcd"] Apr 23 17:46:37.984136 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:46:37.984086 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5a18479_499c_485f_ba5a_83ecc0d54ca4.slice/crio-3881496c0e5803a0d69a5ba722052ede9a734567202f01f7b51bf51cf0ffef9e WatchSource:0}: Error finding container 3881496c0e5803a0d69a5ba722052ede9a734567202f01f7b51bf51cf0ffef9e: Status 404 returned error can't find the container with id 3881496c0e5803a0d69a5ba722052ede9a734567202f01f7b51bf51cf0ffef9e Apr 23 17:46:38.047102 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:38.047073 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-cb95c66f6-qtf6n" event={"ID":"21d8a344-f03b-4bf0-845c-dcc9f5fc81fb","Type":"ContainerStarted","Data":"ba4082e4b7f5805ba7a911e600702f02bc176e41b7cd181387dfd996938a5082"} Apr 23 17:46:38.048581 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:38.048525 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-rwkcd" event={"ID":"f5a18479-499c-485f-ba5a-83ecc0d54ca4","Type":"ContainerStarted","Data":"3881496c0e5803a0d69a5ba722052ede9a734567202f01f7b51bf51cf0ffef9e"} Apr 23 17:46:38.084332 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:38.084192 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-lnsd6"] Apr 23 17:46:38.086760 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:46:38.086703 2574 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8abaf28c_2dbf_42c9_af60_3679eeb62d64.slice/crio-cfffb9c705acbc46d95f2ed0c029e935d1309143bd204aca654f8f1145e11878 WatchSource:0}: Error finding container cfffb9c705acbc46d95f2ed0c029e935d1309143bd204aca654f8f1145e11878: Status 404 returned error can't find the container with id cfffb9c705acbc46d95f2ed0c029e935d1309143bd204aca654f8f1145e11878 Apr 23 17:46:38.091944 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:38.091918 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-6bcc868b7-jklss"] Apr 23 17:46:38.095001 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:46:38.094975 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13a332d6_578a_4838_8bd3_9a2a0eb00e2f.slice/crio-d8bb158b8848a73ea1f227c0634cb8f2179d0a935749934e232c7c3956c9e761 WatchSource:0}: Error finding container d8bb158b8848a73ea1f227c0634cb8f2179d0a935749934e232c7c3956c9e761: Status 404 returned error can't find the container with id d8bb158b8848a73ea1f227c0634cb8f2179d0a935749934e232c7c3956c9e761 Apr 23 17:46:38.125477 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:38.125447 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-59c6488d5c-6pw5f"] Apr 23 17:46:38.127045 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:46:38.127022 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0775f5a9_0672_43b2_9425_ebc191d0f124.slice/crio-8b6e70d6f982573ee47c32379af9d1029300de89c52b7e4f2490cf6807bbc6a8 WatchSource:0}: Error finding container 8b6e70d6f982573ee47c32379af9d1029300de89c52b7e4f2490cf6807bbc6a8: Status 404 returned error can't find the container with id 8b6e70d6f982573ee47c32379af9d1029300de89c52b7e4f2490cf6807bbc6a8 Apr 23 17:46:38.262655 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:38.262596 
2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:46:38.262655 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:46:38.262655 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:46:38.262655 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:46:38.263253 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:38.262670 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:46:39.052559 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:39.052517 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-6bcc868b7-jklss" event={"ID":"13a332d6-578a-4838-8bd3-9a2a0eb00e2f","Type":"ContainerStarted","Data":"d8bb158b8848a73ea1f227c0634cb8f2179d0a935749934e232c7c3956c9e761"} Apr 23 17:46:39.054098 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:39.054067 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-rwkcd" event={"ID":"f5a18479-499c-485f-ba5a-83ecc0d54ca4","Type":"ContainerStarted","Data":"be36052a52c6b7201307cd395e057e03bf36d1a1beb1cada9bd2a9f445c0ecbc"} Apr 23 17:46:39.055951 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:39.055923 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-59c6488d5c-6pw5f" event={"ID":"0775f5a9-0672-43b2-9425-ebc191d0f124","Type":"ContainerStarted","Data":"8aa5f4402d71f89c8b540a4c226f33e7202b063a8eee4a83632560590127d7f2"} Apr 23 17:46:39.056051 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:39.055959 2574 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-image-registry/image-registry-59c6488d5c-6pw5f" event={"ID":"0775f5a9-0672-43b2-9425-ebc191d0f124","Type":"ContainerStarted","Data":"8b6e70d6f982573ee47c32379af9d1029300de89c52b7e4f2490cf6807bbc6a8"} Apr 23 17:46:39.056110 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:39.056073 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-59c6488d5c-6pw5f" Apr 23 17:46:39.057370 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:39.057342 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-lnsd6" event={"ID":"8abaf28c-2dbf-42c9-af60-3679eeb62d64","Type":"ContainerStarted","Data":"cfffb9c705acbc46d95f2ed0c029e935d1309143bd204aca654f8f1145e11878"} Apr 23 17:46:39.082244 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:39.081025 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-59c6488d5c-6pw5f" podStartSLOduration=2.081008885 podStartE2EDuration="2.081008885s" podCreationTimestamp="2026-04-23 17:46:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 17:46:39.079510193 +0000 UTC m=+333.501179174" watchObservedRunningTime="2026-04-23 17:46:39.081008885 +0000 UTC m=+333.502677845" Apr 23 17:46:39.262444 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:39.262399 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:46:39.262444 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:46:39.262444 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:46:39.262444 ip-10-0-139-215 
kubenswrapper[2574]: healthz check failed Apr 23 17:46:39.262608 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:39.262443 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:46:40.062612 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:40.062566 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-cb95c66f6-qtf6n" event={"ID":"21d8a344-f03b-4bf0-845c-dcc9f5fc81fb","Type":"ContainerStarted","Data":"94346cd9630aac8e27c45c88c9e17c850bcb1895b583056fb3c1fd937785cb1e"} Apr 23 17:46:40.064776 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:40.064746 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-rwkcd" event={"ID":"f5a18479-499c-485f-ba5a-83ecc0d54ca4","Type":"ContainerStarted","Data":"4112e41209c64e70bb2e6b42e968340334a49ccb3228a4a456ea20a6cb066d06"} Apr 23 17:46:40.066402 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:40.066370 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-lnsd6" event={"ID":"8abaf28c-2dbf-42c9-af60-3679eeb62d64","Type":"ContainerStarted","Data":"fa37ccf42a9e6387012b1f42514292184b2648b2d5319c9ecc65d28a9a2a4787"} Apr 23 17:46:40.066652 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:40.066615 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-lnsd6" Apr 23 17:46:40.072593 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:40.072574 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-lnsd6" Apr 23 17:46:40.082384 ip-10-0-139-215 kubenswrapper[2574]: I0423 
17:46:40.082208 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-cb95c66f6-qtf6n" podStartSLOduration=1.749540857 podStartE2EDuration="3.082191398s" podCreationTimestamp="2026-04-23 17:46:37 +0000 UTC" firstStartedPulling="2026-04-23 17:46:37.9222989 +0000 UTC m=+332.343967841" lastFinishedPulling="2026-04-23 17:46:39.25494924 +0000 UTC m=+333.676618382" observedRunningTime="2026-04-23 17:46:40.080957393 +0000 UTC m=+334.502626354" watchObservedRunningTime="2026-04-23 17:46:40.082191398 +0000 UTC m=+334.503860357" Apr 23 17:46:40.263071 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:40.263011 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:46:40.263071 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:46:40.263071 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:46:40.263071 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:46:40.263364 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:40.263075 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:46:40.454271 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:40.454207 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-lnsd6" podStartSLOduration=2.287942327 podStartE2EDuration="3.454186393s" podCreationTimestamp="2026-04-23 17:46:37 +0000 UTC" firstStartedPulling="2026-04-23 17:46:38.091380724 +0000 UTC m=+332.513049669" lastFinishedPulling="2026-04-23 
17:46:39.257624797 +0000 UTC m=+333.679293735" observedRunningTime="2026-04-23 17:46:40.09976296 +0000 UTC m=+334.521431920" watchObservedRunningTime="2026-04-23 17:46:40.454186393 +0000 UTC m=+334.875855356" Apr 23 17:46:40.455130 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:40.455103 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-5676c8c784-c9r2d"] Apr 23 17:46:40.458741 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:40.458719 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-5676c8c784-c9r2d" Apr 23 17:46:40.464692 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:40.463658 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-tls\"" Apr 23 17:46:40.464692 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:40.463733 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kube-root-ca.crt\"" Apr 23 17:46:40.464692 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:40.463978 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-dockercfg-jlqcm\"" Apr 23 17:46:40.464692 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:40.464107 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-kube-rbac-proxy-config\"" Apr 23 17:46:40.464692 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:40.464297 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"metrics-client-ca\"" Apr 23 17:46:40.464692 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:40.464474 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"openshift-service-ca.crt\"" Apr 23 17:46:40.470608 ip-10-0-139-215 
kubenswrapper[2574]: I0423 17:46:40.470575 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-5676c8c784-c9r2d"] Apr 23 17:46:40.584940 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:40.584891 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fcf3918d-5f1c-49dc-995d-7e8153dcee95-metrics-client-ca\") pod \"prometheus-operator-5676c8c784-c9r2d\" (UID: \"fcf3918d-5f1c-49dc-995d-7e8153dcee95\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-c9r2d" Apr 23 17:46:40.585123 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:40.584977 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/fcf3918d-5f1c-49dc-995d-7e8153dcee95-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5676c8c784-c9r2d\" (UID: \"fcf3918d-5f1c-49dc-995d-7e8153dcee95\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-c9r2d" Apr 23 17:46:40.585123 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:40.585037 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/fcf3918d-5f1c-49dc-995d-7e8153dcee95-prometheus-operator-tls\") pod \"prometheus-operator-5676c8c784-c9r2d\" (UID: \"fcf3918d-5f1c-49dc-995d-7e8153dcee95\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-c9r2d" Apr 23 17:46:40.585123 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:40.585064 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcg9f\" (UniqueName: \"kubernetes.io/projected/fcf3918d-5f1c-49dc-995d-7e8153dcee95-kube-api-access-rcg9f\") pod \"prometheus-operator-5676c8c784-c9r2d\" (UID: 
\"fcf3918d-5f1c-49dc-995d-7e8153dcee95\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-c9r2d" Apr 23 17:46:40.687766 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:40.686063 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fcf3918d-5f1c-49dc-995d-7e8153dcee95-metrics-client-ca\") pod \"prometheus-operator-5676c8c784-c9r2d\" (UID: \"fcf3918d-5f1c-49dc-995d-7e8153dcee95\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-c9r2d" Apr 23 17:46:40.687766 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:40.686135 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/fcf3918d-5f1c-49dc-995d-7e8153dcee95-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5676c8c784-c9r2d\" (UID: \"fcf3918d-5f1c-49dc-995d-7e8153dcee95\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-c9r2d" Apr 23 17:46:40.687766 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:40.686197 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/fcf3918d-5f1c-49dc-995d-7e8153dcee95-prometheus-operator-tls\") pod \"prometheus-operator-5676c8c784-c9r2d\" (UID: \"fcf3918d-5f1c-49dc-995d-7e8153dcee95\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-c9r2d" Apr 23 17:46:40.687766 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:40.686217 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rcg9f\" (UniqueName: \"kubernetes.io/projected/fcf3918d-5f1c-49dc-995d-7e8153dcee95-kube-api-access-rcg9f\") pod \"prometheus-operator-5676c8c784-c9r2d\" (UID: \"fcf3918d-5f1c-49dc-995d-7e8153dcee95\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-c9r2d" Apr 23 17:46:40.687766 ip-10-0-139-215 kubenswrapper[2574]: 
E0423 17:46:40.686583 2574 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Apr 23 17:46:40.687766 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:46:40.686682 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fcf3918d-5f1c-49dc-995d-7e8153dcee95-prometheus-operator-tls podName:fcf3918d-5f1c-49dc-995d-7e8153dcee95 nodeName:}" failed. No retries permitted until 2026-04-23 17:46:41.186661223 +0000 UTC m=+335.608330160 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/fcf3918d-5f1c-49dc-995d-7e8153dcee95-prometheus-operator-tls") pod "prometheus-operator-5676c8c784-c9r2d" (UID: "fcf3918d-5f1c-49dc-995d-7e8153dcee95") : secret "prometheus-operator-tls" not found Apr 23 17:46:40.687766 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:40.687712 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fcf3918d-5f1c-49dc-995d-7e8153dcee95-metrics-client-ca\") pod \"prometheus-operator-5676c8c784-c9r2d\" (UID: \"fcf3918d-5f1c-49dc-995d-7e8153dcee95\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-c9r2d" Apr 23 17:46:40.691461 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:40.691415 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/fcf3918d-5f1c-49dc-995d-7e8153dcee95-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5676c8c784-c9r2d\" (UID: \"fcf3918d-5f1c-49dc-995d-7e8153dcee95\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-c9r2d" Apr 23 17:46:40.696887 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:40.696845 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcg9f\" (UniqueName: 
\"kubernetes.io/projected/fcf3918d-5f1c-49dc-995d-7e8153dcee95-kube-api-access-rcg9f\") pod \"prometheus-operator-5676c8c784-c9r2d\" (UID: \"fcf3918d-5f1c-49dc-995d-7e8153dcee95\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-c9r2d" Apr 23 17:46:41.072041 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:41.072001 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-rwkcd" event={"ID":"f5a18479-499c-485f-ba5a-83ecc0d54ca4","Type":"ContainerStarted","Data":"73f27f27d630111b5ba3b8ae4343c3986f14940a5ad34f8c131c328a936a6a4c"} Apr 23 17:46:41.102503 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:41.102450 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-runtime-extractor-rwkcd" podStartSLOduration=1.243127448 podStartE2EDuration="4.102435862s" podCreationTimestamp="2026-04-23 17:46:37 +0000 UTC" firstStartedPulling="2026-04-23 17:46:38.057028207 +0000 UTC m=+332.478697144" lastFinishedPulling="2026-04-23 17:46:40.916336576 +0000 UTC m=+335.338005558" observedRunningTime="2026-04-23 17:46:41.101395747 +0000 UTC m=+335.523064718" watchObservedRunningTime="2026-04-23 17:46:41.102435862 +0000 UTC m=+335.524104821" Apr 23 17:46:41.189520 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:41.189437 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/fcf3918d-5f1c-49dc-995d-7e8153dcee95-prometheus-operator-tls\") pod \"prometheus-operator-5676c8c784-c9r2d\" (UID: \"fcf3918d-5f1c-49dc-995d-7e8153dcee95\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-c9r2d" Apr 23 17:46:41.192476 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:41.192445 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/fcf3918d-5f1c-49dc-995d-7e8153dcee95-prometheus-operator-tls\") pod 
\"prometheus-operator-5676c8c784-c9r2d\" (UID: \"fcf3918d-5f1c-49dc-995d-7e8153dcee95\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-c9r2d" Apr 23 17:46:41.262068 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:41.262021 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:46:41.262068 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:46:41.262068 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:46:41.262068 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:46:41.262350 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:41.262100 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:46:41.372051 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:41.372018 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-5676c8c784-c9r2d" Apr 23 17:46:41.510205 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:41.510066 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-5676c8c784-c9r2d"] Apr 23 17:46:41.513281 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:46:41.513251 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcf3918d_5f1c_49dc_995d_7e8153dcee95.slice/crio-3610fbf97ed06cea61e1eb2716b6d4616cb6a0ffc4e51762352790d8a1e5463d WatchSource:0}: Error finding container 3610fbf97ed06cea61e1eb2716b6d4616cb6a0ffc4e51762352790d8a1e5463d: Status 404 returned error can't find the container with id 3610fbf97ed06cea61e1eb2716b6d4616cb6a0ffc4e51762352790d8a1e5463d Apr 23 17:46:42.076222 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:42.076185 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5676c8c784-c9r2d" event={"ID":"fcf3918d-5f1c-49dc-995d-7e8153dcee95","Type":"ContainerStarted","Data":"3610fbf97ed06cea61e1eb2716b6d4616cb6a0ffc4e51762352790d8a1e5463d"} Apr 23 17:46:42.263068 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:42.263031 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:46:42.263068 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:46:42.263068 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:46:42.263068 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:46:42.263400 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:42.263093 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" 
podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:46:43.081389 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:43.081293 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5676c8c784-c9r2d" event={"ID":"fcf3918d-5f1c-49dc-995d-7e8153dcee95","Type":"ContainerStarted","Data":"6e84184158256e7801fd7def0cc7811a9f2c57a57c22c3170c1aee2d967350b4"} Apr 23 17:46:43.081389 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:43.081339 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5676c8c784-c9r2d" event={"ID":"fcf3918d-5f1c-49dc-995d-7e8153dcee95","Type":"ContainerStarted","Data":"e8463eeae5b60df18ada1deb5e390996c6e5d35b488c5efdb9394aaf9c76a1af"} Apr 23 17:46:43.118365 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:43.118245 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-5676c8c784-c9r2d" podStartSLOduration=1.948409395 podStartE2EDuration="3.118226166s" podCreationTimestamp="2026-04-23 17:46:40 +0000 UTC" firstStartedPulling="2026-04-23 17:46:41.515367388 +0000 UTC m=+335.937036324" lastFinishedPulling="2026-04-23 17:46:42.685184132 +0000 UTC m=+337.106853095" observedRunningTime="2026-04-23 17:46:43.116307181 +0000 UTC m=+337.537976142" watchObservedRunningTime="2026-04-23 17:46:43.118226166 +0000 UTC m=+337.539895126" Apr 23 17:46:43.262418 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:43.262383 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:46:43.262418 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:46:43.262418 ip-10-0-139-215 kubenswrapper[2574]: 
[+]process-running ok Apr 23 17:46:43.262418 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:46:43.262714 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:43.262448 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:46:44.262412 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:44.262381 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:46:44.262412 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:46:44.262412 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:46:44.262412 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:46:44.262955 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:44.262439 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:46:45.097837 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:45.097795 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-7przb"] Apr 23 17:46:45.102959 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:45.102934 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-7przb" Apr 23 17:46:45.109327 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:45.109235 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-tls\"" Apr 23 17:46:45.109960 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:45.109452 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-kube-rbac-proxy-config\"" Apr 23 17:46:45.109960 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:45.109668 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"node-exporter-accelerators-collector-config\"" Apr 23 17:46:45.109960 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:45.109833 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-dockercfg-fbl2d\"" Apr 23 17:46:45.227158 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:45.227109 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/51f42b9a-8b48-44d4-b4c8-1ffc6a890c24-node-exporter-accelerators-collector-config\") pod \"node-exporter-7przb\" (UID: \"51f42b9a-8b48-44d4-b4c8-1ffc6a890c24\") " pod="openshift-monitoring/node-exporter-7przb" Apr 23 17:46:45.227343 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:45.227173 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/51f42b9a-8b48-44d4-b4c8-1ffc6a890c24-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-7przb\" (UID: \"51f42b9a-8b48-44d4-b4c8-1ffc6a890c24\") " pod="openshift-monitoring/node-exporter-7przb" Apr 23 17:46:45.227343 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:45.227199 2574 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/51f42b9a-8b48-44d4-b4c8-1ffc6a890c24-node-exporter-tls\") pod \"node-exporter-7przb\" (UID: \"51f42b9a-8b48-44d4-b4c8-1ffc6a890c24\") " pod="openshift-monitoring/node-exporter-7przb" Apr 23 17:46:45.227343 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:45.227241 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/51f42b9a-8b48-44d4-b4c8-1ffc6a890c24-root\") pod \"node-exporter-7przb\" (UID: \"51f42b9a-8b48-44d4-b4c8-1ffc6a890c24\") " pod="openshift-monitoring/node-exporter-7przb" Apr 23 17:46:45.227343 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:45.227264 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctjwg\" (UniqueName: \"kubernetes.io/projected/51f42b9a-8b48-44d4-b4c8-1ffc6a890c24-kube-api-access-ctjwg\") pod \"node-exporter-7przb\" (UID: \"51f42b9a-8b48-44d4-b4c8-1ffc6a890c24\") " pod="openshift-monitoring/node-exporter-7przb" Apr 23 17:46:45.227343 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:45.227297 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/51f42b9a-8b48-44d4-b4c8-1ffc6a890c24-node-exporter-wtmp\") pod \"node-exporter-7przb\" (UID: \"51f42b9a-8b48-44d4-b4c8-1ffc6a890c24\") " pod="openshift-monitoring/node-exporter-7przb" Apr 23 17:46:45.227343 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:45.227320 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/51f42b9a-8b48-44d4-b4c8-1ffc6a890c24-metrics-client-ca\") pod \"node-exporter-7przb\" (UID: \"51f42b9a-8b48-44d4-b4c8-1ffc6a890c24\") " 
pod="openshift-monitoring/node-exporter-7przb" Apr 23 17:46:45.227565 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:45.227373 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/51f42b9a-8b48-44d4-b4c8-1ffc6a890c24-node-exporter-textfile\") pod \"node-exporter-7przb\" (UID: \"51f42b9a-8b48-44d4-b4c8-1ffc6a890c24\") " pod="openshift-monitoring/node-exporter-7przb" Apr 23 17:46:45.227565 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:45.227419 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/51f42b9a-8b48-44d4-b4c8-1ffc6a890c24-sys\") pod \"node-exporter-7przb\" (UID: \"51f42b9a-8b48-44d4-b4c8-1ffc6a890c24\") " pod="openshift-monitoring/node-exporter-7przb" Apr 23 17:46:45.262487 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:45.262451 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:46:45.262487 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:46:45.262487 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:46:45.262487 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:46:45.263018 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:45.262519 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:46:45.328255 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:45.328221 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/51f42b9a-8b48-44d4-b4c8-1ffc6a890c24-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-7przb\" (UID: \"51f42b9a-8b48-44d4-b4c8-1ffc6a890c24\") " pod="openshift-monitoring/node-exporter-7przb" Apr 23 17:46:45.328255 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:45.328260 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/51f42b9a-8b48-44d4-b4c8-1ffc6a890c24-node-exporter-tls\") pod \"node-exporter-7przb\" (UID: \"51f42b9a-8b48-44d4-b4c8-1ffc6a890c24\") " pod="openshift-monitoring/node-exporter-7przb" Apr 23 17:46:45.328509 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:45.328315 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/51f42b9a-8b48-44d4-b4c8-1ffc6a890c24-root\") pod \"node-exporter-7przb\" (UID: \"51f42b9a-8b48-44d4-b4c8-1ffc6a890c24\") " pod="openshift-monitoring/node-exporter-7przb" Apr 23 17:46:45.328509 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:45.328342 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ctjwg\" (UniqueName: \"kubernetes.io/projected/51f42b9a-8b48-44d4-b4c8-1ffc6a890c24-kube-api-access-ctjwg\") pod \"node-exporter-7przb\" (UID: \"51f42b9a-8b48-44d4-b4c8-1ffc6a890c24\") " pod="openshift-monitoring/node-exporter-7przb" Apr 23 17:46:45.328509 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:45.328381 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/51f42b9a-8b48-44d4-b4c8-1ffc6a890c24-node-exporter-wtmp\") pod \"node-exporter-7przb\" (UID: \"51f42b9a-8b48-44d4-b4c8-1ffc6a890c24\") " pod="openshift-monitoring/node-exporter-7przb" Apr 23 17:46:45.328509 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:45.328405 2574 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/51f42b9a-8b48-44d4-b4c8-1ffc6a890c24-metrics-client-ca\") pod \"node-exporter-7przb\" (UID: \"51f42b9a-8b48-44d4-b4c8-1ffc6a890c24\") " pod="openshift-monitoring/node-exporter-7przb" Apr 23 17:46:45.328509 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:45.328436 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/51f42b9a-8b48-44d4-b4c8-1ffc6a890c24-node-exporter-textfile\") pod \"node-exporter-7przb\" (UID: \"51f42b9a-8b48-44d4-b4c8-1ffc6a890c24\") " pod="openshift-monitoring/node-exporter-7przb" Apr 23 17:46:45.328509 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:45.328489 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/51f42b9a-8b48-44d4-b4c8-1ffc6a890c24-sys\") pod \"node-exporter-7przb\" (UID: \"51f42b9a-8b48-44d4-b4c8-1ffc6a890c24\") " pod="openshift-monitoring/node-exporter-7przb" Apr 23 17:46:45.328992 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:45.328517 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/51f42b9a-8b48-44d4-b4c8-1ffc6a890c24-node-exporter-accelerators-collector-config\") pod \"node-exporter-7przb\" (UID: \"51f42b9a-8b48-44d4-b4c8-1ffc6a890c24\") " pod="openshift-monitoring/node-exporter-7przb" Apr 23 17:46:45.328992 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:45.328532 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/51f42b9a-8b48-44d4-b4c8-1ffc6a890c24-root\") pod \"node-exporter-7przb\" (UID: \"51f42b9a-8b48-44d4-b4c8-1ffc6a890c24\") " pod="openshift-monitoring/node-exporter-7przb" Apr 23 17:46:45.328992 ip-10-0-139-215 
kubenswrapper[2574]: E0423 17:46:45.328662 2574 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found Apr 23 17:46:45.328992 ip-10-0-139-215 kubenswrapper[2574]: E0423 17:46:45.328720 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51f42b9a-8b48-44d4-b4c8-1ffc6a890c24-node-exporter-tls podName:51f42b9a-8b48-44d4-b4c8-1ffc6a890c24 nodeName:}" failed. No retries permitted until 2026-04-23 17:46:45.828698918 +0000 UTC m=+340.250367880 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/51f42b9a-8b48-44d4-b4c8-1ffc6a890c24-node-exporter-tls") pod "node-exporter-7przb" (UID: "51f42b9a-8b48-44d4-b4c8-1ffc6a890c24") : secret "node-exporter-tls" not found Apr 23 17:46:45.328992 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:45.328807 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/51f42b9a-8b48-44d4-b4c8-1ffc6a890c24-sys\") pod \"node-exporter-7przb\" (UID: \"51f42b9a-8b48-44d4-b4c8-1ffc6a890c24\") " pod="openshift-monitoring/node-exporter-7przb" Apr 23 17:46:45.329230 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:45.329079 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/51f42b9a-8b48-44d4-b4c8-1ffc6a890c24-node-exporter-wtmp\") pod \"node-exporter-7przb\" (UID: \"51f42b9a-8b48-44d4-b4c8-1ffc6a890c24\") " pod="openshift-monitoring/node-exporter-7przb" Apr 23 17:46:45.329524 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:45.329478 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/51f42b9a-8b48-44d4-b4c8-1ffc6a890c24-node-exporter-textfile\") pod \"node-exporter-7przb\" (UID: \"51f42b9a-8b48-44d4-b4c8-1ffc6a890c24\") " 
pod="openshift-monitoring/node-exporter-7przb" Apr 23 17:46:45.329857 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:45.329832 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/51f42b9a-8b48-44d4-b4c8-1ffc6a890c24-metrics-client-ca\") pod \"node-exporter-7przb\" (UID: \"51f42b9a-8b48-44d4-b4c8-1ffc6a890c24\") " pod="openshift-monitoring/node-exporter-7przb" Apr 23 17:46:45.330075 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:45.330045 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/51f42b9a-8b48-44d4-b4c8-1ffc6a890c24-node-exporter-accelerators-collector-config\") pod \"node-exporter-7przb\" (UID: \"51f42b9a-8b48-44d4-b4c8-1ffc6a890c24\") " pod="openshift-monitoring/node-exporter-7przb" Apr 23 17:46:45.331535 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:45.331514 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/51f42b9a-8b48-44d4-b4c8-1ffc6a890c24-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-7przb\" (UID: \"51f42b9a-8b48-44d4-b4c8-1ffc6a890c24\") " pod="openshift-monitoring/node-exporter-7przb" Apr 23 17:46:45.366260 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:45.366186 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctjwg\" (UniqueName: \"kubernetes.io/projected/51f42b9a-8b48-44d4-b4c8-1ffc6a890c24-kube-api-access-ctjwg\") pod \"node-exporter-7przb\" (UID: \"51f42b9a-8b48-44d4-b4c8-1ffc6a890c24\") " pod="openshift-monitoring/node-exporter-7przb" Apr 23 17:46:45.832833 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:45.832784 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: 
\"kubernetes.io/secret/51f42b9a-8b48-44d4-b4c8-1ffc6a890c24-node-exporter-tls\") pod \"node-exporter-7przb\" (UID: \"51f42b9a-8b48-44d4-b4c8-1ffc6a890c24\") " pod="openshift-monitoring/node-exporter-7przb" Apr 23 17:46:45.835840 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:45.835810 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/51f42b9a-8b48-44d4-b4c8-1ffc6a890c24-node-exporter-tls\") pod \"node-exporter-7przb\" (UID: \"51f42b9a-8b48-44d4-b4c8-1ffc6a890c24\") " pod="openshift-monitoring/node-exporter-7przb" Apr 23 17:46:46.014319 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:46.014283 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-7przb" Apr 23 17:46:46.024501 ip-10-0-139-215 kubenswrapper[2574]: W0423 17:46:46.024458 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51f42b9a_8b48_44d4_b4c8_1ffc6a890c24.slice/crio-3baf342097e22e8aa210449ecc3072d6cf9154e8f5d6ecaef486a3e4c760dbf7 WatchSource:0}: Error finding container 3baf342097e22e8aa210449ecc3072d6cf9154e8f5d6ecaef486a3e4c760dbf7: Status 404 returned error can't find the container with id 3baf342097e22e8aa210449ecc3072d6cf9154e8f5d6ecaef486a3e4c760dbf7 Apr 23 17:46:46.091688 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:46.091567 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-7przb" event={"ID":"51f42b9a-8b48-44d4-b4c8-1ffc6a890c24","Type":"ContainerStarted","Data":"3baf342097e22e8aa210449ecc3072d6cf9154e8f5d6ecaef486a3e4c760dbf7"} Apr 23 17:46:46.262269 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:46.262203 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:46:46.262269 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:46:46.262269 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:46:46.262269 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:46:46.262539 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:46.262273 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:46:47.096768 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:47.096683 2574 generic.go:358] "Generic (PLEG): container finished" podID="51f42b9a-8b48-44d4-b4c8-1ffc6a890c24" containerID="59e3d0034fa0d2dd3bfb4e2a7d1660a7a9923a3b58e7fa510535e7abff75e826" exitCode=0 Apr 23 17:46:47.096768 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:47.096721 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-7przb" event={"ID":"51f42b9a-8b48-44d4-b4c8-1ffc6a890c24","Type":"ContainerDied","Data":"59e3d0034fa0d2dd3bfb4e2a7d1660a7a9923a3b58e7fa510535e7abff75e826"} Apr 23 17:46:47.262349 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:47.262311 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:46:47.262349 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:46:47.262349 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:46:47.262349 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:46:47.262555 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:47.262378 2574 prober.go:120] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:46:48.101601 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:48.101562 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-7przb" event={"ID":"51f42b9a-8b48-44d4-b4c8-1ffc6a890c24","Type":"ContainerStarted","Data":"016ec1177097678d33f034ca5ecd1ccbd4caae59e036d4143e8a654600da7ddc"} Apr 23 17:46:48.101808 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:48.101610 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-7przb" event={"ID":"51f42b9a-8b48-44d4-b4c8-1ffc6a890c24","Type":"ContainerStarted","Data":"98623ce32ddae065fbd358d892c9bf509a4addf1d89abbfaeb697689401ac2c4"} Apr 23 17:46:48.135218 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:48.135046 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-7przb" podStartSLOduration=2.400030128 podStartE2EDuration="3.135025528s" podCreationTimestamp="2026-04-23 17:46:45 +0000 UTC" firstStartedPulling="2026-04-23 17:46:46.026690261 +0000 UTC m=+340.448359206" lastFinishedPulling="2026-04-23 17:46:46.761685668 +0000 UTC m=+341.183354606" observedRunningTime="2026-04-23 17:46:48.132187359 +0000 UTC m=+342.553856318" watchObservedRunningTime="2026-04-23 17:46:48.135025528 +0000 UTC m=+342.556694487" Apr 23 17:46:48.262152 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:48.262118 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:46:48.262152 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:46:48.262152 ip-10-0-139-215 kubenswrapper[2574]: 
[+]process-running ok Apr 23 17:46:48.262152 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:46:48.262427 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:48.262175 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:46:49.262287 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:49.262250 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:46:49.262287 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:46:49.262287 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:46:49.262287 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:46:49.262833 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:49.262315 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:46:50.262930 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:50.262891 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:46:50.262930 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:46:50.262930 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:46:50.262930 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:46:50.263421 ip-10-0-139-215 
kubenswrapper[2574]: I0423 17:46:50.262950 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:46:51.262502 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:51.262467 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:46:51.262502 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:46:51.262502 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:46:51.262502 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:46:51.262785 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:51.262524 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:46:52.262233 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:52.261995 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:46:52.262233 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:46:52.262233 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:46:52.262233 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:46:52.262233 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:52.262061 2574 prober.go:120] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:46:53.263062 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:53.263028 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:46:53.263062 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:46:53.263062 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:46:53.263062 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:46:53.263682 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:53.263090 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:46:54.262277 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:54.262249 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:46:54.262277 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:46:54.262277 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:46:54.262277 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:46:54.262504 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:54.262304 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Apr 23 17:46:55.125754 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:55.125716 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-6bcc868b7-jklss" event={"ID":"13a332d6-578a-4838-8bd3-9a2a0eb00e2f","Type":"ContainerStarted","Data":"0e47c0a26bf8f2a525c73de9e88465af416fc3edc8bc0fee579fcfae11494bf1"} Apr 23 17:46:55.126195 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:55.125978 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-6bcc868b7-jklss" Apr 23 17:46:55.144503 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:55.144472 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-6bcc868b7-jklss" Apr 23 17:46:55.162335 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:55.162277 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-6bcc868b7-jklss" podStartSLOduration=2.091978762 podStartE2EDuration="18.162261816s" podCreationTimestamp="2026-04-23 17:46:37 +0000 UTC" firstStartedPulling="2026-04-23 17:46:38.096886939 +0000 UTC m=+332.518555876" lastFinishedPulling="2026-04-23 17:46:54.167169979 +0000 UTC m=+348.588838930" observedRunningTime="2026-04-23 17:46:55.161119542 +0000 UTC m=+349.582788503" watchObservedRunningTime="2026-04-23 17:46:55.162261816 +0000 UTC m=+349.583930775" Apr 23 17:46:55.262620 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:55.262576 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:46:55.262620 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:46:55.262620 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:46:55.262620 
ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:46:55.262910 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:55.262670 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:46:56.262816 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:56.262778 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:46:56.262816 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:46:56.262816 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:46:56.262816 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:46:56.263360 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:56.262845 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:46:57.262779 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:57.262743 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:46:57.262779 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:46:57.262779 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:46:57.262779 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:46:57.263296 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:57.262810 2574 
prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:46:58.262877 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:58.262836 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:46:58.262877 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:46:58.262877 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:46:58.262877 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:46:58.263432 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:58.262902 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:46:59.262279 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:59.262240 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:46:59.262279 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:46:59.262279 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:46:59.262279 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:46:59.262717 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:46:59.262308 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:47:00.071452 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:00.071415 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-59c6488d5c-6pw5f" Apr 23 17:47:00.262679 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:00.262615 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:47:00.262679 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:47:00.262679 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:47:00.262679 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:47:00.263002 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:00.262727 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:47:01.262307 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:01.262273 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:47:01.262307 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:47:01.262307 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:47:01.262307 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:47:01.262862 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:01.262377 2574 prober.go:120] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:47:02.262417 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:02.262378 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:47:02.262417 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:47:02.262417 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:47:02.262417 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:47:02.262953 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:02.262448 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:47:03.262348 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:03.262302 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:47:03.262348 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:47:03.262348 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:47:03.262348 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:47:03.262896 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:03.262401 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Apr 23 17:47:04.262878 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:04.262843 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:47:04.262878 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:47:04.262878 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:47:04.262878 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:47:04.263310 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:04.262897 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:47:05.262936 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:05.262898 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:47:05.262936 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:47:05.262936 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:47:05.262936 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:47:05.263522 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:05.262963 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:47:06.262499 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:06.262464 2574 
patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:06.262499 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:06.262499 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:06.262499 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:06.262782 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:06.262531 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:07.262049 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:07.262013 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:07.262049 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:07.262049 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:07.262049 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:07.262514 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:07.262076 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:08.262599 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:08.262564 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:08.262599 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:08.262599 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:08.262599 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:08.263064 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:08.262621 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:09.262388 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:09.262351 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:09.262388 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:09.262388 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:09.262388 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:09.262625 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:09.262407 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:10.262684 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:10.262620 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:10.262684 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:10.262684 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:10.262684 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:10.263122 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:10.262706 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:11.262795 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:11.262760 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:11.262795 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:11.262795 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:11.262795 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:11.263311 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:11.262815 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:12.262480 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:12.262440 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:12.262480 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:12.262480 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:12.262480 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:12.262737 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:12.262501 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:13.262379 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:13.262344 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:13.262379 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:13.262379 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:13.262379 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:13.262857 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:13.262395 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:14.262314 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:14.262281 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:14.262314 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:14.262314 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:14.262314 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:14.262825 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:14.262332 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:15.262137 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:15.262096 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:15.262137 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:15.262137 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:15.262137 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:15.262392 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:15.262167 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:16.262887 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:16.262854 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:16.262887 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:16.262887 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:16.262887 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:16.263342 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:16.262909 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:17.262704 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:17.262672 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:17.262704 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:17.262704 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:17.262704 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:17.263149 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:17.262723 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:18.262878 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:18.262842 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:18.262878 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:18.262878 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:18.262878 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:18.263412 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:18.262916 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:19.262761 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:19.262727 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:19.262761 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:19.262761 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:19.262761 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:19.263016 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:19.262778 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:20.262730 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:20.262693 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:20.262730 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:20.262730 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:20.262730 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:20.263161 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:20.262755 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:21.262288 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:21.262254 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:21.262288 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:21.262288 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:21.262288 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:21.262527 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:21.262308 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:22.261832 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:22.261799 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:22.261832 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:22.261832 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:22.261832 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:22.262269 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:22.261852 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:23.262260 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:23.262226 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:23.262260 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:23.262260 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:23.262260 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:23.262721 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:23.262279 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:24.262411 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:24.262378 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:24.262411 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:24.262411 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:24.262411 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:24.262854 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:24.262434 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:25.262040 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:25.262001 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:25.262040 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:25.262040 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:25.262040 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:25.262282 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:25.262057 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:26.262837 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:26.262805 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:26.262837 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:26.262837 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:26.262837 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:26.263266 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:26.262859 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:27.262714 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:27.262668 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:27.262714 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:27.262714 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:27.262714 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:27.263157 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:27.262740 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:28.262314 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:28.262276 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:28.262314 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:28.262314 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:28.262314 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:28.262557 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:28.262344 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:29.262226 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:29.262187 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:29.262226 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:29.262226 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:29.262226 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:29.262692 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:29.262258 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:30.262044 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:30.262011 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:30.262044 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:30.262044 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:30.262044 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:30.262484 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:30.262070 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:31.262722 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:31.262685 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:31.262722 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:31.262722 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:31.262722 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:31.263251 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:31.262763 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:32.262223 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:32.262187 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:32.262223 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:32.262223 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:32.262223 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:32.262471 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:32.262256 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:33.262418 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:33.262384 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:33.262418 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:33.262418 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:33.262418 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:33.262958 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:33.262438 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:34.262649 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:34.262594 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:34.262649 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:34.262649 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:34.262649 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:34.263085 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:34.262669 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:35.262701 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:35.262661 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:35.262701 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:35.262701 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:35.262701 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:35.263222 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:35.262725 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:36.262577 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:36.262536 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:36.262577 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:36.262577 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:36.262577 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:36.263060 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:36.262597 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:37.262730 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:37.262692 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:37.262730 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:37.262730 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:37.262730 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:37.263243 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:37.262756 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:38.262258 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:38.262224 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:38.262258 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:38.262258 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:38.262258 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:38.262493 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:38.262276 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:39.262914 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:39.262882 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:39.262914 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:39.262914 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:39.262914 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:39.263345 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:39.262932 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:40.262193 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:40.262161 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:40.262193 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:40.262193 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:40.262193 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:40.262442 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:40.262217 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:41.262201 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:41.262161 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:41.262201 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:41.262201 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:41.262201 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:41.262665 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:41.262240 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:42.262710 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:42.262677 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:42.262710 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:42.262710 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:42.262710 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:42.263221 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:42.262746 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:43.262811 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:43.262778 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:43.262811 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:43.262811 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:43.262811 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:43.263280 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:43.262831 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:44.262791 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:44.262759 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:44.262791 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:44.262791 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:44.262791 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:44.263231 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:44.262818 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:45.262134 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:45.262102 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:45.262134 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:45.262134 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:45.262134 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:45.262394 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:45.262167 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:46.262565 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:46.262530 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:46.262565 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:46.262565 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:46.262565 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:46.263124 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:46.262584 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:47.262323 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:47.262286 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:47.262323 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:47.262323 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:47.262323 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:47.262560 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:47.262358 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:48.263421 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:48.263386 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:48.263421 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:48.263421 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:48.263421 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:48.263907 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:48.263444 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:49.262330 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:49.262289 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:49.262330 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:49.262330 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:49.262330 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:49.262619 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:49.262351 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:50.262318 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:50.262277 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:47:50.262318 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:47:50.262318 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:47:50.262318 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:47:50.262792 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:50.262336 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:47:51.262243 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:51.262203 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:47:51.262243 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:47:51.262243 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:47:51.262243 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:47:51.262738 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:51.262261 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:47:52.262107 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:52.262066 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:47:52.262107 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:47:52.262107 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:47:52.262107 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:47:52.262386 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:52.262118 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:47:53.262893 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:53.262849 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:47:53.262893 ip-10-0-139-215 
kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:47:53.262893 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:47:53.262893 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:47:53.263406 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:53.262928 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:47:54.262066 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:54.262023 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:47:54.262066 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:47:54.262066 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:47:54.262066 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:47:54.262307 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:54.262107 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:47:55.262579 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:55.262541 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:47:55.262579 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:47:55.262579 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 
17:47:55.262579 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:47:55.263054 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:55.262595 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:47:56.262441 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:56.262405 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:47:56.262441 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:47:56.262441 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:47:56.262441 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:47:56.262912 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:56.262485 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:47:57.262229 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:57.262197 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:47:57.262229 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:47:57.262229 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:47:57.262229 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:47:57.262479 ip-10-0-139-215 kubenswrapper[2574]: I0423 
17:47:57.262262 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:47:58.262350 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:58.262316 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:47:58.262350 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:47:58.262350 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:47:58.262350 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:47:58.262802 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:58.262367 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:47:59.262654 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:59.262600 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:47:59.262654 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:47:59.262654 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:47:59.262654 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:47:59.263141 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:47:59.262675 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" 
podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:48:00.262144 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:00.262107 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:48:00.262144 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:48:00.262144 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:48:00.262144 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:48:00.262379 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:00.262158 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:48:01.262550 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:01.262514 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:48:01.262550 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:48:01.262550 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:48:01.262550 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:48:01.263027 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:01.262599 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 
17:48:02.262426 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:02.262395 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:48:02.262426 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:48:02.262426 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:48:02.262426 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:48:02.262890 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:02.262452 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:48:03.262314 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:03.262272 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:48:03.262314 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:48:03.262314 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:48:03.262314 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:48:03.262570 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:03.262341 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:48:04.262319 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:04.262285 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:48:04.262319 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:48:04.262319 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:48:04.262319 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:48:04.262780 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:04.262341 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:48:05.262154 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:05.262117 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:48:05.262154 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:48:05.262154 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:48:05.262154 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:48:05.262590 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:05.262172 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:48:06.262447 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:06.262409 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:48:06.262447 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:48:06.262447 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:48:06.262447 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:48:06.262927 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:06.262476 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:48:07.262583 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:07.262552 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:48:07.262583 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:48:07.262583 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:48:07.262583 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:48:07.263072 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:07.262611 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:48:08.262684 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:08.262654 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:48:08.262684 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason 
withheld Apr 23 17:48:08.262684 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:48:08.262684 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:48:08.263186 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:08.262710 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:48:09.262258 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:09.262223 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:48:09.262258 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:48:09.262258 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:48:09.262258 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:48:09.262488 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:09.262274 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:48:10.263310 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:10.263273 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:48:10.263310 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:48:10.263310 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:48:10.263310 ip-10-0-139-215 
kubenswrapper[2574]: healthz check failed Apr 23 17:48:10.263776 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:10.263341 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:48:11.262726 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:11.262684 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:48:11.262726 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:48:11.262726 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:48:11.262726 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:48:11.262963 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:11.262753 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:48:12.262226 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:12.262192 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:48:12.262226 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:48:12.262226 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:48:12.262226 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:48:12.262682 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:12.262245 2574 prober.go:120] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:48:13.262772 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:13.262728 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:48:13.262772 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:48:13.262772 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:48:13.262772 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:48:13.263215 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:13.262797 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:48:14.262678 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:14.262622 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:48:14.262678 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:48:14.262678 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:48:14.262678 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:48:14.263219 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:14.262711 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:48:15.262623 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:15.262592 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:48:15.262623 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:48:15.262623 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:48:15.262623 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:48:15.262878 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:15.262680 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:48:16.262706 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:16.262673 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:48:16.262706 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:48:16.262706 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:48:16.262706 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:48:16.263149 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:16.262738 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:48:17.262865 ip-10-0-139-215 
kubenswrapper[2574]: I0423 17:48:17.262829 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:48:17.262865 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:48:17.262865 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:48:17.262865 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:48:17.263295 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:17.262885 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:48:18.262394 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:18.262356 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld Apr 23 17:48:18.262394 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld Apr 23 17:48:18.262394 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok Apr 23 17:48:18.262394 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed Apr 23 17:48:18.262623 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:18.262421 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 23 17:48:19.262443 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:19.262406 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:48:19.262443 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:48:19.262443 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:48:19.262443 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:48:19.262913 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:19.262478 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:48:20.262316 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:20.262280 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:48:20.262316 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:48:20.262316 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:48:20.262316 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:48:20.262799 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:20.262345 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:48:21.262538 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:21.262508 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:48:21.262538 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:48:21.262538 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:48:21.262538 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:48:21.262983 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:21.262557 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:48:22.261817 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:22.261782 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:48:22.261817 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:48:22.261817 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:48:22.261817 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:48:22.262059 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:22.261846 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:48:23.262503 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:23.262465 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:48:23.262503 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:48:23.262503 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:48:23.262503 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:48:23.262965 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:23.262532 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:48:24.262114 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:24.262083 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:48:24.262114 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:48:24.262114 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:48:24.262114 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:48:24.262351 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:24.262134 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:48:25.262568 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:25.262529 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:48:25.262568 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:48:25.262568 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:48:25.262568 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:48:25.263025 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:25.262598 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:48:26.262480 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:26.262448 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:48:26.262480 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:48:26.262480 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:48:26.262480 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:48:26.262959 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:26.262497 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:48:27.262923 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:27.262887 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:48:27.262923 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:48:27.262923 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:48:27.262923 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:48:27.263348 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:27.262942 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:48:28.262261 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:28.262227 2574 patch_prober.go:28] interesting pod/router-default-75ddc44-mjcts container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-proxy-http failed: reason withheld
Apr 23 17:48:28.262261 ip-10-0-139-215 kubenswrapper[2574]: [-]has-synced failed: reason withheld
Apr 23 17:48:28.262261 ip-10-0-139-215 kubenswrapper[2574]: [+]process-running ok
Apr 23 17:48:28.262261 ip-10-0-139-215 kubenswrapper[2574]: healthz check failed
Apr 23 17:48:28.262499 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:28.262280 2574 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Apr 23 17:48:28.262499 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:28.262318 2574 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-75ddc44-mjcts"
Apr 23 17:48:28.262769 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:28.262750 2574 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"3d5b5dffc8b6ac6de8b254f63e78f912f9055a3eccadfbe6f4293e0bf71589dc"} pod="openshift-ingress/router-default-75ddc44-mjcts" containerMessage="Container router failed startup probe, will be restarted"
Apr 23 17:48:28.262847 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:48:28.262789 2574 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-ingress/router-default-75ddc44-mjcts" podUID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerName="router" containerID="cri-o://3d5b5dffc8b6ac6de8b254f63e78f912f9055a3eccadfbe6f4293e0bf71589dc" gracePeriod=3600
Apr 23 17:49:14.507250 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:49:14.507214 2574 generic.go:358] "Generic (PLEG): container finished" podID="c647dab7-a8c4-4b49-ab18-6a3500f88227" containerID="3d5b5dffc8b6ac6de8b254f63e78f912f9055a3eccadfbe6f4293e0bf71589dc" exitCode=0
Apr 23 17:49:14.507710 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:49:14.507283 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-75ddc44-mjcts" event={"ID":"c647dab7-a8c4-4b49-ab18-6a3500f88227","Type":"ContainerDied","Data":"3d5b5dffc8b6ac6de8b254f63e78f912f9055a3eccadfbe6f4293e0bf71589dc"}
Apr 23 17:49:14.507710 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:49:14.507315 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-75ddc44-mjcts" event={"ID":"c647dab7-a8c4-4b49-ab18-6a3500f88227","Type":"ContainerStarted","Data":"c8c0485ab410b30dd7554148fd48de6f7412a6ab59274463c40b1bcb2f971e0f"}
Apr 23 17:49:14.507710 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:49:14.507335 2574 scope.go:117] "RemoveContainer" containerID="c0d496b402c4b893399454235ce7d6477c3d8d4511966fa7e4c88ddde0fcf1cb"
Apr 23 17:49:15.261136 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:49:15.261103 2574 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-75ddc44-mjcts"
Apr 23 17:49:15.263434 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:49:15.263409 2574 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-75ddc44-mjcts"
Apr 23 17:49:15.511604 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:49:15.511529 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-75ddc44-mjcts"
Apr 23 17:49:15.512714 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:49:15.512699 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-75ddc44-mjcts"
Apr 23 17:51:06.038751 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:51:06.038719 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-246wr_0f6164a3-aee1-463f-8c3a-a432711f40db/ovn-acl-logging/0.log"
Apr 23 17:51:06.041170 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:51:06.041148 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-246wr_0f6164a3-aee1-463f-8c3a-a432711f40db/ovn-acl-logging/0.log"
Apr 23 17:56:06.057662 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:56:06.057620 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-246wr_0f6164a3-aee1-463f-8c3a-a432711f40db/ovn-acl-logging/0.log"
Apr 23 17:56:06.058744 ip-10-0-139-215 kubenswrapper[2574]: I0423 17:56:06.058723 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-246wr_0f6164a3-aee1-463f-8c3a-a432711f40db/ovn-acl-logging/0.log"
Apr 23 18:01:06.077076 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:01:06.077049 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-246wr_0f6164a3-aee1-463f-8c3a-a432711f40db/ovn-acl-logging/0.log"
Apr 23 18:01:06.079763 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:01:06.077195 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-246wr_0f6164a3-aee1-463f-8c3a-a432711f40db/ovn-acl-logging/0.log"
Apr 23 18:02:47.477307 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:02:47.477273 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c25v9c"]
Apr 23 18:02:47.480731 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:02:47.480712 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c25v9c"
Apr 23 18:02:47.483861 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:02:47.483837 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Apr 23 18:02:47.483983 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:02:47.483868 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Apr 23 18:02:47.483983 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:02:47.483905 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-88z22\""
Apr 23 18:02:47.489458 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:02:47.489432 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c25v9c"]
Apr 23 18:02:47.625761 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:02:47.625723 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2e46bc2-dd9a-4d66-9779-f25af897dfb1-bundle\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c25v9c\" (UID: \"c2e46bc2-dd9a-4d66-9779-f25af897dfb1\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c25v9c"
Apr 23 18:02:47.625761 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:02:47.625764 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgljr\" (UniqueName: \"kubernetes.io/projected/c2e46bc2-dd9a-4d66-9779-f25af897dfb1-kube-api-access-hgljr\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c25v9c\" (UID: \"c2e46bc2-dd9a-4d66-9779-f25af897dfb1\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c25v9c"
Apr 23 18:02:47.625954 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:02:47.625802 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2e46bc2-dd9a-4d66-9779-f25af897dfb1-util\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c25v9c\" (UID: \"c2e46bc2-dd9a-4d66-9779-f25af897dfb1\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c25v9c"
Apr 23 18:02:47.726334 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:02:47.726300 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2e46bc2-dd9a-4d66-9779-f25af897dfb1-bundle\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c25v9c\" (UID: \"c2e46bc2-dd9a-4d66-9779-f25af897dfb1\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c25v9c"
Apr 23 18:02:47.726482 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:02:47.726341 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hgljr\" (UniqueName: \"kubernetes.io/projected/c2e46bc2-dd9a-4d66-9779-f25af897dfb1-kube-api-access-hgljr\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c25v9c\" (UID: \"c2e46bc2-dd9a-4d66-9779-f25af897dfb1\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c25v9c"
Apr 23 18:02:47.726482 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:02:47.726366 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2e46bc2-dd9a-4d66-9779-f25af897dfb1-util\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c25v9c\" (UID: \"c2e46bc2-dd9a-4d66-9779-f25af897dfb1\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c25v9c"
Apr 23 18:02:47.726708 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:02:47.726691 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2e46bc2-dd9a-4d66-9779-f25af897dfb1-util\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c25v9c\" (UID: \"c2e46bc2-dd9a-4d66-9779-f25af897dfb1\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c25v9c"
Apr 23 18:02:47.726787 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:02:47.726766 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2e46bc2-dd9a-4d66-9779-f25af897dfb1-bundle\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c25v9c\" (UID: \"c2e46bc2-dd9a-4d66-9779-f25af897dfb1\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c25v9c"
Apr 23 18:02:47.736203 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:02:47.736150 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgljr\" (UniqueName: \"kubernetes.io/projected/c2e46bc2-dd9a-4d66-9779-f25af897dfb1-kube-api-access-hgljr\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c25v9c\" (UID: \"c2e46bc2-dd9a-4d66-9779-f25af897dfb1\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c25v9c"
Apr 23 18:02:47.790679 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:02:47.790652 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c25v9c"
Apr 23 18:02:47.912124 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:02:47.912096 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c25v9c"]
Apr 23 18:02:47.914001 ip-10-0-139-215 kubenswrapper[2574]: W0423 18:02:47.913973 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2e46bc2_dd9a_4d66_9779_f25af897dfb1.slice/crio-955015dd3a70e8596084c7be386ff46c9746c1cd903c865bd91ae2815e9066a2 WatchSource:0}: Error finding container 955015dd3a70e8596084c7be386ff46c9746c1cd903c865bd91ae2815e9066a2: Status 404 returned error can't find the container with id 955015dd3a70e8596084c7be386ff46c9746c1cd903c865bd91ae2815e9066a2
Apr 23 18:02:47.915665 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:02:47.915624 2574 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 23 18:02:48.682161 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:02:48.682117 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c25v9c" event={"ID":"c2e46bc2-dd9a-4d66-9779-f25af897dfb1","Type":"ContainerStarted","Data":"955015dd3a70e8596084c7be386ff46c9746c1cd903c865bd91ae2815e9066a2"}
Apr 23 18:02:53.700142 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:02:53.700106 2574 generic.go:358] "Generic (PLEG): container finished" podID="c2e46bc2-dd9a-4d66-9779-f25af897dfb1" containerID="711929d3fc866e209f2316997740412f85309214ba56fb3b5ab89d4904e04d91" exitCode=0
Apr 23 18:02:53.700499 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:02:53.700192 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c25v9c" event={"ID":"c2e46bc2-dd9a-4d66-9779-f25af897dfb1","Type":"ContainerDied","Data":"711929d3fc866e209f2316997740412f85309214ba56fb3b5ab89d4904e04d91"}
Apr 23 18:02:55.707033 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:02:55.706946 2574 generic.go:358] "Generic (PLEG): container finished" podID="c2e46bc2-dd9a-4d66-9779-f25af897dfb1" containerID="19626dfecf73611e3ea57cfd059d4cbfea526990673963e2d57f538ca4b928f2" exitCode=0
Apr 23 18:02:55.707368 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:02:55.707039 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c25v9c" event={"ID":"c2e46bc2-dd9a-4d66-9779-f25af897dfb1","Type":"ContainerDied","Data":"19626dfecf73611e3ea57cfd059d4cbfea526990673963e2d57f538ca4b928f2"}
Apr 23 18:03:02.734617 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:02.734580 2574 generic.go:358] "Generic (PLEG): container finished" podID="c2e46bc2-dd9a-4d66-9779-f25af897dfb1" containerID="d229182027516e4530d1e346cec5f5a9067dbf709d793c2a1c87c8477732bcf1" exitCode=0
Apr 23 18:03:02.734999 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:02.734706 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c25v9c" event={"ID":"c2e46bc2-dd9a-4d66-9779-f25af897dfb1","Type":"ContainerDied","Data":"d229182027516e4530d1e346cec5f5a9067dbf709d793c2a1c87c8477732bcf1"}
Apr 23 18:03:03.855805 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:03.855779 2574 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c25v9c"
Apr 23 18:03:03.959502 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:03.959463 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgljr\" (UniqueName: \"kubernetes.io/projected/c2e46bc2-dd9a-4d66-9779-f25af897dfb1-kube-api-access-hgljr\") pod \"c2e46bc2-dd9a-4d66-9779-f25af897dfb1\" (UID: \"c2e46bc2-dd9a-4d66-9779-f25af897dfb1\") "
Apr 23 18:03:03.959502 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:03.959506 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2e46bc2-dd9a-4d66-9779-f25af897dfb1-bundle\") pod \"c2e46bc2-dd9a-4d66-9779-f25af897dfb1\" (UID: \"c2e46bc2-dd9a-4d66-9779-f25af897dfb1\") "
Apr 23 18:03:03.959736 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:03.959545 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2e46bc2-dd9a-4d66-9779-f25af897dfb1-util\") pod \"c2e46bc2-dd9a-4d66-9779-f25af897dfb1\" (UID: \"c2e46bc2-dd9a-4d66-9779-f25af897dfb1\") "
Apr 23 18:03:03.960173 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:03.960104 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2e46bc2-dd9a-4d66-9779-f25af897dfb1-bundle" (OuterVolumeSpecName: "bundle") pod "c2e46bc2-dd9a-4d66-9779-f25af897dfb1" (UID: "c2e46bc2-dd9a-4d66-9779-f25af897dfb1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Apr 23 18:03:03.961872 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:03.961849 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2e46bc2-dd9a-4d66-9779-f25af897dfb1-kube-api-access-hgljr" (OuterVolumeSpecName: "kube-api-access-hgljr") pod "c2e46bc2-dd9a-4d66-9779-f25af897dfb1" (UID: "c2e46bc2-dd9a-4d66-9779-f25af897dfb1"). InnerVolumeSpecName "kube-api-access-hgljr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 23 18:03:03.965112 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:03.965083 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2e46bc2-dd9a-4d66-9779-f25af897dfb1-util" (OuterVolumeSpecName: "util") pod "c2e46bc2-dd9a-4d66-9779-f25af897dfb1" (UID: "c2e46bc2-dd9a-4d66-9779-f25af897dfb1"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Apr 23 18:03:04.060439 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:04.060370 2574 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hgljr\" (UniqueName: \"kubernetes.io/projected/c2e46bc2-dd9a-4d66-9779-f25af897dfb1-kube-api-access-hgljr\") on node \"ip-10-0-139-215.ec2.internal\" DevicePath \"\""
Apr 23 18:03:04.060439 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:04.060395 2574 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2e46bc2-dd9a-4d66-9779-f25af897dfb1-bundle\") on node \"ip-10-0-139-215.ec2.internal\" DevicePath \"\""
Apr 23 18:03:04.060439 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:04.060404 2574 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2e46bc2-dd9a-4d66-9779-f25af897dfb1-util\") on node \"ip-10-0-139-215.ec2.internal\" DevicePath \"\""
Apr 23 18:03:04.741792 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:04.741760 2574 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c25v9c"
Apr 23 18:03:04.741957 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:04.741750 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29c25v9c" event={"ID":"c2e46bc2-dd9a-4d66-9779-f25af897dfb1","Type":"ContainerDied","Data":"955015dd3a70e8596084c7be386ff46c9746c1cd903c865bd91ae2815e9066a2"}
Apr 23 18:03:04.741957 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:04.741874 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="955015dd3a70e8596084c7be386ff46c9746c1cd903c865bd91ae2815e9066a2"
Apr 23 18:03:09.461225 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:09.461193 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-h7z72"]
Apr 23 18:03:09.461600 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:09.461478 2574 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c2e46bc2-dd9a-4d66-9779-f25af897dfb1" containerName="pull"
Apr 23 18:03:09.461600 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:09.461491 2574 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2e46bc2-dd9a-4d66-9779-f25af897dfb1" containerName="pull"
Apr 23 18:03:09.461600 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:09.461505 2574 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c2e46bc2-dd9a-4d66-9779-f25af897dfb1" containerName="extract"
Apr 23 18:03:09.461600 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:09.461511 2574 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2e46bc2-dd9a-4d66-9779-f25af897dfb1" containerName="extract"
Apr 23 18:03:09.461600 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:09.461519 2574 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c2e46bc2-dd9a-4d66-9779-f25af897dfb1" containerName="util"
Apr 23 18:03:09.461600 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:09.461525 2574 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2e46bc2-dd9a-4d66-9779-f25af897dfb1" containerName="util"
Apr 23 18:03:09.461600 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:09.461573 2574 memory_manager.go:356] "RemoveStaleState removing state" podUID="c2e46bc2-dd9a-4d66-9779-f25af897dfb1" containerName="extract"
Apr 23 18:03:09.476757 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:09.476735 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-h7z72"
Apr 23 18:03:09.478677 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:09.478654 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-h7z72"]
Apr 23 18:03:09.480362 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:09.480332 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"kube-root-ca.crt\""
Apr 23 18:03:09.480457 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:09.480429 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"openshift-service-ca.crt\""
Apr 23 18:03:09.480559 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:09.480540 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"kedaorg-certs\""
Apr 23 18:03:09.480743 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:09.480726 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"custom-metrics-autoscaler-operator-dockercfg-xshg9\""
Apr 23 18:03:09.497996 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:09.497966 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/secret/52759705-c604-4d3d-b123-884d54754f29-certificates\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-h7z72\" (UID: \"52759705-c604-4d3d-b123-884d54754f29\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-h7z72"
Apr 23 18:03:09.498111 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:09.498028 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlvkm\" (UniqueName: \"kubernetes.io/projected/52759705-c604-4d3d-b123-884d54754f29-kube-api-access-nlvkm\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-h7z72\" (UID: \"52759705-c604-4d3d-b123-884d54754f29\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-h7z72"
Apr 23 18:03:09.598965 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:09.598930 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nlvkm\" (UniqueName: \"kubernetes.io/projected/52759705-c604-4d3d-b123-884d54754f29-kube-api-access-nlvkm\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-h7z72\" (UID: \"52759705-c604-4d3d-b123-884d54754f29\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-h7z72"
Apr 23 18:03:09.599134 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:09.599012 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/secret/52759705-c604-4d3d-b123-884d54754f29-certificates\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-h7z72\" (UID: \"52759705-c604-4d3d-b123-884d54754f29\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-h7z72"
Apr 23 18:03:09.601499 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:09.601475 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/secret/52759705-c604-4d3d-b123-884d54754f29-certificates\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-h7z72\" (UID: \"52759705-c604-4d3d-b123-884d54754f29\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-h7z72"
Apr 23 18:03:09.608494 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:09.608468 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlvkm\" (UniqueName: \"kubernetes.io/projected/52759705-c604-4d3d-b123-884d54754f29-kube-api-access-nlvkm\") pod \"custom-metrics-autoscaler-operator-bbf89fd5d-h7z72\" (UID: \"52759705-c604-4d3d-b123-884d54754f29\") " pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-h7z72"
Apr 23 18:03:09.788791 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:09.788715 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-h7z72"
Apr 23 18:03:09.912450 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:09.912415 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-h7z72"]
Apr 23 18:03:09.915942 ip-10-0-139-215 kubenswrapper[2574]: W0423 18:03:09.915908 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52759705_c604_4d3d_b123_884d54754f29.slice/crio-c95c2f87e80d1102bfce0cb8df0cc660215e3427c9158f8323c19b75d61f6ddd WatchSource:0}: Error finding container c95c2f87e80d1102bfce0cb8df0cc660215e3427c9158f8323c19b75d61f6ddd: Status 404 returned error can't find the container with id c95c2f87e80d1102bfce0cb8df0cc660215e3427c9158f8323c19b75d61f6ddd
Apr 23 18:03:10.761797 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:10.761752 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-h7z72" event={"ID":"52759705-c604-4d3d-b123-884d54754f29","Type":"ContainerStarted","Data":"c95c2f87e80d1102bfce0cb8df0cc660215e3427c9158f8323c19b75d61f6ddd"}
Apr 23 18:03:14.407939 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:14.407905 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/keda-operator-ffbb595cb-hk462"]
Apr 23 18:03:14.435465 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:14.435438 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-operator-ffbb595cb-hk462"]
Apr 23 18:03:14.435648 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:14.435553 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-operator-ffbb595cb-hk462"
Apr 23 18:03:14.439416 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:14.439395 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-operator-dockercfg-4w94g\""
Apr 23 18:03:14.440044 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:14.440027 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-operator-certs\""
Apr 23 18:03:14.440133 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:14.440052 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"keda-ocp-cabundle\""
Apr 23 18:03:14.538875 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:14.538841 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cabundle0\" (UniqueName: \"kubernetes.io/configmap/dd55335f-eb3b-4dcd-a7c2-1924e1c527ed-cabundle0\") pod \"keda-operator-ffbb595cb-hk462\" (UID: \"dd55335f-eb3b-4dcd-a7c2-1924e1c527ed\") " pod="openshift-keda/keda-operator-ffbb595cb-hk462"
Apr 23 18:03:14.539027 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:14.538885 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhccf\" (UniqueName: \"kubernetes.io/projected/dd55335f-eb3b-4dcd-a7c2-1924e1c527ed-kube-api-access-hhccf\") pod \"keda-operator-ffbb595cb-hk462\" (UID: \"dd55335f-eb3b-4dcd-a7c2-1924e1c527ed\") " pod="openshift-keda/keda-operator-ffbb595cb-hk462"
Apr 23 18:03:14.539070 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:14.539016 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/dd55335f-eb3b-4dcd-a7c2-1924e1c527ed-certificates\") pod \"keda-operator-ffbb595cb-hk462\" (UID: \"dd55335f-eb3b-4dcd-a7c2-1924e1c527ed\") " pod="openshift-keda/keda-operator-ffbb595cb-hk462"
Apr 23 18:03:14.640386 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:14.640356 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/dd55335f-eb3b-4dcd-a7c2-1924e1c527ed-certificates\") pod \"keda-operator-ffbb595cb-hk462\" (UID: \"dd55335f-eb3b-4dcd-a7c2-1924e1c527ed\") " pod="openshift-keda/keda-operator-ffbb595cb-hk462"
Apr 23 18:03:14.640587 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:14.640395 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cabundle0\" (UniqueName: \"kubernetes.io/configmap/dd55335f-eb3b-4dcd-a7c2-1924e1c527ed-cabundle0\") pod \"keda-operator-ffbb595cb-hk462\" (UID: \"dd55335f-eb3b-4dcd-a7c2-1924e1c527ed\") " pod="openshift-keda/keda-operator-ffbb595cb-hk462"
Apr 23 18:03:14.640587 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:14.640430 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hhccf\" (UniqueName: \"kubernetes.io/projected/dd55335f-eb3b-4dcd-a7c2-1924e1c527ed-kube-api-access-hhccf\") pod \"keda-operator-ffbb595cb-hk462\" (UID: \"dd55335f-eb3b-4dcd-a7c2-1924e1c527ed\") " pod="openshift-keda/keda-operator-ffbb595cb-hk462"
Apr 23 18:03:14.640587 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:03:14.640518 2574 secret.go:281] references non-existent secret key: ca.crt
Apr 23 18:03:14.640587 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:03:14.640533 2574 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: ca.crt
Apr 23 18:03:14.640587 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:03:14.640542 2574 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-hk462: references non-existent secret key: ca.crt
Apr 23 18:03:14.640859 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:03:14.640593 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dd55335f-eb3b-4dcd-a7c2-1924e1c527ed-certificates podName:dd55335f-eb3b-4dcd-a7c2-1924e1c527ed nodeName:}" failed. No retries permitted until 2026-04-23 18:03:15.140577269 +0000 UTC m=+1329.562246206 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/dd55335f-eb3b-4dcd-a7c2-1924e1c527ed-certificates") pod "keda-operator-ffbb595cb-hk462" (UID: "dd55335f-eb3b-4dcd-a7c2-1924e1c527ed") : references non-existent secret key: ca.crt
Apr 23 18:03:14.641003 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:14.640974 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cabundle0\" (UniqueName: \"kubernetes.io/configmap/dd55335f-eb3b-4dcd-a7c2-1924e1c527ed-cabundle0\") pod \"keda-operator-ffbb595cb-hk462\" (UID: \"dd55335f-eb3b-4dcd-a7c2-1924e1c527ed\") " pod="openshift-keda/keda-operator-ffbb595cb-hk462"
Apr 23 18:03:14.651463 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:14.651439 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/keda-metrics-apiserver-7c9f485588-mxjm4"]
Apr 23 18:03:14.657183 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:14.657162 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhccf\" (UniqueName: \"kubernetes.io/projected/dd55335f-eb3b-4dcd-a7c2-1924e1c527ed-kube-api-access-hhccf\") pod \"keda-operator-ffbb595cb-hk462\" (UID: \"dd55335f-eb3b-4dcd-a7c2-1924e1c527ed\") "
pod="openshift-keda/keda-operator-ffbb595cb-hk462" Apr 23 18:03:14.674044 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:14.673978 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-metrics-apiserver-7c9f485588-mxjm4"] Apr 23 18:03:14.674168 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:14.674112 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-mxjm4" Apr 23 18:03:14.676932 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:14.676912 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-metrics-apiserver-certs\"" Apr 23 18:03:14.741327 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:14.741265 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clrm9\" (UniqueName: \"kubernetes.io/projected/28ba4a96-3d58-4b78-8099-ec451b8e0240-kube-api-access-clrm9\") pod \"keda-metrics-apiserver-7c9f485588-mxjm4\" (UID: \"28ba4a96-3d58-4b78-8099-ec451b8e0240\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-mxjm4" Apr 23 18:03:14.741327 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:14.741325 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"temp-vol\" (UniqueName: \"kubernetes.io/empty-dir/28ba4a96-3d58-4b78-8099-ec451b8e0240-temp-vol\") pod \"keda-metrics-apiserver-7c9f485588-mxjm4\" (UID: \"28ba4a96-3d58-4b78-8099-ec451b8e0240\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-mxjm4" Apr 23 18:03:14.741555 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:14.741437 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/28ba4a96-3d58-4b78-8099-ec451b8e0240-certificates\") pod \"keda-metrics-apiserver-7c9f485588-mxjm4\" (UID: \"28ba4a96-3d58-4b78-8099-ec451b8e0240\") " 
pod="openshift-keda/keda-metrics-apiserver-7c9f485588-mxjm4" Apr 23 18:03:14.775568 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:14.775526 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-h7z72" event={"ID":"52759705-c604-4d3d-b123-884d54754f29","Type":"ContainerStarted","Data":"5d3eca1d5bc3c905f1231ed3a432b090232820c18e8e085a39debbeb79aa23c5"} Apr 23 18:03:14.775784 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:14.775713 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-h7z72" Apr 23 18:03:14.798341 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:14.798275 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-h7z72" podStartSLOduration=1.929604022 podStartE2EDuration="5.79825872s" podCreationTimestamp="2026-04-23 18:03:09 +0000 UTC" firstStartedPulling="2026-04-23 18:03:09.917713867 +0000 UTC m=+1324.339382818" lastFinishedPulling="2026-04-23 18:03:13.786368578 +0000 UTC m=+1328.208037516" observedRunningTime="2026-04-23 18:03:14.795795538 +0000 UTC m=+1329.217464497" watchObservedRunningTime="2026-04-23 18:03:14.79825872 +0000 UTC m=+1329.219927680" Apr 23 18:03:14.841893 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:14.841855 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-clrm9\" (UniqueName: \"kubernetes.io/projected/28ba4a96-3d58-4b78-8099-ec451b8e0240-kube-api-access-clrm9\") pod \"keda-metrics-apiserver-7c9f485588-mxjm4\" (UID: \"28ba4a96-3d58-4b78-8099-ec451b8e0240\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-mxjm4" Apr 23 18:03:14.842049 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:14.841919 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"temp-vol\" (UniqueName: 
\"kubernetes.io/empty-dir/28ba4a96-3d58-4b78-8099-ec451b8e0240-temp-vol\") pod \"keda-metrics-apiserver-7c9f485588-mxjm4\" (UID: \"28ba4a96-3d58-4b78-8099-ec451b8e0240\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-mxjm4" Apr 23 18:03:14.842049 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:14.841990 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/28ba4a96-3d58-4b78-8099-ec451b8e0240-certificates\") pod \"keda-metrics-apiserver-7c9f485588-mxjm4\" (UID: \"28ba4a96-3d58-4b78-8099-ec451b8e0240\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-mxjm4" Apr 23 18:03:14.842182 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:03:14.842165 2574 secret.go:281] references non-existent secret key: tls.crt Apr 23 18:03:14.842240 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:03:14.842188 2574 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt Apr 23 18:03:14.842240 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:03:14.842210 2574 projected.go:264] Couldn't get secret openshift-keda/keda-metrics-apiserver-certs: secret "keda-metrics-apiserver-certs" not found Apr 23 18:03:14.842240 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:03:14.842233 2574 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-mxjm4: [references non-existent secret key: tls.crt, secret "keda-metrics-apiserver-certs" not found] Apr 23 18:03:14.842382 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:03:14.842298 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/28ba4a96-3d58-4b78-8099-ec451b8e0240-certificates podName:28ba4a96-3d58-4b78-8099-ec451b8e0240 nodeName:}" failed. No retries permitted until 2026-04-23 18:03:15.342277459 +0000 UTC m=+1329.763946413 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/28ba4a96-3d58-4b78-8099-ec451b8e0240-certificates") pod "keda-metrics-apiserver-7c9f485588-mxjm4" (UID: "28ba4a96-3d58-4b78-8099-ec451b8e0240") : [references non-existent secret key: tls.crt, secret "keda-metrics-apiserver-certs" not found] Apr 23 18:03:14.842382 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:14.842357 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"temp-vol\" (UniqueName: \"kubernetes.io/empty-dir/28ba4a96-3d58-4b78-8099-ec451b8e0240-temp-vol\") pod \"keda-metrics-apiserver-7c9f485588-mxjm4\" (UID: \"28ba4a96-3d58-4b78-8099-ec451b8e0240\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-mxjm4" Apr 23 18:03:14.852692 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:14.852665 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-clrm9\" (UniqueName: \"kubernetes.io/projected/28ba4a96-3d58-4b78-8099-ec451b8e0240-kube-api-access-clrm9\") pod \"keda-metrics-apiserver-7c9f485588-mxjm4\" (UID: \"28ba4a96-3d58-4b78-8099-ec451b8e0240\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-mxjm4" Apr 23 18:03:14.959068 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:14.958993 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/keda-admission-cf49989db-hmwhw"] Apr 23 18:03:14.982098 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:14.981972 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-admission-cf49989db-hmwhw"] Apr 23 18:03:14.982098 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:14.982097 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-keda/keda-admission-cf49989db-hmwhw" Apr 23 18:03:14.985103 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:14.985080 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-admission-webhooks-certs\"" Apr 23 18:03:15.044387 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:15.044361 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/311c573c-9488-478b-9411-54cc85a0cb0d-certificates\") pod \"keda-admission-cf49989db-hmwhw\" (UID: \"311c573c-9488-478b-9411-54cc85a0cb0d\") " pod="openshift-keda/keda-admission-cf49989db-hmwhw" Apr 23 18:03:15.044555 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:15.044405 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq87l\" (UniqueName: \"kubernetes.io/projected/311c573c-9488-478b-9411-54cc85a0cb0d-kube-api-access-jq87l\") pod \"keda-admission-cf49989db-hmwhw\" (UID: \"311c573c-9488-478b-9411-54cc85a0cb0d\") " pod="openshift-keda/keda-admission-cf49989db-hmwhw" Apr 23 18:03:15.145575 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:15.145538 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/311c573c-9488-478b-9411-54cc85a0cb0d-certificates\") pod \"keda-admission-cf49989db-hmwhw\" (UID: \"311c573c-9488-478b-9411-54cc85a0cb0d\") " pod="openshift-keda/keda-admission-cf49989db-hmwhw" Apr 23 18:03:15.145830 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:15.145614 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jq87l\" (UniqueName: \"kubernetes.io/projected/311c573c-9488-478b-9411-54cc85a0cb0d-kube-api-access-jq87l\") pod \"keda-admission-cf49989db-hmwhw\" (UID: \"311c573c-9488-478b-9411-54cc85a0cb0d\") " 
pod="openshift-keda/keda-admission-cf49989db-hmwhw" Apr 23 18:03:15.145830 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:03:15.145703 2574 projected.go:264] Couldn't get secret openshift-keda/keda-admission-webhooks-certs: secret "keda-admission-webhooks-certs" not found Apr 23 18:03:15.145830 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:03:15.145733 2574 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-admission-cf49989db-hmwhw: secret "keda-admission-webhooks-certs" not found Apr 23 18:03:15.145830 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:03:15.145779 2574 secret.go:281] references non-existent secret key: ca.crt Apr 23 18:03:15.145830 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:03:15.145794 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/311c573c-9488-478b-9411-54cc85a0cb0d-certificates podName:311c573c-9488-478b-9411-54cc85a0cb0d nodeName:}" failed. No retries permitted until 2026-04-23 18:03:15.645774614 +0000 UTC m=+1330.067443551 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/311c573c-9488-478b-9411-54cc85a0cb0d-certificates") pod "keda-admission-cf49989db-hmwhw" (UID: "311c573c-9488-478b-9411-54cc85a0cb0d") : secret "keda-admission-webhooks-certs" not found Apr 23 18:03:15.145830 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:03:15.145794 2574 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: ca.crt Apr 23 18:03:15.145830 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:03:15.145809 2574 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-hk462: references non-existent secret key: ca.crt Apr 23 18:03:15.146106 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:15.145706 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/dd55335f-eb3b-4dcd-a7c2-1924e1c527ed-certificates\") pod \"keda-operator-ffbb595cb-hk462\" (UID: \"dd55335f-eb3b-4dcd-a7c2-1924e1c527ed\") " pod="openshift-keda/keda-operator-ffbb595cb-hk462" Apr 23 18:03:15.146106 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:03:15.145860 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dd55335f-eb3b-4dcd-a7c2-1924e1c527ed-certificates podName:dd55335f-eb3b-4dcd-a7c2-1924e1c527ed nodeName:}" failed. No retries permitted until 2026-04-23 18:03:16.145845582 +0000 UTC m=+1330.567514525 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/dd55335f-eb3b-4dcd-a7c2-1924e1c527ed-certificates") pod "keda-operator-ffbb595cb-hk462" (UID: "dd55335f-eb3b-4dcd-a7c2-1924e1c527ed") : references non-existent secret key: ca.crt Apr 23 18:03:15.155056 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:15.155028 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq87l\" (UniqueName: \"kubernetes.io/projected/311c573c-9488-478b-9411-54cc85a0cb0d-kube-api-access-jq87l\") pod \"keda-admission-cf49989db-hmwhw\" (UID: \"311c573c-9488-478b-9411-54cc85a0cb0d\") " pod="openshift-keda/keda-admission-cf49989db-hmwhw" Apr 23 18:03:15.347404 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:15.347308 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/28ba4a96-3d58-4b78-8099-ec451b8e0240-certificates\") pod \"keda-metrics-apiserver-7c9f485588-mxjm4\" (UID: \"28ba4a96-3d58-4b78-8099-ec451b8e0240\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-mxjm4" Apr 23 18:03:15.347591 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:03:15.347457 2574 secret.go:281] references non-existent secret key: tls.crt Apr 23 18:03:15.347591 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:03:15.347481 2574 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt Apr 23 18:03:15.347591 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:03:15.347500 2574 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-mxjm4: references non-existent secret key: tls.crt Apr 23 18:03:15.347591 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:03:15.347553 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/28ba4a96-3d58-4b78-8099-ec451b8e0240-certificates 
podName:28ba4a96-3d58-4b78-8099-ec451b8e0240 nodeName:}" failed. No retries permitted until 2026-04-23 18:03:16.347538979 +0000 UTC m=+1330.769207916 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/28ba4a96-3d58-4b78-8099-ec451b8e0240-certificates") pod "keda-metrics-apiserver-7c9f485588-mxjm4" (UID: "28ba4a96-3d58-4b78-8099-ec451b8e0240") : references non-existent secret key: tls.crt Apr 23 18:03:15.649964 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:15.649928 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/311c573c-9488-478b-9411-54cc85a0cb0d-certificates\") pod \"keda-admission-cf49989db-hmwhw\" (UID: \"311c573c-9488-478b-9411-54cc85a0cb0d\") " pod="openshift-keda/keda-admission-cf49989db-hmwhw" Apr 23 18:03:15.652583 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:15.652560 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/311c573c-9488-478b-9411-54cc85a0cb0d-certificates\") pod \"keda-admission-cf49989db-hmwhw\" (UID: \"311c573c-9488-478b-9411-54cc85a0cb0d\") " pod="openshift-keda/keda-admission-cf49989db-hmwhw" Apr 23 18:03:15.897386 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:15.897351 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-keda/keda-admission-cf49989db-hmwhw" Apr 23 18:03:16.023000 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:16.022975 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-admission-cf49989db-hmwhw"] Apr 23 18:03:16.026128 ip-10-0-139-215 kubenswrapper[2574]: W0423 18:03:16.026091 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod311c573c_9488_478b_9411_54cc85a0cb0d.slice/crio-d3ad58d42b45651edd59a65cd8169bacb4fdfc5951ad3e415a0caf713eb45b75 WatchSource:0}: Error finding container d3ad58d42b45651edd59a65cd8169bacb4fdfc5951ad3e415a0caf713eb45b75: Status 404 returned error can't find the container with id d3ad58d42b45651edd59a65cd8169bacb4fdfc5951ad3e415a0caf713eb45b75 Apr 23 18:03:16.154225 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:16.154190 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/dd55335f-eb3b-4dcd-a7c2-1924e1c527ed-certificates\") pod \"keda-operator-ffbb595cb-hk462\" (UID: \"dd55335f-eb3b-4dcd-a7c2-1924e1c527ed\") " pod="openshift-keda/keda-operator-ffbb595cb-hk462" Apr 23 18:03:16.154408 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:03:16.154329 2574 secret.go:281] references non-existent secret key: ca.crt Apr 23 18:03:16.154408 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:03:16.154346 2574 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: ca.crt Apr 23 18:03:16.154408 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:03:16.154355 2574 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-hk462: references non-existent secret key: ca.crt Apr 23 18:03:16.154408 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:03:16.154405 2574 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/dd55335f-eb3b-4dcd-a7c2-1924e1c527ed-certificates podName:dd55335f-eb3b-4dcd-a7c2-1924e1c527ed nodeName:}" failed. No retries permitted until 2026-04-23 18:03:18.154390774 +0000 UTC m=+1332.576059711 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/dd55335f-eb3b-4dcd-a7c2-1924e1c527ed-certificates") pod "keda-operator-ffbb595cb-hk462" (UID: "dd55335f-eb3b-4dcd-a7c2-1924e1c527ed") : references non-existent secret key: ca.crt Apr 23 18:03:16.355541 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:16.355461 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/28ba4a96-3d58-4b78-8099-ec451b8e0240-certificates\") pod \"keda-metrics-apiserver-7c9f485588-mxjm4\" (UID: \"28ba4a96-3d58-4b78-8099-ec451b8e0240\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-mxjm4" Apr 23 18:03:16.355716 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:03:16.355600 2574 secret.go:281] references non-existent secret key: tls.crt Apr 23 18:03:16.355716 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:03:16.355619 2574 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt Apr 23 18:03:16.355716 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:03:16.355658 2574 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-mxjm4: references non-existent secret key: tls.crt Apr 23 18:03:16.355716 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:03:16.355713 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/28ba4a96-3d58-4b78-8099-ec451b8e0240-certificates podName:28ba4a96-3d58-4b78-8099-ec451b8e0240 nodeName:}" failed. No retries permitted until 2026-04-23 18:03:18.355696574 +0000 UTC m=+1332.777365510 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/28ba4a96-3d58-4b78-8099-ec451b8e0240-certificates") pod "keda-metrics-apiserver-7c9f485588-mxjm4" (UID: "28ba4a96-3d58-4b78-8099-ec451b8e0240") : references non-existent secret key: tls.crt Apr 23 18:03:16.784169 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:16.784117 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-admission-cf49989db-hmwhw" event={"ID":"311c573c-9488-478b-9411-54cc85a0cb0d","Type":"ContainerStarted","Data":"d3ad58d42b45651edd59a65cd8169bacb4fdfc5951ad3e415a0caf713eb45b75"} Apr 23 18:03:17.788921 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:17.788887 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-admission-cf49989db-hmwhw" event={"ID":"311c573c-9488-478b-9411-54cc85a0cb0d","Type":"ContainerStarted","Data":"e5782fa55381318c5f9e05996f881b9473f4346abc25e8cecdb3cdc8d976aec1"} Apr 23 18:03:17.789327 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:17.789030 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-admission-cf49989db-hmwhw" Apr 23 18:03:17.807319 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:17.807245 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/keda-admission-cf49989db-hmwhw" podStartSLOduration=2.571376772 podStartE2EDuration="3.807231437s" podCreationTimestamp="2026-04-23 18:03:14 +0000 UTC" firstStartedPulling="2026-04-23 18:03:16.027711326 +0000 UTC m=+1330.449380267" lastFinishedPulling="2026-04-23 18:03:17.263565994 +0000 UTC m=+1331.685234932" observedRunningTime="2026-04-23 18:03:17.805835842 +0000 UTC m=+1332.227504801" watchObservedRunningTime="2026-04-23 18:03:17.807231437 +0000 UTC m=+1332.228900396" Apr 23 18:03:18.172026 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:18.171996 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"certificates\" (UniqueName: \"kubernetes.io/projected/dd55335f-eb3b-4dcd-a7c2-1924e1c527ed-certificates\") pod \"keda-operator-ffbb595cb-hk462\" (UID: \"dd55335f-eb3b-4dcd-a7c2-1924e1c527ed\") " pod="openshift-keda/keda-operator-ffbb595cb-hk462" Apr 23 18:03:18.172195 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:03:18.172118 2574 secret.go:281] references non-existent secret key: ca.crt Apr 23 18:03:18.172195 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:03:18.172131 2574 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: ca.crt Apr 23 18:03:18.172195 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:03:18.172142 2574 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-hk462: references non-existent secret key: ca.crt Apr 23 18:03:18.172195 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:03:18.172192 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dd55335f-eb3b-4dcd-a7c2-1924e1c527ed-certificates podName:dd55335f-eb3b-4dcd-a7c2-1924e1c527ed nodeName:}" failed. No retries permitted until 2026-04-23 18:03:22.172178453 +0000 UTC m=+1336.593847391 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/dd55335f-eb3b-4dcd-a7c2-1924e1c527ed-certificates") pod "keda-operator-ffbb595cb-hk462" (UID: "dd55335f-eb3b-4dcd-a7c2-1924e1c527ed") : references non-existent secret key: ca.crt Apr 23 18:03:18.373543 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:18.373492 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/28ba4a96-3d58-4b78-8099-ec451b8e0240-certificates\") pod \"keda-metrics-apiserver-7c9f485588-mxjm4\" (UID: \"28ba4a96-3d58-4b78-8099-ec451b8e0240\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-mxjm4" Apr 23 18:03:18.373746 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:03:18.373642 2574 secret.go:281] references non-existent secret key: tls.crt Apr 23 18:03:18.373746 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:03:18.373661 2574 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt Apr 23 18:03:18.373746 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:03:18.373683 2574 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-mxjm4: references non-existent secret key: tls.crt Apr 23 18:03:18.373746 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:03:18.373732 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/28ba4a96-3d58-4b78-8099-ec451b8e0240-certificates podName:28ba4a96-3d58-4b78-8099-ec451b8e0240 nodeName:}" failed. No retries permitted until 2026-04-23 18:03:22.37371822 +0000 UTC m=+1336.795387156 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/28ba4a96-3d58-4b78-8099-ec451b8e0240-certificates") pod "keda-metrics-apiserver-7c9f485588-mxjm4" (UID: "28ba4a96-3d58-4b78-8099-ec451b8e0240") : references non-existent secret key: tls.crt Apr 23 18:03:22.205680 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:22.205628 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/dd55335f-eb3b-4dcd-a7c2-1924e1c527ed-certificates\") pod \"keda-operator-ffbb595cb-hk462\" (UID: \"dd55335f-eb3b-4dcd-a7c2-1924e1c527ed\") " pod="openshift-keda/keda-operator-ffbb595cb-hk462" Apr 23 18:03:22.208165 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:22.208138 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/dd55335f-eb3b-4dcd-a7c2-1924e1c527ed-certificates\") pod \"keda-operator-ffbb595cb-hk462\" (UID: \"dd55335f-eb3b-4dcd-a7c2-1924e1c527ed\") " pod="openshift-keda/keda-operator-ffbb595cb-hk462" Apr 23 18:03:22.245189 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:22.245164 2574 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-keda/keda-operator-ffbb595cb-hk462"
Apr 23 18:03:22.367339 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:22.367318 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-operator-ffbb595cb-hk462"]
Apr 23 18:03:22.369867 ip-10-0-139-215 kubenswrapper[2574]: W0423 18:03:22.369833 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd55335f_eb3b_4dcd_a7c2_1924e1c527ed.slice/crio-dc52da1c83d18aca4a83ed5f47e6e61906868b3c45067edad5e339ed8effedda WatchSource:0}: Error finding container dc52da1c83d18aca4a83ed5f47e6e61906868b3c45067edad5e339ed8effedda: Status 404 returned error can't find the container with id dc52da1c83d18aca4a83ed5f47e6e61906868b3c45067edad5e339ed8effedda
Apr 23 18:03:22.407882 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:22.407853 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/28ba4a96-3d58-4b78-8099-ec451b8e0240-certificates\") pod \"keda-metrics-apiserver-7c9f485588-mxjm4\" (UID: \"28ba4a96-3d58-4b78-8099-ec451b8e0240\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-mxjm4"
Apr 23 18:03:22.410368 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:22.410343 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/28ba4a96-3d58-4b78-8099-ec451b8e0240-certificates\") pod \"keda-metrics-apiserver-7c9f485588-mxjm4\" (UID: \"28ba4a96-3d58-4b78-8099-ec451b8e0240\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-mxjm4"
Apr 23 18:03:22.485204 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:22.485126 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-mxjm4"
Apr 23 18:03:22.601949 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:22.601920 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-metrics-apiserver-7c9f485588-mxjm4"]
Apr 23 18:03:22.604019 ip-10-0-139-215 kubenswrapper[2574]: W0423 18:03:22.603993 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28ba4a96_3d58_4b78_8099_ec451b8e0240.slice/crio-96b0aa50ea662087a2b3ba9e51ad2e25a2bb06b7cc8340058dff7ba1d64e8d8d WatchSource:0}: Error finding container 96b0aa50ea662087a2b3ba9e51ad2e25a2bb06b7cc8340058dff7ba1d64e8d8d: Status 404 returned error can't find the container with id 96b0aa50ea662087a2b3ba9e51ad2e25a2bb06b7cc8340058dff7ba1d64e8d8d
Apr 23 18:03:22.809289 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:22.809193 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-operator-ffbb595cb-hk462" event={"ID":"dd55335f-eb3b-4dcd-a7c2-1924e1c527ed","Type":"ContainerStarted","Data":"dc52da1c83d18aca4a83ed5f47e6e61906868b3c45067edad5e339ed8effedda"}
Apr 23 18:03:22.810303 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:22.810268 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-mxjm4" event={"ID":"28ba4a96-3d58-4b78-8099-ec451b8e0240","Type":"ContainerStarted","Data":"96b0aa50ea662087a2b3ba9e51ad2e25a2bb06b7cc8340058dff7ba1d64e8d8d"}
Apr 23 18:03:26.825016 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:26.824981 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-mxjm4" event={"ID":"28ba4a96-3d58-4b78-8099-ec451b8e0240","Type":"ContainerStarted","Data":"6659aa4603ec443f66fdda736b2f5c9089ce531035dbc96131e64ca1cf6bfa8f"}
Apr 23 18:03:26.825458 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:26.825087 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-mxjm4"
Apr 23 18:03:26.826336 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:26.826314 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-operator-ffbb595cb-hk462" event={"ID":"dd55335f-eb3b-4dcd-a7c2-1924e1c527ed","Type":"ContainerStarted","Data":"de3a6f78be036b3c7a482b76ce306ce96938238ce787b5f93e755d2270c6dddf"}
Apr 23 18:03:26.826462 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:26.826450 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-operator-ffbb595cb-hk462"
Apr 23 18:03:26.843871 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:26.843817 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-mxjm4" podStartSLOduration=9.513349749 podStartE2EDuration="12.843803754s" podCreationTimestamp="2026-04-23 18:03:14 +0000 UTC" firstStartedPulling="2026-04-23 18:03:22.605447325 +0000 UTC m=+1337.027116263" lastFinishedPulling="2026-04-23 18:03:25.935901329 +0000 UTC m=+1340.357570268" observedRunningTime="2026-04-23 18:03:26.843512092 +0000 UTC m=+1341.265181064" watchObservedRunningTime="2026-04-23 18:03:26.843803754 +0000 UTC m=+1341.265472714"
Apr 23 18:03:26.863834 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:26.863778 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/keda-operator-ffbb595cb-hk462" podStartSLOduration=9.29309145 podStartE2EDuration="12.863763966s" podCreationTimestamp="2026-04-23 18:03:14 +0000 UTC" firstStartedPulling="2026-04-23 18:03:22.371179092 +0000 UTC m=+1336.792848028" lastFinishedPulling="2026-04-23 18:03:25.941851593 +0000 UTC m=+1340.363520544" observedRunningTime="2026-04-23 18:03:26.86358594 +0000 UTC m=+1341.285254899" watchObservedRunningTime="2026-04-23 18:03:26.863763966 +0000 UTC m=+1341.285432926"
Apr 23 18:03:35.781676 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:35.781620 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/custom-metrics-autoscaler-operator-bbf89fd5d-h7z72"
Apr 23 18:03:37.835057 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:37.835033 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-mxjm4"
Apr 23 18:03:38.795065 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:38.795030 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-admission-cf49989db-hmwhw"
Apr 23 18:03:47.832379 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:03:47.832346 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-operator-ffbb595cb-hk462"
Apr 23 18:04:22.106062 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:04:22.106029 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/kserve-controller-manager-874ff48d-zv6jh"]
Apr 23 18:04:22.111569 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:04:22.111549 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-874ff48d-zv6jh"
Apr 23 18:04:22.115020 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:04:22.114998 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"kube-root-ca.crt\""
Apr 23 18:04:22.115225 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:04:22.115203 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"kserve-webhook-server-cert\""
Apr 23 18:04:22.117011 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:04:22.116500 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"openshift-service-ca.crt\""
Apr 23 18:04:22.117011 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:04:22.116502 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"kserve-controller-manager-dockercfg-z8bcr\""
Apr 23 18:04:22.119007 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:04:22.118939 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-874ff48d-zv6jh"]
Apr 23 18:04:22.165100 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:04:22.165075 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbgvp\" (UniqueName: \"kubernetes.io/projected/6af99dec-13a5-4460-b0a2-05e1a59b7389-kube-api-access-hbgvp\") pod \"kserve-controller-manager-874ff48d-zv6jh\" (UID: \"6af99dec-13a5-4460-b0a2-05e1a59b7389\") " pod="kserve/kserve-controller-manager-874ff48d-zv6jh"
Apr 23 18:04:22.165277 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:04:22.165133 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6af99dec-13a5-4460-b0a2-05e1a59b7389-cert\") pod \"kserve-controller-manager-874ff48d-zv6jh\" (UID: \"6af99dec-13a5-4460-b0a2-05e1a59b7389\") " pod="kserve/kserve-controller-manager-874ff48d-zv6jh"
Apr 23 18:04:22.266463 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:04:22.266436 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hbgvp\" (UniqueName: \"kubernetes.io/projected/6af99dec-13a5-4460-b0a2-05e1a59b7389-kube-api-access-hbgvp\") pod \"kserve-controller-manager-874ff48d-zv6jh\" (UID: \"6af99dec-13a5-4460-b0a2-05e1a59b7389\") " pod="kserve/kserve-controller-manager-874ff48d-zv6jh"
Apr 23 18:04:22.266617 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:04:22.266480 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6af99dec-13a5-4460-b0a2-05e1a59b7389-cert\") pod \"kserve-controller-manager-874ff48d-zv6jh\" (UID: \"6af99dec-13a5-4460-b0a2-05e1a59b7389\") " pod="kserve/kserve-controller-manager-874ff48d-zv6jh"
Apr 23 18:04:22.269075 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:04:22.269053 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6af99dec-13a5-4460-b0a2-05e1a59b7389-cert\") pod \"kserve-controller-manager-874ff48d-zv6jh\" (UID: \"6af99dec-13a5-4460-b0a2-05e1a59b7389\") " pod="kserve/kserve-controller-manager-874ff48d-zv6jh"
Apr 23 18:04:22.277899 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:04:22.277874 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbgvp\" (UniqueName: \"kubernetes.io/projected/6af99dec-13a5-4460-b0a2-05e1a59b7389-kube-api-access-hbgvp\") pod \"kserve-controller-manager-874ff48d-zv6jh\" (UID: \"6af99dec-13a5-4460-b0a2-05e1a59b7389\") " pod="kserve/kserve-controller-manager-874ff48d-zv6jh"
Apr 23 18:04:22.423771 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:04:22.423739 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-874ff48d-zv6jh"
Apr 23 18:04:22.553582 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:04:22.553557 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-874ff48d-zv6jh"]
Apr 23 18:04:22.555570 ip-10-0-139-215 kubenswrapper[2574]: W0423 18:04:22.555542 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6af99dec_13a5_4460_b0a2_05e1a59b7389.slice/crio-c6ddff6ad2cd9aebbacbb21a1e7c5d4e2d6557e136a1d8eb11d4e0b0248cfe87 WatchSource:0}: Error finding container c6ddff6ad2cd9aebbacbb21a1e7c5d4e2d6557e136a1d8eb11d4e0b0248cfe87: Status 404 returned error can't find the container with id c6ddff6ad2cd9aebbacbb21a1e7c5d4e2d6557e136a1d8eb11d4e0b0248cfe87
Apr 23 18:04:22.996993 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:04:22.996960 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-874ff48d-zv6jh" event={"ID":"6af99dec-13a5-4460-b0a2-05e1a59b7389","Type":"ContainerStarted","Data":"c6ddff6ad2cd9aebbacbb21a1e7c5d4e2d6557e136a1d8eb11d4e0b0248cfe87"}
Apr 23 18:04:26.009339 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:04:26.009305 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-874ff48d-zv6jh" event={"ID":"6af99dec-13a5-4460-b0a2-05e1a59b7389","Type":"ContainerStarted","Data":"6585e1638013067c9e847b0a12592f89fb15d83291e5736d380c68197d204c0f"}
Apr 23 18:04:26.009814 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:04:26.009391 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/kserve-controller-manager-874ff48d-zv6jh"
Apr 23 18:04:26.030451 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:04:26.030404 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/kserve-controller-manager-874ff48d-zv6jh" podStartSLOduration=1.272695752 podStartE2EDuration="4.030391356s" podCreationTimestamp="2026-04-23 18:04:22 +0000 UTC" firstStartedPulling="2026-04-23 18:04:22.556912794 +0000 UTC m=+1396.978581735" lastFinishedPulling="2026-04-23 18:04:25.314608389 +0000 UTC m=+1399.736277339" observedRunningTime="2026-04-23 18:04:26.028813028 +0000 UTC m=+1400.450481990" watchObservedRunningTime="2026-04-23 18:04:26.030391356 +0000 UTC m=+1400.452060348"
Apr 23 18:04:57.018253 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:04:57.018219 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/kserve-controller-manager-874ff48d-zv6jh"
Apr 23 18:04:59.748230 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:04:59.748196 2574 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve/kserve-controller-manager-874ff48d-zv6jh"]
Apr 23 18:04:59.748595 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:04:59.748421 2574 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve/kserve-controller-manager-874ff48d-zv6jh" podUID="6af99dec-13a5-4460-b0a2-05e1a59b7389" containerName="manager" containerID="cri-o://6585e1638013067c9e847b0a12592f89fb15d83291e5736d380c68197d204c0f" gracePeriod=10
Apr 23 18:04:59.776105 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:04:59.776071 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/kserve-controller-manager-874ff48d-m8gqz"]
Apr 23 18:04:59.807475 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:04:59.807447 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-874ff48d-m8gqz"]
Apr 23 18:04:59.807617 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:04:59.807507 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-874ff48d-m8gqz"
Apr 23 18:04:59.974127 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:04:59.974090 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a6fa1922-2eea-45cf-981e-0339309fc2d6-cert\") pod \"kserve-controller-manager-874ff48d-m8gqz\" (UID: \"a6fa1922-2eea-45cf-981e-0339309fc2d6\") " pod="kserve/kserve-controller-manager-874ff48d-m8gqz"
Apr 23 18:04:59.974459 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:04:59.974186 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvfnn\" (UniqueName: \"kubernetes.io/projected/a6fa1922-2eea-45cf-981e-0339309fc2d6-kube-api-access-hvfnn\") pod \"kserve-controller-manager-874ff48d-m8gqz\" (UID: \"a6fa1922-2eea-45cf-981e-0339309fc2d6\") " pod="kserve/kserve-controller-manager-874ff48d-m8gqz"
Apr 23 18:05:00.017545 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:00.017523 2574 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-874ff48d-zv6jh"
Apr 23 18:05:00.074856 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:00.074827 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a6fa1922-2eea-45cf-981e-0339309fc2d6-cert\") pod \"kserve-controller-manager-874ff48d-m8gqz\" (UID: \"a6fa1922-2eea-45cf-981e-0339309fc2d6\") " pod="kserve/kserve-controller-manager-874ff48d-m8gqz"
Apr 23 18:05:00.075015 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:00.074872 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hvfnn\" (UniqueName: \"kubernetes.io/projected/a6fa1922-2eea-45cf-981e-0339309fc2d6-kube-api-access-hvfnn\") pod \"kserve-controller-manager-874ff48d-m8gqz\" (UID: \"a6fa1922-2eea-45cf-981e-0339309fc2d6\") " pod="kserve/kserve-controller-manager-874ff48d-m8gqz"
Apr 23 18:05:00.077402 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:00.077374 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a6fa1922-2eea-45cf-981e-0339309fc2d6-cert\") pod \"kserve-controller-manager-874ff48d-m8gqz\" (UID: \"a6fa1922-2eea-45cf-981e-0339309fc2d6\") " pod="kserve/kserve-controller-manager-874ff48d-m8gqz"
Apr 23 18:05:00.083405 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:00.083381 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvfnn\" (UniqueName: \"kubernetes.io/projected/a6fa1922-2eea-45cf-981e-0339309fc2d6-kube-api-access-hvfnn\") pod \"kserve-controller-manager-874ff48d-m8gqz\" (UID: \"a6fa1922-2eea-45cf-981e-0339309fc2d6\") " pod="kserve/kserve-controller-manager-874ff48d-m8gqz"
Apr 23 18:05:00.117547 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:00.117522 2574 generic.go:358] "Generic (PLEG): container finished" podID="6af99dec-13a5-4460-b0a2-05e1a59b7389" containerID="6585e1638013067c9e847b0a12592f89fb15d83291e5736d380c68197d204c0f" exitCode=0
Apr 23 18:05:00.117701 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:00.117554 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-874ff48d-zv6jh" event={"ID":"6af99dec-13a5-4460-b0a2-05e1a59b7389","Type":"ContainerDied","Data":"6585e1638013067c9e847b0a12592f89fb15d83291e5736d380c68197d204c0f"}
Apr 23 18:05:00.117701 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:00.117574 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-874ff48d-zv6jh" event={"ID":"6af99dec-13a5-4460-b0a2-05e1a59b7389","Type":"ContainerDied","Data":"c6ddff6ad2cd9aebbacbb21a1e7c5d4e2d6557e136a1d8eb11d4e0b0248cfe87"}
Apr 23 18:05:00.117701 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:00.117588 2574 scope.go:117] "RemoveContainer" containerID="6585e1638013067c9e847b0a12592f89fb15d83291e5736d380c68197d204c0f"
Apr 23 18:05:00.117701 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:00.117593 2574 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-874ff48d-zv6jh"
Apr 23 18:05:00.125363 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:00.125346 2574 scope.go:117] "RemoveContainer" containerID="6585e1638013067c9e847b0a12592f89fb15d83291e5736d380c68197d204c0f"
Apr 23 18:05:00.125617 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:05:00.125593 2574 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6585e1638013067c9e847b0a12592f89fb15d83291e5736d380c68197d204c0f\": container with ID starting with 6585e1638013067c9e847b0a12592f89fb15d83291e5736d380c68197d204c0f not found: ID does not exist" containerID="6585e1638013067c9e847b0a12592f89fb15d83291e5736d380c68197d204c0f"
Apr 23 18:05:00.125696 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:00.125644 2574 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6585e1638013067c9e847b0a12592f89fb15d83291e5736d380c68197d204c0f"} err="failed to get container status \"6585e1638013067c9e847b0a12592f89fb15d83291e5736d380c68197d204c0f\": rpc error: code = NotFound desc = could not find container \"6585e1638013067c9e847b0a12592f89fb15d83291e5736d380c68197d204c0f\": container with ID starting with 6585e1638013067c9e847b0a12592f89fb15d83291e5736d380c68197d204c0f not found: ID does not exist"
Apr 23 18:05:00.165617 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:00.165594 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-874ff48d-m8gqz"
Apr 23 18:05:00.175376 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:00.175351 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6af99dec-13a5-4460-b0a2-05e1a59b7389-cert\") pod \"6af99dec-13a5-4460-b0a2-05e1a59b7389\" (UID: \"6af99dec-13a5-4460-b0a2-05e1a59b7389\") "
Apr 23 18:05:00.175458 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:00.175392 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbgvp\" (UniqueName: \"kubernetes.io/projected/6af99dec-13a5-4460-b0a2-05e1a59b7389-kube-api-access-hbgvp\") pod \"6af99dec-13a5-4460-b0a2-05e1a59b7389\" (UID: \"6af99dec-13a5-4460-b0a2-05e1a59b7389\") "
Apr 23 18:05:00.177571 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:00.177548 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6af99dec-13a5-4460-b0a2-05e1a59b7389-cert" (OuterVolumeSpecName: "cert") pod "6af99dec-13a5-4460-b0a2-05e1a59b7389" (UID: "6af99dec-13a5-4460-b0a2-05e1a59b7389"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 23 18:05:00.177687 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:00.177626 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6af99dec-13a5-4460-b0a2-05e1a59b7389-kube-api-access-hbgvp" (OuterVolumeSpecName: "kube-api-access-hbgvp") pod "6af99dec-13a5-4460-b0a2-05e1a59b7389" (UID: "6af99dec-13a5-4460-b0a2-05e1a59b7389"). InnerVolumeSpecName "kube-api-access-hbgvp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 23 18:05:00.276379 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:00.276311 2574 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6af99dec-13a5-4460-b0a2-05e1a59b7389-cert\") on node \"ip-10-0-139-215.ec2.internal\" DevicePath \"\""
Apr 23 18:05:00.276379 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:00.276339 2574 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hbgvp\" (UniqueName: \"kubernetes.io/projected/6af99dec-13a5-4460-b0a2-05e1a59b7389-kube-api-access-hbgvp\") on node \"ip-10-0-139-215.ec2.internal\" DevicePath \"\""
Apr 23 18:05:00.286734 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:00.286709 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-874ff48d-m8gqz"]
Apr 23 18:05:00.288386 ip-10-0-139-215 kubenswrapper[2574]: W0423 18:05:00.288363 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6fa1922_2eea_45cf_981e_0339309fc2d6.slice/crio-cfe9f08d50cd34ccaeebc7037ad0b414e078197dad32b5e325a37577713353a5 WatchSource:0}: Error finding container cfe9f08d50cd34ccaeebc7037ad0b414e078197dad32b5e325a37577713353a5: Status 404 returned error can't find the container with id cfe9f08d50cd34ccaeebc7037ad0b414e078197dad32b5e325a37577713353a5
Apr 23 18:05:00.440648 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:00.440594 2574 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve/kserve-controller-manager-874ff48d-zv6jh"]
Apr 23 18:05:00.443501 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:00.443478 2574 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve/kserve-controller-manager-874ff48d-zv6jh"]
Apr 23 18:05:01.124323 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:01.124287 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-874ff48d-m8gqz" event={"ID":"a6fa1922-2eea-45cf-981e-0339309fc2d6","Type":"ContainerStarted","Data":"0ed9574a3e544addd8758d615782f0b5ba206432e7e248ec51a72c2469039093"}
Apr 23 18:05:01.124733 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:01.124332 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-874ff48d-m8gqz" event={"ID":"a6fa1922-2eea-45cf-981e-0339309fc2d6","Type":"ContainerStarted","Data":"cfe9f08d50cd34ccaeebc7037ad0b414e078197dad32b5e325a37577713353a5"}
Apr 23 18:05:01.124733 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:01.124405 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/kserve-controller-manager-874ff48d-m8gqz"
Apr 23 18:05:01.144824 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:01.144784 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/kserve-controller-manager-874ff48d-m8gqz" podStartSLOduration=1.7954016240000001 podStartE2EDuration="2.144773576s" podCreationTimestamp="2026-04-23 18:04:59 +0000 UTC" firstStartedPulling="2026-04-23 18:05:00.289604269 +0000 UTC m=+1434.711273219" lastFinishedPulling="2026-04-23 18:05:00.63897623 +0000 UTC m=+1435.060645171" observedRunningTime="2026-04-23 18:05:01.144109815 +0000 UTC m=+1435.565778774" watchObservedRunningTime="2026-04-23 18:05:01.144773576 +0000 UTC m=+1435.566442533"
Apr 23 18:05:02.135956 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:02.135922 2574 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6af99dec-13a5-4460-b0a2-05e1a59b7389" path="/var/lib/kubelet/pods/6af99dec-13a5-4460-b0a2-05e1a59b7389/volumes"
Apr 23 18:05:32.141078 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:32.141050 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/kserve-controller-manager-874ff48d-m8gqz"
Apr 23 18:05:33.025956 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:33.025924 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/model-serving-api-86f7b4b499-8n7fr"]
Apr 23 18:05:33.026305 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:33.026286 2574 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6af99dec-13a5-4460-b0a2-05e1a59b7389" containerName="manager"
Apr 23 18:05:33.026390 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:33.026308 2574 state_mem.go:107] "Deleted CPUSet assignment" podUID="6af99dec-13a5-4460-b0a2-05e1a59b7389" containerName="manager"
Apr 23 18:05:33.026442 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:33.026414 2574 memory_manager.go:356] "RemoveStaleState removing state" podUID="6af99dec-13a5-4460-b0a2-05e1a59b7389" containerName="manager"
Apr 23 18:05:33.028508 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:33.028484 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/model-serving-api-86f7b4b499-8n7fr"
Apr 23 18:05:33.031374 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:33.031350 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"model-serving-api-dockercfg-pk7lw\""
Apr 23 18:05:33.032578 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:33.032557 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"model-serving-api-tls\""
Apr 23 18:05:33.038426 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:33.038404 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/model-serving-api-86f7b4b499-8n7fr"]
Apr 23 18:05:33.041387 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:33.041364 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/odh-model-controller-696fc77849-lfjdg"]
Apr 23 18:05:33.043758 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:33.043739 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/odh-model-controller-696fc77849-lfjdg"
Apr 23 18:05:33.046519 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:33.046498 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"odh-model-controller-webhook-cert\""
Apr 23 18:05:33.046622 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:33.046534 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"odh-model-controller-dockercfg-wksl2\""
Apr 23 18:05:33.055352 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:33.055326 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/odh-model-controller-696fc77849-lfjdg"]
Apr 23 18:05:33.120426 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:33.120385 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjhfn\" (UniqueName: \"kubernetes.io/projected/2cc98294-11d9-4c7a-83ea-393d4460b0a9-kube-api-access-bjhfn\") pod \"odh-model-controller-696fc77849-lfjdg\" (UID: \"2cc98294-11d9-4c7a-83ea-393d4460b0a9\") " pod="kserve/odh-model-controller-696fc77849-lfjdg"
Apr 23 18:05:33.120426 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:33.120422 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certs\" (UniqueName: \"kubernetes.io/secret/72a0b2b7-604c-465b-b122-f56cc40b0933-tls-certs\") pod \"model-serving-api-86f7b4b499-8n7fr\" (UID: \"72a0b2b7-604c-465b-b122-f56cc40b0933\") " pod="kserve/model-serving-api-86f7b4b499-8n7fr"
Apr 23 18:05:33.120683 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:33.120459 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54flb\" (UniqueName: \"kubernetes.io/projected/72a0b2b7-604c-465b-b122-f56cc40b0933-kube-api-access-54flb\") pod \"model-serving-api-86f7b4b499-8n7fr\" (UID: \"72a0b2b7-604c-465b-b122-f56cc40b0933\") " pod="kserve/model-serving-api-86f7b4b499-8n7fr"
Apr 23 18:05:33.120683 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:33.120531 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2cc98294-11d9-4c7a-83ea-393d4460b0a9-cert\") pod \"odh-model-controller-696fc77849-lfjdg\" (UID: \"2cc98294-11d9-4c7a-83ea-393d4460b0a9\") " pod="kserve/odh-model-controller-696fc77849-lfjdg"
Apr 23 18:05:33.221573 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:33.221529 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bjhfn\" (UniqueName: \"kubernetes.io/projected/2cc98294-11d9-4c7a-83ea-393d4460b0a9-kube-api-access-bjhfn\") pod \"odh-model-controller-696fc77849-lfjdg\" (UID: \"2cc98294-11d9-4c7a-83ea-393d4460b0a9\") " pod="kserve/odh-model-controller-696fc77849-lfjdg"
Apr 23 18:05:33.222045 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:33.221592 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-certs\" (UniqueName: \"kubernetes.io/secret/72a0b2b7-604c-465b-b122-f56cc40b0933-tls-certs\") pod \"model-serving-api-86f7b4b499-8n7fr\" (UID: \"72a0b2b7-604c-465b-b122-f56cc40b0933\") " pod="kserve/model-serving-api-86f7b4b499-8n7fr"
Apr 23 18:05:33.222045 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:33.221684 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-54flb\" (UniqueName: \"kubernetes.io/projected/72a0b2b7-604c-465b-b122-f56cc40b0933-kube-api-access-54flb\") pod \"model-serving-api-86f7b4b499-8n7fr\" (UID: \"72a0b2b7-604c-465b-b122-f56cc40b0933\") " pod="kserve/model-serving-api-86f7b4b499-8n7fr"
Apr 23 18:05:33.222045 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:33.221740 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2cc98294-11d9-4c7a-83ea-393d4460b0a9-cert\") pod \"odh-model-controller-696fc77849-lfjdg\" (UID: \"2cc98294-11d9-4c7a-83ea-393d4460b0a9\") " pod="kserve/odh-model-controller-696fc77849-lfjdg"
Apr 23 18:05:33.222045 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:05:33.221874 2574 secret.go:189] Couldn't get secret kserve/odh-model-controller-webhook-cert: secret "odh-model-controller-webhook-cert" not found
Apr 23 18:05:33.222045 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:05:33.221943 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2cc98294-11d9-4c7a-83ea-393d4460b0a9-cert podName:2cc98294-11d9-4c7a-83ea-393d4460b0a9 nodeName:}" failed. No retries permitted until 2026-04-23 18:05:33.721920445 +0000 UTC m=+1468.143589383 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2cc98294-11d9-4c7a-83ea-393d4460b0a9-cert") pod "odh-model-controller-696fc77849-lfjdg" (UID: "2cc98294-11d9-4c7a-83ea-393d4460b0a9") : secret "odh-model-controller-webhook-cert" not found
Apr 23 18:05:33.224756 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:33.224726 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-certs\" (UniqueName: \"kubernetes.io/secret/72a0b2b7-604c-465b-b122-f56cc40b0933-tls-certs\") pod \"model-serving-api-86f7b4b499-8n7fr\" (UID: \"72a0b2b7-604c-465b-b122-f56cc40b0933\") " pod="kserve/model-serving-api-86f7b4b499-8n7fr"
Apr 23 18:05:33.231178 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:33.231150 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjhfn\" (UniqueName: \"kubernetes.io/projected/2cc98294-11d9-4c7a-83ea-393d4460b0a9-kube-api-access-bjhfn\") pod \"odh-model-controller-696fc77849-lfjdg\" (UID: \"2cc98294-11d9-4c7a-83ea-393d4460b0a9\") " pod="kserve/odh-model-controller-696fc77849-lfjdg"
Apr 23 18:05:33.231305 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:33.231286 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-54flb\" (UniqueName: \"kubernetes.io/projected/72a0b2b7-604c-465b-b122-f56cc40b0933-kube-api-access-54flb\") pod \"model-serving-api-86f7b4b499-8n7fr\" (UID: \"72a0b2b7-604c-465b-b122-f56cc40b0933\") " pod="kserve/model-serving-api-86f7b4b499-8n7fr"
Apr 23 18:05:33.343199 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:33.343104 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/model-serving-api-86f7b4b499-8n7fr"
Apr 23 18:05:33.469183 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:33.469155 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/model-serving-api-86f7b4b499-8n7fr"]
Apr 23 18:05:33.470819 ip-10-0-139-215 kubenswrapper[2574]: W0423 18:05:33.470791 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72a0b2b7_604c_465b_b122_f56cc40b0933.slice/crio-fc0b4fd6ea59d73528dd806a4021f95a0586f5d9f5ea55dbfa4ec1cb0e3e620f WatchSource:0}: Error finding container fc0b4fd6ea59d73528dd806a4021f95a0586f5d9f5ea55dbfa4ec1cb0e3e620f: Status 404 returned error can't find the container with id fc0b4fd6ea59d73528dd806a4021f95a0586f5d9f5ea55dbfa4ec1cb0e3e620f
Apr 23 18:05:33.726261 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:33.726229 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2cc98294-11d9-4c7a-83ea-393d4460b0a9-cert\") pod \"odh-model-controller-696fc77849-lfjdg\" (UID: \"2cc98294-11d9-4c7a-83ea-393d4460b0a9\") " pod="kserve/odh-model-controller-696fc77849-lfjdg"
Apr 23 18:05:33.726433 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:05:33.726378 2574 secret.go:189] Couldn't get secret kserve/odh-model-controller-webhook-cert: secret "odh-model-controller-webhook-cert" not found
Apr 23 18:05:33.726475 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:05:33.726446 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2cc98294-11d9-4c7a-83ea-393d4460b0a9-cert podName:2cc98294-11d9-4c7a-83ea-393d4460b0a9 nodeName:}" failed. No retries permitted until 2026-04-23 18:05:34.726429503 +0000 UTC m=+1469.148098440 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2cc98294-11d9-4c7a-83ea-393d4460b0a9-cert") pod "odh-model-controller-696fc77849-lfjdg" (UID: "2cc98294-11d9-4c7a-83ea-393d4460b0a9") : secret "odh-model-controller-webhook-cert" not found
Apr 23 18:05:34.228536 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:34.228495 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/model-serving-api-86f7b4b499-8n7fr" event={"ID":"72a0b2b7-604c-465b-b122-f56cc40b0933","Type":"ContainerStarted","Data":"fc0b4fd6ea59d73528dd806a4021f95a0586f5d9f5ea55dbfa4ec1cb0e3e620f"}
Apr 23 18:05:34.737296 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:34.737261 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2cc98294-11d9-4c7a-83ea-393d4460b0a9-cert\") pod \"odh-model-controller-696fc77849-lfjdg\" (UID: \"2cc98294-11d9-4c7a-83ea-393d4460b0a9\") " pod="kserve/odh-model-controller-696fc77849-lfjdg"
Apr 23 18:05:34.739905 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:34.739874 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2cc98294-11d9-4c7a-83ea-393d4460b0a9-cert\") pod \"odh-model-controller-696fc77849-lfjdg\" (UID: \"2cc98294-11d9-4c7a-83ea-393d4460b0a9\") " pod="kserve/odh-model-controller-696fc77849-lfjdg"
Apr 23 18:05:34.855122 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:34.855034 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/odh-model-controller-696fc77849-lfjdg"
Apr 23 18:05:34.980678 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:34.980648 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/odh-model-controller-696fc77849-lfjdg"]
Apr 23 18:05:34.981384 ip-10-0-139-215 kubenswrapper[2574]: W0423 18:05:34.981356 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cc98294_11d9_4c7a_83ea_393d4460b0a9.slice/crio-886ed85e2c0349b74dece6a4ea4d271f28a92daac8ce2205e9f5b8004b54a165 WatchSource:0}: Error finding container 886ed85e2c0349b74dece6a4ea4d271f28a92daac8ce2205e9f5b8004b54a165: Status 404 returned error can't find the container with id 886ed85e2c0349b74dece6a4ea4d271f28a92daac8ce2205e9f5b8004b54a165
Apr 23 18:05:35.232863 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:35.232832 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/odh-model-controller-696fc77849-lfjdg" event={"ID":"2cc98294-11d9-4c7a-83ea-393d4460b0a9","Type":"ContainerStarted","Data":"886ed85e2c0349b74dece6a4ea4d271f28a92daac8ce2205e9f5b8004b54a165"}
Apr 23 18:05:35.234221 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:35.234199 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/model-serving-api-86f7b4b499-8n7fr" event={"ID":"72a0b2b7-604c-465b-b122-f56cc40b0933","Type":"ContainerStarted","Data":"b0d3cec1c5d878d78407e277dc70d667a7c4c99f68e6544525df79d04e1ebd23"}
Apr 23 18:05:35.234360 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:35.234348 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/model-serving-api-86f7b4b499-8n7fr"
Apr 23 18:05:35.253169 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:35.253125 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/model-serving-api-86f7b4b499-8n7fr" podStartSLOduration=1.199409852 podStartE2EDuration="2.253111378s" podCreationTimestamp="2026-04-23 18:05:33 +0000 UTC" firstStartedPulling="2026-04-23 18:05:33.472692914 +0000 UTC m=+1467.894361855" lastFinishedPulling="2026-04-23 18:05:34.526394429 +0000 UTC m=+1468.948063381" observedRunningTime="2026-04-23 18:05:35.2524351 +0000 UTC m=+1469.674104058" watchObservedRunningTime="2026-04-23 18:05:35.253111378 +0000 UTC m=+1469.674780338"
Apr 23 18:05:38.245778 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:38.245746 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/odh-model-controller-696fc77849-lfjdg" event={"ID":"2cc98294-11d9-4c7a-83ea-393d4460b0a9","Type":"ContainerStarted","Data":"894649d8351c14534c2d577b7bcd56cce28b1a66796baf0e7921a8b8b1a2e215"}
Apr 23 18:05:38.246190 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:38.245972 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/odh-model-controller-696fc77849-lfjdg"
Apr 23 18:05:38.264214 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:38.264158 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/odh-model-controller-696fc77849-lfjdg" podStartSLOduration=2.554343471 podStartE2EDuration="5.264145797s" podCreationTimestamp="2026-04-23 18:05:33 +0000 UTC" firstStartedPulling="2026-04-23 18:05:34.983089705 +0000 UTC m=+1469.404758642" lastFinishedPulling="2026-04-23 18:05:37.69289202 +0000 UTC m=+1472.114560968" observedRunningTime="2026-04-23 18:05:38.263828253 +0000 UTC m=+1472.685497224" watchObservedRunningTime="2026-04-23 18:05:38.264145797 +0000 UTC m=+1472.685814757"
Apr 23 18:05:46.241816 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:46.241782 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/model-serving-api-86f7b4b499-8n7fr"
Apr 23 18:05:49.252389 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:05:49.252356 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/odh-model-controller-696fc77849-lfjdg"
Apr 23
18:06:06.100749 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:06:06.100626 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-246wr_0f6164a3-aee1-463f-8c3a-a432711f40db/ovn-acl-logging/0.log" Apr 23 18:06:06.101526 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:06:06.100761 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-246wr_0f6164a3-aee1-463f-8c3a-a432711f40db/ovn-acl-logging/0.log" Apr 23 18:09:34.813795 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:09:34.813706 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-16a43-98bb746d-5xp2q"] Apr 23 18:09:34.815902 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:09:34.815884 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-raw-16a43-98bb746d-5xp2q" Apr 23 18:09:34.818976 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:09:34.818949 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"model-chainer-raw-16a43-serving-cert\"" Apr 23 18:09:34.819092 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:09:34.819032 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"default-dockercfg-rvdb2\"" Apr 23 18:09:34.819092 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:09:34.819047 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"openshift-service-ca.crt\"" Apr 23 18:09:34.819092 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:09:34.819066 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"model-chainer-raw-16a43-kube-rbac-proxy-sar-config\"" Apr 23 18:09:34.826827 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:09:34.826807 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["kserve-ci-e2e-test/model-chainer-raw-16a43-98bb746d-5xp2q"] Apr 23 18:09:34.849417 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:09:34.849393 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d5880b68-0059-412f-bae2-80d01a81c6a1-proxy-tls\") pod \"model-chainer-raw-16a43-98bb746d-5xp2q\" (UID: \"d5880b68-0059-412f-bae2-80d01a81c6a1\") " pod="kserve-ci-e2e-test/model-chainer-raw-16a43-98bb746d-5xp2q" Apr 23 18:09:34.849535 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:09:34.849451 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5880b68-0059-412f-bae2-80d01a81c6a1-openshift-service-ca-bundle\") pod \"model-chainer-raw-16a43-98bb746d-5xp2q\" (UID: \"d5880b68-0059-412f-bae2-80d01a81c6a1\") " pod="kserve-ci-e2e-test/model-chainer-raw-16a43-98bb746d-5xp2q" Apr 23 18:09:34.950383 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:09:34.950345 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5880b68-0059-412f-bae2-80d01a81c6a1-openshift-service-ca-bundle\") pod \"model-chainer-raw-16a43-98bb746d-5xp2q\" (UID: \"d5880b68-0059-412f-bae2-80d01a81c6a1\") " pod="kserve-ci-e2e-test/model-chainer-raw-16a43-98bb746d-5xp2q" Apr 23 18:09:34.950539 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:09:34.950435 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d5880b68-0059-412f-bae2-80d01a81c6a1-proxy-tls\") pod \"model-chainer-raw-16a43-98bb746d-5xp2q\" (UID: \"d5880b68-0059-412f-bae2-80d01a81c6a1\") " pod="kserve-ci-e2e-test/model-chainer-raw-16a43-98bb746d-5xp2q" Apr 23 18:09:34.950582 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:09:34.950558 2574 secret.go:189] Couldn't get 
secret kserve-ci-e2e-test/model-chainer-raw-16a43-serving-cert: secret "model-chainer-raw-16a43-serving-cert" not found Apr 23 18:09:34.950700 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:09:34.950675 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5880b68-0059-412f-bae2-80d01a81c6a1-proxy-tls podName:d5880b68-0059-412f-bae2-80d01a81c6a1 nodeName:}" failed. No retries permitted until 2026-04-23 18:09:35.450622499 +0000 UTC m=+1709.872291439 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/d5880b68-0059-412f-bae2-80d01a81c6a1-proxy-tls") pod "model-chainer-raw-16a43-98bb746d-5xp2q" (UID: "d5880b68-0059-412f-bae2-80d01a81c6a1") : secret "model-chainer-raw-16a43-serving-cert" not found Apr 23 18:09:34.951045 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:09:34.951027 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5880b68-0059-412f-bae2-80d01a81c6a1-openshift-service-ca-bundle\") pod \"model-chainer-raw-16a43-98bb746d-5xp2q\" (UID: \"d5880b68-0059-412f-bae2-80d01a81c6a1\") " pod="kserve-ci-e2e-test/model-chainer-raw-16a43-98bb746d-5xp2q" Apr 23 18:09:35.453848 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:09:35.453815 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d5880b68-0059-412f-bae2-80d01a81c6a1-proxy-tls\") pod \"model-chainer-raw-16a43-98bb746d-5xp2q\" (UID: \"d5880b68-0059-412f-bae2-80d01a81c6a1\") " pod="kserve-ci-e2e-test/model-chainer-raw-16a43-98bb746d-5xp2q" Apr 23 18:09:35.456350 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:09:35.456330 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d5880b68-0059-412f-bae2-80d01a81c6a1-proxy-tls\") pod \"model-chainer-raw-16a43-98bb746d-5xp2q\" (UID: 
\"d5880b68-0059-412f-bae2-80d01a81c6a1\") " pod="kserve-ci-e2e-test/model-chainer-raw-16a43-98bb746d-5xp2q" Apr 23 18:09:35.727824 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:09:35.727745 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-raw-16a43-98bb746d-5xp2q" Apr 23 18:09:35.848035 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:09:35.848010 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-16a43-98bb746d-5xp2q"] Apr 23 18:09:35.850278 ip-10-0-139-215 kubenswrapper[2574]: W0423 18:09:35.850249 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5880b68_0059_412f_bae2_80d01a81c6a1.slice/crio-6c5b00fdf1429c5d8daf44f2181557fc3f2d89d6f5fdc41b45a816986131ab15 WatchSource:0}: Error finding container 6c5b00fdf1429c5d8daf44f2181557fc3f2d89d6f5fdc41b45a816986131ab15: Status 404 returned error can't find the container with id 6c5b00fdf1429c5d8daf44f2181557fc3f2d89d6f5fdc41b45a816986131ab15 Apr 23 18:09:35.852158 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:09:35.852141 2574 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 23 18:09:36.008344 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:09:36.008260 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-raw-16a43-98bb746d-5xp2q" event={"ID":"d5880b68-0059-412f-bae2-80d01a81c6a1","Type":"ContainerStarted","Data":"6c5b00fdf1429c5d8daf44f2181557fc3f2d89d6f5fdc41b45a816986131ab15"} Apr 23 18:09:39.019147 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:09:39.019109 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-raw-16a43-98bb746d-5xp2q" event={"ID":"d5880b68-0059-412f-bae2-80d01a81c6a1","Type":"ContainerStarted","Data":"1ef026133c830776401dda8c9af50976d545ab656e50afdce16a1305007a0c0c"} Apr 23 
18:09:39.019515 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:09:39.019236 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/model-chainer-raw-16a43-98bb746d-5xp2q" Apr 23 18:09:39.038177 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:09:39.038136 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/model-chainer-raw-16a43-98bb746d-5xp2q" podStartSLOduration=2.647260189 podStartE2EDuration="5.038124516s" podCreationTimestamp="2026-04-23 18:09:34 +0000 UTC" firstStartedPulling="2026-04-23 18:09:35.852271367 +0000 UTC m=+1710.273940305" lastFinishedPulling="2026-04-23 18:09:38.243135682 +0000 UTC m=+1712.664804632" observedRunningTime="2026-04-23 18:09:39.035905353 +0000 UTC m=+1713.457574316" watchObservedRunningTime="2026-04-23 18:09:39.038124516 +0000 UTC m=+1713.459793475" Apr 23 18:09:44.859782 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:09:44.859748 2574 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-16a43-98bb746d-5xp2q"] Apr 23 18:09:44.860184 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:09:44.860013 2574 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/model-chainer-raw-16a43-98bb746d-5xp2q" podUID="d5880b68-0059-412f-bae2-80d01a81c6a1" containerName="model-chainer-raw-16a43" containerID="cri-o://1ef026133c830776401dda8c9af50976d545ab656e50afdce16a1305007a0c0c" gracePeriod=30 Apr 23 18:09:44.867278 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:09:44.867248 2574 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-raw-16a43-98bb746d-5xp2q" podUID="d5880b68-0059-412f-bae2-80d01a81c6a1" containerName="model-chainer-raw-16a43" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:09:49.864697 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:09:49.864652 2574 prober.go:120] "Probe failed" probeType="Readiness" 
pod="kserve-ci-e2e-test/model-chainer-raw-16a43-98bb746d-5xp2q" podUID="d5880b68-0059-412f-bae2-80d01a81c6a1" containerName="model-chainer-raw-16a43" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:09:54.863906 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:09:54.863870 2574 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-raw-16a43-98bb746d-5xp2q" podUID="d5880b68-0059-412f-bae2-80d01a81c6a1" containerName="model-chainer-raw-16a43" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:09:59.863804 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:09:59.863762 2574 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-raw-16a43-98bb746d-5xp2q" podUID="d5880b68-0059-412f-bae2-80d01a81c6a1" containerName="model-chainer-raw-16a43" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:10:04.864914 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:10:04.864869 2574 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-raw-16a43-98bb746d-5xp2q" podUID="d5880b68-0059-412f-bae2-80d01a81c6a1" containerName="model-chainer-raw-16a43" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:10:09.863901 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:10:09.863863 2574 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-raw-16a43-98bb746d-5xp2q" podUID="d5880b68-0059-412f-bae2-80d01a81c6a1" containerName="model-chainer-raw-16a43" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:10:14.864071 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:10:14.864027 2574 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-raw-16a43-98bb746d-5xp2q" podUID="d5880b68-0059-412f-bae2-80d01a81c6a1" containerName="model-chainer-raw-16a43" probeResult="failure" output="Get 
\"https://10.133.0.28:8080/readyz\": dial tcp 10.133.0.28:8080: connect: connection refused" Apr 23 18:10:14.891727 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:10:14.891669 2574 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5880b68_0059_412f_bae2_80d01a81c6a1.slice/crio-conmon-1ef026133c830776401dda8c9af50976d545ab656e50afdce16a1305007a0c0c.scope\": RecentStats: unable to find data in memory cache]" Apr 23 18:10:15.140697 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:10:15.140668 2574 generic.go:358] "Generic (PLEG): container finished" podID="d5880b68-0059-412f-bae2-80d01a81c6a1" containerID="1ef026133c830776401dda8c9af50976d545ab656e50afdce16a1305007a0c0c" exitCode=0 Apr 23 18:10:15.140869 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:10:15.140703 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-raw-16a43-98bb746d-5xp2q" event={"ID":"d5880b68-0059-412f-bae2-80d01a81c6a1","Type":"ContainerDied","Data":"1ef026133c830776401dda8c9af50976d545ab656e50afdce16a1305007a0c0c"} Apr 23 18:10:15.510617 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:10:15.510593 2574 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-raw-16a43-98bb746d-5xp2q" Apr 23 18:10:15.569923 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:10:15.569892 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5880b68-0059-412f-bae2-80d01a81c6a1-openshift-service-ca-bundle\") pod \"d5880b68-0059-412f-bae2-80d01a81c6a1\" (UID: \"d5880b68-0059-412f-bae2-80d01a81c6a1\") " Apr 23 18:10:15.570103 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:10:15.569933 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d5880b68-0059-412f-bae2-80d01a81c6a1-proxy-tls\") pod \"d5880b68-0059-412f-bae2-80d01a81c6a1\" (UID: \"d5880b68-0059-412f-bae2-80d01a81c6a1\") " Apr 23 18:10:15.570296 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:10:15.570269 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5880b68-0059-412f-bae2-80d01a81c6a1-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "d5880b68-0059-412f-bae2-80d01a81c6a1" (UID: "d5880b68-0059-412f-bae2-80d01a81c6a1"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 18:10:15.572201 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:10:15.572182 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5880b68-0059-412f-bae2-80d01a81c6a1-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d5880b68-0059-412f-bae2-80d01a81c6a1" (UID: "d5880b68-0059-412f-bae2-80d01a81c6a1"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 18:10:15.670960 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:10:15.670926 2574 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5880b68-0059-412f-bae2-80d01a81c6a1-openshift-service-ca-bundle\") on node \"ip-10-0-139-215.ec2.internal\" DevicePath \"\"" Apr 23 18:10:15.670960 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:10:15.670954 2574 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d5880b68-0059-412f-bae2-80d01a81c6a1-proxy-tls\") on node \"ip-10-0-139-215.ec2.internal\" DevicePath \"\"" Apr 23 18:10:16.144507 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:10:16.144475 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-raw-16a43-98bb746d-5xp2q" event={"ID":"d5880b68-0059-412f-bae2-80d01a81c6a1","Type":"ContainerDied","Data":"6c5b00fdf1429c5d8daf44f2181557fc3f2d89d6f5fdc41b45a816986131ab15"} Apr 23 18:10:16.144883 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:10:16.144513 2574 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-raw-16a43-98bb746d-5xp2q" Apr 23 18:10:16.144883 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:10:16.144519 2574 scope.go:117] "RemoveContainer" containerID="1ef026133c830776401dda8c9af50976d545ab656e50afdce16a1305007a0c0c" Apr 23 18:10:16.165314 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:10:16.165291 2574 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-16a43-98bb746d-5xp2q"] Apr 23 18:10:16.168449 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:10:16.168428 2574 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-16a43-98bb746d-5xp2q"] Apr 23 18:10:18.136327 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:10:18.136294 2574 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5880b68-0059-412f-bae2-80d01a81c6a1" path="/var/lib/kubelet/pods/d5880b68-0059-412f-bae2-80d01a81c6a1/volumes" Apr 23 18:11:06.121343 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:06.121229 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-246wr_0f6164a3-aee1-463f-8c3a-a432711f40db/ovn-acl-logging/0.log" Apr 23 18:11:06.129225 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:06.123283 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-246wr_0f6164a3-aee1-463f-8c3a-a432711f40db/ovn-acl-logging/0.log" Apr 23 18:11:15.150861 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:15.150823 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-hpa-d4310-6497759bd4-ps2zp"] Apr 23 18:11:15.151433 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:15.151396 2574 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d5880b68-0059-412f-bae2-80d01a81c6a1" containerName="model-chainer-raw-16a43" Apr 23 18:11:15.151433 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:15.151419 2574 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d5880b68-0059-412f-bae2-80d01a81c6a1" containerName="model-chainer-raw-16a43" Apr 23 18:11:15.151603 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:15.151502 2574 memory_manager.go:356] "RemoveStaleState removing state" podUID="d5880b68-0059-412f-bae2-80d01a81c6a1" containerName="model-chainer-raw-16a43" Apr 23 18:11:15.154756 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:15.154734 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-d4310-6497759bd4-ps2zp" Apr 23 18:11:15.157901 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:15.157880 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"openshift-service-ca.crt\"" Apr 23 18:11:15.158028 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:15.158005 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"model-chainer-raw-hpa-d4310-kube-rbac-proxy-sar-config\"" Apr 23 18:11:15.158090 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:15.158020 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"default-dockercfg-rvdb2\"" Apr 23 18:11:15.159050 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:15.159024 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"model-chainer-raw-hpa-d4310-serving-cert\"" Apr 23 18:11:15.162310 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:15.162291 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-hpa-d4310-6497759bd4-ps2zp"] Apr 23 18:11:15.230935 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:15.230904 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ca63d941-854b-4872-a822-312ec779392c-openshift-service-ca-bundle\") pod \"model-chainer-raw-hpa-d4310-6497759bd4-ps2zp\" (UID: \"ca63d941-854b-4872-a822-312ec779392c\") " pod="kserve-ci-e2e-test/model-chainer-raw-hpa-d4310-6497759bd4-ps2zp" Apr 23 18:11:15.231121 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:15.230968 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ca63d941-854b-4872-a822-312ec779392c-proxy-tls\") pod \"model-chainer-raw-hpa-d4310-6497759bd4-ps2zp\" (UID: \"ca63d941-854b-4872-a822-312ec779392c\") " pod="kserve-ci-e2e-test/model-chainer-raw-hpa-d4310-6497759bd4-ps2zp" Apr 23 18:11:15.331905 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:15.331869 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca63d941-854b-4872-a822-312ec779392c-openshift-service-ca-bundle\") pod \"model-chainer-raw-hpa-d4310-6497759bd4-ps2zp\" (UID: \"ca63d941-854b-4872-a822-312ec779392c\") " pod="kserve-ci-e2e-test/model-chainer-raw-hpa-d4310-6497759bd4-ps2zp" Apr 23 18:11:15.332089 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:15.331920 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ca63d941-854b-4872-a822-312ec779392c-proxy-tls\") pod \"model-chainer-raw-hpa-d4310-6497759bd4-ps2zp\" (UID: \"ca63d941-854b-4872-a822-312ec779392c\") " pod="kserve-ci-e2e-test/model-chainer-raw-hpa-d4310-6497759bd4-ps2zp" Apr 23 18:11:15.332089 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:11:15.332052 2574 secret.go:189] Couldn't get secret kserve-ci-e2e-test/model-chainer-raw-hpa-d4310-serving-cert: secret "model-chainer-raw-hpa-d4310-serving-cert" not found Apr 23 18:11:15.332197 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:11:15.332118 2574 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/ca63d941-854b-4872-a822-312ec779392c-proxy-tls podName:ca63d941-854b-4872-a822-312ec779392c nodeName:}" failed. No retries permitted until 2026-04-23 18:11:15.832101357 +0000 UTC m=+1810.253770294 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/ca63d941-854b-4872-a822-312ec779392c-proxy-tls") pod "model-chainer-raw-hpa-d4310-6497759bd4-ps2zp" (UID: "ca63d941-854b-4872-a822-312ec779392c") : secret "model-chainer-raw-hpa-d4310-serving-cert" not found Apr 23 18:11:15.332497 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:15.332478 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca63d941-854b-4872-a822-312ec779392c-openshift-service-ca-bundle\") pod \"model-chainer-raw-hpa-d4310-6497759bd4-ps2zp\" (UID: \"ca63d941-854b-4872-a822-312ec779392c\") " pod="kserve-ci-e2e-test/model-chainer-raw-hpa-d4310-6497759bd4-ps2zp" Apr 23 18:11:15.836405 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:15.836369 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ca63d941-854b-4872-a822-312ec779392c-proxy-tls\") pod \"model-chainer-raw-hpa-d4310-6497759bd4-ps2zp\" (UID: \"ca63d941-854b-4872-a822-312ec779392c\") " pod="kserve-ci-e2e-test/model-chainer-raw-hpa-d4310-6497759bd4-ps2zp" Apr 23 18:11:15.838967 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:15.838947 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ca63d941-854b-4872-a822-312ec779392c-proxy-tls\") pod \"model-chainer-raw-hpa-d4310-6497759bd4-ps2zp\" (UID: \"ca63d941-854b-4872-a822-312ec779392c\") " pod="kserve-ci-e2e-test/model-chainer-raw-hpa-d4310-6497759bd4-ps2zp" Apr 23 18:11:16.067614 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:16.067576 2574 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-d4310-6497759bd4-ps2zp" Apr 23 18:11:16.191089 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:16.191065 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-hpa-d4310-6497759bd4-ps2zp"] Apr 23 18:11:16.193264 ip-10-0-139-215 kubenswrapper[2574]: W0423 18:11:16.193234 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca63d941_854b_4872_a822_312ec779392c.slice/crio-a350e3d5de1cd1c981d6c22e93d94f7f77b7ff5a9567b19c8a7d02441a2a30d6 WatchSource:0}: Error finding container a350e3d5de1cd1c981d6c22e93d94f7f77b7ff5a9567b19c8a7d02441a2a30d6: Status 404 returned error can't find the container with id a350e3d5de1cd1c981d6c22e93d94f7f77b7ff5a9567b19c8a7d02441a2a30d6 Apr 23 18:11:16.333497 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:16.333457 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-d4310-6497759bd4-ps2zp" event={"ID":"ca63d941-854b-4872-a822-312ec779392c","Type":"ContainerStarted","Data":"5fef1092e50035267408e0f43fcfbe356c91f339039296e9b1b8d6bd345690a6"} Apr 23 18:11:16.333696 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:16.333505 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-d4310-6497759bd4-ps2zp" event={"ID":"ca63d941-854b-4872-a822-312ec779392c","Type":"ContainerStarted","Data":"a350e3d5de1cd1c981d6c22e93d94f7f77b7ff5a9567b19c8a7d02441a2a30d6"} Apr 23 18:11:16.333696 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:16.333548 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-d4310-6497759bd4-ps2zp" Apr 23 18:11:16.351199 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:16.351156 2574 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-d4310-6497759bd4-ps2zp" podStartSLOduration=1.35114386 podStartE2EDuration="1.35114386s" podCreationTimestamp="2026-04-23 18:11:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:11:16.349910604 +0000 UTC m=+1810.771579563" watchObservedRunningTime="2026-04-23 18:11:16.35114386 +0000 UTC m=+1810.772812818" Apr 23 18:11:22.345212 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:22.345174 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-d4310-6497759bd4-ps2zp" Apr 23 18:11:25.199901 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:25.199871 2574 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-hpa-d4310-6497759bd4-ps2zp"] Apr 23 18:11:25.200347 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:25.200058 2574 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-d4310-6497759bd4-ps2zp" podUID="ca63d941-854b-4872-a822-312ec779392c" containerName="model-chainer-raw-hpa-d4310" containerID="cri-o://5fef1092e50035267408e0f43fcfbe356c91f339039296e9b1b8d6bd345690a6" gracePeriod=30 Apr 23 18:11:27.343254 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:27.343211 2574 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-d4310-6497759bd4-ps2zp" podUID="ca63d941-854b-4872-a822-312ec779392c" containerName="model-chainer-raw-hpa-d4310" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:11:32.343548 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:32.343512 2574 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-d4310-6497759bd4-ps2zp" podUID="ca63d941-854b-4872-a822-312ec779392c" containerName="model-chainer-raw-hpa-d4310" 
probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:11:37.343675 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:37.343612 2574 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-d4310-6497759bd4-ps2zp" podUID="ca63d941-854b-4872-a822-312ec779392c" containerName="model-chainer-raw-hpa-d4310" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:11:37.344061 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:37.343767 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-d4310-6497759bd4-ps2zp" Apr 23 18:11:42.343542 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:42.343505 2574 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-d4310-6497759bd4-ps2zp" podUID="ca63d941-854b-4872-a822-312ec779392c" containerName="model-chainer-raw-hpa-d4310" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:11:47.343270 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:47.343229 2574 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-d4310-6497759bd4-ps2zp" podUID="ca63d941-854b-4872-a822-312ec779392c" containerName="model-chainer-raw-hpa-d4310" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:11:52.343122 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:52.343080 2574 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-d4310-6497759bd4-ps2zp" podUID="ca63d941-854b-4872-a822-312ec779392c" containerName="model-chainer-raw-hpa-d4310" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 23 18:11:55.354393 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:55.354369 2574 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-d4310-6497759bd4-ps2zp" Apr 23 18:11:55.363604 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:55.363583 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca63d941-854b-4872-a822-312ec779392c-openshift-service-ca-bundle\") pod \"ca63d941-854b-4872-a822-312ec779392c\" (UID: \"ca63d941-854b-4872-a822-312ec779392c\") " Apr 23 18:11:55.363690 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:55.363654 2574 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ca63d941-854b-4872-a822-312ec779392c-proxy-tls\") pod \"ca63d941-854b-4872-a822-312ec779392c\" (UID: \"ca63d941-854b-4872-a822-312ec779392c\") " Apr 23 18:11:55.363960 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:55.363935 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca63d941-854b-4872-a822-312ec779392c-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "ca63d941-854b-4872-a822-312ec779392c" (UID: "ca63d941-854b-4872-a822-312ec779392c"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 18:11:55.365772 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:55.365754 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca63d941-854b-4872-a822-312ec779392c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "ca63d941-854b-4872-a822-312ec779392c" (UID: "ca63d941-854b-4872-a822-312ec779392c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 18:11:55.458668 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:55.458554 2574 generic.go:358] "Generic (PLEG): container finished" podID="ca63d941-854b-4872-a822-312ec779392c" containerID="5fef1092e50035267408e0f43fcfbe356c91f339039296e9b1b8d6bd345690a6" exitCode=0 Apr 23 18:11:55.458668 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:55.458620 2574 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-d4310-6497759bd4-ps2zp" Apr 23 18:11:55.458873 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:55.458668 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-d4310-6497759bd4-ps2zp" event={"ID":"ca63d941-854b-4872-a822-312ec779392c","Type":"ContainerDied","Data":"5fef1092e50035267408e0f43fcfbe356c91f339039296e9b1b8d6bd345690a6"} Apr 23 18:11:55.458873 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:55.458706 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-d4310-6497759bd4-ps2zp" event={"ID":"ca63d941-854b-4872-a822-312ec779392c","Type":"ContainerDied","Data":"a350e3d5de1cd1c981d6c22e93d94f7f77b7ff5a9567b19c8a7d02441a2a30d6"} Apr 23 18:11:55.458873 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:55.458722 2574 scope.go:117] "RemoveContainer" containerID="5fef1092e50035267408e0f43fcfbe356c91f339039296e9b1b8d6bd345690a6" Apr 23 18:11:55.464868 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:55.464842 2574 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ca63d941-854b-4872-a822-312ec779392c-proxy-tls\") on node \"ip-10-0-139-215.ec2.internal\" DevicePath \"\"" Apr 23 18:11:55.464868 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:55.464865 2574 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ca63d941-854b-4872-a822-312ec779392c-openshift-service-ca-bundle\") on node \"ip-10-0-139-215.ec2.internal\" DevicePath \"\"" Apr 23 18:11:55.467532 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:55.467514 2574 scope.go:117] "RemoveContainer" containerID="5fef1092e50035267408e0f43fcfbe356c91f339039296e9b1b8d6bd345690a6" Apr 23 18:11:55.467843 ip-10-0-139-215 kubenswrapper[2574]: E0423 18:11:55.467822 2574 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fef1092e50035267408e0f43fcfbe356c91f339039296e9b1b8d6bd345690a6\": container with ID starting with 5fef1092e50035267408e0f43fcfbe356c91f339039296e9b1b8d6bd345690a6 not found: ID does not exist" containerID="5fef1092e50035267408e0f43fcfbe356c91f339039296e9b1b8d6bd345690a6" Apr 23 18:11:55.467906 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:55.467857 2574 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fef1092e50035267408e0f43fcfbe356c91f339039296e9b1b8d6bd345690a6"} err="failed to get container status \"5fef1092e50035267408e0f43fcfbe356c91f339039296e9b1b8d6bd345690a6\": rpc error: code = NotFound desc = could not find container \"5fef1092e50035267408e0f43fcfbe356c91f339039296e9b1b8d6bd345690a6\": container with ID starting with 5fef1092e50035267408e0f43fcfbe356c91f339039296e9b1b8d6bd345690a6 not found: ID does not exist" Apr 23 18:11:55.480933 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:55.480903 2574 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-hpa-d4310-6497759bd4-ps2zp"] Apr 23 18:11:55.486809 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:55.486784 2574 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-hpa-d4310-6497759bd4-ps2zp"] Apr 23 18:11:56.136599 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:11:56.136562 2574 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="ca63d941-854b-4872-a822-312ec779392c" path="/var/lib/kubelet/pods/ca63d941-854b-4872-a822-312ec779392c/volumes" Apr 23 18:16:06.143113 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:16:06.143088 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-246wr_0f6164a3-aee1-463f-8c3a-a432711f40db/ovn-acl-logging/0.log" Apr 23 18:16:06.145466 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:16:06.145452 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-246wr_0f6164a3-aee1-463f-8c3a-a432711f40db/ovn-acl-logging/0.log" Apr 23 18:20:53.887475 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:20:53.887443 2574 ???:1] "http: TLS handshake error from 10.0.139.215:51186: EOF" Apr 23 18:20:53.888485 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:20:53.888467 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_global-pull-secret-syncer-pthrv_71650021-930d-4f87-9886-b770243bb591/global-pull-secret-syncer/0.log" Apr 23 18:20:54.029840 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:20:54.029811 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_konnectivity-agent-vq478_3cccf98b-e13a-4889-a901-8e28ef02f8da/konnectivity-agent/0.log" Apr 23 18:20:54.130276 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:20:54.130246 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_kube-apiserver-proxy-ip-10-0-139-215.ec2.internal_0d68c36ed96ea5528325ea66516f8810/haproxy/0.log" Apr 23 18:20:57.862990 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:20:57.862915 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-7przb_51f42b9a-8b48-44d4-b4c8-1ffc6a890c24/node-exporter/0.log" Apr 23 18:20:57.888247 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:20:57.888214 2574 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-monitoring_node-exporter-7przb_51f42b9a-8b48-44d4-b4c8-1ffc6a890c24/kube-rbac-proxy/0.log" Apr 23 18:20:57.911843 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:20:57.911821 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-7przb_51f42b9a-8b48-44d4-b4c8-1ffc6a890c24/init-textfile/0.log" Apr 23 18:20:58.409226 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:20:58.409193 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-5676c8c784-c9r2d_fcf3918d-5f1c-49dc-995d-7e8153dcee95/prometheus-operator/0.log" Apr 23 18:20:58.430008 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:20:58.429982 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-5676c8c784-c9r2d_fcf3918d-5f1c-49dc-995d-7e8153dcee95/kube-rbac-proxy/0.log" Apr 23 18:20:58.460451 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:20:58.460426 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-admission-webhook-57cf98b594-lnsd6_8abaf28c-2dbf-42c9-af60-3679eeb62d64/prometheus-operator-admission-webhook/0.log" Apr 23 18:20:59.995739 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:20:59.995710 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-console_networking-console-plugin-cb95c66f6-qtf6n_21d8a344-f03b-4bf0-845c-dcc9f5fc81fb/networking-console-plugin/0.log" Apr 23 18:21:00.898933 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:00.898898 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_downloads-6bcc868b7-jklss_13a332d6-578a-4838-8bd3-9a2a0eb00e2f/download-server/0.log" Apr 23 18:21:01.084374 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:01.084341 2574 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gtcqp/perf-node-gather-daemonset-gv28x"] Apr 23 18:21:01.084775 ip-10-0-139-215 kubenswrapper[2574]: 
I0423 18:21:01.084671 2574 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ca63d941-854b-4872-a822-312ec779392c" containerName="model-chainer-raw-hpa-d4310" Apr 23 18:21:01.084775 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:01.084682 2574 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca63d941-854b-4872-a822-312ec779392c" containerName="model-chainer-raw-hpa-d4310" Apr 23 18:21:01.084775 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:01.084741 2574 memory_manager.go:356] "RemoveStaleState removing state" podUID="ca63d941-854b-4872-a822-312ec779392c" containerName="model-chainer-raw-hpa-d4310" Apr 23 18:21:01.087679 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:01.087657 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gtcqp/perf-node-gather-daemonset-gv28x" Apr 23 18:21:01.090653 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:01.090617 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-gtcqp\"/\"kube-root-ca.crt\"" Apr 23 18:21:01.091904 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:01.091888 2574 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-gtcqp\"/\"openshift-service-ca.crt\"" Apr 23 18:21:01.091963 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:01.091888 2574 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-gtcqp\"/\"default-dockercfg-qkf26\"" Apr 23 18:21:01.098969 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:01.098947 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gtcqp/perf-node-gather-daemonset-gv28x"] Apr 23 18:21:01.172698 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:01.172596 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: 
\"kubernetes.io/host-path/ffd14ce5-ce37-4156-a4a1-4944de47eb3d-podres\") pod \"perf-node-gather-daemonset-gv28x\" (UID: \"ffd14ce5-ce37-4156-a4a1-4944de47eb3d\") " pod="openshift-must-gather-gtcqp/perf-node-gather-daemonset-gv28x" Apr 23 18:21:01.172848 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:01.172750 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prncf\" (UniqueName: \"kubernetes.io/projected/ffd14ce5-ce37-4156-a4a1-4944de47eb3d-kube-api-access-prncf\") pod \"perf-node-gather-daemonset-gv28x\" (UID: \"ffd14ce5-ce37-4156-a4a1-4944de47eb3d\") " pod="openshift-must-gather-gtcqp/perf-node-gather-daemonset-gv28x" Apr 23 18:21:01.172848 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:01.172785 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffd14ce5-ce37-4156-a4a1-4944de47eb3d-lib-modules\") pod \"perf-node-gather-daemonset-gv28x\" (UID: \"ffd14ce5-ce37-4156-a4a1-4944de47eb3d\") " pod="openshift-must-gather-gtcqp/perf-node-gather-daemonset-gv28x" Apr 23 18:21:01.172848 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:01.172807 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ffd14ce5-ce37-4156-a4a1-4944de47eb3d-sys\") pod \"perf-node-gather-daemonset-gv28x\" (UID: \"ffd14ce5-ce37-4156-a4a1-4944de47eb3d\") " pod="openshift-must-gather-gtcqp/perf-node-gather-daemonset-gv28x" Apr 23 18:21:01.172848 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:01.172835 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/ffd14ce5-ce37-4156-a4a1-4944de47eb3d-proc\") pod \"perf-node-gather-daemonset-gv28x\" (UID: \"ffd14ce5-ce37-4156-a4a1-4944de47eb3d\") " 
pod="openshift-must-gather-gtcqp/perf-node-gather-daemonset-gv28x" Apr 23 18:21:01.273206 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:01.273167 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-prncf\" (UniqueName: \"kubernetes.io/projected/ffd14ce5-ce37-4156-a4a1-4944de47eb3d-kube-api-access-prncf\") pod \"perf-node-gather-daemonset-gv28x\" (UID: \"ffd14ce5-ce37-4156-a4a1-4944de47eb3d\") " pod="openshift-must-gather-gtcqp/perf-node-gather-daemonset-gv28x" Apr 23 18:21:01.273206 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:01.273208 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffd14ce5-ce37-4156-a4a1-4944de47eb3d-lib-modules\") pod \"perf-node-gather-daemonset-gv28x\" (UID: \"ffd14ce5-ce37-4156-a4a1-4944de47eb3d\") " pod="openshift-must-gather-gtcqp/perf-node-gather-daemonset-gv28x" Apr 23 18:21:01.273463 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:01.273228 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ffd14ce5-ce37-4156-a4a1-4944de47eb3d-sys\") pod \"perf-node-gather-daemonset-gv28x\" (UID: \"ffd14ce5-ce37-4156-a4a1-4944de47eb3d\") " pod="openshift-must-gather-gtcqp/perf-node-gather-daemonset-gv28x" Apr 23 18:21:01.273463 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:01.273306 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ffd14ce5-ce37-4156-a4a1-4944de47eb3d-sys\") pod \"perf-node-gather-daemonset-gv28x\" (UID: \"ffd14ce5-ce37-4156-a4a1-4944de47eb3d\") " pod="openshift-must-gather-gtcqp/perf-node-gather-daemonset-gv28x" Apr 23 18:21:01.273463 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:01.273362 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: 
\"kubernetes.io/host-path/ffd14ce5-ce37-4156-a4a1-4944de47eb3d-proc\") pod \"perf-node-gather-daemonset-gv28x\" (UID: \"ffd14ce5-ce37-4156-a4a1-4944de47eb3d\") " pod="openshift-must-gather-gtcqp/perf-node-gather-daemonset-gv28x" Apr 23 18:21:01.273463 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:01.273399 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/ffd14ce5-ce37-4156-a4a1-4944de47eb3d-proc\") pod \"perf-node-gather-daemonset-gv28x\" (UID: \"ffd14ce5-ce37-4156-a4a1-4944de47eb3d\") " pod="openshift-must-gather-gtcqp/perf-node-gather-daemonset-gv28x" Apr 23 18:21:01.273463 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:01.273367 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffd14ce5-ce37-4156-a4a1-4944de47eb3d-lib-modules\") pod \"perf-node-gather-daemonset-gv28x\" (UID: \"ffd14ce5-ce37-4156-a4a1-4944de47eb3d\") " pod="openshift-must-gather-gtcqp/perf-node-gather-daemonset-gv28x" Apr 23 18:21:01.273463 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:01.273417 2574 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/ffd14ce5-ce37-4156-a4a1-4944de47eb3d-podres\") pod \"perf-node-gather-daemonset-gv28x\" (UID: \"ffd14ce5-ce37-4156-a4a1-4944de47eb3d\") " pod="openshift-must-gather-gtcqp/perf-node-gather-daemonset-gv28x" Apr 23 18:21:01.273737 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:01.273532 2574 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/ffd14ce5-ce37-4156-a4a1-4944de47eb3d-podres\") pod \"perf-node-gather-daemonset-gv28x\" (UID: \"ffd14ce5-ce37-4156-a4a1-4944de47eb3d\") " pod="openshift-must-gather-gtcqp/perf-node-gather-daemonset-gv28x" Apr 23 18:21:01.283120 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:01.283099 2574 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-prncf\" (UniqueName: \"kubernetes.io/projected/ffd14ce5-ce37-4156-a4a1-4944de47eb3d-kube-api-access-prncf\") pod \"perf-node-gather-daemonset-gv28x\" (UID: \"ffd14ce5-ce37-4156-a4a1-4944de47eb3d\") " pod="openshift-must-gather-gtcqp/perf-node-gather-daemonset-gv28x" Apr 23 18:21:01.343381 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:01.343351 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_volume-data-source-validator-7c6cbb6c87-btbxj_49ce7885-097a-4c7c-8f10-cb427f7f72c3/volume-data-source-validator/0.log" Apr 23 18:21:01.397323 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:01.397291 2574 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gtcqp/perf-node-gather-daemonset-gv28x" Apr 23 18:21:01.521207 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:01.521155 2574 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gtcqp/perf-node-gather-daemonset-gv28x"] Apr 23 18:21:01.523375 ip-10-0-139-215 kubenswrapper[2574]: W0423 18:21:01.523346 2574 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podffd14ce5_ce37_4156_a4a1_4944de47eb3d.slice/crio-65c114962585ac24107f6e8df85aae2c8df16a5edfff345bafdb5d3dd6b3036b WatchSource:0}: Error finding container 65c114962585ac24107f6e8df85aae2c8df16a5edfff345bafdb5d3dd6b3036b: Status 404 returned error can't find the container with id 65c114962585ac24107f6e8df85aae2c8df16a5edfff345bafdb5d3dd6b3036b Apr 23 18:21:01.524880 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:01.524862 2574 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 23 18:21:02.198154 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:02.198126 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-pnmwc_23665133-39c5-4391-bafe-d17164250221/dns/0.log" Apr 23 
18:21:02.207730 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:02.207699 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gtcqp/perf-node-gather-daemonset-gv28x" event={"ID":"ffd14ce5-ce37-4156-a4a1-4944de47eb3d","Type":"ContainerStarted","Data":"270e1c4b59c0fbc806c8ec9375dacaaaac69ef8eb8a08155edcd3f4eb50b458b"} Apr 23 18:21:02.207730 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:02.207731 2574 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gtcqp/perf-node-gather-daemonset-gv28x" event={"ID":"ffd14ce5-ce37-4156-a4a1-4944de47eb3d","Type":"ContainerStarted","Data":"65c114962585ac24107f6e8df85aae2c8df16a5edfff345bafdb5d3dd6b3036b"} Apr 23 18:21:02.207916 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:02.207868 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-must-gather-gtcqp/perf-node-gather-daemonset-gv28x" Apr 23 18:21:02.220994 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:02.220968 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-pnmwc_23665133-39c5-4391-bafe-d17164250221/kube-rbac-proxy/0.log" Apr 23 18:21:02.226495 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:02.226448 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-gtcqp/perf-node-gather-daemonset-gv28x" podStartSLOduration=1.226431579 podStartE2EDuration="1.226431579s" podCreationTimestamp="2026-04-23 18:21:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 18:21:02.224364095 +0000 UTC m=+2396.646033053" watchObservedRunningTime="2026-04-23 18:21:02.226431579 +0000 UTC m=+2396.648100540" Apr 23 18:21:02.277894 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:02.277869 2574 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-dns_node-resolver-l747m_828447ca-91a9-49c8-a1b8-50a5cfbe0580/dns-node-resolver/0.log" Apr 23 18:21:02.746642 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:02.746595 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_image-registry-59c6488d5c-6pw5f_0775f5a9-0672-43b2-9425-ebc191d0f124/registry/0.log" Apr 23 18:21:02.766039 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:02.766008 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-76svx_59053c21-2759-4fb0-86d0-fd32dd514204/node-ca/0.log" Apr 23 18:21:03.625347 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:03.625315 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-75ddc44-mjcts_c647dab7-a8c4-4b49-ab18-6a3500f88227/router/2.log" Apr 23 18:21:03.629791 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:03.629759 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-75ddc44-mjcts_c647dab7-a8c4-4b49-ab18-6a3500f88227/router/1.log" Apr 23 18:21:04.029851 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:04.029773 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-wfmxn_c0a77136-ccae-4958-8ad5-7373ea79258f/serve-healthcheck-canary/0.log" Apr 23 18:21:04.590937 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:04.590903 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-rwkcd_f5a18479-499c-485f-ba5a-83ecc0d54ca4/kube-rbac-proxy/0.log" Apr 23 18:21:04.614379 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:04.614350 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-rwkcd_f5a18479-499c-485f-ba5a-83ecc0d54ca4/exporter/0.log" Apr 23 18:21:04.637087 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:04.637058 2574 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-insights_insights-runtime-extractor-rwkcd_f5a18479-499c-485f-ba5a-83ecc0d54ca4/extractor/0.log" Apr 23 18:21:06.163933 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:06.163835 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-246wr_0f6164a3-aee1-463f-8c3a-a432711f40db/ovn-acl-logging/0.log" Apr 23 18:21:06.166785 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:06.166766 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-246wr_0f6164a3-aee1-463f-8c3a-a432711f40db/ovn-acl-logging/0.log" Apr 23 18:21:06.667358 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:06.667328 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_kserve-controller-manager-874ff48d-m8gqz_a6fa1922-2eea-45cf-981e-0339309fc2d6/manager/0.log" Apr 23 18:21:06.712796 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:06.712769 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_model-serving-api-86f7b4b499-8n7fr_72a0b2b7-604c-465b-b122-f56cc40b0933/server/0.log" Apr 23 18:21:06.799576 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:06.799545 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_odh-model-controller-696fc77849-lfjdg_2cc98294-11d9-4c7a-83ea-393d4460b0a9/manager/0.log" Apr 23 18:21:08.221354 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:08.221323 2574 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-gtcqp/perf-node-gather-daemonset-gv28x" Apr 23 18:21:11.531412 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:11.531334 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-6769c5d45-hjgnq_8eeed746-7c2a-49ef-98bd-977fa1136b3c/kube-storage-version-migrator-operator/1.log" Apr 23 18:21:11.532288 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:11.532272 2574 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-6769c5d45-hjgnq_8eeed746-7c2a-49ef-98bd-977fa1136b3c/kube-storage-version-migrator-operator/0.log" Apr 23 18:21:12.529981 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:12.529951 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-5h5xl_2e14deef-4985-48d4-a516-5ed2e89733cf/kube-multus-additional-cni-plugins/0.log" Apr 23 18:21:12.553185 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:12.553158 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-5h5xl_2e14deef-4985-48d4-a516-5ed2e89733cf/egress-router-binary-copy/0.log" Apr 23 18:21:12.575890 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:12.575856 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-5h5xl_2e14deef-4985-48d4-a516-5ed2e89733cf/cni-plugins/0.log" Apr 23 18:21:12.597251 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:12.597228 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-5h5xl_2e14deef-4985-48d4-a516-5ed2e89733cf/bond-cni-plugin/0.log" Apr 23 18:21:12.621072 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:12.621002 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-5h5xl_2e14deef-4985-48d4-a516-5ed2e89733cf/routeoverride-cni/0.log" Apr 23 18:21:12.644300 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:12.644283 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-5h5xl_2e14deef-4985-48d4-a516-5ed2e89733cf/whereabouts-cni-bincopy/0.log" Apr 23 18:21:12.672111 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:12.672088 2574 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-5h5xl_2e14deef-4985-48d4-a516-5ed2e89733cf/whereabouts-cni/0.log" Apr 23 18:21:13.081591 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:13.081559 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r2mgw_a29bcdd0-8e46-4bba-9d0f-3db54ee9f75b/kube-multus/0.log" Apr 23 18:21:13.187226 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:13.187198 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-mfhnv_3d52817f-2284-48d3-800c-a67ac0e0fe4b/network-metrics-daemon/0.log" Apr 23 18:21:13.207748 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:13.207723 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-mfhnv_3d52817f-2284-48d3-800c-a67ac0e0fe4b/kube-rbac-proxy/0.log" Apr 23 18:21:14.044473 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:14.044446 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-246wr_0f6164a3-aee1-463f-8c3a-a432711f40db/ovn-controller/0.log" Apr 23 18:21:14.068429 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:14.068405 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-246wr_0f6164a3-aee1-463f-8c3a-a432711f40db/ovn-acl-logging/0.log" Apr 23 18:21:14.082508 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:14.082483 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-246wr_0f6164a3-aee1-463f-8c3a-a432711f40db/ovn-acl-logging/1.log" Apr 23 18:21:14.103621 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:14.103593 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-246wr_0f6164a3-aee1-463f-8c3a-a432711f40db/kube-rbac-proxy-node/0.log" Apr 23 18:21:14.129361 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:14.129332 2574 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-246wr_0f6164a3-aee1-463f-8c3a-a432711f40db/kube-rbac-proxy-ovn-metrics/0.log" Apr 23 18:21:14.149881 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:14.149854 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-246wr_0f6164a3-aee1-463f-8c3a-a432711f40db/northd/0.log" Apr 23 18:21:14.171400 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:14.171376 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-246wr_0f6164a3-aee1-463f-8c3a-a432711f40db/nbdb/0.log" Apr 23 18:21:14.197902 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:14.197875 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-246wr_0f6164a3-aee1-463f-8c3a-a432711f40db/sbdb/0.log" Apr 23 18:21:14.300734 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:14.300606 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-246wr_0f6164a3-aee1-463f-8c3a-a432711f40db/ovnkube-controller/0.log" Apr 23 18:21:16.265480 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:16.265449 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-target-lz78w_c0594da3-a624-4d0d-9765-82537ca166c3/network-check-target-container/0.log" Apr 23 18:21:17.375626 ip-10-0-139-215 kubenswrapper[2574]: I0423 18:21:17.375554 2574 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_iptables-alerter-s97fv_a046af0e-862d-4ab0-abeb-47a68683f10f/iptables-alerter/0.log"