Apr 24 19:06:29.387254 ip-10-0-130-85 systemd[1]: Starting Kubernetes Kubelet...
Apr 24 19:06:29.845528 ip-10-0-130-85 kubenswrapper[2571]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 24 19:06:29.845528 ip-10-0-130-85 kubenswrapper[2571]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Apr 24 19:06:29.845528 ip-10-0-130-85 kubenswrapper[2571]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 24 19:06:29.845528 ip-10-0-130-85 kubenswrapper[2571]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 24 19:06:29.845528 ip-10-0-130-85 kubenswrapper[2571]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
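The deprecation warnings above all point at moving flag settings into the kubelet config file. As a hedged illustration only (this fragment is NOT taken from this node), the flagged options map onto KubeletConfiguration fields roughly as below; the file path matches the `--config` value logged later in this boot, but every value here is an assumption:

```yaml
# Hypothetical fragment of /etc/kubernetes/kubelet.conf -- illustrative values only,
# showing config-file equivalents of the deprecated flags logged above.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: /var/run/crio/crio.sock   # replaces --container-runtime-endpoint
volumePluginDir: /etc/kubernetes/volume-plugins     # replaces --volume-plugin-dir (path assumed)
systemReserved:                                     # replaces --system-reserved (values assumed)
  cpu: 500m
  memory: 1Gi
evictionHard:                                       # --eviction-hard, suggested in place of
  memory.available: 100Mi                           # --minimum-container-ttl-duration
```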
Apr 24 19:06:29.848037 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.847875 2571 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 24 19:06:29.852112 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852097 2571 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 24 19:06:29.852112 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852112 2571 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 24 19:06:29.852176 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852117 2571 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 24 19:06:29.852176 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852120 2571 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 24 19:06:29.852176 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852123 2571 feature_gate.go:328] unrecognized feature gate: Example
Apr 24 19:06:29.852176 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852126 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 24 19:06:29.852176 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852129 2571 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 24 19:06:29.852176 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852132 2571 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 24 19:06:29.852176 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852134 2571 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 24 19:06:29.852176 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852141 2571 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 24 19:06:29.852176 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852144 2571 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 24 19:06:29.852176 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852147 2571 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 24 19:06:29.852176 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852150 2571 feature_gate.go:328] unrecognized feature gate: Example2
Apr 24 19:06:29.852176 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852153 2571 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 24 19:06:29.852176 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852156 2571 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 24 19:06:29.852176 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852159 2571 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 24 19:06:29.852176 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852161 2571 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 24 19:06:29.852176 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852164 2571 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 24 19:06:29.852176 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852167 2571 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 24 19:06:29.852176 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852170 2571 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 24 19:06:29.852176 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852173 2571 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 24 19:06:29.852176 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852175 2571 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 24 19:06:29.852652 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852178 2571 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 24 19:06:29.852652 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852181 2571 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 24 19:06:29.852652 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852184 2571 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 24 19:06:29.852652 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852187 2571 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 24 19:06:29.852652 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852190 2571 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 24 19:06:29.852652 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852193 2571 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 24 19:06:29.852652 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852196 2571 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 24 19:06:29.852652 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852198 2571 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 24 19:06:29.852652 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852201 2571 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 24 19:06:29.852652 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852203 2571 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 24 19:06:29.852652 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852206 2571 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 24 19:06:29.852652 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852209 2571 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 24 19:06:29.852652 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852212 2571 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 24 19:06:29.852652 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852214 2571 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 24 19:06:29.852652 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852217 2571 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 24 19:06:29.852652 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852219 2571 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 24 19:06:29.852652 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852222 2571 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 24 19:06:29.852652 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852224 2571 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 24 19:06:29.852652 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852227 2571 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 24 19:06:29.852652 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852230 2571 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 24 19:06:29.853154 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852232 2571 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 24 19:06:29.853154 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852235 2571 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 24 19:06:29.853154 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852237 2571 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 24 19:06:29.853154 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852240 2571 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 24 19:06:29.853154 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852242 2571 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 24 19:06:29.853154 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852245 2571 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 24 19:06:29.853154 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852247 2571 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 24 19:06:29.853154 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852250 2571 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 24 19:06:29.853154 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852253 2571 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 24 19:06:29.853154 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852255 2571 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 24 19:06:29.853154 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852259 2571 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 24 19:06:29.853154 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852264 2571 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 24 19:06:29.853154 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852268 2571 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 24 19:06:29.853154 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852271 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 24 19:06:29.853154 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852275 2571 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 24 19:06:29.853154 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852278 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 24 19:06:29.853154 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852281 2571 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 24 19:06:29.853154 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852284 2571 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 24 19:06:29.853154 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852286 2571 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 24 19:06:29.853613 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852289 2571 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 24 19:06:29.853613 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852291 2571 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 24 19:06:29.853613 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852294 2571 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 24 19:06:29.853613 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852297 2571 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 24 19:06:29.853613 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852299 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 24 19:06:29.853613 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852302 2571 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 24 19:06:29.853613 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852305 2571 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 24 19:06:29.853613 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852308 2571 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 24 19:06:29.853613 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852313 2571 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 24 19:06:29.853613 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852316 2571 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 24 19:06:29.853613 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852319 2571 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 24 19:06:29.853613 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852322 2571 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 24 19:06:29.853613 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852324 2571 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 24 19:06:29.853613 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852327 2571 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 24 19:06:29.853613 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852330 2571 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 24 19:06:29.853613 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852333 2571 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 24 19:06:29.853613 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852335 2571 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 24 19:06:29.853613 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852338 2571 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 24 19:06:29.853613 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852340 2571 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 24 19:06:29.854113 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852343 2571 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 24 19:06:29.854113 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852345 2571 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 24 19:06:29.854113 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852348 2571 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 24 19:06:29.854113 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852351 2571 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 24 19:06:29.854113 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852354 2571 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 24 19:06:29.854113 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.852357 2571 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 24 19:06:29.854113 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.853961 2571 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 24 19:06:29.854113 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.853969 2571 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 24 19:06:29.854113 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.853972 2571 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 24 19:06:29.854113 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.853975 2571 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 24 19:06:29.854113 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.853979 2571 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 24 19:06:29.854113 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.853982 2571 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 24 19:06:29.854113 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.853984 2571 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 24 19:06:29.854113 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.853987 2571 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 24 19:06:29.854113 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.853990 2571 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 24 19:06:29.854113 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.853993 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 24 19:06:29.854113 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.853996 2571 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 24 19:06:29.854113 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.853999 2571 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 24 19:06:29.854113 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854002 2571 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 24 19:06:29.854582 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854005 2571 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 24 19:06:29.854582 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854007 2571 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 24 19:06:29.854582 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854010 2571 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 24 19:06:29.854582 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854012 2571 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 24 19:06:29.854582 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854015 2571 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 24 19:06:29.854582 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854017 2571 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 24 19:06:29.854582 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854020 2571 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 24 19:06:29.854582 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854023 2571 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 24 19:06:29.854582 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854026 2571 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 24 19:06:29.854582 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854030 2571 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 24 19:06:29.854582 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854033 2571 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 24 19:06:29.854582 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854036 2571 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 24 19:06:29.854582 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854039 2571 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 24 19:06:29.854582 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854043 2571 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 24 19:06:29.854582 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854046 2571 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 24 19:06:29.854582 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854050 2571 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 24 19:06:29.854582 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854054 2571 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 24 19:06:29.854582 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854057 2571 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 24 19:06:29.854582 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854060 2571 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 24 19:06:29.855068 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854063 2571 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 24 19:06:29.855068 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854066 2571 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 24 19:06:29.855068 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854068 2571 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 24 19:06:29.855068 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854071 2571 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 24 19:06:29.855068 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854074 2571 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 24 19:06:29.855068 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854077 2571 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 24 19:06:29.855068 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854079 2571 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 24 19:06:29.855068 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854082 2571 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 24 19:06:29.855068 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854084 2571 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 24 19:06:29.855068 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854087 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 24 19:06:29.855068 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854089 2571 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 24 19:06:29.855068 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854092 2571 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 24 19:06:29.855068 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854095 2571 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 24 19:06:29.855068 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854098 2571 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 24 19:06:29.855068 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854101 2571 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 24 19:06:29.855068 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854103 2571 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 24 19:06:29.855068 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854105 2571 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 24 19:06:29.855068 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854108 2571 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 24 19:06:29.855068 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854111 2571 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 24 19:06:29.855068 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854113 2571 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 24 19:06:29.855552 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854115 2571 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 24 19:06:29.855552 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854118 2571 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 24 19:06:29.855552 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854120 2571 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 24 19:06:29.855552 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854123 2571 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 24 19:06:29.855552 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854126 2571 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 24 19:06:29.855552 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854128 2571 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 24 19:06:29.855552 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854131 2571 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 24 19:06:29.855552 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854134 2571 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 24 19:06:29.855552 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854136 2571 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 24 19:06:29.855552 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854139 2571 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 24 19:06:29.855552 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854142 2571 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 24 19:06:29.855552 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854144 2571 feature_gate.go:328] unrecognized feature gate: Example2
Apr 24 19:06:29.855552 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854147 2571 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 24 19:06:29.855552 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854149 2571 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 24 19:06:29.855552 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854152 2571 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 24 19:06:29.855552 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854154 2571 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 24 19:06:29.855552 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854157 2571 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 24 19:06:29.855552 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854159 2571 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 24 19:06:29.855552 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854162 2571 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 24 19:06:29.855552 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854164 2571 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 24 19:06:29.856067 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854167 2571 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 24 19:06:29.856067 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854170 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 24 19:06:29.856067 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854172 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 24 19:06:29.856067 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854175 2571 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 24 19:06:29.856067 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854177 2571 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 24 19:06:29.856067 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854180 2571 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 24 19:06:29.856067 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854182 2571 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 24 19:06:29.856067 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854185 2571 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 24 19:06:29.856067 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854188 2571 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 24 19:06:29.856067 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854191 2571 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 24 19:06:29.856067 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854193 2571 feature_gate.go:328] unrecognized feature gate: Example
Apr 24 19:06:29.856067 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854197 2571 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 24 19:06:29.856067 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854199 2571 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 24 19:06:29.856067 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854202 2571 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 24 19:06:29.856067 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854268 2571 flags.go:64] FLAG: --address="0.0.0.0"
Apr 24 19:06:29.856067 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854277 2571 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Apr 24 19:06:29.856067 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854283 2571 flags.go:64] FLAG: --anonymous-auth="true"
Apr 24 19:06:29.856067 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854287 2571 flags.go:64] FLAG: --application-metrics-count-limit="100"
Apr 24 19:06:29.856067 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854292 2571 flags.go:64] FLAG: --authentication-token-webhook="false"
Apr 24 19:06:29.856067 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854295 2571 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Apr 24 19:06:29.856067 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854300 2571 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Apr 24 19:06:29.856578 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854304 2571 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Apr 24 19:06:29.856578 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854308 2571 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Apr 24 19:06:29.856578 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854310 2571 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Apr 24 19:06:29.856578 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854314 2571 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Apr 24 19:06:29.856578 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854317 2571 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Apr 24 19:06:29.856578 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854320 2571 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Apr 24 19:06:29.856578 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854324 2571 flags.go:64] FLAG: --cgroup-root=""
Apr 24 19:06:29.856578 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854326 2571 flags.go:64] FLAG: --cgroups-per-qos="true"
Apr 24 19:06:29.856578 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854329 2571 flags.go:64] FLAG: --client-ca-file=""
Apr 24 19:06:29.856578 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854332 2571 flags.go:64] FLAG: --cloud-config=""
Apr 24 19:06:29.856578 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854335 2571 flags.go:64] FLAG: --cloud-provider="external"
Apr 24 19:06:29.856578 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854339 2571 flags.go:64] FLAG: --cluster-dns="[]"
Apr 24 19:06:29.856578 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854344 2571 flags.go:64] FLAG: --cluster-domain=""
Apr 24 19:06:29.856578 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854347 2571 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Apr 24 19:06:29.856578 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854350 2571 flags.go:64] FLAG: --config-dir=""
Apr 24 19:06:29.856578 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854353 2571 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Apr 24 19:06:29.856578 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854357 2571 flags.go:64] FLAG: --container-log-max-files="5"
Apr 24 19:06:29.856578 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854361 2571 flags.go:64] FLAG: --container-log-max-size="10Mi"
Apr 24 19:06:29.856578 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854364 2571 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Apr 24 19:06:29.856578 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854367 2571 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Apr 24 19:06:29.856578 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854370 2571 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Apr 24 19:06:29.856578 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854374 2571 flags.go:64] FLAG: --contention-profiling="false"
Apr 24 19:06:29.856578 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854377 2571 flags.go:64] FLAG: --cpu-cfs-quota="true"
Apr 24 19:06:29.856578 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854380 2571 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Apr 24 19:06:29.857163 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854383 2571 flags.go:64] FLAG: --cpu-manager-policy="none"
Apr 24 19:06:29.857163 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854386 2571 flags.go:64] FLAG: --cpu-manager-policy-options=""
Apr 24 19:06:29.857163 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854390 2571 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Apr 24 19:06:29.857163 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854394 2571 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Apr 24 19:06:29.857163 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854396 2571 flags.go:64] FLAG: --enable-debugging-handlers="true"
Apr 24 19:06:29.857163 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854399 2571 flags.go:64] FLAG: --enable-load-reader="false"
Apr 24 19:06:29.857163 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854403 2571 flags.go:64] FLAG: --enable-server="true"
Apr 24 19:06:29.857163 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854406 2571 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Apr 24 19:06:29.857163 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854410 2571 flags.go:64] FLAG: --event-burst="100"
Apr 24 19:06:29.857163 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854414 2571 flags.go:64] FLAG: --event-qps="50"
Apr 24 19:06:29.857163 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854417 2571 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Apr 24 19:06:29.857163 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854420 2571 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Apr 24 19:06:29.857163 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854423 2571 flags.go:64] FLAG: --eviction-hard=""
Apr 24 19:06:29.857163 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854427 2571 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Apr 24 19:06:29.857163 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854430 2571 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Apr 24 19:06:29.857163 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854433 2571 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Apr 24 19:06:29.857163 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854436 2571 flags.go:64] FLAG: --eviction-soft=""
Apr 24 19:06:29.857163 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854439 2571 flags.go:64] FLAG: --eviction-soft-grace-period=""
Apr 24 19:06:29.857163 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854442 2571 flags.go:64] FLAG: --exit-on-lock-contention="false"
Apr 24 19:06:29.857163 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854445 2571 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Apr 24 19:06:29.857163 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854448 2571 flags.go:64] FLAG: --experimental-mounter-path=""
Apr 24 19:06:29.857163 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854451 2571 flags.go:64] FLAG: --fail-cgroupv1="false"
Apr 24 19:06:29.857163 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854454 2571 flags.go:64] FLAG: --fail-swap-on="true"
Apr 24 19:06:29.857163 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854457 2571 flags.go:64] FLAG: --feature-gates=""
Apr 24 19:06:29.857163 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854460 2571 flags.go:64] FLAG: --file-check-frequency="20s"
Apr 24 
19:06:29.857769 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854464 2571 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Apr 24 19:06:29.857769 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854467 2571 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Apr 24 19:06:29.857769 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854470 2571 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Apr 24 19:06:29.857769 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854474 2571 flags.go:64] FLAG: --healthz-port="10248" Apr 24 19:06:29.857769 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854477 2571 flags.go:64] FLAG: --help="false" Apr 24 19:06:29.857769 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854480 2571 flags.go:64] FLAG: --hostname-override="ip-10-0-130-85.ec2.internal" Apr 24 19:06:29.857769 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854483 2571 flags.go:64] FLAG: --housekeeping-interval="10s" Apr 24 19:06:29.857769 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854486 2571 flags.go:64] FLAG: --http-check-frequency="20s" Apr 24 19:06:29.857769 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854489 2571 flags.go:64] FLAG: --image-credential-provider-bin-dir="/usr/libexec/kubelet-image-credential-provider-plugins" Apr 24 19:06:29.857769 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854493 2571 flags.go:64] FLAG: --image-credential-provider-config="/etc/kubernetes/credential-providers/ecr-credential-provider.yaml" Apr 24 19:06:29.857769 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854496 2571 flags.go:64] FLAG: --image-gc-high-threshold="85" Apr 24 19:06:29.857769 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854499 2571 flags.go:64] FLAG: --image-gc-low-threshold="80" Apr 24 19:06:29.857769 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854502 2571 flags.go:64] FLAG: --image-service-endpoint="" Apr 24 19:06:29.857769 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854504 2571 flags.go:64] 
FLAG: --kernel-memcg-notification="false" Apr 24 19:06:29.857769 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854508 2571 flags.go:64] FLAG: --kube-api-burst="100" Apr 24 19:06:29.857769 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854515 2571 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Apr 24 19:06:29.857769 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854518 2571 flags.go:64] FLAG: --kube-api-qps="50" Apr 24 19:06:29.857769 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854521 2571 flags.go:64] FLAG: --kube-reserved="" Apr 24 19:06:29.857769 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854524 2571 flags.go:64] FLAG: --kube-reserved-cgroup="" Apr 24 19:06:29.857769 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854527 2571 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Apr 24 19:06:29.857769 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854530 2571 flags.go:64] FLAG: --kubelet-cgroups="" Apr 24 19:06:29.857769 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854533 2571 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Apr 24 19:06:29.857769 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854536 2571 flags.go:64] FLAG: --lock-file="" Apr 24 19:06:29.857769 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854539 2571 flags.go:64] FLAG: --log-cadvisor-usage="false" Apr 24 19:06:29.858362 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854542 2571 flags.go:64] FLAG: --log-flush-frequency="5s" Apr 24 19:06:29.858362 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854544 2571 flags.go:64] FLAG: --log-json-info-buffer-size="0" Apr 24 19:06:29.858362 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854550 2571 flags.go:64] FLAG: --log-json-split-stream="false" Apr 24 19:06:29.858362 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854553 2571 flags.go:64] FLAG: --log-text-info-buffer-size="0" Apr 24 19:06:29.858362 ip-10-0-130-85 kubenswrapper[2571]: I0424 
19:06:29.854556 2571 flags.go:64] FLAG: --log-text-split-stream="false" Apr 24 19:06:29.858362 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854559 2571 flags.go:64] FLAG: --logging-format="text" Apr 24 19:06:29.858362 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854562 2571 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Apr 24 19:06:29.858362 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854565 2571 flags.go:64] FLAG: --make-iptables-util-chains="true" Apr 24 19:06:29.858362 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854568 2571 flags.go:64] FLAG: --manifest-url="" Apr 24 19:06:29.858362 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854571 2571 flags.go:64] FLAG: --manifest-url-header="" Apr 24 19:06:29.858362 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854576 2571 flags.go:64] FLAG: --max-housekeeping-interval="15s" Apr 24 19:06:29.858362 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854579 2571 flags.go:64] FLAG: --max-open-files="1000000" Apr 24 19:06:29.858362 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854584 2571 flags.go:64] FLAG: --max-pods="110" Apr 24 19:06:29.858362 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854587 2571 flags.go:64] FLAG: --maximum-dead-containers="-1" Apr 24 19:06:29.858362 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854590 2571 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Apr 24 19:06:29.858362 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854593 2571 flags.go:64] FLAG: --memory-manager-policy="None" Apr 24 19:06:29.858362 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854596 2571 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Apr 24 19:06:29.858362 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854599 2571 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Apr 24 19:06:29.858362 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854602 2571 flags.go:64] FLAG: --node-ip="0.0.0.0" Apr 24 
19:06:29.858362 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854605 2571 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhel" Apr 24 19:06:29.858362 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854612 2571 flags.go:64] FLAG: --node-status-max-images="50" Apr 24 19:06:29.858362 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854615 2571 flags.go:64] FLAG: --node-status-update-frequency="10s" Apr 24 19:06:29.858362 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854618 2571 flags.go:64] FLAG: --oom-score-adj="-999" Apr 24 19:06:29.858362 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854622 2571 flags.go:64] FLAG: --pod-cidr="" Apr 24 19:06:29.858984 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854625 2571 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8cfe89231412ff3ee8cb6207fa0be33cad0f08e88c9c0f1e9f7e8c6f14d6715" Apr 24 19:06:29.858984 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854631 2571 flags.go:64] FLAG: --pod-manifest-path="" Apr 24 19:06:29.858984 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854635 2571 flags.go:64] FLAG: --pod-max-pids="-1" Apr 24 19:06:29.858984 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854638 2571 flags.go:64] FLAG: --pods-per-core="0" Apr 24 19:06:29.858984 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854641 2571 flags.go:64] FLAG: --port="10250" Apr 24 19:06:29.858984 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854644 2571 flags.go:64] FLAG: --protect-kernel-defaults="false" Apr 24 19:06:29.858984 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854647 2571 flags.go:64] FLAG: --provider-id="aws:///us-east-1a/i-0ffd2cbcc704d0358" Apr 24 19:06:29.858984 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854650 2571 flags.go:64] FLAG: --qos-reserved="" Apr 24 19:06:29.858984 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854653 2571 flags.go:64] FLAG: 
--read-only-port="10255" Apr 24 19:06:29.858984 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854656 2571 flags.go:64] FLAG: --register-node="true" Apr 24 19:06:29.858984 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854659 2571 flags.go:64] FLAG: --register-schedulable="true" Apr 24 19:06:29.858984 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854662 2571 flags.go:64] FLAG: --register-with-taints="" Apr 24 19:06:29.858984 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854666 2571 flags.go:64] FLAG: --registry-burst="10" Apr 24 19:06:29.858984 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854668 2571 flags.go:64] FLAG: --registry-qps="5" Apr 24 19:06:29.858984 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854674 2571 flags.go:64] FLAG: --reserved-cpus="" Apr 24 19:06:29.858984 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854677 2571 flags.go:64] FLAG: --reserved-memory="" Apr 24 19:06:29.858984 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854680 2571 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Apr 24 19:06:29.858984 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854683 2571 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Apr 24 19:06:29.858984 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854686 2571 flags.go:64] FLAG: --rotate-certificates="false" Apr 24 19:06:29.858984 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854690 2571 flags.go:64] FLAG: --rotate-server-certificates="false" Apr 24 19:06:29.858984 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854693 2571 flags.go:64] FLAG: --runonce="false" Apr 24 19:06:29.858984 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854696 2571 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Apr 24 19:06:29.858984 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854699 2571 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Apr 24 19:06:29.858984 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854702 2571 flags.go:64] FLAG: 
--seccomp-default="false" Apr 24 19:06:29.858984 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854705 2571 flags.go:64] FLAG: --serialize-image-pulls="true" Apr 24 19:06:29.859586 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854708 2571 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Apr 24 19:06:29.859586 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854711 2571 flags.go:64] FLAG: --storage-driver-db="cadvisor" Apr 24 19:06:29.859586 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854714 2571 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Apr 24 19:06:29.859586 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854717 2571 flags.go:64] FLAG: --storage-driver-password="root" Apr 24 19:06:29.859586 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854720 2571 flags.go:64] FLAG: --storage-driver-secure="false" Apr 24 19:06:29.859586 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854723 2571 flags.go:64] FLAG: --storage-driver-table="stats" Apr 24 19:06:29.859586 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854726 2571 flags.go:64] FLAG: --storage-driver-user="root" Apr 24 19:06:29.859586 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854730 2571 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Apr 24 19:06:29.859586 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854733 2571 flags.go:64] FLAG: --sync-frequency="1m0s" Apr 24 19:06:29.859586 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854737 2571 flags.go:64] FLAG: --system-cgroups="" Apr 24 19:06:29.859586 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854740 2571 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Apr 24 19:06:29.859586 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854745 2571 flags.go:64] FLAG: --system-reserved-cgroup="" Apr 24 19:06:29.859586 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854748 2571 flags.go:64] FLAG: --tls-cert-file="" Apr 24 19:06:29.859586 ip-10-0-130-85 
kubenswrapper[2571]: I0424 19:06:29.854751 2571 flags.go:64] FLAG: --tls-cipher-suites="[]" Apr 24 19:06:29.859586 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854756 2571 flags.go:64] FLAG: --tls-min-version="" Apr 24 19:06:29.859586 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854758 2571 flags.go:64] FLAG: --tls-private-key-file="" Apr 24 19:06:29.859586 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854761 2571 flags.go:64] FLAG: --topology-manager-policy="none" Apr 24 19:06:29.859586 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854764 2571 flags.go:64] FLAG: --topology-manager-policy-options="" Apr 24 19:06:29.859586 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854767 2571 flags.go:64] FLAG: --topology-manager-scope="container" Apr 24 19:06:29.859586 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854770 2571 flags.go:64] FLAG: --v="2" Apr 24 19:06:29.859586 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854774 2571 flags.go:64] FLAG: --version="false" Apr 24 19:06:29.859586 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854779 2571 flags.go:64] FLAG: --vmodule="" Apr 24 19:06:29.859586 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854784 2571 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Apr 24 19:06:29.859586 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.854787 2571 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Apr 24 19:06:29.859586 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854884 2571 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 24 19:06:29.860204 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854887 2571 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 24 19:06:29.860204 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854891 2571 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 24 19:06:29.860204 ip-10-0-130-85 kubenswrapper[2571]: W0424 
19:06:29.854894 2571 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 24 19:06:29.860204 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854897 2571 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 24 19:06:29.860204 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854900 2571 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 24 19:06:29.860204 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854917 2571 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 24 19:06:29.860204 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854920 2571 feature_gate.go:328] unrecognized feature gate: Example Apr 24 19:06:29.860204 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854923 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 24 19:06:29.860204 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854926 2571 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 24 19:06:29.860204 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854928 2571 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 24 19:06:29.860204 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854931 2571 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 24 19:06:29.860204 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854934 2571 feature_gate.go:328] unrecognized feature gate: Example2 Apr 24 19:06:29.860204 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854936 2571 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 24 19:06:29.860204 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854939 2571 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 24 19:06:29.860204 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854943 2571 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 24 
19:06:29.860204 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854945 2571 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 24 19:06:29.860204 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854948 2571 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 24 19:06:29.860204 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854950 2571 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 24 19:06:29.860204 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854953 2571 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 24 19:06:29.860736 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854956 2571 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 24 19:06:29.860736 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854959 2571 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 24 19:06:29.860736 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854961 2571 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 24 19:06:29.860736 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854964 2571 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 24 19:06:29.860736 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854966 2571 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 24 19:06:29.860736 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854969 2571 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 24 19:06:29.860736 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854972 2571 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 24 19:06:29.860736 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854975 2571 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 24 19:06:29.860736 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854977 2571 feature_gate.go:328] unrecognized feature gate: 
BootcNodeManagement Apr 24 19:06:29.860736 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854981 2571 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 24 19:06:29.860736 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854984 2571 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 24 19:06:29.860736 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854986 2571 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 24 19:06:29.860736 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854992 2571 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 24 19:06:29.860736 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854995 2571 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 24 19:06:29.860736 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.854998 2571 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 24 19:06:29.860736 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855000 2571 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 24 19:06:29.860736 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855003 2571 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 24 19:06:29.860736 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855006 2571 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 24 19:06:29.860736 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855009 2571 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Apr 24 19:06:29.860736 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855013 2571 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 24 19:06:29.861306 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855016 2571 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 24 19:06:29.861306 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855019 2571 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 24 19:06:29.861306 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855021 2571 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 24 19:06:29.861306 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855024 2571 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 24 19:06:29.861306 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855026 2571 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 24 19:06:29.861306 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855029 2571 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 24 19:06:29.861306 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855031 2571 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 24 19:06:29.861306 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855035 2571 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 24 19:06:29.861306 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855038 2571 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 24 19:06:29.861306 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855040 2571 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 24 19:06:29.861306 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855042 2571 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 24 19:06:29.861306 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855045 2571 feature_gate.go:328] unrecognized feature gate: 
NoRegistryClusterOperations Apr 24 19:06:29.861306 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855048 2571 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 24 19:06:29.861306 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855050 2571 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 24 19:06:29.861306 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855053 2571 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 24 19:06:29.861306 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855055 2571 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 24 19:06:29.861306 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855058 2571 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 24 19:06:29.861306 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855060 2571 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 24 19:06:29.861306 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855063 2571 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 24 19:06:29.861306 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855065 2571 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 24 19:06:29.861826 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855068 2571 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 24 19:06:29.861826 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855072 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 24 19:06:29.861826 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855075 2571 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 24 19:06:29.861826 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855078 2571 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 24 19:06:29.861826 ip-10-0-130-85 kubenswrapper[2571]: W0424 
19:06:29.855080 2571 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 24 19:06:29.861826 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855083 2571 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 24 19:06:29.861826 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855085 2571 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 24 19:06:29.861826 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855088 2571 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 24 19:06:29.861826 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855091 2571 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 24 19:06:29.861826 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855094 2571 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 24 19:06:29.861826 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855096 2571 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 24 19:06:29.861826 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855099 2571 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 24 19:06:29.861826 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855102 2571 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 24 19:06:29.861826 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855104 2571 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 24 19:06:29.861826 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855107 2571 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 24 19:06:29.861826 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855109 2571 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 24 19:06:29.861826 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855112 2571 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 24 19:06:29.861826 ip-10-0-130-85 kubenswrapper[2571]: W0424 
19:06:29.855115 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 24 19:06:29.861826 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855117 2571 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 24 19:06:29.861826 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855122 2571 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Apr 24 19:06:29.862667 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855126 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 24 19:06:29.862667 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855128 2571 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 24 19:06:29.862667 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855131 2571 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 24 19:06:29.862667 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855134 2571 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 24 19:06:29.862667 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855136 2571 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 24 19:06:29.862667 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.855139 2571 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 24 19:06:29.862667 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.856161 2571 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true 
VolumeAttributesClass:false]}
Apr 24 19:06:29.864793 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.864768 2571 server.go:530] "Kubelet version" kubeletVersion="v1.33.9"
Apr 24 19:06:29.864793 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.864791 2571 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 24 19:06:29.864945 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.864843 2571 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 24 19:06:29.864945 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.864848 2571 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 24 19:06:29.864945 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.864851 2571 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 24 19:06:29.864945 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.864854 2571 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 24 19:06:29.864945 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.864857 2571 feature_gate.go:328] unrecognized feature gate: Example2
Apr 24 19:06:29.864945 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.864860 2571 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 24 19:06:29.864945 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.864863 2571 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 24 19:06:29.864945 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.864866 2571 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 24 19:06:29.864945 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.864869 2571 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 24 19:06:29.864945 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.864872 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 24 19:06:29.864945 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.864874 2571 feature_gate.go:328] unrecognized feature gate: Example
Apr 24 19:06:29.864945 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.864877 2571 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 24 19:06:29.864945 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.864880 2571 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 24 19:06:29.864945 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.864882 2571 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 24 19:06:29.864945 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.864885 2571 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 24 19:06:29.864945 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.864888 2571 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 24 19:06:29.864945 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.864890 2571 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 24 19:06:29.864945 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.864893 2571 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 24 19:06:29.864945 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.864897 2571 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 24 19:06:29.865423 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.864978 2571 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 24 19:06:29.865423 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.864983 2571 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 24 19:06:29.865423 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.864986 2571 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 24 19:06:29.865423 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.864989 2571 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 24 19:06:29.865423 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.864993 2571 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 24 19:06:29.865423 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.864996 2571 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 24 19:06:29.865423 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.864998 2571 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 24 19:06:29.865423 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865001 2571 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 24 19:06:29.865423 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865004 2571 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 24 19:06:29.865423 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865007 2571 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 24 19:06:29.865423 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865009 2571 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 24 19:06:29.865423 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865012 2571 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 24 19:06:29.865423 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865015 2571 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 24 19:06:29.865423 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865017 2571 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 24 19:06:29.865423 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865020 2571 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 24 19:06:29.865423 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865023 2571 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 24 19:06:29.865423 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865025 2571 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 24 19:06:29.865423 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865028 2571 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 24 19:06:29.865423 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865031 2571 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 24 19:06:29.865423 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865033 2571 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 24 19:06:29.865932 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865036 2571 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 24 19:06:29.865932 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865038 2571 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 24 19:06:29.865932 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865041 2571 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 24 19:06:29.865932 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865043 2571 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 24 19:06:29.865932 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865045 2571 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 24 19:06:29.865932 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865048 2571 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 24 19:06:29.865932 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865050 2571 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 24 19:06:29.865932 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865053 2571 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 24 19:06:29.865932 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865055 2571 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 24 19:06:29.865932 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865058 2571 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 24 19:06:29.865932 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865060 2571 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 24 19:06:29.865932 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865063 2571 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 24 19:06:29.865932 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865065 2571 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 24 19:06:29.865932 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865068 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 24 19:06:29.865932 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865071 2571 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 24 19:06:29.865932 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865073 2571 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 24 19:06:29.865932 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865076 2571 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 24 19:06:29.865932 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865078 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 24 19:06:29.865932 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865081 2571 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 24 19:06:29.865932 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865083 2571 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 24 19:06:29.866427 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865086 2571 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 24 19:06:29.866427 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865088 2571 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 24 19:06:29.866427 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865091 2571 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 24 19:06:29.866427 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865093 2571 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 24 19:06:29.866427 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865096 2571 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 24 19:06:29.866427 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865098 2571 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 24 19:06:29.866427 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865101 2571 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 24 19:06:29.866427 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865103 2571 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 24 19:06:29.866427 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865107 2571 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 24 19:06:29.866427 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865110 2571 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 24 19:06:29.866427 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865113 2571 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 24 19:06:29.866427 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865115 2571 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 24 19:06:29.866427 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865118 2571 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 24 19:06:29.866427 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865121 2571 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 24 19:06:29.866427 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865123 2571 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 24 19:06:29.866427 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865126 2571 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 24 19:06:29.866427 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865128 2571 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 24 19:06:29.866427 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865132 2571 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 24 19:06:29.866427 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865134 2571 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 24 19:06:29.866881 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865137 2571 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 24 19:06:29.866881 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865139 2571 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 24 19:06:29.866881 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865142 2571 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 24 19:06:29.866881 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865144 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 24 19:06:29.866881 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865147 2571 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 24 19:06:29.866881 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865150 2571 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 24 19:06:29.866881 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865153 2571 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 24 19:06:29.866881 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865156 2571 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 24 19:06:29.866881 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.865161 2571 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Apr 24 19:06:29.866881 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865259 2571 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 24 19:06:29.866881 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865265 2571 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 24 19:06:29.866881 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865267 2571 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 24 19:06:29.866881 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865271 2571 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 24 19:06:29.866881 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865273 2571 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 24 19:06:29.866881 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865276 2571 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 24 19:06:29.867272 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865279 2571 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 24 19:06:29.867272 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865282 2571 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 24 19:06:29.867272 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865284 2571 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 24 19:06:29.867272 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865287 2571 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 24 19:06:29.867272 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865289 2571 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 24 19:06:29.867272 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865292 2571 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 24 19:06:29.867272 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865296 2571 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 24 19:06:29.867272 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865300 2571 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 24 19:06:29.867272 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865302 2571 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 24 19:06:29.867272 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865305 2571 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 24 19:06:29.867272 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865307 2571 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 24 19:06:29.867272 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865310 2571 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 24 19:06:29.867272 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865312 2571 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 24 19:06:29.867272 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865315 2571 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 24 19:06:29.867272 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865317 2571 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 24 19:06:29.867272 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865320 2571 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 24 19:06:29.867272 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865324 2571 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 24 19:06:29.867272 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865326 2571 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 24 19:06:29.867272 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865329 2571 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 24 19:06:29.867272 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865331 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 24 19:06:29.867773 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865334 2571 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 24 19:06:29.867773 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865336 2571 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 24 19:06:29.867773 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865339 2571 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 24 19:06:29.867773 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865342 2571 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 24 19:06:29.867773 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865344 2571 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 24 19:06:29.867773 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865347 2571 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 24 19:06:29.867773 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865349 2571 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 24 19:06:29.867773 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865353 2571 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 24 19:06:29.867773 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865355 2571 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 24 19:06:29.867773 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865358 2571 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 24 19:06:29.867773 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865361 2571 feature_gate.go:328] unrecognized feature gate: Example
Apr 24 19:06:29.867773 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865363 2571 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 24 19:06:29.867773 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865365 2571 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 24 19:06:29.867773 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865368 2571 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 24 19:06:29.867773 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865371 2571 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 24 19:06:29.867773 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865373 2571 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 24 19:06:29.867773 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865375 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 24 19:06:29.867773 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865378 2571 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 24 19:06:29.867773 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865380 2571 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 24 19:06:29.867773 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865383 2571 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 24 19:06:29.868284 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865385 2571 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 24 19:06:29.868284 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865388 2571 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 24 19:06:29.868284 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865390 2571 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 24 19:06:29.868284 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865393 2571 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 24 19:06:29.868284 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865395 2571 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 24 19:06:29.868284 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865398 2571 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 24 19:06:29.868284 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865400 2571 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 24 19:06:29.868284 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865403 2571 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 24 19:06:29.868284 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865405 2571 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 24 19:06:29.868284 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865408 2571 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 24 19:06:29.868284 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865411 2571 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 24 19:06:29.868284 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865413 2571 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 24 19:06:29.868284 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865416 2571 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 24 19:06:29.868284 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865419 2571 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 24 19:06:29.868284 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865421 2571 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 24 19:06:29.868284 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865424 2571 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 24 19:06:29.868284 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865427 2571 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 24 19:06:29.868284 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865430 2571 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 24 19:06:29.868284 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865433 2571 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 24 19:06:29.868741 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865437 2571 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 24 19:06:29.868741 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865441 2571 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 24 19:06:29.868741 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865444 2571 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 24 19:06:29.868741 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865447 2571 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 24 19:06:29.868741 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865449 2571 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 24 19:06:29.868741 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865452 2571 feature_gate.go:328] unrecognized feature gate: Example2
Apr 24 19:06:29.868741 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865454 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 24 19:06:29.868741 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865457 2571 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 24 19:06:29.868741 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865459 2571 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 24 19:06:29.868741 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865462 2571 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 24 19:06:29.868741 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865465 2571 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 24 19:06:29.868741 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865467 2571 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 24 19:06:29.868741 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865470 2571 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 24 19:06:29.868741 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865472 2571 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 24 19:06:29.868741 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865475 2571 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 24 19:06:29.868741 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865477 2571 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 24 19:06:29.868741 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865480 2571 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 24 19:06:29.868741 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865482 2571 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 24 19:06:29.868741 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865485 2571 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 24 19:06:29.869219 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865487 2571 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 24 19:06:29.869219 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:29.865490 2571 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 24 19:06:29.869219 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.865494 2571 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Apr 24 19:06:29.869219 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.866211 2571 server.go:962] "Client rotation is on, will bootstrap in background"
Apr 24 19:06:29.869330 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.869315 2571 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Apr 24 19:06:29.870326 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.870312 2571 server.go:1019] "Starting client certificate rotation"
Apr 24 19:06:29.870427 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.870410 2571 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet"
Apr 24 19:06:29.870462 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.870448 2571 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Apr 24 19:06:29.897425 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.897405 2571 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Apr 24 19:06:29.899994 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.899969 2571 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Apr 24 19:06:29.915770 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.915744 2571 log.go:25] "Validated CRI v1 runtime API"
Apr 24 19:06:29.922731 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.922712 2571 log.go:25] "Validated CRI v1 image API"
Apr 24 19:06:29.924812 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.924788 2571 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 24 19:06:29.927375 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.927353 2571 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Apr 24 19:06:29.927834 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.927815 2571 fs.go:135] Filesystem UUIDs: map[79261a26-b196-4e20-94a0-33344aeb1bba:/dev/nvme0n1p3 7B77-95E7:/dev/nvme0n1p2 e6558e78-87e1-4b3e-851d-2ac5c83cda94:/dev/nvme0n1p4]
Apr 24 19:06:29.927873 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.927835 2571 fs.go:136] Filesystem partitions: map[/dev/nvme0n1p3:{mountpoint:/boot major:259 minor:3 fsType:ext4 blockSize:0} /dev/nvme0n1p4:{mountpoint:/var major:259 minor:4 fsType:xfs blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}]
Apr 24 19:06:29.933757 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.933645 2571 manager.go:217] Machine: {Timestamp:2026-04-24 19:06:29.932360202 +0000 UTC m=+0.421784491 CPUVendorID:GenuineIntel NumCores:8 NumPhysicalCores:4 NumSockets:1 CpuFrequency:3099996 MemoryCapacity:33164488704 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ec257ab289ffe6dc1c57592e9d4943cd SystemUUID:ec257ab2-89ff-e6dc-1c57-592e9d4943cd BootID:369f4f6a-9283-43a9-abaa-e1f98d369128 Filesystems:[{Device:/dev/nvme0n1p4 DeviceMajor:259 DeviceMinor:4 Capacity:128243970048 Type:vfs Inodes:62651840 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6103040 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16582246400 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/nvme0n1p3 DeviceMajor:259 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16582242304 Type:vfs Inodes:4048399 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6632898560 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[259:0:{Name:nvme0n1 Major:259 Minor:0 Size:128849018880 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:02:c0:fc:73:db:11 Speed:0 Mtu:9001} {Name:ens5 MacAddress:02:c0:fc:73:db:11 Speed:0 Mtu:9001} {Name:ovs-system MacAddress:16:cc:62:80:ff:5a Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33164488704 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0 4] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:1 Threads:[1 5] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:2 Threads:[2 6] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:3 Threads:[3 7] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:}] Caches:[{Id:0 Size:37486592 Type:Unified Level:3}] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Apr 24 19:06:29.933757 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.933746 2571 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Apr 24 19:06:29.933873 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.933835 2571 manager.go:233] Version: {KernelVersion:5.14.0-570.107.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20260414-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Apr 24 19:06:29.936328 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.936294 2571 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 24 19:06:29.936466 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.936331 2571 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-10-0-130-85.ec2.internal","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"con
tainer","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 24 19:06:29.937070 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.937061 2571 topology_manager.go:138] "Creating topology manager with none policy" Apr 24 19:06:29.937108 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.937073 2571 container_manager_linux.go:306] "Creating device plugin manager" Apr 24 19:06:29.937108 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.937086 2571 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Apr 24 19:06:29.937808 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.937798 2571 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Apr 24 19:06:29.939266 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.939256 2571 state_mem.go:36] "Initialized new in-memory state store" Apr 24 19:06:29.939381 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.939372 2571 server.go:1267] "Using root directory" path="/var/lib/kubelet" Apr 24 19:06:29.942270 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.942261 2571 kubelet.go:491] "Attempting to sync node with API server" Apr 24 19:06:29.942320 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.942274 2571 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 24 19:06:29.942320 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.942291 2571 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Apr 24 19:06:29.942320 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.942300 2571 kubelet.go:397] "Adding apiserver pod source" Apr 24 19:06:29.942320 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.942309 2571 apiserver.go:42] "Waiting for node sync before watching 
apiserver pods" Apr 24 19:06:29.943513 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.943501 2571 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Apr 24 19:06:29.943668 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.943521 2571 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Apr 24 19:06:29.946520 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.946505 2571 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.10-2.rhaos4.20.gita4d0894.el9" apiVersion="v1" Apr 24 19:06:29.947964 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.947951 2571 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 24 19:06:29.950141 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.950127 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Apr 24 19:06:29.950196 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.950145 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Apr 24 19:06:29.950196 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.950151 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Apr 24 19:06:29.950196 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.950156 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Apr 24 19:06:29.950196 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.950162 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Apr 24 19:06:29.950196 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.950169 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Apr 24 19:06:29.950196 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.950175 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Apr 24 
19:06:29.950196 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.950181 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Apr 24 19:06:29.950196 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.950188 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc" Apr 24 19:06:29.950196 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.950195 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Apr 24 19:06:29.950434 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.950204 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Apr 24 19:06:29.950434 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.950213 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Apr 24 19:06:29.951073 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.951064 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Apr 24 19:06:29.951110 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.951074 2571 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Apr 24 19:06:29.953997 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.953980 2571 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-130-85.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 24 19:06:29.954449 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:29.954419 2571 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 24 19:06:29.954485 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:29.954419 2571 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes 
\"ip-10-0-130-85.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 24 19:06:29.954730 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.954718 2571 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 24 19:06:29.954762 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.954753 2571 server.go:1295] "Started kubelet" Apr 24 19:06:29.954918 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.954843 2571 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 24 19:06:29.955004 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.954842 2571 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 24 19:06:29.955004 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.954961 2571 server_v1.go:47] "podresources" method="list" useActivePods=true Apr 24 19:06:29.955747 ip-10-0-130-85 systemd[1]: Started Kubernetes Kubelet. 
Apr 24 19:06:29.956131 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.956113 2571 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 24 19:06:29.957191 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.957175 2571 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-cgqgz" Apr 24 19:06:29.957885 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.957869 2571 server.go:317] "Adding debug handlers to kubelet server" Apr 24 19:06:29.962401 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.962383 2571 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Apr 24 19:06:29.963041 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.963024 2571 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 24 19:06:29.963655 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.963631 2571 volume_manager.go:295] "The desired_state_of_world populator starts" Apr 24 19:06:29.963754 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.963663 2571 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 24 19:06:29.963754 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.963629 2571 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 24 19:06:29.963851 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.963790 2571 reconstruct.go:97] "Volume reconstruction finished" Apr 24 19:06:29.963851 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.963799 2571 reconciler.go:26] "Reconciler: start to sync state" Apr 24 19:06:29.963988 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:29.963935 2571 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-130-85.ec2.internal\" not found" Apr 24 19:06:29.964209 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.964189 2571 factory.go:55] Registering systemd factory Apr 24 
19:06:29.964277 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.964254 2571 factory.go:223] Registration of the systemd container factory successfully Apr 24 19:06:29.965765 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.965741 2571 factory.go:153] Registering CRI-O factory Apr 24 19:06:29.965765 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.965767 2571 factory.go:223] Registration of the crio container factory successfully Apr 24 19:06:29.965889 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.965817 2571 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Apr 24 19:06:29.965889 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.965839 2571 factory.go:103] Registering Raw factory Apr 24 19:06:29.965889 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.965855 2571 manager.go:1196] Started watching for new ooms in manager Apr 24 19:06:29.966471 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.966214 2571 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-cgqgz" Apr 24 19:06:29.966580 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.966563 2571 manager.go:319] Starting recovery of all containers Apr 24 19:06:29.967557 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:29.967516 2571 kubelet.go:1618] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Apr 24 19:06:29.967716 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:29.967682 2571 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-130-85.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Apr 24 19:06:29.967878 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:29.967859 2571 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 24 19:06:29.972510 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:29.967783 2571 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-130-85.ec2.internal.18a960767aa9dae1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-130-85.ec2.internal,UID:ip-10-0-130-85.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-10-0-130-85.ec2.internal,},FirstTimestamp:2026-04-24 19:06:29.954730721 +0000 UTC m=+0.444155012,LastTimestamp:2026-04-24 19:06:29.954730721 +0000 UTC m=+0.444155012,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-130-85.ec2.internal,}" Apr 24 19:06:29.979080 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.978923 2571 manager.go:324] 
Recovery completed Apr 24 19:06:29.983314 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.983302 2571 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 24 19:06:29.985516 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.985500 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-85.ec2.internal" event="NodeHasSufficientMemory" Apr 24 19:06:29.985586 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.985534 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-85.ec2.internal" event="NodeHasNoDiskPressure" Apr 24 19:06:29.985586 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.985549 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-85.ec2.internal" event="NodeHasSufficientPID" Apr 24 19:06:29.986058 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.986043 2571 cpu_manager.go:222] "Starting CPU manager" policy="none" Apr 24 19:06:29.986058 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.986058 2571 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Apr 24 19:06:29.986134 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.986074 2571 state_mem.go:36] "Initialized new in-memory state store" Apr 24 19:06:29.988296 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.988285 2571 policy_none.go:49] "None policy: Start" Apr 24 19:06:29.988340 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.988300 2571 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 24 19:06:29.988340 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:29.988310 2571 state_mem.go:35] "Initializing new in-memory state store" Apr 24 19:06:30.024490 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.024474 2571 manager.go:341] "Starting Device Plugin manager" Apr 24 19:06:30.039109 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:30.024514 2571 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" 
checkpoint="kubelet_internal_checkpoint" Apr 24 19:06:30.039109 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.024526 2571 server.go:85] "Starting device plugin registration server" Apr 24 19:06:30.039109 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.024765 2571 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 24 19:06:30.039109 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.024777 2571 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 24 19:06:30.039109 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.024871 2571 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Apr 24 19:06:30.039109 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.025468 2571 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Apr 24 19:06:30.039109 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.025479 2571 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 24 19:06:30.039109 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:30.025962 2571 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\"" Apr 24 19:06:30.039109 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:30.026001 2571 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-130-85.ec2.internal\" not found" Apr 24 19:06:30.101328 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.101264 2571 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 24 19:06:30.102482 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.102466 2571 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 24 19:06:30.102584 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.102497 2571 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 24 19:06:30.102584 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.102514 2571 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 24 19:06:30.102584 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.102523 2571 kubelet.go:2451] "Starting kubelet main sync loop" Apr 24 19:06:30.102584 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:30.102552 2571 kubelet.go:2475] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Apr 24 19:06:30.106616 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.106594 2571 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Apr 24 19:06:30.125529 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.125511 2571 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 24 19:06:30.126467 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.126453 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-85.ec2.internal" event="NodeHasSufficientMemory" Apr 24 19:06:30.126533 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.126483 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-85.ec2.internal" event="NodeHasNoDiskPressure" Apr 24 19:06:30.126533 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.126493 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-85.ec2.internal" event="NodeHasSufficientPID" Apr 24 19:06:30.126533 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.126515 2571 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-130-85.ec2.internal" Apr 24 19:06:30.134335 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.134320 2571 
kubelet_node_status.go:81] "Successfully registered node" node="ip-10-0-130-85.ec2.internal" Apr 24 19:06:30.134408 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:30.134341 2571 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-130-85.ec2.internal\": node \"ip-10-0-130-85.ec2.internal\" not found" Apr 24 19:06:30.159524 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:30.159502 2571 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-130-85.ec2.internal\" not found" Apr 24 19:06:30.202984 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.202960 2571 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["kube-system/kube-apiserver-proxy-ip-10-0-130-85.ec2.internal","openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-130-85.ec2.internal"] Apr 24 19:06:30.203084 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.203033 2571 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 24 19:06:30.204558 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.204544 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-85.ec2.internal" event="NodeHasSufficientMemory" Apr 24 19:06:30.204628 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.204574 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-85.ec2.internal" event="NodeHasNoDiskPressure" Apr 24 19:06:30.204628 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.204584 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-85.ec2.internal" event="NodeHasSufficientPID" Apr 24 19:06:30.205982 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.205969 2571 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 24 19:06:30.206139 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.206122 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-130-85.ec2.internal" Apr 24 19:06:30.206199 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.206156 2571 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 24 19:06:30.206660 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.206645 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-85.ec2.internal" event="NodeHasSufficientMemory" Apr 24 19:06:30.206660 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.206651 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-85.ec2.internal" event="NodeHasSufficientMemory" Apr 24 19:06:30.206771 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.206671 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-85.ec2.internal" event="NodeHasNoDiskPressure" Apr 24 19:06:30.206771 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.206682 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-85.ec2.internal" event="NodeHasSufficientPID" Apr 24 19:06:30.206771 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.206672 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-85.ec2.internal" event="NodeHasNoDiskPressure" Apr 24 19:06:30.206771 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.206751 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-85.ec2.internal" event="NodeHasSufficientPID" Apr 24 19:06:30.208036 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.208022 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-130-85.ec2.internal" Apr 24 19:06:30.208089 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.208048 2571 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 24 19:06:30.208658 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.208641 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-85.ec2.internal" event="NodeHasSufficientMemory" Apr 24 19:06:30.208743 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.208664 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-85.ec2.internal" event="NodeHasNoDiskPressure" Apr 24 19:06:30.208743 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.208676 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-85.ec2.internal" event="NodeHasSufficientPID" Apr 24 19:06:30.223449 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:30.223429 2571 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-130-85.ec2.internal\" not found" node="ip-10-0-130-85.ec2.internal" Apr 24 19:06:30.226572 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:30.226556 2571 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-130-85.ec2.internal\" not found" node="ip-10-0-130-85.ec2.internal" Apr 24 19:06:30.259946 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:30.259924 2571 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-130-85.ec2.internal\" not found" Apr 24 19:06:30.265010 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.264996 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/e6111c3de51e90f6a3a0694ec4ce1e2d-config\") pod 
\"kube-apiserver-proxy-ip-10-0-130-85.ec2.internal\" (UID: \"e6111c3de51e90f6a3a0694ec4ce1e2d\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-130-85.ec2.internal" Apr 24 19:06:30.265067 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.265020 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/09694a31517020611c7d019fc3713cd8-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-130-85.ec2.internal\" (UID: \"09694a31517020611c7d019fc3713cd8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-130-85.ec2.internal" Apr 24 19:06:30.265067 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.265038 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/09694a31517020611c7d019fc3713cd8-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-130-85.ec2.internal\" (UID: \"09694a31517020611c7d019fc3713cd8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-130-85.ec2.internal" Apr 24 19:06:30.360074 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:30.359991 2571 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-130-85.ec2.internal\" not found" Apr 24 19:06:30.365310 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.365287 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/e6111c3de51e90f6a3a0694ec4ce1e2d-config\") pod \"kube-apiserver-proxy-ip-10-0-130-85.ec2.internal\" (UID: \"e6111c3de51e90f6a3a0694ec4ce1e2d\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-130-85.ec2.internal" Apr 24 19:06:30.365402 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.365303 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/host-path/e6111c3de51e90f6a3a0694ec4ce1e2d-config\") pod \"kube-apiserver-proxy-ip-10-0-130-85.ec2.internal\" (UID: \"e6111c3de51e90f6a3a0694ec4ce1e2d\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-130-85.ec2.internal" Apr 24 19:06:30.365402 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.365329 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/09694a31517020611c7d019fc3713cd8-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-130-85.ec2.internal\" (UID: \"09694a31517020611c7d019fc3713cd8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-130-85.ec2.internal" Apr 24 19:06:30.365402 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.365363 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/09694a31517020611c7d019fc3713cd8-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-130-85.ec2.internal\" (UID: \"09694a31517020611c7d019fc3713cd8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-130-85.ec2.internal" Apr 24 19:06:30.365402 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.365374 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/09694a31517020611c7d019fc3713cd8-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-130-85.ec2.internal\" (UID: \"09694a31517020611c7d019fc3713cd8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-130-85.ec2.internal" Apr 24 19:06:30.365542 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.365417 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/09694a31517020611c7d019fc3713cd8-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-130-85.ec2.internal\" (UID: \"09694a31517020611c7d019fc3713cd8\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-130-85.ec2.internal" Apr 24 19:06:30.460678 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:30.460650 2571 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-130-85.ec2.internal\" not found" Apr 24 19:06:30.525083 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.525058 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-130-85.ec2.internal" Apr 24 19:06:30.529770 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.529747 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-130-85.ec2.internal" Apr 24 19:06:30.561621 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:30.561595 2571 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-130-85.ec2.internal\" not found" Apr 24 19:06:30.662196 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:30.662124 2571 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-130-85.ec2.internal\" not found" Apr 24 19:06:30.762650 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:30.762616 2571 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-130-85.ec2.internal\" not found" Apr 24 19:06:30.863156 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:30.863130 2571 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-130-85.ec2.internal\" not found" Apr 24 19:06:30.870511 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.870491 2571 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Apr 24 19:06:30.870643 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.870624 2571 reflector.go:556] "Warning: watch ended with error" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Apr 24 19:06:30.916581 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.916514 2571 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Apr 24 19:06:30.939648 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.939626 2571 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Apr 24 19:06:30.942560 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.942539 2571 apiserver.go:52] "Watching apiserver" Apr 24 19:06:30.950638 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.950619 2571 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Apr 24 19:06:30.951017 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.950988 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-5sdvh","openshift-multus/multus-additional-cni-plugins-gjqqt","openshift-multus/network-metrics-daemon-l2vd2","openshift-ovn-kubernetes/ovnkube-node-9rqzs","kube-system/konnectivity-agent-ztvgx","openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-999gb","openshift-dns/node-resolver-qgvbw","openshift-multus/multus-8snvl","openshift-network-diagnostics/network-check-target-f4z44","openshift-network-operator/iptables-alerter-9cd8l","openshift-cluster-node-tuning-operator/tuned-2whc8"] Apr 24 19:06:30.952986 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.952969 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-5sdvh" Apr 24 19:06:30.954170 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.954153 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gjqqt" Apr 24 19:06:30.955354 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.955336 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Apr 24 19:06:30.955459 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.955442 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l2vd2" Apr 24 19:06:30.955537 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:30.955517 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l2vd2" podUID="4a2cb1aa-59c8-4abb-9a8a-daf68903a358" Apr 24 19:06:30.955755 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.955739 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Apr 24 19:06:30.955814 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.955797 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Apr 24 19:06:30.955892 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.955863 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-824w8\"" Apr 24 19:06:30.956755 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.956737 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Apr 24 19:06:30.956850 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.956777 2571 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Apr 24 19:06:30.957086 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.957072 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Apr 24 19:06:30.957154 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.957142 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Apr 24 19:06:30.957329 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.957316 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-7slfx\"" Apr 24 19:06:30.957976 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.957896 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Apr 24 19:06:30.958156 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.958140 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:30.959280 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.959263 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-ztvgx" Apr 24 19:06:30.960593 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.960577 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-999gb" Apr 24 19:06:30.960971 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.960951 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Apr 24 19:06:30.960971 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.960954 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Apr 24 19:06:30.961108 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.961002 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-jgnvn\"" Apr 24 19:06:30.961108 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.960960 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Apr 24 19:06:30.961108 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.960971 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Apr 24 19:06:30.961108 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.961051 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Apr 24 19:06:30.961290 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.961092 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Apr 24 19:06:30.961863 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.961783 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kube-system\"/\"konnectivity-ca-bundle\"" Apr 24 19:06:30.961863 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.961820 2571 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"kube-system\"/\"konnectivity-agent\"" Apr 24 19:06:30.962020 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.961965 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"default-dockercfg-l7ql5\"" Apr 24 19:06:30.962122 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.962104 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-qgvbw" Apr 24 19:06:30.962280 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.962261 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-8snvl" Apr 24 19:06:30.962632 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.962613 2571 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Apr 24 19:06:30.962760 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.962742 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-sa-dockercfg-msb7g\"" Apr 24 19:06:30.963178 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.963162 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-metrics-serving-cert\"" Apr 24 19:06:30.963405 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.963390 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"kube-root-ca.crt\"" Apr 24 19:06:30.963478 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.963403 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"openshift-service-ca.crt\"" Apr 24 19:06:30.963597 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.963579 2571 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-130-85.ec2.internal" Apr 24 
19:06:30.964304 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.964210 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f4z44" Apr 24 19:06:30.964422 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:30.964288 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-f4z44" podUID="4c1189cb-51d5-415c-8dc4-6c1970496280" Apr 24 19:06:30.964643 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.964620 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Apr 24 19:06:30.964740 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.964727 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Apr 24 19:06:30.964919 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.964887 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-rgrrt\"" Apr 24 19:06:30.964999 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.964975 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Apr 24 19:06:30.965097 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.965072 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-lfjkm\"" Apr 24 19:06:30.966062 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.966044 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-9cd8l" Apr 24 19:06:30.967272 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.967257 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-2whc8" Apr 24 19:06:30.968283 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.968268 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Apr 24 19:06:30.968469 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.968447 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztfl6\" (UniqueName: \"kubernetes.io/projected/42b9ec03-8a07-41ff-bcea-adb703ebf407-kube-api-access-ztfl6\") pod \"node-resolver-qgvbw\" (UID: \"42b9ec03-8a07-41ff-bcea-adb703ebf407\") " pod="openshift-dns/node-resolver-qgvbw" Apr 24 19:06:30.968588 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.968503 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-run-systemd\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:30.968588 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.968523 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Apr 24 19:06:30.968588 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.968527 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/eb058d66-c070-4f69-80a8-1e14b2f69f5d-cni-binary-copy\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl" Apr 24 19:06:30.968588 
ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.968549 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-hostroot\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl" Apr 24 19:06:30.968588 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.968564 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/eb058d66-c070-4f69-80a8-1e14b2f69f5d-multus-daemon-config\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl" Apr 24 19:06:30.968588 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.968579 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5fbff284-fcfa-4f9b-9c3a-12eb318d19f8-registration-dir\") pod \"aws-ebs-csi-driver-node-999gb\" (UID: \"5fbff284-fcfa-4f9b-9c3a-12eb318d19f8\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-999gb" Apr 24 19:06:30.968942 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.968593 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-var-lib-openvswitch\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:30.968942 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.968610 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0f254dda-8cf9-4910-99c2-638993f242ac-ovnkube-config\") pod \"ovnkube-node-9rqzs\" (UID: 
\"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:30.968942 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.968530 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Apr 24 19:06:30.968942 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.968651 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0f254dda-8cf9-4910-99c2-638993f242ac-env-overrides\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:30.968942 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.968675 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-cnibin\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl" Apr 24 19:06:30.968942 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.968689 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f9742f1-146e-47e5-a302-4d5570284239-host\") pod \"node-ca-5sdvh\" (UID: \"5f9742f1-146e-47e5-a302-4d5570284239\") " pod="openshift-image-registry/node-ca-5sdvh" Apr 24 19:06:30.968942 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.968714 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-log-socket\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:30.968942 ip-10-0-130-85 kubenswrapper[2571]: I0424 
19:06:30.968737 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-host-cni-bin\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:30.968942 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.968732 2571 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2028-04-23 19:01:29 +0000 UTC" deadline="2027-09-22 09:11:09.505860166 +0000 UTC" Apr 24 19:06:30.968942 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.968751 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/5fbff284-fcfa-4f9b-9c3a-12eb318d19f8-device-dir\") pod \"aws-ebs-csi-driver-node-999gb\" (UID: \"5fbff284-fcfa-4f9b-9c3a-12eb318d19f8\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-999gb" Apr 24 19:06:30.968942 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.968757 2571 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="12374h4m38.537105538s" Apr 24 19:06:30.968942 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.968765 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-multus-conf-dir\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl" Apr 24 19:06:30.968942 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.968772 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-dockercfg-dcm94\"" Apr 24 19:06:30.968942 
ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.968782 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c48fa480-1be5-43ad-b4b7-25d95f327d97-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gjqqt\" (UID: \"c48fa480-1be5-43ad-b4b7-25d95f327d97\") " pod="openshift-multus/multus-additional-cni-plugins-gjqqt" Apr 24 19:06:30.968942 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.968822 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkqcz\" (UniqueName: \"kubernetes.io/projected/0f254dda-8cf9-4910-99c2-638993f242ac-kube-api-access-xkqcz\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:30.968942 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.968843 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-system-cni-dir\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl" Apr 24 19:06:30.968942 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.968858 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pncf\" (UniqueName: \"kubernetes.io/projected/5f9742f1-146e-47e5-a302-4d5570284239-kube-api-access-5pncf\") pod \"node-ca-5sdvh\" (UID: \"5f9742f1-146e-47e5-a302-4d5570284239\") " pod="openshift-image-registry/node-ca-5sdvh" Apr 24 19:06:30.968942 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.968883 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:30.969666 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.968914 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0f254dda-8cf9-4910-99c2-638993f242ac-ovn-node-metrics-cert\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:30.969666 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.968935 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-multus-socket-dir-parent\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl" Apr 24 19:06:30.969666 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.968949 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-host-run-k8s-cni-cncf-io\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl" Apr 24 19:06:30.969666 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.968963 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb96x\" (UniqueName: \"kubernetes.io/projected/eb058d66-c070-4f69-80a8-1e14b2f69f5d-kube-api-access-xb96x\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl" Apr 24 19:06:30.969666 ip-10-0-130-85 kubenswrapper[2571]: I0424 
19:06:30.968982 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-systemd-units\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:30.969666 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969019 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-run-ovn\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:30.969666 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969037 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-host-var-lib-cni-bin\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl" Apr 24 19:06:30.969666 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969054 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-host-run-netns\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:30.969666 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969085 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-host-cni-netd\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:30.969666 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969105 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-host-run-netns\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl" Apr 24 19:06:30.969666 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969120 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-etc-kubernetes\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl" Apr 24 19:06:30.969666 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969134 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6qvd\" (UniqueName: \"kubernetes.io/projected/c48fa480-1be5-43ad-b4b7-25d95f327d97-kube-api-access-m6qvd\") pod \"multus-additional-cni-plugins-gjqqt\" (UID: \"c48fa480-1be5-43ad-b4b7-25d95f327d97\") " pod="openshift-multus/multus-additional-cni-plugins-gjqqt" Apr 24 19:06:30.969666 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969149 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/c48fa480-1be5-43ad-b4b7-25d95f327d97-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-gjqqt\" (UID: \"c48fa480-1be5-43ad-b4b7-25d95f327d97\") " pod="openshift-multus/multus-additional-cni-plugins-gjqqt" Apr 24 19:06:30.969666 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969190 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" 
(UniqueName: \"kubernetes.io/host-path/5fbff284-fcfa-4f9b-9c3a-12eb318d19f8-socket-dir\") pod \"aws-ebs-csi-driver-node-999gb\" (UID: \"5fbff284-fcfa-4f9b-9c3a-12eb318d19f8\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-999gb" Apr 24 19:06:30.969666 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969204 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/5fbff284-fcfa-4f9b-9c3a-12eb318d19f8-etc-selinux\") pod \"aws-ebs-csi-driver-node-999gb\" (UID: \"5fbff284-fcfa-4f9b-9c3a-12eb318d19f8\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-999gb" Apr 24 19:06:30.969666 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969218 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcsh9\" (UniqueName: \"kubernetes.io/projected/5fbff284-fcfa-4f9b-9c3a-12eb318d19f8-kube-api-access-hcsh9\") pod \"aws-ebs-csi-driver-node-999gb\" (UID: \"5fbff284-fcfa-4f9b-9c3a-12eb318d19f8\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-999gb" Apr 24 19:06:30.970552 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969240 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-host-var-lib-kubelet\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl" Apr 24 19:06:30.970552 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969268 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/5767c74f-a79f-44ed-966e-2106086c99dc-agent-certs\") pod \"konnectivity-agent-ztvgx\" (UID: \"5767c74f-a79f-44ed-966e-2106086c99dc\") " pod="kube-system/konnectivity-agent-ztvgx" Apr 24 19:06:30.970552 
ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969301 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/5767c74f-a79f-44ed-966e-2106086c99dc-konnectivity-ca\") pod \"konnectivity-agent-ztvgx\" (UID: \"5767c74f-a79f-44ed-966e-2106086c99dc\") " pod="kube-system/konnectivity-agent-ztvgx" Apr 24 19:06:30.970552 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969330 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-host-slash\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:30.970552 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969347 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-run-openvswitch\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:30.970552 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969360 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c48fa480-1be5-43ad-b4b7-25d95f327d97-system-cni-dir\") pod \"multus-additional-cni-plugins-gjqqt\" (UID: \"c48fa480-1be5-43ad-b4b7-25d95f327d97\") " pod="openshift-multus/multus-additional-cni-plugins-gjqqt" Apr 24 19:06:30.970552 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969386 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c48fa480-1be5-43ad-b4b7-25d95f327d97-cni-binary-copy\") pod 
\"multus-additional-cni-plugins-gjqqt\" (UID: \"c48fa480-1be5-43ad-b4b7-25d95f327d97\") " pod="openshift-multus/multus-additional-cni-plugins-gjqqt" Apr 24 19:06:30.970552 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969411 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-host-kubelet\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:30.970552 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969429 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-multus-cni-dir\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl" Apr 24 19:06:30.970552 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969465 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-host-run-multus-certs\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl" Apr 24 19:06:30.970552 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969491 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c48fa480-1be5-43ad-b4b7-25d95f327d97-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gjqqt\" (UID: \"c48fa480-1be5-43ad-b4b7-25d95f327d97\") " pod="openshift-multus/multus-additional-cni-plugins-gjqqt" Apr 24 19:06:30.970552 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969514 2571 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a2cb1aa-59c8-4abb-9a8a-daf68903a358-metrics-certs\") pod \"network-metrics-daemon-l2vd2\" (UID: \"4a2cb1aa-59c8-4abb-9a8a-daf68903a358\") " pod="openshift-multus/network-metrics-daemon-l2vd2" Apr 24 19:06:30.970552 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969545 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5fbff284-fcfa-4f9b-9c3a-12eb318d19f8-kubelet-dir\") pod \"aws-ebs-csi-driver-node-999gb\" (UID: \"5fbff284-fcfa-4f9b-9c3a-12eb318d19f8\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-999gb" Apr 24 19:06:30.970552 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969572 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c48fa480-1be5-43ad-b4b7-25d95f327d97-cnibin\") pod \"multus-additional-cni-plugins-gjqqt\" (UID: \"c48fa480-1be5-43ad-b4b7-25d95f327d97\") " pod="openshift-multus/multus-additional-cni-plugins-gjqqt" Apr 24 19:06:30.970552 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969588 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c48fa480-1be5-43ad-b4b7-25d95f327d97-os-release\") pod \"multus-additional-cni-plugins-gjqqt\" (UID: \"c48fa480-1be5-43ad-b4b7-25d95f327d97\") " pod="openshift-multus/multus-additional-cni-plugins-gjqqt" Apr 24 19:06:30.970552 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969614 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-os-release\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " 
pod="openshift-multus/multus-8snvl" Apr 24 19:06:30.971358 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969630 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/42b9ec03-8a07-41ff-bcea-adb703ebf407-tmp-dir\") pod \"node-resolver-qgvbw\" (UID: \"42b9ec03-8a07-41ff-bcea-adb703ebf407\") " pod="openshift-dns/node-resolver-qgvbw" Apr 24 19:06:30.971358 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969650 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcjp5\" (UniqueName: \"kubernetes.io/projected/4c1189cb-51d5-415c-8dc4-6c1970496280-kube-api-access-tcjp5\") pod \"network-check-target-f4z44\" (UID: \"4c1189cb-51d5-415c-8dc4-6c1970496280\") " pod="openshift-network-diagnostics/network-check-target-f4z44" Apr 24 19:06:30.971358 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969666 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-host-run-ovn-kubernetes\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:30.971358 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969689 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0f254dda-8cf9-4910-99c2-638993f242ac-ovnkube-script-lib\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:30.971358 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969734 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: 
\"kubernetes.io/host-path/5fbff284-fcfa-4f9b-9c3a-12eb318d19f8-sys-fs\") pod \"aws-ebs-csi-driver-node-999gb\" (UID: \"5fbff284-fcfa-4f9b-9c3a-12eb318d19f8\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-999gb" Apr 24 19:06:30.971358 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969767 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-host-var-lib-cni-multus\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl" Apr 24 19:06:30.971358 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969799 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/42b9ec03-8a07-41ff-bcea-adb703ebf407-hosts-file\") pod \"node-resolver-qgvbw\" (UID: \"42b9ec03-8a07-41ff-bcea-adb703ebf407\") " pod="openshift-dns/node-resolver-qgvbw" Apr 24 19:06:30.971358 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969830 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5f9742f1-146e-47e5-a302-4d5570284239-serviceca\") pod \"node-ca-5sdvh\" (UID: \"5f9742f1-146e-47e5-a302-4d5570284239\") " pod="openshift-image-registry/node-ca-5sdvh" Apr 24 19:06:30.971358 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969848 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vdq2\" (UniqueName: \"kubernetes.io/projected/4a2cb1aa-59c8-4abb-9a8a-daf68903a358-kube-api-access-9vdq2\") pod \"network-metrics-daemon-l2vd2\" (UID: \"4a2cb1aa-59c8-4abb-9a8a-daf68903a358\") " pod="openshift-multus/network-metrics-daemon-l2vd2" Apr 24 19:06:30.971358 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969871 2571 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-etc-openvswitch\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:30.971358 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969890 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"openshift-service-ca.crt\"" Apr 24 19:06:30.971358 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.969892 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-node-log\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:30.971358 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.970107 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"tuned-dockercfg-76jbw\"" Apr 24 19:06:30.971358 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.970306 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"kube-root-ca.crt\"" Apr 24 19:06:30.979116 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.979099 2571 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Apr 24 19:06:30.980388 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.980373 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-130-85.ec2.internal"] Apr 24 19:06:30.980491 ip-10-0-130-85 kubenswrapper[2571]: I0424 
19:06:30.980477 2571 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 24 19:06:30.980540 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.980533 2571 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-130-85.ec2.internal" Apr 24 19:06:30.987333 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.987311 2571 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 24 19:06:30.987333 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.987325 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kube-system/kube-apiserver-proxy-ip-10-0-130-85.ec2.internal"] Apr 24 19:06:30.989077 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:30.989054 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09694a31517020611c7d019fc3713cd8.slice/crio-55958ae5c7eb77f465a853142ec32bb49516e6bd9c0e5cd5ad3ea0d5710bf857 WatchSource:0}: Error finding container 55958ae5c7eb77f465a853142ec32bb49516e6bd9c0e5cd5ad3ea0d5710bf857: Status 404 returned error can't find the container with id 55958ae5c7eb77f465a853142ec32bb49516e6bd9c0e5cd5ad3ea0d5710bf857 Apr 24 19:06:30.993210 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:30.993196 2571 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 24 19:06:31.011364 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.011347 2571 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-lf56v" Apr 24 19:06:31.024422 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.024403 2571 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" 
csr="csr-lf56v" Apr 24 19:06:31.057751 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.057731 2571 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Apr 24 19:06:31.066788 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.066769 2571 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 24 19:06:31.070199 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070183 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-log-socket\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:31.070266 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070205 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-host-cni-bin\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:31.070266 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070222 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/5fbff284-fcfa-4f9b-9c3a-12eb318d19f8-device-dir\") pod \"aws-ebs-csi-driver-node-999gb\" (UID: \"5fbff284-fcfa-4f9b-9c3a-12eb318d19f8\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-999gb" Apr 24 19:06:31.070266 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070236 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-multus-conf-dir\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl" Apr 24 
19:06:31.070266 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070251 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c48fa480-1be5-43ad-b4b7-25d95f327d97-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gjqqt\" (UID: \"c48fa480-1be5-43ad-b4b7-25d95f327d97\") " pod="openshift-multus/multus-additional-cni-plugins-gjqqt" Apr 24 19:06:31.070417 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070274 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-etc-sysctl-d\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8" Apr 24 19:06:31.070417 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070281 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-host-cni-bin\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:31.070417 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070298 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-run\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8" Apr 24 19:06:31.070417 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070314 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-log-socket\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:31.070417 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070325 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/5fbff284-fcfa-4f9b-9c3a-12eb318d19f8-device-dir\") pod \"aws-ebs-csi-driver-node-999gb\" (UID: \"5fbff284-fcfa-4f9b-9c3a-12eb318d19f8\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-999gb" Apr 24 19:06:31.070417 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070321 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-sys\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8" Apr 24 19:06:31.070417 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070338 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-multus-conf-dir\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl" Apr 24 19:06:31.070417 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070360 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xkqcz\" (UniqueName: \"kubernetes.io/projected/0f254dda-8cf9-4910-99c2-638993f242ac-kube-api-access-xkqcz\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:31.070417 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070378 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-system-cni-dir\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " 
pod="openshift-multus/multus-8snvl" Apr 24 19:06:31.070417 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070395 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5pncf\" (UniqueName: \"kubernetes.io/projected/5f9742f1-146e-47e5-a302-4d5570284239-kube-api-access-5pncf\") pod \"node-ca-5sdvh\" (UID: \"5f9742f1-146e-47e5-a302-4d5570284239\") " pod="openshift-image-registry/node-ca-5sdvh" Apr 24 19:06:31.070417 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070412 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-etc-systemd\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8" Apr 24 19:06:31.070764 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070428 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:31.070764 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070428 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c48fa480-1be5-43ad-b4b7-25d95f327d97-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gjqqt\" (UID: \"c48fa480-1be5-43ad-b4b7-25d95f327d97\") " pod="openshift-multus/multus-additional-cni-plugins-gjqqt" Apr 24 19:06:31.070764 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070431 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-system-cni-dir\") 
pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl" Apr 24 19:06:31.070764 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070450 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0f254dda-8cf9-4910-99c2-638993f242ac-ovn-node-metrics-cert\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:31.070764 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070483 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-multus-socket-dir-parent\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl" Apr 24 19:06:31.070764 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070487 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:31.070764 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070507 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-host-run-k8s-cni-cncf-io\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl" Apr 24 19:06:31.070764 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070532 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xb96x\" (UniqueName: 
\"kubernetes.io/projected/eb058d66-c070-4f69-80a8-1e14b2f69f5d-kube-api-access-xb96x\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl" Apr 24 19:06:31.070764 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070549 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-multus-socket-dir-parent\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl" Apr 24 19:06:31.070764 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070556 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-var-lib-kubelet\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8" Apr 24 19:06:31.070764 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070599 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-host-run-k8s-cni-cncf-io\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl" Apr 24 19:06:31.070764 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070671 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-tmp\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8" Apr 24 19:06:31.070764 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070707 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-systemd-units\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:31.070764 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070735 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-run-ovn\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:31.070764 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070754 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-systemd-units\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:31.070764 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070759 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-host-var-lib-cni-bin\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl" Apr 24 19:06:31.071338 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070756 2571 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Apr 24 19:06:31.071338 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070787 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-run-ovn\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:31.071338 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070790 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a0fb5806-85c6-498a-9018-3d8218e45d60-host-slash\") pod \"iptables-alerter-9cd8l\" (UID: \"a0fb5806-85c6-498a-9018-3d8218e45d60\") " pod="openshift-network-operator/iptables-alerter-9cd8l" Apr 24 19:06:31.071338 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070818 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnsqw\" (UniqueName: \"kubernetes.io/projected/a0fb5806-85c6-498a-9018-3d8218e45d60-kube-api-access-rnsqw\") pod \"iptables-alerter-9cd8l\" (UID: \"a0fb5806-85c6-498a-9018-3d8218e45d60\") " pod="openshift-network-operator/iptables-alerter-9cd8l" Apr 24 19:06:31.071338 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070818 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-host-var-lib-cni-bin\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl" Apr 24 19:06:31.071338 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070836 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-host-run-netns\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:31.071338 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070862 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-host-cni-netd\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:31.071338 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070872 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-host-run-netns\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:31.071338 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070880 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-host-run-netns\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl" Apr 24 19:06:31.071338 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070899 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-etc-kubernetes\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl" Apr 24 19:06:31.071338 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070922 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-host-cni-netd\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:31.071338 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070940 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m6qvd\" (UniqueName: \"kubernetes.io/projected/c48fa480-1be5-43ad-b4b7-25d95f327d97-kube-api-access-m6qvd\") pod \"multus-additional-cni-plugins-gjqqt\" (UID: \"c48fa480-1be5-43ad-b4b7-25d95f327d97\") " pod="openshift-multus/multus-additional-cni-plugins-gjqqt" Apr 24 19:06:31.071338 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070945 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-host-run-netns\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl" Apr 24 19:06:31.071338 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070972 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-etc-sysctl-conf\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8" Apr 24 19:06:31.071338 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.070982 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-etc-kubernetes\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl" Apr 24 19:06:31.071338 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071001 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host\" (UniqueName: \"kubernetes.io/host-path/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-host\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8" Apr 24 19:06:31.071338 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071027 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/c48fa480-1be5-43ad-b4b7-25d95f327d97-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-gjqqt\" (UID: \"c48fa480-1be5-43ad-b4b7-25d95f327d97\") " pod="openshift-multus/multus-additional-cni-plugins-gjqqt" Apr 24 19:06:31.071338 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071043 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5fbff284-fcfa-4f9b-9c3a-12eb318d19f8-socket-dir\") pod \"aws-ebs-csi-driver-node-999gb\" (UID: \"5fbff284-fcfa-4f9b-9c3a-12eb318d19f8\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-999gb" Apr 24 19:06:31.072154 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071060 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/5fbff284-fcfa-4f9b-9c3a-12eb318d19f8-etc-selinux\") pod \"aws-ebs-csi-driver-node-999gb\" (UID: \"5fbff284-fcfa-4f9b-9c3a-12eb318d19f8\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-999gb" Apr 24 19:06:31.072154 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071083 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hcsh9\" (UniqueName: \"kubernetes.io/projected/5fbff284-fcfa-4f9b-9c3a-12eb318d19f8-kube-api-access-hcsh9\") pod \"aws-ebs-csi-driver-node-999gb\" (UID: \"5fbff284-fcfa-4f9b-9c3a-12eb318d19f8\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-999gb" Apr 24 19:06:31.072154 
ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071127 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-host-var-lib-kubelet\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl"
Apr 24 19:06:31.072154 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071156 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/5767c74f-a79f-44ed-966e-2106086c99dc-agent-certs\") pod \"konnectivity-agent-ztvgx\" (UID: \"5767c74f-a79f-44ed-966e-2106086c99dc\") " pod="kube-system/konnectivity-agent-ztvgx"
Apr 24 19:06:31.072154 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071179 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5fbff284-fcfa-4f9b-9c3a-12eb318d19f8-socket-dir\") pod \"aws-ebs-csi-driver-node-999gb\" (UID: \"5fbff284-fcfa-4f9b-9c3a-12eb318d19f8\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-999gb"
Apr 24 19:06:31.072154 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071197 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/5fbff284-fcfa-4f9b-9c3a-12eb318d19f8-etc-selinux\") pod \"aws-ebs-csi-driver-node-999gb\" (UID: \"5fbff284-fcfa-4f9b-9c3a-12eb318d19f8\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-999gb"
Apr 24 19:06:31.072154 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071203 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-host-var-lib-kubelet\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl"
Apr 24 19:06:31.072154 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071186 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/5767c74f-a79f-44ed-966e-2106086c99dc-konnectivity-ca\") pod \"konnectivity-agent-ztvgx\" (UID: \"5767c74f-a79f-44ed-966e-2106086c99dc\") " pod="kube-system/konnectivity-agent-ztvgx"
Apr 24 19:06:31.072154 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071251 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-host-slash\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs"
Apr 24 19:06:31.072154 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071278 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-run-openvswitch\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs"
Apr 24 19:06:31.072154 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071298 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c48fa480-1be5-43ad-b4b7-25d95f327d97-system-cni-dir\") pod \"multus-additional-cni-plugins-gjqqt\" (UID: \"c48fa480-1be5-43ad-b4b7-25d95f327d97\") " pod="openshift-multus/multus-additional-cni-plugins-gjqqt"
Apr 24 19:06:31.072154 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071319 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c48fa480-1be5-43ad-b4b7-25d95f327d97-cni-binary-copy\") pod \"multus-additional-cni-plugins-gjqqt\" (UID: \"c48fa480-1be5-43ad-b4b7-25d95f327d97\") " pod="openshift-multus/multus-additional-cni-plugins-gjqqt"
Apr 24 19:06:31.072154 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071320 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-run-openvswitch\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs"
Apr 24 19:06:31.072154 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071329 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c48fa480-1be5-43ad-b4b7-25d95f327d97-system-cni-dir\") pod \"multus-additional-cni-plugins-gjqqt\" (UID: \"c48fa480-1be5-43ad-b4b7-25d95f327d97\") " pod="openshift-multus/multus-additional-cni-plugins-gjqqt"
Apr 24 19:06:31.072154 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071298 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-host-slash\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs"
Apr 24 19:06:31.072154 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071346 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-etc-kubernetes\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8"
Apr 24 19:06:31.072154 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071367 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-host-kubelet\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs"
Apr 24 19:06:31.072941 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071387 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-multus-cni-dir\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl"
Apr 24 19:06:31.072941 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071406 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-host-run-multus-certs\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl"
Apr 24 19:06:31.072941 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071426 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c48fa480-1be5-43ad-b4b7-25d95f327d97-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gjqqt\" (UID: \"c48fa480-1be5-43ad-b4b7-25d95f327d97\") " pod="openshift-multus/multus-additional-cni-plugins-gjqqt"
Apr 24 19:06:31.072941 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071443 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-host-kubelet\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs"
Apr 24 19:06:31.072941 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071449 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-etc-modprobe-d\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8"
Apr 24 19:06:31.072941 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071506 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-host-run-multus-certs\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl"
Apr 24 19:06:31.072941 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071601 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/c48fa480-1be5-43ad-b4b7-25d95f327d97-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-gjqqt\" (UID: \"c48fa480-1be5-43ad-b4b7-25d95f327d97\") " pod="openshift-multus/multus-additional-cni-plugins-gjqqt"
Apr 24 19:06:31.072941 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071653 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-multus-cni-dir\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl"
Apr 24 19:06:31.072941 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071658 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a2cb1aa-59c8-4abb-9a8a-daf68903a358-metrics-certs\") pod \"network-metrics-daemon-l2vd2\" (UID: \"4a2cb1aa-59c8-4abb-9a8a-daf68903a358\") " pod="openshift-multus/network-metrics-daemon-l2vd2"
Apr 24 19:06:31.072941 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071685 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/5767c74f-a79f-44ed-966e-2106086c99dc-konnectivity-ca\") pod \"konnectivity-agent-ztvgx\" (UID: \"5767c74f-a79f-44ed-966e-2106086c99dc\") " pod="kube-system/konnectivity-agent-ztvgx"
Apr 24 19:06:31.072941 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071716 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5fbff284-fcfa-4f9b-9c3a-12eb318d19f8-kubelet-dir\") pod \"aws-ebs-csi-driver-node-999gb\" (UID: \"5fbff284-fcfa-4f9b-9c3a-12eb318d19f8\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-999gb"
Apr 24 19:06:31.072941 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:31.071736 2571 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 24 19:06:31.072941 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071750 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c48fa480-1be5-43ad-b4b7-25d95f327d97-cnibin\") pod \"multus-additional-cni-plugins-gjqqt\" (UID: \"c48fa480-1be5-43ad-b4b7-25d95f327d97\") " pod="openshift-multus/multus-additional-cni-plugins-gjqqt"
Apr 24 19:06:31.072941 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071766 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5fbff284-fcfa-4f9b-9c3a-12eb318d19f8-kubelet-dir\") pod \"aws-ebs-csi-driver-node-999gb\" (UID: \"5fbff284-fcfa-4f9b-9c3a-12eb318d19f8\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-999gb"
Apr 24 19:06:31.072941 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:31.071809 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a2cb1aa-59c8-4abb-9a8a-daf68903a358-metrics-certs podName:4a2cb1aa-59c8-4abb-9a8a-daf68903a358 nodeName:}" failed. No retries permitted until 2026-04-24 19:06:31.571781097 +0000 UTC m=+2.061205391 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a2cb1aa-59c8-4abb-9a8a-daf68903a358-metrics-certs") pod "network-metrics-daemon-l2vd2" (UID: "4a2cb1aa-59c8-4abb-9a8a-daf68903a358") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 24 19:06:31.072941 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071816 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c48fa480-1be5-43ad-b4b7-25d95f327d97-cni-binary-copy\") pod \"multus-additional-cni-plugins-gjqqt\" (UID: \"c48fa480-1be5-43ad-b4b7-25d95f327d97\") " pod="openshift-multus/multus-additional-cni-plugins-gjqqt"
Apr 24 19:06:31.072941 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071825 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c48fa480-1be5-43ad-b4b7-25d95f327d97-os-release\") pod \"multus-additional-cni-plugins-gjqqt\" (UID: \"c48fa480-1be5-43ad-b4b7-25d95f327d97\") " pod="openshift-multus/multus-additional-cni-plugins-gjqqt"
Apr 24 19:06:31.073711 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071833 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c48fa480-1be5-43ad-b4b7-25d95f327d97-cnibin\") pod \"multus-additional-cni-plugins-gjqqt\" (UID: \"c48fa480-1be5-43ad-b4b7-25d95f327d97\") " pod="openshift-multus/multus-additional-cni-plugins-gjqqt"
Apr 24 19:06:31.073711 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071848 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc6q6\" (UniqueName: \"kubernetes.io/projected/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-kube-api-access-tc6q6\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8"
Apr 24 19:06:31.073711 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071883 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-os-release\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl"
Apr 24 19:06:31.073711 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071924 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/42b9ec03-8a07-41ff-bcea-adb703ebf407-tmp-dir\") pod \"node-resolver-qgvbw\" (UID: \"42b9ec03-8a07-41ff-bcea-adb703ebf407\") " pod="openshift-dns/node-resolver-qgvbw"
Apr 24 19:06:31.073711 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071927 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c48fa480-1be5-43ad-b4b7-25d95f327d97-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gjqqt\" (UID: \"c48fa480-1be5-43ad-b4b7-25d95f327d97\") " pod="openshift-multus/multus-additional-cni-plugins-gjqqt"
Apr 24 19:06:31.073711 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071935 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c48fa480-1be5-43ad-b4b7-25d95f327d97-os-release\") pod \"multus-additional-cni-plugins-gjqqt\" (UID: \"c48fa480-1be5-43ad-b4b7-25d95f327d97\") " pod="openshift-multus/multus-additional-cni-plugins-gjqqt"
Apr 24 19:06:31.073711 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071951 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tcjp5\" (UniqueName: \"kubernetes.io/projected/4c1189cb-51d5-415c-8dc4-6c1970496280-kube-api-access-tcjp5\") pod \"network-check-target-f4z44\" (UID: \"4c1189cb-51d5-415c-8dc4-6c1970496280\") " pod="openshift-network-diagnostics/network-check-target-f4z44"
Apr 24 19:06:31.073711 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071976 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-host-run-ovn-kubernetes\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs"
Apr 24 19:06:31.073711 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.071968 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-os-release\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl"
Apr 24 19:06:31.073711 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072101 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0f254dda-8cf9-4910-99c2-638993f242ac-ovnkube-script-lib\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs"
Apr 24 19:06:31.073711 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072120 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-host-run-ovn-kubernetes\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs"
Apr 24 19:06:31.073711 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072130 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/5fbff284-fcfa-4f9b-9c3a-12eb318d19f8-sys-fs\") pod \"aws-ebs-csi-driver-node-999gb\" (UID: \"5fbff284-fcfa-4f9b-9c3a-12eb318d19f8\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-999gb"
Apr 24 19:06:31.073711 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072154 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-host-var-lib-cni-multus\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl"
Apr 24 19:06:31.073711 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072177 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/42b9ec03-8a07-41ff-bcea-adb703ebf407-hosts-file\") pod \"node-resolver-qgvbw\" (UID: \"42b9ec03-8a07-41ff-bcea-adb703ebf407\") " pod="openshift-dns/node-resolver-qgvbw"
Apr 24 19:06:31.073711 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072196 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/42b9ec03-8a07-41ff-bcea-adb703ebf407-tmp-dir\") pod \"node-resolver-qgvbw\" (UID: \"42b9ec03-8a07-41ff-bcea-adb703ebf407\") " pod="openshift-dns/node-resolver-qgvbw"
Apr 24 19:06:31.073711 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072205 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/5fbff284-fcfa-4f9b-9c3a-12eb318d19f8-sys-fs\") pod \"aws-ebs-csi-driver-node-999gb\" (UID: \"5fbff284-fcfa-4f9b-9c3a-12eb318d19f8\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-999gb"
Apr 24 19:06:31.073711 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072200 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5f9742f1-146e-47e5-a302-4d5570284239-serviceca\") pod \"node-ca-5sdvh\" (UID: \"5f9742f1-146e-47e5-a302-4d5570284239\") " pod="openshift-image-registry/node-ca-5sdvh"
Apr 24 19:06:31.074282 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072254 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/42b9ec03-8a07-41ff-bcea-adb703ebf407-hosts-file\") pod \"node-resolver-qgvbw\" (UID: \"42b9ec03-8a07-41ff-bcea-adb703ebf407\") " pod="openshift-dns/node-resolver-qgvbw"
Apr 24 19:06:31.074282 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072242 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-host-var-lib-cni-multus\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl"
Apr 24 19:06:31.074282 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072274 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9vdq2\" (UniqueName: \"kubernetes.io/projected/4a2cb1aa-59c8-4abb-9a8a-daf68903a358-kube-api-access-9vdq2\") pod \"network-metrics-daemon-l2vd2\" (UID: \"4a2cb1aa-59c8-4abb-9a8a-daf68903a358\") " pod="openshift-multus/network-metrics-daemon-l2vd2"
Apr 24 19:06:31.074282 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072313 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-etc-openvswitch\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs"
Apr 24 19:06:31.074282 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072339 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-node-log\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs"
Apr 24 19:06:31.074282 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072436 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ztfl6\" (UniqueName: \"kubernetes.io/projected/42b9ec03-8a07-41ff-bcea-adb703ebf407-kube-api-access-ztfl6\") pod \"node-resolver-qgvbw\" (UID: \"42b9ec03-8a07-41ff-bcea-adb703ebf407\") " pod="openshift-dns/node-resolver-qgvbw"
Apr 24 19:06:31.074282 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072460 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-run-systemd\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs"
Apr 24 19:06:31.074282 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072392 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-node-log\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs"
Apr 24 19:06:31.074282 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072396 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-etc-openvswitch\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs"
Apr 24 19:06:31.074282 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072519 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/eb058d66-c070-4f69-80a8-1e14b2f69f5d-cni-binary-copy\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl"
Apr 24 19:06:31.074282 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072533 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-run-systemd\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs"
Apr 24 19:06:31.074282 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072545 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-hostroot\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl"
Apr 24 19:06:31.074282 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072578 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/eb058d66-c070-4f69-80a8-1e14b2f69f5d-multus-daemon-config\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl"
Apr 24 19:06:31.074282 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072606 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/a0fb5806-85c6-498a-9018-3d8218e45d60-iptables-alerter-script\") pod \"iptables-alerter-9cd8l\" (UID: \"a0fb5806-85c6-498a-9018-3d8218e45d60\") " pod="openshift-network-operator/iptables-alerter-9cd8l"
Apr 24 19:06:31.074282 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072613 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5f9742f1-146e-47e5-a302-4d5570284239-serviceca\") pod \"node-ca-5sdvh\" (UID: \"5f9742f1-146e-47e5-a302-4d5570284239\") " pod="openshift-image-registry/node-ca-5sdvh"
Apr 24 19:06:31.074282 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072634 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-etc-tuned\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8"
Apr 24 19:06:31.074282 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072652 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0f254dda-8cf9-4910-99c2-638993f242ac-ovnkube-script-lib\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs"
Apr 24 19:06:31.074282 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072680 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-hostroot\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl"
Apr 24 19:06:31.074747 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072710 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5fbff284-fcfa-4f9b-9c3a-12eb318d19f8-registration-dir\") pod \"aws-ebs-csi-driver-node-999gb\" (UID: \"5fbff284-fcfa-4f9b-9c3a-12eb318d19f8\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-999gb"
Apr 24 19:06:31.074747 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072735 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-var-lib-openvswitch\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs"
Apr 24 19:06:31.074747 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072760 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0f254dda-8cf9-4910-99c2-638993f242ac-ovnkube-config\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs"
Apr 24 19:06:31.074747 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072771 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0f254dda-8cf9-4910-99c2-638993f242ac-var-lib-openvswitch\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs"
Apr 24 19:06:31.074747 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072774 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5fbff284-fcfa-4f9b-9c3a-12eb318d19f8-registration-dir\") pod \"aws-ebs-csi-driver-node-999gb\" (UID: \"5fbff284-fcfa-4f9b-9c3a-12eb318d19f8\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-999gb"
Apr 24 19:06:31.074747 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072785 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0f254dda-8cf9-4910-99c2-638993f242ac-env-overrides\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs"
Apr 24 19:06:31.074747 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072810 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-cnibin\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl"
Apr 24 19:06:31.074747 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072833 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f9742f1-146e-47e5-a302-4d5570284239-host\") pod \"node-ca-5sdvh\" (UID: \"5f9742f1-146e-47e5-a302-4d5570284239\") " pod="openshift-image-registry/node-ca-5sdvh"
Apr 24 19:06:31.074747 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072864 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-etc-sysconfig\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8"
Apr 24 19:06:31.074747 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072888 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-lib-modules\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8"
Apr 24 19:06:31.074747 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072918 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/eb058d66-c070-4f69-80a8-1e14b2f69f5d-cnibin\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl"
Apr 24 19:06:31.074747 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.072977 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f9742f1-146e-47e5-a302-4d5570284239-host\") pod \"node-ca-5sdvh\" (UID: \"5f9742f1-146e-47e5-a302-4d5570284239\") " pod="openshift-image-registry/node-ca-5sdvh"
Apr 24 19:06:31.074747 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.073029 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/eb058d66-c070-4f69-80a8-1e14b2f69f5d-cni-binary-copy\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl"
Apr 24 19:06:31.074747 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.073138 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/eb058d66-c070-4f69-80a8-1e14b2f69f5d-multus-daemon-config\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl"
Apr 24 19:06:31.074747 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.073197 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0f254dda-8cf9-4910-99c2-638993f242ac-ovnkube-config\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs"
Apr 24 19:06:31.074747 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.073267 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0f254dda-8cf9-4910-99c2-638993f242ac-env-overrides\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs"
Apr 24 19:06:31.074747 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.074289 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/5767c74f-a79f-44ed-966e-2106086c99dc-agent-certs\") pod \"konnectivity-agent-ztvgx\" (UID: \"5767c74f-a79f-44ed-966e-2106086c99dc\") " pod="kube-system/konnectivity-agent-ztvgx"
Apr 24 19:06:31.074747 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.074602 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0f254dda-8cf9-4910-99c2-638993f242ac-ovn-node-metrics-cert\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs"
Apr 24 19:06:31.081763 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:31.081739 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 24 19:06:31.081763 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:31.081759 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 24 19:06:31.081763 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:31.081769 2571 projected.go:194] Error preparing data for projected volume kube-api-access-tcjp5 for pod openshift-network-diagnostics/network-check-target-f4z44: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 24 19:06:31.082024 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:31.081829 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4c1189cb-51d5-415c-8dc4-6c1970496280-kube-api-access-tcjp5 podName:4c1189cb-51d5-415c-8dc4-6c1970496280 nodeName:}" failed. No retries permitted until 2026-04-24 19:06:31.581799038 +0000 UTC m=+2.071223317 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tcjp5" (UniqueName: "kubernetes.io/projected/4c1189cb-51d5-415c-8dc4-6c1970496280-kube-api-access-tcjp5") pod "network-check-target-f4z44" (UID: "4c1189cb-51d5-415c-8dc4-6c1970496280") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 24 19:06:31.083159 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.083137 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztfl6\" (UniqueName: \"kubernetes.io/projected/42b9ec03-8a07-41ff-bcea-adb703ebf407-kube-api-access-ztfl6\") pod \"node-resolver-qgvbw\" (UID: \"42b9ec03-8a07-41ff-bcea-adb703ebf407\") " pod="openshift-dns/node-resolver-qgvbw"
Apr 24 19:06:31.083710 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.083691 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pncf\" (UniqueName: \"kubernetes.io/projected/5f9742f1-146e-47e5-a302-4d5570284239-kube-api-access-5pncf\") pod \"node-ca-5sdvh\" (UID: \"5f9742f1-146e-47e5-a302-4d5570284239\") " pod="openshift-image-registry/node-ca-5sdvh"
Apr 24 19:06:31.084134 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.084105 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vdq2\" (UniqueName: \"kubernetes.io/projected/4a2cb1aa-59c8-4abb-9a8a-daf68903a358-kube-api-access-9vdq2\") pod \"network-metrics-daemon-l2vd2\" (UID: \"4a2cb1aa-59c8-4abb-9a8a-daf68903a358\") " pod="openshift-multus/network-metrics-daemon-l2vd2"
Apr 24 19:06:31.084323 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.084305 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6qvd\" (UniqueName: \"kubernetes.io/projected/c48fa480-1be5-43ad-b4b7-25d95f327d97-kube-api-access-m6qvd\") pod \"multus-additional-cni-plugins-gjqqt\" (UID: \"c48fa480-1be5-43ad-b4b7-25d95f327d97\") " pod="openshift-multus/multus-additional-cni-plugins-gjqqt"
Apr 24 19:06:31.084453 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.084436 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcsh9\" (UniqueName: \"kubernetes.io/projected/5fbff284-fcfa-4f9b-9c3a-12eb318d19f8-kube-api-access-hcsh9\") pod \"aws-ebs-csi-driver-node-999gb\" (UID: \"5fbff284-fcfa-4f9b-9c3a-12eb318d19f8\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-999gb"
Apr 24 19:06:31.084713 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.084694 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkqcz\" (UniqueName: \"kubernetes.io/projected/0f254dda-8cf9-4910-99c2-638993f242ac-kube-api-access-xkqcz\") pod \"ovnkube-node-9rqzs\" (UID: \"0f254dda-8cf9-4910-99c2-638993f242ac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs"
Apr 24 19:06:31.085077 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.085059 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb96x\" (UniqueName: \"kubernetes.io/projected/eb058d66-c070-4f69-80a8-1e14b2f69f5d-kube-api-access-xb96x\") pod \"multus-8snvl\" (UID: \"eb058d66-c070-4f69-80a8-1e14b2f69f5d\") " pod="openshift-multus/multus-8snvl"
Apr 24 19:06:31.106173 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.106132 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-130-85.ec2.internal" event={"ID":"09694a31517020611c7d019fc3713cd8","Type":"ContainerStarted","Data":"55958ae5c7eb77f465a853142ec32bb49516e6bd9c0e5cd5ad3ea0d5710bf857"}
Apr 24 19:06:31.107317 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:31.107293 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6111c3de51e90f6a3a0694ec4ce1e2d.slice/crio-4c293a9c5232e8159c50b296f2e027531e585c9a264808ead76fc550781635e7 WatchSource:0}: Error finding container 4c293a9c5232e8159c50b296f2e027531e585c9a264808ead76fc550781635e7: Status 404 returned error can't find the container with id 4c293a9c5232e8159c50b296f2e027531e585c9a264808ead76fc550781635e7
Apr 24 19:06:31.174009 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.173933 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tc6q6\" (UniqueName: \"kubernetes.io/projected/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-kube-api-access-tc6q6\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8"
Apr 24 19:06:31.174009 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.173978 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/a0fb5806-85c6-498a-9018-3d8218e45d60-iptables-alerter-script\") pod \"iptables-alerter-9cd8l\" (UID: \"a0fb5806-85c6-498a-9018-3d8218e45d60\") " pod="openshift-network-operator/iptables-alerter-9cd8l"
Apr 24 19:06:31.174190 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.174026 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-etc-tuned\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8"
Apr 24 19:06:31.174190 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.174055 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-etc-sysconfig\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8"
Apr 24 19:06:31.174190 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.174102 2571 operation_generator.go:615]
"MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-etc-sysconfig\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8" Apr 24 19:06:31.174190 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.174121 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-lib-modules\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8" Apr 24 19:06:31.174190 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.174155 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-etc-sysctl-d\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8" Apr 24 19:06:31.174190 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.174180 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-run\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8" Apr 24 19:06:31.174438 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.174202 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-sys\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8" Apr 24 19:06:31.174438 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.174231 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-etc-systemd\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8" Apr 24 19:06:31.174438 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.174259 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-var-lib-kubelet\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8" Apr 24 19:06:31.174438 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.174286 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-tmp\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8" Apr 24 19:06:31.174438 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.174297 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-run\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8" Apr 24 19:06:31.174438 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.174313 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a0fb5806-85c6-498a-9018-3d8218e45d60-host-slash\") pod \"iptables-alerter-9cd8l\" (UID: \"a0fb5806-85c6-498a-9018-3d8218e45d60\") " pod="openshift-network-operator/iptables-alerter-9cd8l" Apr 24 19:06:31.174438 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.174328 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: 
\"kubernetes.io/host-path/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-etc-sysctl-d\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8" Apr 24 19:06:31.174438 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.174339 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-var-lib-kubelet\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8" Apr 24 19:06:31.174438 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.174332 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-etc-systemd\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8" Apr 24 19:06:31.174438 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.174358 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a0fb5806-85c6-498a-9018-3d8218e45d60-host-slash\") pod \"iptables-alerter-9cd8l\" (UID: \"a0fb5806-85c6-498a-9018-3d8218e45d60\") " pod="openshift-network-operator/iptables-alerter-9cd8l" Apr 24 19:06:31.174438 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.174359 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rnsqw\" (UniqueName: \"kubernetes.io/projected/a0fb5806-85c6-498a-9018-3d8218e45d60-kube-api-access-rnsqw\") pod \"iptables-alerter-9cd8l\" (UID: \"a0fb5806-85c6-498a-9018-3d8218e45d60\") " pod="openshift-network-operator/iptables-alerter-9cd8l" Apr 24 19:06:31.174438 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.174301 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-lib-modules\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8" Apr 24 19:06:31.174438 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.174372 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-sys\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8" Apr 24 19:06:31.174438 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.174410 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-etc-sysctl-conf\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8" Apr 24 19:06:31.174438 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.174437 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-host\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8" Apr 24 19:06:31.175028 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.174472 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-etc-kubernetes\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8" Apr 24 19:06:31.175028 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.174498 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: 
\"kubernetes.io/configmap/a0fb5806-85c6-498a-9018-3d8218e45d60-iptables-alerter-script\") pod \"iptables-alerter-9cd8l\" (UID: \"a0fb5806-85c6-498a-9018-3d8218e45d60\") " pod="openshift-network-operator/iptables-alerter-9cd8l" Apr 24 19:06:31.175028 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.174509 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-etc-modprobe-d\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8" Apr 24 19:06:31.175028 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.174557 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-host\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8" Apr 24 19:06:31.175028 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.174593 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-etc-kubernetes\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8" Apr 24 19:06:31.175028 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.174598 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-etc-sysctl-conf\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8" Apr 24 19:06:31.175028 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.174652 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: 
\"kubernetes.io/host-path/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-etc-modprobe-d\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8" Apr 24 19:06:31.176095 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.176074 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-etc-tuned\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8" Apr 24 19:06:31.176212 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.176197 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-tmp\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8" Apr 24 19:06:31.181587 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.181567 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnsqw\" (UniqueName: \"kubernetes.io/projected/a0fb5806-85c6-498a-9018-3d8218e45d60-kube-api-access-rnsqw\") pod \"iptables-alerter-9cd8l\" (UID: \"a0fb5806-85c6-498a-9018-3d8218e45d60\") " pod="openshift-network-operator/iptables-alerter-9cd8l" Apr 24 19:06:31.181669 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.181596 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc6q6\" (UniqueName: \"kubernetes.io/projected/7141b48e-7f6d-4770-ab5e-9f920fa03a0d-kube-api-access-tc6q6\") pod \"tuned-2whc8\" (UID: \"7141b48e-7f6d-4770-ab5e-9f920fa03a0d\") " pod="openshift-cluster-node-tuning-operator/tuned-2whc8" Apr 24 19:06:31.286633 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.286607 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-5sdvh" Apr 24 19:06:31.292798 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:31.292773 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f9742f1_146e_47e5_a302_4d5570284239.slice/crio-b42ecf1ee514cd9aa7b315181c42a606cd988e6262388d63704978c05e00f20c WatchSource:0}: Error finding container b42ecf1ee514cd9aa7b315181c42a606cd988e6262388d63704978c05e00f20c: Status 404 returned error can't find the container with id b42ecf1ee514cd9aa7b315181c42a606cd988e6262388d63704978c05e00f20c Apr 24 19:06:31.293762 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.293739 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gjqqt" Apr 24 19:06:31.300203 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:31.300179 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc48fa480_1be5_43ad_b4b7_25d95f327d97.slice/crio-2d2dec360097f612557ffa41c00df0fd0d5b8ea927733eed19a1af3c384f9c65 WatchSource:0}: Error finding container 2d2dec360097f612557ffa41c00df0fd0d5b8ea927733eed19a1af3c384f9c65: Status 404 returned error can't find the container with id 2d2dec360097f612557ffa41c00df0fd0d5b8ea927733eed19a1af3c384f9c65 Apr 24 19:06:31.300834 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.300817 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:31.306080 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.306061 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/konnectivity-agent-ztvgx" Apr 24 19:06:31.307322 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:31.307291 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f254dda_8cf9_4910_99c2_638993f242ac.slice/crio-008f43361c1c0b094e727b8b4f86df158e33f608323db214f3aa9e53b1c1736b WatchSource:0}: Error finding container 008f43361c1c0b094e727b8b4f86df158e33f608323db214f3aa9e53b1c1736b: Status 404 returned error can't find the container with id 008f43361c1c0b094e727b8b4f86df158e33f608323db214f3aa9e53b1c1736b Apr 24 19:06:31.310732 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.310714 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-999gb" Apr 24 19:06:31.315199 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:31.315180 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5767c74f_a79f_44ed_966e_2106086c99dc.slice/crio-b91b83b3e59943006495fe8847335ba1bb083af8ff4396001ea13e4f2440570b WatchSource:0}: Error finding container b91b83b3e59943006495fe8847335ba1bb083af8ff4396001ea13e4f2440570b: Status 404 returned error can't find the container with id b91b83b3e59943006495fe8847335ba1bb083af8ff4396001ea13e4f2440570b Apr 24 19:06:31.315928 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.315917 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-qgvbw" Apr 24 19:06:31.320941 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.320881 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-8snvl" Apr 24 19:06:31.323052 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:31.323030 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42b9ec03_8a07_41ff_bcea_adb703ebf407.slice/crio-65ec907f4073f495616a9902434a38630ff56f4ac4709f82619d84a189fa83b7 WatchSource:0}: Error finding container 65ec907f4073f495616a9902434a38630ff56f4ac4709f82619d84a189fa83b7: Status 404 returned error can't find the container with id 65ec907f4073f495616a9902434a38630ff56f4ac4709f82619d84a189fa83b7 Apr 24 19:06:31.327153 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.327136 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-9cd8l" Apr 24 19:06:31.327648 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:31.327632 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb058d66_c070_4f69_80a8_1e14b2f69f5d.slice/crio-dada29439f94ad2edc854b9a2af92dd7ed843126eec045951b9da5bd09c280d0 WatchSource:0}: Error finding container dada29439f94ad2edc854b9a2af92dd7ed843126eec045951b9da5bd09c280d0: Status 404 returned error can't find the container with id dada29439f94ad2edc854b9a2af92dd7ed843126eec045951b9da5bd09c280d0 Apr 24 19:06:31.332437 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.332414 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-2whc8" Apr 24 19:06:31.333033 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:31.332855 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0fb5806_85c6_498a_9018_3d8218e45d60.slice/crio-08fb6c0281a22f525ce7ef1396595fdcd0aae78ccbdc5b59ab85a3b277db0549 WatchSource:0}: Error finding container 08fb6c0281a22f525ce7ef1396595fdcd0aae78ccbdc5b59ab85a3b277db0549: Status 404 returned error can't find the container with id 08fb6c0281a22f525ce7ef1396595fdcd0aae78ccbdc5b59ab85a3b277db0549 Apr 24 19:06:31.338145 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:06:31.338125 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7141b48e_7f6d_4770_ab5e_9f920fa03a0d.slice/crio-8f535e192f94a2d48b71dc4b4e73d616cbd055175ed13cfebbfefe568cbdbeaf WatchSource:0}: Error finding container 8f535e192f94a2d48b71dc4b4e73d616cbd055175ed13cfebbfefe568cbdbeaf: Status 404 returned error can't find the container with id 8f535e192f94a2d48b71dc4b4e73d616cbd055175ed13cfebbfefe568cbdbeaf Apr 24 19:06:31.577298 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.577189 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a2cb1aa-59c8-4abb-9a8a-daf68903a358-metrics-certs\") pod \"network-metrics-daemon-l2vd2\" (UID: \"4a2cb1aa-59c8-4abb-9a8a-daf68903a358\") " pod="openshift-multus/network-metrics-daemon-l2vd2" Apr 24 19:06:31.577450 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:31.577344 2571 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 24 19:06:31.577450 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:31.577420 2571 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/4a2cb1aa-59c8-4abb-9a8a-daf68903a358-metrics-certs podName:4a2cb1aa-59c8-4abb-9a8a-daf68903a358 nodeName:}" failed. No retries permitted until 2026-04-24 19:06:32.577399448 +0000 UTC m=+3.066823743 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a2cb1aa-59c8-4abb-9a8a-daf68903a358-metrics-certs") pod "network-metrics-daemon-l2vd2" (UID: "4a2cb1aa-59c8-4abb-9a8a-daf68903a358") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 24 19:06:31.678468 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.677868 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tcjp5\" (UniqueName: \"kubernetes.io/projected/4c1189cb-51d5-415c-8dc4-6c1970496280-kube-api-access-tcjp5\") pod \"network-check-target-f4z44\" (UID: \"4c1189cb-51d5-415c-8dc4-6c1970496280\") " pod="openshift-network-diagnostics/network-check-target-f4z44" Apr 24 19:06:31.678468 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:31.678051 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 24 19:06:31.678468 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:31.678071 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 24 19:06:31.678468 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:31.678084 2571 projected.go:194] Error preparing data for projected volume kube-api-access-tcjp5 for pod openshift-network-diagnostics/network-check-target-f4z44: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 24 19:06:31.678468 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:31.678138 2571 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4c1189cb-51d5-415c-8dc4-6c1970496280-kube-api-access-tcjp5 podName:4c1189cb-51d5-415c-8dc4-6c1970496280 nodeName:}" failed. No retries permitted until 2026-04-24 19:06:32.678119254 +0000 UTC m=+3.167543537 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-tcjp5" (UniqueName: "kubernetes.io/projected/4c1189cb-51d5-415c-8dc4-6c1970496280-kube-api-access-tcjp5") pod "network-check-target-f4z44" (UID: "4c1189cb-51d5-415c-8dc4-6c1970496280") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 24 19:06:31.922444 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:31.922369 2571 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Apr 24 19:06:32.025679 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:32.025587 2571 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-23 19:01:31 +0000 UTC" deadline="2028-01-28 04:35:28.059115988 +0000 UTC" Apr 24 19:06:32.025679 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:32.025622 2571 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="15441h28m56.033497858s" Apr 24 19:06:32.103250 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:32.103216 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f4z44" Apr 24 19:06:32.103409 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:32.103350 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-f4z44" podUID="4c1189cb-51d5-415c-8dc4-6c1970496280" Apr 24 19:06:32.131266 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:32.131225 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-999gb" event={"ID":"5fbff284-fcfa-4f9b-9c3a-12eb318d19f8","Type":"ContainerStarted","Data":"6c4603207335a5a01af05f2763ab3bf23d87806c4dcb59d20b44147c4453dd87"} Apr 24 19:06:32.139276 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:32.139205 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" event={"ID":"0f254dda-8cf9-4910-99c2-638993f242ac","Type":"ContainerStarted","Data":"008f43361c1c0b094e727b8b4f86df158e33f608323db214f3aa9e53b1c1736b"} Apr 24 19:06:32.158975 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:32.158889 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-130-85.ec2.internal" event={"ID":"e6111c3de51e90f6a3a0694ec4ce1e2d","Type":"ContainerStarted","Data":"4c293a9c5232e8159c50b296f2e027531e585c9a264808ead76fc550781635e7"} Apr 24 19:06:32.167392 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:32.167358 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-2whc8" event={"ID":"7141b48e-7f6d-4770-ab5e-9f920fa03a0d","Type":"ContainerStarted","Data":"8f535e192f94a2d48b71dc4b4e73d616cbd055175ed13cfebbfefe568cbdbeaf"} Apr 24 19:06:32.172607 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:32.172532 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-qgvbw" event={"ID":"42b9ec03-8a07-41ff-bcea-adb703ebf407","Type":"ContainerStarted","Data":"65ec907f4073f495616a9902434a38630ff56f4ac4709f82619d84a189fa83b7"} Apr 24 19:06:32.184562 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:32.184525 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="kube-system/konnectivity-agent-ztvgx" event={"ID":"5767c74f-a79f-44ed-966e-2106086c99dc","Type":"ContainerStarted","Data":"b91b83b3e59943006495fe8847335ba1bb083af8ff4396001ea13e4f2440570b"}
Apr 24 19:06:32.197190 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:32.197154 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gjqqt" event={"ID":"c48fa480-1be5-43ad-b4b7-25d95f327d97","Type":"ContainerStarted","Data":"2d2dec360097f612557ffa41c00df0fd0d5b8ea927733eed19a1af3c384f9c65"}
Apr 24 19:06:32.216705 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:32.216665 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-5sdvh" event={"ID":"5f9742f1-146e-47e5-a302-4d5570284239","Type":"ContainerStarted","Data":"b42ecf1ee514cd9aa7b315181c42a606cd988e6262388d63704978c05e00f20c"}
Apr 24 19:06:32.225923 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:32.223635 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-9cd8l" event={"ID":"a0fb5806-85c6-498a-9018-3d8218e45d60","Type":"ContainerStarted","Data":"08fb6c0281a22f525ce7ef1396595fdcd0aae78ccbdc5b59ab85a3b277db0549"}
Apr 24 19:06:32.239610 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:32.239572 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8snvl" event={"ID":"eb058d66-c070-4f69-80a8-1e14b2f69f5d","Type":"ContainerStarted","Data":"dada29439f94ad2edc854b9a2af92dd7ed843126eec045951b9da5bd09c280d0"}
Apr 24 19:06:32.583115 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:32.582472 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a2cb1aa-59c8-4abb-9a8a-daf68903a358-metrics-certs\") pod \"network-metrics-daemon-l2vd2\" (UID: \"4a2cb1aa-59c8-4abb-9a8a-daf68903a358\") " pod="openshift-multus/network-metrics-daemon-l2vd2"
Apr 24 19:06:32.583115 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:32.582648 2571 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 24 19:06:32.583115 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:32.582713 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a2cb1aa-59c8-4abb-9a8a-daf68903a358-metrics-certs podName:4a2cb1aa-59c8-4abb-9a8a-daf68903a358 nodeName:}" failed. No retries permitted until 2026-04-24 19:06:34.582691811 +0000 UTC m=+5.072116089 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a2cb1aa-59c8-4abb-9a8a-daf68903a358-metrics-certs") pod "network-metrics-daemon-l2vd2" (UID: "4a2cb1aa-59c8-4abb-9a8a-daf68903a358") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 24 19:06:32.684041 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:32.683409 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tcjp5\" (UniqueName: \"kubernetes.io/projected/4c1189cb-51d5-415c-8dc4-6c1970496280-kube-api-access-tcjp5\") pod \"network-check-target-f4z44\" (UID: \"4c1189cb-51d5-415c-8dc4-6c1970496280\") " pod="openshift-network-diagnostics/network-check-target-f4z44"
Apr 24 19:06:32.684041 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:32.683582 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 24 19:06:32.684041 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:32.683600 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 24 19:06:32.684041 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:32.683614 2571 projected.go:194] Error preparing data for projected volume kube-api-access-tcjp5 for pod openshift-network-diagnostics/network-check-target-f4z44: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 24 19:06:32.684041 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:32.683669 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4c1189cb-51d5-415c-8dc4-6c1970496280-kube-api-access-tcjp5 podName:4c1189cb-51d5-415c-8dc4-6c1970496280 nodeName:}" failed. No retries permitted until 2026-04-24 19:06:34.683652222 +0000 UTC m=+5.173076505 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tcjp5" (UniqueName: "kubernetes.io/projected/4c1189cb-51d5-415c-8dc4-6c1970496280-kube-api-access-tcjp5") pod "network-check-target-f4z44" (UID: "4c1189cb-51d5-415c-8dc4-6c1970496280") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 24 19:06:33.026741 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:33.026653 2571 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-23 19:01:31 +0000 UTC" deadline="2027-12-14 08:03:44.580630701 +0000 UTC"
Apr 24 19:06:33.026741 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:33.026694 2571 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="14364h57m11.553940707s"
Apr 24 19:06:33.104043 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:33.103423 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l2vd2"
Apr 24 19:06:33.104043 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:33.103561 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l2vd2" podUID="4a2cb1aa-59c8-4abb-9a8a-daf68903a358"
Apr 24 19:06:34.104939 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:34.104894 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f4z44"
Apr 24 19:06:34.105397 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:34.105027 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-f4z44" podUID="4c1189cb-51d5-415c-8dc4-6c1970496280"
Apr 24 19:06:34.601634 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:34.600999 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a2cb1aa-59c8-4abb-9a8a-daf68903a358-metrics-certs\") pod \"network-metrics-daemon-l2vd2\" (UID: \"4a2cb1aa-59c8-4abb-9a8a-daf68903a358\") " pod="openshift-multus/network-metrics-daemon-l2vd2"
Apr 24 19:06:34.601634 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:34.601157 2571 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 24 19:06:34.601634 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:34.601238 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a2cb1aa-59c8-4abb-9a8a-daf68903a358-metrics-certs podName:4a2cb1aa-59c8-4abb-9a8a-daf68903a358 nodeName:}" failed. No retries permitted until 2026-04-24 19:06:38.60121649 +0000 UTC m=+9.090640767 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a2cb1aa-59c8-4abb-9a8a-daf68903a358-metrics-certs") pod "network-metrics-daemon-l2vd2" (UID: "4a2cb1aa-59c8-4abb-9a8a-daf68903a358") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 24 19:06:34.702477 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:34.702388 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tcjp5\" (UniqueName: \"kubernetes.io/projected/4c1189cb-51d5-415c-8dc4-6c1970496280-kube-api-access-tcjp5\") pod \"network-check-target-f4z44\" (UID: \"4c1189cb-51d5-415c-8dc4-6c1970496280\") " pod="openshift-network-diagnostics/network-check-target-f4z44"
Apr 24 19:06:34.702631 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:34.702599 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 24 19:06:34.702631 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:34.702625 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 24 19:06:34.702748 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:34.702641 2571 projected.go:194] Error preparing data for projected volume kube-api-access-tcjp5 for pod openshift-network-diagnostics/network-check-target-f4z44: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 24 19:06:34.702748 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:34.702712 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4c1189cb-51d5-415c-8dc4-6c1970496280-kube-api-access-tcjp5 podName:4c1189cb-51d5-415c-8dc4-6c1970496280 nodeName:}" failed. No retries permitted until 2026-04-24 19:06:38.702694029 +0000 UTC m=+9.192118309 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tcjp5" (UniqueName: "kubernetes.io/projected/4c1189cb-51d5-415c-8dc4-6c1970496280-kube-api-access-tcjp5") pod "network-check-target-f4z44" (UID: "4c1189cb-51d5-415c-8dc4-6c1970496280") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 24 19:06:35.104126 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:35.103527 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l2vd2"
Apr 24 19:06:35.104126 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:35.103706 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l2vd2" podUID="4a2cb1aa-59c8-4abb-9a8a-daf68903a358"
Apr 24 19:06:36.103954 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:36.103786 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f4z44"
Apr 24 19:06:36.104489 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:36.104448 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-f4z44" podUID="4c1189cb-51d5-415c-8dc4-6c1970496280"
Apr 24 19:06:37.103985 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:37.103473 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l2vd2"
Apr 24 19:06:37.103985 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:37.103617 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l2vd2" podUID="4a2cb1aa-59c8-4abb-9a8a-daf68903a358"
Apr 24 19:06:38.103480 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:38.103439 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f4z44"
Apr 24 19:06:38.103669 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:38.103591 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-f4z44" podUID="4c1189cb-51d5-415c-8dc4-6c1970496280"
Apr 24 19:06:38.632751 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:38.632706 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a2cb1aa-59c8-4abb-9a8a-daf68903a358-metrics-certs\") pod \"network-metrics-daemon-l2vd2\" (UID: \"4a2cb1aa-59c8-4abb-9a8a-daf68903a358\") " pod="openshift-multus/network-metrics-daemon-l2vd2"
Apr 24 19:06:38.633244 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:38.632849 2571 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 24 19:06:38.633244 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:38.632932 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a2cb1aa-59c8-4abb-9a8a-daf68903a358-metrics-certs podName:4a2cb1aa-59c8-4abb-9a8a-daf68903a358 nodeName:}" failed. No retries permitted until 2026-04-24 19:06:46.632898896 +0000 UTC m=+17.122323184 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a2cb1aa-59c8-4abb-9a8a-daf68903a358-metrics-certs") pod "network-metrics-daemon-l2vd2" (UID: "4a2cb1aa-59c8-4abb-9a8a-daf68903a358") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 24 19:06:38.734145 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:38.734107 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tcjp5\" (UniqueName: \"kubernetes.io/projected/4c1189cb-51d5-415c-8dc4-6c1970496280-kube-api-access-tcjp5\") pod \"network-check-target-f4z44\" (UID: \"4c1189cb-51d5-415c-8dc4-6c1970496280\") " pod="openshift-network-diagnostics/network-check-target-f4z44"
Apr 24 19:06:38.734325 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:38.734296 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 24 19:06:38.734390 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:38.734327 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 24 19:06:38.734390 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:38.734342 2571 projected.go:194] Error preparing data for projected volume kube-api-access-tcjp5 for pod openshift-network-diagnostics/network-check-target-f4z44: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 24 19:06:38.734497 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:38.734404 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4c1189cb-51d5-415c-8dc4-6c1970496280-kube-api-access-tcjp5 podName:4c1189cb-51d5-415c-8dc4-6c1970496280 nodeName:}" failed. No retries permitted until 2026-04-24 19:06:46.734385595 +0000 UTC m=+17.223809895 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-tcjp5" (UniqueName: "kubernetes.io/projected/4c1189cb-51d5-415c-8dc4-6c1970496280-kube-api-access-tcjp5") pod "network-check-target-f4z44" (UID: "4c1189cb-51d5-415c-8dc4-6c1970496280") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 24 19:06:39.103772 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:39.103692 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l2vd2"
Apr 24 19:06:39.103953 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:39.103831 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l2vd2" podUID="4a2cb1aa-59c8-4abb-9a8a-daf68903a358"
Apr 24 19:06:40.104407 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:40.104375 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f4z44"
Apr 24 19:06:40.104811 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:40.104505 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-f4z44" podUID="4c1189cb-51d5-415c-8dc4-6c1970496280"
Apr 24 19:06:41.103134 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:41.103099 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l2vd2"
Apr 24 19:06:41.103303 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:41.103239 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l2vd2" podUID="4a2cb1aa-59c8-4abb-9a8a-daf68903a358"
Apr 24 19:06:42.102870 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:42.102840 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f4z44"
Apr 24 19:06:42.103309 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:42.102967 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-f4z44" podUID="4c1189cb-51d5-415c-8dc4-6c1970496280"
Apr 24 19:06:43.103384 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:43.103344 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l2vd2"
Apr 24 19:06:43.103791 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:43.103466 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l2vd2" podUID="4a2cb1aa-59c8-4abb-9a8a-daf68903a358"
Apr 24 19:06:44.103850 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:44.103654 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f4z44"
Apr 24 19:06:44.104237 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:44.103945 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-f4z44" podUID="4c1189cb-51d5-415c-8dc4-6c1970496280"
Apr 24 19:06:45.103531 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:45.103486 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l2vd2"
Apr 24 19:06:45.103707 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:45.103624 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l2vd2" podUID="4a2cb1aa-59c8-4abb-9a8a-daf68903a358"
Apr 24 19:06:46.103276 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:46.103239 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f4z44"
Apr 24 19:06:46.103720 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:46.103370 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-f4z44" podUID="4c1189cb-51d5-415c-8dc4-6c1970496280"
Apr 24 19:06:46.691092 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:46.691041 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a2cb1aa-59c8-4abb-9a8a-daf68903a358-metrics-certs\") pod \"network-metrics-daemon-l2vd2\" (UID: \"4a2cb1aa-59c8-4abb-9a8a-daf68903a358\") " pod="openshift-multus/network-metrics-daemon-l2vd2"
Apr 24 19:06:46.691304 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:46.691211 2571 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 24 19:06:46.691304 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:46.691284 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a2cb1aa-59c8-4abb-9a8a-daf68903a358-metrics-certs podName:4a2cb1aa-59c8-4abb-9a8a-daf68903a358 nodeName:}" failed. No retries permitted until 2026-04-24 19:07:02.691263316 +0000 UTC m=+33.180687601 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a2cb1aa-59c8-4abb-9a8a-daf68903a358-metrics-certs") pod "network-metrics-daemon-l2vd2" (UID: "4a2cb1aa-59c8-4abb-9a8a-daf68903a358") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 24 19:06:46.791515 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:46.791486 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tcjp5\" (UniqueName: \"kubernetes.io/projected/4c1189cb-51d5-415c-8dc4-6c1970496280-kube-api-access-tcjp5\") pod \"network-check-target-f4z44\" (UID: \"4c1189cb-51d5-415c-8dc4-6c1970496280\") " pod="openshift-network-diagnostics/network-check-target-f4z44"
Apr 24 19:06:46.791671 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:46.791617 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 24 19:06:46.791671 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:46.791631 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 24 19:06:46.791671 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:46.791641 2571 projected.go:194] Error preparing data for projected volume kube-api-access-tcjp5 for pod openshift-network-diagnostics/network-check-target-f4z44: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 24 19:06:46.791780 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:46.791701 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4c1189cb-51d5-415c-8dc4-6c1970496280-kube-api-access-tcjp5 podName:4c1189cb-51d5-415c-8dc4-6c1970496280 nodeName:}" failed. No retries permitted until 2026-04-24 19:07:02.791684861 +0000 UTC m=+33.281109162 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-tcjp5" (UniqueName: "kubernetes.io/projected/4c1189cb-51d5-415c-8dc4-6c1970496280-kube-api-access-tcjp5") pod "network-check-target-f4z44" (UID: "4c1189cb-51d5-415c-8dc4-6c1970496280") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 24 19:06:47.102898 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:47.102824 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l2vd2"
Apr 24 19:06:47.103067 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:47.102966 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l2vd2" podUID="4a2cb1aa-59c8-4abb-9a8a-daf68903a358"
Apr 24 19:06:48.103363 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:48.103330 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f4z44"
Apr 24 19:06:48.103782 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:48.103452 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-f4z44" podUID="4c1189cb-51d5-415c-8dc4-6c1970496280"
Apr 24 19:06:49.103665 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:49.103635 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l2vd2"
Apr 24 19:06:49.104105 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:49.103762 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l2vd2" podUID="4a2cb1aa-59c8-4abb-9a8a-daf68903a358"
Apr 24 19:06:50.103473 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:50.103362 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f4z44"
Apr 24 19:06:50.103576 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:50.103558 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-f4z44" podUID="4c1189cb-51d5-415c-8dc4-6c1970496280"
Apr 24 19:06:50.276808 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:50.276671 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-130-85.ec2.internal" event={"ID":"e6111c3de51e90f6a3a0694ec4ce1e2d","Type":"ContainerStarted","Data":"8595eb1b5cfc56b20de5b055e6c1763553ea6ca11f0293ef7ce8a589cf80b3c7"}
Apr 24 19:06:50.278605 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:50.278576 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-2whc8" event={"ID":"7141b48e-7f6d-4770-ab5e-9f920fa03a0d","Type":"ContainerStarted","Data":"da7b34d56f0ca0fecbd759e907cd69a47e843b347d3e7bd12bab38d5113864dc"}
Apr 24 19:06:50.280463 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:50.280431 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8snvl" event={"ID":"eb058d66-c070-4f69-80a8-1e14b2f69f5d","Type":"ContainerStarted","Data":"54826fe0487e2a858caced89c745ab104dfb45a0761219e577df94112ae3b9e9"}
Apr 24 19:06:50.283803 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:50.283785 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9rqzs_0f254dda-8cf9-4910-99c2-638993f242ac/ovn-acl-logging/0.log"
Apr 24 19:06:50.284693 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:50.284351 2571 generic.go:358] "Generic (PLEG): container finished" podID="0f254dda-8cf9-4910-99c2-638993f242ac" containerID="f7d2af9c9b73110728dc9cb56df6471721c4e7ecfde5e943cdd4b1e61cb466e6" exitCode=1
Apr 24 19:06:50.284693 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:50.284389 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" event={"ID":"0f254dda-8cf9-4910-99c2-638993f242ac","Type":"ContainerStarted","Data":"6aea2c58f4718985e87b88b1662c1386a80761dcf1aebbde4df514a2ea334c43"}
Apr 24 19:06:50.284693 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:50.284408 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" event={"ID":"0f254dda-8cf9-4910-99c2-638993f242ac","Type":"ContainerStarted","Data":"7f69e7f749d3dbc8c58e59c011408418d7a723ec0bb19be5185192f090eb1b70"}
Apr 24 19:06:50.284693 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:50.284424 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" event={"ID":"0f254dda-8cf9-4910-99c2-638993f242ac","Type":"ContainerStarted","Data":"4b5749fcc3cd1668938df86b0d2c933b56d36b71e021e994e34ecc9ced9d7cc7"}
Apr 24 19:06:50.284693 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:50.284439 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" event={"ID":"0f254dda-8cf9-4910-99c2-638993f242ac","Type":"ContainerDied","Data":"f7d2af9c9b73110728dc9cb56df6471721c4e7ecfde5e943cdd4b1e61cb466e6"}
Apr 24 19:06:50.284693 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:50.284455 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" event={"ID":"0f254dda-8cf9-4910-99c2-638993f242ac","Type":"ContainerStarted","Data":"2603b66efc4408b77b786889da6c34a7307dd9f8074803d785b19075efe52165"}
Apr 24 19:06:50.310194 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:50.310132 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-proxy-ip-10-0-130-85.ec2.internal" podStartSLOduration=20.310112092 podStartE2EDuration="20.310112092s" podCreationTimestamp="2026-04-24 19:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 19:06:50.291796956 +0000 UTC m=+20.781221257" watchObservedRunningTime="2026-04-24 19:06:50.310112092 +0000 UTC m=+20.799536391"
Apr 24 19:06:50.334200 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:50.333533 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-8snvl" podStartSLOduration=1.936472931 podStartE2EDuration="20.333517088s" podCreationTimestamp="2026-04-24 19:06:30 +0000 UTC" firstStartedPulling="2026-04-24 19:06:31.330036168 +0000 UTC m=+1.819460445" lastFinishedPulling="2026-04-24 19:06:49.727080322 +0000 UTC m=+20.216504602" observedRunningTime="2026-04-24 19:06:50.331409092 +0000 UTC m=+20.820833402" watchObservedRunningTime="2026-04-24 19:06:50.333517088 +0000 UTC m=+20.822941386"
Apr 24 19:06:50.334200 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:50.333866 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-2whc8" podStartSLOduration=1.945818088 podStartE2EDuration="20.333859205s" podCreationTimestamp="2026-04-24 19:06:30 +0000 UTC" firstStartedPulling="2026-04-24 19:06:31.339381602 +0000 UTC m=+1.828805878" lastFinishedPulling="2026-04-24 19:06:49.727422705 +0000 UTC m=+20.216846995" observedRunningTime="2026-04-24 19:06:50.310603935 +0000 UTC m=+20.800028237" watchObservedRunningTime="2026-04-24 19:06:50.333859205 +0000 UTC m=+20.823283504"
Apr 24 19:06:51.103208 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:51.103176 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l2vd2"
Apr 24 19:06:51.103425 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:51.103293 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l2vd2" podUID="4a2cb1aa-59c8-4abb-9a8a-daf68903a358"
Apr 24 19:06:51.287241 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:51.287196 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-qgvbw" event={"ID":"42b9ec03-8a07-41ff-bcea-adb703ebf407","Type":"ContainerStarted","Data":"08dac198edf52a8f4cae193176703fe346e1df6563059029c1314d5114c400bc"}
Apr 24 19:06:51.288711 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:51.288671 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-ztvgx" event={"ID":"5767c74f-a79f-44ed-966e-2106086c99dc","Type":"ContainerStarted","Data":"820462cf56aff0dc4ab8c4a523af5a480068a6db85c33e421242fd29e0e3ccdb"}
Apr 24 19:06:51.290168 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:51.290107 2571 generic.go:358] "Generic (PLEG): container finished" podID="c48fa480-1be5-43ad-b4b7-25d95f327d97" containerID="57f444c785a5e4476f4d5aa4b1e4cacadd90eaf6eded1e3da700d969bda797eb" exitCode=0
Apr 24 19:06:51.290259 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:51.290176 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gjqqt" event={"ID":"c48fa480-1be5-43ad-b4b7-25d95f327d97","Type":"ContainerDied","Data":"57f444c785a5e4476f4d5aa4b1e4cacadd90eaf6eded1e3da700d969bda797eb"}
Apr 24 19:06:51.291867 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:51.291842 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-5sdvh" event={"ID":"5f9742f1-146e-47e5-a302-4d5570284239","Type":"ContainerStarted","Data":"ad23c951032cc65c2b2d95393d04690832b5ca36ebd45ab5cd740804f4da6df7"}
Apr 24 19:06:51.293431 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:51.293411 2571 generic.go:358] "Generic (PLEG): container finished" podID="09694a31517020611c7d019fc3713cd8" containerID="178ada21944e89c17c1cba6b3cc3c0221797fca03e7978d474367ae4e7d32a1e" exitCode=0
Apr 24 19:06:51.293511 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:51.293475 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-130-85.ec2.internal" event={"ID":"09694a31517020611c7d019fc3713cd8","Type":"ContainerDied","Data":"178ada21944e89c17c1cba6b3cc3c0221797fca03e7978d474367ae4e7d32a1e"}
Apr 24 19:06:51.294889 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:51.294865 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-9cd8l" event={"ID":"a0fb5806-85c6-498a-9018-3d8218e45d60","Type":"ContainerStarted","Data":"28a3fb8a5769ec8437d0d9ec2df85be3fc5ed75c4c9f256def1fdf97226619e0"}
Apr 24 19:06:51.296286 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:51.296265 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-999gb" event={"ID":"5fbff284-fcfa-4f9b-9c3a-12eb318d19f8","Type":"ContainerStarted","Data":"eaa57be215f04c685e038dfafafadbe6fd712b73ca5b3bf3c81483b7815b2f85"}
Apr 24 19:06:51.298772 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:51.298751 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9rqzs_0f254dda-8cf9-4910-99c2-638993f242ac/ovn-acl-logging/0.log"
Apr 24 19:06:51.299229 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:51.299207 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" event={"ID":"0f254dda-8cf9-4910-99c2-638993f242ac","Type":"ContainerStarted","Data":"068161a6e49cb991965524abf4803dd77dda79dc432731511f0adddb3c6623b5"}
Apr 24 19:06:51.306542 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:51.306499 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-qgvbw" podStartSLOduration=2.903893301 podStartE2EDuration="21.306484774s" podCreationTimestamp="2026-04-24 19:06:30 +0000 UTC" firstStartedPulling="2026-04-24 19:06:31.324791823 +0000 UTC m=+1.814216099" lastFinishedPulling="2026-04-24 19:06:49.727383285 +0000 UTC m=+20.216807572" observedRunningTime="2026-04-24 19:06:51.306454184 +0000 UTC m=+21.795878481" watchObservedRunningTime="2026-04-24 19:06:51.306484774 +0000 UTC m=+21.795909070"
Apr 24 19:06:51.348818 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:51.347987 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/konnectivity-agent-ztvgx" podStartSLOduration=7.728108211 podStartE2EDuration="21.347974282s" podCreationTimestamp="2026-04-24 19:06:30 +0000 UTC" firstStartedPulling="2026-04-24 19:06:31.316788592 +0000 UTC m=+1.806212868" lastFinishedPulling="2026-04-24 19:06:44.936654663 +0000 UTC m=+15.426078939" observedRunningTime="2026-04-24 19:06:51.347573335 +0000 UTC m=+21.836997636" watchObservedRunningTime="2026-04-24 19:06:51.347974282 +0000 UTC m=+21.837398580"
Apr 24 19:06:51.378712 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:51.378665 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-5sdvh" podStartSLOduration=2.973025706 podStartE2EDuration="21.378649718s" podCreationTimestamp="2026-04-24 19:06:30 +0000 UTC" firstStartedPulling="2026-04-24 19:06:31.294318924 +0000 UTC m=+1.783743200" lastFinishedPulling="2026-04-24 19:06:49.699942931 +0000 UTC m=+20.189367212" observedRunningTime="2026-04-24 19:06:51.378222298 +0000 UTC m=+21.867646597" watchObservedRunningTime="2026-04-24 19:06:51.378649718 +0000 UTC m=+21.868074018"
Apr 24 19:06:51.503596 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:51.503573 2571 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock"
Apr 24 19:06:52.038186 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:52.038055 2571 reconciler.go:161] "OperationExecutor.RegisterPlugin started"
plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock","Timestamp":"2026-04-24T19:06:51.503592721Z","UUID":"6fa65750-64ec-42f4-a87f-ce7be0188987","Handler":null,"Name":"","Endpoint":""} Apr 24 19:06:52.040557 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:52.040531 2571 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: ebs.csi.aws.com endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock versions: 1.0.0 Apr 24 19:06:52.040557 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:52.040563 2571 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: ebs.csi.aws.com at endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock Apr 24 19:06:52.103120 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:52.103095 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f4z44" Apr 24 19:06:52.103271 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:52.103190 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-f4z44" podUID="4c1189cb-51d5-415c-8dc4-6c1970496280" Apr 24 19:06:52.303167 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:52.303088 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-130-85.ec2.internal" event={"ID":"09694a31517020611c7d019fc3713cd8","Type":"ContainerStarted","Data":"92f77015cee4636e49a56e7ab07db73f236a8d65c2f5963866b312d3322aa6a0"} Apr 24 19:06:52.304897 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:52.304866 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-999gb" event={"ID":"5fbff284-fcfa-4f9b-9c3a-12eb318d19f8","Type":"ContainerStarted","Data":"7ec7882d9239906bb0ec538599c715f54493c60aa42ef5b418ee94147d1b0e81"} Apr 24 19:06:52.318241 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:52.318192 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/iptables-alerter-9cd8l" podStartSLOduration=3.953646826 podStartE2EDuration="22.318180125s" podCreationTimestamp="2026-04-24 19:06:30 +0000 UTC" firstStartedPulling="2026-04-24 19:06:31.33539488 +0000 UTC m=+1.824819157" lastFinishedPulling="2026-04-24 19:06:49.699928161 +0000 UTC m=+20.189352456" observedRunningTime="2026-04-24 19:06:51.397315647 +0000 UTC m=+21.886739946" watchObservedRunningTime="2026-04-24 19:06:52.318180125 +0000 UTC m=+22.807604423" Apr 24 19:06:52.318579 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:52.318553 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-130-85.ec2.internal" podStartSLOduration=22.31854886 podStartE2EDuration="22.31854886s" podCreationTimestamp="2026-04-24 19:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 19:06:52.317933222 
+0000 UTC m=+22.807357520" watchObservedRunningTime="2026-04-24 19:06:52.31854886 +0000 UTC m=+22.807973155" Apr 24 19:06:53.103777 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:53.103744 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l2vd2" Apr 24 19:06:53.103978 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:53.103890 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l2vd2" podUID="4a2cb1aa-59c8-4abb-9a8a-daf68903a358" Apr 24 19:06:53.309709 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:53.309684 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9rqzs_0f254dda-8cf9-4910-99c2-638993f242ac/ovn-acl-logging/0.log" Apr 24 19:06:53.310181 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:53.310080 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" event={"ID":"0f254dda-8cf9-4910-99c2-638993f242ac","Type":"ContainerStarted","Data":"5c48bb40a09803613c8f7f2257c9bef5131169bef58dd69b9283924b65a3ab17"} Apr 24 19:06:53.311955 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:53.311888 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-999gb" event={"ID":"5fbff284-fcfa-4f9b-9c3a-12eb318d19f8","Type":"ContainerStarted","Data":"a8078c4431a0704fadaaff3f266ef7c730639a16fd5aa9828758cc72cb28e764"} Apr 24 19:06:53.335071 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:53.335023 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-999gb" podStartSLOduration=2.329664014 
podStartE2EDuration="23.335007679s" podCreationTimestamp="2026-04-24 19:06:30 +0000 UTC" firstStartedPulling="2026-04-24 19:06:31.321199125 +0000 UTC m=+1.810623401" lastFinishedPulling="2026-04-24 19:06:52.326542787 +0000 UTC m=+22.815967066" observedRunningTime="2026-04-24 19:06:53.334925924 +0000 UTC m=+23.824350221" watchObservedRunningTime="2026-04-24 19:06:53.335007679 +0000 UTC m=+23.824431980" Apr 24 19:06:54.103236 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:54.103207 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f4z44" Apr 24 19:06:54.103419 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:54.103334 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-f4z44" podUID="4c1189cb-51d5-415c-8dc4-6c1970496280" Apr 24 19:06:55.103585 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:55.103554 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l2vd2" Apr 24 19:06:55.104026 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:55.103681 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l2vd2" podUID="4a2cb1aa-59c8-4abb-9a8a-daf68903a358" Apr 24 19:06:55.231155 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:55.231122 2571 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/konnectivity-agent-ztvgx" Apr 24 19:06:55.231790 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:55.231760 2571 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/konnectivity-agent-ztvgx" Apr 24 19:06:55.315984 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:55.315932 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kube-system/konnectivity-agent-ztvgx" Apr 24 19:06:55.316473 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:55.316456 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/konnectivity-agent-ztvgx" Apr 24 19:06:56.103759 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:56.103537 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f4z44" Apr 24 19:06:56.104490 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:56.103814 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-f4z44" podUID="4c1189cb-51d5-415c-8dc4-6c1970496280" Apr 24 19:06:56.321847 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:56.321826 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9rqzs_0f254dda-8cf9-4910-99c2-638993f242ac/ovn-acl-logging/0.log" Apr 24 19:06:56.322178 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:56.322155 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" event={"ID":"0f254dda-8cf9-4910-99c2-638993f242ac","Type":"ContainerStarted","Data":"e428a3814c6008429495843d38ff246195163e620047df92bedab430c8355450"} Apr 24 19:06:56.322454 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:56.322427 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:56.322606 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:56.322586 2571 scope.go:117] "RemoveContainer" containerID="f7d2af9c9b73110728dc9cb56df6471721c4e7ecfde5e943cdd4b1e61cb466e6" Apr 24 19:06:56.324029 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:56.324009 2571 generic.go:358] "Generic (PLEG): container finished" podID="c48fa480-1be5-43ad-b4b7-25d95f327d97" containerID="8e98401674c67424cffa48e7067a466c54e4ac5328e61ddc1f2b9c67d170ca08" exitCode=0 Apr 24 19:06:56.324123 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:56.324090 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gjqqt" event={"ID":"c48fa480-1be5-43ad-b4b7-25d95f327d97","Type":"ContainerDied","Data":"8e98401674c67424cffa48e7067a466c54e4ac5328e61ddc1f2b9c67d170ca08"} Apr 24 19:06:56.338698 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:56.338675 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:57.103145 ip-10-0-130-85 kubenswrapper[2571]: I0424 
19:06:57.102968 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l2vd2" Apr 24 19:06:57.103372 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:57.103253 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l2vd2" podUID="4a2cb1aa-59c8-4abb-9a8a-daf68903a358" Apr 24 19:06:57.328771 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:57.328749 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9rqzs_0f254dda-8cf9-4910-99c2-638993f242ac/ovn-acl-logging/0.log" Apr 24 19:06:57.329107 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:57.329085 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" event={"ID":"0f254dda-8cf9-4910-99c2-638993f242ac","Type":"ContainerStarted","Data":"24d704b13f1118ef99c9fd263c3cf22bb3ed028ede6da445f9a458fddc3632d1"} Apr 24 19:06:57.329180 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:57.329165 2571 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 19:06:57.329418 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:57.329397 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:57.343532 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:57.343512 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:06:57.361544 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:57.361503 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" 
podStartSLOduration=8.805229152999999 podStartE2EDuration="27.36149154s" podCreationTimestamp="2026-04-24 19:06:30 +0000 UTC" firstStartedPulling="2026-04-24 19:06:31.310880076 +0000 UTC m=+1.800304352" lastFinishedPulling="2026-04-24 19:06:49.86714246 +0000 UTC m=+20.356566739" observedRunningTime="2026-04-24 19:06:57.361176496 +0000 UTC m=+27.850600830" watchObservedRunningTime="2026-04-24 19:06:57.36149154 +0000 UTC m=+27.850915838" Apr 24 19:06:58.105552 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:58.105523 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f4z44" Apr 24 19:06:58.105688 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:58.105611 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-f4z44" podUID="4c1189cb-51d5-415c-8dc4-6c1970496280" Apr 24 19:06:58.332896 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:58.332856 2571 generic.go:358] "Generic (PLEG): container finished" podID="c48fa480-1be5-43ad-b4b7-25d95f327d97" containerID="0b912c1105201612986edd110ea3ef9c1fb4aee9e79ddb7041a48d99e0347161" exitCode=0 Apr 24 19:06:58.333350 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:58.332929 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gjqqt" event={"ID":"c48fa480-1be5-43ad-b4b7-25d95f327d97","Type":"ContainerDied","Data":"0b912c1105201612986edd110ea3ef9c1fb4aee9e79ddb7041a48d99e0347161"} Apr 24 19:06:58.333350 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:58.333132 2571 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 19:06:59.103505 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:59.103478 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l2vd2" Apr 24 19:06:59.103608 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:06:59.103595 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l2vd2" podUID="4a2cb1aa-59c8-4abb-9a8a-daf68903a358" Apr 24 19:06:59.336811 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:59.336779 2571 generic.go:358] "Generic (PLEG): container finished" podID="c48fa480-1be5-43ad-b4b7-25d95f327d97" containerID="a1992c801ec95f0d1ab573e8a6691c3d3a263abe0c7f03a43f6489a595a4932b" exitCode=0 Apr 24 19:06:59.337273 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:59.336842 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gjqqt" event={"ID":"c48fa480-1be5-43ad-b4b7-25d95f327d97","Type":"ContainerDied","Data":"a1992c801ec95f0d1ab573e8a6691c3d3a263abe0c7f03a43f6489a595a4932b"} Apr 24 19:06:59.337273 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:59.337021 2571 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 19:06:59.901465 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:06:59.901434 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:07:00.103799 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:00.103766 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f4z44" Apr 24 19:07:00.104046 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:07:00.103857 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-f4z44" podUID="4c1189cb-51d5-415c-8dc4-6c1970496280" Apr 24 19:07:00.349702 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:00.349653 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" podUID="0f254dda-8cf9-4910-99c2-638993f242ac" containerName="ovnkube-controller" probeResult="failure" output="" Apr 24 19:07:01.102839 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:01.102810 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l2vd2" Apr 24 19:07:01.103026 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:07:01.102931 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l2vd2" podUID="4a2cb1aa-59c8-4abb-9a8a-daf68903a358" Apr 24 19:07:02.103770 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:02.103739 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f4z44" Apr 24 19:07:02.104228 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:07:02.103844 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-f4z44" podUID="4c1189cb-51d5-415c-8dc4-6c1970496280" Apr 24 19:07:02.708579 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:02.708539 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a2cb1aa-59c8-4abb-9a8a-daf68903a358-metrics-certs\") pod \"network-metrics-daemon-l2vd2\" (UID: \"4a2cb1aa-59c8-4abb-9a8a-daf68903a358\") " pod="openshift-multus/network-metrics-daemon-l2vd2" Apr 24 19:07:02.708869 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:07:02.708696 2571 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 24 19:07:02.708869 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:07:02.708772 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a2cb1aa-59c8-4abb-9a8a-daf68903a358-metrics-certs podName:4a2cb1aa-59c8-4abb-9a8a-daf68903a358 nodeName:}" failed. No retries permitted until 2026-04-24 19:07:34.708750934 +0000 UTC m=+65.198175225 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a2cb1aa-59c8-4abb-9a8a-daf68903a358-metrics-certs") pod "network-metrics-daemon-l2vd2" (UID: "4a2cb1aa-59c8-4abb-9a8a-daf68903a358") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 24 19:07:02.809244 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:02.809207 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tcjp5\" (UniqueName: \"kubernetes.io/projected/4c1189cb-51d5-415c-8dc4-6c1970496280-kube-api-access-tcjp5\") pod \"network-check-target-f4z44\" (UID: \"4c1189cb-51d5-415c-8dc4-6c1970496280\") " pod="openshift-network-diagnostics/network-check-target-f4z44" Apr 24 19:07:02.809399 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:07:02.809352 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 24 19:07:02.809399 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:07:02.809367 2571 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 24 19:07:02.809399 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:07:02.809377 2571 projected.go:194] Error preparing data for projected volume kube-api-access-tcjp5 for pod openshift-network-diagnostics/network-check-target-f4z44: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 24 19:07:02.809543 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:07:02.809426 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4c1189cb-51d5-415c-8dc4-6c1970496280-kube-api-access-tcjp5 podName:4c1189cb-51d5-415c-8dc4-6c1970496280 nodeName:}" failed. 
No retries permitted until 2026-04-24 19:07:34.80941152 +0000 UTC m=+65.298835802 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-tcjp5" (UniqueName: "kubernetes.io/projected/4c1189cb-51d5-415c-8dc4-6c1970496280-kube-api-access-tcjp5") pod "network-check-target-f4z44" (UID: "4c1189cb-51d5-415c-8dc4-6c1970496280") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 24 19:07:03.103080 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:03.103008 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l2vd2" Apr 24 19:07:03.103241 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:07:03.103109 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l2vd2" podUID="4a2cb1aa-59c8-4abb-9a8a-daf68903a358" Apr 24 19:07:04.103241 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:04.103208 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f4z44" Apr 24 19:07:04.103707 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:07:04.103329 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-f4z44" podUID="4c1189cb-51d5-415c-8dc4-6c1970496280" Apr 24 19:07:05.103522 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:05.103504 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l2vd2" Apr 24 19:07:05.103792 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:07:05.103606 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l2vd2" podUID="4a2cb1aa-59c8-4abb-9a8a-daf68903a358" Apr 24 19:07:05.350744 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:05.350674 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gjqqt" event={"ID":"c48fa480-1be5-43ad-b4b7-25d95f327d97","Type":"ContainerStarted","Data":"287ef06bd30564f5fdd4e47e6c18aa5692437ae60037ab3e048e92caefe3352f"} Apr 24 19:07:06.103123 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:06.103088 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f4z44" Apr 24 19:07:06.103287 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:07:06.103220 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-f4z44" podUID="4c1189cb-51d5-415c-8dc4-6c1970496280" Apr 24 19:07:06.355062 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:06.354998 2571 generic.go:358] "Generic (PLEG): container finished" podID="c48fa480-1be5-43ad-b4b7-25d95f327d97" containerID="287ef06bd30564f5fdd4e47e6c18aa5692437ae60037ab3e048e92caefe3352f" exitCode=0 Apr 24 19:07:06.355062 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:06.355057 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gjqqt" event={"ID":"c48fa480-1be5-43ad-b4b7-25d95f327d97","Type":"ContainerDied","Data":"287ef06bd30564f5fdd4e47e6c18aa5692437ae60037ab3e048e92caefe3352f"} Apr 24 19:07:06.446339 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:06.446300 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-l2vd2"] Apr 24 19:07:06.447545 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:06.446444 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l2vd2" Apr 24 19:07:06.447545 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:07:06.446554 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l2vd2" podUID="4a2cb1aa-59c8-4abb-9a8a-daf68903a358" Apr 24 19:07:06.449236 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:06.449196 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-f4z44"] Apr 24 19:07:06.449408 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:06.449369 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f4z44" Apr 24 19:07:06.449493 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:07:06.449469 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-f4z44" podUID="4c1189cb-51d5-415c-8dc4-6c1970496280" Apr 24 19:07:07.359808 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:07.359538 2571 generic.go:358] "Generic (PLEG): container finished" podID="c48fa480-1be5-43ad-b4b7-25d95f327d97" containerID="3ae07d9fa53ce065dd6cb2d61efe623e75747d0abd22863e162c0ce6bf1c192e" exitCode=0 Apr 24 19:07:07.359808 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:07.359597 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gjqqt" event={"ID":"c48fa480-1be5-43ad-b4b7-25d95f327d97","Type":"ContainerDied","Data":"3ae07d9fa53ce065dd6cb2d61efe623e75747d0abd22863e162c0ce6bf1c192e"} Apr 24 19:07:08.103486 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:08.103450 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f4z44" Apr 24 19:07:08.103653 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:07:08.103555 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-f4z44" podUID="4c1189cb-51d5-415c-8dc4-6c1970496280" Apr 24 19:07:08.103653 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:08.103588 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l2vd2" Apr 24 19:07:08.103653 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:07:08.103645 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l2vd2" podUID="4a2cb1aa-59c8-4abb-9a8a-daf68903a358" Apr 24 19:07:08.364511 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:08.364424 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gjqqt" event={"ID":"c48fa480-1be5-43ad-b4b7-25d95f327d97","Type":"ContainerStarted","Data":"86fcf03898b63974bf045c7f269ec21fe329ea6ac4534b0142a8db91dab2f72f"} Apr 24 19:07:08.392446 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:08.392400 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-gjqqt" podStartSLOduration=4.623160723 podStartE2EDuration="38.392386729s" podCreationTimestamp="2026-04-24 19:06:30 +0000 UTC" firstStartedPulling="2026-04-24 19:06:31.301655451 +0000 UTC m=+1.791079726" lastFinishedPulling="2026-04-24 19:07:05.070881457 +0000 UTC m=+35.560305732" observedRunningTime="2026-04-24 19:07:08.392221722 +0000 UTC m=+38.881646021" watchObservedRunningTime="2026-04-24 19:07:08.392386729 +0000 UTC m=+38.881811024" Apr 24 19:07:10.103351 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:10.103326 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f4z44" Apr 24 19:07:10.103761 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:07:10.103399 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-f4z44" podUID="4c1189cb-51d5-415c-8dc4-6c1970496280" Apr 24 19:07:10.103761 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:10.103472 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l2vd2" Apr 24 19:07:10.103761 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:07:10.103564 2571 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l2vd2" podUID="4a2cb1aa-59c8-4abb-9a8a-daf68903a358" Apr 24 19:07:10.807588 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:10.807514 2571 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-130-85.ec2.internal" event="NodeReady" Apr 24 19:07:10.807700 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:10.807608 2571 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Apr 24 19:07:10.876369 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:10.876339 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-5g8bb"] Apr 24 19:07:10.909291 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:10.909260 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-qx2g9"] Apr 24 19:07:10.909418 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:10.909299 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5g8bb" Apr 24 19:07:10.912867 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:10.912815 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-km7hc\"" Apr 24 19:07:10.913051 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:10.913033 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Apr 24 19:07:10.913139 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:10.913121 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Apr 24 19:07:10.922108 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:10.922086 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-qx2g9"] Apr 24 19:07:10.922108 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:10.922110 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5g8bb"] Apr 24 
19:07:10.922238 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:10.922123 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-runtime-extractor-nvfsw"] Apr 24 19:07:10.922238 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:10.922216 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qx2g9" Apr 24 19:07:10.924569 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:10.924547 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-jvjrt\"" Apr 24 19:07:10.924654 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:10.924620 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Apr 24 19:07:10.924718 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:10.924676 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Apr 24 19:07:10.924764 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:10.924725 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Apr 24 19:07:10.943174 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:10.943155 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-nvfsw"] Apr 24 19:07:10.943270 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:10.943260 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-runtime-extractor-nvfsw" Apr 24 19:07:10.946127 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:10.946108 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-rbac-proxy\"" Apr 24 19:07:10.946245 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:10.946230 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-tls\"" Apr 24 19:07:10.946318 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:10.946112 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"openshift-service-ca.crt\"" Apr 24 19:07:10.946318 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:10.946294 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-sa-dockercfg-nzsf6\"" Apr 24 19:07:10.946432 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:10.946359 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-root-ca.crt\"" Apr 24 19:07:11.067964 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.067870 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/32a6dbba-0d8b-45d1-a36a-94876710e4fe-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-nvfsw\" (UID: \"32a6dbba-0d8b-45d1-a36a-94876710e4fe\") " pod="openshift-insights/insights-runtime-extractor-nvfsw" Apr 24 19:07:11.067964 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.067945 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrvkj\" (UniqueName: \"kubernetes.io/projected/e6ca0a6d-3eab-40ba-8698-e1fbdb43065a-kube-api-access-nrvkj\") pod \"dns-default-5g8bb\" (UID: \"e6ca0a6d-3eab-40ba-8698-e1fbdb43065a\") " 
pod="openshift-dns/dns-default-5g8bb" Apr 24 19:07:11.068101 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.067968 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/e6ca0a6d-3eab-40ba-8698-e1fbdb43065a-tmp-dir\") pod \"dns-default-5g8bb\" (UID: \"e6ca0a6d-3eab-40ba-8698-e1fbdb43065a\") " pod="openshift-dns/dns-default-5g8bb" Apr 24 19:07:11.068101 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.067988 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9xnr\" (UniqueName: \"kubernetes.io/projected/349784ed-94ac-4a05-8df5-2577a4c28f7a-kube-api-access-f9xnr\") pod \"ingress-canary-qx2g9\" (UID: \"349784ed-94ac-4a05-8df5-2577a4c28f7a\") " pod="openshift-ingress-canary/ingress-canary-qx2g9" Apr 24 19:07:11.068101 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.068006 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d8zt\" (UniqueName: \"kubernetes.io/projected/32a6dbba-0d8b-45d1-a36a-94876710e4fe-kube-api-access-9d8zt\") pod \"insights-runtime-extractor-nvfsw\" (UID: \"32a6dbba-0d8b-45d1-a36a-94876710e4fe\") " pod="openshift-insights/insights-runtime-extractor-nvfsw" Apr 24 19:07:11.068101 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.068026 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/32a6dbba-0d8b-45d1-a36a-94876710e4fe-crio-socket\") pod \"insights-runtime-extractor-nvfsw\" (UID: \"32a6dbba-0d8b-45d1-a36a-94876710e4fe\") " pod="openshift-insights/insights-runtime-extractor-nvfsw" Apr 24 19:07:11.068215 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.068102 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/e6ca0a6d-3eab-40ba-8698-e1fbdb43065a-config-volume\") pod \"dns-default-5g8bb\" (UID: \"e6ca0a6d-3eab-40ba-8698-e1fbdb43065a\") " pod="openshift-dns/dns-default-5g8bb" Apr 24 19:07:11.068215 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.068125 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/349784ed-94ac-4a05-8df5-2577a4c28f7a-cert\") pod \"ingress-canary-qx2g9\" (UID: \"349784ed-94ac-4a05-8df5-2577a4c28f7a\") " pod="openshift-ingress-canary/ingress-canary-qx2g9" Apr 24 19:07:11.068215 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.068171 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/32a6dbba-0d8b-45d1-a36a-94876710e4fe-data-volume\") pod \"insights-runtime-extractor-nvfsw\" (UID: \"32a6dbba-0d8b-45d1-a36a-94876710e4fe\") " pod="openshift-insights/insights-runtime-extractor-nvfsw" Apr 24 19:07:11.068215 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.068186 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/32a6dbba-0d8b-45d1-a36a-94876710e4fe-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-nvfsw\" (UID: \"32a6dbba-0d8b-45d1-a36a-94876710e4fe\") " pod="openshift-insights/insights-runtime-extractor-nvfsw" Apr 24 19:07:11.068215 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.068212 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e6ca0a6d-3eab-40ba-8698-e1fbdb43065a-metrics-tls\") pod \"dns-default-5g8bb\" (UID: \"e6ca0a6d-3eab-40ba-8698-e1fbdb43065a\") " pod="openshift-dns/dns-default-5g8bb" Apr 24 19:07:11.168781 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.168751 2571 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/32a6dbba-0d8b-45d1-a36a-94876710e4fe-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-nvfsw\" (UID: \"32a6dbba-0d8b-45d1-a36a-94876710e4fe\") " pod="openshift-insights/insights-runtime-extractor-nvfsw" Apr 24 19:07:11.169176 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.168788 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nrvkj\" (UniqueName: \"kubernetes.io/projected/e6ca0a6d-3eab-40ba-8698-e1fbdb43065a-kube-api-access-nrvkj\") pod \"dns-default-5g8bb\" (UID: \"e6ca0a6d-3eab-40ba-8698-e1fbdb43065a\") " pod="openshift-dns/dns-default-5g8bb" Apr 24 19:07:11.169176 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.168807 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/e6ca0a6d-3eab-40ba-8698-e1fbdb43065a-tmp-dir\") pod \"dns-default-5g8bb\" (UID: \"e6ca0a6d-3eab-40ba-8698-e1fbdb43065a\") " pod="openshift-dns/dns-default-5g8bb" Apr 24 19:07:11.169176 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.168826 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f9xnr\" (UniqueName: \"kubernetes.io/projected/349784ed-94ac-4a05-8df5-2577a4c28f7a-kube-api-access-f9xnr\") pod \"ingress-canary-qx2g9\" (UID: \"349784ed-94ac-4a05-8df5-2577a4c28f7a\") " pod="openshift-ingress-canary/ingress-canary-qx2g9" Apr 24 19:07:11.169176 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.168842 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9d8zt\" (UniqueName: \"kubernetes.io/projected/32a6dbba-0d8b-45d1-a36a-94876710e4fe-kube-api-access-9d8zt\") pod \"insights-runtime-extractor-nvfsw\" (UID: \"32a6dbba-0d8b-45d1-a36a-94876710e4fe\") " pod="openshift-insights/insights-runtime-extractor-nvfsw" Apr 24 
19:07:11.169176 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.168862 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/32a6dbba-0d8b-45d1-a36a-94876710e4fe-crio-socket\") pod \"insights-runtime-extractor-nvfsw\" (UID: \"32a6dbba-0d8b-45d1-a36a-94876710e4fe\") " pod="openshift-insights/insights-runtime-extractor-nvfsw" Apr 24 19:07:11.169176 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.168890 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6ca0a6d-3eab-40ba-8698-e1fbdb43065a-config-volume\") pod \"dns-default-5g8bb\" (UID: \"e6ca0a6d-3eab-40ba-8698-e1fbdb43065a\") " pod="openshift-dns/dns-default-5g8bb" Apr 24 19:07:11.169176 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.168934 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/349784ed-94ac-4a05-8df5-2577a4c28f7a-cert\") pod \"ingress-canary-qx2g9\" (UID: \"349784ed-94ac-4a05-8df5-2577a4c28f7a\") " pod="openshift-ingress-canary/ingress-canary-qx2g9" Apr 24 19:07:11.169521 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.169496 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/32a6dbba-0d8b-45d1-a36a-94876710e4fe-data-volume\") pod \"insights-runtime-extractor-nvfsw\" (UID: \"32a6dbba-0d8b-45d1-a36a-94876710e4fe\") " pod="openshift-insights/insights-runtime-extractor-nvfsw" Apr 24 19:07:11.169587 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.169516 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/e6ca0a6d-3eab-40ba-8698-e1fbdb43065a-tmp-dir\") pod \"dns-default-5g8bb\" (UID: \"e6ca0a6d-3eab-40ba-8698-e1fbdb43065a\") " pod="openshift-dns/dns-default-5g8bb" Apr 24 19:07:11.169587 
ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.169561 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/32a6dbba-0d8b-45d1-a36a-94876710e4fe-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-nvfsw\" (UID: \"32a6dbba-0d8b-45d1-a36a-94876710e4fe\") " pod="openshift-insights/insights-runtime-extractor-nvfsw" Apr 24 19:07:11.169587 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.169567 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/32a6dbba-0d8b-45d1-a36a-94876710e4fe-crio-socket\") pod \"insights-runtime-extractor-nvfsw\" (UID: \"32a6dbba-0d8b-45d1-a36a-94876710e4fe\") " pod="openshift-insights/insights-runtime-extractor-nvfsw" Apr 24 19:07:11.169823 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.169798 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/32a6dbba-0d8b-45d1-a36a-94876710e4fe-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-nvfsw\" (UID: \"32a6dbba-0d8b-45d1-a36a-94876710e4fe\") " pod="openshift-insights/insights-runtime-extractor-nvfsw" Apr 24 19:07:11.170028 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.169992 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e6ca0a6d-3eab-40ba-8698-e1fbdb43065a-metrics-tls\") pod \"dns-default-5g8bb\" (UID: \"e6ca0a6d-3eab-40ba-8698-e1fbdb43065a\") " pod="openshift-dns/dns-default-5g8bb" Apr 24 19:07:11.170126 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.170022 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6ca0a6d-3eab-40ba-8698-e1fbdb43065a-config-volume\") pod \"dns-default-5g8bb\" (UID: \"e6ca0a6d-3eab-40ba-8698-e1fbdb43065a\") " 
pod="openshift-dns/dns-default-5g8bb" Apr 24 19:07:11.170126 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.170025 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/32a6dbba-0d8b-45d1-a36a-94876710e4fe-data-volume\") pod \"insights-runtime-extractor-nvfsw\" (UID: \"32a6dbba-0d8b-45d1-a36a-94876710e4fe\") " pod="openshift-insights/insights-runtime-extractor-nvfsw" Apr 24 19:07:11.173722 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.173695 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/32a6dbba-0d8b-45d1-a36a-94876710e4fe-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-nvfsw\" (UID: \"32a6dbba-0d8b-45d1-a36a-94876710e4fe\") " pod="openshift-insights/insights-runtime-extractor-nvfsw" Apr 24 19:07:11.173831 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.173751 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/349784ed-94ac-4a05-8df5-2577a4c28f7a-cert\") pod \"ingress-canary-qx2g9\" (UID: \"349784ed-94ac-4a05-8df5-2577a4c28f7a\") " pod="openshift-ingress-canary/ingress-canary-qx2g9" Apr 24 19:07:11.178983 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.178961 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d8zt\" (UniqueName: \"kubernetes.io/projected/32a6dbba-0d8b-45d1-a36a-94876710e4fe-kube-api-access-9d8zt\") pod \"insights-runtime-extractor-nvfsw\" (UID: \"32a6dbba-0d8b-45d1-a36a-94876710e4fe\") " pod="openshift-insights/insights-runtime-extractor-nvfsw" Apr 24 19:07:11.179554 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.179535 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrvkj\" (UniqueName: \"kubernetes.io/projected/e6ca0a6d-3eab-40ba-8698-e1fbdb43065a-kube-api-access-nrvkj\") pod 
\"dns-default-5g8bb\" (UID: \"e6ca0a6d-3eab-40ba-8698-e1fbdb43065a\") " pod="openshift-dns/dns-default-5g8bb" Apr 24 19:07:11.181891 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.181875 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e6ca0a6d-3eab-40ba-8698-e1fbdb43065a-metrics-tls\") pod \"dns-default-5g8bb\" (UID: \"e6ca0a6d-3eab-40ba-8698-e1fbdb43065a\") " pod="openshift-dns/dns-default-5g8bb" Apr 24 19:07:11.190687 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.190665 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9xnr\" (UniqueName: \"kubernetes.io/projected/349784ed-94ac-4a05-8df5-2577a4c28f7a-kube-api-access-f9xnr\") pod \"ingress-canary-qx2g9\" (UID: \"349784ed-94ac-4a05-8df5-2577a4c28f7a\") " pod="openshift-ingress-canary/ingress-canary-qx2g9" Apr 24 19:07:11.218685 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.218661 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5g8bb" Apr 24 19:07:11.230370 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.230346 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qx2g9" Apr 24 19:07:11.252074 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.252046 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-runtime-extractor-nvfsw" Apr 24 19:07:11.408272 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.407797 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5g8bb"] Apr 24 19:07:11.409027 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.409004 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-qx2g9"] Apr 24 19:07:11.411353 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:11.411333 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-nvfsw"] Apr 24 19:07:11.413079 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:07:11.413048 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6ca0a6d_3eab_40ba_8698_e1fbdb43065a.slice/crio-342b13de90d5a9e35577452f954f66f61bf5941b1b8276c97bb19f25a536a8c4 WatchSource:0}: Error finding container 342b13de90d5a9e35577452f954f66f61bf5941b1b8276c97bb19f25a536a8c4: Status 404 returned error can't find the container with id 342b13de90d5a9e35577452f954f66f61bf5941b1b8276c97bb19f25a536a8c4 Apr 24 19:07:11.413560 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:07:11.413471 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod349784ed_94ac_4a05_8df5_2577a4c28f7a.slice/crio-36617534c8820088f28e3d2ea422643b34d6decde334d3410e5d13f3033e82ed WatchSource:0}: Error finding container 36617534c8820088f28e3d2ea422643b34d6decde334d3410e5d13f3033e82ed: Status 404 returned error can't find the container with id 36617534c8820088f28e3d2ea422643b34d6decde334d3410e5d13f3033e82ed Apr 24 19:07:11.417857 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:07:11.417836 2571 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32a6dbba_0d8b_45d1_a36a_94876710e4fe.slice/crio-30c41fe081c2bf6c3e1c1ed7448cd5139a474c4e5c6048d00603cddba45789be WatchSource:0}: Error finding container 30c41fe081c2bf6c3e1c1ed7448cd5139a474c4e5c6048d00603cddba45789be: Status 404 returned error can't find the container with id 30c41fe081c2bf6c3e1c1ed7448cd5139a474c4e5c6048d00603cddba45789be Apr 24 19:07:12.103706 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:12.103676 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f4z44" Apr 24 19:07:12.103706 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:12.103705 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l2vd2" Apr 24 19:07:12.108192 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:12.108170 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Apr 24 19:07:12.108192 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:12.108193 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Apr 24 19:07:12.108404 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:12.108225 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Apr 24 19:07:12.108404 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:12.108172 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-5l888\"" Apr 24 19:07:12.108404 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:12.108173 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-vpn2d\"" Apr 24 19:07:12.374144 ip-10-0-130-85 kubenswrapper[2571]: I0424 
19:07:12.374061 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qx2g9" event={"ID":"349784ed-94ac-4a05-8df5-2577a4c28f7a","Type":"ContainerStarted","Data":"36617534c8820088f28e3d2ea422643b34d6decde334d3410e5d13f3033e82ed"} Apr 24 19:07:12.375570 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:12.375515 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-nvfsw" event={"ID":"32a6dbba-0d8b-45d1-a36a-94876710e4fe","Type":"ContainerStarted","Data":"e154a03d1b3cd574182ce83e6b5253b3f340ea371e300c0a856247d2f4809e4b"} Apr 24 19:07:12.375695 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:12.375578 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-nvfsw" event={"ID":"32a6dbba-0d8b-45d1-a36a-94876710e4fe","Type":"ContainerStarted","Data":"30c41fe081c2bf6c3e1c1ed7448cd5139a474c4e5c6048d00603cddba45789be"} Apr 24 19:07:12.376644 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:12.376614 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5g8bb" event={"ID":"e6ca0a6d-3eab-40ba-8698-e1fbdb43065a","Type":"ContainerStarted","Data":"342b13de90d5a9e35577452f954f66f61bf5941b1b8276c97bb19f25a536a8c4"} Apr 24 19:07:13.220156 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:13.220124 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-2c9bn"] Apr 24 19:07:13.237112 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:13.236992 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-2c9bn"] Apr 24 19:07:13.237275 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:13.237132 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-2c9bn" Apr 24 19:07:13.240013 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:13.239987 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-admission-webhook-dockercfg-w5r4p\"" Apr 24 19:07:13.240172 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:13.240016 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-admission-webhook-tls\"" Apr 24 19:07:13.384748 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:13.384716 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/e369c5e1-62ad-4a07-bd91-f9b696624a1c-tls-certificates\") pod \"prometheus-operator-admission-webhook-57cf98b594-2c9bn\" (UID: \"e369c5e1-62ad-4a07-bd91-f9b696624a1c\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-2c9bn" Apr 24 19:07:13.485368 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:13.485275 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/e369c5e1-62ad-4a07-bd91-f9b696624a1c-tls-certificates\") pod \"prometheus-operator-admission-webhook-57cf98b594-2c9bn\" (UID: \"e369c5e1-62ad-4a07-bd91-f9b696624a1c\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-2c9bn" Apr 24 19:07:13.489352 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:13.489328 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/e369c5e1-62ad-4a07-bd91-f9b696624a1c-tls-certificates\") pod \"prometheus-operator-admission-webhook-57cf98b594-2c9bn\" (UID: \"e369c5e1-62ad-4a07-bd91-f9b696624a1c\") " 
pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-2c9bn" Apr 24 19:07:13.548281 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:13.548239 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-2c9bn" Apr 24 19:07:14.142374 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:14.142137 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-2c9bn"] Apr 24 19:07:14.147377 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:07:14.147345 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode369c5e1_62ad_4a07_bd91_f9b696624a1c.slice/crio-4454e55ccdc4b6da1e9bce6be729701ed4c2f6ce67fa121f3174908597f6c2a2 WatchSource:0}: Error finding container 4454e55ccdc4b6da1e9bce6be729701ed4c2f6ce67fa121f3174908597f6c2a2: Status 404 returned error can't find the container with id 4454e55ccdc4b6da1e9bce6be729701ed4c2f6ce67fa121f3174908597f6c2a2 Apr 24 19:07:14.383886 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:14.383846 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qx2g9" event={"ID":"349784ed-94ac-4a05-8df5-2577a4c28f7a","Type":"ContainerStarted","Data":"81bbb10a388fc6e0f8e442087b1b07b255e7d8316565f7a3bb0eaa4aadeb5ca1"} Apr 24 19:07:14.384972 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:14.384941 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-2c9bn" event={"ID":"e369c5e1-62ad-4a07-bd91-f9b696624a1c","Type":"ContainerStarted","Data":"4454e55ccdc4b6da1e9bce6be729701ed4c2f6ce67fa121f3174908597f6c2a2"} Apr 24 19:07:14.387091 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:14.387063 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-insights/insights-runtime-extractor-nvfsw" event={"ID":"32a6dbba-0d8b-45d1-a36a-94876710e4fe","Type":"ContainerStarted","Data":"0c829d23e09952024ca48d9c2b9e8c4df22430e25c8db3becb3468daa2f58dcd"}
Apr 24 19:07:14.388577 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:14.388555 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5g8bb" event={"ID":"e6ca0a6d-3eab-40ba-8698-e1fbdb43065a","Type":"ContainerStarted","Data":"6d53273ee0d73dd99555e56b29230e2ed04afde255f73778e2a8a2534d206f5b"}
Apr 24 19:07:14.388689 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:14.388582 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5g8bb" event={"ID":"e6ca0a6d-3eab-40ba-8698-e1fbdb43065a","Type":"ContainerStarted","Data":"31807061f148ddb2b2add66d7c2619df21d95ce68743c30e866697721f6b7ba7"}
Apr 24 19:07:14.388742 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:14.388696 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-5g8bb"
Apr 24 19:07:14.399004 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:14.398958 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-qx2g9" podStartSLOduration=1.7980632399999998 podStartE2EDuration="4.398946501s" podCreationTimestamp="2026-04-24 19:07:10 +0000 UTC" firstStartedPulling="2026-04-24 19:07:11.417175728 +0000 UTC m=+41.906600005" lastFinishedPulling="2026-04-24 19:07:14.01805899 +0000 UTC m=+44.507483266" observedRunningTime="2026-04-24 19:07:14.398680556 +0000 UTC m=+44.888104855" watchObservedRunningTime="2026-04-24 19:07:14.398946501 +0000 UTC m=+44.888370798"
Apr 24 19:07:14.416319 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:14.416277 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-5g8bb" podStartSLOduration=1.809352978 podStartE2EDuration="4.416261875s" podCreationTimestamp="2026-04-24 19:07:10 +0000 UTC" firstStartedPulling="2026-04-24 19:07:11.417169386 +0000 UTC m=+41.906593662" lastFinishedPulling="2026-04-24 19:07:14.024078266 +0000 UTC m=+44.513502559" observedRunningTime="2026-04-24 19:07:14.416145995 +0000 UTC m=+44.905570294" watchObservedRunningTime="2026-04-24 19:07:14.416261875 +0000 UTC m=+44.905686178"
Apr 24 19:07:16.396531 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:16.396308 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-nvfsw" event={"ID":"32a6dbba-0d8b-45d1-a36a-94876710e4fe","Type":"ContainerStarted","Data":"f97c23607fdb7d39dfdd6cf2cc541d48ef773ae628a978cbc4b80c811fa2c16b"}
Apr 24 19:07:16.397555 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:16.397533 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-2c9bn" event={"ID":"e369c5e1-62ad-4a07-bd91-f9b696624a1c","Type":"ContainerStarted","Data":"c23cf71f398889feb673dde494c65a235edc1ef96174f694731664769afa84d5"}
Apr 24 19:07:16.397794 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:16.397777 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-2c9bn"
Apr 24 19:07:16.403516 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:16.403493 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-2c9bn"
Apr 24 19:07:16.421018 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:16.420976 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-runtime-extractor-nvfsw" podStartSLOduration=1.833079636 podStartE2EDuration="6.420964091s" podCreationTimestamp="2026-04-24 19:07:10 +0000 UTC" firstStartedPulling="2026-04-24 19:07:11.5642229 +0000 UTC m=+42.053647190" lastFinishedPulling="2026-04-24 19:07:16.152107366 +0000 UTC m=+46.641531645" observedRunningTime="2026-04-24 19:07:16.418014783 +0000 UTC m=+46.907439081" watchObservedRunningTime="2026-04-24 19:07:16.420964091 +0000 UTC m=+46.910388388"
Apr 24 19:07:16.453321 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:16.453273 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-57cf98b594-2c9bn" podStartSLOduration=1.448883859 podStartE2EDuration="3.453260596s" podCreationTimestamp="2026-04-24 19:07:13 +0000 UTC" firstStartedPulling="2026-04-24 19:07:14.149784103 +0000 UTC m=+44.639208393" lastFinishedPulling="2026-04-24 19:07:16.154160839 +0000 UTC m=+46.643585130" observedRunningTime="2026-04-24 19:07:16.452946587 +0000 UTC m=+46.942370886" watchObservedRunningTime="2026-04-24 19:07:16.453260596 +0000 UTC m=+46.942684893"
Apr 24 19:07:17.285415 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:17.285381 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-5676c8c784-mn72d"]
Apr 24 19:07:17.300385 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:17.300354 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-5676c8c784-mn72d"]
Apr 24 19:07:17.300512 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:17.300409 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-5676c8c784-mn72d"
Apr 24 19:07:17.303094 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:17.303073 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-kube-rbac-proxy-config\""
Apr 24 19:07:17.303094 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:17.303073 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-tls\""
Apr 24 19:07:17.304351 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:17.304332 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"metrics-client-ca\""
Apr 24 19:07:17.304527 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:17.304358 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"openshift-service-ca.crt\""
Apr 24 19:07:17.304527 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:17.304364 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kube-root-ca.crt\""
Apr 24 19:07:17.304527 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:17.304388 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-dockercfg-5ml9v\""
Apr 24 19:07:17.414652 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:17.414625 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b1027239-cc37-421e-ba6a-e801f278db4c-metrics-client-ca\") pod \"prometheus-operator-5676c8c784-mn72d\" (UID: \"b1027239-cc37-421e-ba6a-e801f278db4c\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-mn72d"
Apr 24 19:07:17.415034 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:17.414679 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b1027239-cc37-421e-ba6a-e801f278db4c-prometheus-operator-tls\") pod \"prometheus-operator-5676c8c784-mn72d\" (UID: \"b1027239-cc37-421e-ba6a-e801f278db4c\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-mn72d"
Apr 24 19:07:17.415034 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:17.414701 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbkfm\" (UniqueName: \"kubernetes.io/projected/b1027239-cc37-421e-ba6a-e801f278db4c-kube-api-access-zbkfm\") pod \"prometheus-operator-5676c8c784-mn72d\" (UID: \"b1027239-cc37-421e-ba6a-e801f278db4c\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-mn72d"
Apr 24 19:07:17.415034 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:17.414757 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b1027239-cc37-421e-ba6a-e801f278db4c-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5676c8c784-mn72d\" (UID: \"b1027239-cc37-421e-ba6a-e801f278db4c\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-mn72d"
Apr 24 19:07:17.515110 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:17.515082 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b1027239-cc37-421e-ba6a-e801f278db4c-prometheus-operator-tls\") pod \"prometheus-operator-5676c8c784-mn72d\" (UID: \"b1027239-cc37-421e-ba6a-e801f278db4c\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-mn72d"
Apr 24 19:07:17.515195 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:17.515115 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zbkfm\" (UniqueName: \"kubernetes.io/projected/b1027239-cc37-421e-ba6a-e801f278db4c-kube-api-access-zbkfm\") pod \"prometheus-operator-5676c8c784-mn72d\" (UID: \"b1027239-cc37-421e-ba6a-e801f278db4c\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-mn72d"
Apr 24 19:07:17.515234 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:07:17.515206 2571 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found
Apr 24 19:07:17.515287 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:07:17.515276 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1027239-cc37-421e-ba6a-e801f278db4c-prometheus-operator-tls podName:b1027239-cc37-421e-ba6a-e801f278db4c nodeName:}" failed. No retries permitted until 2026-04-24 19:07:18.015257316 +0000 UTC m=+48.504681596 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/b1027239-cc37-421e-ba6a-e801f278db4c-prometheus-operator-tls") pod "prometheus-operator-5676c8c784-mn72d" (UID: "b1027239-cc37-421e-ba6a-e801f278db4c") : secret "prometheus-operator-tls" not found
Apr 24 19:07:17.515359 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:17.515340 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b1027239-cc37-421e-ba6a-e801f278db4c-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5676c8c784-mn72d\" (UID: \"b1027239-cc37-421e-ba6a-e801f278db4c\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-mn72d"
Apr 24 19:07:17.515422 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:17.515407 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b1027239-cc37-421e-ba6a-e801f278db4c-metrics-client-ca\") pod \"prometheus-operator-5676c8c784-mn72d\" (UID: \"b1027239-cc37-421e-ba6a-e801f278db4c\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-mn72d"
Apr 24 19:07:17.524795 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:17.524772 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b1027239-cc37-421e-ba6a-e801f278db4c-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5676c8c784-mn72d\" (UID: \"b1027239-cc37-421e-ba6a-e801f278db4c\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-mn72d"
Apr 24 19:07:17.524922 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:17.524780 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b1027239-cc37-421e-ba6a-e801f278db4c-metrics-client-ca\") pod \"prometheus-operator-5676c8c784-mn72d\" (UID: \"b1027239-cc37-421e-ba6a-e801f278db4c\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-mn72d"
Apr 24 19:07:17.524922 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:17.524822 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbkfm\" (UniqueName: \"kubernetes.io/projected/b1027239-cc37-421e-ba6a-e801f278db4c-kube-api-access-zbkfm\") pod \"prometheus-operator-5676c8c784-mn72d\" (UID: \"b1027239-cc37-421e-ba6a-e801f278db4c\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-mn72d"
Apr 24 19:07:18.018480 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:18.018443 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b1027239-cc37-421e-ba6a-e801f278db4c-prometheus-operator-tls\") pod \"prometheus-operator-5676c8c784-mn72d\" (UID: \"b1027239-cc37-421e-ba6a-e801f278db4c\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-mn72d"
Apr 24 19:07:18.020742 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:18.020723 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b1027239-cc37-421e-ba6a-e801f278db4c-prometheus-operator-tls\") pod \"prometheus-operator-5676c8c784-mn72d\" (UID: \"b1027239-cc37-421e-ba6a-e801f278db4c\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-mn72d"
Apr 24 19:07:18.209225 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:18.209179 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-5676c8c784-mn72d"
Apr 24 19:07:18.319585 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:18.319557 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-5676c8c784-mn72d"]
Apr 24 19:07:18.323126 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:07:18.323097 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1027239_cc37_421e_ba6a_e801f278db4c.slice/crio-ff054aae27a3b8c30fb4609c0653ce6a8980b67be1389c85607a6333de96220e WatchSource:0}: Error finding container ff054aae27a3b8c30fb4609c0653ce6a8980b67be1389c85607a6333de96220e: Status 404 returned error can't find the container with id ff054aae27a3b8c30fb4609c0653ce6a8980b67be1389c85607a6333de96220e
Apr 24 19:07:18.403624 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:18.403590 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5676c8c784-mn72d" event={"ID":"b1027239-cc37-421e-ba6a-e801f278db4c","Type":"ContainerStarted","Data":"ff054aae27a3b8c30fb4609c0653ce6a8980b67be1389c85607a6333de96220e"}
Apr 24 19:07:20.410807 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:20.410630 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5676c8c784-mn72d" event={"ID":"b1027239-cc37-421e-ba6a-e801f278db4c","Type":"ContainerStarted","Data":"a67a5f9cb3d5b8ea9acbe5769c535a4d18092ed9dd0c747a5d09d14f16098890"}
Apr 24 19:07:20.411333 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:20.410814 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5676c8c784-mn72d" event={"ID":"b1027239-cc37-421e-ba6a-e801f278db4c","Type":"ContainerStarted","Data":"cce9d303078866a274798dc23d0d8bf7ff6675549c74f19bf06767139f7a5456"}
Apr 24 19:07:20.441159 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:20.441115 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-5676c8c784-mn72d" podStartSLOduration=1.742036433 podStartE2EDuration="3.441102041s" podCreationTimestamp="2026-04-24 19:07:17 +0000 UTC" firstStartedPulling="2026-04-24 19:07:18.324993231 +0000 UTC m=+48.814417507" lastFinishedPulling="2026-04-24 19:07:20.024058827 +0000 UTC m=+50.513483115" observedRunningTime="2026-04-24 19:07:20.44061945 +0000 UTC m=+50.930043747" watchObservedRunningTime="2026-04-24 19:07:20.441102041 +0000 UTC m=+50.930526369"
Apr 24 19:07:22.669501 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.669467 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-9d44df66c-z2tt6"]
Apr 24 19:07:22.673156 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.673142 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-9d44df66c-z2tt6"
Apr 24 19:07:22.676668 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.676650 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"openshift-state-metrics-kube-rbac-proxy-config\""
Apr 24 19:07:22.676783 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.676672 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"openshift-state-metrics-dockercfg-cztkb\""
Apr 24 19:07:22.676860 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.676847 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"openshift-state-metrics-tls\""
Apr 24 19:07:22.683134 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.683112 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-9d44df66c-z2tt6"]
Apr 24 19:07:22.697934 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.697897 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-69db897b98-6xdh4"]
Apr 24 19:07:22.701368 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.701355 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-69db897b98-6xdh4"
Apr 24 19:07:22.704435 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.704411 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"kube-state-metrics-kube-rbac-proxy-config\""
Apr 24 19:07:22.704613 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.704411 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"kube-state-metrics-tls\""
Apr 24 19:07:22.704804 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.704784 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"kube-state-metrics-dockercfg-fx2k2\""
Apr 24 19:07:22.705658 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.705634 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kube-state-metrics-custom-resource-state-configmap\""
Apr 24 19:07:22.713509 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.713485 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-69db897b98-6xdh4"]
Apr 24 19:07:22.719180 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.719161 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-qj24z"]
Apr 24 19:07:22.722539 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.722525 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-qj24z"
Apr 24 19:07:22.725944 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.725929 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-kube-rbac-proxy-config\""
Apr 24 19:07:22.726033 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.726018 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-dockercfg-m9g75\""
Apr 24 19:07:22.726276 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.726262 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"node-exporter-accelerators-collector-config\""
Apr 24 19:07:22.726345 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.726291 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-tls\""
Apr 24 19:07:22.856080 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.856040 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwhzg\" (UniqueName: \"kubernetes.io/projected/a7bb7d80-1c4f-459d-8ed3-3623da4d74b6-kube-api-access-cwhzg\") pod \"openshift-state-metrics-9d44df66c-z2tt6\" (UID: \"a7bb7d80-1c4f-459d-8ed3-3623da4d74b6\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-z2tt6"
Apr 24 19:07:22.856272 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.856102 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/82d6a5b3-4b28-4b2e-8cef-302264f53d35-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-qj24z\" (UID: \"82d6a5b3-4b28-4b2e-8cef-302264f53d35\") " pod="openshift-monitoring/node-exporter-qj24z"
Apr 24 19:07:22.856272 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.856126 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/82d6a5b3-4b28-4b2e-8cef-302264f53d35-node-exporter-tls\") pod \"node-exporter-qj24z\" (UID: \"82d6a5b3-4b28-4b2e-8cef-302264f53d35\") " pod="openshift-monitoring/node-exporter-qj24z"
Apr 24 19:07:22.856272 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.856175 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a7bb7d80-1c4f-459d-8ed3-3623da4d74b6-openshift-state-metrics-tls\") pod \"openshift-state-metrics-9d44df66c-z2tt6\" (UID: \"a7bb7d80-1c4f-459d-8ed3-3623da4d74b6\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-z2tt6"
Apr 24 19:07:22.856272 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.856197 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/49a37844-298e-4826-a760-5ba43867d446-volume-directive-shadow\") pod \"kube-state-metrics-69db897b98-6xdh4\" (UID: \"49a37844-298e-4826-a760-5ba43867d446\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-6xdh4"
Apr 24 19:07:22.856272 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.856222 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/49a37844-298e-4826-a760-5ba43867d446-kube-state-metrics-tls\") pod \"kube-state-metrics-69db897b98-6xdh4\" (UID: \"49a37844-298e-4826-a760-5ba43867d446\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-6xdh4"
Apr 24 19:07:22.856468 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.856266 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/49a37844-298e-4826-a760-5ba43867d446-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-69db897b98-6xdh4\" (UID: \"49a37844-298e-4826-a760-5ba43867d446\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-6xdh4"
Apr 24 19:07:22.856468 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.856308 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwtfg\" (UniqueName: \"kubernetes.io/projected/49a37844-298e-4826-a760-5ba43867d446-kube-api-access-wwtfg\") pod \"kube-state-metrics-69db897b98-6xdh4\" (UID: \"49a37844-298e-4826-a760-5ba43867d446\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-6xdh4"
Apr 24 19:07:22.856468 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.856343 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/82d6a5b3-4b28-4b2e-8cef-302264f53d35-node-exporter-wtmp\") pod \"node-exporter-qj24z\" (UID: \"82d6a5b3-4b28-4b2e-8cef-302264f53d35\") " pod="openshift-monitoring/node-exporter-qj24z"
Apr 24 19:07:22.856468 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.856371 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/82d6a5b3-4b28-4b2e-8cef-302264f53d35-sys\") pod \"node-exporter-qj24z\" (UID: \"82d6a5b3-4b28-4b2e-8cef-302264f53d35\") " pod="openshift-monitoring/node-exporter-qj24z"
Apr 24 19:07:22.856468 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.856393 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a7bb7d80-1c4f-459d-8ed3-3623da4d74b6-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-9d44df66c-z2tt6\" (UID: \"a7bb7d80-1c4f-459d-8ed3-3623da4d74b6\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-z2tt6"
Apr 24 19:07:22.856468 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.856423 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/49a37844-298e-4826-a760-5ba43867d446-metrics-client-ca\") pod \"kube-state-metrics-69db897b98-6xdh4\" (UID: \"49a37844-298e-4826-a760-5ba43867d446\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-6xdh4"
Apr 24 19:07:22.856468 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.856454 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/82d6a5b3-4b28-4b2e-8cef-302264f53d35-metrics-client-ca\") pod \"node-exporter-qj24z\" (UID: \"82d6a5b3-4b28-4b2e-8cef-302264f53d35\") " pod="openshift-monitoring/node-exporter-qj24z"
Apr 24 19:07:22.856789 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.856501 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/82d6a5b3-4b28-4b2e-8cef-302264f53d35-root\") pod \"node-exporter-qj24z\" (UID: \"82d6a5b3-4b28-4b2e-8cef-302264f53d35\") " pod="openshift-monitoring/node-exporter-qj24z"
Apr 24 19:07:22.856789 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.856522 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/82d6a5b3-4b28-4b2e-8cef-302264f53d35-node-exporter-accelerators-collector-config\") pod \"node-exporter-qj24z\" (UID: \"82d6a5b3-4b28-4b2e-8cef-302264f53d35\") " pod="openshift-monitoring/node-exporter-qj24z"
Apr 24 19:07:22.856789 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.856542 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/49a37844-298e-4826-a760-5ba43867d446-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-69db897b98-6xdh4\" (UID: \"49a37844-298e-4826-a760-5ba43867d446\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-6xdh4"
Apr 24 19:07:22.856789 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.856596 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/82d6a5b3-4b28-4b2e-8cef-302264f53d35-node-exporter-textfile\") pod \"node-exporter-qj24z\" (UID: \"82d6a5b3-4b28-4b2e-8cef-302264f53d35\") " pod="openshift-monitoring/node-exporter-qj24z"
Apr 24 19:07:22.856789 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.856628 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a7bb7d80-1c4f-459d-8ed3-3623da4d74b6-metrics-client-ca\") pod \"openshift-state-metrics-9d44df66c-z2tt6\" (UID: \"a7bb7d80-1c4f-459d-8ed3-3623da4d74b6\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-z2tt6"
Apr 24 19:07:22.856789 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.856655 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9qvj\" (UniqueName: \"kubernetes.io/projected/82d6a5b3-4b28-4b2e-8cef-302264f53d35-kube-api-access-f9qvj\") pod \"node-exporter-qj24z\" (UID: \"82d6a5b3-4b28-4b2e-8cef-302264f53d35\") " pod="openshift-monitoring/node-exporter-qj24z"
Apr 24 19:07:22.957950 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.957851 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/82d6a5b3-4b28-4b2e-8cef-302264f53d35-root\") pod \"node-exporter-qj24z\" (UID: \"82d6a5b3-4b28-4b2e-8cef-302264f53d35\") " pod="openshift-monitoring/node-exporter-qj24z"
Apr 24 19:07:22.957950 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.957885 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/82d6a5b3-4b28-4b2e-8cef-302264f53d35-node-exporter-accelerators-collector-config\") pod \"node-exporter-qj24z\" (UID: \"82d6a5b3-4b28-4b2e-8cef-302264f53d35\") " pod="openshift-monitoring/node-exporter-qj24z"
Apr 24 19:07:22.957950 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.957926 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/49a37844-298e-4826-a760-5ba43867d446-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-69db897b98-6xdh4\" (UID: \"49a37844-298e-4826-a760-5ba43867d446\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-6xdh4"
Apr 24 19:07:22.958220 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.957976 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/82d6a5b3-4b28-4b2e-8cef-302264f53d35-root\") pod \"node-exporter-qj24z\" (UID: \"82d6a5b3-4b28-4b2e-8cef-302264f53d35\") " pod="openshift-monitoring/node-exporter-qj24z"
Apr 24 19:07:22.958220 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.958024 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/82d6a5b3-4b28-4b2e-8cef-302264f53d35-node-exporter-textfile\") pod \"node-exporter-qj24z\" (UID: \"82d6a5b3-4b28-4b2e-8cef-302264f53d35\") " pod="openshift-monitoring/node-exporter-qj24z"
Apr 24 19:07:22.958220 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.958053 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a7bb7d80-1c4f-459d-8ed3-3623da4d74b6-metrics-client-ca\") pod \"openshift-state-metrics-9d44df66c-z2tt6\" (UID: \"a7bb7d80-1c4f-459d-8ed3-3623da4d74b6\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-z2tt6"
Apr 24 19:07:22.958220 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.958073 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f9qvj\" (UniqueName: \"kubernetes.io/projected/82d6a5b3-4b28-4b2e-8cef-302264f53d35-kube-api-access-f9qvj\") pod \"node-exporter-qj24z\" (UID: \"82d6a5b3-4b28-4b2e-8cef-302264f53d35\") " pod="openshift-monitoring/node-exporter-qj24z"
Apr 24 19:07:22.958220 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.958212 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cwhzg\" (UniqueName: \"kubernetes.io/projected/a7bb7d80-1c4f-459d-8ed3-3623da4d74b6-kube-api-access-cwhzg\") pod \"openshift-state-metrics-9d44df66c-z2tt6\" (UID: \"a7bb7d80-1c4f-459d-8ed3-3623da4d74b6\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-z2tt6"
Apr 24 19:07:22.958453 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.958260 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/82d6a5b3-4b28-4b2e-8cef-302264f53d35-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-qj24z\" (UID: \"82d6a5b3-4b28-4b2e-8cef-302264f53d35\") " pod="openshift-monitoring/node-exporter-qj24z"
Apr 24 19:07:22.958453 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.958286 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/82d6a5b3-4b28-4b2e-8cef-302264f53d35-node-exporter-tls\") pod \"node-exporter-qj24z\" (UID: \"82d6a5b3-4b28-4b2e-8cef-302264f53d35\") " pod="openshift-monitoring/node-exporter-qj24z"
Apr 24 19:07:22.958453 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.958316 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a7bb7d80-1c4f-459d-8ed3-3623da4d74b6-openshift-state-metrics-tls\") pod \"openshift-state-metrics-9d44df66c-z2tt6\" (UID: \"a7bb7d80-1c4f-459d-8ed3-3623da4d74b6\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-z2tt6"
Apr 24 19:07:22.958453 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.958341 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/49a37844-298e-4826-a760-5ba43867d446-volume-directive-shadow\") pod \"kube-state-metrics-69db897b98-6xdh4\" (UID: \"49a37844-298e-4826-a760-5ba43867d446\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-6xdh4"
Apr 24 19:07:22.958453 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.958362 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/82d6a5b3-4b28-4b2e-8cef-302264f53d35-node-exporter-textfile\") pod \"node-exporter-qj24z\" (UID: \"82d6a5b3-4b28-4b2e-8cef-302264f53d35\") " pod="openshift-monitoring/node-exporter-qj24z"
Apr 24 19:07:22.958453 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.958373 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/49a37844-298e-4826-a760-5ba43867d446-kube-state-metrics-tls\") pod \"kube-state-metrics-69db897b98-6xdh4\" (UID: \"49a37844-298e-4826-a760-5ba43867d446\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-6xdh4"
Apr 24 19:07:22.958453 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.958401 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/49a37844-298e-4826-a760-5ba43867d446-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-69db897b98-6xdh4\" (UID: \"49a37844-298e-4826-a760-5ba43867d446\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-6xdh4"
Apr 24 19:07:22.958453 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.958448 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wwtfg\" (UniqueName: \"kubernetes.io/projected/49a37844-298e-4826-a760-5ba43867d446-kube-api-access-wwtfg\") pod \"kube-state-metrics-69db897b98-6xdh4\" (UID: \"49a37844-298e-4826-a760-5ba43867d446\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-6xdh4"
Apr 24 19:07:22.958818 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.958473 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/82d6a5b3-4b28-4b2e-8cef-302264f53d35-node-exporter-wtmp\") pod \"node-exporter-qj24z\" (UID: \"82d6a5b3-4b28-4b2e-8cef-302264f53d35\") " pod="openshift-monitoring/node-exporter-qj24z"
Apr 24 19:07:22.958818 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.958501 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/82d6a5b3-4b28-4b2e-8cef-302264f53d35-sys\") pod \"node-exporter-qj24z\" (UID: \"82d6a5b3-4b28-4b2e-8cef-302264f53d35\") " pod="openshift-monitoring/node-exporter-qj24z"
Apr 24 19:07:22.958818 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.958530 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a7bb7d80-1c4f-459d-8ed3-3623da4d74b6-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-9d44df66c-z2tt6\" (UID: \"a7bb7d80-1c4f-459d-8ed3-3623da4d74b6\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-z2tt6"
Apr 24 19:07:22.958818 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.958567 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/49a37844-298e-4826-a760-5ba43867d446-metrics-client-ca\") pod \"kube-state-metrics-69db897b98-6xdh4\" (UID: \"49a37844-298e-4826-a760-5ba43867d446\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-6xdh4"
Apr 24 19:07:22.958818 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.958626 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/82d6a5b3-4b28-4b2e-8cef-302264f53d35-metrics-client-ca\") pod \"node-exporter-qj24z\" (UID: \"82d6a5b3-4b28-4b2e-8cef-302264f53d35\") " pod="openshift-monitoring/node-exporter-qj24z"
Apr 24 19:07:22.958818 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.958632 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/82d6a5b3-4b28-4b2e-8cef-302264f53d35-node-exporter-accelerators-collector-config\") pod \"node-exporter-qj24z\" (UID: \"82d6a5b3-4b28-4b2e-8cef-302264f53d35\") " pod="openshift-monitoring/node-exporter-qj24z"
Apr 24 19:07:22.958818 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.958677 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/49a37844-298e-4826-a760-5ba43867d446-volume-directive-shadow\") pod \"kube-state-metrics-69db897b98-6xdh4\" (UID: \"49a37844-298e-4826-a760-5ba43867d446\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-6xdh4"
Apr 24 19:07:22.958818 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.958719 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume
\"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/49a37844-298e-4826-a760-5ba43867d446-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-69db897b98-6xdh4\" (UID: \"49a37844-298e-4826-a760-5ba43867d446\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-6xdh4" Apr 24 19:07:22.958818 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.958740 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/82d6a5b3-4b28-4b2e-8cef-302264f53d35-sys\") pod \"node-exporter-qj24z\" (UID: \"82d6a5b3-4b28-4b2e-8cef-302264f53d35\") " pod="openshift-monitoring/node-exporter-qj24z" Apr 24 19:07:22.959263 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.958820 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a7bb7d80-1c4f-459d-8ed3-3623da4d74b6-metrics-client-ca\") pod \"openshift-state-metrics-9d44df66c-z2tt6\" (UID: \"a7bb7d80-1c4f-459d-8ed3-3623da4d74b6\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-z2tt6" Apr 24 19:07:22.959263 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.958849 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/82d6a5b3-4b28-4b2e-8cef-302264f53d35-node-exporter-wtmp\") pod \"node-exporter-qj24z\" (UID: \"82d6a5b3-4b28-4b2e-8cef-302264f53d35\") " pod="openshift-monitoring/node-exporter-qj24z" Apr 24 19:07:22.959263 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.959163 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/82d6a5b3-4b28-4b2e-8cef-302264f53d35-metrics-client-ca\") pod \"node-exporter-qj24z\" (UID: \"82d6a5b3-4b28-4b2e-8cef-302264f53d35\") " pod="openshift-monitoring/node-exporter-qj24z" Apr 24 19:07:22.959544 ip-10-0-130-85 
kubenswrapper[2571]: I0424 19:07:22.959412 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/49a37844-298e-4826-a760-5ba43867d446-metrics-client-ca\") pod \"kube-state-metrics-69db897b98-6xdh4\" (UID: \"49a37844-298e-4826-a760-5ba43867d446\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-6xdh4" Apr 24 19:07:22.962107 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.962066 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/82d6a5b3-4b28-4b2e-8cef-302264f53d35-node-exporter-tls\") pod \"node-exporter-qj24z\" (UID: \"82d6a5b3-4b28-4b2e-8cef-302264f53d35\") " pod="openshift-monitoring/node-exporter-qj24z" Apr 24 19:07:22.962107 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.962084 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/49a37844-298e-4826-a760-5ba43867d446-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-69db897b98-6xdh4\" (UID: \"49a37844-298e-4826-a760-5ba43867d446\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-6xdh4" Apr 24 19:07:22.962240 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.962214 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a7bb7d80-1c4f-459d-8ed3-3623da4d74b6-openshift-state-metrics-tls\") pod \"openshift-state-metrics-9d44df66c-z2tt6\" (UID: \"a7bb7d80-1c4f-459d-8ed3-3623da4d74b6\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-z2tt6" Apr 24 19:07:22.962240 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.962218 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/49a37844-298e-4826-a760-5ba43867d446-kube-state-metrics-tls\") 
pod \"kube-state-metrics-69db897b98-6xdh4\" (UID: \"49a37844-298e-4826-a760-5ba43867d446\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-6xdh4" Apr 24 19:07:22.962240 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.962224 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a7bb7d80-1c4f-459d-8ed3-3623da4d74b6-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-9d44df66c-z2tt6\" (UID: \"a7bb7d80-1c4f-459d-8ed3-3623da4d74b6\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-z2tt6" Apr 24 19:07:22.962332 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.962302 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/82d6a5b3-4b28-4b2e-8cef-302264f53d35-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-qj24z\" (UID: \"82d6a5b3-4b28-4b2e-8cef-302264f53d35\") " pod="openshift-monitoring/node-exporter-qj24z" Apr 24 19:07:22.976085 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.976054 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9qvj\" (UniqueName: \"kubernetes.io/projected/82d6a5b3-4b28-4b2e-8cef-302264f53d35-kube-api-access-f9qvj\") pod \"node-exporter-qj24z\" (UID: \"82d6a5b3-4b28-4b2e-8cef-302264f53d35\") " pod="openshift-monitoring/node-exporter-qj24z" Apr 24 19:07:22.977084 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.977064 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwtfg\" (UniqueName: \"kubernetes.io/projected/49a37844-298e-4826-a760-5ba43867d446-kube-api-access-wwtfg\") pod \"kube-state-metrics-69db897b98-6xdh4\" (UID: \"49a37844-298e-4826-a760-5ba43867d446\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-6xdh4" Apr 24 19:07:22.978092 ip-10-0-130-85 kubenswrapper[2571]: I0424 
19:07:22.978074 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwhzg\" (UniqueName: \"kubernetes.io/projected/a7bb7d80-1c4f-459d-8ed3-3623da4d74b6-kube-api-access-cwhzg\") pod \"openshift-state-metrics-9d44df66c-z2tt6\" (UID: \"a7bb7d80-1c4f-459d-8ed3-3623da4d74b6\") " pod="openshift-monitoring/openshift-state-metrics-9d44df66c-z2tt6" Apr 24 19:07:22.981983 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:22.981960 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-9d44df66c-z2tt6" Apr 24 19:07:23.010771 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.010742 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-69db897b98-6xdh4" Apr 24 19:07:23.030552 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.030512 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-qj24z" Apr 24 19:07:23.043231 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:07:23.043189 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82d6a5b3_4b28_4b2e_8cef_302264f53d35.slice/crio-a805817a18a1dbab13c99a1aa37a53fe17f4ca1a17738efb21fea3876ae45705 WatchSource:0}: Error finding container a805817a18a1dbab13c99a1aa37a53fe17f4ca1a17738efb21fea3876ae45705: Status 404 returned error can't find the container with id a805817a18a1dbab13c99a1aa37a53fe17f4ca1a17738efb21fea3876ae45705 Apr 24 19:07:23.151530 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.151497 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-9d44df66c-z2tt6"] Apr 24 19:07:23.162059 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:07:23.162035 2571 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7bb7d80_1c4f_459d_8ed3_3623da4d74b6.slice/crio-c47eb7dcdbba28590f83a1329aca905e039964ebb3e16e2986bfbbb9100a2d69 WatchSource:0}: Error finding container c47eb7dcdbba28590f83a1329aca905e039964ebb3e16e2986bfbbb9100a2d69: Status 404 returned error can't find the container with id c47eb7dcdbba28590f83a1329aca905e039964ebb3e16e2986bfbbb9100a2d69 Apr 24 19:07:23.190668 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.190643 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-69db897b98-6xdh4"] Apr 24 19:07:23.191957 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:07:23.191930 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49a37844_298e_4826_a760_5ba43867d446.slice/crio-d4aa27761b70d0ebbc94461419b7a293bab01594dfc57a0db3a42738f60bedd6 WatchSource:0}: Error finding container d4aa27761b70d0ebbc94461419b7a293bab01594dfc57a0db3a42738f60bedd6: Status 404 returned error can't find the container with id d4aa27761b70d0ebbc94461419b7a293bab01594dfc57a0db3a42738f60bedd6 Apr 24 19:07:23.419511 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.419470 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-qj24z" event={"ID":"82d6a5b3-4b28-4b2e-8cef-302264f53d35","Type":"ContainerStarted","Data":"a805817a18a1dbab13c99a1aa37a53fe17f4ca1a17738efb21fea3876ae45705"} Apr 24 19:07:23.420529 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.420497 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-69db897b98-6xdh4" event={"ID":"49a37844-298e-4826-a760-5ba43867d446","Type":"ContainerStarted","Data":"d4aa27761b70d0ebbc94461419b7a293bab01594dfc57a0db3a42738f60bedd6"} Apr 24 19:07:23.422002 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.421983 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/openshift-state-metrics-9d44df66c-z2tt6" event={"ID":"a7bb7d80-1c4f-459d-8ed3-3623da4d74b6","Type":"ContainerStarted","Data":"12458cf6f1318b7b5b12f0cc914272cca12efff2627b2ddf28d549c564b14a2b"} Apr 24 19:07:23.422114 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.422007 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-9d44df66c-z2tt6" event={"ID":"a7bb7d80-1c4f-459d-8ed3-3623da4d74b6","Type":"ContainerStarted","Data":"954ee225b7866124ab3a5f9d2b3738c0d2a940bcf30fdf0ac63aa57e17f970e0"} Apr 24 19:07:23.422114 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.422016 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-9d44df66c-z2tt6" event={"ID":"a7bb7d80-1c4f-459d-8ed3-3623da4d74b6","Type":"ContainerStarted","Data":"c47eb7dcdbba28590f83a1329aca905e039964ebb3e16e2986bfbbb9100a2d69"} Apr 24 19:07:23.748672 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.748634 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 24 19:07:23.753051 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.753025 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:07:23.759126 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.758670 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-web-config\"" Apr 24 19:07:23.759126 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.758727 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-cluster-tls-config\"" Apr 24 19:07:23.759126 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.758760 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"alertmanager-trusted-ca-bundle\"" Apr 24 19:07:23.759126 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.758771 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-dockercfg-4xz9n\"" Apr 24 19:07:23.759126 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.758843 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-tls\"" Apr 24 19:07:23.759126 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.758686 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy\"" Apr 24 19:07:23.759126 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.758775 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-tls-assets-0\"" Apr 24 19:07:23.759126 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.758940 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-generated\"" Apr 24 19:07:23.759126 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.759016 2571 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy-metric\"" Apr 24 19:07:23.759126 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.759038 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy-web\"" Apr 24 19:07:23.772501 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.772451 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 24 19:07:23.867205 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.867153 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/de4af98c-d040-481e-8fac-b92e6e4ef1a6-config-out\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:07:23.867205 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.867199 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-web-config\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:07:23.867455 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.867221 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:07:23.867455 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.867267 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/de4af98c-d040-481e-8fac-b92e6e4ef1a6-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:07:23.867455 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.867303 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-config-volume\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:07:23.867455 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.867329 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/de4af98c-d040-481e-8fac-b92e6e4ef1a6-tls-assets\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:07:23.867455 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.867364 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-cluster-tls-config\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:07:23.867455 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.867399 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/de4af98c-d040-481e-8fac-b92e6e4ef1a6-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:07:23.867455 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.867421 
2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/de4af98c-d040-481e-8fac-b92e6e4ef1a6-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:07:23.867680 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.867463 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:07:23.867680 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.867491 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hsr8\" (UniqueName: \"kubernetes.io/projected/de4af98c-d040-481e-8fac-b92e6e4ef1a6-kube-api-access-6hsr8\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:07:23.867680 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.867509 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:07:23.867680 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.867535 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:07:23.968636 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.968595 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de4af98c-d040-481e-8fac-b92e6e4ef1a6-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:07:23.968756 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.968641 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-config-volume\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:07:23.968756 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.968673 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/de4af98c-d040-481e-8fac-b92e6e4ef1a6-tls-assets\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:07:23.968756 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.968714 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-cluster-tls-config\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:07:23.968756 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.968743 2571 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/de4af98c-d040-481e-8fac-b92e6e4ef1a6-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:07:23.968974 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.968776 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/de4af98c-d040-481e-8fac-b92e6e4ef1a6-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:07:23.968974 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:07:23.968798 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/de4af98c-d040-481e-8fac-b92e6e4ef1a6-alertmanager-trusted-ca-bundle podName:de4af98c-d040-481e-8fac-b92e6e4ef1a6 nodeName:}" failed. No retries permitted until 2026-04-24 19:07:24.468776205 +0000 UTC m=+54.958200486 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/de4af98c-d040-481e-8fac-b92e6e4ef1a6-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "de4af98c-d040-481e-8fac-b92e6e4ef1a6") : configmap references non-existent config key: ca-bundle.crt Apr 24 19:07:23.968974 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.968858 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:07:23.968974 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.968927 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6hsr8\" (UniqueName: \"kubernetes.io/projected/de4af98c-d040-481e-8fac-b92e6e4ef1a6-kube-api-access-6hsr8\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:07:23.968974 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.968963 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:07:23.969224 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.968993 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " 
pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:07:23.969224 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.969043 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/de4af98c-d040-481e-8fac-b92e6e4ef1a6-config-out\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:07:23.969224 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.969070 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-web-config\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:07:23.969224 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.969106 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:07:23.969462 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:07:23.969380 2571 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Apr 24 19:07:23.969537 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:07:23.969485 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-secret-alertmanager-main-tls podName:de4af98c-d040-481e-8fac-b92e6e4ef1a6 nodeName:}" failed. No retries permitted until 2026-04-24 19:07:24.469468144 +0000 UTC m=+54.958892426 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "de4af98c-d040-481e-8fac-b92e6e4ef1a6") : secret "alertmanager-main-tls" not found
Apr 24 19:07:23.969600 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.969532 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/de4af98c-d040-481e-8fac-b92e6e4ef1a6-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 24 19:07:23.969600 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.969565 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/de4af98c-d040-481e-8fac-b92e6e4ef1a6-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 24 19:07:23.971916 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.971840 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-cluster-tls-config\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 24 19:07:23.972387 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.972211 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/de4af98c-d040-481e-8fac-b92e6e4ef1a6-config-out\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 24 19:07:23.972874 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.972834 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 24 19:07:23.973051 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.973033 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/de4af98c-d040-481e-8fac-b92e6e4ef1a6-tls-assets\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 24 19:07:23.975941 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.973836 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 24 19:07:23.975941 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.974354 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 24 19:07:23.976329 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.976297 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-web-config\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 24 19:07:23.977008 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.976975 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-config-volume\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 24 19:07:23.979332 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:23.979310 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hsr8\" (UniqueName: \"kubernetes.io/projected/de4af98c-d040-481e-8fac-b92e6e4ef1a6-kube-api-access-6hsr8\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 24 19:07:24.395123 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.395047 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-5g8bb"
Apr 24 19:07:24.427680 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.427644 2571 generic.go:358] "Generic (PLEG): container finished" podID="82d6a5b3-4b28-4b2e-8cef-302264f53d35" containerID="e16028290b0797d6eea9b02973356617f9892039202ac466449ce436c5469cc5" exitCode=0
Apr 24 19:07:24.427828 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.427724 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-qj24z" event={"ID":"82d6a5b3-4b28-4b2e-8cef-302264f53d35","Type":"ContainerDied","Data":"e16028290b0797d6eea9b02973356617f9892039202ac466449ce436c5469cc5"}
Apr 24 19:07:24.475810 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.475774 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 24 19:07:24.476012 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.475941 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de4af98c-d040-481e-8fac-b92e6e4ef1a6-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 24 19:07:24.476875 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.476847 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de4af98c-d040-481e-8fac-b92e6e4ef1a6-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 24 19:07:24.480066 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.480043 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 24 19:07:24.667704 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.667685 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Apr 24 19:07:24.691532 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.690869 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-6c69ddf84f-cltxb"]
Apr 24 19:07:24.696197 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.696174 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-6c69ddf84f-cltxb"
Apr 24 19:07:24.699121 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.699096 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-kube-rbac-proxy-web\""
Apr 24 19:07:24.699259 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.699155 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-kube-rbac-proxy-rules\""
Apr 24 19:07:24.699404 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.699387 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-kube-rbac-proxy-metrics\""
Apr 24 19:07:24.699480 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.699467 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-dockercfg-69qkm\""
Apr 24 19:07:24.699784 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.699656 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-grpc-tls-381oir4rqnglm\""
Apr 24 19:07:24.699784 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.699666 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-kube-rbac-proxy\""
Apr 24 19:07:24.699962 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.699801 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-tls\""
Apr 24 19:07:24.711214 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.711189 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-6c69ddf84f-cltxb"]
Apr 24 19:07:24.778440 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.778363 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/3d80891a-935a-4389-9c77-1a81e295f4dd-secret-grpc-tls\") pod \"thanos-querier-6c69ddf84f-cltxb\" (UID: \"3d80891a-935a-4389-9c77-1a81e295f4dd\") " pod="openshift-monitoring/thanos-querier-6c69ddf84f-cltxb"
Apr 24 19:07:24.778440 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.778399 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/3d80891a-935a-4389-9c77-1a81e295f4dd-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6c69ddf84f-cltxb\" (UID: \"3d80891a-935a-4389-9c77-1a81e295f4dd\") " pod="openshift-monitoring/thanos-querier-6c69ddf84f-cltxb"
Apr 24 19:07:24.778858 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.778498 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3d80891a-935a-4389-9c77-1a81e295f4dd-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6c69ddf84f-cltxb\" (UID: \"3d80891a-935a-4389-9c77-1a81e295f4dd\") " pod="openshift-monitoring/thanos-querier-6c69ddf84f-cltxb"
Apr 24 19:07:24.778858 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.778544 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8czwq\" (UniqueName: \"kubernetes.io/projected/3d80891a-935a-4389-9c77-1a81e295f4dd-kube-api-access-8czwq\") pod \"thanos-querier-6c69ddf84f-cltxb\" (UID: \"3d80891a-935a-4389-9c77-1a81e295f4dd\") " pod="openshift-monitoring/thanos-querier-6c69ddf84f-cltxb"
Apr 24 19:07:24.778858 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.778582 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3d80891a-935a-4389-9c77-1a81e295f4dd-metrics-client-ca\") pod \"thanos-querier-6c69ddf84f-cltxb\" (UID: \"3d80891a-935a-4389-9c77-1a81e295f4dd\") " pod="openshift-monitoring/thanos-querier-6c69ddf84f-cltxb"
Apr 24 19:07:24.778858 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.778616 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3d80891a-935a-4389-9c77-1a81e295f4dd-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6c69ddf84f-cltxb\" (UID: \"3d80891a-935a-4389-9c77-1a81e295f4dd\") " pod="openshift-monitoring/thanos-querier-6c69ddf84f-cltxb"
Apr 24 19:07:24.778858 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.778688 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/3d80891a-935a-4389-9c77-1a81e295f4dd-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6c69ddf84f-cltxb\" (UID: \"3d80891a-935a-4389-9c77-1a81e295f4dd\") " pod="openshift-monitoring/thanos-querier-6c69ddf84f-cltxb"
Apr 24 19:07:24.778858 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.778751 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/3d80891a-935a-4389-9c77-1a81e295f4dd-secret-thanos-querier-tls\") pod \"thanos-querier-6c69ddf84f-cltxb\" (UID: \"3d80891a-935a-4389-9c77-1a81e295f4dd\") " pod="openshift-monitoring/thanos-querier-6c69ddf84f-cltxb"
Apr 24 19:07:24.842344 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.842314 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Apr 24 19:07:24.849293 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:07:24.849263 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde4af98c_d040_481e_8fac_b92e6e4ef1a6.slice/crio-9856b1c6c511033d58fcf1d0b3da55cd01bc95a3eb5dd65686f7e3788b0e07b1 WatchSource:0}: Error finding container 9856b1c6c511033d58fcf1d0b3da55cd01bc95a3eb5dd65686f7e3788b0e07b1: Status 404 returned error can't find the container with id 9856b1c6c511033d58fcf1d0b3da55cd01bc95a3eb5dd65686f7e3788b0e07b1
Apr 24 19:07:24.879639 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.879613 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3d80891a-935a-4389-9c77-1a81e295f4dd-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6c69ddf84f-cltxb\" (UID: \"3d80891a-935a-4389-9c77-1a81e295f4dd\") " pod="openshift-monitoring/thanos-querier-6c69ddf84f-cltxb"
Apr 24 19:07:24.879747 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.879647 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8czwq\" (UniqueName: \"kubernetes.io/projected/3d80891a-935a-4389-9c77-1a81e295f4dd-kube-api-access-8czwq\") pod \"thanos-querier-6c69ddf84f-cltxb\" (UID: \"3d80891a-935a-4389-9c77-1a81e295f4dd\") " pod="openshift-monitoring/thanos-querier-6c69ddf84f-cltxb"
Apr 24 19:07:24.879747 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.879667 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3d80891a-935a-4389-9c77-1a81e295f4dd-metrics-client-ca\") pod \"thanos-querier-6c69ddf84f-cltxb\" (UID: \"3d80891a-935a-4389-9c77-1a81e295f4dd\") " pod="openshift-monitoring/thanos-querier-6c69ddf84f-cltxb"
Apr 24 19:07:24.879747 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.879685 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3d80891a-935a-4389-9c77-1a81e295f4dd-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6c69ddf84f-cltxb\" (UID: \"3d80891a-935a-4389-9c77-1a81e295f4dd\") " pod="openshift-monitoring/thanos-querier-6c69ddf84f-cltxb"
Apr 24 19:07:24.879747 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.879710 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/3d80891a-935a-4389-9c77-1a81e295f4dd-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6c69ddf84f-cltxb\" (UID: \"3d80891a-935a-4389-9c77-1a81e295f4dd\") " pod="openshift-monitoring/thanos-querier-6c69ddf84f-cltxb"
Apr 24 19:07:24.879967 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.879750 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/3d80891a-935a-4389-9c77-1a81e295f4dd-secret-thanos-querier-tls\") pod \"thanos-querier-6c69ddf84f-cltxb\" (UID: \"3d80891a-935a-4389-9c77-1a81e295f4dd\") " pod="openshift-monitoring/thanos-querier-6c69ddf84f-cltxb"
Apr 24 19:07:24.879967 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.879774 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/3d80891a-935a-4389-9c77-1a81e295f4dd-secret-grpc-tls\") pod \"thanos-querier-6c69ddf84f-cltxb\" (UID: \"3d80891a-935a-4389-9c77-1a81e295f4dd\") " pod="openshift-monitoring/thanos-querier-6c69ddf84f-cltxb"
Apr 24 19:07:24.879967 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.879791 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/3d80891a-935a-4389-9c77-1a81e295f4dd-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6c69ddf84f-cltxb\" (UID: \"3d80891a-935a-4389-9c77-1a81e295f4dd\") " pod="openshift-monitoring/thanos-querier-6c69ddf84f-cltxb"
Apr 24 19:07:24.882946 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.882515 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/3d80891a-935a-4389-9c77-1a81e295f4dd-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6c69ddf84f-cltxb\" (UID: \"3d80891a-935a-4389-9c77-1a81e295f4dd\") " pod="openshift-monitoring/thanos-querier-6c69ddf84f-cltxb"
Apr 24 19:07:24.883072 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.883048 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3d80891a-935a-4389-9c77-1a81e295f4dd-metrics-client-ca\") pod \"thanos-querier-6c69ddf84f-cltxb\" (UID: \"3d80891a-935a-4389-9c77-1a81e295f4dd\") " pod="openshift-monitoring/thanos-querier-6c69ddf84f-cltxb"
Apr 24 19:07:24.883638 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.883607 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/3d80891a-935a-4389-9c77-1a81e295f4dd-secret-thanos-querier-tls\") pod \"thanos-querier-6c69ddf84f-cltxb\" (UID: \"3d80891a-935a-4389-9c77-1a81e295f4dd\") " pod="openshift-monitoring/thanos-querier-6c69ddf84f-cltxb"
Apr 24 19:07:24.884207 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.884184 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3d80891a-935a-4389-9c77-1a81e295f4dd-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6c69ddf84f-cltxb\" (UID: \"3d80891a-935a-4389-9c77-1a81e295f4dd\") " pod="openshift-monitoring/thanos-querier-6c69ddf84f-cltxb"
Apr 24 19:07:24.885309 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.885077 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3d80891a-935a-4389-9c77-1a81e295f4dd-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6c69ddf84f-cltxb\" (UID: \"3d80891a-935a-4389-9c77-1a81e295f4dd\") " pod="openshift-monitoring/thanos-querier-6c69ddf84f-cltxb"
Apr 24 19:07:24.885408 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.885331 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/3d80891a-935a-4389-9c77-1a81e295f4dd-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6c69ddf84f-cltxb\" (UID: \"3d80891a-935a-4389-9c77-1a81e295f4dd\") " pod="openshift-monitoring/thanos-querier-6c69ddf84f-cltxb"
Apr 24 19:07:24.885679 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.885662 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/3d80891a-935a-4389-9c77-1a81e295f4dd-secret-grpc-tls\") pod \"thanos-querier-6c69ddf84f-cltxb\" (UID: \"3d80891a-935a-4389-9c77-1a81e295f4dd\") " pod="openshift-monitoring/thanos-querier-6c69ddf84f-cltxb"
Apr 24 19:07:24.890915 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:24.890877 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8czwq\" (UniqueName: \"kubernetes.io/projected/3d80891a-935a-4389-9c77-1a81e295f4dd-kube-api-access-8czwq\") pod \"thanos-querier-6c69ddf84f-cltxb\" (UID: \"3d80891a-935a-4389-9c77-1a81e295f4dd\") " pod="openshift-monitoring/thanos-querier-6c69ddf84f-cltxb"
Apr 24 19:07:25.014410 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:25.014321 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-6c69ddf84f-cltxb"
Apr 24 19:07:25.143676 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:25.143605 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-6c69ddf84f-cltxb"]
Apr 24 19:07:25.146206 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:07:25.146175 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d80891a_935a_4389_9c77_1a81e295f4dd.slice/crio-fdec5959f47717ab2d634211e6487895565bb936a7590e3a3fea7dc55902b780 WatchSource:0}: Error finding container fdec5959f47717ab2d634211e6487895565bb936a7590e3a3fea7dc55902b780: Status 404 returned error can't find the container with id fdec5959f47717ab2d634211e6487895565bb936a7590e3a3fea7dc55902b780
Apr 24 19:07:25.432510 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:25.432443 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-qj24z" event={"ID":"82d6a5b3-4b28-4b2e-8cef-302264f53d35","Type":"ContainerStarted","Data":"c73a3dd096a978511ae8cb38c8ca45b9c868dcdcbdf413ef5378035900b47eef"}
Apr 24 19:07:25.432510 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:25.432485 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-qj24z" event={"ID":"82d6a5b3-4b28-4b2e-8cef-302264f53d35","Type":"ContainerStarted","Data":"31f72fb8e6f7c0586391f27e8437c5bb02252cee28d75f27de6d00f43aca4600"}
Apr 24 19:07:25.435832 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:25.435791 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-69db897b98-6xdh4" event={"ID":"49a37844-298e-4826-a760-5ba43867d446","Type":"ContainerStarted","Data":"4f1015305ec6eb35352444e49b71e9d6bbea1a7daa600077688f46ec158a87a5"}
Apr 24 19:07:25.435832 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:25.435837 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-69db897b98-6xdh4" event={"ID":"49a37844-298e-4826-a760-5ba43867d446","Type":"ContainerStarted","Data":"704f7452cf596f978bb1cfabdae094c0bac9fa256571bb4dc6fa83d7923c4d6a"}
Apr 24 19:07:25.436059 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:25.435860 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-69db897b98-6xdh4" event={"ID":"49a37844-298e-4826-a760-5ba43867d446","Type":"ContainerStarted","Data":"bfe6defc4f68dd265084af5edabe5b1c6544fdd21fa2f14522af385b0cbd591f"}
Apr 24 19:07:25.437340 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:25.437314 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6c69ddf84f-cltxb" event={"ID":"3d80891a-935a-4389-9c77-1a81e295f4dd","Type":"ContainerStarted","Data":"fdec5959f47717ab2d634211e6487895565bb936a7590e3a3fea7dc55902b780"}
Apr 24 19:07:25.439676 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:25.439652 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-9d44df66c-z2tt6" event={"ID":"a7bb7d80-1c4f-459d-8ed3-3623da4d74b6","Type":"ContainerStarted","Data":"74dd4b200c66fd767e7624f0f67d791a5cea877ce3e57375180464886727aacf"}
Apr 24 19:07:25.440762 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:25.440736 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"de4af98c-d040-481e-8fac-b92e6e4ef1a6","Type":"ContainerStarted","Data":"9856b1c6c511033d58fcf1d0b3da55cd01bc95a3eb5dd65686f7e3788b0e07b1"}
Apr 24 19:07:25.463568 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:25.463508 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-qj24z" podStartSLOduration=2.612352085 podStartE2EDuration="3.463487826s" podCreationTimestamp="2026-04-24 19:07:22 +0000 UTC" firstStartedPulling="2026-04-24 19:07:23.045719019 +0000 UTC m=+53.535143295" lastFinishedPulling="2026-04-24 19:07:23.896854748 +0000 UTC m=+54.386279036" observedRunningTime="2026-04-24 19:07:25.46009004 +0000 UTC m=+55.949514337" watchObservedRunningTime="2026-04-24 19:07:25.463487826 +0000 UTC m=+55.952912125"
Apr 24 19:07:25.487118 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:25.487060 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-9d44df66c-z2tt6" podStartSLOduration=2.104373398 podStartE2EDuration="3.487042732s" podCreationTimestamp="2026-04-24 19:07:22 +0000 UTC" firstStartedPulling="2026-04-24 19:07:23.281153908 +0000 UTC m=+53.770578185" lastFinishedPulling="2026-04-24 19:07:24.663823239 +0000 UTC m=+55.153247519" observedRunningTime="2026-04-24 19:07:25.486214355 +0000 UTC m=+55.975638648" watchObservedRunningTime="2026-04-24 19:07:25.487042732 +0000 UTC m=+55.976467033"
Apr 24 19:07:25.517775 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:25.517723 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-69db897b98-6xdh4" podStartSLOduration=2.049714942 podStartE2EDuration="3.517708904s" podCreationTimestamp="2026-04-24 19:07:22 +0000 UTC" firstStartedPulling="2026-04-24 19:07:23.19399943 +0000 UTC m=+53.683423706" lastFinishedPulling="2026-04-24 19:07:24.661993377 +0000 UTC m=+55.151417668" observedRunningTime="2026-04-24 19:07:25.516313008 +0000 UTC m=+56.005737306" watchObservedRunningTime="2026-04-24 19:07:25.517708904 +0000 UTC m=+56.007133202"
Apr 24 19:07:26.446029 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:26.445992 2571 generic.go:358] "Generic (PLEG): container finished" podID="de4af98c-d040-481e-8fac-b92e6e4ef1a6" containerID="090a5607e5d6d869e3454558851255961c3195442b2446ce628e3bc7aaa15aa8" exitCode=0
Apr 24 19:07:26.446472 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:26.446094 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"de4af98c-d040-481e-8fac-b92e6e4ef1a6","Type":"ContainerDied","Data":"090a5607e5d6d869e3454558851255961c3195442b2446ce628e3bc7aaa15aa8"}
Apr 24 19:07:27.451435 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:27.451401 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6c69ddf84f-cltxb" event={"ID":"3d80891a-935a-4389-9c77-1a81e295f4dd","Type":"ContainerStarted","Data":"ab2d491d7a4bcb4c47f87701de9268ff9d7afee005556067abf46374d7578050"}
Apr 24 19:07:27.451860 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:27.451442 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6c69ddf84f-cltxb" event={"ID":"3d80891a-935a-4389-9c77-1a81e295f4dd","Type":"ContainerStarted","Data":"026b0d3d0a70ca997e9dbc8205f63dbe68d1eadee277b43ba35f19b1bdc16b90"}
Apr 24 19:07:27.482295 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:27.480423 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-7dccd58f55-w9klz"]
Apr 24 19:07:27.486348 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:27.486284 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-7dccd58f55-w9klz"
Apr 24 19:07:27.489300 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:27.489244 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"monitoring-plugin-cert\""
Apr 24 19:07:27.489442 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:27.489318 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"default-dockercfg-zg56k\""
Apr 24 19:07:27.493343 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:27.493317 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-7dccd58f55-w9klz"]
Apr 24 19:07:27.505218 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:27.505185 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/272dfdad-dc51-4c52-8c46-f73a8d1bba06-monitoring-plugin-cert\") pod \"monitoring-plugin-7dccd58f55-w9klz\" (UID: \"272dfdad-dc51-4c52-8c46-f73a8d1bba06\") " pod="openshift-monitoring/monitoring-plugin-7dccd58f55-w9klz"
Apr 24 19:07:27.606500 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:27.606466 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/272dfdad-dc51-4c52-8c46-f73a8d1bba06-monitoring-plugin-cert\") pod \"monitoring-plugin-7dccd58f55-w9klz\" (UID: \"272dfdad-dc51-4c52-8c46-f73a8d1bba06\") " pod="openshift-monitoring/monitoring-plugin-7dccd58f55-w9klz"
Apr 24 19:07:27.606677 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:07:27.606623 2571 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: secret "monitoring-plugin-cert" not found
Apr 24 19:07:27.606734 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:07:27.606701 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/272dfdad-dc51-4c52-8c46-f73a8d1bba06-monitoring-plugin-cert podName:272dfdad-dc51-4c52-8c46-f73a8d1bba06 nodeName:}" failed. No retries permitted until 2026-04-24 19:07:28.106684325 +0000 UTC m=+58.596108601 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/272dfdad-dc51-4c52-8c46-f73a8d1bba06-monitoring-plugin-cert") pod "monitoring-plugin-7dccd58f55-w9klz" (UID: "272dfdad-dc51-4c52-8c46-f73a8d1bba06") : secret "monitoring-plugin-cert" not found
Apr 24 19:07:28.111372 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:28.111332 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/272dfdad-dc51-4c52-8c46-f73a8d1bba06-monitoring-plugin-cert\") pod \"monitoring-plugin-7dccd58f55-w9klz\" (UID: \"272dfdad-dc51-4c52-8c46-f73a8d1bba06\") " pod="openshift-monitoring/monitoring-plugin-7dccd58f55-w9klz"
Apr 24 19:07:28.114163 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:28.114133 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/272dfdad-dc51-4c52-8c46-f73a8d1bba06-monitoring-plugin-cert\") pod \"monitoring-plugin-7dccd58f55-w9klz\" (UID: \"272dfdad-dc51-4c52-8c46-f73a8d1bba06\") " pod="openshift-monitoring/monitoring-plugin-7dccd58f55-w9klz"
Apr 24 19:07:28.399017 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:28.398944 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-7dccd58f55-w9klz"
Apr 24 19:07:28.459088 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:28.459054 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6c69ddf84f-cltxb" event={"ID":"3d80891a-935a-4389-9c77-1a81e295f4dd","Type":"ContainerStarted","Data":"8c61eb7dfd6c0058dbe40783404823f14dd9495bbb67ee207c7c7e714a1ca15f"}
Apr 24 19:07:28.538410 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:28.538273 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-7dccd58f55-w9klz"]
Apr 24 19:07:28.540844 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:07:28.540805 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod272dfdad_dc51_4c52_8c46_f73a8d1bba06.slice/crio-8dc1f0fb83b253a785d8d0b74ffd31582717be1c97f9f38b0180c99a58cbf325 WatchSource:0}: Error finding container 8dc1f0fb83b253a785d8d0b74ffd31582717be1c97f9f38b0180c99a58cbf325: Status 404 returned error can't find the container with id 8dc1f0fb83b253a785d8d0b74ffd31582717be1c97f9f38b0180c99a58cbf325
Apr 24 19:07:28.999457 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:28.999395 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Apr 24 19:07:29.005395 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.005367 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Apr 24 19:07:29.009179 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.008729 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"metrics-client-certs\""
Apr 24 19:07:29.010062 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.010025 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-thanos-sidecar-tls\""
Apr 24 19:07:29.010274 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.010257 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-web-config\""
Apr 24 19:07:29.010502 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.010480 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"serving-certs-ca-bundle\""
Apr 24 19:07:29.010590 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.010543 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-grpc-tls-d6a34aojtasma\""
Apr 24 19:07:29.010675 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.010659 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-thanos-prometheus-http-client-file\""
Apr 24 19:07:29.010755 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.010724 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-kube-rbac-proxy-web\""
Apr 24 19:07:29.010812 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.010766 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"kube-rbac-proxy\""
Apr 24 19:07:29.010812 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.010786 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-tls-assets-0\""
Apr 24 19:07:29.011061 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.011041 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-dockercfg-fgf2s\""
Apr 24 19:07:29.011061 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.011054 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s\""
Apr 24 19:07:29.011505 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.011331 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"prometheus-trusted-ca-bundle\""
Apr 24 19:07:29.011505 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.011388 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kubelet-serving-ca-bundle\""
Apr 24 19:07:29.011505 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.011405 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-tls\""
Apr 24 19:07:29.012571 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.012552 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-rulefiles-0\""
Apr 24 19:07:29.020210 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.020183 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Apr 24 19:07:29.024780 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.024757 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 24 19:07:29.024917 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.024797 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 24 19:07:29.024917 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.024831 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 24 19:07:29.025041 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.024939 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 24 19:07:29.025041 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.024969 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 24 19:07:29.025041 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.025006 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 24 19:07:29.025150 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.025104 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 24 19:07:29.025192 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.025172 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 24 19:07:29.025233 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.025208 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 24 19:07:29.025281 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.025242 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0"
Apr 24
19:07:29.025281 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.025268 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-web-config\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.025355 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.025298 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-config-out\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.025395 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.025370 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.025456 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.025431 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-config\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.025509 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.025467 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " 
pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.025509 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.025484 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2k4g\" (UniqueName: \"kubernetes.io/projected/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-kube-api-access-z2k4g\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.025600 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.025534 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.025600 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.025566 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.127193 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.127164 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.127317 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.127205 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-config\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.127317 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.127221 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.127317 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.127244 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z2k4g\" (UniqueName: \"kubernetes.io/projected/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-kube-api-access-z2k4g\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.127317 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.127266 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.127317 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.127287 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.127317 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.127314 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.127611 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.127340 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.127611 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.127369 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.127611 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.127421 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.127611 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.127448 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.127611 ip-10-0-130-85 kubenswrapper[2571]: 
I0424 19:07:29.127487 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.127611 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.127534 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.127611 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.127576 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.127611 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.127601 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.128014 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.127635 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: 
\"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.128014 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.127658 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-web-config\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.128014 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.127688 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-config-out\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.129651 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.128796 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.129651 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.128978 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.129651 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.129339 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-configmap-kubelet-serving-ca-bundle\") pod 
\"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.131098 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.130448 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.131288 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.131235 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.131375 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.131340 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.132618 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.132594 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.132873 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.132850 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.133277 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.133239 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-config-out\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.133812 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.133617 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-web-config\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.133812 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.133778 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.134555 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.133965 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.134555 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.134316 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.134555 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.134544 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.135105 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.135076 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.135105 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.135086 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-config\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.136803 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.136781 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.138293 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.138270 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2k4g\" 
(UniqueName: \"kubernetes.io/projected/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-kube-api-access-z2k4g\") pod \"prometheus-k8s-0\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.335426 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.335343 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:07:29.465222 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.465182 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6c69ddf84f-cltxb" event={"ID":"3d80891a-935a-4389-9c77-1a81e295f4dd","Type":"ContainerStarted","Data":"1971b9cb320b57146a69dc598f04492e3e2dcdf22de478e60633f3b8c84b0258"} Apr 24 19:07:29.465222 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.465217 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6c69ddf84f-cltxb" event={"ID":"3d80891a-935a-4389-9c77-1a81e295f4dd","Type":"ContainerStarted","Data":"ea700b65e2203dc3f06d738ce5bb93ea33d80ea921366fcab40a12404a1a1e76"} Apr 24 19:07:29.465688 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.465232 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6c69ddf84f-cltxb" event={"ID":"3d80891a-935a-4389-9c77-1a81e295f4dd","Type":"ContainerStarted","Data":"13a1c8786dd77675f00fcbd5786e484901fe2358d11636a66f23b272a2f8d86e"} Apr 24 19:07:29.465688 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.465357 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-monitoring/thanos-querier-6c69ddf84f-cltxb" Apr 24 19:07:29.468530 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.468500 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"de4af98c-d040-481e-8fac-b92e6e4ef1a6","Type":"ContainerStarted","Data":"b0392a05a5fad895610b3b9ad43ee09afe6885e74af0008c450f4422f89f7b43"} Apr 24 19:07:29.468781 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.468535 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"de4af98c-d040-481e-8fac-b92e6e4ef1a6","Type":"ContainerStarted","Data":"9b42623bd1a539e0eb1af56e10b4306e2f2fffba3f6759412655b4aa14a5398b"} Apr 24 19:07:29.468781 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.468545 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"de4af98c-d040-481e-8fac-b92e6e4ef1a6","Type":"ContainerStarted","Data":"7086bcb71796f2904e356bf97b3fc740278ccb393c73ad8b661c97f84305e05b"} Apr 24 19:07:29.468781 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.468555 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"de4af98c-d040-481e-8fac-b92e6e4ef1a6","Type":"ContainerStarted","Data":"635edf91ca355bb721571e53582ca9af0606c02828099d110e25f34e8cd52356"} Apr 24 19:07:29.468781 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.468568 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"de4af98c-d040-481e-8fac-b92e6e4ef1a6","Type":"ContainerStarted","Data":"771982c9a7e842da878757cd525e7da3501e7d52743df17b66c2cf2e36123fd0"} Apr 24 19:07:29.468781 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.468580 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"de4af98c-d040-481e-8fac-b92e6e4ef1a6","Type":"ContainerStarted","Data":"d490b8fbf3b7936b4ee0de56ae224b76f4a39b3ec8af7e0f4e9d2a0b7b7e617c"} Apr 24 19:07:29.469738 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.469713 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/monitoring-plugin-7dccd58f55-w9klz" event={"ID":"272dfdad-dc51-4c52-8c46-f73a8d1bba06","Type":"ContainerStarted","Data":"8dc1f0fb83b253a785d8d0b74ffd31582717be1c97f9f38b0180c99a58cbf325"} Apr 24 19:07:29.491134 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.491101 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Apr 24 19:07:29.496052 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:07:29.496014 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e1f5d2f_cc78_4f75_b9b4_976566e660fe.slice/crio-b687f1eb94deab104490527dc8f47bb8c5e3bda96ae26076c03f7292c699a465 WatchSource:0}: Error finding container b687f1eb94deab104490527dc8f47bb8c5e3bda96ae26076c03f7292c699a465: Status 404 returned error can't find the container with id b687f1eb94deab104490527dc8f47bb8c5e3bda96ae26076c03f7292c699a465 Apr 24 19:07:29.499648 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.499584 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-6c69ddf84f-cltxb" podStartSLOduration=1.683260253 podStartE2EDuration="5.499565672s" podCreationTimestamp="2026-04-24 19:07:24 +0000 UTC" firstStartedPulling="2026-04-24 19:07:25.148291174 +0000 UTC m=+55.637715455" lastFinishedPulling="2026-04-24 19:07:28.964596585 +0000 UTC m=+59.454020874" observedRunningTime="2026-04-24 19:07:29.498775537 +0000 UTC m=+59.988199845" watchObservedRunningTime="2026-04-24 19:07:29.499565672 +0000 UTC m=+59.988989971" Apr 24 19:07:29.534854 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:29.534795 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=2.2759044 podStartE2EDuration="6.534776799s" podCreationTimestamp="2026-04-24 19:07:23 +0000 UTC" firstStartedPulling="2026-04-24 19:07:24.852825943 +0000 UTC m=+55.342250235" 
lastFinishedPulling="2026-04-24 19:07:29.111698347 +0000 UTC m=+59.601122634" observedRunningTime="2026-04-24 19:07:29.531868117 +0000 UTC m=+60.021292617" watchObservedRunningTime="2026-04-24 19:07:29.534776799 +0000 UTC m=+60.024201098" Apr 24 19:07:30.349428 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:30.349350 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9rqzs" Apr 24 19:07:30.474475 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:30.474439 2571 generic.go:358] "Generic (PLEG): container finished" podID="2e1f5d2f-cc78-4f75-b9b4-976566e660fe" containerID="a584a338fe68345ce373ad174054e3bfbbf67b791863157c6509817e70fc29f1" exitCode=0 Apr 24 19:07:30.474947 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:30.474521 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2e1f5d2f-cc78-4f75-b9b4-976566e660fe","Type":"ContainerDied","Data":"a584a338fe68345ce373ad174054e3bfbbf67b791863157c6509817e70fc29f1"} Apr 24 19:07:30.474947 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:30.474557 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2e1f5d2f-cc78-4f75-b9b4-976566e660fe","Type":"ContainerStarted","Data":"b687f1eb94deab104490527dc8f47bb8c5e3bda96ae26076c03f7292c699a465"} Apr 24 19:07:30.475959 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:30.475938 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-7dccd58f55-w9klz" event={"ID":"272dfdad-dc51-4c52-8c46-f73a8d1bba06","Type":"ContainerStarted","Data":"72e275d35cb5ee5174c18056aa00f838ef368121c0369bebc096297d5f5508b0"} Apr 24 19:07:30.476488 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:30.476468 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-monitoring/monitoring-plugin-7dccd58f55-w9klz" Apr 24 19:07:30.483583 ip-10-0-130-85 
kubenswrapper[2571]: I0424 19:07:30.483564 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-7dccd58f55-w9klz"
Apr 24 19:07:30.538261 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:30.538205 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-7dccd58f55-w9klz" podStartSLOduration=2.073610307 podStartE2EDuration="3.538189223s" podCreationTimestamp="2026-04-24 19:07:27 +0000 UTC" firstStartedPulling="2026-04-24 19:07:28.542788968 +0000 UTC m=+59.032213244" lastFinishedPulling="2026-04-24 19:07:30.007367873 +0000 UTC m=+60.496792160" observedRunningTime="2026-04-24 19:07:30.537685354 +0000 UTC m=+61.027109644" watchObservedRunningTime="2026-04-24 19:07:30.538189223 +0000 UTC m=+61.027613566"
Apr 24 19:07:34.489481 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:34.489443 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2e1f5d2f-cc78-4f75-b9b4-976566e660fe","Type":"ContainerStarted","Data":"5a1c48102148e451dee1a2629445a660f90ec76d2af6f5d75869223deb0391da"}
Apr 24 19:07:34.489481 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:34.489482 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2e1f5d2f-cc78-4f75-b9b4-976566e660fe","Type":"ContainerStarted","Data":"ee65c74a44bed7911166558ecc42be608920c8633c14fef833879fd52613f839"}
Apr 24 19:07:34.489481 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:34.489492 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2e1f5d2f-cc78-4f75-b9b4-976566e660fe","Type":"ContainerStarted","Data":"d9052633cb2d41f4006b17ec8ce933cefb6c972420a319069d890b329b9c05e9"}
Apr 24 19:07:34.489971 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:34.489501 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2e1f5d2f-cc78-4f75-b9b4-976566e660fe","Type":"ContainerStarted","Data":"1276125089ed250c8b7f5a84b318ce1ab6391438860f47fcae7735478e94f35f"}
Apr 24 19:07:34.489971 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:34.489509 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2e1f5d2f-cc78-4f75-b9b4-976566e660fe","Type":"ContainerStarted","Data":"ba0c182a39ddf2236cefd98cf1dd86b9fee5e2592e9dc44c98d2cdc7de6045fa"}
Apr 24 19:07:34.489971 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:34.489517 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2e1f5d2f-cc78-4f75-b9b4-976566e660fe","Type":"ContainerStarted","Data":"fd867fba1986b47f29050ca036f1233ddd8093e7a0e95a2d07feb7b9c68e1d7e"}
Apr 24 19:07:34.524538 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:34.524461 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=3.511803012 podStartE2EDuration="6.524444743s" podCreationTimestamp="2026-04-24 19:07:28 +0000 UTC" firstStartedPulling="2026-04-24 19:07:30.475835705 +0000 UTC m=+60.965259982" lastFinishedPulling="2026-04-24 19:07:33.488477431 +0000 UTC m=+63.977901713" observedRunningTime="2026-04-24 19:07:34.522425867 +0000 UTC m=+65.011850202" watchObservedRunningTime="2026-04-24 19:07:34.524444743 +0000 UTC m=+65.013869040"
Apr 24 19:07:34.784365 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:34.784274 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a2cb1aa-59c8-4abb-9a8a-daf68903a358-metrics-certs\") pod \"network-metrics-daemon-l2vd2\" (UID: \"4a2cb1aa-59c8-4abb-9a8a-daf68903a358\") " pod="openshift-multus/network-metrics-daemon-l2vd2"
Apr 24 19:07:34.787724 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:34.787705 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Apr 24 19:07:34.796884 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:34.796864 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a2cb1aa-59c8-4abb-9a8a-daf68903a358-metrics-certs\") pod \"network-metrics-daemon-l2vd2\" (UID: \"4a2cb1aa-59c8-4abb-9a8a-daf68903a358\") " pod="openshift-multus/network-metrics-daemon-l2vd2"
Apr 24 19:07:34.885190 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:34.885154 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tcjp5\" (UniqueName: \"kubernetes.io/projected/4c1189cb-51d5-415c-8dc4-6c1970496280-kube-api-access-tcjp5\") pod \"network-check-target-f4z44\" (UID: \"4c1189cb-51d5-415c-8dc4-6c1970496280\") " pod="openshift-network-diagnostics/network-check-target-f4z44"
Apr 24 19:07:34.888022 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:34.888004 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Apr 24 19:07:34.898249 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:34.898233 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Apr 24 19:07:34.908808 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:34.908788 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcjp5\" (UniqueName: \"kubernetes.io/projected/4c1189cb-51d5-415c-8dc4-6c1970496280-kube-api-access-tcjp5\") pod \"network-check-target-f4z44\" (UID: \"4c1189cb-51d5-415c-8dc4-6c1970496280\") " pod="openshift-network-diagnostics/network-check-target-f4z44"
Apr 24 19:07:34.920341 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:34.920321 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-5l888\""
Apr 24 19:07:34.926479 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:34.926464 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-vpn2d\""
Apr 24 19:07:34.928355 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:34.928340 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-f4z44"
Apr 24 19:07:34.935027 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:34.935006 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l2vd2"
Apr 24 19:07:35.058663 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:35.058501 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-f4z44"]
Apr 24 19:07:35.062027 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:07:35.062001 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c1189cb_51d5_415c_8dc4_6c1970496280.slice/crio-1645fa77d3582236c43653c27fa95c34bb40eb357cf2ede8b66d13ddc977055b WatchSource:0}: Error finding container 1645fa77d3582236c43653c27fa95c34bb40eb357cf2ede8b66d13ddc977055b: Status 404 returned error can't find the container with id 1645fa77d3582236c43653c27fa95c34bb40eb357cf2ede8b66d13ddc977055b
Apr 24 19:07:35.078129 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:35.078105 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-l2vd2"]
Apr 24 19:07:35.080458 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:07:35.080428 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a2cb1aa_59c8_4abb_9a8a_daf68903a358.slice/crio-4611815694d53695797e1af0e6683ef568e4171a608e12f5aead08ff53b60858 WatchSource:0}: Error finding container 4611815694d53695797e1af0e6683ef568e4171a608e12f5aead08ff53b60858: Status 404 returned error can't find the container with id 4611815694d53695797e1af0e6683ef568e4171a608e12f5aead08ff53b60858
Apr 24 19:07:35.482195 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:35.482170 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-6c69ddf84f-cltxb"
Apr 24 19:07:35.493857 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:35.493819 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-l2vd2" event={"ID":"4a2cb1aa-59c8-4abb-9a8a-daf68903a358","Type":"ContainerStarted","Data":"4611815694d53695797e1af0e6683ef568e4171a608e12f5aead08ff53b60858"}
Apr 24 19:07:35.494890 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:35.494867 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-f4z44" event={"ID":"4c1189cb-51d5-415c-8dc4-6c1970496280","Type":"ContainerStarted","Data":"1645fa77d3582236c43653c27fa95c34bb40eb357cf2ede8b66d13ddc977055b"}
Apr 24 19:07:37.351775 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:37.351741 2571 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0"
Apr 24 19:07:37.417028 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:37.416999 2571 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0"
Apr 24 19:07:37.503148 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:37.503113 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-l2vd2" event={"ID":"4a2cb1aa-59c8-4abb-9a8a-daf68903a358","Type":"ContainerStarted","Data":"1b8d64ef159dc0e318f4e80d6b3b11c8f07012e84aa789d64469acc23ce354b7"}
Apr 24 19:07:37.503148 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:37.503153 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-l2vd2" event={"ID":"4a2cb1aa-59c8-4abb-9a8a-daf68903a358","Type":"ContainerStarted","Data":"0b4da74cd4ca949329c896a58f743396462f06d447e7901514ea0a194efbdcd5"}
Apr 24 19:07:37.503708 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:37.503684 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-monitoring/prometheus-k8s-0"
Apr 24 19:07:37.521009 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:37.520980 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0"
Apr 24 19:07:37.523749 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:37.523705 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-l2vd2" podStartSLOduration=65.77082316 podStartE2EDuration="1m7.523689771s" podCreationTimestamp="2026-04-24 19:06:30 +0000 UTC" firstStartedPulling="2026-04-24 19:07:35.08221851 +0000 UTC m=+65.571642786" lastFinishedPulling="2026-04-24 19:07:36.83508511 +0000 UTC m=+67.324509397" observedRunningTime="2026-04-24 19:07:37.521479815 +0000 UTC m=+68.010904113" watchObservedRunningTime="2026-04-24 19:07:37.523689771 +0000 UTC m=+68.013114095"
Apr 24 19:07:38.506979 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:38.506941 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-f4z44" event={"ID":"4c1189cb-51d5-415c-8dc4-6c1970496280","Type":"ContainerStarted","Data":"157ceb7d66aa56fd7687f1d6b34606dd4d6668c08165c5bd25d61ff227e61ff7"}
Apr 24 19:07:38.507481 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:38.507463 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-f4z44"
Apr 24 19:07:38.526362 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:07:38.526313 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-target-f4z44" podStartSLOduration=65.468363474 podStartE2EDuration="1m8.526299118s" podCreationTimestamp="2026-04-24 19:06:30 +0000 UTC" firstStartedPulling="2026-04-24 19:07:35.064351173 +0000 UTC m=+65.553775449" lastFinishedPulling="2026-04-24 19:07:38.122286814 +0000 UTC m=+68.611711093" observedRunningTime="2026-04-24 19:07:38.525547196 +0000 UTC m=+69.014971499" watchObservedRunningTime="2026-04-24 19:07:38.526299118 +0000 UTC m=+69.015723475"
Apr 24 19:08:10.519260 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:10.519151 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-f4z44"
Apr 24 19:08:12.547481 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:12.547451 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Apr 24 19:08:12.548100 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:12.548052 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="de4af98c-d040-481e-8fac-b92e6e4ef1a6" containerName="alertmanager" containerID="cri-o://d490b8fbf3b7936b4ee0de56ae224b76f4a39b3ec8af7e0f4e9d2a0b7b7e617c" gracePeriod=120
Apr 24 19:08:12.548202 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:12.548114 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="de4af98c-d040-481e-8fac-b92e6e4ef1a6" containerName="kube-rbac-proxy-web" containerID="cri-o://635edf91ca355bb721571e53582ca9af0606c02828099d110e25f34e8cd52356" gracePeriod=120
Apr 24 19:08:12.548202 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:12.548117 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="de4af98c-d040-481e-8fac-b92e6e4ef1a6" containerName="prom-label-proxy" containerID="cri-o://b0392a05a5fad895610b3b9ad43ee09afe6885e74af0008c450f4422f89f7b43" gracePeriod=120
Apr 24 19:08:12.548202 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:12.548146 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="de4af98c-d040-481e-8fac-b92e6e4ef1a6" containerName="kube-rbac-proxy" containerID="cri-o://7086bcb71796f2904e356bf97b3fc740278ccb393c73ad8b661c97f84305e05b" gracePeriod=120
Apr 24 19:08:12.548202 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:12.548167 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="de4af98c-d040-481e-8fac-b92e6e4ef1a6" containerName="config-reloader" containerID="cri-o://771982c9a7e842da878757cd525e7da3501e7d52743df17b66c2cf2e36123fd0" gracePeriod=120
Apr 24 19:08:12.548364 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:12.548219 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="de4af98c-d040-481e-8fac-b92e6e4ef1a6" containerName="kube-rbac-proxy-metric" containerID="cri-o://9b42623bd1a539e0eb1af56e10b4306e2f2fffba3f6759412655b4aa14a5398b" gracePeriod=120
Apr 24 19:08:13.609533 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.609505 2571 generic.go:358] "Generic (PLEG): container finished" podID="de4af98c-d040-481e-8fac-b92e6e4ef1a6" containerID="b0392a05a5fad895610b3b9ad43ee09afe6885e74af0008c450f4422f89f7b43" exitCode=0
Apr 24 19:08:13.609533 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.609527 2571 generic.go:358] "Generic (PLEG): container finished" podID="de4af98c-d040-481e-8fac-b92e6e4ef1a6" containerID="9b42623bd1a539e0eb1af56e10b4306e2f2fffba3f6759412655b4aa14a5398b" exitCode=0
Apr 24 19:08:13.609533 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.609533 2571 generic.go:358] "Generic (PLEG): container finished" podID="de4af98c-d040-481e-8fac-b92e6e4ef1a6" containerID="7086bcb71796f2904e356bf97b3fc740278ccb393c73ad8b661c97f84305e05b" exitCode=0
Apr 24 19:08:13.609533 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.609538 2571 generic.go:358] "Generic (PLEG): container finished" podID="de4af98c-d040-481e-8fac-b92e6e4ef1a6" containerID="771982c9a7e842da878757cd525e7da3501e7d52743df17b66c2cf2e36123fd0" exitCode=0
Apr 24 19:08:13.610036 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.609543 2571 generic.go:358] "Generic (PLEG): container finished" podID="de4af98c-d040-481e-8fac-b92e6e4ef1a6" containerID="d490b8fbf3b7936b4ee0de56ae224b76f4a39b3ec8af7e0f4e9d2a0b7b7e617c" exitCode=0
Apr 24 19:08:13.610036 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.609547 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"de4af98c-d040-481e-8fac-b92e6e4ef1a6","Type":"ContainerDied","Data":"b0392a05a5fad895610b3b9ad43ee09afe6885e74af0008c450f4422f89f7b43"}
Apr 24 19:08:13.610036 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.609591 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"de4af98c-d040-481e-8fac-b92e6e4ef1a6","Type":"ContainerDied","Data":"9b42623bd1a539e0eb1af56e10b4306e2f2fffba3f6759412655b4aa14a5398b"}
Apr 24 19:08:13.610036 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.609607 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"de4af98c-d040-481e-8fac-b92e6e4ef1a6","Type":"ContainerDied","Data":"7086bcb71796f2904e356bf97b3fc740278ccb393c73ad8b661c97f84305e05b"}
Apr 24 19:08:13.610036 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.609619 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"de4af98c-d040-481e-8fac-b92e6e4ef1a6","Type":"ContainerDied","Data":"771982c9a7e842da878757cd525e7da3501e7d52743df17b66c2cf2e36123fd0"}
Apr 24 19:08:13.610036 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.609634 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"de4af98c-d040-481e-8fac-b92e6e4ef1a6","Type":"ContainerDied","Data":"d490b8fbf3b7936b4ee0de56ae224b76f4a39b3ec8af7e0f4e9d2a0b7b7e617c"}
Apr 24 19:08:13.780873 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.780850 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Apr 24 19:08:13.804377 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.804358 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/de4af98c-d040-481e-8fac-b92e6e4ef1a6-alertmanager-main-db\") pod \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") "
Apr 24 19:08:13.804487 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.804391 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-config-volume\") pod \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") "
Apr 24 19:08:13.804487 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.804416 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/de4af98c-d040-481e-8fac-b92e6e4ef1a6-metrics-client-ca\") pod \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") "
Apr 24 19:08:13.804487 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.804438 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/de4af98c-d040-481e-8fac-b92e6e4ef1a6-config-out\") pod \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") "
Apr 24 19:08:13.804487 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.804455 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-secret-alertmanager-kube-rbac-proxy\") pod \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") "
Apr 24 19:08:13.804487 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.804475 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-secret-alertmanager-main-tls\") pod \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") "
Apr 24 19:08:13.804730 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.804503 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-secret-alertmanager-kube-rbac-proxy-web\") pod \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") "
Apr 24 19:08:13.804730 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.804528 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/de4af98c-d040-481e-8fac-b92e6e4ef1a6-tls-assets\") pod \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") "
Apr 24 19:08:13.804730 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.804564 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hsr8\" (UniqueName: \"kubernetes.io/projected/de4af98c-d040-481e-8fac-b92e6e4ef1a6-kube-api-access-6hsr8\") pod \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") "
Apr 24 19:08:13.804730 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.804605 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de4af98c-d040-481e-8fac-b92e6e4ef1a6-alertmanager-trusted-ca-bundle\") pod \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") "
Apr 24 19:08:13.804730 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.804645 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-secret-alertmanager-kube-rbac-proxy-metric\") pod \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") "
Apr 24 19:08:13.804730 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.804687 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-cluster-tls-config\") pod \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") "
Apr 24 19:08:13.804730 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.804714 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-web-config\") pod \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\" (UID: \"de4af98c-d040-481e-8fac-b92e6e4ef1a6\") "
Apr 24 19:08:13.805090 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.804721 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de4af98c-d040-481e-8fac-b92e6e4ef1a6-alertmanager-main-db" (OuterVolumeSpecName: "alertmanager-main-db") pod "de4af98c-d040-481e-8fac-b92e6e4ef1a6" (UID: "de4af98c-d040-481e-8fac-b92e6e4ef1a6"). InnerVolumeSpecName "alertmanager-main-db". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Apr 24 19:08:13.805149 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.805115 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de4af98c-d040-481e-8fac-b92e6e4ef1a6-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "de4af98c-d040-481e-8fac-b92e6e4ef1a6" (UID: "de4af98c-d040-481e-8fac-b92e6e4ef1a6"). InnerVolumeSpecName "metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 24 19:08:13.805598 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.805372 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de4af98c-d040-481e-8fac-b92e6e4ef1a6-alertmanager-trusted-ca-bundle" (OuterVolumeSpecName: "alertmanager-trusted-ca-bundle") pod "de4af98c-d040-481e-8fac-b92e6e4ef1a6" (UID: "de4af98c-d040-481e-8fac-b92e6e4ef1a6"). InnerVolumeSpecName "alertmanager-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 24 19:08:13.808710 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.808684 2571 reconciler_common.go:299] "Volume detached for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/de4af98c-d040-481e-8fac-b92e6e4ef1a6-alertmanager-main-db\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\""
Apr 24 19:08:13.808811 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.808723 2571 reconciler_common.go:299] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/de4af98c-d040-481e-8fac-b92e6e4ef1a6-metrics-client-ca\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\""
Apr 24 19:08:13.808811 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.808739 2571 reconciler_common.go:299] "Volume detached for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de4af98c-d040-481e-8fac-b92e6e4ef1a6-alertmanager-trusted-ca-bundle\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\""
Apr 24 19:08:13.809733 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.809707 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-secret-alertmanager-main-tls" (OuterVolumeSpecName: "secret-alertmanager-main-tls") pod "de4af98c-d040-481e-8fac-b92e6e4ef1a6" (UID: "de4af98c-d040-481e-8fac-b92e6e4ef1a6"). InnerVolumeSpecName "secret-alertmanager-main-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 24 19:08:13.809830 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.809739 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de4af98c-d040-481e-8fac-b92e6e4ef1a6-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "de4af98c-d040-481e-8fac-b92e6e4ef1a6" (UID: "de4af98c-d040-481e-8fac-b92e6e4ef1a6"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 24 19:08:13.809830 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.809804 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de4af98c-d040-481e-8fac-b92e6e4ef1a6-config-out" (OuterVolumeSpecName: "config-out") pod "de4af98c-d040-481e-8fac-b92e6e4ef1a6" (UID: "de4af98c-d040-481e-8fac-b92e6e4ef1a6"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Apr 24 19:08:13.809966 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.809832 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-secret-alertmanager-kube-rbac-proxy" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy") pod "de4af98c-d040-481e-8fac-b92e6e4ef1a6" (UID: "de4af98c-d040-481e-8fac-b92e6e4ef1a6"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 24 19:08:13.809966 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.809879 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-config-volume" (OuterVolumeSpecName: "config-volume") pod "de4af98c-d040-481e-8fac-b92e6e4ef1a6" (UID: "de4af98c-d040-481e-8fac-b92e6e4ef1a6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 24 19:08:13.810091 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.810056 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-secret-alertmanager-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-web") pod "de4af98c-d040-481e-8fac-b92e6e4ef1a6" (UID: "de4af98c-d040-481e-8fac-b92e6e4ef1a6"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-web". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 24 19:08:13.810347 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.810324 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de4af98c-d040-481e-8fac-b92e6e4ef1a6-kube-api-access-6hsr8" (OuterVolumeSpecName: "kube-api-access-6hsr8") pod "de4af98c-d040-481e-8fac-b92e6e4ef1a6" (UID: "de4af98c-d040-481e-8fac-b92e6e4ef1a6"). InnerVolumeSpecName "kube-api-access-6hsr8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 24 19:08:13.812117 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.811362 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-secret-alertmanager-kube-rbac-proxy-metric" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-metric") pod "de4af98c-d040-481e-8fac-b92e6e4ef1a6" (UID: "de4af98c-d040-481e-8fac-b92e6e4ef1a6"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-metric". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 24 19:08:13.825428 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.825162 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-cluster-tls-config" (OuterVolumeSpecName: "cluster-tls-config") pod "de4af98c-d040-481e-8fac-b92e6e4ef1a6" (UID: "de4af98c-d040-481e-8fac-b92e6e4ef1a6"). InnerVolumeSpecName "cluster-tls-config". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 24 19:08:13.831296 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.831266 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-web-config" (OuterVolumeSpecName: "web-config") pod "de4af98c-d040-481e-8fac-b92e6e4ef1a6" (UID: "de4af98c-d040-481e-8fac-b92e6e4ef1a6"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 24 19:08:13.909724 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.909697 2571 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-config-volume\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\""
Apr 24 19:08:13.909724 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.909724 2571 reconciler_common.go:299] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/de4af98c-d040-481e-8fac-b92e6e4ef1a6-config-out\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\""
Apr 24 19:08:13.909896 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.909737 2571 reconciler_common.go:299] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-secret-alertmanager-kube-rbac-proxy\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\""
Apr 24 19:08:13.909896 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.909749 2571 reconciler_common.go:299] "Volume detached for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-secret-alertmanager-main-tls\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\""
Apr 24 19:08:13.909896 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.909759 2571 reconciler_common.go:299] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-secret-alertmanager-kube-rbac-proxy-web\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\""
Apr 24 19:08:13.909896 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.909768 2571 reconciler_common.go:299] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/de4af98c-d040-481e-8fac-b92e6e4ef1a6-tls-assets\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\""
Apr 24 19:08:13.909896 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.909778 2571 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6hsr8\" (UniqueName: \"kubernetes.io/projected/de4af98c-d040-481e-8fac-b92e6e4ef1a6-kube-api-access-6hsr8\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\""
Apr 24 19:08:13.909896 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.909787 2571 reconciler_common.go:299] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-secret-alertmanager-kube-rbac-proxy-metric\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\""
Apr 24 19:08:13.909896 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.909796 2571 reconciler_common.go:299] "Volume detached for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-cluster-tls-config\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\""
Apr 24 19:08:13.909896 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:13.909805 2571 reconciler_common.go:299] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/de4af98c-d040-481e-8fac-b92e6e4ef1a6-web-config\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\""
Apr 24 19:08:14.615659 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.615613 2571 generic.go:358] "Generic (PLEG): container finished" podID="de4af98c-d040-481e-8fac-b92e6e4ef1a6" containerID="635edf91ca355bb721571e53582ca9af0606c02828099d110e25f34e8cd52356" exitCode=0
Apr 24 19:08:14.616046 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.615692 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"de4af98c-d040-481e-8fac-b92e6e4ef1a6","Type":"ContainerDied","Data":"635edf91ca355bb721571e53582ca9af0606c02828099d110e25f34e8cd52356"}
Apr 24 19:08:14.616046 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.615720 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Apr 24 19:08:14.616046 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.615750 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"de4af98c-d040-481e-8fac-b92e6e4ef1a6","Type":"ContainerDied","Data":"9856b1c6c511033d58fcf1d0b3da55cd01bc95a3eb5dd65686f7e3788b0e07b1"}
Apr 24 19:08:14.616046 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.615772 2571 scope.go:117] "RemoveContainer" containerID="b0392a05a5fad895610b3b9ad43ee09afe6885e74af0008c450f4422f89f7b43"
Apr 24 19:08:14.623544 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.623527 2571 scope.go:117] "RemoveContainer" containerID="9b42623bd1a539e0eb1af56e10b4306e2f2fffba3f6759412655b4aa14a5398b"
Apr 24 19:08:14.629866 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.629848 2571 scope.go:117] "RemoveContainer" containerID="7086bcb71796f2904e356bf97b3fc740278ccb393c73ad8b661c97f84305e05b"
Apr 24 19:08:14.636013 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.635993 2571 scope.go:117] "RemoveContainer" containerID="635edf91ca355bb721571e53582ca9af0606c02828099d110e25f34e8cd52356"
Apr 24 19:08:14.639551 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.639526 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Apr 24 19:08:14.643680 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.643655 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Apr 24 19:08:14.644081 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.644066 2571 scope.go:117] "RemoveContainer" containerID="771982c9a7e842da878757cd525e7da3501e7d52743df17b66c2cf2e36123fd0"
Apr 24 19:08:14.650309 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.650294 2571 scope.go:117] "RemoveContainer" containerID="d490b8fbf3b7936b4ee0de56ae224b76f4a39b3ec8af7e0f4e9d2a0b7b7e617c"
Apr 24 19:08:14.656516 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.656502 2571 scope.go:117] "RemoveContainer" containerID="090a5607e5d6d869e3454558851255961c3195442b2446ce628e3bc7aaa15aa8"
Apr 24 19:08:14.663138 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.663013 2571 scope.go:117] "RemoveContainer" containerID="b0392a05a5fad895610b3b9ad43ee09afe6885e74af0008c450f4422f89f7b43"
Apr 24 19:08:14.663296 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:08:14.663276 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0392a05a5fad895610b3b9ad43ee09afe6885e74af0008c450f4422f89f7b43\": container with ID starting with b0392a05a5fad895610b3b9ad43ee09afe6885e74af0008c450f4422f89f7b43 not found: ID does not exist" containerID="b0392a05a5fad895610b3b9ad43ee09afe6885e74af0008c450f4422f89f7b43"
Apr 24 19:08:14.663336 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.663305 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0392a05a5fad895610b3b9ad43ee09afe6885e74af0008c450f4422f89f7b43"} err="failed to get container status \"b0392a05a5fad895610b3b9ad43ee09afe6885e74af0008c450f4422f89f7b43\": rpc error: code = NotFound desc = could not find container \"b0392a05a5fad895610b3b9ad43ee09afe6885e74af0008c450f4422f89f7b43\": container with ID starting with b0392a05a5fad895610b3b9ad43ee09afe6885e74af0008c450f4422f89f7b43 not found: ID does not exist"
Apr 24 19:08:14.663389 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.663340 2571 scope.go:117] "RemoveContainer" containerID="9b42623bd1a539e0eb1af56e10b4306e2f2fffba3f6759412655b4aa14a5398b"
Apr 24 19:08:14.663557 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:08:14.663538 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b42623bd1a539e0eb1af56e10b4306e2f2fffba3f6759412655b4aa14a5398b\": container with ID starting with 9b42623bd1a539e0eb1af56e10b4306e2f2fffba3f6759412655b4aa14a5398b not found: ID does not exist" containerID="9b42623bd1a539e0eb1af56e10b4306e2f2fffba3f6759412655b4aa14a5398b"
Apr 24 19:08:14.663602 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.663563 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b42623bd1a539e0eb1af56e10b4306e2f2fffba3f6759412655b4aa14a5398b"} err="failed to get container status \"9b42623bd1a539e0eb1af56e10b4306e2f2fffba3f6759412655b4aa14a5398b\": rpc error: code = NotFound desc = could not find container \"9b42623bd1a539e0eb1af56e10b4306e2f2fffba3f6759412655b4aa14a5398b\": container with ID starting with 9b42623bd1a539e0eb1af56e10b4306e2f2fffba3f6759412655b4aa14a5398b not found: ID does not exist"
Apr 24 19:08:14.663602 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.663579 2571 scope.go:117] "RemoveContainer" containerID="7086bcb71796f2904e356bf97b3fc740278ccb393c73ad8b661c97f84305e05b"
Apr 24 19:08:14.663780 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:08:14.663765 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7086bcb71796f2904e356bf97b3fc740278ccb393c73ad8b661c97f84305e05b\": container with ID starting with 7086bcb71796f2904e356bf97b3fc740278ccb393c73ad8b661c97f84305e05b not found: ID does not exist" containerID="7086bcb71796f2904e356bf97b3fc740278ccb393c73ad8b661c97f84305e05b"
Apr 24 19:08:14.663817 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.663784 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7086bcb71796f2904e356bf97b3fc740278ccb393c73ad8b661c97f84305e05b"} err="failed to get container status \"7086bcb71796f2904e356bf97b3fc740278ccb393c73ad8b661c97f84305e05b\": rpc error: code = NotFound desc = could not find container 
\"7086bcb71796f2904e356bf97b3fc740278ccb393c73ad8b661c97f84305e05b\": container with ID starting with 7086bcb71796f2904e356bf97b3fc740278ccb393c73ad8b661c97f84305e05b not found: ID does not exist" Apr 24 19:08:14.663817 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.663799 2571 scope.go:117] "RemoveContainer" containerID="635edf91ca355bb721571e53582ca9af0606c02828099d110e25f34e8cd52356" Apr 24 19:08:14.664037 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:08:14.664019 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"635edf91ca355bb721571e53582ca9af0606c02828099d110e25f34e8cd52356\": container with ID starting with 635edf91ca355bb721571e53582ca9af0606c02828099d110e25f34e8cd52356 not found: ID does not exist" containerID="635edf91ca355bb721571e53582ca9af0606c02828099d110e25f34e8cd52356" Apr 24 19:08:14.664086 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.664041 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"635edf91ca355bb721571e53582ca9af0606c02828099d110e25f34e8cd52356"} err="failed to get container status \"635edf91ca355bb721571e53582ca9af0606c02828099d110e25f34e8cd52356\": rpc error: code = NotFound desc = could not find container \"635edf91ca355bb721571e53582ca9af0606c02828099d110e25f34e8cd52356\": container with ID starting with 635edf91ca355bb721571e53582ca9af0606c02828099d110e25f34e8cd52356 not found: ID does not exist" Apr 24 19:08:14.664086 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.664055 2571 scope.go:117] "RemoveContainer" containerID="771982c9a7e842da878757cd525e7da3501e7d52743df17b66c2cf2e36123fd0" Apr 24 19:08:14.664266 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:08:14.664250 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"771982c9a7e842da878757cd525e7da3501e7d52743df17b66c2cf2e36123fd0\": container with ID starting with 
771982c9a7e842da878757cd525e7da3501e7d52743df17b66c2cf2e36123fd0 not found: ID does not exist" containerID="771982c9a7e842da878757cd525e7da3501e7d52743df17b66c2cf2e36123fd0" Apr 24 19:08:14.664305 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.664269 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"771982c9a7e842da878757cd525e7da3501e7d52743df17b66c2cf2e36123fd0"} err="failed to get container status \"771982c9a7e842da878757cd525e7da3501e7d52743df17b66c2cf2e36123fd0\": rpc error: code = NotFound desc = could not find container \"771982c9a7e842da878757cd525e7da3501e7d52743df17b66c2cf2e36123fd0\": container with ID starting with 771982c9a7e842da878757cd525e7da3501e7d52743df17b66c2cf2e36123fd0 not found: ID does not exist" Apr 24 19:08:14.664305 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.664282 2571 scope.go:117] "RemoveContainer" containerID="d490b8fbf3b7936b4ee0de56ae224b76f4a39b3ec8af7e0f4e9d2a0b7b7e617c" Apr 24 19:08:14.664480 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:08:14.664466 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d490b8fbf3b7936b4ee0de56ae224b76f4a39b3ec8af7e0f4e9d2a0b7b7e617c\": container with ID starting with d490b8fbf3b7936b4ee0de56ae224b76f4a39b3ec8af7e0f4e9d2a0b7b7e617c not found: ID does not exist" containerID="d490b8fbf3b7936b4ee0de56ae224b76f4a39b3ec8af7e0f4e9d2a0b7b7e617c" Apr 24 19:08:14.664520 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.664483 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d490b8fbf3b7936b4ee0de56ae224b76f4a39b3ec8af7e0f4e9d2a0b7b7e617c"} err="failed to get container status \"d490b8fbf3b7936b4ee0de56ae224b76f4a39b3ec8af7e0f4e9d2a0b7b7e617c\": rpc error: code = NotFound desc = could not find container \"d490b8fbf3b7936b4ee0de56ae224b76f4a39b3ec8af7e0f4e9d2a0b7b7e617c\": container with ID starting with 
d490b8fbf3b7936b4ee0de56ae224b76f4a39b3ec8af7e0f4e9d2a0b7b7e617c not found: ID does not exist" Apr 24 19:08:14.664520 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.664495 2571 scope.go:117] "RemoveContainer" containerID="090a5607e5d6d869e3454558851255961c3195442b2446ce628e3bc7aaa15aa8" Apr 24 19:08:14.664709 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:08:14.664690 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"090a5607e5d6d869e3454558851255961c3195442b2446ce628e3bc7aaa15aa8\": container with ID starting with 090a5607e5d6d869e3454558851255961c3195442b2446ce628e3bc7aaa15aa8 not found: ID does not exist" containerID="090a5607e5d6d869e3454558851255961c3195442b2446ce628e3bc7aaa15aa8" Apr 24 19:08:14.664775 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.664715 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"090a5607e5d6d869e3454558851255961c3195442b2446ce628e3bc7aaa15aa8"} err="failed to get container status \"090a5607e5d6d869e3454558851255961c3195442b2446ce628e3bc7aaa15aa8\": rpc error: code = NotFound desc = could not find container \"090a5607e5d6d869e3454558851255961c3195442b2446ce628e3bc7aaa15aa8\": container with ID starting with 090a5607e5d6d869e3454558851255961c3195442b2446ce628e3bc7aaa15aa8 not found: ID does not exist" Apr 24 19:08:14.673029 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.673006 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 24 19:08:14.673596 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.673397 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="de4af98c-d040-481e-8fac-b92e6e4ef1a6" containerName="init-config-reloader" Apr 24 19:08:14.673596 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.673417 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="de4af98c-d040-481e-8fac-b92e6e4ef1a6" 
containerName="init-config-reloader" Apr 24 19:08:14.673596 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.673431 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="de4af98c-d040-481e-8fac-b92e6e4ef1a6" containerName="prom-label-proxy" Apr 24 19:08:14.673596 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.673440 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="de4af98c-d040-481e-8fac-b92e6e4ef1a6" containerName="prom-label-proxy" Apr 24 19:08:14.673596 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.673458 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="de4af98c-d040-481e-8fac-b92e6e4ef1a6" containerName="config-reloader" Apr 24 19:08:14.673596 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.673467 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="de4af98c-d040-481e-8fac-b92e6e4ef1a6" containerName="config-reloader" Apr 24 19:08:14.673596 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.673486 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="de4af98c-d040-481e-8fac-b92e6e4ef1a6" containerName="kube-rbac-proxy" Apr 24 19:08:14.673596 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.673494 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="de4af98c-d040-481e-8fac-b92e6e4ef1a6" containerName="kube-rbac-proxy" Apr 24 19:08:14.673596 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.673509 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="de4af98c-d040-481e-8fac-b92e6e4ef1a6" containerName="alertmanager" Apr 24 19:08:14.673596 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.673518 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="de4af98c-d040-481e-8fac-b92e6e4ef1a6" containerName="alertmanager" Apr 24 19:08:14.673596 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.673531 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="de4af98c-d040-481e-8fac-b92e6e4ef1a6" containerName="kube-rbac-proxy-metric" Apr 24 19:08:14.673596 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.673540 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="de4af98c-d040-481e-8fac-b92e6e4ef1a6" containerName="kube-rbac-proxy-metric" Apr 24 19:08:14.673596 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.673554 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="de4af98c-d040-481e-8fac-b92e6e4ef1a6" containerName="kube-rbac-proxy-web" Apr 24 19:08:14.673596 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.673563 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="de4af98c-d040-481e-8fac-b92e6e4ef1a6" containerName="kube-rbac-proxy-web" Apr 24 19:08:14.674183 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.673632 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="de4af98c-d040-481e-8fac-b92e6e4ef1a6" containerName="kube-rbac-proxy-metric" Apr 24 19:08:14.674183 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.673650 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="de4af98c-d040-481e-8fac-b92e6e4ef1a6" containerName="alertmanager" Apr 24 19:08:14.674183 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.673660 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="de4af98c-d040-481e-8fac-b92e6e4ef1a6" containerName="kube-rbac-proxy-web" Apr 24 19:08:14.674183 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.673672 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="de4af98c-d040-481e-8fac-b92e6e4ef1a6" containerName="config-reloader" Apr 24 19:08:14.674183 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.673681 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="de4af98c-d040-481e-8fac-b92e6e4ef1a6" containerName="prom-label-proxy" Apr 24 19:08:14.674183 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.673691 2571 memory_manager.go:356] "RemoveStaleState 
removing state" podUID="de4af98c-d040-481e-8fac-b92e6e4ef1a6" containerName="kube-rbac-proxy" Apr 24 19:08:14.678307 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.678291 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.681532 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.681510 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-tls\"" Apr 24 19:08:14.681636 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.681617 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy-web\"" Apr 24 19:08:14.681636 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.681632 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-cluster-tls-config\"" Apr 24 19:08:14.681755 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.681653 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy\"" Apr 24 19:08:14.681923 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.681888 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-dockercfg-4xz9n\"" Apr 24 19:08:14.681990 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.681948 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-generated\"" Apr 24 19:08:14.681990 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.681962 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-web-config\"" Apr 24 19:08:14.682069 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.681950 2571 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-monitoring\"/\"alertmanager-main-tls-assets-0\"" Apr 24 19:08:14.682069 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.681949 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy-metric\"" Apr 24 19:08:14.686934 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.686899 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"alertmanager-trusted-ca-bundle\"" Apr 24 19:08:14.693349 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.693327 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 24 19:08:14.716059 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.716016 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/025a79e2-8d13-41c3-89b6-2c7dd9616a66-config-volume\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.716059 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.716049 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/025a79e2-8d13-41c3-89b6-2c7dd9616a66-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.716257 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.716082 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/025a79e2-8d13-41c3-89b6-2c7dd9616a66-web-config\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.716257 
ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.716123 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/025a79e2-8d13-41c3-89b6-2c7dd9616a66-cluster-tls-config\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.716257 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.716150 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/025a79e2-8d13-41c3-89b6-2c7dd9616a66-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.716257 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.716169 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/025a79e2-8d13-41c3-89b6-2c7dd9616a66-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.716257 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.716190 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/025a79e2-8d13-41c3-89b6-2c7dd9616a66-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.716257 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.716219 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" 
(UniqueName: \"kubernetes.io/secret/025a79e2-8d13-41c3-89b6-2c7dd9616a66-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.716257 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.716242 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/025a79e2-8d13-41c3-89b6-2c7dd9616a66-config-out\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.716257 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.716259 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/025a79e2-8d13-41c3-89b6-2c7dd9616a66-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.716512 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.716280 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcv72\" (UniqueName: \"kubernetes.io/projected/025a79e2-8d13-41c3-89b6-2c7dd9616a66-kube-api-access-rcv72\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.716512 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.716377 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/025a79e2-8d13-41c3-89b6-2c7dd9616a66-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.716512 
ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.716412 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/025a79e2-8d13-41c3-89b6-2c7dd9616a66-tls-assets\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.816748 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.816719 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/025a79e2-8d13-41c3-89b6-2c7dd9616a66-tls-assets\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.816748 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.816755 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/025a79e2-8d13-41c3-89b6-2c7dd9616a66-config-volume\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.817011 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.816774 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/025a79e2-8d13-41c3-89b6-2c7dd9616a66-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.817011 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.816793 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/025a79e2-8d13-41c3-89b6-2c7dd9616a66-web-config\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.817011 
ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.816821 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/025a79e2-8d13-41c3-89b6-2c7dd9616a66-cluster-tls-config\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.817011 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.816858 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/025a79e2-8d13-41c3-89b6-2c7dd9616a66-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.817011 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.816917 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/025a79e2-8d13-41c3-89b6-2c7dd9616a66-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.817011 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.816955 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/025a79e2-8d13-41c3-89b6-2c7dd9616a66-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.817011 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.816986 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: 
\"kubernetes.io/secret/025a79e2-8d13-41c3-89b6-2c7dd9616a66-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.817343 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.817015 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/025a79e2-8d13-41c3-89b6-2c7dd9616a66-config-out\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.817343 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.817043 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/025a79e2-8d13-41c3-89b6-2c7dd9616a66-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.817343 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.817076 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rcv72\" (UniqueName: \"kubernetes.io/projected/025a79e2-8d13-41c3-89b6-2c7dd9616a66-kube-api-access-rcv72\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.817343 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.817144 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/025a79e2-8d13-41c3-89b6-2c7dd9616a66-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.817343 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.817227 2571 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/025a79e2-8d13-41c3-89b6-2c7dd9616a66-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.818684 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.818634 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/025a79e2-8d13-41c3-89b6-2c7dd9616a66-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.818954 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.818935 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/025a79e2-8d13-41c3-89b6-2c7dd9616a66-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.820245 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.820217 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/025a79e2-8d13-41c3-89b6-2c7dd9616a66-tls-assets\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.820372 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.820314 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/025a79e2-8d13-41c3-89b6-2c7dd9616a66-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.821465 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.821194 2571 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/025a79e2-8d13-41c3-89b6-2c7dd9616a66-config-volume\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.821465 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.821207 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/025a79e2-8d13-41c3-89b6-2c7dd9616a66-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.821465 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.821374 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/025a79e2-8d13-41c3-89b6-2c7dd9616a66-config-out\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.821662 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.821471 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/025a79e2-8d13-41c3-89b6-2c7dd9616a66-web-config\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.821736 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.821708 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/025a79e2-8d13-41c3-89b6-2c7dd9616a66-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.821790 ip-10-0-130-85 
kubenswrapper[2571]: I0424 19:08:14.821735 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/025a79e2-8d13-41c3-89b6-2c7dd9616a66-cluster-tls-config\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.822057 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.822040 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/025a79e2-8d13-41c3-89b6-2c7dd9616a66-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.827461 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.827436 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcv72\" (UniqueName: \"kubernetes.io/projected/025a79e2-8d13-41c3-89b6-2c7dd9616a66-kube-api-access-rcv72\") pod \"alertmanager-main-0\" (UID: \"025a79e2-8d13-41c3-89b6-2c7dd9616a66\") " pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:14.987705 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:14.987668 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Apr 24 19:08:15.120509 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:15.120347 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 24 19:08:15.123688 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:08:15.123661 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod025a79e2_8d13_41c3_89b6_2c7dd9616a66.slice/crio-47818e5925bf49ed3346dfd1d75bae10d476a9606c31cab9f38b791776d1d927 WatchSource:0}: Error finding container 47818e5925bf49ed3346dfd1d75bae10d476a9606c31cab9f38b791776d1d927: Status 404 returned error can't find the container with id 47818e5925bf49ed3346dfd1d75bae10d476a9606c31cab9f38b791776d1d927 Apr 24 19:08:15.621295 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:15.621257 2571 generic.go:358] "Generic (PLEG): container finished" podID="025a79e2-8d13-41c3-89b6-2c7dd9616a66" containerID="e7de2d7990b77aab02dd2c7d2dbfd526c081748718c9f20921e4a3e3f4fcc4d1" exitCode=0 Apr 24 19:08:15.621653 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:15.621305 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"025a79e2-8d13-41c3-89b6-2c7dd9616a66","Type":"ContainerDied","Data":"e7de2d7990b77aab02dd2c7d2dbfd526c081748718c9f20921e4a3e3f4fcc4d1"} Apr 24 19:08:15.621653 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:15.621336 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"025a79e2-8d13-41c3-89b6-2c7dd9616a66","Type":"ContainerStarted","Data":"47818e5925bf49ed3346dfd1d75bae10d476a9606c31cab9f38b791776d1d927"} Apr 24 19:08:16.108525 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.108488 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de4af98c-d040-481e-8fac-b92e6e4ef1a6" 
path="/var/lib/kubelet/pods/de4af98c-d040-481e-8fac-b92e6e4ef1a6/volumes" Apr 24 19:08:16.631742 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.631699 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"025a79e2-8d13-41c3-89b6-2c7dd9616a66","Type":"ContainerStarted","Data":"4956aa88a0f7a7373e3e376f071ef08b25d2b0450e7f545302cf56de31e12d0a"} Apr 24 19:08:16.632233 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.631747 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"025a79e2-8d13-41c3-89b6-2c7dd9616a66","Type":"ContainerStarted","Data":"b8d02ef4899ed8a19fff84494bc284c2267fbe2da6b18d9deb063fce019c199e"} Apr 24 19:08:16.632233 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.631764 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"025a79e2-8d13-41c3-89b6-2c7dd9616a66","Type":"ContainerStarted","Data":"4fba10075462bb02e6e2631b84c97aecfb45db311b26e3576433659ef6c34056"} Apr 24 19:08:16.632233 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.631778 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"025a79e2-8d13-41c3-89b6-2c7dd9616a66","Type":"ContainerStarted","Data":"97a6df016b6f51b43f64239abb6cede085ea04812f519fca0b1323f9bf3d22fe"} Apr 24 19:08:16.632233 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.631791 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"025a79e2-8d13-41c3-89b6-2c7dd9616a66","Type":"ContainerStarted","Data":"99fdaed5bcc6ac348f59bba924f27c0720e258d520587ed315d93709ca47001d"} Apr 24 19:08:16.632233 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.631803 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"025a79e2-8d13-41c3-89b6-2c7dd9616a66","Type":"ContainerStarted","Data":"9dd66def0bcadcc9359f58f4d413ea659ee5949e22ed7ecdb9c2d3734a16ce3b"} Apr 24 19:08:16.639815 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.639779 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-7fd887b88c-dsp79"] Apr 24 19:08:16.644828 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.644800 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-7fd887b88c-dsp79" Apr 24 19:08:16.647715 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.647690 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"telemeter-client\"" Apr 24 19:08:16.647816 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.647685 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"telemeter-client-kube-rbac-proxy-config\"" Apr 24 19:08:16.648044 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.648023 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"telemeter-client-dockercfg-sj8rl\"" Apr 24 19:08:16.648330 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.648313 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"telemeter-client-tls\"" Apr 24 19:08:16.649270 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.649248 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"federate-client-certs\"" Apr 24 19:08:16.649360 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.649285 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"telemeter-client-serving-certs-ca-bundle\"" Apr 24 19:08:16.655661 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.655635 2571 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"telemeter-trusted-ca-bundle-8i12ta5c71j38\"" Apr 24 19:08:16.656990 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.656964 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-7fd887b88c-dsp79"] Apr 24 19:08:16.673533 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.673485 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=2.6734730620000002 podStartE2EDuration="2.673473062s" podCreationTimestamp="2026-04-24 19:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 19:08:16.671560644 +0000 UTC m=+107.160984971" watchObservedRunningTime="2026-04-24 19:08:16.673473062 +0000 UTC m=+107.162897360" Apr 24 19:08:16.736825 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.736732 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17d45a70-60d5-447b-96ab-bece4ce1be59-serving-certs-ca-bundle\") pod \"telemeter-client-7fd887b88c-dsp79\" (UID: \"17d45a70-60d5-447b-96ab-bece4ce1be59\") " pod="openshift-monitoring/telemeter-client-7fd887b88c-dsp79" Apr 24 19:08:16.737008 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.736833 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/17d45a70-60d5-447b-96ab-bece4ce1be59-metrics-client-ca\") pod \"telemeter-client-7fd887b88c-dsp79\" (UID: \"17d45a70-60d5-447b-96ab-bece4ce1be59\") " pod="openshift-monitoring/telemeter-client-7fd887b88c-dsp79" Apr 24 19:08:16.737008 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.736867 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17d45a70-60d5-447b-96ab-bece4ce1be59-telemeter-trusted-ca-bundle\") pod \"telemeter-client-7fd887b88c-dsp79\" (UID: \"17d45a70-60d5-447b-96ab-bece4ce1be59\") " pod="openshift-monitoring/telemeter-client-7fd887b88c-dsp79" Apr 24 19:08:16.737008 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.736914 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/17d45a70-60d5-447b-96ab-bece4ce1be59-secret-telemeter-client\") pod \"telemeter-client-7fd887b88c-dsp79\" (UID: \"17d45a70-60d5-447b-96ab-bece4ce1be59\") " pod="openshift-monitoring/telemeter-client-7fd887b88c-dsp79" Apr 24 19:08:16.737008 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.736960 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/17d45a70-60d5-447b-96ab-bece4ce1be59-federate-client-tls\") pod \"telemeter-client-7fd887b88c-dsp79\" (UID: \"17d45a70-60d5-447b-96ab-bece4ce1be59\") " pod="openshift-monitoring/telemeter-client-7fd887b88c-dsp79" Apr 24 19:08:16.737287 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.737257 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/17d45a70-60d5-447b-96ab-bece4ce1be59-telemeter-client-tls\") pod \"telemeter-client-7fd887b88c-dsp79\" (UID: \"17d45a70-60d5-447b-96ab-bece4ce1be59\") " pod="openshift-monitoring/telemeter-client-7fd887b88c-dsp79" Apr 24 19:08:16.737357 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.737305 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/17d45a70-60d5-447b-96ab-bece4ce1be59-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-7fd887b88c-dsp79\" (UID: \"17d45a70-60d5-447b-96ab-bece4ce1be59\") " pod="openshift-monitoring/telemeter-client-7fd887b88c-dsp79" Apr 24 19:08:16.737842 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.737424 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7hn2\" (UniqueName: \"kubernetes.io/projected/17d45a70-60d5-447b-96ab-bece4ce1be59-kube-api-access-n7hn2\") pod \"telemeter-client-7fd887b88c-dsp79\" (UID: \"17d45a70-60d5-447b-96ab-bece4ce1be59\") " pod="openshift-monitoring/telemeter-client-7fd887b88c-dsp79" Apr 24 19:08:16.839080 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.838226 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n7hn2\" (UniqueName: \"kubernetes.io/projected/17d45a70-60d5-447b-96ab-bece4ce1be59-kube-api-access-n7hn2\") pod \"telemeter-client-7fd887b88c-dsp79\" (UID: \"17d45a70-60d5-447b-96ab-bece4ce1be59\") " pod="openshift-monitoring/telemeter-client-7fd887b88c-dsp79" Apr 24 19:08:16.839080 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.838303 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17d45a70-60d5-447b-96ab-bece4ce1be59-serving-certs-ca-bundle\") pod \"telemeter-client-7fd887b88c-dsp79\" (UID: \"17d45a70-60d5-447b-96ab-bece4ce1be59\") " pod="openshift-monitoring/telemeter-client-7fd887b88c-dsp79" Apr 24 19:08:16.839080 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.838361 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/17d45a70-60d5-447b-96ab-bece4ce1be59-metrics-client-ca\") pod \"telemeter-client-7fd887b88c-dsp79\" (UID: \"17d45a70-60d5-447b-96ab-bece4ce1be59\") " 
pod="openshift-monitoring/telemeter-client-7fd887b88c-dsp79" Apr 24 19:08:16.839080 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.838402 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17d45a70-60d5-447b-96ab-bece4ce1be59-telemeter-trusted-ca-bundle\") pod \"telemeter-client-7fd887b88c-dsp79\" (UID: \"17d45a70-60d5-447b-96ab-bece4ce1be59\") " pod="openshift-monitoring/telemeter-client-7fd887b88c-dsp79" Apr 24 19:08:16.839080 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.838439 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/17d45a70-60d5-447b-96ab-bece4ce1be59-secret-telemeter-client\") pod \"telemeter-client-7fd887b88c-dsp79\" (UID: \"17d45a70-60d5-447b-96ab-bece4ce1be59\") " pod="openshift-monitoring/telemeter-client-7fd887b88c-dsp79" Apr 24 19:08:16.839080 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.838474 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/17d45a70-60d5-447b-96ab-bece4ce1be59-federate-client-tls\") pod \"telemeter-client-7fd887b88c-dsp79\" (UID: \"17d45a70-60d5-447b-96ab-bece4ce1be59\") " pod="openshift-monitoring/telemeter-client-7fd887b88c-dsp79" Apr 24 19:08:16.839080 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.838515 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/17d45a70-60d5-447b-96ab-bece4ce1be59-telemeter-client-tls\") pod \"telemeter-client-7fd887b88c-dsp79\" (UID: \"17d45a70-60d5-447b-96ab-bece4ce1be59\") " pod="openshift-monitoring/telemeter-client-7fd887b88c-dsp79" Apr 24 19:08:16.839080 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.838543 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/17d45a70-60d5-447b-96ab-bece4ce1be59-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-7fd887b88c-dsp79\" (UID: \"17d45a70-60d5-447b-96ab-bece4ce1be59\") " pod="openshift-monitoring/telemeter-client-7fd887b88c-dsp79" Apr 24 19:08:16.840602 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.840573 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17d45a70-60d5-447b-96ab-bece4ce1be59-telemeter-trusted-ca-bundle\") pod \"telemeter-client-7fd887b88c-dsp79\" (UID: \"17d45a70-60d5-447b-96ab-bece4ce1be59\") " pod="openshift-monitoring/telemeter-client-7fd887b88c-dsp79" Apr 24 19:08:16.842059 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.841425 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17d45a70-60d5-447b-96ab-bece4ce1be59-serving-certs-ca-bundle\") pod \"telemeter-client-7fd887b88c-dsp79\" (UID: \"17d45a70-60d5-447b-96ab-bece4ce1be59\") " pod="openshift-monitoring/telemeter-client-7fd887b88c-dsp79" Apr 24 19:08:16.842192 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.842002 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/17d45a70-60d5-447b-96ab-bece4ce1be59-metrics-client-ca\") pod \"telemeter-client-7fd887b88c-dsp79\" (UID: \"17d45a70-60d5-447b-96ab-bece4ce1be59\") " pod="openshift-monitoring/telemeter-client-7fd887b88c-dsp79" Apr 24 19:08:16.844656 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.844608 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/17d45a70-60d5-447b-96ab-bece4ce1be59-federate-client-tls\") pod \"telemeter-client-7fd887b88c-dsp79\" (UID: 
\"17d45a70-60d5-447b-96ab-bece4ce1be59\") " pod="openshift-monitoring/telemeter-client-7fd887b88c-dsp79" Apr 24 19:08:16.846585 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.846561 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/17d45a70-60d5-447b-96ab-bece4ce1be59-telemeter-client-tls\") pod \"telemeter-client-7fd887b88c-dsp79\" (UID: \"17d45a70-60d5-447b-96ab-bece4ce1be59\") " pod="openshift-monitoring/telemeter-client-7fd887b88c-dsp79" Apr 24 19:08:16.848665 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.848642 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/17d45a70-60d5-447b-96ab-bece4ce1be59-secret-telemeter-client\") pod \"telemeter-client-7fd887b88c-dsp79\" (UID: \"17d45a70-60d5-447b-96ab-bece4ce1be59\") " pod="openshift-monitoring/telemeter-client-7fd887b88c-dsp79" Apr 24 19:08:16.850689 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.850655 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/17d45a70-60d5-447b-96ab-bece4ce1be59-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-7fd887b88c-dsp79\" (UID: \"17d45a70-60d5-447b-96ab-bece4ce1be59\") " pod="openshift-monitoring/telemeter-client-7fd887b88c-dsp79" Apr 24 19:08:16.857365 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.857336 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7hn2\" (UniqueName: \"kubernetes.io/projected/17d45a70-60d5-447b-96ab-bece4ce1be59-kube-api-access-n7hn2\") pod \"telemeter-client-7fd887b88c-dsp79\" (UID: \"17d45a70-60d5-447b-96ab-bece4ce1be59\") " pod="openshift-monitoring/telemeter-client-7fd887b88c-dsp79" Apr 24 19:08:16.959003 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:16.958565 2571 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-7fd887b88c-dsp79" Apr 24 19:08:17.075284 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:17.073677 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Apr 24 19:08:17.075284 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:17.074298 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="2e1f5d2f-cc78-4f75-b9b4-976566e660fe" containerName="prometheus" containerID="cri-o://fd867fba1986b47f29050ca036f1233ddd8093e7a0e95a2d07feb7b9c68e1d7e" gracePeriod=600 Apr 24 19:08:17.075284 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:17.074685 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="2e1f5d2f-cc78-4f75-b9b4-976566e660fe" containerName="kube-rbac-proxy-thanos" containerID="cri-o://5a1c48102148e451dee1a2629445a660f90ec76d2af6f5d75869223deb0391da" gracePeriod=600 Apr 24 19:08:17.075284 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:17.074785 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="2e1f5d2f-cc78-4f75-b9b4-976566e660fe" containerName="kube-rbac-proxy" containerID="cri-o://ee65c74a44bed7911166558ecc42be608920c8633c14fef833879fd52613f839" gracePeriod=600 Apr 24 19:08:17.075284 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:17.074853 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="2e1f5d2f-cc78-4f75-b9b4-976566e660fe" containerName="kube-rbac-proxy-web" containerID="cri-o://d9052633cb2d41f4006b17ec8ce933cefb6c972420a319069d890b329b9c05e9" gracePeriod=600 Apr 24 19:08:17.075284 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:17.074952 2571 kuberuntime_container.go:864] "Killing container with a grace period" 
pod="openshift-monitoring/prometheus-k8s-0" podUID="2e1f5d2f-cc78-4f75-b9b4-976566e660fe" containerName="thanos-sidecar" containerID="cri-o://1276125089ed250c8b7f5a84b318ce1ab6391438860f47fcae7735478e94f35f" gracePeriod=600 Apr 24 19:08:17.075284 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:17.075025 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="2e1f5d2f-cc78-4f75-b9b4-976566e660fe" containerName="config-reloader" containerID="cri-o://ba0c182a39ddf2236cefd98cf1dd86b9fee5e2592e9dc44c98d2cdc7de6045fa" gracePeriod=600 Apr 24 19:08:17.165078 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:17.165020 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-7fd887b88c-dsp79"] Apr 24 19:08:17.168781 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:08:17.168735 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17d45a70_60d5_447b_96ab_bece4ce1be59.slice/crio-dc138c22234666739f87f398e24504bd8592ec4b475eafa62ebf9e20c38c5d26 WatchSource:0}: Error finding container dc138c22234666739f87f398e24504bd8592ec4b475eafa62ebf9e20c38c5d26: Status 404 returned error can't find the container with id dc138c22234666739f87f398e24504bd8592ec4b475eafa62ebf9e20c38c5d26 Apr 24 19:08:17.504058 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:08:17.503996 2571 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fd867fba1986b47f29050ca036f1233ddd8093e7a0e95a2d07feb7b9c68e1d7e is running failed: container process not found" containerID="fd867fba1986b47f29050ca036f1233ddd8093e7a0e95a2d07feb7b9c68e1d7e" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 
1; fi"] Apr 24 19:08:17.504384 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:08:17.504339 2571 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fd867fba1986b47f29050ca036f1233ddd8093e7a0e95a2d07feb7b9c68e1d7e is running failed: container process not found" containerID="fd867fba1986b47f29050ca036f1233ddd8093e7a0e95a2d07feb7b9c68e1d7e" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi"] Apr 24 19:08:17.504704 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:08:17.504669 2571 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fd867fba1986b47f29050ca036f1233ddd8093e7a0e95a2d07feb7b9c68e1d7e is running failed: container process not found" containerID="fd867fba1986b47f29050ca036f1233ddd8093e7a0e95a2d07feb7b9c68e1d7e" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi"] Apr 24 19:08:17.504822 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:08:17.504716 2571 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fd867fba1986b47f29050ca036f1233ddd8093e7a0e95a2d07feb7b9c68e1d7e is running failed: container process not found" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="2e1f5d2f-cc78-4f75-b9b4-976566e660fe" containerName="prometheus" probeResult="unknown" Apr 24 19:08:17.638027 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:17.637992 2571 generic.go:358] "Generic (PLEG): container finished" podID="2e1f5d2f-cc78-4f75-b9b4-976566e660fe" 
containerID="5a1c48102148e451dee1a2629445a660f90ec76d2af6f5d75869223deb0391da" exitCode=0 Apr 24 19:08:17.638027 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:17.638023 2571 generic.go:358] "Generic (PLEG): container finished" podID="2e1f5d2f-cc78-4f75-b9b4-976566e660fe" containerID="ee65c74a44bed7911166558ecc42be608920c8633c14fef833879fd52613f839" exitCode=0 Apr 24 19:08:17.638027 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:17.638032 2571 generic.go:358] "Generic (PLEG): container finished" podID="2e1f5d2f-cc78-4f75-b9b4-976566e660fe" containerID="1276125089ed250c8b7f5a84b318ce1ab6391438860f47fcae7735478e94f35f" exitCode=0 Apr 24 19:08:17.638528 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:17.638043 2571 generic.go:358] "Generic (PLEG): container finished" podID="2e1f5d2f-cc78-4f75-b9b4-976566e660fe" containerID="ba0c182a39ddf2236cefd98cf1dd86b9fee5e2592e9dc44c98d2cdc7de6045fa" exitCode=0 Apr 24 19:08:17.638528 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:17.638053 2571 generic.go:358] "Generic (PLEG): container finished" podID="2e1f5d2f-cc78-4f75-b9b4-976566e660fe" containerID="fd867fba1986b47f29050ca036f1233ddd8093e7a0e95a2d07feb7b9c68e1d7e" exitCode=0 Apr 24 19:08:17.638528 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:17.638023 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2e1f5d2f-cc78-4f75-b9b4-976566e660fe","Type":"ContainerDied","Data":"5a1c48102148e451dee1a2629445a660f90ec76d2af6f5d75869223deb0391da"} Apr 24 19:08:17.638528 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:17.638140 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2e1f5d2f-cc78-4f75-b9b4-976566e660fe","Type":"ContainerDied","Data":"ee65c74a44bed7911166558ecc42be608920c8633c14fef833879fd52613f839"} Apr 24 19:08:17.638528 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:17.638159 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2e1f5d2f-cc78-4f75-b9b4-976566e660fe","Type":"ContainerDied","Data":"1276125089ed250c8b7f5a84b318ce1ab6391438860f47fcae7735478e94f35f"} Apr 24 19:08:17.638528 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:17.638172 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2e1f5d2f-cc78-4f75-b9b4-976566e660fe","Type":"ContainerDied","Data":"ba0c182a39ddf2236cefd98cf1dd86b9fee5e2592e9dc44c98d2cdc7de6045fa"} Apr 24 19:08:17.638528 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:17.638186 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2e1f5d2f-cc78-4f75-b9b4-976566e660fe","Type":"ContainerDied","Data":"fd867fba1986b47f29050ca036f1233ddd8093e7a0e95a2d07feb7b9c68e1d7e"} Apr 24 19:08:17.639272 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:17.639249 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-7fd887b88c-dsp79" event={"ID":"17d45a70-60d5-447b-96ab-bece4ce1be59","Type":"ContainerStarted","Data":"dc138c22234666739f87f398e24504bd8592ec4b475eafa62ebf9e20c38c5d26"} Apr 24 19:08:18.364463 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.364433 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.461640 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.460450 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-prometheus-trusted-ca-bundle\") pod \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " Apr 24 19:08:18.461640 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.460495 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-secret-kube-rbac-proxy\") pod \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " Apr 24 19:08:18.461640 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.460524 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " Apr 24 19:08:18.461640 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.460565 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2k4g\" (UniqueName: \"kubernetes.io/projected/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-kube-api-access-z2k4g\") pod \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " Apr 24 19:08:18.461640 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.460619 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-secret-prometheus-k8s-thanos-sidecar-tls\") pod 
\"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " Apr 24 19:08:18.461640 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.460660 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-configmap-metrics-client-ca\") pod \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " Apr 24 19:08:18.461640 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.460692 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-prometheus-k8s-rulefiles-0\") pod \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " Apr 24 19:08:18.461640 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.460718 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-prometheus-k8s-db\") pod \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " Apr 24 19:08:18.461640 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.460742 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-web-config\") pod \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " Apr 24 19:08:18.461640 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.460771 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-configmap-serving-certs-ca-bundle\") pod \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\" (UID: 
\"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " Apr 24 19:08:18.461640 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.460802 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-secret-grpc-tls\") pod \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " Apr 24 19:08:18.461640 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.460832 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-secret-prometheus-k8s-tls\") pod \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " Apr 24 19:08:18.461640 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.460883 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-config\") pod \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " Apr 24 19:08:18.461640 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.460932 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-secret-metrics-client-certs\") pod \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " Apr 24 19:08:18.461640 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.460960 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-tls-assets\") pod \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " Apr 24 19:08:18.461640 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.460991 
2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-thanos-prometheus-http-client-file\") pod \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " Apr 24 19:08:18.461640 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.461021 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-configmap-kubelet-serving-ca-bundle\") pod \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " Apr 24 19:08:18.461640 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.461047 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-config-out\") pod \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\" (UID: \"2e1f5d2f-cc78-4f75-b9b4-976566e660fe\") " Apr 24 19:08:18.463026 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.462990 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-configmap-metrics-client-ca" (OuterVolumeSpecName: "configmap-metrics-client-ca") pod "2e1f5d2f-cc78-4f75-b9b4-976566e660fe" (UID: "2e1f5d2f-cc78-4f75-b9b4-976566e660fe"). InnerVolumeSpecName "configmap-metrics-client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 19:08:18.474870 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.468894 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-secret-prometheus-k8s-thanos-sidecar-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-thanos-sidecar-tls") pod "2e1f5d2f-cc78-4f75-b9b4-976566e660fe" (UID: "2e1f5d2f-cc78-4f75-b9b4-976566e660fe"). InnerVolumeSpecName "secret-prometheus-k8s-thanos-sidecar-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 19:08:18.474870 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.470240 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-prometheus-k8s-rulefiles-0" (OuterVolumeSpecName: "prometheus-k8s-rulefiles-0") pod "2e1f5d2f-cc78-4f75-b9b4-976566e660fe" (UID: "2e1f5d2f-cc78-4f75-b9b4-976566e660fe"). InnerVolumeSpecName "prometheus-k8s-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 19:08:18.474870 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.470846 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-configmap-serving-certs-ca-bundle" (OuterVolumeSpecName: "configmap-serving-certs-ca-bundle") pod "2e1f5d2f-cc78-4f75-b9b4-976566e660fe" (UID: "2e1f5d2f-cc78-4f75-b9b4-976566e660fe"). InnerVolumeSpecName "configmap-serving-certs-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 19:08:18.474870 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.471089 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-config-out" (OuterVolumeSpecName: "config-out") pod "2e1f5d2f-cc78-4f75-b9b4-976566e660fe" (UID: "2e1f5d2f-cc78-4f75-b9b4-976566e660fe"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 24 19:08:18.474870 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.471612 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-secret-prometheus-k8s-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-tls") pod "2e1f5d2f-cc78-4f75-b9b4-976566e660fe" (UID: "2e1f5d2f-cc78-4f75-b9b4-976566e660fe"). InnerVolumeSpecName "secret-prometheus-k8s-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 19:08:18.474870 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.472111 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "2e1f5d2f-cc78-4f75-b9b4-976566e660fe" (UID: "2e1f5d2f-cc78-4f75-b9b4-976566e660fe"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 19:08:18.474870 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.473405 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-secret-grpc-tls" (OuterVolumeSpecName: "secret-grpc-tls") pod "2e1f5d2f-cc78-4f75-b9b4-976566e660fe" (UID: "2e1f5d2f-cc78-4f75-b9b4-976566e660fe"). InnerVolumeSpecName "secret-grpc-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 19:08:18.474870 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.474495 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-prometheus-k8s-db" (OuterVolumeSpecName: "prometheus-k8s-db") pod "2e1f5d2f-cc78-4f75-b9b4-976566e660fe" (UID: "2e1f5d2f-cc78-4f75-b9b4-976566e660fe"). InnerVolumeSpecName "prometheus-k8s-db". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 24 19:08:18.474870 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.474855 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-prometheus-trusted-ca-bundle" (OuterVolumeSpecName: "prometheus-trusted-ca-bundle") pod "2e1f5d2f-cc78-4f75-b9b4-976566e660fe" (UID: "2e1f5d2f-cc78-4f75-b9b4-976566e660fe"). InnerVolumeSpecName "prometheus-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 19:08:18.481117 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.480067 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-config" (OuterVolumeSpecName: "config") pod "2e1f5d2f-cc78-4f75-b9b4-976566e660fe" (UID: "2e1f5d2f-cc78-4f75-b9b4-976566e660fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 19:08:18.481117 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.480452 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-secret-kube-rbac-proxy" (OuterVolumeSpecName: "secret-kube-rbac-proxy") pod "2e1f5d2f-cc78-4f75-b9b4-976566e660fe" (UID: "2e1f5d2f-cc78-4f75-b9b4-976566e660fe"). InnerVolumeSpecName "secret-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 19:08:18.481117 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.480466 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "2e1f5d2f-cc78-4f75-b9b4-976566e660fe" (UID: "2e1f5d2f-cc78-4f75-b9b4-976566e660fe"). InnerVolumeSpecName "tls-assets". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 19:08:18.481117 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.480472 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "2e1f5d2f-cc78-4f75-b9b4-976566e660fe" (UID: "2e1f5d2f-cc78-4f75-b9b4-976566e660fe"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 19:08:18.481117 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.480935 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-secret-prometheus-k8s-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-prometheus-k8s-kube-rbac-proxy-web") pod "2e1f5d2f-cc78-4f75-b9b4-976566e660fe" (UID: "2e1f5d2f-cc78-4f75-b9b4-976566e660fe"). InnerVolumeSpecName "secret-prometheus-k8s-kube-rbac-proxy-web". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 19:08:18.481117 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.480944 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-kube-api-access-z2k4g" (OuterVolumeSpecName: "kube-api-access-z2k4g") pod "2e1f5d2f-cc78-4f75-b9b4-976566e660fe" (UID: "2e1f5d2f-cc78-4f75-b9b4-976566e660fe"). InnerVolumeSpecName "kube-api-access-z2k4g". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 19:08:18.481117 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.481048 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "2e1f5d2f-cc78-4f75-b9b4-976566e660fe" (UID: "2e1f5d2f-cc78-4f75-b9b4-976566e660fe"). 
InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 19:08:18.490878 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.490849 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-web-config" (OuterVolumeSpecName: "web-config") pod "2e1f5d2f-cc78-4f75-b9b4-976566e660fe" (UID: "2e1f5d2f-cc78-4f75-b9b4-976566e660fe"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 19:08:18.562059 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.562031 2571 reconciler_common.go:299] "Volume detached for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-prometheus-trusted-ca-bundle\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:08:18.562059 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.562057 2571 reconciler_common.go:299] "Volume detached for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-secret-kube-rbac-proxy\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:08:18.562059 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.562068 2571 reconciler_common.go:299] "Volume detached for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-secret-prometheus-k8s-kube-rbac-proxy-web\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:08:18.562307 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.562077 2571 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z2k4g\" (UniqueName: \"kubernetes.io/projected/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-kube-api-access-z2k4g\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:08:18.562307 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.562087 2571 
reconciler_common.go:299] "Volume detached for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-secret-prometheus-k8s-thanos-sidecar-tls\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:08:18.562307 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.562097 2571 reconciler_common.go:299] "Volume detached for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-configmap-metrics-client-ca\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:08:18.562307 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.562108 2571 reconciler_common.go:299] "Volume detached for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-prometheus-k8s-rulefiles-0\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:08:18.562307 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.562123 2571 reconciler_common.go:299] "Volume detached for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-prometheus-k8s-db\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:08:18.562307 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.562135 2571 reconciler_common.go:299] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-web-config\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:08:18.562307 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.562147 2571 reconciler_common.go:299] "Volume detached for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-configmap-serving-certs-ca-bundle\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:08:18.562307 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.562156 2571 
reconciler_common.go:299] "Volume detached for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-secret-grpc-tls\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:08:18.562307 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.562166 2571 reconciler_common.go:299] "Volume detached for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-secret-prometheus-k8s-tls\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:08:18.562307 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.562174 2571 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-config\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:08:18.562307 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.562182 2571 reconciler_common.go:299] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-secret-metrics-client-certs\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:08:18.562307 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.562192 2571 reconciler_common.go:299] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-tls-assets\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:08:18.562307 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.562200 2571 reconciler_common.go:299] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-thanos-prometheus-http-client-file\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:08:18.562307 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.562209 2571 reconciler_common.go:299] "Volume detached for volume 
\"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-configmap-kubelet-serving-ca-bundle\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:08:18.562307 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.562219 2571 reconciler_common.go:299] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2e1f5d2f-cc78-4f75-b9b4-976566e660fe-config-out\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:08:18.645368 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.645336 2571 generic.go:358] "Generic (PLEG): container finished" podID="2e1f5d2f-cc78-4f75-b9b4-976566e660fe" containerID="d9052633cb2d41f4006b17ec8ce933cefb6c972420a319069d890b329b9c05e9" exitCode=0 Apr 24 19:08:18.645754 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.645377 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2e1f5d2f-cc78-4f75-b9b4-976566e660fe","Type":"ContainerDied","Data":"d9052633cb2d41f4006b17ec8ce933cefb6c972420a319069d890b329b9c05e9"} Apr 24 19:08:18.645754 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.645412 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2e1f5d2f-cc78-4f75-b9b4-976566e660fe","Type":"ContainerDied","Data":"b687f1eb94deab104490527dc8f47bb8c5e3bda96ae26076c03f7292c699a465"} Apr 24 19:08:18.645754 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.645432 2571 scope.go:117] "RemoveContainer" containerID="5a1c48102148e451dee1a2629445a660f90ec76d2af6f5d75869223deb0391da" Apr 24 19:08:18.645754 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.645431 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.674597 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.674564 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Apr 24 19:08:18.678835 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.678811 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Apr 24 19:08:18.711214 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.711182 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Apr 24 19:08:18.711576 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.711556 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2e1f5d2f-cc78-4f75-b9b4-976566e660fe" containerName="kube-rbac-proxy-web" Apr 24 19:08:18.711576 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.711573 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e1f5d2f-cc78-4f75-b9b4-976566e660fe" containerName="kube-rbac-proxy-web" Apr 24 19:08:18.711774 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.711590 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2e1f5d2f-cc78-4f75-b9b4-976566e660fe" containerName="kube-rbac-proxy" Apr 24 19:08:18.711774 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.711599 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e1f5d2f-cc78-4f75-b9b4-976566e660fe" containerName="kube-rbac-proxy" Apr 24 19:08:18.711774 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.711617 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2e1f5d2f-cc78-4f75-b9b4-976566e660fe" containerName="init-config-reloader" Apr 24 19:08:18.711774 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.711626 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e1f5d2f-cc78-4f75-b9b4-976566e660fe" containerName="init-config-reloader" Apr 24 19:08:18.711774 
ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.711636 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2e1f5d2f-cc78-4f75-b9b4-976566e660fe" containerName="config-reloader" Apr 24 19:08:18.711774 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.711644 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e1f5d2f-cc78-4f75-b9b4-976566e660fe" containerName="config-reloader" Apr 24 19:08:18.711774 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.711656 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2e1f5d2f-cc78-4f75-b9b4-976566e660fe" containerName="kube-rbac-proxy-thanos" Apr 24 19:08:18.711774 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.711664 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e1f5d2f-cc78-4f75-b9b4-976566e660fe" containerName="kube-rbac-proxy-thanos" Apr 24 19:08:18.711774 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.711678 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2e1f5d2f-cc78-4f75-b9b4-976566e660fe" containerName="prometheus" Apr 24 19:08:18.711774 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.711685 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e1f5d2f-cc78-4f75-b9b4-976566e660fe" containerName="prometheus" Apr 24 19:08:18.711774 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.711707 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2e1f5d2f-cc78-4f75-b9b4-976566e660fe" containerName="thanos-sidecar" Apr 24 19:08:18.711774 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.711714 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e1f5d2f-cc78-4f75-b9b4-976566e660fe" containerName="thanos-sidecar" Apr 24 19:08:18.711774 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.711773 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="2e1f5d2f-cc78-4f75-b9b4-976566e660fe" containerName="thanos-sidecar" Apr 24 
19:08:18.712415 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.711783 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="2e1f5d2f-cc78-4f75-b9b4-976566e660fe" containerName="prometheus" Apr 24 19:08:18.712415 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.711794 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="2e1f5d2f-cc78-4f75-b9b4-976566e660fe" containerName="kube-rbac-proxy" Apr 24 19:08:18.712415 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.711804 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="2e1f5d2f-cc78-4f75-b9b4-976566e660fe" containerName="kube-rbac-proxy-thanos" Apr 24 19:08:18.712415 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.711812 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="2e1f5d2f-cc78-4f75-b9b4-976566e660fe" containerName="kube-rbac-proxy-web" Apr 24 19:08:18.712415 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.711817 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="2e1f5d2f-cc78-4f75-b9b4-976566e660fe" containerName="config-reloader" Apr 24 19:08:18.716684 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.716664 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.721578 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.721442 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-web-config\"" Apr 24 19:08:18.721578 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.721444 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-kube-rbac-proxy-web\"" Apr 24 19:08:18.721578 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.721530 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-grpc-tls-d6a34aojtasma\"" Apr 24 19:08:18.721859 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.721847 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"kube-rbac-proxy\"" Apr 24 19:08:18.722260 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.722230 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-tls-assets-0\"" Apr 24 19:08:18.722954 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.722931 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-thanos-prometheus-http-client-file\"" Apr 24 19:08:18.723050 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.723003 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-dockercfg-fgf2s\"" Apr 24 19:08:18.723050 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.723027 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kubelet-serving-ca-bundle\"" Apr 24 19:08:18.724194 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.724143 2571 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-thanos-sidecar-tls\"" Apr 24 19:08:18.724376 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.724358 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s\"" Apr 24 19:08:18.724465 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.724363 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"serving-certs-ca-bundle\"" Apr 24 19:08:18.726233 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.726209 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"metrics-client-certs\"" Apr 24 19:08:18.726329 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.726249 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-tls\"" Apr 24 19:08:18.727800 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.727783 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"prometheus-trusted-ca-bundle\"" Apr 24 19:08:18.728502 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.728481 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"prometheus-k8s-rulefiles-0\"" Apr 24 19:08:18.731344 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.731320 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Apr 24 19:08:18.764056 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.764023 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/bf1b7947-aa90-42fa-b1a3-348370f46e19-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.764202 ip-10-0-130-85 
kubenswrapper[2571]: I0424 19:08:18.764069 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bf1b7947-aa90-42fa-b1a3-348370f46e19-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.764202 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.764120 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bf1b7947-aa90-42fa-b1a3-348370f46e19-config\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.764202 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.764150 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/bf1b7947-aa90-42fa-b1a3-348370f46e19-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.764202 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.764173 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf1b7947-aa90-42fa-b1a3-348370f46e19-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.764202 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.764193 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7psx\" (UniqueName: \"kubernetes.io/projected/bf1b7947-aa90-42fa-b1a3-348370f46e19-kube-api-access-d7psx\") pod 
\"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.764386 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.764242 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/bf1b7947-aa90-42fa-b1a3-348370f46e19-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.764386 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.764261 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/bf1b7947-aa90-42fa-b1a3-348370f46e19-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.764386 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.764278 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/bf1b7947-aa90-42fa-b1a3-348370f46e19-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.764386 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.764293 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/bf1b7947-aa90-42fa-b1a3-348370f46e19-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.764386 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.764352 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/bf1b7947-aa90-42fa-b1a3-348370f46e19-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.764386 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.764382 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf1b7947-aa90-42fa-b1a3-348370f46e19-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.764612 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.764412 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/bf1b7947-aa90-42fa-b1a3-348370f46e19-config-out\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.764612 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.764440 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/bf1b7947-aa90-42fa-b1a3-348370f46e19-web-config\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.764612 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.764469 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/bf1b7947-aa90-42fa-b1a3-348370f46e19-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.764612 ip-10-0-130-85 
kubenswrapper[2571]: I0424 19:08:18.764507 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf1b7947-aa90-42fa-b1a3-348370f46e19-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.764612 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.764534 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/bf1b7947-aa90-42fa-b1a3-348370f46e19-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.764612 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.764564 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/bf1b7947-aa90-42fa-b1a3-348370f46e19-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.865436 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.865402 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bf1b7947-aa90-42fa-b1a3-348370f46e19-config\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.865604 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.865447 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/bf1b7947-aa90-42fa-b1a3-348370f46e19-secret-kube-rbac-proxy\") pod 
\"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.865604 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.865476 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf1b7947-aa90-42fa-b1a3-348370f46e19-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.865604 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.865499 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d7psx\" (UniqueName: \"kubernetes.io/projected/bf1b7947-aa90-42fa-b1a3-348370f46e19-kube-api-access-d7psx\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.865604 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.865526 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/bf1b7947-aa90-42fa-b1a3-348370f46e19-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.865604 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.865557 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/bf1b7947-aa90-42fa-b1a3-348370f46e19-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.865604 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.865579 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: 
\"kubernetes.io/secret/bf1b7947-aa90-42fa-b1a3-348370f46e19-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.865604 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.865595 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/bf1b7947-aa90-42fa-b1a3-348370f46e19-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.866303 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.865631 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/bf1b7947-aa90-42fa-b1a3-348370f46e19-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.866303 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.865657 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf1b7947-aa90-42fa-b1a3-348370f46e19-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.866303 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.865684 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/bf1b7947-aa90-42fa-b1a3-348370f46e19-config-out\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.866303 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.865714 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"web-config\" (UniqueName: \"kubernetes.io/secret/bf1b7947-aa90-42fa-b1a3-348370f46e19-web-config\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.866303 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.865748 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/bf1b7947-aa90-42fa-b1a3-348370f46e19-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.866303 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.865777 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf1b7947-aa90-42fa-b1a3-348370f46e19-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.866303 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.865797 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/bf1b7947-aa90-42fa-b1a3-348370f46e19-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.866661 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.866306 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf1b7947-aa90-42fa-b1a3-348370f46e19-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.866716 ip-10-0-130-85 kubenswrapper[2571]: 
I0424 19:08:18.866685 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf1b7947-aa90-42fa-b1a3-348370f46e19-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.868682 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.866819 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/bf1b7947-aa90-42fa-b1a3-348370f46e19-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.868682 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.866877 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/bf1b7947-aa90-42fa-b1a3-348370f46e19-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.868682 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.866937 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bf1b7947-aa90-42fa-b1a3-348370f46e19-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.868682 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.867632 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/bf1b7947-aa90-42fa-b1a3-348370f46e19-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " 
pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.868682 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.867751 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf1b7947-aa90-42fa-b1a3-348370f46e19-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.868682 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.868188 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bf1b7947-aa90-42fa-b1a3-348370f46e19-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.869083 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.868729 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/bf1b7947-aa90-42fa-b1a3-348370f46e19-config\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.871395 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.869960 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/bf1b7947-aa90-42fa-b1a3-348370f46e19-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.871395 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.870418 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/bf1b7947-aa90-42fa-b1a3-348370f46e19-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " 
pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.871395 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.870560 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/bf1b7947-aa90-42fa-b1a3-348370f46e19-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.871395 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.870846 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/bf1b7947-aa90-42fa-b1a3-348370f46e19-web-config\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.871395 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.871351 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/bf1b7947-aa90-42fa-b1a3-348370f46e19-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.871679 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.871552 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/bf1b7947-aa90-42fa-b1a3-348370f46e19-config-out\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.871679 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.871640 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/bf1b7947-aa90-42fa-b1a3-348370f46e19-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " 
pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.871959 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.871938 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/bf1b7947-aa90-42fa-b1a3-348370f46e19-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.872181 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.872159 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/bf1b7947-aa90-42fa-b1a3-348370f46e19-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.873272 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.873248 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/bf1b7947-aa90-42fa-b1a3-348370f46e19-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.873541 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.873515 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/bf1b7947-aa90-42fa-b1a3-348370f46e19-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:18.876146 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:18.876126 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7psx\" (UniqueName: 
\"kubernetes.io/projected/bf1b7947-aa90-42fa-b1a3-348370f46e19-kube-api-access-d7psx\") pod \"prometheus-k8s-0\" (UID: \"bf1b7947-aa90-42fa-b1a3-348370f46e19\") " pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:19.028946 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:19.028846 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:08:19.079677 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:19.079649 2571 scope.go:117] "RemoveContainer" containerID="ee65c74a44bed7911166558ecc42be608920c8633c14fef833879fd52613f839" Apr 24 19:08:19.101868 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:19.101734 2571 scope.go:117] "RemoveContainer" containerID="d9052633cb2d41f4006b17ec8ce933cefb6c972420a319069d890b329b9c05e9" Apr 24 19:08:19.115792 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:19.115767 2571 scope.go:117] "RemoveContainer" containerID="1276125089ed250c8b7f5a84b318ce1ab6391438860f47fcae7735478e94f35f" Apr 24 19:08:19.143252 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:19.143223 2571 scope.go:117] "RemoveContainer" containerID="ba0c182a39ddf2236cefd98cf1dd86b9fee5e2592e9dc44c98d2cdc7de6045fa" Apr 24 19:08:19.154079 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:19.154027 2571 scope.go:117] "RemoveContainer" containerID="fd867fba1986b47f29050ca036f1233ddd8093e7a0e95a2d07feb7b9c68e1d7e" Apr 24 19:08:19.171033 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:19.170073 2571 scope.go:117] "RemoveContainer" containerID="a584a338fe68345ce373ad174054e3bfbbf67b791863157c6509817e70fc29f1" Apr 24 19:08:19.184165 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:19.184142 2571 scope.go:117] "RemoveContainer" containerID="5a1c48102148e451dee1a2629445a660f90ec76d2af6f5d75869223deb0391da" Apr 24 19:08:19.184530 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:08:19.184509 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5a1c48102148e451dee1a2629445a660f90ec76d2af6f5d75869223deb0391da\": container with ID starting with 5a1c48102148e451dee1a2629445a660f90ec76d2af6f5d75869223deb0391da not found: ID does not exist" containerID="5a1c48102148e451dee1a2629445a660f90ec76d2af6f5d75869223deb0391da" Apr 24 19:08:19.184604 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:19.184548 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a1c48102148e451dee1a2629445a660f90ec76d2af6f5d75869223deb0391da"} err="failed to get container status \"5a1c48102148e451dee1a2629445a660f90ec76d2af6f5d75869223deb0391da\": rpc error: code = NotFound desc = could not find container \"5a1c48102148e451dee1a2629445a660f90ec76d2af6f5d75869223deb0391da\": container with ID starting with 5a1c48102148e451dee1a2629445a660f90ec76d2af6f5d75869223deb0391da not found: ID does not exist" Apr 24 19:08:19.184604 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:19.184576 2571 scope.go:117] "RemoveContainer" containerID="ee65c74a44bed7911166558ecc42be608920c8633c14fef833879fd52613f839" Apr 24 19:08:19.184848 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:08:19.184825 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee65c74a44bed7911166558ecc42be608920c8633c14fef833879fd52613f839\": container with ID starting with ee65c74a44bed7911166558ecc42be608920c8633c14fef833879fd52613f839 not found: ID does not exist" containerID="ee65c74a44bed7911166558ecc42be608920c8633c14fef833879fd52613f839" Apr 24 19:08:19.184898 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:19.184858 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee65c74a44bed7911166558ecc42be608920c8633c14fef833879fd52613f839"} err="failed to get container status \"ee65c74a44bed7911166558ecc42be608920c8633c14fef833879fd52613f839\": rpc error: code = NotFound desc = could not find container 
\"ee65c74a44bed7911166558ecc42be608920c8633c14fef833879fd52613f839\": container with ID starting with ee65c74a44bed7911166558ecc42be608920c8633c14fef833879fd52613f839 not found: ID does not exist" Apr 24 19:08:19.184898 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:19.184880 2571 scope.go:117] "RemoveContainer" containerID="d9052633cb2d41f4006b17ec8ce933cefb6c972420a319069d890b329b9c05e9" Apr 24 19:08:19.185299 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:08:19.185277 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9052633cb2d41f4006b17ec8ce933cefb6c972420a319069d890b329b9c05e9\": container with ID starting with d9052633cb2d41f4006b17ec8ce933cefb6c972420a319069d890b329b9c05e9 not found: ID does not exist" containerID="d9052633cb2d41f4006b17ec8ce933cefb6c972420a319069d890b329b9c05e9" Apr 24 19:08:19.185363 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:19.185304 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9052633cb2d41f4006b17ec8ce933cefb6c972420a319069d890b329b9c05e9"} err="failed to get container status \"d9052633cb2d41f4006b17ec8ce933cefb6c972420a319069d890b329b9c05e9\": rpc error: code = NotFound desc = could not find container \"d9052633cb2d41f4006b17ec8ce933cefb6c972420a319069d890b329b9c05e9\": container with ID starting with d9052633cb2d41f4006b17ec8ce933cefb6c972420a319069d890b329b9c05e9 not found: ID does not exist" Apr 24 19:08:19.185363 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:19.185323 2571 scope.go:117] "RemoveContainer" containerID="1276125089ed250c8b7f5a84b318ce1ab6391438860f47fcae7735478e94f35f" Apr 24 19:08:19.185604 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:08:19.185584 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1276125089ed250c8b7f5a84b318ce1ab6391438860f47fcae7735478e94f35f\": container with ID starting with 
1276125089ed250c8b7f5a84b318ce1ab6391438860f47fcae7735478e94f35f not found: ID does not exist" containerID="1276125089ed250c8b7f5a84b318ce1ab6391438860f47fcae7735478e94f35f" Apr 24 19:08:19.185660 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:19.185608 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1276125089ed250c8b7f5a84b318ce1ab6391438860f47fcae7735478e94f35f"} err="failed to get container status \"1276125089ed250c8b7f5a84b318ce1ab6391438860f47fcae7735478e94f35f\": rpc error: code = NotFound desc = could not find container \"1276125089ed250c8b7f5a84b318ce1ab6391438860f47fcae7735478e94f35f\": container with ID starting with 1276125089ed250c8b7f5a84b318ce1ab6391438860f47fcae7735478e94f35f not found: ID does not exist" Apr 24 19:08:19.185660 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:19.185625 2571 scope.go:117] "RemoveContainer" containerID="ba0c182a39ddf2236cefd98cf1dd86b9fee5e2592e9dc44c98d2cdc7de6045fa" Apr 24 19:08:19.185957 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:08:19.185834 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba0c182a39ddf2236cefd98cf1dd86b9fee5e2592e9dc44c98d2cdc7de6045fa\": container with ID starting with ba0c182a39ddf2236cefd98cf1dd86b9fee5e2592e9dc44c98d2cdc7de6045fa not found: ID does not exist" containerID="ba0c182a39ddf2236cefd98cf1dd86b9fee5e2592e9dc44c98d2cdc7de6045fa" Apr 24 19:08:19.185957 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:19.185857 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba0c182a39ddf2236cefd98cf1dd86b9fee5e2592e9dc44c98d2cdc7de6045fa"} err="failed to get container status \"ba0c182a39ddf2236cefd98cf1dd86b9fee5e2592e9dc44c98d2cdc7de6045fa\": rpc error: code = NotFound desc = could not find container \"ba0c182a39ddf2236cefd98cf1dd86b9fee5e2592e9dc44c98d2cdc7de6045fa\": container with ID starting with 
ba0c182a39ddf2236cefd98cf1dd86b9fee5e2592e9dc44c98d2cdc7de6045fa not found: ID does not exist" Apr 24 19:08:19.185957 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:19.185876 2571 scope.go:117] "RemoveContainer" containerID="fd867fba1986b47f29050ca036f1233ddd8093e7a0e95a2d07feb7b9c68e1d7e" Apr 24 19:08:19.186146 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:08:19.186124 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd867fba1986b47f29050ca036f1233ddd8093e7a0e95a2d07feb7b9c68e1d7e\": container with ID starting with fd867fba1986b47f29050ca036f1233ddd8093e7a0e95a2d07feb7b9c68e1d7e not found: ID does not exist" containerID="fd867fba1986b47f29050ca036f1233ddd8093e7a0e95a2d07feb7b9c68e1d7e" Apr 24 19:08:19.186204 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:19.186142 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd867fba1986b47f29050ca036f1233ddd8093e7a0e95a2d07feb7b9c68e1d7e"} err="failed to get container status \"fd867fba1986b47f29050ca036f1233ddd8093e7a0e95a2d07feb7b9c68e1d7e\": rpc error: code = NotFound desc = could not find container \"fd867fba1986b47f29050ca036f1233ddd8093e7a0e95a2d07feb7b9c68e1d7e\": container with ID starting with fd867fba1986b47f29050ca036f1233ddd8093e7a0e95a2d07feb7b9c68e1d7e not found: ID does not exist" Apr 24 19:08:19.186204 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:19.186157 2571 scope.go:117] "RemoveContainer" containerID="a584a338fe68345ce373ad174054e3bfbbf67b791863157c6509817e70fc29f1" Apr 24 19:08:19.186396 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:08:19.186359 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a584a338fe68345ce373ad174054e3bfbbf67b791863157c6509817e70fc29f1\": container with ID starting with a584a338fe68345ce373ad174054e3bfbbf67b791863157c6509817e70fc29f1 not found: ID does not exist" 
containerID="a584a338fe68345ce373ad174054e3bfbbf67b791863157c6509817e70fc29f1" Apr 24 19:08:19.186396 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:19.186375 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a584a338fe68345ce373ad174054e3bfbbf67b791863157c6509817e70fc29f1"} err="failed to get container status \"a584a338fe68345ce373ad174054e3bfbbf67b791863157c6509817e70fc29f1\": rpc error: code = NotFound desc = could not find container \"a584a338fe68345ce373ad174054e3bfbbf67b791863157c6509817e70fc29f1\": container with ID starting with a584a338fe68345ce373ad174054e3bfbbf67b791863157c6509817e70fc29f1 not found: ID does not exist" Apr 24 19:08:19.267560 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:19.267489 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Apr 24 19:08:19.270896 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:08:19.270871 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf1b7947_aa90_42fa_b1a3_348370f46e19.slice/crio-2f6d6a90977e8fbeaeb065fd77980b42be79bbd19cbe468f4fb9cdc9e1560c09 WatchSource:0}: Error finding container 2f6d6a90977e8fbeaeb065fd77980b42be79bbd19cbe468f4fb9cdc9e1560c09: Status 404 returned error can't find the container with id 2f6d6a90977e8fbeaeb065fd77980b42be79bbd19cbe468f4fb9cdc9e1560c09 Apr 24 19:08:19.650371 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:19.650273 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-7fd887b88c-dsp79" event={"ID":"17d45a70-60d5-447b-96ab-bece4ce1be59","Type":"ContainerStarted","Data":"d637daab529a37fc8fda99586413b25e1a4ec6c1a9f3d691d5dc57da257ceabb"} Apr 24 19:08:19.650371 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:19.650317 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-7fd887b88c-dsp79" 
event={"ID":"17d45a70-60d5-447b-96ab-bece4ce1be59","Type":"ContainerStarted","Data":"d34fd075d5fb9808fdf2a5a5ada37e5fd34972901a4564304338bb416c291022"} Apr 24 19:08:19.650371 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:19.650332 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-7fd887b88c-dsp79" event={"ID":"17d45a70-60d5-447b-96ab-bece4ce1be59","Type":"ContainerStarted","Data":"de366b551146df3e47e972730db5432ae87bf72f35cf1b512273e2ac0338afb4"} Apr 24 19:08:19.651606 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:19.651582 2571 generic.go:358] "Generic (PLEG): container finished" podID="bf1b7947-aa90-42fa-b1a3-348370f46e19" containerID="2fc5457f4b6683fe5dd6d2d9cd7b19221053bdb7cee2eb0a9def385a55a9fa9c" exitCode=0 Apr 24 19:08:19.651715 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:19.651653 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"bf1b7947-aa90-42fa-b1a3-348370f46e19","Type":"ContainerDied","Data":"2fc5457f4b6683fe5dd6d2d9cd7b19221053bdb7cee2eb0a9def385a55a9fa9c"} Apr 24 19:08:19.651715 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:19.651680 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"bf1b7947-aa90-42fa-b1a3-348370f46e19","Type":"ContainerStarted","Data":"2f6d6a90977e8fbeaeb065fd77980b42be79bbd19cbe468f4fb9cdc9e1560c09"} Apr 24 19:08:19.676558 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:19.676511 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-7fd887b88c-dsp79" podStartSLOduration=1.739165475 podStartE2EDuration="3.676496317s" podCreationTimestamp="2026-04-24 19:08:16 +0000 UTC" firstStartedPulling="2026-04-24 19:08:17.220841463 +0000 UTC m=+107.710265742" lastFinishedPulling="2026-04-24 19:08:19.158172308 +0000 UTC m=+109.647596584" observedRunningTime="2026-04-24 19:08:19.67512328 +0000 UTC 
m=+110.164547621" watchObservedRunningTime="2026-04-24 19:08:19.676496317 +0000 UTC m=+110.165920641" Apr 24 19:08:20.109349 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:20.109315 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e1f5d2f-cc78-4f75-b9b4-976566e660fe" path="/var/lib/kubelet/pods/2e1f5d2f-cc78-4f75-b9b4-976566e660fe/volumes" Apr 24 19:08:20.660754 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:20.660718 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"bf1b7947-aa90-42fa-b1a3-348370f46e19","Type":"ContainerStarted","Data":"39a3f2075ae584dcfb0971b8455be9251d4d4d83d9e3666f92f6eb195d5eeb5d"} Apr 24 19:08:20.660754 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:20.660756 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"bf1b7947-aa90-42fa-b1a3-348370f46e19","Type":"ContainerStarted","Data":"8dfdd0554aaba2445a8b0516ab2383d4c32b06ab100be1e74838db242bda034b"} Apr 24 19:08:20.661258 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:20.660769 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"bf1b7947-aa90-42fa-b1a3-348370f46e19","Type":"ContainerStarted","Data":"81096face3993585a2d1cb16831c87f939d5ea8cbe32fcb697df8b939ef9425b"} Apr 24 19:08:20.661258 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:20.660780 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"bf1b7947-aa90-42fa-b1a3-348370f46e19","Type":"ContainerStarted","Data":"0f4750ae1b16dbf1b9a3fe7907fec9c4e4fd193b4b3f0077a64fe011eee62133"} Apr 24 19:08:20.661258 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:20.660790 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"bf1b7947-aa90-42fa-b1a3-348370f46e19","Type":"ContainerStarted","Data":"1a2a321b91a8b2683bb0d0ae90e2467cb3fea359f1533ad884def3b14803d44e"} Apr 24 19:08:20.661258 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:20.660801 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"bf1b7947-aa90-42fa-b1a3-348370f46e19","Type":"ContainerStarted","Data":"4458dd884d82599f90edd1ae8c269fd1ff54803d07fbc523042dbc6c2d9626d6"} Apr 24 19:08:20.694540 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:20.694478 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=2.694462987 podStartE2EDuration="2.694462987s" podCreationTimestamp="2026-04-24 19:08:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 19:08:20.692184592 +0000 UTC m=+111.181608891" watchObservedRunningTime="2026-04-24 19:08:20.694462987 +0000 UTC m=+111.183887284" Apr 24 19:08:24.029652 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:08:24.029609 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:09:19.029019 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:09:19.028942 2571 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:09:19.044020 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:09:19.043991 2571 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:09:19.845462 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:09:19.845434 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Apr 24 19:10:03.863921 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:10:03.863870 2571 kubelet.go:2537] "SyncLoop ADD" 
source="api" pods=["kube-system/global-pull-secret-syncer-sppcw"] Apr 24 19:10:03.867040 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:10:03.867022 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-sppcw" Apr 24 19:10:03.869268 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:10:03.869252 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"original-pull-secret\"" Apr 24 19:10:03.874772 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:10:03.874753 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-sppcw"] Apr 24 19:10:03.885467 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:10:03.885447 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/06e96d17-2cb4-45ff-b8aa-5dd288628fc3-kubelet-config\") pod \"global-pull-secret-syncer-sppcw\" (UID: \"06e96d17-2cb4-45ff-b8aa-5dd288628fc3\") " pod="kube-system/global-pull-secret-syncer-sppcw" Apr 24 19:10:03.885581 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:10:03.885480 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/06e96d17-2cb4-45ff-b8aa-5dd288628fc3-original-pull-secret\") pod \"global-pull-secret-syncer-sppcw\" (UID: \"06e96d17-2cb4-45ff-b8aa-5dd288628fc3\") " pod="kube-system/global-pull-secret-syncer-sppcw" Apr 24 19:10:03.885581 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:10:03.885510 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/06e96d17-2cb4-45ff-b8aa-5dd288628fc3-dbus\") pod \"global-pull-secret-syncer-sppcw\" (UID: \"06e96d17-2cb4-45ff-b8aa-5dd288628fc3\") " pod="kube-system/global-pull-secret-syncer-sppcw" Apr 24 19:10:03.986388 ip-10-0-130-85 
kubenswrapper[2571]: I0424 19:10:03.986355 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/06e96d17-2cb4-45ff-b8aa-5dd288628fc3-kubelet-config\") pod \"global-pull-secret-syncer-sppcw\" (UID: \"06e96d17-2cb4-45ff-b8aa-5dd288628fc3\") " pod="kube-system/global-pull-secret-syncer-sppcw" Apr 24 19:10:03.986388 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:10:03.986389 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/06e96d17-2cb4-45ff-b8aa-5dd288628fc3-original-pull-secret\") pod \"global-pull-secret-syncer-sppcw\" (UID: \"06e96d17-2cb4-45ff-b8aa-5dd288628fc3\") " pod="kube-system/global-pull-secret-syncer-sppcw" Apr 24 19:10:03.986559 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:10:03.986410 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/06e96d17-2cb4-45ff-b8aa-5dd288628fc3-dbus\") pod \"global-pull-secret-syncer-sppcw\" (UID: \"06e96d17-2cb4-45ff-b8aa-5dd288628fc3\") " pod="kube-system/global-pull-secret-syncer-sppcw" Apr 24 19:10:03.986559 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:10:03.986470 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/06e96d17-2cb4-45ff-b8aa-5dd288628fc3-kubelet-config\") pod \"global-pull-secret-syncer-sppcw\" (UID: \"06e96d17-2cb4-45ff-b8aa-5dd288628fc3\") " pod="kube-system/global-pull-secret-syncer-sppcw" Apr 24 19:10:03.986630 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:10:03.986592 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/06e96d17-2cb4-45ff-b8aa-5dd288628fc3-dbus\") pod \"global-pull-secret-syncer-sppcw\" (UID: \"06e96d17-2cb4-45ff-b8aa-5dd288628fc3\") " pod="kube-system/global-pull-secret-syncer-sppcw" Apr 24 
19:10:03.988642 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:10:03.988622 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/06e96d17-2cb4-45ff-b8aa-5dd288628fc3-original-pull-secret\") pod \"global-pull-secret-syncer-sppcw\" (UID: \"06e96d17-2cb4-45ff-b8aa-5dd288628fc3\") " pod="kube-system/global-pull-secret-syncer-sppcw" Apr 24 19:10:04.176103 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:10:04.176077 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-sppcw" Apr 24 19:10:04.296806 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:10:04.296775 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-sppcw"] Apr 24 19:10:04.298719 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:10:04.298691 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06e96d17_2cb4_45ff_b8aa_5dd288628fc3.slice/crio-e82fcdc6538dec230a97e5f7ec1333ea7821fa3e03a32fc75ad7675a2e606d4d WatchSource:0}: Error finding container e82fcdc6538dec230a97e5f7ec1333ea7821fa3e03a32fc75ad7675a2e606d4d: Status 404 returned error can't find the container with id e82fcdc6538dec230a97e5f7ec1333ea7821fa3e03a32fc75ad7675a2e606d4d Apr 24 19:10:04.958093 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:10:04.958053 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-sppcw" event={"ID":"06e96d17-2cb4-45ff-b8aa-5dd288628fc3","Type":"ContainerStarted","Data":"e82fcdc6538dec230a97e5f7ec1333ea7821fa3e03a32fc75ad7675a2e606d4d"} Apr 24 19:10:08.972026 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:10:08.971986 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-sppcw" 
event={"ID":"06e96d17-2cb4-45ff-b8aa-5dd288628fc3","Type":"ContainerStarted","Data":"c45ed4fb984c82f7dc58042bcd4d42c16152302a06ccf8da7475d9da8b674edd"} Apr 24 19:10:08.990069 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:10:08.990022 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/global-pull-secret-syncer-sppcw" podStartSLOduration=2.031011141 podStartE2EDuration="5.990008676s" podCreationTimestamp="2026-04-24 19:10:03 +0000 UTC" firstStartedPulling="2026-04-24 19:10:04.300350534 +0000 UTC m=+214.789774816" lastFinishedPulling="2026-04-24 19:10:08.259348071 +0000 UTC m=+218.748772351" observedRunningTime="2026-04-24 19:10:08.988878572 +0000 UTC m=+219.478302881" watchObservedRunningTime="2026-04-24 19:10:08.990008676 +0000 UTC m=+219.479432976" Apr 24 19:11:29.991952 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:11:29.991927 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9rqzs_0f254dda-8cf9-4910-99c2-638993f242ac/ovn-acl-logging/0.log" Apr 24 19:11:29.992493 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:11:29.991935 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9rqzs_0f254dda-8cf9-4910-99c2-638993f242ac/ovn-acl-logging/0.log" Apr 24 19:11:29.996249 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:11:29.996231 2571 kubelet.go:1628] "Image garbage collection succeeded" Apr 24 19:12:08.005514 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:12:08.005482 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/llmisvc-controller-manager-68cc5db7c4-7ndf4"] Apr 24 19:12:08.007661 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:12:08.007645 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve/llmisvc-controller-manager-68cc5db7c4-7ndf4" Apr 24 19:12:08.010463 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:12:08.010441 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"openshift-service-ca.crt\"" Apr 24 19:12:08.010570 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:12:08.010473 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"kube-root-ca.crt\"" Apr 24 19:12:08.011658 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:12:08.011638 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"llmisvc-controller-manager-dockercfg-cc6sj\"" Apr 24 19:12:08.011764 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:12:08.011669 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"llmisvc-webhook-server-cert\"" Apr 24 19:12:08.017080 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:12:08.017056 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/llmisvc-controller-manager-68cc5db7c4-7ndf4"] Apr 24 19:12:08.168526 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:12:08.168492 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/487924d6-78ed-407f-97bb-6ef4e63a0c74-cert\") pod \"llmisvc-controller-manager-68cc5db7c4-7ndf4\" (UID: \"487924d6-78ed-407f-97bb-6ef4e63a0c74\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-7ndf4" Apr 24 19:12:08.168702 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:12:08.168551 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt7vj\" (UniqueName: \"kubernetes.io/projected/487924d6-78ed-407f-97bb-6ef4e63a0c74-kube-api-access-kt7vj\") pod \"llmisvc-controller-manager-68cc5db7c4-7ndf4\" (UID: \"487924d6-78ed-407f-97bb-6ef4e63a0c74\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-7ndf4" Apr 24 
19:12:08.269667 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:12:08.269581 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/487924d6-78ed-407f-97bb-6ef4e63a0c74-cert\") pod \"llmisvc-controller-manager-68cc5db7c4-7ndf4\" (UID: \"487924d6-78ed-407f-97bb-6ef4e63a0c74\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-7ndf4" Apr 24 19:12:08.269667 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:12:08.269633 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kt7vj\" (UniqueName: \"kubernetes.io/projected/487924d6-78ed-407f-97bb-6ef4e63a0c74-kube-api-access-kt7vj\") pod \"llmisvc-controller-manager-68cc5db7c4-7ndf4\" (UID: \"487924d6-78ed-407f-97bb-6ef4e63a0c74\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-7ndf4" Apr 24 19:12:08.272059 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:12:08.272036 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/487924d6-78ed-407f-97bb-6ef4e63a0c74-cert\") pod \"llmisvc-controller-manager-68cc5db7c4-7ndf4\" (UID: \"487924d6-78ed-407f-97bb-6ef4e63a0c74\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-7ndf4" Apr 24 19:12:08.280295 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:12:08.280273 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt7vj\" (UniqueName: \"kubernetes.io/projected/487924d6-78ed-407f-97bb-6ef4e63a0c74-kube-api-access-kt7vj\") pod \"llmisvc-controller-manager-68cc5db7c4-7ndf4\" (UID: \"487924d6-78ed-407f-97bb-6ef4e63a0c74\") " pod="kserve/llmisvc-controller-manager-68cc5db7c4-7ndf4" Apr 24 19:12:08.318723 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:12:08.318691 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve/llmisvc-controller-manager-68cc5db7c4-7ndf4" Apr 24 19:12:08.434626 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:12:08.434599 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/llmisvc-controller-manager-68cc5db7c4-7ndf4"] Apr 24 19:12:08.437115 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:12:08.437089 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod487924d6_78ed_407f_97bb_6ef4e63a0c74.slice/crio-4d2d8770bfe2d788ed0f3ba4e65bc3eaf9ac369d80b31271a5b2d2c85b4c01a5 WatchSource:0}: Error finding container 4d2d8770bfe2d788ed0f3ba4e65bc3eaf9ac369d80b31271a5b2d2c85b4c01a5: Status 404 returned error can't find the container with id 4d2d8770bfe2d788ed0f3ba4e65bc3eaf9ac369d80b31271a5b2d2c85b4c01a5 Apr 24 19:12:08.438669 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:12:08.438653 2571 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 24 19:12:09.308129 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:12:09.308091 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/llmisvc-controller-manager-68cc5db7c4-7ndf4" event={"ID":"487924d6-78ed-407f-97bb-6ef4e63a0c74","Type":"ContainerStarted","Data":"4d2d8770bfe2d788ed0f3ba4e65bc3eaf9ac369d80b31271a5b2d2c85b4c01a5"} Apr 24 19:12:10.312311 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:12:10.312273 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/llmisvc-controller-manager-68cc5db7c4-7ndf4" event={"ID":"487924d6-78ed-407f-97bb-6ef4e63a0c74","Type":"ContainerStarted","Data":"ee102469ec911b653af112789cbbf56f11a51c049e484c6d84e71734b3692e2b"} Apr 24 19:12:10.312691 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:12:10.312451 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/llmisvc-controller-manager-68cc5db7c4-7ndf4" Apr 24 19:12:10.330015 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:12:10.329969 2571 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/llmisvc-controller-manager-68cc5db7c4-7ndf4" podStartSLOduration=1.626968458 podStartE2EDuration="3.32995437s" podCreationTimestamp="2026-04-24 19:12:07 +0000 UTC" firstStartedPulling="2026-04-24 19:12:08.438774214 +0000 UTC m=+338.928198490" lastFinishedPulling="2026-04-24 19:12:10.141760125 +0000 UTC m=+340.631184402" observedRunningTime="2026-04-24 19:12:10.32853217 +0000 UTC m=+340.817956470" watchObservedRunningTime="2026-04-24 19:12:10.32995437 +0000 UTC m=+340.819378668" Apr 24 19:12:41.316943 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:12:41.316914 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/llmisvc-controller-manager-68cc5db7c4-7ndf4" Apr 24 19:13:31.847010 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:31.846973 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/s3-init-rmzm5"] Apr 24 19:13:31.850544 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:31.850521 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve/s3-init-rmzm5" Apr 24 19:13:31.853353 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:31.853335 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"default-dockercfg-9bmvh\"" Apr 24 19:13:31.853448 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:31.853337 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"mlpipeline-s3-artifact\"" Apr 24 19:13:31.858304 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:31.858283 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/s3-init-rmzm5"] Apr 24 19:13:31.940365 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:31.940330 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q89jr\" (UniqueName: \"kubernetes.io/projected/6ba70631-6e05-4949-8b53-5ca189f77144-kube-api-access-q89jr\") pod \"s3-init-rmzm5\" (UID: \"6ba70631-6e05-4949-8b53-5ca189f77144\") " pod="kserve/s3-init-rmzm5" Apr 24 19:13:32.041301 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:32.041272 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q89jr\" (UniqueName: \"kubernetes.io/projected/6ba70631-6e05-4949-8b53-5ca189f77144-kube-api-access-q89jr\") pod \"s3-init-rmzm5\" (UID: \"6ba70631-6e05-4949-8b53-5ca189f77144\") " pod="kserve/s3-init-rmzm5" Apr 24 19:13:32.050944 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:32.050899 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q89jr\" (UniqueName: \"kubernetes.io/projected/6ba70631-6e05-4949-8b53-5ca189f77144-kube-api-access-q89jr\") pod \"s3-init-rmzm5\" (UID: \"6ba70631-6e05-4949-8b53-5ca189f77144\") " pod="kserve/s3-init-rmzm5" Apr 24 19:13:32.167116 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:32.167093 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve/s3-init-rmzm5" Apr 24 19:13:32.293190 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:32.293154 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/s3-init-rmzm5"] Apr 24 19:13:32.296232 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:13:32.296205 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ba70631_6e05_4949_8b53_5ca189f77144.slice/crio-cc9808519068b796a8d863149ea2fce8f4344e637e75522cfecc5a310ef08bc5 WatchSource:0}: Error finding container cc9808519068b796a8d863149ea2fce8f4344e637e75522cfecc5a310ef08bc5: Status 404 returned error can't find the container with id cc9808519068b796a8d863149ea2fce8f4344e637e75522cfecc5a310ef08bc5 Apr 24 19:13:32.556011 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:32.555928 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/s3-init-rmzm5" event={"ID":"6ba70631-6e05-4949-8b53-5ca189f77144","Type":"ContainerStarted","Data":"cc9808519068b796a8d863149ea2fce8f4344e637e75522cfecc5a310ef08bc5"} Apr 24 19:13:37.573043 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:37.573003 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/s3-init-rmzm5" event={"ID":"6ba70631-6e05-4949-8b53-5ca189f77144","Type":"ContainerStarted","Data":"37646a305cb94aadb51a06a3f3b9fdd7d529f7edce183b930a68ec0877b3586e"} Apr 24 19:13:37.589198 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:37.589143 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/s3-init-rmzm5" podStartSLOduration=2.168643498 podStartE2EDuration="6.589124991s" podCreationTimestamp="2026-04-24 19:13:31 +0000 UTC" firstStartedPulling="2026-04-24 19:13:32.298114924 +0000 UTC m=+422.787539200" lastFinishedPulling="2026-04-24 19:13:36.718596417 +0000 UTC m=+427.208020693" observedRunningTime="2026-04-24 19:13:37.588227498 +0000 UTC m=+428.077651798" watchObservedRunningTime="2026-04-24 19:13:37.589124991 
+0000 UTC m=+428.078549290" Apr 24 19:13:40.583112 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:40.583081 2571 generic.go:358] "Generic (PLEG): container finished" podID="6ba70631-6e05-4949-8b53-5ca189f77144" containerID="37646a305cb94aadb51a06a3f3b9fdd7d529f7edce183b930a68ec0877b3586e" exitCode=0 Apr 24 19:13:40.583479 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:40.583161 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/s3-init-rmzm5" event={"ID":"6ba70631-6e05-4949-8b53-5ca189f77144","Type":"ContainerDied","Data":"37646a305cb94aadb51a06a3f3b9fdd7d529f7edce183b930a68ec0877b3586e"} Apr 24 19:13:41.706195 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:41.706173 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve/s3-init-rmzm5" Apr 24 19:13:41.822830 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:41.822800 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q89jr\" (UniqueName: \"kubernetes.io/projected/6ba70631-6e05-4949-8b53-5ca189f77144-kube-api-access-q89jr\") pod \"6ba70631-6e05-4949-8b53-5ca189f77144\" (UID: \"6ba70631-6e05-4949-8b53-5ca189f77144\") " Apr 24 19:13:41.824852 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:41.824824 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ba70631-6e05-4949-8b53-5ca189f77144-kube-api-access-q89jr" (OuterVolumeSpecName: "kube-api-access-q89jr") pod "6ba70631-6e05-4949-8b53-5ca189f77144" (UID: "6ba70631-6e05-4949-8b53-5ca189f77144"). InnerVolumeSpecName "kube-api-access-q89jr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 19:13:41.924109 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:41.924085 2571 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q89jr\" (UniqueName: \"kubernetes.io/projected/6ba70631-6e05-4949-8b53-5ca189f77144-kube-api-access-q89jr\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:13:42.590216 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:42.590188 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve/s3-init-rmzm5" Apr 24 19:13:42.590381 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:42.590187 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/s3-init-rmzm5" event={"ID":"6ba70631-6e05-4949-8b53-5ca189f77144","Type":"ContainerDied","Data":"cc9808519068b796a8d863149ea2fce8f4344e637e75522cfecc5a310ef08bc5"} Apr 24 19:13:42.590381 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:42.590294 2571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc9808519068b796a8d863149ea2fce8f4344e637e75522cfecc5a310ef08bc5" Apr 24 19:13:51.974922 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:51.974823 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7"] Apr 24 19:13:51.975264 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:51.975161 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6ba70631-6e05-4949-8b53-5ca189f77144" containerName="s3-init" Apr 24 19:13:51.975264 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:51.975171 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ba70631-6e05-4949-8b53-5ca189f77144" containerName="s3-init" Apr 24 19:13:51.975264 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:51.975232 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="6ba70631-6e05-4949-8b53-5ca189f77144" containerName="s3-init" Apr 24 19:13:51.978236 
ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:51.978221 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" Apr 24 19:13:51.980746 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:51.980719 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"isvc-xgboost-graph-predictor-serving-cert\"" Apr 24 19:13:51.980746 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:51.980739 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"openshift-service-ca.crt\"" Apr 24 19:13:51.980927 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:51.980859 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"isvc-xgboost-graph-kube-rbac-proxy-sar-config\"" Apr 24 19:13:51.981874 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:51.981858 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"default-dockercfg-rsc6c\"" Apr 24 19:13:51.981946 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:51.981894 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"kube-root-ca.crt\"" Apr 24 19:13:51.986578 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:51.986552 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7"] Apr 24 19:13:52.108168 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:52.108139 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msjdd\" (UniqueName: \"kubernetes.io/projected/bed5ebf8-3246-41fb-acca-4d84f4c2a5d3-kube-api-access-msjdd\") pod \"isvc-xgboost-graph-predictor-669d8d6456-nnbc7\" (UID: \"bed5ebf8-3246-41fb-acca-4d84f4c2a5d3\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" Apr 
24 19:13:52.108168 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:52.108172 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bed5ebf8-3246-41fb-acca-4d84f4c2a5d3-proxy-tls\") pod \"isvc-xgboost-graph-predictor-669d8d6456-nnbc7\" (UID: \"bed5ebf8-3246-41fb-acca-4d84f4c2a5d3\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" Apr 24 19:13:52.108395 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:52.108224 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/bed5ebf8-3246-41fb-acca-4d84f4c2a5d3-kserve-provision-location\") pod \"isvc-xgboost-graph-predictor-669d8d6456-nnbc7\" (UID: \"bed5ebf8-3246-41fb-acca-4d84f4c2a5d3\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" Apr 24 19:13:52.108395 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:52.108310 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"isvc-xgboost-graph-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/bed5ebf8-3246-41fb-acca-4d84f4c2a5d3-isvc-xgboost-graph-kube-rbac-proxy-sar-config\") pod \"isvc-xgboost-graph-predictor-669d8d6456-nnbc7\" (UID: \"bed5ebf8-3246-41fb-acca-4d84f4c2a5d3\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" Apr 24 19:13:52.209509 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:52.209480 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-msjdd\" (UniqueName: \"kubernetes.io/projected/bed5ebf8-3246-41fb-acca-4d84f4c2a5d3-kube-api-access-msjdd\") pod \"isvc-xgboost-graph-predictor-669d8d6456-nnbc7\" (UID: \"bed5ebf8-3246-41fb-acca-4d84f4c2a5d3\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" Apr 24 19:13:52.209630 ip-10-0-130-85 kubenswrapper[2571]: 
I0424 19:13:52.209516 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bed5ebf8-3246-41fb-acca-4d84f4c2a5d3-proxy-tls\") pod \"isvc-xgboost-graph-predictor-669d8d6456-nnbc7\" (UID: \"bed5ebf8-3246-41fb-acca-4d84f4c2a5d3\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" Apr 24 19:13:52.209630 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:52.209567 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/bed5ebf8-3246-41fb-acca-4d84f4c2a5d3-kserve-provision-location\") pod \"isvc-xgboost-graph-predictor-669d8d6456-nnbc7\" (UID: \"bed5ebf8-3246-41fb-acca-4d84f4c2a5d3\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" Apr 24 19:13:52.209630 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:52.209602 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"isvc-xgboost-graph-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/bed5ebf8-3246-41fb-acca-4d84f4c2a5d3-isvc-xgboost-graph-kube-rbac-proxy-sar-config\") pod \"isvc-xgboost-graph-predictor-669d8d6456-nnbc7\" (UID: \"bed5ebf8-3246-41fb-acca-4d84f4c2a5d3\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" Apr 24 19:13:52.210003 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:52.209979 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/bed5ebf8-3246-41fb-acca-4d84f4c2a5d3-kserve-provision-location\") pod \"isvc-xgboost-graph-predictor-669d8d6456-nnbc7\" (UID: \"bed5ebf8-3246-41fb-acca-4d84f4c2a5d3\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" Apr 24 19:13:52.210226 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:52.210207 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"isvc-xgboost-graph-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/bed5ebf8-3246-41fb-acca-4d84f4c2a5d3-isvc-xgboost-graph-kube-rbac-proxy-sar-config\") pod \"isvc-xgboost-graph-predictor-669d8d6456-nnbc7\" (UID: \"bed5ebf8-3246-41fb-acca-4d84f4c2a5d3\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" Apr 24 19:13:52.212013 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:52.211993 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bed5ebf8-3246-41fb-acca-4d84f4c2a5d3-proxy-tls\") pod \"isvc-xgboost-graph-predictor-669d8d6456-nnbc7\" (UID: \"bed5ebf8-3246-41fb-acca-4d84f4c2a5d3\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" Apr 24 19:13:52.218705 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:52.218678 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-msjdd\" (UniqueName: \"kubernetes.io/projected/bed5ebf8-3246-41fb-acca-4d84f4c2a5d3-kube-api-access-msjdd\") pod \"isvc-xgboost-graph-predictor-669d8d6456-nnbc7\" (UID: \"bed5ebf8-3246-41fb-acca-4d84f4c2a5d3\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" Apr 24 19:13:52.288726 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:52.288669 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" Apr 24 19:13:52.407627 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:52.407593 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7"] Apr 24 19:13:52.430495 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:52.430466 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv"] Apr 24 19:13:52.435273 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:52.435252 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" Apr 24 19:13:52.437799 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:52.437776 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"isvc-sklearn-graph-2-kube-rbac-proxy-sar-config\"" Apr 24 19:13:52.437867 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:52.437787 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"isvc-sklearn-graph-2-predictor-serving-cert\"" Apr 24 19:13:52.443654 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:52.443633 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv"] Apr 24 19:13:52.511951 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:52.511914 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4c4be8d4-5f52-4c1c-9ad0-de17127af99a-proxy-tls\") pod \"isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv\" (UID: \"4c4be8d4-5f52-4c1c-9ad0-de17127af99a\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" Apr 24 19:13:52.511951 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:52.511952 2571 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"isvc-sklearn-graph-2-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/4c4be8d4-5f52-4c1c-9ad0-de17127af99a-isvc-sklearn-graph-2-kube-rbac-proxy-sar-config\") pod \"isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv\" (UID: \"4c4be8d4-5f52-4c1c-9ad0-de17127af99a\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" Apr 24 19:13:52.512122 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:52.511996 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/4c4be8d4-5f52-4c1c-9ad0-de17127af99a-kserve-provision-location\") pod \"isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv\" (UID: \"4c4be8d4-5f52-4c1c-9ad0-de17127af99a\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" Apr 24 19:13:52.512122 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:52.512104 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdxx8\" (UniqueName: \"kubernetes.io/projected/4c4be8d4-5f52-4c1c-9ad0-de17127af99a-kube-api-access-kdxx8\") pod \"isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv\" (UID: \"4c4be8d4-5f52-4c1c-9ad0-de17127af99a\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" Apr 24 19:13:52.613435 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:52.613355 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4c4be8d4-5f52-4c1c-9ad0-de17127af99a-proxy-tls\") pod \"isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv\" (UID: \"4c4be8d4-5f52-4c1c-9ad0-de17127af99a\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" Apr 24 19:13:52.613435 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:52.613392 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"isvc-sklearn-graph-2-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/4c4be8d4-5f52-4c1c-9ad0-de17127af99a-isvc-sklearn-graph-2-kube-rbac-proxy-sar-config\") pod \"isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv\" (UID: \"4c4be8d4-5f52-4c1c-9ad0-de17127af99a\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" Apr 24 19:13:52.613435 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:52.613417 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/4c4be8d4-5f52-4c1c-9ad0-de17127af99a-kserve-provision-location\") pod \"isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv\" (UID: \"4c4be8d4-5f52-4c1c-9ad0-de17127af99a\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" Apr 24 19:13:52.613699 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:52.613468 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kdxx8\" (UniqueName: \"kubernetes.io/projected/4c4be8d4-5f52-4c1c-9ad0-de17127af99a-kube-api-access-kdxx8\") pod \"isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv\" (UID: \"4c4be8d4-5f52-4c1c-9ad0-de17127af99a\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" Apr 24 19:13:52.613863 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:52.613839 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/4c4be8d4-5f52-4c1c-9ad0-de17127af99a-kserve-provision-location\") pod \"isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv\" (UID: \"4c4be8d4-5f52-4c1c-9ad0-de17127af99a\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" Apr 24 19:13:52.614164 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:52.614140 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"isvc-sklearn-graph-2-kube-rbac-proxy-sar-config\" (UniqueName: 
\"kubernetes.io/configmap/4c4be8d4-5f52-4c1c-9ad0-de17127af99a-isvc-sklearn-graph-2-kube-rbac-proxy-sar-config\") pod \"isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv\" (UID: \"4c4be8d4-5f52-4c1c-9ad0-de17127af99a\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" Apr 24 19:13:52.615802 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:52.615781 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4c4be8d4-5f52-4c1c-9ad0-de17127af99a-proxy-tls\") pod \"isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv\" (UID: \"4c4be8d4-5f52-4c1c-9ad0-de17127af99a\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" Apr 24 19:13:52.619790 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:52.619766 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" event={"ID":"bed5ebf8-3246-41fb-acca-4d84f4c2a5d3","Type":"ContainerStarted","Data":"7f438123cf2732547f753f17f9ce899770dd67eb6a11d10c48d022e96bf4a4a8"} Apr 24 19:13:52.621239 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:52.621215 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdxx8\" (UniqueName: \"kubernetes.io/projected/4c4be8d4-5f52-4c1c-9ad0-de17127af99a-kube-api-access-kdxx8\") pod \"isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv\" (UID: \"4c4be8d4-5f52-4c1c-9ad0-de17127af99a\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" Apr 24 19:13:52.745773 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:52.745740 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" Apr 24 19:13:52.875642 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:52.875613 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv"] Apr 24 19:13:52.877330 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:13:52.877305 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c4be8d4_5f52_4c1c_9ad0_de17127af99a.slice/crio-bf36327d73c79106b6a7592a27c47a5a845aeb921ac715f2b199538209d72811 WatchSource:0}: Error finding container bf36327d73c79106b6a7592a27c47a5a845aeb921ac715f2b199538209d72811: Status 404 returned error can't find the container with id bf36327d73c79106b6a7592a27c47a5a845aeb921ac715f2b199538209d72811 Apr 24 19:13:53.625972 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:53.625888 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" event={"ID":"4c4be8d4-5f52-4c1c-9ad0-de17127af99a","Type":"ContainerStarted","Data":"bf36327d73c79106b6a7592a27c47a5a845aeb921ac715f2b199538209d72811"} Apr 24 19:13:56.637108 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:56.637064 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" event={"ID":"bed5ebf8-3246-41fb-acca-4d84f4c2a5d3","Type":"ContainerStarted","Data":"baa2c9552cf538836fc79813d0f92a8c898618239d02c2b422105f9af73c0cb2"} Apr 24 19:13:56.638609 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:13:56.638581 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" event={"ID":"4c4be8d4-5f52-4c1c-9ad0-de17127af99a","Type":"ContainerStarted","Data":"34690b1dd2cc5e614ec090cee422d54c67059189fe033a3d62e55a12214afa3a"} Apr 24 19:14:00.653510 ip-10-0-130-85 
kubenswrapper[2571]: I0424 19:14:00.653438 2571 generic.go:358] "Generic (PLEG): container finished" podID="4c4be8d4-5f52-4c1c-9ad0-de17127af99a" containerID="34690b1dd2cc5e614ec090cee422d54c67059189fe033a3d62e55a12214afa3a" exitCode=0 Apr 24 19:14:00.653894 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:14:00.653511 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" event={"ID":"4c4be8d4-5f52-4c1c-9ad0-de17127af99a","Type":"ContainerDied","Data":"34690b1dd2cc5e614ec090cee422d54c67059189fe033a3d62e55a12214afa3a"} Apr 24 19:14:00.654976 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:14:00.654945 2571 generic.go:358] "Generic (PLEG): container finished" podID="bed5ebf8-3246-41fb-acca-4d84f4c2a5d3" containerID="baa2c9552cf538836fc79813d0f92a8c898618239d02c2b422105f9af73c0cb2" exitCode=0 Apr 24 19:14:00.655075 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:14:00.654979 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" event={"ID":"bed5ebf8-3246-41fb-acca-4d84f4c2a5d3","Type":"ContainerDied","Data":"baa2c9552cf538836fc79813d0f92a8c898618239d02c2b422105f9af73c0cb2"} Apr 24 19:14:24.758131 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:14:24.758099 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" event={"ID":"4c4be8d4-5f52-4c1c-9ad0-de17127af99a","Type":"ContainerStarted","Data":"77209ca5ed6d72983731700a9d4416810a821f1ca92d668198206cb29efc1ad5"} Apr 24 19:14:25.764819 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:14:25.764786 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" event={"ID":"bed5ebf8-3246-41fb-acca-4d84f4c2a5d3","Type":"ContainerStarted","Data":"573a23d312fa2dccc801e9bb0d24948d6059737b629615825cf60a5e0352b33d"} Apr 24 19:14:27.773192 ip-10-0-130-85 
kubenswrapper[2571]: I0424 19:14:27.773149 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" event={"ID":"4c4be8d4-5f52-4c1c-9ad0-de17127af99a","Type":"ContainerStarted","Data":"d6d18f72b58ee50fbf0c1b4ca93926f0fdda54cdd19fd1dfbe7e8714db21148c"} Apr 24 19:14:27.773621 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:14:27.773275 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" Apr 24 19:14:27.774950 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:14:27.774930 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" event={"ID":"bed5ebf8-3246-41fb-acca-4d84f4c2a5d3","Type":"ContainerStarted","Data":"90c746c2c8a7e0a979d2e29090779e5ef4bb018db1a526d35a3f611e26b310c1"} Apr 24 19:14:27.775190 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:14:27.775173 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" Apr 24 19:14:27.775266 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:14:27.775195 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" Apr 24 19:14:27.776534 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:14:27.776512 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" podUID="bed5ebf8-3246-41fb-acca-4d84f4c2a5d3" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.22:8080: connect: connection refused" Apr 24 19:14:27.797683 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:14:27.797637 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" 
podStartSLOduration=1.5835047260000001 podStartE2EDuration="35.79762167s" podCreationTimestamp="2026-04-24 19:13:52 +0000 UTC" firstStartedPulling="2026-04-24 19:13:52.879186443 +0000 UTC m=+443.368610722" lastFinishedPulling="2026-04-24 19:14:27.09330339 +0000 UTC m=+477.582727666" observedRunningTime="2026-04-24 19:14:27.795742992 +0000 UTC m=+478.285167301" watchObservedRunningTime="2026-04-24 19:14:27.79762167 +0000 UTC m=+478.287045970" Apr 24 19:14:27.816648 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:14:27.816608 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" podStartSLOduration=2.1287139760000002 podStartE2EDuration="36.816597682s" podCreationTimestamp="2026-04-24 19:13:51 +0000 UTC" firstStartedPulling="2026-04-24 19:13:52.411943267 +0000 UTC m=+442.901367545" lastFinishedPulling="2026-04-24 19:14:27.09982697 +0000 UTC m=+477.589251251" observedRunningTime="2026-04-24 19:14:27.815388356 +0000 UTC m=+478.304812656" watchObservedRunningTime="2026-04-24 19:14:27.816597682 +0000 UTC m=+478.306021981" Apr 24 19:14:28.778548 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:14:28.778503 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" podUID="bed5ebf8-3246-41fb-acca-4d84f4c2a5d3" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.22:8080: connect: connection refused" Apr 24 19:14:28.778982 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:14:28.778522 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" Apr 24 19:14:28.779675 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:14:28.779651 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" podUID="4c4be8d4-5f52-4c1c-9ad0-de17127af99a" 
containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.23:8080: connect: connection refused" Apr 24 19:14:29.781688 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:14:29.781648 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" podUID="4c4be8d4-5f52-4c1c-9ad0-de17127af99a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.23:8080: connect: connection refused" Apr 24 19:14:33.782395 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:14:33.782368 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" Apr 24 19:14:33.782893 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:14:33.782865 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" podUID="bed5ebf8-3246-41fb-acca-4d84f4c2a5d3" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.22:8080: connect: connection refused" Apr 24 19:14:34.785899 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:14:34.785865 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" Apr 24 19:14:34.786370 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:14:34.786332 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" podUID="4c4be8d4-5f52-4c1c-9ad0-de17127af99a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.23:8080: connect: connection refused" Apr 24 19:14:43.783611 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:14:43.783573 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" podUID="bed5ebf8-3246-41fb-acca-4d84f4c2a5d3" containerName="kserve-container" 
probeResult="failure" output="dial tcp 10.134.0.22:8080: connect: connection refused" Apr 24 19:14:44.786540 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:14:44.786502 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" podUID="4c4be8d4-5f52-4c1c-9ad0-de17127af99a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.23:8080: connect: connection refused" Apr 24 19:14:53.783481 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:14:53.783440 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" podUID="bed5ebf8-3246-41fb-acca-4d84f4c2a5d3" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.22:8080: connect: connection refused" Apr 24 19:14:54.787110 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:14:54.787069 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" podUID="4c4be8d4-5f52-4c1c-9ad0-de17127af99a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.23:8080: connect: connection refused" Apr 24 19:15:03.782782 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:03.782746 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" podUID="bed5ebf8-3246-41fb-acca-4d84f4c2a5d3" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.22:8080: connect: connection refused" Apr 24 19:15:04.786988 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:04.786950 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" podUID="4c4be8d4-5f52-4c1c-9ad0-de17127af99a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.23:8080: connect: connection refused" Apr 24 19:15:11.759151 ip-10-0-130-85 
kubenswrapper[2571]: I0424 19:15:11.759118 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/switch-graph-48e5e-5bd9bbb6fb-bkjg2"] Apr 24 19:15:11.782349 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:11.782320 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/switch-graph-48e5e-5bd9bbb6fb-bkjg2"] Apr 24 19:15:11.782486 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:11.782434 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-48e5e-5bd9bbb6fb-bkjg2" Apr 24 19:15:11.784888 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:11.784869 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"switch-graph-48e5e-kube-rbac-proxy-sar-config\"" Apr 24 19:15:11.785005 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:11.784869 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"switch-graph-48e5e-serving-cert\"" Apr 24 19:15:11.909819 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:11.909788 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79be89e8-7b53-4d89-8b28-4ec26fb7818d-openshift-service-ca-bundle\") pod \"switch-graph-48e5e-5bd9bbb6fb-bkjg2\" (UID: \"79be89e8-7b53-4d89-8b28-4ec26fb7818d\") " pod="kserve-ci-e2e-test/switch-graph-48e5e-5bd9bbb6fb-bkjg2" Apr 24 19:15:11.910001 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:11.909893 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/79be89e8-7b53-4d89-8b28-4ec26fb7818d-proxy-tls\") pod \"switch-graph-48e5e-5bd9bbb6fb-bkjg2\" (UID: \"79be89e8-7b53-4d89-8b28-4ec26fb7818d\") " pod="kserve-ci-e2e-test/switch-graph-48e5e-5bd9bbb6fb-bkjg2" Apr 24 19:15:12.011216 ip-10-0-130-85 kubenswrapper[2571]: 
I0424 19:15:12.011133 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/79be89e8-7b53-4d89-8b28-4ec26fb7818d-proxy-tls\") pod \"switch-graph-48e5e-5bd9bbb6fb-bkjg2\" (UID: \"79be89e8-7b53-4d89-8b28-4ec26fb7818d\") " pod="kserve-ci-e2e-test/switch-graph-48e5e-5bd9bbb6fb-bkjg2" Apr 24 19:15:12.011216 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:12.011197 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79be89e8-7b53-4d89-8b28-4ec26fb7818d-openshift-service-ca-bundle\") pod \"switch-graph-48e5e-5bd9bbb6fb-bkjg2\" (UID: \"79be89e8-7b53-4d89-8b28-4ec26fb7818d\") " pod="kserve-ci-e2e-test/switch-graph-48e5e-5bd9bbb6fb-bkjg2" Apr 24 19:15:12.012037 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:12.012009 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79be89e8-7b53-4d89-8b28-4ec26fb7818d-openshift-service-ca-bundle\") pod \"switch-graph-48e5e-5bd9bbb6fb-bkjg2\" (UID: \"79be89e8-7b53-4d89-8b28-4ec26fb7818d\") " pod="kserve-ci-e2e-test/switch-graph-48e5e-5bd9bbb6fb-bkjg2" Apr 24 19:15:12.013470 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:12.013449 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/79be89e8-7b53-4d89-8b28-4ec26fb7818d-proxy-tls\") pod \"switch-graph-48e5e-5bd9bbb6fb-bkjg2\" (UID: \"79be89e8-7b53-4d89-8b28-4ec26fb7818d\") " pod="kserve-ci-e2e-test/switch-graph-48e5e-5bd9bbb6fb-bkjg2" Apr 24 19:15:12.092547 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:12.092515 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-48e5e-5bd9bbb6fb-bkjg2" Apr 24 19:15:12.210735 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:12.210705 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/switch-graph-48e5e-5bd9bbb6fb-bkjg2"] Apr 24 19:15:12.215993 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:15:12.215962 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79be89e8_7b53_4d89_8b28_4ec26fb7818d.slice/crio-1bea3e3f5a2ff0b46220dc9a8598e899226fa16ed5d1d8694efac5567799066e WatchSource:0}: Error finding container 1bea3e3f5a2ff0b46220dc9a8598e899226fa16ed5d1d8694efac5567799066e: Status 404 returned error can't find the container with id 1bea3e3f5a2ff0b46220dc9a8598e899226fa16ed5d1d8694efac5567799066e Apr 24 19:15:12.916473 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:12.916433 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-48e5e-5bd9bbb6fb-bkjg2" event={"ID":"79be89e8-7b53-4d89-8b28-4ec26fb7818d","Type":"ContainerStarted","Data":"1bea3e3f5a2ff0b46220dc9a8598e899226fa16ed5d1d8694efac5567799066e"} Apr 24 19:15:13.783552 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:13.783509 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" podUID="bed5ebf8-3246-41fb-acca-4d84f4c2a5d3" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.22:8080: connect: connection refused" Apr 24 19:15:14.786941 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:14.786881 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" podUID="4c4be8d4-5f52-4c1c-9ad0-de17127af99a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.23:8080: connect: connection refused" Apr 24 19:15:14.924534 ip-10-0-130-85 kubenswrapper[2571]: 
I0424 19:15:14.924495 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-48e5e-5bd9bbb6fb-bkjg2" event={"ID":"79be89e8-7b53-4d89-8b28-4ec26fb7818d","Type":"ContainerStarted","Data":"36e9a1865c50a415cfea9f0e8e9d9b2d9ae92b7c54ab710961ca3bb4d730a0eb"} Apr 24 19:15:14.924721 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:14.924574 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/switch-graph-48e5e-5bd9bbb6fb-bkjg2" Apr 24 19:15:14.944357 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:14.944314 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/switch-graph-48e5e-5bd9bbb6fb-bkjg2" podStartSLOduration=1.8929051669999999 podStartE2EDuration="3.944300343s" podCreationTimestamp="2026-04-24 19:15:11 +0000 UTC" firstStartedPulling="2026-04-24 19:15:12.218148126 +0000 UTC m=+522.707572402" lastFinishedPulling="2026-04-24 19:15:14.269543299 +0000 UTC m=+524.758967578" observedRunningTime="2026-04-24 19:15:14.942494272 +0000 UTC m=+525.431918570" watchObservedRunningTime="2026-04-24 19:15:14.944300343 +0000 UTC m=+525.433724640" Apr 24 19:15:20.934642 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:20.934568 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/switch-graph-48e5e-5bd9bbb6fb-bkjg2" Apr 24 19:15:21.863827 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:21.863797 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/switch-graph-48e5e-5bd9bbb6fb-bkjg2"] Apr 24 19:15:21.864049 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:21.864028 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/switch-graph-48e5e-5bd9bbb6fb-bkjg2" podUID="79be89e8-7b53-4d89-8b28-4ec26fb7818d" containerName="switch-graph-48e5e" containerID="cri-o://36e9a1865c50a415cfea9f0e8e9d9b2d9ae92b7c54ab710961ca3bb4d730a0eb" gracePeriod=30 Apr 24 
19:15:23.784088 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:23.784055 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" Apr 24 19:15:24.786710 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:24.786674 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" podUID="4c4be8d4-5f52-4c1c-9ad0-de17127af99a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.23:8080: connect: connection refused" Apr 24 19:15:25.933051 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:25.933008 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-48e5e-5bd9bbb6fb-bkjg2" podUID="79be89e8-7b53-4d89-8b28-4ec26fb7818d" containerName="switch-graph-48e5e" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:15:30.932642 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:30.932600 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-48e5e-5bd9bbb6fb-bkjg2" podUID="79be89e8-7b53-4d89-8b28-4ec26fb7818d" containerName="switch-graph-48e5e" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:15:34.787057 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:34.787024 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" Apr 24 19:15:35.933222 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:35.933181 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-48e5e-5bd9bbb6fb-bkjg2" podUID="79be89e8-7b53-4d89-8b28-4ec26fb7818d" containerName="switch-graph-48e5e" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:15:35.933598 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:35.933332 2571 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="not ready" pod="kserve-ci-e2e-test/switch-graph-48e5e-5bd9bbb6fb-bkjg2" Apr 24 19:15:40.933560 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:40.933522 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-48e5e-5bd9bbb6fb-bkjg2" podUID="79be89e8-7b53-4d89-8b28-4ec26fb7818d" containerName="switch-graph-48e5e" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:15:45.933139 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:45.933096 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-48e5e-5bd9bbb6fb-bkjg2" podUID="79be89e8-7b53-4d89-8b28-4ec26fb7818d" containerName="switch-graph-48e5e" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:15:50.933140 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:50.933101 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-48e5e-5bd9bbb6fb-bkjg2" podUID="79be89e8-7b53-4d89-8b28-4ec26fb7818d" containerName="switch-graph-48e5e" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:15:51.697621 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:51.697589 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/model-chainer-5cd78cfd67-m6wn9"] Apr 24 19:15:51.699992 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:51.699972 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-5cd78cfd67-m6wn9" Apr 24 19:15:51.702438 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:51.702407 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"model-chainer-kube-rbac-proxy-sar-config\"" Apr 24 19:15:51.702535 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:51.702409 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"model-chainer-serving-cert\"" Apr 24 19:15:51.707816 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:51.707796 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/model-chainer-5cd78cfd67-m6wn9"] Apr 24 19:15:51.742143 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:51.742122 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/489ddaf5-7441-48ad-aee3-efe71a0a941e-proxy-tls\") pod \"model-chainer-5cd78cfd67-m6wn9\" (UID: \"489ddaf5-7441-48ad-aee3-efe71a0a941e\") " pod="kserve-ci-e2e-test/model-chainer-5cd78cfd67-m6wn9" Apr 24 19:15:51.742240 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:51.742161 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/489ddaf5-7441-48ad-aee3-efe71a0a941e-openshift-service-ca-bundle\") pod \"model-chainer-5cd78cfd67-m6wn9\" (UID: \"489ddaf5-7441-48ad-aee3-efe71a0a941e\") " pod="kserve-ci-e2e-test/model-chainer-5cd78cfd67-m6wn9" Apr 24 19:15:51.842733 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:51.842707 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/489ddaf5-7441-48ad-aee3-efe71a0a941e-proxy-tls\") pod \"model-chainer-5cd78cfd67-m6wn9\" (UID: \"489ddaf5-7441-48ad-aee3-efe71a0a941e\") " 
pod="kserve-ci-e2e-test/model-chainer-5cd78cfd67-m6wn9" Apr 24 19:15:51.842850 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:51.842750 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/489ddaf5-7441-48ad-aee3-efe71a0a941e-openshift-service-ca-bundle\") pod \"model-chainer-5cd78cfd67-m6wn9\" (UID: \"489ddaf5-7441-48ad-aee3-efe71a0a941e\") " pod="kserve-ci-e2e-test/model-chainer-5cd78cfd67-m6wn9" Apr 24 19:15:51.842894 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:15:51.842844 2571 secret.go:189] Couldn't get secret kserve-ci-e2e-test/model-chainer-serving-cert: secret "model-chainer-serving-cert" not found Apr 24 19:15:51.842952 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:15:51.842923 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/489ddaf5-7441-48ad-aee3-efe71a0a941e-proxy-tls podName:489ddaf5-7441-48ad-aee3-efe71a0a941e nodeName:}" failed. No retries permitted until 2026-04-24 19:15:52.342887245 +0000 UTC m=+562.832311520 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/489ddaf5-7441-48ad-aee3-efe71a0a941e-proxy-tls") pod "model-chainer-5cd78cfd67-m6wn9" (UID: "489ddaf5-7441-48ad-aee3-efe71a0a941e") : secret "model-chainer-serving-cert" not found Apr 24 19:15:51.843366 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:51.843349 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/489ddaf5-7441-48ad-aee3-efe71a0a941e-openshift-service-ca-bundle\") pod \"model-chainer-5cd78cfd67-m6wn9\" (UID: \"489ddaf5-7441-48ad-aee3-efe71a0a941e\") " pod="kserve-ci-e2e-test/model-chainer-5cd78cfd67-m6wn9" Apr 24 19:15:51.892566 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:15:51.892534 2571 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79be89e8_7b53_4d89_8b28_4ec26fb7818d.slice/crio-36e9a1865c50a415cfea9f0e8e9d9b2d9ae92b7c54ab710961ca3bb4d730a0eb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79be89e8_7b53_4d89_8b28_4ec26fb7818d.slice/crio-conmon-36e9a1865c50a415cfea9f0e8e9d9b2d9ae92b7c54ab710961ca3bb4d730a0eb.scope\": RecentStats: unable to find data in memory cache]" Apr 24 19:15:51.892696 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:15:51.892556 2571 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79be89e8_7b53_4d89_8b28_4ec26fb7818d.slice/crio-36e9a1865c50a415cfea9f0e8e9d9b2d9ae92b7c54ab710961ca3bb4d730a0eb.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79be89e8_7b53_4d89_8b28_4ec26fb7818d.slice/crio-conmon-36e9a1865c50a415cfea9f0e8e9d9b2d9ae92b7c54ab710961ca3bb4d730a0eb.scope\": RecentStats: unable to find data in memory cache]" Apr 24 19:15:51.892696 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:15:51.892559 2571 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79be89e8_7b53_4d89_8b28_4ec26fb7818d.slice/crio-1bea3e3f5a2ff0b46220dc9a8598e899226fa16ed5d1d8694efac5567799066e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79be89e8_7b53_4d89_8b28_4ec26fb7818d.slice/crio-36e9a1865c50a415cfea9f0e8e9d9b2d9ae92b7c54ab710961ca3bb4d730a0eb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79be89e8_7b53_4d89_8b28_4ec26fb7818d.slice/crio-conmon-36e9a1865c50a415cfea9f0e8e9d9b2d9ae92b7c54ab710961ca3bb4d730a0eb.scope\": RecentStats: unable to find data in memory cache]" Apr 24 19:15:52.004965 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:52.004939 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-48e5e-5bd9bbb6fb-bkjg2" Apr 24 19:15:52.043217 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:52.043189 2571 generic.go:358] "Generic (PLEG): container finished" podID="79be89e8-7b53-4d89-8b28-4ec26fb7818d" containerID="36e9a1865c50a415cfea9f0e8e9d9b2d9ae92b7c54ab710961ca3bb4d730a0eb" exitCode=0 Apr 24 19:15:52.043330 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:52.043252 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-48e5e-5bd9bbb6fb-bkjg2" Apr 24 19:15:52.043330 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:52.043256 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-48e5e-5bd9bbb6fb-bkjg2" event={"ID":"79be89e8-7b53-4d89-8b28-4ec26fb7818d","Type":"ContainerDied","Data":"36e9a1865c50a415cfea9f0e8e9d9b2d9ae92b7c54ab710961ca3bb4d730a0eb"} Apr 24 19:15:52.043442 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:52.043342 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-48e5e-5bd9bbb6fb-bkjg2" event={"ID":"79be89e8-7b53-4d89-8b28-4ec26fb7818d","Type":"ContainerDied","Data":"1bea3e3f5a2ff0b46220dc9a8598e899226fa16ed5d1d8694efac5567799066e"} Apr 24 19:15:52.043442 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:52.043367 2571 scope.go:117] "RemoveContainer" containerID="36e9a1865c50a415cfea9f0e8e9d9b2d9ae92b7c54ab710961ca3bb4d730a0eb" Apr 24 19:15:52.044063 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:52.044038 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/79be89e8-7b53-4d89-8b28-4ec26fb7818d-proxy-tls\") pod \"79be89e8-7b53-4d89-8b28-4ec26fb7818d\" (UID: \"79be89e8-7b53-4d89-8b28-4ec26fb7818d\") " Apr 24 19:15:52.044196 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:52.044146 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79be89e8-7b53-4d89-8b28-4ec26fb7818d-openshift-service-ca-bundle\") pod \"79be89e8-7b53-4d89-8b28-4ec26fb7818d\" (UID: \"79be89e8-7b53-4d89-8b28-4ec26fb7818d\") " Apr 24 19:15:52.044494 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:52.044470 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79be89e8-7b53-4d89-8b28-4ec26fb7818d-openshift-service-ca-bundle" 
(OuterVolumeSpecName: "openshift-service-ca-bundle") pod "79be89e8-7b53-4d89-8b28-4ec26fb7818d" (UID: "79be89e8-7b53-4d89-8b28-4ec26fb7818d"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 19:15:52.046238 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:52.046204 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79be89e8-7b53-4d89-8b28-4ec26fb7818d-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "79be89e8-7b53-4d89-8b28-4ec26fb7818d" (UID: "79be89e8-7b53-4d89-8b28-4ec26fb7818d"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 19:15:52.055324 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:52.055308 2571 scope.go:117] "RemoveContainer" containerID="36e9a1865c50a415cfea9f0e8e9d9b2d9ae92b7c54ab710961ca3bb4d730a0eb" Apr 24 19:15:52.055565 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:15:52.055549 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36e9a1865c50a415cfea9f0e8e9d9b2d9ae92b7c54ab710961ca3bb4d730a0eb\": container with ID starting with 36e9a1865c50a415cfea9f0e8e9d9b2d9ae92b7c54ab710961ca3bb4d730a0eb not found: ID does not exist" containerID="36e9a1865c50a415cfea9f0e8e9d9b2d9ae92b7c54ab710961ca3bb4d730a0eb" Apr 24 19:15:52.055611 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:52.055573 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36e9a1865c50a415cfea9f0e8e9d9b2d9ae92b7c54ab710961ca3bb4d730a0eb"} err="failed to get container status \"36e9a1865c50a415cfea9f0e8e9d9b2d9ae92b7c54ab710961ca3bb4d730a0eb\": rpc error: code = NotFound desc = could not find container \"36e9a1865c50a415cfea9f0e8e9d9b2d9ae92b7c54ab710961ca3bb4d730a0eb\": container with ID starting with 36e9a1865c50a415cfea9f0e8e9d9b2d9ae92b7c54ab710961ca3bb4d730a0eb not found: ID does not exist" Apr 24 
19:15:52.144878 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:52.144855 2571 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/79be89e8-7b53-4d89-8b28-4ec26fb7818d-proxy-tls\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:15:52.144878 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:52.144877 2571 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79be89e8-7b53-4d89-8b28-4ec26fb7818d-openshift-service-ca-bundle\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:15:52.346564 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:52.346482 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/489ddaf5-7441-48ad-aee3-efe71a0a941e-proxy-tls\") pod \"model-chainer-5cd78cfd67-m6wn9\" (UID: \"489ddaf5-7441-48ad-aee3-efe71a0a941e\") " pod="kserve-ci-e2e-test/model-chainer-5cd78cfd67-m6wn9" Apr 24 19:15:52.348761 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:52.348732 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/489ddaf5-7441-48ad-aee3-efe71a0a941e-proxy-tls\") pod \"model-chainer-5cd78cfd67-m6wn9\" (UID: \"489ddaf5-7441-48ad-aee3-efe71a0a941e\") " pod="kserve-ci-e2e-test/model-chainer-5cd78cfd67-m6wn9" Apr 24 19:15:52.358920 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:52.358883 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/switch-graph-48e5e-5bd9bbb6fb-bkjg2"] Apr 24 19:15:52.363815 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:52.363796 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/switch-graph-48e5e-5bd9bbb6fb-bkjg2"] Apr 24 19:15:52.611634 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:52.611550 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-5cd78cfd67-m6wn9" Apr 24 19:15:52.726405 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:52.726381 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/model-chainer-5cd78cfd67-m6wn9"] Apr 24 19:15:53.048594 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:53.048557 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-5cd78cfd67-m6wn9" event={"ID":"489ddaf5-7441-48ad-aee3-efe71a0a941e","Type":"ContainerStarted","Data":"6d9f2d2f5c316970bce9ba0b5b3b4ff0929b07932b0b1c37d4979320d53cf747"} Apr 24 19:15:53.048594 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:53.048595 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-5cd78cfd67-m6wn9" event={"ID":"489ddaf5-7441-48ad-aee3-efe71a0a941e","Type":"ContainerStarted","Data":"2a2990a8ddbd2cf09dacf6f15b1dc468e00630007219d794476da50c528ac585"} Apr 24 19:15:53.049027 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:53.048629 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/model-chainer-5cd78cfd67-m6wn9" Apr 24 19:15:53.065484 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:53.065440 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/model-chainer-5cd78cfd67-m6wn9" podStartSLOduration=2.065426847 podStartE2EDuration="2.065426847s" podCreationTimestamp="2026-04-24 19:15:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 19:15:53.063408576 +0000 UTC m=+563.552832874" watchObservedRunningTime="2026-04-24 19:15:53.065426847 +0000 UTC m=+563.554851143" Apr 24 19:15:54.108019 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:54.107982 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79be89e8-7b53-4d89-8b28-4ec26fb7818d" 
path="/var/lib/kubelet/pods/79be89e8-7b53-4d89-8b28-4ec26fb7818d/volumes" Apr 24 19:15:59.056375 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:15:59.056348 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/model-chainer-5cd78cfd67-m6wn9" Apr 24 19:16:01.788047 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:01.788018 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/model-chainer-5cd78cfd67-m6wn9"] Apr 24 19:16:01.788415 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:01.788217 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/model-chainer-5cd78cfd67-m6wn9" podUID="489ddaf5-7441-48ad-aee3-efe71a0a941e" containerName="model-chainer" containerID="cri-o://6d9f2d2f5c316970bce9ba0b5b3b4ff0929b07932b0b1c37d4979320d53cf747" gracePeriod=30 Apr 24 19:16:01.850733 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:01.850701 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7"] Apr 24 19:16:01.851116 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:01.851088 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" podUID="bed5ebf8-3246-41fb-acca-4d84f4c2a5d3" containerName="kserve-container" containerID="cri-o://573a23d312fa2dccc801e9bb0d24948d6059737b629615825cf60a5e0352b33d" gracePeriod=30 Apr 24 19:16:01.851534 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:01.851392 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" podUID="bed5ebf8-3246-41fb-acca-4d84f4c2a5d3" containerName="kube-rbac-proxy" containerID="cri-o://90c746c2c8a7e0a979d2e29090779e5ef4bb018db1a526d35a3f611e26b310c1" gracePeriod=30 Apr 24 19:16:01.904640 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:01.904609 2571 kubelet.go:2553] 
"SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv"] Apr 24 19:16:01.905080 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:01.905021 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" podUID="4c4be8d4-5f52-4c1c-9ad0-de17127af99a" containerName="kserve-container" containerID="cri-o://77209ca5ed6d72983731700a9d4416810a821f1ca92d668198206cb29efc1ad5" gracePeriod=30 Apr 24 19:16:01.905080 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:01.905049 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" podUID="4c4be8d4-5f52-4c1c-9ad0-de17127af99a" containerName="kube-rbac-proxy" containerID="cri-o://d6d18f72b58ee50fbf0c1b4ca93926f0fdda54cdd19fd1dfbe7e8714db21148c" gracePeriod=30 Apr 24 19:16:02.079123 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:02.079033 2571 generic.go:358] "Generic (PLEG): container finished" podID="bed5ebf8-3246-41fb-acca-4d84f4c2a5d3" containerID="90c746c2c8a7e0a979d2e29090779e5ef4bb018db1a526d35a3f611e26b310c1" exitCode=2 Apr 24 19:16:02.079123 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:02.079098 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" event={"ID":"bed5ebf8-3246-41fb-acca-4d84f4c2a5d3","Type":"ContainerDied","Data":"90c746c2c8a7e0a979d2e29090779e5ef4bb018db1a526d35a3f611e26b310c1"} Apr 24 19:16:02.080880 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:02.080859 2571 generic.go:358] "Generic (PLEG): container finished" podID="4c4be8d4-5f52-4c1c-9ad0-de17127af99a" containerID="d6d18f72b58ee50fbf0c1b4ca93926f0fdda54cdd19fd1dfbe7e8714db21148c" exitCode=2 Apr 24 19:16:02.081008 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:02.080943 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" event={"ID":"4c4be8d4-5f52-4c1c-9ad0-de17127af99a","Type":"ContainerDied","Data":"d6d18f72b58ee50fbf0c1b4ca93926f0fdda54cdd19fd1dfbe7e8714db21148c"} Apr 24 19:16:03.778645 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:03.778606 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" podUID="bed5ebf8-3246-41fb-acca-4d84f4c2a5d3" containerName="kube-rbac-proxy" probeResult="failure" output="Get \"https://10.134.0.22:8643/healthz\": dial tcp 10.134.0.22:8643: connect: connection refused" Apr 24 19:16:03.783543 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:03.783516 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" podUID="bed5ebf8-3246-41fb-acca-4d84f4c2a5d3" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.22:8080: connect: connection refused" Apr 24 19:16:04.055128 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:04.055048 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-5cd78cfd67-m6wn9" podUID="489ddaf5-7441-48ad-aee3-efe71a0a941e" containerName="model-chainer" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:16:04.782047 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:04.782006 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" podUID="4c4be8d4-5f52-4c1c-9ad0-de17127af99a" containerName="kube-rbac-proxy" probeResult="failure" output="Get \"https://10.134.0.23:8643/healthz\": dial tcp 10.134.0.23:8643: connect: connection refused" Apr 24 19:16:04.786384 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:04.786363 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" 
podUID="4c4be8d4-5f52-4c1c-9ad0-de17127af99a" containerName="kserve-container" probeResult="failure" output="dial tcp 10.134.0.23:8080: connect: connection refused" Apr 24 19:16:05.287288 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:05.287259 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" Apr 24 19:16:05.363282 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:05.363257 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bed5ebf8-3246-41fb-acca-4d84f4c2a5d3-proxy-tls\") pod \"bed5ebf8-3246-41fb-acca-4d84f4c2a5d3\" (UID: \"bed5ebf8-3246-41fb-acca-4d84f4c2a5d3\") " Apr 24 19:16:05.363420 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:05.363314 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/bed5ebf8-3246-41fb-acca-4d84f4c2a5d3-kserve-provision-location\") pod \"bed5ebf8-3246-41fb-acca-4d84f4c2a5d3\" (UID: \"bed5ebf8-3246-41fb-acca-4d84f4c2a5d3\") " Apr 24 19:16:05.363420 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:05.363343 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msjdd\" (UniqueName: \"kubernetes.io/projected/bed5ebf8-3246-41fb-acca-4d84f4c2a5d3-kube-api-access-msjdd\") pod \"bed5ebf8-3246-41fb-acca-4d84f4c2a5d3\" (UID: \"bed5ebf8-3246-41fb-acca-4d84f4c2a5d3\") " Apr 24 19:16:05.363420 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:05.363396 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"isvc-xgboost-graph-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/bed5ebf8-3246-41fb-acca-4d84f4c2a5d3-isvc-xgboost-graph-kube-rbac-proxy-sar-config\") pod \"bed5ebf8-3246-41fb-acca-4d84f4c2a5d3\" (UID: \"bed5ebf8-3246-41fb-acca-4d84f4c2a5d3\") " Apr 24 19:16:05.363612 
ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:05.363590 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bed5ebf8-3246-41fb-acca-4d84f4c2a5d3-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "bed5ebf8-3246-41fb-acca-4d84f4c2a5d3" (UID: "bed5ebf8-3246-41fb-acca-4d84f4c2a5d3"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 24 19:16:05.363809 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:05.363782 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bed5ebf8-3246-41fb-acca-4d84f4c2a5d3-isvc-xgboost-graph-kube-rbac-proxy-sar-config" (OuterVolumeSpecName: "isvc-xgboost-graph-kube-rbac-proxy-sar-config") pod "bed5ebf8-3246-41fb-acca-4d84f4c2a5d3" (UID: "bed5ebf8-3246-41fb-acca-4d84f4c2a5d3"). InnerVolumeSpecName "isvc-xgboost-graph-kube-rbac-proxy-sar-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 19:16:05.365291 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:05.365258 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bed5ebf8-3246-41fb-acca-4d84f4c2a5d3-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "bed5ebf8-3246-41fb-acca-4d84f4c2a5d3" (UID: "bed5ebf8-3246-41fb-acca-4d84f4c2a5d3"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 19:16:05.365376 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:05.365307 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bed5ebf8-3246-41fb-acca-4d84f4c2a5d3-kube-api-access-msjdd" (OuterVolumeSpecName: "kube-api-access-msjdd") pod "bed5ebf8-3246-41fb-acca-4d84f4c2a5d3" (UID: "bed5ebf8-3246-41fb-acca-4d84f4c2a5d3"). InnerVolumeSpecName "kube-api-access-msjdd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 19:16:05.464978 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:05.464944 2571 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bed5ebf8-3246-41fb-acca-4d84f4c2a5d3-proxy-tls\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:16:05.464978 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:05.464976 2571 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/bed5ebf8-3246-41fb-acca-4d84f4c2a5d3-kserve-provision-location\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:16:05.464978 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:05.464985 2571 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-msjdd\" (UniqueName: \"kubernetes.io/projected/bed5ebf8-3246-41fb-acca-4d84f4c2a5d3-kube-api-access-msjdd\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:16:05.465187 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:05.464995 2571 reconciler_common.go:299] "Volume detached for volume \"isvc-xgboost-graph-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/bed5ebf8-3246-41fb-acca-4d84f4c2a5d3-isvc-xgboost-graph-kube-rbac-proxy-sar-config\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:16:05.945878 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:05.945856 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" Apr 24 19:16:06.070548 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.070468 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4c4be8d4-5f52-4c1c-9ad0-de17127af99a-proxy-tls\") pod \"4c4be8d4-5f52-4c1c-9ad0-de17127af99a\" (UID: \"4c4be8d4-5f52-4c1c-9ad0-de17127af99a\") " Apr 24 19:16:06.070548 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.070508 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"isvc-sklearn-graph-2-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/4c4be8d4-5f52-4c1c-9ad0-de17127af99a-isvc-sklearn-graph-2-kube-rbac-proxy-sar-config\") pod \"4c4be8d4-5f52-4c1c-9ad0-de17127af99a\" (UID: \"4c4be8d4-5f52-4c1c-9ad0-de17127af99a\") " Apr 24 19:16:06.070733 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.070551 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/4c4be8d4-5f52-4c1c-9ad0-de17127af99a-kserve-provision-location\") pod \"4c4be8d4-5f52-4c1c-9ad0-de17127af99a\" (UID: \"4c4be8d4-5f52-4c1c-9ad0-de17127af99a\") " Apr 24 19:16:06.070733 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.070588 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdxx8\" (UniqueName: \"kubernetes.io/projected/4c4be8d4-5f52-4c1c-9ad0-de17127af99a-kube-api-access-kdxx8\") pod \"4c4be8d4-5f52-4c1c-9ad0-de17127af99a\" (UID: \"4c4be8d4-5f52-4c1c-9ad0-de17127af99a\") " Apr 24 19:16:06.070943 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.070888 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c4be8d4-5f52-4c1c-9ad0-de17127af99a-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod 
"4c4be8d4-5f52-4c1c-9ad0-de17127af99a" (UID: "4c4be8d4-5f52-4c1c-9ad0-de17127af99a"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 24 19:16:06.070943 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.070899 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c4be8d4-5f52-4c1c-9ad0-de17127af99a-isvc-sklearn-graph-2-kube-rbac-proxy-sar-config" (OuterVolumeSpecName: "isvc-sklearn-graph-2-kube-rbac-proxy-sar-config") pod "4c4be8d4-5f52-4c1c-9ad0-de17127af99a" (UID: "4c4be8d4-5f52-4c1c-9ad0-de17127af99a"). InnerVolumeSpecName "isvc-sklearn-graph-2-kube-rbac-proxy-sar-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 19:16:06.072481 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.072459 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c4be8d4-5f52-4c1c-9ad0-de17127af99a-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "4c4be8d4-5f52-4c1c-9ad0-de17127af99a" (UID: "4c4be8d4-5f52-4c1c-9ad0-de17127af99a"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 19:16:06.072574 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.072555 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c4be8d4-5f52-4c1c-9ad0-de17127af99a-kube-api-access-kdxx8" (OuterVolumeSpecName: "kube-api-access-kdxx8") pod "4c4be8d4-5f52-4c1c-9ad0-de17127af99a" (UID: "4c4be8d4-5f52-4c1c-9ad0-de17127af99a"). InnerVolumeSpecName "kube-api-access-kdxx8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 19:16:06.094611 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.094583 2571 generic.go:358] "Generic (PLEG): container finished" podID="bed5ebf8-3246-41fb-acca-4d84f4c2a5d3" containerID="573a23d312fa2dccc801e9bb0d24948d6059737b629615825cf60a5e0352b33d" exitCode=0 Apr 24 19:16:06.094728 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.094661 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" Apr 24 19:16:06.094787 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.094660 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" event={"ID":"bed5ebf8-3246-41fb-acca-4d84f4c2a5d3","Type":"ContainerDied","Data":"573a23d312fa2dccc801e9bb0d24948d6059737b629615825cf60a5e0352b33d"} Apr 24 19:16:06.094787 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.094774 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7" event={"ID":"bed5ebf8-3246-41fb-acca-4d84f4c2a5d3","Type":"ContainerDied","Data":"7f438123cf2732547f753f17f9ce899770dd67eb6a11d10c48d022e96bf4a4a8"} Apr 24 19:16:06.094868 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.094799 2571 scope.go:117] "RemoveContainer" containerID="90c746c2c8a7e0a979d2e29090779e5ef4bb018db1a526d35a3f611e26b310c1" Apr 24 19:16:06.096436 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.096417 2571 generic.go:358] "Generic (PLEG): container finished" podID="4c4be8d4-5f52-4c1c-9ad0-de17127af99a" containerID="77209ca5ed6d72983731700a9d4416810a821f1ca92d668198206cb29efc1ad5" exitCode=0 Apr 24 19:16:06.096535 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.096446 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" 
event={"ID":"4c4be8d4-5f52-4c1c-9ad0-de17127af99a","Type":"ContainerDied","Data":"77209ca5ed6d72983731700a9d4416810a821f1ca92d668198206cb29efc1ad5"} Apr 24 19:16:06.096535 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.096463 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" event={"ID":"4c4be8d4-5f52-4c1c-9ad0-de17127af99a","Type":"ContainerDied","Data":"bf36327d73c79106b6a7592a27c47a5a845aeb921ac715f2b199538209d72811"} Apr 24 19:16:06.096535 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.096502 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv" Apr 24 19:16:06.102562 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.102542 2571 scope.go:117] "RemoveContainer" containerID="573a23d312fa2dccc801e9bb0d24948d6059737b629615825cf60a5e0352b33d" Apr 24 19:16:06.111015 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.110976 2571 scope.go:117] "RemoveContainer" containerID="baa2c9552cf538836fc79813d0f92a8c898618239d02c2b422105f9af73c0cb2" Apr 24 19:16:06.117460 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.117443 2571 scope.go:117] "RemoveContainer" containerID="90c746c2c8a7e0a979d2e29090779e5ef4bb018db1a526d35a3f611e26b310c1" Apr 24 19:16:06.117681 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:16:06.117664 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90c746c2c8a7e0a979d2e29090779e5ef4bb018db1a526d35a3f611e26b310c1\": container with ID starting with 90c746c2c8a7e0a979d2e29090779e5ef4bb018db1a526d35a3f611e26b310c1 not found: ID does not exist" containerID="90c746c2c8a7e0a979d2e29090779e5ef4bb018db1a526d35a3f611e26b310c1" Apr 24 19:16:06.117750 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.117692 2571 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"90c746c2c8a7e0a979d2e29090779e5ef4bb018db1a526d35a3f611e26b310c1"} err="failed to get container status \"90c746c2c8a7e0a979d2e29090779e5ef4bb018db1a526d35a3f611e26b310c1\": rpc error: code = NotFound desc = could not find container \"90c746c2c8a7e0a979d2e29090779e5ef4bb018db1a526d35a3f611e26b310c1\": container with ID starting with 90c746c2c8a7e0a979d2e29090779e5ef4bb018db1a526d35a3f611e26b310c1 not found: ID does not exist" Apr 24 19:16:06.117750 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.117715 2571 scope.go:117] "RemoveContainer" containerID="573a23d312fa2dccc801e9bb0d24948d6059737b629615825cf60a5e0352b33d" Apr 24 19:16:06.117955 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:16:06.117938 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"573a23d312fa2dccc801e9bb0d24948d6059737b629615825cf60a5e0352b33d\": container with ID starting with 573a23d312fa2dccc801e9bb0d24948d6059737b629615825cf60a5e0352b33d not found: ID does not exist" containerID="573a23d312fa2dccc801e9bb0d24948d6059737b629615825cf60a5e0352b33d" Apr 24 19:16:06.118000 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.117961 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"573a23d312fa2dccc801e9bb0d24948d6059737b629615825cf60a5e0352b33d"} err="failed to get container status \"573a23d312fa2dccc801e9bb0d24948d6059737b629615825cf60a5e0352b33d\": rpc error: code = NotFound desc = could not find container \"573a23d312fa2dccc801e9bb0d24948d6059737b629615825cf60a5e0352b33d\": container with ID starting with 573a23d312fa2dccc801e9bb0d24948d6059737b629615825cf60a5e0352b33d not found: ID does not exist" Apr 24 19:16:06.118000 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.117976 2571 scope.go:117] "RemoveContainer" containerID="baa2c9552cf538836fc79813d0f92a8c898618239d02c2b422105f9af73c0cb2" Apr 24 19:16:06.118185 ip-10-0-130-85 
kubenswrapper[2571]: E0424 19:16:06.118170 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"baa2c9552cf538836fc79813d0f92a8c898618239d02c2b422105f9af73c0cb2\": container with ID starting with baa2c9552cf538836fc79813d0f92a8c898618239d02c2b422105f9af73c0cb2 not found: ID does not exist" containerID="baa2c9552cf538836fc79813d0f92a8c898618239d02c2b422105f9af73c0cb2"
Apr 24 19:16:06.118229 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.118189 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baa2c9552cf538836fc79813d0f92a8c898618239d02c2b422105f9af73c0cb2"} err="failed to get container status \"baa2c9552cf538836fc79813d0f92a8c898618239d02c2b422105f9af73c0cb2\": rpc error: code = NotFound desc = could not find container \"baa2c9552cf538836fc79813d0f92a8c898618239d02c2b422105f9af73c0cb2\": container with ID starting with baa2c9552cf538836fc79813d0f92a8c898618239d02c2b422105f9af73c0cb2 not found: ID does not exist"
Apr 24 19:16:06.118229 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.118203 2571 scope.go:117] "RemoveContainer" containerID="d6d18f72b58ee50fbf0c1b4ca93926f0fdda54cdd19fd1dfbe7e8714db21148c"
Apr 24 19:16:06.122629 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.122610 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7"]
Apr 24 19:16:06.124843 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.124830 2571 scope.go:117] "RemoveContainer" containerID="77209ca5ed6d72983731700a9d4416810a821f1ca92d668198206cb29efc1ad5"
Apr 24 19:16:06.129536 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.129517 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-predictor-669d8d6456-nnbc7"]
Apr 24 19:16:06.132407 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.132391 2571 scope.go:117] "RemoveContainer" containerID="34690b1dd2cc5e614ec090cee422d54c67059189fe033a3d62e55a12214afa3a"
Apr 24 19:16:06.138963 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.138937 2571 scope.go:117] "RemoveContainer" containerID="d6d18f72b58ee50fbf0c1b4ca93926f0fdda54cdd19fd1dfbe7e8714db21148c"
Apr 24 19:16:06.139205 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:16:06.139185 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6d18f72b58ee50fbf0c1b4ca93926f0fdda54cdd19fd1dfbe7e8714db21148c\": container with ID starting with d6d18f72b58ee50fbf0c1b4ca93926f0fdda54cdd19fd1dfbe7e8714db21148c not found: ID does not exist" containerID="d6d18f72b58ee50fbf0c1b4ca93926f0fdda54cdd19fd1dfbe7e8714db21148c"
Apr 24 19:16:06.139282 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.139212 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6d18f72b58ee50fbf0c1b4ca93926f0fdda54cdd19fd1dfbe7e8714db21148c"} err="failed to get container status \"d6d18f72b58ee50fbf0c1b4ca93926f0fdda54cdd19fd1dfbe7e8714db21148c\": rpc error: code = NotFound desc = could not find container \"d6d18f72b58ee50fbf0c1b4ca93926f0fdda54cdd19fd1dfbe7e8714db21148c\": container with ID starting with d6d18f72b58ee50fbf0c1b4ca93926f0fdda54cdd19fd1dfbe7e8714db21148c not found: ID does not exist"
Apr 24 19:16:06.139282 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.139228 2571 scope.go:117] "RemoveContainer" containerID="77209ca5ed6d72983731700a9d4416810a821f1ca92d668198206cb29efc1ad5"
Apr 24 19:16:06.139454 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:16:06.139439 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77209ca5ed6d72983731700a9d4416810a821f1ca92d668198206cb29efc1ad5\": container with ID starting with 77209ca5ed6d72983731700a9d4416810a821f1ca92d668198206cb29efc1ad5 not found: ID does not exist" containerID="77209ca5ed6d72983731700a9d4416810a821f1ca92d668198206cb29efc1ad5"
Apr 24 19:16:06.139519 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.139457 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77209ca5ed6d72983731700a9d4416810a821f1ca92d668198206cb29efc1ad5"} err="failed to get container status \"77209ca5ed6d72983731700a9d4416810a821f1ca92d668198206cb29efc1ad5\": rpc error: code = NotFound desc = could not find container \"77209ca5ed6d72983731700a9d4416810a821f1ca92d668198206cb29efc1ad5\": container with ID starting with 77209ca5ed6d72983731700a9d4416810a821f1ca92d668198206cb29efc1ad5 not found: ID does not exist"
Apr 24 19:16:06.139519 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.139472 2571 scope.go:117] "RemoveContainer" containerID="34690b1dd2cc5e614ec090cee422d54c67059189fe033a3d62e55a12214afa3a"
Apr 24 19:16:06.139685 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:16:06.139662 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34690b1dd2cc5e614ec090cee422d54c67059189fe033a3d62e55a12214afa3a\": container with ID starting with 34690b1dd2cc5e614ec090cee422d54c67059189fe033a3d62e55a12214afa3a not found: ID does not exist" containerID="34690b1dd2cc5e614ec090cee422d54c67059189fe033a3d62e55a12214afa3a"
Apr 24 19:16:06.139739 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.139693 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34690b1dd2cc5e614ec090cee422d54c67059189fe033a3d62e55a12214afa3a"} err="failed to get container status \"34690b1dd2cc5e614ec090cee422d54c67059189fe033a3d62e55a12214afa3a\": rpc error: code = NotFound desc = could not find container \"34690b1dd2cc5e614ec090cee422d54c67059189fe033a3d62e55a12214afa3a\": container with ID starting with 34690b1dd2cc5e614ec090cee422d54c67059189fe033a3d62e55a12214afa3a not found: ID does not exist"
Apr 24 19:16:06.140259 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.140243 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv"]
Apr 24 19:16:06.147114 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.147008 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-2-predictor-8c9c45b69-nq2zv"]
Apr 24 19:16:06.171146 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.171122 2571 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4c4be8d4-5f52-4c1c-9ad0-de17127af99a-proxy-tls\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\""
Apr 24 19:16:06.171146 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.171145 2571 reconciler_common.go:299] "Volume detached for volume \"isvc-sklearn-graph-2-kube-rbac-proxy-sar-config\" (UniqueName: \"kubernetes.io/configmap/4c4be8d4-5f52-4c1c-9ad0-de17127af99a-isvc-sklearn-graph-2-kube-rbac-proxy-sar-config\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\""
Apr 24 19:16:06.171268 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.171160 2571 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/4c4be8d4-5f52-4c1c-9ad0-de17127af99a-kserve-provision-location\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\""
Apr 24 19:16:06.171268 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:06.171175 2571 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kdxx8\" (UniqueName: \"kubernetes.io/projected/4c4be8d4-5f52-4c1c-9ad0-de17127af99a-kube-api-access-kdxx8\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\""
Apr 24 19:16:08.106701 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:08.106673 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c4be8d4-5f52-4c1c-9ad0-de17127af99a" path="/var/lib/kubelet/pods/4c4be8d4-5f52-4c1c-9ad0-de17127af99a/volumes"
Apr 24 19:16:08.107218 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:08.107202 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bed5ebf8-3246-41fb-acca-4d84f4c2a5d3" path="/var/lib/kubelet/pods/bed5ebf8-3246-41fb-acca-4d84f4c2a5d3/volumes"
Apr 24 19:16:09.055149 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:09.055113 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-5cd78cfd67-m6wn9" podUID="489ddaf5-7441-48ad-aee3-efe71a0a941e" containerName="model-chainer" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 24 19:16:14.054791 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:14.054750 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-5cd78cfd67-m6wn9" podUID="489ddaf5-7441-48ad-aee3-efe71a0a941e" containerName="model-chainer" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 24 19:16:14.055160 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:14.054868 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/model-chainer-5cd78cfd67-m6wn9"
Apr 24 19:16:19.055618 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:19.055578 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-5cd78cfd67-m6wn9" podUID="489ddaf5-7441-48ad-aee3-efe71a0a941e" containerName="model-chainer" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 24 19:16:22.093045 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:22.093012 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/switch-graph-d5820-6fc68fcc77-2vl8v"]
Apr 24 19:16:22.093381 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:22.093338 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bed5ebf8-3246-41fb-acca-4d84f4c2a5d3" containerName="storage-initializer"
Apr 24 19:16:22.093381 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:22.093348 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="bed5ebf8-3246-41fb-acca-4d84f4c2a5d3" containerName="storage-initializer"
Apr 24 19:16:22.093381 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:22.093358 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bed5ebf8-3246-41fb-acca-4d84f4c2a5d3" containerName="kube-rbac-proxy"
Apr 24 19:16:22.093381 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:22.093363 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="bed5ebf8-3246-41fb-acca-4d84f4c2a5d3" containerName="kube-rbac-proxy"
Apr 24 19:16:22.093381 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:22.093372 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bed5ebf8-3246-41fb-acca-4d84f4c2a5d3" containerName="kserve-container"
Apr 24 19:16:22.093381 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:22.093377 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="bed5ebf8-3246-41fb-acca-4d84f4c2a5d3" containerName="kserve-container"
Apr 24 19:16:22.093577 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:22.093385 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4c4be8d4-5f52-4c1c-9ad0-de17127af99a" containerName="storage-initializer"
Apr 24 19:16:22.093577 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:22.093391 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c4be8d4-5f52-4c1c-9ad0-de17127af99a" containerName="storage-initializer"
Apr 24 19:16:22.093577 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:22.093403 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4c4be8d4-5f52-4c1c-9ad0-de17127af99a" containerName="kserve-container"
Apr 24 19:16:22.093577 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:22.093407 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c4be8d4-5f52-4c1c-9ad0-de17127af99a" containerName="kserve-container"
Apr 24 19:16:22.093577 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:22.093420 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4c4be8d4-5f52-4c1c-9ad0-de17127af99a" containerName="kube-rbac-proxy"
Apr 24 19:16:22.093577 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:22.093426 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c4be8d4-5f52-4c1c-9ad0-de17127af99a" containerName="kube-rbac-proxy"
Apr 24 19:16:22.093577 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:22.093437 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="79be89e8-7b53-4d89-8b28-4ec26fb7818d" containerName="switch-graph-48e5e"
Apr 24 19:16:22.093577 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:22.093442 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="79be89e8-7b53-4d89-8b28-4ec26fb7818d" containerName="switch-graph-48e5e"
Apr 24 19:16:22.093577 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:22.093492 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="bed5ebf8-3246-41fb-acca-4d84f4c2a5d3" containerName="kserve-container"
Apr 24 19:16:22.093577 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:22.093501 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="bed5ebf8-3246-41fb-acca-4d84f4c2a5d3" containerName="kube-rbac-proxy"
Apr 24 19:16:22.093577 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:22.093508 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="79be89e8-7b53-4d89-8b28-4ec26fb7818d" containerName="switch-graph-48e5e"
Apr 24 19:16:22.093577 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:22.093514 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="4c4be8d4-5f52-4c1c-9ad0-de17127af99a" containerName="kube-rbac-proxy"
Apr 24 19:16:22.093577 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:22.093521 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="4c4be8d4-5f52-4c1c-9ad0-de17127af99a" containerName="kserve-container"
Apr 24 19:16:22.096308 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:22.096293 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-d5820-6fc68fcc77-2vl8v"
Apr 24 19:16:22.098582 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:22.098550 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"switch-graph-d5820-serving-cert\""
Apr 24 19:16:22.098582 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:22.098565 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"switch-graph-d5820-kube-rbac-proxy-sar-config\""
Apr 24 19:16:22.107251 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:22.107232 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/switch-graph-d5820-6fc68fcc77-2vl8v"]
Apr 24 19:16:22.206066 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:22.206037 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fafa372-2edb-479c-89c2-3e6c5b16af3d-openshift-service-ca-bundle\") pod \"switch-graph-d5820-6fc68fcc77-2vl8v\" (UID: \"8fafa372-2edb-479c-89c2-3e6c5b16af3d\") " pod="kserve-ci-e2e-test/switch-graph-d5820-6fc68fcc77-2vl8v"
Apr 24 19:16:22.206252 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:22.206103 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8fafa372-2edb-479c-89c2-3e6c5b16af3d-proxy-tls\") pod \"switch-graph-d5820-6fc68fcc77-2vl8v\" (UID: \"8fafa372-2edb-479c-89c2-3e6c5b16af3d\") " pod="kserve-ci-e2e-test/switch-graph-d5820-6fc68fcc77-2vl8v"
Apr 24 19:16:22.307586 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:22.307542 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8fafa372-2edb-479c-89c2-3e6c5b16af3d-proxy-tls\") pod \"switch-graph-d5820-6fc68fcc77-2vl8v\" (UID: \"8fafa372-2edb-479c-89c2-3e6c5b16af3d\") " pod="kserve-ci-e2e-test/switch-graph-d5820-6fc68fcc77-2vl8v"
Apr 24 19:16:22.307724 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:22.307640 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fafa372-2edb-479c-89c2-3e6c5b16af3d-openshift-service-ca-bundle\") pod \"switch-graph-d5820-6fc68fcc77-2vl8v\" (UID: \"8fafa372-2edb-479c-89c2-3e6c5b16af3d\") " pod="kserve-ci-e2e-test/switch-graph-d5820-6fc68fcc77-2vl8v"
Apr 24 19:16:22.307724 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:16:22.307650 2571 secret.go:189] Couldn't get secret kserve-ci-e2e-test/switch-graph-d5820-serving-cert: secret "switch-graph-d5820-serving-cert" not found
Apr 24 19:16:22.307724 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:16:22.307706 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fafa372-2edb-479c-89c2-3e6c5b16af3d-proxy-tls podName:8fafa372-2edb-479c-89c2-3e6c5b16af3d nodeName:}" failed. No retries permitted until 2026-04-24 19:16:22.807689636 +0000 UTC m=+593.297113911 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/8fafa372-2edb-479c-89c2-3e6c5b16af3d-proxy-tls") pod "switch-graph-d5820-6fc68fcc77-2vl8v" (UID: "8fafa372-2edb-479c-89c2-3e6c5b16af3d") : secret "switch-graph-d5820-serving-cert" not found
Apr 24 19:16:22.308258 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:22.308242 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fafa372-2edb-479c-89c2-3e6c5b16af3d-openshift-service-ca-bundle\") pod \"switch-graph-d5820-6fc68fcc77-2vl8v\" (UID: \"8fafa372-2edb-479c-89c2-3e6c5b16af3d\") " pod="kserve-ci-e2e-test/switch-graph-d5820-6fc68fcc77-2vl8v"
Apr 24 19:16:22.811798 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:22.811765 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8fafa372-2edb-479c-89c2-3e6c5b16af3d-proxy-tls\") pod \"switch-graph-d5820-6fc68fcc77-2vl8v\" (UID: \"8fafa372-2edb-479c-89c2-3e6c5b16af3d\") " pod="kserve-ci-e2e-test/switch-graph-d5820-6fc68fcc77-2vl8v"
Apr 24 19:16:22.814128 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:22.814100 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8fafa372-2edb-479c-89c2-3e6c5b16af3d-proxy-tls\") pod \"switch-graph-d5820-6fc68fcc77-2vl8v\" (UID: \"8fafa372-2edb-479c-89c2-3e6c5b16af3d\") " pod="kserve-ci-e2e-test/switch-graph-d5820-6fc68fcc77-2vl8v"
Apr 24 19:16:23.006851 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:23.006816 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-d5820-6fc68fcc77-2vl8v"
Apr 24 19:16:23.118253 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:23.118231 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/switch-graph-d5820-6fc68fcc77-2vl8v"]
Apr 24 19:16:23.120849 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:16:23.120807 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fafa372_2edb_479c_89c2_3e6c5b16af3d.slice/crio-b79f1b4b9e98f982feb9015289d0fd33b8e6187151d5e52ee67c3ab6451a7e14 WatchSource:0}: Error finding container b79f1b4b9e98f982feb9015289d0fd33b8e6187151d5e52ee67c3ab6451a7e14: Status 404 returned error can't find the container with id b79f1b4b9e98f982feb9015289d0fd33b8e6187151d5e52ee67c3ab6451a7e14
Apr 24 19:16:23.146873 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:23.146847 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-d5820-6fc68fcc77-2vl8v" event={"ID":"8fafa372-2edb-479c-89c2-3e6c5b16af3d","Type":"ContainerStarted","Data":"b79f1b4b9e98f982feb9015289d0fd33b8e6187151d5e52ee67c3ab6451a7e14"}
Apr 24 19:16:24.055241 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:24.055201 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-5cd78cfd67-m6wn9" podUID="489ddaf5-7441-48ad-aee3-efe71a0a941e" containerName="model-chainer" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 24 19:16:24.151167 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:24.151134 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-d5820-6fc68fcc77-2vl8v" event={"ID":"8fafa372-2edb-479c-89c2-3e6c5b16af3d","Type":"ContainerStarted","Data":"fcad41b194284e394415ff997f5c2dceda2a3c7877424f121b5a42b4d3a57675"}
Apr 24 19:16:24.151523 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:24.151244 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/switch-graph-d5820-6fc68fcc77-2vl8v"
Apr 24 19:16:24.169609 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:24.169567 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/switch-graph-d5820-6fc68fcc77-2vl8v" podStartSLOduration=2.169550823 podStartE2EDuration="2.169550823s" podCreationTimestamp="2026-04-24 19:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 19:16:24.169051728 +0000 UTC m=+594.658476037" watchObservedRunningTime="2026-04-24 19:16:24.169550823 +0000 UTC m=+594.658975120"
Apr 24 19:16:29.055639 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:29.055601 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/model-chainer-5cd78cfd67-m6wn9" podUID="489ddaf5-7441-48ad-aee3-efe71a0a941e" containerName="model-chainer" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 24 19:16:30.017082 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:30.017047 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9rqzs_0f254dda-8cf9-4910-99c2-638993f242ac/ovn-acl-logging/0.log"
Apr 24 19:16:30.019263 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:30.019242 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9rqzs_0f254dda-8cf9-4910-99c2-638993f242ac/ovn-acl-logging/0.log"
Apr 24 19:16:30.159869 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:30.159841 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/switch-graph-d5820-6fc68fcc77-2vl8v"
Apr 24 19:16:31.958430 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:31.958408 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-5cd78cfd67-m6wn9"
Apr 24 19:16:31.995659 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:31.995633 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/489ddaf5-7441-48ad-aee3-efe71a0a941e-openshift-service-ca-bundle\") pod \"489ddaf5-7441-48ad-aee3-efe71a0a941e\" (UID: \"489ddaf5-7441-48ad-aee3-efe71a0a941e\") "
Apr 24 19:16:31.995798 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:31.995687 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/489ddaf5-7441-48ad-aee3-efe71a0a941e-proxy-tls\") pod \"489ddaf5-7441-48ad-aee3-efe71a0a941e\" (UID: \"489ddaf5-7441-48ad-aee3-efe71a0a941e\") "
Apr 24 19:16:31.996032 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:31.996003 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/489ddaf5-7441-48ad-aee3-efe71a0a941e-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "489ddaf5-7441-48ad-aee3-efe71a0a941e" (UID: "489ddaf5-7441-48ad-aee3-efe71a0a941e"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 24 19:16:31.997542 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:31.997521 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/489ddaf5-7441-48ad-aee3-efe71a0a941e-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "489ddaf5-7441-48ad-aee3-efe71a0a941e" (UID: "489ddaf5-7441-48ad-aee3-efe71a0a941e"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 24 19:16:32.096925 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:32.096856 2571 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/489ddaf5-7441-48ad-aee3-efe71a0a941e-proxy-tls\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\""
Apr 24 19:16:32.096925 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:32.096886 2571 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/489ddaf5-7441-48ad-aee3-efe71a0a941e-openshift-service-ca-bundle\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\""
Apr 24 19:16:32.175246 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:32.175217 2571 generic.go:358] "Generic (PLEG): container finished" podID="489ddaf5-7441-48ad-aee3-efe71a0a941e" containerID="6d9f2d2f5c316970bce9ba0b5b3b4ff0929b07932b0b1c37d4979320d53cf747" exitCode=0
Apr 24 19:16:32.175353 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:32.175272 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-5cd78cfd67-m6wn9"
Apr 24 19:16:32.175353 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:32.175297 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-5cd78cfd67-m6wn9" event={"ID":"489ddaf5-7441-48ad-aee3-efe71a0a941e","Type":"ContainerDied","Data":"6d9f2d2f5c316970bce9ba0b5b3b4ff0929b07932b0b1c37d4979320d53cf747"}
Apr 24 19:16:32.175353 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:32.175342 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-5cd78cfd67-m6wn9" event={"ID":"489ddaf5-7441-48ad-aee3-efe71a0a941e","Type":"ContainerDied","Data":"2a2990a8ddbd2cf09dacf6f15b1dc468e00630007219d794476da50c528ac585"}
Apr 24 19:16:32.175454 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:32.175358 2571 scope.go:117] "RemoveContainer" containerID="6d9f2d2f5c316970bce9ba0b5b3b4ff0929b07932b0b1c37d4979320d53cf747"
Apr 24 19:16:32.182753 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:32.182734 2571 scope.go:117] "RemoveContainer" containerID="6d9f2d2f5c316970bce9ba0b5b3b4ff0929b07932b0b1c37d4979320d53cf747"
Apr 24 19:16:32.183016 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:16:32.182999 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d9f2d2f5c316970bce9ba0b5b3b4ff0929b07932b0b1c37d4979320d53cf747\": container with ID starting with 6d9f2d2f5c316970bce9ba0b5b3b4ff0929b07932b0b1c37d4979320d53cf747 not found: ID does not exist" containerID="6d9f2d2f5c316970bce9ba0b5b3b4ff0929b07932b0b1c37d4979320d53cf747"
Apr 24 19:16:32.183069 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:32.183023 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d9f2d2f5c316970bce9ba0b5b3b4ff0929b07932b0b1c37d4979320d53cf747"} err="failed to get container status \"6d9f2d2f5c316970bce9ba0b5b3b4ff0929b07932b0b1c37d4979320d53cf747\": rpc error: code = NotFound desc = could not find container \"6d9f2d2f5c316970bce9ba0b5b3b4ff0929b07932b0b1c37d4979320d53cf747\": container with ID starting with 6d9f2d2f5c316970bce9ba0b5b3b4ff0929b07932b0b1c37d4979320d53cf747 not found: ID does not exist"
Apr 24 19:16:32.190332 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:32.190312 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/model-chainer-5cd78cfd67-m6wn9"]
Apr 24 19:16:32.194226 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:32.194209 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/model-chainer-5cd78cfd67-m6wn9"]
Apr 24 19:16:34.106725 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:16:34.106697 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="489ddaf5-7441-48ad-aee3-efe71a0a941e" path="/var/lib/kubelet/pods/489ddaf5-7441-48ad-aee3-efe71a0a941e/volumes"
Apr 24 19:17:01.984217 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:17:01.984187 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/sequence-graph-99560-7846699b87-ljhdf"]
Apr 24 19:17:01.984666 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:17:01.984529 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="489ddaf5-7441-48ad-aee3-efe71a0a941e" containerName="model-chainer"
Apr 24 19:17:01.984666 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:17:01.984541 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="489ddaf5-7441-48ad-aee3-efe71a0a941e" containerName="model-chainer"
Apr 24 19:17:01.984666 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:17:01.984596 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="489ddaf5-7441-48ad-aee3-efe71a0a941e" containerName="model-chainer"
Apr 24 19:17:01.987544 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:17:01.987528 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-99560-7846699b87-ljhdf"
Apr 24 19:17:01.990334 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:17:01.990317 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"sequence-graph-99560-serving-cert\""
Apr 24 19:17:01.990426 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:17:01.990314 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"sequence-graph-99560-kube-rbac-proxy-sar-config\""
Apr 24 19:17:01.997701 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:17:01.997680 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-99560-7846699b87-ljhdf"]
Apr 24 19:17:02.132481 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:17:02.132454 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/298f3ec6-a358-4e20-a59b-26c36c657012-proxy-tls\") pod \"sequence-graph-99560-7846699b87-ljhdf\" (UID: \"298f3ec6-a358-4e20-a59b-26c36c657012\") " pod="kserve-ci-e2e-test/sequence-graph-99560-7846699b87-ljhdf"
Apr 24 19:17:02.132619 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:17:02.132498 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/298f3ec6-a358-4e20-a59b-26c36c657012-openshift-service-ca-bundle\") pod \"sequence-graph-99560-7846699b87-ljhdf\" (UID: \"298f3ec6-a358-4e20-a59b-26c36c657012\") " pod="kserve-ci-e2e-test/sequence-graph-99560-7846699b87-ljhdf"
Apr 24 19:17:02.233650 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:17:02.233617 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/298f3ec6-a358-4e20-a59b-26c36c657012-proxy-tls\") pod \"sequence-graph-99560-7846699b87-ljhdf\" (UID: \"298f3ec6-a358-4e20-a59b-26c36c657012\") " pod="kserve-ci-e2e-test/sequence-graph-99560-7846699b87-ljhdf"
Apr 24 19:17:02.233773 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:17:02.233665 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/298f3ec6-a358-4e20-a59b-26c36c657012-openshift-service-ca-bundle\") pod \"sequence-graph-99560-7846699b87-ljhdf\" (UID: \"298f3ec6-a358-4e20-a59b-26c36c657012\") " pod="kserve-ci-e2e-test/sequence-graph-99560-7846699b87-ljhdf"
Apr 24 19:17:02.233773 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:17:02.233765 2571 secret.go:189] Couldn't get secret kserve-ci-e2e-test/sequence-graph-99560-serving-cert: secret "sequence-graph-99560-serving-cert" not found
Apr 24 19:17:02.233840 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:17:02.233828 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/298f3ec6-a358-4e20-a59b-26c36c657012-proxy-tls podName:298f3ec6-a358-4e20-a59b-26c36c657012 nodeName:}" failed. No retries permitted until 2026-04-24 19:17:02.733808777 +0000 UTC m=+633.223233053 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/298f3ec6-a358-4e20-a59b-26c36c657012-proxy-tls") pod "sequence-graph-99560-7846699b87-ljhdf" (UID: "298f3ec6-a358-4e20-a59b-26c36c657012") : secret "sequence-graph-99560-serving-cert" not found
Apr 24 19:17:02.234341 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:17:02.234300 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/298f3ec6-a358-4e20-a59b-26c36c657012-openshift-service-ca-bundle\") pod \"sequence-graph-99560-7846699b87-ljhdf\" (UID: \"298f3ec6-a358-4e20-a59b-26c36c657012\") " pod="kserve-ci-e2e-test/sequence-graph-99560-7846699b87-ljhdf"
Apr 24 19:17:02.736651 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:17:02.736615 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/298f3ec6-a358-4e20-a59b-26c36c657012-proxy-tls\") pod \"sequence-graph-99560-7846699b87-ljhdf\" (UID: \"298f3ec6-a358-4e20-a59b-26c36c657012\") " pod="kserve-ci-e2e-test/sequence-graph-99560-7846699b87-ljhdf"
Apr 24 19:17:02.738930 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:17:02.738894 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/298f3ec6-a358-4e20-a59b-26c36c657012-proxy-tls\") pod \"sequence-graph-99560-7846699b87-ljhdf\" (UID: \"298f3ec6-a358-4e20-a59b-26c36c657012\") " pod="kserve-ci-e2e-test/sequence-graph-99560-7846699b87-ljhdf"
Apr 24 19:17:02.897861 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:17:02.897830 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-99560-7846699b87-ljhdf"
Apr 24 19:17:03.012446 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:17:03.012409 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-99560-7846699b87-ljhdf"]
Apr 24 19:17:03.017274 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:17:03.017236 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod298f3ec6_a358_4e20_a59b_26c36c657012.slice/crio-d49e333bbd6ede4060ebcdc979c0eecabf058939e62a6400bdc6d7a975768a6e WatchSource:0}: Error finding container d49e333bbd6ede4060ebcdc979c0eecabf058939e62a6400bdc6d7a975768a6e: Status 404 returned error can't find the container with id d49e333bbd6ede4060ebcdc979c0eecabf058939e62a6400bdc6d7a975768a6e
Apr 24 19:17:03.265510 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:17:03.265430 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-99560-7846699b87-ljhdf" event={"ID":"298f3ec6-a358-4e20-a59b-26c36c657012","Type":"ContainerStarted","Data":"86006d7fe982a2b2071c991609eb435ff09a3596ead0832e72d098d0b8613d86"}
Apr 24 19:17:03.265510 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:17:03.265466 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-99560-7846699b87-ljhdf" event={"ID":"298f3ec6-a358-4e20-a59b-26c36c657012","Type":"ContainerStarted","Data":"d49e333bbd6ede4060ebcdc979c0eecabf058939e62a6400bdc6d7a975768a6e"}
Apr 24 19:17:03.265678 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:17:03.265556 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/sequence-graph-99560-7846699b87-ljhdf"
Apr 24 19:17:03.283194 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:17:03.283144 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/sequence-graph-99560-7846699b87-ljhdf" podStartSLOduration=2.283127604 podStartE2EDuration="2.283127604s" podCreationTimestamp="2026-04-24 19:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 19:17:03.282437327 +0000 UTC m=+633.771861650" watchObservedRunningTime="2026-04-24 19:17:03.283127604 +0000 UTC m=+633.772551902"
Apr 24 19:17:09.273404 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:17:09.273375 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/sequence-graph-99560-7846699b87-ljhdf"
Apr 24 19:21:30.046013 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:21:30.045933 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9rqzs_0f254dda-8cf9-4910-99c2-638993f242ac/ovn-acl-logging/0.log"
Apr 24 19:21:30.047920 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:21:30.047876 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9rqzs_0f254dda-8cf9-4910-99c2-638993f242ac/ovn-acl-logging/0.log"
Apr 24 19:24:36.770257 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:24:36.770185 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/switch-graph-d5820-6fc68fcc77-2vl8v"]
Apr 24 19:24:36.772707 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:24:36.770420 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/switch-graph-d5820-6fc68fcc77-2vl8v" podUID="8fafa372-2edb-479c-89c2-3e6c5b16af3d" containerName="switch-graph-d5820" containerID="cri-o://fcad41b194284e394415ff997f5c2dceda2a3c7877424f121b5a42b4d3a57675" gracePeriod=30
Apr 24 19:24:40.158220 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:24:40.158184 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-d5820-6fc68fcc77-2vl8v" podUID="8fafa372-2edb-479c-89c2-3e6c5b16af3d" containerName="switch-graph-d5820" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 24 19:24:45.158000 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:24:45.157967 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-d5820-6fc68fcc77-2vl8v" podUID="8fafa372-2edb-479c-89c2-3e6c5b16af3d" containerName="switch-graph-d5820" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 24 19:24:50.158091 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:24:50.158053 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-d5820-6fc68fcc77-2vl8v" podUID="8fafa372-2edb-479c-89c2-3e6c5b16af3d" containerName="switch-graph-d5820" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 24 19:24:50.158449 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:24:50.158153 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/switch-graph-d5820-6fc68fcc77-2vl8v"
Apr 24 19:24:55.158320 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:24:55.158282 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-d5820-6fc68fcc77-2vl8v" podUID="8fafa372-2edb-479c-89c2-3e6c5b16af3d" containerName="switch-graph-d5820" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 24 19:25:00.158284 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:00.158249 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-d5820-6fc68fcc77-2vl8v" podUID="8fafa372-2edb-479c-89c2-3e6c5b16af3d" containerName="switch-graph-d5820" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 24 19:25:05.158205 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:05.158169 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-d5820-6fc68fcc77-2vl8v" podUID="8fafa372-2edb-479c-89c2-3e6c5b16af3d" containerName="switch-graph-d5820" probeResult="failure"
output="HTTP probe failed with statuscode: 503" Apr 24 19:25:06.905100 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:06.905077 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-d5820-6fc68fcc77-2vl8v" Apr 24 19:25:07.028316 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:07.028234 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8fafa372-2edb-479c-89c2-3e6c5b16af3d-proxy-tls\") pod \"8fafa372-2edb-479c-89c2-3e6c5b16af3d\" (UID: \"8fafa372-2edb-479c-89c2-3e6c5b16af3d\") " Apr 24 19:25:07.028316 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:07.028312 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fafa372-2edb-479c-89c2-3e6c5b16af3d-openshift-service-ca-bundle\") pod \"8fafa372-2edb-479c-89c2-3e6c5b16af3d\" (UID: \"8fafa372-2edb-479c-89c2-3e6c5b16af3d\") " Apr 24 19:25:07.028630 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:07.028599 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fafa372-2edb-479c-89c2-3e6c5b16af3d-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "8fafa372-2edb-479c-89c2-3e6c5b16af3d" (UID: "8fafa372-2edb-479c-89c2-3e6c5b16af3d"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 19:25:07.030313 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:07.030291 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fafa372-2edb-479c-89c2-3e6c5b16af3d-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "8fafa372-2edb-479c-89c2-3e6c5b16af3d" (UID: "8fafa372-2edb-479c-89c2-3e6c5b16af3d"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 19:25:07.129352 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:07.129322 2571 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8fafa372-2edb-479c-89c2-3e6c5b16af3d-proxy-tls\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:25:07.129352 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:07.129349 2571 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fafa372-2edb-479c-89c2-3e6c5b16af3d-openshift-service-ca-bundle\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:25:07.665604 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:07.665567 2571 generic.go:358] "Generic (PLEG): container finished" podID="8fafa372-2edb-479c-89c2-3e6c5b16af3d" containerID="fcad41b194284e394415ff997f5c2dceda2a3c7877424f121b5a42b4d3a57675" exitCode=0 Apr 24 19:25:07.665793 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:07.665636 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-d5820-6fc68fcc77-2vl8v" Apr 24 19:25:07.665793 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:07.665650 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-d5820-6fc68fcc77-2vl8v" event={"ID":"8fafa372-2edb-479c-89c2-3e6c5b16af3d","Type":"ContainerDied","Data":"fcad41b194284e394415ff997f5c2dceda2a3c7877424f121b5a42b4d3a57675"} Apr 24 19:25:07.665793 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:07.665693 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-d5820-6fc68fcc77-2vl8v" event={"ID":"8fafa372-2edb-479c-89c2-3e6c5b16af3d","Type":"ContainerDied","Data":"b79f1b4b9e98f982feb9015289d0fd33b8e6187151d5e52ee67c3ab6451a7e14"} Apr 24 19:25:07.665793 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:07.665708 2571 scope.go:117] "RemoveContainer" containerID="fcad41b194284e394415ff997f5c2dceda2a3c7877424f121b5a42b4d3a57675" Apr 24 19:25:07.674046 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:07.674027 2571 scope.go:117] "RemoveContainer" containerID="fcad41b194284e394415ff997f5c2dceda2a3c7877424f121b5a42b4d3a57675" Apr 24 19:25:07.674283 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:25:07.674265 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcad41b194284e394415ff997f5c2dceda2a3c7877424f121b5a42b4d3a57675\": container with ID starting with fcad41b194284e394415ff997f5c2dceda2a3c7877424f121b5a42b4d3a57675 not found: ID does not exist" containerID="fcad41b194284e394415ff997f5c2dceda2a3c7877424f121b5a42b4d3a57675" Apr 24 19:25:07.674352 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:07.674289 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcad41b194284e394415ff997f5c2dceda2a3c7877424f121b5a42b4d3a57675"} err="failed to get container status 
\"fcad41b194284e394415ff997f5c2dceda2a3c7877424f121b5a42b4d3a57675\": rpc error: code = NotFound desc = could not find container \"fcad41b194284e394415ff997f5c2dceda2a3c7877424f121b5a42b4d3a57675\": container with ID starting with fcad41b194284e394415ff997f5c2dceda2a3c7877424f121b5a42b4d3a57675 not found: ID does not exist" Apr 24 19:25:07.686679 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:07.686655 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/switch-graph-d5820-6fc68fcc77-2vl8v"] Apr 24 19:25:07.688442 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:07.688420 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/switch-graph-d5820-6fc68fcc77-2vl8v"] Apr 24 19:25:08.107547 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:08.107473 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fafa372-2edb-479c-89c2-3e6c5b16af3d" path="/var/lib/kubelet/pods/8fafa372-2edb-479c-89c2-3e6c5b16af3d/volumes" Apr 24 19:25:16.665366 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:16.665334 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-99560-7846699b87-ljhdf"] Apr 24 19:25:16.665827 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:16.665578 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/sequence-graph-99560-7846699b87-ljhdf" podUID="298f3ec6-a358-4e20-a59b-26c36c657012" containerName="sequence-graph-99560" containerID="cri-o://86006d7fe982a2b2071c991609eb435ff09a3596ead0832e72d098d0b8613d86" gracePeriod=30 Apr 24 19:25:19.272093 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:19.272046 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-99560-7846699b87-ljhdf" podUID="298f3ec6-a358-4e20-a59b-26c36c657012" containerName="sequence-graph-99560" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:25:24.271740 ip-10-0-130-85 
kubenswrapper[2571]: I0424 19:25:24.271700 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-99560-7846699b87-ljhdf" podUID="298f3ec6-a358-4e20-a59b-26c36c657012" containerName="sequence-graph-99560" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:25:29.271496 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:29.271455 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-99560-7846699b87-ljhdf" podUID="298f3ec6-a358-4e20-a59b-26c36c657012" containerName="sequence-graph-99560" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:25:29.271884 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:29.271552 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/sequence-graph-99560-7846699b87-ljhdf" Apr 24 19:25:34.271358 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:34.271321 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-99560-7846699b87-ljhdf" podUID="298f3ec6-a358-4e20-a59b-26c36c657012" containerName="sequence-graph-99560" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:25:36.981158 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:36.981124 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-b2e49-5bbbdc7c77-q8pp5"] Apr 24 19:25:36.981593 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:36.981499 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8fafa372-2edb-479c-89c2-3e6c5b16af3d" containerName="switch-graph-d5820" Apr 24 19:25:36.981593 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:36.981512 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fafa372-2edb-479c-89c2-3e6c5b16af3d" containerName="switch-graph-d5820" Apr 24 19:25:36.981593 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:36.981573 2571 
memory_manager.go:356] "RemoveStaleState removing state" podUID="8fafa372-2edb-479c-89c2-3e6c5b16af3d" containerName="switch-graph-d5820" Apr 24 19:25:36.984781 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:36.984764 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-b2e49-5bbbdc7c77-q8pp5" Apr 24 19:25:36.987418 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:36.987397 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"ensemble-graph-b2e49-serving-cert\"" Apr 24 19:25:36.987524 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:36.987398 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"ensemble-graph-b2e49-kube-rbac-proxy-sar-config\"" Apr 24 19:25:36.996536 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:36.996513 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-b2e49-5bbbdc7c77-q8pp5"] Apr 24 19:25:37.092311 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:37.092279 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7acf4d57-716d-4c7f-acaa-636ce2b36f0d-openshift-service-ca-bundle\") pod \"ensemble-graph-b2e49-5bbbdc7c77-q8pp5\" (UID: \"7acf4d57-716d-4c7f-acaa-636ce2b36f0d\") " pod="kserve-ci-e2e-test/ensemble-graph-b2e49-5bbbdc7c77-q8pp5" Apr 24 19:25:37.092311 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:37.092322 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7acf4d57-716d-4c7f-acaa-636ce2b36f0d-proxy-tls\") pod \"ensemble-graph-b2e49-5bbbdc7c77-q8pp5\" (UID: \"7acf4d57-716d-4c7f-acaa-636ce2b36f0d\") " pod="kserve-ci-e2e-test/ensemble-graph-b2e49-5bbbdc7c77-q8pp5" Apr 24 19:25:37.193382 ip-10-0-130-85 kubenswrapper[2571]: 
I0424 19:25:37.193346 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7acf4d57-716d-4c7f-acaa-636ce2b36f0d-openshift-service-ca-bundle\") pod \"ensemble-graph-b2e49-5bbbdc7c77-q8pp5\" (UID: \"7acf4d57-716d-4c7f-acaa-636ce2b36f0d\") " pod="kserve-ci-e2e-test/ensemble-graph-b2e49-5bbbdc7c77-q8pp5" Apr 24 19:25:37.193382 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:37.193383 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7acf4d57-716d-4c7f-acaa-636ce2b36f0d-proxy-tls\") pod \"ensemble-graph-b2e49-5bbbdc7c77-q8pp5\" (UID: \"7acf4d57-716d-4c7f-acaa-636ce2b36f0d\") " pod="kserve-ci-e2e-test/ensemble-graph-b2e49-5bbbdc7c77-q8pp5" Apr 24 19:25:37.193993 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:37.193961 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7acf4d57-716d-4c7f-acaa-636ce2b36f0d-openshift-service-ca-bundle\") pod \"ensemble-graph-b2e49-5bbbdc7c77-q8pp5\" (UID: \"7acf4d57-716d-4c7f-acaa-636ce2b36f0d\") " pod="kserve-ci-e2e-test/ensemble-graph-b2e49-5bbbdc7c77-q8pp5" Apr 24 19:25:37.195703 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:37.195681 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7acf4d57-716d-4c7f-acaa-636ce2b36f0d-proxy-tls\") pod \"ensemble-graph-b2e49-5bbbdc7c77-q8pp5\" (UID: \"7acf4d57-716d-4c7f-acaa-636ce2b36f0d\") " pod="kserve-ci-e2e-test/ensemble-graph-b2e49-5bbbdc7c77-q8pp5" Apr 24 19:25:37.294215 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:37.294128 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-b2e49-5bbbdc7c77-q8pp5" Apr 24 19:25:37.410675 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:37.410603 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-b2e49-5bbbdc7c77-q8pp5"] Apr 24 19:25:37.412978 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:25:37.412955 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7acf4d57_716d_4c7f_acaa_636ce2b36f0d.slice/crio-93241d44c46f8e39d52052a52d0690774c1df8c9b553d1f128657f375783f581 WatchSource:0}: Error finding container 93241d44c46f8e39d52052a52d0690774c1df8c9b553d1f128657f375783f581: Status 404 returned error can't find the container with id 93241d44c46f8e39d52052a52d0690774c1df8c9b553d1f128657f375783f581 Apr 24 19:25:37.414745 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:37.414732 2571 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 24 19:25:37.761704 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:37.761671 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-b2e49-5bbbdc7c77-q8pp5" event={"ID":"7acf4d57-716d-4c7f-acaa-636ce2b36f0d","Type":"ContainerStarted","Data":"c9b668cd6beeac813d163fea93c6bcc6e86835fd1deaacd555cb23d97bd4a441"} Apr 24 19:25:37.761704 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:37.761707 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-b2e49-5bbbdc7c77-q8pp5" event={"ID":"7acf4d57-716d-4c7f-acaa-636ce2b36f0d","Type":"ContainerStarted","Data":"93241d44c46f8e39d52052a52d0690774c1df8c9b553d1f128657f375783f581"} Apr 24 19:25:37.761952 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:37.761747 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/ensemble-graph-b2e49-5bbbdc7c77-q8pp5" Apr 24 19:25:37.778014 ip-10-0-130-85 
kubenswrapper[2571]: I0424 19:25:37.777960 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/ensemble-graph-b2e49-5bbbdc7c77-q8pp5" podStartSLOduration=1.7779438669999998 podStartE2EDuration="1.777943867s" podCreationTimestamp="2026-04-24 19:25:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 19:25:37.777333815 +0000 UTC m=+1148.266758109" watchObservedRunningTime="2026-04-24 19:25:37.777943867 +0000 UTC m=+1148.267368164" Apr 24 19:25:39.272511 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:39.272467 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-99560-7846699b87-ljhdf" podUID="298f3ec6-a358-4e20-a59b-26c36c657012" containerName="sequence-graph-99560" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:25:43.770724 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:43.770691 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/ensemble-graph-b2e49-5bbbdc7c77-q8pp5" Apr 24 19:25:44.271456 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:44.271417 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-99560-7846699b87-ljhdf" podUID="298f3ec6-a358-4e20-a59b-26c36c657012" containerName="sequence-graph-99560" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:25:46.788282 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:46.788254 2571 generic.go:358] "Generic (PLEG): container finished" podID="298f3ec6-a358-4e20-a59b-26c36c657012" containerID="86006d7fe982a2b2071c991609eb435ff09a3596ead0832e72d098d0b8613d86" exitCode=0 Apr 24 19:25:46.788624 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:46.788296 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-99560-7846699b87-ljhdf" 
event={"ID":"298f3ec6-a358-4e20-a59b-26c36c657012","Type":"ContainerDied","Data":"86006d7fe982a2b2071c991609eb435ff09a3596ead0832e72d098d0b8613d86"} Apr 24 19:25:46.788624 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:46.788323 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-99560-7846699b87-ljhdf" event={"ID":"298f3ec6-a358-4e20-a59b-26c36c657012","Type":"ContainerDied","Data":"d49e333bbd6ede4060ebcdc979c0eecabf058939e62a6400bdc6d7a975768a6e"} Apr 24 19:25:46.788624 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:46.788337 2571 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d49e333bbd6ede4060ebcdc979c0eecabf058939e62a6400bdc6d7a975768a6e" Apr 24 19:25:46.798976 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:46.798956 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-99560-7846699b87-ljhdf" Apr 24 19:25:46.869679 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:46.869648 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/298f3ec6-a358-4e20-a59b-26c36c657012-proxy-tls\") pod \"298f3ec6-a358-4e20-a59b-26c36c657012\" (UID: \"298f3ec6-a358-4e20-a59b-26c36c657012\") " Apr 24 19:25:46.869857 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:46.869704 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/298f3ec6-a358-4e20-a59b-26c36c657012-openshift-service-ca-bundle\") pod \"298f3ec6-a358-4e20-a59b-26c36c657012\" (UID: \"298f3ec6-a358-4e20-a59b-26c36c657012\") " Apr 24 19:25:46.870101 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:46.870077 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/298f3ec6-a358-4e20-a59b-26c36c657012-openshift-service-ca-bundle" (OuterVolumeSpecName: 
"openshift-service-ca-bundle") pod "298f3ec6-a358-4e20-a59b-26c36c657012" (UID: "298f3ec6-a358-4e20-a59b-26c36c657012"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 19:25:46.871655 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:46.871637 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/298f3ec6-a358-4e20-a59b-26c36c657012-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "298f3ec6-a358-4e20-a59b-26c36c657012" (UID: "298f3ec6-a358-4e20-a59b-26c36c657012"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 19:25:46.971177 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:46.971092 2571 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/298f3ec6-a358-4e20-a59b-26c36c657012-proxy-tls\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:25:46.971177 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:46.971124 2571 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/298f3ec6-a358-4e20-a59b-26c36c657012-openshift-service-ca-bundle\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:25:47.065272 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:47.065238 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-b2e49-5bbbdc7c77-q8pp5"] Apr 24 19:25:47.065500 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:47.065476 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/ensemble-graph-b2e49-5bbbdc7c77-q8pp5" podUID="7acf4d57-716d-4c7f-acaa-636ce2b36f0d" containerName="ensemble-graph-b2e49" containerID="cri-o://c9b668cd6beeac813d163fea93c6bcc6e86835fd1deaacd555cb23d97bd4a441" gracePeriod=30 Apr 24 19:25:47.791040 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:47.791005 2571 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-99560-7846699b87-ljhdf" Apr 24 19:25:47.810615 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:47.810589 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-99560-7846699b87-ljhdf"] Apr 24 19:25:47.814273 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:47.814253 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-99560-7846699b87-ljhdf"] Apr 24 19:25:48.106586 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:48.106507 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="298f3ec6-a358-4e20-a59b-26c36c657012" path="/var/lib/kubelet/pods/298f3ec6-a358-4e20-a59b-26c36c657012/volumes" Apr 24 19:25:48.768784 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:48.768742 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-b2e49-5bbbdc7c77-q8pp5" podUID="7acf4d57-716d-4c7f-acaa-636ce2b36f0d" containerName="ensemble-graph-b2e49" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:25:53.768772 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:53.768735 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-b2e49-5bbbdc7c77-q8pp5" podUID="7acf4d57-716d-4c7f-acaa-636ce2b36f0d" containerName="ensemble-graph-b2e49" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:25:58.769312 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:58.769269 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-b2e49-5bbbdc7c77-q8pp5" podUID="7acf4d57-716d-4c7f-acaa-636ce2b36f0d" containerName="ensemble-graph-b2e49" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:25:58.769667 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:25:58.769372 2571 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="not ready" pod="kserve-ci-e2e-test/ensemble-graph-b2e49-5bbbdc7c77-q8pp5" Apr 24 19:26:03.769201 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:03.769158 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-b2e49-5bbbdc7c77-q8pp5" podUID="7acf4d57-716d-4c7f-acaa-636ce2b36f0d" containerName="ensemble-graph-b2e49" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:26:08.769092 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:08.769053 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-b2e49-5bbbdc7c77-q8pp5" podUID="7acf4d57-716d-4c7f-acaa-636ce2b36f0d" containerName="ensemble-graph-b2e49" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:26:13.768542 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:13.768495 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-b2e49-5bbbdc7c77-q8pp5" podUID="7acf4d57-716d-4c7f-acaa-636ce2b36f0d" containerName="ensemble-graph-b2e49" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:26:16.824545 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:16.824513 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/sequence-graph-443ff-6999dd8899-xg485"] Apr 24 19:26:16.824897 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:16.824877 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="298f3ec6-a358-4e20-a59b-26c36c657012" containerName="sequence-graph-99560" Apr 24 19:26:16.824897 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:16.824891 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="298f3ec6-a358-4e20-a59b-26c36c657012" containerName="sequence-graph-99560" Apr 24 19:26:16.825037 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:16.824962 2571 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="298f3ec6-a358-4e20-a59b-26c36c657012" containerName="sequence-graph-99560" Apr 24 19:26:16.828017 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:16.827998 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-443ff-6999dd8899-xg485" Apr 24 19:26:16.830586 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:16.830569 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"sequence-graph-443ff-kube-rbac-proxy-sar-config\"" Apr 24 19:26:16.830695 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:16.830585 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"sequence-graph-443ff-serving-cert\"" Apr 24 19:26:16.835950 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:16.835925 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-443ff-6999dd8899-xg485"] Apr 24 19:26:16.918807 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:16.918778 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b-openshift-service-ca-bundle\") pod \"sequence-graph-443ff-6999dd8899-xg485\" (UID: \"5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b\") " pod="kserve-ci-e2e-test/sequence-graph-443ff-6999dd8899-xg485" Apr 24 19:26:16.918999 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:16.918830 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b-proxy-tls\") pod \"sequence-graph-443ff-6999dd8899-xg485\" (UID: \"5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b\") " pod="kserve-ci-e2e-test/sequence-graph-443ff-6999dd8899-xg485" Apr 24 19:26:17.019550 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:17.019522 2571 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b-openshift-service-ca-bundle\") pod \"sequence-graph-443ff-6999dd8899-xg485\" (UID: \"5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b\") " pod="kserve-ci-e2e-test/sequence-graph-443ff-6999dd8899-xg485" Apr 24 19:26:17.019724 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:17.019582 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b-proxy-tls\") pod \"sequence-graph-443ff-6999dd8899-xg485\" (UID: \"5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b\") " pod="kserve-ci-e2e-test/sequence-graph-443ff-6999dd8899-xg485" Apr 24 19:26:17.020336 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:17.020312 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b-openshift-service-ca-bundle\") pod \"sequence-graph-443ff-6999dd8899-xg485\" (UID: \"5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b\") " pod="kserve-ci-e2e-test/sequence-graph-443ff-6999dd8899-xg485" Apr 24 19:26:17.022306 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:17.022285 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b-proxy-tls\") pod \"sequence-graph-443ff-6999dd8899-xg485\" (UID: \"5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b\") " pod="kserve-ci-e2e-test/sequence-graph-443ff-6999dd8899-xg485" Apr 24 19:26:17.138169 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:17.138144 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-443ff-6999dd8899-xg485" Apr 24 19:26:17.212619 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:17.212595 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-b2e49-5bbbdc7c77-q8pp5" Apr 24 19:26:17.267379 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:17.267354 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-443ff-6999dd8899-xg485"] Apr 24 19:26:17.269256 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:26:17.269225 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fd9b4a1_ee35_4a19_b6d7_b4ffbcd5cd8b.slice/crio-b7306f8d9b6fd2385bb9c14a756e83b82f04bea6f8b90c4e5e708041aec5032f WatchSource:0}: Error finding container b7306f8d9b6fd2385bb9c14a756e83b82f04bea6f8b90c4e5e708041aec5032f: Status 404 returned error can't find the container with id b7306f8d9b6fd2385bb9c14a756e83b82f04bea6f8b90c4e5e708041aec5032f Apr 24 19:26:17.322715 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:17.322694 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7acf4d57-716d-4c7f-acaa-636ce2b36f0d-proxy-tls\") pod \"7acf4d57-716d-4c7f-acaa-636ce2b36f0d\" (UID: \"7acf4d57-716d-4c7f-acaa-636ce2b36f0d\") " Apr 24 19:26:17.322797 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:17.322738 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7acf4d57-716d-4c7f-acaa-636ce2b36f0d-openshift-service-ca-bundle\") pod \"7acf4d57-716d-4c7f-acaa-636ce2b36f0d\" (UID: \"7acf4d57-716d-4c7f-acaa-636ce2b36f0d\") " Apr 24 19:26:17.323118 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:17.323097 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7acf4d57-716d-4c7f-acaa-636ce2b36f0d-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "7acf4d57-716d-4c7f-acaa-636ce2b36f0d" (UID: "7acf4d57-716d-4c7f-acaa-636ce2b36f0d"). 
InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 19:26:17.324618 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:17.324595 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7acf4d57-716d-4c7f-acaa-636ce2b36f0d-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "7acf4d57-716d-4c7f-acaa-636ce2b36f0d" (UID: "7acf4d57-716d-4c7f-acaa-636ce2b36f0d"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 19:26:17.424153 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:17.424122 2571 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7acf4d57-716d-4c7f-acaa-636ce2b36f0d-openshift-service-ca-bundle\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:26:17.424153 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:17.424152 2571 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7acf4d57-716d-4c7f-acaa-636ce2b36f0d-proxy-tls\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:26:17.879857 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:17.879758 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-443ff-6999dd8899-xg485" event={"ID":"5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b","Type":"ContainerStarted","Data":"940b8c272fac466aa3a2e6bb23d23c297a5f56b08bb59bdbacb6d7152b1df397"} Apr 24 19:26:17.879857 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:17.879803 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/sequence-graph-443ff-6999dd8899-xg485" Apr 24 19:26:17.879857 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:17.879818 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-443ff-6999dd8899-xg485" 
event={"ID":"5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b","Type":"ContainerStarted","Data":"b7306f8d9b6fd2385bb9c14a756e83b82f04bea6f8b90c4e5e708041aec5032f"} Apr 24 19:26:17.880871 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:17.880847 2571 generic.go:358] "Generic (PLEG): container finished" podID="7acf4d57-716d-4c7f-acaa-636ce2b36f0d" containerID="c9b668cd6beeac813d163fea93c6bcc6e86835fd1deaacd555cb23d97bd4a441" exitCode=0 Apr 24 19:26:17.880952 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:17.880897 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-b2e49-5bbbdc7c77-q8pp5" Apr 24 19:26:17.880952 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:17.880893 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-b2e49-5bbbdc7c77-q8pp5" event={"ID":"7acf4d57-716d-4c7f-acaa-636ce2b36f0d","Type":"ContainerDied","Data":"c9b668cd6beeac813d163fea93c6bcc6e86835fd1deaacd555cb23d97bd4a441"} Apr 24 19:26:17.881026 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:17.880951 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-b2e49-5bbbdc7c77-q8pp5" event={"ID":"7acf4d57-716d-4c7f-acaa-636ce2b36f0d","Type":"ContainerDied","Data":"93241d44c46f8e39d52052a52d0690774c1df8c9b553d1f128657f375783f581"} Apr 24 19:26:17.881026 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:17.880969 2571 scope.go:117] "RemoveContainer" containerID="c9b668cd6beeac813d163fea93c6bcc6e86835fd1deaacd555cb23d97bd4a441" Apr 24 19:26:17.889669 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:17.889651 2571 scope.go:117] "RemoveContainer" containerID="c9b668cd6beeac813d163fea93c6bcc6e86835fd1deaacd555cb23d97bd4a441" Apr 24 19:26:17.889895 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:26:17.889875 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c9b668cd6beeac813d163fea93c6bcc6e86835fd1deaacd555cb23d97bd4a441\": container with ID starting with c9b668cd6beeac813d163fea93c6bcc6e86835fd1deaacd555cb23d97bd4a441 not found: ID does not exist" containerID="c9b668cd6beeac813d163fea93c6bcc6e86835fd1deaacd555cb23d97bd4a441" Apr 24 19:26:17.890042 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:17.889922 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9b668cd6beeac813d163fea93c6bcc6e86835fd1deaacd555cb23d97bd4a441"} err="failed to get container status \"c9b668cd6beeac813d163fea93c6bcc6e86835fd1deaacd555cb23d97bd4a441\": rpc error: code = NotFound desc = could not find container \"c9b668cd6beeac813d163fea93c6bcc6e86835fd1deaacd555cb23d97bd4a441\": container with ID starting with c9b668cd6beeac813d163fea93c6bcc6e86835fd1deaacd555cb23d97bd4a441 not found: ID does not exist" Apr 24 19:26:17.897569 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:17.897526 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/sequence-graph-443ff-6999dd8899-xg485" podStartSLOduration=1.897515195 podStartE2EDuration="1.897515195s" podCreationTimestamp="2026-04-24 19:26:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 19:26:17.896875277 +0000 UTC m=+1188.386299575" watchObservedRunningTime="2026-04-24 19:26:17.897515195 +0000 UTC m=+1188.386939492" Apr 24 19:26:17.910380 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:17.910358 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-b2e49-5bbbdc7c77-q8pp5"] Apr 24 19:26:17.914132 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:17.914114 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-b2e49-5bbbdc7c77-q8pp5"] Apr 24 19:26:18.107203 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:18.107161 2571 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7acf4d57-716d-4c7f-acaa-636ce2b36f0d" path="/var/lib/kubelet/pods/7acf4d57-716d-4c7f-acaa-636ce2b36f0d/volumes" Apr 24 19:26:23.890653 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:23.890627 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/sequence-graph-443ff-6999dd8899-xg485" Apr 24 19:26:26.896456 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:26.896427 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-443ff-6999dd8899-xg485"] Apr 24 19:26:26.896831 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:26.896610 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/sequence-graph-443ff-6999dd8899-xg485" podUID="5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b" containerName="sequence-graph-443ff" containerID="cri-o://940b8c272fac466aa3a2e6bb23d23c297a5f56b08bb59bdbacb6d7152b1df397" gracePeriod=30 Apr 24 19:26:28.889370 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:28.889330 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-443ff-6999dd8899-xg485" podUID="5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b" containerName="sequence-graph-443ff" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:26:30.069733 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:30.069704 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9rqzs_0f254dda-8cf9-4910-99c2-638993f242ac/ovn-acl-logging/0.log" Apr 24 19:26:30.071574 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:30.071553 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9rqzs_0f254dda-8cf9-4910-99c2-638993f242ac/ovn-acl-logging/0.log" Apr 24 19:26:30.084012 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:30.083988 2571 scope.go:117] "RemoveContainer" 
containerID="86006d7fe982a2b2071c991609eb435ff09a3596ead0832e72d098d0b8613d86" Apr 24 19:26:33.889016 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:33.888979 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-443ff-6999dd8899-xg485" podUID="5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b" containerName="sequence-graph-443ff" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:26:38.888763 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:38.888722 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-443ff-6999dd8899-xg485" podUID="5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b" containerName="sequence-graph-443ff" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:26:38.889322 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:38.888823 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/sequence-graph-443ff-6999dd8899-xg485" Apr 24 19:26:43.889945 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:43.889878 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-443ff-6999dd8899-xg485" podUID="5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b" containerName="sequence-graph-443ff" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:26:48.889466 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:48.889428 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-443ff-6999dd8899-xg485" podUID="5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b" containerName="sequence-graph-443ff" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:26:53.889929 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:53.889870 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-443ff-6999dd8899-xg485" podUID="5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b" 
containerName="sequence-graph-443ff" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:26:56.923228 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:26:56.923196 2571 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fd9b4a1_ee35_4a19_b6d7_b4ffbcd5cd8b.slice/crio-b7306f8d9b6fd2385bb9c14a756e83b82f04bea6f8b90c4e5e708041aec5032f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fd9b4a1_ee35_4a19_b6d7_b4ffbcd5cd8b.slice/crio-conmon-940b8c272fac466aa3a2e6bb23d23c297a5f56b08bb59bdbacb6d7152b1df397.scope\": RecentStats: unable to find data in memory cache]" Apr 24 19:26:56.923595 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:26:56.923264 2571 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fd9b4a1_ee35_4a19_b6d7_b4ffbcd5cd8b.slice/crio-b7306f8d9b6fd2385bb9c14a756e83b82f04bea6f8b90c4e5e708041aec5032f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fd9b4a1_ee35_4a19_b6d7_b4ffbcd5cd8b.slice/crio-940b8c272fac466aa3a2e6bb23d23c297a5f56b08bb59bdbacb6d7152b1df397.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fd9b4a1_ee35_4a19_b6d7_b4ffbcd5cd8b.slice/crio-conmon-940b8c272fac466aa3a2e6bb23d23c297a5f56b08bb59bdbacb6d7152b1df397.scope\": RecentStats: unable to find data in memory cache]" Apr 24 19:26:56.923595 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:26:56.923335 2571 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fd9b4a1_ee35_4a19_b6d7_b4ffbcd5cd8b.slice/crio-940b8c272fac466aa3a2e6bb23d23c297a5f56b08bb59bdbacb6d7152b1df397.scope\": RecentStats: unable to find data in memory cache]" Apr 24 19:26:56.923595 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:26:56.923371 2571 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fd9b4a1_ee35_4a19_b6d7_b4ffbcd5cd8b.slice/crio-940b8c272fac466aa3a2e6bb23d23c297a5f56b08bb59bdbacb6d7152b1df397.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fd9b4a1_ee35_4a19_b6d7_b4ffbcd5cd8b.slice/crio-conmon-940b8c272fac466aa3a2e6bb23d23c297a5f56b08bb59bdbacb6d7152b1df397.scope\": RecentStats: unable to find data in memory cache]" Apr 24 19:26:56.994757 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:56.994728 2571 generic.go:358] "Generic (PLEG): container finished" podID="5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b" containerID="940b8c272fac466aa3a2e6bb23d23c297a5f56b08bb59bdbacb6d7152b1df397" exitCode=0 Apr 24 19:26:56.994894 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:56.994790 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-443ff-6999dd8899-xg485" event={"ID":"5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b","Type":"ContainerDied","Data":"940b8c272fac466aa3a2e6bb23d23c297a5f56b08bb59bdbacb6d7152b1df397"} Apr 24 19:26:57.045892 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:57.045868 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-443ff-6999dd8899-xg485" Apr 24 19:26:57.135409 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:57.135371 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b-proxy-tls\") pod \"5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b\" (UID: \"5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b\") " Apr 24 19:26:57.135574 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:57.135513 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b-openshift-service-ca-bundle\") pod \"5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b\" (UID: \"5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b\") " Apr 24 19:26:57.135898 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:57.135865 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b" (UID: "5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 19:26:57.137451 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:57.137424 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b" (UID: "5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 19:26:57.236461 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:57.236416 2571 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b-openshift-service-ca-bundle\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:26:57.236461 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:57.236456 2571 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b-proxy-tls\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:26:57.279461 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:57.279432 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-ec538-7b5b96995b-hjrb2"] Apr 24 19:26:57.279809 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:57.279797 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b" containerName="sequence-graph-443ff" Apr 24 19:26:57.279854 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:57.279812 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b" containerName="sequence-graph-443ff" Apr 24 19:26:57.279854 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:57.279838 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7acf4d57-716d-4c7f-acaa-636ce2b36f0d" containerName="ensemble-graph-b2e49" Apr 24 19:26:57.279854 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:57.279844 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="7acf4d57-716d-4c7f-acaa-636ce2b36f0d" containerName="ensemble-graph-b2e49" Apr 24 19:26:57.279971 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:57.279889 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="7acf4d57-716d-4c7f-acaa-636ce2b36f0d" 
containerName="ensemble-graph-b2e49" Apr 24 19:26:57.279971 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:57.279900 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b" containerName="sequence-graph-443ff" Apr 24 19:26:57.283121 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:57.283105 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-ec538-7b5b96995b-hjrb2" Apr 24 19:26:57.285380 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:57.285361 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"ensemble-graph-ec538-serving-cert\"" Apr 24 19:26:57.285485 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:57.285385 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"ensemble-graph-ec538-kube-rbac-proxy-sar-config\"" Apr 24 19:26:57.290138 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:57.290005 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-ec538-7b5b96995b-hjrb2"] Apr 24 19:26:57.337572 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:57.337536 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcf65dd5-f28c-4130-aed1-b49645bf4a43-openshift-service-ca-bundle\") pod \"ensemble-graph-ec538-7b5b96995b-hjrb2\" (UID: \"dcf65dd5-f28c-4130-aed1-b49645bf4a43\") " pod="kserve-ci-e2e-test/ensemble-graph-ec538-7b5b96995b-hjrb2" Apr 24 19:26:57.337742 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:57.337682 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dcf65dd5-f28c-4130-aed1-b49645bf4a43-proxy-tls\") pod \"ensemble-graph-ec538-7b5b96995b-hjrb2\" (UID: \"dcf65dd5-f28c-4130-aed1-b49645bf4a43\") " 
pod="kserve-ci-e2e-test/ensemble-graph-ec538-7b5b96995b-hjrb2" Apr 24 19:26:57.438728 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:57.438695 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dcf65dd5-f28c-4130-aed1-b49645bf4a43-proxy-tls\") pod \"ensemble-graph-ec538-7b5b96995b-hjrb2\" (UID: \"dcf65dd5-f28c-4130-aed1-b49645bf4a43\") " pod="kserve-ci-e2e-test/ensemble-graph-ec538-7b5b96995b-hjrb2" Apr 24 19:26:57.438947 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:57.438746 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcf65dd5-f28c-4130-aed1-b49645bf4a43-openshift-service-ca-bundle\") pod \"ensemble-graph-ec538-7b5b96995b-hjrb2\" (UID: \"dcf65dd5-f28c-4130-aed1-b49645bf4a43\") " pod="kserve-ci-e2e-test/ensemble-graph-ec538-7b5b96995b-hjrb2" Apr 24 19:26:57.439419 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:57.439394 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcf65dd5-f28c-4130-aed1-b49645bf4a43-openshift-service-ca-bundle\") pod \"ensemble-graph-ec538-7b5b96995b-hjrb2\" (UID: \"dcf65dd5-f28c-4130-aed1-b49645bf4a43\") " pod="kserve-ci-e2e-test/ensemble-graph-ec538-7b5b96995b-hjrb2" Apr 24 19:26:57.441063 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:57.441044 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dcf65dd5-f28c-4130-aed1-b49645bf4a43-proxy-tls\") pod \"ensemble-graph-ec538-7b5b96995b-hjrb2\" (UID: \"dcf65dd5-f28c-4130-aed1-b49645bf4a43\") " pod="kserve-ci-e2e-test/ensemble-graph-ec538-7b5b96995b-hjrb2" Apr 24 19:26:57.593544 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:57.593459 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-ec538-7b5b96995b-hjrb2" Apr 24 19:26:57.718762 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:57.718737 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-ec538-7b5b96995b-hjrb2"] Apr 24 19:26:57.720627 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:26:57.720598 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcf65dd5_f28c_4130_aed1_b49645bf4a43.slice/crio-3bdca6bcda1a297ecc3ba4e617cbf09eca7da5089cc90ab8a96fd28dc3eae1d2 WatchSource:0}: Error finding container 3bdca6bcda1a297ecc3ba4e617cbf09eca7da5089cc90ab8a96fd28dc3eae1d2: Status 404 returned error can't find the container with id 3bdca6bcda1a297ecc3ba4e617cbf09eca7da5089cc90ab8a96fd28dc3eae1d2 Apr 24 19:26:57.999672 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:57.999645 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-443ff-6999dd8899-xg485" Apr 24 19:26:58.000121 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:57.999643 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-443ff-6999dd8899-xg485" event={"ID":"5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b","Type":"ContainerDied","Data":"b7306f8d9b6fd2385bb9c14a756e83b82f04bea6f8b90c4e5e708041aec5032f"} Apr 24 19:26:58.000121 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:57.999777 2571 scope.go:117] "RemoveContainer" containerID="940b8c272fac466aa3a2e6bb23d23c297a5f56b08bb59bdbacb6d7152b1df397" Apr 24 19:26:58.001239 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:58.001209 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-ec538-7b5b96995b-hjrb2" event={"ID":"dcf65dd5-f28c-4130-aed1-b49645bf4a43","Type":"ContainerStarted","Data":"51ec81893efcc482a157b10234b75c6ce5e58bd08e9e849506a21a1fa86d9fdf"} Apr 24 19:26:58.001331 
ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:58.001244 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-ec538-7b5b96995b-hjrb2" event={"ID":"dcf65dd5-f28c-4130-aed1-b49645bf4a43","Type":"ContainerStarted","Data":"3bdca6bcda1a297ecc3ba4e617cbf09eca7da5089cc90ab8a96fd28dc3eae1d2"} Apr 24 19:26:58.001331 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:58.001312 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/ensemble-graph-ec538-7b5b96995b-hjrb2" Apr 24 19:26:58.021573 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:58.021531 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/ensemble-graph-ec538-7b5b96995b-hjrb2" podStartSLOduration=1.021519704 podStartE2EDuration="1.021519704s" podCreationTimestamp="2026-04-24 19:26:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 19:26:58.020008508 +0000 UTC m=+1228.509432835" watchObservedRunningTime="2026-04-24 19:26:58.021519704 +0000 UTC m=+1228.510944042" Apr 24 19:26:58.034613 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:58.034588 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-443ff-6999dd8899-xg485"] Apr 24 19:26:58.040296 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:58.040275 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-443ff-6999dd8899-xg485"] Apr 24 19:26:58.108050 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:26:58.108016 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b" path="/var/lib/kubelet/pods/5fd9b4a1-ee35-4a19-b6d7-b4ffbcd5cd8b/volumes" Apr 24 19:27:04.011503 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:27:04.011475 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="kserve-ci-e2e-test/ensemble-graph-ec538-7b5b96995b-hjrb2" Apr 24 19:27:27.127206 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:27:27.127133 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/sequence-graph-cb3b2-5bb4f7479d-567lv"] Apr 24 19:27:27.130340 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:27:27.130326 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-cb3b2-5bb4f7479d-567lv" Apr 24 19:27:27.132545 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:27:27.132527 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"sequence-graph-cb3b2-kube-rbac-proxy-sar-config\"" Apr 24 19:27:27.132849 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:27:27.132833 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"sequence-graph-cb3b2-serving-cert\"" Apr 24 19:27:27.138978 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:27:27.138953 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-cb3b2-5bb4f7479d-567lv"] Apr 24 19:27:27.297011 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:27:27.296965 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7aad4d3e-eb9d-4c81-a294-e14d72ffd25b-proxy-tls\") pod \"sequence-graph-cb3b2-5bb4f7479d-567lv\" (UID: \"7aad4d3e-eb9d-4c81-a294-e14d72ffd25b\") " pod="kserve-ci-e2e-test/sequence-graph-cb3b2-5bb4f7479d-567lv" Apr 24 19:27:27.297011 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:27:27.297020 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7aad4d3e-eb9d-4c81-a294-e14d72ffd25b-openshift-service-ca-bundle\") pod \"sequence-graph-cb3b2-5bb4f7479d-567lv\" (UID: \"7aad4d3e-eb9d-4c81-a294-e14d72ffd25b\") " 
pod="kserve-ci-e2e-test/sequence-graph-cb3b2-5bb4f7479d-567lv" Apr 24 19:27:27.397788 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:27:27.397704 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7aad4d3e-eb9d-4c81-a294-e14d72ffd25b-proxy-tls\") pod \"sequence-graph-cb3b2-5bb4f7479d-567lv\" (UID: \"7aad4d3e-eb9d-4c81-a294-e14d72ffd25b\") " pod="kserve-ci-e2e-test/sequence-graph-cb3b2-5bb4f7479d-567lv" Apr 24 19:27:27.397788 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:27:27.397750 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7aad4d3e-eb9d-4c81-a294-e14d72ffd25b-openshift-service-ca-bundle\") pod \"sequence-graph-cb3b2-5bb4f7479d-567lv\" (UID: \"7aad4d3e-eb9d-4c81-a294-e14d72ffd25b\") " pod="kserve-ci-e2e-test/sequence-graph-cb3b2-5bb4f7479d-567lv" Apr 24 19:27:27.398425 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:27:27.398401 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7aad4d3e-eb9d-4c81-a294-e14d72ffd25b-openshift-service-ca-bundle\") pod \"sequence-graph-cb3b2-5bb4f7479d-567lv\" (UID: \"7aad4d3e-eb9d-4c81-a294-e14d72ffd25b\") " pod="kserve-ci-e2e-test/sequence-graph-cb3b2-5bb4f7479d-567lv" Apr 24 19:27:27.400001 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:27:27.399983 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7aad4d3e-eb9d-4c81-a294-e14d72ffd25b-proxy-tls\") pod \"sequence-graph-cb3b2-5bb4f7479d-567lv\" (UID: \"7aad4d3e-eb9d-4c81-a294-e14d72ffd25b\") " pod="kserve-ci-e2e-test/sequence-graph-cb3b2-5bb4f7479d-567lv" Apr 24 19:27:27.441164 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:27:27.441136 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-cb3b2-5bb4f7479d-567lv" Apr 24 19:27:27.555410 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:27:27.555388 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-cb3b2-5bb4f7479d-567lv"] Apr 24 19:27:27.558291 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:27:27.558260 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7aad4d3e_eb9d_4c81_a294_e14d72ffd25b.slice/crio-82ae6b19437fb504dcb093b7089a4f308c4e25ea3b737241c129ad4f54d5c636 WatchSource:0}: Error finding container 82ae6b19437fb504dcb093b7089a4f308c4e25ea3b737241c129ad4f54d5c636: Status 404 returned error can't find the container with id 82ae6b19437fb504dcb093b7089a4f308c4e25ea3b737241c129ad4f54d5c636 Apr 24 19:27:28.101124 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:27:28.101088 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-cb3b2-5bb4f7479d-567lv" event={"ID":"7aad4d3e-eb9d-4c81-a294-e14d72ffd25b","Type":"ContainerStarted","Data":"ec1528f99a9103ea2538570e1f83ffde543156d330263cd1a760dabb46e929e6"} Apr 24 19:27:28.101124 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:27:28.101127 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-cb3b2-5bb4f7479d-567lv" event={"ID":"7aad4d3e-eb9d-4c81-a294-e14d72ffd25b","Type":"ContainerStarted","Data":"82ae6b19437fb504dcb093b7089a4f308c4e25ea3b737241c129ad4f54d5c636"} Apr 24 19:27:28.101333 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:27:28.101185 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/sequence-graph-cb3b2-5bb4f7479d-567lv" Apr 24 19:27:28.116877 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:27:28.116729 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/sequence-graph-cb3b2-5bb4f7479d-567lv" 
podStartSLOduration=1.11671411 podStartE2EDuration="1.11671411s" podCreationTimestamp="2026-04-24 19:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 19:27:28.116415228 +0000 UTC m=+1258.605839524" watchObservedRunningTime="2026-04-24 19:27:28.11671411 +0000 UTC m=+1258.606138408" Apr 24 19:27:34.109695 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:27:34.109667 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/sequence-graph-cb3b2-5bb4f7479d-567lv" Apr 24 19:31:30.094734 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:31:30.094706 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9rqzs_0f254dda-8cf9-4910-99c2-638993f242ac/ovn-acl-logging/0.log" Apr 24 19:31:30.095820 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:31:30.095797 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9rqzs_0f254dda-8cf9-4910-99c2-638993f242ac/ovn-acl-logging/0.log" Apr 24 19:35:12.037701 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:35:12.037624 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-ec538-7b5b96995b-hjrb2"] Apr 24 19:35:12.038179 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:35:12.037851 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/ensemble-graph-ec538-7b5b96995b-hjrb2" podUID="dcf65dd5-f28c-4130-aed1-b49645bf4a43" containerName="ensemble-graph-ec538" containerID="cri-o://51ec81893efcc482a157b10234b75c6ce5e58bd08e9e849506a21a1fa86d9fdf" gracePeriod=30 Apr 24 19:35:14.009941 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:35:14.009882 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-ec538-7b5b96995b-hjrb2" podUID="dcf65dd5-f28c-4130-aed1-b49645bf4a43" containerName="ensemble-graph-ec538" 
probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:35:19.009845 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:35:19.009804 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-ec538-7b5b96995b-hjrb2" podUID="dcf65dd5-f28c-4130-aed1-b49645bf4a43" containerName="ensemble-graph-ec538" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:35:24.009262 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:35:24.009225 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-ec538-7b5b96995b-hjrb2" podUID="dcf65dd5-f28c-4130-aed1-b49645bf4a43" containerName="ensemble-graph-ec538" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:35:24.009618 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:35:24.009325 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/ensemble-graph-ec538-7b5b96995b-hjrb2" Apr 24 19:35:29.009871 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:35:29.009831 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-ec538-7b5b96995b-hjrb2" podUID="dcf65dd5-f28c-4130-aed1-b49645bf4a43" containerName="ensemble-graph-ec538" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:35:34.009693 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:35:34.009655 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-ec538-7b5b96995b-hjrb2" podUID="dcf65dd5-f28c-4130-aed1-b49645bf4a43" containerName="ensemble-graph-ec538" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:35:39.009594 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:35:39.009557 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/ensemble-graph-ec538-7b5b96995b-hjrb2" podUID="dcf65dd5-f28c-4130-aed1-b49645bf4a43" containerName="ensemble-graph-ec538" 
probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:35:41.767072 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:35:41.767009 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-cb3b2-5bb4f7479d-567lv"] Apr 24 19:35:41.767464 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:35:41.767236 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/sequence-graph-cb3b2-5bb4f7479d-567lv" podUID="7aad4d3e-eb9d-4c81-a294-e14d72ffd25b" containerName="sequence-graph-cb3b2" containerID="cri-o://ec1528f99a9103ea2538570e1f83ffde543156d330263cd1a760dabb46e929e6" gracePeriod=30 Apr 24 19:35:42.528305 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:35:42.528273 2571 generic.go:358] "Generic (PLEG): container finished" podID="dcf65dd5-f28c-4130-aed1-b49645bf4a43" containerID="51ec81893efcc482a157b10234b75c6ce5e58bd08e9e849506a21a1fa86d9fdf" exitCode=0 Apr 24 19:35:42.528494 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:35:42.528353 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-ec538-7b5b96995b-hjrb2" event={"ID":"dcf65dd5-f28c-4130-aed1-b49645bf4a43","Type":"ContainerDied","Data":"51ec81893efcc482a157b10234b75c6ce5e58bd08e9e849506a21a1fa86d9fdf"} Apr 24 19:35:42.675518 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:35:42.675494 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-ec538-7b5b96995b-hjrb2" Apr 24 19:35:42.744353 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:35:42.744317 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dcf65dd5-f28c-4130-aed1-b49645bf4a43-proxy-tls\") pod \"dcf65dd5-f28c-4130-aed1-b49645bf4a43\" (UID: \"dcf65dd5-f28c-4130-aed1-b49645bf4a43\") " Apr 24 19:35:42.744506 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:35:42.744432 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcf65dd5-f28c-4130-aed1-b49645bf4a43-openshift-service-ca-bundle\") pod \"dcf65dd5-f28c-4130-aed1-b49645bf4a43\" (UID: \"dcf65dd5-f28c-4130-aed1-b49645bf4a43\") " Apr 24 19:35:42.744772 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:35:42.744742 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcf65dd5-f28c-4130-aed1-b49645bf4a43-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "dcf65dd5-f28c-4130-aed1-b49645bf4a43" (UID: "dcf65dd5-f28c-4130-aed1-b49645bf4a43"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 19:35:42.746318 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:35:42.746299 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcf65dd5-f28c-4130-aed1-b49645bf4a43-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "dcf65dd5-f28c-4130-aed1-b49645bf4a43" (UID: "dcf65dd5-f28c-4130-aed1-b49645bf4a43"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 19:35:42.845548 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:35:42.845467 2571 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcf65dd5-f28c-4130-aed1-b49645bf4a43-openshift-service-ca-bundle\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:35:42.845548 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:35:42.845493 2571 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dcf65dd5-f28c-4130-aed1-b49645bf4a43-proxy-tls\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:35:43.532692 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:35:43.532654 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/ensemble-graph-ec538-7b5b96995b-hjrb2" event={"ID":"dcf65dd5-f28c-4130-aed1-b49645bf4a43","Type":"ContainerDied","Data":"3bdca6bcda1a297ecc3ba4e617cbf09eca7da5089cc90ab8a96fd28dc3eae1d2"} Apr 24 19:35:43.532692 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:35:43.532678 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/ensemble-graph-ec538-7b5b96995b-hjrb2" Apr 24 19:35:43.532692 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:35:43.532700 2571 scope.go:117] "RemoveContainer" containerID="51ec81893efcc482a157b10234b75c6ce5e58bd08e9e849506a21a1fa86d9fdf" Apr 24 19:35:43.552922 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:35:43.552881 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-ec538-7b5b96995b-hjrb2"] Apr 24 19:35:43.554236 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:35:43.554217 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/ensemble-graph-ec538-7b5b96995b-hjrb2"] Apr 24 19:35:44.107146 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:35:44.107115 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcf65dd5-f28c-4130-aed1-b49645bf4a43" path="/var/lib/kubelet/pods/dcf65dd5-f28c-4130-aed1-b49645bf4a43/volumes" Apr 24 19:35:44.108476 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:35:44.108453 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-cb3b2-5bb4f7479d-567lv" podUID="7aad4d3e-eb9d-4c81-a294-e14d72ffd25b" containerName="sequence-graph-cb3b2" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:35:49.107706 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:35:49.107665 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-cb3b2-5bb4f7479d-567lv" podUID="7aad4d3e-eb9d-4c81-a294-e14d72ffd25b" containerName="sequence-graph-cb3b2" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:35:54.108220 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:35:54.108178 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-cb3b2-5bb4f7479d-567lv" podUID="7aad4d3e-eb9d-4c81-a294-e14d72ffd25b" containerName="sequence-graph-cb3b2" probeResult="failure" output="HTTP probe failed with 
statuscode: 503" Apr 24 19:35:54.108699 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:35:54.108679 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/sequence-graph-cb3b2-5bb4f7479d-567lv" Apr 24 19:35:59.108314 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:35:59.108272 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-cb3b2-5bb4f7479d-567lv" podUID="7aad4d3e-eb9d-4c81-a294-e14d72ffd25b" containerName="sequence-graph-cb3b2" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:36:04.107815 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:04.107772 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-cb3b2-5bb4f7479d-567lv" podUID="7aad4d3e-eb9d-4c81-a294-e14d72ffd25b" containerName="sequence-graph-cb3b2" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:36:09.108170 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:09.108138 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/sequence-graph-cb3b2-5bb4f7479d-567lv" podUID="7aad4d3e-eb9d-4c81-a294-e14d72ffd25b" containerName="sequence-graph-cb3b2" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:36:12.263494 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:12.263458 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/splitter-graph-8e956-6bb6cf5f5d-dbpqh"] Apr 24 19:36:12.263866 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:12.263831 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dcf65dd5-f28c-4130-aed1-b49645bf4a43" containerName="ensemble-graph-ec538" Apr 24 19:36:12.263866 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:12.263842 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcf65dd5-f28c-4130-aed1-b49645bf4a43" containerName="ensemble-graph-ec538" Apr 24 19:36:12.263982 ip-10-0-130-85 
kubenswrapper[2571]: I0424 19:36:12.263940 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="dcf65dd5-f28c-4130-aed1-b49645bf4a43" containerName="ensemble-graph-ec538" Apr 24 19:36:12.266935 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:12.266895 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-8e956-6bb6cf5f5d-dbpqh" Apr 24 19:36:12.269480 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:12.269457 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"splitter-graph-8e956-serving-cert\"" Apr 24 19:36:12.269593 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:12.269460 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"splitter-graph-8e956-kube-rbac-proxy-sar-config\"" Apr 24 19:36:12.277937 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:12.277916 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-8e956-6bb6cf5f5d-dbpqh"] Apr 24 19:36:12.301043 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:12.301014 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d781b40-a10f-49f3-b1d7-1bcb7915eab4-proxy-tls\") pod \"splitter-graph-8e956-6bb6cf5f5d-dbpqh\" (UID: \"9d781b40-a10f-49f3-b1d7-1bcb7915eab4\") " pod="kserve-ci-e2e-test/splitter-graph-8e956-6bb6cf5f5d-dbpqh" Apr 24 19:36:12.301186 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:12.301052 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d781b40-a10f-49f3-b1d7-1bcb7915eab4-openshift-service-ca-bundle\") pod \"splitter-graph-8e956-6bb6cf5f5d-dbpqh\" (UID: \"9d781b40-a10f-49f3-b1d7-1bcb7915eab4\") " pod="kserve-ci-e2e-test/splitter-graph-8e956-6bb6cf5f5d-dbpqh" Apr 24 
19:36:12.401475 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:12.401447 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d781b40-a10f-49f3-b1d7-1bcb7915eab4-proxy-tls\") pod \"splitter-graph-8e956-6bb6cf5f5d-dbpqh\" (UID: \"9d781b40-a10f-49f3-b1d7-1bcb7915eab4\") " pod="kserve-ci-e2e-test/splitter-graph-8e956-6bb6cf5f5d-dbpqh" Apr 24 19:36:12.401586 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:12.401487 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d781b40-a10f-49f3-b1d7-1bcb7915eab4-openshift-service-ca-bundle\") pod \"splitter-graph-8e956-6bb6cf5f5d-dbpqh\" (UID: \"9d781b40-a10f-49f3-b1d7-1bcb7915eab4\") " pod="kserve-ci-e2e-test/splitter-graph-8e956-6bb6cf5f5d-dbpqh" Apr 24 19:36:12.401624 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:36:12.401603 2571 secret.go:189] Couldn't get secret kserve-ci-e2e-test/splitter-graph-8e956-serving-cert: secret "splitter-graph-8e956-serving-cert" not found Apr 24 19:36:12.401688 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:36:12.401676 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d781b40-a10f-49f3-b1d7-1bcb7915eab4-proxy-tls podName:9d781b40-a10f-49f3-b1d7-1bcb7915eab4 nodeName:}" failed. No retries permitted until 2026-04-24 19:36:12.901656514 +0000 UTC m=+1783.391080790 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/9d781b40-a10f-49f3-b1d7-1bcb7915eab4-proxy-tls") pod "splitter-graph-8e956-6bb6cf5f5d-dbpqh" (UID: "9d781b40-a10f-49f3-b1d7-1bcb7915eab4") : secret "splitter-graph-8e956-serving-cert" not found Apr 24 19:36:12.402050 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:12.402033 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d781b40-a10f-49f3-b1d7-1bcb7915eab4-openshift-service-ca-bundle\") pod \"splitter-graph-8e956-6bb6cf5f5d-dbpqh\" (UID: \"9d781b40-a10f-49f3-b1d7-1bcb7915eab4\") " pod="kserve-ci-e2e-test/splitter-graph-8e956-6bb6cf5f5d-dbpqh" Apr 24 19:36:12.403736 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:12.403718 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-cb3b2-5bb4f7479d-567lv" Apr 24 19:36:12.501842 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:12.501809 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7aad4d3e-eb9d-4c81-a294-e14d72ffd25b-proxy-tls\") pod \"7aad4d3e-eb9d-4c81-a294-e14d72ffd25b\" (UID: \"7aad4d3e-eb9d-4c81-a294-e14d72ffd25b\") " Apr 24 19:36:12.502074 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:12.501892 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7aad4d3e-eb9d-4c81-a294-e14d72ffd25b-openshift-service-ca-bundle\") pod \"7aad4d3e-eb9d-4c81-a294-e14d72ffd25b\" (UID: \"7aad4d3e-eb9d-4c81-a294-e14d72ffd25b\") " Apr 24 19:36:12.502209 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:12.502187 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7aad4d3e-eb9d-4c81-a294-e14d72ffd25b-openshift-service-ca-bundle" (OuterVolumeSpecName: 
"openshift-service-ca-bundle") pod "7aad4d3e-eb9d-4c81-a294-e14d72ffd25b" (UID: "7aad4d3e-eb9d-4c81-a294-e14d72ffd25b"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 19:36:12.503855 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:12.503836 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aad4d3e-eb9d-4c81-a294-e14d72ffd25b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "7aad4d3e-eb9d-4c81-a294-e14d72ffd25b" (UID: "7aad4d3e-eb9d-4c81-a294-e14d72ffd25b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 19:36:12.602595 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:12.602506 2571 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7aad4d3e-eb9d-4c81-a294-e14d72ffd25b-proxy-tls\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:36:12.602595 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:12.602548 2571 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7aad4d3e-eb9d-4c81-a294-e14d72ffd25b-openshift-service-ca-bundle\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:36:12.623722 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:12.623694 2571 generic.go:358] "Generic (PLEG): container finished" podID="7aad4d3e-eb9d-4c81-a294-e14d72ffd25b" containerID="ec1528f99a9103ea2538570e1f83ffde543156d330263cd1a760dabb46e929e6" exitCode=0 Apr 24 19:36:12.623839 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:12.623757 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/sequence-graph-cb3b2-5bb4f7479d-567lv" Apr 24 19:36:12.623839 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:12.623772 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-cb3b2-5bb4f7479d-567lv" event={"ID":"7aad4d3e-eb9d-4c81-a294-e14d72ffd25b","Type":"ContainerDied","Data":"ec1528f99a9103ea2538570e1f83ffde543156d330263cd1a760dabb46e929e6"} Apr 24 19:36:12.623839 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:12.623798 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/sequence-graph-cb3b2-5bb4f7479d-567lv" event={"ID":"7aad4d3e-eb9d-4c81-a294-e14d72ffd25b","Type":"ContainerDied","Data":"82ae6b19437fb504dcb093b7089a4f308c4e25ea3b737241c129ad4f54d5c636"} Apr 24 19:36:12.623839 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:12.623813 2571 scope.go:117] "RemoveContainer" containerID="ec1528f99a9103ea2538570e1f83ffde543156d330263cd1a760dabb46e929e6" Apr 24 19:36:12.632320 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:12.632302 2571 scope.go:117] "RemoveContainer" containerID="ec1528f99a9103ea2538570e1f83ffde543156d330263cd1a760dabb46e929e6" Apr 24 19:36:12.632590 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:36:12.632571 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec1528f99a9103ea2538570e1f83ffde543156d330263cd1a760dabb46e929e6\": container with ID starting with ec1528f99a9103ea2538570e1f83ffde543156d330263cd1a760dabb46e929e6 not found: ID does not exist" containerID="ec1528f99a9103ea2538570e1f83ffde543156d330263cd1a760dabb46e929e6" Apr 24 19:36:12.632648 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:12.632597 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec1528f99a9103ea2538570e1f83ffde543156d330263cd1a760dabb46e929e6"} err="failed to get container status 
\"ec1528f99a9103ea2538570e1f83ffde543156d330263cd1a760dabb46e929e6\": rpc error: code = NotFound desc = could not find container \"ec1528f99a9103ea2538570e1f83ffde543156d330263cd1a760dabb46e929e6\": container with ID starting with ec1528f99a9103ea2538570e1f83ffde543156d330263cd1a760dabb46e929e6 not found: ID does not exist" Apr 24 19:36:12.644923 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:12.644880 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-cb3b2-5bb4f7479d-567lv"] Apr 24 19:36:12.650724 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:12.650705 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/sequence-graph-cb3b2-5bb4f7479d-567lv"] Apr 24 19:36:12.904720 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:12.904636 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d781b40-a10f-49f3-b1d7-1bcb7915eab4-proxy-tls\") pod \"splitter-graph-8e956-6bb6cf5f5d-dbpqh\" (UID: \"9d781b40-a10f-49f3-b1d7-1bcb7915eab4\") " pod="kserve-ci-e2e-test/splitter-graph-8e956-6bb6cf5f5d-dbpqh" Apr 24 19:36:12.904872 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:36:12.904753 2571 secret.go:189] Couldn't get secret kserve-ci-e2e-test/splitter-graph-8e956-serving-cert: secret "splitter-graph-8e956-serving-cert" not found Apr 24 19:36:12.904872 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:36:12.904816 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d781b40-a10f-49f3-b1d7-1bcb7915eab4-proxy-tls podName:9d781b40-a10f-49f3-b1d7-1bcb7915eab4 nodeName:}" failed. No retries permitted until 2026-04-24 19:36:13.904800939 +0000 UTC m=+1784.394225215 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/9d781b40-a10f-49f3-b1d7-1bcb7915eab4-proxy-tls") pod "splitter-graph-8e956-6bb6cf5f5d-dbpqh" (UID: "9d781b40-a10f-49f3-b1d7-1bcb7915eab4") : secret "splitter-graph-8e956-serving-cert" not found Apr 24 19:36:13.913863 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:13.913821 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d781b40-a10f-49f3-b1d7-1bcb7915eab4-proxy-tls\") pod \"splitter-graph-8e956-6bb6cf5f5d-dbpqh\" (UID: \"9d781b40-a10f-49f3-b1d7-1bcb7915eab4\") " pod="kserve-ci-e2e-test/splitter-graph-8e956-6bb6cf5f5d-dbpqh" Apr 24 19:36:13.916159 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:13.916139 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d781b40-a10f-49f3-b1d7-1bcb7915eab4-proxy-tls\") pod \"splitter-graph-8e956-6bb6cf5f5d-dbpqh\" (UID: \"9d781b40-a10f-49f3-b1d7-1bcb7915eab4\") " pod="kserve-ci-e2e-test/splitter-graph-8e956-6bb6cf5f5d-dbpqh" Apr 24 19:36:14.077739 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:14.077707 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-8e956-6bb6cf5f5d-dbpqh" Apr 24 19:36:14.108421 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:14.108394 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7aad4d3e-eb9d-4c81-a294-e14d72ffd25b" path="/var/lib/kubelet/pods/7aad4d3e-eb9d-4c81-a294-e14d72ffd25b/volumes" Apr 24 19:36:14.193059 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:14.193036 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-8e956-6bb6cf5f5d-dbpqh"] Apr 24 19:36:14.195277 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:36:14.195247 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d781b40_a10f_49f3_b1d7_1bcb7915eab4.slice/crio-8c39b4004724cd6b3edb18c0a0e039838a62b51655ecf22f1f5fb8a97e5a645f WatchSource:0}: Error finding container 8c39b4004724cd6b3edb18c0a0e039838a62b51655ecf22f1f5fb8a97e5a645f: Status 404 returned error can't find the container with id 8c39b4004724cd6b3edb18c0a0e039838a62b51655ecf22f1f5fb8a97e5a645f Apr 24 19:36:14.197301 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:14.197286 2571 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 24 19:36:14.632605 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:14.632507 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-8e956-6bb6cf5f5d-dbpqh" event={"ID":"9d781b40-a10f-49f3-b1d7-1bcb7915eab4","Type":"ContainerStarted","Data":"180f67bfa8a744c8c0369722c6278bc964abfd3d3d7ee069756d77c38381af74"} Apr 24 19:36:14.632605 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:14.632545 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-8e956-6bb6cf5f5d-dbpqh" event={"ID":"9d781b40-a10f-49f3-b1d7-1bcb7915eab4","Type":"ContainerStarted","Data":"8c39b4004724cd6b3edb18c0a0e039838a62b51655ecf22f1f5fb8a97e5a645f"} Apr 
24 19:36:14.632605 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:14.632562 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/splitter-graph-8e956-6bb6cf5f5d-dbpqh" Apr 24 19:36:14.649510 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:14.649465 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/splitter-graph-8e956-6bb6cf5f5d-dbpqh" podStartSLOduration=2.649450266 podStartE2EDuration="2.649450266s" podCreationTimestamp="2026-04-24 19:36:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 19:36:14.648642045 +0000 UTC m=+1785.138066341" watchObservedRunningTime="2026-04-24 19:36:14.649450266 +0000 UTC m=+1785.138874562" Apr 24 19:36:20.642491 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:20.642411 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/splitter-graph-8e956-6bb6cf5f5d-dbpqh" Apr 24 19:36:22.333252 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:22.333180 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-8e956-6bb6cf5f5d-dbpqh"] Apr 24 19:36:22.333587 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:22.333378 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/splitter-graph-8e956-6bb6cf5f5d-dbpqh" podUID="9d781b40-a10f-49f3-b1d7-1bcb7915eab4" containerName="splitter-graph-8e956" containerID="cri-o://180f67bfa8a744c8c0369722c6278bc964abfd3d3d7ee069756d77c38381af74" gracePeriod=30 Apr 24 19:36:25.639986 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:25.639946 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-8e956-6bb6cf5f5d-dbpqh" podUID="9d781b40-a10f-49f3-b1d7-1bcb7915eab4" containerName="splitter-graph-8e956" probeResult="failure" output="HTTP probe failed with statuscode: 503" 
Apr 24 19:36:30.118754 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:30.118728 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9rqzs_0f254dda-8cf9-4910-99c2-638993f242ac/ovn-acl-logging/0.log" Apr 24 19:36:30.120259 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:30.120233 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9rqzs_0f254dda-8cf9-4910-99c2-638993f242ac/ovn-acl-logging/0.log" Apr 24 19:36:30.639790 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:30.639750 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-8e956-6bb6cf5f5d-dbpqh" podUID="9d781b40-a10f-49f3-b1d7-1bcb7915eab4" containerName="splitter-graph-8e956" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:36:35.640626 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:35.640587 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-8e956-6bb6cf5f5d-dbpqh" podUID="9d781b40-a10f-49f3-b1d7-1bcb7915eab4" containerName="splitter-graph-8e956" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:36:35.641025 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:35.640690 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/splitter-graph-8e956-6bb6cf5f5d-dbpqh" Apr 24 19:36:40.640558 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:40.640519 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-8e956-6bb6cf5f5d-dbpqh" podUID="9d781b40-a10f-49f3-b1d7-1bcb7915eab4" containerName="splitter-graph-8e956" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:36:45.640153 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:45.640112 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-8e956-6bb6cf5f5d-dbpqh" 
podUID="9d781b40-a10f-49f3-b1d7-1bcb7915eab4" containerName="splitter-graph-8e956" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:36:50.640461 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:50.640423 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-8e956-6bb6cf5f5d-dbpqh" podUID="9d781b40-a10f-49f3-b1d7-1bcb7915eab4" containerName="splitter-graph-8e956" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:36:51.973437 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:51.973388 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/switch-graph-a0351-c7768db49-ftz2c"] Apr 24 19:36:51.974001 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:51.973962 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7aad4d3e-eb9d-4c81-a294-e14d72ffd25b" containerName="sequence-graph-cb3b2" Apr 24 19:36:51.974001 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:51.973984 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aad4d3e-eb9d-4c81-a294-e14d72ffd25b" containerName="sequence-graph-cb3b2" Apr 24 19:36:51.974125 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:51.974069 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="7aad4d3e-eb9d-4c81-a294-e14d72ffd25b" containerName="sequence-graph-cb3b2" Apr 24 19:36:51.978277 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:51.978256 2571 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-a0351-c7768db49-ftz2c" Apr 24 19:36:51.980885 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:51.980862 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"switch-graph-a0351-kube-rbac-proxy-sar-config\"" Apr 24 19:36:51.981031 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:51.980934 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"switch-graph-a0351-serving-cert\"" Apr 24 19:36:51.986230 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:51.986208 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/switch-graph-a0351-c7768db49-ftz2c"] Apr 24 19:36:52.135738 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:52.135711 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b96571d8-c3fa-4646-96fb-1705841cc8d5-openshift-service-ca-bundle\") pod \"switch-graph-a0351-c7768db49-ftz2c\" (UID: \"b96571d8-c3fa-4646-96fb-1705841cc8d5\") " pod="kserve-ci-e2e-test/switch-graph-a0351-c7768db49-ftz2c" Apr 24 19:36:52.135897 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:52.135770 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b96571d8-c3fa-4646-96fb-1705841cc8d5-proxy-tls\") pod \"switch-graph-a0351-c7768db49-ftz2c\" (UID: \"b96571d8-c3fa-4646-96fb-1705841cc8d5\") " pod="kserve-ci-e2e-test/switch-graph-a0351-c7768db49-ftz2c" Apr 24 19:36:52.237012 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:52.236927 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b96571d8-c3fa-4646-96fb-1705841cc8d5-proxy-tls\") pod \"switch-graph-a0351-c7768db49-ftz2c\" (UID: \"b96571d8-c3fa-4646-96fb-1705841cc8d5\") " 
pod="kserve-ci-e2e-test/switch-graph-a0351-c7768db49-ftz2c" Apr 24 19:36:52.237012 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:52.236993 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b96571d8-c3fa-4646-96fb-1705841cc8d5-openshift-service-ca-bundle\") pod \"switch-graph-a0351-c7768db49-ftz2c\" (UID: \"b96571d8-c3fa-4646-96fb-1705841cc8d5\") " pod="kserve-ci-e2e-test/switch-graph-a0351-c7768db49-ftz2c" Apr 24 19:36:52.237188 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:36:52.237084 2571 secret.go:189] Couldn't get secret kserve-ci-e2e-test/switch-graph-a0351-serving-cert: secret "switch-graph-a0351-serving-cert" not found Apr 24 19:36:52.237188 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:36:52.237157 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b96571d8-c3fa-4646-96fb-1705841cc8d5-proxy-tls podName:b96571d8-c3fa-4646-96fb-1705841cc8d5 nodeName:}" failed. No retries permitted until 2026-04-24 19:36:52.737139803 +0000 UTC m=+1823.226564078 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/b96571d8-c3fa-4646-96fb-1705841cc8d5-proxy-tls") pod "switch-graph-a0351-c7768db49-ftz2c" (UID: "b96571d8-c3fa-4646-96fb-1705841cc8d5") : secret "switch-graph-a0351-serving-cert" not found Apr 24 19:36:52.237554 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:52.237538 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b96571d8-c3fa-4646-96fb-1705841cc8d5-openshift-service-ca-bundle\") pod \"switch-graph-a0351-c7768db49-ftz2c\" (UID: \"b96571d8-c3fa-4646-96fb-1705841cc8d5\") " pod="kserve-ci-e2e-test/switch-graph-a0351-c7768db49-ftz2c" Apr 24 19:36:52.355463 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:36:52.355427 2571 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/NetworkManager-dispatcher.service\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d781b40_a10f_49f3_b1d7_1bcb7915eab4.slice/crio-180f67bfa8a744c8c0369722c6278bc964abfd3d3d7ee069756d77c38381af74.scope\": RecentStats: unable to find data in memory cache]" Apr 24 19:36:52.355693 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:36:52.355517 2571 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d781b40_a10f_49f3_b1d7_1bcb7915eab4.slice/crio-180f67bfa8a744c8c0369722c6278bc964abfd3d3d7ee069756d77c38381af74.scope\": RecentStats: unable to find data in memory cache], [\"/system.slice/NetworkManager-dispatcher.service\": RecentStats: unable to find data in memory cache]" Apr 24 19:36:52.355825 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:36:52.355550 2571 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d781b40_a10f_49f3_b1d7_1bcb7915eab4.slice/crio-8c39b4004724cd6b3edb18c0a0e039838a62b51655ecf22f1f5fb8a97e5a645f\": RecentStats: unable to find data in memory cache], [\"/system.slice/NetworkManager-dispatcher.service\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d781b40_a10f_49f3_b1d7_1bcb7915eab4.slice/crio-180f67bfa8a744c8c0369722c6278bc964abfd3d3d7ee069756d77c38381af74.scope\": RecentStats: unable to find data in memory cache]" Apr 24 19:36:52.742036 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:52.741982 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b96571d8-c3fa-4646-96fb-1705841cc8d5-proxy-tls\") pod \"switch-graph-a0351-c7768db49-ftz2c\" (UID: \"b96571d8-c3fa-4646-96fb-1705841cc8d5\") " pod="kserve-ci-e2e-test/switch-graph-a0351-c7768db49-ftz2c" Apr 24 19:36:52.744179 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:52.744145 2571 generic.go:358] "Generic (PLEG): container finished" podID="9d781b40-a10f-49f3-b1d7-1bcb7915eab4" containerID="180f67bfa8a744c8c0369722c6278bc964abfd3d3d7ee069756d77c38381af74" exitCode=0 Apr 24 19:36:52.744310 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:52.744223 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-8e956-6bb6cf5f5d-dbpqh" event={"ID":"9d781b40-a10f-49f3-b1d7-1bcb7915eab4","Type":"ContainerDied","Data":"180f67bfa8a744c8c0369722c6278bc964abfd3d3d7ee069756d77c38381af74"} Apr 24 19:36:52.744724 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:52.744703 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b96571d8-c3fa-4646-96fb-1705841cc8d5-proxy-tls\") pod \"switch-graph-a0351-c7768db49-ftz2c\" (UID: \"b96571d8-c3fa-4646-96fb-1705841cc8d5\") " 
pod="kserve-ci-e2e-test/switch-graph-a0351-c7768db49-ftz2c" Apr 24 19:36:52.888618 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:52.888580 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-a0351-c7768db49-ftz2c" Apr 24 19:36:52.981197 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:52.981172 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-8e956-6bb6cf5f5d-dbpqh" Apr 24 19:36:53.010382 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:53.010293 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/switch-graph-a0351-c7768db49-ftz2c"] Apr 24 19:36:53.012871 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:36:53.012842 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb96571d8_c3fa_4646_96fb_1705841cc8d5.slice/crio-fb0bc5ca846f0fd845e08c3196cdd63ac923830a5f51a7ec78a098afa083a965 WatchSource:0}: Error finding container fb0bc5ca846f0fd845e08c3196cdd63ac923830a5f51a7ec78a098afa083a965: Status 404 returned error can't find the container with id fb0bc5ca846f0fd845e08c3196cdd63ac923830a5f51a7ec78a098afa083a965 Apr 24 19:36:53.144731 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:53.144701 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d781b40-a10f-49f3-b1d7-1bcb7915eab4-proxy-tls\") pod \"9d781b40-a10f-49f3-b1d7-1bcb7915eab4\" (UID: \"9d781b40-a10f-49f3-b1d7-1bcb7915eab4\") " Apr 24 19:36:53.144890 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:53.144876 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d781b40-a10f-49f3-b1d7-1bcb7915eab4-openshift-service-ca-bundle\") pod \"9d781b40-a10f-49f3-b1d7-1bcb7915eab4\" (UID: 
\"9d781b40-a10f-49f3-b1d7-1bcb7915eab4\") " Apr 24 19:36:53.145193 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:53.145169 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d781b40-a10f-49f3-b1d7-1bcb7915eab4-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "9d781b40-a10f-49f3-b1d7-1bcb7915eab4" (UID: "9d781b40-a10f-49f3-b1d7-1bcb7915eab4"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 19:36:53.146644 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:53.146623 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d781b40-a10f-49f3-b1d7-1bcb7915eab4-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "9d781b40-a10f-49f3-b1d7-1bcb7915eab4" (UID: "9d781b40-a10f-49f3-b1d7-1bcb7915eab4"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 19:36:53.245796 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:53.245760 2571 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d781b40-a10f-49f3-b1d7-1bcb7915eab4-openshift-service-ca-bundle\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:36:53.245796 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:53.245791 2571 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d781b40-a10f-49f3-b1d7-1bcb7915eab4-proxy-tls\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:36:53.753659 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:53.753619 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-a0351-c7768db49-ftz2c" event={"ID":"b96571d8-c3fa-4646-96fb-1705841cc8d5","Type":"ContainerStarted","Data":"8f44758a355b0563883e3ef3d311a76405d30d33a28e56b868c090171a2a981a"} Apr 24 19:36:53.753659 
ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:53.753665 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-a0351-c7768db49-ftz2c" event={"ID":"b96571d8-c3fa-4646-96fb-1705841cc8d5","Type":"ContainerStarted","Data":"fb0bc5ca846f0fd845e08c3196cdd63ac923830a5f51a7ec78a098afa083a965"} Apr 24 19:36:53.753918 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:53.753831 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/switch-graph-a0351-c7768db49-ftz2c" Apr 24 19:36:53.754802 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:53.754774 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-8e956-6bb6cf5f5d-dbpqh" event={"ID":"9d781b40-a10f-49f3-b1d7-1bcb7915eab4","Type":"ContainerDied","Data":"8c39b4004724cd6b3edb18c0a0e039838a62b51655ecf22f1f5fb8a97e5a645f"} Apr 24 19:36:53.754946 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:53.754811 2571 scope.go:117] "RemoveContainer" containerID="180f67bfa8a744c8c0369722c6278bc964abfd3d3d7ee069756d77c38381af74" Apr 24 19:36:53.754946 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:53.754813 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-8e956-6bb6cf5f5d-dbpqh" Apr 24 19:36:53.772262 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:53.772208 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/switch-graph-a0351-c7768db49-ftz2c" podStartSLOduration=2.772197545 podStartE2EDuration="2.772197545s" podCreationTimestamp="2026-04-24 19:36:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 19:36:53.772072066 +0000 UTC m=+1824.261496363" watchObservedRunningTime="2026-04-24 19:36:53.772197545 +0000 UTC m=+1824.261621865" Apr 24 19:36:53.783331 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:53.783308 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-8e956-6bb6cf5f5d-dbpqh"] Apr 24 19:36:53.787006 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:53.786988 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-8e956-6bb6cf5f5d-dbpqh"] Apr 24 19:36:54.106901 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:54.106825 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d781b40-a10f-49f3-b1d7-1bcb7915eab4" path="/var/lib/kubelet/pods/9d781b40-a10f-49f3-b1d7-1bcb7915eab4/volumes" Apr 24 19:36:59.764757 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:36:59.764725 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/switch-graph-a0351-c7768db49-ftz2c" Apr 24 19:37:22.539386 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:37:22.539351 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/splitter-graph-11f3c-5fc8d754fc-jxw7p"] Apr 24 19:37:22.539731 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:37:22.539699 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9d781b40-a10f-49f3-b1d7-1bcb7915eab4" 
containerName="splitter-graph-8e956" Apr 24 19:37:22.539731 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:37:22.539711 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d781b40-a10f-49f3-b1d7-1bcb7915eab4" containerName="splitter-graph-8e956" Apr 24 19:37:22.539800 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:37:22.539768 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="9d781b40-a10f-49f3-b1d7-1bcb7915eab4" containerName="splitter-graph-8e956" Apr 24 19:37:22.542854 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:37:22.542839 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-11f3c-5fc8d754fc-jxw7p" Apr 24 19:37:22.545520 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:37:22.545502 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"splitter-graph-11f3c-kube-rbac-proxy-sar-config\"" Apr 24 19:37:22.545769 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:37:22.545753 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"splitter-graph-11f3c-serving-cert\"" Apr 24 19:37:22.553388 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:37:22.553363 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-11f3c-5fc8d754fc-jxw7p"] Apr 24 19:37:22.593047 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:37:22.593018 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bf929f1-6853-43f5-8512-c70688f4852c-openshift-service-ca-bundle\") pod \"splitter-graph-11f3c-5fc8d754fc-jxw7p\" (UID: \"7bf929f1-6853-43f5-8512-c70688f4852c\") " pod="kserve-ci-e2e-test/splitter-graph-11f3c-5fc8d754fc-jxw7p" Apr 24 19:37:22.593191 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:37:22.593067 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7bf929f1-6853-43f5-8512-c70688f4852c-proxy-tls\") pod \"splitter-graph-11f3c-5fc8d754fc-jxw7p\" (UID: \"7bf929f1-6853-43f5-8512-c70688f4852c\") " pod="kserve-ci-e2e-test/splitter-graph-11f3c-5fc8d754fc-jxw7p" Apr 24 19:37:22.693699 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:37:22.693670 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bf929f1-6853-43f5-8512-c70688f4852c-openshift-service-ca-bundle\") pod \"splitter-graph-11f3c-5fc8d754fc-jxw7p\" (UID: \"7bf929f1-6853-43f5-8512-c70688f4852c\") " pod="kserve-ci-e2e-test/splitter-graph-11f3c-5fc8d754fc-jxw7p" Apr 24 19:37:22.693867 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:37:22.693721 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7bf929f1-6853-43f5-8512-c70688f4852c-proxy-tls\") pod \"splitter-graph-11f3c-5fc8d754fc-jxw7p\" (UID: \"7bf929f1-6853-43f5-8512-c70688f4852c\") " pod="kserve-ci-e2e-test/splitter-graph-11f3c-5fc8d754fc-jxw7p" Apr 24 19:37:22.694283 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:37:22.694264 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bf929f1-6853-43f5-8512-c70688f4852c-openshift-service-ca-bundle\") pod \"splitter-graph-11f3c-5fc8d754fc-jxw7p\" (UID: \"7bf929f1-6853-43f5-8512-c70688f4852c\") " pod="kserve-ci-e2e-test/splitter-graph-11f3c-5fc8d754fc-jxw7p" Apr 24 19:37:22.695943 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:37:22.695896 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7bf929f1-6853-43f5-8512-c70688f4852c-proxy-tls\") pod \"splitter-graph-11f3c-5fc8d754fc-jxw7p\" (UID: \"7bf929f1-6853-43f5-8512-c70688f4852c\") " 
pod="kserve-ci-e2e-test/splitter-graph-11f3c-5fc8d754fc-jxw7p" Apr 24 19:37:22.854058 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:37:22.853981 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-11f3c-5fc8d754fc-jxw7p" Apr 24 19:37:22.970505 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:37:22.970481 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-11f3c-5fc8d754fc-jxw7p"] Apr 24 19:37:22.972933 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:37:22.972888 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bf929f1_6853_43f5_8512_c70688f4852c.slice/crio-fd8e53debd827492b4cce4135cc357418b6f87a50881f16b04ed10c35017cd0d WatchSource:0}: Error finding container fd8e53debd827492b4cce4135cc357418b6f87a50881f16b04ed10c35017cd0d: Status 404 returned error can't find the container with id fd8e53debd827492b4cce4135cc357418b6f87a50881f16b04ed10c35017cd0d Apr 24 19:37:23.842996 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:37:23.842956 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-11f3c-5fc8d754fc-jxw7p" event={"ID":"7bf929f1-6853-43f5-8512-c70688f4852c","Type":"ContainerStarted","Data":"a557538411fb85ec289469772bec73064741a8039c2f8daa6cb3d272a463d69b"} Apr 24 19:37:23.842996 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:37:23.842995 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-11f3c-5fc8d754fc-jxw7p" event={"ID":"7bf929f1-6853-43f5-8512-c70688f4852c","Type":"ContainerStarted","Data":"fd8e53debd827492b4cce4135cc357418b6f87a50881f16b04ed10c35017cd0d"} Apr 24 19:37:23.843432 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:37:23.843075 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/splitter-graph-11f3c-5fc8d754fc-jxw7p" Apr 24 19:37:23.861228 
ip-10-0-130-85 kubenswrapper[2571]: I0424 19:37:23.861184 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/splitter-graph-11f3c-5fc8d754fc-jxw7p" podStartSLOduration=1.861171892 podStartE2EDuration="1.861171892s" podCreationTimestamp="2026-04-24 19:37:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 19:37:23.85902992 +0000 UTC m=+1854.348454231" watchObservedRunningTime="2026-04-24 19:37:23.861171892 +0000 UTC m=+1854.350596189" Apr 24 19:37:29.852010 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:37:29.851983 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/splitter-graph-11f3c-5fc8d754fc-jxw7p" Apr 24 19:41:30.141882 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:41:30.141815 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9rqzs_0f254dda-8cf9-4910-99c2-638993f242ac/ovn-acl-logging/0.log" Apr 24 19:41:30.144484 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:41:30.143602 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9rqzs_0f254dda-8cf9-4910-99c2-638993f242ac/ovn-acl-logging/0.log" Apr 24 19:45:37.154802 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:45:37.154724 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-11f3c-5fc8d754fc-jxw7p"] Apr 24 19:45:37.155340 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:45:37.154976 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/splitter-graph-11f3c-5fc8d754fc-jxw7p" podUID="7bf929f1-6853-43f5-8512-c70688f4852c" containerName="splitter-graph-11f3c" containerID="cri-o://a557538411fb85ec289469772bec73064741a8039c2f8daa6cb3d272a463d69b" gracePeriod=30 Apr 24 19:45:39.849980 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:45:39.849937 2571 
prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-11f3c-5fc8d754fc-jxw7p" podUID="7bf929f1-6853-43f5-8512-c70688f4852c" containerName="splitter-graph-11f3c" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:45:44.850109 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:45:44.850069 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-11f3c-5fc8d754fc-jxw7p" podUID="7bf929f1-6853-43f5-8512-c70688f4852c" containerName="splitter-graph-11f3c" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:45:49.850003 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:45:49.849969 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-11f3c-5fc8d754fc-jxw7p" podUID="7bf929f1-6853-43f5-8512-c70688f4852c" containerName="splitter-graph-11f3c" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:45:49.850410 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:45:49.850085 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/splitter-graph-11f3c-5fc8d754fc-jxw7p" Apr 24 19:45:54.850550 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:45:54.850507 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-11f3c-5fc8d754fc-jxw7p" podUID="7bf929f1-6853-43f5-8512-c70688f4852c" containerName="splitter-graph-11f3c" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:45:59.850400 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:45:59.850362 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-11f3c-5fc8d754fc-jxw7p" podUID="7bf929f1-6853-43f5-8512-c70688f4852c" containerName="splitter-graph-11f3c" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:46:04.850596 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:46:04.850559 2571 
prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/splitter-graph-11f3c-5fc8d754fc-jxw7p" podUID="7bf929f1-6853-43f5-8512-c70688f4852c" containerName="splitter-graph-11f3c" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 24 19:46:07.297775 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:46:07.297748 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-11f3c-5fc8d754fc-jxw7p" Apr 24 19:46:07.423191 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:46:07.423090 2571 generic.go:358] "Generic (PLEG): container finished" podID="7bf929f1-6853-43f5-8512-c70688f4852c" containerID="a557538411fb85ec289469772bec73064741a8039c2f8daa6cb3d272a463d69b" exitCode=0 Apr 24 19:46:07.423191 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:46:07.423169 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-11f3c-5fc8d754fc-jxw7p" event={"ID":"7bf929f1-6853-43f5-8512-c70688f4852c","Type":"ContainerDied","Data":"a557538411fb85ec289469772bec73064741a8039c2f8daa6cb3d272a463d69b"} Apr 24 19:46:07.423415 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:46:07.423191 2571 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/splitter-graph-11f3c-5fc8d754fc-jxw7p" Apr 24 19:46:07.423415 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:46:07.423211 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/splitter-graph-11f3c-5fc8d754fc-jxw7p" event={"ID":"7bf929f1-6853-43f5-8512-c70688f4852c","Type":"ContainerDied","Data":"fd8e53debd827492b4cce4135cc357418b6f87a50881f16b04ed10c35017cd0d"} Apr 24 19:46:07.423415 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:46:07.423227 2571 scope.go:117] "RemoveContainer" containerID="a557538411fb85ec289469772bec73064741a8039c2f8daa6cb3d272a463d69b" Apr 24 19:46:07.431008 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:46:07.430991 2571 scope.go:117] "RemoveContainer" containerID="a557538411fb85ec289469772bec73064741a8039c2f8daa6cb3d272a463d69b" Apr 24 19:46:07.431264 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:46:07.431245 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a557538411fb85ec289469772bec73064741a8039c2f8daa6cb3d272a463d69b\": container with ID starting with a557538411fb85ec289469772bec73064741a8039c2f8daa6cb3d272a463d69b not found: ID does not exist" containerID="a557538411fb85ec289469772bec73064741a8039c2f8daa6cb3d272a463d69b" Apr 24 19:46:07.431310 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:46:07.431273 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a557538411fb85ec289469772bec73064741a8039c2f8daa6cb3d272a463d69b"} err="failed to get container status \"a557538411fb85ec289469772bec73064741a8039c2f8daa6cb3d272a463d69b\": rpc error: code = NotFound desc = could not find container \"a557538411fb85ec289469772bec73064741a8039c2f8daa6cb3d272a463d69b\": container with ID starting with a557538411fb85ec289469772bec73064741a8039c2f8daa6cb3d272a463d69b not found: ID does not exist" Apr 24 19:46:07.458380 ip-10-0-130-85 kubenswrapper[2571]: I0424 
19:46:07.458354 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bf929f1-6853-43f5-8512-c70688f4852c-openshift-service-ca-bundle\") pod \"7bf929f1-6853-43f5-8512-c70688f4852c\" (UID: \"7bf929f1-6853-43f5-8512-c70688f4852c\") " Apr 24 19:46:07.458471 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:46:07.458408 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7bf929f1-6853-43f5-8512-c70688f4852c-proxy-tls\") pod \"7bf929f1-6853-43f5-8512-c70688f4852c\" (UID: \"7bf929f1-6853-43f5-8512-c70688f4852c\") " Apr 24 19:46:07.458686 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:46:07.458665 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bf929f1-6853-43f5-8512-c70688f4852c-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "7bf929f1-6853-43f5-8512-c70688f4852c" (UID: "7bf929f1-6853-43f5-8512-c70688f4852c"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 19:46:07.460458 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:46:07.460442 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bf929f1-6853-43f5-8512-c70688f4852c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "7bf929f1-6853-43f5-8512-c70688f4852c" (UID: "7bf929f1-6853-43f5-8512-c70688f4852c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 19:46:07.559227 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:46:07.559176 2571 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bf929f1-6853-43f5-8512-c70688f4852c-openshift-service-ca-bundle\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:46:07.559227 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:46:07.559221 2571 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7bf929f1-6853-43f5-8512-c70688f4852c-proxy-tls\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\"" Apr 24 19:46:07.744533 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:46:07.744501 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-11f3c-5fc8d754fc-jxw7p"] Apr 24 19:46:07.747483 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:46:07.747461 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/splitter-graph-11f3c-5fc8d754fc-jxw7p"] Apr 24 19:46:08.107011 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:46:08.106931 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bf929f1-6853-43f5-8512-c70688f4852c" path="/var/lib/kubelet/pods/7bf929f1-6853-43f5-8512-c70688f4852c/volumes" Apr 24 19:46:30.164513 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:46:30.164484 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9rqzs_0f254dda-8cf9-4910-99c2-638993f242ac/ovn-acl-logging/0.log" Apr 24 19:46:30.167107 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:46:30.167089 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9rqzs_0f254dda-8cf9-4910-99c2-638993f242ac/ovn-acl-logging/0.log" Apr 24 19:51:30.190029 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:51:30.189890 2571 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9rqzs_0f254dda-8cf9-4910-99c2-638993f242ac/ovn-acl-logging/0.log"
Apr 24 19:51:30.194140 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:51:30.190769 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9rqzs_0f254dda-8cf9-4910-99c2-638993f242ac/ovn-acl-logging/0.log"
Apr 24 19:53:11.379520 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:11.379433 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/switch-graph-a0351-c7768db49-ftz2c"]
Apr 24 19:53:11.380075 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:11.379695 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/switch-graph-a0351-c7768db49-ftz2c" podUID="b96571d8-c3fa-4646-96fb-1705841cc8d5" containerName="switch-graph-a0351" containerID="cri-o://8f44758a355b0563883e3ef3d311a76405d30d33a28e56b868c090171a2a981a" gracePeriod=30
Apr 24 19:53:12.799622 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:12.799592 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mzt5j/must-gather-8v25w"]
Apr 24 19:53:12.800063 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:12.799962 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7bf929f1-6853-43f5-8512-c70688f4852c" containerName="splitter-graph-11f3c"
Apr 24 19:53:12.800063 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:12.799976 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bf929f1-6853-43f5-8512-c70688f4852c" containerName="splitter-graph-11f3c"
Apr 24 19:53:12.800063 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:12.800027 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="7bf929f1-6853-43f5-8512-c70688f4852c" containerName="splitter-graph-11f3c"
Apr 24 19:53:12.803113 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:12.803094 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mzt5j/must-gather-8v25w"
Apr 24 19:53:12.805664 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:12.805641 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-mzt5j\"/\"default-dockercfg-7zlc8\""
Apr 24 19:53:12.805926 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:12.805897 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-mzt5j\"/\"openshift-service-ca.crt\""
Apr 24 19:53:12.806943 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:12.806925 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-mzt5j\"/\"kube-root-ca.crt\""
Apr 24 19:53:12.818736 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:12.818716 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mzt5j/must-gather-8v25w"]
Apr 24 19:53:12.844218 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:12.844195 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a3ea2e9f-74fc-4666-b245-f9c853ddf1b1-must-gather-output\") pod \"must-gather-8v25w\" (UID: \"a3ea2e9f-74fc-4666-b245-f9c853ddf1b1\") " pod="openshift-must-gather-mzt5j/must-gather-8v25w"
Apr 24 19:53:12.844321 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:12.844243 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44jpf\" (UniqueName: \"kubernetes.io/projected/a3ea2e9f-74fc-4666-b245-f9c853ddf1b1-kube-api-access-44jpf\") pod \"must-gather-8v25w\" (UID: \"a3ea2e9f-74fc-4666-b245-f9c853ddf1b1\") " pod="openshift-must-gather-mzt5j/must-gather-8v25w"
Apr 24 19:53:12.945286 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:12.945259 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a3ea2e9f-74fc-4666-b245-f9c853ddf1b1-must-gather-output\") pod \"must-gather-8v25w\" (UID: \"a3ea2e9f-74fc-4666-b245-f9c853ddf1b1\") " pod="openshift-must-gather-mzt5j/must-gather-8v25w"
Apr 24 19:53:12.945401 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:12.945339 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-44jpf\" (UniqueName: \"kubernetes.io/projected/a3ea2e9f-74fc-4666-b245-f9c853ddf1b1-kube-api-access-44jpf\") pod \"must-gather-8v25w\" (UID: \"a3ea2e9f-74fc-4666-b245-f9c853ddf1b1\") " pod="openshift-must-gather-mzt5j/must-gather-8v25w"
Apr 24 19:53:12.945583 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:12.945564 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a3ea2e9f-74fc-4666-b245-f9c853ddf1b1-must-gather-output\") pod \"must-gather-8v25w\" (UID: \"a3ea2e9f-74fc-4666-b245-f9c853ddf1b1\") " pod="openshift-must-gather-mzt5j/must-gather-8v25w"
Apr 24 19:53:12.953031 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:12.953014 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-44jpf\" (UniqueName: \"kubernetes.io/projected/a3ea2e9f-74fc-4666-b245-f9c853ddf1b1-kube-api-access-44jpf\") pod \"must-gather-8v25w\" (UID: \"a3ea2e9f-74fc-4666-b245-f9c853ddf1b1\") " pod="openshift-must-gather-mzt5j/must-gather-8v25w"
Apr 24 19:53:13.121076 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:13.120987 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mzt5j/must-gather-8v25w"
Apr 24 19:53:13.234637 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:13.234592 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mzt5j/must-gather-8v25w"]
Apr 24 19:53:13.236869 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:53:13.236841 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3ea2e9f_74fc_4666_b245_f9c853ddf1b1.slice/crio-af682fb3e642451e818d02cbc54102d5671b02f8fc47bd4593d32876fab6d70d WatchSource:0}: Error finding container af682fb3e642451e818d02cbc54102d5671b02f8fc47bd4593d32876fab6d70d: Status 404 returned error can't find the container with id af682fb3e642451e818d02cbc54102d5671b02f8fc47bd4593d32876fab6d70d
Apr 24 19:53:13.238537 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:13.238518 2571 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 24 19:53:13.681088 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:13.681056 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mzt5j/must-gather-8v25w" event={"ID":"a3ea2e9f-74fc-4666-b245-f9c853ddf1b1","Type":"ContainerStarted","Data":"af682fb3e642451e818d02cbc54102d5671b02f8fc47bd4593d32876fab6d70d"}
Apr 24 19:53:14.764594 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:14.764553 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-a0351-c7768db49-ftz2c" podUID="b96571d8-c3fa-4646-96fb-1705841cc8d5" containerName="switch-graph-a0351" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 24 19:53:17.698295 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:17.698255 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mzt5j/must-gather-8v25w" event={"ID":"a3ea2e9f-74fc-4666-b245-f9c853ddf1b1","Type":"ContainerStarted","Data":"73e062f17e4dcf81692ddaa7cf1b112164542cac36b2e531551e92abb0d43b0f"}
Apr 24 19:53:17.698678 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:17.698302 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mzt5j/must-gather-8v25w" event={"ID":"a3ea2e9f-74fc-4666-b245-f9c853ddf1b1","Type":"ContainerStarted","Data":"ec8822690b34985a78de1926f42798fac12f83910cc1d09fff3121db56da0bf3"}
Apr 24 19:53:17.716314 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:17.716267 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-mzt5j/must-gather-8v25w" podStartSLOduration=1.655515142 podStartE2EDuration="5.716252064s" podCreationTimestamp="2026-04-24 19:53:12 +0000 UTC" firstStartedPulling="2026-04-24 19:53:13.238680401 +0000 UTC m=+2803.728104677" lastFinishedPulling="2026-04-24 19:53:17.299417323 +0000 UTC m=+2807.788841599" observedRunningTime="2026-04-24 19:53:17.713996369 +0000 UTC m=+2808.203420667" watchObservedRunningTime="2026-04-24 19:53:17.716252064 +0000 UTC m=+2808.205676361"
Apr 24 19:53:19.763389 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:19.763348 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-a0351-c7768db49-ftz2c" podUID="b96571d8-c3fa-4646-96fb-1705841cc8d5" containerName="switch-graph-a0351" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 24 19:53:24.763200 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:24.763160 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-a0351-c7768db49-ftz2c" podUID="b96571d8-c3fa-4646-96fb-1705841cc8d5" containerName="switch-graph-a0351" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 24 19:53:24.763656 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:24.763271 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/switch-graph-a0351-c7768db49-ftz2c"
Apr 24 19:53:25.784436 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:25.784409 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-a0351-c7768db49-ftz2c_b96571d8-c3fa-4646-96fb-1705841cc8d5/switch-graph-a0351/0.log"
Apr 24 19:53:26.589919 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:26.589877 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-a0351-c7768db49-ftz2c_b96571d8-c3fa-4646-96fb-1705841cc8d5/switch-graph-a0351/0.log"
Apr 24 19:53:27.330581 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:27.330555 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-a0351-c7768db49-ftz2c_b96571d8-c3fa-4646-96fb-1705841cc8d5/switch-graph-a0351/0.log"
Apr 24 19:53:28.076313 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:28.076283 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-a0351-c7768db49-ftz2c_b96571d8-c3fa-4646-96fb-1705841cc8d5/switch-graph-a0351/0.log"
Apr 24 19:53:28.839867 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:28.839838 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-a0351-c7768db49-ftz2c_b96571d8-c3fa-4646-96fb-1705841cc8d5/switch-graph-a0351/0.log"
Apr 24 19:53:29.619223 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:29.619194 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-a0351-c7768db49-ftz2c_b96571d8-c3fa-4646-96fb-1705841cc8d5/switch-graph-a0351/0.log"
Apr 24 19:53:29.764809 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:29.764773 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-a0351-c7768db49-ftz2c" podUID="b96571d8-c3fa-4646-96fb-1705841cc8d5" containerName="switch-graph-a0351" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 24 19:53:30.386746 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:30.386720 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-a0351-c7768db49-ftz2c_b96571d8-c3fa-4646-96fb-1705841cc8d5/switch-graph-a0351/0.log"
Apr 24 19:53:31.128194 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:31.128157 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-a0351-c7768db49-ftz2c_b96571d8-c3fa-4646-96fb-1705841cc8d5/switch-graph-a0351/0.log"
Apr 24 19:53:31.881487 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:31.881460 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-a0351-c7768db49-ftz2c_b96571d8-c3fa-4646-96fb-1705841cc8d5/switch-graph-a0351/0.log"
Apr 24 19:53:32.627113 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:32.627080 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-a0351-c7768db49-ftz2c_b96571d8-c3fa-4646-96fb-1705841cc8d5/switch-graph-a0351/0.log"
Apr 24 19:53:33.369809 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:33.369778 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-a0351-c7768db49-ftz2c_b96571d8-c3fa-4646-96fb-1705841cc8d5/switch-graph-a0351/0.log"
Apr 24 19:53:34.129186 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:34.129156 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_switch-graph-a0351-c7768db49-ftz2c_b96571d8-c3fa-4646-96fb-1705841cc8d5/switch-graph-a0351/0.log"
Apr 24 19:53:34.763171 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:34.763127 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-a0351-c7768db49-ftz2c" podUID="b96571d8-c3fa-4646-96fb-1705841cc8d5" containerName="switch-graph-a0351" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 24 19:53:35.760350 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:35.760319 2571 generic.go:358] "Generic (PLEG): container finished" podID="a3ea2e9f-74fc-4666-b245-f9c853ddf1b1" containerID="ec8822690b34985a78de1926f42798fac12f83910cc1d09fff3121db56da0bf3" exitCode=0
Apr 24 19:53:35.760547 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:35.760392 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mzt5j/must-gather-8v25w" event={"ID":"a3ea2e9f-74fc-4666-b245-f9c853ddf1b1","Type":"ContainerDied","Data":"ec8822690b34985a78de1926f42798fac12f83910cc1d09fff3121db56da0bf3"}
Apr 24 19:53:35.760751 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:35.760736 2571 scope.go:117] "RemoveContainer" containerID="ec8822690b34985a78de1926f42798fac12f83910cc1d09fff3121db56da0bf3"
Apr 24 19:53:36.542164 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:36.542134 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mzt5j_must-gather-8v25w_a3ea2e9f-74fc-4666-b245-f9c853ddf1b1/gather/0.log"
Apr 24 19:53:39.763160 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:39.763121 2571 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/switch-graph-a0351-c7768db49-ftz2c" podUID="b96571d8-c3fa-4646-96fb-1705841cc8d5" containerName="switch-graph-a0351" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 24 19:53:39.863329 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:39.863303 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_global-pull-secret-syncer-sppcw_06e96d17-2cb4-45ff-b8aa-5dd288628fc3/global-pull-secret-syncer/0.log"
Apr 24 19:53:39.986838 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:39.986812 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_konnectivity-agent-ztvgx_5767c74f-a79f-44ed-966e-2106086c99dc/konnectivity-agent/0.log"
Apr 24 19:53:40.063572 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:40.063497 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_kube-apiserver-proxy-ip-10-0-130-85.ec2.internal_e6111c3de51e90f6a3a0694ec4ce1e2d/haproxy/0.log"
Apr 24 19:53:41.543304 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:41.543282 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-a0351-c7768db49-ftz2c"
Apr 24 19:53:41.686570 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:41.686543 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b96571d8-c3fa-4646-96fb-1705841cc8d5-proxy-tls\") pod \"b96571d8-c3fa-4646-96fb-1705841cc8d5\" (UID: \"b96571d8-c3fa-4646-96fb-1705841cc8d5\") "
Apr 24 19:53:41.686712 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:41.686635 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b96571d8-c3fa-4646-96fb-1705841cc8d5-openshift-service-ca-bundle\") pod \"b96571d8-c3fa-4646-96fb-1705841cc8d5\" (UID: \"b96571d8-c3fa-4646-96fb-1705841cc8d5\") "
Apr 24 19:53:41.686999 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:41.686976 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b96571d8-c3fa-4646-96fb-1705841cc8d5-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "b96571d8-c3fa-4646-96fb-1705841cc8d5" (UID: "b96571d8-c3fa-4646-96fb-1705841cc8d5"). InnerVolumeSpecName "openshift-service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 24 19:53:41.688562 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:41.688543 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b96571d8-c3fa-4646-96fb-1705841cc8d5-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b96571d8-c3fa-4646-96fb-1705841cc8d5" (UID: "b96571d8-c3fa-4646-96fb-1705841cc8d5"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 24 19:53:41.777438 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:41.777411 2571 generic.go:358] "Generic (PLEG): container finished" podID="b96571d8-c3fa-4646-96fb-1705841cc8d5" containerID="8f44758a355b0563883e3ef3d311a76405d30d33a28e56b868c090171a2a981a" exitCode=0
Apr 24 19:53:41.777550 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:41.777478 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/switch-graph-a0351-c7768db49-ftz2c"
Apr 24 19:53:41.777550 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:41.777485 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-a0351-c7768db49-ftz2c" event={"ID":"b96571d8-c3fa-4646-96fb-1705841cc8d5","Type":"ContainerDied","Data":"8f44758a355b0563883e3ef3d311a76405d30d33a28e56b868c090171a2a981a"}
Apr 24 19:53:41.777550 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:41.777508 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/switch-graph-a0351-c7768db49-ftz2c" event={"ID":"b96571d8-c3fa-4646-96fb-1705841cc8d5","Type":"ContainerDied","Data":"fb0bc5ca846f0fd845e08c3196cdd63ac923830a5f51a7ec78a098afa083a965"}
Apr 24 19:53:41.777550 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:41.777523 2571 scope.go:117] "RemoveContainer" containerID="8f44758a355b0563883e3ef3d311a76405d30d33a28e56b868c090171a2a981a"
Apr 24 19:53:41.787381 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:41.787355 2571 scope.go:117] "RemoveContainer" containerID="8f44758a355b0563883e3ef3d311a76405d30d33a28e56b868c090171a2a981a"
Apr 24 19:53:41.787676 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:41.787655 2571 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b96571d8-c3fa-4646-96fb-1705841cc8d5-openshift-service-ca-bundle\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\""
Apr 24 19:53:41.787769 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:41.787686 2571 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b96571d8-c3fa-4646-96fb-1705841cc8d5-proxy-tls\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\""
Apr 24 19:53:41.787769 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:53:41.787654 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f44758a355b0563883e3ef3d311a76405d30d33a28e56b868c090171a2a981a\": container with ID starting with 8f44758a355b0563883e3ef3d311a76405d30d33a28e56b868c090171a2a981a not found: ID does not exist" containerID="8f44758a355b0563883e3ef3d311a76405d30d33a28e56b868c090171a2a981a"
Apr 24 19:53:41.787769 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:41.787716 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f44758a355b0563883e3ef3d311a76405d30d33a28e56b868c090171a2a981a"} err="failed to get container status \"8f44758a355b0563883e3ef3d311a76405d30d33a28e56b868c090171a2a981a\": rpc error: code = NotFound desc = could not find container \"8f44758a355b0563883e3ef3d311a76405d30d33a28e56b868c090171a2a981a\": container with ID starting with 8f44758a355b0563883e3ef3d311a76405d30d33a28e56b868c090171a2a981a not found: ID does not exist"
Apr 24 19:53:41.801012 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:41.800992 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/switch-graph-a0351-c7768db49-ftz2c"]
Apr 24 19:53:41.804409 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:41.804389 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/switch-graph-a0351-c7768db49-ftz2c"]
Apr 24 19:53:41.949119 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:41.949053 2571 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mzt5j/must-gather-8v25w"]
Apr 24 19:53:41.949272 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:41.949253 2571 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-must-gather-mzt5j/must-gather-8v25w" podUID="a3ea2e9f-74fc-4666-b245-f9c853ddf1b1" containerName="copy" containerID="cri-o://73e062f17e4dcf81692ddaa7cf1b112164542cac36b2e531551e92abb0d43b0f" gracePeriod=2
Apr 24 19:53:41.954027 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:41.954005 2571 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mzt5j/must-gather-8v25w"]
Apr 24 19:53:42.108449 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:42.108408 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b96571d8-c3fa-4646-96fb-1705841cc8d5" path="/var/lib/kubelet/pods/b96571d8-c3fa-4646-96fb-1705841cc8d5/volumes"
Apr 24 19:53:42.163736 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:42.163714 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mzt5j_must-gather-8v25w_a3ea2e9f-74fc-4666-b245-f9c853ddf1b1/copy/0.log"
Apr 24 19:53:42.164085 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:42.164070 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mzt5j/must-gather-8v25w"
Apr 24 19:53:42.292653 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:42.292583 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44jpf\" (UniqueName: \"kubernetes.io/projected/a3ea2e9f-74fc-4666-b245-f9c853ddf1b1-kube-api-access-44jpf\") pod \"a3ea2e9f-74fc-4666-b245-f9c853ddf1b1\" (UID: \"a3ea2e9f-74fc-4666-b245-f9c853ddf1b1\") "
Apr 24 19:53:42.292787 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:42.292654 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a3ea2e9f-74fc-4666-b245-f9c853ddf1b1-must-gather-output\") pod \"a3ea2e9f-74fc-4666-b245-f9c853ddf1b1\" (UID: \"a3ea2e9f-74fc-4666-b245-f9c853ddf1b1\") "
Apr 24 19:53:42.294353 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:42.294327 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3ea2e9f-74fc-4666-b245-f9c853ddf1b1-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a3ea2e9f-74fc-4666-b245-f9c853ddf1b1" (UID: "a3ea2e9f-74fc-4666-b245-f9c853ddf1b1"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Apr 24 19:53:42.294782 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:42.294762 2571 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3ea2e9f-74fc-4666-b245-f9c853ddf1b1-kube-api-access-44jpf" (OuterVolumeSpecName: "kube-api-access-44jpf") pod "a3ea2e9f-74fc-4666-b245-f9c853ddf1b1" (UID: "a3ea2e9f-74fc-4666-b245-f9c853ddf1b1"). InnerVolumeSpecName "kube-api-access-44jpf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 24 19:53:42.394108 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:42.394065 2571 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-44jpf\" (UniqueName: \"kubernetes.io/projected/a3ea2e9f-74fc-4666-b245-f9c853ddf1b1-kube-api-access-44jpf\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\""
Apr 24 19:53:42.394108 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:42.394093 2571 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a3ea2e9f-74fc-4666-b245-f9c853ddf1b1-must-gather-output\") on node \"ip-10-0-130-85.ec2.internal\" DevicePath \"\""
Apr 24 19:53:42.782384 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:42.782355 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mzt5j_must-gather-8v25w_a3ea2e9f-74fc-4666-b245-f9c853ddf1b1/copy/0.log"
Apr 24 19:53:42.782785 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:42.782685 2571 generic.go:358] "Generic (PLEG): container finished" podID="a3ea2e9f-74fc-4666-b245-f9c853ddf1b1" containerID="73e062f17e4dcf81692ddaa7cf1b112164542cac36b2e531551e92abb0d43b0f" exitCode=143
Apr 24 19:53:42.782785 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:42.782735 2571 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mzt5j/must-gather-8v25w"
Apr 24 19:53:42.782785 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:42.782774 2571 scope.go:117] "RemoveContainer" containerID="73e062f17e4dcf81692ddaa7cf1b112164542cac36b2e531551e92abb0d43b0f"
Apr 24 19:53:42.791238 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:42.791216 2571 scope.go:117] "RemoveContainer" containerID="ec8822690b34985a78de1926f42798fac12f83910cc1d09fff3121db56da0bf3"
Apr 24 19:53:42.802552 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:42.802531 2571 scope.go:117] "RemoveContainer" containerID="73e062f17e4dcf81692ddaa7cf1b112164542cac36b2e531551e92abb0d43b0f"
Apr 24 19:53:42.802780 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:53:42.802761 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73e062f17e4dcf81692ddaa7cf1b112164542cac36b2e531551e92abb0d43b0f\": container with ID starting with 73e062f17e4dcf81692ddaa7cf1b112164542cac36b2e531551e92abb0d43b0f not found: ID does not exist" containerID="73e062f17e4dcf81692ddaa7cf1b112164542cac36b2e531551e92abb0d43b0f"
Apr 24 19:53:42.802842 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:42.802785 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73e062f17e4dcf81692ddaa7cf1b112164542cac36b2e531551e92abb0d43b0f"} err="failed to get container status \"73e062f17e4dcf81692ddaa7cf1b112164542cac36b2e531551e92abb0d43b0f\": rpc error: code = NotFound desc = could not find container \"73e062f17e4dcf81692ddaa7cf1b112164542cac36b2e531551e92abb0d43b0f\": container with ID starting with 73e062f17e4dcf81692ddaa7cf1b112164542cac36b2e531551e92abb0d43b0f not found: ID does not exist"
Apr 24 19:53:42.802842 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:42.802803 2571 scope.go:117] "RemoveContainer" containerID="ec8822690b34985a78de1926f42798fac12f83910cc1d09fff3121db56da0bf3"
Apr 24 19:53:42.803045 ip-10-0-130-85 kubenswrapper[2571]: E0424 19:53:42.803029 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec8822690b34985a78de1926f42798fac12f83910cc1d09fff3121db56da0bf3\": container with ID starting with ec8822690b34985a78de1926f42798fac12f83910cc1d09fff3121db56da0bf3 not found: ID does not exist" containerID="ec8822690b34985a78de1926f42798fac12f83910cc1d09fff3121db56da0bf3"
Apr 24 19:53:42.803095 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:42.803048 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec8822690b34985a78de1926f42798fac12f83910cc1d09fff3121db56da0bf3"} err="failed to get container status \"ec8822690b34985a78de1926f42798fac12f83910cc1d09fff3121db56da0bf3\": rpc error: code = NotFound desc = could not find container \"ec8822690b34985a78de1926f42798fac12f83910cc1d09fff3121db56da0bf3\": container with ID starting with ec8822690b34985a78de1926f42798fac12f83910cc1d09fff3121db56da0bf3 not found: ID does not exist"
Apr 24 19:53:43.483677 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:43.483601 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_025a79e2-8d13-41c3-89b6-2c7dd9616a66/alertmanager/0.log"
Apr 24 19:53:43.502994 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:43.502971 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_025a79e2-8d13-41c3-89b6-2c7dd9616a66/config-reloader/0.log"
Apr 24 19:53:43.524962 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:43.524943 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_025a79e2-8d13-41c3-89b6-2c7dd9616a66/kube-rbac-proxy-web/0.log"
Apr 24 19:53:43.544779 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:43.544744 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_025a79e2-8d13-41c3-89b6-2c7dd9616a66/kube-rbac-proxy/0.log"
Apr 24 19:53:43.565096 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:43.565075 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_025a79e2-8d13-41c3-89b6-2c7dd9616a66/kube-rbac-proxy-metric/0.log"
Apr 24 19:53:43.587482 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:43.587461 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_025a79e2-8d13-41c3-89b6-2c7dd9616a66/prom-label-proxy/0.log"
Apr 24 19:53:43.612196 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:43.612176 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_025a79e2-8d13-41c3-89b6-2c7dd9616a66/init-config-reloader/0.log"
Apr 24 19:53:43.689246 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:43.689225 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-69db897b98-6xdh4_49a37844-298e-4826-a760-5ba43867d446/kube-state-metrics/0.log"
Apr 24 19:53:43.710189 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:43.710169 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-69db897b98-6xdh4_49a37844-298e-4826-a760-5ba43867d446/kube-rbac-proxy-main/0.log"
Apr 24 19:53:43.732134 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:43.732108 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-69db897b98-6xdh4_49a37844-298e-4826-a760-5ba43867d446/kube-rbac-proxy-self/0.log"
Apr 24 19:53:43.784468 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:43.784402 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_monitoring-plugin-7dccd58f55-w9klz_272dfdad-dc51-4c52-8c46-f73a8d1bba06/monitoring-plugin/0.log"
Apr 24 19:53:43.873893 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:43.873867 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-qj24z_82d6a5b3-4b28-4b2e-8cef-302264f53d35/node-exporter/0.log"
Apr 24 19:53:43.892995 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:43.892973 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-qj24z_82d6a5b3-4b28-4b2e-8cef-302264f53d35/kube-rbac-proxy/0.log"
Apr 24 19:53:43.912857 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:43.912836 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-qj24z_82d6a5b3-4b28-4b2e-8cef-302264f53d35/init-textfile/0.log"
Apr 24 19:53:44.013403 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:44.013385 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-9d44df66c-z2tt6_a7bb7d80-1c4f-459d-8ed3-3623da4d74b6/kube-rbac-proxy-main/0.log"
Apr 24 19:53:44.033917 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:44.033882 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-9d44df66c-z2tt6_a7bb7d80-1c4f-459d-8ed3-3623da4d74b6/kube-rbac-proxy-self/0.log"
Apr 24 19:53:44.057126 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:44.057104 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-9d44df66c-z2tt6_a7bb7d80-1c4f-459d-8ed3-3623da4d74b6/openshift-state-metrics/0.log"
Apr 24 19:53:44.099323 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:44.099288 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_bf1b7947-aa90-42fa-b1a3-348370f46e19/prometheus/0.log"
Apr 24 19:53:44.107526 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:44.107498 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3ea2e9f-74fc-4666-b245-f9c853ddf1b1" path="/var/lib/kubelet/pods/a3ea2e9f-74fc-4666-b245-f9c853ddf1b1/volumes"
Apr 24 19:53:44.116187 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:44.116170 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_bf1b7947-aa90-42fa-b1a3-348370f46e19/config-reloader/0.log"
Apr 24 19:53:44.138067 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:44.138048 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_bf1b7947-aa90-42fa-b1a3-348370f46e19/thanos-sidecar/0.log"
Apr 24 19:53:44.158627 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:44.158606 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_bf1b7947-aa90-42fa-b1a3-348370f46e19/kube-rbac-proxy-web/0.log"
Apr 24 19:53:44.178572 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:44.178555 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_bf1b7947-aa90-42fa-b1a3-348370f46e19/kube-rbac-proxy/0.log"
Apr 24 19:53:44.201618 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:44.201600 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_bf1b7947-aa90-42fa-b1a3-348370f46e19/kube-rbac-proxy-thanos/0.log"
Apr 24 19:53:44.222573 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:44.222557 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_bf1b7947-aa90-42fa-b1a3-348370f46e19/init-config-reloader/0.log"
Apr 24 19:53:44.252180 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:44.252147 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-5676c8c784-mn72d_b1027239-cc37-421e-ba6a-e801f278db4c/prometheus-operator/0.log"
Apr 24 19:53:44.273572 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:44.273536 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-5676c8c784-mn72d_b1027239-cc37-421e-ba6a-e801f278db4c/kube-rbac-proxy/0.log"
Apr 24 19:53:44.300527 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:44.300464 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-admission-webhook-57cf98b594-2c9bn_e369c5e1-62ad-4a07-bd91-f9b696624a1c/prometheus-operator-admission-webhook/0.log"
Apr 24 19:53:44.331963 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:44.331890 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-7fd887b88c-dsp79_17d45a70-60d5-447b-96ab-bece4ce1be59/telemeter-client/0.log"
Apr 24 19:53:44.355452 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:44.355434 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-7fd887b88c-dsp79_17d45a70-60d5-447b-96ab-bece4ce1be59/reload/0.log"
Apr 24 19:53:44.384254 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:44.384238 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-7fd887b88c-dsp79_17d45a70-60d5-447b-96ab-bece4ce1be59/kube-rbac-proxy/0.log"
Apr 24 19:53:44.419868 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:44.419849 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-6c69ddf84f-cltxb_3d80891a-935a-4389-9c77-1a81e295f4dd/thanos-query/0.log"
Apr 24 19:53:44.448125 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:44.448111 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-6c69ddf84f-cltxb_3d80891a-935a-4389-9c77-1a81e295f4dd/kube-rbac-proxy-web/0.log"
Apr 24 19:53:44.474728 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:44.474709 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-6c69ddf84f-cltxb_3d80891a-935a-4389-9c77-1a81e295f4dd/kube-rbac-proxy/0.log"
Apr 24 19:53:44.499456 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:44.499439 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-6c69ddf84f-cltxb_3d80891a-935a-4389-9c77-1a81e295f4dd/prom-label-proxy/0.log"
Apr 24 19:53:44.528611 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:44.528591 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-6c69ddf84f-cltxb_3d80891a-935a-4389-9c77-1a81e295f4dd/kube-rbac-proxy-rules/0.log"
Apr 24 19:53:44.557948 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:44.557870 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-6c69ddf84f-cltxb_3d80891a-935a-4389-9c77-1a81e295f4dd/kube-rbac-proxy-metrics/0.log"
Apr 24 19:53:46.707067 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:46.707032 2571 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-clnxb/perf-node-gather-daemonset-ffl5j"]
Apr 24 19:53:46.707444 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:46.707348 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b96571d8-c3fa-4646-96fb-1705841cc8d5" containerName="switch-graph-a0351"
Apr 24 19:53:46.707444 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:46.707358 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="b96571d8-c3fa-4646-96fb-1705841cc8d5" containerName="switch-graph-a0351"
Apr 24 19:53:46.707444 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:46.707370 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a3ea2e9f-74fc-4666-b245-f9c853ddf1b1" containerName="gather"
Apr 24 19:53:46.707444 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:46.707376 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3ea2e9f-74fc-4666-b245-f9c853ddf1b1" containerName="gather"
Apr 24 19:53:46.707444 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:46.707382 2571 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a3ea2e9f-74fc-4666-b245-f9c853ddf1b1" containerName="copy"
Apr 24 19:53:46.707444 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:46.707388 2571 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3ea2e9f-74fc-4666-b245-f9c853ddf1b1" containerName="copy"
Apr 24 19:53:46.707444 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:46.707445 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="b96571d8-c3fa-4646-96fb-1705841cc8d5" containerName="switch-graph-a0351"
Apr 24 19:53:46.707648 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:46.707452 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="a3ea2e9f-74fc-4666-b245-f9c853ddf1b1" containerName="gather"
Apr 24 19:53:46.707648 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:46.707461 2571 memory_manager.go:356] "RemoveStaleState removing state" podUID="a3ea2e9f-74fc-4666-b245-f9c853ddf1b1" containerName="copy"
Apr 24 19:53:46.712307 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:46.712286 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-clnxb/perf-node-gather-daemonset-ffl5j"
Apr 24 19:53:46.714954 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:46.714933 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-clnxb\"/\"kube-root-ca.crt\""
Apr 24 19:53:46.716051 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:46.716031 2571 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-clnxb\"/\"default-dockercfg-jx8fh\""
Apr 24 19:53:46.716148 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:46.716031 2571 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-clnxb\"/\"openshift-service-ca.crt\""
Apr 24 19:53:46.720023 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:46.720003 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-clnxb/perf-node-gather-daemonset-ffl5j"]
Apr 24 19:53:46.829170 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:46.829129 2571
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/df7a9400-1be8-43a0-af50-4ccc14c68b93-proc\") pod \"perf-node-gather-daemonset-ffl5j\" (UID: \"df7a9400-1be8-43a0-af50-4ccc14c68b93\") " pod="openshift-must-gather-clnxb/perf-node-gather-daemonset-ffl5j" Apr 24 19:53:46.829337 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:46.829188 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df7a9400-1be8-43a0-af50-4ccc14c68b93-lib-modules\") pod \"perf-node-gather-daemonset-ffl5j\" (UID: \"df7a9400-1be8-43a0-af50-4ccc14c68b93\") " pod="openshift-must-gather-clnxb/perf-node-gather-daemonset-ffl5j" Apr 24 19:53:46.829337 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:46.829279 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6q7f\" (UniqueName: \"kubernetes.io/projected/df7a9400-1be8-43a0-af50-4ccc14c68b93-kube-api-access-w6q7f\") pod \"perf-node-gather-daemonset-ffl5j\" (UID: \"df7a9400-1be8-43a0-af50-4ccc14c68b93\") " pod="openshift-must-gather-clnxb/perf-node-gather-daemonset-ffl5j" Apr 24 19:53:46.829337 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:46.829318 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/df7a9400-1be8-43a0-af50-4ccc14c68b93-sys\") pod \"perf-node-gather-daemonset-ffl5j\" (UID: \"df7a9400-1be8-43a0-af50-4ccc14c68b93\") " pod="openshift-must-gather-clnxb/perf-node-gather-daemonset-ffl5j" Apr 24 19:53:46.829465 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:46.829346 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/df7a9400-1be8-43a0-af50-4ccc14c68b93-podres\") pod 
\"perf-node-gather-daemonset-ffl5j\" (UID: \"df7a9400-1be8-43a0-af50-4ccc14c68b93\") " pod="openshift-must-gather-clnxb/perf-node-gather-daemonset-ffl5j" Apr 24 19:53:46.930235 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:46.930201 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w6q7f\" (UniqueName: \"kubernetes.io/projected/df7a9400-1be8-43a0-af50-4ccc14c68b93-kube-api-access-w6q7f\") pod \"perf-node-gather-daemonset-ffl5j\" (UID: \"df7a9400-1be8-43a0-af50-4ccc14c68b93\") " pod="openshift-must-gather-clnxb/perf-node-gather-daemonset-ffl5j" Apr 24 19:53:46.930235 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:46.930236 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/df7a9400-1be8-43a0-af50-4ccc14c68b93-sys\") pod \"perf-node-gather-daemonset-ffl5j\" (UID: \"df7a9400-1be8-43a0-af50-4ccc14c68b93\") " pod="openshift-must-gather-clnxb/perf-node-gather-daemonset-ffl5j" Apr 24 19:53:46.930458 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:46.930259 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/df7a9400-1be8-43a0-af50-4ccc14c68b93-podres\") pod \"perf-node-gather-daemonset-ffl5j\" (UID: \"df7a9400-1be8-43a0-af50-4ccc14c68b93\") " pod="openshift-must-gather-clnxb/perf-node-gather-daemonset-ffl5j" Apr 24 19:53:46.930458 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:46.930306 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/df7a9400-1be8-43a0-af50-4ccc14c68b93-proc\") pod \"perf-node-gather-daemonset-ffl5j\" (UID: \"df7a9400-1be8-43a0-af50-4ccc14c68b93\") " pod="openshift-must-gather-clnxb/perf-node-gather-daemonset-ffl5j" Apr 24 19:53:46.930458 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:46.930334 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"sys\" (UniqueName: \"kubernetes.io/host-path/df7a9400-1be8-43a0-af50-4ccc14c68b93-sys\") pod \"perf-node-gather-daemonset-ffl5j\" (UID: \"df7a9400-1be8-43a0-af50-4ccc14c68b93\") " pod="openshift-must-gather-clnxb/perf-node-gather-daemonset-ffl5j" Apr 24 19:53:46.930458 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:46.930340 2571 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df7a9400-1be8-43a0-af50-4ccc14c68b93-lib-modules\") pod \"perf-node-gather-daemonset-ffl5j\" (UID: \"df7a9400-1be8-43a0-af50-4ccc14c68b93\") " pod="openshift-must-gather-clnxb/perf-node-gather-daemonset-ffl5j" Apr 24 19:53:46.930458 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:46.930390 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/df7a9400-1be8-43a0-af50-4ccc14c68b93-proc\") pod \"perf-node-gather-daemonset-ffl5j\" (UID: \"df7a9400-1be8-43a0-af50-4ccc14c68b93\") " pod="openshift-must-gather-clnxb/perf-node-gather-daemonset-ffl5j" Apr 24 19:53:46.930458 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:46.930439 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/df7a9400-1be8-43a0-af50-4ccc14c68b93-podres\") pod \"perf-node-gather-daemonset-ffl5j\" (UID: \"df7a9400-1be8-43a0-af50-4ccc14c68b93\") " pod="openshift-must-gather-clnxb/perf-node-gather-daemonset-ffl5j" Apr 24 19:53:46.930805 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:46.930483 2571 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df7a9400-1be8-43a0-af50-4ccc14c68b93-lib-modules\") pod \"perf-node-gather-daemonset-ffl5j\" (UID: \"df7a9400-1be8-43a0-af50-4ccc14c68b93\") " pod="openshift-must-gather-clnxb/perf-node-gather-daemonset-ffl5j" Apr 24 19:53:46.943553 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:46.943528 2571 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6q7f\" (UniqueName: \"kubernetes.io/projected/df7a9400-1be8-43a0-af50-4ccc14c68b93-kube-api-access-w6q7f\") pod \"perf-node-gather-daemonset-ffl5j\" (UID: \"df7a9400-1be8-43a0-af50-4ccc14c68b93\") " pod="openshift-must-gather-clnxb/perf-node-gather-daemonset-ffl5j" Apr 24 19:53:47.023834 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:47.023762 2571 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-clnxb/perf-node-gather-daemonset-ffl5j" Apr 24 19:53:47.142216 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:47.142192 2571 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-clnxb/perf-node-gather-daemonset-ffl5j"] Apr 24 19:53:47.144010 ip-10-0-130-85 kubenswrapper[2571]: W0424 19:53:47.143976 2571 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poddf7a9400_1be8_43a0_af50_4ccc14c68b93.slice/crio-293ab0b99b0a37a31d42499fc6ed4c45f0c048f72e77b746f93b96b79d3c0d09 WatchSource:0}: Error finding container 293ab0b99b0a37a31d42499fc6ed4c45f0c048f72e77b746f93b96b79d3c0d09: Status 404 returned error can't find the container with id 293ab0b99b0a37a31d42499fc6ed4c45f0c048f72e77b746f93b96b79d3c0d09 Apr 24 19:53:47.684007 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:47.683978 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-5g8bb_e6ca0a6d-3eab-40ba-8698-e1fbdb43065a/dns/0.log" Apr 24 19:53:47.706379 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:47.706351 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-5g8bb_e6ca0a6d-3eab-40ba-8698-e1fbdb43065a/kube-rbac-proxy/0.log" Apr 24 19:53:47.798562 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:47.798532 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-clnxb/perf-node-gather-daemonset-ffl5j" 
event={"ID":"df7a9400-1be8-43a0-af50-4ccc14c68b93","Type":"ContainerStarted","Data":"769fe1909a1f281e781eb43ee4bef4bcb2382efc291a54c133b78611c25f9724"} Apr 24 19:53:47.798882 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:47.798565 2571 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-clnxb/perf-node-gather-daemonset-ffl5j" event={"ID":"df7a9400-1be8-43a0-af50-4ccc14c68b93","Type":"ContainerStarted","Data":"293ab0b99b0a37a31d42499fc6ed4c45f0c048f72e77b746f93b96b79d3c0d09"} Apr 24 19:53:47.798882 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:47.798595 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-must-gather-clnxb/perf-node-gather-daemonset-ffl5j" Apr 24 19:53:47.813966 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:47.813894 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-clnxb/perf-node-gather-daemonset-ffl5j" podStartSLOduration=1.813878416 podStartE2EDuration="1.813878416s" podCreationTimestamp="2026-04-24 19:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 19:53:47.813248034 +0000 UTC m=+2838.302672343" watchObservedRunningTime="2026-04-24 19:53:47.813878416 +0000 UTC m=+2838.303302715" Apr 24 19:53:47.833548 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:47.833520 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-qgvbw_42b9ec03-8a07-41ff-bcea-adb703ebf407/dns-node-resolver/0.log" Apr 24 19:53:48.282658 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:48.282626 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-5sdvh_5f9742f1-146e-47e5-a302-4d5570284239/node-ca/0.log" Apr 24 19:53:49.390880 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:49.390842 2571 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ingress-canary_ingress-canary-qx2g9_349784ed-94ac-4a05-8df5-2577a4c28f7a/serve-healthcheck-canary/0.log" Apr 24 19:53:49.906368 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:49.906327 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-nvfsw_32a6dbba-0d8b-45d1-a36a-94876710e4fe/kube-rbac-proxy/0.log" Apr 24 19:53:49.926074 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:49.926050 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-nvfsw_32a6dbba-0d8b-45d1-a36a-94876710e4fe/exporter/0.log" Apr 24 19:53:49.945930 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:49.945889 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-nvfsw_32a6dbba-0d8b-45d1-a36a-94876710e4fe/extractor/0.log" Apr 24 19:53:51.962087 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:51.962042 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_llmisvc-controller-manager-68cc5db7c4-7ndf4_487924d6-78ed-407f-97bb-6ef4e63a0c74/manager/0.log" Apr 24 19:53:52.450930 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:52.450878 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_s3-init-rmzm5_6ba70631-6e05-4949-8b53-5ca189f77144/s3-init/0.log" Apr 24 19:53:53.810724 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:53.810695 2571 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-clnxb/perf-node-gather-daemonset-ffl5j" Apr 24 19:53:57.819274 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:57.819246 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8snvl_eb058d66-c070-4f69-80a8-1e14b2f69f5d/kube-multus/0.log" Apr 24 19:53:57.983013 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:57.982932 2571 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-gjqqt_c48fa480-1be5-43ad-b4b7-25d95f327d97/kube-multus-additional-cni-plugins/0.log" Apr 24 19:53:58.007993 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:58.007967 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-gjqqt_c48fa480-1be5-43ad-b4b7-25d95f327d97/egress-router-binary-copy/0.log" Apr 24 19:53:58.031956 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:58.031932 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-gjqqt_c48fa480-1be5-43ad-b4b7-25d95f327d97/cni-plugins/0.log" Apr 24 19:53:58.053238 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:58.053215 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-gjqqt_c48fa480-1be5-43ad-b4b7-25d95f327d97/bond-cni-plugin/0.log" Apr 24 19:53:58.072354 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:58.072338 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-gjqqt_c48fa480-1be5-43ad-b4b7-25d95f327d97/routeoverride-cni/0.log" Apr 24 19:53:58.094037 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:58.094021 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-gjqqt_c48fa480-1be5-43ad-b4b7-25d95f327d97/whereabouts-cni-bincopy/0.log" Apr 24 19:53:58.116546 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:58.116522 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-gjqqt_c48fa480-1be5-43ad-b4b7-25d95f327d97/whereabouts-cni/0.log" Apr 24 19:53:58.326129 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:58.326058 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-l2vd2_4a2cb1aa-59c8-4abb-9a8a-daf68903a358/network-metrics-daemon/0.log" Apr 24 19:53:58.344058 
ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:58.344032 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-l2vd2_4a2cb1aa-59c8-4abb-9a8a-daf68903a358/kube-rbac-proxy/0.log" Apr 24 19:53:59.466704 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:59.466672 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9rqzs_0f254dda-8cf9-4910-99c2-638993f242ac/ovn-controller/0.log" Apr 24 19:53:59.482753 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:59.482713 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9rqzs_0f254dda-8cf9-4910-99c2-638993f242ac/ovn-acl-logging/0.log" Apr 24 19:53:59.494592 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:59.494570 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9rqzs_0f254dda-8cf9-4910-99c2-638993f242ac/ovn-acl-logging/1.log" Apr 24 19:53:59.509421 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:59.509403 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9rqzs_0f254dda-8cf9-4910-99c2-638993f242ac/kube-rbac-proxy-node/0.log" Apr 24 19:53:59.531568 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:59.531552 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9rqzs_0f254dda-8cf9-4910-99c2-638993f242ac/kube-rbac-proxy-ovn-metrics/0.log" Apr 24 19:53:59.551890 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:59.551871 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9rqzs_0f254dda-8cf9-4910-99c2-638993f242ac/northd/0.log" Apr 24 19:53:59.571034 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:59.571012 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9rqzs_0f254dda-8cf9-4910-99c2-638993f242ac/nbdb/0.log" Apr 24 19:53:59.589159 
ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:59.589143 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9rqzs_0f254dda-8cf9-4910-99c2-638993f242ac/sbdb/0.log" Apr 24 19:53:59.683962 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:53:59.683932 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9rqzs_0f254dda-8cf9-4910-99c2-638993f242ac/ovnkube-controller/0.log" Apr 24 19:54:00.968182 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:54:00.968137 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-target-f4z44_4c1189cb-51d5-415c-8dc4-6c1970496280/network-check-target-container/0.log" Apr 24 19:54:01.842789 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:54:01.842760 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_iptables-alerter-9cd8l_a0fb5806-85c6-498a-9018-3d8218e45d60/iptables-alerter/0.log" Apr 24 19:54:02.523068 ip-10-0-130-85 kubenswrapper[2571]: I0424 19:54:02.523046 2571 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_tuned-2whc8_7141b48e-7f6d-4770-ab5e-9f920fa03a0d/tuned/0.log"