Apr 16 13:56:23.272620 ip-10-0-129-84 systemd[1]: kubelet.service: Failed to load environment files: No such file or directory
Apr 16 13:56:23.272631 ip-10-0-129-84 systemd[1]: kubelet.service: Failed to run 'start-pre' task: No such file or directory
Apr 16 13:56:23.272638 ip-10-0-129-84 systemd[1]: kubelet.service: Failed with result 'resources'.
Apr 16 13:56:23.272884 ip-10-0-129-84 systemd[1]: Failed to start Kubernetes Kubelet.
Apr 16 13:56:33.282135 ip-10-0-129-84 systemd[1]: kubelet.service: Failed to schedule restart job: Unit crio.service not found.
Apr 16 13:56:33.282152 ip-10-0-129-84 systemd[1]: kubelet.service: Failed with result 'resources'.
-- Boot 077e16aec1da4fa5b2e6621ad7ea8f92 --
Apr 16 13:58:45.847554 ip-10-0-129-84 systemd[1]: Starting Kubernetes Kubelet...
Apr 16 13:58:46.346981 ip-10-0-129-84 kubenswrapper[2569]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 13:58:46.346981 ip-10-0-129-84 kubenswrapper[2569]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Apr 16 13:58:46.346981 ip-10-0-129-84 kubenswrapper[2569]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 13:58:46.346981 ip-10-0-129-84 kubenswrapper[2569]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 16 13:58:46.346981 ip-10-0-129-84 kubenswrapper[2569]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 13:58:46.347807 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.347715 2569 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 16 13:58:46.351762 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351747 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 16 13:58:46.351762 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351762 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 16 13:58:46.351832 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351766 2569 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 16 13:58:46.351832 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351769 2569 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 16 13:58:46.351832 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351772 2569 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 16 13:58:46.351832 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351776 2569 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 16 13:58:46.351832 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351778 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 16 13:58:46.351832 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351782 2569 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 16 13:58:46.351832 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351785 2569 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 16 13:58:46.351832 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351788 2569 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 16 13:58:46.351832 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351791 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 16 13:58:46.351832 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351798 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 16 13:58:46.351832 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351801 2569 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 16 13:58:46.351832 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351804 2569 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 16 13:58:46.351832 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351807 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 16 13:58:46.351832 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351809 2569 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 16 13:58:46.351832 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351812 2569 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 16 13:58:46.351832 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351814 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 16 13:58:46.351832 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351817 2569 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 16 13:58:46.351832 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351820 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 16 13:58:46.351832 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351822 2569 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 16 13:58:46.351832 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351825 2569 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 16 13:58:46.352327 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351828 2569 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 16 13:58:46.352327 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351831 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 16 13:58:46.352327 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351834 2569 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 16 13:58:46.352327 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351836 2569 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 16 13:58:46.352327 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351840 2569 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 16 13:58:46.352327 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351843 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 16 13:58:46.352327 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351846 2569 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 16 13:58:46.352327 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351848 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 16 13:58:46.352327 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351852 2569 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 16 13:58:46.352327 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351854 2569 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 16 13:58:46.352327 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351857 2569 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 16 13:58:46.352327 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351859 2569 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 16 13:58:46.352327 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351862 2569 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 16 13:58:46.352327 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351864 2569 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 16 13:58:46.352327 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351867 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 16 13:58:46.352327 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351870 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 16 13:58:46.352327 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351873 2569 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 16 13:58:46.352327 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351876 2569 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 16 13:58:46.352327 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351879 2569 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 16 13:58:46.352831 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351881 2569 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 16 13:58:46.352831 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351884 2569 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 16 13:58:46.352831 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351886 2569 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 16 13:58:46.352831 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351893 2569 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 16 13:58:46.352831 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351896 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 16 13:58:46.352831 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351898 2569 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 16 13:58:46.352831 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351901 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 16 13:58:46.352831 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351905 2569 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 16 13:58:46.352831 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351908 2569 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 16 13:58:46.352831 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351911 2569 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 16 13:58:46.352831 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351913 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 16 13:58:46.352831 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351918 2569 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 16 13:58:46.352831 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351922 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 16 13:58:46.352831 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351926 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 16 13:58:46.352831 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351929 2569 feature_gate.go:328] unrecognized feature gate: Example2
Apr 16 13:58:46.352831 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351932 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 16 13:58:46.352831 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351935 2569 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 16 13:58:46.352831 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351938 2569 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 16 13:58:46.352831 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351940 2569 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 16 13:58:46.353311 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351943 2569 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 16 13:58:46.353311 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351946 2569 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 16 13:58:46.353311 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351948 2569 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 16 13:58:46.353311 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351951 2569 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 16 13:58:46.353311 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351954 2569 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 16 13:58:46.353311 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351956 2569 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 16 13:58:46.353311 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351959 2569 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 16 13:58:46.353311 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351961 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 16 13:58:46.353311 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351964 2569 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 16 13:58:46.353311 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351967 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 16 13:58:46.353311 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351969 2569 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 16 13:58:46.353311 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351972 2569 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 16 13:58:46.353311 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351974 2569 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 16 13:58:46.353311 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351977 2569 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 16 13:58:46.353311 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351979 2569 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 16 13:58:46.353311 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351982 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 16 13:58:46.353311 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351984 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 16 13:58:46.353311 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351987 2569 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 16 13:58:46.353311 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351990 2569 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 16 13:58:46.353311 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351992 2569 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 16 13:58:46.353787 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351995 2569 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 16 13:58:46.353787 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.351998 2569 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 16 13:58:46.353787 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352000 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 16 13:58:46.353787 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352003 2569 feature_gate.go:328] unrecognized feature gate: Example
Apr 16 13:58:46.353787 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352005 2569 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 16 13:58:46.353787 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352008 2569 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 16 13:58:46.353787 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352410 2569 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 16 13:58:46.353787 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352416 2569 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 16 13:58:46.353787 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352419 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 16 13:58:46.353787 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352422 2569 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 16 13:58:46.353787 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352425 2569 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 16 13:58:46.353787 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352428 2569 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 16 13:58:46.353787 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352431 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 16 13:58:46.353787 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352434 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 16 13:58:46.353787 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352437 2569 feature_gate.go:328] unrecognized feature gate: Example
Apr 16 13:58:46.353787 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352439 2569 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 16 13:58:46.353787 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352442 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 16 13:58:46.353787 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352445 2569 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 16 13:58:46.353787 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352448 2569 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 16 13:58:46.354253 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352452 2569 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 16 13:58:46.354253 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352455 2569 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 16 13:58:46.354253 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352459 2569 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 16 13:58:46.354253 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352463 2569 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 16 13:58:46.354253 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352466 2569 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 16 13:58:46.354253 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352469 2569 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 16 13:58:46.354253 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352472 2569 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 16 13:58:46.354253 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352474 2569 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 16 13:58:46.354253 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352477 2569 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 16 13:58:46.354253 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352480 2569 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 16 13:58:46.354253 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352483 2569 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 16 13:58:46.354253 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352486 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 16 13:58:46.354253 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352490 2569 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 16 13:58:46.354253 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352492 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 16 13:58:46.354253 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352495 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 16 13:58:46.354253 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352498 2569 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 16 13:58:46.354253 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352500 2569 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 16 13:58:46.354253 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352503 2569 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 16 13:58:46.354253 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352505 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 16 13:58:46.354958 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352508 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 16 13:58:46.354958 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352511 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 16 13:58:46.354958 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352514 2569 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 16 13:58:46.354958 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352516 2569 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 16 13:58:46.354958 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352519 2569 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 16 13:58:46.354958 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352521 2569 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 16 13:58:46.354958 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352525 2569 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 16 13:58:46.354958 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352527 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 16 13:58:46.354958 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352530 2569 feature_gate.go:328] unrecognized feature gate: Example2
Apr 16 13:58:46.354958 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352532 2569 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 16 13:58:46.354958 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352535 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 16 13:58:46.354958 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352537 2569 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 16 13:58:46.354958 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352540 2569 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 16 13:58:46.354958 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352543 2569 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 16 13:58:46.354958 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352545 2569 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 16 13:58:46.354958 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352548 2569 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 16 13:58:46.354958 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352550 2569 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 16 13:58:46.354958 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352553 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 16 13:58:46.354958 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352555 2569 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 16 13:58:46.354958 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352558 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 16 13:58:46.355572 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352560 2569 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 16 13:58:46.355572 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352563 2569 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 16 13:58:46.355572 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352565 2569 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 16 13:58:46.355572 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352568 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 16 13:58:46.355572 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352571 2569 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 16 13:58:46.355572 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352575 2569 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 16 13:58:46.355572 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352577 2569 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 16 13:58:46.355572 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352580 2569 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 16 13:58:46.355572 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352582 2569 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 16 13:58:46.355572 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352585 2569 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 16 13:58:46.355572 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352588 2569 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 16 13:58:46.355572 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352590 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 16 13:58:46.355572 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352594 2569 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 16 13:58:46.355572 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352597 2569 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 16 13:58:46.355572 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352599 2569 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 16 13:58:46.355572 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352602 2569 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 16 13:58:46.355572 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352605 2569 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 16 13:58:46.355572 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352607 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 16 13:58:46.355572 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352611 2569 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 16 13:58:46.355572 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352620 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 16 13:58:46.356076 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352625 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 16 13:58:46.356076 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352628 2569 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 16 13:58:46.356076 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352630 2569 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 16 13:58:46.356076 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352634 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 16 13:58:46.356076 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352637 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 16 13:58:46.356076 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352639 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 16 13:58:46.356076 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352642 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 16 13:58:46.356076 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352645 2569 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 16 13:58:46.356076 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352647 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 16 13:58:46.356076 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352650 2569 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 16 13:58:46.356076 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352652 2569 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 16 13:58:46.356076 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352655 2569 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 16 13:58:46.356076 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352658 2569 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 16 13:58:46.356076 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.352660 2569 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 16 13:58:46.356076 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354029 2569 flags.go:64] FLAG: --address="0.0.0.0"
Apr 16 13:58:46.356076 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354037 2569 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Apr 16 13:58:46.356076 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354044 2569 flags.go:64] FLAG: --anonymous-auth="true"
Apr 16 13:58:46.356076 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354050 2569 flags.go:64] FLAG: --application-metrics-count-limit="100"
Apr 16 13:58:46.356076 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354055 2569 flags.go:64] FLAG: --authentication-token-webhook="false"
Apr 16 13:58:46.356076 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354058 2569 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Apr 16 13:58:46.356076 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354063 2569 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Apr 16 13:58:46.356645 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354070 2569 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Apr 16 13:58:46.356645 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354073 2569 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Apr 16 13:58:46.356645 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354076 2569 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Apr 16 13:58:46.356645 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354080 2569 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Apr 16 13:58:46.356645 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354084 2569 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Apr 16 13:58:46.356645 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354087 2569 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Apr 16 13:58:46.356645 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354090 2569 flags.go:64] FLAG: --cgroup-root=""
Apr 16 13:58:46.356645 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354093 2569 flags.go:64] FLAG: --cgroups-per-qos="true"
Apr 16 13:58:46.356645 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354096 2569 flags.go:64] FLAG: --client-ca-file=""
Apr 16 13:58:46.356645 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354099 2569 flags.go:64] FLAG: --cloud-config=""
Apr 16 13:58:46.356645 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354102 2569 flags.go:64] FLAG: --cloud-provider="external"
Apr 16 13:58:46.356645 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354105 2569 flags.go:64] FLAG: --cluster-dns="[]"
Apr 16 13:58:46.356645 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354110 2569 flags.go:64] FLAG: --cluster-domain=""
Apr 16 13:58:46.356645 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354113 2569 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Apr 16 13:58:46.356645 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354117 2569 flags.go:64] FLAG: --config-dir=""
Apr 16 13:58:46.356645 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354120 2569 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Apr 16 13:58:46.356645 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354123 2569 flags.go:64] FLAG: --container-log-max-files="5"
Apr 16 13:58:46.356645 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354128 2569 flags.go:64] FLAG: --container-log-max-size="10Mi"
Apr 16 13:58:46.356645 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354131 2569 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Apr 16 13:58:46.356645 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354134 2569 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Apr 16 13:58:46.356645 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354138 2569 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Apr 16 13:58:46.356645 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354141 2569 flags.go:64] FLAG: --contention-profiling="false"
Apr 16 13:58:46.356645 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354144 2569 flags.go:64] FLAG: --cpu-cfs-quota="true"
Apr 16 13:58:46.356645 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354147 2569 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Apr 16 13:58:46.357261 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354150 2569 flags.go:64] FLAG: --cpu-manager-policy="none"
Apr 16 13:58:46.357261 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354153 2569 flags.go:64] FLAG: --cpu-manager-policy-options=""
Apr 16 13:58:46.357261 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354158 2569 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Apr 16 13:58:46.357261 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354161 2569 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Apr 16 13:58:46.357261 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354164 2569 flags.go:64] FLAG: --enable-debugging-handlers="true"
Apr 16 13:58:46.357261 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354168 2569 flags.go:64] FLAG: --enable-load-reader="false"
Apr 16 13:58:46.357261 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354171 2569 flags.go:64] FLAG: --enable-server="true"
Apr 16 13:58:46.357261 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354174 2569 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Apr 16 13:58:46.357261 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354207 2569 flags.go:64] FLAG: --event-burst="100"
Apr 16 13:58:46.357261 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354211 2569 flags.go:64] FLAG: --event-qps="50"
Apr 16 13:58:46.357261 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354214 2569 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Apr 16 13:58:46.357261 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354218 2569 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Apr 16 13:58:46.357261 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354222 2569 flags.go:64] FLAG: --eviction-hard=""
Apr 16 13:58:46.357261 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354226 2569 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Apr 16 13:58:46.357261 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354229 2569 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Apr 16 13:58:46.357261 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354233 2569 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Apr 16 13:58:46.357261 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354247 2569 flags.go:64] FLAG: --eviction-soft=""
Apr 16 13:58:46.357261 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354250 2569 flags.go:64] FLAG: --eviction-soft-grace-period=""
Apr 16 13:58:46.357261 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354253 2569 flags.go:64] FLAG: --exit-on-lock-contention="false"
Apr 16 13:58:46.357261 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354257 2569 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Apr 16 13:58:46.357261 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354260 2569 flags.go:64] FLAG: --experimental-mounter-path=""
Apr 16 13:58:46.357261 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354264 2569 flags.go:64] FLAG: --fail-cgroupv1="false"
Apr 16 13:58:46.357261 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354267 2569 flags.go:64] FLAG: --fail-swap-on="true"
Apr 16 13:58:46.357261 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354270 2569 flags.go:64] FLAG: --feature-gates=""
Apr 16 13:58:46.357261 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354274 2569 flags.go:64] FLAG: --file-check-frequency="20s"
Apr 16 13:58:46.357863 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354277 2569 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Apr 16 13:58:46.357863 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354280 2569 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Apr 16 13:58:46.357863 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354284 2569 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Apr 16 13:58:46.357863 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354287 2569 flags.go:64] FLAG: --healthz-port="10248"
Apr 16 13:58:46.357863 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354290 2569 flags.go:64] FLAG: --help="false"
Apr 16 13:58:46.357863 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354293 2569 flags.go:64] FLAG: --hostname-override="ip-10-0-129-84.ec2.internal"
Apr 16 13:58:46.357863 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354297 2569 flags.go:64] FLAG: --housekeeping-interval="10s"
Apr 16 13:58:46.357863 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354300 2569 flags.go:64] FLAG: --http-check-frequency="20s"
Apr 16 13:58:46.357863 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354303 2569 flags.go:64] FLAG: --image-credential-provider-bin-dir="/usr/libexec/kubelet-image-credential-provider-plugins"
Apr 16 13:58:46.357863 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354306 2569 flags.go:64] FLAG: --image-credential-provider-config="/etc/kubernetes/credential-providers/ecr-credential-provider.yaml"
Apr 16 13:58:46.357863 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354310 2569 flags.go:64] FLAG: --image-gc-high-threshold="85"
Apr 16 13:58:46.357863 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354313 2569 flags.go:64] FLAG: --image-gc-low-threshold="80"
Apr 16 13:58:46.357863 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354316 2569 flags.go:64] FLAG: --image-service-endpoint=""
Apr 16 13:58:46.357863 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354356 2569 flags.go:64] FLAG: --kernel-memcg-notification="false"
Apr 16 13:58:46.357863 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354395 2569 flags.go:64] FLAG: --kube-api-burst="100"
Apr 16 13:58:46.357863 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354401 2569 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Apr 16 13:58:46.357863 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354407 2569 flags.go:64] FLAG: --kube-api-qps="50"
Apr 16 13:58:46.357863 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354412 2569 flags.go:64] FLAG: --kube-reserved=""
Apr 16 13:58:46.357863 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354417 2569 flags.go:64] FLAG: --kube-reserved-cgroup=""
Apr 16 13:58:46.357863 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354421 2569 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Apr 16 13:58:46.357863 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354424 2569 flags.go:64] FLAG: --kubelet-cgroups=""
Apr 16 13:58:46.357863 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354428 2569 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Apr 16 13:58:46.357863 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354432 2569 flags.go:64] FLAG: --lock-file=""
Apr 16 13:58:46.357863 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354439 2569 flags.go:64] FLAG: --log-cadvisor-usage="false"
Apr 16 13:58:46.358491 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354443 2569 flags.go:64] FLAG: --log-flush-frequency="5s"
Apr 16 13:58:46.358491 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354446 2569 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Apr 16 13:58:46.358491 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354453 2569 flags.go:64] FLAG: --log-json-split-stream="false"
Apr 16 13:58:46.358491 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354457 2569 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Apr 16 13:58:46.358491 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354463 2569 flags.go:64] FLAG: --log-text-split-stream="false"
Apr 16 13:58:46.358491 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354468 2569 flags.go:64] FLAG: --logging-format="text"
Apr 16 13:58:46.358491 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354473 2569 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Apr 16 13:58:46.358491 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.354483 2569 flags.go:64] FLAG: --make-iptables-util-chains="true"
Apr 16 13:58:46.358491 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355630 2569 flags.go:64] FLAG: --manifest-url=""
Apr 16 13:58:46.358491 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355637 2569 flags.go:64] FLAG: --manifest-url-header=""
Apr 16 13:58:46.358491 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355796 2569 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Apr 16 13:58:46.358491 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355804 2569 flags.go:64] FLAG: --max-open-files="1000000"
Apr 16 13:58:46.358491 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355810 2569 flags.go:64] FLAG: --max-pods="110"
Apr 16 13:58:46.358491 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355814 2569 flags.go:64] FLAG: --maximum-dead-containers="-1"
Apr 16 13:58:46.358491 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355817 2569 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Apr 16 13:58:46.358491 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355821 2569 flags.go:64] FLAG: --memory-manager-policy="None"
Apr 16 13:58:46.358491 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355825 2569 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Apr 16 13:58:46.358491 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355829 2569 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Apr 16 13:58:46.358491 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355832 2569 flags.go:64] FLAG: --node-ip="0.0.0.0"
Apr 16 13:58:46.358491 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355836 2569 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhel"
Apr 16 13:58:46.358491 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355855 2569 flags.go:64] FLAG: --node-status-max-images="50"
Apr 16 13:58:46.358491 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355858 2569 flags.go:64] FLAG: --node-status-update-frequency="10s"
Apr 16 13:58:46.358491 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355861 2569 flags.go:64] FLAG: --oom-score-adj="-999"
Apr 16 13:58:46.358491 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355864 2569 flags.go:64] FLAG: --pod-cidr=""
Apr 16 13:58:46.359060 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355868 2569 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc76bab72f320de3d4105c90d73c4fb139c09e20ce0fa8dcbc0cb59920d27dec"
Apr 16 13:58:46.359060 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355875 2569 flags.go:64] FLAG: --pod-manifest-path=""
Apr 16 13:58:46.359060 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355878 2569 flags.go:64] FLAG: --pod-max-pids="-1"
Apr 16 13:58:46.359060 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355881 2569 flags.go:64] FLAG: --pods-per-core="0"
Apr 16 13:58:46.359060 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355885 2569 flags.go:64] FLAG: --port="10250"
Apr 16 13:58:46.359060 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355888 2569 flags.go:64] FLAG: --protect-kernel-defaults="false"
Apr 16 13:58:46.359060 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355891 2569 flags.go:64] FLAG: --provider-id="aws:///us-east-1a/i-02d20a692c9122ea7"
Apr 16 13:58:46.359060 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355895 2569 flags.go:64] FLAG: --qos-reserved=""
Apr 16 13:58:46.359060 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355898 2569 flags.go:64] FLAG: --read-only-port="10255"
Apr 16 13:58:46.359060 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355901 2569 flags.go:64] FLAG: --register-node="true"
Apr 16 13:58:46.359060 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355904 2569 flags.go:64] FLAG: --register-schedulable="true"
Apr 16 13:58:46.359060 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355908 2569 flags.go:64] FLAG: --register-with-taints=""
Apr 16 13:58:46.359060 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355912 2569 flags.go:64] FLAG: --registry-burst="10"
Apr 16 13:58:46.359060 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355915 2569 flags.go:64] FLAG: --registry-qps="5"
Apr 16 13:58:46.359060 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355918 2569 flags.go:64] FLAG: --reserved-cpus=""
Apr 16 13:58:46.359060 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355921 2569 flags.go:64] FLAG: --reserved-memory=""
Apr 16 13:58:46.359060 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355925 2569 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Apr 16 13:58:46.359060 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355928 2569 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Apr 16 13:58:46.359060 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355932 2569 flags.go:64] FLAG: --rotate-certificates="false"
Apr 16 13:58:46.359060 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355935 2569 flags.go:64] FLAG: --rotate-server-certificates="false"
Apr 16 13:58:46.359060 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355938 2569 flags.go:64] FLAG: --runonce="false"
Apr 16 13:58:46.359060 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355941 2569 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Apr 16 13:58:46.359060 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355944 2569 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Apr 16 13:58:46.359060 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355947 2569 flags.go:64] FLAG: --seccomp-default="false"
Apr 16 13:58:46.359060 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355950 2569 flags.go:64] FLAG: --serialize-image-pulls="true"
Apr 16 13:58:46.359672 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355954 2569 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Apr 16 13:58:46.359672 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355957 2569 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Apr 16 13:58:46.359672 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355960 2569 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Apr 16 13:58:46.359672 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355963 2569 flags.go:64] FLAG: --storage-driver-password="root"
Apr 16 13:58:46.359672 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355966 2569 flags.go:64] FLAG: --storage-driver-secure="false"
Apr 16 13:58:46.359672 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355969 2569 flags.go:64] FLAG: --storage-driver-table="stats"
Apr 16 13:58:46.359672 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355972 2569 flags.go:64] FLAG: --storage-driver-user="root"
Apr 16 13:58:46.359672 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355975 2569 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Apr 16 13:58:46.359672 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355978 2569 flags.go:64] FLAG: --sync-frequency="1m0s"
Apr 16 13:58:46.359672 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355981 2569 flags.go:64] FLAG: --system-cgroups=""
Apr 16 13:58:46.359672 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355984 2569 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Apr 16 13:58:46.359672 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355990 2569 flags.go:64] FLAG: --system-reserved-cgroup=""
Apr 16 13:58:46.359672 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355993 2569 flags.go:64] FLAG: --tls-cert-file=""
Apr 16 13:58:46.359672 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.355996 2569 flags.go:64] FLAG: --tls-cipher-suites="[]"
Apr 16 13:58:46.359672 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.356001 2569 flags.go:64] FLAG: --tls-min-version=""
Apr 16 13:58:46.359672 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.356004 2569 flags.go:64] FLAG: --tls-private-key-file=""
Apr 16 13:58:46.359672 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.356007 2569 flags.go:64] FLAG: --topology-manager-policy="none"
Apr 16 13:58:46.359672 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.356010 2569 flags.go:64] FLAG: --topology-manager-policy-options=""
Apr 16 13:58:46.359672 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.356014 2569 flags.go:64] FLAG: --topology-manager-scope="container"
Apr 16 13:58:46.359672 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.356017 2569 flags.go:64] FLAG: --v="2"
Apr 16 13:58:46.359672 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.356021 2569 flags.go:64] FLAG: --version="false"
Apr 16 13:58:46.359672 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.356025 2569 flags.go:64] FLAG: --vmodule=""
Apr 16 13:58:46.359672 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.356031 2569 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Apr 16 13:58:46.359672 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.356034 2569 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Apr 16 13:58:46.359672 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356131 2569 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 16 13:58:46.360300 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356135 2569 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 16 13:58:46.360300 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356139 2569 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 16 13:58:46.360300 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356142 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 16 13:58:46.360300 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356147 2569 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 16 13:58:46.360300 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356151 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 16 13:58:46.360300 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356154 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 16 13:58:46.360300 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356157 2569 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 16 13:58:46.360300 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356159 2569 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 16 13:58:46.360300 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356162 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 16 13:58:46.360300 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356165 2569 feature_gate.go:328] unrecognized feature gate: Example2
Apr 16 13:58:46.360300 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356167 2569 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 16 13:58:46.360300 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356170 2569 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 16 13:58:46.360300 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356172 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 16 13:58:46.360300 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356175 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 16 13:58:46.360300 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356178 2569 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 16 13:58:46.360300 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356180 2569 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 16 13:58:46.360300 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356183 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 16 13:58:46.360300 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356185 2569 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 16 13:58:46.360300 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356188 2569 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 16 13:58:46.360300 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356191 2569 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 16 13:58:46.360814 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356194 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 16 13:58:46.360814 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356196 2569 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 16 13:58:46.360814 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356199 2569 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 16 13:58:46.360814 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356202 2569 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 16 13:58:46.360814 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356204 2569 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 16 13:58:46.360814 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356207 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 16 13:58:46.360814 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356210 2569 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 16 13:58:46.360814 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356212 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 16 13:58:46.360814 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356215 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 16 13:58:46.360814 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356218 2569 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 16 13:58:46.360814 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356221 2569 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 16 13:58:46.360814 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356223 2569 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 16 13:58:46.360814 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356227 2569 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 16 13:58:46.360814 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356229 2569 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 16 13:58:46.360814 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356232 2569 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 16 13:58:46.360814 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356247 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 16 13:58:46.360814 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356250 2569 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 16 13:58:46.360814 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356253 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 16 13:58:46.360814 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356256 2569 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 16 13:58:46.360814 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356259 2569 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 16 13:58:46.361365 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356261 2569 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 16 13:58:46.361365 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356264 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 16 13:58:46.361365 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356267 2569 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 16 13:58:46.361365 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356269 2569 feature_gate.go:328] unrecognized feature gate: Example
Apr 16 13:58:46.361365 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356272 2569 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 16 13:58:46.361365 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356274 2569 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 16 13:58:46.361365 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356277 2569 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 16 13:58:46.361365 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356280 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 16 13:58:46.361365 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356283 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 16 13:58:46.361365 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356285 2569 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 16 13:58:46.361365 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356288 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 16 13:58:46.361365 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356292 2569 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 16 13:58:46.361365 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356296 2569 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 16 13:58:46.361365 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356299 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 16 13:58:46.361365 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356302 2569 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 16 13:58:46.361365 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356305 2569 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 16 13:58:46.361365 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356308 2569 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 16 13:58:46.361365 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356311 2569 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 16 13:58:46.361365 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356313 2569 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 16 13:58:46.361365 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356317 2569 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 16 13:58:46.361844 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356320 2569 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 16 13:58:46.361844 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356322 2569 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 16 13:58:46.361844 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356325 2569 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 16 13:58:46.361844 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356328 2569 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 16 13:58:46.361844 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356331 2569 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 16 13:58:46.361844 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356333 2569 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 16 13:58:46.361844 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356336 2569 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 16 13:58:46.361844 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356339 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 16 13:58:46.361844 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356341 2569 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 16 13:58:46.361844 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356344 2569 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 16 13:58:46.361844 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356347 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 16 13:58:46.361844 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356349 2569 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 16 13:58:46.361844 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356352 2569 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 16 13:58:46.361844 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356354 2569 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 16 13:58:46.361844 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356357 2569 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 16 13:58:46.361844 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356360 2569 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 16 13:58:46.361844 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356362 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 16 13:58:46.361844 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356365 2569 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 16 13:58:46.361844 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356367 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 16 13:58:46.362313 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356370 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 16 13:58:46.362313 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356372 2569 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 16 13:58:46.362313 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356375 2569 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 16 13:58:46.362313 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356377 2569 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 16 13:58:46.362313 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356380 2569 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 16 13:58:46.362313 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:46.356383 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 16 13:58:46.362313 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.357053 2569 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false
Apr 16 13:58:46.362313 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.357053 2569 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Apr 16 13:58:46.363676 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.363658 2569 server.go:530] "Kubelet version" kubeletVersion="v1.33.9"
Apr 16 13:58:46.363717 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.363677 2569 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
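Every gate the kubelet just warned about is an OpenShift-defined feature gate being handed to the upstream Kubernetes gate parser, which only registers the upstream names; the "feature gates: {map[...]}" line is the effective result, dumped in Go map syntax. A throwaway sketch for pulling that map out of an exported log line, e.g. to diff the effective gates across boots; the regex, the function name, and the sample fragment are illustrative, not any kubelet API:

    import re

    # Matches the Go-style dump, e.g. 'feature gates: {map[KMSv1:true NodeSwap:false]}'
    GATES_RE = re.compile(r"feature gates: \{map\[([^\]]*)\]\}")

    def parse_feature_gates(line):
        """Return {gate_name: enabled} from a kubelet 'feature gates' log line, else None."""
        m = GATES_RE.search(line)
        if m is None:
            return None
        pairs = (item.split(":", 1) for item in m.group(1).split())
        return {name: value == "true" for name, value in pairs}

    sample = 'feature_gate.go:384] feature gates: {map[ImageVolume:true KMSv1:true NodeSwap:false]}'
    print(parse_feature_gates(sample))
    # -> {'ImageVolume': True, 'KMSv1': True, 'NodeSwap': False}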
Apr 16 13:58:46.367855 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.365144 2569 server.go:962] "Client rotation is on, will bootstrap in background"
Apr 16 13:58:46.368851 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.368837 2569 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Apr 16 13:58:46.369879 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.369869 2569 server.go:1019] "Starting client certificate rotation"
Apr 16 13:58:46.369979 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.369962 2569 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet"
Apr 16 13:58:46.370771 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.370760 2569 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Apr 16 13:58:46.399150 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.399131 2569 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Apr 16 13:58:46.400942 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.400923 2569 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Apr 16 13:58:46.418597 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.418575 2569 log.go:25] "Validated CRI v1 runtime API"
Apr 16 13:58:46.424860 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.424848 2569 log.go:25] "Validated CRI v1 image API"
Apr 16 13:58:46.426214 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.426198 2569 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 16 13:58:46.432868 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.432846 2569 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Apr 16 13:58:46.432983 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.432962 2569 fs.go:135] Filesystem UUIDs: map[67a22b88-b1e2-4417-aba6-56a04d181b45:/dev/nvme0n1p3 7B77-95E7:/dev/nvme0n1p2 eca2e1a9-f347-451c-b5bf-57a155f1def0:/dev/nvme0n1p4]
Apr 16 13:58:46.433021 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.432984 2569 fs.go:136] Filesystem partitions: map[/dev/nvme0n1p3:{mountpoint:/boot major:259 minor:3 fsType:ext4 blockSize:0} /dev/nvme0n1p4:{mountpoint:/var major:259 minor:4 fsType:xfs blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}]
Apr 16 13:58:46.438756 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.438651 2569 manager.go:217] Machine: {Timestamp:2026-04-16 13:58:46.436639525 +0000 UTC m=+0.457927268 CPUVendorID:GenuineIntel NumCores:8 NumPhysicalCores:4 NumSockets:1 CpuFrequency:3106285 MemoryCapacity:33164492800 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ec232e815ecccbc618dab607e343d093 SystemUUID:ec232e81-5ecc-cbc6-18da-b607e343d093 BootID:077e16ae-c1da-4fa5-b2e6-621ad7ea8f92 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16582246400 Type:vfs Inodes:4048400 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6632898560 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/nvme0n1p4 DeviceMajor:259 DeviceMinor:4 Capacity:128243970048 Type:vfs Inodes:62651840 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6098944 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16582246400 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/nvme0n1p3 DeviceMajor:259 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[259:0:{Name:nvme0n1 Major:259 Minor:0 Size:128849018880 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:02:5d:8a:b1:ab:a9 Speed:0 Mtu:9001} {Name:ens5 MacAddress:02:5d:8a:b1:ab:a9 Speed:0 Mtu:9001} {Name:ovs-system MacAddress:ca:f6:37:7f:9a:18 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33164492800 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0 4] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:1 Threads:[1 5] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:2 Threads:[2 6] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:3 Threads:[3 7] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:}] Caches:[{Id:0 Size:37486592 Type:Unified Level:3}] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Apr 16 13:58:46.438756 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.438752 2569 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
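Every capacity in the Machine record is in raw bytes. For orientation, converting the headline figures (the numbers are copied from the record above; the helper itself is just illustration):

    # Figures from the cAdvisor Machine record, in bytes.
    memory_capacity = 33_164_492_800   # MemoryCapacity
    var_fs = 128_243_970_048           # /dev/nvme0n1p4, mounted at /var
    disk = 128_849_018_880             # nvme0n1 Size

    for label, n in [("memory", memory_capacity), ("/var filesystem", var_fs), ("disk nvme0n1", disk)]:
        print(f"{label}: {n / 2**30:.1f} GiB")
    # memory: 30.9 GiB, /var filesystem: 119.4 GiB, disk nvme0n1: 120.0 GiB

So this is a 30.9 GiB, 8-thread (4 physical cores) node with swap disabled, consistent with the NumCores:8, NumPhysicalCores:4 and SwapCapacity:0 fields.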
Apr 16 13:58:46.438859 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.438833 2569 manager.go:233] Version: {KernelVersion:5.14.0-570.104.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20260401-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Apr 16 13:58:46.439991 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.439970 2569 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 16 13:58:46.440125 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.439994 2569 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-10-0-129-84.ec2.internal","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 16 13:58:46.440167 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.440134 2569 topology_manager.go:138] "Creating topology manager with none policy"
Apr 16 13:58:46.440167 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.440143 2569 container_manager_linux.go:306] "Creating device plugin manager"
Apr 16 13:58:46.440167 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.440156 2569 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Apr 16 13:58:46.440268 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.440169 2569 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Apr 16 13:58:46.441877 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.441867 2569 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 13:58:46.441998 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.441989 2569 server.go:1267] "Using root directory" path="/var/lib/kubelet"
Apr 16 13:58:46.444946 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.444936 2569 kubelet.go:491] "Attempting to sync node with API server"
Apr 16 13:58:46.444982 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.444949 2569 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 16 13:58:46.444982 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.444960 2569 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Apr 16 13:58:46.444982 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.444969 2569 kubelet.go:397] "Adding apiserver pod source"
Apr 16 13:58:46.445080 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.444990 2569 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 16 13:58:46.446301 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.446229 2569 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Apr 16 13:58:46.446340 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.446316 2569 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Apr 16 13:58:46.451721 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.451700 2569 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.10-2.rhaos4.20.gita4d0894.el9" apiVersion="v1"
Apr 16 13:58:46.453777 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.453763 2569 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 16 13:58:46.455273 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.455260 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Apr 16 13:58:46.455331 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.455279 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Apr 16 13:58:46.455331 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.455286 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Apr 16 13:58:46.455331 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.455292 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Apr 16 13:58:46.455331 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.455298 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Apr 16 13:58:46.455331 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.455304 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Apr 16 13:58:46.455331 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.455310 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Apr 16 13:58:46.455331 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.455316 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Apr 16 13:58:46.455331 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.455323 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Apr 16 13:58:46.455331 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.455329 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Apr 16 13:58:46.455570 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.455337 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Apr 16 13:58:46.455570 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.455346 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Apr 16 13:58:46.456145 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.456136 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Apr 16 13:58:46.456179 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.456146 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image"
Apr 16 13:58:46.457341 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:46.457320 2569 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 16 13:58:46.457520 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:46.457320 2569 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-129-84.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 16 13:58:46.460056 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.460043 2569 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 16 13:58:46.460106 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.460081 2569 server.go:1295] "Started kubelet"
Apr 16 13:58:46.460185 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.460162 2569 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 16 13:58:46.460226 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.460172 2569 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 16 13:58:46.460274 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.460258 2569 server_v1.go:47] "podresources" method="list" useActivePods=true
Apr 16 13:58:46.460980 ip-10-0-129-84 systemd[1]: Started Kubernetes Kubelet.
Apr 16 13:58:46.461975 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.461960 2569 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 16 13:58:46.463082 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.463068 2569 server.go:317] "Adding debug handlers to kubelet server"
Apr 16 13:58:46.467825 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.467794 2569 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-129-84.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Apr 16 13:58:46.468832 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:46.467816 2569 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-129-84.ec2.internal.18a6db071227448a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-129-84.ec2.internal,UID:ip-10-0-129-84.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-10-0-129-84.ec2.internal,},FirstTimestamp:2026-04-16 13:58:46.460056714 +0000 UTC m=+0.481344453,LastTimestamp:2026-04-16 13:58:46.460056714 +0000 UTC m=+0.481344453,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-129-84.ec2.internal,}"
Apr 16 13:58:46.469554 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.469536 2569 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
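The nodeConfig dump above fixes the node's resource math: SystemReserved (500m CPU, 1Gi memory, 1Gi ephemeral-storage) and the memory.available hard-eviction threshold (100Mi) are both carved out of machine capacity when the kubelet computes node allocatable, and KubeReserved is null here, so it contributes nothing. A sketch of that standard subtraction for memory, using the MemoryCapacity cAdvisor reported earlier (the variable names are mine):

    GI, MI = 2**30, 2**20

    memory_capacity = 33_164_492_800   # cAdvisor Machine record, bytes
    system_reserved = 1 * GI           # nodeConfig SystemReserved "memory":"1Gi"
    kube_reserved = 0                  # nodeConfig KubeReserved: null
    hard_eviction = 100 * MI           # HardEvictionThresholds memory.available "100Mi"

    allocatable = memory_capacity - system_reserved - kube_reserved - hard_eviction
    print(f"allocatable memory = {allocatable} bytes ({allocatable / GI:.1f} GiB)")
    # allocatable memory = 31985893376 bytes (29.8 GiB)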
Apr 16 13:58:46.469708 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.469573 2569 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving"
Apr 16 13:58:46.470131 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:46.470109 2569 kubelet.go:1618] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Apr 16 13:58:46.470392 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.470293 2569 volume_manager.go:295] "The desired_state_of_world populator starts"
Apr 16 13:58:46.470392 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.470393 2569 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 16 13:58:46.470517 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.470479 2569 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 16 13:58:46.470653 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.470639 2569 reconstruct.go:97] "Volume reconstruction finished"
Apr 16 13:58:46.470716 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.470655 2569 reconciler.go:26] "Reconciler: start to sync state"
Apr 16 13:58:46.470790 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:46.470647 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-84.ec2.internal\" not found"
Apr 16 13:58:46.471231 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.471212 2569 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Apr 16 13:58:46.471231 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.471229 2569 factory.go:55] Registering systemd factory
Apr 16 13:58:46.471389 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.471252 2569 factory.go:223] Registration of the systemd container factory successfully
Apr 16 13:58:46.471568 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.471505 2569 factory.go:153] Registering CRI-O factory
Apr 16 13:58:46.471568 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.471523 2569 factory.go:223] Registration of the crio container factory successfully
Apr 16 13:58:46.471568 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.471557 2569 factory.go:103] Registering Raw factory
Apr 16 13:58:46.471568 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.471570 2569 manager.go:1196] Started watching for new ooms in manager
Apr 16 13:58:46.472079 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.472063 2569 manager.go:319] Starting recovery of all containers
Apr 16 13:58:46.474147 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:46.474121 2569 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-129-84.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Apr 16 13:58:46.474644 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:46.474616 2569 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 16 13:58:46.477975 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.477767 2569 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-jwpxj"
Apr 16 13:58:46.481814 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.481792 2569 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-jwpxj"
Apr 16 13:58:46.485368 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.485350 2569 manager.go:324] Recovery completed
Apr 16 13:58:46.489353 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.489341 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 16 13:58:46.491539 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.491523 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-84.ec2.internal" event="NodeHasSufficientMemory"
Apr 16 13:58:46.491611 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.491550 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-84.ec2.internal" event="NodeHasNoDiskPressure"
Apr 16 13:58:46.491611 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.491559 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-84.ec2.internal" event="NodeHasSufficientPID"
Apr 16 13:58:46.492013 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.492000 2569 cpu_manager.go:222] "Starting CPU manager" policy="none"
Apr 16 13:58:46.492013 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.492011 2569 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s"
Apr 16 13:58:46.492084 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.492025 2569 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 13:58:46.493191 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:46.493129 2569 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-129-84.ec2.internal.18a6db071407a1b0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-129-84.ec2.internal,UID:ip-10-0-129-84.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-129-84.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-129-84.ec2.internal,},FirstTimestamp:2026-04-16 13:58:46.49153784 +0000 UTC m=+0.512825578,LastTimestamp:2026-04-16 13:58:46.49153784 +0000 UTC m=+0.512825578,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-129-84.ec2.internal,}"
Apr 16 13:58:46.495626 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.495610 2569 policy_none.go:49] "None policy: Start"
Apr 16 13:58:46.495626 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.495628 2569 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 16 13:58:46.495724 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.495638 2569 state_mem.go:35] "Initializing new in-memory state store"
Apr 16 13:58:46.533789 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.533772 2569 manager.go:341] "Starting Device Plugin manager"
Apr 16 13:58:46.560211 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:46.533807 2569 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 16 13:58:46.560211 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.533818 2569 server.go:85] "Starting device plugin registration server"
Apr 16 13:58:46.560211 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.534052 2569 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 16 13:58:46.560211 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.534065 2569 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 16 13:58:46.560211 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.534152 2569 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Apr 16 13:58:46.560211 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.534293 2569 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Apr 16 13:58:46.560211 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.534306 2569 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 16 13:58:46.560211 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:46.535721 2569 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\""
Apr 16 13:58:46.560211 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:46.535760 2569 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-129-84.ec2.internal\" not found"
Apr 16 13:58:46.596261 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.596210 2569 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 16 13:58:46.597406 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.597361 2569 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 16 13:58:46.597406 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.597397 2569 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 16 13:58:46.597479 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.597427 2569 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 16 13:58:46.597479 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.597437 2569 kubelet.go:2451] "Starting kubelet main sync loop"
Apr 16 13:58:46.597535 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:46.597479 2569 kubelet.go:2475] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Apr 16 13:58:46.603160 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.603138 2569 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 16 13:58:46.635092 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.635069 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 16 13:58:46.637433 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.637415 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-84.ec2.internal" event="NodeHasSufficientMemory"
Apr 16 13:58:46.637498 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.637449 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-84.ec2.internal" event="NodeHasNoDiskPressure"
Apr 16 13:58:46.637498 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.637464 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-84.ec2.internal" event="NodeHasSufficientPID"
Apr 16 13:58:46.637498 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.637496 2569 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-129-84.ec2.internal"
Apr 16 13:58:46.646010 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.645988 2569 kubelet_node_status.go:81] "Successfully registered node" node="ip-10-0-129-84.ec2.internal"
Apr 16 13:58:46.646110 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:46.646013 2569 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-129-84.ec2.internal\": node \"ip-10-0-129-84.ec2.internal\" not found"
Apr 16 13:58:46.664699 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:46.664680 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-84.ec2.internal\" not found"
Apr 16 13:58:46.698108 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.698074 2569 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["kube-system/kube-apiserver-proxy-ip-10-0-129-84.ec2.internal","openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-84.ec2.internal"]
Apr 16 13:58:46.698196 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.698143 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 16 13:58:46.699513 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.699495 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-84.ec2.internal" event="NodeHasSufficientMemory"
Apr 16 13:58:46.699607 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.699524 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-84.ec2.internal" event="NodeHasNoDiskPressure"
Apr 16 13:58:46.699607 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.699534 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-84.ec2.internal" event="NodeHasSufficientPID"
Apr 16 13:58:46.701765 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.701753 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 16 13:58:46.701873 ip-10-0-129-84 kubenswrapper[2569]: I0416
13:58:46.701860 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-129-84.ec2.internal"
Apr 16 13:58:46.701910 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.701887 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 16 13:58:46.702450 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.702436 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-84.ec2.internal" event="NodeHasSufficientMemory"
Apr 16 13:58:46.702537 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.702463 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-84.ec2.internal" event="NodeHasNoDiskPressure"
Apr 16 13:58:46.702537 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.702437 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-84.ec2.internal" event="NodeHasSufficientMemory"
Apr 16 13:58:46.702537 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.702477 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-84.ec2.internal" event="NodeHasSufficientPID"
Apr 16 13:58:46.702537 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.702495 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-84.ec2.internal" event="NodeHasNoDiskPressure"
Apr 16 13:58:46.702537 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.702507 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-84.ec2.internal" event="NodeHasSufficientPID"
Apr 16 13:58:46.705224 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.705212 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-84.ec2.internal"
Apr 16 13:58:46.705307 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.705249 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Apr 16 13:58:46.705831 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.705810 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-84.ec2.internal" event="NodeHasSufficientMemory"
Apr 16 13:58:46.705906 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.705837 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-84.ec2.internal" event="NodeHasNoDiskPressure"
Apr 16 13:58:46.705906 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.705851 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-84.ec2.internal" event="NodeHasSufficientPID"
Apr 16 13:58:46.717773 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:46.717747 2569 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-129-84.ec2.internal\" not found" node="ip-10-0-129-84.ec2.internal"
Apr 16 13:58:46.722023 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:46.722006 2569 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-129-84.ec2.internal\" not found" node="ip-10-0-129-84.ec2.internal"
Apr 16 13:58:46.765113 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:46.765098 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-84.ec2.internal\" not found"
Apr 16 13:58:46.772304 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.772286 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3a246135dc05bb9822400fbcc84ce6ae-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-129-84.ec2.internal\" (UID: \"3a246135dc05bb9822400fbcc84ce6ae\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-84.ec2.internal"
Apr 16 13:58:46.772357 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.772312 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/8a9e06fbe231682ac4f4d6b934aa0a28-config\") pod \"kube-apiserver-proxy-ip-10-0-129-84.ec2.internal\" (UID: \"8a9e06fbe231682ac4f4d6b934aa0a28\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-129-84.ec2.internal"
Apr 16 13:58:46.772357 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.772329 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/3a246135dc05bb9822400fbcc84ce6ae-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-129-84.ec2.internal\" (UID: \"3a246135dc05bb9822400fbcc84ce6ae\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-84.ec2.internal"
Apr 16 13:58:46.866268 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:46.866178 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-84.ec2.internal\" not found"
Apr 16 13:58:46.872563 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.872547 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/3a246135dc05bb9822400fbcc84ce6ae-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-129-84.ec2.internal\" (UID: \"3a246135dc05bb9822400fbcc84ce6ae\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-84.ec2.internal"
Apr 16 13:58:46.872650 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.872572 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3a246135dc05bb9822400fbcc84ce6ae-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-129-84.ec2.internal\" (UID: \"3a246135dc05bb9822400fbcc84ce6ae\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-84.ec2.internal"
Apr 16 13:58:46.872650 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.872591 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/8a9e06fbe231682ac4f4d6b934aa0a28-config\") pod \"kube-apiserver-proxy-ip-10-0-129-84.ec2.internal\" (UID: \"8a9e06fbe231682ac4f4d6b934aa0a28\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-129-84.ec2.internal"
Apr 16 13:58:46.872650 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.872631 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/8a9e06fbe231682ac4f4d6b934aa0a28-config\") pod \"kube-apiserver-proxy-ip-10-0-129-84.ec2.internal\" (UID: \"8a9e06fbe231682ac4f4d6b934aa0a28\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-129-84.ec2.internal"
Apr 16 13:58:46.872755 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.872643 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/3a246135dc05bb9822400fbcc84ce6ae-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-129-84.ec2.internal\" (UID: \"3a246135dc05bb9822400fbcc84ce6ae\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-84.ec2.internal"
Apr 16 13:58:46.872755 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:46.872657 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3a246135dc05bb9822400fbcc84ce6ae-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-129-84.ec2.internal\" (UID: \"3a246135dc05bb9822400fbcc84ce6ae\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-84.ec2.internal"
Apr 16 13:58:46.966958 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:46.966927 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-84.ec2.internal\" not found"
Apr 16 13:58:47.021507 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:47.021474 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-129-84.ec2.internal"
Apr 16 13:58:47.024928 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:47.024912 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-84.ec2.internal"
Apr 16 13:58:47.067678 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:47.067651 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-84.ec2.internal\" not found"
Apr 16 13:58:47.168167 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:47.168114 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-84.ec2.internal\" not found"
Apr 16 13:58:47.268664 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:47.268635 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-84.ec2.internal\" not found"
Apr 16 13:58:47.369124 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:47.369094 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-84.ec2.internal\" not found"
Apr 16 13:58:47.370176 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:47.370163 2569 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Apr 16 13:58:47.370334 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:47.370320 2569 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received"
Apr 16 13:58:47.469547 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:47.469485 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-84.ec2.internal\" not found"
Apr 16 13:58:47.469689 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:47.469675 2569 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving"
Apr 16 13:58:47.484246 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:47.484205 2569 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2028-04-15 13:53:46 +0000 UTC" deadline="2027-12-16 21:12:06.915870802 +0000 UTC"
Apr 16 13:58:47.484246 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:47.484228 2569 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="14623h13m19.431645726s"
Apr 16 13:58:47.487541 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:47.487514 2569 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Apr 16 13:58:47.507340 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:47.507316 2569 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-wg4gf"
Apr 16 13:58:47.516095 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:47.516076 2569 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-wg4gf"
Apr 16 13:58:47.570380 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:47.570358 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-84.ec2.internal\" not found"
Apr 16 13:58:47.670593 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:47.670566 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-84.ec2.internal\" not found"
Apr 16 13:58:47.771082 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:47.771002 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-84.ec2.internal\" not found"
Apr 16 13:58:47.861984 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:47.861958 2569 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 16 13:58:47.871480 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:47.871457 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-84.ec2.internal\" not found"
Apr 16 13:58:47.972211 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:47.972184 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-129-84.ec2.internal\" not found"
Apr 16 13:58:48.027132 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.027073 2569 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 16 13:58:48.045471 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.045453 2569 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 16 13:58:48.070280 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.070256 2569 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-129-84.ec2.internal"
Apr 16 13:58:48.080081 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.080058 2569 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 16 13:58:48.080842 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.080829 2569 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-84.ec2.internal"
Apr 16 13:58:48.097990 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.097969 2569 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 16 13:58:48.240971 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:48.240940 2569 manager.go:1169] Failed to process watch event {EventType:0
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a9e06fbe231682ac4f4d6b934aa0a28.slice/crio-24da9a7265c92da15d4ca9e28f0bab6bf0965d35da1d099f86a2f268ecf1b778 WatchSource:0}: Error finding container 24da9a7265c92da15d4ca9e28f0bab6bf0965d35da1d099f86a2f268ecf1b778: Status 404 returned error can't find the container with id 24da9a7265c92da15d4ca9e28f0bab6bf0965d35da1d099f86a2f268ecf1b778
Apr 16 13:58:48.242327 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.242307 2569 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 16 13:58:48.244607 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.244590 2569 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 16 13:58:48.352552 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:48.352527 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a246135dc05bb9822400fbcc84ce6ae.slice/crio-45e066fc303d18b490e6b05bf08fb2a3aaeae22a117f94f800d123705867a00d WatchSource:0}: Error finding container 45e066fc303d18b490e6b05bf08fb2a3aaeae22a117f94f800d123705867a00d: Status 404 returned error can't find the container with id 45e066fc303d18b490e6b05bf08fb2a3aaeae22a117f94f800d123705867a00d
Apr 16 13:58:48.446646 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.446624 2569 apiserver.go:52] "Watching apiserver"
Apr 16 13:58:48.454149 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.454128 2569 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Apr 16 13:58:48.455055 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.455035 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-q4pj8","openshift-network-operator/iptables-alerter-cx8jm","openshift-ovn-kubernetes/ovnkube-node-ljx7q","openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-995x2","openshift-cluster-node-tuning-operator/tuned-56z6w","openshift-dns/node-resolver-jnqp7","openshift-multus/network-metrics-daemon-99gsl","openshift-network-diagnostics/network-check-target-bdcn7","kube-system/konnectivity-agent-qnnbt","kube-system/kube-apiserver-proxy-ip-10-0-129-84.ec2.internal","openshift-image-registry/node-ca-p4bsc","openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-84.ec2.internal","openshift-multus/multus-additional-cni-plugins-t28sg"]
Apr 16 13:58:48.459586 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.459571 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99gsl"
Apr 16 13:58:48.459648 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:48.459631 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99gsl" podUID="6cc56cdf-0ee0-49a9-b52c-65d8745cb390"
Apr 16 13:58:48.461642 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.461629 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-cx8jm"
Apr 16 13:58:48.461768 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.461753 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q"
Apr 16 13:58:48.464090 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.464022 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-995x2"
Apr 16 13:58:48.464090 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.464032 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\""
Apr 16 13:58:48.464090 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.464051 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\""
Apr 16 13:58:48.464090 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.464029 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\""
Apr 16 13:58:48.464370 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.464092 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-dockercfg-pwlhx\""
Apr 16 13:58:48.465061 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.465015 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\""
Apr 16 13:58:48.465140 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.465063 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-jp2kq\""
Apr 16 13:58:48.465140 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.465115 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\""
Apr 16 13:58:48.465261 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.465146 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Apr 16 13:58:48.465261 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.465085 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Apr 16 13:58:48.465261 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.465150 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\""
Apr 16 13:58:48.465261 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.465079 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\""
Apr 16 13:58:48.466149 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.466133 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-metrics-serving-cert\""
Apr 16 13:58:48.466262 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.466144 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"openshift-service-ca.crt\""
Apr 16 13:58:48.466262 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.466136 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"kube-root-ca.crt\""
Apr 16 13:58:48.466369 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.466267 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-56z6w"
Apr 16 13:58:48.466369 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.466187 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-sa-dockercfg-gcj25\""
Apr 16 13:58:48.468466 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.468443 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"openshift-service-ca.crt\""
Apr 16 13:58:48.468637 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.468445 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"tuned-dockercfg-76pmw\""
Apr 16 13:58:48.469465 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.469033 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"kube-root-ca.crt\""
Apr 16 13:58:48.469465 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.469067 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-jnqp7"
Apr 16 13:58:48.470906 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.470883 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\""
Apr 16 13:58:48.470998 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.470889 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\""
Apr 16 13:58:48.470998 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.470949 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-9rq89\""
Apr 16 13:58:48.472041 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.472026 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-q4pj8"
Apr 16 13:58:48.474016 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.473998 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\""
Apr 16 13:58:48.474151 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.474131 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\""
Apr 16 13:58:48.474268 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.474179 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-jb5ss\""
Apr 16 13:58:48.474268 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.474249 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\""
Apr 16 13:58:48.474386 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.474335 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\""
Apr 16 13:58:48.474428 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.474392 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bdcn7"
Apr 16 13:58:48.474488 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:48.474462 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-bdcn7" podUID="84822439-41ed-4bb8-b7d6-6784ad00eeaf"
Apr 16 13:58:48.476541 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.476527 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-qnnbt"
Apr 16 13:58:48.478867 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.478853 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-p4bsc"
Apr 16 13:58:48.479692 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.479677 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-cnibin\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8"
Apr 16 13:58:48.479761 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.479702 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-host-var-lib-cni-multus\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8"
Apr 16 13:58:48.479761 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.479718 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/1153eba3-85ab-49ec-ac00-fab2f08d676b-etc-selinux\") pod \"aws-ebs-csi-driver-node-995x2\" (UID: \"1153eba3-85ab-49ec-ac00-fab2f08d676b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-995x2"
Apr 16 13:58:48.479761 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.479742 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5xk9\" (UniqueName: \"kubernetes.io/projected/6cc56cdf-0ee0-49a9-b52c-65d8745cb390-kube-api-access-v5xk9\") pod \"network-metrics-daemon-99gsl\" (UID: \"6cc56cdf-0ee0-49a9-b52c-65d8745cb390\") " pod="openshift-multus/network-metrics-daemon-99gsl"
Apr 16 13:58:48.479874 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.479765 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/07a3b071-1443-4213-b66f-ce5f4d7ff313-hosts-file\") pod \"node-resolver-jnqp7\" (UID: \"07a3b071-1443-4213-b66f-ce5f4d7ff313\") " pod="openshift-dns/node-resolver-jnqp7"
Apr 16 13:58:48.479874 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.479779 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-host-run-netns\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8"
Apr 16 13:58:48.479874 ip-10-0-129-84 kubenswrapper[2569]: I0416
13:58:48.479793 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-os-release\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8"
Apr 16 13:58:48.479874 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.479831 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-host-var-lib-cni-bin\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8"
Apr 16 13:58:48.479874 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.479861 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-etc-kubernetes\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8"
Apr 16 13:58:48.480104 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.479878 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-node-log\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q"
Apr 16 13:58:48.480104 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.479897 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-host-cni-bin\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q"
Apr 16 13:58:48.480104 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.479937 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/170b3d99-353c-47c0-9fd5-7c56afedf117-etc-modprobe-d\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w"
Apr 16 13:58:48.480104 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.479990 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/170b3d99-353c-47c0-9fd5-7c56afedf117-etc-systemd\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w"
Apr 16 13:58:48.480104 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480028 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6cc56cdf-0ee0-49a9-b52c-65d8745cb390-metrics-certs\") pod \"network-metrics-daemon-99gsl\" (UID: \"6cc56cdf-0ee0-49a9-b52c-65d8745cb390\") " pod="openshift-multus/network-metrics-daemon-99gsl"
Apr 16 13:58:48.480104 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480057 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwtfq\" (UniqueName: \"kubernetes.io/projected/165a8934-211a-41ba-a917-3ec360f1fb99-kube-api-access-kwtfq\") pod \"iptables-alerter-cx8jm\" (UID: \"165a8934-211a-41ba-a917-3ec360f1fb99\") " pod="openshift-network-operator/iptables-alerter-cx8jm"
Apr 16 13:58:48.480104 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480085 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62n2g\" (UniqueName: \"kubernetes.io/projected/07a3b071-1443-4213-b66f-ce5f4d7ff313-kube-api-access-62n2g\") pod \"node-resolver-jnqp7\" (UID: \"07a3b071-1443-4213-b66f-ce5f4d7ff313\") " pod="openshift-dns/node-resolver-jnqp7"
Apr 16 13:58:48.480412 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480139 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-hostroot\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8"
Apr 16 13:58:48.480412 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480194 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-host-run-multus-certs\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8"
Apr 16 13:58:48.480412 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480219 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/170b3d99-353c-47c0-9fd5-7c56afedf117-lib-modules\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w"
Apr 16 13:58:48.480412 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480249 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/07a3b071-1443-4213-b66f-ce5f4d7ff313-tmp-dir\") pod \"node-resolver-jnqp7\" (UID: \"07a3b071-1443-4213-b66f-ce5f4d7ff313\") " pod="openshift-dns/node-resolver-jnqp7"
Apr 16 13:58:48.480412 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480268 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-host-kubelet\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q"
Apr 16 13:58:48.480412 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480281 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfzd2\" (UniqueName: \"kubernetes.io/projected/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-kube-api-access-wfzd2\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8"
Apr 16 13:58:48.480412 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480295 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-host-run-netns\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q"
Apr 16 13:58:48.480412 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480337 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1153eba3-85ab-49ec-ac00-fab2f08d676b-registration-dir\") pod \"aws-ebs-csi-driver-node-995x2\" (UID: \"1153eba3-85ab-49ec-ac00-fab2f08d676b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-995x2"
Apr 16 13:58:48.480412 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480349 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kube-system\"/\"konnectivity-ca-bundle\""
Apr 16 13:58:48.480412 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480359 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"konnectivity-agent\""
Apr 16 13:58:48.480412 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480370 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/1153eba3-85ab-49ec-ac00-fab2f08d676b-sys-fs\") pod \"aws-ebs-csi-driver-node-995x2\" (UID: \"1153eba3-85ab-49ec-ac00-fab2f08d676b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-995x2"
Apr 16 13:58:48.480412 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480386 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"default-dockercfg-bpxb7\""
Apr 16 13:58:48.480412 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480406 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-run-ovn\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q"
Apr 16 13:58:48.480859 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480432 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-log-socket\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q"
Apr 16 13:58:48.480859 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480456 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-host-run-ovn-kubernetes\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q"
Apr 16 13:58:48.480859 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480486 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/170b3d99-353c-47c0-9fd5-7c56afedf117-sys\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w"
Apr 16 13:58:48.480859 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480508 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/170b3d99-353c-47c0-9fd5-7c56afedf117-host\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w"
Apr 16 13:58:48.480859 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480529 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-cni-binary-copy\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8"
Apr 16 13:58:48.480859 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480560 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-multus-socket-dir-parent\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8"
Apr 16 13:58:48.480859 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480582 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-run-systemd\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q"
Apr 16 13:58:48.480859 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480606 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q"
Apr 16 13:58:48.480859 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480631 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp4l2\" (UniqueName: \"kubernetes.io/projected/7e5f1228-5703-456a-a909-558205e02bfc-kube-api-access-lp4l2\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q"
Apr 16 13:58:48.480859 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480647 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/170b3d99-353c-47c0-9fd5-7c56afedf117-etc-sysctl-d\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w"
Apr 16 13:58:48.480859 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480663 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sqkw\" (UniqueName: \"kubernetes.io/projected/170b3d99-353c-47c0-9fd5-7c56afedf117-kube-api-access-2sqkw\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w"
Apr 16 13:58:48.480859 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480679 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/165a8934-211a-41ba-a917-3ec360f1fb99-host-slash\") pod \"iptables-alerter-cx8jm\" (UID: \"165a8934-211a-41ba-a917-3ec360f1fb99\") " pod="openshift-network-operator/iptables-alerter-cx8jm"
Apr 16 13:58:48.480859 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480710 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName:
\"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-multus-conf-dir\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.480859 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480737 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7e5f1228-5703-456a-a909-558205e02bfc-env-overrides\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.480859 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480755 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1153eba3-85ab-49ec-ac00-fab2f08d676b-socket-dir\") pod \"aws-ebs-csi-driver-node-995x2\" (UID: \"1153eba3-85ab-49ec-ac00-fab2f08d676b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-995x2" Apr 16 13:58:48.480859 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480770 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/170b3d99-353c-47c0-9fd5-7c56afedf117-tmp\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w" Apr 16 13:58:48.481464 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480784 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-host-slash\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.481464 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480808 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-host-cni-netd\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.481464 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480833 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1153eba3-85ab-49ec-ac00-fab2f08d676b-kubelet-dir\") pod \"aws-ebs-csi-driver-node-995x2\" (UID: \"1153eba3-85ab-49ec-ac00-fab2f08d676b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-995x2" Apr 16 13:58:48.481464 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480847 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-system-cni-dir\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.481464 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480860 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-host-run-k8s-cni-cncf-io\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 
13:58:48.481464 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480882 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7e5f1228-5703-456a-a909-558205e02bfc-ovnkube-config\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.481464 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480904 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7e5f1228-5703-456a-a909-558205e02bfc-ovnkube-script-lib\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.481464 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480947 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-systemd-units\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.481464 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480972 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-multus-cni-dir\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.481464 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.480993 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-var-lib-openvswitch\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.481464 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.481014 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-run-openvswitch\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.481464 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.481047 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7e5f1228-5703-456a-a909-558205e02bfc-ovn-node-metrics-cert\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.481464 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.481071 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/1153eba3-85ab-49ec-ac00-fab2f08d676b-device-dir\") pod \"aws-ebs-csi-driver-node-995x2\" (UID: \"1153eba3-85ab-49ec-ac00-fab2f08d676b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-995x2" Apr 16 13:58:48.481464 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.481106 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/170b3d99-353c-47c0-9fd5-7c56afedf117-etc-sysconfig\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w" Apr 16 13:58:48.481464 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.481143 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/170b3d99-353c-47c0-9fd5-7c56afedf117-etc-tuned\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w" Apr 16 13:58:48.481464 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.481177 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-t28sg" Apr 16 13:58:48.481464 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.481180 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/165a8934-211a-41ba-a917-3ec360f1fb99-iptables-alerter-script\") pod \"iptables-alerter-cx8jm\" (UID: \"165a8934-211a-41ba-a917-3ec360f1fb99\") " pod="openshift-network-operator/iptables-alerter-cx8jm" Apr 16 13:58:48.482022 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.481206 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Apr 16 13:58:48.482022 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.481219 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-etc-openvswitch\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.482022 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.481224 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-6gpk8\"" Apr 16 13:58:48.482022 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.481264 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/170b3d99-353c-47c0-9fd5-7c56afedf117-etc-kubernetes\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w" Apr 16 13:58:48.482022 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.481271 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Apr 16 13:58:48.482022 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.481299 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/170b3d99-353c-47c0-9fd5-7c56afedf117-var-lib-kubelet\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w" Apr 16 13:58:48.482022 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.481312 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Apr 16 13:58:48.482022 ip-10-0-129-84 kubenswrapper[2569]: I0416 
13:58:48.481330 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-multus-daemon-config\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.482022 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.481370 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-host-var-lib-kubelet\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.482022 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.481399 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/170b3d99-353c-47c0-9fd5-7c56afedf117-run\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w" Apr 16 13:58:48.482022 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.481441 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwx5n\" (UniqueName: \"kubernetes.io/projected/1153eba3-85ab-49ec-ac00-fab2f08d676b-kube-api-access-dwx5n\") pod \"aws-ebs-csi-driver-node-995x2\" (UID: \"1153eba3-85ab-49ec-ac00-fab2f08d676b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-995x2" Apr 16 13:58:48.482022 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.481472 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/170b3d99-353c-47c0-9fd5-7c56afedf117-etc-sysctl-conf\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w" Apr 16 13:58:48.483292 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.483274 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Apr 16 13:58:48.483364 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.483326 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Apr 16 13:58:48.483364 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.483360 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-hwxl9\"" Apr 16 13:58:48.517632 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.517608 2569 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-15 13:53:47 +0000 UTC" deadline="2028-01-31 13:20:06.503422812 +0000 UTC" Apr 16 13:58:48.517632 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.517632 2569 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="15719h21m17.985793956s" Apr 16 13:58:48.571990 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.571972 2569 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 16 13:58:48.581960 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.581943 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/170b3d99-353c-47c0-9fd5-7c56afedf117-etc-modprobe-d\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w" Apr 16 13:58:48.582046 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.581968 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/170b3d99-353c-47c0-9fd5-7c56afedf117-etc-systemd\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w" Apr 16 13:58:48.582046 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.581985 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6cc56cdf-0ee0-49a9-b52c-65d8745cb390-metrics-certs\") pod \"network-metrics-daemon-99gsl\" (UID: \"6cc56cdf-0ee0-49a9-b52c-65d8745cb390\") " pod="openshift-multus/network-metrics-daemon-99gsl" Apr 16 13:58:48.582046 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582002 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kwtfq\" (UniqueName: \"kubernetes.io/projected/165a8934-211a-41ba-a917-3ec360f1fb99-kube-api-access-kwtfq\") pod \"iptables-alerter-cx8jm\" (UID: \"165a8934-211a-41ba-a917-3ec360f1fb99\") " pod="openshift-network-operator/iptables-alerter-cx8jm" Apr 16 13:58:48.582046 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582017 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-62n2g\" (UniqueName: \"kubernetes.io/projected/07a3b071-1443-4213-b66f-ce5f4d7ff313-kube-api-access-62n2g\") pod \"node-resolver-jnqp7\" (UID: \"07a3b071-1443-4213-b66f-ce5f4d7ff313\") " pod="openshift-dns/node-resolver-jnqp7" Apr 16 13:58:48.582227 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582058 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-hostroot\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.582227 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582064 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/170b3d99-353c-47c0-9fd5-7c56afedf117-etc-systemd\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w" Apr 16 13:58:48.582227 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582078 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-host-run-multus-certs\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.582227 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582093 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/170b3d99-353c-47c0-9fd5-7c56afedf117-lib-modules\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w" Apr 16 13:58:48.582227 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582122 2569 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-hostroot\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.582227 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582110 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/b86b831d-508f-4fd4-9a93-21d9e8e21be7-agent-certs\") pod \"konnectivity-agent-qnnbt\" (UID: \"b86b831d-508f-4fd4-9a93-21d9e8e21be7\") " pod="kube-system/konnectivity-agent-qnnbt" Apr 16 13:58:48.582227 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:48.582162 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 13:58:48.582227 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582142 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/170b3d99-353c-47c0-9fd5-7c56afedf117-etc-modprobe-d\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w" Apr 16 13:58:48.582227 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582175 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/b86b831d-508f-4fd4-9a93-21d9e8e21be7-konnectivity-ca\") pod \"konnectivity-agent-qnnbt\" (UID: \"b86b831d-508f-4fd4-9a93-21d9e8e21be7\") " pod="kube-system/konnectivity-agent-qnnbt" Apr 16 13:58:48.582227 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582209 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-host-run-multus-certs\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.582708 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:48.582262 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cc56cdf-0ee0-49a9-b52c-65d8745cb390-metrics-certs podName:6cc56cdf-0ee0-49a9-b52c-65d8745cb390 nodeName:}" failed. No retries permitted until 2026-04-16 13:58:49.082216865 +0000 UTC m=+3.103504611 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6cc56cdf-0ee0-49a9-b52c-65d8745cb390-metrics-certs") pod "network-metrics-daemon-99gsl" (UID: "6cc56cdf-0ee0-49a9-b52c-65d8745cb390") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 13:58:48.582708 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582263 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/170b3d99-353c-47c0-9fd5-7c56afedf117-lib-modules\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w" Apr 16 13:58:48.582708 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582308 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f2f44068-07cc-44c3-b6bc-448389afc9ce-os-release\") pod \"multus-additional-cni-plugins-t28sg\" (UID: \"f2f44068-07cc-44c3-b6bc-448389afc9ce\") " pod="openshift-multus/multus-additional-cni-plugins-t28sg" Apr 16 13:58:48.582708 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582342 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/07a3b071-1443-4213-b66f-ce5f4d7ff313-tmp-dir\") pod \"node-resolver-jnqp7\" (UID: \"07a3b071-1443-4213-b66f-ce5f4d7ff313\") " pod="openshift-dns/node-resolver-jnqp7" Apr 16 13:58:48.582708 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582372 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-host-kubelet\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.582708 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582405 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/afa773e4-9e56-4130-bc08-0913d59056bb-host\") pod \"node-ca-p4bsc\" (UID: \"afa773e4-9e56-4130-bc08-0913d59056bb\") " pod="openshift-image-registry/node-ca-p4bsc" Apr 16 13:58:48.582708 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582438 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wfzd2\" (UniqueName: \"kubernetes.io/projected/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-kube-api-access-wfzd2\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.582708 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582436 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-host-kubelet\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.582708 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582461 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-host-run-netns\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.582708 ip-10-0-129-84 kubenswrapper[2569]: I0416 
13:58:48.582486 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1153eba3-85ab-49ec-ac00-fab2f08d676b-registration-dir\") pod \"aws-ebs-csi-driver-node-995x2\" (UID: \"1153eba3-85ab-49ec-ac00-fab2f08d676b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-995x2" Apr 16 13:58:48.582708 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582510 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/1153eba3-85ab-49ec-ac00-fab2f08d676b-sys-fs\") pod \"aws-ebs-csi-driver-node-995x2\" (UID: \"1153eba3-85ab-49ec-ac00-fab2f08d676b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-995x2" Apr 16 13:58:48.582708 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582536 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/f2f44068-07cc-44c3-b6bc-448389afc9ce-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-t28sg\" (UID: \"f2f44068-07cc-44c3-b6bc-448389afc9ce\") " pod="openshift-multus/multus-additional-cni-plugins-t28sg" Apr 16 13:58:48.582708 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582569 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/07a3b071-1443-4213-b66f-ce5f4d7ff313-tmp-dir\") pod \"node-resolver-jnqp7\" (UID: \"07a3b071-1443-4213-b66f-ce5f4d7ff313\") " pod="openshift-dns/node-resolver-jnqp7" Apr 16 13:58:48.582708 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582584 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/1153eba3-85ab-49ec-ac00-fab2f08d676b-sys-fs\") pod \"aws-ebs-csi-driver-node-995x2\" (UID: \"1153eba3-85ab-49ec-ac00-fab2f08d676b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-995x2" Apr 16 13:58:48.582708 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582590 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1153eba3-85ab-49ec-ac00-fab2f08d676b-registration-dir\") pod \"aws-ebs-csi-driver-node-995x2\" (UID: \"1153eba3-85ab-49ec-ac00-fab2f08d676b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-995x2" Apr 16 13:58:48.582708 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582601 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-host-run-netns\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.583415 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582633 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-run-ovn\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.583415 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582659 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-log-socket\") pod \"ovnkube-node-ljx7q\" (UID: 
\"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.583415 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582695 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-run-ovn\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.583415 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582701 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-host-run-ovn-kubernetes\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.583415 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582724 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-log-socket\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.583415 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582726 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/170b3d99-353c-47c0-9fd5-7c56afedf117-sys\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w" Apr 16 13:58:48.583415 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582759 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/170b3d99-353c-47c0-9fd5-7c56afedf117-host\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w" Apr 16 13:58:48.583415 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582767 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/170b3d99-353c-47c0-9fd5-7c56afedf117-sys\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w" Apr 16 13:58:48.583415 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582762 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-host-run-ovn-kubernetes\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.583415 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582785 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f2f44068-07cc-44c3-b6bc-448389afc9ce-tuning-conf-dir\") pod \"multus-additional-cni-plugins-t28sg\" (UID: \"f2f44068-07cc-44c3-b6bc-448389afc9ce\") " pod="openshift-multus/multus-additional-cni-plugins-t28sg" Apr 16 13:58:48.583415 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582821 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-cni-binary-copy\") 
pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.583415 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582845 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-multus-socket-dir-parent\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.583415 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582849 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/170b3d99-353c-47c0-9fd5-7c56afedf117-host\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w" Apr 16 13:58:48.583415 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582870 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-run-systemd\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.583415 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582886 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-multus-socket-dir-parent\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.583415 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582895 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.583415 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582921 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lp4l2\" (UniqueName: \"kubernetes.io/projected/7e5f1228-5703-456a-a909-558205e02bfc-kube-api-access-lp4l2\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.583415 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582928 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-run-systemd\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.584156 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582931 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.584156 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582945 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/170b3d99-353c-47c0-9fd5-7c56afedf117-etc-sysctl-d\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w" Apr 16 13:58:48.584156 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.582985 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2sqkw\" (UniqueName: \"kubernetes.io/projected/170b3d99-353c-47c0-9fd5-7c56afedf117-kube-api-access-2sqkw\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w" Apr 16 13:58:48.584156 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583010 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4qzk\" (UniqueName: \"kubernetes.io/projected/84822439-41ed-4bb8-b7d6-6784ad00eeaf-kube-api-access-n4qzk\") pod \"network-check-target-bdcn7\" (UID: \"84822439-41ed-4bb8-b7d6-6784ad00eeaf\") " pod="openshift-network-diagnostics/network-check-target-bdcn7" Apr 16 13:58:48.584156 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583046 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/165a8934-211a-41ba-a917-3ec360f1fb99-host-slash\") pod \"iptables-alerter-cx8jm\" (UID: \"165a8934-211a-41ba-a917-3ec360f1fb99\") " pod="openshift-network-operator/iptables-alerter-cx8jm" Apr 16 13:58:48.584156 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583045 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/170b3d99-353c-47c0-9fd5-7c56afedf117-etc-sysctl-d\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w" Apr 16 13:58:48.584156 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583070 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-multus-conf-dir\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.584156 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583097 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7e5f1228-5703-456a-a909-558205e02bfc-env-overrides\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.584156 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583107 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/165a8934-211a-41ba-a917-3ec360f1fb99-host-slash\") pod \"iptables-alerter-cx8jm\" (UID: \"165a8934-211a-41ba-a917-3ec360f1fb99\") " pod="openshift-network-operator/iptables-alerter-cx8jm" Apr 16 13:58:48.584156 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583125 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-multus-conf-dir\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.584156 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583151 2569 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1153eba3-85ab-49ec-ac00-fab2f08d676b-socket-dir\") pod \"aws-ebs-csi-driver-node-995x2\" (UID: \"1153eba3-85ab-49ec-ac00-fab2f08d676b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-995x2" Apr 16 13:58:48.584156 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583187 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/170b3d99-353c-47c0-9fd5-7c56afedf117-tmp\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w" Apr 16 13:58:48.584156 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583209 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-host-slash\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.584156 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583263 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-host-slash\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.584156 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583285 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1153eba3-85ab-49ec-ac00-fab2f08d676b-socket-dir\") pod \"aws-ebs-csi-driver-node-995x2\" (UID: \"1153eba3-85ab-49ec-ac00-fab2f08d676b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-995x2" Apr 16 13:58:48.584156 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583293 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-host-cni-netd\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.584156 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583317 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1153eba3-85ab-49ec-ac00-fab2f08d676b-kubelet-dir\") pod \"aws-ebs-csi-driver-node-995x2\" (UID: \"1153eba3-85ab-49ec-ac00-fab2f08d676b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-995x2" Apr 16 13:58:48.584871 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583339 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-system-cni-dir\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.584871 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583355 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-host-cni-netd\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.584871 ip-10-0-129-84 
kubenswrapper[2569]: I0416 13:58:48.583382 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-host-run-k8s-cni-cncf-io\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.584871 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583396 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1153eba3-85ab-49ec-ac00-fab2f08d676b-kubelet-dir\") pod \"aws-ebs-csi-driver-node-995x2\" (UID: \"1153eba3-85ab-49ec-ac00-fab2f08d676b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-995x2" Apr 16 13:58:48.584871 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583408 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7e5f1228-5703-456a-a909-558205e02bfc-ovnkube-config\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.584871 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583412 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-system-cni-dir\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.584871 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583446 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-host-run-k8s-cni-cncf-io\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.584871 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583477 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7e5f1228-5703-456a-a909-558205e02bfc-ovnkube-script-lib\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.584871 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583507 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f2f44068-07cc-44c3-b6bc-448389afc9ce-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-t28sg\" (UID: \"f2f44068-07cc-44c3-b6bc-448389afc9ce\") " pod="openshift-multus/multus-additional-cni-plugins-t28sg" Apr 16 13:58:48.584871 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583514 2569 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Apr 16 13:58:48.584871 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583535 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-systemd-units\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.584871 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583560 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-multus-cni-dir\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.584871 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583584 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7e5f1228-5703-456a-a909-558205e02bfc-env-overrides\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.584871 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583624 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-var-lib-openvswitch\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.584871 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583585 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-var-lib-openvswitch\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.584871 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583640 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-systemd-units\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.584871 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583640 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-multus-cni-dir\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.584871 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583666 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-run-openvswitch\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.585507 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583688 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-run-openvswitch\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.585507 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583705 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7e5f1228-5703-456a-a909-558205e02bfc-ovn-node-metrics-cert\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.585507 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583768 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/1153eba3-85ab-49ec-ac00-fab2f08d676b-device-dir\") pod \"aws-ebs-csi-driver-node-995x2\" (UID: \"1153eba3-85ab-49ec-ac00-fab2f08d676b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-995x2" Apr 16 13:58:48.585507 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583795 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/170b3d99-353c-47c0-9fd5-7c56afedf117-etc-sysconfig\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w" Apr 16 13:58:48.585507 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583823 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/1153eba3-85ab-49ec-ac00-fab2f08d676b-device-dir\") pod \"aws-ebs-csi-driver-node-995x2\" (UID: \"1153eba3-85ab-49ec-ac00-fab2f08d676b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-995x2" Apr 16 13:58:48.585507 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583819 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/170b3d99-353c-47c0-9fd5-7c56afedf117-etc-tuned\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w" Apr 16 13:58:48.585507 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583871 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-cni-binary-copy\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.585507 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583875 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/165a8934-211a-41ba-a917-3ec360f1fb99-iptables-alerter-script\") pod \"iptables-alerter-cx8jm\" (UID: \"165a8934-211a-41ba-a917-3ec360f1fb99\") " pod="openshift-network-operator/iptables-alerter-cx8jm" Apr 16 13:58:48.585507 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583882 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7e5f1228-5703-456a-a909-558205e02bfc-ovnkube-config\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.585507 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583920 2569 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-etc-openvswitch\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.585507 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583940 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/170b3d99-353c-47c0-9fd5-7c56afedf117-etc-sysconfig\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w" Apr 16 13:58:48.585507 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583956 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-etc-openvswitch\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.585507 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583921 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7e5f1228-5703-456a-a909-558205e02bfc-ovnkube-script-lib\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.585507 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583978 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/170b3d99-353c-47c0-9fd5-7c56afedf117-etc-kubernetes\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w" Apr 16 13:58:48.585507 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.583996 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/170b3d99-353c-47c0-9fd5-7c56afedf117-var-lib-kubelet\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w" Apr 16 13:58:48.585507 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584015 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f2f44068-07cc-44c3-b6bc-448389afc9ce-cnibin\") pod \"multus-additional-cni-plugins-t28sg\" (UID: \"f2f44068-07cc-44c3-b6bc-448389afc9ce\") " pod="openshift-multus/multus-additional-cni-plugins-t28sg" Apr 16 13:58:48.585507 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584019 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/170b3d99-353c-47c0-9fd5-7c56afedf117-etc-kubernetes\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w" Apr 16 13:58:48.585996 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584043 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f2f44068-07cc-44c3-b6bc-448389afc9ce-cni-binary-copy\") pod \"multus-additional-cni-plugins-t28sg\" (UID: \"f2f44068-07cc-44c3-b6bc-448389afc9ce\") " 
pod="openshift-multus/multus-additional-cni-plugins-t28sg" Apr 16 13:58:48.585996 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584053 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/170b3d99-353c-47c0-9fd5-7c56afedf117-var-lib-kubelet\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w" Apr 16 13:58:48.585996 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584082 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pplj4\" (UniqueName: \"kubernetes.io/projected/f2f44068-07cc-44c3-b6bc-448389afc9ce-kube-api-access-pplj4\") pod \"multus-additional-cni-plugins-t28sg\" (UID: \"f2f44068-07cc-44c3-b6bc-448389afc9ce\") " pod="openshift-multus/multus-additional-cni-plugins-t28sg" Apr 16 13:58:48.585996 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584107 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-multus-daemon-config\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.585996 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584134 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-host-var-lib-kubelet\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.585996 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584165 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/170b3d99-353c-47c0-9fd5-7c56afedf117-run\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w" Apr 16 13:58:48.585996 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584261 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-host-var-lib-kubelet\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.585996 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584288 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dwx5n\" (UniqueName: \"kubernetes.io/projected/1153eba3-85ab-49ec-ac00-fab2f08d676b-kube-api-access-dwx5n\") pod \"aws-ebs-csi-driver-node-995x2\" (UID: \"1153eba3-85ab-49ec-ac00-fab2f08d676b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-995x2" Apr 16 13:58:48.585996 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584314 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/170b3d99-353c-47c0-9fd5-7c56afedf117-etc-sysctl-conf\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w" Apr 16 13:58:48.585996 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584345 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/170b3d99-353c-47c0-9fd5-7c56afedf117-run\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w" Apr 16 13:58:48.585996 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584367 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/165a8934-211a-41ba-a917-3ec360f1fb99-iptables-alerter-script\") pod \"iptables-alerter-cx8jm\" (UID: \"165a8934-211a-41ba-a917-3ec360f1fb99\") " pod="openshift-network-operator/iptables-alerter-cx8jm" Apr 16 13:58:48.585996 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584512 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/170b3d99-353c-47c0-9fd5-7c56afedf117-etc-sysctl-conf\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w" Apr 16 13:58:48.585996 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584524 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wxxc\" (UniqueName: \"kubernetes.io/projected/afa773e4-9e56-4130-bc08-0913d59056bb-kube-api-access-4wxxc\") pod \"node-ca-p4bsc\" (UID: \"afa773e4-9e56-4130-bc08-0913d59056bb\") " pod="openshift-image-registry/node-ca-p4bsc" Apr 16 13:58:48.585996 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584555 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-cnibin\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.585996 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584580 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-host-var-lib-cni-multus\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.585996 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584606 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/1153eba3-85ab-49ec-ac00-fab2f08d676b-etc-selinux\") pod \"aws-ebs-csi-driver-node-995x2\" (UID: \"1153eba3-85ab-49ec-ac00-fab2f08d676b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-995x2" Apr 16 13:58:48.585996 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584581 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-multus-daemon-config\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.586560 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584640 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v5xk9\" (UniqueName: \"kubernetes.io/projected/6cc56cdf-0ee0-49a9-b52c-65d8745cb390-kube-api-access-v5xk9\") pod \"network-metrics-daemon-99gsl\" (UID: \"6cc56cdf-0ee0-49a9-b52c-65d8745cb390\") " pod="openshift-multus/network-metrics-daemon-99gsl" Apr 16 13:58:48.586560 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584635 2569 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-cnibin\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.586560 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584663 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-host-var-lib-cni-multus\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.586560 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584666 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/afa773e4-9e56-4130-bc08-0913d59056bb-serviceca\") pod \"node-ca-p4bsc\" (UID: \"afa773e4-9e56-4130-bc08-0913d59056bb\") " pod="openshift-image-registry/node-ca-p4bsc" Apr 16 13:58:48.586560 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584700 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/1153eba3-85ab-49ec-ac00-fab2f08d676b-etc-selinux\") pod \"aws-ebs-csi-driver-node-995x2\" (UID: \"1153eba3-85ab-49ec-ac00-fab2f08d676b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-995x2" Apr 16 13:58:48.586560 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584711 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/07a3b071-1443-4213-b66f-ce5f4d7ff313-hosts-file\") pod \"node-resolver-jnqp7\" (UID: \"07a3b071-1443-4213-b66f-ce5f4d7ff313\") " pod="openshift-dns/node-resolver-jnqp7" Apr 16 13:58:48.586560 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584735 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-host-run-netns\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.586560 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584764 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f2f44068-07cc-44c3-b6bc-448389afc9ce-system-cni-dir\") pod \"multus-additional-cni-plugins-t28sg\" (UID: \"f2f44068-07cc-44c3-b6bc-448389afc9ce\") " pod="openshift-multus/multus-additional-cni-plugins-t28sg" Apr 16 13:58:48.586560 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584812 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/07a3b071-1443-4213-b66f-ce5f4d7ff313-hosts-file\") pod \"node-resolver-jnqp7\" (UID: \"07a3b071-1443-4213-b66f-ce5f4d7ff313\") " pod="openshift-dns/node-resolver-jnqp7" Apr 16 13:58:48.586560 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584814 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-host-run-netns\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.586560 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584838 2569 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-os-release\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.586560 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584870 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-host-var-lib-cni-bin\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.586560 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584909 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-etc-kubernetes\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.586560 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584948 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-etc-kubernetes\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.586560 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584935 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-node-log\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.586560 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584950 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-host-var-lib-cni-bin\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.586560 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584948 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-os-release\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.586560 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.584986 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-host-cni-bin\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.587077 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.585014 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-node-log\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.587077 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.585033 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/7e5f1228-5703-456a-a909-558205e02bfc-host-cni-bin\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.587077 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.586572 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/170b3d99-353c-47c0-9fd5-7c56afedf117-etc-tuned\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w" Apr 16 13:58:48.587077 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.586584 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/170b3d99-353c-47c0-9fd5-7c56afedf117-tmp\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w" Apr 16 13:58:48.587077 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.586607 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7e5f1228-5703-456a-a909-558205e02bfc-ovn-node-metrics-cert\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.590325 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.590302 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfzd2\" (UniqueName: \"kubernetes.io/projected/c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e-kube-api-access-wfzd2\") pod \"multus-q4pj8\" (UID: \"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e\") " pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.590795 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.590772 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwtfq\" (UniqueName: \"kubernetes.io/projected/165a8934-211a-41ba-a917-3ec360f1fb99-kube-api-access-kwtfq\") pod \"iptables-alerter-cx8jm\" (UID: \"165a8934-211a-41ba-a917-3ec360f1fb99\") " pod="openshift-network-operator/iptables-alerter-cx8jm" Apr 16 13:58:48.590932 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.590916 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-62n2g\" (UniqueName: \"kubernetes.io/projected/07a3b071-1443-4213-b66f-ce5f4d7ff313-kube-api-access-62n2g\") pod \"node-resolver-jnqp7\" (UID: \"07a3b071-1443-4213-b66f-ce5f4d7ff313\") " pod="openshift-dns/node-resolver-jnqp7" Apr 16 13:58:48.591170 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.591150 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp4l2\" (UniqueName: \"kubernetes.io/projected/7e5f1228-5703-456a-a909-558205e02bfc-kube-api-access-lp4l2\") pod \"ovnkube-node-ljx7q\" (UID: \"7e5f1228-5703-456a-a909-558205e02bfc\") " pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.591571 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.591547 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5xk9\" (UniqueName: \"kubernetes.io/projected/6cc56cdf-0ee0-49a9-b52c-65d8745cb390-kube-api-access-v5xk9\") pod \"network-metrics-daemon-99gsl\" (UID: \"6cc56cdf-0ee0-49a9-b52c-65d8745cb390\") " pod="openshift-multus/network-metrics-daemon-99gsl" Apr 16 13:58:48.591653 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.591581 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2sqkw\" (UniqueName: \"kubernetes.io/projected/170b3d99-353c-47c0-9fd5-7c56afedf117-kube-api-access-2sqkw\") pod \"tuned-56z6w\" (UID: \"170b3d99-353c-47c0-9fd5-7c56afedf117\") " pod="openshift-cluster-node-tuning-operator/tuned-56z6w" Apr 16 13:58:48.591786 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.591770 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwx5n\" (UniqueName: \"kubernetes.io/projected/1153eba3-85ab-49ec-ac00-fab2f08d676b-kube-api-access-dwx5n\") pod \"aws-ebs-csi-driver-node-995x2\" (UID: \"1153eba3-85ab-49ec-ac00-fab2f08d676b\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-995x2" Apr 16 13:58:48.601256 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.601179 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-129-84.ec2.internal" event={"ID":"8a9e06fbe231682ac4f4d6b934aa0a28","Type":"ContainerStarted","Data":"24da9a7265c92da15d4ca9e28f0bab6bf0965d35da1d099f86a2f268ecf1b778"} Apr 16 13:58:48.602067 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.602049 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-84.ec2.internal" event={"ID":"3a246135dc05bb9822400fbcc84ce6ae","Type":"ContainerStarted","Data":"45e066fc303d18b490e6b05bf08fb2a3aaeae22a117f94f800d123705867a00d"} Apr 16 13:58:48.685201 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.685179 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f2f44068-07cc-44c3-b6bc-448389afc9ce-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-t28sg\" (UID: \"f2f44068-07cc-44c3-b6bc-448389afc9ce\") " pod="openshift-multus/multus-additional-cni-plugins-t28sg" Apr 16 13:58:48.685345 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.685216 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f2f44068-07cc-44c3-b6bc-448389afc9ce-cnibin\") pod \"multus-additional-cni-plugins-t28sg\" (UID: \"f2f44068-07cc-44c3-b6bc-448389afc9ce\") " pod="openshift-multus/multus-additional-cni-plugins-t28sg" Apr 16 13:58:48.685345 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.685233 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f2f44068-07cc-44c3-b6bc-448389afc9ce-cni-binary-copy\") pod \"multus-additional-cni-plugins-t28sg\" (UID: \"f2f44068-07cc-44c3-b6bc-448389afc9ce\") " pod="openshift-multus/multus-additional-cni-plugins-t28sg" Apr 16 13:58:48.685345 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.685266 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pplj4\" (UniqueName: \"kubernetes.io/projected/f2f44068-07cc-44c3-b6bc-448389afc9ce-kube-api-access-pplj4\") pod \"multus-additional-cni-plugins-t28sg\" (UID: \"f2f44068-07cc-44c3-b6bc-448389afc9ce\") " pod="openshift-multus/multus-additional-cni-plugins-t28sg" Apr 16 13:58:48.685345 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.685300 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f2f44068-07cc-44c3-b6bc-448389afc9ce-cnibin\") pod \"multus-additional-cni-plugins-t28sg\" (UID: \"f2f44068-07cc-44c3-b6bc-448389afc9ce\") " pod="openshift-multus/multus-additional-cni-plugins-t28sg" Apr 16 
13:58:48.685345 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.685311 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4wxxc\" (UniqueName: \"kubernetes.io/projected/afa773e4-9e56-4130-bc08-0913d59056bb-kube-api-access-4wxxc\") pod \"node-ca-p4bsc\" (UID: \"afa773e4-9e56-4130-bc08-0913d59056bb\") " pod="openshift-image-registry/node-ca-p4bsc" Apr 16 13:58:48.685590 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.685348 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/afa773e4-9e56-4130-bc08-0913d59056bb-serviceca\") pod \"node-ca-p4bsc\" (UID: \"afa773e4-9e56-4130-bc08-0913d59056bb\") " pod="openshift-image-registry/node-ca-p4bsc" Apr 16 13:58:48.685590 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.685375 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f2f44068-07cc-44c3-b6bc-448389afc9ce-system-cni-dir\") pod \"multus-additional-cni-plugins-t28sg\" (UID: \"f2f44068-07cc-44c3-b6bc-448389afc9ce\") " pod="openshift-multus/multus-additional-cni-plugins-t28sg" Apr 16 13:58:48.685590 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.685425 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/b86b831d-508f-4fd4-9a93-21d9e8e21be7-agent-certs\") pod \"konnectivity-agent-qnnbt\" (UID: \"b86b831d-508f-4fd4-9a93-21d9e8e21be7\") " pod="kube-system/konnectivity-agent-qnnbt" Apr 16 13:58:48.685590 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.685450 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/b86b831d-508f-4fd4-9a93-21d9e8e21be7-konnectivity-ca\") pod \"konnectivity-agent-qnnbt\" (UID: \"b86b831d-508f-4fd4-9a93-21d9e8e21be7\") " pod="kube-system/konnectivity-agent-qnnbt" Apr 16 13:58:48.685590 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.685474 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f2f44068-07cc-44c3-b6bc-448389afc9ce-os-release\") pod \"multus-additional-cni-plugins-t28sg\" (UID: \"f2f44068-07cc-44c3-b6bc-448389afc9ce\") " pod="openshift-multus/multus-additional-cni-plugins-t28sg" Apr 16 13:58:48.685590 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.685483 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f2f44068-07cc-44c3-b6bc-448389afc9ce-system-cni-dir\") pod \"multus-additional-cni-plugins-t28sg\" (UID: \"f2f44068-07cc-44c3-b6bc-448389afc9ce\") " pod="openshift-multus/multus-additional-cni-plugins-t28sg" Apr 16 13:58:48.685590 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.685503 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/afa773e4-9e56-4130-bc08-0913d59056bb-host\") pod \"node-ca-p4bsc\" (UID: \"afa773e4-9e56-4130-bc08-0913d59056bb\") " pod="openshift-image-registry/node-ca-p4bsc" Apr 16 13:58:48.685590 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.685536 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/f2f44068-07cc-44c3-b6bc-448389afc9ce-whereabouts-flatfile-configmap\") pod 
\"multus-additional-cni-plugins-t28sg\" (UID: \"f2f44068-07cc-44c3-b6bc-448389afc9ce\") " pod="openshift-multus/multus-additional-cni-plugins-t28sg" Apr 16 13:58:48.685590 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.685549 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/afa773e4-9e56-4130-bc08-0913d59056bb-host\") pod \"node-ca-p4bsc\" (UID: \"afa773e4-9e56-4130-bc08-0913d59056bb\") " pod="openshift-image-registry/node-ca-p4bsc" Apr 16 13:58:48.685590 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.685569 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f2f44068-07cc-44c3-b6bc-448389afc9ce-tuning-conf-dir\") pod \"multus-additional-cni-plugins-t28sg\" (UID: \"f2f44068-07cc-44c3-b6bc-448389afc9ce\") " pod="openshift-multus/multus-additional-cni-plugins-t28sg" Apr 16 13:58:48.685979 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.685602 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n4qzk\" (UniqueName: \"kubernetes.io/projected/84822439-41ed-4bb8-b7d6-6784ad00eeaf-kube-api-access-n4qzk\") pod \"network-check-target-bdcn7\" (UID: \"84822439-41ed-4bb8-b7d6-6784ad00eeaf\") " pod="openshift-network-diagnostics/network-check-target-bdcn7" Apr 16 13:58:48.685979 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.685859 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f2f44068-07cc-44c3-b6bc-448389afc9ce-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-t28sg\" (UID: \"f2f44068-07cc-44c3-b6bc-448389afc9ce\") " pod="openshift-multus/multus-additional-cni-plugins-t28sg" Apr 16 13:58:48.685979 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.685568 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f2f44068-07cc-44c3-b6bc-448389afc9ce-os-release\") pod \"multus-additional-cni-plugins-t28sg\" (UID: \"f2f44068-07cc-44c3-b6bc-448389afc9ce\") " pod="openshift-multus/multus-additional-cni-plugins-t28sg" Apr 16 13:58:48.685979 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.685867 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f2f44068-07cc-44c3-b6bc-448389afc9ce-cni-binary-copy\") pod \"multus-additional-cni-plugins-t28sg\" (UID: \"f2f44068-07cc-44c3-b6bc-448389afc9ce\") " pod="openshift-multus/multus-additional-cni-plugins-t28sg" Apr 16 13:58:48.685979 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.685883 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/afa773e4-9e56-4130-bc08-0913d59056bb-serviceca\") pod \"node-ca-p4bsc\" (UID: \"afa773e4-9e56-4130-bc08-0913d59056bb\") " pod="openshift-image-registry/node-ca-p4bsc" Apr 16 13:58:48.686115 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.686024 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/b86b831d-508f-4fd4-9a93-21d9e8e21be7-konnectivity-ca\") pod \"konnectivity-agent-qnnbt\" (UID: \"b86b831d-508f-4fd4-9a93-21d9e8e21be7\") " pod="kube-system/konnectivity-agent-qnnbt" Apr 16 13:58:48.686115 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.686086 2569 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/f2f44068-07cc-44c3-b6bc-448389afc9ce-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-t28sg\" (UID: \"f2f44068-07cc-44c3-b6bc-448389afc9ce\") " pod="openshift-multus/multus-additional-cni-plugins-t28sg" Apr 16 13:58:48.686380 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.686362 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f2f44068-07cc-44c3-b6bc-448389afc9ce-tuning-conf-dir\") pod \"multus-additional-cni-plugins-t28sg\" (UID: \"f2f44068-07cc-44c3-b6bc-448389afc9ce\") " pod="openshift-multus/multus-additional-cni-plugins-t28sg" Apr 16 13:58:48.687588 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.687571 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/b86b831d-508f-4fd4-9a93-21d9e8e21be7-agent-certs\") pod \"konnectivity-agent-qnnbt\" (UID: \"b86b831d-508f-4fd4-9a93-21d9e8e21be7\") " pod="kube-system/konnectivity-agent-qnnbt" Apr 16 13:58:48.691282 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:48.691263 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 13:58:48.691358 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:48.691282 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 13:58:48.691358 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:48.691301 2569 projected.go:194] Error preparing data for projected volume kube-api-access-n4qzk for pod openshift-network-diagnostics/network-check-target-bdcn7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 13:58:48.691458 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:48.691448 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/84822439-41ed-4bb8-b7d6-6784ad00eeaf-kube-api-access-n4qzk podName:84822439-41ed-4bb8-b7d6-6784ad00eeaf nodeName:}" failed. No retries permitted until 2026-04-16 13:58:49.191434283 +0000 UTC m=+3.212722021 (durationBeforeRetry 500ms). 
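The failing volume above, kube-api-access-n4qzk, is the service-account projected volume the kubelet injects into every pod: a bundle of a bound token, the namespace's kube-root-ca.crt ConfigMap, and the namespace name (OpenShift adds openshift-service-ca.crt as a further source, which is why both ConfigMaps appear in the error). The mount cannot be assembled until those ConfigMaps are available to the kubelet, so it is retried. A sketch of the volume's shape using upstream k8s.io/api types — paths and the token lifetime are the usual defaults, stated here as assumptions rather than read from this cluster:

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// kubeAPIAccessVolume shows roughly what a kube-api-access-* projected
// volume contains. Illustrative only; not generated from this node.
func kubeAPIAccessVolume(name string) corev1.Volume {
	expiry := int64(3607) // conventional default token lifetime
	return corev1.Volume{
		Name: name, // e.g. "kube-api-access-n4qzk"
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path:              "token",
						ExpirationSeconds: &expiry,
					}},
					// This source is why the mount fails while
					// "kube-root-ca.crt" is still unregistered.
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "namespace",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
						}},
					}},
				},
			},
		},
	}
}
```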
Error: MountVolume.SetUp failed for volume "kube-api-access-n4qzk" (UniqueName: "kubernetes.io/projected/84822439-41ed-4bb8-b7d6-6784ad00eeaf-kube-api-access-n4qzk") pod "network-check-target-bdcn7" (UID: "84822439-41ed-4bb8-b7d6-6784ad00eeaf") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 13:58:48.693185 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.693158 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pplj4\" (UniqueName: \"kubernetes.io/projected/f2f44068-07cc-44c3-b6bc-448389afc9ce-kube-api-access-pplj4\") pod \"multus-additional-cni-plugins-t28sg\" (UID: \"f2f44068-07cc-44c3-b6bc-448389afc9ce\") " pod="openshift-multus/multus-additional-cni-plugins-t28sg" Apr 16 13:58:48.693330 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.693316 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wxxc\" (UniqueName: \"kubernetes.io/projected/afa773e4-9e56-4130-bc08-0913d59056bb-kube-api-access-4wxxc\") pod \"node-ca-p4bsc\" (UID: \"afa773e4-9e56-4130-bc08-0913d59056bb\") " pod="openshift-image-registry/node-ca-p4bsc" Apr 16 13:58:48.772603 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.772582 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-cx8jm" Apr 16 13:58:48.778533 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:48.778513 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod165a8934_211a_41ba_a917_3ec360f1fb99.slice/crio-e0bd2d617da04495b05125fa92c07327c336fe23eceebd9df840cad8fd96fd1c WatchSource:0}: Error finding container e0bd2d617da04495b05125fa92c07327c336fe23eceebd9df840cad8fd96fd1c: Status 404 returned error can't find the container with id e0bd2d617da04495b05125fa92c07327c336fe23eceebd9df840cad8fd96fd1c Apr 16 13:58:48.779210 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.779188 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:58:48.785499 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:48.785477 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e5f1228_5703_456a_a909_558205e02bfc.slice/crio-edd8099cb5e538c66cac97995bd057b5fe48e03ff82481dcb71b4fbd6cb55033 WatchSource:0}: Error finding container edd8099cb5e538c66cac97995bd057b5fe48e03ff82481dcb71b4fbd6cb55033: Status 404 returned error can't find the container with id edd8099cb5e538c66cac97995bd057b5fe48e03ff82481dcb71b4fbd6cb55033 Apr 16 13:58:48.786192 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.786173 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-995x2" Apr 16 13:58:48.790523 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.790505 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-56z6w" Apr 16 13:58:48.792819 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:48.792793 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1153eba3_85ab_49ec_ac00_fab2f08d676b.slice/crio-09916b544275101a36bd6ae4d9a14fada6ca4bc129132a5dfbf0cf12eaacf5f5 WatchSource:0}: Error finding container 09916b544275101a36bd6ae4d9a14fada6ca4bc129132a5dfbf0cf12eaacf5f5: Status 404 returned error can't find the container with id 09916b544275101a36bd6ae4d9a14fada6ca4bc129132a5dfbf0cf12eaacf5f5 Apr 16 13:58:48.795973 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.795955 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-jnqp7" Apr 16 13:58:48.796338 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:48.796312 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod170b3d99_353c_47c0_9fd5_7c56afedf117.slice/crio-166cc5585e71f24668ea4d412fcbf5f74040ecd37de7df3880469eeeef827d1b WatchSource:0}: Error finding container 166cc5585e71f24668ea4d412fcbf5f74040ecd37de7df3880469eeeef827d1b: Status 404 returned error can't find the container with id 166cc5585e71f24668ea4d412fcbf5f74040ecd37de7df3880469eeeef827d1b Apr 16 13:58:48.800641 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.800621 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-q4pj8" Apr 16 13:58:48.802732 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:48.802711 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07a3b071_1443_4213_b66f_ce5f4d7ff313.slice/crio-84c0e12d33e6244275991b4a44f308c4136cc87b433be679646ee56a52593aaa WatchSource:0}: Error finding container 84c0e12d33e6244275991b4a44f308c4136cc87b433be679646ee56a52593aaa: Status 404 returned error can't find the container with id 84c0e12d33e6244275991b4a44f308c4136cc87b433be679646ee56a52593aaa Apr 16 13:58:48.806572 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.806551 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-qnnbt" Apr 16 13:58:48.808516 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:48.808471 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6ac1ffc_23ae_4117_8a3e_a4aa0d7cb42e.slice/crio-ad93f72fe02a3390c536aeda95ec26df18e0e9c25a9be088ac1eeeddd8f01044 WatchSource:0}: Error finding container ad93f72fe02a3390c536aeda95ec26df18e0e9c25a9be088ac1eeeddd8f01044: Status 404 returned error can't find the container with id ad93f72fe02a3390c536aeda95ec26df18e0e9c25a9be088ac1eeeddd8f01044 Apr 16 13:58:48.811486 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.811469 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-p4bsc" Apr 16 13:58:48.813669 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:48.813527 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb86b831d_508f_4fd4_9a93_21d9e8e21be7.slice/crio-95b004df20256974213640d0afa192a0481394ade8270f2c6bed613fb0c38606 WatchSource:0}: Error finding container 95b004df20256974213640d0afa192a0481394ade8270f2c6bed613fb0c38606: Status 404 returned error can't find the container with id 95b004df20256974213640d0afa192a0481394ade8270f2c6bed613fb0c38606 Apr 16 13:58:48.816583 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:48.815167 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-t28sg" Apr 16 13:58:48.817757 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:48.817733 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafa773e4_9e56_4130_bc08_0913d59056bb.slice/crio-cdcc69e7e2f544e59794bf885289a6fa7189334245fcb98d1036ede9bca765f4 WatchSource:0}: Error finding container cdcc69e7e2f544e59794bf885289a6fa7189334245fcb98d1036ede9bca765f4: Status 404 returned error can't find the container with id cdcc69e7e2f544e59794bf885289a6fa7189334245fcb98d1036ede9bca765f4 Apr 16 13:58:48.822462 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:58:48.822445 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2f44068_07cc_44c3_b6bc_448389afc9ce.slice/crio-7021f858636b2e5b4803acf96ee91d17ff835f4ac618cd75a6a54fe4bcf68b03 WatchSource:0}: Error finding container 7021f858636b2e5b4803acf96ee91d17ff835f4ac618cd75a6a54fe4bcf68b03: Status 404 returned error can't find the container with id 7021f858636b2e5b4803acf96ee91d17ff835f4ac618cd75a6a54fe4bcf68b03 Apr 16 13:58:49.087508 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:49.087420 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6cc56cdf-0ee0-49a9-b52c-65d8745cb390-metrics-certs\") pod \"network-metrics-daemon-99gsl\" (UID: \"6cc56cdf-0ee0-49a9-b52c-65d8745cb390\") " pod="openshift-multus/network-metrics-daemon-99gsl" Apr 16 13:58:49.087646 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:49.087567 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 13:58:49.087684 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:49.087658 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cc56cdf-0ee0-49a9-b52c-65d8745cb390-metrics-certs podName:6cc56cdf-0ee0-49a9-b52c-65d8745cb390 nodeName:}" failed. No retries permitted until 2026-04-16 13:58:50.087642247 +0000 UTC m=+4.108929987 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6cc56cdf-0ee0-49a9-b52c-65d8745cb390-metrics-certs") pod "network-metrics-daemon-99gsl" (UID: "6cc56cdf-0ee0-49a9-b52c-65d8745cb390") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 13:58:49.289493 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:49.289461 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n4qzk\" (UniqueName: \"kubernetes.io/projected/84822439-41ed-4bb8-b7d6-6784ad00eeaf-kube-api-access-n4qzk\") pod \"network-check-target-bdcn7\" (UID: \"84822439-41ed-4bb8-b7d6-6784ad00eeaf\") " pod="openshift-network-diagnostics/network-check-target-bdcn7" Apr 16 13:58:49.289634 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:49.289580 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 13:58:49.289634 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:49.289596 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 13:58:49.289634 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:49.289605 2569 projected.go:194] Error preparing data for projected volume kube-api-access-n4qzk for pod openshift-network-diagnostics/network-check-target-bdcn7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 13:58:49.289801 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:49.289650 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/84822439-41ed-4bb8-b7d6-6784ad00eeaf-kube-api-access-n4qzk podName:84822439-41ed-4bb8-b7d6-6784ad00eeaf nodeName:}" failed. No retries permitted until 2026-04-16 13:58:50.289636086 +0000 UTC m=+4.310923812 (durationBeforeRetry 1s). 
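Note the durationBeforeRetry values in these nestedpendingoperations entries: each failed mount is re-queued with a doubling delay — 500ms, then 1s here, with 2s, 4s and 8s appearing further down. A minimal sketch of that progression; the seed and factor match the observed values, while the cap is an assumption (upstream uses a ceiling on the order of two minutes):

```go
package main

import (
	"fmt"
	"time"
)

// Reproduces the doubling visible in durationBeforeRetry.
const (
	initialDelay = 500 * time.Millisecond
	maxDelay     = 2 * time.Minute // assumed ceiling, not read from the kubelet
)

func nextDelay(prev time.Duration) time.Duration {
	if prev == 0 {
		return initialDelay
	}
	if next := prev * 2; next < maxDelay {
		return next
	}
	return maxDelay
}

func main() {
	var d time.Duration
	for i := 0; i < 5; i++ {
		d = nextDelay(d)
		fmt.Println(d) // 500ms 1s 2s 4s 8s
	}
}
```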
Error: MountVolume.SetUp failed for volume "kube-api-access-n4qzk" (UniqueName: "kubernetes.io/projected/84822439-41ed-4bb8-b7d6-6784ad00eeaf-kube-api-access-n4qzk") pod "network-check-target-bdcn7" (UID: "84822439-41ed-4bb8-b7d6-6784ad00eeaf") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 13:58:49.518846 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:49.518757 2569 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-15 13:53:47 +0000 UTC" deadline="2028-01-04 05:01:13.719224038 +0000 UTC" Apr 16 13:58:49.518846 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:49.518798 2569 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="15063h2m24.20043055s" Apr 16 13:58:49.605575 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:49.605534 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-56z6w" event={"ID":"170b3d99-353c-47c0-9fd5-7c56afedf117","Type":"ContainerStarted","Data":"166cc5585e71f24668ea4d412fcbf5f74040ecd37de7df3880469eeeef827d1b"} Apr 16 13:58:49.607067 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:49.607035 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-t28sg" event={"ID":"f2f44068-07cc-44c3-b6bc-448389afc9ce","Type":"ContainerStarted","Data":"7021f858636b2e5b4803acf96ee91d17ff835f4ac618cd75a6a54fe4bcf68b03"} Apr 16 13:58:49.609078 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:49.609051 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-jnqp7" event={"ID":"07a3b071-1443-4213-b66f-ce5f4d7ff313","Type":"ContainerStarted","Data":"84c0e12d33e6244275991b4a44f308c4136cc87b433be679646ee56a52593aaa"} Apr 16 13:58:49.611408 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:49.611383 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-995x2" event={"ID":"1153eba3-85ab-49ec-ac00-fab2f08d676b","Type":"ContainerStarted","Data":"09916b544275101a36bd6ae4d9a14fada6ca4bc129132a5dfbf0cf12eaacf5f5"} Apr 16 13:58:49.614333 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:49.614309 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" event={"ID":"7e5f1228-5703-456a-a909-558205e02bfc","Type":"ContainerStarted","Data":"edd8099cb5e538c66cac97995bd057b5fe48e03ff82481dcb71b4fbd6cb55033"} Apr 16 13:58:49.616075 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:49.616053 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-cx8jm" event={"ID":"165a8934-211a-41ba-a917-3ec360f1fb99","Type":"ContainerStarted","Data":"e0bd2d617da04495b05125fa92c07327c336fe23eceebd9df840cad8fd96fd1c"} Apr 16 13:58:49.621084 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:49.621058 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-p4bsc" event={"ID":"afa773e4-9e56-4130-bc08-0913d59056bb","Type":"ContainerStarted","Data":"cdcc69e7e2f544e59794bf885289a6fa7189334245fcb98d1036ede9bca765f4"} Apr 16 13:58:49.624666 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:49.623188 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-qnnbt" 
event={"ID":"b86b831d-508f-4fd4-9a93-21d9e8e21be7","Type":"ContainerStarted","Data":"95b004df20256974213640d0afa192a0481394ade8270f2c6bed613fb0c38606"} Apr 16 13:58:49.624947 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:49.624926 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q4pj8" event={"ID":"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e","Type":"ContainerStarted","Data":"ad93f72fe02a3390c536aeda95ec26df18e0e9c25a9be088ac1eeeddd8f01044"} Apr 16 13:58:50.096497 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:50.095900 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6cc56cdf-0ee0-49a9-b52c-65d8745cb390-metrics-certs\") pod \"network-metrics-daemon-99gsl\" (UID: \"6cc56cdf-0ee0-49a9-b52c-65d8745cb390\") " pod="openshift-multus/network-metrics-daemon-99gsl" Apr 16 13:58:50.096497 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:50.096077 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 13:58:50.096497 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:50.096140 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cc56cdf-0ee0-49a9-b52c-65d8745cb390-metrics-certs podName:6cc56cdf-0ee0-49a9-b52c-65d8745cb390 nodeName:}" failed. No retries permitted until 2026-04-16 13:58:52.096122238 +0000 UTC m=+6.117409968 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6cc56cdf-0ee0-49a9-b52c-65d8745cb390-metrics-certs") pod "network-metrics-daemon-99gsl" (UID: "6cc56cdf-0ee0-49a9-b52c-65d8745cb390") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 13:58:50.298729 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:50.298687 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n4qzk\" (UniqueName: \"kubernetes.io/projected/84822439-41ed-4bb8-b7d6-6784ad00eeaf-kube-api-access-n4qzk\") pod \"network-check-target-bdcn7\" (UID: \"84822439-41ed-4bb8-b7d6-6784ad00eeaf\") " pod="openshift-network-diagnostics/network-check-target-bdcn7" Apr 16 13:58:50.298907 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:50.298887 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 13:58:50.298907 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:50.298905 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 13:58:50.299007 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:50.298918 2569 projected.go:194] Error preparing data for projected volume kube-api-access-n4qzk for pod openshift-network-diagnostics/network-check-target-bdcn7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 13:58:50.299007 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:50.298974 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/84822439-41ed-4bb8-b7d6-6784ad00eeaf-kube-api-access-n4qzk podName:84822439-41ed-4bb8-b7d6-6784ad00eeaf nodeName:}" failed. No retries permitted until 2026-04-16 13:58:52.298956413 +0000 UTC m=+6.320244156 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-n4qzk" (UniqueName: "kubernetes.io/projected/84822439-41ed-4bb8-b7d6-6784ad00eeaf-kube-api-access-n4qzk") pod "network-check-target-bdcn7" (UID: "84822439-41ed-4bb8-b7d6-6784ad00eeaf") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 13:58:50.598361 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:50.598106 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99gsl" Apr 16 13:58:50.598804 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:50.598379 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bdcn7" Apr 16 13:58:50.598804 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:50.598479 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99gsl" podUID="6cc56cdf-0ee0-49a9-b52c-65d8745cb390" Apr 16 13:58:50.598968 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:50.598922 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-bdcn7" podUID="84822439-41ed-4bb8-b7d6-6784ad00eeaf" Apr 16 13:58:52.115575 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:52.114990 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6cc56cdf-0ee0-49a9-b52c-65d8745cb390-metrics-certs\") pod \"network-metrics-daemon-99gsl\" (UID: \"6cc56cdf-0ee0-49a9-b52c-65d8745cb390\") " pod="openshift-multus/network-metrics-daemon-99gsl" Apr 16 13:58:52.115575 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:52.115165 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 13:58:52.115575 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:52.115228 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cc56cdf-0ee0-49a9-b52c-65d8745cb390-metrics-certs podName:6cc56cdf-0ee0-49a9-b52c-65d8745cb390 nodeName:}" failed. No retries permitted until 2026-04-16 13:58:56.115210095 +0000 UTC m=+10.136497838 (durationBeforeRetry 4s). 
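The recurring `object "ns"/"name" not registered` errors do not mean the Secret or ConfigMap is missing from the API server. The kubelet resolves these objects through pod-scoped managers that only serve an object once a pod referencing it has been registered and the manager has begun tracking it, so lookups made early in boot fail with exactly this error until the caches catch up. A toy model of that gating, assuming hypothetical names — it mimics the error shape, not the kubelet's real types:

```go
package main

import (
	"errors"
	"fmt"
)

// objectManager is a toy stand-in for the kubelet's pod-scoped
// secret/configmap managers: Get fails until RegisterPod has recorded
// interest in the object, regardless of whether it exists upstream.
type objectManager struct {
	registered map[string]bool   // "namespace/name" -> interest recorded
	store      map[string]string // synced object payloads
}

var errNotRegistered = errors.New("not registered")

func (m *objectManager) RegisterPod(ns, name string) {
	m.registered[ns+"/"+name] = true
}

func (m *objectManager) Get(ns, name string) (string, error) {
	key := ns + "/" + name
	if !m.registered[key] {
		return "", fmt.Errorf("object %q/%q %w", ns, name, errNotRegistered)
	}
	return m.store[key], nil
}

func main() {
	m := &objectManager{registered: map[string]bool{}, store: map[string]string{}}
	_, err := m.Get("openshift-multus", "metrics-daemon-secret")
	fmt.Println(err) // object "openshift-multus"/"metrics-daemon-secret" not registered
}
```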
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6cc56cdf-0ee0-49a9-b52c-65d8745cb390-metrics-certs") pod "network-metrics-daemon-99gsl" (UID: "6cc56cdf-0ee0-49a9-b52c-65d8745cb390") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 13:58:52.317161 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:52.317072 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n4qzk\" (UniqueName: \"kubernetes.io/projected/84822439-41ed-4bb8-b7d6-6784ad00eeaf-kube-api-access-n4qzk\") pod \"network-check-target-bdcn7\" (UID: \"84822439-41ed-4bb8-b7d6-6784ad00eeaf\") " pod="openshift-network-diagnostics/network-check-target-bdcn7" Apr 16 13:58:52.317333 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:52.317261 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 13:58:52.317333 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:52.317286 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 13:58:52.317333 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:52.317299 2569 projected.go:194] Error preparing data for projected volume kube-api-access-n4qzk for pod openshift-network-diagnostics/network-check-target-bdcn7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 13:58:52.317506 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:52.317362 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/84822439-41ed-4bb8-b7d6-6784ad00eeaf-kube-api-access-n4qzk podName:84822439-41ed-4bb8-b7d6-6784ad00eeaf nodeName:}" failed. No retries permitted until 2026-04-16 13:58:56.317340681 +0000 UTC m=+10.338628423 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-n4qzk" (UniqueName: "kubernetes.io/projected/84822439-41ed-4bb8-b7d6-6784ad00eeaf-kube-api-access-n4qzk") pod "network-check-target-bdcn7" (UID: "84822439-41ed-4bb8-b7d6-6784ad00eeaf") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 13:58:52.601259 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:52.598596 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99gsl" Apr 16 13:58:52.601259 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:52.598737 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99gsl" podUID="6cc56cdf-0ee0-49a9-b52c-65d8745cb390" Apr 16 13:58:52.601259 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:52.600483 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bdcn7" Apr 16 13:58:52.601259 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:52.600636 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-bdcn7" podUID="84822439-41ed-4bb8-b7d6-6784ad00eeaf" Apr 16 13:58:53.369400 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:53.368655 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kube-system/global-pull-secret-syncer-mqpls"] Apr 16 13:58:53.372835 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:53.372386 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mqpls" Apr 16 13:58:53.372835 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:53.372461 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mqpls" podUID="be2e3878-e04d-4a67-b1b8-fcab8f431c5b" Apr 16 13:58:53.428526 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:53.428476 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/be2e3878-e04d-4a67-b1b8-fcab8f431c5b-original-pull-secret\") pod \"global-pull-secret-syncer-mqpls\" (UID: \"be2e3878-e04d-4a67-b1b8-fcab8f431c5b\") " pod="kube-system/global-pull-secret-syncer-mqpls" Apr 16 13:58:53.428704 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:53.428533 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/be2e3878-e04d-4a67-b1b8-fcab8f431c5b-dbus\") pod \"global-pull-secret-syncer-mqpls\" (UID: \"be2e3878-e04d-4a67-b1b8-fcab8f431c5b\") " pod="kube-system/global-pull-secret-syncer-mqpls" Apr 16 13:58:53.428704 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:53.428584 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/be2e3878-e04d-4a67-b1b8-fcab8f431c5b-kubelet-config\") pod \"global-pull-secret-syncer-mqpls\" (UID: \"be2e3878-e04d-4a67-b1b8-fcab8f431c5b\") " pod="kube-system/global-pull-secret-syncer-mqpls" Apr 16 13:58:53.529510 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:53.528991 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/be2e3878-e04d-4a67-b1b8-fcab8f431c5b-original-pull-secret\") pod \"global-pull-secret-syncer-mqpls\" (UID: \"be2e3878-e04d-4a67-b1b8-fcab8f431c5b\") " pod="kube-system/global-pull-secret-syncer-mqpls" Apr 16 13:58:53.529510 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:53.529043 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/be2e3878-e04d-4a67-b1b8-fcab8f431c5b-dbus\") pod \"global-pull-secret-syncer-mqpls\" (UID: \"be2e3878-e04d-4a67-b1b8-fcab8f431c5b\") " pod="kube-system/global-pull-secret-syncer-mqpls" 
Apr 16 13:58:53.529510 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:53.529089 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/be2e3878-e04d-4a67-b1b8-fcab8f431c5b-kubelet-config\") pod \"global-pull-secret-syncer-mqpls\" (UID: \"be2e3878-e04d-4a67-b1b8-fcab8f431c5b\") " pod="kube-system/global-pull-secret-syncer-mqpls" Apr 16 13:58:53.529510 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:53.529189 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/be2e3878-e04d-4a67-b1b8-fcab8f431c5b-kubelet-config\") pod \"global-pull-secret-syncer-mqpls\" (UID: \"be2e3878-e04d-4a67-b1b8-fcab8f431c5b\") " pod="kube-system/global-pull-secret-syncer-mqpls" Apr 16 13:58:53.529510 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:53.529311 2569 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 16 13:58:53.529510 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:53.529366 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be2e3878-e04d-4a67-b1b8-fcab8f431c5b-original-pull-secret podName:be2e3878-e04d-4a67-b1b8-fcab8f431c5b nodeName:}" failed. No retries permitted until 2026-04-16 13:58:54.029348373 +0000 UTC m=+8.050636103 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/be2e3878-e04d-4a67-b1b8-fcab8f431c5b-original-pull-secret") pod "global-pull-secret-syncer-mqpls" (UID: "be2e3878-e04d-4a67-b1b8-fcab8f431c5b") : object "kube-system"/"original-pull-secret" not registered Apr 16 13:58:53.530018 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:53.529671 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/be2e3878-e04d-4a67-b1b8-fcab8f431c5b-dbus\") pod \"global-pull-secret-syncer-mqpls\" (UID: \"be2e3878-e04d-4a67-b1b8-fcab8f431c5b\") " pod="kube-system/global-pull-secret-syncer-mqpls" Apr 16 13:58:54.034204 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:54.034108 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/be2e3878-e04d-4a67-b1b8-fcab8f431c5b-original-pull-secret\") pod \"global-pull-secret-syncer-mqpls\" (UID: \"be2e3878-e04d-4a67-b1b8-fcab8f431c5b\") " pod="kube-system/global-pull-secret-syncer-mqpls" Apr 16 13:58:54.034408 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:54.034285 2569 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 16 13:58:54.034408 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:54.034371 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be2e3878-e04d-4a67-b1b8-fcab8f431c5b-original-pull-secret podName:be2e3878-e04d-4a67-b1b8-fcab8f431c5b nodeName:}" failed. No retries permitted until 2026-04-16 13:58:55.034349032 +0000 UTC m=+9.055636761 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/be2e3878-e04d-4a67-b1b8-fcab8f431c5b-original-pull-secret") pod "global-pull-secret-syncer-mqpls" (UID: "be2e3878-e04d-4a67-b1b8-fcab8f431c5b") : object "kube-system"/"original-pull-secret" not registered Apr 16 13:58:54.598038 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:54.598007 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99gsl" Apr 16 13:58:54.598510 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:54.598117 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99gsl" podUID="6cc56cdf-0ee0-49a9-b52c-65d8745cb390" Apr 16 13:58:54.598510 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:54.598216 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bdcn7" Apr 16 13:58:54.598510 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:54.598334 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-bdcn7" podUID="84822439-41ed-4bb8-b7d6-6784ad00eeaf" Apr 16 13:58:55.042430 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:55.042327 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/be2e3878-e04d-4a67-b1b8-fcab8f431c5b-original-pull-secret\") pod \"global-pull-secret-syncer-mqpls\" (UID: \"be2e3878-e04d-4a67-b1b8-fcab8f431c5b\") " pod="kube-system/global-pull-secret-syncer-mqpls" Apr 16 13:58:55.042577 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:55.042510 2569 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 16 13:58:55.042648 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:55.042585 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be2e3878-e04d-4a67-b1b8-fcab8f431c5b-original-pull-secret podName:be2e3878-e04d-4a67-b1b8-fcab8f431c5b nodeName:}" failed. No retries permitted until 2026-04-16 13:58:57.042565214 +0000 UTC m=+11.063852961 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/be2e3878-e04d-4a67-b1b8-fcab8f431c5b-original-pull-secret") pod "global-pull-secret-syncer-mqpls" (UID: "be2e3878-e04d-4a67-b1b8-fcab8f431c5b") : object "kube-system"/"original-pull-secret" not registered Apr 16 13:58:55.598199 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:55.598162 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-mqpls" Apr 16 13:58:55.598655 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:55.598318 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mqpls" podUID="be2e3878-e04d-4a67-b1b8-fcab8f431c5b" Apr 16 13:58:56.151902 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:56.151840 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6cc56cdf-0ee0-49a9-b52c-65d8745cb390-metrics-certs\") pod \"network-metrics-daemon-99gsl\" (UID: \"6cc56cdf-0ee0-49a9-b52c-65d8745cb390\") " pod="openshift-multus/network-metrics-daemon-99gsl" Apr 16 13:58:56.152110 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:56.152046 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 13:58:56.152110 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:56.152105 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cc56cdf-0ee0-49a9-b52c-65d8745cb390-metrics-certs podName:6cc56cdf-0ee0-49a9-b52c-65d8745cb390 nodeName:}" failed. No retries permitted until 2026-04-16 13:59:04.152087921 +0000 UTC m=+18.173375660 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6cc56cdf-0ee0-49a9-b52c-65d8745cb390-metrics-certs") pod "network-metrics-daemon-99gsl" (UID: "6cc56cdf-0ee0-49a9-b52c-65d8745cb390") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 13:58:56.353693 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:56.353035 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n4qzk\" (UniqueName: \"kubernetes.io/projected/84822439-41ed-4bb8-b7d6-6784ad00eeaf-kube-api-access-n4qzk\") pod \"network-check-target-bdcn7\" (UID: \"84822439-41ed-4bb8-b7d6-6784ad00eeaf\") " pod="openshift-network-diagnostics/network-check-target-bdcn7" Apr 16 13:58:56.353693 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:56.353213 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 13:58:56.353693 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:56.353253 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 13:58:56.353693 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:56.353268 2569 projected.go:194] Error preparing data for projected volume kube-api-access-n4qzk for pod openshift-network-diagnostics/network-check-target-bdcn7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 13:58:56.353693 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:56.353329 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/84822439-41ed-4bb8-b7d6-6784ad00eeaf-kube-api-access-n4qzk podName:84822439-41ed-4bb8-b7d6-6784ad00eeaf nodeName:}" failed. 
No retries permitted until 2026-04-16 13:59:04.353308629 +0000 UTC m=+18.374596373 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-n4qzk" (UniqueName: "kubernetes.io/projected/84822439-41ed-4bb8-b7d6-6784ad00eeaf-kube-api-access-n4qzk") pod "network-check-target-bdcn7" (UID: "84822439-41ed-4bb8-b7d6-6784ad00eeaf") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 13:58:56.598954 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:56.598808 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bdcn7" Apr 16 13:58:56.598954 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:56.598913 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-bdcn7" podUID="84822439-41ed-4bb8-b7d6-6784ad00eeaf" Apr 16 13:58:56.599482 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:56.599296 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99gsl" Apr 16 13:58:56.599482 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:56.599388 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99gsl" podUID="6cc56cdf-0ee0-49a9-b52c-65d8745cb390" Apr 16 13:58:57.059027 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:57.058940 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/be2e3878-e04d-4a67-b1b8-fcab8f431c5b-original-pull-secret\") pod \"global-pull-secret-syncer-mqpls\" (UID: \"be2e3878-e04d-4a67-b1b8-fcab8f431c5b\") " pod="kube-system/global-pull-secret-syncer-mqpls" Apr 16 13:58:57.059202 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:57.059068 2569 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 16 13:58:57.059202 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:57.059130 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be2e3878-e04d-4a67-b1b8-fcab8f431c5b-original-pull-secret podName:be2e3878-e04d-4a67-b1b8-fcab8f431c5b nodeName:}" failed. No retries permitted until 2026-04-16 13:59:01.059116263 +0000 UTC m=+15.080403989 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/be2e3878-e04d-4a67-b1b8-fcab8f431c5b-original-pull-secret") pod "global-pull-secret-syncer-mqpls" (UID: "be2e3878-e04d-4a67-b1b8-fcab8f431c5b") : object "kube-system"/"original-pull-secret" not registered Apr 16 13:58:57.598164 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:57.598078 2569 util.go:30] "No sandbox for pod can be found. 
Apr 16 13:58:57.598164 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:57.598078 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mqpls"
Apr 16 13:58:57.598349 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:57.598198 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mqpls" podUID="be2e3878-e04d-4a67-b1b8-fcab8f431c5b"
Apr 16 13:58:58.598110 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:58.598080 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99gsl"
Apr 16 13:58:58.598634 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:58.598217 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99gsl" podUID="6cc56cdf-0ee0-49a9-b52c-65d8745cb390"
Apr 16 13:58:58.598634 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:58.598274 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bdcn7"
Apr 16 13:58:58.598634 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:58.598389 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-bdcn7" podUID="84822439-41ed-4bb8-b7d6-6784ad00eeaf"
Apr 16 13:58:59.597768 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:58:59.597735 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mqpls"
Apr 16 13:58:59.597945 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:58:59.597862 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mqpls" podUID="be2e3878-e04d-4a67-b1b8-fcab8f431c5b"
Apr 16 13:59:00.597608 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:00.597570 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99gsl"
Apr 16 13:59:00.598056 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:00.597621 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bdcn7"
Apr 16 13:59:00.598056 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:00.597709 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99gsl" podUID="6cc56cdf-0ee0-49a9-b52c-65d8745cb390"
Apr 16 13:59:00.598056 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:00.597890 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-bdcn7" podUID="84822439-41ed-4bb8-b7d6-6784ad00eeaf"
Apr 16 13:59:01.090324 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:01.090224 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/be2e3878-e04d-4a67-b1b8-fcab8f431c5b-original-pull-secret\") pod \"global-pull-secret-syncer-mqpls\" (UID: \"be2e3878-e04d-4a67-b1b8-fcab8f431c5b\") " pod="kube-system/global-pull-secret-syncer-mqpls"
Apr 16 13:59:01.090462 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:01.090381 2569 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 16 13:59:01.090462 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:01.090460 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be2e3878-e04d-4a67-b1b8-fcab8f431c5b-original-pull-secret podName:be2e3878-e04d-4a67-b1b8-fcab8f431c5b nodeName:}" failed. No retries permitted until 2026-04-16 13:59:09.090439537 +0000 UTC m=+23.111727281 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/be2e3878-e04d-4a67-b1b8-fcab8f431c5b-original-pull-secret") pod "global-pull-secret-syncer-mqpls" (UID: "be2e3878-e04d-4a67-b1b8-fcab8f431c5b") : object "kube-system"/"original-pull-secret" not registered
Apr 16 13:59:01.598417 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:01.598388 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mqpls"
Apr 16 13:59:01.598853 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:01.598510 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mqpls" podUID="be2e3878-e04d-4a67-b1b8-fcab8f431c5b"
Apr 16 13:59:02.598159 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:02.598124 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bdcn7"
Apr 16 13:59:02.598342 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:02.598135 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99gsl"
pod="openshift-network-diagnostics/network-check-target-bdcn7" podUID="84822439-41ed-4bb8-b7d6-6784ad00eeaf" Apr 16 13:59:02.598457 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:02.598372 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99gsl" podUID="6cc56cdf-0ee0-49a9-b52c-65d8745cb390" Apr 16 13:59:03.598628 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:03.598597 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mqpls" Apr 16 13:59:03.599104 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:03.598699 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mqpls" podUID="be2e3878-e04d-4a67-b1b8-fcab8f431c5b" Apr 16 13:59:04.213984 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:04.213942 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6cc56cdf-0ee0-49a9-b52c-65d8745cb390-metrics-certs\") pod \"network-metrics-daemon-99gsl\" (UID: \"6cc56cdf-0ee0-49a9-b52c-65d8745cb390\") " pod="openshift-multus/network-metrics-daemon-99gsl" Apr 16 13:59:04.214155 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:04.214117 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 13:59:04.214223 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:04.214214 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cc56cdf-0ee0-49a9-b52c-65d8745cb390-metrics-certs podName:6cc56cdf-0ee0-49a9-b52c-65d8745cb390 nodeName:}" failed. No retries permitted until 2026-04-16 13:59:20.214192156 +0000 UTC m=+34.235479884 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6cc56cdf-0ee0-49a9-b52c-65d8745cb390-metrics-certs") pod "network-metrics-daemon-99gsl" (UID: "6cc56cdf-0ee0-49a9-b52c-65d8745cb390") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 13:59:04.415009 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:04.414975 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n4qzk\" (UniqueName: \"kubernetes.io/projected/84822439-41ed-4bb8-b7d6-6784ad00eeaf-kube-api-access-n4qzk\") pod \"network-check-target-bdcn7\" (UID: \"84822439-41ed-4bb8-b7d6-6784ad00eeaf\") " pod="openshift-network-diagnostics/network-check-target-bdcn7" Apr 16 13:59:04.415186 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:04.415139 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 13:59:04.415186 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:04.415165 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 13:59:04.415186 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:04.415178 2569 projected.go:194] Error preparing data for projected volume kube-api-access-n4qzk for pod openshift-network-diagnostics/network-check-target-bdcn7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 13:59:04.415330 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:04.415228 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/84822439-41ed-4bb8-b7d6-6784ad00eeaf-kube-api-access-n4qzk podName:84822439-41ed-4bb8-b7d6-6784ad00eeaf nodeName:}" failed. No retries permitted until 2026-04-16 13:59:20.415214579 +0000 UTC m=+34.436502305 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-n4qzk" (UniqueName: "kubernetes.io/projected/84822439-41ed-4bb8-b7d6-6784ad00eeaf-kube-api-access-n4qzk") pod "network-check-target-bdcn7" (UID: "84822439-41ed-4bb8-b7d6-6784ad00eeaf") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 13:59:04.597648 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:04.597574 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99gsl" Apr 16 13:59:04.597796 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:04.597574 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bdcn7" Apr 16 13:59:04.597796 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:04.597712 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-99gsl" podUID="6cc56cdf-0ee0-49a9-b52c-65d8745cb390" Apr 16 13:59:04.597796 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:04.597755 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-bdcn7" podUID="84822439-41ed-4bb8-b7d6-6784ad00eeaf" Apr 16 13:59:05.597778 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:05.597747 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mqpls" Apr 16 13:59:05.598180 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:05.597860 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mqpls" podUID="be2e3878-e04d-4a67-b1b8-fcab8f431c5b" Apr 16 13:59:06.598685 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:06.598652 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99gsl" Apr 16 13:59:06.599141 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:06.598754 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99gsl" podUID="6cc56cdf-0ee0-49a9-b52c-65d8745cb390" Apr 16 13:59:06.599227 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:06.599155 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bdcn7" Apr 16 13:59:06.599311 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:06.599270 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-bdcn7" podUID="84822439-41ed-4bb8-b7d6-6784ad00eeaf" Apr 16 13:59:07.599484 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:07.599195 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mqpls" Apr 16 13:59:07.600021 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:07.599554 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-mqpls" podUID="be2e3878-e04d-4a67-b1b8-fcab8f431c5b" Apr 16 13:59:07.672978 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:07.672944 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ljx7q_7e5f1228-5703-456a-a909-558205e02bfc/ovn-acl-logging/0.log" Apr 16 13:59:07.673798 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:07.673769 2569 generic.go:358] "Generic (PLEG): container finished" podID="7e5f1228-5703-456a-a909-558205e02bfc" containerID="ab21ce74fc9aa934eb8d9c323e141ad2656b6f3f0f8e42719017104ffc203542" exitCode=1 Apr 16 13:59:07.673900 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:07.673846 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" event={"ID":"7e5f1228-5703-456a-a909-558205e02bfc","Type":"ContainerStarted","Data":"9fb0e4e1087625ded8096d9f25c1e3ef9c72b3ed4ee4560a7f3ab5b4e54e8001"} Apr 16 13:59:07.673900 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:07.673886 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" event={"ID":"7e5f1228-5703-456a-a909-558205e02bfc","Type":"ContainerDied","Data":"ab21ce74fc9aa934eb8d9c323e141ad2656b6f3f0f8e42719017104ffc203542"} Apr 16 13:59:07.674054 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:07.673904 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" event={"ID":"7e5f1228-5703-456a-a909-558205e02bfc","Type":"ContainerStarted","Data":"1c557a801ddacc53affde269e49bd9344d22b51ad158def75f8ab712660df0ed"} Apr 16 13:59:07.676161 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:07.676135 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-129-84.ec2.internal" event={"ID":"8a9e06fbe231682ac4f4d6b934aa0a28","Type":"ContainerStarted","Data":"5a5a4f0b4fc37d9bc9d4ae01757cc19543bc1513fd1e3d259ac63adac29e2cab"} Apr 16 13:59:07.678423 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:07.678398 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q4pj8" event={"ID":"c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e","Type":"ContainerStarted","Data":"b88846f229cc70a709f5840f567012e6e3102e32e8f8798a97e398921f3f2ad7"} Apr 16 13:59:07.689178 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:07.689106 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-56z6w" event={"ID":"170b3d99-353c-47c0-9fd5-7c56afedf117","Type":"ContainerStarted","Data":"6805065056ce6f741803010ecf007a9568245eb420360a1cee9838d000795370"} Apr 16 13:59:07.704982 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:07.704937 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-proxy-ip-10-0-129-84.ec2.internal" podStartSLOduration=19.704920887 podStartE2EDuration="19.704920887s" podCreationTimestamp="2026-04-16 13:58:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 13:59:07.6891514 +0000 UTC m=+21.710439148" watchObservedRunningTime="2026-04-16 13:59:07.704920887 +0000 UTC m=+21.726208635" Apr 16 13:59:07.705160 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:07.705131 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-q4pj8" podStartSLOduration=3.423288199 podStartE2EDuration="21.705124613s" podCreationTimestamp="2026-04-16 
13:58:46 +0000 UTC" firstStartedPulling="2026-04-16 13:58:48.810096182 +0000 UTC m=+2.831383911" lastFinishedPulling="2026-04-16 13:59:07.091932598 +0000 UTC m=+21.113220325" observedRunningTime="2026-04-16 13:59:07.704896448 +0000 UTC m=+21.726184223" watchObservedRunningTime="2026-04-16 13:59:07.705124613 +0000 UTC m=+21.726412360" Apr 16 13:59:07.717510 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:07.717472 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-56z6w" podStartSLOduration=3.7174751969999997 podStartE2EDuration="21.717458353s" podCreationTimestamp="2026-04-16 13:58:46 +0000 UTC" firstStartedPulling="2026-04-16 13:58:48.797894701 +0000 UTC m=+2.819182427" lastFinishedPulling="2026-04-16 13:59:06.797877853 +0000 UTC m=+20.819165583" observedRunningTime="2026-04-16 13:59:07.717030896 +0000 UTC m=+21.738318646" watchObservedRunningTime="2026-04-16 13:59:07.717458353 +0000 UTC m=+21.738746104" Apr 16 13:59:08.597812 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:08.597600 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99gsl" Apr 16 13:59:08.597990 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:08.597654 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bdcn7" Apr 16 13:59:08.597990 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:08.597869 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99gsl" podUID="6cc56cdf-0ee0-49a9-b52c-65d8745cb390" Apr 16 13:59:08.597990 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:08.597900 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-bdcn7" podUID="84822439-41ed-4bb8-b7d6-6784ad00eeaf" Apr 16 13:59:08.691325 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:08.691287 2569 generic.go:358] "Generic (PLEG): container finished" podID="3a246135dc05bb9822400fbcc84ce6ae" containerID="c35e8e42862cae53d01e31892e46b08c572b0504290b1ebd2b39f11dc2afcda2" exitCode=0 Apr 16 13:59:08.691819 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:08.691365 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-84.ec2.internal" event={"ID":"3a246135dc05bb9822400fbcc84ce6ae","Type":"ContainerDied","Data":"c35e8e42862cae53d01e31892e46b08c572b0504290b1ebd2b39f11dc2afcda2"} Apr 16 13:59:08.692764 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:08.692735 2569 generic.go:358] "Generic (PLEG): container finished" podID="f2f44068-07cc-44c3-b6bc-448389afc9ce" containerID="3608382f055da163e17c6ff595ff5083991597228b88c8da2ee30a056b04f195" exitCode=0 Apr 16 13:59:08.692886 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:08.692813 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-t28sg" event={"ID":"f2f44068-07cc-44c3-b6bc-448389afc9ce","Type":"ContainerDied","Data":"3608382f055da163e17c6ff595ff5083991597228b88c8da2ee30a056b04f195"} Apr 16 13:59:08.696224 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:08.696189 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-jnqp7" event={"ID":"07a3b071-1443-4213-b66f-ce5f4d7ff313","Type":"ContainerStarted","Data":"b0770d40f770ca15170b596089457cdebc3a64b4b0927f6938daa8a265891547"} Apr 16 13:59:08.697978 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:08.697698 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-995x2" event={"ID":"1153eba3-85ab-49ec-ac00-fab2f08d676b","Type":"ContainerStarted","Data":"7dfc94e1b8f5001ba98d107c4c4f316d49d67aa94e7e64691615aec3706bb887"} Apr 16 13:59:08.702984 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:08.702969 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ljx7q_7e5f1228-5703-456a-a909-558205e02bfc/ovn-acl-logging/0.log" Apr 16 13:59:08.703305 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:08.703286 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" event={"ID":"7e5f1228-5703-456a-a909-558205e02bfc","Type":"ContainerStarted","Data":"5ceb59260595a95c286b88e6a2bd3ca3e0c3b9db9d2df00d3b4595044b372256"} Apr 16 13:59:08.703383 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:08.703310 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" event={"ID":"7e5f1228-5703-456a-a909-558205e02bfc","Type":"ContainerStarted","Data":"ca55ebd5d93f3a52b489555efd944afb2578206a3c21508211917fb587915683"} Apr 16 13:59:08.703383 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:08.703322 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" event={"ID":"7e5f1228-5703-456a-a909-558205e02bfc","Type":"ContainerStarted","Data":"49001596ca71f83380ffb77d30a78e920b2a98980b207c111b8520b290b1ea0d"} Apr 16 13:59:08.704516 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:08.704496 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-cx8jm" 
event={"ID":"165a8934-211a-41ba-a917-3ec360f1fb99","Type":"ContainerStarted","Data":"8021c92981a306bcf835ef7414d38283f5925173a1c27a10f965a534a7fbc4d8"} Apr 16 13:59:08.705684 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:08.705656 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-p4bsc" event={"ID":"afa773e4-9e56-4130-bc08-0913d59056bb","Type":"ContainerStarted","Data":"ea795ad7a89574b07320c9dd5b8c61304080f3a7764e017f3d2cc35c8e1409a7"} Apr 16 13:59:08.706828 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:08.706808 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-qnnbt" event={"ID":"b86b831d-508f-4fd4-9a93-21d9e8e21be7","Type":"ContainerStarted","Data":"bbe73e1fdd1148b5a837ec7c40c42a1c1933e3455950c730d2fe03b8f81000ec"} Apr 16 13:59:08.750508 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:08.750465 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/iptables-alerter-cx8jm" podStartSLOduration=4.751844081 podStartE2EDuration="22.750453274s" podCreationTimestamp="2026-04-16 13:58:46 +0000 UTC" firstStartedPulling="2026-04-16 13:58:48.780290336 +0000 UTC m=+2.801578061" lastFinishedPulling="2026-04-16 13:59:06.778899514 +0000 UTC m=+20.800187254" observedRunningTime="2026-04-16 13:59:08.750363373 +0000 UTC m=+22.771651122" watchObservedRunningTime="2026-04-16 13:59:08.750453274 +0000 UTC m=+22.771741022" Apr 16 13:59:08.764924 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:08.764885 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-jnqp7" podStartSLOduration=4.773351487 podStartE2EDuration="22.764873682s" podCreationTimestamp="2026-04-16 13:58:46 +0000 UTC" firstStartedPulling="2026-04-16 13:58:48.804078483 +0000 UTC m=+2.825366209" lastFinishedPulling="2026-04-16 13:59:06.795600677 +0000 UTC m=+20.816888404" observedRunningTime="2026-04-16 13:59:08.764376353 +0000 UTC m=+22.785664102" watchObservedRunningTime="2026-04-16 13:59:08.764873682 +0000 UTC m=+22.786161430" Apr 16 13:59:08.778231 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:08.778196 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/konnectivity-agent-qnnbt" podStartSLOduration=4.814085501 podStartE2EDuration="22.778185666s" podCreationTimestamp="2026-04-16 13:58:46 +0000 UTC" firstStartedPulling="2026-04-16 13:58:48.814822422 +0000 UTC m=+2.836110150" lastFinishedPulling="2026-04-16 13:59:06.778922585 +0000 UTC m=+20.800210315" observedRunningTime="2026-04-16 13:59:08.778152873 +0000 UTC m=+22.799440621" watchObservedRunningTime="2026-04-16 13:59:08.778185666 +0000 UTC m=+22.799473414" Apr 16 13:59:08.790770 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:08.790740 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-p4bsc" podStartSLOduration=4.831497745 podStartE2EDuration="22.790730597s" podCreationTimestamp="2026-04-16 13:58:46 +0000 UTC" firstStartedPulling="2026-04-16 13:58:48.819665115 +0000 UTC m=+2.840952842" lastFinishedPulling="2026-04-16 13:59:06.778897956 +0000 UTC m=+20.800185694" observedRunningTime="2026-04-16 13:59:08.790491186 +0000 UTC m=+22.811778933" watchObservedRunningTime="2026-04-16 13:59:08.790730597 +0000 UTC m=+22.812018345" Apr 16 13:59:08.990106 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:08.990031 2569 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="kube-system/konnectivity-agent-qnnbt" Apr 16 13:59:08.990674 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:08.990656 2569 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/konnectivity-agent-qnnbt" Apr 16 13:59:09.152704 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:09.152675 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/be2e3878-e04d-4a67-b1b8-fcab8f431c5b-original-pull-secret\") pod \"global-pull-secret-syncer-mqpls\" (UID: \"be2e3878-e04d-4a67-b1b8-fcab8f431c5b\") " pod="kube-system/global-pull-secret-syncer-mqpls" Apr 16 13:59:09.152862 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:09.152823 2569 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 16 13:59:09.152926 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:09.152897 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be2e3878-e04d-4a67-b1b8-fcab8f431c5b-original-pull-secret podName:be2e3878-e04d-4a67-b1b8-fcab8f431c5b nodeName:}" failed. No retries permitted until 2026-04-16 13:59:25.152875722 +0000 UTC m=+39.174163471 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/be2e3878-e04d-4a67-b1b8-fcab8f431c5b-original-pull-secret") pod "global-pull-secret-syncer-mqpls" (UID: "be2e3878-e04d-4a67-b1b8-fcab8f431c5b") : object "kube-system"/"original-pull-secret" not registered Apr 16 13:59:09.373375 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:09.373350 2569 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock" Apr 16 13:59:09.544968 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:09.544764 2569 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock","Timestamp":"2026-04-16T13:59:09.373373553Z","UUID":"2ba1da21-dddb-4c12-9c3a-11e865fb870f","Handler":null,"Name":"","Endpoint":""} Apr 16 13:59:09.546376 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:09.546354 2569 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: ebs.csi.aws.com endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock versions: 1.0.0 Apr 16 13:59:09.546376 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:09.546382 2569 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: ebs.csi.aws.com at endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock Apr 16 13:59:09.598505 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:09.598478 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mqpls" Apr 16 13:59:09.598654 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:09.598584 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-mqpls" podUID="be2e3878-e04d-4a67-b1b8-fcab8f431c5b" Apr 16 13:59:09.710621 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:09.710581 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-84.ec2.internal" event={"ID":"3a246135dc05bb9822400fbcc84ce6ae","Type":"ContainerStarted","Data":"6de53c1994bf84f7965d5898263165f190c739d7012dd11f444ace1dc29c10a4"} Apr 16 13:59:09.712456 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:09.712426 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-995x2" event={"ID":"1153eba3-85ab-49ec-ac00-fab2f08d676b","Type":"ContainerStarted","Data":"a9310735097ad0cccf3e936406443a9b5e926f453281239a520b88dd58227591"} Apr 16 13:59:09.713060 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:09.712930 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kube-system/konnectivity-agent-qnnbt" Apr 16 13:59:09.713335 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:09.713319 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/konnectivity-agent-qnnbt" Apr 16 13:59:09.724518 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:09.724480 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-129-84.ec2.internal" podStartSLOduration=21.724468864 podStartE2EDuration="21.724468864s" podCreationTimestamp="2026-04-16 13:58:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 13:59:09.723491058 +0000 UTC m=+23.744778807" watchObservedRunningTime="2026-04-16 13:59:09.724468864 +0000 UTC m=+23.745756617" Apr 16 13:59:10.598812 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:10.598565 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bdcn7" Apr 16 13:59:10.598976 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:10.598635 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99gsl" Apr 16 13:59:10.598976 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:10.598912 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-bdcn7" podUID="84822439-41ed-4bb8-b7d6-6784ad00eeaf" Apr 16 13:59:10.599102 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:10.598994 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-99gsl" podUID="6cc56cdf-0ee0-49a9-b52c-65d8745cb390" Apr 16 13:59:10.718012 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:10.717979 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ljx7q_7e5f1228-5703-456a-a909-558205e02bfc/ovn-acl-logging/0.log" Apr 16 13:59:10.718427 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:10.718384 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" event={"ID":"7e5f1228-5703-456a-a909-558205e02bfc","Type":"ContainerStarted","Data":"f452fb48a23c6c9eda5b0eb41e62027d8463efd568efad349d6d6674ab5e8f29"} Apr 16 13:59:11.598428 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:11.598400 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mqpls" Apr 16 13:59:11.598605 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:11.598508 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mqpls" podUID="be2e3878-e04d-4a67-b1b8-fcab8f431c5b" Apr 16 13:59:11.722616 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:11.722579 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-995x2" event={"ID":"1153eba3-85ab-49ec-ac00-fab2f08d676b","Type":"ContainerStarted","Data":"2d7f6b5addbca4a2c059ce3c8bcb179b1013bd75661c5eac104eaf8ab2e5ed66"} Apr 16 13:59:11.738928 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:11.738883 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-995x2" podStartSLOduration=3.64582444 podStartE2EDuration="25.738869342s" podCreationTimestamp="2026-04-16 13:58:46 +0000 UTC" firstStartedPulling="2026-04-16 13:58:48.7946218 +0000 UTC m=+2.815909525" lastFinishedPulling="2026-04-16 13:59:10.88766669 +0000 UTC m=+24.908954427" observedRunningTime="2026-04-16 13:59:11.738401509 +0000 UTC m=+25.759689260" watchObservedRunningTime="2026-04-16 13:59:11.738869342 +0000 UTC m=+25.760157100" Apr 16 13:59:12.598666 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:12.598631 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bdcn7" Apr 16 13:59:12.598830 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:12.598755 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-bdcn7" podUID="84822439-41ed-4bb8-b7d6-6784ad00eeaf" Apr 16 13:59:12.599067 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:12.598637 2569 util.go:30] "No sandbox for pod can be found. 
Apr 16 13:59:12.599067 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:12.598637 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99gsl"
Apr 16 13:59:12.599067 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:12.599032 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99gsl" podUID="6cc56cdf-0ee0-49a9-b52c-65d8745cb390"
Apr 16 13:59:13.598514 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:13.598333 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mqpls"
Apr 16 13:59:13.599087 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:13.598576 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mqpls" podUID="be2e3878-e04d-4a67-b1b8-fcab8f431c5b"
Apr 16 13:59:13.727065 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:13.727035 2569 generic.go:358] "Generic (PLEG): container finished" podID="f2f44068-07cc-44c3-b6bc-448389afc9ce" containerID="d36cfb7c8f4de7145154e6d178132e5ec4075bd79612aaafd563d50bf22994c3" exitCode=0
Apr 16 13:59:13.727226 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:13.727129 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-t28sg" event={"ID":"f2f44068-07cc-44c3-b6bc-448389afc9ce","Type":"ContainerDied","Data":"d36cfb7c8f4de7145154e6d178132e5ec4075bd79612aaafd563d50bf22994c3"}
Apr 16 13:59:13.730142 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:13.730097 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ljx7q_7e5f1228-5703-456a-a909-558205e02bfc/ovn-acl-logging/0.log"
Apr 16 13:59:13.730446 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:13.730424 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" event={"ID":"7e5f1228-5703-456a-a909-558205e02bfc","Type":"ContainerStarted","Data":"e0c55ac77a5efc2f44e64dc22d242e91aad4e5da9001c04459c9420fb28590c3"}
Apr 16 13:59:13.730806 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:13.730787 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q"
Apr 16 13:59:13.730878 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:13.730812 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q"
Apr 16 13:59:13.730952 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:13.730938 2569 scope.go:117] "RemoveContainer" containerID="ab21ce74fc9aa934eb8d9c323e141ad2656b6f3f0f8e42719017104ffc203542"
Apr 16 13:59:13.745426 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:13.745403 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q"
Apr 16 13:59:13.745502 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:13.745492 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q"
Apr 16 13:59:14.598434 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:14.598406 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99gsl"
Apr 16 13:59:14.599141 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:14.598545 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99gsl" podUID="6cc56cdf-0ee0-49a9-b52c-65d8745cb390"
Apr 16 13:59:14.599141 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:14.598959 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bdcn7"
Apr 16 13:59:14.599141 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:14.599047 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-bdcn7" podUID="84822439-41ed-4bb8-b7d6-6784ad00eeaf"
Apr 16 13:59:14.733825 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:14.733800 2569 generic.go:358] "Generic (PLEG): container finished" podID="f2f44068-07cc-44c3-b6bc-448389afc9ce" containerID="f7d8082adaef0b16c201c05d4f7d034714ea234ed15a67dba6404eae218a7c22" exitCode=0
Apr 16 13:59:14.733914 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:14.733879 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-t28sg" event={"ID":"f2f44068-07cc-44c3-b6bc-448389afc9ce","Type":"ContainerDied","Data":"f7d8082adaef0b16c201c05d4f7d034714ea234ed15a67dba6404eae218a7c22"}
Apr 16 13:59:14.737279 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:14.737263 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ljx7q_7e5f1228-5703-456a-a909-558205e02bfc/ovn-acl-logging/0.log"
Apr 16 13:59:14.737601 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:14.737582 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" event={"ID":"7e5f1228-5703-456a-a909-558205e02bfc","Type":"ContainerStarted","Data":"92be8177d8831153247374ce0818ea4549eaea9b4ad38a4021548fd6cd2ef85a"}
Apr 16 13:59:14.737695 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:14.737682 2569 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 16 13:59:14.784453 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:14.784411 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" podStartSLOduration=10.160787459 podStartE2EDuration="28.78439931s" podCreationTimestamp="2026-04-16 13:58:46 +0000 UTC" firstStartedPulling="2026-04-16 13:58:48.787047279 +0000 UTC m=+2.808335006" lastFinishedPulling="2026-04-16 13:59:07.410659112 +0000 UTC m=+21.431946857" observedRunningTime="2026-04-16 13:59:14.783157831 +0000 UTC m=+28.804445603" watchObservedRunningTime="2026-04-16 13:59:14.78439931 +0000 UTC m=+28.805687057"
Apr 16 13:59:14.789048 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:14.789033 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q"
Apr 16 13:59:14.816733 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:14.816705 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-mqpls"]
Apr 16 13:59:14.816839 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:14.816810 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mqpls"
Apr 16 13:59:14.816933 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:14.816910 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mqpls" podUID="be2e3878-e04d-4a67-b1b8-fcab8f431c5b"
Apr 16 13:59:14.817170 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:14.817143 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-99gsl"]
Apr 16 13:59:14.817300 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:14.817270 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99gsl"
Apr 16 13:59:14.817428 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:14.817382 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99gsl" podUID="6cc56cdf-0ee0-49a9-b52c-65d8745cb390"
Apr 16 13:59:14.817718 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:14.817697 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-bdcn7"]
Apr 16 13:59:14.817807 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:14.817775 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bdcn7"
Apr 16 13:59:14.817865 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:14.817849 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-bdcn7" podUID="84822439-41ed-4bb8-b7d6-6784ad00eeaf"
Apr 16 13:59:16.598543 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:16.598404 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99gsl"
Apr 16 13:59:16.598964 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:16.598482 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bdcn7"
Apr 16 13:59:16.598964 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:16.598512 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mqpls"
Apr 16 13:59:16.598964 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:16.598650 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99gsl" podUID="6cc56cdf-0ee0-49a9-b52c-65d8745cb390"
Apr 16 13:59:16.598964 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:16.598733 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-bdcn7" podUID="84822439-41ed-4bb8-b7d6-6784ad00eeaf"
Apr 16 13:59:16.598964 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:16.598794 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-mqpls" podUID="be2e3878-e04d-4a67-b1b8-fcab8f431c5b"
Apr 16 13:59:16.741828 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:16.741802 2569 generic.go:358] "Generic (PLEG): container finished" podID="f2f44068-07cc-44c3-b6bc-448389afc9ce" containerID="cbca852906dc22d6b75947b6b71db71fb7dfe56bb01837e809fa81690265db4b" exitCode=0
Apr 16 13:59:16.741956 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:16.741899 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-t28sg" event={"ID":"f2f44068-07cc-44c3-b6bc-448389afc9ce","Type":"ContainerDied","Data":"cbca852906dc22d6b75947b6b71db71fb7dfe56bb01837e809fa81690265db4b"}
Apr 16 13:59:18.598098 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:18.598024 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bdcn7"
Apr 16 13:59:18.598098 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:18.598043 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mqpls"
Apr 16 13:59:18.598098 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:18.598034 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99gsl"
pod="kube-system/global-pull-secret-syncer-mqpls" podUID="be2e3878-e04d-4a67-b1b8-fcab8f431c5b" Apr 16 13:59:18.598732 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:18.598220 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99gsl" podUID="6cc56cdf-0ee0-49a9-b52c-65d8745cb390" Apr 16 13:59:18.598732 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:18.598328 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-bdcn7" podUID="84822439-41ed-4bb8-b7d6-6784ad00eeaf" Apr 16 13:59:20.246621 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.246540 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6cc56cdf-0ee0-49a9-b52c-65d8745cb390-metrics-certs\") pod \"network-metrics-daemon-99gsl\" (UID: \"6cc56cdf-0ee0-49a9-b52c-65d8745cb390\") " pod="openshift-multus/network-metrics-daemon-99gsl" Apr 16 13:59:20.247144 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:20.246671 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 13:59:20.247144 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:20.246734 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cc56cdf-0ee0-49a9-b52c-65d8745cb390-metrics-certs podName:6cc56cdf-0ee0-49a9-b52c-65d8745cb390 nodeName:}" failed. No retries permitted until 2026-04-16 13:59:52.246719628 +0000 UTC m=+66.268007360 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6cc56cdf-0ee0-49a9-b52c-65d8745cb390-metrics-certs") pod "network-metrics-daemon-99gsl" (UID: "6cc56cdf-0ee0-49a9-b52c-65d8745cb390") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 13:59:20.283145 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.283116 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-129-84.ec2.internal" event="NodeReady" Apr 16 13:59:20.283328 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.283266 2569 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Apr 16 13:59:20.315065 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.315038 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-7fd54c5856-xxztt"] Apr 16 13:59:20.334094 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.334067 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-rdgwf"] Apr 16 13:59:20.334267 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.334222 2569 util.go:30] "No sandbox for pod can be found. 
Apr 16 13:59:20.334267 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.334222 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-7fd54c5856-xxztt"
Apr 16 13:59:20.336422 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.336400 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Apr 16 13:59:20.337222 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.336875 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-private-configuration\""
Apr 16 13:59:20.337222 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.336888 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\""
Apr 16 13:59:20.337222 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.336888 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-gdggq\""
Apr 16 13:59:20.345949 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.345833 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Apr 16 13:59:20.353517 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.353495 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-92l7v"]
Apr 16 13:59:20.353654 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.353639 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-rdgwf"
Apr 16 13:59:20.355853 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.355831 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Apr 16 13:59:20.355952 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.355912 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-2c9fp\""
Apr 16 13:59:20.355952 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.355879 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Apr 16 13:59:20.370427 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.370409 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-7fd54c5856-xxztt"]
Apr 16 13:59:20.370567 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.370554 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-rdgwf"]
Apr 16 13:59:20.370658 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.370649 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-92l7v"]
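
The burst of reflector.go "Caches populated" entries above marks kubelet starting a dedicated watch for each Secret and ConfigMap the newly added pods reference; volume setup for a pod can only proceed once those caches are synced, which is why the earlier mounts kept failing with "not registered" before the objects' reflectors existed. A sketch of the same populate-then-proceed pattern using a client-go informer; the kubeconfig path and namespace here are illustrative, and kubelet itself uses per-object reflectors rather than a shared factory:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Informer factory scoped to one namespace whose Secrets the log
	// shows being cached before volume setup can succeed.
	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 10*time.Minute, informers.WithNamespace("openshift-dns"))
	secrets := factory.Core().V1().Secrets().Informer()

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	factory.Start(ctx.Done())

	// Block until the reflector's initial List completes -- the moment
	// the log records as "Caches populated".
	if !cache.WaitForCacheSync(ctx.Done(), secrets.HasSynced) {
		panic("cache never synced")
	}
	fmt.Println("caches populated")
}
```
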
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-92l7v" Apr 16 13:59:20.373107 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.373087 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Apr 16 13:59:20.373207 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.373128 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Apr 16 13:59:20.373407 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.373390 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Apr 16 13:59:20.373614 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.373599 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-wz4rp\"" Apr 16 13:59:20.448342 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.448306 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-image-registry-private-configuration\") pod \"image-registry-7fd54c5856-xxztt\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " pod="openshift-image-registry/image-registry-7fd54c5856-xxztt" Apr 16 13:59:20.448527 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.448353 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-trusted-ca\") pod \"image-registry-7fd54c5856-xxztt\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " pod="openshift-image-registry/image-registry-7fd54c5856-xxztt" Apr 16 13:59:20.448527 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.448392 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-ca-trust-extracted\") pod \"image-registry-7fd54c5856-xxztt\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " pod="openshift-image-registry/image-registry-7fd54c5856-xxztt" Apr 16 13:59:20.448527 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.448417 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7gpl\" (UniqueName: \"kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-kube-api-access-d7gpl\") pod \"image-registry-7fd54c5856-xxztt\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " pod="openshift-image-registry/image-registry-7fd54c5856-xxztt" Apr 16 13:59:20.448527 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.448457 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f334ac90-d973-40ab-bade-1a585fb2d9b2-metrics-tls\") pod \"dns-default-rdgwf\" (UID: \"f334ac90-d973-40ab-bade-1a585fb2d9b2\") " pod="openshift-dns/dns-default-rdgwf" Apr 16 13:59:20.448527 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.448508 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f334ac90-d973-40ab-bade-1a585fb2d9b2-tmp-dir\") pod \"dns-default-rdgwf\" (UID: \"f334ac90-d973-40ab-bade-1a585fb2d9b2\") " 
pod="openshift-dns/dns-default-rdgwf" Apr 16 13:59:20.448781 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.448573 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f334ac90-d973-40ab-bade-1a585fb2d9b2-config-volume\") pod \"dns-default-rdgwf\" (UID: \"f334ac90-d973-40ab-bade-1a585fb2d9b2\") " pod="openshift-dns/dns-default-rdgwf" Apr 16 13:59:20.448781 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.448615 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9m7m\" (UniqueName: \"kubernetes.io/projected/f334ac90-d973-40ab-bade-1a585fb2d9b2-kube-api-access-l9m7m\") pod \"dns-default-rdgwf\" (UID: \"f334ac90-d973-40ab-bade-1a585fb2d9b2\") " pod="openshift-dns/dns-default-rdgwf" Apr 16 13:59:20.448781 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.448648 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-bound-sa-token\") pod \"image-registry-7fd54c5856-xxztt\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " pod="openshift-image-registry/image-registry-7fd54c5856-xxztt" Apr 16 13:59:20.448781 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.448687 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n4qzk\" (UniqueName: \"kubernetes.io/projected/84822439-41ed-4bb8-b7d6-6784ad00eeaf-kube-api-access-n4qzk\") pod \"network-check-target-bdcn7\" (UID: \"84822439-41ed-4bb8-b7d6-6784ad00eeaf\") " pod="openshift-network-diagnostics/network-check-target-bdcn7" Apr 16 13:59:20.448781 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.448721 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-registry-tls\") pod \"image-registry-7fd54c5856-xxztt\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " pod="openshift-image-registry/image-registry-7fd54c5856-xxztt" Apr 16 13:59:20.448978 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:20.448818 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 13:59:20.448978 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.448836 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-registry-certificates\") pod \"image-registry-7fd54c5856-xxztt\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " pod="openshift-image-registry/image-registry-7fd54c5856-xxztt" Apr 16 13:59:20.448978 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:20.448842 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 13:59:20.448978 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:20.448863 2569 projected.go:194] Error preparing data for projected volume kube-api-access-n4qzk for pod openshift-network-diagnostics/network-check-target-bdcn7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
Apr 16 13:59:20.448978 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.448873 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-installation-pull-secrets\") pod \"image-registry-7fd54c5856-xxztt\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " pod="openshift-image-registry/image-registry-7fd54c5856-xxztt"
Apr 16 13:59:20.448978 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:20.448960 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/84822439-41ed-4bb8-b7d6-6784ad00eeaf-kube-api-access-n4qzk podName:84822439-41ed-4bb8-b7d6-6784ad00eeaf nodeName:}" failed. No retries permitted until 2026-04-16 13:59:52.44893791 +0000 UTC m=+66.470225642 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-n4qzk" (UniqueName: "kubernetes.io/projected/84822439-41ed-4bb8-b7d6-6784ad00eeaf-kube-api-access-n4qzk") pod "network-check-target-bdcn7" (UID: "84822439-41ed-4bb8-b7d6-6784ad00eeaf") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 16 13:59:20.550256 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.550152 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f334ac90-d973-40ab-bade-1a585fb2d9b2-config-volume\") pod \"dns-default-rdgwf\" (UID: \"f334ac90-d973-40ab-bade-1a585fb2d9b2\") " pod="openshift-dns/dns-default-rdgwf"
Apr 16 13:59:20.550256 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.550195 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l9m7m\" (UniqueName: \"kubernetes.io/projected/f334ac90-d973-40ab-bade-1a585fb2d9b2-kube-api-access-l9m7m\") pod \"dns-default-rdgwf\" (UID: \"f334ac90-d973-40ab-bade-1a585fb2d9b2\") " pod="openshift-dns/dns-default-rdgwf"
Apr 16 13:59:20.550461 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.550354 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-bound-sa-token\") pod \"image-registry-7fd54c5856-xxztt\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " pod="openshift-image-registry/image-registry-7fd54c5856-xxztt"
Apr 16 13:59:20.550461 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.550422 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-registry-tls\") pod \"image-registry-7fd54c5856-xxztt\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " pod="openshift-image-registry/image-registry-7fd54c5856-xxztt"
Apr 16 13:59:20.550461 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.550449 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-registry-certificates\") pod \"image-registry-7fd54c5856-xxztt\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " pod="openshift-image-registry/image-registry-7fd54c5856-xxztt"
Apr 16 13:59:20.550615 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.550477 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-installation-pull-secrets\") pod \"image-registry-7fd54c5856-xxztt\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " pod="openshift-image-registry/image-registry-7fd54c5856-xxztt"
Apr 16 13:59:20.550615 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.550526 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42q6v\" (UniqueName: \"kubernetes.io/projected/3bfef623-c79c-41c4-9fc5-0a25ecab1f4a-kube-api-access-42q6v\") pod \"ingress-canary-92l7v\" (UID: \"3bfef623-c79c-41c4-9fc5-0a25ecab1f4a\") " pod="openshift-ingress-canary/ingress-canary-92l7v"
Apr 16 13:59:20.550615 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:20.550580 2569 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found
Apr 16 13:59:20.550615 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:20.550598 2569 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-7fd54c5856-xxztt: secret "image-registry-tls" not found
Apr 16 13:59:20.550615 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.550604 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-image-registry-private-configuration\") pod \"image-registry-7fd54c5856-xxztt\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " pod="openshift-image-registry/image-registry-7fd54c5856-xxztt"
Apr 16 13:59:20.550852 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:20.550647 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-registry-tls podName:b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2 nodeName:}" failed. No retries permitted until 2026-04-16 13:59:21.050629723 +0000 UTC m=+35.071917456 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-registry-tls") pod "image-registry-7fd54c5856-xxztt" (UID: "b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2") : secret "image-registry-tls" not found
Apr 16 13:59:20.550852 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.550667 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-trusted-ca\") pod \"image-registry-7fd54c5856-xxztt\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " pod="openshift-image-registry/image-registry-7fd54c5856-xxztt"
Apr 16 13:59:20.550852 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.550699 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-ca-trust-extracted\") pod \"image-registry-7fd54c5856-xxztt\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " pod="openshift-image-registry/image-registry-7fd54c5856-xxztt"
Apr 16 13:59:20.550852 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.550724 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d7gpl\" (UniqueName: \"kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-kube-api-access-d7gpl\") pod \"image-registry-7fd54c5856-xxztt\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " pod="openshift-image-registry/image-registry-7fd54c5856-xxztt"
Apr 16 13:59:20.550852 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.550755 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f334ac90-d973-40ab-bade-1a585fb2d9b2-metrics-tls\") pod \"dns-default-rdgwf\" (UID: \"f334ac90-d973-40ab-bade-1a585fb2d9b2\") " pod="openshift-dns/dns-default-rdgwf"
Apr 16 13:59:20.550852 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.550783 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3bfef623-c79c-41c4-9fc5-0a25ecab1f4a-cert\") pod \"ingress-canary-92l7v\" (UID: \"3bfef623-c79c-41c4-9fc5-0a25ecab1f4a\") " pod="openshift-ingress-canary/ingress-canary-92l7v"
Apr 16 13:59:20.550852 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.550786 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f334ac90-d973-40ab-bade-1a585fb2d9b2-config-volume\") pod \"dns-default-rdgwf\" (UID: \"f334ac90-d973-40ab-bade-1a585fb2d9b2\") " pod="openshift-dns/dns-default-rdgwf"
Apr 16 13:59:20.550852 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.550819 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f334ac90-d973-40ab-bade-1a585fb2d9b2-tmp-dir\") pod \"dns-default-rdgwf\" (UID: \"f334ac90-d973-40ab-bade-1a585fb2d9b2\") " pod="openshift-dns/dns-default-rdgwf"
Apr 16 13:59:20.551227 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:20.550936 2569 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found
Apr 16 13:59:20.551227 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:20.550998 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f334ac90-d973-40ab-bade-1a585fb2d9b2-metrics-tls podName:f334ac90-d973-40ab-bade-1a585fb2d9b2 nodeName:}" failed. No retries permitted until 2026-04-16 13:59:21.050979742 +0000 UTC m=+35.072267471 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f334ac90-d973-40ab-bade-1a585fb2d9b2-metrics-tls") pod "dns-default-rdgwf" (UID: "f334ac90-d973-40ab-bade-1a585fb2d9b2") : secret "dns-default-metrics-tls" not found
Apr 16 13:59:20.551227 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.551072 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f334ac90-d973-40ab-bade-1a585fb2d9b2-tmp-dir\") pod \"dns-default-rdgwf\" (UID: \"f334ac90-d973-40ab-bade-1a585fb2d9b2\") " pod="openshift-dns/dns-default-rdgwf"
Apr 16 13:59:20.551410 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.551222 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-registry-certificates\") pod \"image-registry-7fd54c5856-xxztt\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " pod="openshift-image-registry/image-registry-7fd54c5856-xxztt"
Apr 16 13:59:20.551517 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.551391 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-ca-trust-extracted\") pod \"image-registry-7fd54c5856-xxztt\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " pod="openshift-image-registry/image-registry-7fd54c5856-xxztt"
Apr 16 13:59:20.551773 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.551751 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-trusted-ca\") pod \"image-registry-7fd54c5856-xxztt\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " pod="openshift-image-registry/image-registry-7fd54c5856-xxztt"
Apr 16 13:59:20.555100 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.555077 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-image-registry-private-configuration\") pod \"image-registry-7fd54c5856-xxztt\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " pod="openshift-image-registry/image-registry-7fd54c5856-xxztt"
Apr 16 13:59:20.555209 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.555114 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-installation-pull-secrets\") pod \"image-registry-7fd54c5856-xxztt\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " pod="openshift-image-registry/image-registry-7fd54c5856-xxztt"
Apr 16 13:59:20.558601 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.558580 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9m7m\" (UniqueName: \"kubernetes.io/projected/f334ac90-d973-40ab-bade-1a585fb2d9b2-kube-api-access-l9m7m\") pod \"dns-default-rdgwf\" (UID: \"f334ac90-d973-40ab-bade-1a585fb2d9b2\") " pod="openshift-dns/dns-default-rdgwf"
Apr 16 13:59:20.566121 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.566086 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-bound-sa-token\") pod \"image-registry-7fd54c5856-xxztt\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " pod="openshift-image-registry/image-registry-7fd54c5856-xxztt"
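A note on the kube-api-access-n4qzk failure above: it is a projected volume, a single mount built from a bound service-account token plus the namespace's kube-root-ca.crt and openshift-service-ca.crt ConfigMaps, which is why the one SetUp stays blocked until both "not registered" objects resolve. A minimal sketch of that source list using the Kubernetes Go API types (the two ConfigMap names come from the log; the token path and everything else here is assumed for illustration):

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	func main() {
		// Shape of a kube-api-access-* volume: three sources, all of which
		// must be available before the projected mount can be set up.
		vol := corev1.Volume{
			Name: "kube-api-access-n4qzk",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{
						{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{Path: "token"}},
						{ConfigMap: &corev1.ConfigMapProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						}},
						{ConfigMap: &corev1.ConfigMapProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: "openshift-service-ca.crt"},
						}},
					},
				},
			},
		}
		fmt.Printf("%s has %d sources; SetUp fails until every one resolves\n",
			vol.Name, len(vol.VolumeSource.Projected.Sources))
	}

The 13:59:52 entries further down show the other half of the story: once the reflector reports "Caches populated" for both ConfigMaps, the same mount succeeds on the next retry.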
(UniqueName: \"kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-bound-sa-token\") pod \"image-registry-7fd54c5856-xxztt\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " pod="openshift-image-registry/image-registry-7fd54c5856-xxztt" Apr 16 13:59:20.566390 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.566370 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7gpl\" (UniqueName: \"kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-kube-api-access-d7gpl\") pod \"image-registry-7fd54c5856-xxztt\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " pod="openshift-image-registry/image-registry-7fd54c5856-xxztt" Apr 16 13:59:20.598221 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.598196 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bdcn7" Apr 16 13:59:20.598350 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.598196 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-mqpls" Apr 16 13:59:20.598350 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.598202 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99gsl" Apr 16 13:59:20.600996 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.600789 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Apr 16 13:59:20.600996 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.600789 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"original-pull-secret\"" Apr 16 13:59:20.600996 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.600835 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Apr 16 13:59:20.600996 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.600843 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-j4r87\"" Apr 16 13:59:20.600996 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.600859 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-b98gv\"" Apr 16 13:59:20.600996 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.600793 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Apr 16 13:59:20.651813 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.651782 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-42q6v\" (UniqueName: \"kubernetes.io/projected/3bfef623-c79c-41c4-9fc5-0a25ecab1f4a-kube-api-access-42q6v\") pod \"ingress-canary-92l7v\" (UID: \"3bfef623-c79c-41c4-9fc5-0a25ecab1f4a\") " pod="openshift-ingress-canary/ingress-canary-92l7v" Apr 16 13:59:20.651981 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.651832 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3bfef623-c79c-41c4-9fc5-0a25ecab1f4a-cert\") pod \"ingress-canary-92l7v\" (UID: \"3bfef623-c79c-41c4-9fc5-0a25ecab1f4a\") " pod="openshift-ingress-canary/ingress-canary-92l7v" Apr 16 13:59:20.651981 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:20.651954 2569 secret.go:189] Couldn't get secret 
openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 13:59:20.652093 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:20.652026 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3bfef623-c79c-41c4-9fc5-0a25ecab1f4a-cert podName:3bfef623-c79c-41c4-9fc5-0a25ecab1f4a nodeName:}" failed. No retries permitted until 2026-04-16 13:59:21.152007364 +0000 UTC m=+35.173295112 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3bfef623-c79c-41c4-9fc5-0a25ecab1f4a-cert") pod "ingress-canary-92l7v" (UID: "3bfef623-c79c-41c4-9fc5-0a25ecab1f4a") : secret "canary-serving-cert" not found Apr 16 13:59:20.660465 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:20.660440 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-42q6v\" (UniqueName: \"kubernetes.io/projected/3bfef623-c79c-41c4-9fc5-0a25ecab1f4a-kube-api-access-42q6v\") pod \"ingress-canary-92l7v\" (UID: \"3bfef623-c79c-41c4-9fc5-0a25ecab1f4a\") " pod="openshift-ingress-canary/ingress-canary-92l7v" Apr 16 13:59:21.056118 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:21.056084 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-registry-tls\") pod \"image-registry-7fd54c5856-xxztt\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " pod="openshift-image-registry/image-registry-7fd54c5856-xxztt" Apr 16 13:59:21.056414 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:21.056149 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f334ac90-d973-40ab-bade-1a585fb2d9b2-metrics-tls\") pod \"dns-default-rdgwf\" (UID: \"f334ac90-d973-40ab-bade-1a585fb2d9b2\") " pod="openshift-dns/dns-default-rdgwf" Apr 16 13:59:21.056414 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:21.056262 2569 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 16 13:59:21.056414 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:21.056284 2569 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-7fd54c5856-xxztt: secret "image-registry-tls" not found Apr 16 13:59:21.056414 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:21.056313 2569 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 13:59:21.056414 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:21.056354 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-registry-tls podName:b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2 nodeName:}" failed. No retries permitted until 2026-04-16 13:59:22.056331331 +0000 UTC m=+36.077619073 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-registry-tls") pod "image-registry-7fd54c5856-xxztt" (UID: "b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2") : secret "image-registry-tls" not found Apr 16 13:59:21.056414 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:21.056376 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f334ac90-d973-40ab-bade-1a585fb2d9b2-metrics-tls podName:f334ac90-d973-40ab-bade-1a585fb2d9b2 nodeName:}" failed. 
No retries permitted until 2026-04-16 13:59:22.056364678 +0000 UTC m=+36.077652413 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f334ac90-d973-40ab-bade-1a585fb2d9b2-metrics-tls") pod "dns-default-rdgwf" (UID: "f334ac90-d973-40ab-bade-1a585fb2d9b2") : secret "dns-default-metrics-tls" not found Apr 16 13:59:21.157108 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:21.157071 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3bfef623-c79c-41c4-9fc5-0a25ecab1f4a-cert\") pod \"ingress-canary-92l7v\" (UID: \"3bfef623-c79c-41c4-9fc5-0a25ecab1f4a\") " pod="openshift-ingress-canary/ingress-canary-92l7v" Apr 16 13:59:21.157387 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:21.157223 2569 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 13:59:21.157387 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:21.157304 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3bfef623-c79c-41c4-9fc5-0a25ecab1f4a-cert podName:3bfef623-c79c-41c4-9fc5-0a25ecab1f4a nodeName:}" failed. No retries permitted until 2026-04-16 13:59:22.157287981 +0000 UTC m=+36.178575726 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3bfef623-c79c-41c4-9fc5-0a25ecab1f4a-cert") pod "ingress-canary-92l7v" (UID: "3bfef623-c79c-41c4-9fc5-0a25ecab1f4a") : secret "canary-serving-cert" not found Apr 16 13:59:22.065229 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:22.065189 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f334ac90-d973-40ab-bade-1a585fb2d9b2-metrics-tls\") pod \"dns-default-rdgwf\" (UID: \"f334ac90-d973-40ab-bade-1a585fb2d9b2\") " pod="openshift-dns/dns-default-rdgwf" Apr 16 13:59:22.065820 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:22.065315 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-registry-tls\") pod \"image-registry-7fd54c5856-xxztt\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " pod="openshift-image-registry/image-registry-7fd54c5856-xxztt" Apr 16 13:59:22.065820 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:22.065351 2569 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 13:59:22.065820 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:22.065423 2569 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 16 13:59:22.065820 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:22.065434 2569 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-7fd54c5856-xxztt: secret "image-registry-tls" not found Apr 16 13:59:22.065820 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:22.065434 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f334ac90-d973-40ab-bade-1a585fb2d9b2-metrics-tls podName:f334ac90-d973-40ab-bade-1a585fb2d9b2 nodeName:}" failed. No retries permitted until 2026-04-16 13:59:24.065411541 +0000 UTC m=+38.086699271 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f334ac90-d973-40ab-bade-1a585fb2d9b2-metrics-tls") pod "dns-default-rdgwf" (UID: "f334ac90-d973-40ab-bade-1a585fb2d9b2") : secret "dns-default-metrics-tls" not found Apr 16 13:59:22.065820 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:22.065480 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-registry-tls podName:b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2 nodeName:}" failed. No retries permitted until 2026-04-16 13:59:24.065469083 +0000 UTC m=+38.086756809 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-registry-tls") pod "image-registry-7fd54c5856-xxztt" (UID: "b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2") : secret "image-registry-tls" not found Apr 16 13:59:22.165639 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:22.165607 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3bfef623-c79c-41c4-9fc5-0a25ecab1f4a-cert\") pod \"ingress-canary-92l7v\" (UID: \"3bfef623-c79c-41c4-9fc5-0a25ecab1f4a\") " pod="openshift-ingress-canary/ingress-canary-92l7v" Apr 16 13:59:22.165783 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:22.165732 2569 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 13:59:22.165844 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:22.165800 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3bfef623-c79c-41c4-9fc5-0a25ecab1f4a-cert podName:3bfef623-c79c-41c4-9fc5-0a25ecab1f4a nodeName:}" failed. No retries permitted until 2026-04-16 13:59:24.165781904 +0000 UTC m=+38.187069629 (durationBeforeRetry 2s). 
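Note the durationBeforeRetry progression in the nestedpendingoperations errors: 500ms, 1s, 2s, and now 4s here, continuing to 8s, 16s, 32s, and 1m4s in the entries below. Each failed MountVolume.SetUp doubles the delay before the volume manager may retry that operation. A minimal Go sketch of the doubling pattern visible in this log (the cap is an assumption for the sketch, not a value taken from these entries):

	package main

	import (
		"fmt"
		"time"
	)

	const (
		initialDelay = 500 * time.Millisecond
		maxDelay     = 2*time.Minute + 2*time.Second // assumed cap for the sketch
	)

	// nextDelay doubles the previous delay, starting at 500ms and
	// saturating at maxDelay, matching the sequence seen in the log.
	func nextDelay(current time.Duration) time.Duration {
		if current == 0 {
			return initialDelay
		}
		if next := 2 * current; next < maxDelay {
			return next
		}
		return maxDelay
	}

	func main() {
		var d time.Duration
		for i := 0; i < 9; i++ {
			d = nextDelay(d)
			fmt.Println(d) // 500ms 1s 2s 4s 8s 16s 32s 1m4s 2m2s
		}
	}

This is why a missing secret shows up in bursts that spread further and further apart rather than as a steady once-a-second error.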
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3bfef623-c79c-41c4-9fc5-0a25ecab1f4a-cert") pod "ingress-canary-92l7v" (UID: "3bfef623-c79c-41c4-9fc5-0a25ecab1f4a") : secret "canary-serving-cert" not found Apr 16 13:59:23.757366 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:23.757122 2569 generic.go:358] "Generic (PLEG): container finished" podID="f2f44068-07cc-44c3-b6bc-448389afc9ce" containerID="8f43aac06284f8c936f2259bd3e7e87aa88e53e1045f6173438b083762f07f21" exitCode=0 Apr 16 13:59:23.757366 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:23.757205 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-t28sg" event={"ID":"f2f44068-07cc-44c3-b6bc-448389afc9ce","Type":"ContainerDied","Data":"8f43aac06284f8c936f2259bd3e7e87aa88e53e1045f6173438b083762f07f21"} Apr 16 13:59:24.080726 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:24.080651 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-registry-tls\") pod \"image-registry-7fd54c5856-xxztt\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " pod="openshift-image-registry/image-registry-7fd54c5856-xxztt" Apr 16 13:59:24.080726 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:24.080699 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f334ac90-d973-40ab-bade-1a585fb2d9b2-metrics-tls\") pod \"dns-default-rdgwf\" (UID: \"f334ac90-d973-40ab-bade-1a585fb2d9b2\") " pod="openshift-dns/dns-default-rdgwf" Apr 16 13:59:24.080891 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:24.080778 2569 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 13:59:24.080891 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:24.080784 2569 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 16 13:59:24.080891 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:24.080801 2569 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-7fd54c5856-xxztt: secret "image-registry-tls" not found Apr 16 13:59:24.080891 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:24.080837 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f334ac90-d973-40ab-bade-1a585fb2d9b2-metrics-tls podName:f334ac90-d973-40ab-bade-1a585fb2d9b2 nodeName:}" failed. No retries permitted until 2026-04-16 13:59:28.080818514 +0000 UTC m=+42.102106260 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f334ac90-d973-40ab-bade-1a585fb2d9b2-metrics-tls") pod "dns-default-rdgwf" (UID: "f334ac90-d973-40ab-bade-1a585fb2d9b2") : secret "dns-default-metrics-tls" not found Apr 16 13:59:24.080891 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:24.080874 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-registry-tls podName:b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2 nodeName:}" failed. No retries permitted until 2026-04-16 13:59:28.080868209 +0000 UTC m=+42.102155935 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-registry-tls") pod "image-registry-7fd54c5856-xxztt" (UID: "b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2") : secret "image-registry-tls" not found Apr 16 13:59:24.181726 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:24.181697 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3bfef623-c79c-41c4-9fc5-0a25ecab1f4a-cert\") pod \"ingress-canary-92l7v\" (UID: \"3bfef623-c79c-41c4-9fc5-0a25ecab1f4a\") " pod="openshift-ingress-canary/ingress-canary-92l7v" Apr 16 13:59:24.181849 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:24.181829 2569 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 13:59:24.181897 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:24.181884 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3bfef623-c79c-41c4-9fc5-0a25ecab1f4a-cert podName:3bfef623-c79c-41c4-9fc5-0a25ecab1f4a nodeName:}" failed. No retries permitted until 2026-04-16 13:59:28.181870814 +0000 UTC m=+42.203158545 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3bfef623-c79c-41c4-9fc5-0a25ecab1f4a-cert") pod "ingress-canary-92l7v" (UID: "3bfef623-c79c-41c4-9fc5-0a25ecab1f4a") : secret "canary-serving-cert" not found Apr 16 13:59:24.761683 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:24.761651 2569 generic.go:358] "Generic (PLEG): container finished" podID="f2f44068-07cc-44c3-b6bc-448389afc9ce" containerID="a47e1d2899ab066ce37f0fb60756f99a2b014fb2de9c52ee361cc6d10fd12ad9" exitCode=0 Apr 16 13:59:24.762043 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:24.761689 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-t28sg" event={"ID":"f2f44068-07cc-44c3-b6bc-448389afc9ce","Type":"ContainerDied","Data":"a47e1d2899ab066ce37f0fb60756f99a2b014fb2de9c52ee361cc6d10fd12ad9"} Apr 16 13:59:25.190286 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:25.190253 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/be2e3878-e04d-4a67-b1b8-fcab8f431c5b-original-pull-secret\") pod \"global-pull-secret-syncer-mqpls\" (UID: \"be2e3878-e04d-4a67-b1b8-fcab8f431c5b\") " pod="kube-system/global-pull-secret-syncer-mqpls" Apr 16 13:59:25.193503 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:25.193484 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/be2e3878-e04d-4a67-b1b8-fcab8f431c5b-original-pull-secret\") pod \"global-pull-secret-syncer-mqpls\" (UID: \"be2e3878-e04d-4a67-b1b8-fcab8f431c5b\") " pod="kube-system/global-pull-secret-syncer-mqpls" Apr 16 13:59:25.418365 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:25.418329 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-mqpls" Apr 16 13:59:25.577182 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:25.577132 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-mqpls"] Apr 16 13:59:25.582702 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:59:25.582562 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe2e3878_e04d_4a67_b1b8_fcab8f431c5b.slice/crio-e481924a3ec2c18ce23c02f6193822e3876aa42e0d4139bc16129b5d6a21e269 WatchSource:0}: Error finding container e481924a3ec2c18ce23c02f6193822e3876aa42e0d4139bc16129b5d6a21e269: Status 404 returned error can't find the container with id e481924a3ec2c18ce23c02f6193822e3876aa42e0d4139bc16129b5d6a21e269 Apr 16 13:59:25.771017 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:25.770938 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-t28sg" event={"ID":"f2f44068-07cc-44c3-b6bc-448389afc9ce","Type":"ContainerStarted","Data":"56bec8c1bd377c004979a380b07794a8512d41e03d488f18b315b4f673b619a1"} Apr 16 13:59:25.771941 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:25.771917 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-mqpls" event={"ID":"be2e3878-e04d-4a67-b1b8-fcab8f431c5b","Type":"ContainerStarted","Data":"e481924a3ec2c18ce23c02f6193822e3876aa42e0d4139bc16129b5d6a21e269"} Apr 16 13:59:25.792570 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:25.792532 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-t28sg" podStartSLOduration=5.866369702 podStartE2EDuration="39.792518616s" podCreationTimestamp="2026-04-16 13:58:46 +0000 UTC" firstStartedPulling="2026-04-16 13:58:48.823718897 +0000 UTC m=+2.845006624" lastFinishedPulling="2026-04-16 13:59:22.749867813 +0000 UTC m=+36.771155538" observedRunningTime="2026-04-16 13:59:25.791689867 +0000 UTC m=+39.812977616" watchObservedRunningTime="2026-04-16 13:59:25.792518616 +0000 UTC m=+39.813806364" Apr 16 13:59:28.112881 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:28.112842 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f334ac90-d973-40ab-bade-1a585fb2d9b2-metrics-tls\") pod \"dns-default-rdgwf\" (UID: \"f334ac90-d973-40ab-bade-1a585fb2d9b2\") " pod="openshift-dns/dns-default-rdgwf" Apr 16 13:59:28.113384 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:28.112939 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-registry-tls\") pod \"image-registry-7fd54c5856-xxztt\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " pod="openshift-image-registry/image-registry-7fd54c5856-xxztt" Apr 16 13:59:28.113384 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:28.113008 2569 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 13:59:28.113384 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:28.113034 2569 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 16 13:59:28.113384 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:28.113045 2569 projected.go:194] Error preparing data for projected volume registry-tls for pod 
openshift-image-registry/image-registry-7fd54c5856-xxztt: secret "image-registry-tls" not found Apr 16 13:59:28.113384 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:28.113088 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f334ac90-d973-40ab-bade-1a585fb2d9b2-metrics-tls podName:f334ac90-d973-40ab-bade-1a585fb2d9b2 nodeName:}" failed. No retries permitted until 2026-04-16 13:59:36.113066913 +0000 UTC m=+50.134354645 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f334ac90-d973-40ab-bade-1a585fb2d9b2-metrics-tls") pod "dns-default-rdgwf" (UID: "f334ac90-d973-40ab-bade-1a585fb2d9b2") : secret "dns-default-metrics-tls" not found Apr 16 13:59:28.113384 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:28.113106 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-registry-tls podName:b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2 nodeName:}" failed. No retries permitted until 2026-04-16 13:59:36.113097948 +0000 UTC m=+50.134385673 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-registry-tls") pod "image-registry-7fd54c5856-xxztt" (UID: "b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2") : secret "image-registry-tls" not found Apr 16 13:59:28.213772 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:28.213729 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3bfef623-c79c-41c4-9fc5-0a25ecab1f4a-cert\") pod \"ingress-canary-92l7v\" (UID: \"3bfef623-c79c-41c4-9fc5-0a25ecab1f4a\") " pod="openshift-ingress-canary/ingress-canary-92l7v" Apr 16 13:59:28.213956 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:28.213839 2569 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 13:59:28.213956 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:28.213905 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3bfef623-c79c-41c4-9fc5-0a25ecab1f4a-cert podName:3bfef623-c79c-41c4-9fc5-0a25ecab1f4a nodeName:}" failed. No retries permitted until 2026-04-16 13:59:36.213887829 +0000 UTC m=+50.235175567 (durationBeforeRetry 8s). 
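All three failing mounts at this point (registry-tls, metrics-tls, cert) are waiting on serving-cert secrets that no controller has published yet; the kubelet cannot create them, it can only keep retrying. If you wanted to watch for one of these secrets to appear out-of-band, a hedged client-go sketch follows (the kubeconfig path is an assumption; the namespace and secret name are taken from the log):

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from a local kubeconfig; the path is assumed.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll for the secret the ingress canary pod is blocked on above.
		for {
			_, err := cs.CoreV1().Secrets("openshift-ingress-canary").Get(
				context.TODO(), "canary-serving-cert", metav1.GetOptions{})
			if err == nil {
				fmt.Println("canary-serving-cert exists; the kubelet's next retry should mount it")
				return
			}
			fmt.Println("still waiting:", err)
			time.Sleep(10 * time.Second)
		}
	}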
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3bfef623-c79c-41c4-9fc5-0a25ecab1f4a-cert") pod "ingress-canary-92l7v" (UID: "3bfef623-c79c-41c4-9fc5-0a25ecab1f4a") : secret "canary-serving-cert" not found Apr 16 13:59:31.783940 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:31.783898 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-mqpls" event={"ID":"be2e3878-e04d-4a67-b1b8-fcab8f431c5b","Type":"ContainerStarted","Data":"fcc858bdb923dbb267d436f48217529d932c16b043da62e97ace6a8b9be41e94"} Apr 16 13:59:31.798921 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:31.798880 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/global-pull-secret-syncer-mqpls" podStartSLOduration=33.421452661000004 podStartE2EDuration="38.798866665s" podCreationTimestamp="2026-04-16 13:58:53 +0000 UTC" firstStartedPulling="2026-04-16 13:59:25.584717377 +0000 UTC m=+39.606005104" lastFinishedPulling="2026-04-16 13:59:30.96213138 +0000 UTC m=+44.983419108" observedRunningTime="2026-04-16 13:59:31.79779431 +0000 UTC m=+45.819082058" watchObservedRunningTime="2026-04-16 13:59:31.798866665 +0000 UTC m=+45.820154412" Apr 16 13:59:36.169011 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:36.168972 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-registry-tls\") pod \"image-registry-7fd54c5856-xxztt\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " pod="openshift-image-registry/image-registry-7fd54c5856-xxztt" Apr 16 13:59:36.169510 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:36.169027 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f334ac90-d973-40ab-bade-1a585fb2d9b2-metrics-tls\") pod \"dns-default-rdgwf\" (UID: \"f334ac90-d973-40ab-bade-1a585fb2d9b2\") " pod="openshift-dns/dns-default-rdgwf" Apr 16 13:59:36.169510 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:36.169113 2569 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 13:59:36.169510 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:36.169131 2569 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 16 13:59:36.169510 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:36.169156 2569 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-7fd54c5856-xxztt: secret "image-registry-tls" not found Apr 16 13:59:36.169510 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:36.169178 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f334ac90-d973-40ab-bade-1a585fb2d9b2-metrics-tls podName:f334ac90-d973-40ab-bade-1a585fb2d9b2 nodeName:}" failed. No retries permitted until 2026-04-16 13:59:52.16916383 +0000 UTC m=+66.190451558 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f334ac90-d973-40ab-bade-1a585fb2d9b2-metrics-tls") pod "dns-default-rdgwf" (UID: "f334ac90-d973-40ab-bade-1a585fb2d9b2") : secret "dns-default-metrics-tls" not found Apr 16 13:59:36.169510 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:36.169227 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-registry-tls podName:b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2 nodeName:}" failed. No retries permitted until 2026-04-16 13:59:52.169209188 +0000 UTC m=+66.190496934 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-registry-tls") pod "image-registry-7fd54c5856-xxztt" (UID: "b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2") : secret "image-registry-tls" not found Apr 16 13:59:36.269726 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:36.269691 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3bfef623-c79c-41c4-9fc5-0a25ecab1f4a-cert\") pod \"ingress-canary-92l7v\" (UID: \"3bfef623-c79c-41c4-9fc5-0a25ecab1f4a\") " pod="openshift-ingress-canary/ingress-canary-92l7v" Apr 16 13:59:36.269880 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:36.269823 2569 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 13:59:36.269880 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:36.269878 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3bfef623-c79c-41c4-9fc5-0a25ecab1f4a-cert podName:3bfef623-c79c-41c4-9fc5-0a25ecab1f4a nodeName:}" failed. No retries permitted until 2026-04-16 13:59:52.269864861 +0000 UTC m=+66.291152591 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3bfef623-c79c-41c4-9fc5-0a25ecab1f4a-cert") pod "ingress-canary-92l7v" (UID: "3bfef623-c79c-41c4-9fc5-0a25ecab1f4a") : secret "canary-serving-cert" not found Apr 16 13:59:46.752671 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:46.752642 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ljx7q" Apr 16 13:59:52.181042 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:52.181005 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-registry-tls\") pod \"image-registry-7fd54c5856-xxztt\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " pod="openshift-image-registry/image-registry-7fd54c5856-xxztt" Apr 16 13:59:52.181441 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:52.181050 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f334ac90-d973-40ab-bade-1a585fb2d9b2-metrics-tls\") pod \"dns-default-rdgwf\" (UID: \"f334ac90-d973-40ab-bade-1a585fb2d9b2\") " pod="openshift-dns/dns-default-rdgwf" Apr 16 13:59:52.181441 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:52.181141 2569 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 13:59:52.181441 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:52.181145 2569 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 16 13:59:52.181441 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:52.181164 2569 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-7fd54c5856-xxztt: secret "image-registry-tls" not found Apr 16 13:59:52.181441 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:52.181187 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f334ac90-d973-40ab-bade-1a585fb2d9b2-metrics-tls podName:f334ac90-d973-40ab-bade-1a585fb2d9b2 nodeName:}" failed. No retries permitted until 2026-04-16 14:00:24.181174091 +0000 UTC m=+98.202461816 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f334ac90-d973-40ab-bade-1a585fb2d9b2-metrics-tls") pod "dns-default-rdgwf" (UID: "f334ac90-d973-40ab-bade-1a585fb2d9b2") : secret "dns-default-metrics-tls" not found Apr 16 13:59:52.181441 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:52.181207 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-registry-tls podName:b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2 nodeName:}" failed. No retries permitted until 2026-04-16 14:00:24.181192584 +0000 UTC m=+98.202480309 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-registry-tls") pod "image-registry-7fd54c5856-xxztt" (UID: "b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2") : secret "image-registry-tls" not found Apr 16 13:59:52.282279 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:52.282225 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3bfef623-c79c-41c4-9fc5-0a25ecab1f4a-cert\") pod \"ingress-canary-92l7v\" (UID: \"3bfef623-c79c-41c4-9fc5-0a25ecab1f4a\") " pod="openshift-ingress-canary/ingress-canary-92l7v" Apr 16 13:59:52.282279 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:52.282285 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6cc56cdf-0ee0-49a9-b52c-65d8745cb390-metrics-certs\") pod \"network-metrics-daemon-99gsl\" (UID: \"6cc56cdf-0ee0-49a9-b52c-65d8745cb390\") " pod="openshift-multus/network-metrics-daemon-99gsl" Apr 16 13:59:52.282543 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:52.282387 2569 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 13:59:52.282543 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:52.282455 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3bfef623-c79c-41c4-9fc5-0a25ecab1f4a-cert podName:3bfef623-c79c-41c4-9fc5-0a25ecab1f4a nodeName:}" failed. No retries permitted until 2026-04-16 14:00:24.282436844 +0000 UTC m=+98.303724575 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3bfef623-c79c-41c4-9fc5-0a25ecab1f4a-cert") pod "ingress-canary-92l7v" (UID: "3bfef623-c79c-41c4-9fc5-0a25ecab1f4a") : secret "canary-serving-cert" not found Apr 16 13:59:52.284704 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:52.284686 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Apr 16 13:59:52.293287 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:52.293267 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Apr 16 13:59:52.293381 ip-10-0-129-84 kubenswrapper[2569]: E0416 13:59:52.293325 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cc56cdf-0ee0-49a9-b52c-65d8745cb390-metrics-certs podName:6cc56cdf-0ee0-49a9-b52c-65d8745cb390 nodeName:}" failed. No retries permitted until 2026-04-16 14:00:56.293307125 +0000 UTC m=+130.314594852 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6cc56cdf-0ee0-49a9-b52c-65d8745cb390-metrics-certs") pod "network-metrics-daemon-99gsl" (UID: "6cc56cdf-0ee0-49a9-b52c-65d8745cb390") : secret "metrics-daemon-secret" not found Apr 16 13:59:52.484391 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:52.484323 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n4qzk\" (UniqueName: \"kubernetes.io/projected/84822439-41ed-4bb8-b7d6-6784ad00eeaf-kube-api-access-n4qzk\") pod \"network-check-target-bdcn7\" (UID: \"84822439-41ed-4bb8-b7d6-6784ad00eeaf\") " pod="openshift-network-diagnostics/network-check-target-bdcn7" Apr 16 13:59:52.486822 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:52.486806 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Apr 16 13:59:52.496833 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:52.496816 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Apr 16 13:59:52.508728 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:52.508708 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4qzk\" (UniqueName: \"kubernetes.io/projected/84822439-41ed-4bb8-b7d6-6784ad00eeaf-kube-api-access-n4qzk\") pod \"network-check-target-bdcn7\" (UID: \"84822439-41ed-4bb8-b7d6-6784ad00eeaf\") " pod="openshift-network-diagnostics/network-check-target-bdcn7" Apr 16 13:59:52.712501 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:52.712475 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-j4r87\"" Apr 16 13:59:52.720444 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:52.720430 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-bdcn7" Apr 16 13:59:52.850586 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:52.850558 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-bdcn7"] Apr 16 13:59:52.853905 ip-10-0-129-84 kubenswrapper[2569]: W0416 13:59:52.853880 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84822439_41ed_4bb8_b7d6_6784ad00eeaf.slice/crio-f3cb7eaea28b4e82c5539339484b993724903ee7ea34e7d46d60fd3807641568 WatchSource:0}: Error finding container f3cb7eaea28b4e82c5539339484b993724903ee7ea34e7d46d60fd3807641568: Status 404 returned error can't find the container with id f3cb7eaea28b4e82c5539339484b993724903ee7ea34e7d46d60fd3807641568 Apr 16 13:59:53.823626 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:53.823589 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-bdcn7" event={"ID":"84822439-41ed-4bb8-b7d6-6784ad00eeaf","Type":"ContainerStarted","Data":"f3cb7eaea28b4e82c5539339484b993724903ee7ea34e7d46d60fd3807641568"} Apr 16 13:59:56.830465 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:56.830347 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-bdcn7" event={"ID":"84822439-41ed-4bb8-b7d6-6784ad00eeaf","Type":"ContainerStarted","Data":"5ffea15af9d1aeb13efd147c32c47b7f45ef2b3299253fdbafbcb92bdf6bf031"} Apr 16 13:59:56.830799 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:56.830559 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-bdcn7" Apr 16 13:59:56.845841 ip-10-0-129-84 kubenswrapper[2569]: I0416 13:59:56.845801 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-target-bdcn7" podStartSLOduration=67.609964596 podStartE2EDuration="1m10.845790242s" podCreationTimestamp="2026-04-16 13:58:46 +0000 UTC" firstStartedPulling="2026-04-16 13:59:52.855747018 +0000 UTC m=+66.877034744" lastFinishedPulling="2026-04-16 13:59:56.091572661 +0000 UTC m=+70.112860390" observedRunningTime="2026-04-16 13:59:56.845349234 +0000 UTC m=+70.866636995" watchObservedRunningTime="2026-04-16 13:59:56.845790242 +0000 UTC m=+70.867077967" Apr 16 14:00:24.211690 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:24.211639 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-registry-tls\") pod \"image-registry-7fd54c5856-xxztt\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " pod="openshift-image-registry/image-registry-7fd54c5856-xxztt" Apr 16 14:00:24.211690 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:24.211704 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f334ac90-d973-40ab-bade-1a585fb2d9b2-metrics-tls\") pod \"dns-default-rdgwf\" (UID: \"f334ac90-d973-40ab-bade-1a585fb2d9b2\") " pod="openshift-dns/dns-default-rdgwf" Apr 16 14:00:24.212193 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:00:24.211793 2569 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 16 14:00:24.212193 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:00:24.211811 2569 projected.go:194] Error 
preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-7fd54c5856-xxztt: secret "image-registry-tls" not found Apr 16 14:00:24.212193 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:00:24.211811 2569 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 14:00:24.212193 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:00:24.211866 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-registry-tls podName:b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2 nodeName:}" failed. No retries permitted until 2026-04-16 14:01:28.21185023 +0000 UTC m=+162.233137956 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-registry-tls") pod "image-registry-7fd54c5856-xxztt" (UID: "b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2") : secret "image-registry-tls" not found Apr 16 14:00:24.212193 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:00:24.211879 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f334ac90-d973-40ab-bade-1a585fb2d9b2-metrics-tls podName:f334ac90-d973-40ab-bade-1a585fb2d9b2 nodeName:}" failed. No retries permitted until 2026-04-16 14:01:28.211873489 +0000 UTC m=+162.233161216 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f334ac90-d973-40ab-bade-1a585fb2d9b2-metrics-tls") pod "dns-default-rdgwf" (UID: "f334ac90-d973-40ab-bade-1a585fb2d9b2") : secret "dns-default-metrics-tls" not found Apr 16 14:00:24.312347 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:24.312323 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3bfef623-c79c-41c4-9fc5-0a25ecab1f4a-cert\") pod \"ingress-canary-92l7v\" (UID: \"3bfef623-c79c-41c4-9fc5-0a25ecab1f4a\") " pod="openshift-ingress-canary/ingress-canary-92l7v" Apr 16 14:00:24.312472 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:00:24.312456 2569 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 14:00:24.312519 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:00:24.312510 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3bfef623-c79c-41c4-9fc5-0a25ecab1f4a-cert podName:3bfef623-c79c-41c4-9fc5-0a25ecab1f4a nodeName:}" failed. No retries permitted until 2026-04-16 14:01:28.312496893 +0000 UTC m=+162.333784619 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3bfef623-c79c-41c4-9fc5-0a25ecab1f4a-cert") pod "ingress-canary-92l7v" (UID: "3bfef623-c79c-41c4-9fc5-0a25ecab1f4a") : secret "canary-serving-cert" not found Apr 16 14:00:27.834663 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:27.834630 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-bdcn7" Apr 16 14:00:51.349669 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:51.349626 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/volume-data-source-validator-7d955d5dd4-9jvhw"] Apr 16 14:00:51.354184 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:51.354163 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6667474d89-jzj8g"] Apr 16 14:00:51.355806 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:51.354578 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/volume-data-source-validator-7d955d5dd4-9jvhw" Apr 16 14:00:51.358002 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:51.357966 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-jzj8g" Apr 16 14:00:51.358468 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:51.358447 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-storage-operator\"/\"volume-data-source-validator-dockercfg-6qzdx\"" Apr 16 14:00:51.359048 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:51.358997 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-storage-operator\"/\"kube-root-ca.crt\"" Apr 16 14:00:51.359349 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:51.359329 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-storage-operator\"/\"openshift-service-ca.crt\"" Apr 16 14:00:51.360160 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:51.360144 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"cluster-monitoring-operator-dockercfg-6jt8d\"" Apr 16 14:00:51.361197 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:51.361182 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"cluster-monitoring-operator-tls\"" Apr 16 14:00:51.362228 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:51.362212 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kube-root-ca.crt\"" Apr 16 14:00:51.362347 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:51.362214 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"openshift-service-ca.crt\"" Apr 16 14:00:51.362347 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:51.362313 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"telemetry-config\"" Apr 16 14:00:51.373864 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:51.373844 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/volume-data-source-validator-7d955d5dd4-9jvhw"] Apr 16 14:00:51.374794 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:51.374777 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-monitoring/cluster-monitoring-operator-6667474d89-jzj8g"] Apr 16 14:00:51.500248 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:51.500217 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tgp6\" (UniqueName: \"kubernetes.io/projected/e56c6526-2094-4c6e-8688-b4a0daf73cb6-kube-api-access-8tgp6\") pod \"volume-data-source-validator-7d955d5dd4-9jvhw\" (UID: \"e56c6526-2094-4c6e-8688-b4a0daf73cb6\") " pod="openshift-cluster-storage-operator/volume-data-source-validator-7d955d5dd4-9jvhw" Apr 16 14:00:51.500360 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:51.500274 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/fd96583f-aa32-457d-81a3-f0f6d9afe9d9-telemetry-config\") pod \"cluster-monitoring-operator-6667474d89-jzj8g\" (UID: \"fd96583f-aa32-457d-81a3-f0f6d9afe9d9\") " pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-jzj8g" Apr 16 14:00:51.500360 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:51.500313 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/fd96583f-aa32-457d-81a3-f0f6d9afe9d9-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6667474d89-jzj8g\" (UID: \"fd96583f-aa32-457d-81a3-f0f6d9afe9d9\") " pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-jzj8g" Apr 16 14:00:51.500360 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:51.500344 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qds8x\" (UniqueName: \"kubernetes.io/projected/fd96583f-aa32-457d-81a3-f0f6d9afe9d9-kube-api-access-qds8x\") pod \"cluster-monitoring-operator-6667474d89-jzj8g\" (UID: \"fd96583f-aa32-457d-81a3-f0f6d9afe9d9\") " pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-jzj8g" Apr 16 14:00:51.600847 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:51.600752 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/fd96583f-aa32-457d-81a3-f0f6d9afe9d9-telemetry-config\") pod \"cluster-monitoring-operator-6667474d89-jzj8g\" (UID: \"fd96583f-aa32-457d-81a3-f0f6d9afe9d9\") " pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-jzj8g" Apr 16 14:00:51.600847 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:51.600799 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/fd96583f-aa32-457d-81a3-f0f6d9afe9d9-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6667474d89-jzj8g\" (UID: \"fd96583f-aa32-457d-81a3-f0f6d9afe9d9\") " pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-jzj8g" Apr 16 14:00:51.600847 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:51.600817 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qds8x\" (UniqueName: \"kubernetes.io/projected/fd96583f-aa32-457d-81a3-f0f6d9afe9d9-kube-api-access-qds8x\") pod \"cluster-monitoring-operator-6667474d89-jzj8g\" (UID: \"fd96583f-aa32-457d-81a3-f0f6d9afe9d9\") " pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-jzj8g" Apr 16 14:00:51.601099 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:00:51.600913 2569 secret.go:189] Couldn't get secret 
openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Apr 16 14:00:51.601099 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:00:51.600995 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd96583f-aa32-457d-81a3-f0f6d9afe9d9-cluster-monitoring-operator-tls podName:fd96583f-aa32-457d-81a3-f0f6d9afe9d9 nodeName:}" failed. No retries permitted until 2026-04-16 14:00:52.10097477 +0000 UTC m=+126.122262518 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/fd96583f-aa32-457d-81a3-f0f6d9afe9d9-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6667474d89-jzj8g" (UID: "fd96583f-aa32-457d-81a3-f0f6d9afe9d9") : secret "cluster-monitoring-operator-tls" not found Apr 16 14:00:51.601191 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:51.601097 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8tgp6\" (UniqueName: \"kubernetes.io/projected/e56c6526-2094-4c6e-8688-b4a0daf73cb6-kube-api-access-8tgp6\") pod \"volume-data-source-validator-7d955d5dd4-9jvhw\" (UID: \"e56c6526-2094-4c6e-8688-b4a0daf73cb6\") " pod="openshift-cluster-storage-operator/volume-data-source-validator-7d955d5dd4-9jvhw" Apr 16 14:00:51.601503 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:51.601484 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/fd96583f-aa32-457d-81a3-f0f6d9afe9d9-telemetry-config\") pod \"cluster-monitoring-operator-6667474d89-jzj8g\" (UID: \"fd96583f-aa32-457d-81a3-f0f6d9afe9d9\") " pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-jzj8g" Apr 16 14:00:51.609463 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:51.609443 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tgp6\" (UniqueName: \"kubernetes.io/projected/e56c6526-2094-4c6e-8688-b4a0daf73cb6-kube-api-access-8tgp6\") pod \"volume-data-source-validator-7d955d5dd4-9jvhw\" (UID: \"e56c6526-2094-4c6e-8688-b4a0daf73cb6\") " pod="openshift-cluster-storage-operator/volume-data-source-validator-7d955d5dd4-9jvhw" Apr 16 14:00:51.609862 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:51.609840 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qds8x\" (UniqueName: \"kubernetes.io/projected/fd96583f-aa32-457d-81a3-f0f6d9afe9d9-kube-api-access-qds8x\") pod \"cluster-monitoring-operator-6667474d89-jzj8g\" (UID: \"fd96583f-aa32-457d-81a3-f0f6d9afe9d9\") " pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-jzj8g" Apr 16 14:00:51.666293 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:51.666263 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/volume-data-source-validator-7d955d5dd4-9jvhw" Apr 16 14:00:51.775589 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:51.775559 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/volume-data-source-validator-7d955d5dd4-9jvhw"] Apr 16 14:00:51.778519 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:00:51.778493 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode56c6526_2094_4c6e_8688_b4a0daf73cb6.slice/crio-351e33abd6a1d7a122257164255dae15793cbc3430dad415c28af73be7e5a8d1 WatchSource:0}: Error finding container 351e33abd6a1d7a122257164255dae15793cbc3430dad415c28af73be7e5a8d1: Status 404 returned error can't find the container with id 351e33abd6a1d7a122257164255dae15793cbc3430dad415c28af73be7e5a8d1 Apr 16 14:00:51.928405 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:51.928373 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/volume-data-source-validator-7d955d5dd4-9jvhw" event={"ID":"e56c6526-2094-4c6e-8688-b4a0daf73cb6","Type":"ContainerStarted","Data":"351e33abd6a1d7a122257164255dae15793cbc3430dad415c28af73be7e5a8d1"} Apr 16 14:00:52.105176 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:52.105148 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/fd96583f-aa32-457d-81a3-f0f6d9afe9d9-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6667474d89-jzj8g\" (UID: \"fd96583f-aa32-457d-81a3-f0f6d9afe9d9\") " pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-jzj8g" Apr 16 14:00:52.105315 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:00:52.105296 2569 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Apr 16 14:00:52.105376 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:00:52.105367 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd96583f-aa32-457d-81a3-f0f6d9afe9d9-cluster-monitoring-operator-tls podName:fd96583f-aa32-457d-81a3-f0f6d9afe9d9 nodeName:}" failed. No retries permitted until 2026-04-16 14:00:53.105352759 +0000 UTC m=+127.126640486 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/fd96583f-aa32-457d-81a3-f0f6d9afe9d9-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6667474d89-jzj8g" (UID: "fd96583f-aa32-457d-81a3-f0f6d9afe9d9") : secret "cluster-monitoring-operator-tls" not found Apr 16 14:00:52.930930 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:52.930900 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/volume-data-source-validator-7d955d5dd4-9jvhw" event={"ID":"e56c6526-2094-4c6e-8688-b4a0daf73cb6","Type":"ContainerStarted","Data":"260d03cdf48583fb43bb7bb872c7d482fd03ddf4b885c6990797363c7833ae57"} Apr 16 14:00:52.945292 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:52.945250 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/volume-data-source-validator-7d955d5dd4-9jvhw" podStartSLOduration=0.857539708 podStartE2EDuration="1.945221785s" podCreationTimestamp="2026-04-16 14:00:51 +0000 UTC" firstStartedPulling="2026-04-16 14:00:51.780182475 +0000 UTC m=+125.801470204" lastFinishedPulling="2026-04-16 14:00:52.867864555 +0000 UTC m=+126.889152281" observedRunningTime="2026-04-16 14:00:52.943702985 +0000 UTC m=+126.964990734" watchObservedRunningTime="2026-04-16 14:00:52.945221785 +0000 UTC m=+126.966509533" Apr 16 14:00:53.112164 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:53.112126 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/fd96583f-aa32-457d-81a3-f0f6d9afe9d9-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6667474d89-jzj8g\" (UID: \"fd96583f-aa32-457d-81a3-f0f6d9afe9d9\") " pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-jzj8g" Apr 16 14:00:53.112350 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:00:53.112277 2569 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Apr 16 14:00:53.112350 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:00:53.112345 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd96583f-aa32-457d-81a3-f0f6d9afe9d9-cluster-monitoring-operator-tls podName:fd96583f-aa32-457d-81a3-f0f6d9afe9d9 nodeName:}" failed. No retries permitted until 2026-04-16 14:00:55.112325334 +0000 UTC m=+129.133613062 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/fd96583f-aa32-457d-81a3-f0f6d9afe9d9-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6667474d89-jzj8g" (UID: "fd96583f-aa32-457d-81a3-f0f6d9afe9d9") : secret "cluster-monitoring-operator-tls" not found Apr 16 14:00:55.125544 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:55.125506 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/fd96583f-aa32-457d-81a3-f0f6d9afe9d9-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6667474d89-jzj8g\" (UID: \"fd96583f-aa32-457d-81a3-f0f6d9afe9d9\") " pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-jzj8g" Apr 16 14:00:55.125951 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:00:55.125641 2569 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Apr 16 14:00:55.125951 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:00:55.125706 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd96583f-aa32-457d-81a3-f0f6d9afe9d9-cluster-monitoring-operator-tls podName:fd96583f-aa32-457d-81a3-f0f6d9afe9d9 nodeName:}" failed. No retries permitted until 2026-04-16 14:00:59.125690375 +0000 UTC m=+133.146978102 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/fd96583f-aa32-457d-81a3-f0f6d9afe9d9-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6667474d89-jzj8g" (UID: "fd96583f-aa32-457d-81a3-f0f6d9afe9d9") : secret "cluster-monitoring-operator-tls" not found Apr 16 14:00:56.333366 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:56.333312 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6cc56cdf-0ee0-49a9-b52c-65d8745cb390-metrics-certs\") pod \"network-metrics-daemon-99gsl\" (UID: \"6cc56cdf-0ee0-49a9-b52c-65d8745cb390\") " pod="openshift-multus/network-metrics-daemon-99gsl" Apr 16 14:00:56.333745 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:00:56.333459 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Apr 16 14:00:56.333745 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:00:56.333531 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cc56cdf-0ee0-49a9-b52c-65d8745cb390-metrics-certs podName:6cc56cdf-0ee0-49a9-b52c-65d8745cb390 nodeName:}" failed. No retries permitted until 2026-04-16 14:02:58.333515743 +0000 UTC m=+252.354803469 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6cc56cdf-0ee0-49a9-b52c-65d8745cb390-metrics-certs") pod "network-metrics-daemon-99gsl" (UID: "6cc56cdf-0ee0-49a9-b52c-65d8745cb390") : secret "metrics-daemon-secret" not found Apr 16 14:00:56.376858 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:56.376833 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-667775844f-tv26r"] Apr 16 14:00:56.379632 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:56.379618 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-tv26r" Apr 16 14:00:56.381858 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:56.381835 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Apr 16 14:00:56.382022 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:56.382002 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Apr 16 14:00:56.382126 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:56.382106 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Apr 16 14:00:56.382731 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:56.382715 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-bcq7s\"" Apr 16 14:00:56.386164 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:56.386134 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-667775844f-tv26r"] Apr 16 14:00:56.434300 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:56.434270 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mhpv\" (UniqueName: \"kubernetes.io/projected/1c0043ca-b332-4736-afd5-f76d52dc18f8-kube-api-access-8mhpv\") pod \"cluster-samples-operator-667775844f-tv26r\" (UID: \"1c0043ca-b332-4736-afd5-f76d52dc18f8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-tv26r" Apr 16 14:00:56.434404 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:56.434315 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c0043ca-b332-4736-afd5-f76d52dc18f8-samples-operator-tls\") pod \"cluster-samples-operator-667775844f-tv26r\" (UID: \"1c0043ca-b332-4736-afd5-f76d52dc18f8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-tv26r" Apr 16 14:00:56.535339 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:56.535311 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8mhpv\" (UniqueName: \"kubernetes.io/projected/1c0043ca-b332-4736-afd5-f76d52dc18f8-kube-api-access-8mhpv\") pod \"cluster-samples-operator-667775844f-tv26r\" (UID: \"1c0043ca-b332-4736-afd5-f76d52dc18f8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-tv26r" Apr 16 14:00:56.535460 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:56.535365 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c0043ca-b332-4736-afd5-f76d52dc18f8-samples-operator-tls\") pod \"cluster-samples-operator-667775844f-tv26r\" (UID: \"1c0043ca-b332-4736-afd5-f76d52dc18f8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-tv26r" Apr 16 14:00:56.535516 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:00:56.535505 2569 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 16 14:00:56.535568 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:00:56.535560 2569 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/1c0043ca-b332-4736-afd5-f76d52dc18f8-samples-operator-tls podName:1c0043ca-b332-4736-afd5-f76d52dc18f8 nodeName:}" failed. No retries permitted until 2026-04-16 14:00:57.03554545 +0000 UTC m=+131.056833175 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/1c0043ca-b332-4736-afd5-f76d52dc18f8-samples-operator-tls") pod "cluster-samples-operator-667775844f-tv26r" (UID: "1c0043ca-b332-4736-afd5-f76d52dc18f8") : secret "samples-operator-tls" not found Apr 16 14:00:56.543613 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:56.543594 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mhpv\" (UniqueName: \"kubernetes.io/projected/1c0043ca-b332-4736-afd5-f76d52dc18f8-kube-api-access-8mhpv\") pod \"cluster-samples-operator-667775844f-tv26r\" (UID: \"1c0043ca-b332-4736-afd5-f76d52dc18f8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-tv26r" Apr 16 14:00:56.655819 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:56.655800 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-jnqp7_07a3b071-1443-4213-b66f-ce5f4d7ff313/dns-node-resolver/0.log" Apr 16 14:00:57.038790 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:57.038698 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c0043ca-b332-4736-afd5-f76d52dc18f8-samples-operator-tls\") pod \"cluster-samples-operator-667775844f-tv26r\" (UID: \"1c0043ca-b332-4736-afd5-f76d52dc18f8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-tv26r" Apr 16 14:00:57.038949 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:00:57.038845 2569 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 16 14:00:57.038949 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:00:57.038914 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c0043ca-b332-4736-afd5-f76d52dc18f8-samples-operator-tls podName:1c0043ca-b332-4736-afd5-f76d52dc18f8 nodeName:}" failed. No retries permitted until 2026-04-16 14:00:58.038897997 +0000 UTC m=+132.060185723 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/1c0043ca-b332-4736-afd5-f76d52dc18f8-samples-operator-tls") pod "cluster-samples-operator-667775844f-tv26r" (UID: "1c0043ca-b332-4736-afd5-f76d52dc18f8") : secret "samples-operator-tls" not found Apr 16 14:00:57.384197 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:57.384165 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-d87b8d5fc-27q26"] Apr 16 14:00:57.387041 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:57.387026 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-d87b8d5fc-27q26" Apr 16 14:00:57.389444 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:57.389426 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Apr 16 14:00:57.390378 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:57.390360 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Apr 16 14:00:57.390444 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:57.390428 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-n4gjh\"" Apr 16 14:00:57.390814 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:57.390800 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Apr 16 14:00:57.391469 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:57.391452 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Apr 16 14:00:57.396936 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:57.396915 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-d87b8d5fc-27q26"] Apr 16 14:00:57.397062 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:57.397048 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Apr 16 14:00:57.442122 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:57.442078 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5057171c-9c0f-4741-b8ce-987c40eb447d-config\") pod \"console-operator-d87b8d5fc-27q26\" (UID: \"5057171c-9c0f-4741-b8ce-987c40eb447d\") " pod="openshift-console-operator/console-operator-d87b8d5fc-27q26" Apr 16 14:00:57.442122 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:57.442127 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5057171c-9c0f-4741-b8ce-987c40eb447d-trusted-ca\") pod \"console-operator-d87b8d5fc-27q26\" (UID: \"5057171c-9c0f-4741-b8ce-987c40eb447d\") " pod="openshift-console-operator/console-operator-d87b8d5fc-27q26" Apr 16 14:00:57.442337 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:57.442166 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5057171c-9c0f-4741-b8ce-987c40eb447d-serving-cert\") pod \"console-operator-d87b8d5fc-27q26\" (UID: \"5057171c-9c0f-4741-b8ce-987c40eb447d\") " pod="openshift-console-operator/console-operator-d87b8d5fc-27q26" Apr 16 14:00:57.442337 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:57.442248 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzb5q\" (UniqueName: \"kubernetes.io/projected/5057171c-9c0f-4741-b8ce-987c40eb447d-kube-api-access-vzb5q\") pod \"console-operator-d87b8d5fc-27q26\" (UID: \"5057171c-9c0f-4741-b8ce-987c40eb447d\") " pod="openshift-console-operator/console-operator-d87b8d5fc-27q26" Apr 16 14:00:57.542671 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:57.542633 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vzb5q\" (UniqueName: \"kubernetes.io/projected/5057171c-9c0f-4741-b8ce-987c40eb447d-kube-api-access-vzb5q\") pod \"console-operator-d87b8d5fc-27q26\" (UID: \"5057171c-9c0f-4741-b8ce-987c40eb447d\") " pod="openshift-console-operator/console-operator-d87b8d5fc-27q26" Apr 16 14:00:57.542872 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:57.542690 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5057171c-9c0f-4741-b8ce-987c40eb447d-config\") pod \"console-operator-d87b8d5fc-27q26\" (UID: \"5057171c-9c0f-4741-b8ce-987c40eb447d\") " pod="openshift-console-operator/console-operator-d87b8d5fc-27q26" Apr 16 14:00:57.542872 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:57.542710 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5057171c-9c0f-4741-b8ce-987c40eb447d-trusted-ca\") pod \"console-operator-d87b8d5fc-27q26\" (UID: \"5057171c-9c0f-4741-b8ce-987c40eb447d\") " pod="openshift-console-operator/console-operator-d87b8d5fc-27q26" Apr 16 14:00:57.542872 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:57.542736 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5057171c-9c0f-4741-b8ce-987c40eb447d-serving-cert\") pod \"console-operator-d87b8d5fc-27q26\" (UID: \"5057171c-9c0f-4741-b8ce-987c40eb447d\") " pod="openshift-console-operator/console-operator-d87b8d5fc-27q26" Apr 16 14:00:57.543589 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:57.543565 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5057171c-9c0f-4741-b8ce-987c40eb447d-config\") pod \"console-operator-d87b8d5fc-27q26\" (UID: \"5057171c-9c0f-4741-b8ce-987c40eb447d\") " pod="openshift-console-operator/console-operator-d87b8d5fc-27q26" Apr 16 14:00:57.543704 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:57.543688 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5057171c-9c0f-4741-b8ce-987c40eb447d-trusted-ca\") pod \"console-operator-d87b8d5fc-27q26\" (UID: \"5057171c-9c0f-4741-b8ce-987c40eb447d\") " pod="openshift-console-operator/console-operator-d87b8d5fc-27q26" Apr 16 14:00:57.544934 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:57.544916 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5057171c-9c0f-4741-b8ce-987c40eb447d-serving-cert\") pod \"console-operator-d87b8d5fc-27q26\" (UID: \"5057171c-9c0f-4741-b8ce-987c40eb447d\") " pod="openshift-console-operator/console-operator-d87b8d5fc-27q26" Apr 16 14:00:57.551216 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:57.551194 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzb5q\" (UniqueName: \"kubernetes.io/projected/5057171c-9c0f-4741-b8ce-987c40eb447d-kube-api-access-vzb5q\") pod \"console-operator-d87b8d5fc-27q26\" (UID: \"5057171c-9c0f-4741-b8ce-987c40eb447d\") " pod="openshift-console-operator/console-operator-d87b8d5fc-27q26" Apr 16 14:00:57.662760 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:57.662702 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-p4bsc_afa773e4-9e56-4130-bc08-0913d59056bb/node-ca/0.log" Apr 16 14:00:57.698009 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:57.697987 2569 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-d87b8d5fc-27q26" Apr 16 14:00:57.828795 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:57.828762 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-d87b8d5fc-27q26"] Apr 16 14:00:57.831883 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:00:57.831847 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5057171c_9c0f_4741_b8ce_987c40eb447d.slice/crio-10c5d707fa20dd21fe119f13f54b63b348d5cccae74b229cdca62b2e6d8d31eb WatchSource:0}: Error finding container 10c5d707fa20dd21fe119f13f54b63b348d5cccae74b229cdca62b2e6d8d31eb: Status 404 returned error can't find the container with id 10c5d707fa20dd21fe119f13f54b63b348d5cccae74b229cdca62b2e6d8d31eb Apr 16 14:00:57.941647 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:57.941583 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-d87b8d5fc-27q26" event={"ID":"5057171c-9c0f-4741-b8ce-987c40eb447d","Type":"ContainerStarted","Data":"10c5d707fa20dd21fe119f13f54b63b348d5cccae74b229cdca62b2e6d8d31eb"} Apr 16 14:00:58.047514 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:58.047480 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c0043ca-b332-4736-afd5-f76d52dc18f8-samples-operator-tls\") pod \"cluster-samples-operator-667775844f-tv26r\" (UID: \"1c0043ca-b332-4736-afd5-f76d52dc18f8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-tv26r" Apr 16 14:00:58.047640 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:00:58.047625 2569 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 16 14:00:58.047699 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:00:58.047690 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c0043ca-b332-4736-afd5-f76d52dc18f8-samples-operator-tls podName:1c0043ca-b332-4736-afd5-f76d52dc18f8 nodeName:}" failed. No retries permitted until 2026-04-16 14:01:00.047672946 +0000 UTC m=+134.068960689 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/1c0043ca-b332-4736-afd5-f76d52dc18f8-samples-operator-tls") pod "cluster-samples-operator-667775844f-tv26r" (UID: "1c0043ca-b332-4736-afd5-f76d52dc18f8") : secret "samples-operator-tls" not found Apr 16 14:00:59.155030 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:00:59.154996 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/fd96583f-aa32-457d-81a3-f0f6d9afe9d9-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6667474d89-jzj8g\" (UID: \"fd96583f-aa32-457d-81a3-f0f6d9afe9d9\") " pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-jzj8g" Apr 16 14:00:59.155496 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:00:59.155170 2569 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Apr 16 14:00:59.155496 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:00:59.155270 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd96583f-aa32-457d-81a3-f0f6d9afe9d9-cluster-monitoring-operator-tls podName:fd96583f-aa32-457d-81a3-f0f6d9afe9d9 nodeName:}" failed. No retries permitted until 2026-04-16 14:01:07.155231215 +0000 UTC m=+141.176518942 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/fd96583f-aa32-457d-81a3-f0f6d9afe9d9-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6667474d89-jzj8g" (UID: "fd96583f-aa32-457d-81a3-f0f6d9afe9d9") : secret "cluster-monitoring-operator-tls" not found Apr 16 14:01:00.061720 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:00.061680 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c0043ca-b332-4736-afd5-f76d52dc18f8-samples-operator-tls\") pod \"cluster-samples-operator-667775844f-tv26r\" (UID: \"1c0043ca-b332-4736-afd5-f76d52dc18f8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-tv26r" Apr 16 14:01:00.061913 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:01:00.061857 2569 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 16 14:01:00.061979 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:01:00.061928 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c0043ca-b332-4736-afd5-f76d52dc18f8-samples-operator-tls podName:1c0043ca-b332-4736-afd5-f76d52dc18f8 nodeName:}" failed. No retries permitted until 2026-04-16 14:01:04.061907096 +0000 UTC m=+138.083194823 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/1c0043ca-b332-4736-afd5-f76d52dc18f8-samples-operator-tls") pod "cluster-samples-operator-667775844f-tv26r" (UID: "1c0043ca-b332-4736-afd5-f76d52dc18f8") : secret "samples-operator-tls" not found Apr 16 14:01:00.948924 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:00.948901 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-d87b8d5fc-27q26_5057171c-9c0f-4741-b8ce-987c40eb447d/console-operator/0.log" Apr 16 14:01:00.949228 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:00.948941 2569 generic.go:358] "Generic (PLEG): container finished" podID="5057171c-9c0f-4741-b8ce-987c40eb447d" containerID="001d39e6de5ea67d3d3c88c38423883216ca2b7d860f1c8aa720b17c31a075b9" exitCode=255 Apr 16 14:01:00.949228 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:00.949000 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-d87b8d5fc-27q26" event={"ID":"5057171c-9c0f-4741-b8ce-987c40eb447d","Type":"ContainerDied","Data":"001d39e6de5ea67d3d3c88c38423883216ca2b7d860f1c8aa720b17c31a075b9"} Apr 16 14:01:00.949228 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:00.949185 2569 scope.go:117] "RemoveContainer" containerID="001d39e6de5ea67d3d3c88c38423883216ca2b7d860f1c8aa720b17c31a075b9" Apr 16 14:01:01.953703 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:01.953677 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-d87b8d5fc-27q26_5057171c-9c0f-4741-b8ce-987c40eb447d/console-operator/1.log" Apr 16 14:01:01.954104 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:01.954000 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-d87b8d5fc-27q26_5057171c-9c0f-4741-b8ce-987c40eb447d/console-operator/0.log" Apr 16 14:01:01.954104 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:01.954033 2569 generic.go:358] "Generic (PLEG): container finished" podID="5057171c-9c0f-4741-b8ce-987c40eb447d" containerID="33674d62372b1cd3eba721b6d7eaa9954afaf2e740f87612053bd6b75917d54d" exitCode=255 Apr 16 14:01:01.954104 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:01.954065 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-d87b8d5fc-27q26" event={"ID":"5057171c-9c0f-4741-b8ce-987c40eb447d","Type":"ContainerDied","Data":"33674d62372b1cd3eba721b6d7eaa9954afaf2e740f87612053bd6b75917d54d"} Apr 16 14:01:01.954277 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:01.954107 2569 scope.go:117] "RemoveContainer" containerID="001d39e6de5ea67d3d3c88c38423883216ca2b7d860f1c8aa720b17c31a075b9" Apr 16 14:01:01.954365 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:01.954349 2569 scope.go:117] "RemoveContainer" containerID="33674d62372b1cd3eba721b6d7eaa9954afaf2e740f87612053bd6b75917d54d" Apr 16 14:01:01.954548 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:01:01.954528 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console-operator pod=console-operator-d87b8d5fc-27q26_openshift-console-operator(5057171c-9c0f-4741-b8ce-987c40eb447d)\"" pod="openshift-console-operator/console-operator-d87b8d5fc-27q26" podUID="5057171c-9c0f-4741-b8ce-987c40eb447d" Apr 16 14:01:02.491386 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:02.491351 2569 kubelet.go:2537] "SyncLoop ADD" 
source="api" pods=["openshift-network-diagnostics/network-check-source-7b678d77c7-7fbbk"] Apr 16 14:01:02.494479 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:02.494462 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7b678d77c7-7fbbk" Apr 16 14:01:02.496694 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:02.496675 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"network-diagnostics-dockercfg-fc67k\"" Apr 16 14:01:02.501521 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:02.501498 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-7b678d77c7-7fbbk"] Apr 16 14:01:02.580495 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:02.580457 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzhcc\" (UniqueName: \"kubernetes.io/projected/55298a9b-5dc0-448a-84f0-b2afbbac7a82-kube-api-access-fzhcc\") pod \"network-check-source-7b678d77c7-7fbbk\" (UID: \"55298a9b-5dc0-448a-84f0-b2afbbac7a82\") " pod="openshift-network-diagnostics/network-check-source-7b678d77c7-7fbbk" Apr 16 14:01:02.681023 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:02.680993 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fzhcc\" (UniqueName: \"kubernetes.io/projected/55298a9b-5dc0-448a-84f0-b2afbbac7a82-kube-api-access-fzhcc\") pod \"network-check-source-7b678d77c7-7fbbk\" (UID: \"55298a9b-5dc0-448a-84f0-b2afbbac7a82\") " pod="openshift-network-diagnostics/network-check-source-7b678d77c7-7fbbk" Apr 16 14:01:02.689012 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:02.688988 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzhcc\" (UniqueName: \"kubernetes.io/projected/55298a9b-5dc0-448a-84f0-b2afbbac7a82-kube-api-access-fzhcc\") pod \"network-check-source-7b678d77c7-7fbbk\" (UID: \"55298a9b-5dc0-448a-84f0-b2afbbac7a82\") " pod="openshift-network-diagnostics/network-check-source-7b678d77c7-7fbbk" Apr 16 14:01:02.803111 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:02.803048 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7b678d77c7-7fbbk" Apr 16 14:01:02.911084 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:02.911057 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-7b678d77c7-7fbbk"] Apr 16 14:01:02.914638 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:01:02.914611 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55298a9b_5dc0_448a_84f0_b2afbbac7a82.slice/crio-cc05f9662b2cd0c0eeee373b8c805956bc4c825d51c87bd26181da7c16c63a80 WatchSource:0}: Error finding container cc05f9662b2cd0c0eeee373b8c805956bc4c825d51c87bd26181da7c16c63a80: Status 404 returned error can't find the container with id cc05f9662b2cd0c0eeee373b8c805956bc4c825d51c87bd26181da7c16c63a80 Apr 16 14:01:02.956681 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:02.956659 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-d87b8d5fc-27q26_5057171c-9c0f-4741-b8ce-987c40eb447d/console-operator/1.log" Apr 16 14:01:02.957051 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:02.957028 2569 scope.go:117] "RemoveContainer" containerID="33674d62372b1cd3eba721b6d7eaa9954afaf2e740f87612053bd6b75917d54d" Apr 16 14:01:02.957231 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:01:02.957209 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console-operator pod=console-operator-d87b8d5fc-27q26_openshift-console-operator(5057171c-9c0f-4741-b8ce-987c40eb447d)\"" pod="openshift-console-operator/console-operator-d87b8d5fc-27q26" podUID="5057171c-9c0f-4741-b8ce-987c40eb447d" Apr 16 14:01:02.957791 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:02.957772 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7b678d77c7-7fbbk" event={"ID":"55298a9b-5dc0-448a-84f0-b2afbbac7a82","Type":"ContainerStarted","Data":"cc05f9662b2cd0c0eeee373b8c805956bc4c825d51c87bd26181da7c16c63a80"} Apr 16 14:01:03.961974 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:03.961937 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7b678d77c7-7fbbk" event={"ID":"55298a9b-5dc0-448a-84f0-b2afbbac7a82","Type":"ContainerStarted","Data":"b6ee588fb6788f50e113b738473b6f4d967dc8b42fee69b76bef6422fc06f387"} Apr 16 14:01:03.976699 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:03.976652 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-source-7b678d77c7-7fbbk" podStartSLOduration=1.976635916 podStartE2EDuration="1.976635916s" podCreationTimestamp="2026-04-16 14:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 14:01:03.975655616 +0000 UTC m=+137.996943364" watchObservedRunningTime="2026-04-16 14:01:03.976635916 +0000 UTC m=+137.997923716" Apr 16 14:01:04.092767 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:04.092726 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c0043ca-b332-4736-afd5-f76d52dc18f8-samples-operator-tls\") pod \"cluster-samples-operator-667775844f-tv26r\" (UID: \"1c0043ca-b332-4736-afd5-f76d52dc18f8\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-tv26r" Apr 16 14:01:04.092894 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:01:04.092863 2569 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 16 14:01:04.092929 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:01:04.092918 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c0043ca-b332-4736-afd5-f76d52dc18f8-samples-operator-tls podName:1c0043ca-b332-4736-afd5-f76d52dc18f8 nodeName:}" failed. No retries permitted until 2026-04-16 14:01:12.092903942 +0000 UTC m=+146.114191667 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/1c0043ca-b332-4736-afd5-f76d52dc18f8-samples-operator-tls") pod "cluster-samples-operator-667775844f-tv26r" (UID: "1c0043ca-b332-4736-afd5-f76d52dc18f8") : secret "samples-operator-tls" not found Apr 16 14:01:07.018703 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:07.018674 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-64d4d94569-7pj7l"] Apr 16 14:01:07.021678 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:07.021664 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-64d4d94569-7pj7l" Apr 16 14:01:07.024033 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:07.024012 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Apr 16 14:01:07.024160 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:07.024033 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-gpgxr\"" Apr 16 14:01:07.024841 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:07.024824 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Apr 16 14:01:07.031041 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:07.031022 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-64d4d94569-7pj7l"] Apr 16 14:01:07.118030 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:07.118001 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tsbh\" (UniqueName: \"kubernetes.io/projected/4df7741a-6f2b-4ee8-a7a6-76bbed009a0a-kube-api-access-5tsbh\") pod \"migrator-64d4d94569-7pj7l\" (UID: \"4df7741a-6f2b-4ee8-a7a6-76bbed009a0a\") " pod="openshift-kube-storage-version-migrator/migrator-64d4d94569-7pj7l" Apr 16 14:01:07.218566 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:07.218529 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5tsbh\" (UniqueName: \"kubernetes.io/projected/4df7741a-6f2b-4ee8-a7a6-76bbed009a0a-kube-api-access-5tsbh\") pod \"migrator-64d4d94569-7pj7l\" (UID: \"4df7741a-6f2b-4ee8-a7a6-76bbed009a0a\") " pod="openshift-kube-storage-version-migrator/migrator-64d4d94569-7pj7l" Apr 16 14:01:07.218566 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:07.218572 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/fd96583f-aa32-457d-81a3-f0f6d9afe9d9-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6667474d89-jzj8g\" (UID: \"fd96583f-aa32-457d-81a3-f0f6d9afe9d9\") " pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-jzj8g" Apr 16 14:01:07.218710 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:01:07.218684 2569 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Apr 16 14:01:07.218744 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:01:07.218737 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd96583f-aa32-457d-81a3-f0f6d9afe9d9-cluster-monitoring-operator-tls podName:fd96583f-aa32-457d-81a3-f0f6d9afe9d9 nodeName:}" failed. No retries permitted until 2026-04-16 14:01:23.218724223 +0000 UTC m=+157.240011948 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/fd96583f-aa32-457d-81a3-f0f6d9afe9d9-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6667474d89-jzj8g" (UID: "fd96583f-aa32-457d-81a3-f0f6d9afe9d9") : secret "cluster-monitoring-operator-tls" not found Apr 16 14:01:07.226492 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:07.226463 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tsbh\" (UniqueName: \"kubernetes.io/projected/4df7741a-6f2b-4ee8-a7a6-76bbed009a0a-kube-api-access-5tsbh\") pod \"migrator-64d4d94569-7pj7l\" (UID: \"4df7741a-6f2b-4ee8-a7a6-76bbed009a0a\") " pod="openshift-kube-storage-version-migrator/migrator-64d4d94569-7pj7l" Apr 16 14:01:07.330395 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:07.330346 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-64d4d94569-7pj7l" Apr 16 14:01:07.439341 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:07.439310 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-64d4d94569-7pj7l"] Apr 16 14:01:07.442428 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:01:07.442398 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4df7741a_6f2b_4ee8_a7a6_76bbed009a0a.slice/crio-b71ed8fbf24afbece9bd505f69ebd8cc47d6599259d889aada99b15de8a33b75 WatchSource:0}: Error finding container b71ed8fbf24afbece9bd505f69ebd8cc47d6599259d889aada99b15de8a33b75: Status 404 returned error can't find the container with id b71ed8fbf24afbece9bd505f69ebd8cc47d6599259d889aada99b15de8a33b75 Apr 16 14:01:07.698748 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:07.698720 2569 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-d87b8d5fc-27q26" Apr 16 14:01:07.698887 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:07.698765 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-d87b8d5fc-27q26" Apr 16 14:01:07.699081 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:07.699069 2569 scope.go:117] "RemoveContainer" containerID="33674d62372b1cd3eba721b6d7eaa9954afaf2e740f87612053bd6b75917d54d" Apr 16 14:01:07.699232 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:01:07.699216 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console-operator pod=console-operator-d87b8d5fc-27q26_openshift-console-operator(5057171c-9c0f-4741-b8ce-987c40eb447d)\"" pod="openshift-console-operator/console-operator-d87b8d5fc-27q26" podUID="5057171c-9c0f-4741-b8ce-987c40eb447d" Apr 16 14:01:07.972077 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:07.971988 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-64d4d94569-7pj7l" event={"ID":"4df7741a-6f2b-4ee8-a7a6-76bbed009a0a","Type":"ContainerStarted","Data":"b71ed8fbf24afbece9bd505f69ebd8cc47d6599259d889aada99b15de8a33b75"} Apr 16 14:01:08.104045 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:08.104016 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-runtime-extractor-c7hvq"] Apr 16 14:01:08.107181 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:08.107155 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-runtime-extractor-c7hvq" Apr 16 14:01:08.109556 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:08.109531 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-sa-dockercfg-lstgd\"" Apr 16 14:01:08.109556 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:08.109535 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-rbac-proxy\"" Apr 16 14:01:08.110341 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:08.110293 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-root-ca.crt\"" Apr 16 14:01:08.110341 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:08.110313 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"openshift-service-ca.crt\"" Apr 16 14:01:08.110505 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:08.110310 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-tls\"" Apr 16 14:01:08.117255 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:08.117221 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-c7hvq"] Apr 16 14:01:08.226427 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:08.226357 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/e55837c2-18f0-4bdc-bfc7-fef4606ffaf9-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-c7hvq\" (UID: \"e55837c2-18f0-4bdc-bfc7-fef4606ffaf9\") " pod="openshift-insights/insights-runtime-extractor-c7hvq" Apr 16 14:01:08.226427 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:08.226410 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/e55837c2-18f0-4bdc-bfc7-fef4606ffaf9-crio-socket\") pod \"insights-runtime-extractor-c7hvq\" (UID: \"e55837c2-18f0-4bdc-bfc7-fef4606ffaf9\") " pod="openshift-insights/insights-runtime-extractor-c7hvq" Apr 16 14:01:08.226621 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:08.226490 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vs78\" (UniqueName: \"kubernetes.io/projected/e55837c2-18f0-4bdc-bfc7-fef4606ffaf9-kube-api-access-9vs78\") pod \"insights-runtime-extractor-c7hvq\" (UID: \"e55837c2-18f0-4bdc-bfc7-fef4606ffaf9\") " pod="openshift-insights/insights-runtime-extractor-c7hvq" Apr 16 14:01:08.226621 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:08.226554 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/e55837c2-18f0-4bdc-bfc7-fef4606ffaf9-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-c7hvq\" (UID: \"e55837c2-18f0-4bdc-bfc7-fef4606ffaf9\") " pod="openshift-insights/insights-runtime-extractor-c7hvq" Apr 16 14:01:08.226621 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:08.226579 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/e55837c2-18f0-4bdc-bfc7-fef4606ffaf9-data-volume\") pod \"insights-runtime-extractor-c7hvq\" (UID: \"e55837c2-18f0-4bdc-bfc7-fef4606ffaf9\") " 
pod="openshift-insights/insights-runtime-extractor-c7hvq" Apr 16 14:01:08.327230 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:08.327197 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/e55837c2-18f0-4bdc-bfc7-fef4606ffaf9-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-c7hvq\" (UID: \"e55837c2-18f0-4bdc-bfc7-fef4606ffaf9\") " pod="openshift-insights/insights-runtime-extractor-c7hvq" Apr 16 14:01:08.327415 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:08.327265 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/e55837c2-18f0-4bdc-bfc7-fef4606ffaf9-crio-socket\") pod \"insights-runtime-extractor-c7hvq\" (UID: \"e55837c2-18f0-4bdc-bfc7-fef4606ffaf9\") " pod="openshift-insights/insights-runtime-extractor-c7hvq" Apr 16 14:01:08.327415 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:08.327315 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9vs78\" (UniqueName: \"kubernetes.io/projected/e55837c2-18f0-4bdc-bfc7-fef4606ffaf9-kube-api-access-9vs78\") pod \"insights-runtime-extractor-c7hvq\" (UID: \"e55837c2-18f0-4bdc-bfc7-fef4606ffaf9\") " pod="openshift-insights/insights-runtime-extractor-c7hvq" Apr 16 14:01:08.327415 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:01:08.327351 2569 secret.go:189] Couldn't get secret openshift-insights/insights-runtime-extractor-tls: secret "insights-runtime-extractor-tls" not found Apr 16 14:01:08.327415 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:08.327368 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/e55837c2-18f0-4bdc-bfc7-fef4606ffaf9-crio-socket\") pod \"insights-runtime-extractor-c7hvq\" (UID: \"e55837c2-18f0-4bdc-bfc7-fef4606ffaf9\") " pod="openshift-insights/insights-runtime-extractor-c7hvq" Apr 16 14:01:08.327415 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:08.327376 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/e55837c2-18f0-4bdc-bfc7-fef4606ffaf9-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-c7hvq\" (UID: \"e55837c2-18f0-4bdc-bfc7-fef4606ffaf9\") " pod="openshift-insights/insights-runtime-extractor-c7hvq" Apr 16 14:01:08.327655 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:01:08.327428 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e55837c2-18f0-4bdc-bfc7-fef4606ffaf9-insights-runtime-extractor-tls podName:e55837c2-18f0-4bdc-bfc7-fef4606ffaf9 nodeName:}" failed. No retries permitted until 2026-04-16 14:01:08.82740689 +0000 UTC m=+142.848694617 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "insights-runtime-extractor-tls" (UniqueName: "kubernetes.io/secret/e55837c2-18f0-4bdc-bfc7-fef4606ffaf9-insights-runtime-extractor-tls") pod "insights-runtime-extractor-c7hvq" (UID: "e55837c2-18f0-4bdc-bfc7-fef4606ffaf9") : secret "insights-runtime-extractor-tls" not found Apr 16 14:01:08.327655 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:08.327463 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/e55837c2-18f0-4bdc-bfc7-fef4606ffaf9-data-volume\") pod \"insights-runtime-extractor-c7hvq\" (UID: \"e55837c2-18f0-4bdc-bfc7-fef4606ffaf9\") " pod="openshift-insights/insights-runtime-extractor-c7hvq" Apr 16 14:01:08.327847 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:08.327818 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/e55837c2-18f0-4bdc-bfc7-fef4606ffaf9-data-volume\") pod \"insights-runtime-extractor-c7hvq\" (UID: \"e55837c2-18f0-4bdc-bfc7-fef4606ffaf9\") " pod="openshift-insights/insights-runtime-extractor-c7hvq" Apr 16 14:01:08.327960 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:08.327945 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/e55837c2-18f0-4bdc-bfc7-fef4606ffaf9-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-c7hvq\" (UID: \"e55837c2-18f0-4bdc-bfc7-fef4606ffaf9\") " pod="openshift-insights/insights-runtime-extractor-c7hvq" Apr 16 14:01:08.337716 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:08.337690 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vs78\" (UniqueName: \"kubernetes.io/projected/e55837c2-18f0-4bdc-bfc7-fef4606ffaf9-kube-api-access-9vs78\") pod \"insights-runtime-extractor-c7hvq\" (UID: \"e55837c2-18f0-4bdc-bfc7-fef4606ffaf9\") " pod="openshift-insights/insights-runtime-extractor-c7hvq" Apr 16 14:01:08.831559 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:08.831447 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/e55837c2-18f0-4bdc-bfc7-fef4606ffaf9-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-c7hvq\" (UID: \"e55837c2-18f0-4bdc-bfc7-fef4606ffaf9\") " pod="openshift-insights/insights-runtime-extractor-c7hvq" Apr 16 14:01:08.831747 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:01:08.831587 2569 secret.go:189] Couldn't get secret openshift-insights/insights-runtime-extractor-tls: secret "insights-runtime-extractor-tls" not found Apr 16 14:01:08.831747 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:01:08.831659 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e55837c2-18f0-4bdc-bfc7-fef4606ffaf9-insights-runtime-extractor-tls podName:e55837c2-18f0-4bdc-bfc7-fef4606ffaf9 nodeName:}" failed. No retries permitted until 2026-04-16 14:01:09.831637985 +0000 UTC m=+143.852925712 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "insights-runtime-extractor-tls" (UniqueName: "kubernetes.io/secret/e55837c2-18f0-4bdc-bfc7-fef4606ffaf9-insights-runtime-extractor-tls") pod "insights-runtime-extractor-c7hvq" (UID: "e55837c2-18f0-4bdc-bfc7-fef4606ffaf9") : secret "insights-runtime-extractor-tls" not found Apr 16 14:01:08.975604 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:08.975570 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-64d4d94569-7pj7l" event={"ID":"4df7741a-6f2b-4ee8-a7a6-76bbed009a0a","Type":"ContainerStarted","Data":"b84152be6a8ff047a99d19e17e99a2e303d984e542443954d1cf21a0267cd6e6"} Apr 16 14:01:08.975604 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:08.975605 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-64d4d94569-7pj7l" event={"ID":"4df7741a-6f2b-4ee8-a7a6-76bbed009a0a","Type":"ContainerStarted","Data":"7a5f9a1b969ba3bf067abb2c76c25af1a29391a4e82dceb6469eedc4a7d4ea8e"} Apr 16 14:01:08.990564 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:08.990522 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-64d4d94569-7pj7l" podStartSLOduration=0.974080872 podStartE2EDuration="1.990510295s" podCreationTimestamp="2026-04-16 14:01:07 +0000 UTC" firstStartedPulling="2026-04-16 14:01:07.444104954 +0000 UTC m=+141.465392679" lastFinishedPulling="2026-04-16 14:01:08.460534374 +0000 UTC m=+142.481822102" observedRunningTime="2026-04-16 14:01:08.990273227 +0000 UTC m=+143.011560994" watchObservedRunningTime="2026-04-16 14:01:08.990510295 +0000 UTC m=+143.011798042" Apr 16 14:01:09.839295 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:09.839233 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/e55837c2-18f0-4bdc-bfc7-fef4606ffaf9-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-c7hvq\" (UID: \"e55837c2-18f0-4bdc-bfc7-fef4606ffaf9\") " pod="openshift-insights/insights-runtime-extractor-c7hvq" Apr 16 14:01:09.839702 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:01:09.839390 2569 secret.go:189] Couldn't get secret openshift-insights/insights-runtime-extractor-tls: secret "insights-runtime-extractor-tls" not found Apr 16 14:01:09.839702 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:01:09.839456 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e55837c2-18f0-4bdc-bfc7-fef4606ffaf9-insights-runtime-extractor-tls podName:e55837c2-18f0-4bdc-bfc7-fef4606ffaf9 nodeName:}" failed. No retries permitted until 2026-04-16 14:01:11.839437965 +0000 UTC m=+145.860725717 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "insights-runtime-extractor-tls" (UniqueName: "kubernetes.io/secret/e55837c2-18f0-4bdc-bfc7-fef4606ffaf9-insights-runtime-extractor-tls") pod "insights-runtime-extractor-c7hvq" (UID: "e55837c2-18f0-4bdc-bfc7-fef4606ffaf9") : secret "insights-runtime-extractor-tls" not found Apr 16 14:01:11.854913 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:11.854873 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/e55837c2-18f0-4bdc-bfc7-fef4606ffaf9-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-c7hvq\" (UID: \"e55837c2-18f0-4bdc-bfc7-fef4606ffaf9\") " pod="openshift-insights/insights-runtime-extractor-c7hvq" Apr 16 14:01:11.855402 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:01:11.855000 2569 secret.go:189] Couldn't get secret openshift-insights/insights-runtime-extractor-tls: secret "insights-runtime-extractor-tls" not found Apr 16 14:01:11.855402 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:01:11.855068 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e55837c2-18f0-4bdc-bfc7-fef4606ffaf9-insights-runtime-extractor-tls podName:e55837c2-18f0-4bdc-bfc7-fef4606ffaf9 nodeName:}" failed. No retries permitted until 2026-04-16 14:01:15.855048263 +0000 UTC m=+149.876335994 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "insights-runtime-extractor-tls" (UniqueName: "kubernetes.io/secret/e55837c2-18f0-4bdc-bfc7-fef4606ffaf9-insights-runtime-extractor-tls") pod "insights-runtime-extractor-c7hvq" (UID: "e55837c2-18f0-4bdc-bfc7-fef4606ffaf9") : secret "insights-runtime-extractor-tls" not found Apr 16 14:01:12.157068 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:12.157036 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c0043ca-b332-4736-afd5-f76d52dc18f8-samples-operator-tls\") pod \"cluster-samples-operator-667775844f-tv26r\" (UID: \"1c0043ca-b332-4736-afd5-f76d52dc18f8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-tv26r" Apr 16 14:01:12.159505 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:12.159481 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c0043ca-b332-4736-afd5-f76d52dc18f8-samples-operator-tls\") pod \"cluster-samples-operator-667775844f-tv26r\" (UID: \"1c0043ca-b332-4736-afd5-f76d52dc18f8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-tv26r" Apr 16 14:01:12.288565 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:12.288534 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-tv26r" Apr 16 14:01:12.404732 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:12.404696 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-667775844f-tv26r"] Apr 16 14:01:12.987007 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:12.986964 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-tv26r" event={"ID":"1c0043ca-b332-4736-afd5-f76d52dc18f8","Type":"ContainerStarted","Data":"75722419f0f02f54fc7a470d68ccd4a5b2d4be04acf9f45fd95aa4b0375b2c56"} Apr 16 14:01:14.996971 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:14.996934 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-tv26r" event={"ID":"1c0043ca-b332-4736-afd5-f76d52dc18f8","Type":"ContainerStarted","Data":"2e6828f51694ba2a50557e8eec95a28f4b59e8c9e69c39000e69064fdd01e995"} Apr 16 14:01:14.996971 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:14.996972 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-tv26r" event={"ID":"1c0043ca-b332-4736-afd5-f76d52dc18f8","Type":"ContainerStarted","Data":"f959accff7e77e98782aa54132c4e63632ab2ab44436ba5c3fa4c0f751d07c43"} Apr 16 14:01:15.012977 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:15.012933 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-667775844f-tv26r" podStartSLOduration=17.498216613 podStartE2EDuration="19.012921362s" podCreationTimestamp="2026-04-16 14:00:56 +0000 UTC" firstStartedPulling="2026-04-16 14:01:12.445492383 +0000 UTC m=+146.466780108" lastFinishedPulling="2026-04-16 14:01:13.960197126 +0000 UTC m=+147.981484857" observedRunningTime="2026-04-16 14:01:15.012349298 +0000 UTC m=+149.033637045" watchObservedRunningTime="2026-04-16 14:01:15.012921362 +0000 UTC m=+149.034209109" Apr 16 14:01:15.889726 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:15.889691 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/e55837c2-18f0-4bdc-bfc7-fef4606ffaf9-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-c7hvq\" (UID: \"e55837c2-18f0-4bdc-bfc7-fef4606ffaf9\") " pod="openshift-insights/insights-runtime-extractor-c7hvq" Apr 16 14:01:15.891920 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:15.891894 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/e55837c2-18f0-4bdc-bfc7-fef4606ffaf9-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-c7hvq\" (UID: \"e55837c2-18f0-4bdc-bfc7-fef4606ffaf9\") " pod="openshift-insights/insights-runtime-extractor-c7hvq" Apr 16 14:01:15.917894 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:15.917869 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-runtime-extractor-c7hvq" Apr 16 14:01:16.031490 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:16.031463 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-c7hvq"] Apr 16 14:01:16.035329 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:01:16.035304 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode55837c2_18f0_4bdc_bfc7_fef4606ffaf9.slice/crio-07399fbe806a522bb7305e9e998069e3432ed6e7e52bceefd9a9bf64c3c321e1 WatchSource:0}: Error finding container 07399fbe806a522bb7305e9e998069e3432ed6e7e52bceefd9a9bf64c3c321e1: Status 404 returned error can't find the container with id 07399fbe806a522bb7305e9e998069e3432ed6e7e52bceefd9a9bf64c3c321e1 Apr 16 14:01:17.003105 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:17.003031 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-c7hvq" event={"ID":"e55837c2-18f0-4bdc-bfc7-fef4606ffaf9","Type":"ContainerStarted","Data":"1597ab23cb455c7b9a71324d19167ff92f87d503716f7723c871d89b2115d595"} Apr 16 14:01:17.003105 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:17.003067 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-c7hvq" event={"ID":"e55837c2-18f0-4bdc-bfc7-fef4606ffaf9","Type":"ContainerStarted","Data":"18de3ea092e44056fd2c4d482e6cc7585985d7bb0cd343cb5b9d4944d01e995e"} Apr 16 14:01:17.003105 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:17.003078 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-c7hvq" event={"ID":"e55837c2-18f0-4bdc-bfc7-fef4606ffaf9","Type":"ContainerStarted","Data":"07399fbe806a522bb7305e9e998069e3432ed6e7e52bceefd9a9bf64c3c321e1"} Apr 16 14:01:20.012963 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:20.012926 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-c7hvq" event={"ID":"e55837c2-18f0-4bdc-bfc7-fef4606ffaf9","Type":"ContainerStarted","Data":"483b3cc60e6e0a15f7e824509ae0503f2a3636f05494e744cf701d02cacd2e62"} Apr 16 14:01:20.031108 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:20.031057 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-runtime-extractor-c7hvq" podStartSLOduration=8.910462131 podStartE2EDuration="12.03104343s" podCreationTimestamp="2026-04-16 14:01:08 +0000 UTC" firstStartedPulling="2026-04-16 14:01:16.089757105 +0000 UTC m=+150.111044832" lastFinishedPulling="2026-04-16 14:01:19.21033839 +0000 UTC m=+153.231626131" observedRunningTime="2026-04-16 14:01:20.030301533 +0000 UTC m=+154.051589281" watchObservedRunningTime="2026-04-16 14:01:20.03104343 +0000 UTC m=+154.052331203" Apr 16 14:01:20.598941 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:20.598910 2569 scope.go:117] "RemoveContainer" containerID="33674d62372b1cd3eba721b6d7eaa9954afaf2e740f87612053bd6b75917d54d" Apr 16 14:01:21.016186 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:21.016166 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-d87b8d5fc-27q26_5057171c-9c0f-4741-b8ce-987c40eb447d/console-operator/2.log" Apr 16 14:01:21.016539 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:21.016489 2569 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console-operator_console-operator-d87b8d5fc-27q26_5057171c-9c0f-4741-b8ce-987c40eb447d/console-operator/1.log" Apr 16 14:01:21.016539 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:21.016522 2569 generic.go:358] "Generic (PLEG): container finished" podID="5057171c-9c0f-4741-b8ce-987c40eb447d" containerID="f7c3dbb3c64f05e416a42163d0b5baa1394e2026662236b249977000f7f92327" exitCode=255 Apr 16 14:01:21.016620 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:21.016595 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-d87b8d5fc-27q26" event={"ID":"5057171c-9c0f-4741-b8ce-987c40eb447d","Type":"ContainerDied","Data":"f7c3dbb3c64f05e416a42163d0b5baa1394e2026662236b249977000f7f92327"} Apr 16 14:01:21.016654 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:21.016635 2569 scope.go:117] "RemoveContainer" containerID="33674d62372b1cd3eba721b6d7eaa9954afaf2e740f87612053bd6b75917d54d" Apr 16 14:01:21.016942 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:21.016921 2569 scope.go:117] "RemoveContainer" containerID="f7c3dbb3c64f05e416a42163d0b5baa1394e2026662236b249977000f7f92327" Apr 16 14:01:21.017135 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:01:21.017114 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=console-operator pod=console-operator-d87b8d5fc-27q26_openshift-console-operator(5057171c-9c0f-4741-b8ce-987c40eb447d)\"" pod="openshift-console-operator/console-operator-d87b8d5fc-27q26" podUID="5057171c-9c0f-4741-b8ce-987c40eb447d" Apr 16 14:01:22.020116 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:22.020092 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-d87b8d5fc-27q26_5057171c-9c0f-4741-b8ce-987c40eb447d/console-operator/2.log" Apr 16 14:01:23.248309 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:23.248262 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/fd96583f-aa32-457d-81a3-f0f6d9afe9d9-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6667474d89-jzj8g\" (UID: \"fd96583f-aa32-457d-81a3-f0f6d9afe9d9\") " pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-jzj8g" Apr 16 14:01:23.250597 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:23.250579 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/fd96583f-aa32-457d-81a3-f0f6d9afe9d9-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6667474d89-jzj8g\" (UID: \"fd96583f-aa32-457d-81a3-f0f6d9afe9d9\") " pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-jzj8g" Apr 16 14:01:23.350389 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:01:23.350342 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[registry-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-image-registry/image-registry-7fd54c5856-xxztt" podUID="b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2" Apr 16 14:01:23.364507 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:01:23.364476 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[metrics-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-dns/dns-default-rdgwf" 
podUID="f334ac90-d973-40ab-bade-1a585fb2d9b2" Apr 16 14:01:23.381610 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:01:23.381587 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-ingress-canary/ingress-canary-92l7v" podUID="3bfef623-c79c-41c4-9fc5-0a25ecab1f4a" Apr 16 14:01:23.470473 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:23.470443 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-jzj8g" Apr 16 14:01:23.581594 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:23.581562 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6667474d89-jzj8g"] Apr 16 14:01:23.584807 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:01:23.584779 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd96583f_aa32_457d_81a3_f0f6d9afe9d9.slice/crio-856ceedc91d434e418601b868c7fb0f74d091a87abdf70ff8cc7074264b8a457 WatchSource:0}: Error finding container 856ceedc91d434e418601b868c7fb0f74d091a87abdf70ff8cc7074264b8a457: Status 404 returned error can't find the container with id 856ceedc91d434e418601b868c7fb0f74d091a87abdf70ff8cc7074264b8a457 Apr 16 14:01:23.623233 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:01:23.623200 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[metrics-certs], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-multus/network-metrics-daemon-99gsl" podUID="6cc56cdf-0ee0-49a9-b52c-65d8745cb390" Apr 16 14:01:24.025611 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:24.025570 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-jzj8g" event={"ID":"fd96583f-aa32-457d-81a3-f0f6d9afe9d9","Type":"ContainerStarted","Data":"856ceedc91d434e418601b868c7fb0f74d091a87abdf70ff8cc7074264b8a457"} Apr 16 14:01:24.025780 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:24.025640 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-rdgwf" Apr 16 14:01:24.025780 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:24.025640 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-7fd54c5856-xxztt" Apr 16 14:01:26.032093 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:26.032062 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-jzj8g" event={"ID":"fd96583f-aa32-457d-81a3-f0f6d9afe9d9","Type":"ContainerStarted","Data":"0c89599f89f685f06bac85487396879c46565d91e5dd6c830795ae46cf82a8b8"} Apr 16 14:01:26.051549 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:26.051494 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6667474d89-jzj8g" podStartSLOduration=33.338260919 podStartE2EDuration="35.051480235s" podCreationTimestamp="2026-04-16 14:00:51 +0000 UTC" firstStartedPulling="2026-04-16 14:01:23.587056911 +0000 UTC m=+157.608344651" lastFinishedPulling="2026-04-16 14:01:25.30027623 +0000 UTC m=+159.321563967" observedRunningTime="2026-04-16 14:01:26.050787259 +0000 UTC m=+160.072075008" watchObservedRunningTime="2026-04-16 14:01:26.051480235 +0000 UTC m=+160.072768021" Apr 16 14:01:26.579695 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:26.579660 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-5cb6cf4cb4-fdps6"] Apr 16 14:01:26.582495 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:26.582469 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-fdps6" Apr 16 14:01:26.585217 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:26.585192 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Apr 16 14:01:26.586018 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:26.585992 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Apr 16 14:01:26.586018 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:26.585998 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"default-dockercfg-vmfld\"" Apr 16 14:01:26.592555 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:26.592536 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-5cb6cf4cb4-fdps6"] Apr 16 14:01:26.679835 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:26.679800 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5a4977aa-a63b-46b6-a23c-4924a58855f4-nginx-conf\") pod \"networking-console-plugin-5cb6cf4cb4-fdps6\" (UID: \"5a4977aa-a63b-46b6-a23c-4924a58855f4\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-fdps6" Apr 16 14:01:26.679966 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:26.679855 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5a4977aa-a63b-46b6-a23c-4924a58855f4-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-fdps6\" (UID: \"5a4977aa-a63b-46b6-a23c-4924a58855f4\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-fdps6" Apr 16 14:01:26.706036 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:26.706011 2569 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-xrbxv"] Apr 16 14:01:26.709688 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:26.709668 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-xrbxv" Apr 16 14:01:26.712304 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:26.712284 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-admission-webhook-dockercfg-2q89s\"" Apr 16 14:01:26.712693 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:26.712676 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-admission-webhook-tls\"" Apr 16 14:01:26.727490 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:26.727470 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-xrbxv"] Apr 16 14:01:26.781057 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:26.781001 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/345f9e1d-ee2b-421c-b6b5-3b0ac2ee70da-tls-certificates\") pod \"prometheus-operator-admission-webhook-9cb97cd87-xrbxv\" (UID: \"345f9e1d-ee2b-421c-b6b5-3b0ac2ee70da\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-xrbxv" Apr 16 14:01:26.781057 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:26.781038 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5a4977aa-a63b-46b6-a23c-4924a58855f4-nginx-conf\") pod \"networking-console-plugin-5cb6cf4cb4-fdps6\" (UID: \"5a4977aa-a63b-46b6-a23c-4924a58855f4\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-fdps6" Apr 16 14:01:26.781219 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:26.781094 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5a4977aa-a63b-46b6-a23c-4924a58855f4-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-fdps6\" (UID: \"5a4977aa-a63b-46b6-a23c-4924a58855f4\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-fdps6" Apr 16 14:01:26.781665 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:26.781648 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5a4977aa-a63b-46b6-a23c-4924a58855f4-nginx-conf\") pod \"networking-console-plugin-5cb6cf4cb4-fdps6\" (UID: \"5a4977aa-a63b-46b6-a23c-4924a58855f4\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-fdps6" Apr 16 14:01:26.783387 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:26.783368 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5a4977aa-a63b-46b6-a23c-4924a58855f4-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-fdps6\" (UID: \"5a4977aa-a63b-46b6-a23c-4924a58855f4\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-fdps6" Apr 16 14:01:26.882368 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:26.882342 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: 
\"kubernetes.io/secret/345f9e1d-ee2b-421c-b6b5-3b0ac2ee70da-tls-certificates\") pod \"prometheus-operator-admission-webhook-9cb97cd87-xrbxv\" (UID: \"345f9e1d-ee2b-421c-b6b5-3b0ac2ee70da\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-xrbxv" Apr 16 14:01:26.884522 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:26.884504 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/345f9e1d-ee2b-421c-b6b5-3b0ac2ee70da-tls-certificates\") pod \"prometheus-operator-admission-webhook-9cb97cd87-xrbxv\" (UID: \"345f9e1d-ee2b-421c-b6b5-3b0ac2ee70da\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-xrbxv" Apr 16 14:01:26.892303 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:26.892284 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-fdps6" Apr 16 14:01:27.006583 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:27.006549 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-5cb6cf4cb4-fdps6"] Apr 16 14:01:27.010143 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:01:27.010116 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a4977aa_a63b_46b6_a23c_4924a58855f4.slice/crio-ce8f0b6ff2929f6ccda9b615fbf9a0fd6f5fc22d673241a91146445b54a29af9 WatchSource:0}: Error finding container ce8f0b6ff2929f6ccda9b615fbf9a0fd6f5fc22d673241a91146445b54a29af9: Status 404 returned error can't find the container with id ce8f0b6ff2929f6ccda9b615fbf9a0fd6f5fc22d673241a91146445b54a29af9 Apr 16 14:01:27.018210 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:27.018190 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-xrbxv" Apr 16 14:01:27.035475 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:27.035449 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-fdps6" event={"ID":"5a4977aa-a63b-46b6-a23c-4924a58855f4","Type":"ContainerStarted","Data":"ce8f0b6ff2929f6ccda9b615fbf9a0fd6f5fc22d673241a91146445b54a29af9"} Apr 16 14:01:27.124980 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:27.124947 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-xrbxv"] Apr 16 14:01:27.127431 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:01:27.127403 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod345f9e1d_ee2b_421c_b6b5_3b0ac2ee70da.slice/crio-6e4abed968966d8afa1cceedeccf929813a9b85312ddd5895c20a2b92f80148e WatchSource:0}: Error finding container 6e4abed968966d8afa1cceedeccf929813a9b85312ddd5895c20a2b92f80148e: Status 404 returned error can't find the container with id 6e4abed968966d8afa1cceedeccf929813a9b85312ddd5895c20a2b92f80148e Apr 16 14:01:27.698569 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:27.698523 2569 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-d87b8d5fc-27q26" Apr 16 14:01:27.698569 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:27.698569 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-d87b8d5fc-27q26" Apr 16 14:01:27.698900 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:27.698888 2569 scope.go:117] "RemoveContainer" containerID="f7c3dbb3c64f05e416a42163d0b5baa1394e2026662236b249977000f7f92327" Apr 16 14:01:27.699069 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:01:27.699053 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=console-operator pod=console-operator-d87b8d5fc-27q26_openshift-console-operator(5057171c-9c0f-4741-b8ce-987c40eb447d)\"" pod="openshift-console-operator/console-operator-d87b8d5fc-27q26" podUID="5057171c-9c0f-4741-b8ce-987c40eb447d" Apr 16 14:01:28.040844 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:28.040770 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-xrbxv" event={"ID":"345f9e1d-ee2b-421c-b6b5-3b0ac2ee70da","Type":"ContainerStarted","Data":"6e4abed968966d8afa1cceedeccf929813a9b85312ddd5895c20a2b92f80148e"} Apr 16 14:01:28.293299 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:28.293211 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f334ac90-d973-40ab-bade-1a585fb2d9b2-metrics-tls\") pod \"dns-default-rdgwf\" (UID: \"f334ac90-d973-40ab-bade-1a585fb2d9b2\") " pod="openshift-dns/dns-default-rdgwf" Apr 16 14:01:28.293299 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:28.293273 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-registry-tls\") pod \"image-registry-7fd54c5856-xxztt\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " 
pod="openshift-image-registry/image-registry-7fd54c5856-xxztt" Apr 16 14:01:28.295501 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:28.295467 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f334ac90-d973-40ab-bade-1a585fb2d9b2-metrics-tls\") pod \"dns-default-rdgwf\" (UID: \"f334ac90-d973-40ab-bade-1a585fb2d9b2\") " pod="openshift-dns/dns-default-rdgwf" Apr 16 14:01:28.295612 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:28.295539 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-registry-tls\") pod \"image-registry-7fd54c5856-xxztt\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " pod="openshift-image-registry/image-registry-7fd54c5856-xxztt" Apr 16 14:01:28.394615 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:28.394578 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3bfef623-c79c-41c4-9fc5-0a25ecab1f4a-cert\") pod \"ingress-canary-92l7v\" (UID: \"3bfef623-c79c-41c4-9fc5-0a25ecab1f4a\") " pod="openshift-ingress-canary/ingress-canary-92l7v" Apr 16 14:01:28.396908 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:28.396878 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3bfef623-c79c-41c4-9fc5-0a25ecab1f4a-cert\") pod \"ingress-canary-92l7v\" (UID: \"3bfef623-c79c-41c4-9fc5-0a25ecab1f4a\") " pod="openshift-ingress-canary/ingress-canary-92l7v" Apr 16 14:01:28.529505 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:28.529476 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-gdggq\"" Apr 16 14:01:28.529505 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:28.529476 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-2c9fp\"" Apr 16 14:01:28.537270 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:28.537229 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-rdgwf" Apr 16 14:01:28.537397 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:28.537276 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-7fd54c5856-xxztt" Apr 16 14:01:28.669162 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:28.669135 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-7fd54c5856-xxztt"] Apr 16 14:01:28.671777 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:01:28.671740 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb527f47e_3e49_4f0d_a50b_4c39b6a2a2c2.slice/crio-d3ff3b52e64c97edc328392c849536fef83a99ca9d1332a2a38cb41f5e107d04 WatchSource:0}: Error finding container d3ff3b52e64c97edc328392c849536fef83a99ca9d1332a2a38cb41f5e107d04: Status 404 returned error can't find the container with id d3ff3b52e64c97edc328392c849536fef83a99ca9d1332a2a38cb41f5e107d04 Apr 16 14:01:28.680654 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:28.680628 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-rdgwf"] Apr 16 14:01:28.684717 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:01:28.684695 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf334ac90_d973_40ab_bade_1a585fb2d9b2.slice/crio-ed0db99134f10ba8f2b59522559a52447dbbb25cb1b95f5252866c06be5c00ef WatchSource:0}: Error finding container ed0db99134f10ba8f2b59522559a52447dbbb25cb1b95f5252866c06be5c00ef: Status 404 returned error can't find the container with id ed0db99134f10ba8f2b59522559a52447dbbb25cb1b95f5252866c06be5c00ef Apr 16 14:01:29.043877 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:29.043840 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rdgwf" event={"ID":"f334ac90-d973-40ab-bade-1a585fb2d9b2","Type":"ContainerStarted","Data":"ed0db99134f10ba8f2b59522559a52447dbbb25cb1b95f5252866c06be5c00ef"} Apr 16 14:01:29.045117 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:29.045092 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-xrbxv" event={"ID":"345f9e1d-ee2b-421c-b6b5-3b0ac2ee70da","Type":"ContainerStarted","Data":"403a0b08d12370ba333da5043ca7b2dff5c6a970f6054b70eafc8889db78cb3e"} Apr 16 14:01:29.045330 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:29.045305 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-xrbxv" Apr 16 14:01:29.046476 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:29.046451 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-7fd54c5856-xxztt" event={"ID":"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2","Type":"ContainerStarted","Data":"f85436e309ae71cb730d4c2a3b3973531b2a74fa48ba41e389b06eca38fc072e"} Apr 16 14:01:29.046573 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:29.046479 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-7fd54c5856-xxztt" event={"ID":"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2","Type":"ContainerStarted","Data":"d3ff3b52e64c97edc328392c849536fef83a99ca9d1332a2a38cb41f5e107d04"} Apr 16 14:01:29.046679 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:29.046592 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-7fd54c5856-xxztt" Apr 16 14:01:29.050017 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:29.049999 2569 kubelet.go:2658] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-xrbxv" Apr 16 14:01:29.060302 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:29.060266 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-9cb97cd87-xrbxv" podStartSLOduration=1.876252911 podStartE2EDuration="3.060253018s" podCreationTimestamp="2026-04-16 14:01:26 +0000 UTC" firstStartedPulling="2026-04-16 14:01:27.129078774 +0000 UTC m=+161.150366514" lastFinishedPulling="2026-04-16 14:01:28.313078891 +0000 UTC m=+162.334366621" observedRunningTime="2026-04-16 14:01:29.058505061 +0000 UTC m=+163.079792810" watchObservedRunningTime="2026-04-16 14:01:29.060253018 +0000 UTC m=+163.081540761" Apr 16 14:01:29.077735 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:29.077693 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-7fd54c5856-xxztt" podStartSLOduration=162.077679435 podStartE2EDuration="2m42.077679435s" podCreationTimestamp="2026-04-16 13:58:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 14:01:29.076909876 +0000 UTC m=+163.098197628" watchObservedRunningTime="2026-04-16 14:01:29.077679435 +0000 UTC m=+163.098967183" Apr 16 14:01:29.871008 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:29.870969 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-78f957474d-68dtq"] Apr 16 14:01:29.874140 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:29.874119 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-78f957474d-68dtq" Apr 16 14:01:29.880294 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:29.876863 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-tls\"" Apr 16 14:01:29.880294 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:29.877306 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-dockercfg-xptqz\"" Apr 16 14:01:29.880294 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:29.877715 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-kube-rbac-proxy-config\"" Apr 16 14:01:29.880294 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:29.877857 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"metrics-client-ca\"" Apr 16 14:01:29.883303 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:29.883280 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-78f957474d-68dtq"] Apr 16 14:01:30.009322 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:30.009277 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85wnl\" (UniqueName: \"kubernetes.io/projected/b2210f1f-3fa8-41ec-83c9-27e559ff4604-kube-api-access-85wnl\") pod \"prometheus-operator-78f957474d-68dtq\" (UID: \"b2210f1f-3fa8-41ec-83c9-27e559ff4604\") " pod="openshift-monitoring/prometheus-operator-78f957474d-68dtq" Apr 16 14:01:30.009482 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:30.009349 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b2210f1f-3fa8-41ec-83c9-27e559ff4604-metrics-client-ca\") pod \"prometheus-operator-78f957474d-68dtq\" (UID: \"b2210f1f-3fa8-41ec-83c9-27e559ff4604\") " pod="openshift-monitoring/prometheus-operator-78f957474d-68dtq" Apr 16 14:01:30.009482 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:30.009431 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b2210f1f-3fa8-41ec-83c9-27e559ff4604-prometheus-operator-tls\") pod \"prometheus-operator-78f957474d-68dtq\" (UID: \"b2210f1f-3fa8-41ec-83c9-27e559ff4604\") " pod="openshift-monitoring/prometheus-operator-78f957474d-68dtq" Apr 16 14:01:30.009598 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:30.009504 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b2210f1f-3fa8-41ec-83c9-27e559ff4604-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-78f957474d-68dtq\" (UID: \"b2210f1f-3fa8-41ec-83c9-27e559ff4604\") " pod="openshift-monitoring/prometheus-operator-78f957474d-68dtq" Apr 16 14:01:30.110269 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:30.110219 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-85wnl\" (UniqueName: \"kubernetes.io/projected/b2210f1f-3fa8-41ec-83c9-27e559ff4604-kube-api-access-85wnl\") pod \"prometheus-operator-78f957474d-68dtq\" (UID: \"b2210f1f-3fa8-41ec-83c9-27e559ff4604\") " pod="openshift-monitoring/prometheus-operator-78f957474d-68dtq" Apr 16 14:01:30.110696 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:30.110303 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b2210f1f-3fa8-41ec-83c9-27e559ff4604-metrics-client-ca\") pod \"prometheus-operator-78f957474d-68dtq\" (UID: \"b2210f1f-3fa8-41ec-83c9-27e559ff4604\") " pod="openshift-monitoring/prometheus-operator-78f957474d-68dtq" Apr 16 14:01:30.110696 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:30.110400 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b2210f1f-3fa8-41ec-83c9-27e559ff4604-prometheus-operator-tls\") pod \"prometheus-operator-78f957474d-68dtq\" (UID: \"b2210f1f-3fa8-41ec-83c9-27e559ff4604\") " pod="openshift-monitoring/prometheus-operator-78f957474d-68dtq" Apr 16 14:01:30.110696 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:30.110455 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b2210f1f-3fa8-41ec-83c9-27e559ff4604-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-78f957474d-68dtq\" (UID: \"b2210f1f-3fa8-41ec-83c9-27e559ff4604\") " pod="openshift-monitoring/prometheus-operator-78f957474d-68dtq" Apr 16 14:01:30.110861 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:01:30.110731 2569 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Apr 16 14:01:30.110861 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:01:30.110797 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2210f1f-3fa8-41ec-83c9-27e559ff4604-prometheus-operator-tls 
podName:b2210f1f-3fa8-41ec-83c9-27e559ff4604 nodeName:}" failed. No retries permitted until 2026-04-16 14:01:30.610775686 +0000 UTC m=+164.632063427 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/b2210f1f-3fa8-41ec-83c9-27e559ff4604-prometheus-operator-tls") pod "prometheus-operator-78f957474d-68dtq" (UID: "b2210f1f-3fa8-41ec-83c9-27e559ff4604") : secret "prometheus-operator-tls" not found Apr 16 14:01:30.111013 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:30.110992 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b2210f1f-3fa8-41ec-83c9-27e559ff4604-metrics-client-ca\") pod \"prometheus-operator-78f957474d-68dtq\" (UID: \"b2210f1f-3fa8-41ec-83c9-27e559ff4604\") " pod="openshift-monitoring/prometheus-operator-78f957474d-68dtq" Apr 16 14:01:30.113004 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:30.112981 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b2210f1f-3fa8-41ec-83c9-27e559ff4604-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-78f957474d-68dtq\" (UID: \"b2210f1f-3fa8-41ec-83c9-27e559ff4604\") " pod="openshift-monitoring/prometheus-operator-78f957474d-68dtq" Apr 16 14:01:30.120948 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:30.120924 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-85wnl\" (UniqueName: \"kubernetes.io/projected/b2210f1f-3fa8-41ec-83c9-27e559ff4604-kube-api-access-85wnl\") pod \"prometheus-operator-78f957474d-68dtq\" (UID: \"b2210f1f-3fa8-41ec-83c9-27e559ff4604\") " pod="openshift-monitoring/prometheus-operator-78f957474d-68dtq" Apr 16 14:01:30.614300 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:30.614265 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b2210f1f-3fa8-41ec-83c9-27e559ff4604-prometheus-operator-tls\") pod \"prometheus-operator-78f957474d-68dtq\" (UID: \"b2210f1f-3fa8-41ec-83c9-27e559ff4604\") " pod="openshift-monitoring/prometheus-operator-78f957474d-68dtq" Apr 16 14:01:30.616550 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:30.616530 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b2210f1f-3fa8-41ec-83c9-27e559ff4604-prometheus-operator-tls\") pod \"prometheus-operator-78f957474d-68dtq\" (UID: \"b2210f1f-3fa8-41ec-83c9-27e559ff4604\") " pod="openshift-monitoring/prometheus-operator-78f957474d-68dtq" Apr 16 14:01:30.789114 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:30.789084 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-78f957474d-68dtq" Apr 16 14:01:30.901090 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:30.901064 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-78f957474d-68dtq"] Apr 16 14:01:30.903969 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:01:30.903943 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2210f1f_3fa8_41ec_83c9_27e559ff4604.slice/crio-8fe4584f9672eabcfc021f29cab7daac66d068bb02ba2ccba148e7ad8a322c94 WatchSource:0}: Error finding container 8fe4584f9672eabcfc021f29cab7daac66d068bb02ba2ccba148e7ad8a322c94: Status 404 returned error can't find the container with id 8fe4584f9672eabcfc021f29cab7daac66d068bb02ba2ccba148e7ad8a322c94 Apr 16 14:01:31.054017 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:31.053980 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-78f957474d-68dtq" event={"ID":"b2210f1f-3fa8-41ec-83c9-27e559ff4604","Type":"ContainerStarted","Data":"8fe4584f9672eabcfc021f29cab7daac66d068bb02ba2ccba148e7ad8a322c94"} Apr 16 14:01:31.055196 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:31.055149 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-fdps6" event={"ID":"5a4977aa-a63b-46b6-a23c-4924a58855f4","Type":"ContainerStarted","Data":"43404a57127165635871fa0daf16404f67054377666123d739a7af571e2c0bfd"} Apr 16 14:01:31.056559 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:31.056526 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rdgwf" event={"ID":"f334ac90-d973-40ab-bade-1a585fb2d9b2","Type":"ContainerStarted","Data":"3c8541e9c45e21b8554d21ffd1980541a1ff9b9ef6c97d974780528bb7808f4d"} Apr 16 14:01:31.056667 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:31.056563 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rdgwf" event={"ID":"f334ac90-d973-40ab-bade-1a585fb2d9b2","Type":"ContainerStarted","Data":"b4e3ff62a9457d93a97c741427853c6e74b315a85c8b99dc34a2722fdda55c3b"} Apr 16 14:01:31.071543 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:31.071505 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-fdps6" podStartSLOduration=1.958869473 podStartE2EDuration="5.071490301s" podCreationTimestamp="2026-04-16 14:01:26 +0000 UTC" firstStartedPulling="2026-04-16 14:01:27.011987999 +0000 UTC m=+161.033275725" lastFinishedPulling="2026-04-16 14:01:30.124608815 +0000 UTC m=+164.145896553" observedRunningTime="2026-04-16 14:01:31.071345507 +0000 UTC m=+165.092633266" watchObservedRunningTime="2026-04-16 14:01:31.071490301 +0000 UTC m=+165.092778050" Apr 16 14:01:31.087456 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:31.087420 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-rdgwf" podStartSLOduration=129.645222159 podStartE2EDuration="2m11.087407481s" podCreationTimestamp="2026-04-16 13:59:20 +0000 UTC" firstStartedPulling="2026-04-16 14:01:28.686439503 +0000 UTC m=+162.707727229" lastFinishedPulling="2026-04-16 14:01:30.128624809 +0000 UTC m=+164.149912551" observedRunningTime="2026-04-16 14:01:31.087303136 +0000 UTC m=+165.108590884" watchObservedRunningTime="2026-04-16 14:01:31.087407481 +0000 UTC m=+165.108695229" Apr 16 14:01:32.061295 
ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:32.061229 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-rdgwf" Apr 16 14:01:33.068921 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:33.068878 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-78f957474d-68dtq" event={"ID":"b2210f1f-3fa8-41ec-83c9-27e559ff4604","Type":"ContainerStarted","Data":"6bd0730c833a154c8befa101153ca0e8e5aa3eb0c3aeca756516c75f893b6d35"} Apr 16 14:01:33.068921 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:33.068925 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-78f957474d-68dtq" event={"ID":"b2210f1f-3fa8-41ec-83c9-27e559ff4604","Type":"ContainerStarted","Data":"06ac6b2e606dafedd3f9c3d9c8b6f26a1a93d5c25e952f433e3350adde49b26e"} Apr 16 14:01:33.090395 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:33.090351 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-78f957474d-68dtq" podStartSLOduration=2.755783523 podStartE2EDuration="4.090337744s" podCreationTimestamp="2026-04-16 14:01:29 +0000 UTC" firstStartedPulling="2026-04-16 14:01:30.905756816 +0000 UTC m=+164.927044541" lastFinishedPulling="2026-04-16 14:01:32.240311033 +0000 UTC m=+166.261598762" observedRunningTime="2026-04-16 14:01:33.08955906 +0000 UTC m=+167.110846807" watchObservedRunningTime="2026-04-16 14:01:33.090337744 +0000 UTC m=+167.111625496" Apr 16 14:01:35.213749 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:35.213717 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-js5md"] Apr 16 14:01:35.216755 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:35.216722 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-js5md" Apr 16 14:01:35.218743 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:35.218729 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-tls\"" Apr 16 14:01:35.219000 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:35.218986 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-dockercfg-nv4cs\"" Apr 16 14:01:35.219056 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:35.218986 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-kube-rbac-proxy-config\"" Apr 16 14:01:35.219093 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:35.219006 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"node-exporter-accelerators-collector-config\"" Apr 16 14:01:35.350507 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:35.350481 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/206c995a-8df6-47f1-b70d-fae559390324-metrics-client-ca\") pod \"node-exporter-js5md\" (UID: \"206c995a-8df6-47f1-b70d-fae559390324\") " pod="openshift-monitoring/node-exporter-js5md" Apr 16 14:01:35.350656 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:35.350514 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/206c995a-8df6-47f1-b70d-fae559390324-node-exporter-tls\") pod \"node-exporter-js5md\" (UID: \"206c995a-8df6-47f1-b70d-fae559390324\") " pod="openshift-monitoring/node-exporter-js5md" Apr 16 14:01:35.350656 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:35.350535 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/206c995a-8df6-47f1-b70d-fae559390324-node-exporter-accelerators-collector-config\") pod \"node-exporter-js5md\" (UID: \"206c995a-8df6-47f1-b70d-fae559390324\") " pod="openshift-monitoring/node-exporter-js5md" Apr 16 14:01:35.350656 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:35.350618 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/206c995a-8df6-47f1-b70d-fae559390324-node-exporter-textfile\") pod \"node-exporter-js5md\" (UID: \"206c995a-8df6-47f1-b70d-fae559390324\") " pod="openshift-monitoring/node-exporter-js5md" Apr 16 14:01:35.350656 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:35.350647 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/206c995a-8df6-47f1-b70d-fae559390324-node-exporter-wtmp\") pod \"node-exporter-js5md\" (UID: \"206c995a-8df6-47f1-b70d-fae559390324\") " pod="openshift-monitoring/node-exporter-js5md" Apr 16 14:01:35.350823 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:35.350672 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb4fw\" (UniqueName: \"kubernetes.io/projected/206c995a-8df6-47f1-b70d-fae559390324-kube-api-access-sb4fw\") pod \"node-exporter-js5md\" (UID: \"206c995a-8df6-47f1-b70d-fae559390324\") " 
pod="openshift-monitoring/node-exporter-js5md" Apr 16 14:01:35.350823 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:35.350714 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/206c995a-8df6-47f1-b70d-fae559390324-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-js5md\" (UID: \"206c995a-8df6-47f1-b70d-fae559390324\") " pod="openshift-monitoring/node-exporter-js5md" Apr 16 14:01:35.350823 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:35.350742 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/206c995a-8df6-47f1-b70d-fae559390324-root\") pod \"node-exporter-js5md\" (UID: \"206c995a-8df6-47f1-b70d-fae559390324\") " pod="openshift-monitoring/node-exporter-js5md" Apr 16 14:01:35.350823 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:35.350813 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/206c995a-8df6-47f1-b70d-fae559390324-sys\") pod \"node-exporter-js5md\" (UID: \"206c995a-8df6-47f1-b70d-fae559390324\") " pod="openshift-monitoring/node-exporter-js5md" Apr 16 14:01:35.451565 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:35.451538 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/206c995a-8df6-47f1-b70d-fae559390324-metrics-client-ca\") pod \"node-exporter-js5md\" (UID: \"206c995a-8df6-47f1-b70d-fae559390324\") " pod="openshift-monitoring/node-exporter-js5md" Apr 16 14:01:35.451565 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:35.451571 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/206c995a-8df6-47f1-b70d-fae559390324-node-exporter-tls\") pod \"node-exporter-js5md\" (UID: \"206c995a-8df6-47f1-b70d-fae559390324\") " pod="openshift-monitoring/node-exporter-js5md" Apr 16 14:01:35.451747 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:35.451593 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/206c995a-8df6-47f1-b70d-fae559390324-node-exporter-accelerators-collector-config\") pod \"node-exporter-js5md\" (UID: \"206c995a-8df6-47f1-b70d-fae559390324\") " pod="openshift-monitoring/node-exporter-js5md" Apr 16 14:01:35.451747 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:35.451624 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/206c995a-8df6-47f1-b70d-fae559390324-node-exporter-textfile\") pod \"node-exporter-js5md\" (UID: \"206c995a-8df6-47f1-b70d-fae559390324\") " pod="openshift-monitoring/node-exporter-js5md" Apr 16 14:01:35.451747 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:35.451648 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/206c995a-8df6-47f1-b70d-fae559390324-node-exporter-wtmp\") pod \"node-exporter-js5md\" (UID: \"206c995a-8df6-47f1-b70d-fae559390324\") " pod="openshift-monitoring/node-exporter-js5md" Apr 16 14:01:35.451747 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:35.451668 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"kube-api-access-sb4fw\" (UniqueName: \"kubernetes.io/projected/206c995a-8df6-47f1-b70d-fae559390324-kube-api-access-sb4fw\") pod \"node-exporter-js5md\" (UID: \"206c995a-8df6-47f1-b70d-fae559390324\") " pod="openshift-monitoring/node-exporter-js5md" Apr 16 14:01:35.451747 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:35.451697 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/206c995a-8df6-47f1-b70d-fae559390324-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-js5md\" (UID: \"206c995a-8df6-47f1-b70d-fae559390324\") " pod="openshift-monitoring/node-exporter-js5md" Apr 16 14:01:35.451954 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:35.451904 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/206c995a-8df6-47f1-b70d-fae559390324-root\") pod \"node-exporter-js5md\" (UID: \"206c995a-8df6-47f1-b70d-fae559390324\") " pod="openshift-monitoring/node-exporter-js5md" Apr 16 14:01:35.452006 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:35.451941 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/206c995a-8df6-47f1-b70d-fae559390324-node-exporter-wtmp\") pod \"node-exporter-js5md\" (UID: \"206c995a-8df6-47f1-b70d-fae559390324\") " pod="openshift-monitoring/node-exporter-js5md" Apr 16 14:01:35.452006 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:35.451946 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/206c995a-8df6-47f1-b70d-fae559390324-sys\") pod \"node-exporter-js5md\" (UID: \"206c995a-8df6-47f1-b70d-fae559390324\") " pod="openshift-monitoring/node-exporter-js5md" Apr 16 14:01:35.452091 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:35.452014 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/206c995a-8df6-47f1-b70d-fae559390324-root\") pod \"node-exporter-js5md\" (UID: \"206c995a-8df6-47f1-b70d-fae559390324\") " pod="openshift-monitoring/node-exporter-js5md" Apr 16 14:01:35.452091 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:35.452018 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/206c995a-8df6-47f1-b70d-fae559390324-node-exporter-textfile\") pod \"node-exporter-js5md\" (UID: \"206c995a-8df6-47f1-b70d-fae559390324\") " pod="openshift-monitoring/node-exporter-js5md" Apr 16 14:01:35.452091 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:35.452081 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/206c995a-8df6-47f1-b70d-fae559390324-sys\") pod \"node-exporter-js5md\" (UID: \"206c995a-8df6-47f1-b70d-fae559390324\") " pod="openshift-monitoring/node-exporter-js5md" Apr 16 14:01:35.452225 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:35.452195 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/206c995a-8df6-47f1-b70d-fae559390324-metrics-client-ca\") pod \"node-exporter-js5md\" (UID: \"206c995a-8df6-47f1-b70d-fae559390324\") " pod="openshift-monitoring/node-exporter-js5md" Apr 16 14:01:35.452380 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:35.452358 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/206c995a-8df6-47f1-b70d-fae559390324-node-exporter-accelerators-collector-config\") pod \"node-exporter-js5md\" (UID: \"206c995a-8df6-47f1-b70d-fae559390324\") " pod="openshift-monitoring/node-exporter-js5md" Apr 16 14:01:35.454223 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:35.454203 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/206c995a-8df6-47f1-b70d-fae559390324-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-js5md\" (UID: \"206c995a-8df6-47f1-b70d-fae559390324\") " pod="openshift-monitoring/node-exporter-js5md" Apr 16 14:01:35.454326 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:35.454273 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/206c995a-8df6-47f1-b70d-fae559390324-node-exporter-tls\") pod \"node-exporter-js5md\" (UID: \"206c995a-8df6-47f1-b70d-fae559390324\") " pod="openshift-monitoring/node-exporter-js5md" Apr 16 14:01:35.459064 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:35.459039 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb4fw\" (UniqueName: \"kubernetes.io/projected/206c995a-8df6-47f1-b70d-fae559390324-kube-api-access-sb4fw\") pod \"node-exporter-js5md\" (UID: \"206c995a-8df6-47f1-b70d-fae559390324\") " pod="openshift-monitoring/node-exporter-js5md" Apr 16 14:01:35.525806 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:35.525738 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-js5md" Apr 16 14:01:35.533646 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:01:35.533619 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod206c995a_8df6_47f1_b70d_fae559390324.slice/crio-33303dc5f871ea10f48dfb5095732fc6b3725abea7cf7b9abe8a3bb225dd8d65 WatchSource:0}: Error finding container 33303dc5f871ea10f48dfb5095732fc6b3725abea7cf7b9abe8a3bb225dd8d65: Status 404 returned error can't find the container with id 33303dc5f871ea10f48dfb5095732fc6b3725abea7cf7b9abe8a3bb225dd8d65 Apr 16 14:01:36.078000 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.077962 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-js5md" event={"ID":"206c995a-8df6-47f1-b70d-fae559390324","Type":"ContainerStarted","Data":"33303dc5f871ea10f48dfb5095732fc6b3725abea7cf7b9abe8a3bb225dd8d65"} Apr 16 14:01:36.283210 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.283181 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 16 14:01:36.286436 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.286415 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.288803 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.288780 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-tls\"" Apr 16 14:01:36.288894 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.288866 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy-web\"" Apr 16 14:01:36.288969 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.288890 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy-metric\"" Apr 16 14:01:36.288969 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.288893 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-dockercfg-z64zd\"" Apr 16 14:01:36.289069 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.289002 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"alertmanager-trusted-ca-bundle\"" Apr 16 14:01:36.289184 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.289159 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy\"" Apr 16 14:01:36.289409 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.289386 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-tls-assets-0\"" Apr 16 14:01:36.289518 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.289411 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-cluster-tls-config\"" Apr 16 14:01:36.289518 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.289437 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-web-config\"" Apr 16 14:01:36.289842 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.289823 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-generated\"" Apr 16 14:01:36.299644 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.299607 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 16 14:01:36.359875 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.359189 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.359875 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.359255 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.359875 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.359284 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-config-volume\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.359875 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.359312 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/9fddbc08-5509-4e65-bdae-beebb2d56a6a-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.359875 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.359344 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-web-config\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.359875 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.359370 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.359875 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.359393 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j4ps\" (UniqueName: \"kubernetes.io/projected/9fddbc08-5509-4e65-bdae-beebb2d56a6a-kube-api-access-7j4ps\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.359875 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.359421 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.359875 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.359448 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9fddbc08-5509-4e65-bdae-beebb2d56a6a-config-out\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.359875 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.359475 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fddbc08-5509-4e65-bdae-beebb2d56a6a-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.359875 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.359515 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/9fddbc08-5509-4e65-bdae-beebb2d56a6a-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.359875 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.359550 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-cluster-tls-config\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.359875 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.359596 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9fddbc08-5509-4e65-bdae-beebb2d56a6a-tls-assets\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.460706 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.460684 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.460834 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.460716 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.460834 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.460744 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-config-volume\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.460834 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.460772 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/9fddbc08-5509-4e65-bdae-beebb2d56a6a-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.460834 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.460802 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-web-config\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.460834 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.460826 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " 
pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.461056 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.460852 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7j4ps\" (UniqueName: \"kubernetes.io/projected/9fddbc08-5509-4e65-bdae-beebb2d56a6a-kube-api-access-7j4ps\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.461056 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.460883 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.461056 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.460907 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9fddbc08-5509-4e65-bdae-beebb2d56a6a-config-out\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.461056 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.460935 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fddbc08-5509-4e65-bdae-beebb2d56a6a-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.461056 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.460976 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9fddbc08-5509-4e65-bdae-beebb2d56a6a-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.461056 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.461011 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-cluster-tls-config\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.461377 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.461058 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9fddbc08-5509-4e65-bdae-beebb2d56a6a-tls-assets\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.461626 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.461572 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/9fddbc08-5509-4e65-bdae-beebb2d56a6a-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.463142 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.462822 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/9fddbc08-5509-4e65-bdae-beebb2d56a6a-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.463142 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:01:36.462953 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9fddbc08-5509-4e65-bdae-beebb2d56a6a-alertmanager-trusted-ca-bundle podName:9fddbc08-5509-4e65-bdae-beebb2d56a6a nodeName:}" failed. No retries permitted until 2026-04-16 14:01:36.962933855 +0000 UTC m=+170.984221584 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9fddbc08-5509-4e65-bdae-beebb2d56a6a-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "9fddbc08-5509-4e65-bdae-beebb2d56a6a") : configmap references non-existent config key: ca-bundle.crt Apr 16 14:01:36.463815 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.463789 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.464054 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.464036 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-web-config\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.464154 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.464118 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.464654 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.464629 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9fddbc08-5509-4e65-bdae-beebb2d56a6a-config-out\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.464830 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.464806 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.464994 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.464970 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-config-volume\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.465426 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.465406 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-cluster-tls-config\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.465602 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.465587 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.466080 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.466063 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9fddbc08-5509-4e65-bdae-beebb2d56a6a-tls-assets\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.468637 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.468617 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j4ps\" (UniqueName: \"kubernetes.io/projected/9fddbc08-5509-4e65-bdae-beebb2d56a6a-kube-api-access-7j4ps\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.600510 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.600470 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99gsl" Apr 16 14:01:36.964585 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.964536 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fddbc08-5509-4e65-bdae-beebb2d56a6a-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:36.965298 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:36.965279 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fddbc08-5509-4e65-bdae-beebb2d56a6a-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:37.197387 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:37.197349 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:01:37.331119 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:37.331096 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 16 14:01:37.332780 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:01:37.332749 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fddbc08_5509_4e65_bdae_beebb2d56a6a.slice/crio-0927c8df13ce565385a71261f163829bba18f30f6e06b10982aa1d224015ed1c WatchSource:0}: Error finding container 0927c8df13ce565385a71261f163829bba18f30f6e06b10982aa1d224015ed1c: Status 404 returned error can't find the container with id 0927c8df13ce565385a71261f163829bba18f30f6e06b10982aa1d224015ed1c Apr 16 14:01:37.598003 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:37.597934 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-92l7v" Apr 16 14:01:37.600399 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:37.600374 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-wz4rp\"" Apr 16 14:01:37.609142 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:37.609123 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-92l7v" Apr 16 14:01:37.843799 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:37.843771 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-92l7v"] Apr 16 14:01:37.846634 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:01:37.846607 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bfef623_c79c_41c4_9fc5_0a25ecab1f4a.slice/crio-d072f029bebea0a9112fbcf65ea9cf313eebdde478da44e2d8c5a739220b895e WatchSource:0}: Error finding container d072f029bebea0a9112fbcf65ea9cf313eebdde478da44e2d8c5a739220b895e: Status 404 returned error can't find the container with id d072f029bebea0a9112fbcf65ea9cf313eebdde478da44e2d8c5a739220b895e Apr 16 14:01:38.084376 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.084337 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9fddbc08-5509-4e65-bdae-beebb2d56a6a","Type":"ContainerStarted","Data":"0927c8df13ce565385a71261f163829bba18f30f6e06b10982aa1d224015ed1c"} Apr 16 14:01:38.085718 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.085687 2569 generic.go:358] "Generic (PLEG): container finished" podID="206c995a-8df6-47f1-b70d-fae559390324" containerID="e695696cfbbc6b6289f12ee7c380bd9f9c849c54c88fbafe732bc32597d8028c" exitCode=0 Apr 16 14:01:38.085833 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.085755 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-js5md" event={"ID":"206c995a-8df6-47f1-b70d-fae559390324","Type":"ContainerDied","Data":"e695696cfbbc6b6289f12ee7c380bd9f9c849c54c88fbafe732bc32597d8028c"} Apr 16 14:01:38.086846 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.086824 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-92l7v" event={"ID":"3bfef623-c79c-41c4-9fc5-0a25ecab1f4a","Type":"ContainerStarted","Data":"d072f029bebea0a9112fbcf65ea9cf313eebdde478da44e2d8c5a739220b895e"} Apr 16 14:01:38.283898 ip-10-0-129-84 kubenswrapper[2569]: I0416 
14:01:38.283865 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-5789679d96-mrqgr"] Apr 16 14:01:38.287367 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.287347 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-5789679d96-mrqgr" Apr 16 14:01:38.289784 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.289760 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-kube-rbac-proxy-metrics\"" Apr 16 14:01:38.289888 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.289762 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-kube-rbac-proxy-web\"" Apr 16 14:01:38.289888 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.289861 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-kube-rbac-proxy\"" Apr 16 14:01:38.290013 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.289930 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-grpc-tls-a7l0mfhv2e8m2\"" Apr 16 14:01:38.290013 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.289933 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-kube-rbac-proxy-rules\"" Apr 16 14:01:38.290120 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.290030 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-dockercfg-znp62\"" Apr 16 14:01:38.290525 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.290375 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"thanos-querier-tls\"" Apr 16 14:01:38.297977 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.297898 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-5789679d96-mrqgr"] Apr 16 14:01:38.378602 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.378534 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e188b038-b59c-4fb7-b257-d08cf56b2473-metrics-client-ca\") pod \"thanos-querier-5789679d96-mrqgr\" (UID: \"e188b038-b59c-4fb7-b257-d08cf56b2473\") " pod="openshift-monitoring/thanos-querier-5789679d96-mrqgr" Apr 16 14:01:38.378602 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.378576 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/e188b038-b59c-4fb7-b257-d08cf56b2473-secret-thanos-querier-tls\") pod \"thanos-querier-5789679d96-mrqgr\" (UID: \"e188b038-b59c-4fb7-b257-d08cf56b2473\") " pod="openshift-monitoring/thanos-querier-5789679d96-mrqgr" Apr 16 14:01:38.379030 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.378638 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/e188b038-b59c-4fb7-b257-d08cf56b2473-secret-grpc-tls\") pod \"thanos-querier-5789679d96-mrqgr\" (UID: \"e188b038-b59c-4fb7-b257-d08cf56b2473\") " pod="openshift-monitoring/thanos-querier-5789679d96-mrqgr" Apr 16 14:01:38.379030 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.378704 2569 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/e188b038-b59c-4fb7-b257-d08cf56b2473-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5789679d96-mrqgr\" (UID: \"e188b038-b59c-4fb7-b257-d08cf56b2473\") " pod="openshift-monitoring/thanos-querier-5789679d96-mrqgr" Apr 16 14:01:38.379030 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.378732 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e188b038-b59c-4fb7-b257-d08cf56b2473-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5789679d96-mrqgr\" (UID: \"e188b038-b59c-4fb7-b257-d08cf56b2473\") " pod="openshift-monitoring/thanos-querier-5789679d96-mrqgr" Apr 16 14:01:38.379030 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.378767 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjbzh\" (UniqueName: \"kubernetes.io/projected/e188b038-b59c-4fb7-b257-d08cf56b2473-kube-api-access-gjbzh\") pod \"thanos-querier-5789679d96-mrqgr\" (UID: \"e188b038-b59c-4fb7-b257-d08cf56b2473\") " pod="openshift-monitoring/thanos-querier-5789679d96-mrqgr" Apr 16 14:01:38.379030 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.378805 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/e188b038-b59c-4fb7-b257-d08cf56b2473-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5789679d96-mrqgr\" (UID: \"e188b038-b59c-4fb7-b257-d08cf56b2473\") " pod="openshift-monitoring/thanos-querier-5789679d96-mrqgr" Apr 16 14:01:38.379030 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.378873 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e188b038-b59c-4fb7-b257-d08cf56b2473-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5789679d96-mrqgr\" (UID: \"e188b038-b59c-4fb7-b257-d08cf56b2473\") " pod="openshift-monitoring/thanos-querier-5789679d96-mrqgr" Apr 16 14:01:38.480341 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.480300 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e188b038-b59c-4fb7-b257-d08cf56b2473-metrics-client-ca\") pod \"thanos-querier-5789679d96-mrqgr\" (UID: \"e188b038-b59c-4fb7-b257-d08cf56b2473\") " pod="openshift-monitoring/thanos-querier-5789679d96-mrqgr" Apr 16 14:01:38.480501 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.480347 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/e188b038-b59c-4fb7-b257-d08cf56b2473-secret-thanos-querier-tls\") pod \"thanos-querier-5789679d96-mrqgr\" (UID: \"e188b038-b59c-4fb7-b257-d08cf56b2473\") " pod="openshift-monitoring/thanos-querier-5789679d96-mrqgr" Apr 16 14:01:38.480501 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.480474 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/e188b038-b59c-4fb7-b257-d08cf56b2473-secret-grpc-tls\") pod \"thanos-querier-5789679d96-mrqgr\" (UID: 
\"e188b038-b59c-4fb7-b257-d08cf56b2473\") " pod="openshift-monitoring/thanos-querier-5789679d96-mrqgr" Apr 16 14:01:38.480612 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.480538 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/e188b038-b59c-4fb7-b257-d08cf56b2473-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5789679d96-mrqgr\" (UID: \"e188b038-b59c-4fb7-b257-d08cf56b2473\") " pod="openshift-monitoring/thanos-querier-5789679d96-mrqgr" Apr 16 14:01:38.480612 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.480572 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e188b038-b59c-4fb7-b257-d08cf56b2473-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5789679d96-mrqgr\" (UID: \"e188b038-b59c-4fb7-b257-d08cf56b2473\") " pod="openshift-monitoring/thanos-querier-5789679d96-mrqgr" Apr 16 14:01:38.480612 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.480606 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gjbzh\" (UniqueName: \"kubernetes.io/projected/e188b038-b59c-4fb7-b257-d08cf56b2473-kube-api-access-gjbzh\") pod \"thanos-querier-5789679d96-mrqgr\" (UID: \"e188b038-b59c-4fb7-b257-d08cf56b2473\") " pod="openshift-monitoring/thanos-querier-5789679d96-mrqgr" Apr 16 14:01:38.480773 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.480647 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/e188b038-b59c-4fb7-b257-d08cf56b2473-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5789679d96-mrqgr\" (UID: \"e188b038-b59c-4fb7-b257-d08cf56b2473\") " pod="openshift-monitoring/thanos-querier-5789679d96-mrqgr" Apr 16 14:01:38.480773 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.480685 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e188b038-b59c-4fb7-b257-d08cf56b2473-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5789679d96-mrqgr\" (UID: \"e188b038-b59c-4fb7-b257-d08cf56b2473\") " pod="openshift-monitoring/thanos-querier-5789679d96-mrqgr" Apr 16 14:01:38.481186 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.481160 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e188b038-b59c-4fb7-b257-d08cf56b2473-metrics-client-ca\") pod \"thanos-querier-5789679d96-mrqgr\" (UID: \"e188b038-b59c-4fb7-b257-d08cf56b2473\") " pod="openshift-monitoring/thanos-querier-5789679d96-mrqgr" Apr 16 14:01:38.483976 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.483939 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e188b038-b59c-4fb7-b257-d08cf56b2473-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5789679d96-mrqgr\" (UID: \"e188b038-b59c-4fb7-b257-d08cf56b2473\") " pod="openshift-monitoring/thanos-querier-5789679d96-mrqgr" Apr 16 14:01:38.484250 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.484078 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: 
\"kubernetes.io/secret/e188b038-b59c-4fb7-b257-d08cf56b2473-secret-thanos-querier-tls\") pod \"thanos-querier-5789679d96-mrqgr\" (UID: \"e188b038-b59c-4fb7-b257-d08cf56b2473\") " pod="openshift-monitoring/thanos-querier-5789679d96-mrqgr" Apr 16 14:01:38.484250 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.484208 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/e188b038-b59c-4fb7-b257-d08cf56b2473-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5789679d96-mrqgr\" (UID: \"e188b038-b59c-4fb7-b257-d08cf56b2473\") " pod="openshift-monitoring/thanos-querier-5789679d96-mrqgr" Apr 16 14:01:38.484985 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.484958 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/e188b038-b59c-4fb7-b257-d08cf56b2473-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5789679d96-mrqgr\" (UID: \"e188b038-b59c-4fb7-b257-d08cf56b2473\") " pod="openshift-monitoring/thanos-querier-5789679d96-mrqgr" Apr 16 14:01:38.485543 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.485520 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e188b038-b59c-4fb7-b257-d08cf56b2473-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5789679d96-mrqgr\" (UID: \"e188b038-b59c-4fb7-b257-d08cf56b2473\") " pod="openshift-monitoring/thanos-querier-5789679d96-mrqgr" Apr 16 14:01:38.485996 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.485976 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/e188b038-b59c-4fb7-b257-d08cf56b2473-secret-grpc-tls\") pod \"thanos-querier-5789679d96-mrqgr\" (UID: \"e188b038-b59c-4fb7-b257-d08cf56b2473\") " pod="openshift-monitoring/thanos-querier-5789679d96-mrqgr" Apr 16 14:01:38.489079 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.489056 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjbzh\" (UniqueName: \"kubernetes.io/projected/e188b038-b59c-4fb7-b257-d08cf56b2473-kube-api-access-gjbzh\") pod \"thanos-querier-5789679d96-mrqgr\" (UID: \"e188b038-b59c-4fb7-b257-d08cf56b2473\") " pod="openshift-monitoring/thanos-querier-5789679d96-mrqgr" Apr 16 14:01:38.600132 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.600098 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-5789679d96-mrqgr" Apr 16 14:01:38.875416 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:38.875314 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-5789679d96-mrqgr"] Apr 16 14:01:38.907417 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:01:38.907386 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode188b038_b59c_4fb7_b257_d08cf56b2473.slice/crio-a6d28a05cb56a74b1460699aa2905c3641becc0956118071c76cd03ba2a715b0 WatchSource:0}: Error finding container a6d28a05cb56a74b1460699aa2905c3641becc0956118071c76cd03ba2a715b0: Status 404 returned error can't find the container with id a6d28a05cb56a74b1460699aa2905c3641becc0956118071c76cd03ba2a715b0 Apr 16 14:01:39.092643 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:39.092565 2569 generic.go:358] "Generic (PLEG): container finished" podID="9fddbc08-5509-4e65-bdae-beebb2d56a6a" containerID="ea270983ba4d78a9941ebc3e9b6f0e2a209e90766b34c4f67c5ace8cd75ac201" exitCode=0 Apr 16 14:01:39.092796 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:39.092657 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9fddbc08-5509-4e65-bdae-beebb2d56a6a","Type":"ContainerDied","Data":"ea270983ba4d78a9941ebc3e9b6f0e2a209e90766b34c4f67c5ace8cd75ac201"} Apr 16 14:01:39.094819 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:39.094796 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-js5md" event={"ID":"206c995a-8df6-47f1-b70d-fae559390324","Type":"ContainerStarted","Data":"19bea91414fd13c3b3492ff28b59061c6ed4df21e6a88abaa6ddffe0c1b0d896"} Apr 16 14:01:39.094916 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:39.094824 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-js5md" event={"ID":"206c995a-8df6-47f1-b70d-fae559390324","Type":"ContainerStarted","Data":"299c3c0922f448ef3ee71ad12b7ce5059a2264794a8f2c4003ea4eaf0766acc0"} Apr 16 14:01:39.095997 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:39.095969 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5789679d96-mrqgr" event={"ID":"e188b038-b59c-4fb7-b257-d08cf56b2473","Type":"ContainerStarted","Data":"a6d28a05cb56a74b1460699aa2905c3641becc0956118071c76cd03ba2a715b0"} Apr 16 14:01:39.130113 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:39.130068 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-js5md" podStartSLOduration=1.9157559690000001 podStartE2EDuration="4.130054636s" podCreationTimestamp="2026-04-16 14:01:35 +0000 UTC" firstStartedPulling="2026-04-16 14:01:35.535266507 +0000 UTC m=+169.556554232" lastFinishedPulling="2026-04-16 14:01:37.749565158 +0000 UTC m=+171.770852899" observedRunningTime="2026-04-16 14:01:39.129166066 +0000 UTC m=+173.150453816" watchObservedRunningTime="2026-04-16 14:01:39.130054636 +0000 UTC m=+173.151342384" Apr 16 14:01:39.544712 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:39.544676 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-68dc684744-wk424"] Apr 16 14:01:39.547788 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:39.547762 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-68dc684744-wk424" Apr 16 14:01:39.550469 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:39.550443 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"metrics-server-audit-profiles\"" Apr 16 14:01:39.551423 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:39.551384 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kubelet-serving-ca-bundle\"" Apr 16 14:01:39.551423 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:39.551388 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"metrics-server-dockercfg-ptjkw\"" Apr 16 14:01:39.551423 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:39.551408 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"metrics-server-client-certs\"" Apr 16 14:01:39.551645 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:39.551446 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"metrics-server-fle51lo1fqivu\"" Apr 16 14:01:39.551645 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:39.551524 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"metrics-server-tls\"" Apr 16 14:01:39.557998 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:39.557978 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-68dc684744-wk424"] Apr 16 14:01:39.690955 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:39.690929 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ada681f4-20a2-4c44-96bb-7e711b04a8dc-client-ca-bundle\") pod \"metrics-server-68dc684744-wk424\" (UID: \"ada681f4-20a2-4c44-96bb-7e711b04a8dc\") " pod="openshift-monitoring/metrics-server-68dc684744-wk424" Apr 16 14:01:39.691082 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:39.690970 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ada681f4-20a2-4c44-96bb-7e711b04a8dc-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-68dc684744-wk424\" (UID: \"ada681f4-20a2-4c44-96bb-7e711b04a8dc\") " pod="openshift-monitoring/metrics-server-68dc684744-wk424" Apr 16 14:01:39.691082 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:39.691005 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxvfk\" (UniqueName: \"kubernetes.io/projected/ada681f4-20a2-4c44-96bb-7e711b04a8dc-kube-api-access-mxvfk\") pod \"metrics-server-68dc684744-wk424\" (UID: \"ada681f4-20a2-4c44-96bb-7e711b04a8dc\") " pod="openshift-monitoring/metrics-server-68dc684744-wk424" Apr 16 14:01:39.691082 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:39.691028 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ada681f4-20a2-4c44-96bb-7e711b04a8dc-audit-log\") pod \"metrics-server-68dc684744-wk424\" (UID: \"ada681f4-20a2-4c44-96bb-7e711b04a8dc\") " pod="openshift-monitoring/metrics-server-68dc684744-wk424" Apr 16 14:01:39.691210 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:39.691109 2569 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ada681f4-20a2-4c44-96bb-7e711b04a8dc-metrics-server-audit-profiles\") pod \"metrics-server-68dc684744-wk424\" (UID: \"ada681f4-20a2-4c44-96bb-7e711b04a8dc\") " pod="openshift-monitoring/metrics-server-68dc684744-wk424" Apr 16 14:01:39.691210 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:39.691143 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ada681f4-20a2-4c44-96bb-7e711b04a8dc-secret-metrics-server-tls\") pod \"metrics-server-68dc684744-wk424\" (UID: \"ada681f4-20a2-4c44-96bb-7e711b04a8dc\") " pod="openshift-monitoring/metrics-server-68dc684744-wk424" Apr 16 14:01:39.691210 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:39.691171 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-client-certs\" (UniqueName: \"kubernetes.io/secret/ada681f4-20a2-4c44-96bb-7e711b04a8dc-secret-metrics-server-client-certs\") pod \"metrics-server-68dc684744-wk424\" (UID: \"ada681f4-20a2-4c44-96bb-7e711b04a8dc\") " pod="openshift-monitoring/metrics-server-68dc684744-wk424" Apr 16 14:01:39.792497 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:39.792460 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-metrics-server-client-certs\" (UniqueName: \"kubernetes.io/secret/ada681f4-20a2-4c44-96bb-7e711b04a8dc-secret-metrics-server-client-certs\") pod \"metrics-server-68dc684744-wk424\" (UID: \"ada681f4-20a2-4c44-96bb-7e711b04a8dc\") " pod="openshift-monitoring/metrics-server-68dc684744-wk424" Apr 16 14:01:39.792665 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:39.792540 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ada681f4-20a2-4c44-96bb-7e711b04a8dc-client-ca-bundle\") pod \"metrics-server-68dc684744-wk424\" (UID: \"ada681f4-20a2-4c44-96bb-7e711b04a8dc\") " pod="openshift-monitoring/metrics-server-68dc684744-wk424" Apr 16 14:01:39.792665 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:39.792578 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ada681f4-20a2-4c44-96bb-7e711b04a8dc-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-68dc684744-wk424\" (UID: \"ada681f4-20a2-4c44-96bb-7e711b04a8dc\") " pod="openshift-monitoring/metrics-server-68dc684744-wk424" Apr 16 14:01:39.792665 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:39.792609 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mxvfk\" (UniqueName: \"kubernetes.io/projected/ada681f4-20a2-4c44-96bb-7e711b04a8dc-kube-api-access-mxvfk\") pod \"metrics-server-68dc684744-wk424\" (UID: \"ada681f4-20a2-4c44-96bb-7e711b04a8dc\") " pod="openshift-monitoring/metrics-server-68dc684744-wk424" Apr 16 14:01:39.792665 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:39.792636 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ada681f4-20a2-4c44-96bb-7e711b04a8dc-audit-log\") pod \"metrics-server-68dc684744-wk424\" (UID: \"ada681f4-20a2-4c44-96bb-7e711b04a8dc\") " pod="openshift-monitoring/metrics-server-68dc684744-wk424" Apr 16 14:01:39.792902 ip-10-0-129-84 
kubenswrapper[2569]: I0416 14:01:39.792696 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ada681f4-20a2-4c44-96bb-7e711b04a8dc-metrics-server-audit-profiles\") pod \"metrics-server-68dc684744-wk424\" (UID: \"ada681f4-20a2-4c44-96bb-7e711b04a8dc\") " pod="openshift-monitoring/metrics-server-68dc684744-wk424" Apr 16 14:01:39.792902 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:39.792739 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ada681f4-20a2-4c44-96bb-7e711b04a8dc-secret-metrics-server-tls\") pod \"metrics-server-68dc684744-wk424\" (UID: \"ada681f4-20a2-4c44-96bb-7e711b04a8dc\") " pod="openshift-monitoring/metrics-server-68dc684744-wk424" Apr 16 14:01:39.793161 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:39.793140 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ada681f4-20a2-4c44-96bb-7e711b04a8dc-audit-log\") pod \"metrics-server-68dc684744-wk424\" (UID: \"ada681f4-20a2-4c44-96bb-7e711b04a8dc\") " pod="openshift-monitoring/metrics-server-68dc684744-wk424" Apr 16 14:01:39.793487 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:39.793436 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ada681f4-20a2-4c44-96bb-7e711b04a8dc-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-68dc684744-wk424\" (UID: \"ada681f4-20a2-4c44-96bb-7e711b04a8dc\") " pod="openshift-monitoring/metrics-server-68dc684744-wk424" Apr 16 14:01:39.793966 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:39.793918 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ada681f4-20a2-4c44-96bb-7e711b04a8dc-metrics-server-audit-profiles\") pod \"metrics-server-68dc684744-wk424\" (UID: \"ada681f4-20a2-4c44-96bb-7e711b04a8dc\") " pod="openshift-monitoring/metrics-server-68dc684744-wk424" Apr 16 14:01:39.795609 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:39.795523 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-client-certs\" (UniqueName: \"kubernetes.io/secret/ada681f4-20a2-4c44-96bb-7e711b04a8dc-secret-metrics-server-client-certs\") pod \"metrics-server-68dc684744-wk424\" (UID: \"ada681f4-20a2-4c44-96bb-7e711b04a8dc\") " pod="openshift-monitoring/metrics-server-68dc684744-wk424" Apr 16 14:01:39.796201 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:39.795862 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ada681f4-20a2-4c44-96bb-7e711b04a8dc-secret-metrics-server-tls\") pod \"metrics-server-68dc684744-wk424\" (UID: \"ada681f4-20a2-4c44-96bb-7e711b04a8dc\") " pod="openshift-monitoring/metrics-server-68dc684744-wk424" Apr 16 14:01:39.796201 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:39.796162 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ada681f4-20a2-4c44-96bb-7e711b04a8dc-client-ca-bundle\") pod \"metrics-server-68dc684744-wk424\" (UID: \"ada681f4-20a2-4c44-96bb-7e711b04a8dc\") " pod="openshift-monitoring/metrics-server-68dc684744-wk424" Apr 16 14:01:39.800233 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:39.800213 
2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxvfk\" (UniqueName: \"kubernetes.io/projected/ada681f4-20a2-4c44-96bb-7e711b04a8dc-kube-api-access-mxvfk\") pod \"metrics-server-68dc684744-wk424\" (UID: \"ada681f4-20a2-4c44-96bb-7e711b04a8dc\") " pod="openshift-monitoring/metrics-server-68dc684744-wk424" Apr 16 14:01:39.860806 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:39.860775 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-68dc684744-wk424" Apr 16 14:01:40.014698 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:40.014666 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-5876b4bbc7-t4hwq"] Apr 16 14:01:40.017154 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:40.017130 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-5876b4bbc7-t4hwq" Apr 16 14:01:40.019902 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:40.019478 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"monitoring-plugin-cert\"" Apr 16 14:01:40.019902 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:40.019546 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"default-dockercfg-bk9tc\"" Apr 16 14:01:40.020265 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:40.020229 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-68dc684744-wk424"] Apr 16 14:01:40.024383 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:01:40.024299 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podada681f4_20a2_4c44_96bb_7e711b04a8dc.slice/crio-0a2cde10b8a1a7bbf61f05bdf29eb7cc73047cb5b72c049fe62c09497fb6c8f6 WatchSource:0}: Error finding container 0a2cde10b8a1a7bbf61f05bdf29eb7cc73047cb5b72c049fe62c09497fb6c8f6: Status 404 returned error can't find the container with id 0a2cde10b8a1a7bbf61f05bdf29eb7cc73047cb5b72c049fe62c09497fb6c8f6 Apr 16 14:01:40.025840 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:40.025802 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-5876b4bbc7-t4hwq"] Apr 16 14:01:40.095954 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:40.095863 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/30c840a8-6433-4186-9b21-6cae0b492905-monitoring-plugin-cert\") pod \"monitoring-plugin-5876b4bbc7-t4hwq\" (UID: \"30c840a8-6433-4186-9b21-6cae0b492905\") " pod="openshift-monitoring/monitoring-plugin-5876b4bbc7-t4hwq" Apr 16 14:01:40.100913 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:40.100879 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-92l7v" event={"ID":"3bfef623-c79c-41c4-9fc5-0a25ecab1f4a","Type":"ContainerStarted","Data":"89086704dc1b74e7b5365763a06fc352a9d25c97960172534427a449e7b998d9"} Apr 16 14:01:40.102297 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:40.102270 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-68dc684744-wk424" event={"ID":"ada681f4-20a2-4c44-96bb-7e711b04a8dc","Type":"ContainerStarted","Data":"0a2cde10b8a1a7bbf61f05bdf29eb7cc73047cb5b72c049fe62c09497fb6c8f6"} Apr 16 14:01:40.115253 ip-10-0-129-84 
kubenswrapper[2569]: I0416 14:01:40.115187 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-92l7v" podStartSLOduration=138.347966308 podStartE2EDuration="2m20.115153732s" podCreationTimestamp="2026-04-16 13:59:20 +0000 UTC" firstStartedPulling="2026-04-16 14:01:37.848594851 +0000 UTC m=+171.869882577" lastFinishedPulling="2026-04-16 14:01:39.615782264 +0000 UTC m=+173.637070001" observedRunningTime="2026-04-16 14:01:40.113624661 +0000 UTC m=+174.134912411" watchObservedRunningTime="2026-04-16 14:01:40.115153732 +0000 UTC m=+174.136441482" Apr 16 14:01:40.196573 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:40.196539 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/30c840a8-6433-4186-9b21-6cae0b492905-monitoring-plugin-cert\") pod \"monitoring-plugin-5876b4bbc7-t4hwq\" (UID: \"30c840a8-6433-4186-9b21-6cae0b492905\") " pod="openshift-monitoring/monitoring-plugin-5876b4bbc7-t4hwq" Apr 16 14:01:40.200181 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:40.200155 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/30c840a8-6433-4186-9b21-6cae0b492905-monitoring-plugin-cert\") pod \"monitoring-plugin-5876b4bbc7-t4hwq\" (UID: \"30c840a8-6433-4186-9b21-6cae0b492905\") " pod="openshift-monitoring/monitoring-plugin-5876b4bbc7-t4hwq" Apr 16 14:01:40.331603 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:40.331572 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-5876b4bbc7-t4hwq" Apr 16 14:01:40.470372 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:40.470342 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-5876b4bbc7-t4hwq"] Apr 16 14:01:40.799445 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:01:40.799375 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30c840a8_6433_4186_9b21_6cae0b492905.slice/crio-f278e43c0aab8a664f87c76ec8955259eb15b520491191b5d402a830be050c3e WatchSource:0}: Error finding container f278e43c0aab8a664f87c76ec8955259eb15b520491191b5d402a830be050c3e: Status 404 returned error can't find the container with id f278e43c0aab8a664f87c76ec8955259eb15b520491191b5d402a830be050c3e Apr 16 14:01:41.107327 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:41.107227 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-5876b4bbc7-t4hwq" event={"ID":"30c840a8-6433-4186-9b21-6cae0b492905","Type":"ContainerStarted","Data":"f278e43c0aab8a664f87c76ec8955259eb15b520491191b5d402a830be050c3e"} Apr 16 14:01:41.598582 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:41.598561 2569 scope.go:117] "RemoveContainer" containerID="f7c3dbb3c64f05e416a42163d0b5baa1394e2026662236b249977000f7f92327" Apr 16 14:01:42.072786 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:42.072742 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-rdgwf" Apr 16 14:01:42.115277 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:42.115218 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5789679d96-mrqgr" event={"ID":"e188b038-b59c-4fb7-b257-d08cf56b2473","Type":"ContainerStarted","Data":"6ec17591125bfeb6cb5e22a8eb1f9bc52b1e57936d4dc14050b6cb4de7abbb3b"} 
Apr 16 14:01:42.115277 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:42.115281 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5789679d96-mrqgr" event={"ID":"e188b038-b59c-4fb7-b257-d08cf56b2473","Type":"ContainerStarted","Data":"6a3cc973d762b2fdd145f301413174c299902bf7b56fcee39fa0d2b84a9cbe6a"} Apr 16 14:01:42.115506 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:42.115294 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5789679d96-mrqgr" event={"ID":"e188b038-b59c-4fb7-b257-d08cf56b2473","Type":"ContainerStarted","Data":"de683287d8eee078b6d132f3507f9c73d25e3bf3fb068eba6f40f85c05a3ede8"} Apr 16 14:01:42.118675 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:42.118653 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-d87b8d5fc-27q26_5057171c-9c0f-4741-b8ce-987c40eb447d/console-operator/2.log" Apr 16 14:01:42.118798 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:42.118747 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-d87b8d5fc-27q26" event={"ID":"5057171c-9c0f-4741-b8ce-987c40eb447d","Type":"ContainerStarted","Data":"37353d6fdb3dad72213a177cd28ddb1d4b739adba0b6bd13731dbdc75663d795"} Apr 16 14:01:42.119146 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:42.119101 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-d87b8d5fc-27q26" Apr 16 14:01:42.123851 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:42.123825 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9fddbc08-5509-4e65-bdae-beebb2d56a6a","Type":"ContainerStarted","Data":"d3bd2a745d90f9dbf4c4a3fab914d6f17ed4240601f478a7b26162a113d61f33"} Apr 16 14:01:42.123925 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:42.123859 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9fddbc08-5509-4e65-bdae-beebb2d56a6a","Type":"ContainerStarted","Data":"b28045c1defd042a44fdca7ebb86445586bd133e595606633e7e8974f98b41d8"} Apr 16 14:01:42.123925 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:42.123872 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9fddbc08-5509-4e65-bdae-beebb2d56a6a","Type":"ContainerStarted","Data":"25f5dd2a98d75bfec477246a71ceb285ea0da818595de986d39e3e7f336e0088"} Apr 16 14:01:42.123925 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:42.123885 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9fddbc08-5509-4e65-bdae-beebb2d56a6a","Type":"ContainerStarted","Data":"5e0783b12ac8c341290000bb058571e3fde2095b0f4e771588ebb432b4ec4a3f"} Apr 16 14:01:42.123925 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:42.123896 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9fddbc08-5509-4e65-bdae-beebb2d56a6a","Type":"ContainerStarted","Data":"c10c739eecc6b36270aa91b84f64b1d194b89ff9cee30da0965553f7c815de5c"} Apr 16 14:01:42.135313 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:42.135264 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-d87b8d5fc-27q26" podStartSLOduration=42.222562181 podStartE2EDuration="45.135250308s" podCreationTimestamp="2026-04-16 14:00:57 +0000 
UTC" firstStartedPulling="2026-04-16 14:00:57.83377141 +0000 UTC m=+131.855059137" lastFinishedPulling="2026-04-16 14:01:00.746459527 +0000 UTC m=+134.767747264" observedRunningTime="2026-04-16 14:01:42.133584196 +0000 UTC m=+176.154871945" watchObservedRunningTime="2026-04-16 14:01:42.135250308 +0000 UTC m=+176.156538055" Apr 16 14:01:42.604403 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:42.604375 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-d87b8d5fc-27q26" Apr 16 14:01:42.825345 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:42.825302 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-586b57c7b4-78mff"] Apr 16 14:01:42.827648 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:42.827627 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-586b57c7b4-78mff" Apr 16 14:01:42.830765 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:42.830711 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Apr 16 14:01:42.833674 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:42.833653 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-qmn5h\"" Apr 16 14:01:42.833795 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:42.833726 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Apr 16 14:01:42.839762 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:42.839743 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-586b57c7b4-78mff"] Apr 16 14:01:42.926122 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:42.926094 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdz7v\" (UniqueName: \"kubernetes.io/projected/89931564-c377-407d-bb9d-ccfd758359f2-kube-api-access-hdz7v\") pod \"downloads-586b57c7b4-78mff\" (UID: \"89931564-c377-407d-bb9d-ccfd758359f2\") " pod="openshift-console/downloads-586b57c7b4-78mff" Apr 16 14:01:43.027427 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:43.027315 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hdz7v\" (UniqueName: \"kubernetes.io/projected/89931564-c377-407d-bb9d-ccfd758359f2-kube-api-access-hdz7v\") pod \"downloads-586b57c7b4-78mff\" (UID: \"89931564-c377-407d-bb9d-ccfd758359f2\") " pod="openshift-console/downloads-586b57c7b4-78mff" Apr 16 14:01:43.035294 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:43.035268 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdz7v\" (UniqueName: \"kubernetes.io/projected/89931564-c377-407d-bb9d-ccfd758359f2-kube-api-access-hdz7v\") pod \"downloads-586b57c7b4-78mff\" (UID: \"89931564-c377-407d-bb9d-ccfd758359f2\") " pod="openshift-console/downloads-586b57c7b4-78mff" Apr 16 14:01:43.138964 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:43.138941 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-586b57c7b4-78mff" Apr 16 14:01:43.278121 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:43.278039 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-586b57c7b4-78mff"] Apr 16 14:01:43.280302 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:01:43.280278 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89931564_c377_407d_bb9d_ccfd758359f2.slice/crio-7fce56f96b977df453b87f967ed6f1d2ddb628455ba1602edca3344aaec3bde9 WatchSource:0}: Error finding container 7fce56f96b977df453b87f967ed6f1d2ddb628455ba1602edca3344aaec3bde9: Status 404 returned error can't find the container with id 7fce56f96b977df453b87f967ed6f1d2ddb628455ba1602edca3344aaec3bde9 Apr 16 14:01:44.133494 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:44.133458 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-586b57c7b4-78mff" event={"ID":"89931564-c377-407d-bb9d-ccfd758359f2","Type":"ContainerStarted","Data":"7fce56f96b977df453b87f967ed6f1d2ddb628455ba1602edca3344aaec3bde9"} Apr 16 14:01:44.134763 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:44.134734 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-5876b4bbc7-t4hwq" event={"ID":"30c840a8-6433-4186-9b21-6cae0b492905","Type":"ContainerStarted","Data":"9fd24503c3579a68cd532a29f6819eac10e1a5015a6ac89fb8d157c751f00fc5"} Apr 16 14:01:44.134914 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:44.134900 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-monitoring/monitoring-plugin-5876b4bbc7-t4hwq" Apr 16 14:01:44.138080 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:44.138057 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9fddbc08-5509-4e65-bdae-beebb2d56a6a","Type":"ContainerStarted","Data":"f73b8712fb89c12f03d3f0951501fb50fe3c231f8821e0d4ba0cee3787e99f55"} Apr 16 14:01:44.140489 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:44.140466 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-5876b4bbc7-t4hwq" Apr 16 14:01:44.141038 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:44.141015 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5789679d96-mrqgr" event={"ID":"e188b038-b59c-4fb7-b257-d08cf56b2473","Type":"ContainerStarted","Data":"f6dbda9578329bf6c4d86044d2131d17088df1cf6ad587bfb1727c5d422fd769"} Apr 16 14:01:44.141121 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:44.141047 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5789679d96-mrqgr" event={"ID":"e188b038-b59c-4fb7-b257-d08cf56b2473","Type":"ContainerStarted","Data":"33a98f6681584f583a96478a19e14f78fb01b7c49ec72f1a234ccfb10620f47d"} Apr 16 14:01:44.141121 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:44.141062 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5789679d96-mrqgr" event={"ID":"e188b038-b59c-4fb7-b257-d08cf56b2473","Type":"ContainerStarted","Data":"0eaeddb1dd66c40807597a7dd8e870014ba30d5609f906c0624c42ecbdbd2103"} Apr 16 14:01:44.141368 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:44.141349 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-monitoring/thanos-querier-5789679d96-mrqgr" Apr 16 
14:01:44.142585 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:44.142561 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-68dc684744-wk424" event={"ID":"ada681f4-20a2-4c44-96bb-7e711b04a8dc","Type":"ContainerStarted","Data":"06ccca7d578c130fbdbf645634844bbd0a7e3ba6ead16a1ab40154bd2ba74d64"} Apr 16 14:01:44.153428 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:44.153375 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-5876b4bbc7-t4hwq" podStartSLOduration=2.885776422 podStartE2EDuration="5.153363131s" podCreationTimestamp="2026-04-16 14:01:39 +0000 UTC" firstStartedPulling="2026-04-16 14:01:40.801997092 +0000 UTC m=+174.823284819" lastFinishedPulling="2026-04-16 14:01:43.069583796 +0000 UTC m=+177.090871528" observedRunningTime="2026-04-16 14:01:44.15179234 +0000 UTC m=+178.173080088" watchObservedRunningTime="2026-04-16 14:01:44.153363131 +0000 UTC m=+178.174650879" Apr 16 14:01:44.187067 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:44.187019 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-5789679d96-mrqgr" podStartSLOduration=1.99762028 podStartE2EDuration="6.187003623s" podCreationTimestamp="2026-04-16 14:01:38 +0000 UTC" firstStartedPulling="2026-04-16 14:01:38.909775696 +0000 UTC m=+172.931063421" lastFinishedPulling="2026-04-16 14:01:43.099159034 +0000 UTC m=+177.120446764" observedRunningTime="2026-04-16 14:01:44.1861395 +0000 UTC m=+178.207427256" watchObservedRunningTime="2026-04-16 14:01:44.187003623 +0000 UTC m=+178.208291372" Apr 16 14:01:44.212535 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:44.212484 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=2.4465746680000002 podStartE2EDuration="8.212466861s" podCreationTimestamp="2026-04-16 14:01:36 +0000 UTC" firstStartedPulling="2026-04-16 14:01:37.334678475 +0000 UTC m=+171.355966200" lastFinishedPulling="2026-04-16 14:01:43.100570667 +0000 UTC m=+177.121858393" observedRunningTime="2026-04-16 14:01:44.21172726 +0000 UTC m=+178.233015008" watchObservedRunningTime="2026-04-16 14:01:44.212466861 +0000 UTC m=+178.233754610" Apr 16 14:01:44.232220 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:44.232161 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-68dc684744-wk424" podStartSLOduration=2.186351349 podStartE2EDuration="5.232142859s" podCreationTimestamp="2026-04-16 14:01:39 +0000 UTC" firstStartedPulling="2026-04-16 14:01:40.026970874 +0000 UTC m=+174.048258600" lastFinishedPulling="2026-04-16 14:01:43.072762371 +0000 UTC m=+177.094050110" observedRunningTime="2026-04-16 14:01:44.229795376 +0000 UTC m=+178.251083128" watchObservedRunningTime="2026-04-16 14:01:44.232142859 +0000 UTC m=+178.253430611" Apr 16 14:01:48.542250 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:48.542195 2569 patch_prober.go:28] interesting pod/image-registry-7fd54c5856-xxztt container/registry namespace/openshift-image-registry: Liveness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 16 14:01:48.542645 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:48.542274 2569 prober.go:120] "Probe failed" probeType="Liveness" 
pod="openshift-image-registry/image-registry-7fd54c5856-xxztt" podUID="b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 14:01:48.915382 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:48.915340 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-7fd54c5856-xxztt"] Apr 16 14:01:48.920417 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:48.920368 2569 patch_prober.go:28] interesting pod/image-registry-7fd54c5856-xxztt container/registry namespace/openshift-image-registry: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 16 14:01:48.920543 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:48.920429 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-7fd54c5856-xxztt" podUID="b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 14:01:50.156062 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:50.156035 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-5789679d96-mrqgr" Apr 16 14:01:58.920365 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:58.920332 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-7fd54c5856-xxztt" Apr 16 14:01:59.861319 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:59.861283 2569 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-68dc684744-wk424" Apr 16 14:01:59.861513 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:01:59.861334 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-monitoring/metrics-server-68dc684744-wk424" Apr 16 14:02:02.207091 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:02.207049 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-586b57c7b4-78mff" event={"ID":"89931564-c377-407d-bb9d-ccfd758359f2","Type":"ContainerStarted","Data":"185d0a7041cf64665f4d1da6320d6bfb2edf3dc1710e04b9eb4e5f6b19694a3e"} Apr 16 14:02:02.207570 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:02.207378 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-586b57c7b4-78mff" Apr 16 14:02:02.225016 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:02.224972 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-586b57c7b4-78mff" podStartSLOduration=2.033630998 podStartE2EDuration="20.22495687s" podCreationTimestamp="2026-04-16 14:01:42 +0000 UTC" firstStartedPulling="2026-04-16 14:01:43.282466641 +0000 UTC m=+177.303754370" lastFinishedPulling="2026-04-16 14:02:01.473792516 +0000 UTC m=+195.495080242" observedRunningTime="2026-04-16 14:02:02.223645984 +0000 UTC m=+196.244933732" watchObservedRunningTime="2026-04-16 14:02:02.22495687 +0000 UTC m=+196.246244619" Apr 16 14:02:02.229557 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:02.229532 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-586b57c7b4-78mff" Apr 16 14:02:13.937484 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:13.937413 2569 kuberuntime_container.go:864] 
"Killing container with a grace period" pod="openshift-image-registry/image-registry-7fd54c5856-xxztt" podUID="b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2" containerName="registry" containerID="cri-o://f85436e309ae71cb730d4c2a3b3973531b2a74fa48ba41e389b06eca38fc072e" gracePeriod=30 Apr 16 14:02:14.204335 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:14.204312 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-7fd54c5856-xxztt" Apr 16 14:02:14.246764 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:14.246733 2569 generic.go:358] "Generic (PLEG): container finished" podID="b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2" containerID="f85436e309ae71cb730d4c2a3b3973531b2a74fa48ba41e389b06eca38fc072e" exitCode=0 Apr 16 14:02:14.246902 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:14.246794 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-7fd54c5856-xxztt" Apr 16 14:02:14.246902 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:14.246795 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-7fd54c5856-xxztt" event={"ID":"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2","Type":"ContainerDied","Data":"f85436e309ae71cb730d4c2a3b3973531b2a74fa48ba41e389b06eca38fc072e"} Apr 16 14:02:14.246902 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:14.246826 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-7fd54c5856-xxztt" event={"ID":"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2","Type":"ContainerDied","Data":"d3ff3b52e64c97edc328392c849536fef83a99ca9d1332a2a38cb41f5e107d04"} Apr 16 14:02:14.246902 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:14.246849 2569 scope.go:117] "RemoveContainer" containerID="f85436e309ae71cb730d4c2a3b3973531b2a74fa48ba41e389b06eca38fc072e" Apr 16 14:02:14.254358 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:14.254341 2569 scope.go:117] "RemoveContainer" containerID="f85436e309ae71cb730d4c2a3b3973531b2a74fa48ba41e389b06eca38fc072e" Apr 16 14:02:14.254612 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:02:14.254583 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f85436e309ae71cb730d4c2a3b3973531b2a74fa48ba41e389b06eca38fc072e\": container with ID starting with f85436e309ae71cb730d4c2a3b3973531b2a74fa48ba41e389b06eca38fc072e not found: ID does not exist" containerID="f85436e309ae71cb730d4c2a3b3973531b2a74fa48ba41e389b06eca38fc072e" Apr 16 14:02:14.254665 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:14.254622 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f85436e309ae71cb730d4c2a3b3973531b2a74fa48ba41e389b06eca38fc072e"} err="failed to get container status \"f85436e309ae71cb730d4c2a3b3973531b2a74fa48ba41e389b06eca38fc072e\": rpc error: code = NotFound desc = could not find container \"f85436e309ae71cb730d4c2a3b3973531b2a74fa48ba41e389b06eca38fc072e\": container with ID starting with f85436e309ae71cb730d4c2a3b3973531b2a74fa48ba41e389b06eca38fc072e not found: ID does not exist" Apr 16 14:02:14.315600 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:14.315579 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-bound-sa-token\") pod \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\" (UID: 
\"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " Apr 16 14:02:14.315744 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:14.315616 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-installation-pull-secrets\") pod \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " Apr 16 14:02:14.315744 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:14.315635 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-registry-certificates\") pod \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " Apr 16 14:02:14.315744 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:14.315669 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-registry-tls\") pod \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " Apr 16 14:02:14.315744 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:14.315697 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-trusted-ca\") pod \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " Apr 16 14:02:14.315957 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:14.315749 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-image-registry-private-configuration\") pod \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " Apr 16 14:02:14.315957 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:14.315798 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-ca-trust-extracted\") pod \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " Apr 16 14:02:14.315957 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:14.315833 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7gpl\" (UniqueName: \"kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-kube-api-access-d7gpl\") pod \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\" (UID: \"b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2\") " Apr 16 14:02:14.316179 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:14.316152 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2" (UID: "b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 14:02:14.316376 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:14.316353 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2" (UID: "b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 14:02:14.318293 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:14.318248 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2" (UID: "b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 14:02:14.318575 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:14.318548 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2" (UID: "b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 14:02:14.318686 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:14.318617 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-kube-api-access-d7gpl" (OuterVolumeSpecName: "kube-api-access-d7gpl") pod "b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2" (UID: "b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2"). InnerVolumeSpecName "kube-api-access-d7gpl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 14:02:14.318686 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:14.318677 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2" (UID: "b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 14:02:14.319104 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:14.319075 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-image-registry-private-configuration" (OuterVolumeSpecName: "image-registry-private-configuration") pod "b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2" (UID: "b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2"). InnerVolumeSpecName "image-registry-private-configuration". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 14:02:14.324352 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:14.324325 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2" (UID: "b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 14:02:14.417268 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:14.417225 2569 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-bound-sa-token\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\"" Apr 16 14:02:14.417268 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:14.417265 2569 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-installation-pull-secrets\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\"" Apr 16 14:02:14.417384 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:14.417276 2569 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-registry-certificates\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\"" Apr 16 14:02:14.417384 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:14.417285 2569 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-registry-tls\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\"" Apr 16 14:02:14.417384 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:14.417294 2569 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-trusted-ca\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\"" Apr 16 14:02:14.417384 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:14.417303 2569 reconciler_common.go:299] "Volume detached for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-image-registry-private-configuration\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\"" Apr 16 14:02:14.417384 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:14.417312 2569 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-ca-trust-extracted\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\"" Apr 16 14:02:14.417384 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:14.417321 2569 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7gpl\" (UniqueName: \"kubernetes.io/projected/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2-kube-api-access-d7gpl\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\"" Apr 16 14:02:14.567532 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:14.567505 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-7fd54c5856-xxztt"] Apr 16 14:02:14.573104 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:14.573086 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-7fd54c5856-xxztt"] Apr 16 14:02:14.601820 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:14.601798 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2" path="/var/lib/kubelet/pods/b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2/volumes" Apr 16 14:02:19.866136 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:19.866103 2569 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-68dc684744-wk424" Apr 16 14:02:19.869854 ip-10-0-129-84 
kubenswrapper[2569]: I0416 14:02:19.869831 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-68dc684744-wk424" Apr 16 14:02:55.461087 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:55.461051 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Apr 16 14:02:55.461560 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:55.461482 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="9fddbc08-5509-4e65-bdae-beebb2d56a6a" containerName="alertmanager" containerID="cri-o://c10c739eecc6b36270aa91b84f64b1d194b89ff9cee30da0965553f7c815de5c" gracePeriod=120 Apr 16 14:02:55.461617 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:55.461568 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="9fddbc08-5509-4e65-bdae-beebb2d56a6a" containerName="kube-rbac-proxy-web" containerID="cri-o://25f5dd2a98d75bfec477246a71ceb285ea0da818595de986d39e3e7f336e0088" gracePeriod=120 Apr 16 14:02:55.461617 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:55.461581 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="9fddbc08-5509-4e65-bdae-beebb2d56a6a" containerName="kube-rbac-proxy" containerID="cri-o://b28045c1defd042a44fdca7ebb86445586bd133e595606633e7e8974f98b41d8" gracePeriod=120 Apr 16 14:02:55.461725 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:55.461617 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="9fddbc08-5509-4e65-bdae-beebb2d56a6a" containerName="config-reloader" containerID="cri-o://5e0783b12ac8c341290000bb058571e3fde2095b0f4e771588ebb432b4ec4a3f" gracePeriod=120 Apr 16 14:02:55.461725 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:55.461563 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="9fddbc08-5509-4e65-bdae-beebb2d56a6a" containerName="kube-rbac-proxy-metric" containerID="cri-o://d3bd2a745d90f9dbf4c4a3fab914d6f17ed4240601f478a7b26162a113d61f33" gracePeriod=120 Apr 16 14:02:55.461828 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:55.461697 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="9fddbc08-5509-4e65-bdae-beebb2d56a6a" containerName="prom-label-proxy" containerID="cri-o://f73b8712fb89c12f03d3f0951501fb50fe3c231f8821e0d4ba0cee3787e99f55" gracePeriod=120 Apr 16 14:02:56.378802 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.378773 2569 generic.go:358] "Generic (PLEG): container finished" podID="9fddbc08-5509-4e65-bdae-beebb2d56a6a" containerID="f73b8712fb89c12f03d3f0951501fb50fe3c231f8821e0d4ba0cee3787e99f55" exitCode=0 Apr 16 14:02:56.378802 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.378797 2569 generic.go:358] "Generic (PLEG): container finished" podID="9fddbc08-5509-4e65-bdae-beebb2d56a6a" containerID="b28045c1defd042a44fdca7ebb86445586bd133e595606633e7e8974f98b41d8" exitCode=0 Apr 16 14:02:56.378802 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.378804 2569 generic.go:358] "Generic (PLEG): container finished" podID="9fddbc08-5509-4e65-bdae-beebb2d56a6a" containerID="5e0783b12ac8c341290000bb058571e3fde2095b0f4e771588ebb432b4ec4a3f" exitCode=0 Apr 16 14:02:56.378802 ip-10-0-129-84 kubenswrapper[2569]: 
I0416 14:02:56.378810 2569 generic.go:358] "Generic (PLEG): container finished" podID="9fddbc08-5509-4e65-bdae-beebb2d56a6a" containerID="c10c739eecc6b36270aa91b84f64b1d194b89ff9cee30da0965553f7c815de5c" exitCode=0 Apr 16 14:02:56.379059 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.378846 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9fddbc08-5509-4e65-bdae-beebb2d56a6a","Type":"ContainerDied","Data":"f73b8712fb89c12f03d3f0951501fb50fe3c231f8821e0d4ba0cee3787e99f55"} Apr 16 14:02:56.379059 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.378877 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9fddbc08-5509-4e65-bdae-beebb2d56a6a","Type":"ContainerDied","Data":"b28045c1defd042a44fdca7ebb86445586bd133e595606633e7e8974f98b41d8"} Apr 16 14:02:56.379059 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.378888 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9fddbc08-5509-4e65-bdae-beebb2d56a6a","Type":"ContainerDied","Data":"5e0783b12ac8c341290000bb058571e3fde2095b0f4e771588ebb432b4ec4a3f"} Apr 16 14:02:56.379059 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.378896 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9fddbc08-5509-4e65-bdae-beebb2d56a6a","Type":"ContainerDied","Data":"c10c739eecc6b36270aa91b84f64b1d194b89ff9cee30da0965553f7c815de5c"} Apr 16 14:02:56.695952 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.695933 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Apr 16 14:02:56.761892 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.761858 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-cluster-tls-config\") pod \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " Apr 16 14:02:56.762074 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.761910 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9fddbc08-5509-4e65-bdae-beebb2d56a6a-config-out\") pod \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " Apr 16 14:02:56.762074 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.761940 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-secret-alertmanager-kube-rbac-proxy\") pod \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " Apr 16 14:02:56.762074 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.761983 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fddbc08-5509-4e65-bdae-beebb2d56a6a-alertmanager-trusted-ca-bundle\") pod \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " Apr 16 14:02:56.762287 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.762082 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-secret-alertmanager-kube-rbac-proxy-web\") pod \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " Apr 16 14:02:56.762287 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.762119 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j4ps\" (UniqueName: \"kubernetes.io/projected/9fddbc08-5509-4e65-bdae-beebb2d56a6a-kube-api-access-7j4ps\") pod \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " Apr 16 14:02:56.762287 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.762142 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9fddbc08-5509-4e65-bdae-beebb2d56a6a-metrics-client-ca\") pod \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " Apr 16 14:02:56.762287 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.762174 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-web-config\") pod \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " Apr 16 14:02:56.762287 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.762219 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-secret-alertmanager-kube-rbac-proxy-metric\") pod \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " Apr 16 14:02:56.762287 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.762281 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9fddbc08-5509-4e65-bdae-beebb2d56a6a-tls-assets\") pod \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " Apr 16 14:02:56.762582 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.762308 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/9fddbc08-5509-4e65-bdae-beebb2d56a6a-alertmanager-main-db\") pod \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " Apr 16 14:02:56.762582 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.762342 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-config-volume\") pod \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " Apr 16 14:02:56.762582 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.762374 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-secret-alertmanager-main-tls\") pod \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\" (UID: \"9fddbc08-5509-4e65-bdae-beebb2d56a6a\") " Apr 16 14:02:56.763025 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.762757 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fddbc08-5509-4e65-bdae-beebb2d56a6a-alertmanager-trusted-ca-bundle" (OuterVolumeSpecName: 
"alertmanager-trusted-ca-bundle") pod "9fddbc08-5509-4e65-bdae-beebb2d56a6a" (UID: "9fddbc08-5509-4e65-bdae-beebb2d56a6a"). InnerVolumeSpecName "alertmanager-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 14:02:56.763171 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.763140 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fddbc08-5509-4e65-bdae-beebb2d56a6a-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "9fddbc08-5509-4e65-bdae-beebb2d56a6a" (UID: "9fddbc08-5509-4e65-bdae-beebb2d56a6a"). InnerVolumeSpecName "metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 14:02:56.764939 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.764910 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fddbc08-5509-4e65-bdae-beebb2d56a6a-alertmanager-main-db" (OuterVolumeSpecName: "alertmanager-main-db") pod "9fddbc08-5509-4e65-bdae-beebb2d56a6a" (UID: "9fddbc08-5509-4e65-bdae-beebb2d56a6a"). InnerVolumeSpecName "alertmanager-main-db". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 14:02:56.765491 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.765454 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fddbc08-5509-4e65-bdae-beebb2d56a6a-config-out" (OuterVolumeSpecName: "config-out") pod "9fddbc08-5509-4e65-bdae-beebb2d56a6a" (UID: "9fddbc08-5509-4e65-bdae-beebb2d56a6a"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 14:02:56.765491 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.765459 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fddbc08-5509-4e65-bdae-beebb2d56a6a-kube-api-access-7j4ps" (OuterVolumeSpecName: "kube-api-access-7j4ps") pod "9fddbc08-5509-4e65-bdae-beebb2d56a6a" (UID: "9fddbc08-5509-4e65-bdae-beebb2d56a6a"). InnerVolumeSpecName "kube-api-access-7j4ps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 14:02:56.765648 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.765563 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-secret-alertmanager-main-tls" (OuterVolumeSpecName: "secret-alertmanager-main-tls") pod "9fddbc08-5509-4e65-bdae-beebb2d56a6a" (UID: "9fddbc08-5509-4e65-bdae-beebb2d56a6a"). InnerVolumeSpecName "secret-alertmanager-main-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 14:02:56.765712 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.765666 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-secret-alertmanager-kube-rbac-proxy" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy") pod "9fddbc08-5509-4e65-bdae-beebb2d56a6a" (UID: "9fddbc08-5509-4e65-bdae-beebb2d56a6a"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 14:02:56.765766 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.765729 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-secret-alertmanager-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-web") pod "9fddbc08-5509-4e65-bdae-beebb2d56a6a" (UID: "9fddbc08-5509-4e65-bdae-beebb2d56a6a"). 
InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-web". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 14:02:56.766811 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.766787 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fddbc08-5509-4e65-bdae-beebb2d56a6a-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "9fddbc08-5509-4e65-bdae-beebb2d56a6a" (UID: "9fddbc08-5509-4e65-bdae-beebb2d56a6a"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 14:02:56.767166 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.767142 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-config-volume" (OuterVolumeSpecName: "config-volume") pod "9fddbc08-5509-4e65-bdae-beebb2d56a6a" (UID: "9fddbc08-5509-4e65-bdae-beebb2d56a6a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 14:02:56.767507 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.767488 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-secret-alertmanager-kube-rbac-proxy-metric" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-metric") pod "9fddbc08-5509-4e65-bdae-beebb2d56a6a" (UID: "9fddbc08-5509-4e65-bdae-beebb2d56a6a"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-metric". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 14:02:56.770054 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.770029 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-cluster-tls-config" (OuterVolumeSpecName: "cluster-tls-config") pod "9fddbc08-5509-4e65-bdae-beebb2d56a6a" (UID: "9fddbc08-5509-4e65-bdae-beebb2d56a6a"). InnerVolumeSpecName "cluster-tls-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 14:02:56.778957 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.778914 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-web-config" (OuterVolumeSpecName: "web-config") pod "9fddbc08-5509-4e65-bdae-beebb2d56a6a" (UID: "9fddbc08-5509-4e65-bdae-beebb2d56a6a"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 14:02:56.863551 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.863530 2569 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-config-volume\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\"" Apr 16 14:02:56.863551 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.863550 2569 reconciler_common.go:299] "Volume detached for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-secret-alertmanager-main-tls\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\"" Apr 16 14:02:56.863668 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.863559 2569 reconciler_common.go:299] "Volume detached for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-cluster-tls-config\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\"" Apr 16 14:02:56.863668 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.863569 2569 reconciler_common.go:299] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9fddbc08-5509-4e65-bdae-beebb2d56a6a-config-out\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\"" Apr 16 14:02:56.863668 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.863579 2569 reconciler_common.go:299] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-secret-alertmanager-kube-rbac-proxy\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\"" Apr 16 14:02:56.863668 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.863587 2569 reconciler_common.go:299] "Volume detached for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fddbc08-5509-4e65-bdae-beebb2d56a6a-alertmanager-trusted-ca-bundle\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\"" Apr 16 14:02:56.863668 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.863596 2569 reconciler_common.go:299] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-secret-alertmanager-kube-rbac-proxy-web\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\"" Apr 16 14:02:56.863668 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.863605 2569 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7j4ps\" (UniqueName: \"kubernetes.io/projected/9fddbc08-5509-4e65-bdae-beebb2d56a6a-kube-api-access-7j4ps\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\"" Apr 16 14:02:56.863668 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.863613 2569 reconciler_common.go:299] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9fddbc08-5509-4e65-bdae-beebb2d56a6a-metrics-client-ca\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\"" Apr 16 14:02:56.863668 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.863622 2569 reconciler_common.go:299] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-web-config\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\"" Apr 16 14:02:56.863668 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.863630 2569 reconciler_common.go:299] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: 
\"kubernetes.io/secret/9fddbc08-5509-4e65-bdae-beebb2d56a6a-secret-alertmanager-kube-rbac-proxy-metric\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\"" Apr 16 14:02:56.863668 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.863640 2569 reconciler_common.go:299] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9fddbc08-5509-4e65-bdae-beebb2d56a6a-tls-assets\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\"" Apr 16 14:02:56.863668 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:56.863649 2569 reconciler_common.go:299] "Volume detached for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/9fddbc08-5509-4e65-bdae-beebb2d56a6a-alertmanager-main-db\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\"" Apr 16 14:02:57.384689 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.384648 2569 generic.go:358] "Generic (PLEG): container finished" podID="9fddbc08-5509-4e65-bdae-beebb2d56a6a" containerID="d3bd2a745d90f9dbf4c4a3fab914d6f17ed4240601f478a7b26162a113d61f33" exitCode=0 Apr 16 14:02:57.384689 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.384683 2569 generic.go:358] "Generic (PLEG): container finished" podID="9fddbc08-5509-4e65-bdae-beebb2d56a6a" containerID="25f5dd2a98d75bfec477246a71ceb285ea0da818595de986d39e3e7f336e0088" exitCode=0 Apr 16 14:02:57.384954 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.384741 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9fddbc08-5509-4e65-bdae-beebb2d56a6a","Type":"ContainerDied","Data":"d3bd2a745d90f9dbf4c4a3fab914d6f17ed4240601f478a7b26162a113d61f33"} Apr 16 14:02:57.384954 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.384790 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9fddbc08-5509-4e65-bdae-beebb2d56a6a","Type":"ContainerDied","Data":"25f5dd2a98d75bfec477246a71ceb285ea0da818595de986d39e3e7f336e0088"} Apr 16 14:02:57.384954 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.384805 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9fddbc08-5509-4e65-bdae-beebb2d56a6a","Type":"ContainerDied","Data":"0927c8df13ce565385a71261f163829bba18f30f6e06b10982aa1d224015ed1c"} Apr 16 14:02:57.384954 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.384806 2569 util.go:48] "No ready sandbox for pod can be found. 
Apr 16 14:02:57.384954 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.384818 2569 scope.go:117] "RemoveContainer" containerID="f73b8712fb89c12f03d3f0951501fb50fe3c231f8821e0d4ba0cee3787e99f55"
Apr 16 14:02:57.392209 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.392195 2569 scope.go:117] "RemoveContainer" containerID="d3bd2a745d90f9dbf4c4a3fab914d6f17ed4240601f478a7b26162a113d61f33"
Apr 16 14:02:57.399411 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.399394 2569 scope.go:117] "RemoveContainer" containerID="b28045c1defd042a44fdca7ebb86445586bd133e595606633e7e8974f98b41d8"
Apr 16 14:02:57.405825 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.405807 2569 scope.go:117] "RemoveContainer" containerID="25f5dd2a98d75bfec477246a71ceb285ea0da818595de986d39e3e7f336e0088"
Apr 16 14:02:57.407527 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.407507 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Apr 16 14:02:57.412688 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.412667 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Apr 16 14:02:57.413297 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.413276 2569 scope.go:117] "RemoveContainer" containerID="5e0783b12ac8c341290000bb058571e3fde2095b0f4e771588ebb432b4ec4a3f"
Apr 16 14:02:57.419898 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.419882 2569 scope.go:117] "RemoveContainer" containerID="c10c739eecc6b36270aa91b84f64b1d194b89ff9cee30da0965553f7c815de5c"
Apr 16 14:02:57.426188 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.426175 2569 scope.go:117] "RemoveContainer" containerID="ea270983ba4d78a9941ebc3e9b6f0e2a209e90766b34c4f67c5ace8cd75ac201"
Apr 16 14:02:57.434602 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.434587 2569 scope.go:117] "RemoveContainer" containerID="f73b8712fb89c12f03d3f0951501fb50fe3c231f8821e0d4ba0cee3787e99f55"
Apr 16 14:02:57.434877 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:02:57.434859 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f73b8712fb89c12f03d3f0951501fb50fe3c231f8821e0d4ba0cee3787e99f55\": container with ID starting with f73b8712fb89c12f03d3f0951501fb50fe3c231f8821e0d4ba0cee3787e99f55 not found: ID does not exist" containerID="f73b8712fb89c12f03d3f0951501fb50fe3c231f8821e0d4ba0cee3787e99f55"
Apr 16 14:02:57.434960 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.434889 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f73b8712fb89c12f03d3f0951501fb50fe3c231f8821e0d4ba0cee3787e99f55"} err="failed to get container status \"f73b8712fb89c12f03d3f0951501fb50fe3c231f8821e0d4ba0cee3787e99f55\": rpc error: code = NotFound desc = could not find container \"f73b8712fb89c12f03d3f0951501fb50fe3c231f8821e0d4ba0cee3787e99f55\": container with ID starting with f73b8712fb89c12f03d3f0951501fb50fe3c231f8821e0d4ba0cee3787e99f55 not found: ID does not exist"
Apr 16 14:02:57.434960 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.434914 2569 scope.go:117] "RemoveContainer" containerID="d3bd2a745d90f9dbf4c4a3fab914d6f17ed4240601f478a7b26162a113d61f33"
Apr 16 14:02:57.435183 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:02:57.435160 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3bd2a745d90f9dbf4c4a3fab914d6f17ed4240601f478a7b26162a113d61f33\": container with ID starting with d3bd2a745d90f9dbf4c4a3fab914d6f17ed4240601f478a7b26162a113d61f33 not found: ID does not exist" containerID="d3bd2a745d90f9dbf4c4a3fab914d6f17ed4240601f478a7b26162a113d61f33"
Apr 16 14:02:57.435256 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.435191 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3bd2a745d90f9dbf4c4a3fab914d6f17ed4240601f478a7b26162a113d61f33"} err="failed to get container status \"d3bd2a745d90f9dbf4c4a3fab914d6f17ed4240601f478a7b26162a113d61f33\": rpc error: code = NotFound desc = could not find container \"d3bd2a745d90f9dbf4c4a3fab914d6f17ed4240601f478a7b26162a113d61f33\": container with ID starting with d3bd2a745d90f9dbf4c4a3fab914d6f17ed4240601f478a7b26162a113d61f33 not found: ID does not exist"
Apr 16 14:02:57.435256 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.435213 2569 scope.go:117] "RemoveContainer" containerID="b28045c1defd042a44fdca7ebb86445586bd133e595606633e7e8974f98b41d8"
Apr 16 14:02:57.435466 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.435450 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Apr 16 14:02:57.435506 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:02:57.435478 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b28045c1defd042a44fdca7ebb86445586bd133e595606633e7e8974f98b41d8\": container with ID starting with b28045c1defd042a44fdca7ebb86445586bd133e595606633e7e8974f98b41d8 not found: ID does not exist" containerID="b28045c1defd042a44fdca7ebb86445586bd133e595606633e7e8974f98b41d8"
Apr 16 14:02:57.435540 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.435497 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b28045c1defd042a44fdca7ebb86445586bd133e595606633e7e8974f98b41d8"} err="failed to get container status \"b28045c1defd042a44fdca7ebb86445586bd133e595606633e7e8974f98b41d8\": rpc error: code = NotFound desc = could not find container \"b28045c1defd042a44fdca7ebb86445586bd133e595606633e7e8974f98b41d8\": container with ID starting with b28045c1defd042a44fdca7ebb86445586bd133e595606633e7e8974f98b41d8 not found: ID does not exist"
Apr 16 14:02:57.435540 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.435516 2569 scope.go:117] "RemoveContainer" containerID="25f5dd2a98d75bfec477246a71ceb285ea0da818595de986d39e3e7f336e0088"
Apr 16 14:02:57.435780 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:02:57.435760 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25f5dd2a98d75bfec477246a71ceb285ea0da818595de986d39e3e7f336e0088\": container with ID starting with 25f5dd2a98d75bfec477246a71ceb285ea0da818595de986d39e3e7f336e0088 not found: ID does not exist" containerID="25f5dd2a98d75bfec477246a71ceb285ea0da818595de986d39e3e7f336e0088"
Apr 16 14:02:57.435864 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.435787 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25f5dd2a98d75bfec477246a71ceb285ea0da818595de986d39e3e7f336e0088"} err="failed to get container status \"25f5dd2a98d75bfec477246a71ceb285ea0da818595de986d39e3e7f336e0088\": rpc error: code = NotFound desc = could not find container \"25f5dd2a98d75bfec477246a71ceb285ea0da818595de986d39e3e7f336e0088\": container with ID starting with 25f5dd2a98d75bfec477246a71ceb285ea0da818595de986d39e3e7f336e0088 not found: ID does not exist"
Apr 16 14:02:57.435864 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.435807 2569 scope.go:117] "RemoveContainer" containerID="5e0783b12ac8c341290000bb058571e3fde2095b0f4e771588ebb432b4ec4a3f"
Apr 16 14:02:57.435864 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.435844 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9fddbc08-5509-4e65-bdae-beebb2d56a6a" containerName="init-config-reloader"
Apr 16 14:02:57.435864 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.435857 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fddbc08-5509-4e65-bdae-beebb2d56a6a" containerName="init-config-reloader"
Apr 16 14:02:57.436043 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.435894 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9fddbc08-5509-4e65-bdae-beebb2d56a6a" containerName="alertmanager"
Apr 16 14:02:57.436043 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.435900 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fddbc08-5509-4e65-bdae-beebb2d56a6a" containerName="alertmanager"
Apr 16 14:02:57.436043 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.435920 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9fddbc08-5509-4e65-bdae-beebb2d56a6a" containerName="kube-rbac-proxy"
Apr 16 14:02:57.436043 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.435930 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fddbc08-5509-4e65-bdae-beebb2d56a6a" containerName="kube-rbac-proxy"
Apr 16 14:02:57.436043 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.435944 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9fddbc08-5509-4e65-bdae-beebb2d56a6a" containerName="config-reloader"
Apr 16 14:02:57.436043 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.435952 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fddbc08-5509-4e65-bdae-beebb2d56a6a" containerName="config-reloader"
Apr 16 14:02:57.436043 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.435968 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9fddbc08-5509-4e65-bdae-beebb2d56a6a" containerName="kube-rbac-proxy-metric"
Apr 16 14:02:57.436043 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.435974 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fddbc08-5509-4e65-bdae-beebb2d56a6a" containerName="kube-rbac-proxy-metric"
Apr 16 14:02:57.436043 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.435980 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9fddbc08-5509-4e65-bdae-beebb2d56a6a" containerName="prom-label-proxy"
Apr 16 14:02:57.436043 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.435985 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fddbc08-5509-4e65-bdae-beebb2d56a6a" containerName="prom-label-proxy"
Apr 16 14:02:57.436043 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.435993 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9fddbc08-5509-4e65-bdae-beebb2d56a6a" containerName="kube-rbac-proxy-web"
Apr 16 14:02:57.436043 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.436000 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fddbc08-5509-4e65-bdae-beebb2d56a6a" containerName="kube-rbac-proxy-web"
Apr 16 14:02:57.436043 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.436013 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2" containerName="registry"
Apr 16 14:02:57.436043 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.436021 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2" containerName="registry"
Apr 16 14:02:57.436043 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:02:57.436037 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e0783b12ac8c341290000bb058571e3fde2095b0f4e771588ebb432b4ec4a3f\": container with ID starting with 5e0783b12ac8c341290000bb058571e3fde2095b0f4e771588ebb432b4ec4a3f not found: ID does not exist" containerID="5e0783b12ac8c341290000bb058571e3fde2095b0f4e771588ebb432b4ec4a3f"
Apr 16 14:02:57.436599 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.436058 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e0783b12ac8c341290000bb058571e3fde2095b0f4e771588ebb432b4ec4a3f"} err="failed to get container status \"5e0783b12ac8c341290000bb058571e3fde2095b0f4e771588ebb432b4ec4a3f\": rpc error: code = NotFound desc = could not find container \"5e0783b12ac8c341290000bb058571e3fde2095b0f4e771588ebb432b4ec4a3f\": container with ID starting with 5e0783b12ac8c341290000bb058571e3fde2095b0f4e771588ebb432b4ec4a3f not found: ID does not exist"
Apr 16 14:02:57.436599 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.436076 2569 scope.go:117] "RemoveContainer" containerID="c10c739eecc6b36270aa91b84f64b1d194b89ff9cee30da0965553f7c815de5c"
Apr 16 14:02:57.436599 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.436102 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="9fddbc08-5509-4e65-bdae-beebb2d56a6a" containerName="kube-rbac-proxy"
Apr 16 14:02:57.436599 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.436119 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="b527f47e-3e49-4f0d-a50b-4c39b6a2a2c2" containerName="registry"
Apr 16 14:02:57.436599 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.436129 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="9fddbc08-5509-4e65-bdae-beebb2d56a6a" containerName="config-reloader"
Apr 16 14:02:57.436599 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.436139 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="9fddbc08-5509-4e65-bdae-beebb2d56a6a" containerName="alertmanager"
Apr 16 14:02:57.436599 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.436145 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="9fddbc08-5509-4e65-bdae-beebb2d56a6a" containerName="kube-rbac-proxy-web"
Apr 16 14:02:57.436599 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.436151 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="9fddbc08-5509-4e65-bdae-beebb2d56a6a" containerName="prom-label-proxy"
Apr 16 14:02:57.436599 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.436157 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="9fddbc08-5509-4e65-bdae-beebb2d56a6a" containerName="kube-rbac-proxy-metric"
Apr 16 14:02:57.436599 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:02:57.436318 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c10c739eecc6b36270aa91b84f64b1d194b89ff9cee30da0965553f7c815de5c\": container with ID starting with c10c739eecc6b36270aa91b84f64b1d194b89ff9cee30da0965553f7c815de5c not found: ID does not exist" containerID="c10c739eecc6b36270aa91b84f64b1d194b89ff9cee30da0965553f7c815de5c"
Apr 16 14:02:57.436599 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.436338 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c10c739eecc6b36270aa91b84f64b1d194b89ff9cee30da0965553f7c815de5c"} err="failed to get container status \"c10c739eecc6b36270aa91b84f64b1d194b89ff9cee30da0965553f7c815de5c\": rpc error: code = NotFound desc = could not find container \"c10c739eecc6b36270aa91b84f64b1d194b89ff9cee30da0965553f7c815de5c\": container with ID starting with c10c739eecc6b36270aa91b84f64b1d194b89ff9cee30da0965553f7c815de5c not found: ID does not exist"
Apr 16 14:02:57.436599 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.436353 2569 scope.go:117] "RemoveContainer" containerID="ea270983ba4d78a9941ebc3e9b6f0e2a209e90766b34c4f67c5ace8cd75ac201"
Apr 16 14:02:57.436599 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:02:57.436556 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea270983ba4d78a9941ebc3e9b6f0e2a209e90766b34c4f67c5ace8cd75ac201\": container with ID starting with ea270983ba4d78a9941ebc3e9b6f0e2a209e90766b34c4f67c5ace8cd75ac201 not found: ID does not exist" containerID="ea270983ba4d78a9941ebc3e9b6f0e2a209e90766b34c4f67c5ace8cd75ac201"
Apr 16 14:02:57.436599 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.436581 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea270983ba4d78a9941ebc3e9b6f0e2a209e90766b34c4f67c5ace8cd75ac201"} err="failed to get container status \"ea270983ba4d78a9941ebc3e9b6f0e2a209e90766b34c4f67c5ace8cd75ac201\": rpc error: code = NotFound desc = could not find container \"ea270983ba4d78a9941ebc3e9b6f0e2a209e90766b34c4f67c5ace8cd75ac201\": container with ID starting with ea270983ba4d78a9941ebc3e9b6f0e2a209e90766b34c4f67c5ace8cd75ac201 not found: ID does not exist"
Apr 16 14:02:57.436599 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.436600 2569 scope.go:117] "RemoveContainer" containerID="f73b8712fb89c12f03d3f0951501fb50fe3c231f8821e0d4ba0cee3787e99f55"
Apr 16 14:02:57.437077 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.436787 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f73b8712fb89c12f03d3f0951501fb50fe3c231f8821e0d4ba0cee3787e99f55"} err="failed to get container status \"f73b8712fb89c12f03d3f0951501fb50fe3c231f8821e0d4ba0cee3787e99f55\": rpc error: code = NotFound desc = could not find container \"f73b8712fb89c12f03d3f0951501fb50fe3c231f8821e0d4ba0cee3787e99f55\": container with ID starting with f73b8712fb89c12f03d3f0951501fb50fe3c231f8821e0d4ba0cee3787e99f55 not found: ID does not exist"
Apr 16 14:02:57.437077 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.436801 2569 scope.go:117] "RemoveContainer" containerID="d3bd2a745d90f9dbf4c4a3fab914d6f17ed4240601f478a7b26162a113d61f33"
Apr 16 14:02:57.437077 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.436995 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3bd2a745d90f9dbf4c4a3fab914d6f17ed4240601f478a7b26162a113d61f33"} err="failed to get container status \"d3bd2a745d90f9dbf4c4a3fab914d6f17ed4240601f478a7b26162a113d61f33\": rpc error: code = NotFound desc = could not find container \"d3bd2a745d90f9dbf4c4a3fab914d6f17ed4240601f478a7b26162a113d61f33\": container with ID starting with d3bd2a745d90f9dbf4c4a3fab914d6f17ed4240601f478a7b26162a113d61f33 not found: ID does not exist"
Apr 16 14:02:57.437077 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.437015 2569 scope.go:117] "RemoveContainer" containerID="b28045c1defd042a44fdca7ebb86445586bd133e595606633e7e8974f98b41d8"
Apr 16 14:02:57.437229 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.437211 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b28045c1defd042a44fdca7ebb86445586bd133e595606633e7e8974f98b41d8"} err="failed to get container status \"b28045c1defd042a44fdca7ebb86445586bd133e595606633e7e8974f98b41d8\": rpc error: code = NotFound desc = could not find container \"b28045c1defd042a44fdca7ebb86445586bd133e595606633e7e8974f98b41d8\": container with ID starting with b28045c1defd042a44fdca7ebb86445586bd133e595606633e7e8974f98b41d8 not found: ID does not exist"
Apr 16 14:02:57.437311 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.437230 2569 scope.go:117] "RemoveContainer" containerID="25f5dd2a98d75bfec477246a71ceb285ea0da818595de986d39e3e7f336e0088"
Apr 16 14:02:57.437459 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.437439 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25f5dd2a98d75bfec477246a71ceb285ea0da818595de986d39e3e7f336e0088"} err="failed to get container status \"25f5dd2a98d75bfec477246a71ceb285ea0da818595de986d39e3e7f336e0088\": rpc error: code = NotFound desc = could not find container \"25f5dd2a98d75bfec477246a71ceb285ea0da818595de986d39e3e7f336e0088\": container with ID starting with 25f5dd2a98d75bfec477246a71ceb285ea0da818595de986d39e3e7f336e0088 not found: ID does not exist"
Apr 16 14:02:57.437505 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.437461 2569 scope.go:117] "RemoveContainer" containerID="5e0783b12ac8c341290000bb058571e3fde2095b0f4e771588ebb432b4ec4a3f"
Apr 16 14:02:57.437712 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.437691 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e0783b12ac8c341290000bb058571e3fde2095b0f4e771588ebb432b4ec4a3f"} err="failed to get container status \"5e0783b12ac8c341290000bb058571e3fde2095b0f4e771588ebb432b4ec4a3f\": rpc error: code = NotFound desc = could not find container \"5e0783b12ac8c341290000bb058571e3fde2095b0f4e771588ebb432b4ec4a3f\": container with ID starting with 5e0783b12ac8c341290000bb058571e3fde2095b0f4e771588ebb432b4ec4a3f not found: ID does not exist"
Apr 16 14:02:57.437712 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.437710 2569 scope.go:117] "RemoveContainer" containerID="c10c739eecc6b36270aa91b84f64b1d194b89ff9cee30da0965553f7c815de5c"
Apr 16 14:02:57.437937 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.437919 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c10c739eecc6b36270aa91b84f64b1d194b89ff9cee30da0965553f7c815de5c"} err="failed to get container status \"c10c739eecc6b36270aa91b84f64b1d194b89ff9cee30da0965553f7c815de5c\": rpc error: code = NotFound desc = could not find container \"c10c739eecc6b36270aa91b84f64b1d194b89ff9cee30da0965553f7c815de5c\": container with ID starting with c10c739eecc6b36270aa91b84f64b1d194b89ff9cee30da0965553f7c815de5c not found: ID does not exist"
Apr 16 14:02:57.437978 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.437938 2569 scope.go:117] "RemoveContainer" containerID="ea270983ba4d78a9941ebc3e9b6f0e2a209e90766b34c4f67c5ace8cd75ac201"
Apr 16 14:02:57.438150 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.438133 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea270983ba4d78a9941ebc3e9b6f0e2a209e90766b34c4f67c5ace8cd75ac201"} err="failed to get container status \"ea270983ba4d78a9941ebc3e9b6f0e2a209e90766b34c4f67c5ace8cd75ac201\": rpc error: code = NotFound desc = could not find container \"ea270983ba4d78a9941ebc3e9b6f0e2a209e90766b34c4f67c5ace8cd75ac201\": container with ID starting with ea270983ba4d78a9941ebc3e9b6f0e2a209e90766b34c4f67c5ace8cd75ac201 not found: ID does not exist"
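The paired "ContainerStatus from runtime service failed" / "DeleteContainer returned error" records above are a benign race: the containers were already removed, so CRI-O answers rpc error: code = NotFound when the kubelet re-requests their status during cleanup. The usual way to classify such errors is the gRPC status package; a sketch of the pattern (not the kubelet's exact code), using a simulated error in the same shape as the log lines:

```go
// Classify CRI "could not find container" failures as already-gone.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// alreadyGone reports whether err is a gRPC NotFound, which during
// container cleanup means the runtime has already removed the container.
func alreadyGone(err error) bool {
	return status.Code(err) == codes.NotFound
}

func main() {
	// Simulated runtime error matching the log lines above.
	err := status.Error(codes.NotFound,
		"could not find container \"f73b8712fb89c12f03d3f0951501fb50fe3c231f8821e0d4ba0cee3787e99f55\": ID does not exist")
	if alreadyGone(err) {
		fmt.Println("container already removed; treat delete as success")
	}
}
```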
Apr 16 14:02:57.441636 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.441618 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.444469 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.443954 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-dockercfg-z64zd\""
Apr 16 14:02:57.444469 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.444198 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-web-config\""
Apr 16 14:02:57.444469 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.444437 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy-metric\""
Apr 16 14:02:57.444672 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.444652 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-tls-assets-0\""
Apr 16 14:02:57.444760 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.444728 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-cluster-tls-config\""
Apr 16 14:02:57.444872 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.444813 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-tls\""
Apr 16 14:02:57.444872 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.444827 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-main-generated\""
Apr 16 14:02:57.444872 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.444849 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy-web\""
Apr 16 14:02:57.445033 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.444884 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"alertmanager-kube-rbac-proxy\""
Apr 16 14:02:57.450229 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.450200 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"alertmanager-trusted-ca-bundle\""
Apr 16 14:02:57.451318 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.451293 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Apr 16 14:02:57.568753 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.568718 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1e761084-6a61-409d-8d29-cf8c7fc2c750-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.568878 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.568761 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1e761084-6a61-409d-8d29-cf8c7fc2c750-web-config\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.568878 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.568788 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e761084-6a61-409d-8d29-cf8c7fc2c750-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.568878 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.568835 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/1e761084-6a61-409d-8d29-cf8c7fc2c750-config-volume\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.568878 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.568856 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/1e761084-6a61-409d-8d29-cf8c7fc2c750-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.569010 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.568927 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bmbf\" (UniqueName: \"kubernetes.io/projected/1e761084-6a61-409d-8d29-cf8c7fc2c750-kube-api-access-8bmbf\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.569010 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.568953 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1e761084-6a61-409d-8d29-cf8c7fc2c750-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.569010 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.568977 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1e761084-6a61-409d-8d29-cf8c7fc2c750-config-out\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.569010 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.569000 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1e761084-6a61-409d-8d29-cf8c7fc2c750-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.569135 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.569053 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/1e761084-6a61-409d-8d29-cf8c7fc2c750-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.569135 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.569073 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1e761084-6a61-409d-8d29-cf8c7fc2c750-tls-assets\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.569135 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.569113 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/1e761084-6a61-409d-8d29-cf8c7fc2c750-cluster-tls-config\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.569223 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.569138 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/1e761084-6a61-409d-8d29-cf8c7fc2c750-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.669934 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.669866 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8bmbf\" (UniqueName: \"kubernetes.io/projected/1e761084-6a61-409d-8d29-cf8c7fc2c750-kube-api-access-8bmbf\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.669934 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.669899 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1e761084-6a61-409d-8d29-cf8c7fc2c750-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.669934 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.669917 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1e761084-6a61-409d-8d29-cf8c7fc2c750-config-out\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.670113 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.669943 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1e761084-6a61-409d-8d29-cf8c7fc2c750-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.670113 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.669980 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/1e761084-6a61-409d-8d29-cf8c7fc2c750-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.670113 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.670010 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1e761084-6a61-409d-8d29-cf8c7fc2c750-tls-assets\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.670279 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.670176 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/1e761084-6a61-409d-8d29-cf8c7fc2c750-cluster-tls-config\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.670279 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.670232 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/1e761084-6a61-409d-8d29-cf8c7fc2c750-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.670382 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.670298 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1e761084-6a61-409d-8d29-cf8c7fc2c750-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.670382 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.670328 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1e761084-6a61-409d-8d29-cf8c7fc2c750-web-config\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.670382 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.670364 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e761084-6a61-409d-8d29-cf8c7fc2c750-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.670523 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.670412 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/1e761084-6a61-409d-8d29-cf8c7fc2c750-config-volume\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.670523 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.670439 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/1e761084-6a61-409d-8d29-cf8c7fc2c750-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.670523 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.670513 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/1e761084-6a61-409d-8d29-cf8c7fc2c750-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.670703 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.670682 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1e761084-6a61-409d-8d29-cf8c7fc2c750-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.671786 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.671443 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e761084-6a61-409d-8d29-cf8c7fc2c750-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.672658 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.672628 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1e761084-6a61-409d-8d29-cf8c7fc2c750-config-out\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.672884 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.672864 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1e761084-6a61-409d-8d29-cf8c7fc2c750-tls-assets\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.673459 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.673437 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1e761084-6a61-409d-8d29-cf8c7fc2c750-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.673459 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.673447 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/1e761084-6a61-409d-8d29-cf8c7fc2c750-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.673731 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.673709 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/1e761084-6a61-409d-8d29-cf8c7fc2c750-cluster-tls-config\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.674176 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.674152 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/1e761084-6a61-409d-8d29-cf8c7fc2c750-config-volume\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.674294 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.674198 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/1e761084-6a61-409d-8d29-cf8c7fc2c750-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.674773 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.674755 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1e761084-6a61-409d-8d29-cf8c7fc2c750-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.674960 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.674944 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1e761084-6a61-409d-8d29-cf8c7fc2c750-web-config\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.677810 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.677792 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bmbf\" (UniqueName: \"kubernetes.io/projected/1e761084-6a61-409d-8d29-cf8c7fc2c750-kube-api-access-8bmbf\") pod \"alertmanager-main-0\" (UID: \"1e761084-6a61-409d-8d29-cf8c7fc2c750\") " pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.753414 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.753378 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Apr 16 14:02:57.901603 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:57.901540 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Apr 16 14:02:57.904189 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:02:57.904161 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e761084_6a61_409d_8d29_cf8c7fc2c750.slice/crio-660ff6d4869b1ff8d8b4e9f46b24254f5875f09f22ffd671198ce1f8a6d91bdf WatchSource:0}: Error finding container 660ff6d4869b1ff8d8b4e9f46b24254f5875f09f22ffd671198ce1f8a6d91bdf: Status 404 returned error can't find the container with id 660ff6d4869b1ff8d8b4e9f46b24254f5875f09f22ffd671198ce1f8a6d91bdf
Apr 16 14:02:58.377024 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:58.376943 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6cc56cdf-0ee0-49a9-b52c-65d8745cb390-metrics-certs\") pod \"network-metrics-daemon-99gsl\" (UID: \"6cc56cdf-0ee0-49a9-b52c-65d8745cb390\") " pod="openshift-multus/network-metrics-daemon-99gsl"
Apr 16 14:02:58.379224 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:58.379204 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6cc56cdf-0ee0-49a9-b52c-65d8745cb390-metrics-certs\") pod \"network-metrics-daemon-99gsl\" (UID: \"6cc56cdf-0ee0-49a9-b52c-65d8745cb390\") " pod="openshift-multus/network-metrics-daemon-99gsl"
Apr 16 14:02:58.388807 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:58.388781 2569 generic.go:358] "Generic (PLEG): container finished" podID="1e761084-6a61-409d-8d29-cf8c7fc2c750" containerID="fb32386459eada50d9c0526fcc22ee0970b53f0751cde57c0dc98cd0ebbac94f" exitCode=0
Apr 16 14:02:58.388914 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:58.388852 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1e761084-6a61-409d-8d29-cf8c7fc2c750","Type":"ContainerDied","Data":"fb32386459eada50d9c0526fcc22ee0970b53f0751cde57c0dc98cd0ebbac94f"}
Apr 16 14:02:58.388914 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:58.388874 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1e761084-6a61-409d-8d29-cf8c7fc2c750","Type":"ContainerStarted","Data":"660ff6d4869b1ff8d8b4e9f46b24254f5875f09f22ffd671198ce1f8a6d91bdf"}
Apr 16 14:02:58.503618 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:58.503596 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-b98gv\""
Apr 16 14:02:58.511455 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:58.511438 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99gsl"
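The cadvisor warning above ("Status 404 ... can't find the container") is typically a short-lived race: the cgroup for the new sandbox appears before CRI-O has registered the container. The systemd cgroup path encodes the pod UID (with underscores instead of dashes) and the container ID, and both can be recovered directly; a sketch using the path quoted above:

```go
// Recover the pod UID and container ID from a crio cgroup path.
package main

import (
	"fmt"
	"regexp"
	"strings"
)

var cgRe = regexp.MustCompile(`pod([0-9a-f_]+)\.slice/crio-([0-9a-f]+)`)

func main() {
	// Cgroup name taken verbatim from the watch-event warning above.
	name := "/kubepods.slice/kubepods-burstable.slice/" +
		"kubepods-burstable-pod1e761084_6a61_409d_8d29_cf8c7fc2c750.slice/" +
		"crio-660ff6d4869b1ff8d8b4e9f46b24254f5875f09f22ffd671198ce1f8a6d91bdf"
	m := cgRe.FindStringSubmatch(name)
	if m == nil {
		return
	}
	// systemd cgroup names encode the pod UID with underscores.
	uid := strings.ReplaceAll(m[1], "_", "-")
	fmt.Printf("pod UID %s, container %.12s\n", uid, m[2])
}
```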
Apr 16 14:02:58.608951 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:58.608625 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fddbc08-5509-4e65-bdae-beebb2d56a6a" path="/var/lib/kubelet/pods/9fddbc08-5509-4e65-bdae-beebb2d56a6a/volumes"
Apr 16 14:02:58.642428 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:58.642400 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-99gsl"]
Apr 16 14:02:58.644834 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:02:58.644808 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6cc56cdf_0ee0_49a9_b52c_65d8745cb390.slice/crio-86a754068671e38584ba220fadf381891a134f5ced0cab027e0cc7f15dc52977 WatchSource:0}: Error finding container 86a754068671e38584ba220fadf381891a134f5ced0cab027e0cc7f15dc52977: Status 404 returned error can't find the container with id 86a754068671e38584ba220fadf381891a134f5ced0cab027e0cc7f15dc52977
Apr 16 14:02:59.397003 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.396960 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1e761084-6a61-409d-8d29-cf8c7fc2c750","Type":"ContainerStarted","Data":"05f9208bf6f16d5c159a4733fc753ae850c7c4e7b1756d6350b89bc85350e566"}
Apr 16 14:02:59.397545 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.397012 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1e761084-6a61-409d-8d29-cf8c7fc2c750","Type":"ContainerStarted","Data":"5f4387081ad78b1c6dccbfda5819c5429a062815eb66e7eddca924bb3ee22c5b"}
Apr 16 14:02:59.397545 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.397027 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1e761084-6a61-409d-8d29-cf8c7fc2c750","Type":"ContainerStarted","Data":"644d35b44a25fe223447420d54ae11c0032770062d16248d9c42994ac4a0701f"}
Apr 16 14:02:59.397545 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.397039 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1e761084-6a61-409d-8d29-cf8c7fc2c750","Type":"ContainerStarted","Data":"f01eff483bd0c86557026e5306bf90f17ec4a1f2908a7cca3707f3bdd2b2410c"}
Apr 16 14:02:59.397545 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.397050 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1e761084-6a61-409d-8d29-cf8c7fc2c750","Type":"ContainerStarted","Data":"f6f97fae601793dbc8cb1e4cda7ac354863eae031068ca31788115b12d68287b"}
Apr 16 14:02:59.397545 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.397062 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1e761084-6a61-409d-8d29-cf8c7fc2c750","Type":"ContainerStarted","Data":"6a8e0f684e167291031f6f46622ed7ca73281b46b790dcd59f664a000b957f1a"}
Apr 16 14:02:59.398292 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.398256 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-99gsl" event={"ID":"6cc56cdf-0ee0-49a9-b52c-65d8745cb390","Type":"ContainerStarted","Data":"86a754068671e38584ba220fadf381891a134f5ced0cab027e0cc7f15dc52977"}
Apr 16 14:02:59.426959 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.426898 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=2.426879287 podStartE2EDuration="2.426879287s" podCreationTimestamp="2026-04-16 14:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 14:02:59.425268755 +0000 UTC m=+253.446556505" watchObservedRunningTime="2026-04-16 14:02:59.426879287 +0000 UTC m=+253.448167039"
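The startup-latency record above is self-consistent: podStartSLOduration=2.426879287 is exactly watchObservedRunningTime minus podCreationTimestamp, and the pull timestamps are the zero value, meaning no image pull was recorded for this start. A sketch reproducing the arithmetic; the "m=+..." suffix is Go's monotonic-clock reading and must be stripped before parsing:

```go
// Reproduce podStartSLOduration from the two timestamps in the record above.
package main

import (
	"fmt"
	"strings"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	s, _, _ = strings.Cut(s, " m=") // drop the monotonic-clock suffix
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-04-16 14:02:57 +0000 UTC")
	observed := mustParse("2026-04-16 14:02:59.426879287 +0000 UTC m=+253.448167039")
	fmt.Println(observed.Sub(created)) // 2.426879287s, matching podStartSLOduration
}
```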
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=2.426879287 podStartE2EDuration="2.426879287s" podCreationTimestamp="2026-04-16 14:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 14:02:59.425268755 +0000 UTC m=+253.446556505" watchObservedRunningTime="2026-04-16 14:02:59.426879287 +0000 UTC m=+253.448167039" Apr 16 14:02:59.489223 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.487121 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-58f8588d54-vj2dk"] Apr 16 14:02:59.492068 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.492043 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-58f8588d54-vj2dk" Apr 16 14:02:59.494265 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.494218 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"telemeter-client\"" Apr 16 14:02:59.494528 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.494391 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"telemeter-client-dockercfg-xlxfz\"" Apr 16 14:02:59.494528 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.494431 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"telemeter-client-kube-rbac-proxy-config\"" Apr 16 14:02:59.494528 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.494470 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"telemeter-client-tls\"" Apr 16 14:02:59.494528 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.494431 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"federate-client-certs\"" Apr 16 14:02:59.494716 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.494693 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"telemeter-client-serving-certs-ca-bundle\"" Apr 16 14:02:59.501950 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.501890 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"telemeter-trusted-ca-bundle-8i12ta5c71j38\"" Apr 16 14:02:59.503529 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.503505 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-58f8588d54-vj2dk"] Apr 16 14:02:59.589101 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.589068 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/50c59d5c-1c79-4bc3-925e-84828e70d51b-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-58f8588d54-vj2dk\" (UID: \"50c59d5c-1c79-4bc3-925e-84828e70d51b\") " pod="openshift-monitoring/telemeter-client-58f8588d54-vj2dk" Apr 16 14:02:59.589282 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.589117 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50c59d5c-1c79-4bc3-925e-84828e70d51b-serving-certs-ca-bundle\") pod \"telemeter-client-58f8588d54-vj2dk\" (UID: 
\"50c59d5c-1c79-4bc3-925e-84828e70d51b\") " pod="openshift-monitoring/telemeter-client-58f8588d54-vj2dk" Apr 16 14:02:59.589282 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.589190 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/50c59d5c-1c79-4bc3-925e-84828e70d51b-secret-telemeter-client\") pod \"telemeter-client-58f8588d54-vj2dk\" (UID: \"50c59d5c-1c79-4bc3-925e-84828e70d51b\") " pod="openshift-monitoring/telemeter-client-58f8588d54-vj2dk" Apr 16 14:02:59.589418 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.589286 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfxqm\" (UniqueName: \"kubernetes.io/projected/50c59d5c-1c79-4bc3-925e-84828e70d51b-kube-api-access-bfxqm\") pod \"telemeter-client-58f8588d54-vj2dk\" (UID: \"50c59d5c-1c79-4bc3-925e-84828e70d51b\") " pod="openshift-monitoring/telemeter-client-58f8588d54-vj2dk" Apr 16 14:02:59.589418 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.589320 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50c59d5c-1c79-4bc3-925e-84828e70d51b-telemeter-trusted-ca-bundle\") pod \"telemeter-client-58f8588d54-vj2dk\" (UID: \"50c59d5c-1c79-4bc3-925e-84828e70d51b\") " pod="openshift-monitoring/telemeter-client-58f8588d54-vj2dk" Apr 16 14:02:59.589418 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.589354 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/50c59d5c-1c79-4bc3-925e-84828e70d51b-telemeter-client-tls\") pod \"telemeter-client-58f8588d54-vj2dk\" (UID: \"50c59d5c-1c79-4bc3-925e-84828e70d51b\") " pod="openshift-monitoring/telemeter-client-58f8588d54-vj2dk" Apr 16 14:02:59.589558 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.589433 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/50c59d5c-1c79-4bc3-925e-84828e70d51b-federate-client-tls\") pod \"telemeter-client-58f8588d54-vj2dk\" (UID: \"50c59d5c-1c79-4bc3-925e-84828e70d51b\") " pod="openshift-monitoring/telemeter-client-58f8588d54-vj2dk" Apr 16 14:02:59.589558 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.589457 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/50c59d5c-1c79-4bc3-925e-84828e70d51b-metrics-client-ca\") pod \"telemeter-client-58f8588d54-vj2dk\" (UID: \"50c59d5c-1c79-4bc3-925e-84828e70d51b\") " pod="openshift-monitoring/telemeter-client-58f8588d54-vj2dk" Apr 16 14:02:59.690952 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.690872 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/50c59d5c-1c79-4bc3-925e-84828e70d51b-telemeter-client-tls\") pod \"telemeter-client-58f8588d54-vj2dk\" (UID: \"50c59d5c-1c79-4bc3-925e-84828e70d51b\") " pod="openshift-monitoring/telemeter-client-58f8588d54-vj2dk" Apr 16 14:02:59.691133 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.690974 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: 
\"kubernetes.io/secret/50c59d5c-1c79-4bc3-925e-84828e70d51b-federate-client-tls\") pod \"telemeter-client-58f8588d54-vj2dk\" (UID: \"50c59d5c-1c79-4bc3-925e-84828e70d51b\") " pod="openshift-monitoring/telemeter-client-58f8588d54-vj2dk" Apr 16 14:02:59.691133 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.691004 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/50c59d5c-1c79-4bc3-925e-84828e70d51b-metrics-client-ca\") pod \"telemeter-client-58f8588d54-vj2dk\" (UID: \"50c59d5c-1c79-4bc3-925e-84828e70d51b\") " pod="openshift-monitoring/telemeter-client-58f8588d54-vj2dk" Apr 16 14:02:59.691133 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.691049 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/50c59d5c-1c79-4bc3-925e-84828e70d51b-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-58f8588d54-vj2dk\" (UID: \"50c59d5c-1c79-4bc3-925e-84828e70d51b\") " pod="openshift-monitoring/telemeter-client-58f8588d54-vj2dk" Apr 16 14:02:59.691133 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.691119 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50c59d5c-1c79-4bc3-925e-84828e70d51b-serving-certs-ca-bundle\") pod \"telemeter-client-58f8588d54-vj2dk\" (UID: \"50c59d5c-1c79-4bc3-925e-84828e70d51b\") " pod="openshift-monitoring/telemeter-client-58f8588d54-vj2dk" Apr 16 14:02:59.691417 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.691156 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/50c59d5c-1c79-4bc3-925e-84828e70d51b-secret-telemeter-client\") pod \"telemeter-client-58f8588d54-vj2dk\" (UID: \"50c59d5c-1c79-4bc3-925e-84828e70d51b\") " pod="openshift-monitoring/telemeter-client-58f8588d54-vj2dk" Apr 16 14:02:59.691417 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.691223 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bfxqm\" (UniqueName: \"kubernetes.io/projected/50c59d5c-1c79-4bc3-925e-84828e70d51b-kube-api-access-bfxqm\") pod \"telemeter-client-58f8588d54-vj2dk\" (UID: \"50c59d5c-1c79-4bc3-925e-84828e70d51b\") " pod="openshift-monitoring/telemeter-client-58f8588d54-vj2dk" Apr 16 14:02:59.691417 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.691275 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50c59d5c-1c79-4bc3-925e-84828e70d51b-telemeter-trusted-ca-bundle\") pod \"telemeter-client-58f8588d54-vj2dk\" (UID: \"50c59d5c-1c79-4bc3-925e-84828e70d51b\") " pod="openshift-monitoring/telemeter-client-58f8588d54-vj2dk" Apr 16 14:02:59.694334 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.694278 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/50c59d5c-1c79-4bc3-925e-84828e70d51b-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-58f8588d54-vj2dk\" (UID: \"50c59d5c-1c79-4bc3-925e-84828e70d51b\") " pod="openshift-monitoring/telemeter-client-58f8588d54-vj2dk" Apr 16 14:02:59.694487 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.694454 2569 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50c59d5c-1c79-4bc3-925e-84828e70d51b-serving-certs-ca-bundle\") pod \"telemeter-client-58f8588d54-vj2dk\" (UID: \"50c59d5c-1c79-4bc3-925e-84828e70d51b\") " pod="openshift-monitoring/telemeter-client-58f8588d54-vj2dk" Apr 16 14:02:59.694564 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.694494 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/50c59d5c-1c79-4bc3-925e-84828e70d51b-metrics-client-ca\") pod \"telemeter-client-58f8588d54-vj2dk\" (UID: \"50c59d5c-1c79-4bc3-925e-84828e70d51b\") " pod="openshift-monitoring/telemeter-client-58f8588d54-vj2dk" Apr 16 14:02:59.694887 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.694848 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/50c59d5c-1c79-4bc3-925e-84828e70d51b-telemeter-client-tls\") pod \"telemeter-client-58f8588d54-vj2dk\" (UID: \"50c59d5c-1c79-4bc3-925e-84828e70d51b\") " pod="openshift-monitoring/telemeter-client-58f8588d54-vj2dk" Apr 16 14:02:59.695265 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.695200 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50c59d5c-1c79-4bc3-925e-84828e70d51b-telemeter-trusted-ca-bundle\") pod \"telemeter-client-58f8588d54-vj2dk\" (UID: \"50c59d5c-1c79-4bc3-925e-84828e70d51b\") " pod="openshift-monitoring/telemeter-client-58f8588d54-vj2dk" Apr 16 14:02:59.696057 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.696028 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/50c59d5c-1c79-4bc3-925e-84828e70d51b-secret-telemeter-client\") pod \"telemeter-client-58f8588d54-vj2dk\" (UID: \"50c59d5c-1c79-4bc3-925e-84828e70d51b\") " pod="openshift-monitoring/telemeter-client-58f8588d54-vj2dk" Apr 16 14:02:59.696185 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.696156 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/50c59d5c-1c79-4bc3-925e-84828e70d51b-federate-client-tls\") pod \"telemeter-client-58f8588d54-vj2dk\" (UID: \"50c59d5c-1c79-4bc3-925e-84828e70d51b\") " pod="openshift-monitoring/telemeter-client-58f8588d54-vj2dk" Apr 16 14:02:59.700519 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.700499 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfxqm\" (UniqueName: \"kubernetes.io/projected/50c59d5c-1c79-4bc3-925e-84828e70d51b-kube-api-access-bfxqm\") pod \"telemeter-client-58f8588d54-vj2dk\" (UID: \"50c59d5c-1c79-4bc3-925e-84828e70d51b\") " pod="openshift-monitoring/telemeter-client-58f8588d54-vj2dk" Apr 16 14:02:59.806094 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.806060 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-58f8588d54-vj2dk" Apr 16 14:02:59.992655 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:02:59.992583 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-58f8588d54-vj2dk"] Apr 16 14:02:59.996993 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:02:59.996960 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50c59d5c_1c79_4bc3_925e_84828e70d51b.slice/crio-cd6a3bea8148cc1ba6c250e0c8a50bcb8535a8c956a32ffe511d1817c86bd1c8 WatchSource:0}: Error finding container cd6a3bea8148cc1ba6c250e0c8a50bcb8535a8c956a32ffe511d1817c86bd1c8: Status 404 returned error can't find the container with id cd6a3bea8148cc1ba6c250e0c8a50bcb8535a8c956a32ffe511d1817c86bd1c8 Apr 16 14:03:00.403727 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:03:00.403690 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-99gsl" event={"ID":"6cc56cdf-0ee0-49a9-b52c-65d8745cb390","Type":"ContainerStarted","Data":"beca4515a77c5e84f7f818e7d43e4c98edf21c10ce90fd16810f0551e7297c73"} Apr 16 14:03:00.403727 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:03:00.403730 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-99gsl" event={"ID":"6cc56cdf-0ee0-49a9-b52c-65d8745cb390","Type":"ContainerStarted","Data":"ffe3b3fd2a0e5f59b5e1172b75267cc94803f55c4cda505a06ea5e38591f1008"} Apr 16 14:03:00.404764 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:03:00.404742 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-58f8588d54-vj2dk" event={"ID":"50c59d5c-1c79-4bc3-925e-84828e70d51b","Type":"ContainerStarted","Data":"cd6a3bea8148cc1ba6c250e0c8a50bcb8535a8c956a32ffe511d1817c86bd1c8"} Apr 16 14:03:00.419307 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:03:00.419265 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-99gsl" podStartSLOduration=253.161670602 podStartE2EDuration="4m14.419229448s" podCreationTimestamp="2026-04-16 13:58:46 +0000 UTC" firstStartedPulling="2026-04-16 14:02:58.646867113 +0000 UTC m=+252.668154846" lastFinishedPulling="2026-04-16 14:02:59.904425958 +0000 UTC m=+253.925713692" observedRunningTime="2026-04-16 14:03:00.418087481 +0000 UTC m=+254.439375252" watchObservedRunningTime="2026-04-16 14:03:00.419229448 +0000 UTC m=+254.440517174" Apr 16 14:03:03.418407 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:03:03.418364 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-58f8588d54-vj2dk" event={"ID":"50c59d5c-1c79-4bc3-925e-84828e70d51b","Type":"ContainerStarted","Data":"0cd27a77a50485f52a6b4f505411fcc8e11e19c74259ae5d49865a0af1114ee7"} Apr 16 14:03:03.418407 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:03:03.418412 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-58f8588d54-vj2dk" event={"ID":"50c59d5c-1c79-4bc3-925e-84828e70d51b","Type":"ContainerStarted","Data":"cc42115b0faf38851c42fbc5abe9ef061d56fc6d10c8f340e7be7daf176bb389"} Apr 16 14:03:03.418801 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:03:03.418422 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-58f8588d54-vj2dk" 
event={"ID":"50c59d5c-1c79-4bc3-925e-84828e70d51b","Type":"ContainerStarted","Data":"8633e3c17d29d92809b8d7494b752a31855fe7ce1c6dd0e2b6ba780acac8ce86"} Apr 16 14:03:03.439967 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:03:03.439914 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-58f8588d54-vj2dk" podStartSLOduration=2.086696527 podStartE2EDuration="4.439900951s" podCreationTimestamp="2026-04-16 14:02:59 +0000 UTC" firstStartedPulling="2026-04-16 14:02:59.999372341 +0000 UTC m=+254.020660066" lastFinishedPulling="2026-04-16 14:03:02.352576746 +0000 UTC m=+256.373864490" observedRunningTime="2026-04-16 14:03:03.437639529 +0000 UTC m=+257.458927276" watchObservedRunningTime="2026-04-16 14:03:03.439900951 +0000 UTC m=+257.461188737" Apr 16 14:03:46.484305 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:03:46.484280 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-d87b8d5fc-27q26_5057171c-9c0f-4741-b8ce-987c40eb447d/console-operator/2.log" Apr 16 14:03:46.485514 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:03:46.485488 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-d87b8d5fc-27q26_5057171c-9c0f-4741-b8ce-987c40eb447d/console-operator/2.log" Apr 16 14:03:46.490677 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:03:46.490657 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ljx7q_7e5f1228-5703-456a-a909-558205e02bfc/ovn-acl-logging/0.log" Apr 16 14:03:46.491706 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:03:46.491686 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ljx7q_7e5f1228-5703-456a-a909-558205e02bfc/ovn-acl-logging/0.log" Apr 16 14:03:46.493791 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:03:46.493774 2569 kubelet.go:1628] "Image garbage collection succeeded" Apr 16 14:05:27.823885 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:27.823851 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ct8c2z"] Apr 16 14:05:27.827425 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:27.827405 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ct8c2z" Apr 16 14:05:27.829800 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:27.829773 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Apr 16 14:05:27.829800 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:27.829788 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-pqm65\"" Apr 16 14:05:27.829953 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:27.829782 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Apr 16 14:05:27.834153 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:27.834128 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ct8c2z"] Apr 16 14:05:27.967439 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:27.967414 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n259w\" (UniqueName: \"kubernetes.io/projected/0e0989cb-5f80-4f5c-acb2-e8dbb462652c-kube-api-access-n259w\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ct8c2z\" (UID: \"0e0989cb-5f80-4f5c-acb2-e8dbb462652c\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ct8c2z" Apr 16 14:05:27.967574 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:27.967451 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0e0989cb-5f80-4f5c-acb2-e8dbb462652c-bundle\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ct8c2z\" (UID: \"0e0989cb-5f80-4f5c-acb2-e8dbb462652c\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ct8c2z" Apr 16 14:05:27.967574 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:27.967473 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0e0989cb-5f80-4f5c-acb2-e8dbb462652c-util\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ct8c2z\" (UID: \"0e0989cb-5f80-4f5c-acb2-e8dbb462652c\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ct8c2z" Apr 16 14:05:28.067991 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:28.067963 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n259w\" (UniqueName: \"kubernetes.io/projected/0e0989cb-5f80-4f5c-acb2-e8dbb462652c-kube-api-access-n259w\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ct8c2z\" (UID: \"0e0989cb-5f80-4f5c-acb2-e8dbb462652c\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ct8c2z" Apr 16 14:05:28.068113 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:28.068008 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0e0989cb-5f80-4f5c-acb2-e8dbb462652c-bundle\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ct8c2z\" (UID: \"0e0989cb-5f80-4f5c-acb2-e8dbb462652c\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ct8c2z" Apr 16 14:05:28.068113 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:28.068038 2569 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0e0989cb-5f80-4f5c-acb2-e8dbb462652c-util\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ct8c2z\" (UID: \"0e0989cb-5f80-4f5c-acb2-e8dbb462652c\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ct8c2z" Apr 16 14:05:28.068344 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:28.068326 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0e0989cb-5f80-4f5c-acb2-e8dbb462652c-bundle\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ct8c2z\" (UID: \"0e0989cb-5f80-4f5c-acb2-e8dbb462652c\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ct8c2z" Apr 16 14:05:28.068442 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:28.068424 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0e0989cb-5f80-4f5c-acb2-e8dbb462652c-util\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ct8c2z\" (UID: \"0e0989cb-5f80-4f5c-acb2-e8dbb462652c\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ct8c2z" Apr 16 14:05:28.075551 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:28.075491 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n259w\" (UniqueName: \"kubernetes.io/projected/0e0989cb-5f80-4f5c-acb2-e8dbb462652c-kube-api-access-n259w\") pod \"59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ct8c2z\" (UID: \"0e0989cb-5f80-4f5c-acb2-e8dbb462652c\") " pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ct8c2z" Apr 16 14:05:28.137313 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:28.137278 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ct8c2z" Apr 16 14:05:28.253511 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:28.253481 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ct8c2z"] Apr 16 14:05:28.257206 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:05:28.257180 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e0989cb_5f80_4f5c_acb2_e8dbb462652c.slice/crio-02f5d46624d1b7e7fe1352db29a3aeb3a3c074b76a4bd1c0ccbee5593fdaad56 WatchSource:0}: Error finding container 02f5d46624d1b7e7fe1352db29a3aeb3a3c074b76a4bd1c0ccbee5593fdaad56: Status 404 returned error can't find the container with id 02f5d46624d1b7e7fe1352db29a3aeb3a3c074b76a4bd1c0ccbee5593fdaad56 Apr 16 14:05:28.259099 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:28.259076 2569 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 16 14:05:28.829163 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:28.829123 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ct8c2z" event={"ID":"0e0989cb-5f80-4f5c-acb2-e8dbb462652c","Type":"ContainerStarted","Data":"02f5d46624d1b7e7fe1352db29a3aeb3a3c074b76a4bd1c0ccbee5593fdaad56"} Apr 16 14:05:33.845916 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:33.845882 2569 generic.go:358] "Generic (PLEG): container finished" podID="0e0989cb-5f80-4f5c-acb2-e8dbb462652c" containerID="6600a639ad70abb75302d199dcb7536a23169195379798996f7c1fcafd88e9c5" exitCode=0 Apr 16 14:05:33.846331 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:33.845970 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ct8c2z" event={"ID":"0e0989cb-5f80-4f5c-acb2-e8dbb462652c","Type":"ContainerDied","Data":"6600a639ad70abb75302d199dcb7536a23169195379798996f7c1fcafd88e9c5"} Apr 16 14:05:37.857836 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:37.857803 2569 generic.go:358] "Generic (PLEG): container finished" podID="0e0989cb-5f80-4f5c-acb2-e8dbb462652c" containerID="b6502fff49331062b98be91da42c4b58d65b88a9e8bbfdca8427169f8c8cfe1b" exitCode=0 Apr 16 14:05:37.858165 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:37.857858 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ct8c2z" event={"ID":"0e0989cb-5f80-4f5c-acb2-e8dbb462652c","Type":"ContainerDied","Data":"b6502fff49331062b98be91da42c4b58d65b88a9e8bbfdca8427169f8c8cfe1b"} Apr 16 14:05:44.882436 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:44.882398 2569 generic.go:358] "Generic (PLEG): container finished" podID="0e0989cb-5f80-4f5c-acb2-e8dbb462652c" containerID="af6c11b533322f45afeef541da8893056422ace101ab7f6bbc5e85dd11bee1ff" exitCode=0 Apr 16 14:05:44.882777 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:44.882481 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ct8c2z" event={"ID":"0e0989cb-5f80-4f5c-acb2-e8dbb462652c","Type":"ContainerDied","Data":"af6c11b533322f45afeef541da8893056422ace101ab7f6bbc5e85dd11bee1ff"} Apr 16 14:05:46.004763 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:46.004739 2569 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ct8c2z" Apr 16 14:05:46.013636 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:46.013616 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0e0989cb-5f80-4f5c-acb2-e8dbb462652c-util\") pod \"0e0989cb-5f80-4f5c-acb2-e8dbb462652c\" (UID: \"0e0989cb-5f80-4f5c-acb2-e8dbb462652c\") " Apr 16 14:05:46.013734 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:46.013674 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0e0989cb-5f80-4f5c-acb2-e8dbb462652c-bundle\") pod \"0e0989cb-5f80-4f5c-acb2-e8dbb462652c\" (UID: \"0e0989cb-5f80-4f5c-acb2-e8dbb462652c\") " Apr 16 14:05:46.013734 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:46.013700 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n259w\" (UniqueName: \"kubernetes.io/projected/0e0989cb-5f80-4f5c-acb2-e8dbb462652c-kube-api-access-n259w\") pod \"0e0989cb-5f80-4f5c-acb2-e8dbb462652c\" (UID: \"0e0989cb-5f80-4f5c-acb2-e8dbb462652c\") " Apr 16 14:05:46.014177 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:46.014152 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e0989cb-5f80-4f5c-acb2-e8dbb462652c-bundle" (OuterVolumeSpecName: "bundle") pod "0e0989cb-5f80-4f5c-acb2-e8dbb462652c" (UID: "0e0989cb-5f80-4f5c-acb2-e8dbb462652c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 14:05:46.015765 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:46.015744 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e0989cb-5f80-4f5c-acb2-e8dbb462652c-kube-api-access-n259w" (OuterVolumeSpecName: "kube-api-access-n259w") pod "0e0989cb-5f80-4f5c-acb2-e8dbb462652c" (UID: "0e0989cb-5f80-4f5c-acb2-e8dbb462652c"). InnerVolumeSpecName "kube-api-access-n259w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 14:05:46.017993 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:46.017975 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e0989cb-5f80-4f5c-acb2-e8dbb462652c-util" (OuterVolumeSpecName: "util") pod "0e0989cb-5f80-4f5c-acb2-e8dbb462652c" (UID: "0e0989cb-5f80-4f5c-acb2-e8dbb462652c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 14:05:46.114395 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:46.114366 2569 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n259w\" (UniqueName: \"kubernetes.io/projected/0e0989cb-5f80-4f5c-acb2-e8dbb462652c-kube-api-access-n259w\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\"" Apr 16 14:05:46.114395 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:46.114389 2569 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0e0989cb-5f80-4f5c-acb2-e8dbb462652c-util\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\"" Apr 16 14:05:46.114395 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:46.114400 2569 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0e0989cb-5f80-4f5c-acb2-e8dbb462652c-bundle\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\"" Apr 16 14:05:46.890401 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:46.890365 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ct8c2z" event={"ID":"0e0989cb-5f80-4f5c-acb2-e8dbb462652c","Type":"ContainerDied","Data":"02f5d46624d1b7e7fe1352db29a3aeb3a3c074b76a4bd1c0ccbee5593fdaad56"} Apr 16 14:05:46.890401 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:46.890400 2569 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02f5d46624d1b7e7fe1352db29a3aeb3a3c074b76a4bd1c0ccbee5593fdaad56" Apr 16 14:05:46.890597 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:46.890437 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/59039e319e11338a40c6b6f1054d265f40bb50ceac6068d5c59955d29ct8c2z" Apr 16 14:05:53.760664 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:53.760636 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/keda-operator-ffbb595cb-pk76h"] Apr 16 14:05:53.761013 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:53.760975 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0e0989cb-5f80-4f5c-acb2-e8dbb462652c" containerName="pull" Apr 16 14:05:53.761013 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:53.760987 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e0989cb-5f80-4f5c-acb2-e8dbb462652c" containerName="pull" Apr 16 14:05:53.761013 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:53.761003 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0e0989cb-5f80-4f5c-acb2-e8dbb462652c" containerName="extract" Apr 16 14:05:53.761013 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:53.761009 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e0989cb-5f80-4f5c-acb2-e8dbb462652c" containerName="extract" Apr 16 14:05:53.761136 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:53.761017 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0e0989cb-5f80-4f5c-acb2-e8dbb462652c" containerName="util" Apr 16 14:05:53.761136 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:53.761022 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e0989cb-5f80-4f5c-acb2-e8dbb462652c" containerName="util" Apr 16 14:05:53.761136 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:53.761074 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="0e0989cb-5f80-4f5c-acb2-e8dbb462652c" containerName="extract" Apr 16 14:05:53.766473 ip-10-0-129-84 
kubenswrapper[2569]: I0416 14:05:53.766456 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-operator-ffbb595cb-pk76h" Apr 16 14:05:53.769514 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:53.769491 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"keda-ocp-cabundle\"" Apr 16 14:05:53.769996 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:53.769975 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"openshift-service-ca.crt\"" Apr 16 14:05:53.770088 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:53.770007 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"kedaorg-certs\"" Apr 16 14:05:53.770088 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:53.770007 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-operator-certs\"" Apr 16 14:05:53.770461 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:53.770448 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-operator-dockercfg-5j6r2\"" Apr 16 14:05:53.771017 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:53.771001 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"kube-root-ca.crt\"" Apr 16 14:05:53.777009 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:53.776991 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-operator-ffbb595cb-pk76h"] Apr 16 14:05:53.876870 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:53.876846 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cabundle0\" (UniqueName: \"kubernetes.io/configmap/9872ae6e-02b2-4eb2-8d90-98467f473b72-cabundle0\") pod \"keda-operator-ffbb595cb-pk76h\" (UID: \"9872ae6e-02b2-4eb2-8d90-98467f473b72\") " pod="openshift-keda/keda-operator-ffbb595cb-pk76h" Apr 16 14:05:53.877040 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:53.876885 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r56x\" (UniqueName: \"kubernetes.io/projected/9872ae6e-02b2-4eb2-8d90-98467f473b72-kube-api-access-5r56x\") pod \"keda-operator-ffbb595cb-pk76h\" (UID: \"9872ae6e-02b2-4eb2-8d90-98467f473b72\") " pod="openshift-keda/keda-operator-ffbb595cb-pk76h" Apr 16 14:05:53.877040 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:53.877003 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/9872ae6e-02b2-4eb2-8d90-98467f473b72-certificates\") pod \"keda-operator-ffbb595cb-pk76h\" (UID: \"9872ae6e-02b2-4eb2-8d90-98467f473b72\") " pod="openshift-keda/keda-operator-ffbb595cb-pk76h" Apr 16 14:05:53.977803 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:53.977774 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/9872ae6e-02b2-4eb2-8d90-98467f473b72-certificates\") pod \"keda-operator-ffbb595cb-pk76h\" (UID: \"9872ae6e-02b2-4eb2-8d90-98467f473b72\") " pod="openshift-keda/keda-operator-ffbb595cb-pk76h" Apr 16 14:05:53.977928 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:53.977816 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cabundle0\" (UniqueName: 
\"kubernetes.io/configmap/9872ae6e-02b2-4eb2-8d90-98467f473b72-cabundle0\") pod \"keda-operator-ffbb595cb-pk76h\" (UID: \"9872ae6e-02b2-4eb2-8d90-98467f473b72\") " pod="openshift-keda/keda-operator-ffbb595cb-pk76h" Apr 16 14:05:53.977928 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:05:53.977906 2569 secret.go:281] references non-existent secret key: ca.crt Apr 16 14:05:53.977928 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:05:53.977921 2569 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: ca.crt Apr 16 14:05:53.977928 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:05:53.977929 2569 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-pk76h: references non-existent secret key: ca.crt Apr 16 14:05:53.978122 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:53.977931 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5r56x\" (UniqueName: \"kubernetes.io/projected/9872ae6e-02b2-4eb2-8d90-98467f473b72-kube-api-access-5r56x\") pod \"keda-operator-ffbb595cb-pk76h\" (UID: \"9872ae6e-02b2-4eb2-8d90-98467f473b72\") " pod="openshift-keda/keda-operator-ffbb595cb-pk76h" Apr 16 14:05:53.978122 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:05:53.977984 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9872ae6e-02b2-4eb2-8d90-98467f473b72-certificates podName:9872ae6e-02b2-4eb2-8d90-98467f473b72 nodeName:}" failed. No retries permitted until 2026-04-16 14:05:54.477968228 +0000 UTC m=+428.499255955 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/9872ae6e-02b2-4eb2-8d90-98467f473b72-certificates") pod "keda-operator-ffbb595cb-pk76h" (UID: "9872ae6e-02b2-4eb2-8d90-98467f473b72") : references non-existent secret key: ca.crt Apr 16 14:05:53.978581 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:53.978442 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cabundle0\" (UniqueName: \"kubernetes.io/configmap/9872ae6e-02b2-4eb2-8d90-98467f473b72-cabundle0\") pod \"keda-operator-ffbb595cb-pk76h\" (UID: \"9872ae6e-02b2-4eb2-8d90-98467f473b72\") " pod="openshift-keda/keda-operator-ffbb595cb-pk76h" Apr 16 14:05:53.987316 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:53.987295 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r56x\" (UniqueName: \"kubernetes.io/projected/9872ae6e-02b2-4eb2-8d90-98467f473b72-kube-api-access-5r56x\") pod \"keda-operator-ffbb595cb-pk76h\" (UID: \"9872ae6e-02b2-4eb2-8d90-98467f473b72\") " pod="openshift-keda/keda-operator-ffbb595cb-pk76h" Apr 16 14:05:54.078376 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:54.078309 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/keda-metrics-apiserver-7c9f485588-f8rpb"] Apr 16 14:05:54.081724 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:54.081710 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-f8rpb" Apr 16 14:05:54.083694 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:54.083673 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-metrics-apiserver-certs\"" Apr 16 14:05:54.090819 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:54.090797 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-metrics-apiserver-7c9f485588-f8rpb"] Apr 16 14:05:54.179604 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:54.179579 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"temp-vol\" (UniqueName: \"kubernetes.io/empty-dir/fcc47e07-2503-4998-b1de-2c03e7490b82-temp-vol\") pod \"keda-metrics-apiserver-7c9f485588-f8rpb\" (UID: \"fcc47e07-2503-4998-b1de-2c03e7490b82\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-f8rpb" Apr 16 14:05:54.179736 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:54.179610 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhhdc\" (UniqueName: \"kubernetes.io/projected/fcc47e07-2503-4998-b1de-2c03e7490b82-kube-api-access-rhhdc\") pod \"keda-metrics-apiserver-7c9f485588-f8rpb\" (UID: \"fcc47e07-2503-4998-b1de-2c03e7490b82\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-f8rpb" Apr 16 14:05:54.179736 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:54.179637 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/fcc47e07-2503-4998-b1de-2c03e7490b82-certificates\") pod \"keda-metrics-apiserver-7c9f485588-f8rpb\" (UID: \"fcc47e07-2503-4998-b1de-2c03e7490b82\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-f8rpb" Apr 16 14:05:54.280569 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:54.280535 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"temp-vol\" (UniqueName: \"kubernetes.io/empty-dir/fcc47e07-2503-4998-b1de-2c03e7490b82-temp-vol\") pod \"keda-metrics-apiserver-7c9f485588-f8rpb\" (UID: \"fcc47e07-2503-4998-b1de-2c03e7490b82\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-f8rpb" Apr 16 14:05:54.280727 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:54.280582 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rhhdc\" (UniqueName: \"kubernetes.io/projected/fcc47e07-2503-4998-b1de-2c03e7490b82-kube-api-access-rhhdc\") pod \"keda-metrics-apiserver-7c9f485588-f8rpb\" (UID: \"fcc47e07-2503-4998-b1de-2c03e7490b82\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-f8rpb" Apr 16 14:05:54.280727 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:54.280633 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/fcc47e07-2503-4998-b1de-2c03e7490b82-certificates\") pod \"keda-metrics-apiserver-7c9f485588-f8rpb\" (UID: \"fcc47e07-2503-4998-b1de-2c03e7490b82\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-f8rpb" Apr 16 14:05:54.280817 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:05:54.280773 2569 secret.go:281] references non-existent secret key: tls.crt Apr 16 14:05:54.280817 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:05:54.280790 2569 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt Apr 16 14:05:54.280817 ip-10-0-129-84 
kubenswrapper[2569]: E0416 14:05:54.280812 2569 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-f8rpb: references non-existent secret key: tls.crt Apr 16 14:05:54.280929 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:05:54.280881 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fcc47e07-2503-4998-b1de-2c03e7490b82-certificates podName:fcc47e07-2503-4998-b1de-2c03e7490b82 nodeName:}" failed. No retries permitted until 2026-04-16 14:05:54.780863471 +0000 UTC m=+428.802151215 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/fcc47e07-2503-4998-b1de-2c03e7490b82-certificates") pod "keda-metrics-apiserver-7c9f485588-f8rpb" (UID: "fcc47e07-2503-4998-b1de-2c03e7490b82") : references non-existent secret key: tls.crt Apr 16 14:05:54.280980 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:54.280960 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"temp-vol\" (UniqueName: \"kubernetes.io/empty-dir/fcc47e07-2503-4998-b1de-2c03e7490b82-temp-vol\") pod \"keda-metrics-apiserver-7c9f485588-f8rpb\" (UID: \"fcc47e07-2503-4998-b1de-2c03e7490b82\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-f8rpb" Apr 16 14:05:54.289398 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:54.289379 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhhdc\" (UniqueName: \"kubernetes.io/projected/fcc47e07-2503-4998-b1de-2c03e7490b82-kube-api-access-rhhdc\") pod \"keda-metrics-apiserver-7c9f485588-f8rpb\" (UID: \"fcc47e07-2503-4998-b1de-2c03e7490b82\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-f8rpb" Apr 16 14:05:54.401662 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:54.401632 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/keda-admission-cf49989db-n4g7q"] Apr 16 14:05:54.404945 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:54.404926 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-keda/keda-admission-cf49989db-n4g7q" Apr 16 14:05:54.407093 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:54.407076 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-admission-webhooks-certs\"" Apr 16 14:05:54.411733 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:54.411708 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-admission-cf49989db-n4g7q"] Apr 16 14:05:54.483159 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:54.483120 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/9872ae6e-02b2-4eb2-8d90-98467f473b72-certificates\") pod \"keda-operator-ffbb595cb-pk76h\" (UID: \"9872ae6e-02b2-4eb2-8d90-98467f473b72\") " pod="openshift-keda/keda-operator-ffbb595cb-pk76h" Apr 16 14:05:54.483375 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:54.483187 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwjgg\" (UniqueName: \"kubernetes.io/projected/f4b04995-b27d-4eb6-9c80-8c9f85931e48-kube-api-access-wwjgg\") pod \"keda-admission-cf49989db-n4g7q\" (UID: \"f4b04995-b27d-4eb6-9c80-8c9f85931e48\") " pod="openshift-keda/keda-admission-cf49989db-n4g7q" Apr 16 14:05:54.483375 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:54.483251 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/f4b04995-b27d-4eb6-9c80-8c9f85931e48-certificates\") pod \"keda-admission-cf49989db-n4g7q\" (UID: \"f4b04995-b27d-4eb6-9c80-8c9f85931e48\") " pod="openshift-keda/keda-admission-cf49989db-n4g7q" Apr 16 14:05:54.483375 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:05:54.483312 2569 secret.go:281] references non-existent secret key: ca.crt Apr 16 14:05:54.483375 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:05:54.483330 2569 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: ca.crt Apr 16 14:05:54.483375 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:05:54.483338 2569 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-pk76h: references non-existent secret key: ca.crt Apr 16 14:05:54.483375 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:05:54.483381 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9872ae6e-02b2-4eb2-8d90-98467f473b72-certificates podName:9872ae6e-02b2-4eb2-8d90-98467f473b72 nodeName:}" failed. No retries permitted until 2026-04-16 14:05:55.483366252 +0000 UTC m=+429.504653978 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/9872ae6e-02b2-4eb2-8d90-98467f473b72-certificates") pod "keda-operator-ffbb595cb-pk76h" (UID: "9872ae6e-02b2-4eb2-8d90-98467f473b72") : references non-existent secret key: ca.crt Apr 16 14:05:54.584000 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:54.583967 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/f4b04995-b27d-4eb6-9c80-8c9f85931e48-certificates\") pod \"keda-admission-cf49989db-n4g7q\" (UID: \"f4b04995-b27d-4eb6-9c80-8c9f85931e48\") " pod="openshift-keda/keda-admission-cf49989db-n4g7q" Apr 16 14:05:54.584201 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:54.584059 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wwjgg\" (UniqueName: \"kubernetes.io/projected/f4b04995-b27d-4eb6-9c80-8c9f85931e48-kube-api-access-wwjgg\") pod \"keda-admission-cf49989db-n4g7q\" (UID: \"f4b04995-b27d-4eb6-9c80-8c9f85931e48\") " pod="openshift-keda/keda-admission-cf49989db-n4g7q" Apr 16 14:05:54.584201 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:05:54.584136 2569 projected.go:264] Couldn't get secret openshift-keda/keda-admission-webhooks-certs: secret "keda-admission-webhooks-certs" not found Apr 16 14:05:54.584201 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:05:54.584164 2569 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-admission-cf49989db-n4g7q: secret "keda-admission-webhooks-certs" not found Apr 16 14:05:54.584395 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:05:54.584252 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f4b04995-b27d-4eb6-9c80-8c9f85931e48-certificates podName:f4b04995-b27d-4eb6-9c80-8c9f85931e48 nodeName:}" failed. No retries permitted until 2026-04-16 14:05:55.084216885 +0000 UTC m=+429.105504611 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/f4b04995-b27d-4eb6-9c80-8c9f85931e48-certificates") pod "keda-admission-cf49989db-n4g7q" (UID: "f4b04995-b27d-4eb6-9c80-8c9f85931e48") : secret "keda-admission-webhooks-certs" not found
Apr 16 14:05:54.595470 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:54.595439 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwjgg\" (UniqueName: \"kubernetes.io/projected/f4b04995-b27d-4eb6-9c80-8c9f85931e48-kube-api-access-wwjgg\") pod \"keda-admission-cf49989db-n4g7q\" (UID: \"f4b04995-b27d-4eb6-9c80-8c9f85931e48\") " pod="openshift-keda/keda-admission-cf49989db-n4g7q"
Apr 16 14:05:54.786464 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:54.786372 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/fcc47e07-2503-4998-b1de-2c03e7490b82-certificates\") pod \"keda-metrics-apiserver-7c9f485588-f8rpb\" (UID: \"fcc47e07-2503-4998-b1de-2c03e7490b82\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-f8rpb"
Apr 16 14:05:54.786846 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:05:54.786504 2569 secret.go:281] references non-existent secret key: tls.crt
Apr 16 14:05:54.786846 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:05:54.786521 2569 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt
Apr 16 14:05:54.786846 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:05:54.786539 2569 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-f8rpb: references non-existent secret key: tls.crt
Apr 16 14:05:54.786846 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:05:54.786595 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fcc47e07-2503-4998-b1de-2c03e7490b82-certificates podName:fcc47e07-2503-4998-b1de-2c03e7490b82 nodeName:}" failed. No retries permitted until 2026-04-16 14:05:55.786575988 +0000 UTC m=+429.807863714 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/fcc47e07-2503-4998-b1de-2c03e7490b82-certificates") pod "keda-metrics-apiserver-7c9f485588-f8rpb" (UID: "fcc47e07-2503-4998-b1de-2c03e7490b82") : references non-existent secret key: tls.crt
Apr 16 14:05:55.089540 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:55.089442 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/f4b04995-b27d-4eb6-9c80-8c9f85931e48-certificates\") pod \"keda-admission-cf49989db-n4g7q\" (UID: \"f4b04995-b27d-4eb6-9c80-8c9f85931e48\") " pod="openshift-keda/keda-admission-cf49989db-n4g7q"
Apr 16 14:05:55.091901 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:55.091871 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/f4b04995-b27d-4eb6-9c80-8c9f85931e48-certificates\") pod \"keda-admission-cf49989db-n4g7q\" (UID: \"f4b04995-b27d-4eb6-9c80-8c9f85931e48\") " pod="openshift-keda/keda-admission-cf49989db-n4g7q"
Apr 16 14:05:55.315679 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:55.315633 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-admission-cf49989db-n4g7q"
Apr 16 14:05:55.433799 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:55.433768 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-admission-cf49989db-n4g7q"]
Apr 16 14:05:55.436868 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:05:55.436841 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b04995_b27d_4eb6_9c80_8c9f85931e48.slice/crio-9f1422de19fc7447da7e30b75641c46695048c4c98fb493cd7971f487d39d471 WatchSource:0}: Error finding container 9f1422de19fc7447da7e30b75641c46695048c4c98fb493cd7971f487d39d471: Status 404 returned error can't find the container with id 9f1422de19fc7447da7e30b75641c46695048c4c98fb493cd7971f487d39d471
Apr 16 14:05:55.493051 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:55.493016 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/9872ae6e-02b2-4eb2-8d90-98467f473b72-certificates\") pod \"keda-operator-ffbb595cb-pk76h\" (UID: \"9872ae6e-02b2-4eb2-8d90-98467f473b72\") " pod="openshift-keda/keda-operator-ffbb595cb-pk76h"
Apr 16 14:05:55.493199 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:05:55.493183 2569 secret.go:281] references non-existent secret key: ca.crt
Apr 16 14:05:55.493290 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:05:55.493206 2569 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: ca.crt
Apr 16 14:05:55.493290 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:05:55.493218 2569 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-pk76h: references non-existent secret key: ca.crt
Apr 16 14:05:55.493396 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:05:55.493302 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9872ae6e-02b2-4eb2-8d90-98467f473b72-certificates podName:9872ae6e-02b2-4eb2-8d90-98467f473b72 nodeName:}" failed. No retries permitted until 2026-04-16 14:05:57.493279324 +0000 UTC m=+431.514567064 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/9872ae6e-02b2-4eb2-8d90-98467f473b72-certificates") pod "keda-operator-ffbb595cb-pk76h" (UID: "9872ae6e-02b2-4eb2-8d90-98467f473b72") : references non-existent secret key: ca.crt
Apr 16 14:05:55.794775 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:55.794741 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/fcc47e07-2503-4998-b1de-2c03e7490b82-certificates\") pod \"keda-metrics-apiserver-7c9f485588-f8rpb\" (UID: \"fcc47e07-2503-4998-b1de-2c03e7490b82\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-f8rpb"
Apr 16 14:05:55.795116 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:05:55.794859 2569 secret.go:281] references non-existent secret key: tls.crt
Apr 16 14:05:55.795116 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:05:55.794871 2569 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt
Apr 16 14:05:55.795116 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:05:55.794888 2569 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-f8rpb: references non-existent secret key: tls.crt
Apr 16 14:05:55.795116 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:05:55.794935 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fcc47e07-2503-4998-b1de-2c03e7490b82-certificates podName:fcc47e07-2503-4998-b1de-2c03e7490b82 nodeName:}" failed. No retries permitted until 2026-04-16 14:05:57.794921248 +0000 UTC m=+431.816208974 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/fcc47e07-2503-4998-b1de-2c03e7490b82-certificates") pod "keda-metrics-apiserver-7c9f485588-f8rpb" (UID: "fcc47e07-2503-4998-b1de-2c03e7490b82") : references non-existent secret key: tls.crt
Apr 16 14:05:55.915153 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:55.915125 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-admission-cf49989db-n4g7q" event={"ID":"f4b04995-b27d-4eb6-9c80-8c9f85931e48","Type":"ContainerStarted","Data":"9f1422de19fc7447da7e30b75641c46695048c4c98fb493cd7971f487d39d471"}
Apr 16 14:05:57.508615 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:57.508581 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/9872ae6e-02b2-4eb2-8d90-98467f473b72-certificates\") pod \"keda-operator-ffbb595cb-pk76h\" (UID: \"9872ae6e-02b2-4eb2-8d90-98467f473b72\") " pod="openshift-keda/keda-operator-ffbb595cb-pk76h"
Apr 16 14:05:57.509008 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:05:57.508732 2569 secret.go:281] references non-existent secret key: ca.crt
Apr 16 14:05:57.509008 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:05:57.508744 2569 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: ca.crt
Apr 16 14:05:57.509008 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:05:57.508753 2569 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-operator-ffbb595cb-pk76h: references non-existent secret key: ca.crt
Apr 16 14:05:57.509008 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:05:57.508798 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9872ae6e-02b2-4eb2-8d90-98467f473b72-certificates podName:9872ae6e-02b2-4eb2-8d90-98467f473b72 nodeName:}" failed. No retries permitted until 2026-04-16 14:06:01.508786162 +0000 UTC m=+435.530073888 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/9872ae6e-02b2-4eb2-8d90-98467f473b72-certificates") pod "keda-operator-ffbb595cb-pk76h" (UID: "9872ae6e-02b2-4eb2-8d90-98467f473b72") : references non-existent secret key: ca.crt
Apr 16 14:05:57.811416 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:57.811328 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/fcc47e07-2503-4998-b1de-2c03e7490b82-certificates\") pod \"keda-metrics-apiserver-7c9f485588-f8rpb\" (UID: \"fcc47e07-2503-4998-b1de-2c03e7490b82\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-f8rpb"
Apr 16 14:05:57.811557 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:05:57.811469 2569 secret.go:281] references non-existent secret key: tls.crt
Apr 16 14:05:57.811557 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:05:57.811483 2569 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt
Apr 16 14:05:57.811557 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:05:57.811502 2569 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-f8rpb: references non-existent secret key: tls.crt
Apr 16 14:05:57.811557 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:05:57.811554 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fcc47e07-2503-4998-b1de-2c03e7490b82-certificates podName:fcc47e07-2503-4998-b1de-2c03e7490b82 nodeName:}" failed. No retries permitted until 2026-04-16 14:06:01.811536964 +0000 UTC m=+435.832824704 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/fcc47e07-2503-4998-b1de-2c03e7490b82-certificates") pod "keda-metrics-apiserver-7c9f485588-f8rpb" (UID: "fcc47e07-2503-4998-b1de-2c03e7490b82") : references non-existent secret key: tls.crt
Apr 16 14:05:57.922989 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:57.922955 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-admission-cf49989db-n4g7q" event={"ID":"f4b04995-b27d-4eb6-9c80-8c9f85931e48","Type":"ContainerStarted","Data":"329f9e8f7e5b4759625ddc3931c7ac1076e6d393efd9e71e6e30e5191ead21f5"}
Apr 16 14:05:57.923179 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:57.923070 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-admission-cf49989db-n4g7q"
Apr 16 14:05:57.940021 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:05:57.939979 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/keda-admission-cf49989db-n4g7q" podStartSLOduration=2.306083788 podStartE2EDuration="3.93981127s" podCreationTimestamp="2026-04-16 14:05:54 +0000 UTC" firstStartedPulling="2026-04-16 14:05:55.438150212 +0000 UTC m=+429.459437939" lastFinishedPulling="2026-04-16 14:05:57.071877691 +0000 UTC m=+431.093165421" observedRunningTime="2026-04-16 14:05:57.939453793 +0000 UTC m=+431.960741540" watchObservedRunningTime="2026-04-16 14:05:57.93981127 +0000 UTC m=+431.961099019"
Apr 16 14:06:01.542096 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:06:01.542053 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/9872ae6e-02b2-4eb2-8d90-98467f473b72-certificates\") pod \"keda-operator-ffbb595cb-pk76h\" (UID: \"9872ae6e-02b2-4eb2-8d90-98467f473b72\") " pod="openshift-keda/keda-operator-ffbb595cb-pk76h"
Apr 16 14:06:01.544450 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:06:01.544430 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/9872ae6e-02b2-4eb2-8d90-98467f473b72-certificates\") pod \"keda-operator-ffbb595cb-pk76h\" (UID: \"9872ae6e-02b2-4eb2-8d90-98467f473b72\") " pod="openshift-keda/keda-operator-ffbb595cb-pk76h"
Apr 16 14:06:01.577338 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:06:01.577306 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-operator-ffbb595cb-pk76h"
Apr 16 14:06:01.692880 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:06:01.692849 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-operator-ffbb595cb-pk76h"]
Apr 16 14:06:01.695612 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:06:01.695582 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9872ae6e_02b2_4eb2_8d90_98467f473b72.slice/crio-1b2b64f271519c34ae5b33ccbad13f8cc60bbb78e5ed953af30259ec8e1b2653 WatchSource:0}: Error finding container 1b2b64f271519c34ae5b33ccbad13f8cc60bbb78e5ed953af30259ec8e1b2653: Status 404 returned error can't find the container with id 1b2b64f271519c34ae5b33ccbad13f8cc60bbb78e5ed953af30259ec8e1b2653
Apr 16 14:06:01.844870 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:06:01.844800 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/fcc47e07-2503-4998-b1de-2c03e7490b82-certificates\") pod \"keda-metrics-apiserver-7c9f485588-f8rpb\" (UID: \"fcc47e07-2503-4998-b1de-2c03e7490b82\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-f8rpb"
Apr 16 14:06:01.847100 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:06:01.847077 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/fcc47e07-2503-4998-b1de-2c03e7490b82-certificates\") pod \"keda-metrics-apiserver-7c9f485588-f8rpb\" (UID: \"fcc47e07-2503-4998-b1de-2c03e7490b82\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-f8rpb"
Apr 16 14:06:01.892921 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:06:01.892897 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-f8rpb"
Apr 16 14:06:01.936351 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:06:01.936321 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-operator-ffbb595cb-pk76h" event={"ID":"9872ae6e-02b2-4eb2-8d90-98467f473b72","Type":"ContainerStarted","Data":"1b2b64f271519c34ae5b33ccbad13f8cc60bbb78e5ed953af30259ec8e1b2653"}
Apr 16 14:06:02.007570 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:06:02.007527 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-metrics-apiserver-7c9f485588-f8rpb"]
Apr 16 14:06:02.010167 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:06:02.010142 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcc47e07_2503_4998_b1de_2c03e7490b82.slice/crio-d6c5ebe3b825a250d1c0f5a60c2ddb524f3f1dbb9da89dc64a59b9d4c6fc80a4 WatchSource:0}: Error finding container d6c5ebe3b825a250d1c0f5a60c2ddb524f3f1dbb9da89dc64a59b9d4c6fc80a4: Status 404 returned error can't find the container with id d6c5ebe3b825a250d1c0f5a60c2ddb524f3f1dbb9da89dc64a59b9d4c6fc80a4
Apr 16 14:06:02.940133 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:06:02.940095 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-f8rpb" event={"ID":"fcc47e07-2503-4998-b1de-2c03e7490b82","Type":"ContainerStarted","Data":"d6c5ebe3b825a250d1c0f5a60c2ddb524f3f1dbb9da89dc64a59b9d4c6fc80a4"}
Apr 16 14:06:07.960226 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:06:07.960189 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-f8rpb" event={"ID":"fcc47e07-2503-4998-b1de-2c03e7490b82","Type":"ContainerStarted","Data":"26197560b3fbb3b65602c037cfc98cb59c533da727ffac5ededbb906d6506564"}
Apr 16 14:06:07.960742 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:06:07.960286 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-f8rpb"
Apr 16 14:06:07.961609 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:06:07.961588 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-operator-ffbb595cb-pk76h" event={"ID":"9872ae6e-02b2-4eb2-8d90-98467f473b72","Type":"ContainerStarted","Data":"c5f3da981b0bc77d1201ecfd7c7b245af4cde9bc1a920f9abff2142993de34ac"}
Apr 16 14:06:07.961728 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:06:07.961708 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-operator-ffbb595cb-pk76h"
Apr 16 14:06:07.978564 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:06:07.978517 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-f8rpb" podStartSLOduration=8.612079399 podStartE2EDuration="13.978501277s" podCreationTimestamp="2026-04-16 14:05:54 +0000 UTC" firstStartedPulling="2026-04-16 14:06:02.011499451 +0000 UTC m=+436.032787177" lastFinishedPulling="2026-04-16 14:06:07.377921315 +0000 UTC m=+441.399209055" observedRunningTime="2026-04-16 14:06:07.977403941 +0000 UTC m=+441.998691689" watchObservedRunningTime="2026-04-16 14:06:07.978501277 +0000 UTC m=+441.999789027"
Apr 16 14:06:07.992378 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:06:07.992338 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/keda-operator-ffbb595cb-pk76h" podStartSLOduration=9.305597863 podStartE2EDuration="14.992326646s" podCreationTimestamp="2026-04-16 14:05:53 +0000 UTC" firstStartedPulling="2026-04-16 14:06:01.696934606 +0000 UTC m=+435.718222332" lastFinishedPulling="2026-04-16 14:06:07.383663389 +0000 UTC m=+441.404951115" observedRunningTime="2026-04-16 14:06:07.990167993 +0000 UTC m=+442.011455741" watchObservedRunningTime="2026-04-16 14:06:07.992326646 +0000 UTC m=+442.013614393"
Apr 16 14:06:18.928908 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:06:18.928879 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-admission-cf49989db-n4g7q"
Apr 16 14:06:18.970430 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:06:18.970403 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-f8rpb"
Apr 16 14:06:28.967683 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:06:28.967654 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-operator-ffbb595cb-pk76h"
Apr 16 14:07:00.630722 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:07:00.630683 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/seaweedfs-86cc847c5c-gzqx7"]
Apr 16 14:07:00.634189 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:07:00.634166 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/seaweedfs-86cc847c5c-gzqx7"
Apr 16 14:07:00.636546 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:07:00.636523 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"mlpipeline-s3-artifact\""
Apr 16 14:07:00.636667 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:07:00.636525 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"default-dockercfg-zs6zv\""
Apr 16 14:07:00.636667 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:07:00.636525 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"openshift-service-ca.crt\""
Apr 16 14:07:00.637479 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:07:00.637453 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"kube-root-ca.crt\""
Apr 16 14:07:00.642918 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:07:00.642897 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/seaweedfs-86cc847c5c-gzqx7"]
Apr 16 14:07:00.723474 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:07:00.723445 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvr6d\" (UniqueName: \"kubernetes.io/projected/3130322e-2435-4782-8e41-bd42dee3a2a8-kube-api-access-gvr6d\") pod \"seaweedfs-86cc847c5c-gzqx7\" (UID: \"3130322e-2435-4782-8e41-bd42dee3a2a8\") " pod="kserve/seaweedfs-86cc847c5c-gzqx7"
Apr 16 14:07:00.723609 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:07:00.723506 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/3130322e-2435-4782-8e41-bd42dee3a2a8-data\") pod \"seaweedfs-86cc847c5c-gzqx7\" (UID: \"3130322e-2435-4782-8e41-bd42dee3a2a8\") " pod="kserve/seaweedfs-86cc847c5c-gzqx7"
Apr 16 14:07:00.824748 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:07:00.824715 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/3130322e-2435-4782-8e41-bd42dee3a2a8-data\") pod \"seaweedfs-86cc847c5c-gzqx7\" (UID: \"3130322e-2435-4782-8e41-bd42dee3a2a8\") " pod="kserve/seaweedfs-86cc847c5c-gzqx7"
Apr 16 14:07:00.824872 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:07:00.824792 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gvr6d\" (UniqueName: \"kubernetes.io/projected/3130322e-2435-4782-8e41-bd42dee3a2a8-kube-api-access-gvr6d\") pod \"seaweedfs-86cc847c5c-gzqx7\" (UID: \"3130322e-2435-4782-8e41-bd42dee3a2a8\") " pod="kserve/seaweedfs-86cc847c5c-gzqx7"
Apr 16 14:07:00.825073 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:07:00.825054 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/3130322e-2435-4782-8e41-bd42dee3a2a8-data\") pod \"seaweedfs-86cc847c5c-gzqx7\" (UID: \"3130322e-2435-4782-8e41-bd42dee3a2a8\") " pod="kserve/seaweedfs-86cc847c5c-gzqx7"
Apr 16 14:07:00.832799 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:07:00.832765 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvr6d\" (UniqueName: \"kubernetes.io/projected/3130322e-2435-4782-8e41-bd42dee3a2a8-kube-api-access-gvr6d\") pod \"seaweedfs-86cc847c5c-gzqx7\" (UID: \"3130322e-2435-4782-8e41-bd42dee3a2a8\") " pod="kserve/seaweedfs-86cc847c5c-gzqx7"
Apr 16 14:07:00.944047 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:07:00.943987 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/seaweedfs-86cc847c5c-gzqx7"
Apr 16 14:07:01.065796 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:07:01.065769 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/seaweedfs-86cc847c5c-gzqx7"]
Apr 16 14:07:01.068422 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:07:01.068390 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3130322e_2435_4782_8e41_bd42dee3a2a8.slice/crio-7caff143d736c3fc66eb4b7e4196da22ef1a66e262684891e0545bb62b818b4e WatchSource:0}: Error finding container 7caff143d736c3fc66eb4b7e4196da22ef1a66e262684891e0545bb62b818b4e: Status 404 returned error can't find the container with id 7caff143d736c3fc66eb4b7e4196da22ef1a66e262684891e0545bb62b818b4e
Apr 16 14:07:01.136841 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:07:01.136802 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/seaweedfs-86cc847c5c-gzqx7" event={"ID":"3130322e-2435-4782-8e41-bd42dee3a2a8","Type":"ContainerStarted","Data":"7caff143d736c3fc66eb4b7e4196da22ef1a66e262684891e0545bb62b818b4e"}
Apr 16 14:07:04.150054 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:07:04.150019 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/seaweedfs-86cc847c5c-gzqx7" event={"ID":"3130322e-2435-4782-8e41-bd42dee3a2a8","Type":"ContainerStarted","Data":"d6cd2b73acd9824ebdc9adb140f6cd5a17d62c6c21a8b50c7e7668f6660c6919"}
Apr 16 14:07:04.150451 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:07:04.150143 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/seaweedfs-86cc847c5c-gzqx7"
Apr 16 14:07:04.166200 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:07:04.166150 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/seaweedfs-86cc847c5c-gzqx7" podStartSLOduration=1.558976014 podStartE2EDuration="4.166138628s" podCreationTimestamp="2026-04-16 14:07:00 +0000 UTC" firstStartedPulling="2026-04-16 14:07:01.069611667 +0000 UTC m=+495.090899393" lastFinishedPulling="2026-04-16 14:07:03.676774278 +0000 UTC m=+497.698062007" observedRunningTime="2026-04-16 14:07:04.163648303 +0000 UTC m=+498.184936051" watchObservedRunningTime="2026-04-16 14:07:04.166138628 +0000 UTC m=+498.187426377"
Apr 16 14:07:10.156027 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:07:10.155994 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/seaweedfs-86cc847c5c-gzqx7"
Apr 16 14:08:12.546491 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:12.546457 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/model-serving-api-86f7b4b499-9n7t5"]
Apr 16 14:08:12.549929 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:12.549901 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/model-serving-api-86f7b4b499-9n7t5"
Apr 16 14:08:12.551541 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:12.551521 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/model-serving-api-86f7b4b499-9n7t5"]
Apr 16 14:08:12.552028 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:12.552003 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"model-serving-api-dockercfg-spwwz\""
Apr 16 14:08:12.552028 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:12.552023 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"model-serving-api-tls\""
Apr 16 14:08:12.555554 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:12.555534 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/odh-model-controller-696fc77849-d9rbz"]
Apr 16 14:08:12.558890 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:12.558871 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/odh-model-controller-696fc77849-d9rbz"
Apr 16 14:08:12.561562 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:12.561533 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"odh-model-controller-dockercfg-tdhv8\""
Apr 16 14:08:12.561562 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:12.561549 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"odh-model-controller-webhook-cert\""
Apr 16 14:08:12.566232 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:12.566210 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/odh-model-controller-696fc77849-d9rbz"]
Apr 16 14:08:12.610036 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:12.610004 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz5p4\" (UniqueName: \"kubernetes.io/projected/758de9fb-8f7d-4a4f-917a-1304d09bed4e-kube-api-access-dz5p4\") pod \"model-serving-api-86f7b4b499-9n7t5\" (UID: \"758de9fb-8f7d-4a4f-917a-1304d09bed4e\") " pod="kserve/model-serving-api-86f7b4b499-9n7t5"
Apr 16 14:08:12.610036 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:12.610036 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1fcb9f00-8568-4fcc-8b6c-4eede4a43768-cert\") pod \"odh-model-controller-696fc77849-d9rbz\" (UID: \"1fcb9f00-8568-4fcc-8b6c-4eede4a43768\") " pod="kserve/odh-model-controller-696fc77849-d9rbz"
Apr 16 14:08:12.610227 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:12.610062 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certs\" (UniqueName: \"kubernetes.io/secret/758de9fb-8f7d-4a4f-917a-1304d09bed4e-tls-certs\") pod \"model-serving-api-86f7b4b499-9n7t5\" (UID: \"758de9fb-8f7d-4a4f-917a-1304d09bed4e\") " pod="kserve/model-serving-api-86f7b4b499-9n7t5"
Apr 16 14:08:12.610227 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:12.610157 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpkfm\" (UniqueName: \"kubernetes.io/projected/1fcb9f00-8568-4fcc-8b6c-4eede4a43768-kube-api-access-dpkfm\") pod \"odh-model-controller-696fc77849-d9rbz\" (UID: \"1fcb9f00-8568-4fcc-8b6c-4eede4a43768\") " pod="kserve/odh-model-controller-696fc77849-d9rbz"
Apr 16 14:08:12.711230 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:12.711191 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dz5p4\" (UniqueName: \"kubernetes.io/projected/758de9fb-8f7d-4a4f-917a-1304d09bed4e-kube-api-access-dz5p4\") pod \"model-serving-api-86f7b4b499-9n7t5\" (UID: \"758de9fb-8f7d-4a4f-917a-1304d09bed4e\") " pod="kserve/model-serving-api-86f7b4b499-9n7t5"
Apr 16 14:08:12.711230 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:12.711229 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1fcb9f00-8568-4fcc-8b6c-4eede4a43768-cert\") pod \"odh-model-controller-696fc77849-d9rbz\" (UID: \"1fcb9f00-8568-4fcc-8b6c-4eede4a43768\") " pod="kserve/odh-model-controller-696fc77849-d9rbz"
Apr 16 14:08:12.711582 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:12.711276 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-certs\" (UniqueName: \"kubernetes.io/secret/758de9fb-8f7d-4a4f-917a-1304d09bed4e-tls-certs\") pod \"model-serving-api-86f7b4b499-9n7t5\" (UID: \"758de9fb-8f7d-4a4f-917a-1304d09bed4e\") " pod="kserve/model-serving-api-86f7b4b499-9n7t5"
Apr 16 14:08:12.711582 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:12.711331 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dpkfm\" (UniqueName: \"kubernetes.io/projected/1fcb9f00-8568-4fcc-8b6c-4eede4a43768-kube-api-access-dpkfm\") pod \"odh-model-controller-696fc77849-d9rbz\" (UID: \"1fcb9f00-8568-4fcc-8b6c-4eede4a43768\") " pod="kserve/odh-model-controller-696fc77849-d9rbz"
Apr 16 14:08:12.711582 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:08:12.711403 2569 secret.go:189] Couldn't get secret kserve/odh-model-controller-webhook-cert: secret "odh-model-controller-webhook-cert" not found
Apr 16 14:08:12.711582 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:08:12.711409 2569 secret.go:189] Couldn't get secret kserve/model-serving-api-tls: secret "model-serving-api-tls" not found
Apr 16 14:08:12.711582 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:08:12.711490 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/758de9fb-8f7d-4a4f-917a-1304d09bed4e-tls-certs podName:758de9fb-8f7d-4a4f-917a-1304d09bed4e nodeName:}" failed. No retries permitted until 2026-04-16 14:08:13.21146932 +0000 UTC m=+567.232757048 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certs" (UniqueName: "kubernetes.io/secret/758de9fb-8f7d-4a4f-917a-1304d09bed4e-tls-certs") pod "model-serving-api-86f7b4b499-9n7t5" (UID: "758de9fb-8f7d-4a4f-917a-1304d09bed4e") : secret "model-serving-api-tls" not found
Apr 16 14:08:12.711582 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:08:12.711510 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fcb9f00-8568-4fcc-8b6c-4eede4a43768-cert podName:1fcb9f00-8568-4fcc-8b6c-4eede4a43768 nodeName:}" failed. No retries permitted until 2026-04-16 14:08:13.211501237 +0000 UTC m=+567.232788977 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1fcb9f00-8568-4fcc-8b6c-4eede4a43768-cert") pod "odh-model-controller-696fc77849-d9rbz" (UID: "1fcb9f00-8568-4fcc-8b6c-4eede4a43768") : secret "odh-model-controller-webhook-cert" not found
Apr 16 14:08:12.720450 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:12.720428 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz5p4\" (UniqueName: \"kubernetes.io/projected/758de9fb-8f7d-4a4f-917a-1304d09bed4e-kube-api-access-dz5p4\") pod \"model-serving-api-86f7b4b499-9n7t5\" (UID: \"758de9fb-8f7d-4a4f-917a-1304d09bed4e\") " pod="kserve/model-serving-api-86f7b4b499-9n7t5"
Apr 16 14:08:12.720599 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:12.720578 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpkfm\" (UniqueName: \"kubernetes.io/projected/1fcb9f00-8568-4fcc-8b6c-4eede4a43768-kube-api-access-dpkfm\") pod \"odh-model-controller-696fc77849-d9rbz\" (UID: \"1fcb9f00-8568-4fcc-8b6c-4eede4a43768\") " pod="kserve/odh-model-controller-696fc77849-d9rbz"
Apr 16 14:08:13.215555 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:13.215511 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1fcb9f00-8568-4fcc-8b6c-4eede4a43768-cert\") pod \"odh-model-controller-696fc77849-d9rbz\" (UID: \"1fcb9f00-8568-4fcc-8b6c-4eede4a43768\") " pod="kserve/odh-model-controller-696fc77849-d9rbz"
Apr 16 14:08:13.215555 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:13.215558 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-certs\" (UniqueName: \"kubernetes.io/secret/758de9fb-8f7d-4a4f-917a-1304d09bed4e-tls-certs\") pod \"model-serving-api-86f7b4b499-9n7t5\" (UID: \"758de9fb-8f7d-4a4f-917a-1304d09bed4e\") " pod="kserve/model-serving-api-86f7b4b499-9n7t5"
Apr 16 14:08:13.215825 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:08:13.215652 2569 secret.go:189] Couldn't get secret kserve/model-serving-api-tls: secret "model-serving-api-tls" not found
Apr 16 14:08:13.215825 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:08:13.215711 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/758de9fb-8f7d-4a4f-917a-1304d09bed4e-tls-certs podName:758de9fb-8f7d-4a4f-917a-1304d09bed4e nodeName:}" failed. No retries permitted until 2026-04-16 14:08:14.215696318 +0000 UTC m=+568.236984043 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "tls-certs" (UniqueName: "kubernetes.io/secret/758de9fb-8f7d-4a4f-917a-1304d09bed4e-tls-certs") pod "model-serving-api-86f7b4b499-9n7t5" (UID: "758de9fb-8f7d-4a4f-917a-1304d09bed4e") : secret "model-serving-api-tls" not found
Apr 16 14:08:13.218001 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:13.217974 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1fcb9f00-8568-4fcc-8b6c-4eede4a43768-cert\") pod \"odh-model-controller-696fc77849-d9rbz\" (UID: \"1fcb9f00-8568-4fcc-8b6c-4eede4a43768\") " pod="kserve/odh-model-controller-696fc77849-d9rbz"
Apr 16 14:08:13.471157 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:13.471073 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/odh-model-controller-696fc77849-d9rbz"
Apr 16 14:08:13.592328 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:13.592306 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/odh-model-controller-696fc77849-d9rbz"]
Apr 16 14:08:13.594693 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:08:13.594664 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fcb9f00_8568_4fcc_8b6c_4eede4a43768.slice/crio-16539bf307355b858f203ea50bc6234e7912d7a578849e99ac1036e3507d6186 WatchSource:0}: Error finding container 16539bf307355b858f203ea50bc6234e7912d7a578849e99ac1036e3507d6186: Status 404 returned error can't find the container with id 16539bf307355b858f203ea50bc6234e7912d7a578849e99ac1036e3507d6186
Apr 16 14:08:14.224409 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:14.224370 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-certs\" (UniqueName: \"kubernetes.io/secret/758de9fb-8f7d-4a4f-917a-1304d09bed4e-tls-certs\") pod \"model-serving-api-86f7b4b499-9n7t5\" (UID: \"758de9fb-8f7d-4a4f-917a-1304d09bed4e\") " pod="kserve/model-serving-api-86f7b4b499-9n7t5"
Apr 16 14:08:14.226681 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:14.226663 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-certs\" (UniqueName: \"kubernetes.io/secret/758de9fb-8f7d-4a4f-917a-1304d09bed4e-tls-certs\") pod \"model-serving-api-86f7b4b499-9n7t5\" (UID: \"758de9fb-8f7d-4a4f-917a-1304d09bed4e\") " pod="kserve/model-serving-api-86f7b4b499-9n7t5"
Apr 16 14:08:14.361599 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:14.361570 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/model-serving-api-86f7b4b499-9n7t5"
Apr 16 14:08:14.383673 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:14.383636 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/odh-model-controller-696fc77849-d9rbz" event={"ID":"1fcb9f00-8568-4fcc-8b6c-4eede4a43768","Type":"ContainerStarted","Data":"16539bf307355b858f203ea50bc6234e7912d7a578849e99ac1036e3507d6186"}
Apr 16 14:08:14.481457 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:14.481386 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/model-serving-api-86f7b4b499-9n7t5"]
Apr 16 14:08:14.484180 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:08:14.484151 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod758de9fb_8f7d_4a4f_917a_1304d09bed4e.slice/crio-bedf180153d62e76810019a7026d8e9361a6b6e52a7454209aadfc27943deae5 WatchSource:0}: Error finding container bedf180153d62e76810019a7026d8e9361a6b6e52a7454209aadfc27943deae5: Status 404 returned error can't find the container with id bedf180153d62e76810019a7026d8e9361a6b6e52a7454209aadfc27943deae5
Apr 16 14:08:15.389532 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:15.389478 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/model-serving-api-86f7b4b499-9n7t5" event={"ID":"758de9fb-8f7d-4a4f-917a-1304d09bed4e","Type":"ContainerStarted","Data":"bedf180153d62e76810019a7026d8e9361a6b6e52a7454209aadfc27943deae5"}
Apr 16 14:08:18.403622 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:18.403584 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/odh-model-controller-696fc77849-d9rbz" event={"ID":"1fcb9f00-8568-4fcc-8b6c-4eede4a43768","Type":"ContainerStarted","Data":"b60b551658807d19a75ff52eec21b6944701fa0015bec39d0c97c1c1f02517c0"}
Apr 16 14:08:18.403622 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:18.403635 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/odh-model-controller-696fc77849-d9rbz"
Apr 16 14:08:18.404945 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:18.404919 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/model-serving-api-86f7b4b499-9n7t5" event={"ID":"758de9fb-8f7d-4a4f-917a-1304d09bed4e","Type":"ContainerStarted","Data":"958fd6293e82c4e05b4302e23881f8128325e8eea99fed722bc5455a66b9fcc7"}
Apr 16 14:08:18.405078 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:18.405039 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/model-serving-api-86f7b4b499-9n7t5"
Apr 16 14:08:18.420536 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:18.420487 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/odh-model-controller-696fc77849-d9rbz" podStartSLOduration=2.328466931 podStartE2EDuration="6.420471776s" podCreationTimestamp="2026-04-16 14:08:12 +0000 UTC" firstStartedPulling="2026-04-16 14:08:13.595859653 +0000 UTC m=+567.617147379" lastFinishedPulling="2026-04-16 14:08:17.687864499 +0000 UTC m=+571.709152224" observedRunningTime="2026-04-16 14:08:18.418636624 +0000 UTC m=+572.439924372" watchObservedRunningTime="2026-04-16 14:08:18.420471776 +0000 UTC m=+572.441759524"
Apr 16 14:08:18.436617 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:18.436571 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/model-serving-api-86f7b4b499-9n7t5" podStartSLOduration=3.180438779 podStartE2EDuration="6.4365569s" podCreationTimestamp="2026-04-16 14:08:12 +0000 UTC" firstStartedPulling="2026-04-16 14:08:14.48596287 +0000 UTC m=+568.507250597" lastFinishedPulling="2026-04-16 14:08:17.742080989 +0000 UTC m=+571.763368718" observedRunningTime="2026-04-16 14:08:18.436137208 +0000 UTC m=+572.457424957" watchObservedRunningTime="2026-04-16 14:08:18.4365569 +0000 UTC m=+572.457844649"
Apr 16 14:08:29.411976 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:29.411942 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/odh-model-controller-696fc77849-d9rbz"
Apr 16 14:08:29.414042 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:29.414021 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/model-serving-api-86f7b4b499-9n7t5"
Apr 16 14:08:30.200480 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:30.200448 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/s3-init-n8lnh"]
Apr 16 14:08:30.203344 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:30.203329 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/s3-init-n8lnh"
Apr 16 14:08:30.209486 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:30.209454 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/s3-init-n8lnh"]
Apr 16 14:08:30.367164 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:30.367137 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgzmb\" (UniqueName: \"kubernetes.io/projected/7f2ac48f-ac90-4ec6-867c-9616a66ed0a5-kube-api-access-mgzmb\") pod \"s3-init-n8lnh\" (UID: \"7f2ac48f-ac90-4ec6-867c-9616a66ed0a5\") " pod="kserve/s3-init-n8lnh"
Apr 16 14:08:30.467951 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:30.467876 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mgzmb\" (UniqueName: \"kubernetes.io/projected/7f2ac48f-ac90-4ec6-867c-9616a66ed0a5-kube-api-access-mgzmb\") pod \"s3-init-n8lnh\" (UID: \"7f2ac48f-ac90-4ec6-867c-9616a66ed0a5\") " pod="kserve/s3-init-n8lnh"
Apr 16 14:08:30.475506 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:30.475477 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgzmb\" (UniqueName: \"kubernetes.io/projected/7f2ac48f-ac90-4ec6-867c-9616a66ed0a5-kube-api-access-mgzmb\") pod \"s3-init-n8lnh\" (UID: \"7f2ac48f-ac90-4ec6-867c-9616a66ed0a5\") " pod="kserve/s3-init-n8lnh"
Apr 16 14:08:30.526844 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:30.526817 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/s3-init-n8lnh"
Apr 16 14:08:30.648567 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:30.648543 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/s3-init-n8lnh"]
Apr 16 14:08:30.651074 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:08:30.651046 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f2ac48f_ac90_4ec6_867c_9616a66ed0a5.slice/crio-245766c54da58bbca484dde6a2d88dfa8019b19479bb17ba676a77f4f08d527c WatchSource:0}: Error finding container 245766c54da58bbca484dde6a2d88dfa8019b19479bb17ba676a77f4f08d527c: Status 404 returned error can't find the container with id 245766c54da58bbca484dde6a2d88dfa8019b19479bb17ba676a77f4f08d527c
Apr 16 14:08:31.455325 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:31.455267 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/s3-init-n8lnh" event={"ID":"7f2ac48f-ac90-4ec6-867c-9616a66ed0a5","Type":"ContainerStarted","Data":"245766c54da58bbca484dde6a2d88dfa8019b19479bb17ba676a77f4f08d527c"}
Apr 16 14:08:35.471873 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:35.471836 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/s3-init-n8lnh" event={"ID":"7f2ac48f-ac90-4ec6-867c-9616a66ed0a5","Type":"ContainerStarted","Data":"1bab5b88e72be050b0e2d69bda7efcd688dcb2cb05c1cf395312f4d48826c621"}
Apr 16 14:08:35.485778 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:35.485738 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/s3-init-n8lnh" podStartSLOduration=1.014783534 podStartE2EDuration="5.485726006s" podCreationTimestamp="2026-04-16 14:08:30 +0000 UTC" firstStartedPulling="2026-04-16 14:08:30.652969733 +0000 UTC m=+584.674257460" lastFinishedPulling="2026-04-16 14:08:35.123912202 +0000 UTC m=+589.145199932" observedRunningTime="2026-04-16 14:08:35.485366851 +0000 UTC m=+589.506654598" watchObservedRunningTime="2026-04-16 14:08:35.485726006 +0000 UTC m=+589.507013754"
Apr 16 14:08:38.483493 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:38.483459 2569 generic.go:358] "Generic (PLEG): container finished" podID="7f2ac48f-ac90-4ec6-867c-9616a66ed0a5" containerID="1bab5b88e72be050b0e2d69bda7efcd688dcb2cb05c1cf395312f4d48826c621" exitCode=0
Apr 16 14:08:38.483845 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:38.483528 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/s3-init-n8lnh" event={"ID":"7f2ac48f-ac90-4ec6-867c-9616a66ed0a5","Type":"ContainerDied","Data":"1bab5b88e72be050b0e2d69bda7efcd688dcb2cb05c1cf395312f4d48826c621"}
Apr 16 14:08:39.619142 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:39.619119 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve/s3-init-n8lnh"
Apr 16 14:08:39.754708 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:39.754638 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgzmb\" (UniqueName: \"kubernetes.io/projected/7f2ac48f-ac90-4ec6-867c-9616a66ed0a5-kube-api-access-mgzmb\") pod \"7f2ac48f-ac90-4ec6-867c-9616a66ed0a5\" (UID: \"7f2ac48f-ac90-4ec6-867c-9616a66ed0a5\") "
Apr 16 14:08:39.756708 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:39.756686 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f2ac48f-ac90-4ec6-867c-9616a66ed0a5-kube-api-access-mgzmb" (OuterVolumeSpecName: "kube-api-access-mgzmb") pod "7f2ac48f-ac90-4ec6-867c-9616a66ed0a5" (UID: "7f2ac48f-ac90-4ec6-867c-9616a66ed0a5"). InnerVolumeSpecName "kube-api-access-mgzmb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 16 14:08:39.855285 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:39.855230 2569 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mgzmb\" (UniqueName: \"kubernetes.io/projected/7f2ac48f-ac90-4ec6-867c-9616a66ed0a5-kube-api-access-mgzmb\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\""
Apr 16 14:08:40.491645 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:40.491615 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve/s3-init-n8lnh"
Apr 16 14:08:40.491645 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:40.491630 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/s3-init-n8lnh" event={"ID":"7f2ac48f-ac90-4ec6-867c-9616a66ed0a5","Type":"ContainerDied","Data":"245766c54da58bbca484dde6a2d88dfa8019b19479bb17ba676a77f4f08d527c"}
Apr 16 14:08:40.491852 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:40.491656 2569 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="245766c54da58bbca484dde6a2d88dfa8019b19479bb17ba676a77f4f08d527c"
Apr 16 14:08:46.521984 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:46.521956 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-d87b8d5fc-27q26_5057171c-9c0f-4741-b8ce-987c40eb447d/console-operator/2.log"
Apr 16 14:08:46.522539 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:46.522076 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-d87b8d5fc-27q26_5057171c-9c0f-4741-b8ce-987c40eb447d/console-operator/2.log"
Apr 16 14:08:46.528196 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:46.528179 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ljx7q_7e5f1228-5703-456a-a909-558205e02bfc/ovn-acl-logging/0.log"
Apr 16 14:08:46.528364 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:46.528332 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ljx7q_7e5f1228-5703-456a-a909-558205e02bfc/ovn-acl-logging/0.log"
Apr 16 14:08:49.676929 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:49.676897 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs"]
Apr 16 14:08:49.677571 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:49.677326 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7f2ac48f-ac90-4ec6-867c-9616a66ed0a5" containerName="s3-init"
Apr 16 14:08:49.677571 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:49.677341 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f2ac48f-ac90-4ec6-867c-9616a66ed0a5" containerName="s3-init"
Apr 16 14:08:49.677571 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:49.677413 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="7f2ac48f-ac90-4ec6-867c-9616a66ed0a5" containerName="s3-init"
Apr 16 14:08:49.685709 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:49.685688 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs"
Apr 16 14:08:49.687939 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:49.687914 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"default-dockercfg-x6x2v\""
Apr 16 14:08:49.688180 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:49.688158 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs"]
Apr 16 14:08:49.718343 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:49.718313 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/5ef18018-5b9a-4799-919f-27e187f05a2b-kserve-provision-location\") pod \"isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs\" (UID: \"5ef18018-5b9a-4799-919f-27e187f05a2b\") " pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs"
Apr 16 14:08:49.819324 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:49.819286 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/5ef18018-5b9a-4799-919f-27e187f05a2b-kserve-provision-location\") pod \"isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs\" (UID: \"5ef18018-5b9a-4799-919f-27e187f05a2b\") " pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs"
Apr 16 14:08:49.819623 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:49.819602 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/5ef18018-5b9a-4799-919f-27e187f05a2b-kserve-provision-location\") pod \"isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs\" (UID: \"5ef18018-5b9a-4799-919f-27e187f05a2b\") " pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs"
Apr 16 14:08:49.997289 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:49.997186 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs"
Apr 16 14:08:50.122806 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:50.122741 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs"]
Apr 16 14:08:50.125036 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:08:50.125004 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ef18018_5b9a_4799_919f_27e187f05a2b.slice/crio-ca36ad2b2045cbe65cb2e9622f7c74e95ef8bdbc23ad2feee2191030839762d2 WatchSource:0}: Error finding container ca36ad2b2045cbe65cb2e9622f7c74e95ef8bdbc23ad2feee2191030839762d2: Status 404 returned error can't find the container with id ca36ad2b2045cbe65cb2e9622f7c74e95ef8bdbc23ad2feee2191030839762d2
Apr 16 14:08:50.528703 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:50.528670 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs" event={"ID":"5ef18018-5b9a-4799-919f-27e187f05a2b","Type":"ContainerStarted","Data":"ca36ad2b2045cbe65cb2e9622f7c74e95ef8bdbc23ad2feee2191030839762d2"}
Apr 16 14:08:53.541051 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:53.540963 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs" event={"ID":"5ef18018-5b9a-4799-919f-27e187f05a2b","Type":"ContainerStarted","Data":"776de3144f63fd7eb6f9459a3d27ce62b7f194e764733ecc212a43714743ec05"}
Apr 16 14:08:57.554283 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:57.554217 2569 generic.go:358] "Generic (PLEG): container finished" podID="5ef18018-5b9a-4799-919f-27e187f05a2b" containerID="776de3144f63fd7eb6f9459a3d27ce62b7f194e764733ecc212a43714743ec05" exitCode=0
Apr 16 14:08:57.554668 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:08:57.554287 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs" event={"ID":"5ef18018-5b9a-4799-919f-27e187f05a2b","Type":"ContainerDied","Data":"776de3144f63fd7eb6f9459a3d27ce62b7f194e764733ecc212a43714743ec05"}
Apr 16 14:09:11.613474 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:09:11.613390 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs" event={"ID":"5ef18018-5b9a-4799-919f-27e187f05a2b","Type":"ContainerStarted","Data":"8aadbbbd7c5e8da8a09ba8fd8413a4773a9f8694e9abf9199ec98053fef5428a"}
Apr 16 14:09:14.628951 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:09:14.628910 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs" event={"ID":"5ef18018-5b9a-4799-919f-27e187f05a2b","Type":"ContainerStarted","Data":"30c9cc22e8282de260ea7385110f2de9b41562f760b7c11f59273db54db3ffcf"}
Apr 16 14:09:14.629374 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:09:14.629233 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs"
Apr 16 14:09:14.629374 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:09:14.629284 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs"
Apr 16 14:09:14.630803 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:09:14.630771 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs" podUID="5ef18018-5b9a-4799-919f-27e187f05a2b" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.34:8080: connect: connection refused"
Apr 16 14:09:14.631447 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:09:14.631420 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs" podUID="5ef18018-5b9a-4799-919f-27e187f05a2b" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 14:09:14.645932 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:09:14.645889 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs" podStartSLOduration=1.293071833 podStartE2EDuration="25.645876097s" podCreationTimestamp="2026-04-16 14:08:49 +0000 UTC" firstStartedPulling="2026-04-16 14:08:50.127103527 +0000 UTC m=+604.148391253" lastFinishedPulling="2026-04-16 14:09:14.479907781 +0000 UTC m=+628.501195517" observedRunningTime="2026-04-16 14:09:14.643623564 +0000 UTC m=+628.664911314" watchObservedRunningTime="2026-04-16 14:09:14.645876097 +0000 UTC m=+628.667163844"
Apr 16 14:09:15.633386 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:09:15.633344 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs" podUID="5ef18018-5b9a-4799-919f-27e187f05a2b" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.34:8080: connect: connection refused"
Apr 16 14:09:15.633778 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:09:15.633754 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs" podUID="5ef18018-5b9a-4799-919f-27e187f05a2b" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 14:09:25.634261 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:09:25.634198 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs" podUID="5ef18018-5b9a-4799-919f-27e187f05a2b" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.34:8080: connect: connection refused"
Apr 16 14:09:25.647818 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:09:25.634662 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs" podUID="5ef18018-5b9a-4799-919f-27e187f05a2b" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 14:09:35.633937 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:09:35.633888 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs" podUID="5ef18018-5b9a-4799-919f-27e187f05a2b" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.34:8080: connect: connection refused"
Apr 16 14:09:35.634397 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:09:35.634372 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs" podUID="5ef18018-5b9a-4799-919f-27e187f05a2b" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 14:09:45.633807 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:09:45.633759 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs" podUID="5ef18018-5b9a-4799-919f-27e187f05a2b" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.34:8080: connect: connection refused"
Apr 16 14:09:45.634413 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:09:45.634194 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs" podUID="5ef18018-5b9a-4799-919f-27e187f05a2b" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 14:09:55.633747 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:09:55.633698 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs" podUID="5ef18018-5b9a-4799-919f-27e187f05a2b" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.34:8080: connect: connection refused"
Apr 16 14:09:55.634203 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:09:55.634164 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs" podUID="5ef18018-5b9a-4799-919f-27e187f05a2b" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 14:10:05.634214 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:05.634180 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs" podUID="5ef18018-5b9a-4799-919f-27e187f05a2b" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.34:8080: connect: connection refused"
Apr 16 14:10:05.634722 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:05.634697 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs" podUID="5ef18018-5b9a-4799-919f-27e187f05a2b" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Apr 16 14:10:15.634408 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:15.634376 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs"
Apr 16 14:10:15.634804 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:15.634658 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs"
Apr 16 14:10:24.790739 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:24.790707 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs"]
Apr 16 14:10:24.791214 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:24.791049 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs" podUID="5ef18018-5b9a-4799-919f-27e187f05a2b" containerName="kserve-container" containerID="cri-o://8aadbbbd7c5e8da8a09ba8fd8413a4773a9f8694e9abf9199ec98053fef5428a" gracePeriod=30
Apr 16 14:10:24.791214 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:24.791164 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs" podUID="5ef18018-5b9a-4799-919f-27e187f05a2b" containerName="agent" containerID="cri-o://30c9cc22e8282de260ea7385110f2de9b41562f760b7c11f59273db54db3ffcf" gracePeriod=30
Apr 16 14:10:24.861941 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:24.861913 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-3af68-predictor-ccc9568f9-g6qhq"]
Apr 16 14:10:24.865578 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:24.865548 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-3af68-predictor-ccc9568f9-g6qhq"
Apr 16 14:10:24.872814 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:24.872788 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-3af68-predictor-ccc9568f9-g6qhq"]
Apr 16 14:10:24.906880 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:24.906846 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-3af68-predictor-55989b6587-vx9j7"]
Apr 16 14:10:24.910413 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:24.910393 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-3af68-predictor-55989b6587-vx9j7"
Apr 16 14:10:24.918010 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:24.917988 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-3af68-predictor-55989b6587-vx9j7"]
Apr 16 14:10:25.020795 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:25.020763 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/6c1cd6f5-74a2-47b0-b431-864185372950-kserve-provision-location\") pod \"isvc-xgboost-graph-raw-3af68-predictor-55989b6587-vx9j7\" (UID: \"6c1cd6f5-74a2-47b0-b431-864185372950\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-3af68-predictor-55989b6587-vx9j7"
Apr 16 14:10:25.020934 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:25.020808 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/109a7777-da00-4d77-911a-aac714533f34-kserve-provision-location\") pod \"isvc-sklearn-graph-raw-3af68-predictor-ccc9568f9-g6qhq\" (UID: \"109a7777-da00-4d77-911a-aac714533f34\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-3af68-predictor-ccc9568f9-g6qhq"
Apr 16 14:10:25.121614 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:25.121532 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/6c1cd6f5-74a2-47b0-b431-864185372950-kserve-provision-location\") pod \"isvc-xgboost-graph-raw-3af68-predictor-55989b6587-vx9j7\" (UID: \"6c1cd6f5-74a2-47b0-b431-864185372950\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-3af68-predictor-55989b6587-vx9j7"
Apr 16 14:10:25.121614 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:25.121590 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/109a7777-da00-4d77-911a-aac714533f34-kserve-provision-location\") pod \"isvc-sklearn-graph-raw-3af68-predictor-ccc9568f9-g6qhq\" (UID: \"109a7777-da00-4d77-911a-aac714533f34\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-3af68-predictor-ccc9568f9-g6qhq"
Apr 16 14:10:25.121928 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:25.121908 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/109a7777-da00-4d77-911a-aac714533f34-kserve-provision-location\") pod \"isvc-sklearn-graph-raw-3af68-predictor-ccc9568f9-g6qhq\" (UID: \"109a7777-da00-4d77-911a-aac714533f34\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-3af68-predictor-ccc9568f9-g6qhq"
Apr 16 14:10:25.122003 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:25.121913 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/6c1cd6f5-74a2-47b0-b431-864185372950-kserve-provision-location\") pod \"isvc-xgboost-graph-raw-3af68-predictor-55989b6587-vx9j7\" (UID: \"6c1cd6f5-74a2-47b0-b431-864185372950\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-3af68-predictor-55989b6587-vx9j7"
Apr 16 14:10:25.177525 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:25.177475 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-3af68-predictor-ccc9568f9-g6qhq"
Apr 16 14:10:25.221311 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:25.221268 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-3af68-predictor-55989b6587-vx9j7"
Apr 16 14:10:25.313780 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:25.313724 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-3af68-predictor-ccc9568f9-g6qhq"]
Apr 16 14:10:25.323829 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:10:25.323018 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod109a7777_da00_4d77_911a_aac714533f34.slice/crio-770bcef0085909d171b73217a4bc81154add6f7e61bb17bd6639cf41063df31b WatchSource:0}: Error finding container 770bcef0085909d171b73217a4bc81154add6f7e61bb17bd6639cf41063df31b: Status 404 returned error can't find the container with id 770bcef0085909d171b73217a4bc81154add6f7e61bb17bd6639cf41063df31b
Apr 16 14:10:25.360899 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:25.360829 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-3af68-predictor-55989b6587-vx9j7"]
Apr 16 14:10:25.363337 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:10:25.363300 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c1cd6f5_74a2_47b0_b431_864185372950.slice/crio-a96941ab70b8d311b1eb0b190f8d051dcf9bfc479397068acf3cb07fc1bddab5 WatchSource:0}: Error finding container a96941ab70b8d311b1eb0b190f8d051dcf9bfc479397068acf3cb07fc1bddab5: Status 404 returned error can't find the container with id a96941ab70b8d311b1eb0b190f8d051dcf9bfc479397068acf3cb07fc1bddab5
Apr 16 14:10:25.633990 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:25.633889 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs" podUID="5ef18018-5b9a-4799-919f-27e187f05a2b" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.34:8080: connect: connection refused"
Apr 16 14:10:25.634306 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:25.634271 2569
prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs" podUID="5ef18018-5b9a-4799-919f-27e187f05a2b" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 14:10:25.871835 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:25.871800 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-3af68-predictor-ccc9568f9-g6qhq" event={"ID":"109a7777-da00-4d77-911a-aac714533f34","Type":"ContainerStarted","Data":"7413bb866520aeac529c5b445610ae8f22b0c27e25863b28a0adc14ba56ecd06"} Apr 16 14:10:25.871835 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:25.871835 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-3af68-predictor-ccc9568f9-g6qhq" event={"ID":"109a7777-da00-4d77-911a-aac714533f34","Type":"ContainerStarted","Data":"770bcef0085909d171b73217a4bc81154add6f7e61bb17bd6639cf41063df31b"} Apr 16 14:10:25.873220 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:25.873197 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-3af68-predictor-55989b6587-vx9j7" event={"ID":"6c1cd6f5-74a2-47b0-b431-864185372950","Type":"ContainerStarted","Data":"0e5c8ff39cf1a4e06e93a3852f5e68ac14ca7c394dc1110428dd62386db10983"} Apr 16 14:10:25.873345 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:25.873229 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-3af68-predictor-55989b6587-vx9j7" event={"ID":"6c1cd6f5-74a2-47b0-b431-864185372950","Type":"ContainerStarted","Data":"a96941ab70b8d311b1eb0b190f8d051dcf9bfc479397068acf3cb07fc1bddab5"} Apr 16 14:10:29.887497 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:29.887466 2569 generic.go:358] "Generic (PLEG): container finished" podID="109a7777-da00-4d77-911a-aac714533f34" containerID="7413bb866520aeac529c5b445610ae8f22b0c27e25863b28a0adc14ba56ecd06" exitCode=0 Apr 16 14:10:29.887933 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:29.887540 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-3af68-predictor-ccc9568f9-g6qhq" event={"ID":"109a7777-da00-4d77-911a-aac714533f34","Type":"ContainerDied","Data":"7413bb866520aeac529c5b445610ae8f22b0c27e25863b28a0adc14ba56ecd06"} Apr 16 14:10:29.888656 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:29.888625 2569 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 16 14:10:29.889339 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:29.889317 2569 generic.go:358] "Generic (PLEG): container finished" podID="5ef18018-5b9a-4799-919f-27e187f05a2b" containerID="8aadbbbd7c5e8da8a09ba8fd8413a4773a9f8694e9abf9199ec98053fef5428a" exitCode=0 Apr 16 14:10:29.889435 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:29.889386 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs" event={"ID":"5ef18018-5b9a-4799-919f-27e187f05a2b","Type":"ContainerDied","Data":"8aadbbbd7c5e8da8a09ba8fd8413a4773a9f8694e9abf9199ec98053fef5428a"} Apr 16 14:10:29.890713 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:29.890694 2569 generic.go:358] "Generic (PLEG): container finished" podID="6c1cd6f5-74a2-47b0-b431-864185372950" containerID="0e5c8ff39cf1a4e06e93a3852f5e68ac14ca7c394dc1110428dd62386db10983" exitCode=0 Apr 16 14:10:29.890789 ip-10-0-129-84 kubenswrapper[2569]: I0416 
14:10:29.890738 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-3af68-predictor-55989b6587-vx9j7" event={"ID":"6c1cd6f5-74a2-47b0-b431-864185372950","Type":"ContainerDied","Data":"0e5c8ff39cf1a4e06e93a3852f5e68ac14ca7c394dc1110428dd62386db10983"} Apr 16 14:10:30.898344 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:30.898295 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-3af68-predictor-ccc9568f9-g6qhq" event={"ID":"109a7777-da00-4d77-911a-aac714533f34","Type":"ContainerStarted","Data":"4280f93f976fe42134d980c0d4e8d67e311126dd467917e200a02b5860497917"} Apr 16 14:10:30.898751 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:30.898629 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-3af68-predictor-ccc9568f9-g6qhq" Apr 16 14:10:30.900330 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:30.900297 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-3af68-predictor-ccc9568f9-g6qhq" podUID="109a7777-da00-4d77-911a-aac714533f34" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.35:8080: connect: connection refused" Apr 16 14:10:30.915393 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:30.915338 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-3af68-predictor-ccc9568f9-g6qhq" podStartSLOduration=6.915317904 podStartE2EDuration="6.915317904s" podCreationTimestamp="2026-04-16 14:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 14:10:30.912227272 +0000 UTC m=+704.933515022" watchObservedRunningTime="2026-04-16 14:10:30.915317904 +0000 UTC m=+704.936605654" Apr 16 14:10:31.903720 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:31.903677 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-3af68-predictor-ccc9568f9-g6qhq" podUID="109a7777-da00-4d77-911a-aac714533f34" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.35:8080: connect: connection refused" Apr 16 14:10:35.633488 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:35.633449 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs" podUID="5ef18018-5b9a-4799-919f-27e187f05a2b" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.34:8080: connect: connection refused" Apr 16 14:10:35.633945 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:35.633872 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs" podUID="5ef18018-5b9a-4799-919f-27e187f05a2b" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 14:10:41.904545 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:41.904497 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-3af68-predictor-ccc9568f9-g6qhq" podUID="109a7777-da00-4d77-911a-aac714533f34" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.35:8080: connect: connection refused" Apr 16 14:10:45.633600 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:45.633507 2569 prober.go:120] "Probe failed" 
probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs" podUID="5ef18018-5b9a-4799-919f-27e187f05a2b" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.34:8080: connect: connection refused" Apr 16 14:10:45.634048 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:45.633662 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs" Apr 16 14:10:45.634203 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:45.634180 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs" podUID="5ef18018-5b9a-4799-919f-27e187f05a2b" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 14:10:45.634332 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:45.634304 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs" Apr 16 14:10:49.971551 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:49.971508 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-3af68-predictor-55989b6587-vx9j7" event={"ID":"6c1cd6f5-74a2-47b0-b431-864185372950","Type":"ContainerStarted","Data":"b8459bb8f444c45eb5ff996b92176645e8ab58324e610b27c7172471095b26ed"} Apr 16 14:10:49.972035 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:49.971836 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-3af68-predictor-55989b6587-vx9j7" Apr 16 14:10:49.973213 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:49.973189 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-3af68-predictor-55989b6587-vx9j7" podUID="6c1cd6f5-74a2-47b0-b431-864185372950" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.36:8080: connect: connection refused" Apr 16 14:10:49.987905 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:49.987854 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-3af68-predictor-55989b6587-vx9j7" podStartSLOduration=6.408376238 podStartE2EDuration="25.987837969s" podCreationTimestamp="2026-04-16 14:10:24 +0000 UTC" firstStartedPulling="2026-04-16 14:10:29.891843252 +0000 UTC m=+703.913130978" lastFinishedPulling="2026-04-16 14:10:49.47130498 +0000 UTC m=+723.492592709" observedRunningTime="2026-04-16 14:10:49.986483859 +0000 UTC m=+724.007771617" watchObservedRunningTime="2026-04-16 14:10:49.987837969 +0000 UTC m=+724.009125719" Apr 16 14:10:50.975476 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:50.975441 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-3af68-predictor-55989b6587-vx9j7" podUID="6c1cd6f5-74a2-47b0-b431-864185372950" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.36:8080: connect: connection refused" Apr 16 14:10:51.904164 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:51.904126 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-3af68-predictor-ccc9568f9-g6qhq" podUID="109a7777-da00-4d77-911a-aac714533f34" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.35:8080: connect: connection refused" Apr 
16 14:10:55.458748 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:55.458722 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs" Apr 16 14:10:55.497927 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:55.497896 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/5ef18018-5b9a-4799-919f-27e187f05a2b-kserve-provision-location\") pod \"5ef18018-5b9a-4799-919f-27e187f05a2b\" (UID: \"5ef18018-5b9a-4799-919f-27e187f05a2b\") " Apr 16 14:10:55.498175 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:55.498153 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ef18018-5b9a-4799-919f-27e187f05a2b-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "5ef18018-5b9a-4799-919f-27e187f05a2b" (UID: "5ef18018-5b9a-4799-919f-27e187f05a2b"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 14:10:55.598759 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:55.598729 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/5ef18018-5b9a-4799-919f-27e187f05a2b-kserve-provision-location\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\"" Apr 16 14:10:55.994435 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:55.994404 2569 generic.go:358] "Generic (PLEG): container finished" podID="5ef18018-5b9a-4799-919f-27e187f05a2b" containerID="30c9cc22e8282de260ea7385110f2de9b41562f760b7c11f59273db54db3ffcf" exitCode=0 Apr 16 14:10:55.994626 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:55.994461 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs" event={"ID":"5ef18018-5b9a-4799-919f-27e187f05a2b","Type":"ContainerDied","Data":"30c9cc22e8282de260ea7385110f2de9b41562f760b7c11f59273db54db3ffcf"} Apr 16 14:10:55.994626 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:55.994484 2569 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs" Apr 16 14:10:55.994626 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:55.994496 2569 scope.go:117] "RemoveContainer" containerID="30c9cc22e8282de260ea7385110f2de9b41562f760b7c11f59273db54db3ffcf" Apr 16 14:10:55.994626 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:55.994487 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs" event={"ID":"5ef18018-5b9a-4799-919f-27e187f05a2b","Type":"ContainerDied","Data":"ca36ad2b2045cbe65cb2e9622f7c74e95ef8bdbc23ad2feee2191030839762d2"} Apr 16 14:10:56.003078 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:56.003062 2569 scope.go:117] "RemoveContainer" containerID="8aadbbbd7c5e8da8a09ba8fd8413a4773a9f8694e9abf9199ec98053fef5428a" Apr 16 14:10:56.010413 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:56.010393 2569 scope.go:117] "RemoveContainer" containerID="776de3144f63fd7eb6f9459a3d27ce62b7f194e764733ecc212a43714743ec05" Apr 16 14:10:56.016492 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:56.016424 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs"] Apr 16 14:10:56.018425 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:56.018357 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-raw-sklearn-batcher-72692-predictor-566c5d979c-7jzfs"] Apr 16 14:10:56.018490 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:56.018457 2569 scope.go:117] "RemoveContainer" containerID="30c9cc22e8282de260ea7385110f2de9b41562f760b7c11f59273db54db3ffcf" Apr 16 14:10:56.018731 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:10:56.018714 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30c9cc22e8282de260ea7385110f2de9b41562f760b7c11f59273db54db3ffcf\": container with ID starting with 30c9cc22e8282de260ea7385110f2de9b41562f760b7c11f59273db54db3ffcf not found: ID does not exist" containerID="30c9cc22e8282de260ea7385110f2de9b41562f760b7c11f59273db54db3ffcf" Apr 16 14:10:56.018784 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:56.018739 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30c9cc22e8282de260ea7385110f2de9b41562f760b7c11f59273db54db3ffcf"} err="failed to get container status \"30c9cc22e8282de260ea7385110f2de9b41562f760b7c11f59273db54db3ffcf\": rpc error: code = NotFound desc = could not find container \"30c9cc22e8282de260ea7385110f2de9b41562f760b7c11f59273db54db3ffcf\": container with ID starting with 30c9cc22e8282de260ea7385110f2de9b41562f760b7c11f59273db54db3ffcf not found: ID does not exist" Apr 16 14:10:56.018784 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:56.018758 2569 scope.go:117] "RemoveContainer" containerID="8aadbbbd7c5e8da8a09ba8fd8413a4773a9f8694e9abf9199ec98053fef5428a" Apr 16 14:10:56.019012 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:10:56.018996 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8aadbbbd7c5e8da8a09ba8fd8413a4773a9f8694e9abf9199ec98053fef5428a\": container with ID starting with 8aadbbbd7c5e8da8a09ba8fd8413a4773a9f8694e9abf9199ec98053fef5428a not found: ID does not exist" containerID="8aadbbbd7c5e8da8a09ba8fd8413a4773a9f8694e9abf9199ec98053fef5428a" Apr 16 14:10:56.019051 ip-10-0-129-84 kubenswrapper[2569]: I0416 
14:10:56.019017 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8aadbbbd7c5e8da8a09ba8fd8413a4773a9f8694e9abf9199ec98053fef5428a"} err="failed to get container status \"8aadbbbd7c5e8da8a09ba8fd8413a4773a9f8694e9abf9199ec98053fef5428a\": rpc error: code = NotFound desc = could not find container \"8aadbbbd7c5e8da8a09ba8fd8413a4773a9f8694e9abf9199ec98053fef5428a\": container with ID starting with 8aadbbbd7c5e8da8a09ba8fd8413a4773a9f8694e9abf9199ec98053fef5428a not found: ID does not exist" Apr 16 14:10:56.019051 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:56.019033 2569 scope.go:117] "RemoveContainer" containerID="776de3144f63fd7eb6f9459a3d27ce62b7f194e764733ecc212a43714743ec05" Apr 16 14:10:56.019283 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:10:56.019262 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"776de3144f63fd7eb6f9459a3d27ce62b7f194e764733ecc212a43714743ec05\": container with ID starting with 776de3144f63fd7eb6f9459a3d27ce62b7f194e764733ecc212a43714743ec05 not found: ID does not exist" containerID="776de3144f63fd7eb6f9459a3d27ce62b7f194e764733ecc212a43714743ec05" Apr 16 14:10:56.019368 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:56.019290 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"776de3144f63fd7eb6f9459a3d27ce62b7f194e764733ecc212a43714743ec05"} err="failed to get container status \"776de3144f63fd7eb6f9459a3d27ce62b7f194e764733ecc212a43714743ec05\": rpc error: code = NotFound desc = could not find container \"776de3144f63fd7eb6f9459a3d27ce62b7f194e764733ecc212a43714743ec05\": container with ID starting with 776de3144f63fd7eb6f9459a3d27ce62b7f194e764733ecc212a43714743ec05 not found: ID does not exist" Apr 16 14:10:56.602150 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:10:56.602117 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ef18018-5b9a-4799-919f-27e187f05a2b" path="/var/lib/kubelet/pods/5ef18018-5b9a-4799-919f-27e187f05a2b/volumes" Apr 16 14:11:00.975723 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:11:00.975676 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-3af68-predictor-55989b6587-vx9j7" podUID="6c1cd6f5-74a2-47b0-b431-864185372950" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.36:8080: connect: connection refused" Apr 16 14:11:01.903929 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:11:01.903893 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-3af68-predictor-ccc9568f9-g6qhq" podUID="109a7777-da00-4d77-911a-aac714533f34" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.35:8080: connect: connection refused" Apr 16 14:11:10.975395 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:11:10.975353 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-3af68-predictor-55989b6587-vx9j7" podUID="6c1cd6f5-74a2-47b0-b431-864185372950" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.36:8080: connect: connection refused" Apr 16 14:11:11.904506 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:11:11.904470 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-3af68-predictor-ccc9568f9-g6qhq" podUID="109a7777-da00-4d77-911a-aac714533f34" containerName="kserve-container" 
probeResult="failure" output="dial tcp 10.132.0.35:8080: connect: connection refused" Apr 16 14:11:20.976159 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:11:20.976112 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-3af68-predictor-55989b6587-vx9j7" podUID="6c1cd6f5-74a2-47b0-b431-864185372950" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.36:8080: connect: connection refused" Apr 16 14:11:21.903571 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:11:21.903527 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-3af68-predictor-ccc9568f9-g6qhq" podUID="109a7777-da00-4d77-911a-aac714533f34" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.35:8080: connect: connection refused" Apr 16 14:11:30.976182 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:11:30.976139 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-3af68-predictor-55989b6587-vx9j7" podUID="6c1cd6f5-74a2-47b0-b431-864185372950" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.36:8080: connect: connection refused" Apr 16 14:11:31.904426 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:11:31.904384 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-3af68-predictor-ccc9568f9-g6qhq" podUID="109a7777-da00-4d77-911a-aac714533f34" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.35:8080: connect: connection refused" Apr 16 14:11:40.602752 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:11:40.602724 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-3af68-predictor-ccc9568f9-g6qhq" Apr 16 14:11:40.975972 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:11:40.975924 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-3af68-predictor-55989b6587-vx9j7" podUID="6c1cd6f5-74a2-47b0-b431-864185372950" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.36:8080: connect: connection refused" Apr 16 14:11:50.977101 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:11:50.977074 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-3af68-predictor-55989b6587-vx9j7" Apr 16 14:12:05.122300 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:05.122196 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-3af68-predictor-ccc9568f9-g6qhq"] Apr 16 14:12:05.122831 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:05.122638 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-3af68-predictor-ccc9568f9-g6qhq" podUID="109a7777-da00-4d77-911a-aac714533f34" containerName="kserve-container" containerID="cri-o://4280f93f976fe42134d980c0d4e8d67e311126dd467917e200a02b5860497917" gracePeriod=30 Apr 16 14:12:05.161695 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:05.161659 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-3c9de-predictor-5c5b76c459-d7ck5"] Apr 16 14:12:05.162317 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:05.162293 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5ef18018-5b9a-4799-919f-27e187f05a2b" containerName="agent" 
Apr 16 14:12:05.162317 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:05.162314 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ef18018-5b9a-4799-919f-27e187f05a2b" containerName="agent"
Apr 16 14:12:05.162460 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:05.162336 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5ef18018-5b9a-4799-919f-27e187f05a2b" containerName="kserve-container"
Apr 16 14:12:05.162460 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:05.162345 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ef18018-5b9a-4799-919f-27e187f05a2b" containerName="kserve-container"
Apr 16 14:12:05.162460 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:05.162362 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5ef18018-5b9a-4799-919f-27e187f05a2b" containerName="storage-initializer"
Apr 16 14:12:05.162460 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:05.162371 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ef18018-5b9a-4799-919f-27e187f05a2b" containerName="storage-initializer"
Apr 16 14:12:05.162577 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:05.162470 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="5ef18018-5b9a-4799-919f-27e187f05a2b" containerName="kserve-container"
Apr 16 14:12:05.162577 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:05.162486 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="5ef18018-5b9a-4799-919f-27e187f05a2b" containerName="agent"
Apr 16 14:12:05.166222 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:05.166203 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-3c9de-predictor-5c5b76c459-d7ck5"
Apr 16 14:12:05.174427 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:05.174371 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-3c9de-predictor-5c5b76c459-d7ck5"]
Apr 16 14:12:05.210336 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:05.210305 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-3c9de-predictor-6948979d59-pfb2s"]
Apr 16 14:12:05.213837 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:05.213814 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-3c9de-predictor-6948979d59-pfb2s"
Apr 16 14:12:05.220459 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:05.220437 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-3c9de-predictor-6948979d59-pfb2s"]
Apr 16 14:12:05.290304 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:05.290272 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/5826bce5-daea-4687-a04f-6ed05214e98d-kserve-provision-location\") pod \"isvc-sklearn-graph-raw-hpa-3c9de-predictor-5c5b76c459-d7ck5\" (UID: \"5826bce5-daea-4687-a04f-6ed05214e98d\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-3c9de-predictor-5c5b76c459-d7ck5"
Apr 16 14:12:05.290500 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:05.290310 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/5e8d959e-e715-418f-9a13-3ad1a3dd6954-kserve-provision-location\") pod \"isvc-xgboost-graph-raw-hpa-3c9de-predictor-6948979d59-pfb2s\" (UID: \"5e8d959e-e715-418f-9a13-3ad1a3dd6954\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-3c9de-predictor-6948979d59-pfb2s"
Apr 16 14:12:05.306201 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:05.306170 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-3af68-predictor-55989b6587-vx9j7"]
Apr 16 14:12:05.306485 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:05.306456 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-3af68-predictor-55989b6587-vx9j7" podUID="6c1cd6f5-74a2-47b0-b431-864185372950" containerName="kserve-container" containerID="cri-o://b8459bb8f444c45eb5ff996b92176645e8ab58324e610b27c7172471095b26ed" gracePeriod=30
Apr 16 14:12:05.391845 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:05.391814 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/5826bce5-daea-4687-a04f-6ed05214e98d-kserve-provision-location\") pod \"isvc-sklearn-graph-raw-hpa-3c9de-predictor-5c5b76c459-d7ck5\" (UID: \"5826bce5-daea-4687-a04f-6ed05214e98d\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-3c9de-predictor-5c5b76c459-d7ck5"
Apr 16 14:12:05.391994 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:05.391855 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/5e8d959e-e715-418f-9a13-3ad1a3dd6954-kserve-provision-location\") pod \"isvc-xgboost-graph-raw-hpa-3c9de-predictor-6948979d59-pfb2s\" (UID: \"5e8d959e-e715-418f-9a13-3ad1a3dd6954\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-3c9de-predictor-6948979d59-pfb2s"
Apr 16 14:12:05.392317 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:05.392295 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/5e8d959e-e715-418f-9a13-3ad1a3dd6954-kserve-provision-location\") pod \"isvc-xgboost-graph-raw-hpa-3c9de-predictor-6948979d59-pfb2s\" (UID: \"5e8d959e-e715-418f-9a13-3ad1a3dd6954\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-3c9de-predictor-6948979d59-pfb2s"
Apr 16 14:12:05.392360 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:05.392295 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/5826bce5-daea-4687-a04f-6ed05214e98d-kserve-provision-location\") pod \"isvc-sklearn-graph-raw-hpa-3c9de-predictor-5c5b76c459-d7ck5\" (UID: \"5826bce5-daea-4687-a04f-6ed05214e98d\") " pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-3c9de-predictor-5c5b76c459-d7ck5"
Apr 16 14:12:05.477944 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:05.477912 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-3c9de-predictor-5c5b76c459-d7ck5"
Apr 16 14:12:05.525955 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:05.525750 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-3c9de-predictor-6948979d59-pfb2s"
Apr 16 14:12:05.606896 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:05.606874 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-3c9de-predictor-5c5b76c459-d7ck5"]
Apr 16 14:12:05.610086 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:12:05.610058 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5826bce5_daea_4687_a04f_6ed05214e98d.slice/crio-010161f69904620cabbde2814dd73bd5139ee9047f86568e163ed78e319b917b WatchSource:0}: Error finding container 010161f69904620cabbde2814dd73bd5139ee9047f86568e163ed78e319b917b: Status 404 returned error can't find the container with id 010161f69904620cabbde2814dd73bd5139ee9047f86568e163ed78e319b917b
Apr 16 14:12:05.659504 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:05.659481 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-3c9de-predictor-6948979d59-pfb2s"]
Apr 16 14:12:05.661132 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:12:05.661031 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e8d959e_e715_418f_9a13_3ad1a3dd6954.slice/crio-74d8e3a35fb1cfb1cbaceca45c2839955e437a815631a54163cc2fcee57b2675 WatchSource:0}: Error finding container 74d8e3a35fb1cfb1cbaceca45c2839955e437a815631a54163cc2fcee57b2675: Status 404 returned error can't find the container with id 74d8e3a35fb1cfb1cbaceca45c2839955e437a815631a54163cc2fcee57b2675
Apr 16 14:12:06.238466 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:06.238432 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-3c9de-predictor-6948979d59-pfb2s" event={"ID":"5e8d959e-e715-418f-9a13-3ad1a3dd6954","Type":"ContainerStarted","Data":"0b5903b38d40b64cf5621c112336ae2910259abcf267be316f1a4cc4b9c5c45e"}
Apr 16 14:12:06.238466 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:06.238467 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-3c9de-predictor-6948979d59-pfb2s" event={"ID":"5e8d959e-e715-418f-9a13-3ad1a3dd6954","Type":"ContainerStarted","Data":"74d8e3a35fb1cfb1cbaceca45c2839955e437a815631a54163cc2fcee57b2675"}
Apr 16 14:12:06.239730 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:06.239706 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-3c9de-predictor-5c5b76c459-d7ck5" event={"ID":"5826bce5-daea-4687-a04f-6ed05214e98d","Type":"ContainerStarted","Data":"113483835b3544df02d6841982146b7780f8bf50dcc5565d16c786261265dcfe"}
Apr 16 14:12:06.239834 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:06.239737 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-3c9de-predictor-5c5b76c459-d7ck5" event={"ID":"5826bce5-daea-4687-a04f-6ed05214e98d","Type":"ContainerStarted","Data":"010161f69904620cabbde2814dd73bd5139ee9047f86568e163ed78e319b917b"}
Apr 16 14:12:09.051960 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:09.051933 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-3af68-predictor-55989b6587-vx9j7"
Apr 16 14:12:09.122909 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:09.122820 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/6c1cd6f5-74a2-47b0-b431-864185372950-kserve-provision-location\") pod \"6c1cd6f5-74a2-47b0-b431-864185372950\" (UID: \"6c1cd6f5-74a2-47b0-b431-864185372950\") "
Apr 16 14:12:09.123275 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:09.123220 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c1cd6f5-74a2-47b0-b431-864185372950-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "6c1cd6f5-74a2-47b0-b431-864185372950" (UID: "6c1cd6f5-74a2-47b0-b431-864185372950"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Apr 16 14:12:09.223777 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:09.223744 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/6c1cd6f5-74a2-47b0-b431-864185372950-kserve-provision-location\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\""
Apr 16 14:12:09.253708 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:09.253677 2569 generic.go:358] "Generic (PLEG): container finished" podID="6c1cd6f5-74a2-47b0-b431-864185372950" containerID="b8459bb8f444c45eb5ff996b92176645e8ab58324e610b27c7172471095b26ed" exitCode=0
Apr 16 14:12:09.253857 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:09.253835 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-3af68-predictor-55989b6587-vx9j7" event={"ID":"6c1cd6f5-74a2-47b0-b431-864185372950","Type":"ContainerDied","Data":"b8459bb8f444c45eb5ff996b92176645e8ab58324e610b27c7172471095b26ed"}
Apr 16 14:12:09.253920 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:09.253873 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-3af68-predictor-55989b6587-vx9j7" event={"ID":"6c1cd6f5-74a2-47b0-b431-864185372950","Type":"ContainerDied","Data":"a96941ab70b8d311b1eb0b190f8d051dcf9bfc479397068acf3cb07fc1bddab5"}
Apr 16 14:12:09.253920 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:09.253887 2569 scope.go:117] "RemoveContainer" containerID="b8459bb8f444c45eb5ff996b92176645e8ab58324e610b27c7172471095b26ed"
Apr 16 14:12:09.253920 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:09.253845 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-3af68-predictor-55989b6587-vx9j7"
Apr 16 14:12:09.255316 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:09.255292 2569 generic.go:358] "Generic (PLEG): container finished" podID="5826bce5-daea-4687-a04f-6ed05214e98d" containerID="113483835b3544df02d6841982146b7780f8bf50dcc5565d16c786261265dcfe" exitCode=0
Apr 16 14:12:09.255408 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:09.255334 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-3c9de-predictor-5c5b76c459-d7ck5" event={"ID":"5826bce5-daea-4687-a04f-6ed05214e98d","Type":"ContainerDied","Data":"113483835b3544df02d6841982146b7780f8bf50dcc5565d16c786261265dcfe"}
Apr 16 14:12:09.262401 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:09.262364 2569 scope.go:117] "RemoveContainer" containerID="0e5c8ff39cf1a4e06e93a3852f5e68ac14ca7c394dc1110428dd62386db10983"
Apr 16 14:12:09.269709 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:09.269692 2569 scope.go:117] "RemoveContainer" containerID="b8459bb8f444c45eb5ff996b92176645e8ab58324e610b27c7172471095b26ed"
Apr 16 14:12:09.269926 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:12:09.269908 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8459bb8f444c45eb5ff996b92176645e8ab58324e610b27c7172471095b26ed\": container with ID starting with b8459bb8f444c45eb5ff996b92176645e8ab58324e610b27c7172471095b26ed not found: ID does not exist" containerID="b8459bb8f444c45eb5ff996b92176645e8ab58324e610b27c7172471095b26ed"
Apr 16 14:12:09.269977 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:09.269933 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8459bb8f444c45eb5ff996b92176645e8ab58324e610b27c7172471095b26ed"} err="failed to get container status \"b8459bb8f444c45eb5ff996b92176645e8ab58324e610b27c7172471095b26ed\": rpc error: code = NotFound desc = could not find container \"b8459bb8f444c45eb5ff996b92176645e8ab58324e610b27c7172471095b26ed\": container with ID starting with b8459bb8f444c45eb5ff996b92176645e8ab58324e610b27c7172471095b26ed not found: ID does not exist"
Apr 16 14:12:09.269977 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:09.269949 2569 scope.go:117] "RemoveContainer" containerID="0e5c8ff39cf1a4e06e93a3852f5e68ac14ca7c394dc1110428dd62386db10983"
Apr 16 14:12:09.270129 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:12:09.270115 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e5c8ff39cf1a4e06e93a3852f5e68ac14ca7c394dc1110428dd62386db10983\": container with ID starting with 0e5c8ff39cf1a4e06e93a3852f5e68ac14ca7c394dc1110428dd62386db10983 not found: ID does not exist" containerID="0e5c8ff39cf1a4e06e93a3852f5e68ac14ca7c394dc1110428dd62386db10983"
Apr 16 14:12:09.270173 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:09.270134 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e5c8ff39cf1a4e06e93a3852f5e68ac14ca7c394dc1110428dd62386db10983"} err="failed to get container status \"0e5c8ff39cf1a4e06e93a3852f5e68ac14ca7c394dc1110428dd62386db10983\": rpc error: code = NotFound desc = could not find container \"0e5c8ff39cf1a4e06e93a3852f5e68ac14ca7c394dc1110428dd62386db10983\": container with ID starting with 0e5c8ff39cf1a4e06e93a3852f5e68ac14ca7c394dc1110428dd62386db10983 not found: ID does not exist"
Apr 16 14:12:09.285589 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:09.285558 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-3af68-predictor-55989b6587-vx9j7"]
Apr 16 14:12:09.288618 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:09.288598 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-3af68-predictor-55989b6587-vx9j7"]
Apr 16 14:12:09.562646 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:09.562622 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-3af68-predictor-ccc9568f9-g6qhq"
Apr 16 14:12:09.627091 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:09.627062 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/109a7777-da00-4d77-911a-aac714533f34-kserve-provision-location\") pod \"109a7777-da00-4d77-911a-aac714533f34\" (UID: \"109a7777-da00-4d77-911a-aac714533f34\") "
Apr 16 14:12:09.627403 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:09.627382 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/109a7777-da00-4d77-911a-aac714533f34-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "109a7777-da00-4d77-911a-aac714533f34" (UID: "109a7777-da00-4d77-911a-aac714533f34"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Apr 16 14:12:09.728350 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:09.728272 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/109a7777-da00-4d77-911a-aac714533f34-kserve-provision-location\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\""
Apr 16 14:12:10.261937 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:10.261904 2569 generic.go:358] "Generic (PLEG): container finished" podID="5e8d959e-e715-418f-9a13-3ad1a3dd6954" containerID="0b5903b38d40b64cf5621c112336ae2910259abcf267be316f1a4cc4b9c5c45e" exitCode=0
Apr 16 14:12:10.262368 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:10.261978 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-3c9de-predictor-6948979d59-pfb2s" event={"ID":"5e8d959e-e715-418f-9a13-3ad1a3dd6954","Type":"ContainerDied","Data":"0b5903b38d40b64cf5621c112336ae2910259abcf267be316f1a4cc4b9c5c45e"}
Apr 16 14:12:10.263349 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:10.263327 2569 generic.go:358] "Generic (PLEG): container finished" podID="109a7777-da00-4d77-911a-aac714533f34" containerID="4280f93f976fe42134d980c0d4e8d67e311126dd467917e200a02b5860497917" exitCode=0
Apr 16 14:12:10.263475 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:10.263459 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-3af68-predictor-ccc9568f9-g6qhq"
Apr 16 14:12:10.263536 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:10.263474 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-3af68-predictor-ccc9568f9-g6qhq" event={"ID":"109a7777-da00-4d77-911a-aac714533f34","Type":"ContainerDied","Data":"4280f93f976fe42134d980c0d4e8d67e311126dd467917e200a02b5860497917"}
Apr 16 14:12:10.263536 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:10.263512 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-3af68-predictor-ccc9568f9-g6qhq" event={"ID":"109a7777-da00-4d77-911a-aac714533f34","Type":"ContainerDied","Data":"770bcef0085909d171b73217a4bc81154add6f7e61bb17bd6639cf41063df31b"}
Apr 16 14:12:10.263628 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:10.263535 2569 scope.go:117] "RemoveContainer" containerID="4280f93f976fe42134d980c0d4e8d67e311126dd467917e200a02b5860497917"
Apr 16 14:12:10.265196 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:10.265176 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-3c9de-predictor-5c5b76c459-d7ck5" event={"ID":"5826bce5-daea-4687-a04f-6ed05214e98d","Type":"ContainerStarted","Data":"3ab1975a3e37856283fe43bd5c0fd0ef2b0840503c65e19671cf65208d792128"}
Apr 16 14:12:10.265482 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:10.265452 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-3c9de-predictor-5c5b76c459-d7ck5"
Apr 16 14:12:10.267006 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:10.266982 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-3c9de-predictor-5c5b76c459-d7ck5" podUID="5826bce5-daea-4687-a04f-6ed05214e98d" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.37:8080: connect: connection refused"
Apr 16 14:12:10.272787 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:10.272766 2569 scope.go:117] "RemoveContainer" containerID="7413bb866520aeac529c5b445610ae8f22b0c27e25863b28a0adc14ba56ecd06"
Apr 16 14:12:10.280881 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:10.280865 2569 scope.go:117] "RemoveContainer" containerID="4280f93f976fe42134d980c0d4e8d67e311126dd467917e200a02b5860497917"
Apr 16 14:12:10.281134 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:12:10.281116 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4280f93f976fe42134d980c0d4e8d67e311126dd467917e200a02b5860497917\": container with ID starting with 4280f93f976fe42134d980c0d4e8d67e311126dd467917e200a02b5860497917 not found: ID does not exist" containerID="4280f93f976fe42134d980c0d4e8d67e311126dd467917e200a02b5860497917"
Apr 16 14:12:10.281197 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:10.281147 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4280f93f976fe42134d980c0d4e8d67e311126dd467917e200a02b5860497917"} err="failed to get container status \"4280f93f976fe42134d980c0d4e8d67e311126dd467917e200a02b5860497917\": rpc error: code = NotFound desc = could not find container \"4280f93f976fe42134d980c0d4e8d67e311126dd467917e200a02b5860497917\": container with ID starting with 4280f93f976fe42134d980c0d4e8d67e311126dd467917e200a02b5860497917 not found: ID does not exist"
Apr 16 14:12:10.281197 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:10.281170 2569 scope.go:117] "RemoveContainer" containerID="7413bb866520aeac529c5b445610ae8f22b0c27e25863b28a0adc14ba56ecd06"
Apr 16 14:12:10.281439 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:12:10.281415 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7413bb866520aeac529c5b445610ae8f22b0c27e25863b28a0adc14ba56ecd06\": container with ID starting with 7413bb866520aeac529c5b445610ae8f22b0c27e25863b28a0adc14ba56ecd06 not found: ID does not exist" containerID="7413bb866520aeac529c5b445610ae8f22b0c27e25863b28a0adc14ba56ecd06"
Apr 16 14:12:10.281531 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:10.281440 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7413bb866520aeac529c5b445610ae8f22b0c27e25863b28a0adc14ba56ecd06"} err="failed to get container status \"7413bb866520aeac529c5b445610ae8f22b0c27e25863b28a0adc14ba56ecd06\": rpc error: code = NotFound desc = could not find container \"7413bb866520aeac529c5b445610ae8f22b0c27e25863b28a0adc14ba56ecd06\": container with ID starting with 7413bb866520aeac529c5b445610ae8f22b0c27e25863b28a0adc14ba56ecd06 not found: ID does not exist"
Apr 16 14:12:10.289164 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:10.289140 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-3af68-predictor-ccc9568f9-g6qhq"]
Apr 16 14:12:10.293059 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:10.293039 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-3af68-predictor-ccc9568f9-g6qhq"]
Apr 16 14:12:10.304061 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:10.304023 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-3c9de-predictor-5c5b76c459-d7ck5" podStartSLOduration=5.304005428 podStartE2EDuration="5.304005428s" podCreationTimestamp="2026-04-16 14:12:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 14:12:10.303062289 +0000 UTC m=+804.324350038" watchObservedRunningTime="2026-04-16 14:12:10.304005428 +0000 UTC m=+804.325293177"
Apr 16 14:12:10.603612 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:10.603540 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="109a7777-da00-4d77-911a-aac714533f34" path="/var/lib/kubelet/pods/109a7777-da00-4d77-911a-aac714533f34/volumes"
Apr 16 14:12:10.603908 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:10.603896 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c1cd6f5-74a2-47b0-b431-864185372950" path="/var/lib/kubelet/pods/6c1cd6f5-74a2-47b0-b431-864185372950/volumes"
Apr 16 14:12:11.270684 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:11.270649 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-3c9de-predictor-6948979d59-pfb2s" event={"ID":"5e8d959e-e715-418f-9a13-3ad1a3dd6954","Type":"ContainerStarted","Data":"7a28896820f84b7504f2ef7eb193304cc30d904b8ad14ba42cbc2f80a30b9052"}
Apr 16 14:12:11.271151 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:11.270923 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-3c9de-predictor-6948979d59-pfb2s"
Apr 16 14:12:11.271935 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:11.271909 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-3c9de-predictor-5c5b76c459-d7ck5" podUID="5826bce5-daea-4687-a04f-6ed05214e98d" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.37:8080: connect: connection refused"
Apr 16 14:12:11.272396 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:11.272372 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-3c9de-predictor-6948979d59-pfb2s" podUID="5e8d959e-e715-418f-9a13-3ad1a3dd6954" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.38:8080: connect: connection refused"
Apr 16 14:12:11.287437 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:11.287401 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-3c9de-predictor-6948979d59-pfb2s" podStartSLOduration=6.287390968 podStartE2EDuration="6.287390968s" podCreationTimestamp="2026-04-16 14:12:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 14:12:11.285331739 +0000 UTC m=+805.306619487" watchObservedRunningTime="2026-04-16 14:12:11.287390968 +0000 UTC m=+805.308678757"
Apr 16 14:12:12.279118 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:12.279080 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-3c9de-predictor-6948979d59-pfb2s" podUID="5e8d959e-e715-418f-9a13-3ad1a3dd6954" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.38:8080: connect: connection refused"
Apr 16 14:12:21.272322 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:21.272276 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-3c9de-predictor-5c5b76c459-d7ck5" podUID="5826bce5-daea-4687-a04f-6ed05214e98d" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.37:8080: connect: connection refused"
Apr 16 14:12:22.279510 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:22.279471 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-3c9de-predictor-6948979d59-pfb2s" podUID="5e8d959e-e715-418f-9a13-3ad1a3dd6954" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.38:8080: connect: connection refused"
Apr 16 14:12:31.271964 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:31.271921 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-3c9de-predictor-5c5b76c459-d7ck5" podUID="5826bce5-daea-4687-a04f-6ed05214e98d" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.37:8080: connect: connection refused"
Apr 16 14:12:32.279379 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:32.279332 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-3c9de-predictor-6948979d59-pfb2s" podUID="5e8d959e-e715-418f-9a13-3ad1a3dd6954" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.38:8080: connect: connection refused"
Apr 16 14:12:41.272550 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:41.272507 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-3c9de-predictor-5c5b76c459-d7ck5" podUID="5826bce5-daea-4687-a04f-6ed05214e98d" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.37:8080: connect: connection refused"
Apr 16 14:12:42.279172 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:42.279132 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-3c9de-predictor-6948979d59-pfb2s" podUID="5e8d959e-e715-418f-9a13-3ad1a3dd6954" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.38:8080: connect: connection refused"
Apr 16 14:12:51.272134 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:51.272092 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-3c9de-predictor-5c5b76c459-d7ck5" podUID="5826bce5-daea-4687-a04f-6ed05214e98d" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.37:8080: connect: connection refused"
Apr 16 14:12:52.279334 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:12:52.279290 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-3c9de-predictor-6948979d59-pfb2s" podUID="5e8d959e-e715-418f-9a13-3ad1a3dd6954" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.38:8080: connect: connection refused"
Apr 16 14:13:01.272963 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:01.272921 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-3c9de-predictor-5c5b76c459-d7ck5" podUID="5826bce5-daea-4687-a04f-6ed05214e98d" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.37:8080: connect: connection refused"
Apr 16 14:13:02.279608 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:02.279569 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-3c9de-predictor-6948979d59-pfb2s" podUID="5e8d959e-e715-418f-9a13-3ad1a3dd6954" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.38:8080: connect: connection refused"
Apr 16 14:13:11.272688 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:11.272646 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-3c9de-predictor-5c5b76c459-d7ck5" podUID="5826bce5-daea-4687-a04f-6ed05214e98d" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.37:8080: connect: connection refused"
Apr 16 14:13:12.280092 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:12.280061 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-3c9de-predictor-6948979d59-pfb2s"
Apr 16 14:13:15.598668 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:15.598633 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-3c9de-predictor-5c5b76c459-d7ck5"
Apr 16 14:13:45.411523 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:45.411484 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-3c9de-predictor-5c5b76c459-d7ck5"]
Apr 16 14:13:45.411986 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:45.411830 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-3c9de-predictor-5c5b76c459-d7ck5" podUID="5826bce5-daea-4687-a04f-6ed05214e98d" containerName="kserve-container" containerID="cri-o://3ab1975a3e37856283fe43bd5c0fd0ef2b0840503c65e19671cf65208d792128" gracePeriod=30
Apr 16 14:13:45.508343 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:45.508308 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-3c9de-predictor-6948979d59-pfb2s"]
Apr 16 14:13:45.508593 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:45.508571 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-3c9de-predictor-6948979d59-pfb2s" podUID="5e8d959e-e715-418f-9a13-3ad1a3dd6954" containerName="kserve-container" containerID="cri-o://7a28896820f84b7504f2ef7eb193304cc30d904b8ad14ba42cbc2f80a30b9052" gracePeriod=30
Apr 16 14:13:45.598575 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:45.598532 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-3c9de-predictor-5c5b76c459-d7ck5" podUID="5826bce5-daea-4687-a04f-6ed05214e98d" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.37:8080: connect: connection refused"
Apr 16 14:13:46.554375 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:46.554338 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-d87b8d5fc-27q26_5057171c-9c0f-4741-b8ce-987c40eb447d/console-operator/2.log"
Apr 16 14:13:46.554795 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:46.554774 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-d87b8d5fc-27q26_5057171c-9c0f-4741-b8ce-987c40eb447d/console-operator/2.log"
Apr 16 14:13:46.560201 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:46.560184 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ljx7q_7e5f1228-5703-456a-a909-558205e02bfc/ovn-acl-logging/0.log"
Apr 16 14:13:46.561086 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:46.561072 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ljx7q_7e5f1228-5703-456a-a909-558205e02bfc/ovn-acl-logging/0.log"
Apr 16 14:13:48.953386 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:48.953363 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-3c9de-predictor-6948979d59-pfb2s"
Apr 16 14:13:49.079117 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:49.079032 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/5e8d959e-e715-418f-9a13-3ad1a3dd6954-kserve-provision-location\") pod \"5e8d959e-e715-418f-9a13-3ad1a3dd6954\" (UID: \"5e8d959e-e715-418f-9a13-3ad1a3dd6954\") "
Apr 16 14:13:49.079377 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:49.079355 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e8d959e-e715-418f-9a13-3ad1a3dd6954-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "5e8d959e-e715-418f-9a13-3ad1a3dd6954" (UID: "5e8d959e-e715-418f-9a13-3ad1a3dd6954"). InnerVolumeSpecName "kserve-provision-location".
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 14:13:49.179971 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:49.179934 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/5e8d959e-e715-418f-9a13-3ad1a3dd6954-kserve-provision-location\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\"" Apr 16 14:13:49.560078 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:49.560058 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-3c9de-predictor-5c5b76c459-d7ck5" Apr 16 14:13:49.621060 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:49.620975 2569 generic.go:358] "Generic (PLEG): container finished" podID="5826bce5-daea-4687-a04f-6ed05214e98d" containerID="3ab1975a3e37856283fe43bd5c0fd0ef2b0840503c65e19671cf65208d792128" exitCode=0 Apr 16 14:13:49.621060 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:49.621047 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-3c9de-predictor-5c5b76c459-d7ck5" Apr 16 14:13:49.621297 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:49.621057 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-3c9de-predictor-5c5b76c459-d7ck5" event={"ID":"5826bce5-daea-4687-a04f-6ed05214e98d","Type":"ContainerDied","Data":"3ab1975a3e37856283fe43bd5c0fd0ef2b0840503c65e19671cf65208d792128"} Apr 16 14:13:49.621297 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:49.621097 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-3c9de-predictor-5c5b76c459-d7ck5" event={"ID":"5826bce5-daea-4687-a04f-6ed05214e98d","Type":"ContainerDied","Data":"010161f69904620cabbde2814dd73bd5139ee9047f86568e163ed78e319b917b"} Apr 16 14:13:49.621297 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:49.621116 2569 scope.go:117] "RemoveContainer" containerID="3ab1975a3e37856283fe43bd5c0fd0ef2b0840503c65e19671cf65208d792128" Apr 16 14:13:49.622423 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:49.622402 2569 generic.go:358] "Generic (PLEG): container finished" podID="5e8d959e-e715-418f-9a13-3ad1a3dd6954" containerID="7a28896820f84b7504f2ef7eb193304cc30d904b8ad14ba42cbc2f80a30b9052" exitCode=0 Apr 16 14:13:49.622531 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:49.622446 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-3c9de-predictor-6948979d59-pfb2s" event={"ID":"5e8d959e-e715-418f-9a13-3ad1a3dd6954","Type":"ContainerDied","Data":"7a28896820f84b7504f2ef7eb193304cc30d904b8ad14ba42cbc2f80a30b9052"} Apr 16 14:13:49.622531 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:49.622463 2569 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-3c9de-predictor-6948979d59-pfb2s" Apr 16 14:13:49.622531 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:49.622468 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-3c9de-predictor-6948979d59-pfb2s" event={"ID":"5e8d959e-e715-418f-9a13-3ad1a3dd6954","Type":"ContainerDied","Data":"74d8e3a35fb1cfb1cbaceca45c2839955e437a815631a54163cc2fcee57b2675"} Apr 16 14:13:49.629544 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:49.629522 2569 scope.go:117] "RemoveContainer" containerID="113483835b3544df02d6841982146b7780f8bf50dcc5565d16c786261265dcfe" Apr 16 14:13:49.637038 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:49.637019 2569 scope.go:117] "RemoveContainer" containerID="3ab1975a3e37856283fe43bd5c0fd0ef2b0840503c65e19671cf65208d792128" Apr 16 14:13:49.637298 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:13:49.637278 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ab1975a3e37856283fe43bd5c0fd0ef2b0840503c65e19671cf65208d792128\": container with ID starting with 3ab1975a3e37856283fe43bd5c0fd0ef2b0840503c65e19671cf65208d792128 not found: ID does not exist" containerID="3ab1975a3e37856283fe43bd5c0fd0ef2b0840503c65e19671cf65208d792128" Apr 16 14:13:49.637346 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:49.637309 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ab1975a3e37856283fe43bd5c0fd0ef2b0840503c65e19671cf65208d792128"} err="failed to get container status \"3ab1975a3e37856283fe43bd5c0fd0ef2b0840503c65e19671cf65208d792128\": rpc error: code = NotFound desc = could not find container \"3ab1975a3e37856283fe43bd5c0fd0ef2b0840503c65e19671cf65208d792128\": container with ID starting with 3ab1975a3e37856283fe43bd5c0fd0ef2b0840503c65e19671cf65208d792128 not found: ID does not exist" Apr 16 14:13:49.637346 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:49.637327 2569 scope.go:117] "RemoveContainer" containerID="113483835b3544df02d6841982146b7780f8bf50dcc5565d16c786261265dcfe" Apr 16 14:13:49.637548 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:13:49.637531 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"113483835b3544df02d6841982146b7780f8bf50dcc5565d16c786261265dcfe\": container with ID starting with 113483835b3544df02d6841982146b7780f8bf50dcc5565d16c786261265dcfe not found: ID does not exist" containerID="113483835b3544df02d6841982146b7780f8bf50dcc5565d16c786261265dcfe" Apr 16 14:13:49.637583 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:49.637556 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"113483835b3544df02d6841982146b7780f8bf50dcc5565d16c786261265dcfe"} err="failed to get container status \"113483835b3544df02d6841982146b7780f8bf50dcc5565d16c786261265dcfe\": rpc error: code = NotFound desc = could not find container \"113483835b3544df02d6841982146b7780f8bf50dcc5565d16c786261265dcfe\": container with ID starting with 113483835b3544df02d6841982146b7780f8bf50dcc5565d16c786261265dcfe not found: ID does not exist" Apr 16 14:13:49.637583 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:49.637573 2569 scope.go:117] "RemoveContainer" containerID="7a28896820f84b7504f2ef7eb193304cc30d904b8ad14ba42cbc2f80a30b9052" Apr 16 14:13:49.642497 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:49.642477 2569 
kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-3c9de-predictor-6948979d59-pfb2s"] Apr 16 14:13:49.646648 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:49.646425 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-3c9de-predictor-6948979d59-pfb2s"] Apr 16 14:13:49.647909 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:49.647880 2569 scope.go:117] "RemoveContainer" containerID="0b5903b38d40b64cf5621c112336ae2910259abcf267be316f1a4cc4b9c5c45e" Apr 16 14:13:49.655109 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:49.655094 2569 scope.go:117] "RemoveContainer" containerID="7a28896820f84b7504f2ef7eb193304cc30d904b8ad14ba42cbc2f80a30b9052" Apr 16 14:13:49.655367 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:13:49.655350 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a28896820f84b7504f2ef7eb193304cc30d904b8ad14ba42cbc2f80a30b9052\": container with ID starting with 7a28896820f84b7504f2ef7eb193304cc30d904b8ad14ba42cbc2f80a30b9052 not found: ID does not exist" containerID="7a28896820f84b7504f2ef7eb193304cc30d904b8ad14ba42cbc2f80a30b9052" Apr 16 14:13:49.655416 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:49.655375 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a28896820f84b7504f2ef7eb193304cc30d904b8ad14ba42cbc2f80a30b9052"} err="failed to get container status \"7a28896820f84b7504f2ef7eb193304cc30d904b8ad14ba42cbc2f80a30b9052\": rpc error: code = NotFound desc = could not find container \"7a28896820f84b7504f2ef7eb193304cc30d904b8ad14ba42cbc2f80a30b9052\": container with ID starting with 7a28896820f84b7504f2ef7eb193304cc30d904b8ad14ba42cbc2f80a30b9052 not found: ID does not exist" Apr 16 14:13:49.655416 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:49.655394 2569 scope.go:117] "RemoveContainer" containerID="0b5903b38d40b64cf5621c112336ae2910259abcf267be316f1a4cc4b9c5c45e" Apr 16 14:13:49.655603 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:13:49.655588 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b5903b38d40b64cf5621c112336ae2910259abcf267be316f1a4cc4b9c5c45e\": container with ID starting with 0b5903b38d40b64cf5621c112336ae2910259abcf267be316f1a4cc4b9c5c45e not found: ID does not exist" containerID="0b5903b38d40b64cf5621c112336ae2910259abcf267be316f1a4cc4b9c5c45e" Apr 16 14:13:49.655665 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:49.655607 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b5903b38d40b64cf5621c112336ae2910259abcf267be316f1a4cc4b9c5c45e"} err="failed to get container status \"0b5903b38d40b64cf5621c112336ae2910259abcf267be316f1a4cc4b9c5c45e\": rpc error: code = NotFound desc = could not find container \"0b5903b38d40b64cf5621c112336ae2910259abcf267be316f1a4cc4b9c5c45e\": container with ID starting with 0b5903b38d40b64cf5621c112336ae2910259abcf267be316f1a4cc4b9c5c45e not found: ID does not exist" Apr 16 14:13:49.683942 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:49.683917 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/5826bce5-daea-4687-a04f-6ed05214e98d-kserve-provision-location\") pod \"5826bce5-daea-4687-a04f-6ed05214e98d\" (UID: \"5826bce5-daea-4687-a04f-6ed05214e98d\") " Apr 16 14:13:49.684233 
ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:49.684204 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5826bce5-daea-4687-a04f-6ed05214e98d-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "5826bce5-daea-4687-a04f-6ed05214e98d" (UID: "5826bce5-daea-4687-a04f-6ed05214e98d"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 14:13:49.784913 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:49.784887 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/5826bce5-daea-4687-a04f-6ed05214e98d-kserve-provision-location\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\"" Apr 16 14:13:49.941486 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:49.941460 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-3c9de-predictor-5c5b76c459-d7ck5"] Apr 16 14:13:49.946263 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:49.946228 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-graph-raw-hpa-3c9de-predictor-5c5b76c459-d7ck5"] Apr 16 14:13:50.603552 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:50.603523 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5826bce5-daea-4687-a04f-6ed05214e98d" path="/var/lib/kubelet/pods/5826bce5-daea-4687-a04f-6ed05214e98d/volumes" Apr 16 14:13:50.603900 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:50.603858 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e8d959e-e715-418f-9a13-3ad1a3dd6954" path="/var/lib/kubelet/pods/5e8d959e-e715-418f-9a13-3ad1a3dd6954/volumes" Apr 16 14:13:55.473400 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:55.473363 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg"] Apr 16 14:13:55.473757 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:55.473726 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="109a7777-da00-4d77-911a-aac714533f34" containerName="kserve-container" Apr 16 14:13:55.473757 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:55.473738 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="109a7777-da00-4d77-911a-aac714533f34" containerName="kserve-container" Apr 16 14:13:55.473757 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:55.473748 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6c1cd6f5-74a2-47b0-b431-864185372950" containerName="kserve-container" Apr 16 14:13:55.473757 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:55.473754 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c1cd6f5-74a2-47b0-b431-864185372950" containerName="kserve-container" Apr 16 14:13:55.473886 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:55.473767 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5e8d959e-e715-418f-9a13-3ad1a3dd6954" containerName="kserve-container" Apr 16 14:13:55.473886 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:55.473774 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e8d959e-e715-418f-9a13-3ad1a3dd6954" containerName="kserve-container" Apr 16 14:13:55.473886 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:55.473782 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6c1cd6f5-74a2-47b0-b431-864185372950" 
containerName="storage-initializer" Apr 16 14:13:55.473886 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:55.473787 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c1cd6f5-74a2-47b0-b431-864185372950" containerName="storage-initializer" Apr 16 14:13:55.473886 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:55.473794 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5826bce5-daea-4687-a04f-6ed05214e98d" containerName="kserve-container" Apr 16 14:13:55.473886 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:55.473799 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="5826bce5-daea-4687-a04f-6ed05214e98d" containerName="kserve-container" Apr 16 14:13:55.473886 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:55.473810 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5826bce5-daea-4687-a04f-6ed05214e98d" containerName="storage-initializer" Apr 16 14:13:55.473886 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:55.473815 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="5826bce5-daea-4687-a04f-6ed05214e98d" containerName="storage-initializer" Apr 16 14:13:55.473886 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:55.473823 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5e8d959e-e715-418f-9a13-3ad1a3dd6954" containerName="storage-initializer" Apr 16 14:13:55.473886 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:55.473828 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e8d959e-e715-418f-9a13-3ad1a3dd6954" containerName="storage-initializer" Apr 16 14:13:55.473886 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:55.473835 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="109a7777-da00-4d77-911a-aac714533f34" containerName="storage-initializer" Apr 16 14:13:55.473886 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:55.473840 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="109a7777-da00-4d77-911a-aac714533f34" containerName="storage-initializer" Apr 16 14:13:55.474220 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:55.473907 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="6c1cd6f5-74a2-47b0-b431-864185372950" containerName="kserve-container" Apr 16 14:13:55.474220 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:55.473918 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="109a7777-da00-4d77-911a-aac714533f34" containerName="kserve-container" Apr 16 14:13:55.474220 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:55.473925 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="5826bce5-daea-4687-a04f-6ed05214e98d" containerName="kserve-container" Apr 16 14:13:55.474220 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:55.473932 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="5e8d959e-e715-418f-9a13-3ad1a3dd6954" containerName="kserve-container" Apr 16 14:13:55.478757 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:55.478738 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" Apr 16 14:13:55.482440 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:55.482406 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"default-dockercfg-x6x2v\"" Apr 16 14:13:55.483563 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:55.483541 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg"] Apr 16 14:13:55.634354 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:55.634316 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/1f1a4d5e-2dff-4de4-8da2-8062eebb190f-kserve-provision-location\") pod \"isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg\" (UID: \"1f1a4d5e-2dff-4de4-8da2-8062eebb190f\") " pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" Apr 16 14:13:55.735807 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:55.735723 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/1f1a4d5e-2dff-4de4-8da2-8062eebb190f-kserve-provision-location\") pod \"isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg\" (UID: \"1f1a4d5e-2dff-4de4-8da2-8062eebb190f\") " pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" Apr 16 14:13:55.736106 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:55.736086 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/1f1a4d5e-2dff-4de4-8da2-8062eebb190f-kserve-provision-location\") pod \"isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg\" (UID: \"1f1a4d5e-2dff-4de4-8da2-8062eebb190f\") " pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" Apr 16 14:13:55.790208 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:55.790180 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" Apr 16 14:13:55.910036 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:55.909878 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg"] Apr 16 14:13:55.913178 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:13:55.913151 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f1a4d5e_2dff_4de4_8da2_8062eebb190f.slice/crio-10f789ee3541ee0b84a3043e4800d7cb75ac860d9e043cc70574bb5fe9b85687 WatchSource:0}: Error finding container 10f789ee3541ee0b84a3043e4800d7cb75ac860d9e043cc70574bb5fe9b85687: Status 404 returned error can't find the container with id 10f789ee3541ee0b84a3043e4800d7cb75ac860d9e043cc70574bb5fe9b85687 Apr 16 14:13:56.647626 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:56.647588 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" event={"ID":"1f1a4d5e-2dff-4de4-8da2-8062eebb190f","Type":"ContainerStarted","Data":"9d1efe455dee0dc11b35ee5f33cc94c49334c36ad7792f9c338162c6a68304eb"} Apr 16 14:13:56.647626 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:13:56.647624 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" event={"ID":"1f1a4d5e-2dff-4de4-8da2-8062eebb190f","Type":"ContainerStarted","Data":"10f789ee3541ee0b84a3043e4800d7cb75ac860d9e043cc70574bb5fe9b85687"} Apr 16 14:14:00.662476 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:14:00.662438 2569 generic.go:358] "Generic (PLEG): container finished" podID="1f1a4d5e-2dff-4de4-8da2-8062eebb190f" containerID="9d1efe455dee0dc11b35ee5f33cc94c49334c36ad7792f9c338162c6a68304eb" exitCode=0 Apr 16 14:14:00.662859 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:14:00.662509 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" event={"ID":"1f1a4d5e-2dff-4de4-8da2-8062eebb190f","Type":"ContainerDied","Data":"9d1efe455dee0dc11b35ee5f33cc94c49334c36ad7792f9c338162c6a68304eb"} Apr 16 14:14:01.667771 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:14:01.667739 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" event={"ID":"1f1a4d5e-2dff-4de4-8da2-8062eebb190f","Type":"ContainerStarted","Data":"c497887a04277ef3560f419a3f54008ab8a9315a9fc268b600432321074d85f8"} Apr 16 14:14:01.667771 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:14:01.667773 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" event={"ID":"1f1a4d5e-2dff-4de4-8da2-8062eebb190f","Type":"ContainerStarted","Data":"643805fe5af706813d5ac0151b2b249706645fffd97e2b823c4ff31ee784f0cf"} Apr 16 14:14:01.668175 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:14:01.668048 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" Apr 16 14:14:01.669520 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:14:01.669494 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" podUID="1f1a4d5e-2dff-4de4-8da2-8062eebb190f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.39:8080: connect: connection 
refused" Apr 16 14:14:01.682252 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:14:01.682192 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" podStartSLOduration=6.682178043 podStartE2EDuration="6.682178043s" podCreationTimestamp="2026-04-16 14:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 14:14:01.680723749 +0000 UTC m=+915.702011510" watchObservedRunningTime="2026-04-16 14:14:01.682178043 +0000 UTC m=+915.703465790" Apr 16 14:14:02.671144 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:14:02.671112 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" Apr 16 14:14:02.671530 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:14:02.671274 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" podUID="1f1a4d5e-2dff-4de4-8da2-8062eebb190f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.39:8080: connect: connection refused" Apr 16 14:14:02.672231 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:14:02.672209 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" podUID="1f1a4d5e-2dff-4de4-8da2-8062eebb190f" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 14:14:03.674324 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:14:03.674281 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" podUID="1f1a4d5e-2dff-4de4-8da2-8062eebb190f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.39:8080: connect: connection refused" Apr 16 14:14:03.674783 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:14:03.674637 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" podUID="1f1a4d5e-2dff-4de4-8da2-8062eebb190f" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 14:14:13.675012 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:14:13.674967 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" podUID="1f1a4d5e-2dff-4de4-8da2-8062eebb190f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.39:8080: connect: connection refused" Apr 16 14:14:13.675564 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:14:13.675540 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" podUID="1f1a4d5e-2dff-4de4-8da2-8062eebb190f" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 14:14:23.675056 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:14:23.675004 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" podUID="1f1a4d5e-2dff-4de4-8da2-8062eebb190f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.39:8080: connect: connection refused" Apr 16 14:14:23.675651 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:14:23.675490 2569 prober.go:120] "Probe failed" probeType="Readiness" 
pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" podUID="1f1a4d5e-2dff-4de4-8da2-8062eebb190f" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 14:14:33.674531 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:14:33.674487 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" podUID="1f1a4d5e-2dff-4de4-8da2-8062eebb190f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.39:8080: connect: connection refused" Apr 16 14:14:33.676825 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:14:33.674891 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" podUID="1f1a4d5e-2dff-4de4-8da2-8062eebb190f" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 14:14:43.674971 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:14:43.674919 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" podUID="1f1a4d5e-2dff-4de4-8da2-8062eebb190f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.39:8080: connect: connection refused" Apr 16 14:14:43.675393 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:14:43.675373 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" podUID="1f1a4d5e-2dff-4de4-8da2-8062eebb190f" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 14:14:53.674410 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:14:53.674361 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" podUID="1f1a4d5e-2dff-4de4-8da2-8062eebb190f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.39:8080: connect: connection refused" Apr 16 14:14:53.674882 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:14:53.674858 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" podUID="1f1a4d5e-2dff-4de4-8da2-8062eebb190f" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 14:15:03.674917 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:03.674829 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" Apr 16 14:15:03.675379 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:03.675049 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" Apr 16 14:15:10.731799 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:10.731760 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg"] Apr 16 14:15:10.732177 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:10.732118 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" podUID="1f1a4d5e-2dff-4de4-8da2-8062eebb190f" containerName="kserve-container" containerID="cri-o://643805fe5af706813d5ac0151b2b249706645fffd97e2b823c4ff31ee784f0cf" gracePeriod=30 Apr 16 14:15:10.732256 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:10.732211 
2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" podUID="1f1a4d5e-2dff-4de4-8da2-8062eebb190f" containerName="agent" containerID="cri-o://c497887a04277ef3560f419a3f54008ab8a9315a9fc268b600432321074d85f8" gracePeriod=30 Apr 16 14:15:10.766783 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:10.766751 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf"] Apr 16 14:15:10.770296 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:10.770280 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf" Apr 16 14:15:10.777764 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:10.777743 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf"] Apr 16 14:15:10.870405 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:10.870372 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/5563408b-4cf9-4c8f-a535-d65c7d955862-kserve-provision-location\") pod \"isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf\" (UID: \"5563408b-4cf9-4c8f-a535-d65c7d955862\") " pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf" Apr 16 14:15:10.971691 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:10.971644 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/5563408b-4cf9-4c8f-a535-d65c7d955862-kserve-provision-location\") pod \"isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf\" (UID: \"5563408b-4cf9-4c8f-a535-d65c7d955862\") " pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf" Apr 16 14:15:10.972059 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:10.972038 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/5563408b-4cf9-4c8f-a535-d65c7d955862-kserve-provision-location\") pod \"isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf\" (UID: \"5563408b-4cf9-4c8f-a535-d65c7d955862\") " pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf" Apr 16 14:15:11.081276 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:11.081184 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf" Apr 16 14:15:11.201151 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:11.201127 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf"] Apr 16 14:15:11.203470 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:15:11.203435 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5563408b_4cf9_4c8f_a535_d65c7d955862.slice/crio-ed07112744a44bef18ecf72c8a152a200df2c087d0948cd6c34842cdcb6a23e2 WatchSource:0}: Error finding container ed07112744a44bef18ecf72c8a152a200df2c087d0948cd6c34842cdcb6a23e2: Status 404 returned error can't find the container with id ed07112744a44bef18ecf72c8a152a200df2c087d0948cd6c34842cdcb6a23e2 Apr 16 14:15:11.912514 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:11.912475 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf" event={"ID":"5563408b-4cf9-4c8f-a535-d65c7d955862","Type":"ContainerStarted","Data":"371558c26eaa35105c4ee223965d2d4ab5bad5a263e02657295aac52a029ca53"} Apr 16 14:15:11.912514 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:11.912516 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf" event={"ID":"5563408b-4cf9-4c8f-a535-d65c7d955862","Type":"ContainerStarted","Data":"ed07112744a44bef18ecf72c8a152a200df2c087d0948cd6c34842cdcb6a23e2"} Apr 16 14:15:13.674523 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:13.674473 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" podUID="1f1a4d5e-2dff-4de4-8da2-8062eebb190f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.39:8080: connect: connection refused" Apr 16 14:15:13.674967 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:13.674830 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" podUID="1f1a4d5e-2dff-4de4-8da2-8062eebb190f" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 14:15:14.925621 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:14.925592 2569 generic.go:358] "Generic (PLEG): container finished" podID="1f1a4d5e-2dff-4de4-8da2-8062eebb190f" containerID="643805fe5af706813d5ac0151b2b249706645fffd97e2b823c4ff31ee784f0cf" exitCode=0 Apr 16 14:15:14.925982 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:14.925664 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" event={"ID":"1f1a4d5e-2dff-4de4-8da2-8062eebb190f","Type":"ContainerDied","Data":"643805fe5af706813d5ac0151b2b249706645fffd97e2b823c4ff31ee784f0cf"} Apr 16 14:15:14.926983 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:14.926963 2569 generic.go:358] "Generic (PLEG): container finished" podID="5563408b-4cf9-4c8f-a535-d65c7d955862" containerID="371558c26eaa35105c4ee223965d2d4ab5bad5a263e02657295aac52a029ca53" exitCode=0 Apr 16 14:15:14.927073 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:14.926999 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf" 
event={"ID":"5563408b-4cf9-4c8f-a535-d65c7d955862","Type":"ContainerDied","Data":"371558c26eaa35105c4ee223965d2d4ab5bad5a263e02657295aac52a029ca53"} Apr 16 14:15:15.932591 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:15.932556 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf" event={"ID":"5563408b-4cf9-4c8f-a535-d65c7d955862","Type":"ContainerStarted","Data":"73b358078c65ae6cec4c97a3946e85f832cdc2bfc59d88e167c3b2e1ca936335"} Apr 16 14:15:15.932996 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:15.932867 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf" Apr 16 14:15:15.934570 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:15.934547 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf" podUID="5563408b-4cf9-4c8f-a535-d65c7d955862" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.40:8080: connect: connection refused" Apr 16 14:15:15.949820 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:15.949770 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf" podStartSLOduration=5.949754727 podStartE2EDuration="5.949754727s" podCreationTimestamp="2026-04-16 14:15:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 14:15:15.9492678 +0000 UTC m=+989.970555545" watchObservedRunningTime="2026-04-16 14:15:15.949754727 +0000 UTC m=+989.971042477" Apr 16 14:15:16.936460 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:16.936424 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf" podUID="5563408b-4cf9-4c8f-a535-d65c7d955862" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.40:8080: connect: connection refused" Apr 16 14:15:23.674389 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:23.674351 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" podUID="1f1a4d5e-2dff-4de4-8da2-8062eebb190f" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.39:8080: connect: connection refused" Apr 16 14:15:23.674799 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:23.674680 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" podUID="1f1a4d5e-2dff-4de4-8da2-8062eebb190f" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 14:15:26.936888 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:26.936835 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf" podUID="5563408b-4cf9-4c8f-a535-d65c7d955862" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.40:8080: connect: connection refused" Apr 16 14:15:33.674799 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:33.674747 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" podUID="1f1a4d5e-2dff-4de4-8da2-8062eebb190f" containerName="kserve-container" 
probeResult="failure" output="dial tcp 10.132.0.39:8080: connect: connection refused" Apr 16 14:15:33.675206 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:33.674905 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" Apr 16 14:15:33.675206 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:33.675087 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" podUID="1f1a4d5e-2dff-4de4-8da2-8062eebb190f" containerName="agent" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 16 14:15:33.675206 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:33.675192 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" Apr 16 14:15:36.936459 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:36.936413 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf" podUID="5563408b-4cf9-4c8f-a535-d65c7d955862" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.40:8080: connect: connection refused" Apr 16 14:15:40.883941 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:40.883916 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" Apr 16 14:15:41.020364 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:41.020277 2569 generic.go:358] "Generic (PLEG): container finished" podID="1f1a4d5e-2dff-4de4-8da2-8062eebb190f" containerID="c497887a04277ef3560f419a3f54008ab8a9315a9fc268b600432321074d85f8" exitCode=137 Apr 16 14:15:41.020364 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:41.020358 2569 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" Apr 16 14:15:41.020545 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:41.020358 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" event={"ID":"1f1a4d5e-2dff-4de4-8da2-8062eebb190f","Type":"ContainerDied","Data":"c497887a04277ef3560f419a3f54008ab8a9315a9fc268b600432321074d85f8"} Apr 16 14:15:41.020545 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:41.020408 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg" event={"ID":"1f1a4d5e-2dff-4de4-8da2-8062eebb190f","Type":"ContainerDied","Data":"10f789ee3541ee0b84a3043e4800d7cb75ac860d9e043cc70574bb5fe9b85687"} Apr 16 14:15:41.020545 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:41.020429 2569 scope.go:117] "RemoveContainer" containerID="c497887a04277ef3560f419a3f54008ab8a9315a9fc268b600432321074d85f8" Apr 16 14:15:41.028047 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:41.028028 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/1f1a4d5e-2dff-4de4-8da2-8062eebb190f-kserve-provision-location\") pod \"1f1a4d5e-2dff-4de4-8da2-8062eebb190f\" (UID: \"1f1a4d5e-2dff-4de4-8da2-8062eebb190f\") " Apr 16 14:15:41.028373 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:41.028351 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f1a4d5e-2dff-4de4-8da2-8062eebb190f-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "1f1a4d5e-2dff-4de4-8da2-8062eebb190f" (UID: "1f1a4d5e-2dff-4de4-8da2-8062eebb190f"). InnerVolumeSpecName "kserve-provision-location". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 14:15:41.028423 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:41.028374 2569 scope.go:117] "RemoveContainer" containerID="643805fe5af706813d5ac0151b2b249706645fffd97e2b823c4ff31ee784f0cf" Apr 16 14:15:41.037827 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:41.037810 2569 scope.go:117] "RemoveContainer" containerID="9d1efe455dee0dc11b35ee5f33cc94c49334c36ad7792f9c338162c6a68304eb" Apr 16 14:15:41.044799 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:41.044765 2569 scope.go:117] "RemoveContainer" containerID="c497887a04277ef3560f419a3f54008ab8a9315a9fc268b600432321074d85f8" Apr 16 14:15:41.045037 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:15:41.045017 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c497887a04277ef3560f419a3f54008ab8a9315a9fc268b600432321074d85f8\": container with ID starting with c497887a04277ef3560f419a3f54008ab8a9315a9fc268b600432321074d85f8 not found: ID does not exist" containerID="c497887a04277ef3560f419a3f54008ab8a9315a9fc268b600432321074d85f8" Apr 16 14:15:41.045096 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:41.045045 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c497887a04277ef3560f419a3f54008ab8a9315a9fc268b600432321074d85f8"} err="failed to get container status \"c497887a04277ef3560f419a3f54008ab8a9315a9fc268b600432321074d85f8\": rpc error: code = NotFound desc = could not find container \"c497887a04277ef3560f419a3f54008ab8a9315a9fc268b600432321074d85f8\": container with ID starting with c497887a04277ef3560f419a3f54008ab8a9315a9fc268b600432321074d85f8 not found: ID does not exist" Apr 16 14:15:41.045096 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:41.045062 2569 scope.go:117] "RemoveContainer" containerID="643805fe5af706813d5ac0151b2b249706645fffd97e2b823c4ff31ee784f0cf" Apr 16 14:15:41.045288 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:15:41.045268 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"643805fe5af706813d5ac0151b2b249706645fffd97e2b823c4ff31ee784f0cf\": container with ID starting with 643805fe5af706813d5ac0151b2b249706645fffd97e2b823c4ff31ee784f0cf not found: ID does not exist" containerID="643805fe5af706813d5ac0151b2b249706645fffd97e2b823c4ff31ee784f0cf" Apr 16 14:15:41.045359 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:41.045297 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"643805fe5af706813d5ac0151b2b249706645fffd97e2b823c4ff31ee784f0cf"} err="failed to get container status \"643805fe5af706813d5ac0151b2b249706645fffd97e2b823c4ff31ee784f0cf\": rpc error: code = NotFound desc = could not find container \"643805fe5af706813d5ac0151b2b249706645fffd97e2b823c4ff31ee784f0cf\": container with ID starting with 643805fe5af706813d5ac0151b2b249706645fffd97e2b823c4ff31ee784f0cf not found: ID does not exist" Apr 16 14:15:41.045359 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:41.045321 2569 scope.go:117] "RemoveContainer" containerID="9d1efe455dee0dc11b35ee5f33cc94c49334c36ad7792f9c338162c6a68304eb" Apr 16 14:15:41.045542 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:15:41.045527 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d1efe455dee0dc11b35ee5f33cc94c49334c36ad7792f9c338162c6a68304eb\": container with ID starting with 
9d1efe455dee0dc11b35ee5f33cc94c49334c36ad7792f9c338162c6a68304eb not found: ID does not exist" containerID="9d1efe455dee0dc11b35ee5f33cc94c49334c36ad7792f9c338162c6a68304eb" Apr 16 14:15:41.045588 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:41.045545 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d1efe455dee0dc11b35ee5f33cc94c49334c36ad7792f9c338162c6a68304eb"} err="failed to get container status \"9d1efe455dee0dc11b35ee5f33cc94c49334c36ad7792f9c338162c6a68304eb\": rpc error: code = NotFound desc = could not find container \"9d1efe455dee0dc11b35ee5f33cc94c49334c36ad7792f9c338162c6a68304eb\": container with ID starting with 9d1efe455dee0dc11b35ee5f33cc94c49334c36ad7792f9c338162c6a68304eb not found: ID does not exist" Apr 16 14:15:41.129444 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:41.129411 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/1f1a4d5e-2dff-4de4-8da2-8062eebb190f-kserve-provision-location\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\"" Apr 16 14:15:41.343191 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:41.343160 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg"] Apr 16 14:15:41.347470 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:41.347439 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-logger-raw-a0d01-predictor-77cdd7f44d-x49kg"] Apr 16 14:15:42.602410 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:42.602375 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f1a4d5e-2dff-4de4-8da2-8062eebb190f" path="/var/lib/kubelet/pods/1f1a4d5e-2dff-4de4-8da2-8062eebb190f/volumes" Apr 16 14:15:46.937141 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:46.937098 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf" podUID="5563408b-4cf9-4c8f-a535-d65c7d955862" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.40:8080: connect: connection refused" Apr 16 14:15:56.937334 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:15:56.937297 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf" podUID="5563408b-4cf9-4c8f-a535-d65c7d955862" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.40:8080: connect: connection refused" Apr 16 14:16:06.936649 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:16:06.936605 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf" podUID="5563408b-4cf9-4c8f-a535-d65c7d955862" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.40:8080: connect: connection refused" Apr 16 14:16:16.937138 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:16:16.937097 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf" podUID="5563408b-4cf9-4c8f-a535-d65c7d955862" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.40:8080: connect: connection refused" Apr 16 14:16:21.598218 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:16:21.598170 2569 prober.go:120] "Probe failed" probeType="Readiness" 
pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf" podUID="5563408b-4cf9-4c8f-a535-d65c7d955862" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.40:8080: connect: connection refused" Apr 16 14:16:31.598597 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:16:31.598514 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf" podUID="5563408b-4cf9-4c8f-a535-d65c7d955862" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.40:8080: connect: connection refused" Apr 16 14:16:41.598227 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:16:41.598182 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf" podUID="5563408b-4cf9-4c8f-a535-d65c7d955862" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.40:8080: connect: connection refused" Apr 16 14:16:51.598476 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:16:51.598428 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf" podUID="5563408b-4cf9-4c8f-a535-d65c7d955862" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.40:8080: connect: connection refused" Apr 16 14:17:01.598504 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:01.598459 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf" podUID="5563408b-4cf9-4c8f-a535-d65c7d955862" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.40:8080: connect: connection refused" Apr 16 14:17:11.598512 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:11.598470 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf" podUID="5563408b-4cf9-4c8f-a535-d65c7d955862" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.40:8080: connect: connection refused" Apr 16 14:17:21.598897 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:21.598855 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf" podUID="5563408b-4cf9-4c8f-a535-d65c7d955862" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.40:8080: connect: connection refused" Apr 16 14:17:23.599024 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:23.598980 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf" podUID="5563408b-4cf9-4c8f-a535-d65c7d955862" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.40:8080: connect: connection refused" Apr 16 14:17:33.599407 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:33.599374 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf" Apr 16 14:17:40.970334 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:40.970297 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf"] Apr 16 14:17:40.970687 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:40.970567 2569 kuberuntime_container.go:864] "Killing container with a grace period" 
pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf" podUID="5563408b-4cf9-4c8f-a535-d65c7d955862" containerName="kserve-container" containerID="cri-o://73b358078c65ae6cec4c97a3946e85f832cdc2bfc59d88e167c3b2e1ca936335" gracePeriod=30 Apr 16 14:17:41.058718 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:41.058686 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-primary-8047bd-predictor-6f97f8bc7d-4clgf"] Apr 16 14:17:41.059090 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:41.059077 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1f1a4d5e-2dff-4de4-8da2-8062eebb190f" containerName="agent" Apr 16 14:17:41.059133 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:41.059093 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f1a4d5e-2dff-4de4-8da2-8062eebb190f" containerName="agent" Apr 16 14:17:41.059133 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:41.059113 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1f1a4d5e-2dff-4de4-8da2-8062eebb190f" containerName="storage-initializer" Apr 16 14:17:41.059133 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:41.059119 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f1a4d5e-2dff-4de4-8da2-8062eebb190f" containerName="storage-initializer" Apr 16 14:17:41.059133 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:41.059131 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1f1a4d5e-2dff-4de4-8da2-8062eebb190f" containerName="kserve-container" Apr 16 14:17:41.059272 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:41.059137 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f1a4d5e-2dff-4de4-8da2-8062eebb190f" containerName="kserve-container" Apr 16 14:17:41.059272 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:41.059203 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="1f1a4d5e-2dff-4de4-8da2-8062eebb190f" containerName="kserve-container" Apr 16 14:17:41.059272 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:41.059214 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="1f1a4d5e-2dff-4de4-8da2-8062eebb190f" containerName="agent" Apr 16 14:17:41.062287 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:41.062270 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-primary-8047bd-predictor-6f97f8bc7d-4clgf" Apr 16 14:17:41.076842 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:41.076818 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-primary-8047bd-predictor-6f97f8bc7d-4clgf"] Apr 16 14:17:41.133842 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:41.133808 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/bbade444-703c-48fb-9168-bcbfc7deaea4-kserve-provision-location\") pod \"isvc-primary-8047bd-predictor-6f97f8bc7d-4clgf\" (UID: \"bbade444-703c-48fb-9168-bcbfc7deaea4\") " pod="kserve-ci-e2e-test/isvc-primary-8047bd-predictor-6f97f8bc7d-4clgf" Apr 16 14:17:41.235307 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:41.235207 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/bbade444-703c-48fb-9168-bcbfc7deaea4-kserve-provision-location\") pod \"isvc-primary-8047bd-predictor-6f97f8bc7d-4clgf\" (UID: \"bbade444-703c-48fb-9168-bcbfc7deaea4\") " pod="kserve-ci-e2e-test/isvc-primary-8047bd-predictor-6f97f8bc7d-4clgf" Apr 16 14:17:41.235572 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:41.235551 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/bbade444-703c-48fb-9168-bcbfc7deaea4-kserve-provision-location\") pod \"isvc-primary-8047bd-predictor-6f97f8bc7d-4clgf\" (UID: \"bbade444-703c-48fb-9168-bcbfc7deaea4\") " pod="kserve-ci-e2e-test/isvc-primary-8047bd-predictor-6f97f8bc7d-4clgf" Apr 16 14:17:41.371815 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:41.371782 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-primary-8047bd-predictor-6f97f8bc7d-4clgf" Apr 16 14:17:41.502071 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:41.501997 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-primary-8047bd-predictor-6f97f8bc7d-4clgf"] Apr 16 14:17:41.504560 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:17:41.504530 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbade444_703c_48fb_9168_bcbfc7deaea4.slice/crio-9ceeed6cd8fae63f01211d3d933605ed6cc2a13ac8c580f06996c7fd8bca5951 WatchSource:0}: Error finding container 9ceeed6cd8fae63f01211d3d933605ed6cc2a13ac8c580f06996c7fd8bca5951: Status 404 returned error can't find the container with id 9ceeed6cd8fae63f01211d3d933605ed6cc2a13ac8c580f06996c7fd8bca5951 Apr 16 14:17:41.506302 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:41.506285 2569 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 16 14:17:42.439152 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:42.439113 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-primary-8047bd-predictor-6f97f8bc7d-4clgf" event={"ID":"bbade444-703c-48fb-9168-bcbfc7deaea4","Type":"ContainerStarted","Data":"c7bc479b3a4a553b02ab8e7ea394ac962d451757383a318bc8996a6ef119c9f2"} Apr 16 14:17:42.439152 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:42.439155 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-primary-8047bd-predictor-6f97f8bc7d-4clgf" event={"ID":"bbade444-703c-48fb-9168-bcbfc7deaea4","Type":"ContainerStarted","Data":"9ceeed6cd8fae63f01211d3d933605ed6cc2a13ac8c580f06996c7fd8bca5951"} Apr 16 14:17:43.599735 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:43.599697 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf" podUID="5563408b-4cf9-4c8f-a535-d65c7d955862" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.40:8080: connect: connection refused" Apr 16 14:17:45.449272 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:45.449225 2569 generic.go:358] "Generic (PLEG): container finished" podID="bbade444-703c-48fb-9168-bcbfc7deaea4" containerID="c7bc479b3a4a553b02ab8e7ea394ac962d451757383a318bc8996a6ef119c9f2" exitCode=0 Apr 16 14:17:45.449564 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:45.449285 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-primary-8047bd-predictor-6f97f8bc7d-4clgf" event={"ID":"bbade444-703c-48fb-9168-bcbfc7deaea4","Type":"ContainerDied","Data":"c7bc479b3a4a553b02ab8e7ea394ac962d451757383a318bc8996a6ef119c9f2"} Apr 16 14:17:46.453864 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:46.453830 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-primary-8047bd-predictor-6f97f8bc7d-4clgf" event={"ID":"bbade444-703c-48fb-9168-bcbfc7deaea4","Type":"ContainerStarted","Data":"d5df1df91489427104e7bba1919e25d7125f635036e91f0fb974c9b8df9d4af3"} Apr 16 14:17:46.454288 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:46.454116 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-primary-8047bd-predictor-6f97f8bc7d-4clgf" Apr 16 14:17:46.455514 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:46.455490 2569 prober.go:120] "Probe failed" probeType="Readiness" 
pod="kserve-ci-e2e-test/isvc-primary-8047bd-predictor-6f97f8bc7d-4clgf" podUID="bbade444-703c-48fb-9168-bcbfc7deaea4" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.41:8080: connect: connection refused" Apr 16 14:17:46.470351 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:46.470309 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-primary-8047bd-predictor-6f97f8bc7d-4clgf" podStartSLOduration=5.470297094 podStartE2EDuration="5.470297094s" podCreationTimestamp="2026-04-16 14:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 14:17:46.468006871 +0000 UTC m=+1140.489294619" watchObservedRunningTime="2026-04-16 14:17:46.470297094 +0000 UTC m=+1140.491584842" Apr 16 14:17:47.457698 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:47.457659 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-primary-8047bd-predictor-6f97f8bc7d-4clgf" podUID="bbade444-703c-48fb-9168-bcbfc7deaea4" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.41:8080: connect: connection refused" Apr 16 14:17:49.713631 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:49.713607 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf" Apr 16 14:17:49.803967 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:49.803883 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/5563408b-4cf9-4c8f-a535-d65c7d955862-kserve-provision-location\") pod \"5563408b-4cf9-4c8f-a535-d65c7d955862\" (UID: \"5563408b-4cf9-4c8f-a535-d65c7d955862\") " Apr 16 14:17:49.804209 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:49.804187 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5563408b-4cf9-4c8f-a535-d65c7d955862-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "5563408b-4cf9-4c8f-a535-d65c7d955862" (UID: "5563408b-4cf9-4c8f-a535-d65c7d955862"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 14:17:49.904594 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:49.904554 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/5563408b-4cf9-4c8f-a535-d65c7d955862-kserve-provision-location\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\"" Apr 16 14:17:50.468085 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:50.468050 2569 generic.go:358] "Generic (PLEG): container finished" podID="5563408b-4cf9-4c8f-a535-d65c7d955862" containerID="73b358078c65ae6cec4c97a3946e85f832cdc2bfc59d88e167c3b2e1ca936335" exitCode=0 Apr 16 14:17:50.468300 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:50.468129 2569 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf" Apr 16 14:17:50.468300 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:50.468137 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf" event={"ID":"5563408b-4cf9-4c8f-a535-d65c7d955862","Type":"ContainerDied","Data":"73b358078c65ae6cec4c97a3946e85f832cdc2bfc59d88e167c3b2e1ca936335"} Apr 16 14:17:50.468300 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:50.468173 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf" event={"ID":"5563408b-4cf9-4c8f-a535-d65c7d955862","Type":"ContainerDied","Data":"ed07112744a44bef18ecf72c8a152a200df2c087d0948cd6c34842cdcb6a23e2"} Apr 16 14:17:50.468300 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:50.468189 2569 scope.go:117] "RemoveContainer" containerID="73b358078c65ae6cec4c97a3946e85f832cdc2bfc59d88e167c3b2e1ca936335" Apr 16 14:17:50.476188 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:50.476169 2569 scope.go:117] "RemoveContainer" containerID="371558c26eaa35105c4ee223965d2d4ab5bad5a263e02657295aac52a029ca53" Apr 16 14:17:50.483211 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:50.483197 2569 scope.go:117] "RemoveContainer" containerID="73b358078c65ae6cec4c97a3946e85f832cdc2bfc59d88e167c3b2e1ca936335" Apr 16 14:17:50.483464 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:17:50.483444 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73b358078c65ae6cec4c97a3946e85f832cdc2bfc59d88e167c3b2e1ca936335\": container with ID starting with 73b358078c65ae6cec4c97a3946e85f832cdc2bfc59d88e167c3b2e1ca936335 not found: ID does not exist" containerID="73b358078c65ae6cec4c97a3946e85f832cdc2bfc59d88e167c3b2e1ca936335" Apr 16 14:17:50.483537 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:50.483472 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73b358078c65ae6cec4c97a3946e85f832cdc2bfc59d88e167c3b2e1ca936335"} err="failed to get container status \"73b358078c65ae6cec4c97a3946e85f832cdc2bfc59d88e167c3b2e1ca936335\": rpc error: code = NotFound desc = could not find container \"73b358078c65ae6cec4c97a3946e85f832cdc2bfc59d88e167c3b2e1ca936335\": container with ID starting with 73b358078c65ae6cec4c97a3946e85f832cdc2bfc59d88e167c3b2e1ca936335 not found: ID does not exist" Apr 16 14:17:50.483537 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:50.483488 2569 scope.go:117] "RemoveContainer" containerID="371558c26eaa35105c4ee223965d2d4ab5bad5a263e02657295aac52a029ca53" Apr 16 14:17:50.483699 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:17:50.483683 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"371558c26eaa35105c4ee223965d2d4ab5bad5a263e02657295aac52a029ca53\": container with ID starting with 371558c26eaa35105c4ee223965d2d4ab5bad5a263e02657295aac52a029ca53 not found: ID does not exist" containerID="371558c26eaa35105c4ee223965d2d4ab5bad5a263e02657295aac52a029ca53" Apr 16 14:17:50.483739 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:50.483704 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"371558c26eaa35105c4ee223965d2d4ab5bad5a263e02657295aac52a029ca53"} err="failed to get container status 
\"371558c26eaa35105c4ee223965d2d4ab5bad5a263e02657295aac52a029ca53\": rpc error: code = NotFound desc = could not find container \"371558c26eaa35105c4ee223965d2d4ab5bad5a263e02657295aac52a029ca53\": container with ID starting with 371558c26eaa35105c4ee223965d2d4ab5bad5a263e02657295aac52a029ca53 not found: ID does not exist" Apr 16 14:17:50.493531 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:50.493504 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf"] Apr 16 14:17:50.493638 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:50.493549 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-sklearn-scale-raw-5c508-predictor-75cd64b9b9-cglzf"] Apr 16 14:17:50.602301 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:50.602268 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5563408b-4cf9-4c8f-a535-d65c7d955862" path="/var/lib/kubelet/pods/5563408b-4cf9-4c8f-a535-d65c7d955862/volumes" Apr 16 14:17:57.458646 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:17:57.458552 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-primary-8047bd-predictor-6f97f8bc7d-4clgf" podUID="bbade444-703c-48fb-9168-bcbfc7deaea4" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.41:8080: connect: connection refused" Apr 16 14:18:07.458699 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:18:07.458653 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-primary-8047bd-predictor-6f97f8bc7d-4clgf" podUID="bbade444-703c-48fb-9168-bcbfc7deaea4" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.41:8080: connect: connection refused" Apr 16 14:18:17.458033 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:18:17.457988 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-primary-8047bd-predictor-6f97f8bc7d-4clgf" podUID="bbade444-703c-48fb-9168-bcbfc7deaea4" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.41:8080: connect: connection refused" Apr 16 14:18:27.458565 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:18:27.458522 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-primary-8047bd-predictor-6f97f8bc7d-4clgf" podUID="bbade444-703c-48fb-9168-bcbfc7deaea4" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.41:8080: connect: connection refused" Apr 16 14:18:37.458110 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:18:37.458067 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-primary-8047bd-predictor-6f97f8bc7d-4clgf" podUID="bbade444-703c-48fb-9168-bcbfc7deaea4" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.41:8080: connect: connection refused" Apr 16 14:18:46.579765 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:18:46.579733 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-d87b8d5fc-27q26_5057171c-9c0f-4741-b8ce-987c40eb447d/console-operator/2.log" Apr 16 14:18:46.582737 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:18:46.582710 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-d87b8d5fc-27q26_5057171c-9c0f-4741-b8ce-987c40eb447d/console-operator/2.log" Apr 16 14:18:46.585806 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:18:46.585786 2569 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ljx7q_7e5f1228-5703-456a-a909-558205e02bfc/ovn-acl-logging/0.log" Apr 16 14:18:46.588438 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:18:46.588421 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ljx7q_7e5f1228-5703-456a-a909-558205e02bfc/ovn-acl-logging/0.log" Apr 16 14:18:47.458568 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:18:47.458518 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-primary-8047bd-predictor-6f97f8bc7d-4clgf" podUID="bbade444-703c-48fb-9168-bcbfc7deaea4" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.41:8080: connect: connection refused" Apr 16 14:18:56.602745 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:18:56.602709 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-primary-8047bd-predictor-6f97f8bc7d-4clgf" Apr 16 14:19:01.183489 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:01.183456 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9"] Apr 16 14:19:01.183855 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:01.183826 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5563408b-4cf9-4c8f-a535-d65c7d955862" containerName="storage-initializer" Apr 16 14:19:01.183855 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:01.183836 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="5563408b-4cf9-4c8f-a535-d65c7d955862" containerName="storage-initializer" Apr 16 14:19:01.183855 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:01.183843 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5563408b-4cf9-4c8f-a535-d65c7d955862" containerName="kserve-container" Apr 16 14:19:01.183855 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:01.183849 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="5563408b-4cf9-4c8f-a535-d65c7d955862" containerName="kserve-container" Apr 16 14:19:01.184004 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:01.183918 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="5563408b-4cf9-4c8f-a535-d65c7d955862" containerName="kserve-container" Apr 16 14:19:01.188480 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:01.188461 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9" Apr 16 14:19:01.190632 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:01.190610 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"invalid-s3-secret-8047bd\"" Apr 16 14:19:01.190872 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:01.190857 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"odh-kserve-custom-ca-bundle\"" Apr 16 14:19:01.190935 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:01.190855 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"invalid-s3-sa-8047bd-dockercfg-vx6vw\"" Apr 16 14:19:01.193962 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:01.193943 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9"] Apr 16 14:19:01.286379 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:01.286348 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/8685b553-2c3e-4882-a5f8-af6b2c69fafb-cabundle-cert\") pod \"isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9\" (UID: \"8685b553-2c3e-4882-a5f8-af6b2c69fafb\") " pod="kserve-ci-e2e-test/isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9" Apr 16 14:19:01.286550 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:01.286389 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/8685b553-2c3e-4882-a5f8-af6b2c69fafb-kserve-provision-location\") pod \"isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9\" (UID: \"8685b553-2c3e-4882-a5f8-af6b2c69fafb\") " pod="kserve-ci-e2e-test/isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9" Apr 16 14:19:01.387709 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:01.387677 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/8685b553-2c3e-4882-a5f8-af6b2c69fafb-cabundle-cert\") pod \"isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9\" (UID: \"8685b553-2c3e-4882-a5f8-af6b2c69fafb\") " pod="kserve-ci-e2e-test/isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9" Apr 16 14:19:01.387866 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:01.387719 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/8685b553-2c3e-4882-a5f8-af6b2c69fafb-kserve-provision-location\") pod \"isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9\" (UID: \"8685b553-2c3e-4882-a5f8-af6b2c69fafb\") " pod="kserve-ci-e2e-test/isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9" Apr 16 14:19:01.388076 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:01.388060 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/8685b553-2c3e-4882-a5f8-af6b2c69fafb-kserve-provision-location\") pod \"isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9\" (UID: \"8685b553-2c3e-4882-a5f8-af6b2c69fafb\") " pod="kserve-ci-e2e-test/isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9" Apr 16 14:19:01.388346 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:01.388328 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cabundle-cert\" (UniqueName: 
\"kubernetes.io/configmap/8685b553-2c3e-4882-a5f8-af6b2c69fafb-cabundle-cert\") pod \"isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9\" (UID: \"8685b553-2c3e-4882-a5f8-af6b2c69fafb\") " pod="kserve-ci-e2e-test/isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9" Apr 16 14:19:01.499762 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:01.499688 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9" Apr 16 14:19:01.617256 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:01.617206 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9"] Apr 16 14:19:01.618949 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:19:01.618923 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8685b553_2c3e_4882_a5f8_af6b2c69fafb.slice/crio-6e9e4393cc4e64c41301f5f9c4ffcbb025311042e43d68514a7c1bae94113b78 WatchSource:0}: Error finding container 6e9e4393cc4e64c41301f5f9c4ffcbb025311042e43d68514a7c1bae94113b78: Status 404 returned error can't find the container with id 6e9e4393cc4e64c41301f5f9c4ffcbb025311042e43d68514a7c1bae94113b78 Apr 16 14:19:01.709786 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:01.709760 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9" event={"ID":"8685b553-2c3e-4882-a5f8-af6b2c69fafb","Type":"ContainerStarted","Data":"79f05ea87680d5f6babf8abafe914892e8e1fdb3771180b4a2d5103b3e8322f6"} Apr 16 14:19:01.709911 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:01.709792 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9" event={"ID":"8685b553-2c3e-4882-a5f8-af6b2c69fafb","Type":"ContainerStarted","Data":"6e9e4393cc4e64c41301f5f9c4ffcbb025311042e43d68514a7c1bae94113b78"} Apr 16 14:19:05.723380 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:05.723307 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9_8685b553-2c3e-4882-a5f8-af6b2c69fafb/storage-initializer/0.log" Apr 16 14:19:05.723380 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:05.723342 2569 generic.go:358] "Generic (PLEG): container finished" podID="8685b553-2c3e-4882-a5f8-af6b2c69fafb" containerID="79f05ea87680d5f6babf8abafe914892e8e1fdb3771180b4a2d5103b3e8322f6" exitCode=1 Apr 16 14:19:05.723752 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:05.723392 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9" event={"ID":"8685b553-2c3e-4882-a5f8-af6b2c69fafb","Type":"ContainerDied","Data":"79f05ea87680d5f6babf8abafe914892e8e1fdb3771180b4a2d5103b3e8322f6"} Apr 16 14:19:06.728081 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:06.728054 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9_8685b553-2c3e-4882-a5f8-af6b2c69fafb/storage-initializer/0.log" Apr 16 14:19:06.728469 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:06.728110 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9" event={"ID":"8685b553-2c3e-4882-a5f8-af6b2c69fafb","Type":"ContainerStarted","Data":"4bd0e8ad3fb00d68ac87c62a418c1bd00bed4cc9deb061e556667e7fac139978"} Apr 16 
14:19:07.733076 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:07.733054 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9_8685b553-2c3e-4882-a5f8-af6b2c69fafb/storage-initializer/1.log"
Apr 16 14:19:07.733410 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:07.733381 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9_8685b553-2c3e-4882-a5f8-af6b2c69fafb/storage-initializer/0.log"
Apr 16 14:19:07.733455 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:07.733413 2569 generic.go:358] "Generic (PLEG): container finished" podID="8685b553-2c3e-4882-a5f8-af6b2c69fafb" containerID="4bd0e8ad3fb00d68ac87c62a418c1bd00bed4cc9deb061e556667e7fac139978" exitCode=1
Apr 16 14:19:07.733505 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:07.733485 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9" event={"ID":"8685b553-2c3e-4882-a5f8-af6b2c69fafb","Type":"ContainerDied","Data":"4bd0e8ad3fb00d68ac87c62a418c1bd00bed4cc9deb061e556667e7fac139978"}
Apr 16 14:19:07.733544 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:07.733531 2569 scope.go:117] "RemoveContainer" containerID="79f05ea87680d5f6babf8abafe914892e8e1fdb3771180b4a2d5103b3e8322f6"
Apr 16 14:19:07.733881 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:07.733858 2569 scope.go:117] "RemoveContainer" containerID="79f05ea87680d5f6babf8abafe914892e8e1fdb3771180b4a2d5103b3e8322f6"
Apr 16 14:19:07.754297 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:19:07.754267 2569 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_storage-initializer_isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9_kserve-ci-e2e-test_8685b553-2c3e-4882-a5f8-af6b2c69fafb_0 in pod sandbox 6e9e4393cc4e64c41301f5f9c4ffcbb025311042e43d68514a7c1bae94113b78 from index: no such id: '79f05ea87680d5f6babf8abafe914892e8e1fdb3771180b4a2d5103b3e8322f6'" containerID="79f05ea87680d5f6babf8abafe914892e8e1fdb3771180b4a2d5103b3e8322f6"
Apr 16 14:19:07.754375 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:19:07.754318 2569 kuberuntime_container.go:951] "Unhandled Error" err="failed to remove pod init container \"storage-initializer\": rpc error: code = Unknown desc = failed to delete container k8s_storage-initializer_isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9_kserve-ci-e2e-test_8685b553-2c3e-4882-a5f8-af6b2c69fafb_0 in pod sandbox 6e9e4393cc4e64c41301f5f9c4ffcbb025311042e43d68514a7c1bae94113b78 from index: no such id: '79f05ea87680d5f6babf8abafe914892e8e1fdb3771180b4a2d5103b3e8322f6'; Skipping pod \"isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9_kserve-ci-e2e-test(8685b553-2c3e-4882-a5f8-af6b2c69fafb)\"" logger="UnhandledError"
Apr 16 14:19:07.755637 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:19:07.755617 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-initializer\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-initializer pod=isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9_kserve-ci-e2e-test(8685b553-2c3e-4882-a5f8-af6b2c69fafb)\"" pod="kserve-ci-e2e-test/isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9" podUID="8685b553-2c3e-4882-a5f8-af6b2c69fafb"
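
The 14:19:07.755637 entry shows the storage-initializer init container entering CrashLoopBackOff after two consecutive exit-code-1 runs: the kubelet waits 10s before the next restart, doubles the delay on each further failure, and caps it at five minutes (the delay resets after the container runs cleanly for a while). A sketch of that schedule follows, illustrative rather than the kubelet's implementation:

    package main

    import (
    	"fmt"
    	"time"
    )

    // backoffDelay reproduces the delays behind messages like
    // "back-off 10s restarting failed container=storage-initializer":
    // start at 10s, double per consecutive failure, cap at 5m.
    func backoffDelay(failures int) time.Duration {
    	const (
    		initial  = 10 * time.Second
    		maxDelay = 5 * time.Minute
    	)
    	d := initial
    	for i := 1; i < failures; i++ {
    		d *= 2
    		if d >= maxDelay {
    			return maxDelay
    		}
    	}
    	return d
    }

    func main() {
    	for n := 1; n <= 7; n++ {
    		fmt.Printf("failure %d -> back-off %s\n", n, backoffDelay(n))
    	}
    	// failure 1 -> 10s, 2 -> 20s, 3 -> 40s, 4 -> 1m20s,
    	// 5 -> 2m40s, 6 -> 5m0s, 7 -> 5m0s (capped)
    }
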
path="/var/log/pods/kserve-ci-e2e-test_isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9_8685b553-2c3e-4882-a5f8-af6b2c69fafb/storage-initializer/1.log" Apr 16 14:19:17.220276 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:17.220216 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-primary-8047bd-predictor-6f97f8bc7d-4clgf"] Apr 16 14:19:17.220697 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:17.220554 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-primary-8047bd-predictor-6f97f8bc7d-4clgf" podUID="bbade444-703c-48fb-9168-bcbfc7deaea4" containerName="kserve-container" containerID="cri-o://d5df1df91489427104e7bba1919e25d7125f635036e91f0fb974c9b8df9d4af3" gracePeriod=30 Apr 16 14:19:17.280440 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:17.280409 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9"] Apr 16 14:19:17.402582 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:17.402548 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-init-fail-4dad45-predictor-866bdd48d5-h6p56"] Apr 16 14:19:17.407528 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:17.407506 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-init-fail-4dad45-predictor-866bdd48d5-h6p56" Apr 16 14:19:17.409701 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:17.409683 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"fail-s3-secret-4dad45\"" Apr 16 14:19:17.409954 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:17.409928 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"fail-s3-sa-4dad45-dockercfg-7ms4f\"" Apr 16 14:19:17.412553 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:17.412534 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-init-fail-4dad45-predictor-866bdd48d5-h6p56"] Apr 16 14:19:17.420912 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:17.420893 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9_8685b553-2c3e-4882-a5f8-af6b2c69fafb/storage-initializer/1.log" Apr 16 14:19:17.421029 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:17.420947 2569 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9" Apr 16 14:19:17.517326 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:17.517223 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/8685b553-2c3e-4882-a5f8-af6b2c69fafb-cabundle-cert\") pod \"8685b553-2c3e-4882-a5f8-af6b2c69fafb\" (UID: \"8685b553-2c3e-4882-a5f8-af6b2c69fafb\") " Apr 16 14:19:17.517326 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:17.517278 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/8685b553-2c3e-4882-a5f8-af6b2c69fafb-kserve-provision-location\") pod \"8685b553-2c3e-4882-a5f8-af6b2c69fafb\" (UID: \"8685b553-2c3e-4882-a5f8-af6b2c69fafb\") " Apr 16 14:19:17.517587 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:17.517432 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/b9f10d81-6f9c-4742-9378-afa88662b0e3-kserve-provision-location\") pod \"isvc-init-fail-4dad45-predictor-866bdd48d5-h6p56\" (UID: \"b9f10d81-6f9c-4742-9378-afa88662b0e3\") " pod="kserve-ci-e2e-test/isvc-init-fail-4dad45-predictor-866bdd48d5-h6p56" Apr 16 14:19:17.517587 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:17.517532 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/b9f10d81-6f9c-4742-9378-afa88662b0e3-cabundle-cert\") pod \"isvc-init-fail-4dad45-predictor-866bdd48d5-h6p56\" (UID: \"b9f10d81-6f9c-4742-9378-afa88662b0e3\") " pod="kserve-ci-e2e-test/isvc-init-fail-4dad45-predictor-866bdd48d5-h6p56" Apr 16 14:19:17.517587 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:17.517533 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8685b553-2c3e-4882-a5f8-af6b2c69fafb-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "8685b553-2c3e-4882-a5f8-af6b2c69fafb" (UID: "8685b553-2c3e-4882-a5f8-af6b2c69fafb"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 14:19:17.517715 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:17.517624 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8685b553-2c3e-4882-a5f8-af6b2c69fafb-cabundle-cert" (OuterVolumeSpecName: "cabundle-cert") pod "8685b553-2c3e-4882-a5f8-af6b2c69fafb" (UID: "8685b553-2c3e-4882-a5f8-af6b2c69fafb"). InnerVolumeSpecName "cabundle-cert". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 14:19:17.618176 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:17.618142 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/b9f10d81-6f9c-4742-9378-afa88662b0e3-kserve-provision-location\") pod \"isvc-init-fail-4dad45-predictor-866bdd48d5-h6p56\" (UID: \"b9f10d81-6f9c-4742-9378-afa88662b0e3\") " pod="kserve-ci-e2e-test/isvc-init-fail-4dad45-predictor-866bdd48d5-h6p56" Apr 16 14:19:17.618392 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:17.618213 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/b9f10d81-6f9c-4742-9378-afa88662b0e3-cabundle-cert\") pod \"isvc-init-fail-4dad45-predictor-866bdd48d5-h6p56\" (UID: \"b9f10d81-6f9c-4742-9378-afa88662b0e3\") " pod="kserve-ci-e2e-test/isvc-init-fail-4dad45-predictor-866bdd48d5-h6p56" Apr 16 14:19:17.618392 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:17.618276 2569 reconciler_common.go:299] "Volume detached for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/8685b553-2c3e-4882-a5f8-af6b2c69fafb-cabundle-cert\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\"" Apr 16 14:19:17.618392 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:17.618291 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/8685b553-2c3e-4882-a5f8-af6b2c69fafb-kserve-provision-location\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\"" Apr 16 14:19:17.618662 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:17.618639 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/b9f10d81-6f9c-4742-9378-afa88662b0e3-kserve-provision-location\") pod \"isvc-init-fail-4dad45-predictor-866bdd48d5-h6p56\" (UID: \"b9f10d81-6f9c-4742-9378-afa88662b0e3\") " pod="kserve-ci-e2e-test/isvc-init-fail-4dad45-predictor-866bdd48d5-h6p56" Apr 16 14:19:17.618877 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:17.618860 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/b9f10d81-6f9c-4742-9378-afa88662b0e3-cabundle-cert\") pod \"isvc-init-fail-4dad45-predictor-866bdd48d5-h6p56\" (UID: \"b9f10d81-6f9c-4742-9378-afa88662b0e3\") " pod="kserve-ci-e2e-test/isvc-init-fail-4dad45-predictor-866bdd48d5-h6p56" Apr 16 14:19:17.729787 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:17.729760 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-init-fail-4dad45-predictor-866bdd48d5-h6p56" Apr 16 14:19:17.767615 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:17.767549 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9_8685b553-2c3e-4882-a5f8-af6b2c69fafb/storage-initializer/1.log" Apr 16 14:19:17.767743 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:17.767623 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9" event={"ID":"8685b553-2c3e-4882-a5f8-af6b2c69fafb","Type":"ContainerDied","Data":"6e9e4393cc4e64c41301f5f9c4ffcbb025311042e43d68514a7c1bae94113b78"} Apr 16 14:19:17.767743 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:17.767653 2569 scope.go:117] "RemoveContainer" containerID="4bd0e8ad3fb00d68ac87c62a418c1bd00bed4cc9deb061e556667e7fac139978" Apr 16 14:19:17.767743 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:17.767660 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9" Apr 16 14:19:17.804575 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:17.804526 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9"] Apr 16 14:19:17.806433 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:17.806409 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-secondary-8047bd-predictor-7b7bdccd9f-cnrp9"] Apr 16 14:19:17.860679 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:17.860650 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-init-fail-4dad45-predictor-866bdd48d5-h6p56"] Apr 16 14:19:17.863203 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:19:17.863179 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9f10d81_6f9c_4742_9378_afa88662b0e3.slice/crio-58aef9877df5a262b56d2761c1faef420ad83270f81c67fddd8b9d37ed7b4a6e WatchSource:0}: Error finding container 58aef9877df5a262b56d2761c1faef420ad83270f81c67fddd8b9d37ed7b4a6e: Status 404 returned error can't find the container with id 58aef9877df5a262b56d2761c1faef420ad83270f81c67fddd8b9d37ed7b4a6e Apr 16 14:19:18.602438 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:18.602399 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8685b553-2c3e-4882-a5f8-af6b2c69fafb" path="/var/lib/kubelet/pods/8685b553-2c3e-4882-a5f8-af6b2c69fafb/volumes" Apr 16 14:19:18.772999 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:18.772961 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-init-fail-4dad45-predictor-866bdd48d5-h6p56" event={"ID":"b9f10d81-6f9c-4742-9378-afa88662b0e3","Type":"ContainerStarted","Data":"d055577d4ffde624c9818d7ef22c023157130927232160f0dbc1281e581e9036"} Apr 16 14:19:18.772999 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:18.772992 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-init-fail-4dad45-predictor-866bdd48d5-h6p56" event={"ID":"b9f10d81-6f9c-4742-9378-afa88662b0e3","Type":"ContainerStarted","Data":"58aef9877df5a262b56d2761c1faef420ad83270f81c67fddd8b9d37ed7b4a6e"} Apr 16 14:19:21.566165 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:21.566141 2569 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-primary-8047bd-predictor-6f97f8bc7d-4clgf" Apr 16 14:19:21.655460 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:21.655437 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/bbade444-703c-48fb-9168-bcbfc7deaea4-kserve-provision-location\") pod \"bbade444-703c-48fb-9168-bcbfc7deaea4\" (UID: \"bbade444-703c-48fb-9168-bcbfc7deaea4\") " Apr 16 14:19:21.655747 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:21.655728 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbade444-703c-48fb-9168-bcbfc7deaea4-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "bbade444-703c-48fb-9168-bcbfc7deaea4" (UID: "bbade444-703c-48fb-9168-bcbfc7deaea4"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 14:19:21.756312 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:21.756279 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/bbade444-703c-48fb-9168-bcbfc7deaea4-kserve-provision-location\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\"" Apr 16 14:19:21.783714 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:21.783681 2569 generic.go:358] "Generic (PLEG): container finished" podID="bbade444-703c-48fb-9168-bcbfc7deaea4" containerID="d5df1df91489427104e7bba1919e25d7125f635036e91f0fb974c9b8df9d4af3" exitCode=0 Apr 16 14:19:21.783845 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:21.783750 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-primary-8047bd-predictor-6f97f8bc7d-4clgf" event={"ID":"bbade444-703c-48fb-9168-bcbfc7deaea4","Type":"ContainerDied","Data":"d5df1df91489427104e7bba1919e25d7125f635036e91f0fb974c9b8df9d4af3"} Apr 16 14:19:21.783845 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:21.783779 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-primary-8047bd-predictor-6f97f8bc7d-4clgf" event={"ID":"bbade444-703c-48fb-9168-bcbfc7deaea4","Type":"ContainerDied","Data":"9ceeed6cd8fae63f01211d3d933605ed6cc2a13ac8c580f06996c7fd8bca5951"} Apr 16 14:19:21.783845 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:21.783794 2569 scope.go:117] "RemoveContainer" containerID="d5df1df91489427104e7bba1919e25d7125f635036e91f0fb974c9b8df9d4af3" Apr 16 14:19:21.783845 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:21.783754 2569 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-primary-8047bd-predictor-6f97f8bc7d-4clgf" Apr 16 14:19:21.800818 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:21.800763 2569 scope.go:117] "RemoveContainer" containerID="c7bc479b3a4a553b02ab8e7ea394ac962d451757383a318bc8996a6ef119c9f2" Apr 16 14:19:21.806918 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:21.806896 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-primary-8047bd-predictor-6f97f8bc7d-4clgf"] Apr 16 14:19:21.809489 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:21.809468 2569 scope.go:117] "RemoveContainer" containerID="d5df1df91489427104e7bba1919e25d7125f635036e91f0fb974c9b8df9d4af3" Apr 16 14:19:21.809752 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:19:21.809734 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5df1df91489427104e7bba1919e25d7125f635036e91f0fb974c9b8df9d4af3\": container with ID starting with d5df1df91489427104e7bba1919e25d7125f635036e91f0fb974c9b8df9d4af3 not found: ID does not exist" containerID="d5df1df91489427104e7bba1919e25d7125f635036e91f0fb974c9b8df9d4af3" Apr 16 14:19:21.809828 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:21.809765 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5df1df91489427104e7bba1919e25d7125f635036e91f0fb974c9b8df9d4af3"} err="failed to get container status \"d5df1df91489427104e7bba1919e25d7125f635036e91f0fb974c9b8df9d4af3\": rpc error: code = NotFound desc = could not find container \"d5df1df91489427104e7bba1919e25d7125f635036e91f0fb974c9b8df9d4af3\": container with ID starting with d5df1df91489427104e7bba1919e25d7125f635036e91f0fb974c9b8df9d4af3 not found: ID does not exist" Apr 16 14:19:21.809828 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:21.809792 2569 scope.go:117] "RemoveContainer" containerID="c7bc479b3a4a553b02ab8e7ea394ac962d451757383a318bc8996a6ef119c9f2" Apr 16 14:19:21.809929 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:21.809894 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-primary-8047bd-predictor-6f97f8bc7d-4clgf"] Apr 16 14:19:21.810049 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:19:21.810032 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7bc479b3a4a553b02ab8e7ea394ac962d451757383a318bc8996a6ef119c9f2\": container with ID starting with c7bc479b3a4a553b02ab8e7ea394ac962d451757383a318bc8996a6ef119c9f2 not found: ID does not exist" containerID="c7bc479b3a4a553b02ab8e7ea394ac962d451757383a318bc8996a6ef119c9f2" Apr 16 14:19:21.810091 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:21.810054 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7bc479b3a4a553b02ab8e7ea394ac962d451757383a318bc8996a6ef119c9f2"} err="failed to get container status \"c7bc479b3a4a553b02ab8e7ea394ac962d451757383a318bc8996a6ef119c9f2\": rpc error: code = NotFound desc = could not find container \"c7bc479b3a4a553b02ab8e7ea394ac962d451757383a318bc8996a6ef119c9f2\": container with ID starting with c7bc479b3a4a553b02ab8e7ea394ac962d451757383a318bc8996a6ef119c9f2 not found: ID does not exist" Apr 16 14:19:22.602691 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:22.602652 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbade444-703c-48fb-9168-bcbfc7deaea4" 
path="/var/lib/kubelet/pods/bbade444-703c-48fb-9168-bcbfc7deaea4/volumes" Apr 16 14:19:24.795815 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:24.795789 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-init-fail-4dad45-predictor-866bdd48d5-h6p56_b9f10d81-6f9c-4742-9378-afa88662b0e3/storage-initializer/0.log" Apr 16 14:19:24.796205 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:24.795823 2569 generic.go:358] "Generic (PLEG): container finished" podID="b9f10d81-6f9c-4742-9378-afa88662b0e3" containerID="d055577d4ffde624c9818d7ef22c023157130927232160f0dbc1281e581e9036" exitCode=1 Apr 16 14:19:24.796205 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:24.795867 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-init-fail-4dad45-predictor-866bdd48d5-h6p56" event={"ID":"b9f10d81-6f9c-4742-9378-afa88662b0e3","Type":"ContainerDied","Data":"d055577d4ffde624c9818d7ef22c023157130927232160f0dbc1281e581e9036"} Apr 16 14:19:25.800975 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:25.800945 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-init-fail-4dad45-predictor-866bdd48d5-h6p56_b9f10d81-6f9c-4742-9378-afa88662b0e3/storage-initializer/0.log" Apr 16 14:19:25.801471 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:25.801018 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-init-fail-4dad45-predictor-866bdd48d5-h6p56" event={"ID":"b9f10d81-6f9c-4742-9378-afa88662b0e3","Type":"ContainerStarted","Data":"5bc97745e722cf9dc6f75f53684cb2474b569065ead39253caa0e0fa5d895018"} Apr 16 14:19:27.402694 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:27.402615 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-init-fail-4dad45-predictor-866bdd48d5-h6p56"] Apr 16 14:19:27.403080 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:27.402866 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-init-fail-4dad45-predictor-866bdd48d5-h6p56" podUID="b9f10d81-6f9c-4742-9378-afa88662b0e3" containerName="storage-initializer" containerID="cri-o://5bc97745e722cf9dc6f75f53684cb2474b569065ead39253caa0e0fa5d895018" gracePeriod=30 Apr 16 14:19:27.506612 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:27.506583 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-38bb0-predictor-5dbd6487cf-grh4t"] Apr 16 14:19:27.507008 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:27.506995 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bbade444-703c-48fb-9168-bcbfc7deaea4" containerName="kserve-container" Apr 16 14:19:27.507057 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:27.507010 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbade444-703c-48fb-9168-bcbfc7deaea4" containerName="kserve-container" Apr 16 14:19:27.507057 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:27.507020 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bbade444-703c-48fb-9168-bcbfc7deaea4" containerName="storage-initializer" Apr 16 14:19:27.507057 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:27.507026 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbade444-703c-48fb-9168-bcbfc7deaea4" containerName="storage-initializer" Apr 16 14:19:27.507057 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:27.507047 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="8685b553-2c3e-4882-a5f8-af6b2c69fafb" containerName="storage-initializer" Apr 16 14:19:27.507057 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:27.507053 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="8685b553-2c3e-4882-a5f8-af6b2c69fafb" containerName="storage-initializer" Apr 16 14:19:27.507207 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:27.507064 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8685b553-2c3e-4882-a5f8-af6b2c69fafb" containerName="storage-initializer" Apr 16 14:19:27.507207 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:27.507070 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="8685b553-2c3e-4882-a5f8-af6b2c69fafb" containerName="storage-initializer" Apr 16 14:19:27.507207 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:27.507126 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="bbade444-703c-48fb-9168-bcbfc7deaea4" containerName="kserve-container" Apr 16 14:19:27.507207 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:27.507136 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="8685b553-2c3e-4882-a5f8-af6b2c69fafb" containerName="storage-initializer" Apr 16 14:19:27.507207 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:27.507145 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="8685b553-2c3e-4882-a5f8-af6b2c69fafb" containerName="storage-initializer" Apr 16 14:19:27.510221 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:27.510207 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/raw-sklearn-38bb0-predictor-5dbd6487cf-grh4t" Apr 16 14:19:27.512478 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:27.512458 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"default-dockercfg-x6x2v\"" Apr 16 14:19:27.516615 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:27.516596 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-38bb0-predictor-5dbd6487cf-grh4t"] Apr 16 14:19:27.604619 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:27.604584 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/8d46a32f-58c9-49ca-9727-6efb2317fda9-kserve-provision-location\") pod \"raw-sklearn-38bb0-predictor-5dbd6487cf-grh4t\" (UID: \"8d46a32f-58c9-49ca-9727-6efb2317fda9\") " pod="kserve-ci-e2e-test/raw-sklearn-38bb0-predictor-5dbd6487cf-grh4t" Apr 16 14:19:27.705646 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:27.705563 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/8d46a32f-58c9-49ca-9727-6efb2317fda9-kserve-provision-location\") pod \"raw-sklearn-38bb0-predictor-5dbd6487cf-grh4t\" (UID: \"8d46a32f-58c9-49ca-9727-6efb2317fda9\") " pod="kserve-ci-e2e-test/raw-sklearn-38bb0-predictor-5dbd6487cf-grh4t" Apr 16 14:19:27.705930 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:27.705906 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/8d46a32f-58c9-49ca-9727-6efb2317fda9-kserve-provision-location\") pod \"raw-sklearn-38bb0-predictor-5dbd6487cf-grh4t\" (UID: \"8d46a32f-58c9-49ca-9727-6efb2317fda9\") " pod="kserve-ci-e2e-test/raw-sklearn-38bb0-predictor-5dbd6487cf-grh4t" Apr 16 14:19:27.820976 ip-10-0-129-84 kubenswrapper[2569]: I0416 
14:19:27.820952 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/raw-sklearn-38bb0-predictor-5dbd6487cf-grh4t" Apr 16 14:19:27.954167 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:27.954136 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-38bb0-predictor-5dbd6487cf-grh4t"] Apr 16 14:19:27.957227 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:19:27.957158 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d46a32f_58c9_49ca_9727_6efb2317fda9.slice/crio-4724ce4613936f013b6ceb966ebad81659c828d5df5afd22922ef9c5bae474b6 WatchSource:0}: Error finding container 4724ce4613936f013b6ceb966ebad81659c828d5df5afd22922ef9c5bae474b6: Status 404 returned error can't find the container with id 4724ce4613936f013b6ceb966ebad81659c828d5df5afd22922ef9c5bae474b6 Apr 16 14:19:28.812931 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:28.812886 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-38bb0-predictor-5dbd6487cf-grh4t" event={"ID":"8d46a32f-58c9-49ca-9727-6efb2317fda9","Type":"ContainerStarted","Data":"6518a9c9d73bcbdca2a4ac6a7612c001b3f3ad85a78390d0f08f7c1a6e70c38f"} Apr 16 14:19:28.812931 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:28.812931 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-38bb0-predictor-5dbd6487cf-grh4t" event={"ID":"8d46a32f-58c9-49ca-9727-6efb2317fda9","Type":"ContainerStarted","Data":"4724ce4613936f013b6ceb966ebad81659c828d5df5afd22922ef9c5bae474b6"} Apr 16 14:19:29.747410 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:29.747385 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-init-fail-4dad45-predictor-866bdd48d5-h6p56_b9f10d81-6f9c-4742-9378-afa88662b0e3/storage-initializer/1.log" Apr 16 14:19:29.747778 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:29.747764 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-init-fail-4dad45-predictor-866bdd48d5-h6p56_b9f10d81-6f9c-4742-9378-afa88662b0e3/storage-initializer/0.log" Apr 16 14:19:29.747893 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:29.747823 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-init-fail-4dad45-predictor-866bdd48d5-h6p56" Apr 16 14:19:29.817138 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:29.817117 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-init-fail-4dad45-predictor-866bdd48d5-h6p56_b9f10d81-6f9c-4742-9378-afa88662b0e3/storage-initializer/1.log" Apr 16 14:19:29.817488 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:29.817471 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_isvc-init-fail-4dad45-predictor-866bdd48d5-h6p56_b9f10d81-6f9c-4742-9378-afa88662b0e3/storage-initializer/0.log" Apr 16 14:19:29.817527 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:29.817503 2569 generic.go:358] "Generic (PLEG): container finished" podID="b9f10d81-6f9c-4742-9378-afa88662b0e3" containerID="5bc97745e722cf9dc6f75f53684cb2474b569065ead39253caa0e0fa5d895018" exitCode=1 Apr 16 14:19:29.817578 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:29.817563 2569 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-init-fail-4dad45-predictor-866bdd48d5-h6p56" Apr 16 14:19:29.817615 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:29.817593 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-init-fail-4dad45-predictor-866bdd48d5-h6p56" event={"ID":"b9f10d81-6f9c-4742-9378-afa88662b0e3","Type":"ContainerDied","Data":"5bc97745e722cf9dc6f75f53684cb2474b569065ead39253caa0e0fa5d895018"} Apr 16 14:19:29.817653 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:29.817631 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-init-fail-4dad45-predictor-866bdd48d5-h6p56" event={"ID":"b9f10d81-6f9c-4742-9378-afa88662b0e3","Type":"ContainerDied","Data":"58aef9877df5a262b56d2761c1faef420ad83270f81c67fddd8b9d37ed7b4a6e"} Apr 16 14:19:29.817653 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:29.817648 2569 scope.go:117] "RemoveContainer" containerID="5bc97745e722cf9dc6f75f53684cb2474b569065ead39253caa0e0fa5d895018" Apr 16 14:19:29.821771 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:29.821742 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/b9f10d81-6f9c-4742-9378-afa88662b0e3-cabundle-cert\") pod \"b9f10d81-6f9c-4742-9378-afa88662b0e3\" (UID: \"b9f10d81-6f9c-4742-9378-afa88662b0e3\") " Apr 16 14:19:29.821883 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:29.821785 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/b9f10d81-6f9c-4742-9378-afa88662b0e3-kserve-provision-location\") pod \"b9f10d81-6f9c-4742-9378-afa88662b0e3\" (UID: \"b9f10d81-6f9c-4742-9378-afa88662b0e3\") " Apr 16 14:19:29.822073 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:29.822053 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9f10d81-6f9c-4742-9378-afa88662b0e3-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "b9f10d81-6f9c-4742-9378-afa88662b0e3" (UID: "b9f10d81-6f9c-4742-9378-afa88662b0e3"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 14:19:29.822151 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:29.822071 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9f10d81-6f9c-4742-9378-afa88662b0e3-cabundle-cert" (OuterVolumeSpecName: "cabundle-cert") pod "b9f10d81-6f9c-4742-9378-afa88662b0e3" (UID: "b9f10d81-6f9c-4742-9378-afa88662b0e3"). InnerVolumeSpecName "cabundle-cert". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 14:19:29.825421 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:29.825402 2569 scope.go:117] "RemoveContainer" containerID="d055577d4ffde624c9818d7ef22c023157130927232160f0dbc1281e581e9036" Apr 16 14:19:29.835587 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:29.835569 2569 scope.go:117] "RemoveContainer" containerID="5bc97745e722cf9dc6f75f53684cb2474b569065ead39253caa0e0fa5d895018" Apr 16 14:19:29.835815 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:19:29.835798 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bc97745e722cf9dc6f75f53684cb2474b569065ead39253caa0e0fa5d895018\": container with ID starting with 5bc97745e722cf9dc6f75f53684cb2474b569065ead39253caa0e0fa5d895018 not found: ID does not exist" containerID="5bc97745e722cf9dc6f75f53684cb2474b569065ead39253caa0e0fa5d895018" Apr 16 14:19:29.835869 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:29.835821 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bc97745e722cf9dc6f75f53684cb2474b569065ead39253caa0e0fa5d895018"} err="failed to get container status \"5bc97745e722cf9dc6f75f53684cb2474b569065ead39253caa0e0fa5d895018\": rpc error: code = NotFound desc = could not find container \"5bc97745e722cf9dc6f75f53684cb2474b569065ead39253caa0e0fa5d895018\": container with ID starting with 5bc97745e722cf9dc6f75f53684cb2474b569065ead39253caa0e0fa5d895018 not found: ID does not exist" Apr 16 14:19:29.835869 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:29.835838 2569 scope.go:117] "RemoveContainer" containerID="d055577d4ffde624c9818d7ef22c023157130927232160f0dbc1281e581e9036" Apr 16 14:19:29.836064 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:19:29.836042 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d055577d4ffde624c9818d7ef22c023157130927232160f0dbc1281e581e9036\": container with ID starting with d055577d4ffde624c9818d7ef22c023157130927232160f0dbc1281e581e9036 not found: ID does not exist" containerID="d055577d4ffde624c9818d7ef22c023157130927232160f0dbc1281e581e9036" Apr 16 14:19:29.836122 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:29.836073 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d055577d4ffde624c9818d7ef22c023157130927232160f0dbc1281e581e9036"} err="failed to get container status \"d055577d4ffde624c9818d7ef22c023157130927232160f0dbc1281e581e9036\": rpc error: code = NotFound desc = could not find container \"d055577d4ffde624c9818d7ef22c023157130927232160f0dbc1281e581e9036\": container with ID starting with d055577d4ffde624c9818d7ef22c023157130927232160f0dbc1281e581e9036 not found: ID does not exist" Apr 16 14:19:29.923216 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:29.923188 2569 reconciler_common.go:299] "Volume detached for volume \"cabundle-cert\" (UniqueName: \"kubernetes.io/configmap/b9f10d81-6f9c-4742-9378-afa88662b0e3-cabundle-cert\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\"" Apr 16 14:19:29.923216 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:29.923212 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/b9f10d81-6f9c-4742-9378-afa88662b0e3-kserve-provision-location\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\"" Apr 16 14:19:30.151625 ip-10-0-129-84 kubenswrapper[2569]: I0416 
14:19:30.151593 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-init-fail-4dad45-predictor-866bdd48d5-h6p56"] Apr 16 14:19:30.156025 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:30.156000 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-init-fail-4dad45-predictor-866bdd48d5-h6p56"] Apr 16 14:19:30.603726 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:30.603691 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9f10d81-6f9c-4742-9378-afa88662b0e3" path="/var/lib/kubelet/pods/b9f10d81-6f9c-4742-9378-afa88662b0e3/volumes" Apr 16 14:19:32.830132 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:32.830095 2569 generic.go:358] "Generic (PLEG): container finished" podID="8d46a32f-58c9-49ca-9727-6efb2317fda9" containerID="6518a9c9d73bcbdca2a4ac6a7612c001b3f3ad85a78390d0f08f7c1a6e70c38f" exitCode=0 Apr 16 14:19:32.830507 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:32.830167 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-38bb0-predictor-5dbd6487cf-grh4t" event={"ID":"8d46a32f-58c9-49ca-9727-6efb2317fda9","Type":"ContainerDied","Data":"6518a9c9d73bcbdca2a4ac6a7612c001b3f3ad85a78390d0f08f7c1a6e70c38f"} Apr 16 14:19:33.835844 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:33.835740 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-38bb0-predictor-5dbd6487cf-grh4t" event={"ID":"8d46a32f-58c9-49ca-9727-6efb2317fda9","Type":"ContainerStarted","Data":"949579f18f4bdf303c5d70303a6c2559ce49cb050e1252d8ea2dc91820885a88"} Apr 16 14:19:33.865192 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:33.836102 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/raw-sklearn-38bb0-predictor-5dbd6487cf-grh4t" Apr 16 14:19:33.865192 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:33.837113 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-38bb0-predictor-5dbd6487cf-grh4t" podUID="8d46a32f-58c9-49ca-9727-6efb2317fda9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.44:8080: connect: connection refused" Apr 16 14:19:33.865192 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:33.851129 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/raw-sklearn-38bb0-predictor-5dbd6487cf-grh4t" podStartSLOduration=6.851115338 podStartE2EDuration="6.851115338s" podCreationTimestamp="2026-04-16 14:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 14:19:33.850259739 +0000 UTC m=+1247.871547480" watchObservedRunningTime="2026-04-16 14:19:33.851115338 +0000 UTC m=+1247.872403086" Apr 16 14:19:34.840025 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:34.839984 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-38bb0-predictor-5dbd6487cf-grh4t" podUID="8d46a32f-58c9-49ca-9727-6efb2317fda9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.44:8080: connect: connection refused" Apr 16 14:19:44.840352 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:44.840317 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-38bb0-predictor-5dbd6487cf-grh4t" podUID="8d46a32f-58c9-49ca-9727-6efb2317fda9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.44:8080: connect: 
connection refused" Apr 16 14:19:54.840681 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:19:54.840635 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-38bb0-predictor-5dbd6487cf-grh4t" podUID="8d46a32f-58c9-49ca-9727-6efb2317fda9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.44:8080: connect: connection refused" Apr 16 14:20:04.840821 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:04.840780 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-38bb0-predictor-5dbd6487cf-grh4t" podUID="8d46a32f-58c9-49ca-9727-6efb2317fda9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.44:8080: connect: connection refused" Apr 16 14:20:14.840593 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:14.840550 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-38bb0-predictor-5dbd6487cf-grh4t" podUID="8d46a32f-58c9-49ca-9727-6efb2317fda9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.44:8080: connect: connection refused" Apr 16 14:20:24.840130 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:24.840089 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-38bb0-predictor-5dbd6487cf-grh4t" podUID="8d46a32f-58c9-49ca-9727-6efb2317fda9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.44:8080: connect: connection refused" Apr 16 14:20:34.840871 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:34.840832 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-38bb0-predictor-5dbd6487cf-grh4t" podUID="8d46a32f-58c9-49ca-9727-6efb2317fda9" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.44:8080: connect: connection refused" Apr 16 14:20:43.599396 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:43.599367 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/raw-sklearn-38bb0-predictor-5dbd6487cf-grh4t" Apr 16 14:20:47.688315 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:47.687360 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-38bb0-predictor-5dbd6487cf-grh4t"] Apr 16 14:20:47.688315 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:47.687673 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/raw-sklearn-38bb0-predictor-5dbd6487cf-grh4t" podUID="8d46a32f-58c9-49ca-9727-6efb2317fda9" containerName="kserve-container" containerID="cri-o://949579f18f4bdf303c5d70303a6c2559ce49cb050e1252d8ea2dc91820885a88" gracePeriod=30 Apr 16 14:20:47.757003 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:47.756972 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-runtime-2c1fa-predictor-8ff766fcc-gczrf"] Apr 16 14:20:47.757489 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:47.757472 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b9f10d81-6f9c-4742-9378-afa88662b0e3" containerName="storage-initializer" Apr 16 14:20:47.757558 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:47.757492 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9f10d81-6f9c-4742-9378-afa88662b0e3" containerName="storage-initializer" Apr 16 14:20:47.757558 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:47.757509 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="b9f10d81-6f9c-4742-9378-afa88662b0e3" containerName="storage-initializer" Apr 16 14:20:47.757558 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:47.757515 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9f10d81-6f9c-4742-9378-afa88662b0e3" containerName="storage-initializer" Apr 16 14:20:47.757662 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:47.757576 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="b9f10d81-6f9c-4742-9378-afa88662b0e3" containerName="storage-initializer" Apr 16 14:20:47.757719 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:47.757709 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="b9f10d81-6f9c-4742-9378-afa88662b0e3" containerName="storage-initializer" Apr 16 14:20:47.760625 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:47.760608 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/raw-sklearn-runtime-2c1fa-predictor-8ff766fcc-gczrf" Apr 16 14:20:47.769617 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:47.769591 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-runtime-2c1fa-predictor-8ff766fcc-gczrf"] Apr 16 14:20:47.769617 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:47.769594 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/6e89da89-9d77-4803-8992-78f3861653ea-kserve-provision-location\") pod \"raw-sklearn-runtime-2c1fa-predictor-8ff766fcc-gczrf\" (UID: \"6e89da89-9d77-4803-8992-78f3861653ea\") " pod="kserve-ci-e2e-test/raw-sklearn-runtime-2c1fa-predictor-8ff766fcc-gczrf" Apr 16 14:20:47.870514 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:47.870480 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/6e89da89-9d77-4803-8992-78f3861653ea-kserve-provision-location\") pod \"raw-sklearn-runtime-2c1fa-predictor-8ff766fcc-gczrf\" (UID: \"6e89da89-9d77-4803-8992-78f3861653ea\") " pod="kserve-ci-e2e-test/raw-sklearn-runtime-2c1fa-predictor-8ff766fcc-gczrf" Apr 16 14:20:47.870822 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:47.870804 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/6e89da89-9d77-4803-8992-78f3861653ea-kserve-provision-location\") pod \"raw-sklearn-runtime-2c1fa-predictor-8ff766fcc-gczrf\" (UID: \"6e89da89-9d77-4803-8992-78f3861653ea\") " pod="kserve-ci-e2e-test/raw-sklearn-runtime-2c1fa-predictor-8ff766fcc-gczrf" Apr 16 14:20:48.071289 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:48.071197 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/raw-sklearn-runtime-2c1fa-predictor-8ff766fcc-gczrf" Apr 16 14:20:48.196271 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:48.196183 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-runtime-2c1fa-predictor-8ff766fcc-gczrf"] Apr 16 14:20:48.199424 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:20:48.199395 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e89da89_9d77_4803_8992_78f3861653ea.slice/crio-a3a1cb6295abbbf45b07e2fe063f5b5eeca2367d0c43cf416bb3db47a961fea3 WatchSource:0}: Error finding container a3a1cb6295abbbf45b07e2fe063f5b5eeca2367d0c43cf416bb3db47a961fea3: Status 404 returned error can't find the container with id a3a1cb6295abbbf45b07e2fe063f5b5eeca2367d0c43cf416bb3db47a961fea3 Apr 16 14:20:49.097344 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:49.097309 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-runtime-2c1fa-predictor-8ff766fcc-gczrf" event={"ID":"6e89da89-9d77-4803-8992-78f3861653ea","Type":"ContainerStarted","Data":"0f6dac98f0dd2f36130ca2b7fe813d09ff45bcc92e6dcd2a55a19f7363bbf23e"} Apr 16 14:20:49.097344 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:49.097347 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-runtime-2c1fa-predictor-8ff766fcc-gczrf" event={"ID":"6e89da89-9d77-4803-8992-78f3861653ea","Type":"ContainerStarted","Data":"a3a1cb6295abbbf45b07e2fe063f5b5eeca2367d0c43cf416bb3db47a961fea3"} Apr 16 14:20:51.730476 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:51.730449 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/raw-sklearn-38bb0-predictor-5dbd6487cf-grh4t" Apr 16 14:20:51.808814 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:51.808735 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/8d46a32f-58c9-49ca-9727-6efb2317fda9-kserve-provision-location\") pod \"8d46a32f-58c9-49ca-9727-6efb2317fda9\" (UID: \"8d46a32f-58c9-49ca-9727-6efb2317fda9\") " Apr 16 14:20:51.809082 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:51.809059 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d46a32f-58c9-49ca-9727-6efb2317fda9-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "8d46a32f-58c9-49ca-9727-6efb2317fda9" (UID: "8d46a32f-58c9-49ca-9727-6efb2317fda9"). InnerVolumeSpecName "kserve-provision-location". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 14:20:51.910053 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:51.910024 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/8d46a32f-58c9-49ca-9727-6efb2317fda9-kserve-provision-location\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\"" Apr 16 14:20:52.109744 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:52.109665 2569 generic.go:358] "Generic (PLEG): container finished" podID="8d46a32f-58c9-49ca-9727-6efb2317fda9" containerID="949579f18f4bdf303c5d70303a6c2559ce49cb050e1252d8ea2dc91820885a88" exitCode=0 Apr 16 14:20:52.109744 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:52.109735 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-38bb0-predictor-5dbd6487cf-grh4t" event={"ID":"8d46a32f-58c9-49ca-9727-6efb2317fda9","Type":"ContainerDied","Data":"949579f18f4bdf303c5d70303a6c2559ce49cb050e1252d8ea2dc91820885a88"} Apr 16 14:20:52.109925 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:52.109742 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/raw-sklearn-38bb0-predictor-5dbd6487cf-grh4t" Apr 16 14:20:52.109925 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:52.109763 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-38bb0-predictor-5dbd6487cf-grh4t" event={"ID":"8d46a32f-58c9-49ca-9727-6efb2317fda9","Type":"ContainerDied","Data":"4724ce4613936f013b6ceb966ebad81659c828d5df5afd22922ef9c5bae474b6"} Apr 16 14:20:52.109925 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:52.109782 2569 scope.go:117] "RemoveContainer" containerID="949579f18f4bdf303c5d70303a6c2559ce49cb050e1252d8ea2dc91820885a88" Apr 16 14:20:52.111110 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:52.111085 2569 generic.go:358] "Generic (PLEG): container finished" podID="6e89da89-9d77-4803-8992-78f3861653ea" containerID="0f6dac98f0dd2f36130ca2b7fe813d09ff45bcc92e6dcd2a55a19f7363bbf23e" exitCode=0 Apr 16 14:20:52.111207 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:52.111160 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-runtime-2c1fa-predictor-8ff766fcc-gczrf" event={"ID":"6e89da89-9d77-4803-8992-78f3861653ea","Type":"ContainerDied","Data":"0f6dac98f0dd2f36130ca2b7fe813d09ff45bcc92e6dcd2a55a19f7363bbf23e"} Apr 16 14:20:52.118732 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:52.118715 2569 scope.go:117] "RemoveContainer" containerID="6518a9c9d73bcbdca2a4ac6a7612c001b3f3ad85a78390d0f08f7c1a6e70c38f" Apr 16 14:20:52.126209 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:52.126190 2569 scope.go:117] "RemoveContainer" containerID="949579f18f4bdf303c5d70303a6c2559ce49cb050e1252d8ea2dc91820885a88" Apr 16 14:20:52.126501 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:20:52.126478 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"949579f18f4bdf303c5d70303a6c2559ce49cb050e1252d8ea2dc91820885a88\": container with ID starting with 949579f18f4bdf303c5d70303a6c2559ce49cb050e1252d8ea2dc91820885a88 not found: ID does not exist" containerID="949579f18f4bdf303c5d70303a6c2559ce49cb050e1252d8ea2dc91820885a88" Apr 16 14:20:52.126567 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:52.126514 2569 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"949579f18f4bdf303c5d70303a6c2559ce49cb050e1252d8ea2dc91820885a88"} err="failed to get container status \"949579f18f4bdf303c5d70303a6c2559ce49cb050e1252d8ea2dc91820885a88\": rpc error: code = NotFound desc = could not find container \"949579f18f4bdf303c5d70303a6c2559ce49cb050e1252d8ea2dc91820885a88\": container with ID starting with 949579f18f4bdf303c5d70303a6c2559ce49cb050e1252d8ea2dc91820885a88 not found: ID does not exist" Apr 16 14:20:52.126567 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:52.126541 2569 scope.go:117] "RemoveContainer" containerID="6518a9c9d73bcbdca2a4ac6a7612c001b3f3ad85a78390d0f08f7c1a6e70c38f" Apr 16 14:20:52.126798 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:20:52.126781 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6518a9c9d73bcbdca2a4ac6a7612c001b3f3ad85a78390d0f08f7c1a6e70c38f\": container with ID starting with 6518a9c9d73bcbdca2a4ac6a7612c001b3f3ad85a78390d0f08f7c1a6e70c38f not found: ID does not exist" containerID="6518a9c9d73bcbdca2a4ac6a7612c001b3f3ad85a78390d0f08f7c1a6e70c38f" Apr 16 14:20:52.126858 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:52.126805 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6518a9c9d73bcbdca2a4ac6a7612c001b3f3ad85a78390d0f08f7c1a6e70c38f"} err="failed to get container status \"6518a9c9d73bcbdca2a4ac6a7612c001b3f3ad85a78390d0f08f7c1a6e70c38f\": rpc error: code = NotFound desc = could not find container \"6518a9c9d73bcbdca2a4ac6a7612c001b3f3ad85a78390d0f08f7c1a6e70c38f\": container with ID starting with 6518a9c9d73bcbdca2a4ac6a7612c001b3f3ad85a78390d0f08f7c1a6e70c38f not found: ID does not exist" Apr 16 14:20:52.142312 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:52.142287 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-38bb0-predictor-5dbd6487cf-grh4t"] Apr 16 14:20:52.144704 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:52.144677 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-38bb0-predictor-5dbd6487cf-grh4t"] Apr 16 14:20:52.603952 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:52.603918 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d46a32f-58c9-49ca-9727-6efb2317fda9" path="/var/lib/kubelet/pods/8d46a32f-58c9-49ca-9727-6efb2317fda9/volumes" Apr 16 14:20:53.118170 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:53.118136 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-runtime-2c1fa-predictor-8ff766fcc-gczrf" event={"ID":"6e89da89-9d77-4803-8992-78f3861653ea","Type":"ContainerStarted","Data":"0d129fcf9fec046b7df1384e47fd2edc537416a12a3fd0b5950d0448d5191e83"} Apr 16 14:20:53.118600 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:53.118419 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/raw-sklearn-runtime-2c1fa-predictor-8ff766fcc-gczrf" Apr 16 14:20:53.119814 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:53.119792 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-runtime-2c1fa-predictor-8ff766fcc-gczrf" podUID="6e89da89-9d77-4803-8992-78f3861653ea" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.45:8080: connect: connection refused" Apr 16 14:20:53.134171 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:53.134122 2569 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="kserve-ci-e2e-test/raw-sklearn-runtime-2c1fa-predictor-8ff766fcc-gczrf" podStartSLOduration=6.134110681 podStartE2EDuration="6.134110681s" podCreationTimestamp="2026-04-16 14:20:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 14:20:53.132307844 +0000 UTC m=+1327.153595592" watchObservedRunningTime="2026-04-16 14:20:53.134110681 +0000 UTC m=+1327.155398428" Apr 16 14:20:54.122628 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:20:54.122592 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-runtime-2c1fa-predictor-8ff766fcc-gczrf" podUID="6e89da89-9d77-4803-8992-78f3861653ea" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.45:8080: connect: connection refused" Apr 16 14:21:04.122878 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:21:04.122789 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-runtime-2c1fa-predictor-8ff766fcc-gczrf" podUID="6e89da89-9d77-4803-8992-78f3861653ea" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.45:8080: connect: connection refused" Apr 16 14:21:14.123564 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:21:14.123516 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-runtime-2c1fa-predictor-8ff766fcc-gczrf" podUID="6e89da89-9d77-4803-8992-78f3861653ea" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.45:8080: connect: connection refused" Apr 16 14:21:24.123417 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:21:24.123364 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-runtime-2c1fa-predictor-8ff766fcc-gczrf" podUID="6e89da89-9d77-4803-8992-78f3861653ea" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.45:8080: connect: connection refused" Apr 16 14:21:34.123562 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:21:34.123517 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-runtime-2c1fa-predictor-8ff766fcc-gczrf" podUID="6e89da89-9d77-4803-8992-78f3861653ea" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.45:8080: connect: connection refused" Apr 16 14:21:44.122898 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:21:44.122861 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-runtime-2c1fa-predictor-8ff766fcc-gczrf" podUID="6e89da89-9d77-4803-8992-78f3861653ea" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.45:8080: connect: connection refused" Apr 16 14:21:54.122974 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:21:54.122931 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-runtime-2c1fa-predictor-8ff766fcc-gczrf" podUID="6e89da89-9d77-4803-8992-78f3861653ea" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.45:8080: connect: connection refused" Apr 16 14:22:00.603111 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:00.603084 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/raw-sklearn-runtime-2c1fa-predictor-8ff766fcc-gczrf" Apr 16 14:22:07.854996 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:07.854965 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["kserve-ci-e2e-test/raw-sklearn-runtime-2c1fa-predictor-8ff766fcc-gczrf"] Apr 16 14:22:07.855434 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:07.855221 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/raw-sklearn-runtime-2c1fa-predictor-8ff766fcc-gczrf" podUID="6e89da89-9d77-4803-8992-78f3861653ea" containerName="kserve-container" containerID="cri-o://0d129fcf9fec046b7df1384e47fd2edc537416a12a3fd0b5950d0448d5191e83" gracePeriod=30 Apr 16 14:22:10.599127 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:10.599078 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/raw-sklearn-runtime-2c1fa-predictor-8ff766fcc-gczrf" podUID="6e89da89-9d77-4803-8992-78f3861653ea" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.45:8080: connect: connection refused" Apr 16 14:22:11.900500 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:11.900472 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/raw-sklearn-runtime-2c1fa-predictor-8ff766fcc-gczrf" Apr 16 14:22:11.970002 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:11.969968 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/6e89da89-9d77-4803-8992-78f3861653ea-kserve-provision-location\") pod \"6e89da89-9d77-4803-8992-78f3861653ea\" (UID: \"6e89da89-9d77-4803-8992-78f3861653ea\") " Apr 16 14:22:11.970362 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:11.970341 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e89da89-9d77-4803-8992-78f3861653ea-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "6e89da89-9d77-4803-8992-78f3861653ea" (UID: "6e89da89-9d77-4803-8992-78f3861653ea"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 14:22:12.070688 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:12.070603 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/6e89da89-9d77-4803-8992-78f3861653ea-kserve-provision-location\") on node \"ip-10-0-129-84.ec2.internal\" DevicePath \"\"" Apr 16 14:22:12.400940 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:12.400910 2569 generic.go:358] "Generic (PLEG): container finished" podID="6e89da89-9d77-4803-8992-78f3861653ea" containerID="0d129fcf9fec046b7df1384e47fd2edc537416a12a3fd0b5950d0448d5191e83" exitCode=0 Apr 16 14:22:12.401134 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:12.400991 2569 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/raw-sklearn-runtime-2c1fa-predictor-8ff766fcc-gczrf" Apr 16 14:22:12.401134 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:12.400991 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-runtime-2c1fa-predictor-8ff766fcc-gczrf" event={"ID":"6e89da89-9d77-4803-8992-78f3861653ea","Type":"ContainerDied","Data":"0d129fcf9fec046b7df1384e47fd2edc537416a12a3fd0b5950d0448d5191e83"} Apr 16 14:22:12.401134 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:12.401091 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/raw-sklearn-runtime-2c1fa-predictor-8ff766fcc-gczrf" event={"ID":"6e89da89-9d77-4803-8992-78f3861653ea","Type":"ContainerDied","Data":"a3a1cb6295abbbf45b07e2fe063f5b5eeca2367d0c43cf416bb3db47a961fea3"} Apr 16 14:22:12.401134 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:12.401108 2569 scope.go:117] "RemoveContainer" containerID="0d129fcf9fec046b7df1384e47fd2edc537416a12a3fd0b5950d0448d5191e83" Apr 16 14:22:12.412053 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:12.412036 2569 scope.go:117] "RemoveContainer" containerID="0f6dac98f0dd2f36130ca2b7fe813d09ff45bcc92e6dcd2a55a19f7363bbf23e" Apr 16 14:22:12.419894 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:12.419878 2569 scope.go:117] "RemoveContainer" containerID="0d129fcf9fec046b7df1384e47fd2edc537416a12a3fd0b5950d0448d5191e83" Apr 16 14:22:12.420144 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:22:12.420128 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d129fcf9fec046b7df1384e47fd2edc537416a12a3fd0b5950d0448d5191e83\": container with ID starting with 0d129fcf9fec046b7df1384e47fd2edc537416a12a3fd0b5950d0448d5191e83 not found: ID does not exist" containerID="0d129fcf9fec046b7df1384e47fd2edc537416a12a3fd0b5950d0448d5191e83" Apr 16 14:22:12.420193 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:12.420151 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d129fcf9fec046b7df1384e47fd2edc537416a12a3fd0b5950d0448d5191e83"} err="failed to get container status \"0d129fcf9fec046b7df1384e47fd2edc537416a12a3fd0b5950d0448d5191e83\": rpc error: code = NotFound desc = could not find container \"0d129fcf9fec046b7df1384e47fd2edc537416a12a3fd0b5950d0448d5191e83\": container with ID starting with 0d129fcf9fec046b7df1384e47fd2edc537416a12a3fd0b5950d0448d5191e83 not found: ID does not exist" Apr 16 14:22:12.420193 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:12.420167 2569 scope.go:117] "RemoveContainer" containerID="0f6dac98f0dd2f36130ca2b7fe813d09ff45bcc92e6dcd2a55a19f7363bbf23e" Apr 16 14:22:12.420425 ip-10-0-129-84 kubenswrapper[2569]: E0416 14:22:12.420404 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f6dac98f0dd2f36130ca2b7fe813d09ff45bcc92e6dcd2a55a19f7363bbf23e\": container with ID starting with 0f6dac98f0dd2f36130ca2b7fe813d09ff45bcc92e6dcd2a55a19f7363bbf23e not found: ID does not exist" containerID="0f6dac98f0dd2f36130ca2b7fe813d09ff45bcc92e6dcd2a55a19f7363bbf23e" Apr 16 14:22:12.420473 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:12.420431 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f6dac98f0dd2f36130ca2b7fe813d09ff45bcc92e6dcd2a55a19f7363bbf23e"} err="failed to get container status \"0f6dac98f0dd2f36130ca2b7fe813d09ff45bcc92e6dcd2a55a19f7363bbf23e\": rpc 
error: code = NotFound desc = could not find container \"0f6dac98f0dd2f36130ca2b7fe813d09ff45bcc92e6dcd2a55a19f7363bbf23e\": container with ID starting with 0f6dac98f0dd2f36130ca2b7fe813d09ff45bcc92e6dcd2a55a19f7363bbf23e not found: ID does not exist" Apr 16 14:22:12.425566 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:12.425548 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-runtime-2c1fa-predictor-8ff766fcc-gczrf"] Apr 16 14:22:12.428742 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:12.428723 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/raw-sklearn-runtime-2c1fa-predictor-8ff766fcc-gczrf"] Apr 16 14:22:12.603322 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:12.603293 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e89da89-9d77-4803-8992-78f3861653ea" path="/var/lib/kubelet/pods/6e89da89-9d77-4803-8992-78f3861653ea/volumes" Apr 16 14:22:32.168845 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:32.168771 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-p6v84/must-gather-fxpdn"] Apr 16 14:22:32.169297 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:32.169140 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6e89da89-9d77-4803-8992-78f3861653ea" containerName="kserve-container" Apr 16 14:22:32.169297 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:32.169150 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e89da89-9d77-4803-8992-78f3861653ea" containerName="kserve-container" Apr 16 14:22:32.169297 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:32.169160 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6e89da89-9d77-4803-8992-78f3861653ea" containerName="storage-initializer" Apr 16 14:22:32.169297 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:32.169166 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e89da89-9d77-4803-8992-78f3861653ea" containerName="storage-initializer" Apr 16 14:22:32.169297 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:32.169173 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8d46a32f-58c9-49ca-9727-6efb2317fda9" containerName="storage-initializer" Apr 16 14:22:32.169297 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:32.169179 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d46a32f-58c9-49ca-9727-6efb2317fda9" containerName="storage-initializer" Apr 16 14:22:32.169297 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:32.169195 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8d46a32f-58c9-49ca-9727-6efb2317fda9" containerName="kserve-container" Apr 16 14:22:32.169297 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:32.169201 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d46a32f-58c9-49ca-9727-6efb2317fda9" containerName="kserve-container" Apr 16 14:22:32.169297 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:32.169272 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="6e89da89-9d77-4803-8992-78f3861653ea" containerName="kserve-container" Apr 16 14:22:32.169297 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:32.169282 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="8d46a32f-58c9-49ca-9727-6efb2317fda9" containerName="kserve-container" Apr 16 14:22:32.173899 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:32.173884 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p6v84/must-gather-fxpdn" Apr 16 14:22:32.176407 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:32.176385 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-p6v84\"/\"kube-root-ca.crt\"" Apr 16 14:22:32.177139 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:32.177117 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-p6v84\"/\"default-dockercfg-hg4wh\"" Apr 16 14:22:32.177195 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:32.177118 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-p6v84\"/\"openshift-service-ca.crt\"" Apr 16 14:22:32.181139 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:32.181117 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-p6v84/must-gather-fxpdn"] Apr 16 14:22:32.340517 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:32.340477 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9cql\" (UniqueName: \"kubernetes.io/projected/d1f155c2-6e42-4adc-852d-487adff03b0d-kube-api-access-g9cql\") pod \"must-gather-fxpdn\" (UID: \"d1f155c2-6e42-4adc-852d-487adff03b0d\") " pod="openshift-must-gather-p6v84/must-gather-fxpdn" Apr 16 14:22:32.340690 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:32.340535 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d1f155c2-6e42-4adc-852d-487adff03b0d-must-gather-output\") pod \"must-gather-fxpdn\" (UID: \"d1f155c2-6e42-4adc-852d-487adff03b0d\") " pod="openshift-must-gather-p6v84/must-gather-fxpdn" Apr 16 14:22:32.440957 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:32.440890 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d1f155c2-6e42-4adc-852d-487adff03b0d-must-gather-output\") pod \"must-gather-fxpdn\" (UID: \"d1f155c2-6e42-4adc-852d-487adff03b0d\") " pod="openshift-must-gather-p6v84/must-gather-fxpdn" Apr 16 14:22:32.441089 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:32.440963 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g9cql\" (UniqueName: \"kubernetes.io/projected/d1f155c2-6e42-4adc-852d-487adff03b0d-kube-api-access-g9cql\") pod \"must-gather-fxpdn\" (UID: \"d1f155c2-6e42-4adc-852d-487adff03b0d\") " pod="openshift-must-gather-p6v84/must-gather-fxpdn" Apr 16 14:22:32.441221 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:32.441202 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d1f155c2-6e42-4adc-852d-487adff03b0d-must-gather-output\") pod \"must-gather-fxpdn\" (UID: \"d1f155c2-6e42-4adc-852d-487adff03b0d\") " pod="openshift-must-gather-p6v84/must-gather-fxpdn" Apr 16 14:22:32.448922 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:32.448903 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9cql\" (UniqueName: \"kubernetes.io/projected/d1f155c2-6e42-4adc-852d-487adff03b0d-kube-api-access-g9cql\") pod \"must-gather-fxpdn\" (UID: \"d1f155c2-6e42-4adc-852d-487adff03b0d\") " pod="openshift-must-gather-p6v84/must-gather-fxpdn" Apr 16 14:22:32.492105 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:32.492083 2569 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-must-gather-p6v84/must-gather-fxpdn" Apr 16 14:22:32.613102 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:32.613075 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-p6v84/must-gather-fxpdn"] Apr 16 14:22:32.615634 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:22:32.615607 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1f155c2_6e42_4adc_852d_487adff03b0d.slice/crio-07b5f5e7a5e9a361df3a19d6a1bf2e8848c7c9b1a9819e2b144c31909b27e86b WatchSource:0}: Error finding container 07b5f5e7a5e9a361df3a19d6a1bf2e8848c7c9b1a9819e2b144c31909b27e86b: Status 404 returned error can't find the container with id 07b5f5e7a5e9a361df3a19d6a1bf2e8848c7c9b1a9819e2b144c31909b27e86b Apr 16 14:22:33.473402 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:33.473372 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p6v84/must-gather-fxpdn" event={"ID":"d1f155c2-6e42-4adc-852d-487adff03b0d","Type":"ContainerStarted","Data":"07b5f5e7a5e9a361df3a19d6a1bf2e8848c7c9b1a9819e2b144c31909b27e86b"} Apr 16 14:22:34.479064 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:34.479021 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p6v84/must-gather-fxpdn" event={"ID":"d1f155c2-6e42-4adc-852d-487adff03b0d","Type":"ContainerStarted","Data":"b27ad6fe82a7f8c99cb41908cd31a4bda0436c0d4857d582acbdac7d7c222df1"} Apr 16 14:22:34.479064 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:34.479064 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p6v84/must-gather-fxpdn" event={"ID":"d1f155c2-6e42-4adc-852d-487adff03b0d","Type":"ContainerStarted","Data":"86c3afed8c2f2d306d58693d59a25df6de937120ffaf982a6a5e3d206091e7fd"} Apr 16 14:22:34.493526 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:34.493483 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-p6v84/must-gather-fxpdn" podStartSLOduration=1.7275143160000002 podStartE2EDuration="2.493468862s" podCreationTimestamp="2026-04-16 14:22:32 +0000 UTC" firstStartedPulling="2026-04-16 14:22:32.617356224 +0000 UTC m=+1426.638643951" lastFinishedPulling="2026-04-16 14:22:33.383310758 +0000 UTC m=+1427.404598497" observedRunningTime="2026-04-16 14:22:34.491965122 +0000 UTC m=+1428.513252872" watchObservedRunningTime="2026-04-16 14:22:34.493468862 +0000 UTC m=+1428.514756609" Apr 16 14:22:34.846126 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:34.846046 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_global-pull-secret-syncer-mqpls_be2e3878-e04d-4a67-b1b8-fcab8f431c5b/global-pull-secret-syncer/0.log" Apr 16 14:22:34.948487 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:34.948458 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_konnectivity-agent-qnnbt_b86b831d-508f-4fd4-9a93-21d9e8e21be7/konnectivity-agent/0.log" Apr 16 14:22:34.967131 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:34.967100 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_kube-apiserver-proxy-ip-10-0-129-84.ec2.internal_8a9e06fbe231682ac4f4d6b934aa0a28/haproxy/0.log" Apr 16 14:22:38.490204 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:38.490161 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_1e761084-6a61-409d-8d29-cf8c7fc2c750/alertmanager/0.log" Apr 16 14:22:38.511467 ip-10-0-129-84 
kubenswrapper[2569]: I0416 14:22:38.511436 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_1e761084-6a61-409d-8d29-cf8c7fc2c750/config-reloader/0.log" Apr 16 14:22:38.535865 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:38.535840 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_1e761084-6a61-409d-8d29-cf8c7fc2c750/kube-rbac-proxy-web/0.log" Apr 16 14:22:38.557372 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:38.557338 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_1e761084-6a61-409d-8d29-cf8c7fc2c750/kube-rbac-proxy/0.log" Apr 16 14:22:38.578159 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:38.578124 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_1e761084-6a61-409d-8d29-cf8c7fc2c750/kube-rbac-proxy-metric/0.log" Apr 16 14:22:38.599951 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:38.599923 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_1e761084-6a61-409d-8d29-cf8c7fc2c750/prom-label-proxy/0.log" Apr 16 14:22:38.621802 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:38.621773 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_1e761084-6a61-409d-8d29-cf8c7fc2c750/init-config-reloader/0.log" Apr 16 14:22:38.670000 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:38.669968 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_cluster-monitoring-operator-6667474d89-jzj8g_fd96583f-aa32-457d-81a3-f0f6d9afe9d9/cluster-monitoring-operator/0.log" Apr 16 14:22:38.759106 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:38.759036 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_metrics-server-68dc684744-wk424_ada681f4-20a2-4c44-96bb-7e711b04a8dc/metrics-server/0.log" Apr 16 14:22:38.784581 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:38.784560 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_monitoring-plugin-5876b4bbc7-t4hwq_30c840a8-6433-4186-9b21-6cae0b492905/monitoring-plugin/0.log" Apr 16 14:22:38.883823 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:38.883777 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-js5md_206c995a-8df6-47f1-b70d-fae559390324/node-exporter/0.log" Apr 16 14:22:38.903839 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:38.903799 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-js5md_206c995a-8df6-47f1-b70d-fae559390324/kube-rbac-proxy/0.log" Apr 16 14:22:38.924687 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:38.924641 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-js5md_206c995a-8df6-47f1-b70d-fae559390324/init-textfile/0.log" Apr 16 14:22:39.257721 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:39.257676 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-78f957474d-68dtq_b2210f1f-3fa8-41ec-83c9-27e559ff4604/prometheus-operator/0.log" Apr 16 14:22:39.275189 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:39.275150 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-78f957474d-68dtq_b2210f1f-3fa8-41ec-83c9-27e559ff4604/kube-rbac-proxy/0.log" Apr 16 14:22:39.298907 
ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:39.298847 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-admission-webhook-9cb97cd87-xrbxv_345f9e1d-ee2b-421c-b6b5-3b0ac2ee70da/prometheus-operator-admission-webhook/0.log" Apr 16 14:22:39.324774 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:39.324750 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-58f8588d54-vj2dk_50c59d5c-1c79-4bc3-925e-84828e70d51b/telemeter-client/0.log" Apr 16 14:22:39.344459 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:39.344428 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-58f8588d54-vj2dk_50c59d5c-1c79-4bc3-925e-84828e70d51b/reload/0.log" Apr 16 14:22:39.365622 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:39.365581 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-58f8588d54-vj2dk_50c59d5c-1c79-4bc3-925e-84828e70d51b/kube-rbac-proxy/0.log" Apr 16 14:22:39.396987 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:39.396956 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5789679d96-mrqgr_e188b038-b59c-4fb7-b257-d08cf56b2473/thanos-query/0.log" Apr 16 14:22:39.416919 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:39.416892 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5789679d96-mrqgr_e188b038-b59c-4fb7-b257-d08cf56b2473/kube-rbac-proxy-web/0.log" Apr 16 14:22:39.441058 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:39.441026 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5789679d96-mrqgr_e188b038-b59c-4fb7-b257-d08cf56b2473/kube-rbac-proxy/0.log" Apr 16 14:22:39.460627 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:39.460600 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5789679d96-mrqgr_e188b038-b59c-4fb7-b257-d08cf56b2473/prom-label-proxy/0.log" Apr 16 14:22:39.481442 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:39.481397 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5789679d96-mrqgr_e188b038-b59c-4fb7-b257-d08cf56b2473/kube-rbac-proxy-rules/0.log" Apr 16 14:22:39.506193 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:39.506139 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5789679d96-mrqgr_e188b038-b59c-4fb7-b257-d08cf56b2473/kube-rbac-proxy-metrics/0.log" Apr 16 14:22:40.690099 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:40.690066 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-console_networking-console-plugin-5cb6cf4cb4-fdps6_5a4977aa-a63b-46b6-a23c-4924a58855f4/networking-console-plugin/0.log" Apr 16 14:22:41.087751 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:41.087687 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-d87b8d5fc-27q26_5057171c-9c0f-4741-b8ce-987c40eb447d/console-operator/2.log" Apr 16 14:22:41.093125 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:41.093097 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-d87b8d5fc-27q26_5057171c-9c0f-4741-b8ce-987c40eb447d/console-operator/3.log" Apr 16 14:22:41.473619 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:41.473589 2569 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-console_downloads-586b57c7b4-78mff_89931564-c377-407d-bb9d-ccfd758359f2/download-server/0.log" Apr 16 14:22:41.843950 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:41.843876 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_volume-data-source-validator-7d955d5dd4-9jvhw_e56c6526-2094-4c6e-8688-b4a0daf73cb6/volume-data-source-validator/0.log" Apr 16 14:22:41.968758 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:41.968715 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-p6v84/perf-node-gather-daemonset-qqpj4"] Apr 16 14:22:41.973578 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:41.973558 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p6v84/perf-node-gather-daemonset-qqpj4" Apr 16 14:22:41.980813 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:41.980718 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-p6v84/perf-node-gather-daemonset-qqpj4"] Apr 16 14:22:42.030665 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:42.030621 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d69d0bbc-5611-4489-b3c4-77decc6602bf-lib-modules\") pod \"perf-node-gather-daemonset-qqpj4\" (UID: \"d69d0bbc-5611-4489-b3c4-77decc6602bf\") " pod="openshift-must-gather-p6v84/perf-node-gather-daemonset-qqpj4" Apr 16 14:22:42.030910 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:42.030681 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/d69d0bbc-5611-4489-b3c4-77decc6602bf-proc\") pod \"perf-node-gather-daemonset-qqpj4\" (UID: \"d69d0bbc-5611-4489-b3c4-77decc6602bf\") " pod="openshift-must-gather-p6v84/perf-node-gather-daemonset-qqpj4" Apr 16 14:22:42.030910 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:42.030714 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7pfw\" (UniqueName: \"kubernetes.io/projected/d69d0bbc-5611-4489-b3c4-77decc6602bf-kube-api-access-z7pfw\") pod \"perf-node-gather-daemonset-qqpj4\" (UID: \"d69d0bbc-5611-4489-b3c4-77decc6602bf\") " pod="openshift-must-gather-p6v84/perf-node-gather-daemonset-qqpj4" Apr 16 14:22:42.030910 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:42.030773 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d69d0bbc-5611-4489-b3c4-77decc6602bf-sys\") pod \"perf-node-gather-daemonset-qqpj4\" (UID: \"d69d0bbc-5611-4489-b3c4-77decc6602bf\") " pod="openshift-must-gather-p6v84/perf-node-gather-daemonset-qqpj4" Apr 16 14:22:42.030910 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:42.030815 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/d69d0bbc-5611-4489-b3c4-77decc6602bf-podres\") pod \"perf-node-gather-daemonset-qqpj4\" (UID: \"d69d0bbc-5611-4489-b3c4-77decc6602bf\") " pod="openshift-must-gather-p6v84/perf-node-gather-daemonset-qqpj4" Apr 16 14:22:42.131908 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:42.131873 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d69d0bbc-5611-4489-b3c4-77decc6602bf-sys\") pod 
\"perf-node-gather-daemonset-qqpj4\" (UID: \"d69d0bbc-5611-4489-b3c4-77decc6602bf\") " pod="openshift-must-gather-p6v84/perf-node-gather-daemonset-qqpj4" Apr 16 14:22:42.132092 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:42.131939 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/d69d0bbc-5611-4489-b3c4-77decc6602bf-podres\") pod \"perf-node-gather-daemonset-qqpj4\" (UID: \"d69d0bbc-5611-4489-b3c4-77decc6602bf\") " pod="openshift-must-gather-p6v84/perf-node-gather-daemonset-qqpj4" Apr 16 14:22:42.132092 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:42.131992 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d69d0bbc-5611-4489-b3c4-77decc6602bf-sys\") pod \"perf-node-gather-daemonset-qqpj4\" (UID: \"d69d0bbc-5611-4489-b3c4-77decc6602bf\") " pod="openshift-must-gather-p6v84/perf-node-gather-daemonset-qqpj4" Apr 16 14:22:42.132092 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:42.132009 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d69d0bbc-5611-4489-b3c4-77decc6602bf-lib-modules\") pod \"perf-node-gather-daemonset-qqpj4\" (UID: \"d69d0bbc-5611-4489-b3c4-77decc6602bf\") " pod="openshift-must-gather-p6v84/perf-node-gather-daemonset-qqpj4" Apr 16 14:22:42.132304 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:42.132135 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/d69d0bbc-5611-4489-b3c4-77decc6602bf-podres\") pod \"perf-node-gather-daemonset-qqpj4\" (UID: \"d69d0bbc-5611-4489-b3c4-77decc6602bf\") " pod="openshift-must-gather-p6v84/perf-node-gather-daemonset-qqpj4" Apr 16 14:22:42.132304 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:42.132137 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d69d0bbc-5611-4489-b3c4-77decc6602bf-lib-modules\") pod \"perf-node-gather-daemonset-qqpj4\" (UID: \"d69d0bbc-5611-4489-b3c4-77decc6602bf\") " pod="openshift-must-gather-p6v84/perf-node-gather-daemonset-qqpj4" Apr 16 14:22:42.132304 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:42.132193 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/d69d0bbc-5611-4489-b3c4-77decc6602bf-proc\") pod \"perf-node-gather-daemonset-qqpj4\" (UID: \"d69d0bbc-5611-4489-b3c4-77decc6602bf\") " pod="openshift-must-gather-p6v84/perf-node-gather-daemonset-qqpj4" Apr 16 14:22:42.132304 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:42.132250 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z7pfw\" (UniqueName: \"kubernetes.io/projected/d69d0bbc-5611-4489-b3c4-77decc6602bf-kube-api-access-z7pfw\") pod \"perf-node-gather-daemonset-qqpj4\" (UID: \"d69d0bbc-5611-4489-b3c4-77decc6602bf\") " pod="openshift-must-gather-p6v84/perf-node-gather-daemonset-qqpj4" Apr 16 14:22:42.132304 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:42.132296 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/d69d0bbc-5611-4489-b3c4-77decc6602bf-proc\") pod \"perf-node-gather-daemonset-qqpj4\" (UID: \"d69d0bbc-5611-4489-b3c4-77decc6602bf\") " pod="openshift-must-gather-p6v84/perf-node-gather-daemonset-qqpj4" Apr 16 14:22:42.141434 ip-10-0-129-84 kubenswrapper[2569]: 
I0416 14:22:42.140291 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7pfw\" (UniqueName: \"kubernetes.io/projected/d69d0bbc-5611-4489-b3c4-77decc6602bf-kube-api-access-z7pfw\") pod \"perf-node-gather-daemonset-qqpj4\" (UID: \"d69d0bbc-5611-4489-b3c4-77decc6602bf\") " pod="openshift-must-gather-p6v84/perf-node-gather-daemonset-qqpj4" Apr 16 14:22:42.284740 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:42.284713 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p6v84/perf-node-gather-daemonset-qqpj4" Apr 16 14:22:42.433050 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:42.433024 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-p6v84/perf-node-gather-daemonset-qqpj4"] Apr 16 14:22:42.436014 ip-10-0-129-84 kubenswrapper[2569]: W0416 14:22:42.435988 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd69d0bbc_5611_4489_b3c4_77decc6602bf.slice/crio-3ca754b750f06b5a177b01c079377bbd586104037f3064005637383ecf497b45 WatchSource:0}: Error finding container 3ca754b750f06b5a177b01c079377bbd586104037f3064005637383ecf497b45: Status 404 returned error can't find the container with id 3ca754b750f06b5a177b01c079377bbd586104037f3064005637383ecf497b45 Apr 16 14:22:42.438296 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:42.438227 2569 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 16 14:22:42.518931 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:42.518907 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p6v84/perf-node-gather-daemonset-qqpj4" event={"ID":"d69d0bbc-5611-4489-b3c4-77decc6602bf","Type":"ContainerStarted","Data":"3ca754b750f06b5a177b01c079377bbd586104037f3064005637383ecf497b45"} Apr 16 14:22:42.544022 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:42.543999 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-rdgwf_f334ac90-d973-40ab-bade-1a585fb2d9b2/dns/0.log" Apr 16 14:22:42.574624 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:42.574603 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-rdgwf_f334ac90-d973-40ab-bade-1a585fb2d9b2/kube-rbac-proxy/0.log" Apr 16 14:22:42.658641 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:42.658609 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-jnqp7_07a3b071-1443-4213-b66f-ce5f4d7ff313/dns-node-resolver/0.log" Apr 16 14:22:43.166408 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:43.166380 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-p4bsc_afa773e4-9e56-4130-bc08-0913d59056bb/node-ca/0.log" Apr 16 14:22:43.524876 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:43.524806 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p6v84/perf-node-gather-daemonset-qqpj4" event={"ID":"d69d0bbc-5611-4489-b3c4-77decc6602bf","Type":"ContainerStarted","Data":"59bb250e18bc232339252780798c9a4251f8f71c875c143aab411e5197db0d1c"} Apr 16 14:22:43.525018 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:43.524921 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-must-gather-p6v84/perf-node-gather-daemonset-qqpj4" Apr 16 14:22:43.542265 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:43.542194 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-must-gather-p6v84/perf-node-gather-daemonset-qqpj4" podStartSLOduration=2.5421812089999998 podStartE2EDuration="2.542181209s" podCreationTimestamp="2026-04-16 14:22:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 14:22:43.540418398 +0000 UTC m=+1437.561706149" watchObservedRunningTime="2026-04-16 14:22:43.542181209 +0000 UTC m=+1437.563468956" Apr 16 14:22:44.240188 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:44.240163 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-92l7v_3bfef623-c79c-41c4-9fc5-0a25ecab1f4a/serve-healthcheck-canary/0.log" Apr 16 14:22:44.631665 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:44.631647 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-c7hvq_e55837c2-18f0-4bdc-bfc7-fef4606ffaf9/kube-rbac-proxy/0.log" Apr 16 14:22:44.650359 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:44.650339 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-c7hvq_e55837c2-18f0-4bdc-bfc7-fef4606ffaf9/exporter/0.log" Apr 16 14:22:44.669671 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:44.669644 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-c7hvq_e55837c2-18f0-4bdc-bfc7-fef4606ffaf9/extractor/0.log" Apr 16 14:22:46.685969 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:46.685947 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_model-serving-api-86f7b4b499-9n7t5_758de9fb-8f7d-4a4f-917a-1304d09bed4e/server/0.log" Apr 16 14:22:46.777771 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:46.777748 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_odh-model-controller-696fc77849-d9rbz_1fcb9f00-8568-4fcc-8b6c-4eede4a43768/manager/0.log" Apr 16 14:22:46.796298 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:46.796278 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_s3-init-n8lnh_7f2ac48f-ac90-4ec6-867c-9616a66ed0a5/s3-init/0.log" Apr 16 14:22:46.821624 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:46.821600 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_seaweedfs-86cc847c5c-gzqx7_3130322e-2435-4782-8e41-bd42dee3a2a8/seaweedfs/0.log" Apr 16 14:22:49.539761 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:49.539731 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-p6v84/perf-node-gather-daemonset-qqpj4" Apr 16 14:22:50.453481 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:50.453452 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator_migrator-64d4d94569-7pj7l_4df7741a-6f2b-4ee8-a7a6-76bbed009a0a/migrator/0.log" Apr 16 14:22:50.471987 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:50.471964 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator_migrator-64d4d94569-7pj7l_4df7741a-6f2b-4ee8-a7a6-76bbed009a0a/graceful-termination/0.log" Apr 16 14:22:52.134438 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:52.134410 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-t28sg_f2f44068-07cc-44c3-b6bc-448389afc9ce/kube-multus-additional-cni-plugins/0.log" Apr 16 14:22:52.155022 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:52.154996 2569 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-t28sg_f2f44068-07cc-44c3-b6bc-448389afc9ce/egress-router-binary-copy/0.log" Apr 16 14:22:52.176981 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:52.176954 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-t28sg_f2f44068-07cc-44c3-b6bc-448389afc9ce/cni-plugins/0.log" Apr 16 14:22:52.197741 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:52.197717 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-t28sg_f2f44068-07cc-44c3-b6bc-448389afc9ce/bond-cni-plugin/0.log" Apr 16 14:22:52.218190 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:52.218159 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-t28sg_f2f44068-07cc-44c3-b6bc-448389afc9ce/routeoverride-cni/0.log" Apr 16 14:22:52.238730 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:52.238710 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-t28sg_f2f44068-07cc-44c3-b6bc-448389afc9ce/whereabouts-cni-bincopy/0.log" Apr 16 14:22:52.259025 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:52.259001 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-t28sg_f2f44068-07cc-44c3-b6bc-448389afc9ce/whereabouts-cni/0.log" Apr 16 14:22:52.290559 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:52.290540 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q4pj8_c6ac1ffc-23ae-4117-8a3e-a4aa0d7cb42e/kube-multus/0.log" Apr 16 14:22:52.408582 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:52.408500 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-99gsl_6cc56cdf-0ee0-49a9-b52c-65d8745cb390/network-metrics-daemon/0.log" Apr 16 14:22:52.427649 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:52.427623 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-99gsl_6cc56cdf-0ee0-49a9-b52c-65d8745cb390/kube-rbac-proxy/0.log" Apr 16 14:22:53.588968 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:53.588940 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ljx7q_7e5f1228-5703-456a-a909-558205e02bfc/ovn-controller/0.log" Apr 16 14:22:53.606360 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:53.606340 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ljx7q_7e5f1228-5703-456a-a909-558205e02bfc/ovn-acl-logging/0.log" Apr 16 14:22:53.612875 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:53.612847 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ljx7q_7e5f1228-5703-456a-a909-558205e02bfc/ovn-acl-logging/1.log" Apr 16 14:22:53.632682 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:53.632654 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ljx7q_7e5f1228-5703-456a-a909-558205e02bfc/kube-rbac-proxy-node/0.log" Apr 16 14:22:53.653371 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:53.653350 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ljx7q_7e5f1228-5703-456a-a909-558205e02bfc/kube-rbac-proxy-ovn-metrics/0.log" Apr 16 14:22:53.670159 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:53.670137 2569 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ljx7q_7e5f1228-5703-456a-a909-558205e02bfc/northd/0.log" Apr 16 14:22:53.689544 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:53.689527 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ljx7q_7e5f1228-5703-456a-a909-558205e02bfc/nbdb/0.log" Apr 16 14:22:53.710179 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:53.710165 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ljx7q_7e5f1228-5703-456a-a909-558205e02bfc/sbdb/0.log" Apr 16 14:22:53.817433 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:53.817363 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ljx7q_7e5f1228-5703-456a-a909-558205e02bfc/ovnkube-controller/0.log" Apr 16 14:22:55.052811 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:55.052781 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-7b678d77c7-7fbbk_55298a9b-5dc0-448a-84f0-b2afbbac7a82/check-endpoints/0.log" Apr 16 14:22:55.099357 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:55.099325 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-target-bdcn7_84822439-41ed-4bb8-b7d6-6784ad00eeaf/network-check-target-container/0.log" Apr 16 14:22:56.019719 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:56.019690 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_iptables-alerter-cx8jm_165a8934-211a-41ba-a917-3ec360f1fb99/iptables-alerter/0.log" Apr 16 14:22:56.638630 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:56.638607 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_tuned-56z6w_170b3d99-353c-47c0-9fd5-7c56afedf117/tuned/0.log" Apr 16 14:22:58.284890 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:58.284823 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-667775844f-tv26r_1c0043ca-b332-4736-afd5-f76d52dc18f8/cluster-samples-operator/0.log" Apr 16 14:22:58.299385 ip-10-0-129-84 kubenswrapper[2569]: I0416 14:22:58.299361 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-667775844f-tv26r_1c0043ca-b332-4736-afd5-f76d52dc18f8/cluster-samples-operator-watch/0.log"