Apr 17 11:30:19.045963 ip-10-0-140-245 systemd[1]: Starting Kubernetes Kubelet...
Apr 17 11:30:19.496649 ip-10-0-140-245 kubenswrapper[2567]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 17 11:30:19.496649 ip-10-0-140-245 kubenswrapper[2567]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Apr 17 11:30:19.496649 ip-10-0-140-245 kubenswrapper[2567]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 17 11:30:19.496649 ip-10-0-140-245 kubenswrapper[2567]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 17 11:30:19.496649 ip-10-0-140-245 kubenswrapper[2567]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 17 11:30:19.498913 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.498831 2567 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 17 11:30:19.503795 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503781 2567 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 17 11:30:19.503795 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503795 2567 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 17 11:30:19.503855 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503799 2567 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 17 11:30:19.503855 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503802 2567 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 17 11:30:19.503855 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503805 2567 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 17 11:30:19.503855 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503809 2567 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 17 11:30:19.503855 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503812 2567 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 17 11:30:19.503855 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503815 2567 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 17 11:30:19.503855 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503818 2567 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 17 11:30:19.503855 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503820 2567 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 17 11:30:19.503855 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503823 2567 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 17 11:30:19.503855 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503825 2567 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 17 11:30:19.503855 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503828 2567 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 17 11:30:19.503855 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503831 2567 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 17 11:30:19.503855 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503833 2567 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 17 11:30:19.503855 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503836 2567 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 17 11:30:19.503855 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503839 2567 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 17 11:30:19.503855 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503843 2567 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 17 11:30:19.503855 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503846 2567 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 17 11:30:19.503855 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503850 2567 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 17 11:30:19.503855 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503852 2567 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 17 11:30:19.504390 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503863 2567 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 17 11:30:19.504390 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503866 2567 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 17 11:30:19.504390 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503869 2567 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 17 11:30:19.504390 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503871 2567 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 17 11:30:19.504390 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503874 2567 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 17 11:30:19.504390 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503877 2567 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 17 11:30:19.504390 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503879 2567 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 17 11:30:19.504390 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503882 2567 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 17 11:30:19.504390 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503884 2567 feature_gate.go:328] unrecognized feature gate: Example2
Apr 17 11:30:19.504390 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503887 2567 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 17 11:30:19.504390 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503891 2567 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 17 11:30:19.504390 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503894 2567 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 17 11:30:19.504390 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503896 2567 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 17 11:30:19.504390 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503899 2567 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 17 11:30:19.504390 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503902 2567 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 17 11:30:19.504390 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503904 2567 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 17 11:30:19.504390 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503907 2567 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 17 11:30:19.504390 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503909 2567 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 17 11:30:19.504390 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503911 2567 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 17 11:30:19.504390 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503914 2567 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 17 11:30:19.504935 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503917 2567 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 17 11:30:19.504935 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503919 2567 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 17 11:30:19.504935 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503922 2567 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 17 11:30:19.504935 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503924 2567 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 17 11:30:19.504935 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503926 2567 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 17 11:30:19.504935 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503929 2567 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 17 11:30:19.504935 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503932 2567 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 17 11:30:19.504935 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503934 2567 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 17 11:30:19.504935 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503937 2567 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 17 11:30:19.504935 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503940 2567 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 17 11:30:19.504935 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503942 2567 feature_gate.go:328] unrecognized feature gate: Example
Apr 17 11:30:19.504935 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503945 2567 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 17 11:30:19.504935 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503947 2567 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 17 11:30:19.504935 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503950 2567 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 17 11:30:19.504935 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503953 2567 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 17 11:30:19.504935 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503956 2567 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 17 11:30:19.504935 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503958 2567 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 17 11:30:19.504935 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503961 2567 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 17 11:30:19.504935 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503963 2567 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 17 11:30:19.504935 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503966 2567 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 17 11:30:19.505432 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503970 2567 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 17 11:30:19.505432 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503972 2567 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 17 11:30:19.505432 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503975 2567 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 17 11:30:19.505432 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503977 2567 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 17 11:30:19.505432 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503980 2567 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 17 11:30:19.505432 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503982 2567 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 17 11:30:19.505432 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503985 2567 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 17 11:30:19.505432 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503987 2567 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 17 11:30:19.505432 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503990 2567 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 17 11:30:19.505432 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503992 2567 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 17 11:30:19.505432 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503995 2567 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 17 11:30:19.505432 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.503997 2567 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 17 11:30:19.505432 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504000 2567 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 17 11:30:19.505432 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504002 2567 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 17 11:30:19.505432 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504005 2567 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 17 11:30:19.505432 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504009 2567 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 17 11:30:19.505432 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504012 2567 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 17 11:30:19.505432 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504015 2567 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 17 11:30:19.505432 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504018 2567 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 17 11:30:19.505919 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504020 2567 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 17 11:30:19.505919 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504023 2567 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 17 11:30:19.505919 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504025 2567 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 17 11:30:19.505919 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504027 2567 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 17 11:30:19.505919 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504030 2567 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 17 11:30:19.505919 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504033 2567 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 17 11:30:19.505919 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504438 2567 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 17 11:30:19.505919 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504444 2567 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 17 11:30:19.505919 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504447 2567 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 17 11:30:19.505919 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504450 2567 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 17 11:30:19.505919 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504453 2567 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 17 11:30:19.505919 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504456 2567 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 17 11:30:19.505919 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504458 2567 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 17 11:30:19.505919 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504461 2567 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 17 11:30:19.505919 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504464 2567 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 17 11:30:19.505919 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504467 2567 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 17 11:30:19.505919 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504469 2567 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 17 11:30:19.505919 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504471 2567 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 17 11:30:19.505919 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504474 2567 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 17 11:30:19.505919 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504477 2567 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 17 11:30:19.506405 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504480 2567 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 17 11:30:19.506405 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504482 2567 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 17 11:30:19.506405 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504485 2567 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 17 11:30:19.506405 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504487 2567 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 17 11:30:19.506405 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504490 2567 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 17 11:30:19.506405 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504492 2567 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 17 11:30:19.506405 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504495 2567 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 17 11:30:19.506405 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504498 2567 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 17 11:30:19.506405 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504500 2567 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 17 11:30:19.506405 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504503 2567 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 17 11:30:19.506405 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504505 2567 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 17 11:30:19.506405 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504509 2567 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 17 11:30:19.506405 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504512 2567 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 17 11:30:19.506405 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504514 2567 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 17 11:30:19.506405 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504517 2567 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 17 11:30:19.506405 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504520 2567 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 17 11:30:19.506405 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504522 2567 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 17 11:30:19.506405 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504524 2567 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 17 11:30:19.506405 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504528 2567 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 17 11:30:19.506405 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504530 2567 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 17 11:30:19.506953 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504533 2567 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 17 11:30:19.506953 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504535 2567 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 17 11:30:19.506953 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504538 2567 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 17 11:30:19.506953 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504540 2567 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 17 11:30:19.506953 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504543 2567 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 17 11:30:19.506953 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504545 2567 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 17 11:30:19.506953 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504548 2567 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 17 11:30:19.506953 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504550 2567 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 17 11:30:19.506953 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504552 2567 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 17 11:30:19.506953 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504555 2567 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 17 11:30:19.506953 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504557 2567 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 17 11:30:19.506953 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504562 2567 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 17 11:30:19.506953 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504565 2567 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 17 11:30:19.506953 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504568 2567 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 17 11:30:19.506953 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504571 2567 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 17 11:30:19.506953 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504573 2567 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 17 11:30:19.506953 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504576 2567 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 17 11:30:19.506953 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504579 2567 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 17 11:30:19.506953 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504581 2567 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 17 11:30:19.506953 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504583 2567 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 17 11:30:19.507469 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504586 2567 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 17 11:30:19.507469 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504589 2567 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 17 11:30:19.507469 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504591 2567 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 17 11:30:19.507469 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504594 2567 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 17 11:30:19.507469 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504596 2567 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 17 11:30:19.507469 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504599 2567 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 17 11:30:19.507469 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504601 2567 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 17 11:30:19.507469 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504604 2567 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 17 11:30:19.507469 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504606 2567 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 17 11:30:19.507469 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504609 2567 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 17 11:30:19.507469 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504612 2567 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 17 11:30:19.507469 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504615 2567 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 17 11:30:19.507469 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504618 2567 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 17 11:30:19.507469 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504620 2567 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 17 11:30:19.507469 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504622 2567 feature_gate.go:328] unrecognized feature gate: Example2
Apr 17 11:30:19.507469 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504625 2567 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 17 11:30:19.507469 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504627 2567 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 17 11:30:19.507469 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504630 2567 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 17 11:30:19.507469 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504634 2567 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 17 11:30:19.507469 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504636 2567 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 17 11:30:19.507977 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504638 2567 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 17 11:30:19.507977 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504641 2567 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 17 11:30:19.507977 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504644 2567 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 17 11:30:19.507977 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504646 2567 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 17 11:30:19.507977 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504649 2567 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 17 11:30:19.507977 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504651 2567 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 17 11:30:19.507977 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504654 2567 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 17 11:30:19.507977 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504656 2567 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 17 11:30:19.507977 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504659 2567 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 17 11:30:19.507977 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504661 2567 feature_gate.go:328] unrecognized feature gate: Example
Apr 17 11:30:19.507977 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504663 2567 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 17 11:30:19.507977 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.504666 2567 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 17 11:30:19.507977 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504747 2567 flags.go:64] FLAG: --address="0.0.0.0"
Apr 17 11:30:19.507977 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504755 2567 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Apr 17 11:30:19.507977 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504762 2567 flags.go:64] FLAG: --anonymous-auth="true"
Apr 17 11:30:19.507977 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504766 2567 flags.go:64] FLAG: --application-metrics-count-limit="100"
Apr 17 11:30:19.507977 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504771 2567 flags.go:64] FLAG: --authentication-token-webhook="false"
Apr 17 11:30:19.507977 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504774 2567 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Apr 17 11:30:19.507977 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504778 2567 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Apr 17 11:30:19.507977 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504783 2567 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Apr 17 11:30:19.507977 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504786 2567 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Apr 17 11:30:19.508481 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504790 2567 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Apr 17 11:30:19.508481 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504793 2567 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Apr 17 11:30:19.508481 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504798 2567 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Apr 17 11:30:19.508481 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504801 2567 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Apr 17 11:30:19.508481 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504804 2567 flags.go:64] FLAG: --cgroup-root=""
Apr 17 11:30:19.508481 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504807 2567 flags.go:64] FLAG: --cgroups-per-qos="true"
Apr 17 11:30:19.508481 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504810 2567 flags.go:64] FLAG: --client-ca-file=""
Apr 17 11:30:19.508481 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504813 2567 flags.go:64] FLAG: --cloud-config=""
Apr 17 11:30:19.508481 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504815 2567 flags.go:64] FLAG: --cloud-provider="external"
Apr 17 11:30:19.508481 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504818 2567 flags.go:64] FLAG: --cluster-dns="[]"
Apr 17 11:30:19.508481 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504823 2567 flags.go:64] FLAG: --cluster-domain=""
Apr 17 11:30:19.508481 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504826 2567 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Apr 17 11:30:19.508481 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504829 2567 flags.go:64] FLAG: --config-dir=""
Apr 17 11:30:19.508481 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504832 2567 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Apr 17 11:30:19.508481 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504835 2567 flags.go:64] FLAG: --container-log-max-files="5"
Apr 17 11:30:19.508481 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504839 2567 flags.go:64] FLAG: --container-log-max-size="10Mi"
Apr 17 11:30:19.508481 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504842 2567 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Apr 17 11:30:19.508481 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504845 2567 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Apr 17 11:30:19.508481 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504848 2567 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Apr 17 11:30:19.508481 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504851 2567 flags.go:64] FLAG: --contention-profiling="false"
Apr 17 11:30:19.508481 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504854 2567 flags.go:64] FLAG: --cpu-cfs-quota="true"
Apr 17 11:30:19.508481 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504857 2567 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Apr 17 11:30:19.508481 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504860 2567 flags.go:64] FLAG: --cpu-manager-policy="none"
Apr 17 11:30:19.508481 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504863 2567 flags.go:64] FLAG: --cpu-manager-policy-options=""
Apr 17 11:30:19.508481 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504868 2567 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Apr 17 11:30:19.509088 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504871 2567 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Apr 17 11:30:19.509088 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504874 2567 flags.go:64] FLAG: --enable-debugging-handlers="true"
Apr 17 11:30:19.509088 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504877 2567 flags.go:64] FLAG: --enable-load-reader="false"
Apr 17 11:30:19.509088 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504880 2567 flags.go:64] FLAG: --enable-server="true"
Apr 17 11:30:19.509088 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504884 2567 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Apr 17 11:30:19.509088 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504888 2567 flags.go:64] FLAG: --event-burst="100"
Apr 17 11:30:19.509088 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504891 2567 flags.go:64] FLAG: --event-qps="50"
Apr 17 11:30:19.509088 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504894 2567 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Apr 17 11:30:19.509088 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504897 2567 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Apr 17 11:30:19.509088 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504900 2567 flags.go:64] FLAG: --eviction-hard=""
Apr 17 11:30:19.509088 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504904 2567 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Apr 17 11:30:19.509088 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504907 2567 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Apr 17 11:30:19.509088 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504910 2567 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Apr 17 11:30:19.509088 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504913 2567 flags.go:64] FLAG: --eviction-soft=""
Apr 17 11:30:19.509088 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504916 2567 flags.go:64] FLAG: --eviction-soft-grace-period=""
Apr 17 11:30:19.509088 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504919 2567 flags.go:64] FLAG: --exit-on-lock-contention="false"
Apr 17 11:30:19.509088 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504922 2567 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Apr 17 11:30:19.509088 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504925 2567 flags.go:64] FLAG: --experimental-mounter-path=""
Apr 17 11:30:19.509088 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504928 2567 flags.go:64] FLAG: --fail-cgroupv1="false"
Apr 17 11:30:19.509088 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504931 2567 flags.go:64] FLAG: --fail-swap-on="true"
Apr 17 11:30:19.509088 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504933
2567 flags.go:64] FLAG: --feature-gates="" Apr 17 11:30:19.509088 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504937 2567 flags.go:64] FLAG: --file-check-frequency="20s" Apr 17 11:30:19.509088 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504940 2567 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Apr 17 11:30:19.509088 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504944 2567 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Apr 17 11:30:19.509088 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504947 2567 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Apr 17 11:30:19.509728 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504950 2567 flags.go:64] FLAG: --healthz-port="10248" Apr 17 11:30:19.509728 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504954 2567 flags.go:64] FLAG: --help="false" Apr 17 11:30:19.509728 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504957 2567 flags.go:64] FLAG: --hostname-override="ip-10-0-140-245.ec2.internal" Apr 17 11:30:19.509728 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504960 2567 flags.go:64] FLAG: --housekeeping-interval="10s" Apr 17 11:30:19.509728 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504963 2567 flags.go:64] FLAG: --http-check-frequency="20s" Apr 17 11:30:19.509728 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504966 2567 flags.go:64] FLAG: --image-credential-provider-bin-dir="/usr/libexec/kubelet-image-credential-provider-plugins" Apr 17 11:30:19.509728 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504970 2567 flags.go:64] FLAG: --image-credential-provider-config="/etc/kubernetes/credential-providers/ecr-credential-provider.yaml" Apr 17 11:30:19.509728 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504974 2567 flags.go:64] FLAG: --image-gc-high-threshold="85" Apr 17 11:30:19.509728 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504977 2567 flags.go:64] FLAG: --image-gc-low-threshold="80" Apr 17 11:30:19.509728 
ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504981 2567 flags.go:64] FLAG: --image-service-endpoint="" Apr 17 11:30:19.509728 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504984 2567 flags.go:64] FLAG: --kernel-memcg-notification="false" Apr 17 11:30:19.509728 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504987 2567 flags.go:64] FLAG: --kube-api-burst="100" Apr 17 11:30:19.509728 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504989 2567 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Apr 17 11:30:19.509728 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504992 2567 flags.go:64] FLAG: --kube-api-qps="50" Apr 17 11:30:19.509728 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504995 2567 flags.go:64] FLAG: --kube-reserved="" Apr 17 11:30:19.509728 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.504998 2567 flags.go:64] FLAG: --kube-reserved-cgroup="" Apr 17 11:30:19.509728 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505001 2567 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Apr 17 11:30:19.509728 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505004 2567 flags.go:64] FLAG: --kubelet-cgroups="" Apr 17 11:30:19.509728 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505007 2567 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Apr 17 11:30:19.509728 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505009 2567 flags.go:64] FLAG: --lock-file="" Apr 17 11:30:19.509728 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505012 2567 flags.go:64] FLAG: --log-cadvisor-usage="false" Apr 17 11:30:19.509728 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505015 2567 flags.go:64] FLAG: --log-flush-frequency="5s" Apr 17 11:30:19.509728 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505020 2567 flags.go:64] FLAG: --log-json-info-buffer-size="0" Apr 17 11:30:19.509728 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505025 2567 flags.go:64] FLAG: 
--log-json-split-stream="false" Apr 17 11:30:19.510304 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505028 2567 flags.go:64] FLAG: --log-text-info-buffer-size="0" Apr 17 11:30:19.510304 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505031 2567 flags.go:64] FLAG: --log-text-split-stream="false" Apr 17 11:30:19.510304 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505033 2567 flags.go:64] FLAG: --logging-format="text" Apr 17 11:30:19.510304 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505036 2567 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Apr 17 11:30:19.510304 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505039 2567 flags.go:64] FLAG: --make-iptables-util-chains="true" Apr 17 11:30:19.510304 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505042 2567 flags.go:64] FLAG: --manifest-url="" Apr 17 11:30:19.510304 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505045 2567 flags.go:64] FLAG: --manifest-url-header="" Apr 17 11:30:19.510304 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505049 2567 flags.go:64] FLAG: --max-housekeeping-interval="15s" Apr 17 11:30:19.510304 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505052 2567 flags.go:64] FLAG: --max-open-files="1000000" Apr 17 11:30:19.510304 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505056 2567 flags.go:64] FLAG: --max-pods="110" Apr 17 11:30:19.510304 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505059 2567 flags.go:64] FLAG: --maximum-dead-containers="-1" Apr 17 11:30:19.510304 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505062 2567 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Apr 17 11:30:19.510304 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505065 2567 flags.go:64] FLAG: --memory-manager-policy="None" Apr 17 11:30:19.510304 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505069 2567 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Apr 17 11:30:19.510304 
ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505072 2567 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Apr 17 11:30:19.510304 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505074 2567 flags.go:64] FLAG: --node-ip="0.0.0.0" Apr 17 11:30:19.510304 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505077 2567 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhel" Apr 17 11:30:19.510304 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505085 2567 flags.go:64] FLAG: --node-status-max-images="50" Apr 17 11:30:19.510304 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505088 2567 flags.go:64] FLAG: --node-status-update-frequency="10s" Apr 17 11:30:19.510304 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505091 2567 flags.go:64] FLAG: --oom-score-adj="-999" Apr 17 11:30:19.510304 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505094 2567 flags.go:64] FLAG: --pod-cidr="" Apr 17 11:30:19.510304 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505097 2567 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8cfe89231412ff3ee8cb6207fa0be33cad0f08e88c9c0f1e9f7e8c6f14d6715" Apr 17 11:30:19.510304 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505102 2567 flags.go:64] FLAG: --pod-manifest-path="" Apr 17 11:30:19.510863 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505105 2567 flags.go:64] FLAG: --pod-max-pids="-1" Apr 17 11:30:19.510863 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505108 2567 flags.go:64] FLAG: --pods-per-core="0" Apr 17 11:30:19.510863 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505111 2567 flags.go:64] FLAG: --port="10250" Apr 17 11:30:19.510863 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505114 2567 flags.go:64] FLAG: --protect-kernel-defaults="false" Apr 17 11:30:19.510863 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505117 2567 flags.go:64] FLAG: 
--provider-id="aws:///us-east-1a/i-0dca21160097ce090" Apr 17 11:30:19.510863 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505120 2567 flags.go:64] FLAG: --qos-reserved="" Apr 17 11:30:19.510863 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505123 2567 flags.go:64] FLAG: --read-only-port="10255" Apr 17 11:30:19.510863 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505128 2567 flags.go:64] FLAG: --register-node="true" Apr 17 11:30:19.510863 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505131 2567 flags.go:64] FLAG: --register-schedulable="true" Apr 17 11:30:19.510863 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505134 2567 flags.go:64] FLAG: --register-with-taints="" Apr 17 11:30:19.510863 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505137 2567 flags.go:64] FLAG: --registry-burst="10" Apr 17 11:30:19.510863 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505141 2567 flags.go:64] FLAG: --registry-qps="5" Apr 17 11:30:19.510863 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505143 2567 flags.go:64] FLAG: --reserved-cpus="" Apr 17 11:30:19.510863 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505146 2567 flags.go:64] FLAG: --reserved-memory="" Apr 17 11:30:19.510863 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505150 2567 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Apr 17 11:30:19.510863 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505153 2567 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Apr 17 11:30:19.510863 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505156 2567 flags.go:64] FLAG: --rotate-certificates="false" Apr 17 11:30:19.510863 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505159 2567 flags.go:64] FLAG: --rotate-server-certificates="false" Apr 17 11:30:19.510863 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505162 2567 flags.go:64] FLAG: --runonce="false" Apr 17 11:30:19.510863 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505165 2567 flags.go:64] FLAG: 
--runtime-cgroups="/system.slice/crio.service" Apr 17 11:30:19.510863 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505168 2567 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Apr 17 11:30:19.510863 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505171 2567 flags.go:64] FLAG: --seccomp-default="false" Apr 17 11:30:19.510863 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505175 2567 flags.go:64] FLAG: --serialize-image-pulls="true" Apr 17 11:30:19.510863 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505178 2567 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Apr 17 11:30:19.510863 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505181 2567 flags.go:64] FLAG: --storage-driver-db="cadvisor" Apr 17 11:30:19.510863 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505184 2567 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Apr 17 11:30:19.511501 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505187 2567 flags.go:64] FLAG: --storage-driver-password="root" Apr 17 11:30:19.511501 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505190 2567 flags.go:64] FLAG: --storage-driver-secure="false" Apr 17 11:30:19.511501 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505193 2567 flags.go:64] FLAG: --storage-driver-table="stats" Apr 17 11:30:19.511501 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505196 2567 flags.go:64] FLAG: --storage-driver-user="root" Apr 17 11:30:19.511501 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505199 2567 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Apr 17 11:30:19.511501 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505202 2567 flags.go:64] FLAG: --sync-frequency="1m0s" Apr 17 11:30:19.511501 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505205 2567 flags.go:64] FLAG: --system-cgroups="" Apr 17 11:30:19.511501 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505207 2567 flags.go:64] FLAG: 
--system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Apr 17 11:30:19.511501 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505212 2567 flags.go:64] FLAG: --system-reserved-cgroup="" Apr 17 11:30:19.511501 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505215 2567 flags.go:64] FLAG: --tls-cert-file="" Apr 17 11:30:19.511501 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505218 2567 flags.go:64] FLAG: --tls-cipher-suites="[]" Apr 17 11:30:19.511501 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505222 2567 flags.go:64] FLAG: --tls-min-version="" Apr 17 11:30:19.511501 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505225 2567 flags.go:64] FLAG: --tls-private-key-file="" Apr 17 11:30:19.511501 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505229 2567 flags.go:64] FLAG: --topology-manager-policy="none" Apr 17 11:30:19.511501 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505232 2567 flags.go:64] FLAG: --topology-manager-policy-options="" Apr 17 11:30:19.511501 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505235 2567 flags.go:64] FLAG: --topology-manager-scope="container" Apr 17 11:30:19.511501 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505238 2567 flags.go:64] FLAG: --v="2" Apr 17 11:30:19.511501 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505242 2567 flags.go:64] FLAG: --version="false" Apr 17 11:30:19.511501 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505246 2567 flags.go:64] FLAG: --vmodule="" Apr 17 11:30:19.511501 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505251 2567 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Apr 17 11:30:19.511501 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.505254 2567 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Apr 17 11:30:19.511501 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505337 2567 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 17 11:30:19.511501 
ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505340 2567 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 17 11:30:19.511501 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505343 2567 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 17 11:30:19.512098 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505346 2567 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 17 11:30:19.512098 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505348 2567 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 17 11:30:19.512098 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505351 2567 feature_gate.go:328] unrecognized feature gate: Example Apr 17 11:30:19.512098 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505354 2567 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 17 11:30:19.512098 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505356 2567 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 17 11:30:19.512098 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505360 2567 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 17 11:30:19.512098 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505363 2567 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 17 11:30:19.512098 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505365 2567 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 17 11:30:19.512098 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505368 2567 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 17 11:30:19.512098 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505371 2567 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 17 11:30:19.512098 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505374 2567 feature_gate.go:328] unrecognized 
feature gate: AlibabaPlatform Apr 17 11:30:19.512098 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505376 2567 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 17 11:30:19.512098 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505379 2567 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 17 11:30:19.512098 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505382 2567 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 17 11:30:19.512098 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505384 2567 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 17 11:30:19.512098 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505386 2567 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 17 11:30:19.512098 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505389 2567 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 17 11:30:19.512098 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505392 2567 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 17 11:30:19.512098 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505394 2567 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 17 11:30:19.512098 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505397 2567 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 17 11:30:19.512629 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505399 2567 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 17 11:30:19.512629 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505403 2567 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 17 11:30:19.512629 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505407 2567 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Apr 17 11:30:19.512629 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505410 2567 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 17 11:30:19.512629 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505413 2567 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 17 11:30:19.512629 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505415 2567 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 17 11:30:19.512629 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505418 2567 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 17 11:30:19.512629 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505420 2567 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 17 11:30:19.512629 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505423 2567 feature_gate.go:328] unrecognized feature gate: Example2 Apr 17 11:30:19.512629 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505425 2567 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 17 11:30:19.512629 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505428 2567 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 17 11:30:19.512629 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505430 2567 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 17 11:30:19.512629 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505433 2567 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 17 11:30:19.512629 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505436 2567 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 17 11:30:19.512629 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505438 2567 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 17 11:30:19.512629 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505444 2567 feature_gate.go:328] 
unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 17 11:30:19.512629 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505446 2567 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 17 11:30:19.512629 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505449 2567 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 17 11:30:19.512629 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505452 2567 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 17 11:30:19.512629 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505454 2567 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 17 11:30:19.513128 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505457 2567 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 17 11:30:19.513128 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505459 2567 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 17 11:30:19.513128 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505462 2567 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 17 11:30:19.513128 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505465 2567 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 17 11:30:19.513128 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505467 2567 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 17 11:30:19.513128 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505470 2567 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 17 11:30:19.513128 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505472 2567 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 17 11:30:19.513128 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505475 2567 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 17 11:30:19.513128 
ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505478 2567 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 17 11:30:19.513128 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505480 2567 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 17 11:30:19.513128 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505483 2567 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 17 11:30:19.513128 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505485 2567 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 17 11:30:19.513128 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505488 2567 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 17 11:30:19.513128 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505493 2567 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 17 11:30:19.513128 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505495 2567 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 17 11:30:19.513128 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505498 2567 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 17 11:30:19.513128 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505501 2567 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 17 11:30:19.513128 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505503 2567 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 17 11:30:19.513128 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505506 2567 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 17 11:30:19.513128 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505508 2567 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 17 11:30:19.513611 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505511 2567 feature_gate.go:328] unrecognized feature gate: 
AWSClusterHostedDNSInstall Apr 17 11:30:19.513611 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505513 2567 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 17 11:30:19.513611 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505516 2567 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 17 11:30:19.513611 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505519 2567 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 17 11:30:19.513611 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505521 2567 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 17 11:30:19.513611 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505524 2567 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 17 11:30:19.513611 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505526 2567 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 17 11:30:19.513611 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505530 2567 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 17 11:30:19.513611 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505534 2567 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Apr 17 11:30:19.513611 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505537 2567 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 17 11:30:19.513611 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505540 2567 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 17 11:30:19.513611 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505543 2567 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 17 11:30:19.513611 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505546 2567 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 17 11:30:19.513611 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505548 2567 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 17 11:30:19.513611 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505551 2567 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 17 11:30:19.513611 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505554 2567 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 17 11:30:19.513611 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505557 2567 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 17 11:30:19.513611 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505559 2567 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 17 11:30:19.513611 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505562 2567 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 17 11:30:19.514100 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505564 2567 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 17 11:30:19.514100 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505567 2567 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 17 11:30:19.514100 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505569 2567 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 17 11:30:19.514100 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.505572 2567 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 17 11:30:19.514100 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.506225 2567 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Apr 17 11:30:19.514100 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.513201 2567 server.go:530] "Kubelet version" kubeletVersion="v1.33.9"
Apr 17 11:30:19.514100 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.513216 2567 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 17 11:30:19.514100 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513262 2567 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 17 11:30:19.514100 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513267 2567 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 17 11:30:19.514100 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513271 2567 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 17 11:30:19.514100 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513274 2567 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 17 11:30:19.514100 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513278 2567 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 17 11:30:19.514100 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513280 2567 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 17 11:30:19.514100 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513283 2567 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 17 11:30:19.514100 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513286 2567 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 17 11:30:19.514100 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513288 2567 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 17 11:30:19.514495 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513292 2567 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 17 11:30:19.514495 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513294 2567 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 17 11:30:19.514495 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513298 2567 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 17 11:30:19.514495 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513303 2567 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 17 11:30:19.514495 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513306 2567 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 17 11:30:19.514495 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513310 2567 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 17 11:30:19.514495 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513314 2567 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 17 11:30:19.514495 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513317 2567 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 17 11:30:19.514495 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513320 2567 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 17 11:30:19.514495 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513323 2567 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 17 11:30:19.514495 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513327 2567 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 17 11:30:19.514495 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513329 2567 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 17 11:30:19.514495 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513332 2567 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 17 11:30:19.514495 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513334 2567 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 17 11:30:19.514495 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513337 2567 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 17 11:30:19.514495 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513339 2567 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 17 11:30:19.514495 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513342 2567 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 17 11:30:19.514495 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513345 2567 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 17 11:30:19.514495 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513347 2567 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 17 11:30:19.514970 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513350 2567 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 17 11:30:19.514970 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513353 2567 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 17 11:30:19.514970 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513356 2567 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 17 11:30:19.514970 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513360 2567 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 17 11:30:19.514970 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513363 2567 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 17 11:30:19.514970 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513365 2567 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 17 11:30:19.514970 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513368 2567 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 17 11:30:19.514970 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513370 2567 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 17 11:30:19.514970 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513373 2567 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 17 11:30:19.514970 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513376 2567 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 17 11:30:19.514970 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513378 2567 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 17 11:30:19.514970 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513381 2567 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 17 11:30:19.514970 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513383 2567 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 17 11:30:19.514970 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513386 2567 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 17 11:30:19.514970 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513388 2567 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 17 11:30:19.514970 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513391 2567 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 17 11:30:19.514970 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513394 2567 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 17 11:30:19.514970 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513396 2567 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 17 11:30:19.514970 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513399 2567 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 17 11:30:19.514970 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513401 2567 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 17 11:30:19.515465 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513404 2567 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 17 11:30:19.515465 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513407 2567 feature_gate.go:328] unrecognized feature gate: Example
Apr 17 11:30:19.515465 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513409 2567 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 17 11:30:19.515465 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513412 2567 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 17 11:30:19.515465 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513415 2567 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 17 11:30:19.515465 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513417 2567 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 17 11:30:19.515465 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513420 2567 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 17 11:30:19.515465 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513422 2567 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 17 11:30:19.515465 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513425 2567 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 17 11:30:19.515465 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513427 2567 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 17 11:30:19.515465 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513430 2567 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 17 11:30:19.515465 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513433 2567 feature_gate.go:328] unrecognized feature gate: Example2
Apr 17 11:30:19.515465 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513436 2567 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 17 11:30:19.515465 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513438 2567 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 17 11:30:19.515465 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513441 2567 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 17 11:30:19.515465 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513444 2567 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 17 11:30:19.515465 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513448 2567 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 17 11:30:19.515465 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513451 2567 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 17 11:30:19.515465 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513453 2567 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 17 11:30:19.515465 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513456 2567 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 17 11:30:19.516022 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513458 2567 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 17 11:30:19.516022 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513461 2567 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 17 11:30:19.516022 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513464 2567 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 17 11:30:19.516022 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513466 2567 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 17 11:30:19.516022 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513469 2567 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 17 11:30:19.516022 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513471 2567 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 17 11:30:19.516022 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513474 2567 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 17 11:30:19.516022 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513477 2567 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 17 11:30:19.516022 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513479 2567 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 17 11:30:19.516022 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513481 2567 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 17 11:30:19.516022 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513484 2567 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 17 11:30:19.516022 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513487 2567 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 17 11:30:19.516022 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513490 2567 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 17 11:30:19.516022 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513492 2567 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 17 11:30:19.516022 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513495 2567 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 17 11:30:19.516022 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513497 2567 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 17 11:30:19.516022 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513500 2567 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 17 11:30:19.516022 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513502 2567 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 17 11:30:19.516483 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.513507 2567 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Apr 17 11:30:19.516483 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513604 2567 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 17 11:30:19.516483 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513609 2567 feature_gate.go:328] unrecognized feature gate: Example2
Apr 17 11:30:19.516483 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513612 2567 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 17 11:30:19.516483 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513615 2567 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 17 11:30:19.516483 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513618 2567 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 17 11:30:19.516483 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513620 2567 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 17 11:30:19.516483 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513623 2567 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 17 11:30:19.516483 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513626 2567 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 17 11:30:19.516483 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513629 2567 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 17 11:30:19.516483 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513633 2567 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 17 11:30:19.516483 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513636 2567 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 17 11:30:19.516483 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513640 2567 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 17 11:30:19.516483 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513644 2567 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 17 11:30:19.516483 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513646 2567 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 17 11:30:19.516873 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513649 2567 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 17 11:30:19.516873 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513651 2567 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 17 11:30:19.516873 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513654 2567 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 17 11:30:19.516873 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513657 2567 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 17 11:30:19.516873 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513659 2567 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 17 11:30:19.516873 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513662 2567 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 17 11:30:19.516873 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513664 2567 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 17 11:30:19.516873 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513667 2567 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 17 11:30:19.516873 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513669 2567 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 17 11:30:19.516873 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513672 2567 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 17 11:30:19.516873 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513674 2567 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 17 11:30:19.516873 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513695 2567 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 17 11:30:19.516873 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513699 2567 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 17 11:30:19.516873 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513718 2567 feature_gate.go:328] unrecognized feature gate: Example
Apr 17 11:30:19.516873 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513721 2567 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 17 11:30:19.516873 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513724 2567 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 17 11:30:19.516873 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513728 2567 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 17 11:30:19.516873 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513732 2567 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 17 11:30:19.516873 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513735 2567 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 17 11:30:19.517332 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513738 2567 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 17 11:30:19.517332 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513740 2567 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 17 11:30:19.517332 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513743 2567 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 17 11:30:19.517332 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513746 2567 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 17 11:30:19.517332 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513749 2567 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 17 11:30:19.517332 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513752 2567 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 17 11:30:19.517332 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513754 2567 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 17 11:30:19.517332 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513757 2567 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 17 11:30:19.517332 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513760 2567 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 17 11:30:19.517332 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513763 2567 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 17 11:30:19.517332 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513766 2567 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 17 11:30:19.517332 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513769 2567 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 17 11:30:19.517332 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513772 2567 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 17 11:30:19.517332 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513775 2567 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 17 11:30:19.517332 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513777 2567 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 17 11:30:19.517332 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513780 2567 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 17 11:30:19.517332 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513782 2567 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 17 11:30:19.517332 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513785 2567 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 17 11:30:19.517332 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513787 2567 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 17 11:30:19.517332 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513790 2567 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 17 11:30:19.517832 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513792 2567 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 17 11:30:19.517832 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513795 2567 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 17 11:30:19.517832 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513797 2567 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 17 11:30:19.517832 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513800 2567 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 17 11:30:19.517832 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513802 2567 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 17 11:30:19.517832 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513805 2567 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 17 11:30:19.517832 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513807 2567 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 17 11:30:19.517832 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513810 2567 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 17 11:30:19.517832 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513812 2567 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 17 11:30:19.517832 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513815 2567 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 17 11:30:19.517832 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513817 2567 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 17 11:30:19.517832 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513820 2567 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 17 11:30:19.517832 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513822 2567 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 17 11:30:19.517832 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513825 2567 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 17 11:30:19.517832 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513827 2567 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 17 11:30:19.517832 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513830 2567 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 17 11:30:19.517832 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513841 2567 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 17 11:30:19.517832 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513844 2567 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 17 11:30:19.517832 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513847 2567 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 17 11:30:19.517832 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513850 2567 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 17 11:30:19.518314 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513852 2567 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 17 11:30:19.518314 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513855 2567 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 17 11:30:19.518314 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513857 2567 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 17 11:30:19.518314 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513861 2567 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 17 11:30:19.518314 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513863 2567 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 17 11:30:19.518314 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513866 2567 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 17 11:30:19.518314 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513869 2567 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 17 11:30:19.518314 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513871 2567 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 17 11:30:19.518314 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513874 2567 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 17 11:30:19.518314 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513876 2567 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 17 11:30:19.518314 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513879 2567 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 17 11:30:19.518314 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513881 2567 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 17 11:30:19.518314 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:19.513883 2567 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 17 11:30:19.518314 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.513888 2567 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Apr 17 11:30:19.518314 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.514628 2567 server.go:962] "Client rotation is on, will bootstrap in background"
Apr 17 11:30:19.518674 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.516603 2567 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Apr 17 11:30:19.518674 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.517714 2567 server.go:1019] "Starting client certificate rotation"
Apr 17 11:30:19.518674 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.517808 2567 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet"
Apr 17 11:30:19.518674 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.517847 2567 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Apr 17 11:30:19.543527 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.543509 2567 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Apr 17 11:30:19.548642 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.548509 2567 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Apr 17 11:30:19.563965 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.563942 2567 log.go:25] "Validated CRI v1 runtime API"
Apr 17 11:30:19.569316 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.569299 2567 log.go:25] "Validated CRI v1 image API"
Apr 17 11:30:19.571229 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.571207 2567 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 17 11:30:19.575357 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.575333 2567 fs.go:135] Filesystem UUIDs: map[39012b56-457b-4a00-8684-4f4e946a35bd:/dev/nvme0n1p3 7B77-95E7:/dev/nvme0n1p2 96315aff-d119-4037-9161-3536f0f931e6:/dev/nvme0n1p4]
Apr 17 11:30:19.575421 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.575358 2567 fs.go:136] Filesystem partitions: map[/dev/nvme0n1p3:{mountpoint:/boot major:259 minor:3 fsType:ext4 blockSize:0} /dev/nvme0n1p4:{mountpoint:/var major:259 minor:4 fsType:xfs blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}]
Apr 17 11:30:19.577214 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.577197 2567 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Apr 17 11:30:19.580970 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.580864 2567 manager.go:217] Machine: {Timestamp:2026-04-17 11:30:19.579055995 +0000 UTC m=+0.417293741 CPUVendorID:GenuineIntel NumCores:8 NumPhysicalCores:4 NumSockets:1 CpuFrequency:3098982 MemoryCapacity:33164488704 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ec27aa6ab6c92d5dab2e04aa8df0de07 SystemUUID:ec27aa6a-b6c9-2d5d-ab2e-04aa8df0de07 BootID:19472505-5d34-4245-94cf-fad91ddb03eb Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16582246400 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/nvme0n1p3 DeviceMajor:259 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16582242304 Type:vfs Inodes:4048399 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6632898560 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/nvme0n1p4 DeviceMajor:259 DeviceMinor:4 Capacity:128243970048 Type:vfs Inodes:62651840 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6103040 Type:vfs Inodes:18446744073709551615 HasInodes:true}] DiskMap:map[259:0:{Name:nvme0n1 Major:259 Minor:0 Size:128849018880 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:02:98:b1:36:f7:5d Speed:0 Mtu:9001} {Name:ens5 MacAddress:02:98:b1:36:f7:5d Speed:0 Mtu:9001} {Name:ovs-system MacAddress:a6:18:f0:92:5c:d9 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33164488704 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0 4] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:1 Threads:[1 5] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:2 Threads:[2 6] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:3 Threads:[3 7] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:}] Caches:[{Id:0 Size:37486592 Type:Unified Level:3}] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Apr 17 11:30:19.580970 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.580966 2567 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Apr 17 11:30:19.581074 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.581041 2567 manager.go:233] Version: {KernelVersion:5.14.0-570.107.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20260414-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Apr 17 11:30:19.582103 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.582080 2567 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 17 11:30:19.582246 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.582115 2567 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-10-0-140-245.ec2.internal","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 17 11:30:19.582291 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.582255 2567 topology_manager.go:138] "Creating topology manager with none policy"
Apr 17 11:30:19.582291 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.582264 2567 container_manager_linux.go:306] "Creating device plugin manager"
Apr 17 11:30:19.582291 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.582276
2567 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Apr 17 11:30:19.583215 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.583204 2567 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Apr 17 11:30:19.584151 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.584142 2567 state_mem.go:36] "Initialized new in-memory state store" Apr 17 11:30:19.584257 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.584248 2567 server.go:1267] "Using root directory" path="/var/lib/kubelet" Apr 17 11:30:19.587673 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.587664 2567 kubelet.go:491] "Attempting to sync node with API server" Apr 17 11:30:19.587729 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.587688 2567 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 11:30:19.587729 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.587699 2567 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Apr 17 11:30:19.587729 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.587708 2567 kubelet.go:397] "Adding apiserver pod source" Apr 17 11:30:19.587729 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.587716 2567 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 11:30:19.588835 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.588823 2567 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Apr 17 11:30:19.588887 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.588842 2567 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Apr 17 11:30:19.591819 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.591801 2567 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.10-2.rhaos4.20.gita4d0894.el9" apiVersion="v1" Apr 17 11:30:19.593141 ip-10-0-140-245 
kubenswrapper[2567]: I0417 11:30:19.593127 2567 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 11:30:19.595001 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.594987 2567 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Apr 17 11:30:19.595049 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.595011 2567 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Apr 17 11:30:19.595049 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.595021 2567 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Apr 17 11:30:19.595049 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.595029 2567 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Apr 17 11:30:19.595049 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.595038 2567 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Apr 17 11:30:19.595049 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.595046 2567 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Apr 17 11:30:19.595180 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.595055 2567 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Apr 17 11:30:19.595180 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.595063 2567 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Apr 17 11:30:19.595180 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.595074 2567 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc" Apr 17 11:30:19.595180 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.595090 2567 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Apr 17 11:30:19.595180 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.595101 2567 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Apr 17 
11:30:19.595180 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.595114 2567 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Apr 17 11:30:19.597055 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.597043 2567 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Apr 17 11:30:19.597093 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.597056 2567 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Apr 17 11:30:19.600422 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.600408 2567 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 17 11:30:19.600491 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.600444 2567 server.go:1295] "Started kubelet" Apr 17 11:30:19.600545 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.600515 2567 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 11:30:19.600691 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.600627 2567 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 11:30:19.600754 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.600711 2567 server_v1.go:47] "podresources" method="list" useActivePods=true Apr 17 11:30:19.601377 ip-10-0-140-245 systemd[1]: Started Kubernetes Kubelet. 
Apr 17 11:30:19.602101 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.601904 2567 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 11:30:19.602101 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.602019 2567 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-140-245.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 17 11:30:19.602101 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:19.602045 2567 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-140-245.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 11:30:19.602101 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:19.602045 2567 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 11:30:19.605712 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.605668 2567 server.go:317] "Adding debug handlers to kubelet server" Apr 17 11:30:19.609880 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.609862 2567 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Apr 17 11:30:19.610494 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.610470 2567 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 11:30:19.611944 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:19.608351 2567 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-140-245.ec2.internal.18a72181d941dd1e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-140-245.ec2.internal,UID:ip-10-0-140-245.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-10-0-140-245.ec2.internal,},FirstTimestamp:2026-04-17 11:30:19.600420126 +0000 UTC m=+0.438657872,LastTimestamp:2026-04-17 11:30:19.600420126 +0000 UTC m=+0.438657872,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-140-245.ec2.internal,}" Apr 17 11:30:19.612916 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.612894 2567 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 17 11:30:19.612916 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.612917 2567 volume_manager.go:295] "The desired_state_of_world populator starts" Apr 17 11:30:19.613049 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.612928 2567 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 17 11:30:19.613049 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.613034 2567 reconstruct.go:97] "Volume reconstruction finished" Apr 17 11:30:19.613151 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:19.613067 2567 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-140-245.ec2.internal\" not found" Apr 17 11:30:19.613151 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.613066 2567 reconciler.go:26] "Reconciler: start to sync state" Apr 17 11:30:19.613151 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:19.613137 2567 kubelet.go:1618] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Apr 17 11:30:19.613283 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.613207 2567 factory.go:55] Registering systemd factory Apr 17 11:30:19.613283 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.613249 2567 factory.go:223] Registration of the systemd container factory successfully Apr 17 11:30:19.613464 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.613446 2567 factory.go:153] Registering CRI-O factory Apr 17 11:30:19.613464 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.613464 2567 factory.go:223] Registration of the crio container factory successfully Apr 17 11:30:19.613576 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.613511 2567 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Apr 17 11:30:19.613576 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.613535 2567 factory.go:103] Registering Raw factory Apr 17 11:30:19.613576 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.613550 2567 manager.go:1196] Started watching for new ooms in manager Apr 17 11:30:19.613918 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.613903 2567 manager.go:319] Starting recovery of all containers Apr 17 11:30:19.623451 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:19.623422 2567 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 11:30:19.623540 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:19.623448 2567 controller.go:145] "Failed to ensure lease exists, will retry" 
err="leases.coordination.k8s.io \"ip-10-0-140-245.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Apr 17 11:30:19.623604 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.623558 2567 manager.go:324] Recovery completed Apr 17 11:30:19.627479 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.627394 2567 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 17 11:30:19.629824 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.629807 2567 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-245.ec2.internal" event="NodeHasSufficientMemory" Apr 17 11:30:19.629888 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.629838 2567 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-245.ec2.internal" event="NodeHasNoDiskPressure" Apr 17 11:30:19.629888 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.629848 2567 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-245.ec2.internal" event="NodeHasSufficientPID" Apr 17 11:30:19.630329 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.630313 2567 cpu_manager.go:222] "Starting CPU manager" policy="none" Apr 17 11:30:19.630329 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.630328 2567 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Apr 17 11:30:19.630446 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.630345 2567 state_mem.go:36] "Initialized new in-memory state store" Apr 17 11:30:19.632608 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.632595 2567 policy_none.go:49] "None policy: Start" Apr 17 11:30:19.632701 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.632613 2567 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 17 11:30:19.632701 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.632626 2567 state_mem.go:35] 
"Initializing new in-memory state store" Apr 17 11:30:19.638946 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:19.638883 2567 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-140-245.ec2.internal.18a72181db028c5e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-140-245.ec2.internal,UID:ip-10-0-140-245.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-140-245.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-140-245.ec2.internal,},FirstTimestamp:2026-04-17 11:30:19.629825118 +0000 UTC m=+0.468062864,LastTimestamp:2026-04-17 11:30:19.629825118 +0000 UTC m=+0.468062864,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-140-245.ec2.internal,}" Apr 17 11:30:19.644752 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.644736 2567 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-jg4w4" Apr 17 11:30:19.649110 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:19.649050 2567 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-140-245.ec2.internal.18a72181db02d15f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-140-245.ec2.internal,UID:ip-10-0-140-245.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-140-245.ec2.internal status is now: 
NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-140-245.ec2.internal,},FirstTimestamp:2026-04-17 11:30:19.629842783 +0000 UTC m=+0.468080530,LastTimestamp:2026-04-17 11:30:19.629842783 +0000 UTC m=+0.468080530,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-140-245.ec2.internal,}" Apr 17 11:30:19.653649 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.653630 2567 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-jg4w4" Apr 17 11:30:19.680258 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.680247 2567 manager.go:341] "Starting Device Plugin manager" Apr 17 11:30:19.680365 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:19.680274 2567 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 11:30:19.680365 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.680284 2567 server.go:85] "Starting device plugin registration server" Apr 17 11:30:19.680519 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.680506 2567 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 11:30:19.680555 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.680522 2567 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 11:30:19.680658 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.680641 2567 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Apr 17 11:30:19.680762 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.680737 2567 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Apr 17 11:30:19.680762 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.680745 2567 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 11:30:19.681367 ip-10-0-140-245 kubenswrapper[2567]: 
E0417 11:30:19.681349 2567 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\"" Apr 17 11:30:19.681443 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:19.681386 2567 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-140-245.ec2.internal\" not found" Apr 17 11:30:19.707410 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.707383 2567 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 17 11:30:19.708578 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.708557 2567 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 17 11:30:19.708633 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.708583 2567 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 17 11:30:19.708633 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.708601 2567 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 17 11:30:19.708633 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.708607 2567 kubelet.go:2451] "Starting kubelet main sync loop" Apr 17 11:30:19.708801 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:19.708691 2567 kubelet.go:2475] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Apr 17 11:30:19.710911 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.710891 2567 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Apr 17 11:30:19.782095 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.782047 2567 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 17 11:30:19.782880 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.782855 2567 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-245.ec2.internal" event="NodeHasSufficientMemory" Apr 17 11:30:19.782959 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.782884 2567 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-245.ec2.internal" event="NodeHasNoDiskPressure" Apr 17 11:30:19.782959 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.782898 2567 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-245.ec2.internal" event="NodeHasSufficientPID" Apr 17 11:30:19.782959 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.782918 2567 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-140-245.ec2.internal" Apr 17 11:30:19.790894 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.790881 2567 kubelet_node_status.go:81] "Successfully registered node" node="ip-10-0-140-245.ec2.internal" Apr 17 11:30:19.790942 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:19.790900 2567 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-140-245.ec2.internal\": node \"ip-10-0-140-245.ec2.internal\" not found" Apr 17 
11:30:19.807876 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:19.807855 2567 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-140-245.ec2.internal\" not found" Apr 17 11:30:19.808882 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.808866 2567 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["kube-system/kube-apiserver-proxy-ip-10-0-140-245.ec2.internal","openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-140-245.ec2.internal"] Apr 17 11:30:19.808931 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.808920 2567 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 17 11:30:19.810750 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.810731 2567 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-245.ec2.internal" event="NodeHasSufficientMemory" Apr 17 11:30:19.810828 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.810759 2567 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-245.ec2.internal" event="NodeHasNoDiskPressure" Apr 17 11:30:19.810828 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.810773 2567 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-245.ec2.internal" event="NodeHasSufficientPID" Apr 17 11:30:19.811970 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.811957 2567 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 17 11:30:19.812106 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.812094 2567 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-140-245.ec2.internal" Apr 17 11:30:19.812151 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.812118 2567 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 17 11:30:19.813196 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.813174 2567 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-245.ec2.internal" event="NodeHasSufficientMemory" Apr 17 11:30:19.813279 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.813200 2567 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-245.ec2.internal" event="NodeHasNoDiskPressure" Apr 17 11:30:19.813279 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.813174 2567 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-245.ec2.internal" event="NodeHasSufficientMemory" Apr 17 11:30:19.813279 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.813211 2567 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-245.ec2.internal" event="NodeHasSufficientPID" Apr 17 11:30:19.813279 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.813231 2567 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-245.ec2.internal" event="NodeHasNoDiskPressure" Apr 17 11:30:19.813279 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.813243 2567 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-245.ec2.internal" event="NodeHasSufficientPID" Apr 17 11:30:19.813971 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.813954 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/b313ec3d20352e5d3b289f2ee066026b-config\") pod \"kube-apiserver-proxy-ip-10-0-140-245.ec2.internal\" (UID: \"b313ec3d20352e5d3b289f2ee066026b\") " 
pod="kube-system/kube-apiserver-proxy-ip-10-0-140-245.ec2.internal" Apr 17 11:30:19.814050 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.813983 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/aa74b40b257fc464a40cc0981f038693-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-140-245.ec2.internal\" (UID: \"aa74b40b257fc464a40cc0981f038693\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-140-245.ec2.internal" Apr 17 11:30:19.814050 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.814012 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/aa74b40b257fc464a40cc0981f038693-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-140-245.ec2.internal\" (UID: \"aa74b40b257fc464a40cc0981f038693\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-140-245.ec2.internal" Apr 17 11:30:19.814256 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.814243 2567 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-140-245.ec2.internal" Apr 17 11:30:19.814289 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.814268 2567 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 17 11:30:19.814954 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.814939 2567 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-245.ec2.internal" event="NodeHasSufficientMemory" Apr 17 11:30:19.815035 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.814963 2567 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-245.ec2.internal" event="NodeHasNoDiskPressure" Apr 17 11:30:19.815035 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.814973 2567 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-245.ec2.internal" event="NodeHasSufficientPID" Apr 17 11:30:19.843190 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:19.843170 2567 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-140-245.ec2.internal\" not found" node="ip-10-0-140-245.ec2.internal" Apr 17 11:30:19.847511 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:19.847486 2567 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-140-245.ec2.internal\" not found" node="ip-10-0-140-245.ec2.internal" Apr 17 11:30:19.908797 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:19.908772 2567 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-140-245.ec2.internal\" not found" Apr 17 11:30:19.915110 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.915093 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/b313ec3d20352e5d3b289f2ee066026b-config\") pod 
\"kube-apiserver-proxy-ip-10-0-140-245.ec2.internal\" (UID: \"b313ec3d20352e5d3b289f2ee066026b\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-140-245.ec2.internal" Apr 17 11:30:19.915183 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.915118 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/aa74b40b257fc464a40cc0981f038693-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-140-245.ec2.internal\" (UID: \"aa74b40b257fc464a40cc0981f038693\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-140-245.ec2.internal" Apr 17 11:30:19.915183 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.915135 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/aa74b40b257fc464a40cc0981f038693-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-140-245.ec2.internal\" (UID: \"aa74b40b257fc464a40cc0981f038693\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-140-245.ec2.internal" Apr 17 11:30:19.915183 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.915161 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/aa74b40b257fc464a40cc0981f038693-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-140-245.ec2.internal\" (UID: \"aa74b40b257fc464a40cc0981f038693\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-140-245.ec2.internal" Apr 17 11:30:19.915279 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.915186 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/b313ec3d20352e5d3b289f2ee066026b-config\") pod \"kube-apiserver-proxy-ip-10-0-140-245.ec2.internal\" (UID: \"b313ec3d20352e5d3b289f2ee066026b\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-140-245.ec2.internal" Apr 17 11:30:19.915279 
ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:19.915195 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/aa74b40b257fc464a40cc0981f038693-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-140-245.ec2.internal\" (UID: \"aa74b40b257fc464a40cc0981f038693\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-140-245.ec2.internal" Apr 17 11:30:20.009258 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:20.009232 2567 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-140-245.ec2.internal\" not found" Apr 17 11:30:20.110069 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:20.109990 2567 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-140-245.ec2.internal\" not found" Apr 17 11:30:20.145449 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:20.145425 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-140-245.ec2.internal" Apr 17 11:30:20.149948 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:20.149929 2567 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-140-245.ec2.internal" Apr 17 11:30:20.210661 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:20.210638 2567 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-140-245.ec2.internal\" not found" Apr 17 11:30:20.311180 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:20.311156 2567 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-140-245.ec2.internal\" not found" Apr 17 11:30:20.411676 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:20.411615 2567 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-140-245.ec2.internal\" not found" Apr 17 11:30:20.512216 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:20.512192 2567 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-140-245.ec2.internal\" not found" Apr 17 11:30:20.517454 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:20.517436 2567 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Apr 17 11:30:20.517591 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:20.517575 2567 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Apr 17 11:30:20.610400 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:20.610375 2567 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Apr 17 11:30:20.612922 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:20.612905 2567 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-140-245.ec2.internal\" not found" Apr 17 11:30:20.623645 ip-10-0-140-245 kubenswrapper[2567]: I0417 
11:30:20.623623 2567 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Apr 17 11:30:20.644386 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:20.644366 2567 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-qzn7k" Apr 17 11:30:20.651183 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:20.651164 2567 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-qzn7k" Apr 17 11:30:20.655805 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:20.655775 2567 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2028-04-16 11:25:19 +0000 UTC" deadline="2028-01-04 12:25:23.522833004 +0000 UTC" Apr 17 11:30:20.655805 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:20.655803 2567 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="15048h55m2.867032987s" Apr 17 11:30:20.705634 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:20.705607 2567 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Apr 17 11:30:20.713637 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:20.713611 2567 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-140-245.ec2.internal\" not found" Apr 17 11:30:20.766411 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:20.766379 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb313ec3d20352e5d3b289f2ee066026b.slice/crio-fefdbc0d75cc3ddf9984132e9c8aa63c1523ad8eeccf9affc59929d492d81723 WatchSource:0}: Error finding container fefdbc0d75cc3ddf9984132e9c8aa63c1523ad8eeccf9affc59929d492d81723: 
Status 404 returned error can't find the container with id fefdbc0d75cc3ddf9984132e9c8aa63c1523ad8eeccf9affc59929d492d81723 Apr 17 11:30:20.766976 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:20.766946 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa74b40b257fc464a40cc0981f038693.slice/crio-903b2faf75dbb41dd8cd9c8a1af1d82905c0caa6d2fc8a41589e3370912c237f WatchSource:0}: Error finding container 903b2faf75dbb41dd8cd9c8a1af1d82905c0caa6d2fc8a41589e3370912c237f: Status 404 returned error can't find the container with id 903b2faf75dbb41dd8cd9c8a1af1d82905c0caa6d2fc8a41589e3370912c237f Apr 17 11:30:20.771737 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:20.771720 2567 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 17 11:30:20.814530 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:20.814355 2567 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-140-245.ec2.internal\" not found" Apr 17 11:30:20.914840 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:20.914811 2567 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-140-245.ec2.internal\" not found" Apr 17 11:30:20.998846 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:20.998768 2567 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Apr 17 11:30:20.999994 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:20.999977 2567 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Apr 17 11:30:21.012744 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.012719 2567 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-140-245.ec2.internal" Apr 17 11:30:21.021823 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.021796 2567 warnings.go:110] 
"Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 17 11:30:21.022838 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.022825 2567 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-140-245.ec2.internal" Apr 17 11:30:21.033384 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.033370 2567 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 17 11:30:21.589586 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.589556 2567 apiserver.go:52] "Watching apiserver" Apr 17 11:30:21.600081 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.600050 2567 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Apr 17 11:30:21.601220 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.601187 2567 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kube-system/konnectivity-agent-jkr2r","kube-system/kube-apiserver-proxy-ip-10-0-140-245.ec2.internal","openshift-dns/node-resolver-9zpcp","openshift-image-registry/node-ca-xq6rz","openshift-multus/multus-g2pjn","openshift-multus/network-metrics-daemon-tnlq8","openshift-network-diagnostics/network-check-target-8d6ts","openshift-network-operator/iptables-alerter-v2svx","openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-rlnwv","openshift-cluster-node-tuning-operator/tuned-244rk","openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-140-245.ec2.internal","openshift-multus/multus-additional-cni-plugins-fpmx5","openshift-ovn-kubernetes/ovnkube-node-vjfgd"] Apr 17 11:30:21.603760 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.603739 2567 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8d6ts" Apr 17 11:30:21.603872 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:21.603836 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8d6ts" podUID="a1d31d48-4078-464b-b36c-28075b5f885b" Apr 17 11:30:21.606180 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.606043 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" Apr 17 11:30:21.608404 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.608381 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-9zpcp" Apr 17 11:30:21.608940 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.608542 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Apr 17 11:30:21.608940 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.608721 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-5z96q\"" Apr 17 11:30:21.608940 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.608805 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Apr 17 11:30:21.609530 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.609512 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Apr 17 11:30:21.609665 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.609591 2567 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Apr 17 11:30:21.609665 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.609614 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Apr 17 11:30:21.609665 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.609652 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Apr 17 11:30:21.610551 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.610529 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Apr 17 11:30:21.610900 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.610880 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-ptsrr\"" Apr 17 11:30:21.611007 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.610987 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Apr 17 11:30:21.614145 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.613786 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-fpmx5" Apr 17 11:30:21.614145 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.613894 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-v2svx" Apr 17 11:30:21.616330 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.616197 2567 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-xq6rz" Apr 17 11:30:21.617321 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.617300 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-z79m4\"" Apr 17 11:30:21.617592 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.617528 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Apr 17 11:30:21.617792 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.617774 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Apr 17 11:30:21.617861 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.617796 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-dockercfg-z9q6w\"" Apr 17 11:30:21.618024 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.617990 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Apr 17 11:30:21.618205 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.618189 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Apr 17 11:30:21.618434 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.618394 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Apr 17 11:30:21.618555 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.618481 2567 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tnlq8" Apr 17 11:30:21.618618 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:21.618594 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tnlq8" podUID="7b1f3e8e-0735-4b17-9e76-c70b964db9c1" Apr 17 11:30:21.618675 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.618638 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Apr 17 11:30:21.618877 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.618859 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Apr 17 11:30:21.618958 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.618877 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Apr 17 11:30:21.619092 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.619067 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Apr 17 11:30:21.619092 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.619080 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Apr 17 11:30:21.619387 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.619373 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-5525m\"" Apr 17 11:30:21.619466 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.619446 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Apr 17 11:30:21.621327 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.620901 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-g2pjn" Apr 17 11:30:21.623902 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.622809 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/989ffc44-f4df-40d5-916a-161b05378f4e-iptables-alerter-script\") pod \"iptables-alerter-v2svx\" (UID: \"989ffc44-f4df-40d5-916a-161b05378f4e\") " pod="openshift-network-operator/iptables-alerter-v2svx" Apr 17 11:30:21.623902 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.622843 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" Apr 17 11:30:21.623902 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.622874 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/989ffc44-f4df-40d5-916a-161b05378f4e-host-slash\") pod \"iptables-alerter-v2svx\" (UID: \"989ffc44-f4df-40d5-916a-161b05378f4e\") " pod="openshift-network-operator/iptables-alerter-v2svx" Apr 17 11:30:21.623902 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.622898 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5tcw\" (UniqueName: \"kubernetes.io/projected/989ffc44-f4df-40d5-916a-161b05378f4e-kube-api-access-f5tcw\") pod \"iptables-alerter-v2svx\" (UID: 
\"989ffc44-f4df-40d5-916a-161b05378f4e\") " pod="openshift-network-operator/iptables-alerter-v2svx" Apr 17 11:30:21.623902 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.622921 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4hjp\" (UniqueName: \"kubernetes.io/projected/a1d31d48-4078-464b-b36c-28075b5f885b-kube-api-access-g4hjp\") pod \"network-check-target-8d6ts\" (UID: \"a1d31d48-4078-464b-b36c-28075b5f885b\") " pod="openshift-network-diagnostics/network-check-target-8d6ts" Apr 17 11:30:21.623902 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.622947 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-host-cni-bin\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" Apr 17 11:30:21.623902 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.622971 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f79e4e5d-fbfb-429a-aa74-4c1d5725072a-hosts-file\") pod \"node-resolver-9zpcp\" (UID: \"f79e4e5d-fbfb-429a-aa74-4c1d5725072a\") " pod="openshift-dns/node-resolver-9zpcp" Apr 17 11:30:21.623902 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.622992 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-node-log\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" Apr 17 11:30:21.623902 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.623014 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" 
(UniqueName: \"kubernetes.io/configmap/ea7f56ab-276e-4b70-8003-11db06a0b72b-cni-binary-copy\") pod \"multus-additional-cni-plugins-fpmx5\" (UID: \"ea7f56ab-276e-4b70-8003-11db06a0b72b\") " pod="openshift-multus/multus-additional-cni-plugins-fpmx5" Apr 17 11:30:21.623902 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.623025 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Apr 17 11:30:21.623902 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.623040 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/ea7f56ab-276e-4b70-8003-11db06a0b72b-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-fpmx5\" (UID: \"ea7f56ab-276e-4b70-8003-11db06a0b72b\") " pod="openshift-multus/multus-additional-cni-plugins-fpmx5" Apr 17 11:30:21.623902 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.623241 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-hw8ht\"" Apr 17 11:30:21.623902 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.623266 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgpjs\" (UniqueName: \"kubernetes.io/projected/ea7f56ab-276e-4b70-8003-11db06a0b72b-kube-api-access-sgpjs\") pod \"multus-additional-cni-plugins-fpmx5\" (UID: \"ea7f56ab-276e-4b70-8003-11db06a0b72b\") " pod="openshift-multus/multus-additional-cni-plugins-fpmx5" Apr 17 11:30:21.623902 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.623294 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-systemd-units\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" Apr 17 11:30:21.623902 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.623319 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-var-lib-openvswitch\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" Apr 17 11:30:21.623902 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.623345 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-ovnkube-config\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" Apr 17 11:30:21.623902 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.623369 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-ovn-node-metrics-cert\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" Apr 17 11:30:21.624750 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.623397 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ea7f56ab-276e-4b70-8003-11db06a0b72b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-fpmx5\" (UID: \"ea7f56ab-276e-4b70-8003-11db06a0b72b\") " pod="openshift-multus/multus-additional-cni-plugins-fpmx5" Apr 17 11:30:21.624750 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.623420 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-host-run-netns\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" Apr 17 11:30:21.624750 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.623442 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-etc-openvswitch\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" Apr 17 11:30:21.624750 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.623466 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-run-openvswitch\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" Apr 17 11:30:21.624750 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.623529 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-host-run-ovn-kubernetes\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" Apr 17 11:30:21.624750 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.623563 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-ovnkube-script-lib\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" Apr 17 11:30:21.624750 ip-10-0-140-245 kubenswrapper[2567]: 
I0417 11:30:21.623589 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-host-kubelet\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" Apr 17 11:30:21.624750 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.623616 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-log-socket\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" Apr 17 11:30:21.624750 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.623639 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-host-cni-netd\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" Apr 17 11:30:21.624750 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.623667 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-env-overrides\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" Apr 17 11:30:21.624750 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.623744 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pbkz\" (UniqueName: \"kubernetes.io/projected/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-kube-api-access-6pbkz\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd"
Apr 17 11:30:21.624750 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.623769 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f79e4e5d-fbfb-429a-aa74-4c1d5725072a-tmp-dir\") pod \"node-resolver-9zpcp\" (UID: \"f79e4e5d-fbfb-429a-aa74-4c1d5725072a\") " pod="openshift-dns/node-resolver-9zpcp"
Apr 17 11:30:21.624750 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.623810 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ea7f56ab-276e-4b70-8003-11db06a0b72b-system-cni-dir\") pod \"multus-additional-cni-plugins-fpmx5\" (UID: \"ea7f56ab-276e-4b70-8003-11db06a0b72b\") " pod="openshift-multus/multus-additional-cni-plugins-fpmx5"
Apr 17 11:30:21.624750 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.623845 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ea7f56ab-276e-4b70-8003-11db06a0b72b-cnibin\") pod \"multus-additional-cni-plugins-fpmx5\" (UID: \"ea7f56ab-276e-4b70-8003-11db06a0b72b\") " pod="openshift-multus/multus-additional-cni-plugins-fpmx5"
Apr 17 11:30:21.624750 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.623883 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-run-ovn\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd"
Apr 17 11:30:21.624750 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.623967 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdj8s\" (UniqueName: \"kubernetes.io/projected/f79e4e5d-fbfb-429a-aa74-4c1d5725072a-kube-api-access-qdj8s\") pod \"node-resolver-9zpcp\" (UID: \"f79e4e5d-fbfb-429a-aa74-4c1d5725072a\") " pod="openshift-dns/node-resolver-9zpcp"
Apr 17 11:30:21.625470 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.624048 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ea7f56ab-276e-4b70-8003-11db06a0b72b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-fpmx5\" (UID: \"ea7f56ab-276e-4b70-8003-11db06a0b72b\") " pod="openshift-multus/multus-additional-cni-plugins-fpmx5"
Apr 17 11:30:21.625470 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.624073 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-host-slash\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd"
Apr 17 11:30:21.625470 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.624095 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-run-systemd\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd"
Apr 17 11:30:21.625470 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.624118 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ea7f56ab-276e-4b70-8003-11db06a0b72b-os-release\") pod \"multus-additional-cni-plugins-fpmx5\" (UID: \"ea7f56ab-276e-4b70-8003-11db06a0b72b\") " pod="openshift-multus/multus-additional-cni-plugins-fpmx5"
Apr 17 11:30:21.626068 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.626050 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-rlnwv"
Apr 17 11:30:21.626716 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.626697 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-244rk"
Apr 17 11:30:21.628964 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.628764 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"kube-root-ca.crt\""
Apr 17 11:30:21.629055 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.628379 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"openshift-service-ca.crt\""
Apr 17 11:30:21.629116 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.629029 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-sa-dockercfg-qzfrx\""
Apr 17 11:30:21.629116 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.629094 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-metrics-serving-cert\""
Apr 17 11:30:21.630655 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.629537 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-jkr2r"
Apr 17 11:30:21.630655 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.629629 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"kube-root-ca.crt\""
Apr 17 11:30:21.630655 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.629980 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"tuned-dockercfg-9b2wx\""
Apr 17 11:30:21.630655 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.630273 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"openshift-service-ca.crt\""
Apr 17 11:30:21.632866 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.632847 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"konnectivity-agent\""
Apr 17 11:30:21.632965 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.632901 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kube-system\"/\"konnectivity-ca-bundle\""
Apr 17 11:30:21.633825 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.633800 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"default-dockercfg-j7bcj\""
Apr 17 11:30:21.652421 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.652392 2567 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-16 11:25:20 +0000 UTC" deadline="2028-01-29 15:06:56.61479726 +0000 UTC"
Apr 17 11:30:21.652421 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.652419 2567 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="15651h36m34.962381016s"
Apr 17 11:30:21.713037 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.712988 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-140-245.ec2.internal" event={"ID":"b313ec3d20352e5d3b289f2ee066026b","Type":"ContainerStarted","Data":"fefdbc0d75cc3ddf9984132e9c8aa63c1523ad8eeccf9affc59929d492d81723"}
Apr 17 11:30:21.713748 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.713727 2567 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 17 11:30:21.714578 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.714547 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-140-245.ec2.internal" event={"ID":"aa74b40b257fc464a40cc0981f038693","Type":"ContainerStarted","Data":"903b2faf75dbb41dd8cd9c8a1af1d82905c0caa6d2fc8a41589e3370912c237f"}
Apr 17 11:30:21.724759 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.724739 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-host-cni-bin\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd"
Apr 17 11:30:21.724865 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.724766 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f79e4e5d-fbfb-429a-aa74-4c1d5725072a-hosts-file\") pod \"node-resolver-9zpcp\" (UID: \"f79e4e5d-fbfb-429a-aa74-4c1d5725072a\") " pod="openshift-dns/node-resolver-9zpcp"
Apr 17 11:30:21.724865 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.724787 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/710176df-941f-45f0-baed-0e9b9115157d-konnectivity-ca\") pod \"konnectivity-agent-jkr2r\" (UID: \"710176df-941f-45f0-baed-0e9b9115157d\") " pod="kube-system/konnectivity-agent-jkr2r"
Apr 17 11:30:21.724865 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.724811 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-os-release\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.725018 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.724868 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f79e4e5d-fbfb-429a-aa74-4c1d5725072a-hosts-file\") pod \"node-resolver-9zpcp\" (UID: \"f79e4e5d-fbfb-429a-aa74-4c1d5725072a\") " pod="openshift-dns/node-resolver-9zpcp"
Apr 17 11:30:21.725018 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.724868 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-host-cni-bin\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd"
Apr 17 11:30:21.725018 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.724873 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qdj8s\" (UniqueName: \"kubernetes.io/projected/f79e4e5d-fbfb-429a-aa74-4c1d5725072a-kube-api-access-qdj8s\") pod \"node-resolver-9zpcp\" (UID: \"f79e4e5d-fbfb-429a-aa74-4c1d5725072a\") " pod="openshift-dns/node-resolver-9zpcp"
Apr 17 11:30:21.725018 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.724929 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d6d2dca5-2ae8-41be-9865-73022a8c7601-socket-dir\") pod \"aws-ebs-csi-driver-node-rlnwv\" (UID: \"d6d2dca5-2ae8-41be-9865-73022a8c7601\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-rlnwv"
Apr 17 11:30:21.725018 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.724950 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/a32ab52e-e11f-46e9-9714-4ecfa0c87830-etc-systemd\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk"
Apr 17 11:30:21.725018 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.724970 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-node-log\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd"
Apr 17 11:30:21.725018 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.724993 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ea7f56ab-276e-4b70-8003-11db06a0b72b-cni-binary-copy\") pod \"multus-additional-cni-plugins-fpmx5\" (UID: \"ea7f56ab-276e-4b70-8003-11db06a0b72b\") " pod="openshift-multus/multus-additional-cni-plugins-fpmx5"
Apr 17 11:30:21.725348 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725037 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-systemd-units\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd"
Apr 17 11:30:21.725348 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725068 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-var-lib-openvswitch\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd"
Apr 17 11:30:21.725348 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725104 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbkwb\" (UniqueName: \"kubernetes.io/projected/a32ab52e-e11f-46e9-9714-4ecfa0c87830-kube-api-access-hbkwb\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk"
Apr 17 11:30:21.725348 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725118 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-var-lib-openvswitch\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd"
Apr 17 11:30:21.725348 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725126 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-node-log\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd"
Apr 17 11:30:21.725348 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725147 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ea7f56ab-276e-4b70-8003-11db06a0b72b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-fpmx5\" (UID: \"ea7f56ab-276e-4b70-8003-11db06a0b72b\") " pod="openshift-multus/multus-additional-cni-plugins-fpmx5"
Apr 17 11:30:21.725348 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725158 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-systemd-units\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd"
Apr 17 11:30:21.725348 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725193 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-host-run-netns\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd"
Apr 17 11:30:21.725348 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725212 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-run-openvswitch\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd"
Apr 17 11:30:21.725348 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725238 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/710176df-941f-45f0-baed-0e9b9115157d-agent-certs\") pod \"konnectivity-agent-jkr2r\" (UID: \"710176df-941f-45f0-baed-0e9b9115157d\") " pod="kube-system/konnectivity-agent-jkr2r"
Apr 17 11:30:21.725348 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725247 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-host-run-netns\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd"
Apr 17 11:30:21.725348 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725284 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-run-openvswitch\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd"
Apr 17 11:30:21.725348 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725335 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/41014c60-54e4-48f5-83f8-487c7f64058e-cni-binary-copy\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.725963 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725371 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-host-run-multus-certs\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.725963 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725402 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-host-kubelet\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd"
Apr 17 11:30:21.725963 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725441 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-log-socket\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd"
Apr 17 11:30:21.725963 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725452 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ea7f56ab-276e-4b70-8003-11db06a0b72b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-fpmx5\" (UID: \"ea7f56ab-276e-4b70-8003-11db06a0b72b\") " pod="openshift-multus/multus-additional-cni-plugins-fpmx5"
Apr 17 11:30:21.725963 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725481 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-host-kubelet\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd"
Apr 17 11:30:21.725963 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725520 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ea7f56ab-276e-4b70-8003-11db06a0b72b-cni-binary-copy\") pod \"multus-additional-cni-plugins-fpmx5\" (UID: \"ea7f56ab-276e-4b70-8003-11db06a0b72b\") " pod="openshift-multus/multus-additional-cni-plugins-fpmx5"
Apr 17 11:30:21.725963 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725536 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-host-cni-netd\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd"
Apr 17 11:30:21.725963 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725540 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-log-socket\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd"
Apr 17 11:30:21.725963 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725575 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-multus-cni-dir\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.725963 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725631 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-host-run-netns\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.725963 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725641 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-host-cni-netd\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd"
Apr 17 11:30:21.725963 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725703 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ea7f56ab-276e-4b70-8003-11db06a0b72b-cnibin\") pod \"multus-additional-cni-plugins-fpmx5\" (UID: \"ea7f56ab-276e-4b70-8003-11db06a0b72b\") " pod="openshift-multus/multus-additional-cni-plugins-fpmx5"
Apr 17 11:30:21.725963 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725731 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a32ab52e-e11f-46e9-9714-4ecfa0c87830-host\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk"
Apr 17 11:30:21.725963 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725755 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/a32ab52e-e11f-46e9-9714-4ecfa0c87830-etc-tuned\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk"
Apr 17 11:30:21.725963 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725773 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a32ab52e-e11f-46e9-9714-4ecfa0c87830-tmp\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk"
Apr 17 11:30:21.725963 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725766 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ea7f56ab-276e-4b70-8003-11db06a0b72b-cnibin\") pod \"multus-additional-cni-plugins-fpmx5\" (UID: \"ea7f56ab-276e-4b70-8003-11db06a0b72b\") " pod="openshift-multus/multus-additional-cni-plugins-fpmx5"
Apr 17 11:30:21.725963 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725796 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-host-slash\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd"
Apr 17 11:30:21.726734 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725831 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-run-systemd\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd"
Apr 17 11:30:21.726734 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725850 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-host-slash\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd"
Apr 17 11:30:21.726734 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725887 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6d2dca5-2ae8-41be-9865-73022a8c7601-kubelet-dir\") pod \"aws-ebs-csi-driver-node-rlnwv\" (UID: \"d6d2dca5-2ae8-41be-9865-73022a8c7601\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-rlnwv"
Apr 17 11:30:21.726734 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725894 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-run-systemd\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd"
Apr 17 11:30:21.726734 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725940 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/a32ab52e-e11f-46e9-9714-4ecfa0c87830-etc-modprobe-d\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk"
Apr 17 11:30:21.726734 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725970 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-cnibin\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.726734 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.725998 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/989ffc44-f4df-40d5-916a-161b05378f4e-iptables-alerter-script\") pod \"iptables-alerter-v2svx\" (UID: \"989ffc44-f4df-40d5-916a-161b05378f4e\") " pod="openshift-network-operator/iptables-alerter-v2svx"
Apr 17 11:30:21.726734 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.726016 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-host-var-lib-cni-multus\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.726734 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.726034 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-multus-conf-dir\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.726734 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.726059 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/989ffc44-f4df-40d5-916a-161b05378f4e-host-slash\") pod \"iptables-alerter-v2svx\" (UID: \"989ffc44-f4df-40d5-916a-161b05378f4e\") " pod="openshift-network-operator/iptables-alerter-v2svx"
Apr 17 11:30:21.726734 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.726085 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f5tcw\" (UniqueName: \"kubernetes.io/projected/989ffc44-f4df-40d5-916a-161b05378f4e-kube-api-access-f5tcw\") pod \"iptables-alerter-v2svx\" (UID: \"989ffc44-f4df-40d5-916a-161b05378f4e\") " pod="openshift-network-operator/iptables-alerter-v2svx"
Apr 17 11:30:21.726734 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.726121 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a32ab52e-e11f-46e9-9714-4ecfa0c87830-sys\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk"
Apr 17 11:30:21.726734 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.726145 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-system-cni-dir\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.726734 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.726155 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/989ffc44-f4df-40d5-916a-161b05378f4e-host-slash\") pod \"iptables-alerter-v2svx\" (UID: \"989ffc44-f4df-40d5-916a-161b05378f4e\") " pod="openshift-network-operator/iptables-alerter-v2svx"
Apr 17 11:30:21.726734 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.726232 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a32ab52e-e11f-46e9-9714-4ecfa0c87830-run\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk"
Apr 17 11:30:21.726734 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.726274 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/41014c60-54e4-48f5-83f8-487c7f64058e-multus-daemon-config\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.726734 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.726303 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc5zr\" (UniqueName: \"kubernetes.io/projected/7b1f3e8e-0735-4b17-9e76-c70b964db9c1-kube-api-access-nc5zr\") pod \"network-metrics-daemon-tnlq8\" (UID: \"7b1f3e8e-0735-4b17-9e76-c70b964db9c1\") " pod="openshift-multus/network-metrics-daemon-tnlq8"
Apr 17 11:30:21.727500 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.726327 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a32ab52e-e11f-46e9-9714-4ecfa0c87830-lib-modules\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk"
Apr 17 11:30:21.727500 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.726400 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/ea7f56ab-276e-4b70-8003-11db06a0b72b-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-fpmx5\" (UID: \"ea7f56ab-276e-4b70-8003-11db06a0b72b\") " pod="openshift-multus/multus-additional-cni-plugins-fpmx5"
Apr 17 11:30:21.727500 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.726532 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/989ffc44-f4df-40d5-916a-161b05378f4e-iptables-alerter-script\") pod \"iptables-alerter-v2svx\" (UID: \"989ffc44-f4df-40d5-916a-161b05378f4e\") " pod="openshift-network-operator/iptables-alerter-v2svx"
Apr 17 11:30:21.727500 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.726577 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sgpjs\" (UniqueName: \"kubernetes.io/projected/ea7f56ab-276e-4b70-8003-11db06a0b72b-kube-api-access-sgpjs\") pod \"multus-additional-cni-plugins-fpmx5\" (UID: \"ea7f56ab-276e-4b70-8003-11db06a0b72b\") " pod="openshift-multus/multus-additional-cni-plugins-fpmx5"
Apr 17 11:30:21.727500 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.726884 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/ea7f56ab-276e-4b70-8003-11db06a0b72b-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-fpmx5\" (UID: \"ea7f56ab-276e-4b70-8003-11db06a0b72b\") " pod="openshift-multus/multus-additional-cni-plugins-fpmx5"
Apr 17 11:30:21.727500 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.726835 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-ovnkube-config\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd"
Apr 17 11:30:21.727500 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.727362 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-ovn-node-metrics-cert\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd"
Apr 17 11:30:21.727500 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.727396 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-ovnkube-script-lib\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd"
Apr 17 11:30:21.727500 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.727445 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvnbt\" (UniqueName: \"kubernetes.io/projected/96b4954a-76e8-4a06-9917-5454d450896d-kube-api-access-qvnbt\") pod \"node-ca-xq6rz\" (UID: \"96b4954a-76e8-4a06-9917-5454d450896d\") " pod="openshift-image-registry/node-ca-xq6rz"
Apr 17 11:30:21.727500 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.727475 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/d6d2dca5-2ae8-41be-9865-73022a8c7601-sys-fs\") pod \"aws-ebs-csi-driver-node-rlnwv\" (UID: \"d6d2dca5-2ae8-41be-9865-73022a8c7601\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-rlnwv"
Apr 17 11:30:21.727500 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.727502 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/a32ab52e-e11f-46e9-9714-4ecfa0c87830-etc-sysconfig\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk"
Apr 17 11:30:21.727990 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.727534 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-etc-openvswitch\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd"
Apr 17 11:30:21.727990 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.727565 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-host-run-ovn-kubernetes\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd"
Apr 17 11:30:21.727990 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.727576 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-etc-openvswitch\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd"
Apr 17 11:30:21.727990 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.727601 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6pbkz\" (UniqueName: \"kubernetes.io/projected/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-kube-api-access-6pbkz\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd"
Apr 17 11:30:21.727990 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.727626 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-host-run-ovn-kubernetes\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd"
Apr 17 11:30:21.727990 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.727630 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f79e4e5d-fbfb-429a-aa74-4c1d5725072a-tmp-dir\") pod \"node-resolver-9zpcp\" (UID: \"f79e4e5d-fbfb-429a-aa74-4c1d5725072a\") " pod="openshift-dns/node-resolver-9zpcp"
Apr 17 11:30:21.727990 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.727662 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d6d2dca5-2ae8-41be-9865-73022a8c7601-registration-dir\")
pod \"aws-ebs-csi-driver-node-rlnwv\" (UID: \"d6d2dca5-2ae8-41be-9865-73022a8c7601\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-rlnwv" Apr 17 11:30:21.727990 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.727710 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-env-overrides\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" Apr 17 11:30:21.727990 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.727736 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/d6d2dca5-2ae8-41be-9865-73022a8c7601-etc-selinux\") pod \"aws-ebs-csi-driver-node-rlnwv\" (UID: \"d6d2dca5-2ae8-41be-9865-73022a8c7601\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-rlnwv" Apr 17 11:30:21.727990 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.727759 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/96b4954a-76e8-4a06-9917-5454d450896d-host\") pod \"node-ca-xq6rz\" (UID: \"96b4954a-76e8-4a06-9917-5454d450896d\") " pod="openshift-image-registry/node-ca-xq6rz" Apr 17 11:30:21.727990 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.727782 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-host-var-lib-cni-bin\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn" Apr 17 11:30:21.727990 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.727791 2567 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Apr 17 11:30:21.727990 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.727807 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ea7f56ab-276e-4b70-8003-11db06a0b72b-system-cni-dir\") pod \"multus-additional-cni-plugins-fpmx5\" (UID: \"ea7f56ab-276e-4b70-8003-11db06a0b72b\") " pod="openshift-multus/multus-additional-cni-plugins-fpmx5" Apr 17 11:30:21.727990 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.727831 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-run-ovn\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" Apr 17 11:30:21.727990 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.727856 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps52r\" (UniqueName: \"kubernetes.io/projected/d6d2dca5-2ae8-41be-9865-73022a8c7601-kube-api-access-ps52r\") pod \"aws-ebs-csi-driver-node-rlnwv\" (UID: \"d6d2dca5-2ae8-41be-9865-73022a8c7601\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-rlnwv" Apr 17 11:30:21.727990 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.727904 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/a32ab52e-e11f-46e9-9714-4ecfa0c87830-etc-sysctl-conf\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk" Apr 17 11:30:21.727990 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.727930 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a32ab52e-e11f-46e9-9714-4ecfa0c87830-var-lib-kubelet\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk" Apr 17 11:30:21.728650 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.727956 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-hostroot\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn" Apr 17 11:30:21.728650 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.727980 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f79e4e5d-fbfb-429a-aa74-4c1d5725072a-tmp-dir\") pod \"node-resolver-9zpcp\" (UID: \"f79e4e5d-fbfb-429a-aa74-4c1d5725072a\") " pod="openshift-dns/node-resolver-9zpcp" Apr 17 11:30:21.728650 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.727992 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ea7f56ab-276e-4b70-8003-11db06a0b72b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-fpmx5\" (UID: \"ea7f56ab-276e-4b70-8003-11db06a0b72b\") " pod="openshift-multus/multus-additional-cni-plugins-fpmx5" Apr 17 11:30:21.728650 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.727997 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-ovnkube-config\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" Apr 17 11:30:21.728650 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.728020 2567 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-host-run-k8s-cni-cncf-io\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn" Apr 17 11:30:21.728650 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.728044 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-host-var-lib-kubelet\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn" Apr 17 11:30:21.728650 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.728036 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-ovnkube-script-lib\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" Apr 17 11:30:21.728650 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.728071 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-etc-kubernetes\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn" Apr 17 11:30:21.728650 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.728077 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ea7f56ab-276e-4b70-8003-11db06a0b72b-system-cni-dir\") pod \"multus-additional-cni-plugins-fpmx5\" (UID: \"ea7f56ab-276e-4b70-8003-11db06a0b72b\") " pod="openshift-multus/multus-additional-cni-plugins-fpmx5" Apr 17 11:30:21.728650 ip-10-0-140-245 
kubenswrapper[2567]: I0417 11:30:21.728085 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-run-ovn\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" Apr 17 11:30:21.728650 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.728103 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75krk\" (UniqueName: \"kubernetes.io/projected/41014c60-54e4-48f5-83f8-487c7f64058e-kube-api-access-75krk\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn" Apr 17 11:30:21.729156 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.728823 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ea7f56ab-276e-4b70-8003-11db06a0b72b-os-release\") pod \"multus-additional-cni-plugins-fpmx5\" (UID: \"ea7f56ab-276e-4b70-8003-11db06a0b72b\") " pod="openshift-multus/multus-additional-cni-plugins-fpmx5" Apr 17 11:30:21.729156 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.728870 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" Apr 17 11:30:21.729156 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.728904 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b1f3e8e-0735-4b17-9e76-c70b964db9c1-metrics-certs\") pod \"network-metrics-daemon-tnlq8\" (UID: 
\"7b1f3e8e-0735-4b17-9e76-c70b964db9c1\") " pod="openshift-multus/network-metrics-daemon-tnlq8" Apr 17 11:30:21.729156 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.728938 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a32ab52e-e11f-46e9-9714-4ecfa0c87830-etc-kubernetes\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk" Apr 17 11:30:21.729156 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.728970 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-multus-socket-dir-parent\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn" Apr 17 11:30:21.729156 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.729007 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g4hjp\" (UniqueName: \"kubernetes.io/projected/a1d31d48-4078-464b-b36c-28075b5f885b-kube-api-access-g4hjp\") pod \"network-check-target-8d6ts\" (UID: \"a1d31d48-4078-464b-b36c-28075b5f885b\") " pod="openshift-network-diagnostics/network-check-target-8d6ts" Apr 17 11:30:21.729156 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.729050 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/d6d2dca5-2ae8-41be-9865-73022a8c7601-device-dir\") pod \"aws-ebs-csi-driver-node-rlnwv\" (UID: \"d6d2dca5-2ae8-41be-9865-73022a8c7601\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-rlnwv" Apr 17 11:30:21.729156 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.729077 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/a32ab52e-e11f-46e9-9714-4ecfa0c87830-etc-sysctl-d\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk" Apr 17 11:30:21.729156 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.729137 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-env-overrides\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" Apr 17 11:30:21.729490 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.729178 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" Apr 17 11:30:21.729490 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.729306 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/96b4954a-76e8-4a06-9917-5454d450896d-serviceca\") pod \"node-ca-xq6rz\" (UID: \"96b4954a-76e8-4a06-9917-5454d450896d\") " pod="openshift-image-registry/node-ca-xq6rz" Apr 17 11:30:21.729490 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.729464 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ea7f56ab-276e-4b70-8003-11db06a0b72b-os-release\") pod \"multus-additional-cni-plugins-fpmx5\" (UID: \"ea7f56ab-276e-4b70-8003-11db06a0b72b\") " pod="openshift-multus/multus-additional-cni-plugins-fpmx5" Apr 17 11:30:21.731968 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.730326 2567 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ea7f56ab-276e-4b70-8003-11db06a0b72b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-fpmx5\" (UID: \"ea7f56ab-276e-4b70-8003-11db06a0b72b\") " pod="openshift-multus/multus-additional-cni-plugins-fpmx5" Apr 17 11:30:21.731968 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.731138 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-ovn-node-metrics-cert\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" Apr 17 11:30:21.734054 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:21.734000 2567 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 17 11:30:21.734054 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:21.734031 2567 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 17 11:30:21.734054 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:21.734046 2567 projected.go:194] Error preparing data for projected volume kube-api-access-g4hjp for pod openshift-network-diagnostics/network-check-target-8d6ts: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 11:30:21.734284 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:21.734109 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a1d31d48-4078-464b-b36c-28075b5f885b-kube-api-access-g4hjp podName:a1d31d48-4078-464b-b36c-28075b5f885b nodeName:}" failed. 
No retries permitted until 2026-04-17 11:30:22.234089617 +0000 UTC m=+3.072327367 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-g4hjp" (UniqueName: "kubernetes.io/projected/a1d31d48-4078-464b-b36c-28075b5f885b-kube-api-access-g4hjp") pod "network-check-target-8d6ts" (UID: "a1d31d48-4078-464b-b36c-28075b5f885b") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 11:30:21.736416 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.736392 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdj8s\" (UniqueName: \"kubernetes.io/projected/f79e4e5d-fbfb-429a-aa74-4c1d5725072a-kube-api-access-qdj8s\") pod \"node-resolver-9zpcp\" (UID: \"f79e4e5d-fbfb-429a-aa74-4c1d5725072a\") " pod="openshift-dns/node-resolver-9zpcp" Apr 17 11:30:21.736502 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.736484 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgpjs\" (UniqueName: \"kubernetes.io/projected/ea7f56ab-276e-4b70-8003-11db06a0b72b-kube-api-access-sgpjs\") pod \"multus-additional-cni-plugins-fpmx5\" (UID: \"ea7f56ab-276e-4b70-8003-11db06a0b72b\") " pod="openshift-multus/multus-additional-cni-plugins-fpmx5" Apr 17 11:30:21.736571 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.736550 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pbkz\" (UniqueName: \"kubernetes.io/projected/efceb2ce-9379-47c0-b8c1-22f8ad408e7c-kube-api-access-6pbkz\") pod \"ovnkube-node-vjfgd\" (UID: \"efceb2ce-9379-47c0-b8c1-22f8ad408e7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" Apr 17 11:30:21.736721 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.736701 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5tcw\" (UniqueName: 
\"kubernetes.io/projected/989ffc44-f4df-40d5-916a-161b05378f4e-kube-api-access-f5tcw\") pod \"iptables-alerter-v2svx\" (UID: \"989ffc44-f4df-40d5-916a-161b05378f4e\") " pod="openshift-network-operator/iptables-alerter-v2svx" Apr 17 11:30:21.830350 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.830306 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nc5zr\" (UniqueName: \"kubernetes.io/projected/7b1f3e8e-0735-4b17-9e76-c70b964db9c1-kube-api-access-nc5zr\") pod \"network-metrics-daemon-tnlq8\" (UID: \"7b1f3e8e-0735-4b17-9e76-c70b964db9c1\") " pod="openshift-multus/network-metrics-daemon-tnlq8" Apr 17 11:30:21.830350 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.830338 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a32ab52e-e11f-46e9-9714-4ecfa0c87830-lib-modules\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk" Apr 17 11:30:21.830350 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.830357 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qvnbt\" (UniqueName: \"kubernetes.io/projected/96b4954a-76e8-4a06-9917-5454d450896d-kube-api-access-qvnbt\") pod \"node-ca-xq6rz\" (UID: \"96b4954a-76e8-4a06-9917-5454d450896d\") " pod="openshift-image-registry/node-ca-xq6rz" Apr 17 11:30:21.830625 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.830373 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/d6d2dca5-2ae8-41be-9865-73022a8c7601-sys-fs\") pod \"aws-ebs-csi-driver-node-rlnwv\" (UID: \"d6d2dca5-2ae8-41be-9865-73022a8c7601\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-rlnwv" Apr 17 11:30:21.830625 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.830509 2567 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a32ab52e-e11f-46e9-9714-4ecfa0c87830-lib-modules\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk" Apr 17 11:30:21.830625 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.830511 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/d6d2dca5-2ae8-41be-9865-73022a8c7601-sys-fs\") pod \"aws-ebs-csi-driver-node-rlnwv\" (UID: \"d6d2dca5-2ae8-41be-9865-73022a8c7601\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-rlnwv" Apr 17 11:30:21.830625 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.830540 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/a32ab52e-e11f-46e9-9714-4ecfa0c87830-etc-sysconfig\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk" Apr 17 11:30:21.830625 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.830574 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d6d2dca5-2ae8-41be-9865-73022a8c7601-registration-dir\") pod \"aws-ebs-csi-driver-node-rlnwv\" (UID: \"d6d2dca5-2ae8-41be-9865-73022a8c7601\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-rlnwv" Apr 17 11:30:21.830625 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.830601 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/d6d2dca5-2ae8-41be-9865-73022a8c7601-etc-selinux\") pod \"aws-ebs-csi-driver-node-rlnwv\" (UID: \"d6d2dca5-2ae8-41be-9865-73022a8c7601\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-rlnwv" Apr 17 11:30:21.830625 ip-10-0-140-245 kubenswrapper[2567]: 
I0417 11:30:21.830625 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/96b4954a-76e8-4a06-9917-5454d450896d-host\") pod \"node-ca-xq6rz\" (UID: \"96b4954a-76e8-4a06-9917-5454d450896d\") " pod="openshift-image-registry/node-ca-xq6rz" Apr 17 11:30:21.830968 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.830631 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/a32ab52e-e11f-46e9-9714-4ecfa0c87830-etc-sysconfig\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk" Apr 17 11:30:21.830968 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.830649 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-host-var-lib-cni-bin\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn" Apr 17 11:30:21.830968 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.830656 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d6d2dca5-2ae8-41be-9865-73022a8c7601-registration-dir\") pod \"aws-ebs-csi-driver-node-rlnwv\" (UID: \"d6d2dca5-2ae8-41be-9865-73022a8c7601\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-rlnwv" Apr 17 11:30:21.830968 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.830692 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ps52r\" (UniqueName: \"kubernetes.io/projected/d6d2dca5-2ae8-41be-9865-73022a8c7601-kube-api-access-ps52r\") pod \"aws-ebs-csi-driver-node-rlnwv\" (UID: \"d6d2dca5-2ae8-41be-9865-73022a8c7601\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-rlnwv" Apr 17 11:30:21.830968 
ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.830703 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/d6d2dca5-2ae8-41be-9865-73022a8c7601-etc-selinux\") pod \"aws-ebs-csi-driver-node-rlnwv\" (UID: \"d6d2dca5-2ae8-41be-9865-73022a8c7601\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-rlnwv" Apr 17 11:30:21.830968 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.830718 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/a32ab52e-e11f-46e9-9714-4ecfa0c87830-etc-sysctl-conf\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk" Apr 17 11:30:21.830968 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.830731 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-host-var-lib-cni-bin\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn" Apr 17 11:30:21.830968 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.830735 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/96b4954a-76e8-4a06-9917-5454d450896d-host\") pod \"node-ca-xq6rz\" (UID: \"96b4954a-76e8-4a06-9917-5454d450896d\") " pod="openshift-image-registry/node-ca-xq6rz" Apr 17 11:30:21.830968 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.830744 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a32ab52e-e11f-46e9-9714-4ecfa0c87830-var-lib-kubelet\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk" Apr 17 11:30:21.830968 ip-10-0-140-245 
kubenswrapper[2567]: I0417 11:30:21.830766 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-hostroot\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn" Apr 17 11:30:21.830968 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.830788 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-host-run-k8s-cni-cncf-io\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn" Apr 17 11:30:21.830968 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.830811 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-hostroot\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn" Apr 17 11:30:21.830968 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.830804 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a32ab52e-e11f-46e9-9714-4ecfa0c87830-var-lib-kubelet\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk" Apr 17 11:30:21.830968 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.830814 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-host-var-lib-kubelet\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn" Apr 17 11:30:21.830968 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.830849 2567 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-host-var-lib-kubelet\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.830968 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.830855 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-etc-kubernetes\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.830968 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.830861 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-host-run-k8s-cni-cncf-io\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.830968 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.830898 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-etc-kubernetes\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.831733 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.830890 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-75krk\" (UniqueName: \"kubernetes.io/projected/41014c60-54e4-48f5-83f8-487c7f64058e-kube-api-access-75krk\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.831733 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.830886 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/a32ab52e-e11f-46e9-9714-4ecfa0c87830-etc-sysctl-conf\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk"
Apr 17 11:30:21.831733 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.830941 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b1f3e8e-0735-4b17-9e76-c70b964db9c1-metrics-certs\") pod \"network-metrics-daemon-tnlq8\" (UID: \"7b1f3e8e-0735-4b17-9e76-c70b964db9c1\") " pod="openshift-multus/network-metrics-daemon-tnlq8"
Apr 17 11:30:21.831733 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.830963 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a32ab52e-e11f-46e9-9714-4ecfa0c87830-etc-kubernetes\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk"
Apr 17 11:30:21.831733 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.830987 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-multus-socket-dir-parent\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.831733 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831026 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/d6d2dca5-2ae8-41be-9865-73022a8c7601-device-dir\") pod \"aws-ebs-csi-driver-node-rlnwv\" (UID: \"d6d2dca5-2ae8-41be-9865-73022a8c7601\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-rlnwv"
Apr 17 11:30:21.831733 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:21.831035 2567 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 17 11:30:21.831733 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831051 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/a32ab52e-e11f-46e9-9714-4ecfa0c87830-etc-sysctl-d\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk"
Apr 17 11:30:21.831733 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831051 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a32ab52e-e11f-46e9-9714-4ecfa0c87830-etc-kubernetes\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk"
Apr 17 11:30:21.831733 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831074 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/96b4954a-76e8-4a06-9917-5454d450896d-serviceca\") pod \"node-ca-xq6rz\" (UID: \"96b4954a-76e8-4a06-9917-5454d450896d\") " pod="openshift-image-registry/node-ca-xq6rz"
Apr 17 11:30:21.831733 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:21.831100 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b1f3e8e-0735-4b17-9e76-c70b964db9c1-metrics-certs podName:7b1f3e8e-0735-4b17-9e76-c70b964db9c1 nodeName:}" failed. No retries permitted until 2026-04-17 11:30:22.331081663 +0000 UTC m=+3.169319416 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7b1f3e8e-0735-4b17-9e76-c70b964db9c1-metrics-certs") pod "network-metrics-daemon-tnlq8" (UID: "7b1f3e8e-0735-4b17-9e76-c70b964db9c1") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 17 11:30:21.831733 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831103 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-multus-socket-dir-parent\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.831733 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831108 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/d6d2dca5-2ae8-41be-9865-73022a8c7601-device-dir\") pod \"aws-ebs-csi-driver-node-rlnwv\" (UID: \"d6d2dca5-2ae8-41be-9865-73022a8c7601\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-rlnwv"
Apr 17 11:30:21.831733 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831131 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/710176df-941f-45f0-baed-0e9b9115157d-konnectivity-ca\") pod \"konnectivity-agent-jkr2r\" (UID: \"710176df-941f-45f0-baed-0e9b9115157d\") " pod="kube-system/konnectivity-agent-jkr2r"
Apr 17 11:30:21.831733 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831150 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-os-release\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.831733 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831172 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d6d2dca5-2ae8-41be-9865-73022a8c7601-socket-dir\") pod \"aws-ebs-csi-driver-node-rlnwv\" (UID: \"d6d2dca5-2ae8-41be-9865-73022a8c7601\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-rlnwv"
Apr 17 11:30:21.831733 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831181 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/a32ab52e-e11f-46e9-9714-4ecfa0c87830-etc-sysctl-d\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk"
Apr 17 11:30:21.832464 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831189 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/a32ab52e-e11f-46e9-9714-4ecfa0c87830-etc-systemd\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk"
Apr 17 11:30:21.832464 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831218 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hbkwb\" (UniqueName: \"kubernetes.io/projected/a32ab52e-e11f-46e9-9714-4ecfa0c87830-kube-api-access-hbkwb\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk"
Apr 17 11:30:21.832464 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831226 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/a32ab52e-e11f-46e9-9714-4ecfa0c87830-etc-systemd\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk"
Apr 17 11:30:21.832464 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831245 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/710176df-941f-45f0-baed-0e9b9115157d-agent-certs\") pod \"konnectivity-agent-jkr2r\" (UID: \"710176df-941f-45f0-baed-0e9b9115157d\") " pod="kube-system/konnectivity-agent-jkr2r"
Apr 17 11:30:21.832464 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831249 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-os-release\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.832464 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831273 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/41014c60-54e4-48f5-83f8-487c7f64058e-cni-binary-copy\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.832464 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831300 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-host-run-multus-certs\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.832464 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831329 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-multus-cni-dir\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.832464 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831354 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-host-run-netns\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.832464 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831379 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a32ab52e-e11f-46e9-9714-4ecfa0c87830-host\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk"
Apr 17 11:30:21.832464 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831401 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/a32ab52e-e11f-46e9-9714-4ecfa0c87830-etc-tuned\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk"
Apr 17 11:30:21.832464 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831423 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a32ab52e-e11f-46e9-9714-4ecfa0c87830-tmp\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk"
Apr 17 11:30:21.832464 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831450 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6d2dca5-2ae8-41be-9865-73022a8c7601-kubelet-dir\") pod \"aws-ebs-csi-driver-node-rlnwv\" (UID: \"d6d2dca5-2ae8-41be-9865-73022a8c7601\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-rlnwv"
Apr 17 11:30:21.832464 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831474 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/a32ab52e-e11f-46e9-9714-4ecfa0c87830-etc-modprobe-d\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk"
Apr 17 11:30:21.832464 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831496 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-cnibin\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.832464 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831523 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-host-var-lib-cni-multus\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.832464 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831546 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-multus-conf-dir\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.832464 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831558 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/96b4954a-76e8-4a06-9917-5454d450896d-serviceca\") pod \"node-ca-xq6rz\" (UID: \"96b4954a-76e8-4a06-9917-5454d450896d\") " pod="openshift-image-registry/node-ca-xq6rz"
Apr 17 11:30:21.833077 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831572 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a32ab52e-e11f-46e9-9714-4ecfa0c87830-sys\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk"
Apr 17 11:30:21.833077 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831618 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-host-run-multus-certs\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.833077 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831676 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-multus-cni-dir\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.833077 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831328 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d6d2dca5-2ae8-41be-9865-73022a8c7601-socket-dir\") pod \"aws-ebs-csi-driver-node-rlnwv\" (UID: \"d6d2dca5-2ae8-41be-9865-73022a8c7601\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-rlnwv"
Apr 17 11:30:21.833077 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831749 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-host-run-netns\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.833077 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831760 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/710176df-941f-45f0-baed-0e9b9115157d-konnectivity-ca\") pod \"konnectivity-agent-jkr2r\" (UID: \"710176df-941f-45f0-baed-0e9b9115157d\") " pod="kube-system/konnectivity-agent-jkr2r"
Apr 17 11:30:21.833077 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831796 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a32ab52e-e11f-46e9-9714-4ecfa0c87830-host\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk"
Apr 17 11:30:21.833077 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831820 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-cnibin\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.833077 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831836 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/41014c60-54e4-48f5-83f8-487c7f64058e-cni-binary-copy\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.833077 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831887 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-multus-conf-dir\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.833077 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831938 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-host-var-lib-cni-multus\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.833077 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.831991 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a32ab52e-e11f-46e9-9714-4ecfa0c87830-sys\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk"
Apr 17 11:30:21.833077 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.832050 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6d2dca5-2ae8-41be-9865-73022a8c7601-kubelet-dir\") pod \"aws-ebs-csi-driver-node-rlnwv\" (UID: \"d6d2dca5-2ae8-41be-9865-73022a8c7601\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-rlnwv"
Apr 17 11:30:21.833077 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.832087 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-system-cni-dir\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.833077 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.832114 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a32ab52e-e11f-46e9-9714-4ecfa0c87830-run\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk"
Apr 17 11:30:21.833077 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.832138 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/41014c60-54e4-48f5-83f8-487c7f64058e-multus-daemon-config\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.833077 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.832169 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/a32ab52e-e11f-46e9-9714-4ecfa0c87830-etc-modprobe-d\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk"
Apr 17 11:30:21.833077 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.832244 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/41014c60-54e4-48f5-83f8-487c7f64058e-system-cni-dir\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.833830 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.832289 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a32ab52e-e11f-46e9-9714-4ecfa0c87830-run\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk"
Apr 17 11:30:21.833830 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.832590 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/41014c60-54e4-48f5-83f8-487c7f64058e-multus-daemon-config\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.833963 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.833942 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/a32ab52e-e11f-46e9-9714-4ecfa0c87830-etc-tuned\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk"
Apr 17 11:30:21.834096 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.834074 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a32ab52e-e11f-46e9-9714-4ecfa0c87830-tmp\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk"
Apr 17 11:30:21.834245 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.834223 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/710176df-941f-45f0-baed-0e9b9115157d-agent-certs\") pod \"konnectivity-agent-jkr2r\" (UID: \"710176df-941f-45f0-baed-0e9b9115157d\") " pod="kube-system/konnectivity-agent-jkr2r"
Apr 17 11:30:21.840410 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.840325 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-75krk\" (UniqueName: \"kubernetes.io/projected/41014c60-54e4-48f5-83f8-487c7f64058e-kube-api-access-75krk\") pod \"multus-g2pjn\" (UID: \"41014c60-54e4-48f5-83f8-487c7f64058e\") " pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.840410 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.840327 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps52r\" (UniqueName: \"kubernetes.io/projected/d6d2dca5-2ae8-41be-9865-73022a8c7601-kube-api-access-ps52r\") pod \"aws-ebs-csi-driver-node-rlnwv\" (UID: \"d6d2dca5-2ae8-41be-9865-73022a8c7601\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-rlnwv"
Apr 17 11:30:21.840576 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.840542 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvnbt\" (UniqueName: \"kubernetes.io/projected/96b4954a-76e8-4a06-9917-5454d450896d-kube-api-access-qvnbt\") pod \"node-ca-xq6rz\" (UID: \"96b4954a-76e8-4a06-9917-5454d450896d\") " pod="openshift-image-registry/node-ca-xq6rz"
Apr 17 11:30:21.841003 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.840984 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc5zr\" (UniqueName: \"kubernetes.io/projected/7b1f3e8e-0735-4b17-9e76-c70b964db9c1-kube-api-access-nc5zr\") pod \"network-metrics-daemon-tnlq8\" (UID: \"7b1f3e8e-0735-4b17-9e76-c70b964db9c1\") " pod="openshift-multus/network-metrics-daemon-tnlq8"
Apr 17 11:30:21.841003 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.840990 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbkwb\" (UniqueName: \"kubernetes.io/projected/a32ab52e-e11f-46e9-9714-4ecfa0c87830-kube-api-access-hbkwb\") pod \"tuned-244rk\" (UID: \"a32ab52e-e11f-46e9-9714-4ecfa0c87830\") " pod="openshift-cluster-node-tuning-operator/tuned-244rk"
Apr 17 11:30:21.927154 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.927126 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd"
Apr 17 11:30:21.935894 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.935873 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-9zpcp"
Apr 17 11:30:21.944406 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.944385 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-fpmx5"
Apr 17 11:30:21.951897 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.951879 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-v2svx"
Apr 17 11:30:21.960366 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.960349 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-xq6rz"
Apr 17 11:30:21.968830 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.968815 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-g2pjn"
Apr 17 11:30:21.975435 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.975418 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-rlnwv"
Apr 17 11:30:21.982969 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.982950 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-244rk"
Apr 17 11:30:21.988453 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:21.988435 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/konnectivity-agent-jkr2r"
Apr 17 11:30:22.019817 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:22.019788 2567 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Apr 17 11:30:22.234755 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:22.234727 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g4hjp\" (UniqueName: \"kubernetes.io/projected/a1d31d48-4078-464b-b36c-28075b5f885b-kube-api-access-g4hjp\") pod \"network-check-target-8d6ts\" (UID: \"a1d31d48-4078-464b-b36c-28075b5f885b\") " pod="openshift-network-diagnostics/network-check-target-8d6ts"
Apr 17 11:30:22.234945 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:22.234899 2567 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 17 11:30:22.234945 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:22.234917 2567 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 17 11:30:22.234945 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:22.234931 2567 projected.go:194] Error preparing data for projected volume kube-api-access-g4hjp for pod openshift-network-diagnostics/network-check-target-8d6ts: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 17 11:30:22.235088 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:22.234996 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a1d31d48-4078-464b-b36c-28075b5f885b-kube-api-access-g4hjp podName:a1d31d48-4078-464b-b36c-28075b5f885b nodeName:}" failed. No retries permitted until 2026-04-17 11:30:23.234974994 +0000 UTC m=+4.073212730 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-g4hjp" (UniqueName: "kubernetes.io/projected/a1d31d48-4078-464b-b36c-28075b5f885b-kube-api-access-g4hjp") pod "network-check-target-8d6ts" (UID: "a1d31d48-4078-464b-b36c-28075b5f885b") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 17 11:30:22.335811 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:22.335776 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b1f3e8e-0735-4b17-9e76-c70b964db9c1-metrics-certs\") pod \"network-metrics-daemon-tnlq8\" (UID: \"7b1f3e8e-0735-4b17-9e76-c70b964db9c1\") " pod="openshift-multus/network-metrics-daemon-tnlq8"
Apr 17 11:30:22.336000 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:22.335920 2567 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 17 11:30:22.336000 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:22.335985 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b1f3e8e-0735-4b17-9e76-c70b964db9c1-metrics-certs podName:7b1f3e8e-0735-4b17-9e76-c70b964db9c1 nodeName:}" failed. No retries permitted until 2026-04-17 11:30:23.335967124 +0000 UTC m=+4.174204863 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7b1f3e8e-0735-4b17-9e76-c70b964db9c1-metrics-certs") pod "network-metrics-daemon-tnlq8" (UID: "7b1f3e8e-0735-4b17-9e76-c70b964db9c1") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 17 11:30:22.471060 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:22.471030 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96b4954a_76e8_4a06_9917_5454d450896d.slice/crio-91749f48821f69f440c815fd2bbb3f7946cc8d54801169d3ca2d59c7232ac09e WatchSource:0}: Error finding container 91749f48821f69f440c815fd2bbb3f7946cc8d54801169d3ca2d59c7232ac09e: Status 404 returned error can't find the container with id 91749f48821f69f440c815fd2bbb3f7946cc8d54801169d3ca2d59c7232ac09e
Apr 17 11:30:22.472736 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:22.472706 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod989ffc44_f4df_40d5_916a_161b05378f4e.slice/crio-9cb3590ac5f84a1995d6bffeb2cac3382c13a93046fe4c205f8f6b93a933287f WatchSource:0}: Error finding container 9cb3590ac5f84a1995d6bffeb2cac3382c13a93046fe4c205f8f6b93a933287f: Status 404 returned error can't find the container with id 9cb3590ac5f84a1995d6bffeb2cac3382c13a93046fe4c205f8f6b93a933287f
Apr 17 11:30:22.473635 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:22.473562 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea7f56ab_276e_4b70_8003_11db06a0b72b.slice/crio-e62b1d6b75fbae7a4d4d0f2259bc121b7b2f87294b18e7c95164976e8b776406 WatchSource:0}: Error finding container e62b1d6b75fbae7a4d4d0f2259bc121b7b2f87294b18e7c95164976e8b776406: Status 404 returned error can't find the container with id e62b1d6b75fbae7a4d4d0f2259bc121b7b2f87294b18e7c95164976e8b776406
Apr 17 11:30:22.476833 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:22.476806 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod710176df_941f_45f0_baed_0e9b9115157d.slice/crio-75e2345a27bcca76f1f8417617c4a54e1b665c6064a443e4c5f7b77a104d53e1 WatchSource:0}: Error finding container 75e2345a27bcca76f1f8417617c4a54e1b665c6064a443e4c5f7b77a104d53e1: Status 404 returned error can't find the container with id 75e2345a27bcca76f1f8417617c4a54e1b665c6064a443e4c5f7b77a104d53e1
Apr 17 11:30:22.478393 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:22.478316 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda32ab52e_e11f_46e9_9714_4ecfa0c87830.slice/crio-7da7f1a358cef724b59916657593130fe5455c372f57efa074223b5e121683d5 WatchSource:0}: Error finding container 7da7f1a358cef724b59916657593130fe5455c372f57efa074223b5e121683d5: Status 404 returned error can't find the container with id 7da7f1a358cef724b59916657593130fe5455c372f57efa074223b5e121683d5
Apr 17 11:30:22.479129 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:22.478959 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefceb2ce_9379_47c0_b8c1_22f8ad408e7c.slice/crio-0fef17eecd922da4d8da713c0a2e256bc1d735f52eff9fab5dc47fe5840dfb52 WatchSource:0}: Error finding container 0fef17eecd922da4d8da713c0a2e256bc1d735f52eff9fab5dc47fe5840dfb52: Status 404 returned error can't find the container with id 0fef17eecd922da4d8da713c0a2e256bc1d735f52eff9fab5dc47fe5840dfb52
Apr 17 11:30:22.480577 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:22.480476 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6d2dca5_2ae8_41be_9865_73022a8c7601.slice/crio-9395996a34fa72ce15f3448cf90306481e091d9482c6c13e75f5c1cee03b607d WatchSource:0}: Error finding container 9395996a34fa72ce15f3448cf90306481e091d9482c6c13e75f5c1cee03b607d: Status 404 returned error can't find the container with id 9395996a34fa72ce15f3448cf90306481e091d9482c6c13e75f5c1cee03b607d
Apr 17 11:30:22.481908 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:22.481769 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41014c60_54e4_48f5_83f8_487c7f64058e.slice/crio-7146a55d99be5ac5a70a4c0b696f67d9cdcb5a2a779f2eae35df097cc9639c63 WatchSource:0}: Error finding container 7146a55d99be5ac5a70a4c0b696f67d9cdcb5a2a779f2eae35df097cc9639c63: Status 404 returned error can't find the container with id 7146a55d99be5ac5a70a4c0b696f67d9cdcb5a2a779f2eae35df097cc9639c63
Apr 17 11:30:22.482490 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:22.482400 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf79e4e5d_fbfb_429a_aa74_4c1d5725072a.slice/crio-5e99c1d4e6c26b1029cf6da6a520585a2f3f28566ebd9e6de847c9a8585be63d WatchSource:0}: Error finding container 5e99c1d4e6c26b1029cf6da6a520585a2f3f28566ebd9e6de847c9a8585be63d: Status 404 returned error can't find the container with id 5e99c1d4e6c26b1029cf6da6a520585a2f3f28566ebd9e6de847c9a8585be63d
Apr 17 11:30:22.652775 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:22.652732 2567 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-16 11:25:20 +0000 UTC" deadline="2027-12-11 19:52:59.276629392 +0000 UTC"
Apr 17 11:30:22.652775 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:22.652767 2567 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="14480h22m36.623865395s"
Apr 17 11:30:22.717749 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:22.717699 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-140-245.ec2.internal" event={"ID":"b313ec3d20352e5d3b289f2ee066026b","Type":"ContainerStarted","Data":"00c6b629fcbe8587e21512e6390bbb797dd9f41403f68a943131ce5408b08a83"}
Apr 17 11:30:22.719362 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:22.719333 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2pjn" event={"ID":"41014c60-54e4-48f5-83f8-487c7f64058e","Type":"ContainerStarted","Data":"7146a55d99be5ac5a70a4c0b696f67d9cdcb5a2a779f2eae35df097cc9639c63"}
Apr 17 11:30:22.720446 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:22.720417 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-rlnwv" event={"ID":"d6d2dca5-2ae8-41be-9865-73022a8c7601","Type":"ContainerStarted","Data":"9395996a34fa72ce15f3448cf90306481e091d9482c6c13e75f5c1cee03b607d"}
Apr 17 11:30:22.721572 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:22.721551 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-244rk" event={"ID":"a32ab52e-e11f-46e9-9714-4ecfa0c87830","Type":"ContainerStarted","Data":"7da7f1a358cef724b59916657593130fe5455c372f57efa074223b5e121683d5"}
Apr 17 11:30:22.722483 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:22.722463 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-jkr2r" event={"ID":"710176df-941f-45f0-baed-0e9b9115157d","Type":"ContainerStarted","Data":"75e2345a27bcca76f1f8417617c4a54e1b665c6064a443e4c5f7b77a104d53e1"}
Apr 17 11:30:22.723424 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:22.723404 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xq6rz" event={"ID":"96b4954a-76e8-4a06-9917-5454d450896d","Type":"ContainerStarted","Data":"91749f48821f69f440c815fd2bbb3f7946cc8d54801169d3ca2d59c7232ac09e"}
Apr 17 11:30:22.724359 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:22.724331 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-9zpcp" event={"ID":"f79e4e5d-fbfb-429a-aa74-4c1d5725072a","Type":"ContainerStarted","Data":"5e99c1d4e6c26b1029cf6da6a520585a2f3f28566ebd9e6de847c9a8585be63d"}
Apr 17 11:30:22.725326 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:22.725304 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" event={"ID":"efceb2ce-9379-47c0-b8c1-22f8ad408e7c","Type":"ContainerStarted","Data":"0fef17eecd922da4d8da713c0a2e256bc1d735f52eff9fab5dc47fe5840dfb52"}
Apr 17 11:30:22.726303 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:22.726279 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fpmx5" event={"ID":"ea7f56ab-276e-4b70-8003-11db06a0b72b","Type":"ContainerStarted","Data":"e62b1d6b75fbae7a4d4d0f2259bc121b7b2f87294b18e7c95164976e8b776406"}
Apr 17 11:30:22.727156 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:22.727136 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-v2svx" event={"ID":"989ffc44-f4df-40d5-916a-161b05378f4e","Type":"ContainerStarted","Data":"9cb3590ac5f84a1995d6bffeb2cac3382c13a93046fe4c205f8f6b93a933287f"}
Apr 17 11:30:22.730823 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:22.730788 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-proxy-ip-10-0-140-245.ec2.internal" podStartSLOduration=1.730778347 podStartE2EDuration="1.730778347s" podCreationTimestamp="2026-04-17 11:30:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 
11:30:22.730233567 +0000 UTC m=+3.568471322" watchObservedRunningTime="2026-04-17 11:30:22.730778347 +0000 UTC m=+3.569016102" Apr 17 11:30:22.970448 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:22.969715 2567 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kube-system/global-pull-secret-syncer-ftdll"] Apr 17 11:30:22.972591 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:22.972537 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ftdll" Apr 17 11:30:22.972728 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:22.972617 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ftdll" podUID="e6522e95-55e6-43f4-9a0b-b0429a3a47c4" Apr 17 11:30:23.042467 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:23.042256 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/e6522e95-55e6-43f4-9a0b-b0429a3a47c4-kubelet-config\") pod \"global-pull-secret-syncer-ftdll\" (UID: \"e6522e95-55e6-43f4-9a0b-b0429a3a47c4\") " pod="kube-system/global-pull-secret-syncer-ftdll" Apr 17 11:30:23.042467 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:23.042321 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/e6522e95-55e6-43f4-9a0b-b0429a3a47c4-dbus\") pod \"global-pull-secret-syncer-ftdll\" (UID: \"e6522e95-55e6-43f4-9a0b-b0429a3a47c4\") " pod="kube-system/global-pull-secret-syncer-ftdll" Apr 17 11:30:23.042467 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:23.042384 2567 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/e6522e95-55e6-43f4-9a0b-b0429a3a47c4-original-pull-secret\") pod \"global-pull-secret-syncer-ftdll\" (UID: \"e6522e95-55e6-43f4-9a0b-b0429a3a47c4\") " pod="kube-system/global-pull-secret-syncer-ftdll" Apr 17 11:30:23.144008 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:23.143324 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/e6522e95-55e6-43f4-9a0b-b0429a3a47c4-dbus\") pod \"global-pull-secret-syncer-ftdll\" (UID: \"e6522e95-55e6-43f4-9a0b-b0429a3a47c4\") " pod="kube-system/global-pull-secret-syncer-ftdll" Apr 17 11:30:23.144008 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:23.143414 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/e6522e95-55e6-43f4-9a0b-b0429a3a47c4-original-pull-secret\") pod \"global-pull-secret-syncer-ftdll\" (UID: \"e6522e95-55e6-43f4-9a0b-b0429a3a47c4\") " pod="kube-system/global-pull-secret-syncer-ftdll" Apr 17 11:30:23.144008 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:23.143463 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/e6522e95-55e6-43f4-9a0b-b0429a3a47c4-kubelet-config\") pod \"global-pull-secret-syncer-ftdll\" (UID: \"e6522e95-55e6-43f4-9a0b-b0429a3a47c4\") " pod="kube-system/global-pull-secret-syncer-ftdll" Apr 17 11:30:23.144008 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:23.143558 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/e6522e95-55e6-43f4-9a0b-b0429a3a47c4-kubelet-config\") pod \"global-pull-secret-syncer-ftdll\" (UID: \"e6522e95-55e6-43f4-9a0b-b0429a3a47c4\") " pod="kube-system/global-pull-secret-syncer-ftdll" Apr 17 11:30:23.144008 
ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:23.143594 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/e6522e95-55e6-43f4-9a0b-b0429a3a47c4-dbus\") pod \"global-pull-secret-syncer-ftdll\" (UID: \"e6522e95-55e6-43f4-9a0b-b0429a3a47c4\") " pod="kube-system/global-pull-secret-syncer-ftdll" Apr 17 11:30:23.144008 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:23.143664 2567 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 17 11:30:23.144008 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:23.143742 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6522e95-55e6-43f4-9a0b-b0429a3a47c4-original-pull-secret podName:e6522e95-55e6-43f4-9a0b-b0429a3a47c4 nodeName:}" failed. No retries permitted until 2026-04-17 11:30:23.643724223 +0000 UTC m=+4.481961963 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/e6522e95-55e6-43f4-9a0b-b0429a3a47c4-original-pull-secret") pod "global-pull-secret-syncer-ftdll" (UID: "e6522e95-55e6-43f4-9a0b-b0429a3a47c4") : object "kube-system"/"original-pull-secret" not registered Apr 17 11:30:23.244856 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:23.244097 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g4hjp\" (UniqueName: \"kubernetes.io/projected/a1d31d48-4078-464b-b36c-28075b5f885b-kube-api-access-g4hjp\") pod \"network-check-target-8d6ts\" (UID: \"a1d31d48-4078-464b-b36c-28075b5f885b\") " pod="openshift-network-diagnostics/network-check-target-8d6ts" Apr 17 11:30:23.244856 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:23.244353 2567 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 17 11:30:23.244856 
ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:23.244372 2567 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 17 11:30:23.244856 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:23.244385 2567 projected.go:194] Error preparing data for projected volume kube-api-access-g4hjp for pod openshift-network-diagnostics/network-check-target-8d6ts: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 11:30:23.244856 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:23.244441 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a1d31d48-4078-464b-b36c-28075b5f885b-kube-api-access-g4hjp podName:a1d31d48-4078-464b-b36c-28075b5f885b nodeName:}" failed. No retries permitted until 2026-04-17 11:30:25.244423755 +0000 UTC m=+6.082661502 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-g4hjp" (UniqueName: "kubernetes.io/projected/a1d31d48-4078-464b-b36c-28075b5f885b-kube-api-access-g4hjp") pod "network-check-target-8d6ts" (UID: "a1d31d48-4078-464b-b36c-28075b5f885b") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 11:30:23.345992 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:23.345372 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b1f3e8e-0735-4b17-9e76-c70b964db9c1-metrics-certs\") pod \"network-metrics-daemon-tnlq8\" (UID: \"7b1f3e8e-0735-4b17-9e76-c70b964db9c1\") " pod="openshift-multus/network-metrics-daemon-tnlq8" Apr 17 11:30:23.345992 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:23.345571 2567 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 11:30:23.345992 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:23.345632 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b1f3e8e-0735-4b17-9e76-c70b964db9c1-metrics-certs podName:7b1f3e8e-0735-4b17-9e76-c70b964db9c1 nodeName:}" failed. No retries permitted until 2026-04-17 11:30:25.345613345 +0000 UTC m=+6.183851087 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7b1f3e8e-0735-4b17-9e76-c70b964db9c1-metrics-certs") pod "network-metrics-daemon-tnlq8" (UID: "7b1f3e8e-0735-4b17-9e76-c70b964db9c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 11:30:23.647009 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:23.646922 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/e6522e95-55e6-43f4-9a0b-b0429a3a47c4-original-pull-secret\") pod \"global-pull-secret-syncer-ftdll\" (UID: \"e6522e95-55e6-43f4-9a0b-b0429a3a47c4\") " pod="kube-system/global-pull-secret-syncer-ftdll" Apr 17 11:30:23.647161 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:23.647085 2567 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 17 11:30:23.647161 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:23.647146 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6522e95-55e6-43f4-9a0b-b0429a3a47c4-original-pull-secret podName:e6522e95-55e6-43f4-9a0b-b0429a3a47c4 nodeName:}" failed. No retries permitted until 2026-04-17 11:30:24.647129329 +0000 UTC m=+5.485367062 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/e6522e95-55e6-43f4-9a0b-b0429a3a47c4-original-pull-secret") pod "global-pull-secret-syncer-ftdll" (UID: "e6522e95-55e6-43f4-9a0b-b0429a3a47c4") : object "kube-system"/"original-pull-secret" not registered Apr 17 11:30:23.709855 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:23.709825 2567 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8d6ts" Apr 17 11:30:23.710324 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:23.709937 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8d6ts" podUID="a1d31d48-4078-464b-b36c-28075b5f885b" Apr 17 11:30:23.710518 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:23.710497 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tnlq8" Apr 17 11:30:23.710635 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:23.710611 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tnlq8" podUID="7b1f3e8e-0735-4b17-9e76-c70b964db9c1" Apr 17 11:30:23.748644 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:23.748607 2567 generic.go:358] "Generic (PLEG): container finished" podID="aa74b40b257fc464a40cc0981f038693" containerID="13b5c2f4a2a3f86f3c21324fcb1665816979d81c70d105c1ce1a2fd838e6f6bf" exitCode=0 Apr 17 11:30:23.748812 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:23.748713 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-140-245.ec2.internal" event={"ID":"aa74b40b257fc464a40cc0981f038693","Type":"ContainerDied","Data":"13b5c2f4a2a3f86f3c21324fcb1665816979d81c70d105c1ce1a2fd838e6f6bf"} Apr 17 11:30:24.656982 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:24.656432 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/e6522e95-55e6-43f4-9a0b-b0429a3a47c4-original-pull-secret\") pod \"global-pull-secret-syncer-ftdll\" (UID: \"e6522e95-55e6-43f4-9a0b-b0429a3a47c4\") " pod="kube-system/global-pull-secret-syncer-ftdll" Apr 17 11:30:24.656982 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:24.656568 2567 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 17 11:30:24.656982 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:24.656626 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6522e95-55e6-43f4-9a0b-b0429a3a47c4-original-pull-secret podName:e6522e95-55e6-43f4-9a0b-b0429a3a47c4 nodeName:}" failed. No retries permitted until 2026-04-17 11:30:26.656607056 +0000 UTC m=+7.494844807 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/e6522e95-55e6-43f4-9a0b-b0429a3a47c4-original-pull-secret") pod "global-pull-secret-syncer-ftdll" (UID: "e6522e95-55e6-43f4-9a0b-b0429a3a47c4") : object "kube-system"/"original-pull-secret" not registered Apr 17 11:30:24.710110 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:24.709509 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ftdll" Apr 17 11:30:24.710110 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:24.709705 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ftdll" podUID="e6522e95-55e6-43f4-9a0b-b0429a3a47c4" Apr 17 11:30:24.753367 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:24.753334 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-140-245.ec2.internal" event={"ID":"aa74b40b257fc464a40cc0981f038693","Type":"ContainerStarted","Data":"20f349ad05464cf69c1d845e37625115779334060de9de1bfb86742ba449d5a3"} Apr 17 11:30:25.260286 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:25.260156 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g4hjp\" (UniqueName: \"kubernetes.io/projected/a1d31d48-4078-464b-b36c-28075b5f885b-kube-api-access-g4hjp\") pod \"network-check-target-8d6ts\" (UID: \"a1d31d48-4078-464b-b36c-28075b5f885b\") " pod="openshift-network-diagnostics/network-check-target-8d6ts" Apr 17 11:30:25.260467 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:25.260355 2567 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 17 11:30:25.260467 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:25.260383 2567 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 17 11:30:25.260467 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:25.260397 2567 projected.go:194] Error preparing data for projected volume kube-api-access-g4hjp for pod openshift-network-diagnostics/network-check-target-8d6ts: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 11:30:25.260467 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:25.260457 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a1d31d48-4078-464b-b36c-28075b5f885b-kube-api-access-g4hjp podName:a1d31d48-4078-464b-b36c-28075b5f885b nodeName:}" failed. No retries permitted until 2026-04-17 11:30:29.260437865 +0000 UTC m=+10.098675606 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-g4hjp" (UniqueName: "kubernetes.io/projected/a1d31d48-4078-464b-b36c-28075b5f885b-kube-api-access-g4hjp") pod "network-check-target-8d6ts" (UID: "a1d31d48-4078-464b-b36c-28075b5f885b") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 11:30:25.361513 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:25.360971 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b1f3e8e-0735-4b17-9e76-c70b964db9c1-metrics-certs\") pod \"network-metrics-daemon-tnlq8\" (UID: \"7b1f3e8e-0735-4b17-9e76-c70b964db9c1\") " pod="openshift-multus/network-metrics-daemon-tnlq8" Apr 17 11:30:25.361513 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:25.361120 2567 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 11:30:25.361513 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:25.361183 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b1f3e8e-0735-4b17-9e76-c70b964db9c1-metrics-certs podName:7b1f3e8e-0735-4b17-9e76-c70b964db9c1 nodeName:}" failed. No retries permitted until 2026-04-17 11:30:29.361163501 +0000 UTC m=+10.199401235 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7b1f3e8e-0735-4b17-9e76-c70b964db9c1-metrics-certs") pod "network-metrics-daemon-tnlq8" (UID: "7b1f3e8e-0735-4b17-9e76-c70b964db9c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 11:30:25.709708 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:25.709555 2567 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8d6ts" Apr 17 11:30:25.709708 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:25.709607 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tnlq8" Apr 17 11:30:25.709949 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:25.709714 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8d6ts" podUID="a1d31d48-4078-464b-b36c-28075b5f885b" Apr 17 11:30:25.709949 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:25.709845 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tnlq8" podUID="7b1f3e8e-0735-4b17-9e76-c70b964db9c1" Apr 17 11:30:26.671815 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:26.671772 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/e6522e95-55e6-43f4-9a0b-b0429a3a47c4-original-pull-secret\") pod \"global-pull-secret-syncer-ftdll\" (UID: \"e6522e95-55e6-43f4-9a0b-b0429a3a47c4\") " pod="kube-system/global-pull-secret-syncer-ftdll" Apr 17 11:30:26.672236 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:26.671967 2567 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 17 11:30:26.672236 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:26.672034 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6522e95-55e6-43f4-9a0b-b0429a3a47c4-original-pull-secret podName:e6522e95-55e6-43f4-9a0b-b0429a3a47c4 nodeName:}" failed. No retries permitted until 2026-04-17 11:30:30.672015099 +0000 UTC m=+11.510252835 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/e6522e95-55e6-43f4-9a0b-b0429a3a47c4-original-pull-secret") pod "global-pull-secret-syncer-ftdll" (UID: "e6522e95-55e6-43f4-9a0b-b0429a3a47c4") : object "kube-system"/"original-pull-secret" not registered Apr 17 11:30:26.709129 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:26.709102 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ftdll" Apr 17 11:30:26.709274 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:26.709226 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ftdll" podUID="e6522e95-55e6-43f4-9a0b-b0429a3a47c4" Apr 17 11:30:27.710121 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:27.709709 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8d6ts" Apr 17 11:30:27.710121 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:27.709836 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8d6ts" podUID="a1d31d48-4078-464b-b36c-28075b5f885b" Apr 17 11:30:27.710659 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:27.710257 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tnlq8" Apr 17 11:30:27.710659 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:27.710372 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tnlq8" podUID="7b1f3e8e-0735-4b17-9e76-c70b964db9c1" Apr 17 11:30:28.709433 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:28.709400 2567 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-ftdll" Apr 17 11:30:28.709611 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:28.709535 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ftdll" podUID="e6522e95-55e6-43f4-9a0b-b0429a3a47c4" Apr 17 11:30:29.291698 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:29.291647 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g4hjp\" (UniqueName: \"kubernetes.io/projected/a1d31d48-4078-464b-b36c-28075b5f885b-kube-api-access-g4hjp\") pod \"network-check-target-8d6ts\" (UID: \"a1d31d48-4078-464b-b36c-28075b5f885b\") " pod="openshift-network-diagnostics/network-check-target-8d6ts" Apr 17 11:30:29.292138 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:29.291861 2567 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 17 11:30:29.292138 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:29.291880 2567 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 17 11:30:29.292138 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:29.291892 2567 projected.go:194] Error preparing data for projected volume kube-api-access-g4hjp for pod openshift-network-diagnostics/network-check-target-8d6ts: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 17 11:30:29.292138 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:29.291948 2567 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a1d31d48-4078-464b-b36c-28075b5f885b-kube-api-access-g4hjp podName:a1d31d48-4078-464b-b36c-28075b5f885b nodeName:}" failed. No retries permitted until 2026-04-17 11:30:37.291929182 +0000 UTC m=+18.130166921 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-g4hjp" (UniqueName: "kubernetes.io/projected/a1d31d48-4078-464b-b36c-28075b5f885b-kube-api-access-g4hjp") pod "network-check-target-8d6ts" (UID: "a1d31d48-4078-464b-b36c-28075b5f885b") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 17 11:30:29.392984 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:29.392944 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b1f3e8e-0735-4b17-9e76-c70b964db9c1-metrics-certs\") pod \"network-metrics-daemon-tnlq8\" (UID: \"7b1f3e8e-0735-4b17-9e76-c70b964db9c1\") " pod="openshift-multus/network-metrics-daemon-tnlq8"
Apr 17 11:30:29.393155 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:29.393108 2567 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 17 11:30:29.393220 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:29.393180 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b1f3e8e-0735-4b17-9e76-c70b964db9c1-metrics-certs podName:7b1f3e8e-0735-4b17-9e76-c70b964db9c1 nodeName:}" failed. No retries permitted until 2026-04-17 11:30:37.393160381 +0000 UTC m=+18.231398127 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7b1f3e8e-0735-4b17-9e76-c70b964db9c1-metrics-certs") pod "network-metrics-daemon-tnlq8" (UID: "7b1f3e8e-0735-4b17-9e76-c70b964db9c1") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 17 11:30:29.711318 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:29.710693 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8d6ts"
Apr 17 11:30:29.711318 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:29.710807 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8d6ts" podUID="a1d31d48-4078-464b-b36c-28075b5f885b"
Apr 17 11:30:29.711318 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:29.711169 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tnlq8"
Apr 17 11:30:29.711318 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:29.711271 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tnlq8" podUID="7b1f3e8e-0735-4b17-9e76-c70b964db9c1"
Apr 17 11:30:30.703744 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:30.703662 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/e6522e95-55e6-43f4-9a0b-b0429a3a47c4-original-pull-secret\") pod \"global-pull-secret-syncer-ftdll\" (UID: \"e6522e95-55e6-43f4-9a0b-b0429a3a47c4\") " pod="kube-system/global-pull-secret-syncer-ftdll"
Apr 17 11:30:30.704174 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:30.703822 2567 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 17 11:30:30.704174 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:30.703910 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6522e95-55e6-43f4-9a0b-b0429a3a47c4-original-pull-secret podName:e6522e95-55e6-43f4-9a0b-b0429a3a47c4 nodeName:}" failed. No retries permitted until 2026-04-17 11:30:38.703887609 +0000 UTC m=+19.542125351 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/e6522e95-55e6-43f4-9a0b-b0429a3a47c4-original-pull-secret") pod "global-pull-secret-syncer-ftdll" (UID: "e6522e95-55e6-43f4-9a0b-b0429a3a47c4") : object "kube-system"/"original-pull-secret" not registered
Apr 17 11:30:30.709293 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:30.709271 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ftdll"
Apr 17 11:30:30.709409 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:30.709382 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ftdll" podUID="e6522e95-55e6-43f4-9a0b-b0429a3a47c4"
Apr 17 11:30:31.709005 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:31.708972 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8d6ts"
Apr 17 11:30:31.709425 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:31.708979 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tnlq8"
Apr 17 11:30:31.709425 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:31.709089 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8d6ts" podUID="a1d31d48-4078-464b-b36c-28075b5f885b"
Apr 17 11:30:31.709425 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:31.709193 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tnlq8" podUID="7b1f3e8e-0735-4b17-9e76-c70b964db9c1"
Apr 17 11:30:32.708899 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:32.708869 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ftdll"
Apr 17 11:30:32.709063 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:32.708986 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ftdll" podUID="e6522e95-55e6-43f4-9a0b-b0429a3a47c4"
Apr 17 11:30:33.709830 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:33.709794 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8d6ts"
Apr 17 11:30:33.710275 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:33.709798 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tnlq8"
Apr 17 11:30:33.710275 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:33.709916 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8d6ts" podUID="a1d31d48-4078-464b-b36c-28075b5f885b"
Apr 17 11:30:33.710275 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:33.709997 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tnlq8" podUID="7b1f3e8e-0735-4b17-9e76-c70b964db9c1"
Apr 17 11:30:34.709592 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:34.709557 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ftdll"
Apr 17 11:30:34.709863 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:34.709674 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ftdll" podUID="e6522e95-55e6-43f4-9a0b-b0429a3a47c4"
Apr 17 11:30:35.709016 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:35.708981 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8d6ts"
Apr 17 11:30:35.709194 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:35.708981 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tnlq8"
Apr 17 11:30:35.709194 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:35.709093 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8d6ts" podUID="a1d31d48-4078-464b-b36c-28075b5f885b"
Apr 17 11:30:35.709194 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:35.709165 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tnlq8" podUID="7b1f3e8e-0735-4b17-9e76-c70b964db9c1"
Apr 17 11:30:36.708903 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:36.708872 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ftdll"
Apr 17 11:30:36.709337 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:36.709002 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ftdll" podUID="e6522e95-55e6-43f4-9a0b-b0429a3a47c4"
Apr 17 11:30:37.352854 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:37.352813 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g4hjp\" (UniqueName: \"kubernetes.io/projected/a1d31d48-4078-464b-b36c-28075b5f885b-kube-api-access-g4hjp\") pod \"network-check-target-8d6ts\" (UID: \"a1d31d48-4078-464b-b36c-28075b5f885b\") " pod="openshift-network-diagnostics/network-check-target-8d6ts"
Apr 17 11:30:37.353036 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:37.352954 2567 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 17 11:30:37.353036 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:37.352975 2567 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 17 11:30:37.353036 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:37.352987 2567 projected.go:194] Error preparing data for projected volume kube-api-access-g4hjp for pod openshift-network-diagnostics/network-check-target-8d6ts: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 17 11:30:37.353198 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:37.353052 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a1d31d48-4078-464b-b36c-28075b5f885b-kube-api-access-g4hjp podName:a1d31d48-4078-464b-b36c-28075b5f885b nodeName:}" failed. No retries permitted until 2026-04-17 11:30:53.35303434 +0000 UTC m=+34.191272074 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-g4hjp" (UniqueName: "kubernetes.io/projected/a1d31d48-4078-464b-b36c-28075b5f885b-kube-api-access-g4hjp") pod "network-check-target-8d6ts" (UID: "a1d31d48-4078-464b-b36c-28075b5f885b") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 17 11:30:37.453289 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:37.453254 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b1f3e8e-0735-4b17-9e76-c70b964db9c1-metrics-certs\") pod \"network-metrics-daemon-tnlq8\" (UID: \"7b1f3e8e-0735-4b17-9e76-c70b964db9c1\") " pod="openshift-multus/network-metrics-daemon-tnlq8"
Apr 17 11:30:37.453461 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:37.453401 2567 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 17 11:30:37.453523 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:37.453466 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b1f3e8e-0735-4b17-9e76-c70b964db9c1-metrics-certs podName:7b1f3e8e-0735-4b17-9e76-c70b964db9c1 nodeName:}" failed. No retries permitted until 2026-04-17 11:30:53.453447087 +0000 UTC m=+34.291684822 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7b1f3e8e-0735-4b17-9e76-c70b964db9c1-metrics-certs") pod "network-metrics-daemon-tnlq8" (UID: "7b1f3e8e-0735-4b17-9e76-c70b964db9c1") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 17 11:30:37.709067 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:37.709033 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8d6ts"
Apr 17 11:30:37.709497 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:37.709176 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8d6ts" podUID="a1d31d48-4078-464b-b36c-28075b5f885b"
Apr 17 11:30:37.709497 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:37.709213 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tnlq8"
Apr 17 11:30:37.709497 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:37.709281 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tnlq8" podUID="7b1f3e8e-0735-4b17-9e76-c70b964db9c1"
Apr 17 11:30:38.709295 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:38.709264 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ftdll"
Apr 17 11:30:38.709772 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:38.709373 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ftdll" podUID="e6522e95-55e6-43f4-9a0b-b0429a3a47c4"
Apr 17 11:30:38.763853 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:38.763817 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/e6522e95-55e6-43f4-9a0b-b0429a3a47c4-original-pull-secret\") pod \"global-pull-secret-syncer-ftdll\" (UID: \"e6522e95-55e6-43f4-9a0b-b0429a3a47c4\") " pod="kube-system/global-pull-secret-syncer-ftdll"
Apr 17 11:30:38.764028 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:38.764005 2567 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 17 11:30:38.764133 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:38.764112 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6522e95-55e6-43f4-9a0b-b0429a3a47c4-original-pull-secret podName:e6522e95-55e6-43f4-9a0b-b0429a3a47c4 nodeName:}" failed. No retries permitted until 2026-04-17 11:30:54.764092791 +0000 UTC m=+35.602330532 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/e6522e95-55e6-43f4-9a0b-b0429a3a47c4-original-pull-secret") pod "global-pull-secret-syncer-ftdll" (UID: "e6522e95-55e6-43f4-9a0b-b0429a3a47c4") : object "kube-system"/"original-pull-secret" not registered
Apr 17 11:30:39.714710 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:39.712424 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8d6ts"
Apr 17 11:30:39.714710 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:39.714210 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tnlq8"
Apr 17 11:30:39.714710 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:39.714661 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8d6ts" podUID="a1d31d48-4078-464b-b36c-28075b5f885b"
Apr 17 11:30:39.716171 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:39.715874 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tnlq8" podUID="7b1f3e8e-0735-4b17-9e76-c70b964db9c1"
Apr 17 11:30:39.779347 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:39.779310 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-9zpcp" event={"ID":"f79e4e5d-fbfb-429a-aa74-4c1d5725072a","Type":"ContainerStarted","Data":"b05a90d6589da4345fb244c41b57f212cab913cff2251d236081e528c0853353"}
Apr 17 11:30:39.781203 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:39.781181 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vjfgd_efceb2ce-9379-47c0-b8c1-22f8ad408e7c/ovn-acl-logging/0.log"
Apr 17 11:30:39.781494 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:39.781474 2567 generic.go:358] "Generic (PLEG): container finished" podID="efceb2ce-9379-47c0-b8c1-22f8ad408e7c" containerID="cdb17759989aac88b40fbda228628e29f58bc9f6bde34423d5d614718b3f67c2" exitCode=1
Apr 17 11:30:39.781603 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:39.781531 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" event={"ID":"efceb2ce-9379-47c0-b8c1-22f8ad408e7c","Type":"ContainerStarted","Data":"93cdb76b375b973a358976318eaf189b9eebdad0b33d5d6b3c33703b48b32448"}
Apr 17 11:30:39.781603 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:39.781552 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" event={"ID":"efceb2ce-9379-47c0-b8c1-22f8ad408e7c","Type":"ContainerStarted","Data":"d9bbc173a4be8975632652774f59e5dc054b8c0afa60bda049b11e11cff5ff10"}
Apr 17 11:30:39.781603 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:39.781565 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" event={"ID":"efceb2ce-9379-47c0-b8c1-22f8ad408e7c","Type":"ContainerDied","Data":"cdb17759989aac88b40fbda228628e29f58bc9f6bde34423d5d614718b3f67c2"}
Apr 17 11:30:39.781603 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:39.781581 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" event={"ID":"efceb2ce-9379-47c0-b8c1-22f8ad408e7c","Type":"ContainerStarted","Data":"9478bc83c126f9945a478a809bb8752f479b112874a4945aa632ad36b3b4858d"}
Apr 17 11:30:39.782879 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:39.782859 2567 generic.go:358] "Generic (PLEG): container finished" podID="ea7f56ab-276e-4b70-8003-11db06a0b72b" containerID="7f566f8c6f61f5035e3716540728e2df66baadb8afd866965742470636e49825" exitCode=0
Apr 17 11:30:39.782974 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:39.782921 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fpmx5" event={"ID":"ea7f56ab-276e-4b70-8003-11db06a0b72b","Type":"ContainerDied","Data":"7f566f8c6f61f5035e3716540728e2df66baadb8afd866965742470636e49825"}
Apr 17 11:30:39.784415 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:39.784387 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2pjn" event={"ID":"41014c60-54e4-48f5-83f8-487c7f64058e","Type":"ContainerStarted","Data":"fe4113b013847b62d3d9ccfeb3c8b98faff2fe225c8b44f8032bdd6a2d20a1f4"}
Apr 17 11:30:39.785791 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:39.785769 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-rlnwv" event={"ID":"d6d2dca5-2ae8-41be-9865-73022a8c7601","Type":"ContainerStarted","Data":"aa78401e3d4098de63f7890fff49bf115e4508b49bce49aa31bfdca2784c85d8"}
Apr 17 11:30:39.786969 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:39.786943 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-244rk" event={"ID":"a32ab52e-e11f-46e9-9714-4ecfa0c87830","Type":"ContainerStarted","Data":"1a8f9749c83bd326312487276bf0f682aafa9bd8180da534185b529a858301ca"}
Apr 17 11:30:39.788137 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:39.788116 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-jkr2r" event={"ID":"710176df-941f-45f0-baed-0e9b9115157d","Type":"ContainerStarted","Data":"857a4442f616315c2a17ccddb27484895d4f1639bbf544d4890852a897761697"}
Apr 17 11:30:39.789414 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:39.789394 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xq6rz" event={"ID":"96b4954a-76e8-4a06-9917-5454d450896d","Type":"ContainerStarted","Data":"c1f20f903e9f35e023bb9c679092e09198f4808c0d925563de3b48c5a394607d"}
Apr 17 11:30:39.796010 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:39.795969 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-140-245.ec2.internal" podStartSLOduration=18.795959344 podStartE2EDuration="18.795959344s" podCreationTimestamp="2026-04-17 11:30:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 11:30:24.767179542 +0000 UTC m=+5.605417299" watchObservedRunningTime="2026-04-17 11:30:39.795959344 +0000 UTC m=+20.634197110"
Apr 17 11:30:39.796353 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:39.796325 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-9zpcp" podStartSLOduration=4.009240098 podStartE2EDuration="20.796317576s" podCreationTimestamp="2026-04-17 11:30:19 +0000 UTC" firstStartedPulling="2026-04-17 11:30:22.484695159 +0000 UTC m=+3.322932907" lastFinishedPulling="2026-04-17 11:30:39.271772648 +0000 UTC m=+20.110010385" observedRunningTime="2026-04-17 11:30:39.796007356 +0000 UTC m=+20.634245115" watchObservedRunningTime="2026-04-17 11:30:39.796317576 +0000 UTC m=+20.634555333"
Apr 17 11:30:39.813742 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:39.813658 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-g2pjn" podStartSLOduration=2.97366947 podStartE2EDuration="19.813645567s" podCreationTimestamp="2026-04-17 11:30:20 +0000 UTC" firstStartedPulling="2026-04-17 11:30:22.483694486 +0000 UTC m=+3.321932225" lastFinishedPulling="2026-04-17 11:30:39.323670581 +0000 UTC m=+20.161908322" observedRunningTime="2026-04-17 11:30:39.813085865 +0000 UTC m=+20.651323620" watchObservedRunningTime="2026-04-17 11:30:39.813645567 +0000 UTC m=+20.651883322"
Apr 17 11:30:39.827413 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:39.827366 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/konnectivity-agent-jkr2r" podStartSLOduration=3.03432546 podStartE2EDuration="19.827350474s" podCreationTimestamp="2026-04-17 11:30:20 +0000 UTC" firstStartedPulling="2026-04-17 11:30:22.478704214 +0000 UTC m=+3.316941963" lastFinishedPulling="2026-04-17 11:30:39.271729228 +0000 UTC m=+20.109966977" observedRunningTime="2026-04-17 11:30:39.8265261 +0000 UTC m=+20.664763856" watchObservedRunningTime="2026-04-17 11:30:39.827350474 +0000 UTC m=+20.665588230"
Apr 17 11:30:39.843262 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:39.843219 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-244rk" podStartSLOduration=3.051614344 podStartE2EDuration="19.843206455s" podCreationTimestamp="2026-04-17 11:30:20 +0000 UTC" firstStartedPulling="2026-04-17 11:30:22.480204332 +0000 UTC m=+3.318442065" lastFinishedPulling="2026-04-17 11:30:39.271796441 +0000 UTC m=+20.110034176" observedRunningTime="2026-04-17 11:30:39.842526689 +0000 UTC m=+20.680764446" watchObservedRunningTime="2026-04-17 11:30:39.843206455 +0000 UTC m=+20.681444210"
Apr 17 11:30:39.891524 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:39.891483 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-xq6rz" podStartSLOduration=3.092439527 podStartE2EDuration="19.891470197s" podCreationTimestamp="2026-04-17 11:30:20 +0000 UTC" firstStartedPulling="2026-04-17 11:30:22.472764662 +0000 UTC m=+3.311002399" lastFinishedPulling="2026-04-17 11:30:39.271795335 +0000 UTC m=+20.110033069" observedRunningTime="2026-04-17 11:30:39.891254598 +0000 UTC m=+20.729492348" watchObservedRunningTime="2026-04-17 11:30:39.891470197 +0000 UTC m=+20.729707972"
Apr 17 11:30:40.503646 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:40.503435 2567 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock"
Apr 17 11:30:40.692326 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:40.692193 2567 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock","Timestamp":"2026-04-17T11:30:40.503643269Z","UUID":"514171a1-2cd4-4439-9732-83385d971d31","Handler":null,"Name":"","Endpoint":""}
Apr 17 11:30:40.695319 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:40.695236 2567 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: ebs.csi.aws.com endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock versions: 1.0.0
Apr 17 11:30:40.695319 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:40.695271 2567 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: ebs.csi.aws.com at endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock
Apr 17 11:30:40.709126 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:40.709103 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ftdll"
Apr 17 11:30:40.709242 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:40.709220 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ftdll" podUID="e6522e95-55e6-43f4-9a0b-b0429a3a47c4"
Apr 17 11:30:40.793593 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:40.793569 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vjfgd_efceb2ce-9379-47c0-b8c1-22f8ad408e7c/ovn-acl-logging/0.log"
Apr 17 11:30:40.793968 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:40.793941 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" event={"ID":"efceb2ce-9379-47c0-b8c1-22f8ad408e7c","Type":"ContainerStarted","Data":"0b14265500a312a1faae236388d2a5fbc43c35f4861aaa5196537fb2a567f382"}
Apr 17 11:30:40.794005 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:40.793975 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" event={"ID":"efceb2ce-9379-47c0-b8c1-22f8ad408e7c","Type":"ContainerStarted","Data":"592f665e9370ebe8b0467cd073cab52b48a88f9121bc3a872356c496863ce746"}
Apr 17 11:30:40.795505 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:40.795469 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-rlnwv" event={"ID":"d6d2dca5-2ae8-41be-9865-73022a8c7601","Type":"ContainerStarted","Data":"5a6f0d32b421347c2ff49876ff736947956e75be5bfeab7e42885b6eee4ca0e7"}
Apr 17 11:30:41.709115 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:41.709079 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8d6ts"
Apr 17 11:30:41.709115 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:41.709107 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tnlq8"
Apr 17 11:30:41.709351 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:41.709186 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8d6ts" podUID="a1d31d48-4078-464b-b36c-28075b5f885b"
Apr 17 11:30:41.709351 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:41.709307 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tnlq8" podUID="7b1f3e8e-0735-4b17-9e76-c70b964db9c1"
Apr 17 11:30:41.799605 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:41.799554 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-rlnwv" event={"ID":"d6d2dca5-2ae8-41be-9865-73022a8c7601","Type":"ContainerStarted","Data":"046255131b9e2ace6bf6ff953adcd9ab96d8885019d81296a787826a981a89f8"}
Apr 17 11:30:41.801216 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:41.801189 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-v2svx" event={"ID":"989ffc44-f4df-40d5-916a-161b05378f4e","Type":"ContainerStarted","Data":"70e0f968474f488b9294a4ed0dd358ae470fcc674db1d68a08e5f1f2e2bb42bd"}
Apr 17 11:30:41.819247 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:41.819192 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-rlnwv" podStartSLOduration=2.983327396 podStartE2EDuration="21.819175285s" podCreationTimestamp="2026-04-17 11:30:20 +0000 UTC" firstStartedPulling="2026-04-17 11:30:22.482737329 +0000 UTC m=+3.320975062" lastFinishedPulling="2026-04-17 11:30:41.318585197 +0000 UTC m=+22.156822951" observedRunningTime="2026-04-17 11:30:41.81784848 +0000 UTC m=+22.656086237" watchObservedRunningTime="2026-04-17 11:30:41.819175285 +0000 UTC m=+22.657413042"
Apr 17 11:30:41.836043 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:41.835994 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/iptables-alerter-v2svx" podStartSLOduration=6.012451118 podStartE2EDuration="22.835981061s" podCreationTimestamp="2026-04-17 11:30:19 +0000 UTC" firstStartedPulling="2026-04-17 11:30:22.47533099 +0000 UTC m=+3.313568724" lastFinishedPulling="2026-04-17 11:30:39.298860917 +0000 UTC m=+20.137098667" observedRunningTime="2026-04-17 11:30:41.835659548 +0000 UTC m=+22.673897305" watchObservedRunningTime="2026-04-17 11:30:41.835981061 +0000 UTC m=+22.674218817"
Apr 17 11:30:41.984862 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:41.984835 2567 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/konnectivity-agent-jkr2r"
Apr 17 11:30:41.985440 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:41.985418 2567 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/konnectivity-agent-jkr2r"
Apr 17 11:30:42.709808 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:42.709780 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ftdll"
Apr 17 11:30:42.709977 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:42.709904 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ftdll" podUID="e6522e95-55e6-43f4-9a0b-b0429a3a47c4"
Apr 17 11:30:42.805913 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:42.805884 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vjfgd_efceb2ce-9379-47c0-b8c1-22f8ad408e7c/ovn-acl-logging/0.log"
Apr 17 11:30:42.806420 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:42.806329 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" event={"ID":"efceb2ce-9379-47c0-b8c1-22f8ad408e7c","Type":"ContainerStarted","Data":"b2b875e0013bbbd95a84599bdaa40b02da2c659430439633391a07d6a5140bbe"}
Apr 17 11:30:42.806559 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:42.806538 2567 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kube-system/konnectivity-agent-jkr2r"
Apr 17 11:30:42.807053 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:42.807034 2567 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/konnectivity-agent-jkr2r"
Apr 17 11:30:43.709843 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:43.709809 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tnlq8"
Apr 17 11:30:43.709843 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:43.709836 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8d6ts"
Apr 17 11:30:43.710030 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:43.709945 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tnlq8" podUID="7b1f3e8e-0735-4b17-9e76-c70b964db9c1"
Apr 17 11:30:43.710108 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:43.710085 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8d6ts" podUID="a1d31d48-4078-464b-b36c-28075b5f885b"
Apr 17 11:30:44.709728 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:44.709535 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ftdll"
Apr 17 11:30:44.710323 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:44.709815 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="kube-system/global-pull-secret-syncer-ftdll" podUID="e6522e95-55e6-43f4-9a0b-b0429a3a47c4" Apr 17 11:30:44.812248 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:44.812225 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vjfgd_efceb2ce-9379-47c0-b8c1-22f8ad408e7c/ovn-acl-logging/0.log" Apr 17 11:30:44.812578 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:44.812547 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" event={"ID":"efceb2ce-9379-47c0-b8c1-22f8ad408e7c","Type":"ContainerStarted","Data":"237b6875bd94632ce5551d28033fed8206c977d2ec175c6bc3ec1fbb47b05d40"} Apr 17 11:30:44.812812 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:44.812796 2567 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" Apr 17 11:30:44.813029 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:44.813002 2567 scope.go:117] "RemoveContainer" containerID="cdb17759989aac88b40fbda228628e29f58bc9f6bde34423d5d614718b3f67c2" Apr 17 11:30:44.814225 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:44.814202 2567 generic.go:358] "Generic (PLEG): container finished" podID="ea7f56ab-276e-4b70-8003-11db06a0b72b" containerID="f21fbe20c6a06ca944bf5e849e08aa83c870c46850656545147a499deb99d933" exitCode=0 Apr 17 11:30:44.814345 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:44.814287 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fpmx5" event={"ID":"ea7f56ab-276e-4b70-8003-11db06a0b72b","Type":"ContainerDied","Data":"f21fbe20c6a06ca944bf5e849e08aa83c870c46850656545147a499deb99d933"} Apr 17 11:30:44.828039 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:44.827991 2567 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" Apr 17 11:30:45.709078 ip-10-0-140-245 kubenswrapper[2567]: I0417 
11:30:45.709046 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8d6ts" Apr 17 11:30:45.709309 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:45.709062 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tnlq8" Apr 17 11:30:45.709309 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:45.709289 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tnlq8" podUID="7b1f3e8e-0735-4b17-9e76-c70b964db9c1" Apr 17 11:30:45.709428 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:45.709162 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-8d6ts" podUID="a1d31d48-4078-464b-b36c-28075b5f885b" Apr 17 11:30:45.818776 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:45.818748 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vjfgd_efceb2ce-9379-47c0-b8c1-22f8ad408e7c/ovn-acl-logging/0.log" Apr 17 11:30:45.819219 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:45.819037 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" event={"ID":"efceb2ce-9379-47c0-b8c1-22f8ad408e7c","Type":"ContainerStarted","Data":"dffb33df81a395e2bc0ae3cbf459a05e7e01ae68af3df844c27b97a768f67960"} Apr 17 11:30:45.819277 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:45.819228 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 11:30:45.819494 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:45.819480 2567 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" Apr 17 11:30:45.835174 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:45.835148 2567 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" Apr 17 11:30:45.847122 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:45.847072 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" podStartSLOduration=9.977345564 podStartE2EDuration="26.847058558s" podCreationTimestamp="2026-04-17 11:30:19 +0000 UTC" firstStartedPulling="2026-04-17 11:30:22.481361785 +0000 UTC m=+3.319599534" lastFinishedPulling="2026-04-17 11:30:39.351074779 +0000 UTC m=+20.189312528" observedRunningTime="2026-04-17 11:30:45.845526631 +0000 UTC m=+26.683764387" watchObservedRunningTime="2026-04-17 11:30:45.847058558 +0000 UTC m=+26.685296313" Apr 17 11:30:46.412037 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:46.411964 
2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-ftdll"] Apr 17 11:30:46.412183 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:46.412083 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ftdll" Apr 17 11:30:46.412183 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:46.412167 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ftdll" podUID="e6522e95-55e6-43f4-9a0b-b0429a3a47c4" Apr 17 11:30:46.414575 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:46.414555 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-8d6ts"] Apr 17 11:30:46.414697 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:46.414620 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8d6ts" Apr 17 11:30:46.414745 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:46.414700 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-8d6ts" podUID="a1d31d48-4078-464b-b36c-28075b5f885b" Apr 17 11:30:46.417397 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:46.417375 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-tnlq8"] Apr 17 11:30:46.417484 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:46.417464 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tnlq8" Apr 17 11:30:46.417585 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:46.417565 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tnlq8" podUID="7b1f3e8e-0735-4b17-9e76-c70b964db9c1" Apr 17 11:30:46.821973 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:46.821939 2567 generic.go:358] "Generic (PLEG): container finished" podID="ea7f56ab-276e-4b70-8003-11db06a0b72b" containerID="e1f76013f35eab68a30256a23eb1fdf5cfa4564cb48d84834a4fff396ba53d64" exitCode=0 Apr 17 11:30:46.822359 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:46.821976 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fpmx5" event={"ID":"ea7f56ab-276e-4b70-8003-11db06a0b72b","Type":"ContainerDied","Data":"e1f76013f35eab68a30256a23eb1fdf5cfa4564cb48d84834a4fff396ba53d64"} Apr 17 11:30:46.822359 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:46.822216 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 11:30:47.709291 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:47.709257 2567 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-ftdll" Apr 17 11:30:47.709451 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:47.709257 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tnlq8" Apr 17 11:30:47.709451 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:47.709359 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ftdll" podUID="e6522e95-55e6-43f4-9a0b-b0429a3a47c4" Apr 17 11:30:47.709451 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:47.709439 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tnlq8" podUID="7b1f3e8e-0735-4b17-9e76-c70b964db9c1" Apr 17 11:30:47.823467 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:47.823443 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 11:30:48.709444 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:48.709279 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8d6ts" Apr 17 11:30:48.709587 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:48.709514 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8d6ts" podUID="a1d31d48-4078-464b-b36c-28075b5f885b" Apr 17 11:30:48.828458 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:48.828424 2567 generic.go:358] "Generic (PLEG): container finished" podID="ea7f56ab-276e-4b70-8003-11db06a0b72b" containerID="e24439f3e80f3eab62cdc80fa567b39518072895becf59fcbe7497dd3e586e66" exitCode=0 Apr 17 11:30:48.828963 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:48.828490 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fpmx5" event={"ID":"ea7f56ab-276e-4b70-8003-11db06a0b72b","Type":"ContainerDied","Data":"e24439f3e80f3eab62cdc80fa567b39518072895becf59fcbe7497dd3e586e66"} Apr 17 11:30:49.624696 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:49.624645 2567 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" Apr 17 11:30:49.624924 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:49.624906 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 11:30:49.641822 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:49.641745 2567 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" podUID="efceb2ce-9379-47c0-b8c1-22f8ad408e7c" containerName="ovnkube-controller" probeResult="failure" output="" Apr 17 11:30:49.652102 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:49.652073 2567 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd" podUID="efceb2ce-9379-47c0-b8c1-22f8ad408e7c" containerName="ovnkube-controller" probeResult="failure" output="" Apr 17 11:30:49.710140 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:49.710101 2567 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tnlq8" Apr 17 11:30:49.710309 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:49.710210 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tnlq8" podUID="7b1f3e8e-0735-4b17-9e76-c70b964db9c1" Apr 17 11:30:49.710467 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:49.710436 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ftdll" Apr 17 11:30:49.710594 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:49.710555 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ftdll" podUID="e6522e95-55e6-43f4-9a0b-b0429a3a47c4" Apr 17 11:30:50.708991 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:50.708914 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8d6ts" Apr 17 11:30:50.709545 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:50.709022 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-8d6ts" podUID="a1d31d48-4078-464b-b36c-28075b5f885b" Apr 17 11:30:51.709433 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:51.709400 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ftdll" Apr 17 11:30:51.710012 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:51.709400 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tnlq8" Apr 17 11:30:51.710012 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:51.709513 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-ftdll" podUID="e6522e95-55e6-43f4-9a0b-b0429a3a47c4" Apr 17 11:30:51.710012 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:51.709615 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tnlq8" podUID="7b1f3e8e-0735-4b17-9e76-c70b964db9c1" Apr 17 11:30:52.504348 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.504315 2567 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-140-245.ec2.internal" event="NodeReady" Apr 17 11:30:52.504526 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.504470 2567 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Apr 17 11:30:52.537040 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.537001 2567 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-9b4668746-kmh5p"] Apr 17 11:30:52.565350 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.565315 2567 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-mcjdh"] Apr 17 11:30:52.565496 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.565435 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-9b4668746-kmh5p" Apr 17 11:30:52.568474 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.568440 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-private-configuration\"" Apr 17 11:30:52.571852 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.569073 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-5ntx8\"" Apr 17 11:30:52.571852 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.569172 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Apr 17 11:30:52.572523 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.572501 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Apr 17 11:30:52.583263 ip-10-0-140-245 
kubenswrapper[2567]: I0417 11:30:52.583245 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-9b4668746-kmh5p"] Apr 17 11:30:52.583263 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.583265 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-mcjdh"] Apr 17 11:30:52.583385 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.583339 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-mcjdh" Apr 17 11:30:52.585840 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.585818 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-hmj98\"" Apr 17 11:30:52.585840 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.585831 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Apr 17 11:30:52.585997 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.585864 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Apr 17 11:30:52.585997 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.585881 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Apr 17 11:30:52.586383 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.586365 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Apr 17 11:30:52.650224 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.650201 2567 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-5q84n"] Apr 17 11:30:52.661962 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.661929 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5q84n"] Apr 17 
11:30:52.662076 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.662049 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5q84n" Apr 17 11:30:52.664514 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.664497 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Apr 17 11:30:52.664749 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.664563 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-ltml9\"" Apr 17 11:30:52.664853 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.664603 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Apr 17 11:30:52.676904 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.676879 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-registry-tls\") pod \"image-registry-9b4668746-kmh5p\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " pod="openshift-image-registry/image-registry-9b4668746-kmh5p" Apr 17 11:30:52.676998 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.676922 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk9tx\" (UniqueName: \"kubernetes.io/projected/da2c16b4-2e18-4310-881d-5febd92c9d3d-kube-api-access-vk9tx\") pod \"ingress-canary-mcjdh\" (UID: \"da2c16b4-2e18-4310-881d-5febd92c9d3d\") " pod="openshift-ingress-canary/ingress-canary-mcjdh" Apr 17 11:30:52.677061 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.676995 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/35579630-eea3-41d9-8ca1-8408a45d5896-ca-trust-extracted\") pod 
\"image-registry-9b4668746-kmh5p\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " pod="openshift-image-registry/image-registry-9b4668746-kmh5p" Apr 17 11:30:52.677061 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.677035 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/35579630-eea3-41d9-8ca1-8408a45d5896-registry-certificates\") pod \"image-registry-9b4668746-kmh5p\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " pod="openshift-image-registry/image-registry-9b4668746-kmh5p" Apr 17 11:30:52.677167 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.677074 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35579630-eea3-41d9-8ca1-8408a45d5896-trusted-ca\") pod \"image-registry-9b4668746-kmh5p\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " pod="openshift-image-registry/image-registry-9b4668746-kmh5p" Apr 17 11:30:52.677167 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.677153 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/35579630-eea3-41d9-8ca1-8408a45d5896-installation-pull-secrets\") pod \"image-registry-9b4668746-kmh5p\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " pod="openshift-image-registry/image-registry-9b4668746-kmh5p" Apr 17 11:30:52.677264 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.677193 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da2c16b4-2e18-4310-881d-5febd92c9d3d-cert\") pod \"ingress-canary-mcjdh\" (UID: \"da2c16b4-2e18-4310-881d-5febd92c9d3d\") " pod="openshift-ingress-canary/ingress-canary-mcjdh" Apr 17 11:30:52.677264 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.677222 2567 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfjz7\" (UniqueName: \"kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-kube-api-access-nfjz7\") pod \"image-registry-9b4668746-kmh5p\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " pod="openshift-image-registry/image-registry-9b4668746-kmh5p" Apr 17 11:30:52.677330 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.677277 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/35579630-eea3-41d9-8ca1-8408a45d5896-image-registry-private-configuration\") pod \"image-registry-9b4668746-kmh5p\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " pod="openshift-image-registry/image-registry-9b4668746-kmh5p" Apr 17 11:30:52.677330 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.677302 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-bound-sa-token\") pod \"image-registry-9b4668746-kmh5p\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " pod="openshift-image-registry/image-registry-9b4668746-kmh5p" Apr 17 11:30:52.709010 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.708981 2567 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8d6ts" Apr 17 11:30:52.711792 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.711775 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Apr 17 11:30:52.712232 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.711776 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Apr 17 11:30:52.712232 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.711776 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-fqsgp\"" Apr 17 11:30:52.777973 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.777902 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2da30d47-d7ea-47f4-a489-4729c8989cef-metrics-tls\") pod \"dns-default-5q84n\" (UID: \"2da30d47-d7ea-47f4-a489-4729c8989cef\") " pod="openshift-dns/dns-default-5q84n" Apr 17 11:30:52.777973 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.777950 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-registry-tls\") pod \"image-registry-9b4668746-kmh5p\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " pod="openshift-image-registry/image-registry-9b4668746-kmh5p" Apr 17 11:30:52.778153 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.777985 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vk9tx\" (UniqueName: \"kubernetes.io/projected/da2c16b4-2e18-4310-881d-5febd92c9d3d-kube-api-access-vk9tx\") pod \"ingress-canary-mcjdh\" (UID: \"da2c16b4-2e18-4310-881d-5febd92c9d3d\") " 
pod="openshift-ingress-canary/ingress-canary-mcjdh" Apr 17 11:30:52.778153 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.778025 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/35579630-eea3-41d9-8ca1-8408a45d5896-ca-trust-extracted\") pod \"image-registry-9b4668746-kmh5p\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " pod="openshift-image-registry/image-registry-9b4668746-kmh5p" Apr 17 11:30:52.778153 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.778053 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/35579630-eea3-41d9-8ca1-8408a45d5896-registry-certificates\") pod \"image-registry-9b4668746-kmh5p\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " pod="openshift-image-registry/image-registry-9b4668746-kmh5p" Apr 17 11:30:52.778153 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:52.778075 2567 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 17 11:30:52.778153 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:52.778096 2567 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-9b4668746-kmh5p: secret "image-registry-tls" not found Apr 17 11:30:52.778153 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:52.778146 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-registry-tls podName:35579630-eea3-41d9-8ca1-8408a45d5896 nodeName:}" failed. No retries permitted until 2026-04-17 11:30:53.27812547 +0000 UTC m=+34.116363204 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-registry-tls") pod "image-registry-9b4668746-kmh5p" (UID: "35579630-eea3-41d9-8ca1-8408a45d5896") : secret "image-registry-tls" not found Apr 17 11:30:52.779116 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.778082 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35579630-eea3-41d9-8ca1-8408a45d5896-trusted-ca\") pod \"image-registry-9b4668746-kmh5p\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " pod="openshift-image-registry/image-registry-9b4668746-kmh5p" Apr 17 11:30:52.779116 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.778276 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2da30d47-d7ea-47f4-a489-4729c8989cef-tmp-dir\") pod \"dns-default-5q84n\" (UID: \"2da30d47-d7ea-47f4-a489-4729c8989cef\") " pod="openshift-dns/dns-default-5q84n" Apr 17 11:30:52.779116 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.778310 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2da30d47-d7ea-47f4-a489-4729c8989cef-config-volume\") pod \"dns-default-5q84n\" (UID: \"2da30d47-d7ea-47f4-a489-4729c8989cef\") " pod="openshift-dns/dns-default-5q84n" Apr 17 11:30:52.779116 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.778351 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/35579630-eea3-41d9-8ca1-8408a45d5896-installation-pull-secrets\") pod \"image-registry-9b4668746-kmh5p\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " pod="openshift-image-registry/image-registry-9b4668746-kmh5p" Apr 17 11:30:52.779116 ip-10-0-140-245 kubenswrapper[2567]: 
I0417 11:30:52.778378 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvj52\" (UniqueName: \"kubernetes.io/projected/2da30d47-d7ea-47f4-a489-4729c8989cef-kube-api-access-bvj52\") pod \"dns-default-5q84n\" (UID: \"2da30d47-d7ea-47f4-a489-4729c8989cef\") " pod="openshift-dns/dns-default-5q84n" Apr 17 11:30:52.779116 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.778400 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da2c16b4-2e18-4310-881d-5febd92c9d3d-cert\") pod \"ingress-canary-mcjdh\" (UID: \"da2c16b4-2e18-4310-881d-5febd92c9d3d\") " pod="openshift-ingress-canary/ingress-canary-mcjdh" Apr 17 11:30:52.779116 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.778417 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nfjz7\" (UniqueName: \"kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-kube-api-access-nfjz7\") pod \"image-registry-9b4668746-kmh5p\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " pod="openshift-image-registry/image-registry-9b4668746-kmh5p" Apr 17 11:30:52.779116 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.778437 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/35579630-eea3-41d9-8ca1-8408a45d5896-image-registry-private-configuration\") pod \"image-registry-9b4668746-kmh5p\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " pod="openshift-image-registry/image-registry-9b4668746-kmh5p" Apr 17 11:30:52.779116 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.778453 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-bound-sa-token\") pod \"image-registry-9b4668746-kmh5p\" (UID: 
\"35579630-eea3-41d9-8ca1-8408a45d5896\") " pod="openshift-image-registry/image-registry-9b4668746-kmh5p" Apr 17 11:30:52.779116 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.778665 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/35579630-eea3-41d9-8ca1-8408a45d5896-ca-trust-extracted\") pod \"image-registry-9b4668746-kmh5p\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " pod="openshift-image-registry/image-registry-9b4668746-kmh5p" Apr 17 11:30:52.779116 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.779051 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35579630-eea3-41d9-8ca1-8408a45d5896-trusted-ca\") pod \"image-registry-9b4668746-kmh5p\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " pod="openshift-image-registry/image-registry-9b4668746-kmh5p" Apr 17 11:30:52.779116 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:52.779118 2567 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 17 11:30:52.779565 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:52.779164 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da2c16b4-2e18-4310-881d-5febd92c9d3d-cert podName:da2c16b4-2e18-4310-881d-5febd92c9d3d nodeName:}" failed. No retries permitted until 2026-04-17 11:30:53.279148503 +0000 UTC m=+34.117386242 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/da2c16b4-2e18-4310-881d-5febd92c9d3d-cert") pod "ingress-canary-mcjdh" (UID: "da2c16b4-2e18-4310-881d-5febd92c9d3d") : secret "canary-serving-cert" not found Apr 17 11:30:52.779565 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.779182 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/35579630-eea3-41d9-8ca1-8408a45d5896-registry-certificates\") pod \"image-registry-9b4668746-kmh5p\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " pod="openshift-image-registry/image-registry-9b4668746-kmh5p" Apr 17 11:30:52.782898 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.782876 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/35579630-eea3-41d9-8ca1-8408a45d5896-installation-pull-secrets\") pod \"image-registry-9b4668746-kmh5p\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " pod="openshift-image-registry/image-registry-9b4668746-kmh5p" Apr 17 11:30:52.783002 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.782905 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/35579630-eea3-41d9-8ca1-8408a45d5896-image-registry-private-configuration\") pod \"image-registry-9b4668746-kmh5p\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " pod="openshift-image-registry/image-registry-9b4668746-kmh5p" Apr 17 11:30:52.788868 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.788845 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-bound-sa-token\") pod \"image-registry-9b4668746-kmh5p\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " pod="openshift-image-registry/image-registry-9b4668746-kmh5p" Apr 17 
11:30:52.789652 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.789632 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk9tx\" (UniqueName: \"kubernetes.io/projected/da2c16b4-2e18-4310-881d-5febd92c9d3d-kube-api-access-vk9tx\") pod \"ingress-canary-mcjdh\" (UID: \"da2c16b4-2e18-4310-881d-5febd92c9d3d\") " pod="openshift-ingress-canary/ingress-canary-mcjdh" Apr 17 11:30:52.790068 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.790046 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfjz7\" (UniqueName: \"kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-kube-api-access-nfjz7\") pod \"image-registry-9b4668746-kmh5p\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " pod="openshift-image-registry/image-registry-9b4668746-kmh5p" Apr 17 11:30:52.879312 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.879269 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2da30d47-d7ea-47f4-a489-4729c8989cef-tmp-dir\") pod \"dns-default-5q84n\" (UID: \"2da30d47-d7ea-47f4-a489-4729c8989cef\") " pod="openshift-dns/dns-default-5q84n" Apr 17 11:30:52.879465 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.879332 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2da30d47-d7ea-47f4-a489-4729c8989cef-config-volume\") pod \"dns-default-5q84n\" (UID: \"2da30d47-d7ea-47f4-a489-4729c8989cef\") " pod="openshift-dns/dns-default-5q84n" Apr 17 11:30:52.879465 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.879368 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bvj52\" (UniqueName: \"kubernetes.io/projected/2da30d47-d7ea-47f4-a489-4729c8989cef-kube-api-access-bvj52\") pod \"dns-default-5q84n\" (UID: \"2da30d47-d7ea-47f4-a489-4729c8989cef\") " 
pod="openshift-dns/dns-default-5q84n" Apr 17 11:30:52.879465 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.879414 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2da30d47-d7ea-47f4-a489-4729c8989cef-metrics-tls\") pod \"dns-default-5q84n\" (UID: \"2da30d47-d7ea-47f4-a489-4729c8989cef\") " pod="openshift-dns/dns-default-5q84n" Apr 17 11:30:52.879615 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:52.879512 2567 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 17 11:30:52.879615 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:52.879570 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2da30d47-d7ea-47f4-a489-4729c8989cef-metrics-tls podName:2da30d47-d7ea-47f4-a489-4729c8989cef nodeName:}" failed. No retries permitted until 2026-04-17 11:30:53.379552285 +0000 UTC m=+34.217790032 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2da30d47-d7ea-47f4-a489-4729c8989cef-metrics-tls") pod "dns-default-5q84n" (UID: "2da30d47-d7ea-47f4-a489-4729c8989cef") : secret "dns-default-metrics-tls" not found Apr 17 11:30:52.879749 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.879646 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2da30d47-d7ea-47f4-a489-4729c8989cef-tmp-dir\") pod \"dns-default-5q84n\" (UID: \"2da30d47-d7ea-47f4-a489-4729c8989cef\") " pod="openshift-dns/dns-default-5q84n" Apr 17 11:30:52.880024 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.879996 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2da30d47-d7ea-47f4-a489-4729c8989cef-config-volume\") pod \"dns-default-5q84n\" (UID: \"2da30d47-d7ea-47f4-a489-4729c8989cef\") " pod="openshift-dns/dns-default-5q84n" Apr 17 11:30:52.891848 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:52.891816 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvj52\" (UniqueName: \"kubernetes.io/projected/2da30d47-d7ea-47f4-a489-4729c8989cef-kube-api-access-bvj52\") pod \"dns-default-5q84n\" (UID: \"2da30d47-d7ea-47f4-a489-4729c8989cef\") " pod="openshift-dns/dns-default-5q84n" Apr 17 11:30:53.282368 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:53.282124 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da2c16b4-2e18-4310-881d-5febd92c9d3d-cert\") pod \"ingress-canary-mcjdh\" (UID: \"da2c16b4-2e18-4310-881d-5febd92c9d3d\") " pod="openshift-ingress-canary/ingress-canary-mcjdh" Apr 17 11:30:53.282562 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:53.282433 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-registry-tls\") pod \"image-registry-9b4668746-kmh5p\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " pod="openshift-image-registry/image-registry-9b4668746-kmh5p" Apr 17 11:30:53.282562 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:53.282267 2567 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 17 11:30:53.282676 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:53.282578 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da2c16b4-2e18-4310-881d-5febd92c9d3d-cert podName:da2c16b4-2e18-4310-881d-5febd92c9d3d nodeName:}" failed. No retries permitted until 2026-04-17 11:30:54.282564205 +0000 UTC m=+35.120801938 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/da2c16b4-2e18-4310-881d-5febd92c9d3d-cert") pod "ingress-canary-mcjdh" (UID: "da2c16b4-2e18-4310-881d-5febd92c9d3d") : secret "canary-serving-cert" not found Apr 17 11:30:53.282676 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:53.282516 2567 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 17 11:30:53.282676 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:53.282632 2567 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-9b4668746-kmh5p: secret "image-registry-tls" not found Apr 17 11:30:53.282868 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:53.282712 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-registry-tls podName:35579630-eea3-41d9-8ca1-8408a45d5896 nodeName:}" failed. No retries permitted until 2026-04-17 11:30:54.282695605 +0000 UTC m=+35.120933354 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-registry-tls") pod "image-registry-9b4668746-kmh5p" (UID: "35579630-eea3-41d9-8ca1-8408a45d5896") : secret "image-registry-tls" not found Apr 17 11:30:53.383010 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:53.382966 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2da30d47-d7ea-47f4-a489-4729c8989cef-metrics-tls\") pod \"dns-default-5q84n\" (UID: \"2da30d47-d7ea-47f4-a489-4729c8989cef\") " pod="openshift-dns/dns-default-5q84n" Apr 17 11:30:53.383187 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:53.383060 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g4hjp\" (UniqueName: \"kubernetes.io/projected/a1d31d48-4078-464b-b36c-28075b5f885b-kube-api-access-g4hjp\") pod \"network-check-target-8d6ts\" (UID: \"a1d31d48-4078-464b-b36c-28075b5f885b\") " pod="openshift-network-diagnostics/network-check-target-8d6ts" Apr 17 11:30:53.383187 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:53.383133 2567 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 17 11:30:53.383300 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:53.383203 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2da30d47-d7ea-47f4-a489-4729c8989cef-metrics-tls podName:2da30d47-d7ea-47f4-a489-4729c8989cef nodeName:}" failed. No retries permitted until 2026-04-17 11:30:54.38318223 +0000 UTC m=+35.221419978 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2da30d47-d7ea-47f4-a489-4729c8989cef-metrics-tls") pod "dns-default-5q84n" (UID: "2da30d47-d7ea-47f4-a489-4729c8989cef") : secret "dns-default-metrics-tls" not found Apr 17 11:30:53.385807 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:53.385768 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4hjp\" (UniqueName: \"kubernetes.io/projected/a1d31d48-4078-464b-b36c-28075b5f885b-kube-api-access-g4hjp\") pod \"network-check-target-8d6ts\" (UID: \"a1d31d48-4078-464b-b36c-28075b5f885b\") " pod="openshift-network-diagnostics/network-check-target-8d6ts" Apr 17 11:30:53.484421 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:53.484380 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b1f3e8e-0735-4b17-9e76-c70b964db9c1-metrics-certs\") pod \"network-metrics-daemon-tnlq8\" (UID: \"7b1f3e8e-0735-4b17-9e76-c70b964db9c1\") " pod="openshift-multus/network-metrics-daemon-tnlq8" Apr 17 11:30:53.484601 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:53.484529 2567 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 11:30:53.484601 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:53.484594 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b1f3e8e-0735-4b17-9e76-c70b964db9c1-metrics-certs podName:7b1f3e8e-0735-4b17-9e76-c70b964db9c1 nodeName:}" failed. No retries permitted until 2026-04-17 11:31:25.484578722 +0000 UTC m=+66.322816477 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7b1f3e8e-0735-4b17-9e76-c70b964db9c1-metrics-certs") pod "network-metrics-daemon-tnlq8" (UID: "7b1f3e8e-0735-4b17-9e76-c70b964db9c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 17 11:30:53.619316 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:53.619223 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8d6ts" Apr 17 11:30:53.709807 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:53.709774 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-ftdll" Apr 17 11:30:53.709807 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:53.709805 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tnlq8" Apr 17 11:30:53.712711 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:53.712672 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"original-pull-secret\"" Apr 17 11:30:53.713639 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:53.713539 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Apr 17 11:30:53.713639 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:53.713564 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-lzg8w\"" Apr 17 11:30:54.291251 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:54.291219 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-registry-tls\") pod \"image-registry-9b4668746-kmh5p\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " pod="openshift-image-registry/image-registry-9b4668746-kmh5p" Apr 17 
11:30:54.291440 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:54.291306 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da2c16b4-2e18-4310-881d-5febd92c9d3d-cert\") pod \"ingress-canary-mcjdh\" (UID: \"da2c16b4-2e18-4310-881d-5febd92c9d3d\") " pod="openshift-ingress-canary/ingress-canary-mcjdh" Apr 17 11:30:54.291440 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:54.291378 2567 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 17 11:30:54.291440 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:54.291400 2567 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-9b4668746-kmh5p: secret "image-registry-tls" not found Apr 17 11:30:54.291440 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:54.291424 2567 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 17 11:30:54.291768 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:54.291745 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da2c16b4-2e18-4310-881d-5febd92c9d3d-cert podName:da2c16b4-2e18-4310-881d-5febd92c9d3d nodeName:}" failed. No retries permitted until 2026-04-17 11:30:56.2916232 +0000 UTC m=+37.129860948 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/da2c16b4-2e18-4310-881d-5febd92c9d3d-cert") pod "ingress-canary-mcjdh" (UID: "da2c16b4-2e18-4310-881d-5febd92c9d3d") : secret "canary-serving-cert" not found Apr 17 11:30:54.291899 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:54.291790 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-registry-tls podName:35579630-eea3-41d9-8ca1-8408a45d5896 nodeName:}" failed. 
No retries permitted until 2026-04-17 11:30:56.291761905 +0000 UTC m=+37.129999639 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-registry-tls") pod "image-registry-9b4668746-kmh5p" (UID: "35579630-eea3-41d9-8ca1-8408a45d5896") : secret "image-registry-tls" not found Apr 17 11:30:54.392500 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:54.392457 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2da30d47-d7ea-47f4-a489-4729c8989cef-metrics-tls\") pod \"dns-default-5q84n\" (UID: \"2da30d47-d7ea-47f4-a489-4729c8989cef\") " pod="openshift-dns/dns-default-5q84n" Apr 17 11:30:54.392670 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:54.392625 2567 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 17 11:30:54.392765 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:54.392714 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2da30d47-d7ea-47f4-a489-4729c8989cef-metrics-tls podName:2da30d47-d7ea-47f4-a489-4729c8989cef nodeName:}" failed. No retries permitted until 2026-04-17 11:30:56.392693793 +0000 UTC m=+37.230931548 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2da30d47-d7ea-47f4-a489-4729c8989cef-metrics-tls") pod "dns-default-5q84n" (UID: "2da30d47-d7ea-47f4-a489-4729c8989cef") : secret "dns-default-metrics-tls" not found Apr 17 11:30:54.796231 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:54.796195 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/e6522e95-55e6-43f4-9a0b-b0429a3a47c4-original-pull-secret\") pod \"global-pull-secret-syncer-ftdll\" (UID: \"e6522e95-55e6-43f4-9a0b-b0429a3a47c4\") " pod="kube-system/global-pull-secret-syncer-ftdll" Apr 17 11:30:54.799359 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:54.799334 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/e6522e95-55e6-43f4-9a0b-b0429a3a47c4-original-pull-secret\") pod \"global-pull-secret-syncer-ftdll\" (UID: \"e6522e95-55e6-43f4-9a0b-b0429a3a47c4\") " pod="kube-system/global-pull-secret-syncer-ftdll" Apr 17 11:30:54.922338 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:54.922306 2567 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-ftdll" Apr 17 11:30:55.293868 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:55.293815 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-ftdll"] Apr 17 11:30:55.295831 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:55.295796 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-8d6ts"] Apr 17 11:30:55.412825 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:55.412752 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6522e95_55e6_43f4_9a0b_b0429a3a47c4.slice/crio-c6aac4e4526aec4dc88957e9cd921765f2a5d8a96e22319a98e57195a151ae32 WatchSource:0}: Error finding container c6aac4e4526aec4dc88957e9cd921765f2a5d8a96e22319a98e57195a151ae32: Status 404 returned error can't find the container with id c6aac4e4526aec4dc88957e9cd921765f2a5d8a96e22319a98e57195a151ae32 Apr 17 11:30:55.413417 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:30:55.413394 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1d31d48_4078_464b_b36c_28075b5f885b.slice/crio-7a3a8ff4abed3de981ae07e77c993cddafe56b5d91b40b3961802c1ccd260b64 WatchSource:0}: Error finding container 7a3a8ff4abed3de981ae07e77c993cddafe56b5d91b40b3961802c1ccd260b64: Status 404 returned error can't find the container with id 7a3a8ff4abed3de981ae07e77c993cddafe56b5d91b40b3961802c1ccd260b64 Apr 17 11:30:55.844891 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:55.844859 2567 generic.go:358] "Generic (PLEG): container finished" podID="ea7f56ab-276e-4b70-8003-11db06a0b72b" containerID="a8e3a1252c429874000ffabc7d4fd2c60afe7b4b288afd8c136c5ff38e1e25d2" exitCode=0 Apr 17 11:30:55.845296 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:55.844891 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-fpmx5" event={"ID":"ea7f56ab-276e-4b70-8003-11db06a0b72b","Type":"ContainerDied","Data":"a8e3a1252c429874000ffabc7d4fd2c60afe7b4b288afd8c136c5ff38e1e25d2"} Apr 17 11:30:55.846242 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:55.846219 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-ftdll" event={"ID":"e6522e95-55e6-43f4-9a0b-b0429a3a47c4","Type":"ContainerStarted","Data":"c6aac4e4526aec4dc88957e9cd921765f2a5d8a96e22319a98e57195a151ae32"} Apr 17 11:30:55.848501 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:55.848479 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-8d6ts" event={"ID":"a1d31d48-4078-464b-b36c-28075b5f885b","Type":"ContainerStarted","Data":"7a3a8ff4abed3de981ae07e77c993cddafe56b5d91b40b3961802c1ccd260b64"} Apr 17 11:30:56.309581 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:56.309322 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da2c16b4-2e18-4310-881d-5febd92c9d3d-cert\") pod \"ingress-canary-mcjdh\" (UID: \"da2c16b4-2e18-4310-881d-5febd92c9d3d\") " pod="openshift-ingress-canary/ingress-canary-mcjdh" Apr 17 11:30:56.309757 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:56.309584 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-registry-tls\") pod \"image-registry-9b4668746-kmh5p\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " pod="openshift-image-registry/image-registry-9b4668746-kmh5p" Apr 17 11:30:56.309757 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:56.309480 2567 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 17 11:30:56.309757 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:56.309711 2567 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da2c16b4-2e18-4310-881d-5febd92c9d3d-cert podName:da2c16b4-2e18-4310-881d-5febd92c9d3d nodeName:}" failed. No retries permitted until 2026-04-17 11:31:00.309669558 +0000 UTC m=+41.147907299 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/da2c16b4-2e18-4310-881d-5febd92c9d3d-cert") pod "ingress-canary-mcjdh" (UID: "da2c16b4-2e18-4310-881d-5febd92c9d3d") : secret "canary-serving-cert" not found Apr 17 11:30:56.309932 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:56.309756 2567 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 17 11:30:56.309932 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:56.309771 2567 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-9b4668746-kmh5p: secret "image-registry-tls" not found Apr 17 11:30:56.309932 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:56.309811 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-registry-tls podName:35579630-eea3-41d9-8ca1-8408a45d5896 nodeName:}" failed. No retries permitted until 2026-04-17 11:31:00.309798201 +0000 UTC m=+41.148035935 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-registry-tls") pod "image-registry-9b4668746-kmh5p" (UID: "35579630-eea3-41d9-8ca1-8408a45d5896") : secret "image-registry-tls" not found Apr 17 11:30:56.410981 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:56.410945 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2da30d47-d7ea-47f4-a489-4729c8989cef-metrics-tls\") pod \"dns-default-5q84n\" (UID: \"2da30d47-d7ea-47f4-a489-4729c8989cef\") " pod="openshift-dns/dns-default-5q84n" Apr 17 11:30:56.411153 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:56.411106 2567 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 17 11:30:56.411210 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:30:56.411184 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2da30d47-d7ea-47f4-a489-4729c8989cef-metrics-tls podName:2da30d47-d7ea-47f4-a489-4729c8989cef nodeName:}" failed. No retries permitted until 2026-04-17 11:31:00.411162188 +0000 UTC m=+41.249399927 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2da30d47-d7ea-47f4-a489-4729c8989cef-metrics-tls") pod "dns-default-5q84n" (UID: "2da30d47-d7ea-47f4-a489-4729c8989cef") : secret "dns-default-metrics-tls" not found
Apr 17 11:30:56.854503 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:56.854455 2567 generic.go:358] "Generic (PLEG): container finished" podID="ea7f56ab-276e-4b70-8003-11db06a0b72b" containerID="37aa711a184d5deb95e2711c9cbb52a4c6dc7432be8b4c4804de58dd8f00fdfb" exitCode=0
Apr 17 11:30:56.854982 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:56.854524 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fpmx5" event={"ID":"ea7f56ab-276e-4b70-8003-11db06a0b72b","Type":"ContainerDied","Data":"37aa711a184d5deb95e2711c9cbb52a4c6dc7432be8b4c4804de58dd8f00fdfb"}
Apr 17 11:30:57.859596 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:57.859549 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-fpmx5" event={"ID":"ea7f56ab-276e-4b70-8003-11db06a0b72b","Type":"ContainerStarted","Data":"9d571ea95e9d6964245e70bacc846b87135c9b7f51dfbcfd9b168c8f2d3e3e63"}
Apr 17 11:30:57.886778 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:30:57.886722 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-fpmx5" podStartSLOduration=5.901395291 podStartE2EDuration="38.886704048s" podCreationTimestamp="2026-04-17 11:30:19 +0000 UTC" firstStartedPulling="2026-04-17 11:30:22.47602496 +0000 UTC m=+3.314262693" lastFinishedPulling="2026-04-17 11:30:55.461333706 +0000 UTC m=+36.299571450" observedRunningTime="2026-04-17 11:30:57.884427421 +0000 UTC m=+38.722665200" watchObservedRunningTime="2026-04-17 11:30:57.886704048 +0000 UTC m=+38.724941798"
Apr 17 11:31:00.341084 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:31:00.340993 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da2c16b4-2e18-4310-881d-5febd92c9d3d-cert\") pod \"ingress-canary-mcjdh\" (UID: \"da2c16b4-2e18-4310-881d-5febd92c9d3d\") " pod="openshift-ingress-canary/ingress-canary-mcjdh"
Apr 17 11:31:00.341084 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:31:00.341053 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-registry-tls\") pod \"image-registry-9b4668746-kmh5p\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " pod="openshift-image-registry/image-registry-9b4668746-kmh5p"
Apr 17 11:31:00.341559 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:31:00.341157 2567 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found
Apr 17 11:31:00.341559 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:31:00.341160 2567 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Apr 17 11:31:00.341559 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:31:00.341239 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da2c16b4-2e18-4310-881d-5febd92c9d3d-cert podName:da2c16b4-2e18-4310-881d-5febd92c9d3d nodeName:}" failed. No retries permitted until 2026-04-17 11:31:08.34121883 +0000 UTC m=+49.179456564 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/da2c16b4-2e18-4310-881d-5febd92c9d3d-cert") pod "ingress-canary-mcjdh" (UID: "da2c16b4-2e18-4310-881d-5febd92c9d3d") : secret "canary-serving-cert" not found
Apr 17 11:31:00.341559 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:31:00.341170 2567 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-9b4668746-kmh5p: secret "image-registry-tls" not found
Apr 17 11:31:00.341559 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:31:00.341307 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-registry-tls podName:35579630-eea3-41d9-8ca1-8408a45d5896 nodeName:}" failed. No retries permitted until 2026-04-17 11:31:08.341295948 +0000 UTC m=+49.179533682 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-registry-tls") pod "image-registry-9b4668746-kmh5p" (UID: "35579630-eea3-41d9-8ca1-8408a45d5896") : secret "image-registry-tls" not found
Apr 17 11:31:00.441924 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:31:00.441877 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2da30d47-d7ea-47f4-a489-4729c8989cef-metrics-tls\") pod \"dns-default-5q84n\" (UID: \"2da30d47-d7ea-47f4-a489-4729c8989cef\") " pod="openshift-dns/dns-default-5q84n"
Apr 17 11:31:00.442067 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:31:00.442020 2567 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found
Apr 17 11:31:00.442107 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:31:00.442086 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2da30d47-d7ea-47f4-a489-4729c8989cef-metrics-tls podName:2da30d47-d7ea-47f4-a489-4729c8989cef nodeName:}" failed. No retries permitted until 2026-04-17 11:31:08.442070912 +0000 UTC m=+49.280308647 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2da30d47-d7ea-47f4-a489-4729c8989cef-metrics-tls") pod "dns-default-5q84n" (UID: "2da30d47-d7ea-47f4-a489-4729c8989cef") : secret "dns-default-metrics-tls" not found
Apr 17 11:31:00.866807 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:31:00.866772 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-ftdll" event={"ID":"e6522e95-55e6-43f4-9a0b-b0429a3a47c4","Type":"ContainerStarted","Data":"26baed924fcfa9de865c57468e05fa7e1269e81a29664235d51eb55e4bb5b1f5"}
Apr 17 11:31:00.867977 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:31:00.867947 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-8d6ts" event={"ID":"a1d31d48-4078-464b-b36c-28075b5f885b","Type":"ContainerStarted","Data":"58c9635783df14ff1384f8bac18c3fcd3261bd735846f7ac725111b6615a713a"}
Apr 17 11:31:00.868098 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:31:00.868052 2567 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-8d6ts"
Apr 17 11:31:00.885674 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:31:00.885622 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/global-pull-secret-syncer-ftdll" podStartSLOduration=34.378135266 podStartE2EDuration="38.885611182s" podCreationTimestamp="2026-04-17 11:30:22 +0000 UTC" firstStartedPulling="2026-04-17 11:30:55.437986225 +0000 UTC m=+36.276223966" lastFinishedPulling="2026-04-17 11:30:59.945462134 +0000 UTC m=+40.783699882" observedRunningTime="2026-04-17 11:31:00.88481722 +0000 UTC m=+41.723054989" watchObservedRunningTime="2026-04-17 11:31:00.885611182 +0000 UTC m=+41.723848937"
Apr 17 11:31:00.903046 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:31:00.903011 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-target-8d6ts" podStartSLOduration=37.399370351 podStartE2EDuration="41.903002873s" podCreationTimestamp="2026-04-17 11:30:19 +0000 UTC" firstStartedPulling="2026-04-17 11:30:55.437985289 +0000 UTC m=+36.276223024" lastFinishedPulling="2026-04-17 11:30:59.941617795 +0000 UTC m=+40.779855546" observedRunningTime="2026-04-17 11:31:00.902673778 +0000 UTC m=+41.740911537" watchObservedRunningTime="2026-04-17 11:31:00.903002873 +0000 UTC m=+41.741240630"
Apr 17 11:31:08.390398 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:31:08.390362 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-registry-tls\") pod \"image-registry-9b4668746-kmh5p\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " pod="openshift-image-registry/image-registry-9b4668746-kmh5p"
Apr 17 11:31:08.390815 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:31:08.390428 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da2c16b4-2e18-4310-881d-5febd92c9d3d-cert\") pod \"ingress-canary-mcjdh\" (UID: \"da2c16b4-2e18-4310-881d-5febd92c9d3d\") " pod="openshift-ingress-canary/ingress-canary-mcjdh"
Apr 17 11:31:08.390815 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:31:08.390542 2567 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found
Apr 17 11:31:08.390815 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:31:08.390562 2567 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-9b4668746-kmh5p: secret "image-registry-tls" not found
Apr 17 11:31:08.390815 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:31:08.390563 2567 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Apr 17 11:31:08.390815 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:31:08.390630 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-registry-tls podName:35579630-eea3-41d9-8ca1-8408a45d5896 nodeName:}" failed. No retries permitted until 2026-04-17 11:31:24.390611626 +0000 UTC m=+65.228849370 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-registry-tls") pod "image-registry-9b4668746-kmh5p" (UID: "35579630-eea3-41d9-8ca1-8408a45d5896") : secret "image-registry-tls" not found
Apr 17 11:31:08.390815 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:31:08.390643 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da2c16b4-2e18-4310-881d-5febd92c9d3d-cert podName:da2c16b4-2e18-4310-881d-5febd92c9d3d nodeName:}" failed. No retries permitted until 2026-04-17 11:31:24.390637267 +0000 UTC m=+65.228875000 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/da2c16b4-2e18-4310-881d-5febd92c9d3d-cert") pod "ingress-canary-mcjdh" (UID: "da2c16b4-2e18-4310-881d-5febd92c9d3d") : secret "canary-serving-cert" not found
Apr 17 11:31:08.491459 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:31:08.491424 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2da30d47-d7ea-47f4-a489-4729c8989cef-metrics-tls\") pod \"dns-default-5q84n\" (UID: \"2da30d47-d7ea-47f4-a489-4729c8989cef\") " pod="openshift-dns/dns-default-5q84n"
Apr 17 11:31:08.491617 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:31:08.491548 2567 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found
Apr 17 11:31:08.491617 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:31:08.491608 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2da30d47-d7ea-47f4-a489-4729c8989cef-metrics-tls podName:2da30d47-d7ea-47f4-a489-4729c8989cef nodeName:}" failed. No retries permitted until 2026-04-17 11:31:24.491591059 +0000 UTC m=+65.329828793 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2da30d47-d7ea-47f4-a489-4729c8989cef-metrics-tls") pod "dns-default-5q84n" (UID: "2da30d47-d7ea-47f4-a489-4729c8989cef") : secret "dns-default-metrics-tls" not found
Apr 17 11:31:19.650795 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:31:19.650768 2567 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vjfgd"
Apr 17 11:31:24.400904 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:31:24.400869 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da2c16b4-2e18-4310-881d-5febd92c9d3d-cert\") pod \"ingress-canary-mcjdh\" (UID: \"da2c16b4-2e18-4310-881d-5febd92c9d3d\") " pod="openshift-ingress-canary/ingress-canary-mcjdh"
Apr 17 11:31:24.401325 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:31:24.400921 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-registry-tls\") pod \"image-registry-9b4668746-kmh5p\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " pod="openshift-image-registry/image-registry-9b4668746-kmh5p"
Apr 17 11:31:24.401325 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:31:24.401003 2567 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found
Apr 17 11:31:24.401325 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:31:24.401013 2567 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-9b4668746-kmh5p: secret "image-registry-tls" not found
Apr 17 11:31:24.401325 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:31:24.401033 2567 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Apr 17 11:31:24.401325 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:31:24.401069 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-registry-tls podName:35579630-eea3-41d9-8ca1-8408a45d5896 nodeName:}" failed. No retries permitted until 2026-04-17 11:31:56.401055454 +0000 UTC m=+97.239293187 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-registry-tls") pod "image-registry-9b4668746-kmh5p" (UID: "35579630-eea3-41d9-8ca1-8408a45d5896") : secret "image-registry-tls" not found
Apr 17 11:31:24.401325 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:31:24.401105 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da2c16b4-2e18-4310-881d-5febd92c9d3d-cert podName:da2c16b4-2e18-4310-881d-5febd92c9d3d nodeName:}" failed. No retries permitted until 2026-04-17 11:31:56.40108655 +0000 UTC m=+97.239324298 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/da2c16b4-2e18-4310-881d-5febd92c9d3d-cert") pod "ingress-canary-mcjdh" (UID: "da2c16b4-2e18-4310-881d-5febd92c9d3d") : secret "canary-serving-cert" not found
Apr 17 11:31:24.501790 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:31:24.501761 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2da30d47-d7ea-47f4-a489-4729c8989cef-metrics-tls\") pod \"dns-default-5q84n\" (UID: \"2da30d47-d7ea-47f4-a489-4729c8989cef\") " pod="openshift-dns/dns-default-5q84n"
Apr 17 11:31:24.501927 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:31:24.501877 2567 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found
Apr 17 11:31:24.501927 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:31:24.501920 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2da30d47-d7ea-47f4-a489-4729c8989cef-metrics-tls podName:2da30d47-d7ea-47f4-a489-4729c8989cef nodeName:}" failed. No retries permitted until 2026-04-17 11:31:56.501908041 +0000 UTC m=+97.340145776 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2da30d47-d7ea-47f4-a489-4729c8989cef-metrics-tls") pod "dns-default-5q84n" (UID: "2da30d47-d7ea-47f4-a489-4729c8989cef") : secret "dns-default-metrics-tls" not found
Apr 17 11:31:25.507277 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:31:25.507237 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b1f3e8e-0735-4b17-9e76-c70b964db9c1-metrics-certs\") pod \"network-metrics-daemon-tnlq8\" (UID: \"7b1f3e8e-0735-4b17-9e76-c70b964db9c1\") " pod="openshift-multus/network-metrics-daemon-tnlq8"
Apr 17 11:31:25.510172 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:31:25.510153 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Apr 17 11:31:25.518088 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:31:25.518071 2567 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Apr 17 11:31:25.518131 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:31:25.518124 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b1f3e8e-0735-4b17-9e76-c70b964db9c1-metrics-certs podName:7b1f3e8e-0735-4b17-9e76-c70b964db9c1 nodeName:}" failed. No retries permitted until 2026-04-17 11:32:29.518109341 +0000 UTC m=+130.356347075 (durationBeforeRetry 1m4s).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7b1f3e8e-0735-4b17-9e76-c70b964db9c1-metrics-certs") pod "network-metrics-daemon-tnlq8" (UID: "7b1f3e8e-0735-4b17-9e76-c70b964db9c1") : secret "metrics-daemon-secret" not found
Apr 17 11:31:31.872087 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:31:31.872060 2567 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-8d6ts"
Apr 17 11:31:56.427794 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:31:56.427739 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-registry-tls\") pod \"image-registry-9b4668746-kmh5p\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " pod="openshift-image-registry/image-registry-9b4668746-kmh5p"
Apr 17 11:31:56.428366 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:31:56.427889 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da2c16b4-2e18-4310-881d-5febd92c9d3d-cert\") pod \"ingress-canary-mcjdh\" (UID: \"da2c16b4-2e18-4310-881d-5febd92c9d3d\") " pod="openshift-ingress-canary/ingress-canary-mcjdh"
Apr 17 11:31:56.428366 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:31:56.427917 2567 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found
Apr 17 11:31:56.428366 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:31:56.427941 2567 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-9b4668746-kmh5p: secret "image-registry-tls" not found
Apr 17 11:31:56.428366 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:31:56.427969 2567 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Apr 17 11:31:56.428366 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:31:56.428018 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da2c16b4-2e18-4310-881d-5febd92c9d3d-cert podName:da2c16b4-2e18-4310-881d-5febd92c9d3d nodeName:}" failed. No retries permitted until 2026-04-17 11:33:00.42800566 +0000 UTC m=+161.266243394 (durationBeforeRetry 1m4s).
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/da2c16b4-2e18-4310-881d-5febd92c9d3d-cert") pod "ingress-canary-mcjdh" (UID: "da2c16b4-2e18-4310-881d-5febd92c9d3d") : secret "canary-serving-cert" not found
Apr 17 11:31:56.428366 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:31:56.428030 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-registry-tls podName:35579630-eea3-41d9-8ca1-8408a45d5896 nodeName:}" failed. No retries permitted until 2026-04-17 11:33:00.428024817 +0000 UTC m=+161.266262551 (durationBeforeRetry 1m4s).
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-registry-tls") pod "image-registry-9b4668746-kmh5p" (UID: "35579630-eea3-41d9-8ca1-8408a45d5896") : secret "image-registry-tls" not found
Apr 17 11:31:56.528325 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:31:56.528289 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2da30d47-d7ea-47f4-a489-4729c8989cef-metrics-tls\") pod \"dns-default-5q84n\" (UID: \"2da30d47-d7ea-47f4-a489-4729c8989cef\") " pod="openshift-dns/dns-default-5q84n"
Apr 17 11:31:56.528480 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:31:56.528423 2567 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found
Apr 17 11:31:56.528480 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:31:56.528473 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2da30d47-d7ea-47f4-a489-4729c8989cef-metrics-tls podName:2da30d47-d7ea-47f4-a489-4729c8989cef nodeName:}" failed. No retries permitted until 2026-04-17 11:33:00.528460996 +0000 UTC m=+161.366698730 (durationBeforeRetry 1m4s).
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2da30d47-d7ea-47f4-a489-4729c8989cef-metrics-tls") pod "dns-default-5q84n" (UID: "2da30d47-d7ea-47f4-a489-4729c8989cef") : secret "dns-default-metrics-tls" not found
Apr 17 11:32:24.494473 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.494440 2567 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-x5b7q"]
Apr 17 11:32:24.496957 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.496936 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-x5b7q"
Apr 17 11:32:24.498294 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.498270 2567 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-bck66"]
Apr 17 11:32:24.499427 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.499409 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-storage-operator\"/\"kube-root-ca.crt\""
Apr 17 11:32:24.499659 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.499646 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-storage-operator\"/\"openshift-service-ca.crt\""
Apr 17 11:32:24.499786 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.499648 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-storage-operator\"/\"volume-data-source-validator-dockercfg-j9wnd\""
Apr 17 11:32:24.500980 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.500967 2567 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-585dfdc468-qvvgv"]
Apr 17 11:32:24.501105 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.501093 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-bck66"
Apr 17 11:32:24.503322 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.503304 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-585dfdc468-qvvgv"
Apr 17 11:32:24.503943 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.503518 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Apr 17 11:32:24.503943 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.503820 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Apr 17 11:32:24.504129 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.504055 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Apr 17 11:32:24.504129 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.504114 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-f2vr5\""
Apr 17 11:32:24.505704 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.505675 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"operator-dockercfg-zbb9s\""
Apr 17 11:32:24.506026 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.506005 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"openshift-service-ca.crt\""
Apr 17 11:32:24.506593 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.506571 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"openshift-insights-serving-cert\""
Apr 17 11:32:24.506831 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.506817 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"service-ca-bundle\""
Apr 17 11:32:24.507121 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.507100 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-root-ca.crt\""
Apr 17 11:32:24.507668 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.507646 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-x5b7q"]
Apr 17 11:32:24.510732 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.510715 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-bck66"]
Apr 17 11:32:24.513116 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.513095 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"trusted-ca-bundle\""
Apr 17 11:32:24.513552 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.513527 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-585dfdc468-qvvgv"]
Apr 17 11:32:24.625634 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.625597 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c7fd4b1-e618-4f37-8c84-dc31d902ec5d-service-ca-bundle\") pod \"insights-operator-585dfdc468-qvvgv\" (UID: \"7c7fd4b1-e618-4f37-8c84-dc31d902ec5d\") " pod="openshift-insights/insights-operator-585dfdc468-qvvgv"
Apr 17 11:32:24.625634 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.625635 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c7fd4b1-e618-4f37-8c84-dc31d902ec5d-serving-cert\") pod \"insights-operator-585dfdc468-qvvgv\" (UID: \"7c7fd4b1-e618-4f37-8c84-dc31d902ec5d\") " pod="openshift-insights/insights-operator-585dfdc468-qvvgv"
Apr 17 11:32:24.625854 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.625660 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lczcm\" (UniqueName: \"kubernetes.io/projected/0f1c4813-7a6c-4e2d-930c-133f87515757-kube-api-access-lczcm\") pod \"cluster-samples-operator-6dc5bdb6b4-bck66\" (UID: \"0f1c4813-7a6c-4e2d-930c-133f87515757\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-bck66"
Apr 17 11:32:24.625854 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.625779 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd4dr\" (UniqueName: \"kubernetes.io/projected/7c7fd4b1-e618-4f37-8c84-dc31d902ec5d-kube-api-access-cd4dr\") pod \"insights-operator-585dfdc468-qvvgv\" (UID: \"7c7fd4b1-e618-4f37-8c84-dc31d902ec5d\") " pod="openshift-insights/insights-operator-585dfdc468-qvvgv"
Apr 17 11:32:24.625854 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.625829 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c7fd4b1-e618-4f37-8c84-dc31d902ec5d-trusted-ca-bundle\") pod \"insights-operator-585dfdc468-qvvgv\" (UID: \"7c7fd4b1-e618-4f37-8c84-dc31d902ec5d\") " pod="openshift-insights/insights-operator-585dfdc468-qvvgv"
Apr 17 11:32:24.625951 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.625857 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/7c7fd4b1-e618-4f37-8c84-dc31d902ec5d-snapshots\") pod \"insights-operator-585dfdc468-qvvgv\" (UID: \"7c7fd4b1-e618-4f37-8c84-dc31d902ec5d\") " pod="openshift-insights/insights-operator-585dfdc468-qvvgv"
Apr 17 11:32:24.625951 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.625887 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzrv8\" (UniqueName: \"kubernetes.io/projected/5a2c4dc0-7d3f-4520-8ef4-4d3c320bedeb-kube-api-access-mzrv8\") pod \"volume-data-source-validator-7c6cbb6c87-x5b7q\" (UID: \"5a2c4dc0-7d3f-4520-8ef4-4d3c320bedeb\") " pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-x5b7q"
Apr 17 11:32:24.625951 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.625904 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7c7fd4b1-e618-4f37-8c84-dc31d902ec5d-tmp\") pod \"insights-operator-585dfdc468-qvvgv\" (UID: \"7c7fd4b1-e618-4f37-8c84-dc31d902ec5d\") " pod="openshift-insights/insights-operator-585dfdc468-qvvgv"
Apr 17 11:32:24.625951 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.625933 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f1c4813-7a6c-4e2d-930c-133f87515757-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-bck66\" (UID: \"0f1c4813-7a6c-4e2d-930c-133f87515757\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-bck66"
Apr 17 11:32:24.667242 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.667210 2567 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-9d4b6777b-j8vsm"]
Apr 17 11:32:24.669961 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.669945 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-9d4b6777b-j8vsm"
Apr 17 11:32:24.681168 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.681149 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\""
Apr 17 11:32:24.681384 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.681369 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\""
Apr 17 11:32:24.681467 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.681380 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\""
Apr 17 11:32:24.681467 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.681411 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\""
Apr 17 11:32:24.681987 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.681972 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-4bm8z\""
Apr 17 11:32:24.683101 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.683082 2567 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-6cbc99754-98r5g"]
Apr 17 11:32:24.685609 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.685595 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-6cbc99754-98r5g"
Apr 17 11:32:24.689543 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.689523 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Apr 17 11:32:24.689727 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.689526 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"default-ingress-cert\""
Apr 17 11:32:24.689951 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.689935 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Apr 17 11:32:24.690519 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.690504 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Apr 17 11:32:24.690707 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.690693 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Apr 17 11:32:24.690919 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.690906 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Apr 17 11:32:24.694976 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.694960 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-6fzd5\""
Apr 17 11:32:24.697012 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.696993 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-9d4b6777b-j8vsm"]
Apr 17 11:32:24.702822 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.702801 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\""
Apr 17 11:32:24.713632 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.713611 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress/router-default-6cbc99754-98r5g"]
Apr 17 11:32:24.726783 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.726761 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c7fd4b1-e618-4f37-8c84-dc31d902ec5d-service-ca-bundle\") pod \"insights-operator-585dfdc468-qvvgv\" (UID: \"7c7fd4b1-e618-4f37-8c84-dc31d902ec5d\") " pod="openshift-insights/insights-operator-585dfdc468-qvvgv"
Apr 17 11:32:24.726891 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.726793 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c7fd4b1-e618-4f37-8c84-dc31d902ec5d-serving-cert\") pod \"insights-operator-585dfdc468-qvvgv\" (UID: \"7c7fd4b1-e618-4f37-8c84-dc31d902ec5d\") " pod="openshift-insights/insights-operator-585dfdc468-qvvgv"
Apr 17 11:32:24.726891 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.726836 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lczcm\" (UniqueName: \"kubernetes.io/projected/0f1c4813-7a6c-4e2d-930c-133f87515757-kube-api-access-lczcm\") pod \"cluster-samples-operator-6dc5bdb6b4-bck66\" (UID: \"0f1c4813-7a6c-4e2d-930c-133f87515757\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-bck66"
Apr 17 11:32:24.726999 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.726907 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cd4dr\" (UniqueName: \"kubernetes.io/projected/7c7fd4b1-e618-4f37-8c84-dc31d902ec5d-kube-api-access-cd4dr\") pod \"insights-operator-585dfdc468-qvvgv\" (UID: \"7c7fd4b1-e618-4f37-8c84-dc31d902ec5d\") " pod="openshift-insights/insights-operator-585dfdc468-qvvgv"
Apr 17 11:32:24.726999 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.726950 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c7fd4b1-e618-4f37-8c84-dc31d902ec5d-trusted-ca-bundle\") pod \"insights-operator-585dfdc468-qvvgv\" (UID: \"7c7fd4b1-e618-4f37-8c84-dc31d902ec5d\") " pod="openshift-insights/insights-operator-585dfdc468-qvvgv"
Apr 17 11:32:24.726999 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.726979 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/7c7fd4b1-e618-4f37-8c84-dc31d902ec5d-snapshots\") pod \"insights-operator-585dfdc468-qvvgv\" (UID: \"7c7fd4b1-e618-4f37-8c84-dc31d902ec5d\") " pod="openshift-insights/insights-operator-585dfdc468-qvvgv"
Apr 17 11:32:24.727142 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.727021 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mzrv8\" (UniqueName: \"kubernetes.io/projected/5a2c4dc0-7d3f-4520-8ef4-4d3c320bedeb-kube-api-access-mzrv8\") pod \"volume-data-source-validator-7c6cbb6c87-x5b7q\" (UID: \"5a2c4dc0-7d3f-4520-8ef4-4d3c320bedeb\") " pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-x5b7q"
Apr 17 11:32:24.727142 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.727046 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7c7fd4b1-e618-4f37-8c84-dc31d902ec5d-tmp\") pod \"insights-operator-585dfdc468-qvvgv\" (UID: \"7c7fd4b1-e618-4f37-8c84-dc31d902ec5d\") " pod="openshift-insights/insights-operator-585dfdc468-qvvgv"
Apr 17 11:32:24.727142 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.727080 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f1c4813-7a6c-4e2d-930c-133f87515757-samples-operator-tls\") pod
\"cluster-samples-operator-6dc5bdb6b4-bck66\" (UID: \"0f1c4813-7a6c-4e2d-930c-133f87515757\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-bck66" Apr 17 11:32:24.727283 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:32:24.727200 2567 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 17 11:32:24.727283 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:32:24.727279 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f1c4813-7a6c-4e2d-930c-133f87515757-samples-operator-tls podName:0f1c4813-7a6c-4e2d-930c-133f87515757 nodeName:}" failed. No retries permitted until 2026-04-17 11:32:25.227259223 +0000 UTC m=+126.065496973 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/0f1c4813-7a6c-4e2d-930c-133f87515757-samples-operator-tls") pod "cluster-samples-operator-6dc5bdb6b4-bck66" (UID: "0f1c4813-7a6c-4e2d-930c-133f87515757") : secret "samples-operator-tls" not found Apr 17 11:32:24.727392 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.727366 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c7fd4b1-e618-4f37-8c84-dc31d902ec5d-service-ca-bundle\") pod \"insights-operator-585dfdc468-qvvgv\" (UID: \"7c7fd4b1-e618-4f37-8c84-dc31d902ec5d\") " pod="openshift-insights/insights-operator-585dfdc468-qvvgv" Apr 17 11:32:24.727484 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.727464 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7c7fd4b1-e618-4f37-8c84-dc31d902ec5d-tmp\") pod \"insights-operator-585dfdc468-qvvgv\" (UID: \"7c7fd4b1-e618-4f37-8c84-dc31d902ec5d\") " pod="openshift-insights/insights-operator-585dfdc468-qvvgv" Apr 17 11:32:24.727889 ip-10-0-140-245 kubenswrapper[2567]: I0417 
11:32:24.727871 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/7c7fd4b1-e618-4f37-8c84-dc31d902ec5d-snapshots\") pod \"insights-operator-585dfdc468-qvvgv\" (UID: \"7c7fd4b1-e618-4f37-8c84-dc31d902ec5d\") " pod="openshift-insights/insights-operator-585dfdc468-qvvgv" Apr 17 11:32:24.728241 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.728223 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c7fd4b1-e618-4f37-8c84-dc31d902ec5d-trusted-ca-bundle\") pod \"insights-operator-585dfdc468-qvvgv\" (UID: \"7c7fd4b1-e618-4f37-8c84-dc31d902ec5d\") " pod="openshift-insights/insights-operator-585dfdc468-qvvgv" Apr 17 11:32:24.730451 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.730431 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c7fd4b1-e618-4f37-8c84-dc31d902ec5d-serving-cert\") pod \"insights-operator-585dfdc468-qvvgv\" (UID: \"7c7fd4b1-e618-4f37-8c84-dc31d902ec5d\") " pod="openshift-insights/insights-operator-585dfdc468-qvvgv" Apr 17 11:32:24.754135 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.754076 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lczcm\" (UniqueName: \"kubernetes.io/projected/0f1c4813-7a6c-4e2d-930c-133f87515757-kube-api-access-lczcm\") pod \"cluster-samples-operator-6dc5bdb6b4-bck66\" (UID: \"0f1c4813-7a6c-4e2d-930c-133f87515757\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-bck66" Apr 17 11:32:24.754377 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.754357 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzrv8\" (UniqueName: \"kubernetes.io/projected/5a2c4dc0-7d3f-4520-8ef4-4d3c320bedeb-kube-api-access-mzrv8\") pod \"volume-data-source-validator-7c6cbb6c87-x5b7q\" (UID: 
\"5a2c4dc0-7d3f-4520-8ef4-4d3c320bedeb\") " pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-x5b7q" Apr 17 11:32:24.755964 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.755945 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd4dr\" (UniqueName: \"kubernetes.io/projected/7c7fd4b1-e618-4f37-8c84-dc31d902ec5d-kube-api-access-cd4dr\") pod \"insights-operator-585dfdc468-qvvgv\" (UID: \"7c7fd4b1-e618-4f37-8c84-dc31d902ec5d\") " pod="openshift-insights/insights-operator-585dfdc468-qvvgv" Apr 17 11:32:24.806945 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.806904 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-x5b7q" Apr 17 11:32:24.820243 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.820215 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-585dfdc468-qvvgv" Apr 17 11:32:24.828206 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.828172 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/706595e5-78a2-4cbb-93bc-d371be497332-metrics-certs\") pod \"router-default-6cbc99754-98r5g\" (UID: \"706595e5-78a2-4cbb-93bc-d371be497332\") " pod="openshift-ingress/router-default-6cbc99754-98r5g" Apr 17 11:32:24.828338 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.828232 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/706595e5-78a2-4cbb-93bc-d371be497332-stats-auth\") pod \"router-default-6cbc99754-98r5g\" (UID: \"706595e5-78a2-4cbb-93bc-d371be497332\") " pod="openshift-ingress/router-default-6cbc99754-98r5g" Apr 17 11:32:24.828338 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.828269 2567 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whvfr\" (UniqueName: \"kubernetes.io/projected/b4b1d00c-98b8-45c5-80c4-0362b3303384-kube-api-access-whvfr\") pod \"console-operator-9d4b6777b-j8vsm\" (UID: \"b4b1d00c-98b8-45c5-80c4-0362b3303384\") " pod="openshift-console-operator/console-operator-9d4b6777b-j8vsm" Apr 17 11:32:24.828338 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.828327 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/706595e5-78a2-4cbb-93bc-d371be497332-default-certificate\") pod \"router-default-6cbc99754-98r5g\" (UID: \"706595e5-78a2-4cbb-93bc-d371be497332\") " pod="openshift-ingress/router-default-6cbc99754-98r5g" Apr 17 11:32:24.828550 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.828380 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djhlb\" (UniqueName: \"kubernetes.io/projected/706595e5-78a2-4cbb-93bc-d371be497332-kube-api-access-djhlb\") pod \"router-default-6cbc99754-98r5g\" (UID: \"706595e5-78a2-4cbb-93bc-d371be497332\") " pod="openshift-ingress/router-default-6cbc99754-98r5g" Apr 17 11:32:24.828550 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.828401 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4b1d00c-98b8-45c5-80c4-0362b3303384-config\") pod \"console-operator-9d4b6777b-j8vsm\" (UID: \"b4b1d00c-98b8-45c5-80c4-0362b3303384\") " pod="openshift-console-operator/console-operator-9d4b6777b-j8vsm" Apr 17 11:32:24.828550 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.828447 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/706595e5-78a2-4cbb-93bc-d371be497332-service-ca-bundle\") pod \"router-default-6cbc99754-98r5g\" (UID: \"706595e5-78a2-4cbb-93bc-d371be497332\") " pod="openshift-ingress/router-default-6cbc99754-98r5g" Apr 17 11:32:24.828550 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.828501 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b4b1d00c-98b8-45c5-80c4-0362b3303384-trusted-ca\") pod \"console-operator-9d4b6777b-j8vsm\" (UID: \"b4b1d00c-98b8-45c5-80c4-0362b3303384\") " pod="openshift-console-operator/console-operator-9d4b6777b-j8vsm" Apr 17 11:32:24.828766 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.828587 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4b1d00c-98b8-45c5-80c4-0362b3303384-serving-cert\") pod \"console-operator-9d4b6777b-j8vsm\" (UID: \"b4b1d00c-98b8-45c5-80c4-0362b3303384\") " pod="openshift-console-operator/console-operator-9d4b6777b-j8vsm" Apr 17 11:32:24.926424 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.926390 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-x5b7q"] Apr 17 11:32:24.929519 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.929494 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4b1d00c-98b8-45c5-80c4-0362b3303384-serving-cert\") pod \"console-operator-9d4b6777b-j8vsm\" (UID: \"b4b1d00c-98b8-45c5-80c4-0362b3303384\") " pod="openshift-console-operator/console-operator-9d4b6777b-j8vsm" Apr 17 11:32:24.929645 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.929560 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/706595e5-78a2-4cbb-93bc-d371be497332-metrics-certs\") pod \"router-default-6cbc99754-98r5g\" (UID: \"706595e5-78a2-4cbb-93bc-d371be497332\") " pod="openshift-ingress/router-default-6cbc99754-98r5g" Apr 17 11:32:24.929645 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.929599 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/706595e5-78a2-4cbb-93bc-d371be497332-stats-auth\") pod \"router-default-6cbc99754-98r5g\" (UID: \"706595e5-78a2-4cbb-93bc-d371be497332\") " pod="openshift-ingress/router-default-6cbc99754-98r5g" Apr 17 11:32:24.929645 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.929633 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-whvfr\" (UniqueName: \"kubernetes.io/projected/b4b1d00c-98b8-45c5-80c4-0362b3303384-kube-api-access-whvfr\") pod \"console-operator-9d4b6777b-j8vsm\" (UID: \"b4b1d00c-98b8-45c5-80c4-0362b3303384\") " pod="openshift-console-operator/console-operator-9d4b6777b-j8vsm" Apr 17 11:32:24.929858 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.929662 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/706595e5-78a2-4cbb-93bc-d371be497332-default-certificate\") pod \"router-default-6cbc99754-98r5g\" (UID: \"706595e5-78a2-4cbb-93bc-d371be497332\") " pod="openshift-ingress/router-default-6cbc99754-98r5g" Apr 17 11:32:24.929858 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:32:24.929755 2567 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found Apr 17 11:32:24.929858 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:32:24.929829 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/706595e5-78a2-4cbb-93bc-d371be497332-metrics-certs podName:706595e5-78a2-4cbb-93bc-d371be497332 nodeName:}" 
failed. No retries permitted until 2026-04-17 11:32:25.429806865 +0000 UTC m=+126.268044599 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/706595e5-78a2-4cbb-93bc-d371be497332-metrics-certs") pod "router-default-6cbc99754-98r5g" (UID: "706595e5-78a2-4cbb-93bc-d371be497332") : secret "router-metrics-certs-default" not found Apr 17 11:32:24.930010 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.929940 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-djhlb\" (UniqueName: \"kubernetes.io/projected/706595e5-78a2-4cbb-93bc-d371be497332-kube-api-access-djhlb\") pod \"router-default-6cbc99754-98r5g\" (UID: \"706595e5-78a2-4cbb-93bc-d371be497332\") " pod="openshift-ingress/router-default-6cbc99754-98r5g" Apr 17 11:32:24.930010 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.929978 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4b1d00c-98b8-45c5-80c4-0362b3303384-config\") pod \"console-operator-9d4b6777b-j8vsm\" (UID: \"b4b1d00c-98b8-45c5-80c4-0362b3303384\") " pod="openshift-console-operator/console-operator-9d4b6777b-j8vsm" Apr 17 11:32:24.930112 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.930031 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/706595e5-78a2-4cbb-93bc-d371be497332-service-ca-bundle\") pod \"router-default-6cbc99754-98r5g\" (UID: \"706595e5-78a2-4cbb-93bc-d371be497332\") " pod="openshift-ingress/router-default-6cbc99754-98r5g" Apr 17 11:32:24.930167 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.930102 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b4b1d00c-98b8-45c5-80c4-0362b3303384-trusted-ca\") pod \"console-operator-9d4b6777b-j8vsm\" (UID: 
\"b4b1d00c-98b8-45c5-80c4-0362b3303384\") " pod="openshift-console-operator/console-operator-9d4b6777b-j8vsm" Apr 17 11:32:24.930167 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:32:24.930127 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/706595e5-78a2-4cbb-93bc-d371be497332-service-ca-bundle podName:706595e5-78a2-4cbb-93bc-d371be497332 nodeName:}" failed. No retries permitted until 2026-04-17 11:32:25.430112569 +0000 UTC m=+126.268350302 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/706595e5-78a2-4cbb-93bc-d371be497332-service-ca-bundle") pod "router-default-6cbc99754-98r5g" (UID: "706595e5-78a2-4cbb-93bc-d371be497332") : configmap references non-existent config key: service-ca.crt Apr 17 11:32:24.930781 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.930737 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4b1d00c-98b8-45c5-80c4-0362b3303384-config\") pod \"console-operator-9d4b6777b-j8vsm\" (UID: \"b4b1d00c-98b8-45c5-80c4-0362b3303384\") " pod="openshift-console-operator/console-operator-9d4b6777b-j8vsm" Apr 17 11:32:24.930930 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:32:24.930908 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a2c4dc0_7d3f_4520_8ef4_4d3c320bedeb.slice/crio-e4398ead7e7afdb27d3d0bc6902ade965def66febd9e7bd75238856a98ee61fb WatchSource:0}: Error finding container e4398ead7e7afdb27d3d0bc6902ade965def66febd9e7bd75238856a98ee61fb: Status 404 returned error can't find the container with id e4398ead7e7afdb27d3d0bc6902ade965def66febd9e7bd75238856a98ee61fb Apr 17 11:32:24.931536 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.931515 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b4b1d00c-98b8-45c5-80c4-0362b3303384-trusted-ca\") pod \"console-operator-9d4b6777b-j8vsm\" (UID: \"b4b1d00c-98b8-45c5-80c4-0362b3303384\") " pod="openshift-console-operator/console-operator-9d4b6777b-j8vsm" Apr 17 11:32:24.932423 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.932327 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/706595e5-78a2-4cbb-93bc-d371be497332-stats-auth\") pod \"router-default-6cbc99754-98r5g\" (UID: \"706595e5-78a2-4cbb-93bc-d371be497332\") " pod="openshift-ingress/router-default-6cbc99754-98r5g" Apr 17 11:32:24.932548 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.932519 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4b1d00c-98b8-45c5-80c4-0362b3303384-serving-cert\") pod \"console-operator-9d4b6777b-j8vsm\" (UID: \"b4b1d00c-98b8-45c5-80c4-0362b3303384\") " pod="openshift-console-operator/console-operator-9d4b6777b-j8vsm" Apr 17 11:32:24.932640 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.932621 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/706595e5-78a2-4cbb-93bc-d371be497332-default-certificate\") pod \"router-default-6cbc99754-98r5g\" (UID: \"706595e5-78a2-4cbb-93bc-d371be497332\") " pod="openshift-ingress/router-default-6cbc99754-98r5g" Apr 17 11:32:24.941518 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.941494 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-djhlb\" (UniqueName: \"kubernetes.io/projected/706595e5-78a2-4cbb-93bc-d371be497332-kube-api-access-djhlb\") pod \"router-default-6cbc99754-98r5g\" (UID: \"706595e5-78a2-4cbb-93bc-d371be497332\") " pod="openshift-ingress/router-default-6cbc99754-98r5g" Apr 17 11:32:24.941621 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.941606 2567 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-whvfr\" (UniqueName: \"kubernetes.io/projected/b4b1d00c-98b8-45c5-80c4-0362b3303384-kube-api-access-whvfr\") pod \"console-operator-9d4b6777b-j8vsm\" (UID: \"b4b1d00c-98b8-45c5-80c4-0362b3303384\") " pod="openshift-console-operator/console-operator-9d4b6777b-j8vsm" Apr 17 11:32:24.944390 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.944338 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-585dfdc468-qvvgv"] Apr 17 11:32:24.946643 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:32:24.946622 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c7fd4b1_e618_4f37_8c84_dc31d902ec5d.slice/crio-921d31cd152a052c6368e19b2cf84adf868a11a4ac8bcd842647dcf66a8916a7 WatchSource:0}: Error finding container 921d31cd152a052c6368e19b2cf84adf868a11a4ac8bcd842647dcf66a8916a7: Status 404 returned error can't find the container with id 921d31cd152a052c6368e19b2cf84adf868a11a4ac8bcd842647dcf66a8916a7 Apr 17 11:32:24.977660 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:24.977635 2567 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-9d4b6777b-j8vsm" Apr 17 11:32:25.028052 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:25.027986 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-x5b7q" event={"ID":"5a2c4dc0-7d3f-4520-8ef4-4d3c320bedeb","Type":"ContainerStarted","Data":"e4398ead7e7afdb27d3d0bc6902ade965def66febd9e7bd75238856a98ee61fb"} Apr 17 11:32:25.029650 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:25.029583 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-585dfdc468-qvvgv" event={"ID":"7c7fd4b1-e618-4f37-8c84-dc31d902ec5d","Type":"ContainerStarted","Data":"921d31cd152a052c6368e19b2cf84adf868a11a4ac8bcd842647dcf66a8916a7"} Apr 17 11:32:25.089529 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:25.089488 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-9d4b6777b-j8vsm"] Apr 17 11:32:25.093529 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:32:25.093503 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4b1d00c_98b8_45c5_80c4_0362b3303384.slice/crio-e751d0c1a51da44eee31154714726ceef5a717046147640643c312741af9f139 WatchSource:0}: Error finding container e751d0c1a51da44eee31154714726ceef5a717046147640643c312741af9f139: Status 404 returned error can't find the container with id e751d0c1a51da44eee31154714726ceef5a717046147640643c312741af9f139 Apr 17 11:32:25.232961 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:25.232921 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f1c4813-7a6c-4e2d-930c-133f87515757-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-bck66\" (UID: \"0f1c4813-7a6c-4e2d-930c-133f87515757\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-bck66" Apr 17 11:32:25.233110 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:32:25.233071 2567 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 17 11:32:25.233185 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:32:25.233135 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f1c4813-7a6c-4e2d-930c-133f87515757-samples-operator-tls podName:0f1c4813-7a6c-4e2d-930c-133f87515757 nodeName:}" failed. No retries permitted until 2026-04-17 11:32:26.233118051 +0000 UTC m=+127.071355785 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/0f1c4813-7a6c-4e2d-930c-133f87515757-samples-operator-tls") pod "cluster-samples-operator-6dc5bdb6b4-bck66" (UID: "0f1c4813-7a6c-4e2d-930c-133f87515757") : secret "samples-operator-tls" not found Apr 17 11:32:25.433811 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:25.433772 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/706595e5-78a2-4cbb-93bc-d371be497332-metrics-certs\") pod \"router-default-6cbc99754-98r5g\" (UID: \"706595e5-78a2-4cbb-93bc-d371be497332\") " pod="openshift-ingress/router-default-6cbc99754-98r5g" Apr 17 11:32:25.433993 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:32:25.433916 2567 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found Apr 17 11:32:25.434059 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:32:25.433999 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/706595e5-78a2-4cbb-93bc-d371be497332-metrics-certs podName:706595e5-78a2-4cbb-93bc-d371be497332 nodeName:}" failed. 
No retries permitted until 2026-04-17 11:32:26.433978386 +0000 UTC m=+127.272216121 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/706595e5-78a2-4cbb-93bc-d371be497332-metrics-certs") pod "router-default-6cbc99754-98r5g" (UID: "706595e5-78a2-4cbb-93bc-d371be497332") : secret "router-metrics-certs-default" not found Apr 17 11:32:25.434059 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:25.434037 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/706595e5-78a2-4cbb-93bc-d371be497332-service-ca-bundle\") pod \"router-default-6cbc99754-98r5g\" (UID: \"706595e5-78a2-4cbb-93bc-d371be497332\") " pod="openshift-ingress/router-default-6cbc99754-98r5g" Apr 17 11:32:25.434187 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:32:25.434171 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/706595e5-78a2-4cbb-93bc-d371be497332-service-ca-bundle podName:706595e5-78a2-4cbb-93bc-d371be497332 nodeName:}" failed. No retries permitted until 2026-04-17 11:32:26.434158974 +0000 UTC m=+127.272396719 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/706595e5-78a2-4cbb-93bc-d371be497332-service-ca-bundle") pod "router-default-6cbc99754-98r5g" (UID: "706595e5-78a2-4cbb-93bc-d371be497332") : configmap references non-existent config key: service-ca.crt Apr 17 11:32:26.032972 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:26.032932 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-9d4b6777b-j8vsm" event={"ID":"b4b1d00c-98b8-45c5-80c4-0362b3303384","Type":"ContainerStarted","Data":"e751d0c1a51da44eee31154714726ceef5a717046147640643c312741af9f139"} Apr 17 11:32:26.242366 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:26.242320 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f1c4813-7a6c-4e2d-930c-133f87515757-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-bck66\" (UID: \"0f1c4813-7a6c-4e2d-930c-133f87515757\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-bck66" Apr 17 11:32:26.242539 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:32:26.242455 2567 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 17 11:32:26.242539 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:32:26.242518 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f1c4813-7a6c-4e2d-930c-133f87515757-samples-operator-tls podName:0f1c4813-7a6c-4e2d-930c-133f87515757 nodeName:}" failed. No retries permitted until 2026-04-17 11:32:28.24250056 +0000 UTC m=+129.080738305 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/0f1c4813-7a6c-4e2d-930c-133f87515757-samples-operator-tls") pod "cluster-samples-operator-6dc5bdb6b4-bck66" (UID: "0f1c4813-7a6c-4e2d-930c-133f87515757") : secret "samples-operator-tls" not found Apr 17 11:32:26.443654 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:26.443623 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/706595e5-78a2-4cbb-93bc-d371be497332-metrics-certs\") pod \"router-default-6cbc99754-98r5g\" (UID: \"706595e5-78a2-4cbb-93bc-d371be497332\") " pod="openshift-ingress/router-default-6cbc99754-98r5g" Apr 17 11:32:26.443832 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:32:26.443812 2567 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found Apr 17 11:32:26.443897 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:26.443839 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/706595e5-78a2-4cbb-93bc-d371be497332-service-ca-bundle\") pod \"router-default-6cbc99754-98r5g\" (UID: \"706595e5-78a2-4cbb-93bc-d371be497332\") " pod="openshift-ingress/router-default-6cbc99754-98r5g" Apr 17 11:32:26.443897 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:32:26.443893 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/706595e5-78a2-4cbb-93bc-d371be497332-metrics-certs podName:706595e5-78a2-4cbb-93bc-d371be497332 nodeName:}" failed. No retries permitted until 2026-04-17 11:32:28.443870258 +0000 UTC m=+129.282108007 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/706595e5-78a2-4cbb-93bc-d371be497332-metrics-certs") pod "router-default-6cbc99754-98r5g" (UID: "706595e5-78a2-4cbb-93bc-d371be497332") : secret "router-metrics-certs-default" not found Apr 17 11:32:26.444005 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:32:26.443982 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/706595e5-78a2-4cbb-93bc-d371be497332-service-ca-bundle podName:706595e5-78a2-4cbb-93bc-d371be497332 nodeName:}" failed. No retries permitted until 2026-04-17 11:32:28.443964022 +0000 UTC m=+129.282201787 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/706595e5-78a2-4cbb-93bc-d371be497332-service-ca-bundle") pod "router-default-6cbc99754-98r5g" (UID: "706595e5-78a2-4cbb-93bc-d371be497332") : configmap references non-existent config key: service-ca.crt Apr 17 11:32:28.038777 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:28.038736 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-585dfdc468-qvvgv" event={"ID":"7c7fd4b1-e618-4f37-8c84-dc31d902ec5d","Type":"ContainerStarted","Data":"5915394a2ec9e42ea13ff304e97bd79e3894edb04c0315a0b99c531c0ac0fe5c"} Apr 17 11:32:28.040215 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:28.040196 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-j8vsm_b4b1d00c-98b8-45c5-80c4-0362b3303384/console-operator/0.log" Apr 17 11:32:28.040316 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:28.040232 2567 generic.go:358] "Generic (PLEG): container finished" podID="b4b1d00c-98b8-45c5-80c4-0362b3303384" containerID="a86f797d90e00ea20574460bb9a8eb45903db2fb120063cd5711d3df4c66bace" exitCode=255 Apr 17 11:32:28.040355 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:28.040321 2567 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="openshift-console-operator/console-operator-9d4b6777b-j8vsm" event={"ID":"b4b1d00c-98b8-45c5-80c4-0362b3303384","Type":"ContainerDied","Data":"a86f797d90e00ea20574460bb9a8eb45903db2fb120063cd5711d3df4c66bace"} Apr 17 11:32:28.040489 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:28.040465 2567 scope.go:117] "RemoveContainer" containerID="a86f797d90e00ea20574460bb9a8eb45903db2fb120063cd5711d3df4c66bace" Apr 17 11:32:28.041539 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:28.041517 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-x5b7q" event={"ID":"5a2c4dc0-7d3f-4520-8ef4-4d3c320bedeb","Type":"ContainerStarted","Data":"725a5d8b8210352a436bb1b8ff2c7843d2c87165635ae4feb87ed67f26aec35d"} Apr 17 11:32:28.057703 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:28.057639 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-585dfdc468-qvvgv" podStartSLOduration=1.54690214 podStartE2EDuration="4.057624294s" podCreationTimestamp="2026-04-17 11:32:24 +0000 UTC" firstStartedPulling="2026-04-17 11:32:24.948395977 +0000 UTC m=+125.786633711" lastFinishedPulling="2026-04-17 11:32:27.459118116 +0000 UTC m=+128.297355865" observedRunningTime="2026-04-17 11:32:28.0563228 +0000 UTC m=+128.894560555" watchObservedRunningTime="2026-04-17 11:32:28.057624294 +0000 UTC m=+128.895862053" Apr 17 11:32:28.090584 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:28.090529 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/volume-data-source-validator-7c6cbb6c87-x5b7q" podStartSLOduration=1.5663672480000002 podStartE2EDuration="4.090511083s" podCreationTimestamp="2026-04-17 11:32:24 +0000 UTC" firstStartedPulling="2026-04-17 11:32:24.93278457 +0000 UTC m=+125.771022304" lastFinishedPulling="2026-04-17 11:32:27.456928401 +0000 UTC m=+128.295166139" 
observedRunningTime="2026-04-17 11:32:28.089672617 +0000 UTC m=+128.927910372" watchObservedRunningTime="2026-04-17 11:32:28.090511083 +0000 UTC m=+128.928748840" Apr 17 11:32:28.260461 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:28.260418 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f1c4813-7a6c-4e2d-930c-133f87515757-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-bck66\" (UID: \"0f1c4813-7a6c-4e2d-930c-133f87515757\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-bck66" Apr 17 11:32:28.260646 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:32:28.260572 2567 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 17 11:32:28.260730 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:32:28.260647 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f1c4813-7a6c-4e2d-930c-133f87515757-samples-operator-tls podName:0f1c4813-7a6c-4e2d-930c-133f87515757 nodeName:}" failed. No retries permitted until 2026-04-17 11:32:32.260630259 +0000 UTC m=+133.098867992 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/0f1c4813-7a6c-4e2d-930c-133f87515757-samples-operator-tls") pod "cluster-samples-operator-6dc5bdb6b4-bck66" (UID: "0f1c4813-7a6c-4e2d-930c-133f87515757") : secret "samples-operator-tls" not found Apr 17 11:32:28.462141 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:28.462105 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/706595e5-78a2-4cbb-93bc-d371be497332-service-ca-bundle\") pod \"router-default-6cbc99754-98r5g\" (UID: \"706595e5-78a2-4cbb-93bc-d371be497332\") " pod="openshift-ingress/router-default-6cbc99754-98r5g" Apr 17 11:32:28.462328 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:28.462168 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/706595e5-78a2-4cbb-93bc-d371be497332-metrics-certs\") pod \"router-default-6cbc99754-98r5g\" (UID: \"706595e5-78a2-4cbb-93bc-d371be497332\") " pod="openshift-ingress/router-default-6cbc99754-98r5g" Apr 17 11:32:28.462328 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:32:28.462268 2567 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found Apr 17 11:32:28.462328 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:32:28.462317 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/706595e5-78a2-4cbb-93bc-d371be497332-metrics-certs podName:706595e5-78a2-4cbb-93bc-d371be497332 nodeName:}" failed. No retries permitted until 2026-04-17 11:32:32.462301882 +0000 UTC m=+133.300539629 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/706595e5-78a2-4cbb-93bc-d371be497332-metrics-certs") pod "router-default-6cbc99754-98r5g" (UID: "706595e5-78a2-4cbb-93bc-d371be497332") : secret "router-metrics-certs-default" not found Apr 17 11:32:28.462328 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:32:28.462328 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/706595e5-78a2-4cbb-93bc-d371be497332-service-ca-bundle podName:706595e5-78a2-4cbb-93bc-d371be497332 nodeName:}" failed. No retries permitted until 2026-04-17 11:32:32.462323042 +0000 UTC m=+133.300560775 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/706595e5-78a2-4cbb-93bc-d371be497332-service-ca-bundle") pod "router-default-6cbc99754-98r5g" (UID: "706595e5-78a2-4cbb-93bc-d371be497332") : configmap references non-existent config key: service-ca.crt Apr 17 11:32:29.045750 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:29.045727 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-j8vsm_b4b1d00c-98b8-45c5-80c4-0362b3303384/console-operator/1.log" Apr 17 11:32:29.046136 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:29.046071 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-j8vsm_b4b1d00c-98b8-45c5-80c4-0362b3303384/console-operator/0.log" Apr 17 11:32:29.046136 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:29.046101 2567 generic.go:358] "Generic (PLEG): container finished" podID="b4b1d00c-98b8-45c5-80c4-0362b3303384" containerID="5c1d0611837a5b85913b6a57834dd09bc1de677fe7b48b501608d39efef60993" exitCode=255 Apr 17 11:32:29.046242 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:29.046135 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-9d4b6777b-j8vsm" 
event={"ID":"b4b1d00c-98b8-45c5-80c4-0362b3303384","Type":"ContainerDied","Data":"5c1d0611837a5b85913b6a57834dd09bc1de677fe7b48b501608d39efef60993"} Apr 17 11:32:29.046242 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:29.046191 2567 scope.go:117] "RemoveContainer" containerID="a86f797d90e00ea20574460bb9a8eb45903db2fb120063cd5711d3df4c66bace" Apr 17 11:32:29.046453 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:29.046434 2567 scope.go:117] "RemoveContainer" containerID="5c1d0611837a5b85913b6a57834dd09bc1de677fe7b48b501608d39efef60993" Apr 17 11:32:29.046711 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:32:29.046665 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console-operator pod=console-operator-9d4b6777b-j8vsm_openshift-console-operator(b4b1d00c-98b8-45c5-80c4-0362b3303384)\"" pod="openshift-console-operator/console-operator-9d4b6777b-j8vsm" podUID="b4b1d00c-98b8-45c5-80c4-0362b3303384" Apr 17 11:32:29.154050 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:29.154018 2567 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-74bb7799d9-tmfsq"] Apr 17 11:32:29.157982 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:29.157968 2567 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-74bb7799d9-tmfsq" Apr 17 11:32:29.160634 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:29.160600 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Apr 17 11:32:29.160634 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:29.160611 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Apr 17 11:32:29.160845 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:29.160718 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-8gs5v\"" Apr 17 11:32:29.165492 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:29.165471 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-74bb7799d9-tmfsq"] Apr 17 11:32:29.270226 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:29.270184 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c9k8\" (UniqueName: \"kubernetes.io/projected/bf50dfd8-a7da-4074-b6eb-e64696657db9-kube-api-access-7c9k8\") pod \"migrator-74bb7799d9-tmfsq\" (UID: \"bf50dfd8-a7da-4074-b6eb-e64696657db9\") " pod="openshift-kube-storage-version-migrator/migrator-74bb7799d9-tmfsq" Apr 17 11:32:29.370914 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:29.370835 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7c9k8\" (UniqueName: \"kubernetes.io/projected/bf50dfd8-a7da-4074-b6eb-e64696657db9-kube-api-access-7c9k8\") pod \"migrator-74bb7799d9-tmfsq\" (UID: \"bf50dfd8-a7da-4074-b6eb-e64696657db9\") " pod="openshift-kube-storage-version-migrator/migrator-74bb7799d9-tmfsq" Apr 17 11:32:29.378902 ip-10-0-140-245 
kubenswrapper[2567]: I0417 11:32:29.378881 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c9k8\" (UniqueName: \"kubernetes.io/projected/bf50dfd8-a7da-4074-b6eb-e64696657db9-kube-api-access-7c9k8\") pod \"migrator-74bb7799d9-tmfsq\" (UID: \"bf50dfd8-a7da-4074-b6eb-e64696657db9\") " pod="openshift-kube-storage-version-migrator/migrator-74bb7799d9-tmfsq" Apr 17 11:32:29.467386 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:29.467350 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-74bb7799d9-tmfsq" Apr 17 11:32:29.571980 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:29.571945 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b1f3e8e-0735-4b17-9e76-c70b964db9c1-metrics-certs\") pod \"network-metrics-daemon-tnlq8\" (UID: \"7b1f3e8e-0735-4b17-9e76-c70b964db9c1\") " pod="openshift-multus/network-metrics-daemon-tnlq8" Apr 17 11:32:29.572192 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:32:29.572092 2567 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Apr 17 11:32:29.572192 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:32:29.572155 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b1f3e8e-0735-4b17-9e76-c70b964db9c1-metrics-certs podName:7b1f3e8e-0735-4b17-9e76-c70b964db9c1 nodeName:}" failed. No retries permitted until 2026-04-17 11:34:31.572138501 +0000 UTC m=+252.410376236 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7b1f3e8e-0735-4b17-9e76-c70b964db9c1-metrics-certs") pod "network-metrics-daemon-tnlq8" (UID: "7b1f3e8e-0735-4b17-9e76-c70b964db9c1") : secret "metrics-daemon-secret" not found Apr 17 11:32:29.583264 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:29.583236 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-74bb7799d9-tmfsq"] Apr 17 11:32:29.586129 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:32:29.586102 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf50dfd8_a7da_4074_b6eb_e64696657db9.slice/crio-91f50a64361416184dfaaaee4f7ef074ffbfc0df8f6e4a51b00972bbd13f7a00 WatchSource:0}: Error finding container 91f50a64361416184dfaaaee4f7ef074ffbfc0df8f6e4a51b00972bbd13f7a00: Status 404 returned error can't find the container with id 91f50a64361416184dfaaaee4f7ef074ffbfc0df8f6e4a51b00972bbd13f7a00 Apr 17 11:32:30.049759 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:30.049732 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-j8vsm_b4b1d00c-98b8-45c5-80c4-0362b3303384/console-operator/1.log" Apr 17 11:32:30.050200 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:30.050181 2567 scope.go:117] "RemoveContainer" containerID="5c1d0611837a5b85913b6a57834dd09bc1de677fe7b48b501608d39efef60993" Apr 17 11:32:30.050449 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:32:30.050402 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console-operator pod=console-operator-9d4b6777b-j8vsm_openshift-console-operator(b4b1d00c-98b8-45c5-80c4-0362b3303384)\"" pod="openshift-console-operator/console-operator-9d4b6777b-j8vsm" 
podUID="b4b1d00c-98b8-45c5-80c4-0362b3303384" Apr 17 11:32:30.051125 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:30.051100 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-74bb7799d9-tmfsq" event={"ID":"bf50dfd8-a7da-4074-b6eb-e64696657db9","Type":"ContainerStarted","Data":"91f50a64361416184dfaaaee4f7ef074ffbfc0df8f6e4a51b00972bbd13f7a00"} Apr 17 11:32:30.421354 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:30.421327 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-9zpcp_f79e4e5d-fbfb-429a-aa74-4c1d5725072a/dns-node-resolver/0.log" Apr 17 11:32:31.056302 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:31.056218 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-74bb7799d9-tmfsq" event={"ID":"bf50dfd8-a7da-4074-b6eb-e64696657db9","Type":"ContainerStarted","Data":"b52737bb2e74eb31fae38c0d1817b361d3efa824d4ac50a3578ba3a1cf4ca10a"} Apr 17 11:32:31.056302 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:31.056262 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-74bb7799d9-tmfsq" event={"ID":"bf50dfd8-a7da-4074-b6eb-e64696657db9","Type":"ContainerStarted","Data":"09f73eb1f011d65b40bfa6155c7f566d11725771fb8fa2df94c14bf3525fcf4b"} Apr 17 11:32:31.074247 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:31.074198 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-74bb7799d9-tmfsq" podStartSLOduration=1.070485996 podStartE2EDuration="2.074182132s" podCreationTimestamp="2026-04-17 11:32:29 +0000 UTC" firstStartedPulling="2026-04-17 11:32:29.587889167 +0000 UTC m=+130.426126914" lastFinishedPulling="2026-04-17 11:32:30.591585316 +0000 UTC m=+131.429823050" observedRunningTime="2026-04-17 11:32:31.073554735 +0000 UTC m=+131.911792490" watchObservedRunningTime="2026-04-17 
11:32:31.074182132 +0000 UTC m=+131.912419887" Apr 17 11:32:31.621925 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:31.621895 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-xq6rz_96b4954a-76e8-4a06-9917-5454d450896d/node-ca/0.log" Apr 17 11:32:32.296762 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:32.296723 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f1c4813-7a6c-4e2d-930c-133f87515757-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-bck66\" (UID: \"0f1c4813-7a6c-4e2d-930c-133f87515757\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-bck66" Apr 17 11:32:32.297143 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:32:32.296864 2567 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Apr 17 11:32:32.297143 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:32:32.296932 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f1c4813-7a6c-4e2d-930c-133f87515757-samples-operator-tls podName:0f1c4813-7a6c-4e2d-930c-133f87515757 nodeName:}" failed. No retries permitted until 2026-04-17 11:32:40.296916948 +0000 UTC m=+141.135154682 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/0f1c4813-7a6c-4e2d-930c-133f87515757-samples-operator-tls") pod "cluster-samples-operator-6dc5bdb6b4-bck66" (UID: "0f1c4813-7a6c-4e2d-930c-133f87515757") : secret "samples-operator-tls" not found Apr 17 11:32:32.498936 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:32.498897 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/706595e5-78a2-4cbb-93bc-d371be497332-metrics-certs\") pod \"router-default-6cbc99754-98r5g\" (UID: \"706595e5-78a2-4cbb-93bc-d371be497332\") " pod="openshift-ingress/router-default-6cbc99754-98r5g" Apr 17 11:32:32.499097 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:32:32.499054 2567 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: secret "router-metrics-certs-default" not found Apr 17 11:32:32.499097 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:32.499069 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/706595e5-78a2-4cbb-93bc-d371be497332-service-ca-bundle\") pod \"router-default-6cbc99754-98r5g\" (UID: \"706595e5-78a2-4cbb-93bc-d371be497332\") " pod="openshift-ingress/router-default-6cbc99754-98r5g" Apr 17 11:32:32.499169 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:32:32.499126 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/706595e5-78a2-4cbb-93bc-d371be497332-metrics-certs podName:706595e5-78a2-4cbb-93bc-d371be497332 nodeName:}" failed. No retries permitted until 2026-04-17 11:32:40.499105918 +0000 UTC m=+141.337343652 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/706595e5-78a2-4cbb-93bc-d371be497332-metrics-certs") pod "router-default-6cbc99754-98r5g" (UID: "706595e5-78a2-4cbb-93bc-d371be497332") : secret "router-metrics-certs-default" not found Apr 17 11:32:32.499211 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:32:32.499172 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/706595e5-78a2-4cbb-93bc-d371be497332-service-ca-bundle podName:706595e5-78a2-4cbb-93bc-d371be497332 nodeName:}" failed. No retries permitted until 2026-04-17 11:32:40.499159396 +0000 UTC m=+141.337397136 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/706595e5-78a2-4cbb-93bc-d371be497332-service-ca-bundle") pod "router-default-6cbc99754-98r5g" (UID: "706595e5-78a2-4cbb-93bc-d371be497332") : configmap references non-existent config key: service-ca.crt Apr 17 11:32:34.978498 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:34.978462 2567 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-9d4b6777b-j8vsm" Apr 17 11:32:34.978498 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:34.978496 2567 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-9d4b6777b-j8vsm" Apr 17 11:32:34.978928 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:34.978858 2567 scope.go:117] "RemoveContainer" containerID="5c1d0611837a5b85913b6a57834dd09bc1de677fe7b48b501608d39efef60993" Apr 17 11:32:34.979033 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:32:34.979016 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console-operator 
pod=console-operator-9d4b6777b-j8vsm_openshift-console-operator(b4b1d00c-98b8-45c5-80c4-0362b3303384)\"" pod="openshift-console-operator/console-operator-9d4b6777b-j8vsm" podUID="b4b1d00c-98b8-45c5-80c4-0362b3303384" Apr 17 11:32:40.365384 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:40.365330 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f1c4813-7a6c-4e2d-930c-133f87515757-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-bck66\" (UID: \"0f1c4813-7a6c-4e2d-930c-133f87515757\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-bck66" Apr 17 11:32:40.367838 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:40.367813 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f1c4813-7a6c-4e2d-930c-133f87515757-samples-operator-tls\") pod \"cluster-samples-operator-6dc5bdb6b4-bck66\" (UID: \"0f1c4813-7a6c-4e2d-930c-133f87515757\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-bck66" Apr 17 11:32:40.414771 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:40.414738 2567 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-bck66" Apr 17 11:32:40.534808 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:40.534774 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-bck66"] Apr 17 11:32:40.567278 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:40.567249 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/706595e5-78a2-4cbb-93bc-d371be497332-service-ca-bundle\") pod \"router-default-6cbc99754-98r5g\" (UID: \"706595e5-78a2-4cbb-93bc-d371be497332\") " pod="openshift-ingress/router-default-6cbc99754-98r5g" Apr 17 11:32:40.567403 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:40.567347 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/706595e5-78a2-4cbb-93bc-d371be497332-metrics-certs\") pod \"router-default-6cbc99754-98r5g\" (UID: \"706595e5-78a2-4cbb-93bc-d371be497332\") " pod="openshift-ingress/router-default-6cbc99754-98r5g" Apr 17 11:32:40.568007 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:40.567982 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/706595e5-78a2-4cbb-93bc-d371be497332-service-ca-bundle\") pod \"router-default-6cbc99754-98r5g\" (UID: \"706595e5-78a2-4cbb-93bc-d371be497332\") " pod="openshift-ingress/router-default-6cbc99754-98r5g" Apr 17 11:32:40.569493 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:40.569465 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/706595e5-78a2-4cbb-93bc-d371be497332-metrics-certs\") pod \"router-default-6cbc99754-98r5g\" (UID: \"706595e5-78a2-4cbb-93bc-d371be497332\") " pod="openshift-ingress/router-default-6cbc99754-98r5g" Apr 
17 11:32:40.597776 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:40.597753 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-6cbc99754-98r5g" Apr 17 11:32:40.716278 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:40.716244 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress/router-default-6cbc99754-98r5g"] Apr 17 11:32:40.718669 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:32:40.718644 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod706595e5_78a2_4cbb_93bc_d371be497332.slice/crio-d09530c62d21c9ebf0466b271d3944348d1df49f4a1f46ae57bd54c100900cd3 WatchSource:0}: Error finding container d09530c62d21c9ebf0466b271d3944348d1df49f4a1f46ae57bd54c100900cd3: Status 404 returned error can't find the container with id d09530c62d21c9ebf0466b271d3944348d1df49f4a1f46ae57bd54c100900cd3 Apr 17 11:32:41.084615 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:41.084570 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-bck66" event={"ID":"0f1c4813-7a6c-4e2d-930c-133f87515757","Type":"ContainerStarted","Data":"55140fef1ec4555ffb364bc60af107e2a7e6844e3516a7c5fa28d449391a836c"} Apr 17 11:32:41.085994 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:41.085963 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-6cbc99754-98r5g" event={"ID":"706595e5-78a2-4cbb-93bc-d371be497332","Type":"ContainerStarted","Data":"437db30d8ef59ca3e2cee56298ec953126ecb92b4a3aa4508615c96835e813d8"} Apr 17 11:32:41.086102 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:41.085996 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-6cbc99754-98r5g" 
event={"ID":"706595e5-78a2-4cbb-93bc-d371be497332","Type":"ContainerStarted","Data":"d09530c62d21c9ebf0466b271d3944348d1df49f4a1f46ae57bd54c100900cd3"} Apr 17 11:32:41.106136 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:41.106079 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-6cbc99754-98r5g" podStartSLOduration=17.10606179 podStartE2EDuration="17.10606179s" podCreationTimestamp="2026-04-17 11:32:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 11:32:41.104724073 +0000 UTC m=+141.942961831" watchObservedRunningTime="2026-04-17 11:32:41.10606179 +0000 UTC m=+141.944299548" Apr 17 11:32:41.598773 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:41.598727 2567 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-6cbc99754-98r5g" Apr 17 11:32:41.601626 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:41.601600 2567 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-6cbc99754-98r5g" Apr 17 11:32:42.089837 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:42.089809 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-bck66" event={"ID":"0f1c4813-7a6c-4e2d-930c-133f87515757","Type":"ContainerStarted","Data":"f204239ff5901fe201bb17fccc127b5e3841aadbc0802d1cad014692662d2cfd"} Apr 17 11:32:42.090135 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:42.090091 2567 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-6cbc99754-98r5g" Apr 17 11:32:42.091390 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:42.091371 2567 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-6cbc99754-98r5g" Apr 17 11:32:43.093927 
ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:43.093888 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-bck66" event={"ID":"0f1c4813-7a6c-4e2d-930c-133f87515757","Type":"ContainerStarted","Data":"1f59a84b76e9fb69c6467407ac2852a8793837b5f82df912a1dfb54976730195"}
Apr 17 11:32:43.110000 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:43.109950 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6dc5bdb6b4-bck66" podStartSLOduration=17.64974809 podStartE2EDuration="19.109933722s" podCreationTimestamp="2026-04-17 11:32:24 +0000 UTC" firstStartedPulling="2026-04-17 11:32:40.575626767 +0000 UTC m=+141.413864504" lastFinishedPulling="2026-04-17 11:32:42.035812401 +0000 UTC m=+142.874050136" observedRunningTime="2026-04-17 11:32:43.109134507 +0000 UTC m=+143.947372288" watchObservedRunningTime="2026-04-17 11:32:43.109933722 +0000 UTC m=+143.948171508"
Apr 17 11:32:49.710477 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:49.710446 2567 scope.go:117] "RemoveContainer" containerID="5c1d0611837a5b85913b6a57834dd09bc1de677fe7b48b501608d39efef60993"
Apr 17 11:32:50.113964 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:50.113889 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-j8vsm_b4b1d00c-98b8-45c5-80c4-0362b3303384/console-operator/2.log"
Apr 17 11:32:50.114302 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:50.114281 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-j8vsm_b4b1d00c-98b8-45c5-80c4-0362b3303384/console-operator/1.log"
Apr 17 11:32:50.114406 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:50.114324 2567 generic.go:358] "Generic (PLEG): container finished" podID="b4b1d00c-98b8-45c5-80c4-0362b3303384" containerID="d5a72e7b47bbd1ef95657f51bc783dc8a399a97c9fa8260f3213d567dc42ac40" exitCode=255
Apr 17 11:32:50.114461 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:50.114401 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-9d4b6777b-j8vsm" event={"ID":"b4b1d00c-98b8-45c5-80c4-0362b3303384","Type":"ContainerDied","Data":"d5a72e7b47bbd1ef95657f51bc783dc8a399a97c9fa8260f3213d567dc42ac40"}
Apr 17 11:32:50.114461 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:50.114442 2567 scope.go:117] "RemoveContainer" containerID="5c1d0611837a5b85913b6a57834dd09bc1de677fe7b48b501608d39efef60993"
Apr 17 11:32:50.114806 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:50.114792 2567 scope.go:117] "RemoveContainer" containerID="d5a72e7b47bbd1ef95657f51bc783dc8a399a97c9fa8260f3213d567dc42ac40"
Apr 17 11:32:50.115038 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:32:50.115019 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=console-operator pod=console-operator-9d4b6777b-j8vsm_openshift-console-operator(b4b1d00c-98b8-45c5-80c4-0362b3303384)\"" pod="openshift-console-operator/console-operator-9d4b6777b-j8vsm" podUID="b4b1d00c-98b8-45c5-80c4-0362b3303384"
Apr 17 11:32:51.118094 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:51.118071 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-j8vsm_b4b1d00c-98b8-45c5-80c4-0362b3303384/console-operator/2.log"
Apr 17 11:32:53.070754 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.070723 2567 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-cb95c66f6-8b7v5"]
Apr 17 11:32:53.074176 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.074155 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-cb95c66f6-8b7v5"
Apr 17 11:32:53.083918 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.083896 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Apr 17 11:32:53.084037 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.083897 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"default-dockercfg-cvx4x\""
Apr 17 11:32:53.084037 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.083994 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Apr 17 11:32:53.085080 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.085062 2567 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-runtime-extractor-2h4l7"]
Apr 17 11:32:53.087921 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.087904 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-runtime-extractor-2h4l7"
Apr 17 11:32:53.093177 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.093155 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-tls\""
Apr 17 11:32:53.093282 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.093190 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-sa-dockercfg-4b7m5\""
Apr 17 11:32:53.093510 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.093427 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-rbac-proxy\""
Apr 17 11:32:53.103204 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.103181 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-cb95c66f6-8b7v5"]
Apr 17 11:32:53.104026 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.104010 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-2h4l7"]
Apr 17 11:32:53.172064 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.172036 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/38861b43-eb3b-4987-b34b-454261f74172-crio-socket\") pod \"insights-runtime-extractor-2h4l7\" (UID: \"38861b43-eb3b-4987-b34b-454261f74172\") " pod="openshift-insights/insights-runtime-extractor-2h4l7"
Apr 17 11:32:53.172233 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.172076 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/57b8a4d3-28c9-4671-9ef8-20adb1b71c4e-nginx-conf\") pod \"networking-console-plugin-cb95c66f6-8b7v5\" (UID: \"57b8a4d3-28c9-4671-9ef8-20adb1b71c4e\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-8b7v5"
Apr 17 11:32:53.172233 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.172102 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/38861b43-eb3b-4987-b34b-454261f74172-data-volume\") pod \"insights-runtime-extractor-2h4l7\" (UID: \"38861b43-eb3b-4987-b34b-454261f74172\") " pod="openshift-insights/insights-runtime-extractor-2h4l7"
Apr 17 11:32:53.172233 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.172172 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/57b8a4d3-28c9-4671-9ef8-20adb1b71c4e-networking-console-plugin-cert\") pod \"networking-console-plugin-cb95c66f6-8b7v5\" (UID: \"57b8a4d3-28c9-4671-9ef8-20adb1b71c4e\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-8b7v5"
Apr 17 11:32:53.172233 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.172206 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx78v\" (UniqueName: \"kubernetes.io/projected/38861b43-eb3b-4987-b34b-454261f74172-kube-api-access-rx78v\") pod \"insights-runtime-extractor-2h4l7\" (UID: \"38861b43-eb3b-4987-b34b-454261f74172\") " pod="openshift-insights/insights-runtime-extractor-2h4l7"
Apr 17 11:32:53.172233 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.172224 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/38861b43-eb3b-4987-b34b-454261f74172-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-2h4l7\" (UID: \"38861b43-eb3b-4987-b34b-454261f74172\") " pod="openshift-insights/insights-runtime-extractor-2h4l7"
Apr 17 11:32:53.172411 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.172239 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/38861b43-eb3b-4987-b34b-454261f74172-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-2h4l7\" (UID: \"38861b43-eb3b-4987-b34b-454261f74172\") " pod="openshift-insights/insights-runtime-extractor-2h4l7"
Apr 17 11:32:53.272827 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.272745 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/38861b43-eb3b-4987-b34b-454261f74172-crio-socket\") pod \"insights-runtime-extractor-2h4l7\" (UID: \"38861b43-eb3b-4987-b34b-454261f74172\") " pod="openshift-insights/insights-runtime-extractor-2h4l7"
Apr 17 11:32:53.272827 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.272791 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/57b8a4d3-28c9-4671-9ef8-20adb1b71c4e-nginx-conf\") pod \"networking-console-plugin-cb95c66f6-8b7v5\" (UID: \"57b8a4d3-28c9-4671-9ef8-20adb1b71c4e\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-8b7v5"
Apr 17 11:32:53.272827 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.272820 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/38861b43-eb3b-4987-b34b-454261f74172-data-volume\") pod \"insights-runtime-extractor-2h4l7\" (UID: \"38861b43-eb3b-4987-b34b-454261f74172\") " pod="openshift-insights/insights-runtime-extractor-2h4l7"
Apr 17 11:32:53.273074 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.272873 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/38861b43-eb3b-4987-b34b-454261f74172-crio-socket\") pod \"insights-runtime-extractor-2h4l7\" (UID: \"38861b43-eb3b-4987-b34b-454261f74172\") " pod="openshift-insights/insights-runtime-extractor-2h4l7"
Apr 17 11:32:53.273074 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.272876 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/57b8a4d3-28c9-4671-9ef8-20adb1b71c4e-networking-console-plugin-cert\") pod \"networking-console-plugin-cb95c66f6-8b7v5\" (UID: \"57b8a4d3-28c9-4671-9ef8-20adb1b71c4e\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-8b7v5"
Apr 17 11:32:53.273074 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.272943 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rx78v\" (UniqueName: \"kubernetes.io/projected/38861b43-eb3b-4987-b34b-454261f74172-kube-api-access-rx78v\") pod \"insights-runtime-extractor-2h4l7\" (UID: \"38861b43-eb3b-4987-b34b-454261f74172\") " pod="openshift-insights/insights-runtime-extractor-2h4l7"
Apr 17 11:32:53.273074 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.272964 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/38861b43-eb3b-4987-b34b-454261f74172-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-2h4l7\" (UID: \"38861b43-eb3b-4987-b34b-454261f74172\") " pod="openshift-insights/insights-runtime-extractor-2h4l7"
Apr 17 11:32:53.273074 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.272981 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/38861b43-eb3b-4987-b34b-454261f74172-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-2h4l7\" (UID: \"38861b43-eb3b-4987-b34b-454261f74172\") " pod="openshift-insights/insights-runtime-extractor-2h4l7"
Apr 17 11:32:53.273323 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.273192 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/38861b43-eb3b-4987-b34b-454261f74172-data-volume\") pod \"insights-runtime-extractor-2h4l7\" (UID: \"38861b43-eb3b-4987-b34b-454261f74172\") " pod="openshift-insights/insights-runtime-extractor-2h4l7"
Apr 17 11:32:53.273587 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.273562 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/38861b43-eb3b-4987-b34b-454261f74172-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-2h4l7\" (UID: \"38861b43-eb3b-4987-b34b-454261f74172\") " pod="openshift-insights/insights-runtime-extractor-2h4l7"
Apr 17 11:32:53.274064 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.274042 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/57b8a4d3-28c9-4671-9ef8-20adb1b71c4e-nginx-conf\") pod \"networking-console-plugin-cb95c66f6-8b7v5\" (UID: \"57b8a4d3-28c9-4671-9ef8-20adb1b71c4e\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-8b7v5"
Apr 17 11:32:53.275314 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.275291 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/38861b43-eb3b-4987-b34b-454261f74172-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-2h4l7\" (UID: \"38861b43-eb3b-4987-b34b-454261f74172\") " pod="openshift-insights/insights-runtime-extractor-2h4l7"
Apr 17 11:32:53.275401 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.275350 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/57b8a4d3-28c9-4671-9ef8-20adb1b71c4e-networking-console-plugin-cert\") pod \"networking-console-plugin-cb95c66f6-8b7v5\" (UID: \"57b8a4d3-28c9-4671-9ef8-20adb1b71c4e\") " pod="openshift-network-console/networking-console-plugin-cb95c66f6-8b7v5"
Apr 17 11:32:53.282519 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.282489 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx78v\" (UniqueName: \"kubernetes.io/projected/38861b43-eb3b-4987-b34b-454261f74172-kube-api-access-rx78v\") pod \"insights-runtime-extractor-2h4l7\" (UID: \"38861b43-eb3b-4987-b34b-454261f74172\") " pod="openshift-insights/insights-runtime-extractor-2h4l7"
Apr 17 11:32:53.382859 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.382831 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-cb95c66f6-8b7v5"
Apr 17 11:32:53.396522 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.396496 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-runtime-extractor-2h4l7"
Apr 17 11:32:53.525015 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.524926 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-cb95c66f6-8b7v5"]
Apr 17 11:32:53.528222 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:32:53.528191 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57b8a4d3_28c9_4671_9ef8_20adb1b71c4e.slice/crio-53b4d81e40d5690f71a17c3d47d419fc1541911688b21354321a9919b2eb2759 WatchSource:0}: Error finding container 53b4d81e40d5690f71a17c3d47d419fc1541911688b21354321a9919b2eb2759: Status 404 returned error can't find the container with id 53b4d81e40d5690f71a17c3d47d419fc1541911688b21354321a9919b2eb2759
Apr 17 11:32:53.546178 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:53.546153 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-2h4l7"]
Apr 17 11:32:53.549368 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:32:53.549344 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38861b43_eb3b_4987_b34b_454261f74172.slice/crio-baf8e75d5d555bf7f4a80a6e13776934d360471bd94d391922c3753613006299 WatchSource:0}: Error finding container baf8e75d5d555bf7f4a80a6e13776934d360471bd94d391922c3753613006299: Status 404 returned error can't find the container with id baf8e75d5d555bf7f4a80a6e13776934d360471bd94d391922c3753613006299
Apr 17 11:32:54.127091 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:54.127057 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-2h4l7" event={"ID":"38861b43-eb3b-4987-b34b-454261f74172","Type":"ContainerStarted","Data":"316c54de5faa46d566d3c4bc8dfcd12857c4cb880c730298a1f5b2d9344243b6"}
Apr 17 11:32:54.127422 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:54.127098 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-2h4l7" event={"ID":"38861b43-eb3b-4987-b34b-454261f74172","Type":"ContainerStarted","Data":"baf8e75d5d555bf7f4a80a6e13776934d360471bd94d391922c3753613006299"}
Apr 17 11:32:54.128158 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:54.128133 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-cb95c66f6-8b7v5" event={"ID":"57b8a4d3-28c9-4671-9ef8-20adb1b71c4e","Type":"ContainerStarted","Data":"53b4d81e40d5690f71a17c3d47d419fc1541911688b21354321a9919b2eb2759"}
Apr 17 11:32:54.977754 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:54.977718 2567 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-9d4b6777b-j8vsm"
Apr 17 11:32:54.977935 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:54.977773 2567 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-9d4b6777b-j8vsm"
Apr 17 11:32:54.978205 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:54.978188 2567 scope.go:117] "RemoveContainer" containerID="d5a72e7b47bbd1ef95657f51bc783dc8a399a97c9fa8260f3213d567dc42ac40"
Apr 17 11:32:54.978423 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:32:54.978403 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=console-operator pod=console-operator-9d4b6777b-j8vsm_openshift-console-operator(b4b1d00c-98b8-45c5-80c4-0362b3303384)\"" pod="openshift-console-operator/console-operator-9d4b6777b-j8vsm" podUID="b4b1d00c-98b8-45c5-80c4-0362b3303384"
Apr 17 11:32:55.132626 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:55.132587 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-2h4l7" event={"ID":"38861b43-eb3b-4987-b34b-454261f74172","Type":"ContainerStarted","Data":"cfb6d57be7d437b1d1ff569093d582978496976718a8beab7aff983b479f5e8c"}
Apr 17 11:32:55.133987 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:55.133955 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-cb95c66f6-8b7v5" event={"ID":"57b8a4d3-28c9-4671-9ef8-20adb1b71c4e","Type":"ContainerStarted","Data":"4b3e2a70108129af8414414444735020a60afa85a221f2b8fa62c056b981401b"}
Apr 17 11:32:55.157103 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:55.157042 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-cb95c66f6-8b7v5" podStartSLOduration=1.058608193 podStartE2EDuration="2.157025081s" podCreationTimestamp="2026-04-17 11:32:53 +0000 UTC" firstStartedPulling="2026-04-17 11:32:53.530168093 +0000 UTC m=+154.368405830" lastFinishedPulling="2026-04-17 11:32:54.628584983 +0000 UTC m=+155.466822718" observedRunningTime="2026-04-17 11:32:55.156235821 +0000 UTC m=+155.994473592" watchObservedRunningTime="2026-04-17 11:32:55.157025081 +0000 UTC m=+155.995262838"
Apr 17 11:32:55.578785 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:32:55.578728 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[registry-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-image-registry/image-registry-9b4668746-kmh5p" podUID="35579630-eea3-41d9-8ca1-8408a45d5896"
Apr 17 11:32:55.590897 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:32:55.590858 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-ingress-canary/ingress-canary-mcjdh" podUID="da2c16b4-2e18-4310-881d-5febd92c9d3d"
Apr 17 11:32:55.673097 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:32:55.673046 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[metrics-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-dns/dns-default-5q84n" podUID="2da30d47-d7ea-47f4-a489-4729c8989cef"
Apr 17 11:32:56.138638 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:56.138609 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-9b4668746-kmh5p"
Apr 17 11:32:56.138638 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:56.138628 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-2h4l7" event={"ID":"38861b43-eb3b-4987-b34b-454261f74172","Type":"ContainerStarted","Data":"76fe1dfcf0e06b763a844362fa049e129d21b0e93786b43c35dce2873ba885d8"}
Apr 17 11:32:56.156932 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:32:56.156875 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-runtime-extractor-2h4l7" podStartSLOduration=1.089195486 podStartE2EDuration="3.156859742s" podCreationTimestamp="2026-04-17 11:32:53 +0000 UTC" firstStartedPulling="2026-04-17 11:32:53.608874304 +0000 UTC m=+154.447112039" lastFinishedPulling="2026-04-17 11:32:55.676538557 +0000 UTC m=+156.514776295" observedRunningTime="2026-04-17 11:32:56.155461466 +0000 UTC m=+156.993699222" watchObservedRunningTime="2026-04-17 11:32:56.156859742 +0000 UTC m=+156.995097498"
Apr 17 11:32:56.729351 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:32:56.729302 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[metrics-certs], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-multus/network-metrics-daemon-tnlq8" podUID="7b1f3e8e-0735-4b17-9e76-c70b964db9c1"
Apr 17 11:33:00.434412 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:00.434359 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da2c16b4-2e18-4310-881d-5febd92c9d3d-cert\") pod \"ingress-canary-mcjdh\" (UID: \"da2c16b4-2e18-4310-881d-5febd92c9d3d\") " pod="openshift-ingress-canary/ingress-canary-mcjdh"
Apr 17 11:33:00.434412 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:00.434413 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-registry-tls\") pod \"image-registry-9b4668746-kmh5p\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " pod="openshift-image-registry/image-registry-9b4668746-kmh5p"
Apr 17 11:33:00.436867 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:00.436841 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-registry-tls\") pod \"image-registry-9b4668746-kmh5p\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " pod="openshift-image-registry/image-registry-9b4668746-kmh5p"
Apr 17 11:33:00.436979 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:00.436956 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da2c16b4-2e18-4310-881d-5febd92c9d3d-cert\") pod \"ingress-canary-mcjdh\" (UID: \"da2c16b4-2e18-4310-881d-5febd92c9d3d\") " pod="openshift-ingress-canary/ingress-canary-mcjdh"
Apr 17 11:33:00.535015 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:00.534967 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2da30d47-d7ea-47f4-a489-4729c8989cef-metrics-tls\") pod \"dns-default-5q84n\" (UID: \"2da30d47-d7ea-47f4-a489-4729c8989cef\") " pod="openshift-dns/dns-default-5q84n"
Apr 17 11:33:00.537203 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:00.537182 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2da30d47-d7ea-47f4-a489-4729c8989cef-metrics-tls\") pod \"dns-default-5q84n\" (UID: \"2da30d47-d7ea-47f4-a489-4729c8989cef\") " pod="openshift-dns/dns-default-5q84n"
Apr 17 11:33:00.642558 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:00.642526 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-5ntx8\""
Apr 17 11:33:00.650754 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:00.650717 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-9b4668746-kmh5p"
Apr 17 11:33:00.768583 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:00.768531 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-9b4668746-kmh5p"]
Apr 17 11:33:00.770643 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:33:00.770618 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35579630_eea3_41d9_8ca1_8408a45d5896.slice/crio-6b08ed0eb1770bd91729faa3ffd8bf02e28a36663dfb3a37a02fa0016b9e9ff6 WatchSource:0}: Error finding container 6b08ed0eb1770bd91729faa3ffd8bf02e28a36663dfb3a37a02fa0016b9e9ff6: Status 404 returned error can't find the container with id 6b08ed0eb1770bd91729faa3ffd8bf02e28a36663dfb3a37a02fa0016b9e9ff6
Apr 17 11:33:01.151748 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:01.151657 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-9b4668746-kmh5p" event={"ID":"35579630-eea3-41d9-8ca1-8408a45d5896","Type":"ContainerStarted","Data":"8df2d51b1ec846cf9931bebfd3899e8148168aac84987ba723a0a93d524958ea"}
Apr 17 11:33:01.151748 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:01.151715 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-9b4668746-kmh5p" event={"ID":"35579630-eea3-41d9-8ca1-8408a45d5896","Type":"ContainerStarted","Data":"6b08ed0eb1770bd91729faa3ffd8bf02e28a36663dfb3a37a02fa0016b9e9ff6"}
Apr 17 11:33:01.151915 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:01.151804 2567 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-9b4668746-kmh5p"
Apr 17 11:33:01.173964 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:01.173919 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-9b4668746-kmh5p" podStartSLOduration=161.173904715 podStartE2EDuration="2m41.173904715s" podCreationTimestamp="2026-04-17 11:30:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 11:33:01.172727001 +0000 UTC m=+162.010964757" watchObservedRunningTime="2026-04-17 11:33:01.173904715 +0000 UTC m=+162.012142471"
Apr 17 11:33:02.976573 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:02.976529 2567 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-5676c8c784-w2jhn"]
Apr 17 11:33:02.980824 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:02.980793 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-5676c8c784-w2jhn"
Apr 17 11:33:02.984543 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:02.984520 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"metrics-client-ca\""
Apr 17 11:33:02.984670 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:02.984577 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-tls\""
Apr 17 11:33:02.984670 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:02.984519 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-kube-rbac-proxy-config\""
Apr 17 11:33:02.984670 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:02.984635 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"prometheus-operator-dockercfg-w8gp9\""
Apr 17 11:33:02.984670 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:02.984519 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kube-root-ca.crt\""
Apr 17 11:33:02.984670 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:02.984531 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"openshift-service-ca.crt\""
Apr 17 11:33:02.986201 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:02.986181 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-5676c8c784-w2jhn"]
Apr 17 11:33:03.056391 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:03.056359 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/14a3d4aa-4c23-4e2b-801d-34b9b27b9941-prometheus-operator-tls\") pod \"prometheus-operator-5676c8c784-w2jhn\" (UID: \"14a3d4aa-4c23-4e2b-801d-34b9b27b9941\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-w2jhn"
Apr 17 11:33:03.056391 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:03.056394 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mw42\" (UniqueName: \"kubernetes.io/projected/14a3d4aa-4c23-4e2b-801d-34b9b27b9941-kube-api-access-8mw42\") pod \"prometheus-operator-5676c8c784-w2jhn\" (UID: \"14a3d4aa-4c23-4e2b-801d-34b9b27b9941\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-w2jhn"
Apr 17 11:33:03.056593 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:03.056428 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/14a3d4aa-4c23-4e2b-801d-34b9b27b9941-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5676c8c784-w2jhn\" (UID: \"14a3d4aa-4c23-4e2b-801d-34b9b27b9941\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-w2jhn"
Apr 17 11:33:03.056593 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:03.056542 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/14a3d4aa-4c23-4e2b-801d-34b9b27b9941-metrics-client-ca\") pod \"prometheus-operator-5676c8c784-w2jhn\" (UID: \"14a3d4aa-4c23-4e2b-801d-34b9b27b9941\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-w2jhn"
Apr 17 11:33:03.156884 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:03.156858 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/14a3d4aa-4c23-4e2b-801d-34b9b27b9941-prometheus-operator-tls\") pod \"prometheus-operator-5676c8c784-w2jhn\" (UID: \"14a3d4aa-4c23-4e2b-801d-34b9b27b9941\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-w2jhn"
Apr 17 11:33:03.157021 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:03.156890 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8mw42\" (UniqueName: \"kubernetes.io/projected/14a3d4aa-4c23-4e2b-801d-34b9b27b9941-kube-api-access-8mw42\") pod \"prometheus-operator-5676c8c784-w2jhn\" (UID: \"14a3d4aa-4c23-4e2b-801d-34b9b27b9941\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-w2jhn"
Apr 17 11:33:03.157021 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:03.156914 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/14a3d4aa-4c23-4e2b-801d-34b9b27b9941-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5676c8c784-w2jhn\" (UID: \"14a3d4aa-4c23-4e2b-801d-34b9b27b9941\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-w2jhn"
Apr 17 11:33:03.157021 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:03.156970 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/14a3d4aa-4c23-4e2b-801d-34b9b27b9941-metrics-client-ca\") pod \"prometheus-operator-5676c8c784-w2jhn\" (UID: \"14a3d4aa-4c23-4e2b-801d-34b9b27b9941\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-w2jhn"
Apr 17 11:33:03.157551 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:03.157525 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/14a3d4aa-4c23-4e2b-801d-34b9b27b9941-metrics-client-ca\") pod \"prometheus-operator-5676c8c784-w2jhn\" (UID: \"14a3d4aa-4c23-4e2b-801d-34b9b27b9941\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-w2jhn"
Apr 17 11:33:03.159353 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:03.159301 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/14a3d4aa-4c23-4e2b-801d-34b9b27b9941-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5676c8c784-w2jhn\" (UID: \"14a3d4aa-4c23-4e2b-801d-34b9b27b9941\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-w2jhn"
Apr 17 11:33:03.159562 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:03.159541 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/14a3d4aa-4c23-4e2b-801d-34b9b27b9941-prometheus-operator-tls\") pod \"prometheus-operator-5676c8c784-w2jhn\" (UID: \"14a3d4aa-4c23-4e2b-801d-34b9b27b9941\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-w2jhn"
Apr 17 11:33:03.165361 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:03.165334 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mw42\" (UniqueName: \"kubernetes.io/projected/14a3d4aa-4c23-4e2b-801d-34b9b27b9941-kube-api-access-8mw42\") pod \"prometheus-operator-5676c8c784-w2jhn\" (UID: \"14a3d4aa-4c23-4e2b-801d-34b9b27b9941\") " pod="openshift-monitoring/prometheus-operator-5676c8c784-w2jhn"
Apr 17 11:33:03.290525 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:03.290434 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-5676c8c784-w2jhn"
Apr 17 11:33:03.410277 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:03.410243 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-5676c8c784-w2jhn"]
Apr 17 11:33:03.413243 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:33:03.413213 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14a3d4aa_4c23_4e2b_801d_34b9b27b9941.slice/crio-e5621fefcc380cf9dda87cb2bb256b48374559ae79afd9a4bcfcef087cd822e7 WatchSource:0}: Error finding container e5621fefcc380cf9dda87cb2bb256b48374559ae79afd9a4bcfcef087cd822e7: Status 404 returned error can't find the container with id e5621fefcc380cf9dda87cb2bb256b48374559ae79afd9a4bcfcef087cd822e7
Apr 17 11:33:04.160648 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:04.160612 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5676c8c784-w2jhn" event={"ID":"14a3d4aa-4c23-4e2b-801d-34b9b27b9941","Type":"ContainerStarted","Data":"e5621fefcc380cf9dda87cb2bb256b48374559ae79afd9a4bcfcef087cd822e7"}
Apr 17 11:33:05.164732 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:05.164698 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5676c8c784-w2jhn" event={"ID":"14a3d4aa-4c23-4e2b-801d-34b9b27b9941","Type":"ContainerStarted","Data":"51e05dcc480185a1862127dd9edb6d4abe52e3818e868cabfe1fcd56c8e9ea05"}
Apr 17 11:33:05.164732 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:05.164733 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5676c8c784-w2jhn" event={"ID":"14a3d4aa-4c23-4e2b-801d-34b9b27b9941","Type":"ContainerStarted","Data":"8d44021ba9f2f263369affa0a0a445ba3e9a0ad848e3fbb434bf5a478b4af784"}
Apr 17 11:33:05.181501 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:05.181460 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-5676c8c784-w2jhn" podStartSLOduration=1.973588436 podStartE2EDuration="3.181445817s" podCreationTimestamp="2026-04-17 11:33:02 +0000 UTC" firstStartedPulling="2026-04-17 11:33:03.415340816 +0000 UTC m=+164.253578551" lastFinishedPulling="2026-04-17 11:33:04.623198198 +0000 UTC m=+165.461435932" observedRunningTime="2026-04-17 11:33:05.180478019 +0000 UTC m=+166.018715774" watchObservedRunningTime="2026-04-17 11:33:05.181445817 +0000 UTC m=+166.019683573"
Apr 17 11:33:07.316347 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.316318 2567 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-69db897b98-9g9xm"]
Apr 17 11:33:07.321050 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.321029 2567 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-4qgkz"]
Apr 17 11:33:07.321228 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.321208 2567 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-69db897b98-9g9xm" Apr 17 11:33:07.323547 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.323521 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"kube-state-metrics-kube-rbac-proxy-config\"" Apr 17 11:33:07.323661 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.323565 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"kube-state-metrics-tls\"" Apr 17 11:33:07.323661 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.323598 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"kube-state-metrics-dockercfg-h4jxv\"" Apr 17 11:33:07.324215 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.324200 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-4qgkz" Apr 17 11:33:07.324811 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.324781 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kube-state-metrics-custom-resource-state-configmap\"" Apr 17 11:33:07.326699 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.326660 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-kube-rbac-proxy-config\"" Apr 17 11:33:07.327039 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.327020 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-tls\"" Apr 17 11:33:07.327159 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.327042 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"node-exporter-accelerators-collector-config\"" Apr 17 11:33:07.327309 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.327106 2567 reflector.go:430] "Caches 
populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-dockercfg-wj9ml\"" Apr 17 11:33:07.327584 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.327569 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-69db897b98-9g9xm"] Apr 17 11:33:07.494027 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.493980 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1c3be40e-d817-4bb6-b190-0f2a3106d3e5-sys\") pod \"node-exporter-4qgkz\" (UID: \"1c3be40e-d817-4bb6-b190-0f2a3106d3e5\") " pod="openshift-monitoring/node-exporter-4qgkz" Apr 17 11:33:07.494027 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.494027 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1c3be40e-d817-4bb6-b190-0f2a3106d3e5-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-4qgkz\" (UID: \"1c3be40e-d817-4bb6-b190-0f2a3106d3e5\") " pod="openshift-monitoring/node-exporter-4qgkz" Apr 17 11:33:07.494248 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.494047 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h4kz\" (UniqueName: \"kubernetes.io/projected/6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe-kube-api-access-6h4kz\") pod \"kube-state-metrics-69db897b98-9g9xm\" (UID: \"6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9g9xm" Apr 17 11:33:07.494248 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.494084 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe-kube-state-metrics-tls\") pod \"kube-state-metrics-69db897b98-9g9xm\" (UID: 
\"6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9g9xm" Apr 17 11:33:07.494248 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.494114 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1c3be40e-d817-4bb6-b190-0f2a3106d3e5-metrics-client-ca\") pod \"node-exporter-4qgkz\" (UID: \"1c3be40e-d817-4bb6-b190-0f2a3106d3e5\") " pod="openshift-monitoring/node-exporter-4qgkz" Apr 17 11:33:07.494248 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.494154 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-69db897b98-9g9xm\" (UID: \"6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9g9xm" Apr 17 11:33:07.494248 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.494202 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/1c3be40e-d817-4bb6-b190-0f2a3106d3e5-root\") pod \"node-exporter-4qgkz\" (UID: \"1c3be40e-d817-4bb6-b190-0f2a3106d3e5\") " pod="openshift-monitoring/node-exporter-4qgkz" Apr 17 11:33:07.494248 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.494224 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/1c3be40e-d817-4bb6-b190-0f2a3106d3e5-node-exporter-wtmp\") pod \"node-exporter-4qgkz\" (UID: \"1c3be40e-d817-4bb6-b190-0f2a3106d3e5\") " pod="openshift-monitoring/node-exporter-4qgkz" Apr 17 11:33:07.494473 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.494252 2567 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe-volume-directive-shadow\") pod \"kube-state-metrics-69db897b98-9g9xm\" (UID: \"6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9g9xm" Apr 17 11:33:07.494473 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.494269 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe-metrics-client-ca\") pod \"kube-state-metrics-69db897b98-9g9xm\" (UID: \"6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9g9xm" Apr 17 11:33:07.494473 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.494283 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/1c3be40e-d817-4bb6-b190-0f2a3106d3e5-node-exporter-tls\") pod \"node-exporter-4qgkz\" (UID: \"1c3be40e-d817-4bb6-b190-0f2a3106d3e5\") " pod="openshift-monitoring/node-exporter-4qgkz" Apr 17 11:33:07.494473 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.494306 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svp4f\" (UniqueName: \"kubernetes.io/projected/1c3be40e-d817-4bb6-b190-0f2a3106d3e5-kube-api-access-svp4f\") pod \"node-exporter-4qgkz\" (UID: \"1c3be40e-d817-4bb6-b190-0f2a3106d3e5\") " pod="openshift-monitoring/node-exporter-4qgkz" Apr 17 11:33:07.494473 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.494337 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/1c3be40e-d817-4bb6-b190-0f2a3106d3e5-node-exporter-textfile\") pod 
\"node-exporter-4qgkz\" (UID: \"1c3be40e-d817-4bb6-b190-0f2a3106d3e5\") " pod="openshift-monitoring/node-exporter-4qgkz" Apr 17 11:33:07.494473 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.494356 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-69db897b98-9g9xm\" (UID: \"6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9g9xm" Apr 17 11:33:07.494473 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.494375 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/1c3be40e-d817-4bb6-b190-0f2a3106d3e5-node-exporter-accelerators-collector-config\") pod \"node-exporter-4qgkz\" (UID: \"1c3be40e-d817-4bb6-b190-0f2a3106d3e5\") " pod="openshift-monitoring/node-exporter-4qgkz" Apr 17 11:33:07.595662 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.595571 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-69db897b98-9g9xm\" (UID: \"6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9g9xm" Apr 17 11:33:07.595662 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.595619 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/1c3be40e-d817-4bb6-b190-0f2a3106d3e5-node-exporter-accelerators-collector-config\") pod \"node-exporter-4qgkz\" (UID: \"1c3be40e-d817-4bb6-b190-0f2a3106d3e5\") " 
pod="openshift-monitoring/node-exporter-4qgkz" Apr 17 11:33:07.595879 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.595712 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1c3be40e-d817-4bb6-b190-0f2a3106d3e5-sys\") pod \"node-exporter-4qgkz\" (UID: \"1c3be40e-d817-4bb6-b190-0f2a3106d3e5\") " pod="openshift-monitoring/node-exporter-4qgkz" Apr 17 11:33:07.595879 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.595751 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1c3be40e-d817-4bb6-b190-0f2a3106d3e5-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-4qgkz\" (UID: \"1c3be40e-d817-4bb6-b190-0f2a3106d3e5\") " pod="openshift-monitoring/node-exporter-4qgkz" Apr 17 11:33:07.595879 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.595769 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6h4kz\" (UniqueName: \"kubernetes.io/projected/6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe-kube-api-access-6h4kz\") pod \"kube-state-metrics-69db897b98-9g9xm\" (UID: \"6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9g9xm" Apr 17 11:33:07.595879 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.595793 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe-kube-state-metrics-tls\") pod \"kube-state-metrics-69db897b98-9g9xm\" (UID: \"6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9g9xm" Apr 17 11:33:07.595879 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.595809 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/1c3be40e-d817-4bb6-b190-0f2a3106d3e5-sys\") pod \"node-exporter-4qgkz\" (UID: \"1c3be40e-d817-4bb6-b190-0f2a3106d3e5\") " pod="openshift-monitoring/node-exporter-4qgkz" Apr 17 11:33:07.595879 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.595830 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1c3be40e-d817-4bb6-b190-0f2a3106d3e5-metrics-client-ca\") pod \"node-exporter-4qgkz\" (UID: \"1c3be40e-d817-4bb6-b190-0f2a3106d3e5\") " pod="openshift-monitoring/node-exporter-4qgkz" Apr 17 11:33:07.595879 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.595865 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-69db897b98-9g9xm\" (UID: \"6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9g9xm" Apr 17 11:33:07.596220 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.595900 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/1c3be40e-d817-4bb6-b190-0f2a3106d3e5-root\") pod \"node-exporter-4qgkz\" (UID: \"1c3be40e-d817-4bb6-b190-0f2a3106d3e5\") " pod="openshift-monitoring/node-exporter-4qgkz" Apr 17 11:33:07.596220 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.595938 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/1c3be40e-d817-4bb6-b190-0f2a3106d3e5-node-exporter-wtmp\") pod \"node-exporter-4qgkz\" (UID: \"1c3be40e-d817-4bb6-b190-0f2a3106d3e5\") " pod="openshift-monitoring/node-exporter-4qgkz" Apr 17 11:33:07.596220 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.595979 2567 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe-volume-directive-shadow\") pod \"kube-state-metrics-69db897b98-9g9xm\" (UID: \"6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9g9xm" Apr 17 11:33:07.596220 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:33:07.595985 2567 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: secret "kube-state-metrics-tls" not found Apr 17 11:33:07.596220 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.596007 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe-metrics-client-ca\") pod \"kube-state-metrics-69db897b98-9g9xm\" (UID: \"6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9g9xm" Apr 17 11:33:07.596220 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.596029 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/1c3be40e-d817-4bb6-b190-0f2a3106d3e5-node-exporter-tls\") pod \"node-exporter-4qgkz\" (UID: \"1c3be40e-d817-4bb6-b190-0f2a3106d3e5\") " pod="openshift-monitoring/node-exporter-4qgkz" Apr 17 11:33:07.596220 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:33:07.596049 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe-kube-state-metrics-tls podName:6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe nodeName:}" failed. No retries permitted until 2026-04-17 11:33:08.096029505 +0000 UTC m=+168.934267253 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe-kube-state-metrics-tls") pod "kube-state-metrics-69db897b98-9g9xm" (UID: "6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe") : secret "kube-state-metrics-tls" not found Apr 17 11:33:07.596220 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.596078 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-svp4f\" (UniqueName: \"kubernetes.io/projected/1c3be40e-d817-4bb6-b190-0f2a3106d3e5-kube-api-access-svp4f\") pod \"node-exporter-4qgkz\" (UID: \"1c3be40e-d817-4bb6-b190-0f2a3106d3e5\") " pod="openshift-monitoring/node-exporter-4qgkz" Apr 17 11:33:07.596220 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:33:07.596095 2567 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found Apr 17 11:33:07.596220 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.596115 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/1c3be40e-d817-4bb6-b190-0f2a3106d3e5-node-exporter-textfile\") pod \"node-exporter-4qgkz\" (UID: \"1c3be40e-d817-4bb6-b190-0f2a3106d3e5\") " pod="openshift-monitoring/node-exporter-4qgkz" Apr 17 11:33:07.596220 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:33:07.596143 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c3be40e-d817-4bb6-b190-0f2a3106d3e5-node-exporter-tls podName:1c3be40e-d817-4bb6-b190-0f2a3106d3e5 nodeName:}" failed. No retries permitted until 2026-04-17 11:33:08.096130255 +0000 UTC m=+168.934368004 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/1c3be40e-d817-4bb6-b190-0f2a3106d3e5-node-exporter-tls") pod "node-exporter-4qgkz" (UID: "1c3be40e-d817-4bb6-b190-0f2a3106d3e5") : secret "node-exporter-tls" not found Apr 17 11:33:07.596220 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.596189 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/1c3be40e-d817-4bb6-b190-0f2a3106d3e5-root\") pod \"node-exporter-4qgkz\" (UID: \"1c3be40e-d817-4bb6-b190-0f2a3106d3e5\") " pod="openshift-monitoring/node-exporter-4qgkz" Apr 17 11:33:07.596785 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.596326 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/1c3be40e-d817-4bb6-b190-0f2a3106d3e5-node-exporter-accelerators-collector-config\") pod \"node-exporter-4qgkz\" (UID: \"1c3be40e-d817-4bb6-b190-0f2a3106d3e5\") " pod="openshift-monitoring/node-exporter-4qgkz" Apr 17 11:33:07.596785 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.596372 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1c3be40e-d817-4bb6-b190-0f2a3106d3e5-metrics-client-ca\") pod \"node-exporter-4qgkz\" (UID: \"1c3be40e-d817-4bb6-b190-0f2a3106d3e5\") " pod="openshift-monitoring/node-exporter-4qgkz" Apr 17 11:33:07.596785 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.596427 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/1c3be40e-d817-4bb6-b190-0f2a3106d3e5-node-exporter-textfile\") pod \"node-exporter-4qgkz\" (UID: \"1c3be40e-d817-4bb6-b190-0f2a3106d3e5\") " pod="openshift-monitoring/node-exporter-4qgkz" Apr 17 11:33:07.596785 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.596455 2567 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe-volume-directive-shadow\") pod \"kube-state-metrics-69db897b98-9g9xm\" (UID: \"6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9g9xm" Apr 17 11:33:07.596785 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.596572 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/1c3be40e-d817-4bb6-b190-0f2a3106d3e5-node-exporter-wtmp\") pod \"node-exporter-4qgkz\" (UID: \"1c3be40e-d817-4bb6-b190-0f2a3106d3e5\") " pod="openshift-monitoring/node-exporter-4qgkz" Apr 17 11:33:07.596785 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.596637 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-69db897b98-9g9xm\" (UID: \"6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9g9xm" Apr 17 11:33:07.596984 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.596854 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe-metrics-client-ca\") pod \"kube-state-metrics-69db897b98-9g9xm\" (UID: \"6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9g9xm" Apr 17 11:33:07.598335 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.598306 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1c3be40e-d817-4bb6-b190-0f2a3106d3e5-node-exporter-kube-rbac-proxy-config\") pod 
\"node-exporter-4qgkz\" (UID: \"1c3be40e-d817-4bb6-b190-0f2a3106d3e5\") " pod="openshift-monitoring/node-exporter-4qgkz" Apr 17 11:33:07.598506 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.598490 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-69db897b98-9g9xm\" (UID: \"6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9g9xm" Apr 17 11:33:07.604919 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.604900 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-svp4f\" (UniqueName: \"kubernetes.io/projected/1c3be40e-d817-4bb6-b190-0f2a3106d3e5-kube-api-access-svp4f\") pod \"node-exporter-4qgkz\" (UID: \"1c3be40e-d817-4bb6-b190-0f2a3106d3e5\") " pod="openshift-monitoring/node-exporter-4qgkz" Apr 17 11:33:07.605081 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:07.605059 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h4kz\" (UniqueName: \"kubernetes.io/projected/6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe-kube-api-access-6h4kz\") pod \"kube-state-metrics-69db897b98-9g9xm\" (UID: \"6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9g9xm" Apr 17 11:33:08.101290 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:08.101253 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe-kube-state-metrics-tls\") pod \"kube-state-metrics-69db897b98-9g9xm\" (UID: \"6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9g9xm" Apr 17 11:33:08.101499 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:08.101417 2567 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/1c3be40e-d817-4bb6-b190-0f2a3106d3e5-node-exporter-tls\") pod \"node-exporter-4qgkz\" (UID: \"1c3be40e-d817-4bb6-b190-0f2a3106d3e5\") " pod="openshift-monitoring/node-exporter-4qgkz" Apr 17 11:33:08.101570 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:33:08.101506 2567 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found Apr 17 11:33:08.101570 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:33:08.101567 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c3be40e-d817-4bb6-b190-0f2a3106d3e5-node-exporter-tls podName:1c3be40e-d817-4bb6-b190-0f2a3106d3e5 nodeName:}" failed. No retries permitted until 2026-04-17 11:33:09.101553768 +0000 UTC m=+169.939791503 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/1c3be40e-d817-4bb6-b190-0f2a3106d3e5-node-exporter-tls") pod "node-exporter-4qgkz" (UID: "1c3be40e-d817-4bb6-b190-0f2a3106d3e5") : secret "node-exporter-tls" not found Apr 17 11:33:08.103728 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:08.103703 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe-kube-state-metrics-tls\") pod \"kube-state-metrics-69db897b98-9g9xm\" (UID: \"6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe\") " pod="openshift-monitoring/kube-state-metrics-69db897b98-9g9xm" Apr 17 11:33:08.237297 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:08.237260 2567 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-69db897b98-9g9xm" Apr 17 11:33:08.365729 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:08.365625 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-69db897b98-9g9xm"] Apr 17 11:33:08.368261 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:33:08.368231 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fa3372d_8fe5_4d0e_b7ab_179efcea0bbe.slice/crio-36afe44f9cc0513023f13ec4867803a70c884225259c6f134a53f506dbaf2018 WatchSource:0}: Error finding container 36afe44f9cc0513023f13ec4867803a70c884225259c6f134a53f506dbaf2018: Status 404 returned error can't find the container with id 36afe44f9cc0513023f13ec4867803a70c884225259c6f134a53f506dbaf2018 Apr 17 11:33:08.709779 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:08.709746 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tnlq8" Apr 17 11:33:08.709943 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:08.709744 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-mcjdh" Apr 17 11:33:08.712539 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:08.712522 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-hmj98\"" Apr 17 11:33:08.720787 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:08.720771 2567 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-mcjdh" Apr 17 11:33:08.833698 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:08.833658 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-mcjdh"] Apr 17 11:33:08.835590 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:33:08.835562 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda2c16b4_2e18_4310_881d_5febd92c9d3d.slice/crio-98542ab240a9e78a8e8976ab0a2b0d2d39be86e04d1dfe49a8be369c0d0d4921 WatchSource:0}: Error finding container 98542ab240a9e78a8e8976ab0a2b0d2d39be86e04d1dfe49a8be369c0d0d4921: Status 404 returned error can't find the container with id 98542ab240a9e78a8e8976ab0a2b0d2d39be86e04d1dfe49a8be369c0d0d4921 Apr 17 11:33:09.110753 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:09.110401 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/1c3be40e-d817-4bb6-b190-0f2a3106d3e5-node-exporter-tls\") pod \"node-exporter-4qgkz\" (UID: \"1c3be40e-d817-4bb6-b190-0f2a3106d3e5\") " pod="openshift-monitoring/node-exporter-4qgkz" Apr 17 11:33:09.113340 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:09.113309 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/1c3be40e-d817-4bb6-b190-0f2a3106d3e5-node-exporter-tls\") pod \"node-exporter-4qgkz\" (UID: \"1c3be40e-d817-4bb6-b190-0f2a3106d3e5\") " pod="openshift-monitoring/node-exporter-4qgkz" Apr 17 11:33:09.141472 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:09.141442 2567 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-4qgkz" Apr 17 11:33:09.177700 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:09.177613 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-mcjdh" event={"ID":"da2c16b4-2e18-4310-881d-5febd92c9d3d","Type":"ContainerStarted","Data":"98542ab240a9e78a8e8976ab0a2b0d2d39be86e04d1dfe49a8be369c0d0d4921"} Apr 17 11:33:09.178776 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:09.178752 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-69db897b98-9g9xm" event={"ID":"6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe","Type":"ContainerStarted","Data":"36afe44f9cc0513023f13ec4867803a70c884225259c6f134a53f506dbaf2018"} Apr 17 11:33:09.442301 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:33:09.442273 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c3be40e_d817_4bb6_b190_0f2a3106d3e5.slice/crio-51d349ae68b34c221cb38b08d8b88a678abf7bbcf233696790cbdf112e950f40 WatchSource:0}: Error finding container 51d349ae68b34c221cb38b08d8b88a678abf7bbcf233696790cbdf112e950f40: Status 404 returned error can't find the container with id 51d349ae68b34c221cb38b08d8b88a678abf7bbcf233696790cbdf112e950f40 Apr 17 11:33:09.715618 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:09.714954 2567 scope.go:117] "RemoveContainer" containerID="d5a72e7b47bbd1ef95657f51bc783dc8a399a97c9fa8260f3213d567dc42ac40" Apr 17 11:33:09.715618 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:33:09.715162 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=console-operator pod=console-operator-9d4b6777b-j8vsm_openshift-console-operator(b4b1d00c-98b8-45c5-80c4-0362b3303384)\"" pod="openshift-console-operator/console-operator-9d4b6777b-j8vsm" 
podUID="b4b1d00c-98b8-45c5-80c4-0362b3303384" Apr 17 11:33:10.184101 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:10.184054 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-4qgkz" event={"ID":"1c3be40e-d817-4bb6-b190-0f2a3106d3e5","Type":"ContainerStarted","Data":"51d349ae68b34c221cb38b08d8b88a678abf7bbcf233696790cbdf112e950f40"} Apr 17 11:33:10.186124 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:10.186096 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-69db897b98-9g9xm" event={"ID":"6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe","Type":"ContainerStarted","Data":"390c640cb8eeef5d5e6eee4f6293be419e4c290874f531bd5a3709c140357cb9"} Apr 17 11:33:10.186242 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:10.186129 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-69db897b98-9g9xm" event={"ID":"6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe","Type":"ContainerStarted","Data":"612ae0760f734525544aac76068ced8c00a2cb99e5cbd10fe45412d76ca2d033"} Apr 17 11:33:10.186242 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:10.186139 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-69db897b98-9g9xm" event={"ID":"6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe","Type":"ContainerStarted","Data":"a74c108b90fd6e38e4f339afe1d22c5f6e62bbeaebfe587e46e59d613b198255"} Apr 17 11:33:10.208148 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:10.208086 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-69db897b98-9g9xm" podStartSLOduration=2.0959938989999998 podStartE2EDuration="3.208067802s" podCreationTimestamp="2026-04-17 11:33:07 +0000 UTC" firstStartedPulling="2026-04-17 11:33:08.370103029 +0000 UTC m=+169.208340763" lastFinishedPulling="2026-04-17 11:33:09.482176917 +0000 UTC m=+170.320414666" observedRunningTime="2026-04-17 11:33:10.207196903 +0000 
UTC m=+171.045434671" watchObservedRunningTime="2026-04-17 11:33:10.208067802 +0000 UTC m=+171.046305560" Apr 17 11:33:10.709160 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:10.709111 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5q84n" Apr 17 11:33:10.712145 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:10.712095 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-ltml9\"" Apr 17 11:33:10.720061 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:10.720033 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5q84n" Apr 17 11:33:10.869373 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:10.869347 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5q84n"] Apr 17 11:33:10.872735 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:33:10.872673 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2da30d47_d7ea_47f4_a489_4729c8989cef.slice/crio-bd0b8dbee96b60e0033937133d63b2798530659bb4e09f4ae07371ad7c55f7c7 WatchSource:0}: Error finding container bd0b8dbee96b60e0033937133d63b2798530659bb4e09f4ae07371ad7c55f7c7: Status 404 returned error can't find the container with id bd0b8dbee96b60e0033937133d63b2798530659bb4e09f4ae07371ad7c55f7c7 Apr 17 11:33:11.190774 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:11.190743 2567 generic.go:358] "Generic (PLEG): container finished" podID="1c3be40e-d817-4bb6-b190-0f2a3106d3e5" containerID="4bf76b057e24d2e66ab16a9e3fc869b3ab3876030cb8d8d2ead84a5294cd3f3a" exitCode=0 Apr 17 11:33:11.190959 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:11.190828 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-4qgkz" 
event={"ID":"1c3be40e-d817-4bb6-b190-0f2a3106d3e5","Type":"ContainerDied","Data":"4bf76b057e24d2e66ab16a9e3fc869b3ab3876030cb8d8d2ead84a5294cd3f3a"} Apr 17 11:33:11.192188 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:11.192165 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-mcjdh" event={"ID":"da2c16b4-2e18-4310-881d-5febd92c9d3d","Type":"ContainerStarted","Data":"422d7a299be4bf0662fa64bddd6e000e8a1575a5195fdb09f566f06c7d7f2125"} Apr 17 11:33:11.193184 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:11.193162 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5q84n" event={"ID":"2da30d47-d7ea-47f4-a489-4729c8989cef","Type":"ContainerStarted","Data":"bd0b8dbee96b60e0033937133d63b2798530659bb4e09f4ae07371ad7c55f7c7"} Apr 17 11:33:11.223653 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:11.223598 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-mcjdh" podStartSLOduration=137.260982127 podStartE2EDuration="2m19.223577211s" podCreationTimestamp="2026-04-17 11:30:52 +0000 UTC" firstStartedPulling="2026-04-17 11:33:08.837454845 +0000 UTC m=+169.675692580" lastFinishedPulling="2026-04-17 11:33:10.800049928 +0000 UTC m=+171.638287664" observedRunningTime="2026-04-17 11:33:11.222018701 +0000 UTC m=+172.060256458" watchObservedRunningTime="2026-04-17 11:33:11.223577211 +0000 UTC m=+172.061814968" Apr 17 11:33:12.201323 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:12.201286 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-4qgkz" event={"ID":"1c3be40e-d817-4bb6-b190-0f2a3106d3e5","Type":"ContainerStarted","Data":"24b01ce7d2face274e9981919b1ea2e8926c60dd7fcf46747a73f9ed2cca0995"} Apr 17 11:33:12.201767 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:12.201335 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-4qgkz" 
event={"ID":"1c3be40e-d817-4bb6-b190-0f2a3106d3e5","Type":"ContainerStarted","Data":"1546cc5741907a14c347ea737ca7c15b32360dccdf8a2b369c664b638d82564d"} Apr 17 11:33:12.223510 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:12.223439 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-4qgkz" podStartSLOduration=4.428839478 podStartE2EDuration="5.223421448s" podCreationTimestamp="2026-04-17 11:33:07 +0000 UTC" firstStartedPulling="2026-04-17 11:33:09.44391282 +0000 UTC m=+170.282150555" lastFinishedPulling="2026-04-17 11:33:10.238494789 +0000 UTC m=+171.076732525" observedRunningTime="2026-04-17 11:33:12.222325687 +0000 UTC m=+173.060563444" watchObservedRunningTime="2026-04-17 11:33:12.223421448 +0000 UTC m=+173.061659207" Apr 17 11:33:13.206112 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:13.206078 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5q84n" event={"ID":"2da30d47-d7ea-47f4-a489-4729c8989cef","Type":"ContainerStarted","Data":"bca95bd856deb05af6c238b92c64fe2c04453258b2c7bc7bff7c83eacfc82c8c"} Apr 17 11:33:13.206112 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:13.206116 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5q84n" event={"ID":"2da30d47-d7ea-47f4-a489-4729c8989cef","Type":"ContainerStarted","Data":"99244689cff6e9938818ef8daae32f83e34b47f5155fd9ac1a9831a8070493e1"} Apr 17 11:33:13.206566 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:13.206230 2567 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-5q84n" Apr 17 11:33:13.224198 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:13.224145 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-5q84n" podStartSLOduration=139.93604133 podStartE2EDuration="2m21.22412903s" podCreationTimestamp="2026-04-17 11:30:52 +0000 UTC" firstStartedPulling="2026-04-17 
11:33:10.874993293 +0000 UTC m=+171.713231041" lastFinishedPulling="2026-04-17 11:33:12.163080997 +0000 UTC m=+173.001318741" observedRunningTime="2026-04-17 11:33:13.222480709 +0000 UTC m=+174.060718465" watchObservedRunningTime="2026-04-17 11:33:13.22412903 +0000 UTC m=+174.062366777" Apr 17 11:33:20.655455 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:20.655421 2567 patch_prober.go:28] interesting pod/image-registry-9b4668746-kmh5p container/registry namespace/openshift-image-registry: Liveness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body={"errors":[{"code":"UNAVAILABLE","message":"service unavailable","detail":"health check failed: please see /debug/health"}]} Apr 17 11:33:20.655831 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:20.655473 2567 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-9b4668746-kmh5p" podUID="35579630-eea3-41d9-8ca1-8408a45d5896" containerName="registry" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 17 11:33:20.709708 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:20.709664 2567 scope.go:117] "RemoveContainer" containerID="d5a72e7b47bbd1ef95657f51bc783dc8a399a97c9fa8260f3213d567dc42ac40" Apr 17 11:33:21.232708 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:21.232663 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-j8vsm_b4b1d00c-98b8-45c5-80c4-0362b3303384/console-operator/2.log" Apr 17 11:33:21.232876 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:21.232796 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-9d4b6777b-j8vsm" event={"ID":"b4b1d00c-98b8-45c5-80c4-0362b3303384","Type":"ContainerStarted","Data":"c6660654e6126a41729df0650e9d2f1dc752dc1c57cecfd999cd452a108edd1b"} Apr 17 11:33:21.233187 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:21.233148 2567 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-9d4b6777b-j8vsm" Apr 17 11:33:21.238276 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:21.238245 2567 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-9d4b6777b-j8vsm" Apr 17 11:33:21.261982 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:21.261930 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-9d4b6777b-j8vsm" podStartSLOduration=54.896635235 podStartE2EDuration="57.261914936s" podCreationTimestamp="2026-04-17 11:32:24 +0000 UTC" firstStartedPulling="2026-04-17 11:32:25.097093121 +0000 UTC m=+125.935330855" lastFinishedPulling="2026-04-17 11:32:27.462372822 +0000 UTC m=+128.300610556" observedRunningTime="2026-04-17 11:33:21.260947492 +0000 UTC m=+182.099185249" watchObservedRunningTime="2026-04-17 11:33:21.261914936 +0000 UTC m=+182.100152691" Apr 17 11:33:22.158238 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:22.158205 2567 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-9b4668746-kmh5p" Apr 17 11:33:23.211354 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:23.211319 2567 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-5q84n" Apr 17 11:33:25.549604 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:25.549568 2567 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-9b4668746-kmh5p"] Apr 17 11:33:50.573085 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:50.573042 2567 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-image-registry/image-registry-9b4668746-kmh5p" podUID="35579630-eea3-41d9-8ca1-8408a45d5896" containerName="registry" containerID="cri-o://8df2d51b1ec846cf9931bebfd3899e8148168aac84987ba723a0a93d524958ea" gracePeriod=30 Apr 17 
11:33:51.801411 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:51.801389 2567 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-9b4668746-kmh5p" Apr 17 11:33:51.963529 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:51.963500 2567 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-bound-sa-token\") pod \"35579630-eea3-41d9-8ca1-8408a45d5896\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " Apr 17 11:33:51.963706 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:51.963546 2567 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/35579630-eea3-41d9-8ca1-8408a45d5896-registry-certificates\") pod \"35579630-eea3-41d9-8ca1-8408a45d5896\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " Apr 17 11:33:51.963706 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:51.963566 2567 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/35579630-eea3-41d9-8ca1-8408a45d5896-ca-trust-extracted\") pod \"35579630-eea3-41d9-8ca1-8408a45d5896\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " Apr 17 11:33:51.963706 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:51.963586 2567 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/35579630-eea3-41d9-8ca1-8408a45d5896-image-registry-private-configuration\") pod \"35579630-eea3-41d9-8ca1-8408a45d5896\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " Apr 17 11:33:51.963706 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:51.963614 2567 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-registry-tls\") pod \"35579630-eea3-41d9-8ca1-8408a45d5896\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " Apr 17 11:33:51.963706 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:51.963638 2567 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfjz7\" (UniqueName: \"kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-kube-api-access-nfjz7\") pod \"35579630-eea3-41d9-8ca1-8408a45d5896\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " Apr 17 11:33:51.963939 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:51.963711 2567 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/35579630-eea3-41d9-8ca1-8408a45d5896-installation-pull-secrets\") pod \"35579630-eea3-41d9-8ca1-8408a45d5896\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " Apr 17 11:33:51.963939 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:51.963738 2567 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35579630-eea3-41d9-8ca1-8408a45d5896-trusted-ca\") pod \"35579630-eea3-41d9-8ca1-8408a45d5896\" (UID: \"35579630-eea3-41d9-8ca1-8408a45d5896\") " Apr 17 11:33:51.964334 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:51.964305 2567 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35579630-eea3-41d9-8ca1-8408a45d5896-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "35579630-eea3-41d9-8ca1-8408a45d5896" (UID: "35579630-eea3-41d9-8ca1-8408a45d5896"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 11:33:51.964922 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:51.964873 2567 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35579630-eea3-41d9-8ca1-8408a45d5896-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "35579630-eea3-41d9-8ca1-8408a45d5896" (UID: "35579630-eea3-41d9-8ca1-8408a45d5896"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 11:33:51.966054 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:51.966008 2567 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "35579630-eea3-41d9-8ca1-8408a45d5896" (UID: "35579630-eea3-41d9-8ca1-8408a45d5896"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 11:33:51.966054 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:51.966038 2567 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35579630-eea3-41d9-8ca1-8408a45d5896-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "35579630-eea3-41d9-8ca1-8408a45d5896" (UID: "35579630-eea3-41d9-8ca1-8408a45d5896"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 11:33:51.968736 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:51.966384 2567 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "35579630-eea3-41d9-8ca1-8408a45d5896" (UID: "35579630-eea3-41d9-8ca1-8408a45d5896"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 11:33:51.968736 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:51.966470 2567 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35579630-eea3-41d9-8ca1-8408a45d5896-image-registry-private-configuration" (OuterVolumeSpecName: "image-registry-private-configuration") pod "35579630-eea3-41d9-8ca1-8408a45d5896" (UID: "35579630-eea3-41d9-8ca1-8408a45d5896"). InnerVolumeSpecName "image-registry-private-configuration". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 11:33:51.971945 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:51.971921 2567 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-kube-api-access-nfjz7" (OuterVolumeSpecName: "kube-api-access-nfjz7") pod "35579630-eea3-41d9-8ca1-8408a45d5896" (UID: "35579630-eea3-41d9-8ca1-8408a45d5896"). InnerVolumeSpecName "kube-api-access-nfjz7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 11:33:51.975546 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:51.975517 2567 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35579630-eea3-41d9-8ca1-8408a45d5896-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "35579630-eea3-41d9-8ca1-8408a45d5896" (UID: "35579630-eea3-41d9-8ca1-8408a45d5896"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 17 11:33:52.064804 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:52.064774 2567 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/35579630-eea3-41d9-8ca1-8408a45d5896-installation-pull-secrets\") on node \"ip-10-0-140-245.ec2.internal\" DevicePath \"\"" Apr 17 11:33:52.064804 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:52.064799 2567 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35579630-eea3-41d9-8ca1-8408a45d5896-trusted-ca\") on node \"ip-10-0-140-245.ec2.internal\" DevicePath \"\"" Apr 17 11:33:52.064804 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:52.064809 2567 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-bound-sa-token\") on node \"ip-10-0-140-245.ec2.internal\" DevicePath \"\"" Apr 17 11:33:52.064986 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:52.064817 2567 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/35579630-eea3-41d9-8ca1-8408a45d5896-registry-certificates\") on node \"ip-10-0-140-245.ec2.internal\" DevicePath \"\"" Apr 17 11:33:52.064986 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:52.064826 2567 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/35579630-eea3-41d9-8ca1-8408a45d5896-ca-trust-extracted\") on node \"ip-10-0-140-245.ec2.internal\" DevicePath \"\"" Apr 17 11:33:52.064986 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:52.064836 2567 reconciler_common.go:299] "Volume detached for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/35579630-eea3-41d9-8ca1-8408a45d5896-image-registry-private-configuration\") on node \"ip-10-0-140-245.ec2.internal\" DevicePath 
\"\"" Apr 17 11:33:52.064986 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:52.064845 2567 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-registry-tls\") on node \"ip-10-0-140-245.ec2.internal\" DevicePath \"\"" Apr 17 11:33:52.064986 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:52.064854 2567 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nfjz7\" (UniqueName: \"kubernetes.io/projected/35579630-eea3-41d9-8ca1-8408a45d5896-kube-api-access-nfjz7\") on node \"ip-10-0-140-245.ec2.internal\" DevicePath \"\"" Apr 17 11:33:52.318115 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:52.318022 2567 generic.go:358] "Generic (PLEG): container finished" podID="35579630-eea3-41d9-8ca1-8408a45d5896" containerID="8df2d51b1ec846cf9931bebfd3899e8148168aac84987ba723a0a93d524958ea" exitCode=0 Apr 17 11:33:52.318115 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:52.318108 2567 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-9b4668746-kmh5p" Apr 17 11:33:52.318341 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:52.318112 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-9b4668746-kmh5p" event={"ID":"35579630-eea3-41d9-8ca1-8408a45d5896","Type":"ContainerDied","Data":"8df2d51b1ec846cf9931bebfd3899e8148168aac84987ba723a0a93d524958ea"} Apr 17 11:33:52.318341 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:52.318159 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-9b4668746-kmh5p" event={"ID":"35579630-eea3-41d9-8ca1-8408a45d5896","Type":"ContainerDied","Data":"6b08ed0eb1770bd91729faa3ffd8bf02e28a36663dfb3a37a02fa0016b9e9ff6"} Apr 17 11:33:52.318341 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:52.318179 2567 scope.go:117] "RemoveContainer" containerID="8df2d51b1ec846cf9931bebfd3899e8148168aac84987ba723a0a93d524958ea" Apr 17 11:33:52.326132 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:52.326116 2567 scope.go:117] "RemoveContainer" containerID="8df2d51b1ec846cf9931bebfd3899e8148168aac84987ba723a0a93d524958ea" Apr 17 11:33:52.326390 ip-10-0-140-245 kubenswrapper[2567]: E0417 11:33:52.326366 2567 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8df2d51b1ec846cf9931bebfd3899e8148168aac84987ba723a0a93d524958ea\": container with ID starting with 8df2d51b1ec846cf9931bebfd3899e8148168aac84987ba723a0a93d524958ea not found: ID does not exist" containerID="8df2d51b1ec846cf9931bebfd3899e8148168aac84987ba723a0a93d524958ea" Apr 17 11:33:52.326465 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:52.326401 2567 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8df2d51b1ec846cf9931bebfd3899e8148168aac84987ba723a0a93d524958ea"} err="failed to get container status 
\"8df2d51b1ec846cf9931bebfd3899e8148168aac84987ba723a0a93d524958ea\": rpc error: code = NotFound desc = could not find container \"8df2d51b1ec846cf9931bebfd3899e8148168aac84987ba723a0a93d524958ea\": container with ID starting with 8df2d51b1ec846cf9931bebfd3899e8148168aac84987ba723a0a93d524958ea not found: ID does not exist" Apr 17 11:33:52.343243 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:52.343218 2567 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-9b4668746-kmh5p"] Apr 17 11:33:52.352053 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:52.352027 2567 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-9b4668746-kmh5p"] Apr 17 11:33:53.713588 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:53.713540 2567 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35579630-eea3-41d9-8ca1-8408a45d5896" path="/var/lib/kubelet/pods/35579630-eea3-41d9-8ca1-8408a45d5896/volumes" Apr 17 11:33:58.337174 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:58.337089 2567 generic.go:358] "Generic (PLEG): container finished" podID="7c7fd4b1-e618-4f37-8c84-dc31d902ec5d" containerID="5915394a2ec9e42ea13ff304e97bd79e3894edb04c0315a0b99c531c0ac0fe5c" exitCode=0 Apr 17 11:33:58.337174 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:58.337163 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-585dfdc468-qvvgv" event={"ID":"7c7fd4b1-e618-4f37-8c84-dc31d902ec5d","Type":"ContainerDied","Data":"5915394a2ec9e42ea13ff304e97bd79e3894edb04c0315a0b99c531c0ac0fe5c"} Apr 17 11:33:58.337608 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:58.337465 2567 scope.go:117] "RemoveContainer" containerID="5915394a2ec9e42ea13ff304e97bd79e3894edb04c0315a0b99c531c0ac0fe5c" Apr 17 11:33:59.341771 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:33:59.341720 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-insights/insights-operator-585dfdc468-qvvgv" event={"ID":"7c7fd4b1-e618-4f37-8c84-dc31d902ec5d","Type":"ContainerStarted","Data":"9510c5316a92b0da3c69cab9026a81c7fd4dad4723de771be89f7cbbd6555a6c"} Apr 17 11:34:31.585567 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:34:31.585519 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b1f3e8e-0735-4b17-9e76-c70b964db9c1-metrics-certs\") pod \"network-metrics-daemon-tnlq8\" (UID: \"7b1f3e8e-0735-4b17-9e76-c70b964db9c1\") " pod="openshift-multus/network-metrics-daemon-tnlq8" Apr 17 11:34:31.588149 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:34:31.588122 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7b1f3e8e-0735-4b17-9e76-c70b964db9c1-metrics-certs\") pod \"network-metrics-daemon-tnlq8\" (UID: \"7b1f3e8e-0735-4b17-9e76-c70b964db9c1\") " pod="openshift-multus/network-metrics-daemon-tnlq8" Apr 17 11:34:31.813121 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:34:31.813090 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-lzg8w\"" Apr 17 11:34:31.820846 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:34:31.820805 2567 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tnlq8" Apr 17 11:34:31.946491 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:34:31.946415 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-tnlq8"] Apr 17 11:34:31.950209 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:34:31.950180 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b1f3e8e_0735_4b17_9e76_c70b964db9c1.slice/crio-6d7ea589708d1693fd8d314c214e589fca30c1f5f45f16273c045a02e3d88529 WatchSource:0}: Error finding container 6d7ea589708d1693fd8d314c214e589fca30c1f5f45f16273c045a02e3d88529: Status 404 returned error can't find the container with id 6d7ea589708d1693fd8d314c214e589fca30c1f5f45f16273c045a02e3d88529 Apr 17 11:34:32.433166 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:34:32.433133 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tnlq8" event={"ID":"7b1f3e8e-0735-4b17-9e76-c70b964db9c1","Type":"ContainerStarted","Data":"6d7ea589708d1693fd8d314c214e589fca30c1f5f45f16273c045a02e3d88529"} Apr 17 11:34:33.437240 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:34:33.437208 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tnlq8" event={"ID":"7b1f3e8e-0735-4b17-9e76-c70b964db9c1","Type":"ContainerStarted","Data":"17f49fe133fb7a0ebf629a3c37565db48fc414b695613730bea5dd0d08030978"} Apr 17 11:34:33.437240 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:34:33.437245 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tnlq8" event={"ID":"7b1f3e8e-0735-4b17-9e76-c70b964db9c1","Type":"ContainerStarted","Data":"9e3d1e3b82293b7a00ae0af6499f7e6680faf486afd740a74b3a73f3a39c0d34"} Apr 17 11:34:33.454162 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:34:33.454093 2567 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-multus/network-metrics-daemon-tnlq8" podStartSLOduration=252.45377055 podStartE2EDuration="4m13.454073098s" podCreationTimestamp="2026-04-17 11:30:20 +0000 UTC" firstStartedPulling="2026-04-17 11:34:31.952076347 +0000 UTC m=+252.790314082" lastFinishedPulling="2026-04-17 11:34:32.952378884 +0000 UTC m=+253.790616630" observedRunningTime="2026-04-17 11:34:33.453782813 +0000 UTC m=+254.292020567" watchObservedRunningTime="2026-04-17 11:34:33.454073098 +0000 UTC m=+254.292310852" Apr 17 11:35:19.623910 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:35:19.623876 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-j8vsm_b4b1d00c-98b8-45c5-80c4-0362b3303384/console-operator/2.log" Apr 17 11:35:19.623910 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:35:19.623898 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-j8vsm_b4b1d00c-98b8-45c5-80c4-0362b3303384/console-operator/2.log" Apr 17 11:35:19.629407 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:35:19.629376 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vjfgd_efceb2ce-9379-47c0-b8c1-22f8ad408e7c/ovn-acl-logging/0.log" Apr 17 11:35:19.629573 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:35:19.629464 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vjfgd_efceb2ce-9379-47c0-b8c1-22f8ad408e7c/ovn-acl-logging/0.log" Apr 17 11:35:19.636595 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:35:19.636573 2567 kubelet.go:1628] "Image garbage collection succeeded" Apr 17 11:36:34.414169 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:34.414128 2567 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56bzxb"] Apr 17 11:36:34.414807 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:34.414544 2567 
cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="35579630-eea3-41d9-8ca1-8408a45d5896" containerName="registry" Apr 17 11:36:34.414807 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:34.414562 2567 state_mem.go:107] "Deleted CPUSet assignment" podUID="35579630-eea3-41d9-8ca1-8408a45d5896" containerName="registry" Apr 17 11:36:34.414807 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:34.414658 2567 memory_manager.go:356] "RemoveStaleState removing state" podUID="35579630-eea3-41d9-8ca1-8408a45d5896" containerName="registry" Apr 17 11:36:34.417582 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:34.417560 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56bzxb" Apr 17 11:36:34.420139 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:34.420112 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Apr 17 11:36:34.420251 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:34.420118 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-vkgnw\"" Apr 17 11:36:34.420965 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:34.420951 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Apr 17 11:36:34.425961 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:34.425940 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56bzxb"] Apr 17 11:36:34.545765 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:34.545731 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6850c322-f205-4b22-8289-2325250ecf07-util\") pod 
\"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56bzxb\" (UID: \"6850c322-f205-4b22-8289-2325250ecf07\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56bzxb" Apr 17 11:36:34.545955 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:34.545773 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5fmr\" (UniqueName: \"kubernetes.io/projected/6850c322-f205-4b22-8289-2325250ecf07-kube-api-access-p5fmr\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56bzxb\" (UID: \"6850c322-f205-4b22-8289-2325250ecf07\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56bzxb" Apr 17 11:36:34.545955 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:34.545837 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6850c322-f205-4b22-8289-2325250ecf07-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56bzxb\" (UID: \"6850c322-f205-4b22-8289-2325250ecf07\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56bzxb" Apr 17 11:36:34.646991 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:34.646952 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6850c322-f205-4b22-8289-2325250ecf07-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56bzxb\" (UID: \"6850c322-f205-4b22-8289-2325250ecf07\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56bzxb" Apr 17 11:36:34.647194 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:34.647006 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p5fmr\" (UniqueName: \"kubernetes.io/projected/6850c322-f205-4b22-8289-2325250ecf07-kube-api-access-p5fmr\") pod 
\"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56bzxb\" (UID: \"6850c322-f205-4b22-8289-2325250ecf07\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56bzxb" Apr 17 11:36:34.647194 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:34.647064 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6850c322-f205-4b22-8289-2325250ecf07-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56bzxb\" (UID: \"6850c322-f205-4b22-8289-2325250ecf07\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56bzxb" Apr 17 11:36:34.647359 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:34.647336 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6850c322-f205-4b22-8289-2325250ecf07-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56bzxb\" (UID: \"6850c322-f205-4b22-8289-2325250ecf07\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56bzxb" Apr 17 11:36:34.647422 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:34.647399 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6850c322-f205-4b22-8289-2325250ecf07-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56bzxb\" (UID: \"6850c322-f205-4b22-8289-2325250ecf07\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56bzxb" Apr 17 11:36:34.655945 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:34.655919 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5fmr\" (UniqueName: \"kubernetes.io/projected/6850c322-f205-4b22-8289-2325250ecf07-kube-api-access-p5fmr\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56bzxb\" (UID: 
\"6850c322-f205-4b22-8289-2325250ecf07\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56bzxb" Apr 17 11:36:34.727054 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:34.727012 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56bzxb" Apr 17 11:36:34.848561 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:34.848528 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56bzxb"] Apr 17 11:36:34.851265 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:36:34.851238 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6850c322_f205_4b22_8289_2325250ecf07.slice/crio-0cf5496ad98ae776769040b1ca2b328dccd67dd0893417c5c0076e82ec0bdf51 WatchSource:0}: Error finding container 0cf5496ad98ae776769040b1ca2b328dccd67dd0893417c5c0076e82ec0bdf51: Status 404 returned error can't find the container with id 0cf5496ad98ae776769040b1ca2b328dccd67dd0893417c5c0076e82ec0bdf51 Apr 17 11:36:34.853145 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:34.853129 2567 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 17 11:36:35.775336 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:35.775296 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56bzxb" event={"ID":"6850c322-f205-4b22-8289-2325250ecf07","Type":"ContainerStarted","Data":"0cf5496ad98ae776769040b1ca2b328dccd67dd0893417c5c0076e82ec0bdf51"} Apr 17 11:36:41.794091 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:41.794053 2567 generic.go:358] "Generic (PLEG): container finished" podID="6850c322-f205-4b22-8289-2325250ecf07" 
containerID="01c3ceef512e7464e37352ea31311b746c6ad537391e6691b2388e40ac0a986b" exitCode=0 Apr 17 11:36:41.794477 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:41.794141 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56bzxb" event={"ID":"6850c322-f205-4b22-8289-2325250ecf07","Type":"ContainerDied","Data":"01c3ceef512e7464e37352ea31311b746c6ad537391e6691b2388e40ac0a986b"} Apr 17 11:36:43.801821 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:43.801773 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56bzxb" event={"ID":"6850c322-f205-4b22-8289-2325250ecf07","Type":"ContainerStarted","Data":"2677eccac4c79d8a6eb51f32958c4bd41a8d92358b63f178977fc5020db9a211"} Apr 17 11:36:44.806489 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:44.806453 2567 generic.go:358] "Generic (PLEG): container finished" podID="6850c322-f205-4b22-8289-2325250ecf07" containerID="2677eccac4c79d8a6eb51f32958c4bd41a8d92358b63f178977fc5020db9a211" exitCode=0 Apr 17 11:36:44.806907 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:44.806531 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56bzxb" event={"ID":"6850c322-f205-4b22-8289-2325250ecf07","Type":"ContainerDied","Data":"2677eccac4c79d8a6eb51f32958c4bd41a8d92358b63f178977fc5020db9a211"} Apr 17 11:36:50.828173 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:50.828134 2567 generic.go:358] "Generic (PLEG): container finished" podID="6850c322-f205-4b22-8289-2325250ecf07" containerID="bdcef2d17e55e4369ab4754cc9eea0210108fc22e327a3b99ea321fda1d6239f" exitCode=0 Apr 17 11:36:50.828558 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:50.828216 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56bzxb" event={"ID":"6850c322-f205-4b22-8289-2325250ecf07","Type":"ContainerDied","Data":"bdcef2d17e55e4369ab4754cc9eea0210108fc22e327a3b99ea321fda1d6239f"} Apr 17 11:36:51.954353 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:51.954326 2567 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56bzxb" Apr 17 11:36:52.096976 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:52.096883 2567 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6850c322-f205-4b22-8289-2325250ecf07-util\") pod \"6850c322-f205-4b22-8289-2325250ecf07\" (UID: \"6850c322-f205-4b22-8289-2325250ecf07\") " Apr 17 11:36:52.096976 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:52.096937 2567 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5fmr\" (UniqueName: \"kubernetes.io/projected/6850c322-f205-4b22-8289-2325250ecf07-kube-api-access-p5fmr\") pod \"6850c322-f205-4b22-8289-2325250ecf07\" (UID: \"6850c322-f205-4b22-8289-2325250ecf07\") " Apr 17 11:36:52.096976 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:52.096957 2567 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6850c322-f205-4b22-8289-2325250ecf07-bundle\") pod \"6850c322-f205-4b22-8289-2325250ecf07\" (UID: \"6850c322-f205-4b22-8289-2325250ecf07\") " Apr 17 11:36:52.097646 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:52.097620 2567 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6850c322-f205-4b22-8289-2325250ecf07-bundle" (OuterVolumeSpecName: "bundle") pod "6850c322-f205-4b22-8289-2325250ecf07" (UID: "6850c322-f205-4b22-8289-2325250ecf07"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 17 11:36:52.099200 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:52.099173 2567 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6850c322-f205-4b22-8289-2325250ecf07-kube-api-access-p5fmr" (OuterVolumeSpecName: "kube-api-access-p5fmr") pod "6850c322-f205-4b22-8289-2325250ecf07" (UID: "6850c322-f205-4b22-8289-2325250ecf07"). InnerVolumeSpecName "kube-api-access-p5fmr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 11:36:52.100932 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:52.100903 2567 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6850c322-f205-4b22-8289-2325250ecf07-util" (OuterVolumeSpecName: "util") pod "6850c322-f205-4b22-8289-2325250ecf07" (UID: "6850c322-f205-4b22-8289-2325250ecf07"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 17 11:36:52.198286 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:52.198252 2567 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p5fmr\" (UniqueName: \"kubernetes.io/projected/6850c322-f205-4b22-8289-2325250ecf07-kube-api-access-p5fmr\") on node \"ip-10-0-140-245.ec2.internal\" DevicePath \"\"" Apr 17 11:36:52.198286 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:52.198284 2567 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6850c322-f205-4b22-8289-2325250ecf07-bundle\") on node \"ip-10-0-140-245.ec2.internal\" DevicePath \"\"" Apr 17 11:36:52.198286 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:52.198294 2567 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6850c322-f205-4b22-8289-2325250ecf07-util\") on node \"ip-10-0-140-245.ec2.internal\" DevicePath \"\"" Apr 17 11:36:52.835009 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:52.834983 2567 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56bzxb" Apr 17 11:36:52.835195 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:52.834980 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56bzxb" event={"ID":"6850c322-f205-4b22-8289-2325250ecf07","Type":"ContainerDied","Data":"0cf5496ad98ae776769040b1ca2b328dccd67dd0893417c5c0076e82ec0bdf51"} Apr 17 11:36:52.835195 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:52.835092 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cf5496ad98ae776769040b1ca2b328dccd67dd0893417c5c0076e82ec0bdf51" Apr 17 11:36:56.961143 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:56.961112 2567 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-qtdwm"] Apr 17 11:36:56.961565 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:56.961405 2567 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6850c322-f205-4b22-8289-2325250ecf07" containerName="extract" Apr 17 11:36:56.961565 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:56.961418 2567 state_mem.go:107] "Deleted CPUSet assignment" podUID="6850c322-f205-4b22-8289-2325250ecf07" containerName="extract" Apr 17 11:36:56.961565 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:56.961427 2567 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6850c322-f205-4b22-8289-2325250ecf07" containerName="pull" Apr 17 11:36:56.961565 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:56.961432 2567 state_mem.go:107] "Deleted CPUSet assignment" podUID="6850c322-f205-4b22-8289-2325250ecf07" containerName="pull" Apr 17 11:36:56.961565 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:56.961447 2567 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="6850c322-f205-4b22-8289-2325250ecf07" containerName="util" Apr 17 11:36:56.961565 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:56.961453 2567 state_mem.go:107] "Deleted CPUSet assignment" podUID="6850c322-f205-4b22-8289-2325250ecf07" containerName="util" Apr 17 11:36:56.961565 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:56.961501 2567 memory_manager.go:356] "RemoveStaleState removing state" podUID="6850c322-f205-4b22-8289-2325250ecf07" containerName="extract" Apr 17 11:36:57.002465 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:57.002431 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-qtdwm"] Apr 17 11:36:57.002619 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:57.002561 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-qtdwm" Apr 17 11:36:57.005138 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:57.005104 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Apr 17 11:36:57.005273 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:57.005151 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-mgb5f\"" Apr 17 11:36:57.005273 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:57.005157 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Apr 17 11:36:57.142488 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:57.142444 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltp6m\" (UniqueName: \"kubernetes.io/projected/fb6c4d05-a2b5-4845-bbe2-78aec351af07-kube-api-access-ltp6m\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-qtdwm\" 
(UID: \"fb6c4d05-a2b5-4845-bbe2-78aec351af07\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-qtdwm" Apr 17 11:36:57.142652 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:57.142528 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fb6c4d05-a2b5-4845-bbe2-78aec351af07-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-qtdwm\" (UID: \"fb6c4d05-a2b5-4845-bbe2-78aec351af07\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-qtdwm" Apr 17 11:36:57.243448 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:57.243360 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fb6c4d05-a2b5-4845-bbe2-78aec351af07-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-qtdwm\" (UID: \"fb6c4d05-a2b5-4845-bbe2-78aec351af07\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-qtdwm" Apr 17 11:36:57.243448 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:57.243420 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ltp6m\" (UniqueName: \"kubernetes.io/projected/fb6c4d05-a2b5-4845-bbe2-78aec351af07-kube-api-access-ltp6m\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-qtdwm\" (UID: \"fb6c4d05-a2b5-4845-bbe2-78aec351af07\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-qtdwm" Apr 17 11:36:57.243774 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:57.243756 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fb6c4d05-a2b5-4845-bbe2-78aec351af07-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-qtdwm\" (UID: \"fb6c4d05-a2b5-4845-bbe2-78aec351af07\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-qtdwm" 
Apr 17 11:36:57.252023 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:57.252000 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltp6m\" (UniqueName: \"kubernetes.io/projected/fb6c4d05-a2b5-4845-bbe2-78aec351af07-kube-api-access-ltp6m\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-qtdwm\" (UID: \"fb6c4d05-a2b5-4845-bbe2-78aec351af07\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-qtdwm" Apr 17 11:36:57.311927 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:57.311886 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-qtdwm" Apr 17 11:36:57.434123 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:57.434090 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-qtdwm"] Apr 17 11:36:57.437704 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:36:57.437654 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb6c4d05_a2b5_4845_bbe2_78aec351af07.slice/crio-2da781df399481e8feb000ece43953778cf1dd25510bad4d35b9129b34c6399f WatchSource:0}: Error finding container 2da781df399481e8feb000ece43953778cf1dd25510bad4d35b9129b34c6399f: Status 404 returned error can't find the container with id 2da781df399481e8feb000ece43953778cf1dd25510bad4d35b9129b34c6399f Apr 17 11:36:57.853806 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:36:57.853775 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-qtdwm" event={"ID":"fb6c4d05-a2b5-4845-bbe2-78aec351af07","Type":"ContainerStarted","Data":"2da781df399481e8feb000ece43953778cf1dd25510bad4d35b9129b34c6399f"} Apr 17 11:37:00.864982 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:00.864946 2567 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-qtdwm" event={"ID":"fb6c4d05-a2b5-4845-bbe2-78aec351af07","Type":"ContainerStarted","Data":"ec0a5f41405ffca1c0643058582226dcc9f1d696da821fa449881b54342c5667"} Apr 17 11:37:00.887382 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:00.887320 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-qtdwm" podStartSLOduration=1.5792487899999998 podStartE2EDuration="4.887306026s" podCreationTimestamp="2026-04-17 11:36:56 +0000 UTC" firstStartedPulling="2026-04-17 11:36:57.44079765 +0000 UTC m=+398.279035384" lastFinishedPulling="2026-04-17 11:37:00.748854879 +0000 UTC m=+401.587092620" observedRunningTime="2026-04-17 11:37:00.887139375 +0000 UTC m=+401.725377141" watchObservedRunningTime="2026-04-17 11:37:00.887306026 +0000 UTC m=+401.725543775" Apr 17 11:37:06.497856 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:06.497823 2567 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-cp2mb"] Apr 17 11:37:06.500867 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:06.500852 2567 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-cp2mb" Apr 17 11:37:06.503375 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:06.503350 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-cllqn\"" Apr 17 11:37:06.503508 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:06.503361 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Apr 17 11:37:06.504055 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:06.504040 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Apr 17 11:37:06.507785 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:06.507761 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-cp2mb"] Apr 17 11:37:06.526838 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:06.526807 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvfvn\" (UniqueName: \"kubernetes.io/projected/05207aa1-62f7-4440-87de-5ae8bd56406d-kube-api-access-jvfvn\") pod \"cert-manager-cainjector-8966b78d4-cp2mb\" (UID: \"05207aa1-62f7-4440-87de-5ae8bd56406d\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-cp2mb" Apr 17 11:37:06.526989 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:06.526853 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/05207aa1-62f7-4440-87de-5ae8bd56406d-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-cp2mb\" (UID: \"05207aa1-62f7-4440-87de-5ae8bd56406d\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-cp2mb" Apr 17 11:37:06.628109 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:06.628055 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/05207aa1-62f7-4440-87de-5ae8bd56406d-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-cp2mb\" (UID: \"05207aa1-62f7-4440-87de-5ae8bd56406d\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-cp2mb" Apr 17 11:37:06.628299 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:06.628168 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jvfvn\" (UniqueName: \"kubernetes.io/projected/05207aa1-62f7-4440-87de-5ae8bd56406d-kube-api-access-jvfvn\") pod \"cert-manager-cainjector-8966b78d4-cp2mb\" (UID: \"05207aa1-62f7-4440-87de-5ae8bd56406d\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-cp2mb" Apr 17 11:37:06.636276 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:06.636252 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/05207aa1-62f7-4440-87de-5ae8bd56406d-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-cp2mb\" (UID: \"05207aa1-62f7-4440-87de-5ae8bd56406d\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-cp2mb" Apr 17 11:37:06.636463 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:06.636442 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvfvn\" (UniqueName: \"kubernetes.io/projected/05207aa1-62f7-4440-87de-5ae8bd56406d-kube-api-access-jvfvn\") pod \"cert-manager-cainjector-8966b78d4-cp2mb\" (UID: \"05207aa1-62f7-4440-87de-5ae8bd56406d\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-cp2mb" Apr 17 11:37:06.819059 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:06.818973 2567 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-cp2mb" Apr 17 11:37:06.938285 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:06.938145 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-cp2mb"] Apr 17 11:37:06.940921 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:37:06.940890 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05207aa1_62f7_4440_87de_5ae8bd56406d.slice/crio-b8da6987b05934648fa27437dfdfa2cc7a886d117bf3d1484720689611793a59 WatchSource:0}: Error finding container b8da6987b05934648fa27437dfdfa2cc7a886d117bf3d1484720689611793a59: Status 404 returned error can't find the container with id b8da6987b05934648fa27437dfdfa2cc7a886d117bf3d1484720689611793a59 Apr 17 11:37:07.889699 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:07.889652 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-cp2mb" event={"ID":"05207aa1-62f7-4440-87de-5ae8bd56406d","Type":"ContainerStarted","Data":"b8da6987b05934648fa27437dfdfa2cc7a886d117bf3d1484720689611793a59"} Apr 17 11:37:09.896969 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:09.896931 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-cp2mb" event={"ID":"05207aa1-62f7-4440-87de-5ae8bd56406d","Type":"ContainerStarted","Data":"e3a5822ec5ee753cfc33bf2bcda483fa6b1739f3e6252c6740119a0a3bd4382b"} Apr 17 11:37:09.915012 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:09.914952 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-8966b78d4-cp2mb" podStartSLOduration=1.156956103 podStartE2EDuration="3.914933646s" podCreationTimestamp="2026-04-17 11:37:06 +0000 UTC" firstStartedPulling="2026-04-17 11:37:06.942741076 +0000 UTC m=+407.780978814" lastFinishedPulling="2026-04-17 
11:37:09.700718623 +0000 UTC m=+410.538956357" observedRunningTime="2026-04-17 11:37:09.914298031 +0000 UTC m=+410.752535801" watchObservedRunningTime="2026-04-17 11:37:09.914933646 +0000 UTC m=+410.753171406" Apr 17 11:37:22.561897 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:22.561855 2567 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-759f64656b-7n5bd"] Apr 17 11:37:22.565255 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:22.565238 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-759f64656b-7n5bd" Apr 17 11:37:22.567749 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:22.567724 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-jmrfg\"" Apr 17 11:37:22.573284 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:22.573262 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-7n5bd"] Apr 17 11:37:22.647146 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:22.647106 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3ce89cb0-6366-4b31-ac61-65a3fc587b64-bound-sa-token\") pod \"cert-manager-759f64656b-7n5bd\" (UID: \"3ce89cb0-6366-4b31-ac61-65a3fc587b64\") " pod="cert-manager/cert-manager-759f64656b-7n5bd" Apr 17 11:37:22.647401 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:22.647384 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpbqg\" (UniqueName: \"kubernetes.io/projected/3ce89cb0-6366-4b31-ac61-65a3fc587b64-kube-api-access-xpbqg\") pod \"cert-manager-759f64656b-7n5bd\" (UID: \"3ce89cb0-6366-4b31-ac61-65a3fc587b64\") " pod="cert-manager/cert-manager-759f64656b-7n5bd" Apr 17 11:37:22.748031 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:22.747999 2567 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3ce89cb0-6366-4b31-ac61-65a3fc587b64-bound-sa-token\") pod \"cert-manager-759f64656b-7n5bd\" (UID: \"3ce89cb0-6366-4b31-ac61-65a3fc587b64\") " pod="cert-manager/cert-manager-759f64656b-7n5bd" Apr 17 11:37:22.748234 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:22.748043 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xpbqg\" (UniqueName: \"kubernetes.io/projected/3ce89cb0-6366-4b31-ac61-65a3fc587b64-kube-api-access-xpbqg\") pod \"cert-manager-759f64656b-7n5bd\" (UID: \"3ce89cb0-6366-4b31-ac61-65a3fc587b64\") " pod="cert-manager/cert-manager-759f64656b-7n5bd" Apr 17 11:37:22.756254 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:22.756227 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3ce89cb0-6366-4b31-ac61-65a3fc587b64-bound-sa-token\") pod \"cert-manager-759f64656b-7n5bd\" (UID: \"3ce89cb0-6366-4b31-ac61-65a3fc587b64\") " pod="cert-manager/cert-manager-759f64656b-7n5bd" Apr 17 11:37:22.756370 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:22.756296 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpbqg\" (UniqueName: \"kubernetes.io/projected/3ce89cb0-6366-4b31-ac61-65a3fc587b64-kube-api-access-xpbqg\") pod \"cert-manager-759f64656b-7n5bd\" (UID: \"3ce89cb0-6366-4b31-ac61-65a3fc587b64\") " pod="cert-manager/cert-manager-759f64656b-7n5bd" Apr 17 11:37:22.875230 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:22.875147 2567 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-759f64656b-7n5bd" Apr 17 11:37:22.995130 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:22.995097 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-7n5bd"] Apr 17 11:37:22.998183 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:37:22.998156 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ce89cb0_6366_4b31_ac61_65a3fc587b64.slice/crio-b96582b6d680094acb71a31d2d3365169d5b653d7091082961567b67db23dfcf WatchSource:0}: Error finding container b96582b6d680094acb71a31d2d3365169d5b653d7091082961567b67db23dfcf: Status 404 returned error can't find the container with id b96582b6d680094acb71a31d2d3365169d5b653d7091082961567b67db23dfcf Apr 17 11:37:23.943041 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:23.943006 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-7n5bd" event={"ID":"3ce89cb0-6366-4b31-ac61-65a3fc587b64","Type":"ContainerStarted","Data":"dd16f0bd00e61850cde317c611ce84c919c4f26dc831bcc896ea8585cf586540"} Apr 17 11:37:23.943041 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:23.943040 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-7n5bd" event={"ID":"3ce89cb0-6366-4b31-ac61-65a3fc587b64","Type":"ContainerStarted","Data":"b96582b6d680094acb71a31d2d3365169d5b653d7091082961567b67db23dfcf"} Apr 17 11:37:23.959834 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:23.959783 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-759f64656b-7n5bd" podStartSLOduration=1.9597699 podStartE2EDuration="1.9597699s" podCreationTimestamp="2026-04-17 11:37:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 11:37:23.958623455 +0000 UTC m=+424.796861208" 
watchObservedRunningTime="2026-04-17 11:37:23.9597699 +0000 UTC m=+424.798007655" Apr 17 11:37:24.295927 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:24.295846 2567 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78e6zf88"] Apr 17 11:37:24.299306 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:24.299286 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78e6zf88" Apr 17 11:37:24.301639 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:24.301616 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-vkgnw\"" Apr 17 11:37:24.301785 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:24.301624 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Apr 17 11:37:24.301785 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:24.301711 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Apr 17 11:37:24.307192 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:24.307167 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78e6zf88"] Apr 17 11:37:24.358658 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:24.358623 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a0970469-bff7-4a87-854d-02a55f7a5d9c-util\") pod \"c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78e6zf88\" (UID: \"a0970469-bff7-4a87-854d-02a55f7a5d9c\") " pod="openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78e6zf88" Apr 17 11:37:24.358845 ip-10-0-140-245 kubenswrapper[2567]: I0417 
11:37:24.358673 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs28z\" (UniqueName: \"kubernetes.io/projected/a0970469-bff7-4a87-854d-02a55f7a5d9c-kube-api-access-rs28z\") pod \"c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78e6zf88\" (UID: \"a0970469-bff7-4a87-854d-02a55f7a5d9c\") " pod="openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78e6zf88" Apr 17 11:37:24.358845 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:24.358723 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a0970469-bff7-4a87-854d-02a55f7a5d9c-bundle\") pod \"c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78e6zf88\" (UID: \"a0970469-bff7-4a87-854d-02a55f7a5d9c\") " pod="openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78e6zf88" Apr 17 11:37:24.459331 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:24.459298 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a0970469-bff7-4a87-854d-02a55f7a5d9c-util\") pod \"c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78e6zf88\" (UID: \"a0970469-bff7-4a87-854d-02a55f7a5d9c\") " pod="openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78e6zf88" Apr 17 11:37:24.459331 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:24.459347 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rs28z\" (UniqueName: \"kubernetes.io/projected/a0970469-bff7-4a87-854d-02a55f7a5d9c-kube-api-access-rs28z\") pod \"c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78e6zf88\" (UID: \"a0970469-bff7-4a87-854d-02a55f7a5d9c\") " pod="openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78e6zf88" Apr 17 11:37:24.459563 ip-10-0-140-245 kubenswrapper[2567]: I0417 
11:37:24.459380 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a0970469-bff7-4a87-854d-02a55f7a5d9c-bundle\") pod \"c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78e6zf88\" (UID: \"a0970469-bff7-4a87-854d-02a55f7a5d9c\") " pod="openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78e6zf88" Apr 17 11:37:24.459717 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:24.459673 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a0970469-bff7-4a87-854d-02a55f7a5d9c-util\") pod \"c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78e6zf88\" (UID: \"a0970469-bff7-4a87-854d-02a55f7a5d9c\") " pod="openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78e6zf88" Apr 17 11:37:24.459758 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:24.459733 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a0970469-bff7-4a87-854d-02a55f7a5d9c-bundle\") pod \"c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78e6zf88\" (UID: \"a0970469-bff7-4a87-854d-02a55f7a5d9c\") " pod="openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78e6zf88" Apr 17 11:37:24.467268 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:24.467244 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs28z\" (UniqueName: \"kubernetes.io/projected/a0970469-bff7-4a87-854d-02a55f7a5d9c-kube-api-access-rs28z\") pod \"c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78e6zf88\" (UID: \"a0970469-bff7-4a87-854d-02a55f7a5d9c\") " pod="openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78e6zf88" Apr 17 11:37:24.610073 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:24.609989 2567 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78e6zf88" Apr 17 11:37:24.746969 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:24.746944 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78e6zf88"] Apr 17 11:37:24.749338 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:37:24.749309 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0970469_bff7_4a87_854d_02a55f7a5d9c.slice/crio-fbab41ab226c5c375ef19f0a704558e4a7135703cd9240916242c493c6285400 WatchSource:0}: Error finding container fbab41ab226c5c375ef19f0a704558e4a7135703cd9240916242c493c6285400: Status 404 returned error can't find the container with id fbab41ab226c5c375ef19f0a704558e4a7135703cd9240916242c493c6285400 Apr 17 11:37:24.947640 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:24.947602 2567 generic.go:358] "Generic (PLEG): container finished" podID="a0970469-bff7-4a87-854d-02a55f7a5d9c" containerID="89e27c7fac1876f3d7208ef8a9bc025066626108c6d11019e8d299cdb0f501ac" exitCode=0 Apr 17 11:37:24.948016 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:24.947713 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78e6zf88" event={"ID":"a0970469-bff7-4a87-854d-02a55f7a5d9c","Type":"ContainerDied","Data":"89e27c7fac1876f3d7208ef8a9bc025066626108c6d11019e8d299cdb0f501ac"} Apr 17 11:37:24.948016 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:24.947747 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78e6zf88" event={"ID":"a0970469-bff7-4a87-854d-02a55f7a5d9c","Type":"ContainerStarted","Data":"fbab41ab226c5c375ef19f0a704558e4a7135703cd9240916242c493c6285400"} Apr 17 11:37:27.960370 ip-10-0-140-245 kubenswrapper[2567]: 
I0417 11:37:27.960339 2567 generic.go:358] "Generic (PLEG): container finished" podID="a0970469-bff7-4a87-854d-02a55f7a5d9c" containerID="b9f98da61da0d324874b297fea8f0316fa9119fa235beb5aa04d3ef31d2902eb" exitCode=0 Apr 17 11:37:27.960778 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:27.960400 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78e6zf88" event={"ID":"a0970469-bff7-4a87-854d-02a55f7a5d9c","Type":"ContainerDied","Data":"b9f98da61da0d324874b297fea8f0316fa9119fa235beb5aa04d3ef31d2902eb"} Apr 17 11:37:28.965897 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:28.965852 2567 generic.go:358] "Generic (PLEG): container finished" podID="a0970469-bff7-4a87-854d-02a55f7a5d9c" containerID="bacae79a2f8600db687f00ef2e805224ffb8b3bbf1f2eb1553b3da244095bdb5" exitCode=0 Apr 17 11:37:28.966279 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:28.965938 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78e6zf88" event={"ID":"a0970469-bff7-4a87-854d-02a55f7a5d9c","Type":"ContainerDied","Data":"bacae79a2f8600db687f00ef2e805224ffb8b3bbf1f2eb1553b3da244095bdb5"} Apr 17 11:37:30.089517 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:30.089495 2567 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78e6zf88" Apr 17 11:37:30.204565 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:30.204525 2567 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a0970469-bff7-4a87-854d-02a55f7a5d9c-bundle\") pod \"a0970469-bff7-4a87-854d-02a55f7a5d9c\" (UID: \"a0970469-bff7-4a87-854d-02a55f7a5d9c\") " Apr 17 11:37:30.204731 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:30.204581 2567 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rs28z\" (UniqueName: \"kubernetes.io/projected/a0970469-bff7-4a87-854d-02a55f7a5d9c-kube-api-access-rs28z\") pod \"a0970469-bff7-4a87-854d-02a55f7a5d9c\" (UID: \"a0970469-bff7-4a87-854d-02a55f7a5d9c\") " Apr 17 11:37:30.204731 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:30.204629 2567 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a0970469-bff7-4a87-854d-02a55f7a5d9c-util\") pod \"a0970469-bff7-4a87-854d-02a55f7a5d9c\" (UID: \"a0970469-bff7-4a87-854d-02a55f7a5d9c\") " Apr 17 11:37:30.205016 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:30.204990 2567 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0970469-bff7-4a87-854d-02a55f7a5d9c-bundle" (OuterVolumeSpecName: "bundle") pod "a0970469-bff7-4a87-854d-02a55f7a5d9c" (UID: "a0970469-bff7-4a87-854d-02a55f7a5d9c"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 17 11:37:30.206735 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:30.206706 2567 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0970469-bff7-4a87-854d-02a55f7a5d9c-kube-api-access-rs28z" (OuterVolumeSpecName: "kube-api-access-rs28z") pod "a0970469-bff7-4a87-854d-02a55f7a5d9c" (UID: "a0970469-bff7-4a87-854d-02a55f7a5d9c"). InnerVolumeSpecName "kube-api-access-rs28z". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 11:37:30.235766 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:30.235698 2567 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0970469-bff7-4a87-854d-02a55f7a5d9c-util" (OuterVolumeSpecName: "util") pod "a0970469-bff7-4a87-854d-02a55f7a5d9c" (UID: "a0970469-bff7-4a87-854d-02a55f7a5d9c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 17 11:37:30.305747 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:30.305705 2567 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a0970469-bff7-4a87-854d-02a55f7a5d9c-bundle\") on node \"ip-10-0-140-245.ec2.internal\" DevicePath \"\"" Apr 17 11:37:30.305747 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:30.305745 2567 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rs28z\" (UniqueName: \"kubernetes.io/projected/a0970469-bff7-4a87-854d-02a55f7a5d9c-kube-api-access-rs28z\") on node \"ip-10-0-140-245.ec2.internal\" DevicePath \"\"" Apr 17 11:37:30.305747 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:30.305760 2567 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a0970469-bff7-4a87-854d-02a55f7a5d9c-util\") on node \"ip-10-0-140-245.ec2.internal\" DevicePath \"\"" Apr 17 11:37:30.973461 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:30.973428 2567 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78e6zf88" event={"ID":"a0970469-bff7-4a87-854d-02a55f7a5d9c","Type":"ContainerDied","Data":"fbab41ab226c5c375ef19f0a704558e4a7135703cd9240916242c493c6285400"} Apr 17 11:37:30.973461 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:30.973463 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbab41ab226c5c375ef19f0a704558e4a7135703cd9240916242c493c6285400" Apr 17 11:37:30.973670 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:30.973507 2567 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/c2ca89134faa49158137edb0141b62ea0c6a854657aff316cf72d9c78e6zf88" Apr 17 11:37:36.288022 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:36.287942 2567 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-jobset-operator/jobset-operator-747c5859c7-hpjnp"] Apr 17 11:37:36.288366 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:36.288244 2567 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a0970469-bff7-4a87-854d-02a55f7a5d9c" containerName="extract" Apr 17 11:37:36.288366 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:36.288255 2567 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0970469-bff7-4a87-854d-02a55f7a5d9c" containerName="extract" Apr 17 11:37:36.288366 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:36.288270 2567 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a0970469-bff7-4a87-854d-02a55f7a5d9c" containerName="pull" Apr 17 11:37:36.288366 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:36.288276 2567 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0970469-bff7-4a87-854d-02a55f7a5d9c" containerName="pull" Apr 17 11:37:36.288366 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:36.288289 2567 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="a0970469-bff7-4a87-854d-02a55f7a5d9c" containerName="util" Apr 17 11:37:36.288366 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:36.288297 2567 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0970469-bff7-4a87-854d-02a55f7a5d9c" containerName="util" Apr 17 11:37:36.288366 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:36.288346 2567 memory_manager.go:356] "RemoveStaleState removing state" podUID="a0970469-bff7-4a87-854d-02a55f7a5d9c" containerName="extract" Apr 17 11:37:36.292304 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:36.292279 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-jobset-operator/jobset-operator-747c5859c7-hpjnp" Apr 17 11:37:36.294173 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:36.294151 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-jobset-operator/jobset-operator-747c5859c7-hpjnp"] Apr 17 11:37:36.295536 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:36.295510 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-jobset-operator\"/\"openshift-service-ca.crt\"" Apr 17 11:37:36.296547 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:36.296525 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-jobset-operator\"/\"kube-root-ca.crt\"" Apr 17 11:37:36.296644 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:36.296524 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-jobset-operator\"/\"jobset-operator-dockercfg-vbrkj\"" Apr 17 11:37:36.354832 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:36.350814 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1dbf9889-fba9-4adc-8fa9-e80b6a93e014-tmp\") pod \"jobset-operator-747c5859c7-hpjnp\" (UID: \"1dbf9889-fba9-4adc-8fa9-e80b6a93e014\") " 
pod="openshift-jobset-operator/jobset-operator-747c5859c7-hpjnp" Apr 17 11:37:36.354832 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:36.350933 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4hpm\" (UniqueName: \"kubernetes.io/projected/1dbf9889-fba9-4adc-8fa9-e80b6a93e014-kube-api-access-m4hpm\") pod \"jobset-operator-747c5859c7-hpjnp\" (UID: \"1dbf9889-fba9-4adc-8fa9-e80b6a93e014\") " pod="openshift-jobset-operator/jobset-operator-747c5859c7-hpjnp" Apr 17 11:37:36.451391 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:36.451351 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1dbf9889-fba9-4adc-8fa9-e80b6a93e014-tmp\") pod \"jobset-operator-747c5859c7-hpjnp\" (UID: \"1dbf9889-fba9-4adc-8fa9-e80b6a93e014\") " pod="openshift-jobset-operator/jobset-operator-747c5859c7-hpjnp" Apr 17 11:37:36.451569 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:36.451427 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m4hpm\" (UniqueName: \"kubernetes.io/projected/1dbf9889-fba9-4adc-8fa9-e80b6a93e014-kube-api-access-m4hpm\") pod \"jobset-operator-747c5859c7-hpjnp\" (UID: \"1dbf9889-fba9-4adc-8fa9-e80b6a93e014\") " pod="openshift-jobset-operator/jobset-operator-747c5859c7-hpjnp" Apr 17 11:37:36.451812 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:36.451789 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1dbf9889-fba9-4adc-8fa9-e80b6a93e014-tmp\") pod \"jobset-operator-747c5859c7-hpjnp\" (UID: \"1dbf9889-fba9-4adc-8fa9-e80b6a93e014\") " pod="openshift-jobset-operator/jobset-operator-747c5859c7-hpjnp" Apr 17 11:37:36.459298 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:36.459278 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4hpm\" (UniqueName: 
\"kubernetes.io/projected/1dbf9889-fba9-4adc-8fa9-e80b6a93e014-kube-api-access-m4hpm\") pod \"jobset-operator-747c5859c7-hpjnp\" (UID: \"1dbf9889-fba9-4adc-8fa9-e80b6a93e014\") " pod="openshift-jobset-operator/jobset-operator-747c5859c7-hpjnp" Apr 17 11:37:36.602602 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:36.602512 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-jobset-operator/jobset-operator-747c5859c7-hpjnp" Apr 17 11:37:36.746305 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:36.746279 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-jobset-operator/jobset-operator-747c5859c7-hpjnp"] Apr 17 11:37:36.748989 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:37:36.748955 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1dbf9889_fba9_4adc_8fa9_e80b6a93e014.slice/crio-d8efc9699d954f116baf9b8b27ff1838439e309a9dbd2943d8db5f870d033806 WatchSource:0}: Error finding container d8efc9699d954f116baf9b8b27ff1838439e309a9dbd2943d8db5f870d033806: Status 404 returned error can't find the container with id d8efc9699d954f116baf9b8b27ff1838439e309a9dbd2943d8db5f870d033806 Apr 17 11:37:36.993621 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:36.993582 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-jobset-operator/jobset-operator-747c5859c7-hpjnp" event={"ID":"1dbf9889-fba9-4adc-8fa9-e80b6a93e014","Type":"ContainerStarted","Data":"d8efc9699d954f116baf9b8b27ff1838439e309a9dbd2943d8db5f870d033806"} Apr 17 11:37:39.001508 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:37:39.001475 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-jobset-operator/jobset-operator-747c5859c7-hpjnp" event={"ID":"1dbf9889-fba9-4adc-8fa9-e80b6a93e014","Type":"ContainerStarted","Data":"09582c2eb547f6c8046b54d59d550ed2be322c5b28314e4e071cb07287503367"} Apr 17 11:37:39.020180 ip-10-0-140-245 kubenswrapper[2567]: 
I0417 11:37:39.020133 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-jobset-operator/jobset-operator-747c5859c7-hpjnp" podStartSLOduration=0.939686798 podStartE2EDuration="3.020120003s" podCreationTimestamp="2026-04-17 11:37:36 +0000 UTC" firstStartedPulling="2026-04-17 11:37:36.750403376 +0000 UTC m=+437.588641109" lastFinishedPulling="2026-04-17 11:37:38.830836569 +0000 UTC m=+439.669074314" observedRunningTime="2026-04-17 11:37:39.018379583 +0000 UTC m=+439.856617340" watchObservedRunningTime="2026-04-17 11:37:39.020120003 +0000 UTC m=+439.858357759" Apr 17 11:40:19.648994 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:40:19.648967 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-j8vsm_b4b1d00c-98b8-45c5-80c4-0362b3303384/console-operator/2.log" Apr 17 11:40:19.650806 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:40:19.650779 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-j8vsm_b4b1d00c-98b8-45c5-80c4-0362b3303384/console-operator/2.log" Apr 17 11:40:19.653635 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:40:19.653615 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vjfgd_efceb2ce-9379-47c0-b8c1-22f8ad408e7c/ovn-acl-logging/0.log" Apr 17 11:40:19.655318 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:40:19.655300 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vjfgd_efceb2ce-9379-47c0-b8c1-22f8ad408e7c/ovn-acl-logging/0.log" Apr 17 11:45:19.670042 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:45:19.670015 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-j8vsm_b4b1d00c-98b8-45c5-80c4-0362b3303384/console-operator/2.log" Apr 17 11:45:19.672509 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:45:19.672487 2567 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-j8vsm_b4b1d00c-98b8-45c5-80c4-0362b3303384/console-operator/2.log" Apr 17 11:45:19.674256 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:45:19.674239 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vjfgd_efceb2ce-9379-47c0-b8c1-22f8ad408e7c/ovn-acl-logging/0.log" Apr 17 11:45:19.676800 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:45:19.676779 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vjfgd_efceb2ce-9379-47c0-b8c1-22f8ad408e7c/ovn-acl-logging/0.log" Apr 17 11:47:35.775632 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:47:35.775592 2567 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["rhai-e2e-progression-hfmts/progression-custom-config-node-0-0-g5c9f"] Apr 17 11:47:35.778853 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:47:35.778837 2567 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="rhai-e2e-progression-hfmts/progression-custom-config-node-0-0-g5c9f" Apr 17 11:47:35.782331 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:47:35.782308 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"rhai-e2e-progression-hfmts\"/\"default-dockercfg-h6m6b\"" Apr 17 11:47:35.782331 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:47:35.782324 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"rhai-e2e-progression-hfmts\"/\"openshift-service-ca.crt\"" Apr 17 11:47:35.782504 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:47:35.782310 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"rhai-e2e-progression-hfmts\"/\"kube-root-ca.crt\"" Apr 17 11:47:35.789710 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:47:35.789674 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["rhai-e2e-progression-hfmts/progression-custom-config-node-0-0-g5c9f"] Apr 17 11:47:35.943117 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:47:35.943085 2567 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["rhai-e2e-progression-hfmts/progression-job-failure-node-0-0-vqmx4"] Apr 17 11:47:35.946212 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:47:35.946196 2567 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="rhai-e2e-progression-hfmts/progression-job-failure-node-0-0-vqmx4" Apr 17 11:47:35.952311 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:47:35.952286 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7jfz\" (UniqueName: \"kubernetes.io/projected/9cd24a1b-9c16-4192-a203-c07b2d86c711-kube-api-access-l7jfz\") pod \"progression-custom-config-node-0-0-g5c9f\" (UID: \"9cd24a1b-9c16-4192-a203-c07b2d86c711\") " pod="rhai-e2e-progression-hfmts/progression-custom-config-node-0-0-g5c9f" Apr 17 11:47:35.955088 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:47:35.955068 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["rhai-e2e-progression-hfmts/progression-job-failure-node-0-0-vqmx4"] Apr 17 11:47:36.053615 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:47:36.053535 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz6tk\" (UniqueName: \"kubernetes.io/projected/6e8d064d-5a2a-4f1b-bb21-521e22a2819f-kube-api-access-pz6tk\") pod \"progression-job-failure-node-0-0-vqmx4\" (UID: \"6e8d064d-5a2a-4f1b-bb21-521e22a2819f\") " pod="rhai-e2e-progression-hfmts/progression-job-failure-node-0-0-vqmx4" Apr 17 11:47:36.053615 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:47:36.053582 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7jfz\" (UniqueName: \"kubernetes.io/projected/9cd24a1b-9c16-4192-a203-c07b2d86c711-kube-api-access-l7jfz\") pod \"progression-custom-config-node-0-0-g5c9f\" (UID: \"9cd24a1b-9c16-4192-a203-c07b2d86c711\") " pod="rhai-e2e-progression-hfmts/progression-custom-config-node-0-0-g5c9f" Apr 17 11:47:36.061468 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:47:36.061446 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7jfz\" (UniqueName: 
\"kubernetes.io/projected/9cd24a1b-9c16-4192-a203-c07b2d86c711-kube-api-access-l7jfz\") pod \"progression-custom-config-node-0-0-g5c9f\" (UID: \"9cd24a1b-9c16-4192-a203-c07b2d86c711\") " pod="rhai-e2e-progression-hfmts/progression-custom-config-node-0-0-g5c9f" Apr 17 11:47:36.088393 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:47:36.088362 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="rhai-e2e-progression-hfmts/progression-custom-config-node-0-0-g5c9f" Apr 17 11:47:36.154524 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:47:36.154493 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pz6tk\" (UniqueName: \"kubernetes.io/projected/6e8d064d-5a2a-4f1b-bb21-521e22a2819f-kube-api-access-pz6tk\") pod \"progression-job-failure-node-0-0-vqmx4\" (UID: \"6e8d064d-5a2a-4f1b-bb21-521e22a2819f\") " pod="rhai-e2e-progression-hfmts/progression-job-failure-node-0-0-vqmx4" Apr 17 11:47:36.163470 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:47:36.163442 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz6tk\" (UniqueName: \"kubernetes.io/projected/6e8d064d-5a2a-4f1b-bb21-521e22a2819f-kube-api-access-pz6tk\") pod \"progression-job-failure-node-0-0-vqmx4\" (UID: \"6e8d064d-5a2a-4f1b-bb21-521e22a2819f\") " pod="rhai-e2e-progression-hfmts/progression-job-failure-node-0-0-vqmx4" Apr 17 11:47:36.217611 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:47:36.217579 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["rhai-e2e-progression-hfmts/progression-custom-config-node-0-0-g5c9f"] Apr 17 11:47:36.217879 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:47:36.217860 2567 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 17 11:47:36.255468 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:47:36.255446 2567 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="rhai-e2e-progression-hfmts/progression-job-failure-node-0-0-vqmx4" Apr 17 11:47:36.372039 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:47:36.372014 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["rhai-e2e-progression-hfmts/progression-job-failure-node-0-0-vqmx4"] Apr 17 11:47:36.374499 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:47:36.374472 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e8d064d_5a2a_4f1b_bb21_521e22a2819f.slice/crio-e7221d8d9bc7f24ef22a8ff165ef19895e3cbd5101c7be793c247c9e83cf6200 WatchSource:0}: Error finding container e7221d8d9bc7f24ef22a8ff165ef19895e3cbd5101c7be793c247c9e83cf6200: Status 404 returned error can't find the container with id e7221d8d9bc7f24ef22a8ff165ef19895e3cbd5101c7be793c247c9e83cf6200 Apr 17 11:47:36.946857 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:47:36.946748 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="rhai-e2e-progression-hfmts/progression-job-failure-node-0-0-vqmx4" event={"ID":"6e8d064d-5a2a-4f1b-bb21-521e22a2819f","Type":"ContainerStarted","Data":"e7221d8d9bc7f24ef22a8ff165ef19895e3cbd5101c7be793c247c9e83cf6200"} Apr 17 11:47:36.949134 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:47:36.949083 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="rhai-e2e-progression-hfmts/progression-custom-config-node-0-0-g5c9f" event={"ID":"9cd24a1b-9c16-4192-a203-c07b2d86c711","Type":"ContainerStarted","Data":"874e9fecae32f81729ef9006bad27012e06f774ed007bfd5f3eff49164267270"} Apr 17 11:49:21.356966 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:49:21.356086 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="rhai-e2e-progression-hfmts/progression-job-failure-node-0-0-vqmx4" event={"ID":"6e8d064d-5a2a-4f1b-bb21-521e22a2819f","Type":"ContainerStarted","Data":"055adc7b660bf837728f0b32234d455fbcfaf2f1f441310586722f40dffca91c"} Apr 17 11:49:21.356966 ip-10-0-140-245 
kubenswrapper[2567]: I0417 11:49:21.356927 2567 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="rhai-e2e-progression-hfmts/progression-job-failure-node-0-0-vqmx4" Apr 17 11:49:21.359500 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:49:21.359013 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="rhai-e2e-progression-hfmts/progression-custom-config-node-0-0-g5c9f" event={"ID":"9cd24a1b-9c16-4192-a203-c07b2d86c711","Type":"ContainerStarted","Data":"19ec6653d4356248cfc37ac1d804aac71ff8cab23208c073bd4792eadd57e7a4"} Apr 17 11:49:21.359500 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:49:21.359469 2567 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="rhai-e2e-progression-hfmts/progression-custom-config-node-0-0-g5c9f" Apr 17 11:49:21.393222 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:49:21.393165 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="rhai-e2e-progression-hfmts/progression-job-failure-node-0-0-vqmx4" podStartSLOduration=2.265968713 podStartE2EDuration="1m46.393145745s" podCreationTimestamp="2026-04-17 11:47:35 +0000 UTC" firstStartedPulling="2026-04-17 11:47:36.376482732 +0000 UTC m=+1037.214720467" lastFinishedPulling="2026-04-17 11:49:20.503659762 +0000 UTC m=+1141.341897499" observedRunningTime="2026-04-17 11:49:21.37554928 +0000 UTC m=+1142.213787039" watchObservedRunningTime="2026-04-17 11:49:21.393145745 +0000 UTC m=+1142.231383502" Apr 17 11:49:21.393852 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:49:21.393804 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="rhai-e2e-progression-hfmts/progression-custom-config-node-0-0-g5c9f" podStartSLOduration=1.954154195 podStartE2EDuration="1m46.393793311s" podCreationTimestamp="2026-04-17 11:47:35 +0000 UTC" firstStartedPulling="2026-04-17 11:47:36.21800903 +0000 UTC m=+1037.056246764" lastFinishedPulling="2026-04-17 11:49:20.657648142 +0000 UTC m=+1141.495885880" 
observedRunningTime="2026-04-17 11:49:21.392020495 +0000 UTC m=+1142.230258252" watchObservedRunningTime="2026-04-17 11:49:21.393793311 +0000 UTC m=+1142.232031065" Apr 17 11:49:22.359346 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:49:22.359312 2567 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="rhai-e2e-progression-hfmts/progression-job-failure-node-0-0-vqmx4" Apr 17 11:49:22.361526 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:49:22.361505 2567 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="rhai-e2e-progression-hfmts/progression-custom-config-node-0-0-g5c9f" Apr 17 11:49:30.357630 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:49:30.357541 2567 prober.go:120] "Probe failed" probeType="Readiness" pod="rhai-e2e-progression-hfmts/progression-job-failure-node-0-0-vqmx4" podUID="6e8d064d-5a2a-4f1b-bb21-521e22a2819f" containerName="node" probeResult="failure" output="Get \"http://10.132.0.26:28080/metrics\": dial tcp 10.132.0.26:28080: connect: connection refused" Apr 17 11:49:30.387795 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:49:30.387767 2567 generic.go:358] "Generic (PLEG): container finished" podID="6e8d064d-5a2a-4f1b-bb21-521e22a2819f" containerID="055adc7b660bf837728f0b32234d455fbcfaf2f1f441310586722f40dffca91c" exitCode=1 Apr 17 11:49:30.387940 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:49:30.387841 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="rhai-e2e-progression-hfmts/progression-job-failure-node-0-0-vqmx4" event={"ID":"6e8d064d-5a2a-4f1b-bb21-521e22a2819f","Type":"ContainerDied","Data":"055adc7b660bf837728f0b32234d455fbcfaf2f1f441310586722f40dffca91c"} Apr 17 11:49:31.513453 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:49:31.513429 2567 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="rhai-e2e-progression-hfmts/progression-job-failure-node-0-0-vqmx4" Apr 17 11:49:31.547712 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:49:31.547674 2567 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz6tk\" (UniqueName: \"kubernetes.io/projected/6e8d064d-5a2a-4f1b-bb21-521e22a2819f-kube-api-access-pz6tk\") pod \"6e8d064d-5a2a-4f1b-bb21-521e22a2819f\" (UID: \"6e8d064d-5a2a-4f1b-bb21-521e22a2819f\") " Apr 17 11:49:31.549813 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:49:31.549782 2567 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e8d064d-5a2a-4f1b-bb21-521e22a2819f-kube-api-access-pz6tk" (OuterVolumeSpecName: "kube-api-access-pz6tk") pod "6e8d064d-5a2a-4f1b-bb21-521e22a2819f" (UID: "6e8d064d-5a2a-4f1b-bb21-521e22a2819f"). InnerVolumeSpecName "kube-api-access-pz6tk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 11:49:31.649026 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:49:31.648959 2567 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pz6tk\" (UniqueName: \"kubernetes.io/projected/6e8d064d-5a2a-4f1b-bb21-521e22a2819f-kube-api-access-pz6tk\") on node \"ip-10-0-140-245.ec2.internal\" DevicePath \"\"" Apr 17 11:49:32.395149 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:49:32.395116 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="rhai-e2e-progression-hfmts/progression-job-failure-node-0-0-vqmx4" event={"ID":"6e8d064d-5a2a-4f1b-bb21-521e22a2819f","Type":"ContainerDied","Data":"e7221d8d9bc7f24ef22a8ff165ef19895e3cbd5101c7be793c247c9e83cf6200"} Apr 17 11:49:32.395149 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:49:32.395149 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7221d8d9bc7f24ef22a8ff165ef19895e3cbd5101c7be793c247c9e83cf6200" Apr 17 11:49:32.395346 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:49:32.395162 2567 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="rhai-e2e-progression-hfmts/progression-job-failure-node-0-0-vqmx4" Apr 17 11:49:44.360049 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:49:44.360000 2567 prober.go:120] "Probe failed" probeType="Readiness" pod="rhai-e2e-progression-hfmts/progression-custom-config-node-0-0-g5c9f" podUID="9cd24a1b-9c16-4192-a203-c07b2d86c711" containerName="node" probeResult="failure" output="Get \"http://10.132.0.25:28080/metrics\": dial tcp 10.132.0.25:28080: connect: connection refused" Apr 17 11:49:44.445993 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:49:44.445955 2567 generic.go:358] "Generic (PLEG): container finished" podID="9cd24a1b-9c16-4192-a203-c07b2d86c711" containerID="19ec6653d4356248cfc37ac1d804aac71ff8cab23208c073bd4792eadd57e7a4" exitCode=0 Apr 17 11:49:44.446158 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:49:44.446028 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="rhai-e2e-progression-hfmts/progression-custom-config-node-0-0-g5c9f" event={"ID":"9cd24a1b-9c16-4192-a203-c07b2d86c711","Type":"ContainerDied","Data":"19ec6653d4356248cfc37ac1d804aac71ff8cab23208c073bd4792eadd57e7a4"} Apr 17 11:49:45.577824 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:49:45.577801 2567 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="rhai-e2e-progression-hfmts/progression-custom-config-node-0-0-g5c9f" Apr 17 11:49:45.651386 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:49:45.651356 2567 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7jfz\" (UniqueName: \"kubernetes.io/projected/9cd24a1b-9c16-4192-a203-c07b2d86c711-kube-api-access-l7jfz\") pod \"9cd24a1b-9c16-4192-a203-c07b2d86c711\" (UID: \"9cd24a1b-9c16-4192-a203-c07b2d86c711\") " Apr 17 11:49:45.653376 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:49:45.653350 2567 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cd24a1b-9c16-4192-a203-c07b2d86c711-kube-api-access-l7jfz" (OuterVolumeSpecName: "kube-api-access-l7jfz") pod "9cd24a1b-9c16-4192-a203-c07b2d86c711" (UID: "9cd24a1b-9c16-4192-a203-c07b2d86c711"). InnerVolumeSpecName "kube-api-access-l7jfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 11:49:45.752054 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:49:45.752025 2567 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l7jfz\" (UniqueName: \"kubernetes.io/projected/9cd24a1b-9c16-4192-a203-c07b2d86c711-kube-api-access-l7jfz\") on node \"ip-10-0-140-245.ec2.internal\" DevicePath \"\"" Apr 17 11:49:46.453662 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:49:46.453624 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="rhai-e2e-progression-hfmts/progression-custom-config-node-0-0-g5c9f" event={"ID":"9cd24a1b-9c16-4192-a203-c07b2d86c711","Type":"ContainerDied","Data":"874e9fecae32f81729ef9006bad27012e06f774ed007bfd5f3eff49164267270"} Apr 17 11:49:46.453662 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:49:46.453644 2567 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="rhai-e2e-progression-hfmts/progression-custom-config-node-0-0-g5c9f" Apr 17 11:49:46.453662 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:49:46.453662 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="874e9fecae32f81729ef9006bad27012e06f774ed007bfd5f3eff49164267270" Apr 17 11:50:19.692031 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:50:19.691998 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-j8vsm_b4b1d00c-98b8-45c5-80c4-0362b3303384/console-operator/2.log" Apr 17 11:50:19.694991 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:50:19.694969 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-j8vsm_b4b1d00c-98b8-45c5-80c4-0362b3303384/console-operator/2.log" Apr 17 11:50:19.696309 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:50:19.696289 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vjfgd_efceb2ce-9379-47c0-b8c1-22f8ad408e7c/ovn-acl-logging/0.log" Apr 17 11:50:19.699252 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:50:19.699235 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vjfgd_efceb2ce-9379-47c0-b8c1-22f8ad408e7c/ovn-acl-logging/0.log" Apr 17 11:52:36.078481 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:36.078444 2567 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["rhai-e2e-progression-hfmts/progression-no-metrics-node-0-0-ql22w"] Apr 17 11:52:36.081159 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:36.078747 2567 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9cd24a1b-9c16-4192-a203-c07b2d86c711" containerName="node" Apr 17 11:52:36.081159 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:36.078758 2567 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cd24a1b-9c16-4192-a203-c07b2d86c711" 
containerName="node" Apr 17 11:52:36.081159 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:36.078776 2567 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6e8d064d-5a2a-4f1b-bb21-521e22a2819f" containerName="node" Apr 17 11:52:36.081159 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:36.078782 2567 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e8d064d-5a2a-4f1b-bb21-521e22a2819f" containerName="node" Apr 17 11:52:36.081159 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:36.078832 2567 memory_manager.go:356] "RemoveStaleState removing state" podUID="9cd24a1b-9c16-4192-a203-c07b2d86c711" containerName="node" Apr 17 11:52:36.081159 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:36.078841 2567 memory_manager.go:356] "RemoveStaleState removing state" podUID="6e8d064d-5a2a-4f1b-bb21-521e22a2819f" containerName="node" Apr 17 11:52:36.082167 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:36.082150 2567 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="rhai-e2e-progression-hfmts/progression-no-metrics-node-0-0-ql22w" Apr 17 11:52:36.084573 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:36.084548 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"rhai-e2e-progression-hfmts\"/\"openshift-service-ca.crt\"" Apr 17 11:52:36.084705 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:36.084607 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"rhai-e2e-progression-hfmts\"/\"default-dockercfg-h6m6b\"" Apr 17 11:52:36.085584 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:36.085569 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"rhai-e2e-progression-hfmts\"/\"kube-root-ca.crt\"" Apr 17 11:52:36.096518 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:36.096490 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["rhai-e2e-progression-hfmts/progression-no-metrics-node-0-0-ql22w"] Apr 17 11:52:36.138134 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:36.138104 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6cnf\" (UniqueName: \"kubernetes.io/projected/d2ded4d0-14a8-4174-bfe2-9e1420fb69a3-kube-api-access-r6cnf\") pod \"progression-no-metrics-node-0-0-ql22w\" (UID: \"d2ded4d0-14a8-4174-bfe2-9e1420fb69a3\") " pod="rhai-e2e-progression-hfmts/progression-no-metrics-node-0-0-ql22w" Apr 17 11:52:36.239494 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:36.239454 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r6cnf\" (UniqueName: \"kubernetes.io/projected/d2ded4d0-14a8-4174-bfe2-9e1420fb69a3-kube-api-access-r6cnf\") pod \"progression-no-metrics-node-0-0-ql22w\" (UID: \"d2ded4d0-14a8-4174-bfe2-9e1420fb69a3\") " pod="rhai-e2e-progression-hfmts/progression-no-metrics-node-0-0-ql22w" Apr 17 11:52:36.247404 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:36.247373 2567 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6cnf\" (UniqueName: \"kubernetes.io/projected/d2ded4d0-14a8-4174-bfe2-9e1420fb69a3-kube-api-access-r6cnf\") pod \"progression-no-metrics-node-0-0-ql22w\" (UID: \"d2ded4d0-14a8-4174-bfe2-9e1420fb69a3\") " pod="rhai-e2e-progression-hfmts/progression-no-metrics-node-0-0-ql22w" Apr 17 11:52:36.391515 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:36.391427 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="rhai-e2e-progression-hfmts/progression-no-metrics-node-0-0-ql22w" Apr 17 11:52:36.509402 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:36.509355 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["rhai-e2e-progression-hfmts/progression-no-metrics-node-0-0-ql22w"] Apr 17 11:52:36.511538 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:52:36.511507 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2ded4d0_14a8_4174_bfe2_9e1420fb69a3.slice/crio-b0df7cec87a990388bfc0e01956cbabdb2838bc08eb8ffe281af709029ee95a3 WatchSource:0}: Error finding container b0df7cec87a990388bfc0e01956cbabdb2838bc08eb8ffe281af709029ee95a3: Status 404 returned error can't find the container with id b0df7cec87a990388bfc0e01956cbabdb2838bc08eb8ffe281af709029ee95a3 Apr 17 11:52:36.513364 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:36.513348 2567 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 17 11:52:37.013181 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:37.013143 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="rhai-e2e-progression-hfmts/progression-no-metrics-node-0-0-ql22w" event={"ID":"d2ded4d0-14a8-4174-bfe2-9e1420fb69a3","Type":"ContainerStarted","Data":"4999822c9c411b4e0f3c579d3054e138d4a75cebf98d0f380e488dfdca99ca6a"} Apr 17 11:52:37.013181 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:37.013181 2567 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="rhai-e2e-progression-hfmts/progression-no-metrics-node-0-0-ql22w" event={"ID":"d2ded4d0-14a8-4174-bfe2-9e1420fb69a3","Type":"ContainerStarted","Data":"b0df7cec87a990388bfc0e01956cbabdb2838bc08eb8ffe281af709029ee95a3"} Apr 17 11:52:37.030121 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:37.030073 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="rhai-e2e-progression-hfmts/progression-no-metrics-node-0-0-ql22w" podStartSLOduration=1.030059691 podStartE2EDuration="1.030059691s" podCreationTimestamp="2026-04-17 11:52:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 11:52:37.028470429 +0000 UTC m=+1337.866708181" watchObservedRunningTime="2026-04-17 11:52:37.030059691 +0000 UTC m=+1337.868297446" Apr 17 11:52:42.030382 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:42.030347 2567 generic.go:358] "Generic (PLEG): container finished" podID="d2ded4d0-14a8-4174-bfe2-9e1420fb69a3" containerID="4999822c9c411b4e0f3c579d3054e138d4a75cebf98d0f380e488dfdca99ca6a" exitCode=0 Apr 17 11:52:42.030767 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:42.030418 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="rhai-e2e-progression-hfmts/progression-no-metrics-node-0-0-ql22w" event={"ID":"d2ded4d0-14a8-4174-bfe2-9e1420fb69a3","Type":"ContainerDied","Data":"4999822c9c411b4e0f3c579d3054e138d4a75cebf98d0f380e488dfdca99ca6a"} Apr 17 11:52:43.159975 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:43.159954 2567 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="rhai-e2e-progression-hfmts/progression-no-metrics-node-0-0-ql22w" Apr 17 11:52:43.194596 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:43.194567 2567 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6cnf\" (UniqueName: \"kubernetes.io/projected/d2ded4d0-14a8-4174-bfe2-9e1420fb69a3-kube-api-access-r6cnf\") pod \"d2ded4d0-14a8-4174-bfe2-9e1420fb69a3\" (UID: \"d2ded4d0-14a8-4174-bfe2-9e1420fb69a3\") " Apr 17 11:52:43.196614 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:43.196583 2567 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2ded4d0-14a8-4174-bfe2-9e1420fb69a3-kube-api-access-r6cnf" (OuterVolumeSpecName: "kube-api-access-r6cnf") pod "d2ded4d0-14a8-4174-bfe2-9e1420fb69a3" (UID: "d2ded4d0-14a8-4174-bfe2-9e1420fb69a3"). InnerVolumeSpecName "kube-api-access-r6cnf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 11:52:43.296051 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:43.295972 2567 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r6cnf\" (UniqueName: \"kubernetes.io/projected/d2ded4d0-14a8-4174-bfe2-9e1420fb69a3-kube-api-access-r6cnf\") on node \"ip-10-0-140-245.ec2.internal\" DevicePath \"\"" Apr 17 11:52:44.038699 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:44.038598 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="rhai-e2e-progression-hfmts/progression-no-metrics-node-0-0-ql22w" event={"ID":"d2ded4d0-14a8-4174-bfe2-9e1420fb69a3","Type":"ContainerDied","Data":"b0df7cec87a990388bfc0e01956cbabdb2838bc08eb8ffe281af709029ee95a3"} Apr 17 11:52:44.038699 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:44.038635 2567 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="rhai-e2e-progression-hfmts/progression-no-metrics-node-0-0-ql22w" Apr 17 11:52:44.038880 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:44.038636 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0df7cec87a990388bfc0e01956cbabdb2838bc08eb8ffe281af709029ee95a3" Apr 17 11:52:49.077731 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:49.077690 2567 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4j6mk/must-gather-wmxq9"] Apr 17 11:52:49.078095 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:49.077999 2567 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d2ded4d0-14a8-4174-bfe2-9e1420fb69a3" containerName="node" Apr 17 11:52:49.078095 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:49.078010 2567 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2ded4d0-14a8-4174-bfe2-9e1420fb69a3" containerName="node" Apr 17 11:52:49.078095 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:49.078056 2567 memory_manager.go:356] "RemoveStaleState removing state" podUID="d2ded4d0-14a8-4174-bfe2-9e1420fb69a3" containerName="node" Apr 17 11:52:49.081139 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:49.081123 2567 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4j6mk/must-gather-wmxq9" Apr 17 11:52:49.083958 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:49.083942 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-4j6mk\"/\"kube-root-ca.crt\"" Apr 17 11:52:49.084815 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:49.084794 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-4j6mk\"/\"openshift-service-ca.crt\"" Apr 17 11:52:49.084911 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:49.084795 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-4j6mk\"/\"default-dockercfg-b7zxk\"" Apr 17 11:52:49.089382 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:49.089361 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-4j6mk/must-gather-wmxq9"] Apr 17 11:52:49.142385 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:49.142354 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnf5f\" (UniqueName: \"kubernetes.io/projected/c9478624-c429-40b3-9526-898ed5e755f4-kube-api-access-lnf5f\") pod \"must-gather-wmxq9\" (UID: \"c9478624-c429-40b3-9526-898ed5e755f4\") " pod="openshift-must-gather-4j6mk/must-gather-wmxq9" Apr 17 11:52:49.142549 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:49.142394 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c9478624-c429-40b3-9526-898ed5e755f4-must-gather-output\") pod \"must-gather-wmxq9\" (UID: \"c9478624-c429-40b3-9526-898ed5e755f4\") " pod="openshift-must-gather-4j6mk/must-gather-wmxq9" Apr 17 11:52:49.243771 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:49.243730 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lnf5f\" (UniqueName: 
\"kubernetes.io/projected/c9478624-c429-40b3-9526-898ed5e755f4-kube-api-access-lnf5f\") pod \"must-gather-wmxq9\" (UID: \"c9478624-c429-40b3-9526-898ed5e755f4\") " pod="openshift-must-gather-4j6mk/must-gather-wmxq9" Apr 17 11:52:49.243958 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:49.243791 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c9478624-c429-40b3-9526-898ed5e755f4-must-gather-output\") pod \"must-gather-wmxq9\" (UID: \"c9478624-c429-40b3-9526-898ed5e755f4\") " pod="openshift-must-gather-4j6mk/must-gather-wmxq9" Apr 17 11:52:49.244192 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:49.244170 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c9478624-c429-40b3-9526-898ed5e755f4-must-gather-output\") pod \"must-gather-wmxq9\" (UID: \"c9478624-c429-40b3-9526-898ed5e755f4\") " pod="openshift-must-gather-4j6mk/must-gather-wmxq9" Apr 17 11:52:49.251454 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:49.251422 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnf5f\" (UniqueName: \"kubernetes.io/projected/c9478624-c429-40b3-9526-898ed5e755f4-kube-api-access-lnf5f\") pod \"must-gather-wmxq9\" (UID: \"c9478624-c429-40b3-9526-898ed5e755f4\") " pod="openshift-must-gather-4j6mk/must-gather-wmxq9" Apr 17 11:52:49.390718 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:49.390571 2567 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4j6mk/must-gather-wmxq9" Apr 17 11:52:49.511865 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:49.511837 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-4j6mk/must-gather-wmxq9"] Apr 17 11:52:49.512850 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:52:49.512824 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9478624_c429_40b3_9526_898ed5e755f4.slice/crio-3d9563824755cfed2f9a03e335106180be03ab231699262a56e518aec1ad3ad4 WatchSource:0}: Error finding container 3d9563824755cfed2f9a03e335106180be03ab231699262a56e518aec1ad3ad4: Status 404 returned error can't find the container with id 3d9563824755cfed2f9a03e335106180be03ab231699262a56e518aec1ad3ad4 Apr 17 11:52:50.058659 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:50.058623 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4j6mk/must-gather-wmxq9" event={"ID":"c9478624-c429-40b3-9526-898ed5e755f4","Type":"ContainerStarted","Data":"3d9563824755cfed2f9a03e335106180be03ab231699262a56e518aec1ad3ad4"} Apr 17 11:52:53.023528 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:53.023488 2567 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["rhai-e2e-progression-hfmts/progression-custom-config-node-0-0-g5c9f"] Apr 17 11:52:53.026864 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:53.026839 2567 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["rhai-e2e-progression-hfmts/progression-custom-config-node-0-0-g5c9f"] Apr 17 11:52:53.055530 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:53.055491 2567 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["rhai-e2e-progression-hfmts/progression-job-failure-node-0-0-vqmx4"] Apr 17 11:52:53.060653 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:53.060631 2567 kubelet.go:2547] "SyncLoop REMOVE" source="api" 
pods=["rhai-e2e-progression-hfmts/progression-job-failure-node-0-0-vqmx4"] Apr 17 11:52:53.072400 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:53.072378 2567 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["rhai-e2e-progression-hfmts/progression-no-metrics-node-0-0-ql22w"] Apr 17 11:52:53.075703 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:53.075654 2567 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["rhai-e2e-progression-hfmts/progression-no-metrics-node-0-0-ql22w"] Apr 17 11:52:53.716110 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:53.716072 2567 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e8d064d-5a2a-4f1b-bb21-521e22a2819f" path="/var/lib/kubelet/pods/6e8d064d-5a2a-4f1b-bb21-521e22a2819f/volumes" Apr 17 11:52:53.716613 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:53.716594 2567 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cd24a1b-9c16-4192-a203-c07b2d86c711" path="/var/lib/kubelet/pods/9cd24a1b-9c16-4192-a203-c07b2d86c711/volumes" Apr 17 11:52:53.717043 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:53.717028 2567 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2ded4d0-14a8-4174-bfe2-9e1420fb69a3" path="/var/lib/kubelet/pods/d2ded4d0-14a8-4174-bfe2-9e1420fb69a3/volumes" Apr 17 11:52:55.078149 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:55.078118 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4j6mk/must-gather-wmxq9" event={"ID":"c9478624-c429-40b3-9526-898ed5e755f4","Type":"ContainerStarted","Data":"b1e2cd50d9b6bc2b1fe21a1b635e66cefbad4e19b0028c124080c29a964b312e"} Apr 17 11:52:55.078149 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:55.078153 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4j6mk/must-gather-wmxq9" event={"ID":"c9478624-c429-40b3-9526-898ed5e755f4","Type":"ContainerStarted","Data":"7792a500b2644b3d8d77b796435b6b7e3b65fe3da60323a613694f731edabc4c"} Apr 17 
11:52:55.093675 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:52:55.093625 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-4j6mk/must-gather-wmxq9" podStartSLOduration=0.865012021 podStartE2EDuration="6.093607615s" podCreationTimestamp="2026-04-17 11:52:49 +0000 UTC" firstStartedPulling="2026-04-17 11:52:49.514395419 +0000 UTC m=+1350.352633153" lastFinishedPulling="2026-04-17 11:52:54.742991 +0000 UTC m=+1355.581228747" observedRunningTime="2026-04-17 11:52:55.091648774 +0000 UTC m=+1355.929886528" watchObservedRunningTime="2026-04-17 11:52:55.093607615 +0000 UTC m=+1355.931845416" Apr 17 11:53:40.249353 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:40.249318 2567 generic.go:358] "Generic (PLEG): container finished" podID="c9478624-c429-40b3-9526-898ed5e755f4" containerID="7792a500b2644b3d8d77b796435b6b7e3b65fe3da60323a613694f731edabc4c" exitCode=0 Apr 17 11:53:40.250038 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:40.249394 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4j6mk/must-gather-wmxq9" event={"ID":"c9478624-c429-40b3-9526-898ed5e755f4","Type":"ContainerDied","Data":"7792a500b2644b3d8d77b796435b6b7e3b65fe3da60323a613694f731edabc4c"} Apr 17 11:53:40.250038 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:40.249739 2567 scope.go:117] "RemoveContainer" containerID="7792a500b2644b3d8d77b796435b6b7e3b65fe3da60323a613694f731edabc4c" Apr 17 11:53:40.795703 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:40.795655 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4j6mk_must-gather-wmxq9_c9478624-c429-40b3-9526-898ed5e755f4/gather/0.log" Apr 17 11:53:44.037855 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:44.037825 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_global-pull-secret-syncer-ftdll_e6522e95-55e6-43f4-9a0b-b0429a3a47c4/global-pull-secret-syncer/0.log" Apr 17 11:53:44.185719 ip-10-0-140-245 
kubenswrapper[2567]: I0417 11:53:44.185661 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_konnectivity-agent-jkr2r_710176df-941f-45f0-baed-0e9b9115157d/konnectivity-agent/0.log"
Apr 17 11:53:44.249097 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:44.249068 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_kube-apiserver-proxy-ip-10-0-140-245.ec2.internal_b313ec3d20352e5d3b289f2ee066026b/haproxy/0.log"
Apr 17 11:53:46.117906 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:46.117871 2567 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4j6mk/must-gather-wmxq9"]
Apr 17 11:53:46.118288 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:46.118102 2567 kuberuntime_container.go:864] "Killing container with a grace period" pod="openshift-must-gather-4j6mk/must-gather-wmxq9" podUID="c9478624-c429-40b3-9526-898ed5e755f4" containerName="copy" containerID="cri-o://b1e2cd50d9b6bc2b1fe21a1b635e66cefbad4e19b0028c124080c29a964b312e" gracePeriod=2
Apr 17 11:53:46.120464 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:46.120432 2567 status_manager.go:895] "Failed to get status for pod" podUID="c9478624-c429-40b3-9526-898ed5e755f4" pod="openshift-must-gather-4j6mk/must-gather-wmxq9" err="pods \"must-gather-wmxq9\" is forbidden: User \"system:node:ip-10-0-140-245.ec2.internal\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-4j6mk\": no relationship found between node 'ip-10-0-140-245.ec2.internal' and this object"
Apr 17 11:53:46.121534 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:46.121512 2567 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4j6mk/must-gather-wmxq9"]
Apr 17 11:53:46.271331 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:46.271305 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4j6mk_must-gather-wmxq9_c9478624-c429-40b3-9526-898ed5e755f4/copy/0.log"
Apr 17 11:53:46.271613 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:46.271591 2567 generic.go:358] "Generic (PLEG): container finished" podID="c9478624-c429-40b3-9526-898ed5e755f4" containerID="b1e2cd50d9b6bc2b1fe21a1b635e66cefbad4e19b0028c124080c29a964b312e" exitCode=143
Apr 17 11:53:46.350507 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:46.350485 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4j6mk_must-gather-wmxq9_c9478624-c429-40b3-9526-898ed5e755f4/copy/0.log"
Apr 17 11:53:46.350846 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:46.350831 2567 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4j6mk/must-gather-wmxq9"
Apr 17 11:53:46.352714 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:46.352657 2567 status_manager.go:895] "Failed to get status for pod" podUID="c9478624-c429-40b3-9526-898ed5e755f4" pod="openshift-must-gather-4j6mk/must-gather-wmxq9" err="pods \"must-gather-wmxq9\" is forbidden: User \"system:node:ip-10-0-140-245.ec2.internal\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-4j6mk\": no relationship found between node 'ip-10-0-140-245.ec2.internal' and this object"
Apr 17 11:53:46.424077 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:46.424054 2567 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c9478624-c429-40b3-9526-898ed5e755f4-must-gather-output\") pod \"c9478624-c429-40b3-9526-898ed5e755f4\" (UID: \"c9478624-c429-40b3-9526-898ed5e755f4\") "
Apr 17 11:53:46.424160 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:46.424088 2567 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnf5f\" (UniqueName: \"kubernetes.io/projected/c9478624-c429-40b3-9526-898ed5e755f4-kube-api-access-lnf5f\") pod \"c9478624-c429-40b3-9526-898ed5e755f4\" (UID: \"c9478624-c429-40b3-9526-898ed5e755f4\") "
Apr 17 11:53:46.426232 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:46.426203 2567 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9478624-c429-40b3-9526-898ed5e755f4-kube-api-access-lnf5f" (OuterVolumeSpecName: "kube-api-access-lnf5f") pod "c9478624-c429-40b3-9526-898ed5e755f4" (UID: "c9478624-c429-40b3-9526-898ed5e755f4"). InnerVolumeSpecName "kube-api-access-lnf5f". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 17 11:53:46.426335 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:46.426314 2567 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9478624-c429-40b3-9526-898ed5e755f4-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "c9478624-c429-40b3-9526-898ed5e755f4" (UID: "c9478624-c429-40b3-9526-898ed5e755f4"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Apr 17 11:53:46.524915 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:46.524883 2567 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c9478624-c429-40b3-9526-898ed5e755f4-must-gather-output\") on node \"ip-10-0-140-245.ec2.internal\" DevicePath \"\""
Apr 17 11:53:46.524915 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:46.524916 2567 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lnf5f\" (UniqueName: \"kubernetes.io/projected/c9478624-c429-40b3-9526-898ed5e755f4-kube-api-access-lnf5f\") on node \"ip-10-0-140-245.ec2.internal\" DevicePath \"\""
Apr 17 11:53:47.275650 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:47.275626 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4j6mk_must-gather-wmxq9_c9478624-c429-40b3-9526-898ed5e755f4/copy/0.log"
Apr 17 11:53:47.276016 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:47.275995 2567 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4j6mk/must-gather-wmxq9"
Apr 17 11:53:47.276095 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:47.276006 2567 scope.go:117] "RemoveContainer" containerID="b1e2cd50d9b6bc2b1fe21a1b635e66cefbad4e19b0028c124080c29a964b312e"
Apr 17 11:53:47.278048 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:47.278020 2567 status_manager.go:895] "Failed to get status for pod" podUID="c9478624-c429-40b3-9526-898ed5e755f4" pod="openshift-must-gather-4j6mk/must-gather-wmxq9" err="pods \"must-gather-wmxq9\" is forbidden: User \"system:node:ip-10-0-140-245.ec2.internal\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-4j6mk\": no relationship found between node 'ip-10-0-140-245.ec2.internal' and this object"
Apr 17 11:53:47.283090 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:47.283060 2567 scope.go:117] "RemoveContainer" containerID="7792a500b2644b3d8d77b796435b6b7e3b65fe3da60323a613694f731edabc4c"
Apr 17 11:53:47.286054 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:47.286020 2567 status_manager.go:895] "Failed to get status for pod" podUID="c9478624-c429-40b3-9526-898ed5e755f4" pod="openshift-must-gather-4j6mk/must-gather-wmxq9" err="pods \"must-gather-wmxq9\" is forbidden: User \"system:node:ip-10-0-140-245.ec2.internal\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-4j6mk\": no relationship found between node 'ip-10-0-140-245.ec2.internal' and this object"
Apr 17 11:53:47.306660 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:47.306633 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-69db897b98-9g9xm_6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe/kube-state-metrics/0.log"
Apr 17 11:53:47.324818 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:47.324798 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-69db897b98-9g9xm_6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe/kube-rbac-proxy-main/0.log"
Apr 17 11:53:47.351548 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:47.351527 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-69db897b98-9g9xm_6fa3372d-8fe5-4d0e-b7ab-179efcea0bbe/kube-rbac-proxy-self/0.log"
Apr 17 11:53:47.450137 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:47.450113 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-4qgkz_1c3be40e-d817-4bb6-b190-0f2a3106d3e5/node-exporter/0.log"
Apr 17 11:53:47.466757 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:47.466673 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-4qgkz_1c3be40e-d817-4bb6-b190-0f2a3106d3e5/kube-rbac-proxy/0.log"
Apr 17 11:53:47.484990 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:47.484966 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-4qgkz_1c3be40e-d817-4bb6-b190-0f2a3106d3e5/init-textfile/0.log"
Apr 17 11:53:47.713581 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:47.713550 2567 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9478624-c429-40b3-9526-898ed5e755f4" path="/var/lib/kubelet/pods/c9478624-c429-40b3-9526-898ed5e755f4/volumes"
Apr 17 11:53:47.878131 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:47.878058 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-5676c8c784-w2jhn_14a3d4aa-4c23-4e2b-801d-34b9b27b9941/prometheus-operator/0.log"
Apr 17 11:53:47.892330 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:47.892312 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-5676c8c784-w2jhn_14a3d4aa-4c23-4e2b-801d-34b9b27b9941/kube-rbac-proxy/0.log"
Apr 17 11:53:49.292151 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:49.292122 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-console_networking-console-plugin-cb95c66f6-8b7v5_57b8a4d3-28c9-4671-9ef8-20adb1b71c4e/networking-console-plugin/0.log"
Apr 17 11:53:49.699998 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:49.699974 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-j8vsm_b4b1d00c-98b8-45c5-80c4-0362b3303384/console-operator/2.log"
Apr 17 11:53:49.708201 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:49.708176 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-9d4b6777b-j8vsm_b4b1d00c-98b8-45c5-80c4-0362b3303384/console-operator/3.log"
Apr 17 11:53:50.446492 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:50.446465 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_volume-data-source-validator-7c6cbb6c87-x5b7q_5a2c4dc0-7d3f-4520-8ef4-4d3c320bedeb/volume-data-source-validator/0.log"
Apr 17 11:53:50.814351 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:50.814277 2567 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jjn79/perf-node-gather-daemonset-8v6xt"]
Apr 17 11:53:50.814596 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:50.814584 2567 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c9478624-c429-40b3-9526-898ed5e755f4" containerName="gather"
Apr 17 11:53:50.814641 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:50.814597 2567 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9478624-c429-40b3-9526-898ed5e755f4" containerName="gather"
Apr 17 11:53:50.814641 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:50.814617 2567 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c9478624-c429-40b3-9526-898ed5e755f4" containerName="copy"
Apr 17 11:53:50.814641 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:50.814623 2567 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9478624-c429-40b3-9526-898ed5e755f4" containerName="copy"
Apr 17 11:53:50.814745 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:50.814674 2567 memory_manager.go:356] "RemoveStaleState removing state" podUID="c9478624-c429-40b3-9526-898ed5e755f4" containerName="gather"
Apr 17 11:53:50.814745 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:50.814698 2567 memory_manager.go:356] "RemoveStaleState removing state" podUID="c9478624-c429-40b3-9526-898ed5e755f4" containerName="copy"
Apr 17 11:53:50.820122 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:50.820098 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jjn79/perf-node-gather-daemonset-8v6xt"
Apr 17 11:53:50.822165 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:50.822143 2567 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-jjn79\"/\"default-dockercfg-9rt4j\""
Apr 17 11:53:50.822340 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:50.822316 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-jjn79\"/\"openshift-service-ca.crt\""
Apr 17 11:53:50.822652 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:50.822635 2567 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-jjn79\"/\"kube-root-ca.crt\""
Apr 17 11:53:50.823619 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:50.823592 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jjn79/perf-node-gather-daemonset-8v6xt"]
Apr 17 11:53:50.856841 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:50.856811 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd81347b-6712-40a8-82b3-741405d0e50a-lib-modules\") pod \"perf-node-gather-daemonset-8v6xt\" (UID: \"dd81347b-6712-40a8-82b3-741405d0e50a\") " pod="openshift-must-gather-jjn79/perf-node-gather-daemonset-8v6xt"
Apr 17 11:53:50.856957 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:50.856844 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/dd81347b-6712-40a8-82b3-741405d0e50a-proc\") pod \"perf-node-gather-daemonset-8v6xt\" (UID: \"dd81347b-6712-40a8-82b3-741405d0e50a\") " pod="openshift-must-gather-jjn79/perf-node-gather-daemonset-8v6xt"
Apr 17 11:53:50.856957 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:50.856874 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/dd81347b-6712-40a8-82b3-741405d0e50a-podres\") pod \"perf-node-gather-daemonset-8v6xt\" (UID: \"dd81347b-6712-40a8-82b3-741405d0e50a\") " pod="openshift-must-gather-jjn79/perf-node-gather-daemonset-8v6xt"
Apr 17 11:53:50.857040 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:50.856957 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zg68\" (UniqueName: \"kubernetes.io/projected/dd81347b-6712-40a8-82b3-741405d0e50a-kube-api-access-2zg68\") pod \"perf-node-gather-daemonset-8v6xt\" (UID: \"dd81347b-6712-40a8-82b3-741405d0e50a\") " pod="openshift-must-gather-jjn79/perf-node-gather-daemonset-8v6xt"
Apr 17 11:53:50.857040 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:50.856986 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/dd81347b-6712-40a8-82b3-741405d0e50a-sys\") pod \"perf-node-gather-daemonset-8v6xt\" (UID: \"dd81347b-6712-40a8-82b3-741405d0e50a\") " pod="openshift-must-gather-jjn79/perf-node-gather-daemonset-8v6xt"
Apr 17 11:53:50.957437 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:50.957410 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd81347b-6712-40a8-82b3-741405d0e50a-lib-modules\") pod \"perf-node-gather-daemonset-8v6xt\" (UID: \"dd81347b-6712-40a8-82b3-741405d0e50a\") " pod="openshift-must-gather-jjn79/perf-node-gather-daemonset-8v6xt"
Apr 17 11:53:50.957437 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:50.957438 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/dd81347b-6712-40a8-82b3-741405d0e50a-proc\") pod \"perf-node-gather-daemonset-8v6xt\" (UID: \"dd81347b-6712-40a8-82b3-741405d0e50a\") " pod="openshift-must-gather-jjn79/perf-node-gather-daemonset-8v6xt"
Apr 17 11:53:50.957605 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:50.957455 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/dd81347b-6712-40a8-82b3-741405d0e50a-podres\") pod \"perf-node-gather-daemonset-8v6xt\" (UID: \"dd81347b-6712-40a8-82b3-741405d0e50a\") " pod="openshift-must-gather-jjn79/perf-node-gather-daemonset-8v6xt"
Apr 17 11:53:50.957605 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:50.957478 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2zg68\" (UniqueName: \"kubernetes.io/projected/dd81347b-6712-40a8-82b3-741405d0e50a-kube-api-access-2zg68\") pod \"perf-node-gather-daemonset-8v6xt\" (UID: \"dd81347b-6712-40a8-82b3-741405d0e50a\") " pod="openshift-must-gather-jjn79/perf-node-gather-daemonset-8v6xt"
Apr 17 11:53:50.957605 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:50.957496 2567 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/dd81347b-6712-40a8-82b3-741405d0e50a-sys\") pod \"perf-node-gather-daemonset-8v6xt\" (UID: \"dd81347b-6712-40a8-82b3-741405d0e50a\") " pod="openshift-must-gather-jjn79/perf-node-gather-daemonset-8v6xt"
Apr 17 11:53:50.957605 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:50.957530 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/dd81347b-6712-40a8-82b3-741405d0e50a-proc\") pod \"perf-node-gather-daemonset-8v6xt\" (UID: \"dd81347b-6712-40a8-82b3-741405d0e50a\") " pod="openshift-must-gather-jjn79/perf-node-gather-daemonset-8v6xt"
Apr 17 11:53:50.957605 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:50.957585 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/dd81347b-6712-40a8-82b3-741405d0e50a-podres\") pod \"perf-node-gather-daemonset-8v6xt\" (UID: \"dd81347b-6712-40a8-82b3-741405d0e50a\") " pod="openshift-must-gather-jjn79/perf-node-gather-daemonset-8v6xt"
Apr 17 11:53:50.957605 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:50.957591 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd81347b-6712-40a8-82b3-741405d0e50a-lib-modules\") pod \"perf-node-gather-daemonset-8v6xt\" (UID: \"dd81347b-6712-40a8-82b3-741405d0e50a\") " pod="openshift-must-gather-jjn79/perf-node-gather-daemonset-8v6xt"
Apr 17 11:53:50.957834 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:50.957614 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/dd81347b-6712-40a8-82b3-741405d0e50a-sys\") pod \"perf-node-gather-daemonset-8v6xt\" (UID: \"dd81347b-6712-40a8-82b3-741405d0e50a\") " pod="openshift-must-gather-jjn79/perf-node-gather-daemonset-8v6xt"
Apr 17 11:53:50.964401 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:50.964384 2567 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zg68\" (UniqueName: \"kubernetes.io/projected/dd81347b-6712-40a8-82b3-741405d0e50a-kube-api-access-2zg68\") pod \"perf-node-gather-daemonset-8v6xt\" (UID: \"dd81347b-6712-40a8-82b3-741405d0e50a\") " pod="openshift-must-gather-jjn79/perf-node-gather-daemonset-8v6xt"
Apr 17 11:53:51.028248 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:51.028224 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-5q84n_2da30d47-d7ea-47f4-a489-4729c8989cef/dns/0.log"
Apr 17 11:53:51.047251 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:51.047232 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-5q84n_2da30d47-d7ea-47f4-a489-4729c8989cef/kube-rbac-proxy/0.log"
Apr 17 11:53:51.130585 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:51.130519 2567 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jjn79/perf-node-gather-daemonset-8v6xt"
Apr 17 11:53:51.161822 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:51.161796 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-9zpcp_f79e4e5d-fbfb-429a-aa74-4c1d5725072a/dns-node-resolver/0.log"
Apr 17 11:53:51.245312 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:51.245290 2567 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jjn79/perf-node-gather-daemonset-8v6xt"]
Apr 17 11:53:51.247449 ip-10-0-140-245 kubenswrapper[2567]: W0417 11:53:51.247407 2567 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poddd81347b_6712_40a8_82b3_741405d0e50a.slice/crio-bfb43ab709ba4d52a9ef1da51a2b0c2460cf69c6a8099bd4b5e6adfaf06ae160 WatchSource:0}: Error finding container bfb43ab709ba4d52a9ef1da51a2b0c2460cf69c6a8099bd4b5e6adfaf06ae160: Status 404 returned error can't find the container with id bfb43ab709ba4d52a9ef1da51a2b0c2460cf69c6a8099bd4b5e6adfaf06ae160
Apr 17 11:53:51.291537 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:51.291512 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jjn79/perf-node-gather-daemonset-8v6xt" event={"ID":"dd81347b-6712-40a8-82b3-741405d0e50a","Type":"ContainerStarted","Data":"bfb43ab709ba4d52a9ef1da51a2b0c2460cf69c6a8099bd4b5e6adfaf06ae160"}
Apr 17 11:53:51.605957 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:51.605925 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-xq6rz_96b4954a-76e8-4a06-9917-5454d450896d/node-ca/0.log"
Apr 17 11:53:52.245235 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:52.245200 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-6cbc99754-98r5g_706595e5-78a2-4cbb-93bc-d371be497332/router/0.log"
Apr 17 11:53:52.295989 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:52.295960 2567 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jjn79/perf-node-gather-daemonset-8v6xt" event={"ID":"dd81347b-6712-40a8-82b3-741405d0e50a","Type":"ContainerStarted","Data":"6730bdc8ee1a6bb2d65d8b81fcd3b78d37bbfd922a316598ee6c5c4b67ea79f1"}
Apr 17 11:53:52.296142 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:52.296036 2567 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-must-gather-jjn79/perf-node-gather-daemonset-8v6xt"
Apr 17 11:53:52.310813 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:52.310774 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jjn79/perf-node-gather-daemonset-8v6xt" podStartSLOduration=2.310761317 podStartE2EDuration="2.310761317s" podCreationTimestamp="2026-04-17 11:53:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 11:53:52.309851538 +0000 UTC m=+1413.148089296" watchObservedRunningTime="2026-04-17 11:53:52.310761317 +0000 UTC m=+1413.148999073"
Apr 17 11:53:52.549313 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:52.549243 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-mcjdh_da2c16b4-2e18-4310-881d-5febd92c9d3d/serve-healthcheck-canary/0.log"
Apr 17 11:53:52.888867 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:52.888794 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-585dfdc468-qvvgv_7c7fd4b1-e618-4f37-8c84-dc31d902ec5d/insights-operator/1.log"
Apr 17 11:53:52.889374 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:52.889331 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-585dfdc468-qvvgv_7c7fd4b1-e618-4f37-8c84-dc31d902ec5d/insights-operator/0.log"
Apr 17 11:53:52.905556 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:52.905526 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-2h4l7_38861b43-eb3b-4987-b34b-454261f74172/kube-rbac-proxy/0.log"
Apr 17 11:53:52.921983 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:52.921960 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-2h4l7_38861b43-eb3b-4987-b34b-454261f74172/exporter/0.log"
Apr 17 11:53:52.938909 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:52.938884 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-2h4l7_38861b43-eb3b-4987-b34b-454261f74172/extractor/0.log"
Apr 17 11:53:54.654750 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:54.654718 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-jobset-operator_jobset-operator-747c5859c7-hpjnp_1dbf9889-fba9-4adc-8fa9-e80b6a93e014/jobset-operator/0.log"
Apr 17 11:53:57.483252 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:57.483198 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator_migrator-74bb7799d9-tmfsq_bf50dfd8-a7da-4074-b6eb-e64696657db9/migrator/0.log"
Apr 17 11:53:57.498765 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:57.498740 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator_migrator-74bb7799d9-tmfsq_bf50dfd8-a7da-4074-b6eb-e64696657db9/graceful-termination/0.log"
Apr 17 11:53:58.308097 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:58.308065 2567 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-jjn79/perf-node-gather-daemonset-8v6xt"
Apr 17 11:53:58.630001 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:58.629928 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-fpmx5_ea7f56ab-276e-4b70-8003-11db06a0b72b/kube-multus-additional-cni-plugins/0.log"
Apr 17 11:53:58.647021 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:58.646995 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-fpmx5_ea7f56ab-276e-4b70-8003-11db06a0b72b/egress-router-binary-copy/0.log"
Apr 17 11:53:58.664072 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:58.664051 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-fpmx5_ea7f56ab-276e-4b70-8003-11db06a0b72b/cni-plugins/0.log"
Apr 17 11:53:58.683659 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:58.683637 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-fpmx5_ea7f56ab-276e-4b70-8003-11db06a0b72b/bond-cni-plugin/0.log"
Apr 17 11:53:58.700191 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:58.700165 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-fpmx5_ea7f56ab-276e-4b70-8003-11db06a0b72b/routeoverride-cni/0.log"
Apr 17 11:53:58.717198 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:58.717177 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-fpmx5_ea7f56ab-276e-4b70-8003-11db06a0b72b/whereabouts-cni-bincopy/0.log"
Apr 17 11:53:58.734577 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:58.734557 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-fpmx5_ea7f56ab-276e-4b70-8003-11db06a0b72b/whereabouts-cni/0.log"
Apr 17 11:53:58.923641 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:58.923615 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2pjn_41014c60-54e4-48f5-83f8-487c7f64058e/kube-multus/0.log"
Apr 17 11:53:58.967711 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:58.967692 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-tnlq8_7b1f3e8e-0735-4b17-9e76-c70b964db9c1/network-metrics-daemon/0.log"
Apr 17 11:53:58.985831 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:53:58.985812 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-tnlq8_7b1f3e8e-0735-4b17-9e76-c70b964db9c1/kube-rbac-proxy/0.log"
Apr 17 11:54:00.130584 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:54:00.130495 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vjfgd_efceb2ce-9379-47c0-b8c1-22f8ad408e7c/ovn-controller/0.log"
Apr 17 11:54:00.144077 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:54:00.144030 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vjfgd_efceb2ce-9379-47c0-b8c1-22f8ad408e7c/ovn-acl-logging/0.log"
Apr 17 11:54:00.156857 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:54:00.156838 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vjfgd_efceb2ce-9379-47c0-b8c1-22f8ad408e7c/ovn-acl-logging/1.log"
Apr 17 11:54:00.178308 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:54:00.178281 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vjfgd_efceb2ce-9379-47c0-b8c1-22f8ad408e7c/kube-rbac-proxy-node/0.log"
Apr 17 11:54:00.201550 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:54:00.201515 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vjfgd_efceb2ce-9379-47c0-b8c1-22f8ad408e7c/kube-rbac-proxy-ovn-metrics/0.log"
Apr 17 11:54:00.216454 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:54:00.216429 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vjfgd_efceb2ce-9379-47c0-b8c1-22f8ad408e7c/northd/0.log"
Apr 17 11:54:00.234696 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:54:00.234656 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vjfgd_efceb2ce-9379-47c0-b8c1-22f8ad408e7c/nbdb/0.log"
Apr 17 11:54:00.255482 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:54:00.255462 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vjfgd_efceb2ce-9379-47c0-b8c1-22f8ad408e7c/sbdb/0.log"
Apr 17 11:54:00.423782 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:54:00.423760 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vjfgd_efceb2ce-9379-47c0-b8c1-22f8ad408e7c/ovnkube-controller/0.log"
Apr 17 11:54:01.608226 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:54:01.608195 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-target-8d6ts_a1d31d48-4078-464b-b36c-28075b5f885b/network-check-target-container/0.log"
Apr 17 11:54:02.434972 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:54:02.434940 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_iptables-alerter-v2svx_989ffc44-f4df-40d5-916a-161b05378f4e/iptables-alerter/0.log"
Apr 17 11:54:03.040928 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:54:03.040903 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_tuned-244rk_a32ab52e-e11f-46e9-9714-4ecfa0c87830/tuned/0.log"
Apr 17 11:54:04.616744 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:54:04.616712 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-6dc5bdb6b4-bck66_0f1c4813-7a6c-4e2d-930c-133f87515757/cluster-samples-operator/0.log"
Apr 17 11:54:04.630947 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:54:04.630914 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-6dc5bdb6b4-bck66_0f1c4813-7a6c-4e2d-930c-133f87515757/cluster-samples-operator-watch/0.log"
Apr 17 11:54:06.096989 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:54:06.096957 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-csi-drivers_aws-ebs-csi-driver-node-rlnwv_d6d2dca5-2ae8-41be-9865-73022a8c7601/csi-driver/0.log"
Apr 17 11:54:06.113368 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:54:06.113344 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-csi-drivers_aws-ebs-csi-driver-node-rlnwv_d6d2dca5-2ae8-41be-9865-73022a8c7601/csi-node-driver-registrar/0.log"
Apr 17 11:54:06.133559 ip-10-0-140-245 kubenswrapper[2567]: I0417 11:54:06.133517 2567 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-csi-drivers_aws-ebs-csi-driver-node-rlnwv_d6d2dca5-2ae8-41be-9865-73022a8c7601/csi-liveness-probe/0.log"