Apr 16 18:27:33.395963 ip-10-0-142-225 systemd[1]: kubelet.service: Failed to load environment files: No such file or directory
Apr 16 18:27:33.395975 ip-10-0-142-225 systemd[1]: kubelet.service: Failed to run 'start-pre' task: No such file or directory
Apr 16 18:27:33.395982 ip-10-0-142-225 systemd[1]: kubelet.service: Failed with result 'resources'.
Apr 16 18:27:33.396277 ip-10-0-142-225 systemd[1]: Failed to start Kubernetes Kubelet.
Apr 16 18:27:43.418229 ip-10-0-142-225 systemd[1]: kubelet.service: Failed to schedule restart job: Unit crio.service not found.
Apr 16 18:27:43.418254 ip-10-0-142-225 systemd[1]: kubelet.service: Failed with result 'resources'.
-- Boot f8b7476aea904b7d97b95d166dee3d9a --
Apr 16 18:29:57.006656 ip-10-0-142-225 systemd[1]: Starting Kubernetes Kubelet...
Apr 16 18:29:57.381565 ip-10-0-142-225 kubenswrapper[2569]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 18:29:57.381565 ip-10-0-142-225 kubenswrapper[2569]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Apr 16 18:29:57.381565 ip-10-0-142-225 kubenswrapper[2569]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 18:29:57.381565 ip-10-0-142-225 kubenswrapper[2569]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 16 18:29:57.381565 ip-10-0-142-225 kubenswrapper[2569]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 18:29:57.383624 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.383547    2569 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 16 18:29:57.388712 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388691    2569 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 16 18:29:57.388712 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388708    2569 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 16 18:29:57.388712 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388714    2569 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 16 18:29:57.388712 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388717    2569 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 16 18:29:57.388877 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388720    2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 16 18:29:57.388877 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388724    2569 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 16 18:29:57.388877 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388727    2569 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 16 18:29:57.388877 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388730    2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 16 18:29:57.388877 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388732    2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 16 18:29:57.388877 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388735    2569 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 16 18:29:57.388877 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388738    2569 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 16 18:29:57.388877 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388741    2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 16 18:29:57.388877 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388743    2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 16 18:29:57.388877 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388746    2569 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 16 18:29:57.388877 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388749    2569 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 16 18:29:57.388877 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388751    2569 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 16 18:29:57.388877 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388754    2569 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 16 18:29:57.388877 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388756    2569 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 16 18:29:57.388877 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388759    2569 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 16 18:29:57.388877 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388761    2569 feature_gate.go:328] unrecognized feature gate: Example2
Apr 16 18:29:57.388877 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388764    2569 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 16 18:29:57.388877 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388767    2569 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 16 18:29:57.388877 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388769    2569 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 16 18:29:57.388877 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388772    2569 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 16 18:29:57.389353 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388775    2569 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 16 18:29:57.389353 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388779    2569 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 16 18:29:57.389353 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388784    2569 feature_gate.go:328] unrecognized feature gate: Example
Apr 16 18:29:57.389353 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388786    2569 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 16 18:29:57.389353 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388790    2569 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 16 18:29:57.389353 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388792    2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 16 18:29:57.389353 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388795    2569 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 16 18:29:57.389353 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388798    2569 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 16 18:29:57.389353 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388801    2569 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 16 18:29:57.389353 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388803    2569 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 16 18:29:57.389353 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388806    2569 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 16 18:29:57.389353 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388808    2569 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 16 18:29:57.389353 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388811    2569 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 16 18:29:57.389353 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388815    2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 16 18:29:57.389353 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388818    2569 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 16 18:29:57.389353 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388821    2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 16 18:29:57.389353 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388824    2569 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 16 18:29:57.389353 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388826    2569 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 16 18:29:57.389353 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388829    2569 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 16 18:29:57.389353 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388832    2569 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 16 18:29:57.389855 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388835    2569 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 16 18:29:57.389855 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388837    2569 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 16 18:29:57.389855 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388841    2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 16 18:29:57.389855 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388843    2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 16 18:29:57.389855 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388846    2569 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 16 18:29:57.389855 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388848    2569 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 16 18:29:57.389855 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388851    2569 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 16 18:29:57.389855 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388854    2569 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 16 18:29:57.389855 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388857    2569 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 16 18:29:57.389855 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388859    2569 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 16 18:29:57.389855 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388862    2569 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 16 18:29:57.389855 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388865    2569 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 16 18:29:57.389855 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388867    2569 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 16 18:29:57.389855 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388870    2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 16 18:29:57.389855 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388873    2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 16 18:29:57.389855 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388875    2569 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 16 18:29:57.389855 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388878    2569 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 16 18:29:57.389855 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388882    2569 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 16 18:29:57.389855 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388887    2569 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 16 18:29:57.390307 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388890    2569 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 16 18:29:57.390307 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388893    2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 16 18:29:57.390307 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388896    2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 16 18:29:57.390307 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388899    2569 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 16 18:29:57.390307 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388902    2569 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 16 18:29:57.390307 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388905    2569 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 16 18:29:57.390307 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388909    2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 16 18:29:57.390307 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388912    2569 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 16 18:29:57.390307 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388915    2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 16 18:29:57.390307 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388918    2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 16 18:29:57.390307 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388921    2569 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 16 18:29:57.390307 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388924    2569 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 16 18:29:57.390307 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388927    2569 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 16 18:29:57.390307 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388930    2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 16 18:29:57.390307 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388933    2569 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 16 18:29:57.390307 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388937    2569 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 16 18:29:57.390307 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388940    2569 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 16 18:29:57.390307 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388943    2569 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 16 18:29:57.390307 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388945    2569 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 16 18:29:57.390307 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388948    2569 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 16 18:29:57.390806 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388951    2569 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 16 18:29:57.390806 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388953    2569 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 16 18:29:57.390806 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.388956    2569 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 16 18:29:57.390806 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389325    2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Apr 16 18:29:57.390806 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389330    2569 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Apr 16 18:29:57.390806 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389333    2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Apr 16 18:29:57.390806 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389336    2569 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Apr 16 18:29:57.390806 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389339    2569 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Apr 16 18:29:57.390806 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389341    2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Apr 16 18:29:57.390806 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389344    2569 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Apr 16 18:29:57.390806 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389347    2569 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Apr 16 18:29:57.390806 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389350    2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Apr 16 18:29:57.390806 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389353    2569 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Apr 16 18:29:57.390806 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389356    2569 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Apr 16 18:29:57.390806 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389359    2569 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Apr 16 18:29:57.390806 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389362    2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Apr 16 18:29:57.390806 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389365    2569 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Apr 16 18:29:57.390806 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389367    2569 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Apr 16 18:29:57.390806 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389371    2569 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Apr 16 18:29:57.390806 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389388    2569 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Apr 16 18:29:57.391323 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389391    2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Apr 16 18:29:57.391323 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389394    2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Apr 16 18:29:57.391323 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389396    2569 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Apr 16 18:29:57.391323 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389399    2569 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Apr 16 18:29:57.391323 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389401    2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Apr 16 18:29:57.391323 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389404    2569 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Apr 16 18:29:57.391323 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389407    2569 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Apr 16 18:29:57.391323 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389410    2569 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Apr 16 18:29:57.391323 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389413    2569 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Apr 16 18:29:57.391323 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389415    2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 16 18:29:57.391323 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389418    2569 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Apr 16 18:29:57.391323 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389421    2569 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Apr 16 18:29:57.391323 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389423    2569 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Apr 16 18:29:57.391323 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389426    2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Apr 16 18:29:57.391323 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389429    2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Apr 16 18:29:57.391323 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389432    2569 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Apr 16 18:29:57.391323 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389434    2569 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Apr 16 18:29:57.391323 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389437    2569 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Apr 16 18:29:57.391323 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389440    2569 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 16 18:29:57.391323 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389442    2569 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Apr 16 18:29:57.391831 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389446    2569 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 16 18:29:57.391831 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389450    2569 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Apr 16 18:29:57.391831 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389453    2569 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Apr 16 18:29:57.391831 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389456    2569 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Apr 16 18:29:57.391831 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389459    2569 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Apr 16 18:29:57.391831 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389461    2569 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 16 18:29:57.391831 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389464    2569 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Apr 16 18:29:57.391831 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389466    2569 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 16 18:29:57.391831 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389469    2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Apr 16 18:29:57.391831 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389472    2569 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Apr 16 18:29:57.391831 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389475    2569 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Apr 16 18:29:57.391831 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389478    2569 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Apr 16 18:29:57.391831 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389480    2569 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Apr 16 18:29:57.391831 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389483    2569 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Apr 16 18:29:57.391831 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389486    2569 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Apr 16 18:29:57.391831 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389488    2569 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 16 18:29:57.391831 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389491    2569 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Apr 16 18:29:57.391831 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389493    2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Apr 16 18:29:57.391831 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389496    2569 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Apr 16 18:29:57.392298 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389498    2569 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 16 18:29:57.392298 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389501    2569 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Apr 16 18:29:57.392298 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389504    2569 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Apr 16 18:29:57.392298 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389508    2569 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Apr 16 18:29:57.392298 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389511    2569 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 16 18:29:57.392298 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389514    2569 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Apr 16 18:29:57.392298 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389517    2569 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Apr 16 18:29:57.392298 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389521    2569 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Apr 16 18:29:57.392298 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389524    2569 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 16 18:29:57.392298 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389526    2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Apr 16 18:29:57.392298 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389529    2569 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Apr 16 18:29:57.392298 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389532    2569 feature_gate.go:328] unrecognized feature gate: DualReplica
Apr 16 18:29:57.392298 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389534    2569 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Apr 16 18:29:57.392298 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389537    2569 feature_gate.go:328] unrecognized feature gate: NewOLM
Apr 16 18:29:57.392298 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389539    2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Apr 16 18:29:57.392298 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389542    2569 feature_gate.go:328] unrecognized feature gate: SignatureStores
Apr 16 18:29:57.392298 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389545    2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Apr 16 18:29:57.392298 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389548    2569 feature_gate.go:328] unrecognized feature gate: OVNObservability
Apr 16 18:29:57.392298 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389551    2569 feature_gate.go:328] unrecognized feature gate: PinnedImages
Apr 16 18:29:57.392773 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389553    2569 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Apr 16 18:29:57.392773 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389556    2569 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Apr 16 18:29:57.392773 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389558    2569 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Apr 16 18:29:57.392773 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389561    2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Apr 16 18:29:57.392773 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389564    2569 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Apr 16 18:29:57.392773 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389567    2569 feature_gate.go:328] unrecognized feature gate: Example2
Apr 16 18:29:57.392773 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389570    2569 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Apr 16 18:29:57.392773 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389572    2569 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Apr 16 18:29:57.392773 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389575    2569 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Apr 16 18:29:57.392773 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389578    2569 feature_gate.go:328] unrecognized feature gate: Example
Apr 16 18:29:57.392773 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.389580    2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Apr 16 18:29:57.392773 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390615    2569 flags.go:64] FLAG: --address="0.0.0.0"
Apr 16 18:29:57.392773 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390626    2569 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Apr 16 18:29:57.392773 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390636    2569 flags.go:64] FLAG: --anonymous-auth="true"
Apr 16 18:29:57.392773 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390640    2569 flags.go:64] FLAG: --application-metrics-count-limit="100"
Apr 16 18:29:57.392773 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390648    2569 flags.go:64] FLAG: --authentication-token-webhook="false"
Apr 16 18:29:57.392773 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390651    2569 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Apr 16 18:29:57.392773 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390656    2569 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Apr 16 18:29:57.392773 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390660    2569 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Apr 16 18:29:57.392773 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390664    2569 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Apr 16 18:29:57.392773 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390667    2569 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Apr 16 18:29:57.393274 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390670    2569 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Apr 16 18:29:57.393274 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390674    2569 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Apr 16 18:29:57.393274 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390677    2569 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Apr 16 18:29:57.393274 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390680    2569 flags.go:64] FLAG: --cgroup-root=""
Apr 16 18:29:57.393274 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390683    2569 flags.go:64] FLAG: --cgroups-per-qos="true"
Apr 16 18:29:57.393274 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390686    2569 flags.go:64] FLAG: --client-ca-file=""
Apr 16 18:29:57.393274 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390691    2569 flags.go:64] FLAG: --cloud-config=""
Apr 16 18:29:57.393274 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390694    2569 flags.go:64] FLAG: --cloud-provider="external"
Apr 16 18:29:57.393274 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390697    2569 flags.go:64] FLAG: --cluster-dns="[]"
Apr 16 18:29:57.393274 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390701    2569 flags.go:64] FLAG: --cluster-domain=""
Apr 16 18:29:57.393274 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390704    2569 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Apr 16 18:29:57.393274 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390707    2569 flags.go:64] FLAG: --config-dir=""
Apr 16 18:29:57.393274 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390710    2569 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Apr 16 18:29:57.393274 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390713    2569 flags.go:64] FLAG: --container-log-max-files="5"
Apr 16 18:29:57.393274 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390718    2569 flags.go:64] FLAG: --container-log-max-size="10Mi"
Apr 16 18:29:57.393274 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390721    2569 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Apr 16 18:29:57.393274 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390724    2569 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Apr 16 18:29:57.393274 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390727    2569 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Apr 16 18:29:57.393274 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390731    2569 flags.go:64] FLAG: --contention-profiling="false"
Apr 16 18:29:57.393274 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390733    2569 flags.go:64] FLAG: --cpu-cfs-quota="true"
Apr 16 18:29:57.393274 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390736    2569 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Apr 16 18:29:57.393274 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390739    2569 flags.go:64] FLAG: --cpu-manager-policy="none"
Apr 16 18:29:57.393274 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390742    2569 flags.go:64] FLAG: --cpu-manager-policy-options=""
Apr 16 18:29:57.393274 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390747    2569 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Apr 16 18:29:57.393274 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390750    2569 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Apr 16 18:29:57.393882 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390752    2569 flags.go:64] FLAG: --enable-debugging-handlers="true"
Apr 16 18:29:57.393882 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390755    2569 flags.go:64] FLAG: --enable-load-reader="false"
Apr 16 18:29:57.393882 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390759    2569 flags.go:64] FLAG: --enable-server="true"
Apr 16 18:29:57.393882 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390762    2569 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Apr 16 18:29:57.393882 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390767    2569 flags.go:64] FLAG: --event-burst="100"
Apr 16 18:29:57.393882 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390770    2569 flags.go:64] FLAG: --event-qps="50"
Apr 16 18:29:57.393882 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390773    2569 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Apr 16 18:29:57.393882 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390776    2569 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Apr 16 18:29:57.393882 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390779    2569 flags.go:64] FLAG: --eviction-hard=""
Apr 16 18:29:57.393882 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390782    2569 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Apr 16 18:29:57.393882 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390786    2569 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Apr 16 18:29:57.393882 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390789    2569 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Apr 16 18:29:57.393882 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390792    2569 flags.go:64] FLAG: --eviction-soft=""
Apr 16 18:29:57.393882 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390796    2569 flags.go:64] FLAG: --eviction-soft-grace-period=""
Apr 16 18:29:57.393882 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390799    2569 flags.go:64] FLAG: --exit-on-lock-contention="false"
Apr 16 18:29:57.393882 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390802    2569 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Apr 16 18:29:57.393882 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390805    2569 flags.go:64] FLAG: --experimental-mounter-path=""
Apr 16 18:29:57.393882 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390808    2569 flags.go:64] FLAG: --fail-cgroupv1="false"
Apr 16 18:29:57.393882 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390811    2569 flags.go:64] FLAG: --fail-swap-on="true"
Apr 16 18:29:57.393882 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390814    2569 flags.go:64] FLAG: --feature-gates=""
Apr 16 18:29:57.393882 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390818    2569 flags.go:64] FLAG: --file-check-frequency="20s"
Apr 16 18:29:57.393882 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390822    2569 flags.go:64] FLAG:
--global-housekeeping-interval="1m0s" Apr 16 18:29:57.393882 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390825 2569 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Apr 16 18:29:57.393882 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390828 2569 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Apr 16 18:29:57.393882 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390831 2569 flags.go:64] FLAG: --healthz-port="10248" Apr 16 18:29:57.393882 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390834 2569 flags.go:64] FLAG: --help="false" Apr 16 18:29:57.394507 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390837 2569 flags.go:64] FLAG: --hostname-override="ip-10-0-142-225.ec2.internal" Apr 16 18:29:57.394507 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390840 2569 flags.go:64] FLAG: --housekeeping-interval="10s" Apr 16 18:29:57.394507 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390843 2569 flags.go:64] FLAG: --http-check-frequency="20s" Apr 16 18:29:57.394507 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390847 2569 flags.go:64] FLAG: --image-credential-provider-bin-dir="/usr/libexec/kubelet-image-credential-provider-plugins" Apr 16 18:29:57.394507 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390850 2569 flags.go:64] FLAG: --image-credential-provider-config="/etc/kubernetes/credential-providers/ecr-credential-provider.yaml" Apr 16 18:29:57.394507 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390853 2569 flags.go:64] FLAG: --image-gc-high-threshold="85" Apr 16 18:29:57.394507 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390856 2569 flags.go:64] FLAG: --image-gc-low-threshold="80" Apr 16 18:29:57.394507 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390859 2569 flags.go:64] FLAG: --image-service-endpoint="" Apr 16 18:29:57.394507 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390862 2569 flags.go:64] FLAG: --kernel-memcg-notification="false" Apr 16 18:29:57.394507 ip-10-0-142-225 
kubenswrapper[2569]: I0416 18:29:57.390865 2569 flags.go:64] FLAG: --kube-api-burst="100" Apr 16 18:29:57.394507 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390868 2569 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Apr 16 18:29:57.394507 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390871 2569 flags.go:64] FLAG: --kube-api-qps="50" Apr 16 18:29:57.394507 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390874 2569 flags.go:64] FLAG: --kube-reserved="" Apr 16 18:29:57.394507 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390877 2569 flags.go:64] FLAG: --kube-reserved-cgroup="" Apr 16 18:29:57.394507 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390879 2569 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Apr 16 18:29:57.394507 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390882 2569 flags.go:64] FLAG: --kubelet-cgroups="" Apr 16 18:29:57.394507 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390885 2569 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Apr 16 18:29:57.394507 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390888 2569 flags.go:64] FLAG: --lock-file="" Apr 16 18:29:57.394507 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390891 2569 flags.go:64] FLAG: --log-cadvisor-usage="false" Apr 16 18:29:57.394507 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390894 2569 flags.go:64] FLAG: --log-flush-frequency="5s" Apr 16 18:29:57.394507 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390897 2569 flags.go:64] FLAG: --log-json-info-buffer-size="0" Apr 16 18:29:57.394507 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390902 2569 flags.go:64] FLAG: --log-json-split-stream="false" Apr 16 18:29:57.394507 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390905 2569 flags.go:64] FLAG: --log-text-info-buffer-size="0" Apr 16 18:29:57.395057 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390908 2569 flags.go:64] FLAG: 
--log-text-split-stream="false" Apr 16 18:29:57.395057 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390911 2569 flags.go:64] FLAG: --logging-format="text" Apr 16 18:29:57.395057 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390914 2569 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Apr 16 18:29:57.395057 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390917 2569 flags.go:64] FLAG: --make-iptables-util-chains="true" Apr 16 18:29:57.395057 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390921 2569 flags.go:64] FLAG: --manifest-url="" Apr 16 18:29:57.395057 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390924 2569 flags.go:64] FLAG: --manifest-url-header="" Apr 16 18:29:57.395057 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390928 2569 flags.go:64] FLAG: --max-housekeeping-interval="15s" Apr 16 18:29:57.395057 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390931 2569 flags.go:64] FLAG: --max-open-files="1000000" Apr 16 18:29:57.395057 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390936 2569 flags.go:64] FLAG: --max-pods="110" Apr 16 18:29:57.395057 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390938 2569 flags.go:64] FLAG: --maximum-dead-containers="-1" Apr 16 18:29:57.395057 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390941 2569 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Apr 16 18:29:57.395057 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390944 2569 flags.go:64] FLAG: --memory-manager-policy="None" Apr 16 18:29:57.395057 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390947 2569 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Apr 16 18:29:57.395057 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390950 2569 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Apr 16 18:29:57.395057 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390953 2569 flags.go:64] FLAG: --node-ip="0.0.0.0" Apr 16 18:29:57.395057 ip-10-0-142-225 
kubenswrapper[2569]: I0416 18:29:57.390956 2569 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhel" Apr 16 18:29:57.395057 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390963 2569 flags.go:64] FLAG: --node-status-max-images="50" Apr 16 18:29:57.395057 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390967 2569 flags.go:64] FLAG: --node-status-update-frequency="10s" Apr 16 18:29:57.395057 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390969 2569 flags.go:64] FLAG: --oom-score-adj="-999" Apr 16 18:29:57.395057 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390973 2569 flags.go:64] FLAG: --pod-cidr="" Apr 16 18:29:57.395057 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390976 2569 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc76bab72f320de3d4105c90d73c4fb139c09e20ce0fa8dcbc0cb59920d27dec" Apr 16 18:29:57.395057 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390982 2569 flags.go:64] FLAG: --pod-manifest-path="" Apr 16 18:29:57.395057 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390986 2569 flags.go:64] FLAG: --pod-max-pids="-1" Apr 16 18:29:57.395057 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390989 2569 flags.go:64] FLAG: --pods-per-core="0" Apr 16 18:29:57.395672 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390992 2569 flags.go:64] FLAG: --port="10250" Apr 16 18:29:57.395672 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390995 2569 flags.go:64] FLAG: --protect-kernel-defaults="false" Apr 16 18:29:57.395672 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.390998 2569 flags.go:64] FLAG: --provider-id="aws:///us-east-1a/i-060830c3d02eac2a0" Apr 16 18:29:57.395672 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391001 2569 flags.go:64] FLAG: --qos-reserved="" Apr 16 18:29:57.395672 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391004 2569 flags.go:64] FLAG: --read-only-port="10255" Apr 16 
18:29:57.395672 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391007 2569 flags.go:64] FLAG: --register-node="true" Apr 16 18:29:57.395672 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391010 2569 flags.go:64] FLAG: --register-schedulable="true" Apr 16 18:29:57.395672 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391013 2569 flags.go:64] FLAG: --register-with-taints="" Apr 16 18:29:57.395672 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391016 2569 flags.go:64] FLAG: --registry-burst="10" Apr 16 18:29:57.395672 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391019 2569 flags.go:64] FLAG: --registry-qps="5" Apr 16 18:29:57.395672 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391022 2569 flags.go:64] FLAG: --reserved-cpus="" Apr 16 18:29:57.395672 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391025 2569 flags.go:64] FLAG: --reserved-memory="" Apr 16 18:29:57.395672 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391029 2569 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Apr 16 18:29:57.395672 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391032 2569 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Apr 16 18:29:57.395672 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391035 2569 flags.go:64] FLAG: --rotate-certificates="false" Apr 16 18:29:57.395672 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391038 2569 flags.go:64] FLAG: --rotate-server-certificates="false" Apr 16 18:29:57.395672 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391041 2569 flags.go:64] FLAG: --runonce="false" Apr 16 18:29:57.395672 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391044 2569 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Apr 16 18:29:57.395672 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391046 2569 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Apr 16 18:29:57.395672 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391049 2569 flags.go:64] FLAG: --seccomp-default="false" 
Apr 16 18:29:57.395672 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391052 2569 flags.go:64] FLAG: --serialize-image-pulls="true" Apr 16 18:29:57.395672 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391055 2569 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Apr 16 18:29:57.395672 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391058 2569 flags.go:64] FLAG: --storage-driver-db="cadvisor" Apr 16 18:29:57.395672 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391061 2569 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Apr 16 18:29:57.395672 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391064 2569 flags.go:64] FLAG: --storage-driver-password="root" Apr 16 18:29:57.395672 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391067 2569 flags.go:64] FLAG: --storage-driver-secure="false" Apr 16 18:29:57.396298 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391070 2569 flags.go:64] FLAG: --storage-driver-table="stats" Apr 16 18:29:57.396298 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391073 2569 flags.go:64] FLAG: --storage-driver-user="root" Apr 16 18:29:57.396298 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391077 2569 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Apr 16 18:29:57.396298 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391079 2569 flags.go:64] FLAG: --sync-frequency="1m0s" Apr 16 18:29:57.396298 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391083 2569 flags.go:64] FLAG: --system-cgroups="" Apr 16 18:29:57.396298 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391085 2569 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Apr 16 18:29:57.396298 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391091 2569 flags.go:64] FLAG: --system-reserved-cgroup="" Apr 16 18:29:57.396298 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391093 2569 flags.go:64] FLAG: --tls-cert-file="" Apr 16 18:29:57.396298 ip-10-0-142-225 
kubenswrapper[2569]: I0416 18:29:57.391096 2569 flags.go:64] FLAG: --tls-cipher-suites="[]" Apr 16 18:29:57.396298 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391100 2569 flags.go:64] FLAG: --tls-min-version="" Apr 16 18:29:57.396298 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391103 2569 flags.go:64] FLAG: --tls-private-key-file="" Apr 16 18:29:57.396298 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391105 2569 flags.go:64] FLAG: --topology-manager-policy="none" Apr 16 18:29:57.396298 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391108 2569 flags.go:64] FLAG: --topology-manager-policy-options="" Apr 16 18:29:57.396298 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391111 2569 flags.go:64] FLAG: --topology-manager-scope="container" Apr 16 18:29:57.396298 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391114 2569 flags.go:64] FLAG: --v="2" Apr 16 18:29:57.396298 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391118 2569 flags.go:64] FLAG: --version="false" Apr 16 18:29:57.396298 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391122 2569 flags.go:64] FLAG: --vmodule="" Apr 16 18:29:57.396298 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391130 2569 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Apr 16 18:29:57.396298 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391135 2569 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Apr 16 18:29:57.396298 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391238 2569 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 16 18:29:57.396298 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391242 2569 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 16 18:29:57.396298 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391245 2569 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 16 18:29:57.396298 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391249 2569 
feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 16 18:29:57.396298 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391251 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 16 18:29:57.396898 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391254 2569 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 16 18:29:57.396898 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391257 2569 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 16 18:29:57.396898 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391259 2569 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 16 18:29:57.396898 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391263 2569 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Apr 16 18:29:57.396898 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391267 2569 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 16 18:29:57.396898 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391269 2569 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 16 18:29:57.396898 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391272 2569 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 16 18:29:57.396898 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391275 2569 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 16 18:29:57.396898 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391278 2569 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 16 18:29:57.396898 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391280 2569 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 16 18:29:57.396898 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391283 2569 feature_gate.go:328] unrecognized feature gate: 
ManagedBootImagesAWS Apr 16 18:29:57.396898 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391286 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 16 18:29:57.396898 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391288 2569 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 16 18:29:57.396898 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391291 2569 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 16 18:29:57.396898 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391294 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 16 18:29:57.396898 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391296 2569 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 16 18:29:57.396898 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391299 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 16 18:29:57.396898 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391301 2569 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 16 18:29:57.396898 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391304 2569 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 16 18:29:57.397400 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391307 2569 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 16 18:29:57.397400 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391310 2569 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 16 18:29:57.397400 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391312 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 16 18:29:57.397400 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391314 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 16 18:29:57.397400 ip-10-0-142-225 kubenswrapper[2569]: W0416 
18:29:57.391317 2569 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 16 18:29:57.397400 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391319 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 16 18:29:57.397400 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391322 2569 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 16 18:29:57.397400 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391326 2569 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 16 18:29:57.397400 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391329 2569 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 16 18:29:57.397400 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391332 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 16 18:29:57.397400 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391334 2569 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 16 18:29:57.397400 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391337 2569 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 16 18:29:57.397400 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391340 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 16 18:29:57.397400 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391342 2569 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 16 18:29:57.397400 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391345 2569 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 16 18:29:57.397400 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391347 2569 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 16 18:29:57.397400 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391350 2569 feature_gate.go:328] unrecognized feature gate: Example2 Apr 16 
18:29:57.397400 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391353 2569 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 16 18:29:57.397400 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391355 2569 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 16 18:29:57.397400 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391358 2569 feature_gate.go:328] unrecognized feature gate: Example Apr 16 18:29:57.397899 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391360 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 16 18:29:57.397899 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391363 2569 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 16 18:29:57.397899 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391365 2569 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 16 18:29:57.397899 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391368 2569 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 16 18:29:57.397899 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391370 2569 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 16 18:29:57.397899 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391387 2569 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 16 18:29:57.397899 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391390 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 16 18:29:57.397899 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391393 2569 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 16 18:29:57.397899 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391396 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 16 18:29:57.397899 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391398 2569 feature_gate.go:328] unrecognized feature 
gate: VSphereMultiNetworks Apr 16 18:29:57.397899 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391401 2569 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 16 18:29:57.397899 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391404 2569 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 16 18:29:57.397899 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391406 2569 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 16 18:29:57.397899 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391408 2569 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 16 18:29:57.397899 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391411 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 16 18:29:57.397899 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391414 2569 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 16 18:29:57.397899 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391416 2569 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 16 18:29:57.397899 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391419 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 16 18:29:57.397899 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391422 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 16 18:29:57.397899 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391426 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 16 18:29:57.398484 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391428 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 16 18:29:57.398484 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391431 2569 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 16 18:29:57.398484 
ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391434 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 16 18:29:57.398484 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391437 2569 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 16 18:29:57.398484 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391439 2569 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 16 18:29:57.398484 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391442 2569 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 16 18:29:57.398484 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391445 2569 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 16 18:29:57.398484 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391447 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 16 18:29:57.398484 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391450 2569 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 16 18:29:57.398484 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391452 2569 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 16 18:29:57.398484 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391455 2569 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 16 18:29:57.398484 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391458 2569 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 16 18:29:57.398484 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391460 2569 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 16 18:29:57.398484 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391463 2569 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 16 18:29:57.398484 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391465 2569 feature_gate.go:328] unrecognized 
feature gate: DualReplica Apr 16 18:29:57.398484 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391468 2569 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 16 18:29:57.398484 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391471 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 16 18:29:57.398484 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391474 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 16 18:29:57.398484 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391478 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 16 18:29:57.398484 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391480 2569 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 16 18:29:57.398972 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391484 2569 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Apr 16 18:29:57.398972 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.391488 2569 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 16 18:29:57.398972 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.391975 2569 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Apr 16 18:29:57.398972 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.397866 2569 server.go:530] "Kubelet version" kubeletVersion="v1.33.9" Apr 16 18:29:57.398972 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.397880 2569 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 18:29:57.398972 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.397935 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 16 18:29:57.398972 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.397941 2569 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 16 18:29:57.398972 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.397944 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 16 18:29:57.398972 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.397947 2569 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 16 18:29:57.398972 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.397949 2569 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 16 18:29:57.398972 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.397952 2569 feature_gate.go:328] unrecognized feature gate: 
DualReplica Apr 16 18:29:57.398972 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.397955 2569 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 16 18:29:57.398972 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.397958 2569 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 16 18:29:57.398972 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.397960 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 16 18:29:57.398972 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.397963 2569 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 16 18:29:57.399395 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.397965 2569 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 16 18:29:57.399395 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.397968 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 16 18:29:57.399395 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.397971 2569 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 16 18:29:57.399395 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.397973 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 16 18:29:57.399395 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.397976 2569 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 16 18:29:57.399395 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.397979 2569 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 16 18:29:57.399395 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.397981 2569 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 16 18:29:57.399395 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.397984 2569 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 16 18:29:57.399395 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.397990 2569 
feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 16 18:29:57.399395 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.397993 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 16 18:29:57.399395 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.397995 2569 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 16 18:29:57.399395 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.397998 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 16 18:29:57.399395 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398001 2569 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 16 18:29:57.399395 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398003 2569 feature_gate.go:328] unrecognized feature gate: Example2 Apr 16 18:29:57.399395 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398006 2569 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 16 18:29:57.399395 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398008 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 16 18:29:57.399395 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398011 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 16 18:29:57.399395 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398014 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 16 18:29:57.399395 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398016 2569 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 16 18:29:57.399859 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398019 2569 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 16 18:29:57.399859 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398021 2569 feature_gate.go:328] 
unrecognized feature gate: AzureDedicatedHosts Apr 16 18:29:57.399859 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398031 2569 feature_gate.go:328] unrecognized feature gate: Example Apr 16 18:29:57.399859 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398034 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 16 18:29:57.399859 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398037 2569 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 16 18:29:57.399859 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398039 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 16 18:29:57.399859 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398042 2569 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 16 18:29:57.399859 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398044 2569 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 16 18:29:57.399859 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398047 2569 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 16 18:29:57.399859 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398050 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 16 18:29:57.399859 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398052 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 16 18:29:57.399859 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398054 2569 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 16 18:29:57.399859 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398057 2569 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 16 18:29:57.399859 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398059 2569 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 16 18:29:57.399859 ip-10-0-142-225 kubenswrapper[2569]: 
W0416 18:29:57.398062 2569 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 16 18:29:57.399859 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398064 2569 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Apr 16 18:29:57.399859 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398067 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 16 18:29:57.399859 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398069 2569 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 16 18:29:57.399859 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398072 2569 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 16 18:29:57.399859 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398074 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 16 18:29:57.400344 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398077 2569 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 16 18:29:57.400344 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398082 2569 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Apr 16 18:29:57.400344 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398086 2569 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 16 18:29:57.400344 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398089 2569 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 16 18:29:57.400344 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398092 2569 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 16 18:29:57.400344 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398095 2569 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 16 18:29:57.400344 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398098 2569 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 16 18:29:57.400344 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398100 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 16 18:29:57.400344 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398104 2569 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Apr 16 18:29:57.400344 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398108 2569 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 16 18:29:57.400344 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398111 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 16 18:29:57.400344 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398113 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 16 18:29:57.400344 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398116 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 16 18:29:57.400344 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398118 2569 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 16 18:29:57.400344 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398121 2569 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 16 18:29:57.400344 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398130 2569 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 16 18:29:57.400344 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398133 2569 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 16 18:29:57.400344 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398136 2569 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 16 18:29:57.400344 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398139 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 16 18:29:57.400893 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398142 2569 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 16 18:29:57.400893 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398144 2569 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 16 18:29:57.400893 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398146 2569 
feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 16 18:29:57.400893 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398149 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 16 18:29:57.400893 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398151 2569 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 16 18:29:57.400893 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398154 2569 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 16 18:29:57.400893 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398157 2569 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 16 18:29:57.400893 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398159 2569 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 16 18:29:57.400893 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398162 2569 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 16 18:29:57.400893 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398164 2569 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 16 18:29:57.400893 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398167 2569 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 16 18:29:57.400893 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398170 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 16 18:29:57.400893 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398172 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 16 18:29:57.400893 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398174 2569 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 16 18:29:57.400893 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398177 2569 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 16 
18:29:57.400893 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398180 2569 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 16 18:29:57.400893 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398182 2569 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 16 18:29:57.400893 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398185 2569 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 16 18:29:57.401335 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.398190 2569 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Apr 16 18:29:57.401335 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398303 2569 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Apr 16 18:29:57.401335 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398308 2569 feature_gate.go:328] unrecognized feature gate: PinnedImages Apr 16 18:29:57.401335 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398311 2569 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Apr 16 18:29:57.401335 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398314 2569 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Apr 16 18:29:57.401335 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398316 2569 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 16 18:29:57.401335 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398319 2569 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Apr 
16 18:29:57.401335 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398321 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 16 18:29:57.401335 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398324 2569 feature_gate.go:328] unrecognized feature gate: NewOLM Apr 16 18:29:57.401335 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398327 2569 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Apr 16 18:29:57.401335 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398329 2569 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 16 18:29:57.401335 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398335 2569 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Apr 16 18:29:57.401335 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398338 2569 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Apr 16 18:29:57.401335 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398340 2569 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Apr 16 18:29:57.401335 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398343 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Apr 16 18:29:57.401719 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398346 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Apr 16 18:29:57.401719 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398348 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Apr 16 18:29:57.401719 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398351 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Apr 16 18:29:57.401719 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398353 2569 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Apr 16 18:29:57.401719 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398356 2569 
feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Apr 16 18:29:57.401719 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398359 2569 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Apr 16 18:29:57.401719 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398361 2569 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Apr 16 18:29:57.401719 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398364 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Apr 16 18:29:57.401719 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398366 2569 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Apr 16 18:29:57.401719 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398369 2569 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Apr 16 18:29:57.401719 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398385 2569 feature_gate.go:328] unrecognized feature gate: GatewayAPI Apr 16 18:29:57.401719 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398388 2569 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Apr 16 18:29:57.401719 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398391 2569 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Apr 16 18:29:57.401719 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398393 2569 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Apr 16 18:29:57.401719 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398396 2569 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Apr 16 18:29:57.401719 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398399 2569 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Apr 16 18:29:57.401719 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398401 2569 feature_gate.go:328] unrecognized feature gate: DualReplica Apr 16 18:29:57.401719 ip-10-0-142-225 kubenswrapper[2569]: 
W0416 18:29:57.398404 2569 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Apr 16 18:29:57.401719 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398406 2569 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Apr 16 18:29:57.401719 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398409 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Apr 16 18:29:57.402220 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398412 2569 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Apr 16 18:29:57.402220 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398414 2569 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Apr 16 18:29:57.402220 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398417 2569 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Apr 16 18:29:57.402220 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398419 2569 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Apr 16 18:29:57.402220 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398422 2569 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Apr 16 18:29:57.402220 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398424 2569 feature_gate.go:328] unrecognized feature gate: Example2 Apr 16 18:29:57.402220 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398427 2569 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Apr 16 18:29:57.402220 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398430 2569 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Apr 16 18:29:57.402220 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398433 2569 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Apr 16 18:29:57.402220 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398436 2569 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Apr 16 
18:29:57.402220 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398438 2569 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Apr 16 18:29:57.402220 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398442 2569 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Apr 16 18:29:57.402220 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398446 2569 feature_gate.go:328] unrecognized feature gate: SignatureStores Apr 16 18:29:57.402220 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398449 2569 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Apr 16 18:29:57.402220 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398452 2569 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Apr 16 18:29:57.402220 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398454 2569 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Apr 16 18:29:57.402220 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398457 2569 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Apr 16 18:29:57.402220 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398460 2569 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Apr 16 18:29:57.402220 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398462 2569 feature_gate.go:328] unrecognized feature gate: OVNObservability Apr 16 18:29:57.402693 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398465 2569 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Apr 16 18:29:57.402693 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398468 2569 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Apr 16 18:29:57.402693 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398470 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Apr 16 18:29:57.402693 ip-10-0-142-225 
kubenswrapper[2569]: W0416 18:29:57.398473 2569 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Apr 16 18:29:57.402693 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398476 2569 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Apr 16 18:29:57.402693 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398478 2569 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Apr 16 18:29:57.402693 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398481 2569 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Apr 16 18:29:57.402693 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398483 2569 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Apr 16 18:29:57.402693 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398486 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Apr 16 18:29:57.402693 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398489 2569 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Apr 16 18:29:57.402693 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398491 2569 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Apr 16 18:29:57.402693 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398494 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Apr 16 18:29:57.402693 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398497 2569 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Apr 16 18:29:57.402693 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398499 2569 feature_gate.go:328] unrecognized feature gate: Example Apr 16 18:29:57.402693 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398502 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Apr 16 18:29:57.402693 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398505 2569 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup 
Apr 16 18:29:57.402693 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398507 2569 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Apr 16 18:29:57.402693 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398510 2569 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Apr 16 18:29:57.402693 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398513 2569 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Apr 16 18:29:57.402693 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398517 2569 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Apr 16 18:29:57.403169 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398520 2569 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 16 18:29:57.403169 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398522 2569 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Apr 16 18:29:57.403169 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398525 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Apr 16 18:29:57.403169 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398529 2569 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 16 18:29:57.403169 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398531 2569 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Apr 16 18:29:57.403169 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398534 2569 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Apr 16 18:29:57.403169 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398536 2569 feature_gate.go:328] unrecognized feature gate: InsightsConfig Apr 16 18:29:57.403169 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398539 2569 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Apr 16 18:29:57.403169 
ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398541 2569 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 16 18:29:57.403169 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398544 2569 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Apr 16 18:29:57.403169 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398546 2569 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Apr 16 18:29:57.403169 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398549 2569 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Apr 16 18:29:57.403169 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:57.398551 2569 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Apr 16 18:29:57.403169 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.398556 2569 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Apr 16 18:29:57.403169 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.399090 2569 server.go:962] "Client rotation is on, will bootstrap in background" Apr 16 18:29:57.403546 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.401964 2569 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Apr 16 18:29:57.403546 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.402780 2569 server.go:1019] "Starting client certificate rotation" Apr 16 18:29:57.403546 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.402877 2569 certificate_manager.go:422] "Certificate 
rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Apr 16 18:29:57.403546 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.402917 2569 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Apr 16 18:29:57.422979 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.422961 2569 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Apr 16 18:29:57.425229 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.425203 2569 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Apr 16 18:29:57.441696 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.441677 2569 log.go:25] "Validated CRI v1 runtime API" Apr 16 18:29:57.446597 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.446582 2569 log.go:25] "Validated CRI v1 image API" Apr 16 18:29:57.447643 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.447629 2569 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 16 18:29:57.449626 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.449603 2569 fs.go:135] Filesystem UUIDs: map[52a0b8bc-09fe-40f3-b6af-20666a6494fa:/dev/nvme0n1p4 53bc6f70-60df-48d2-9044-3a8e99147bfd:/dev/nvme0n1p3 7B77-95E7:/dev/nvme0n1p2] Apr 16 18:29:57.449690 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.449625 2569 fs.go:136] Filesystem partitions: map[/dev/nvme0n1p3:{mountpoint:/boot major:259 minor:3 fsType:ext4 blockSize:0} /dev/nvme0n1p4:{mountpoint:/var major:259 minor:4 fsType:xfs blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Apr 16 18:29:57.453349 ip-10-0-142-225 kubenswrapper[2569]: I0416 
18:29:57.453330 2569 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Apr 16 18:29:57.455006 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.454893 2569 manager.go:217] Machine: {Timestamp:2026-04-16 18:29:57.4538913 +0000 UTC m=+0.346208661 CPUVendorID:GenuineIntel NumCores:8 NumPhysicalCores:4 NumSockets:1 CpuFrequency:3111902 MemoryCapacity:33164488704 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ec21c828f600e29e73596941da8baf96 SystemUUID:ec21c828-f600-e29e-7359-6941da8baf96 BootID:f8b7476a-ea90-4b7d-97b9-5d166dee3d9a Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16582242304 Type:vfs Inodes:4048399 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6632898560 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/nvme0n1p4 DeviceMajor:259 DeviceMinor:4 Capacity:128243970048 Type:vfs Inodes:62651840 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6098944 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16582246400 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/nvme0n1p3 DeviceMajor:259 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[259:0:{Name:nvme0n1 Major:259 Minor:0 Size:128849018880 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:02:84:e1:38:26:bd Speed:0 Mtu:9001} {Name:ens5 MacAddress:02:84:e1:38:26:bd Speed:0 Mtu:9001} {Name:ovs-system MacAddress:96:0e:97:10:75:61 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33164488704 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0 4] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 
Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:1 Threads:[1 5] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:2 Threads:[2 6] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:} {Id:3 Threads:[3 7] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:1048576 Type:Unified Level:2}] UncoreCaches:[] SocketID:0 BookID: DrawerID:}] Caches:[{Id:0 Size:37486592 Type:Unified Level:3}] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Apr 16 18:29:57.455006 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.454998 2569 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Apr 16 18:29:57.455150 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.455067 2569 manager.go:233] Version: {KernelVersion:5.14.0-570.104.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20260401-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Apr 16 18:29:57.456593 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.456570 2569 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 18:29:57.456754 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.456596 2569 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-10-0-142-225.ec2.internal","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 16 18:29:57.456841 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.456767 2569 topology_manager.go:138] "Creating topology manager with none policy" Apr 16 18:29:57.456841 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.456781 2569 container_manager_linux.go:306] "Creating device plugin manager" Apr 16 18:29:57.456841 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.456799 
2569 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Apr 16 18:29:57.457518 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.457506 2569 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Apr 16 18:29:57.458813 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.458800 2569 state_mem.go:36] "Initialized new in-memory state store" Apr 16 18:29:57.458941 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.458930 2569 server.go:1267] "Using root directory" path="/var/lib/kubelet" Apr 16 18:29:57.460944 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.460933 2569 kubelet.go:491] "Attempting to sync node with API server" Apr 16 18:29:57.461011 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.460949 2569 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 18:29:57.461011 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.460966 2569 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Apr 16 18:29:57.461011 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.460978 2569 kubelet.go:397] "Adding apiserver pod source" Apr 16 18:29:57.461011 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.460989 2569 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 18:29:57.461913 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.461900 2569 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Apr 16 18:29:57.461976 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.461923 2569 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Apr 16 18:29:57.464682 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.464666 2569 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.10-2.rhaos4.20.gita4d0894.el9" apiVersion="v1" Apr 16 18:29:57.466061 ip-10-0-142-225 
kubenswrapper[2569]: I0416 18:29:57.466048 2569 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 18:29:57.467249 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.467237 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Apr 16 18:29:57.467301 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.467255 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Apr 16 18:29:57.467301 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.467261 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Apr 16 18:29:57.467301 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.467269 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Apr 16 18:29:57.467301 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.467274 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Apr 16 18:29:57.467301 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.467280 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Apr 16 18:29:57.467301 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.467286 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Apr 16 18:29:57.467301 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.467294 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Apr 16 18:29:57.467301 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.467305 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc" Apr 16 18:29:57.467537 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.467313 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Apr 16 18:29:57.467537 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.467328 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Apr 16 
18:29:57.467537 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.467337 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Apr 16 18:29:57.467537 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.467366 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Apr 16 18:29:57.467537 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.467387 2569 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Apr 16 18:29:57.470788 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.470775 2569 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 16 18:29:57.470863 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.470805 2569 server.go:1295] "Started kubelet" Apr 16 18:29:57.470935 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.470886 2569 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 18:29:57.470990 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.470950 2569 server_v1.go:47] "podresources" method="list" useActivePods=true Apr 16 18:29:57.472249 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.471084 2569 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 18:29:57.472441 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.472425 2569 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 18:29:57.472777 ip-10-0-142-225 systemd[1]: Started Kubernetes Kubelet. 
Apr 16 18:29:57.475819 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.475802 2569 server.go:317] "Adding debug handlers to kubelet server" Apr 16 18:29:57.475909 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:29:57.475813 2569 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 18:29:57.475909 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.475819 2569 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "ip-10-0-142-225.ec2.internal" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 16 18:29:57.475993 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:29:57.475963 2569 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"ip-10-0-142-225.ec2.internal\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 18:29:57.481741 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:29:57.481717 2569 kubelet.go:1618] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Apr 16 18:29:57.483163 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.483146 2569 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 16 18:29:57.483247 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.483169 2569 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Apr 16 18:29:57.483711 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.483692 2569 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 16 18:29:57.484095 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.483904 2569 factory.go:55] Registering systemd factory Apr 16 18:29:57.484095 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.483934 2569 factory.go:223] Registration of the systemd container factory successfully Apr 16 18:29:57.484095 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.483919 2569 volume_manager.go:295] "The desired_state_of_world populator starts" Apr 16 18:29:57.484095 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.483979 2569 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 16 18:29:57.484095 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.484082 2569 reconstruct.go:97] "Volume reconstruction finished" Apr 16 18:29:57.484095 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.484091 2569 reconciler.go:26] "Reconciler: start to sync state" Apr 16 18:29:57.484446 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:29:57.484110 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-225.ec2.internal\" not found" Apr 16 18:29:57.484446 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.484146 2569 factory.go:153] Registering CRI-O factory Apr 16 18:29:57.484446 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.484160 2569 factory.go:223] Registration of the crio container factory successfully Apr 16 
18:29:57.484446 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.484218 2569 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Apr 16 18:29:57.484446 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.484237 2569 factory.go:103] Registering Raw factory Apr 16 18:29:57.484446 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.484250 2569 manager.go:1196] Started watching for new ooms in manager Apr 16 18:29:57.484860 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.484847 2569 manager.go:319] Starting recovery of all containers Apr 16 18:29:57.490779 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:29:57.490753 2569 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 18:29:57.490931 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:29:57.490911 2569 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"ip-10-0-142-225.ec2.internal\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Apr 16 18:29:57.491163 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.491134 2569 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 16 18:29:57.491728 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:29:57.490840 2569 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-225.ec2.internal.18a6e9d375f7d333 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-225.ec2.internal,UID:ip-10-0-142-225.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-10-0-142-225.ec2.internal,},FirstTimestamp:2026-04-16 18:29:57.470786355 +0000 UTC m=+0.363103722,LastTimestamp:2026-04-16 18:29:57.470786355 +0000 UTC m=+0.363103722,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-225.ec2.internal,}" Apr 16 18:29:57.494884 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.494743 2569 manager.go:324] Recovery completed Apr 16 18:29:57.498738 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.498725 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 16 18:29:57.500868 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.500852 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-225.ec2.internal" event="NodeHasSufficientMemory" Apr 16 18:29:57.500945 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.500880 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-225.ec2.internal" event="NodeHasNoDiskPressure" Apr 16 18:29:57.500945 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.500896 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-225.ec2.internal" event="NodeHasSufficientPID" Apr 16 18:29:57.501367 
ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.501355 2569 cpu_manager.go:222] "Starting CPU manager" policy="none" Apr 16 18:29:57.501367 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.501366 2569 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Apr 16 18:29:57.501470 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.501394 2569 state_mem.go:36] "Initialized new in-memory state store" Apr 16 18:29:57.502825 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:29:57.502771 2569 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-225.ec2.internal.18a6e9d377c2d2ec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-225.ec2.internal,UID:ip-10-0-142-225.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-10-0-142-225.ec2.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-10-0-142-225.ec2.internal,},FirstTimestamp:2026-04-16 18:29:57.500867308 +0000 UTC m=+0.393184663,LastTimestamp:2026-04-16 18:29:57.500867308 +0000 UTC m=+0.393184663,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-225.ec2.internal,}" Apr 16 18:29:57.504403 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.504391 2569 policy_none.go:49] "None policy: Start" Apr 16 18:29:57.504437 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.504408 2569 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 16 18:29:57.504437 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.504418 2569 state_mem.go:35] "Initializing new in-memory state store" Apr 16 18:29:57.504578 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.504564 2569 csr.go:274] 
"Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-w7lzf" Apr 16 18:29:57.511287 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.511268 2569 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-w7lzf" Apr 16 18:29:57.515165 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:29:57.515109 2569 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ip-10-0-142-225.ec2.internal.18a6e9d377c32b0d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-10-0-142-225.ec2.internal,UID:ip-10-0-142-225.ec2.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-10-0-142-225.ec2.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-10-0-142-225.ec2.internal,},FirstTimestamp:2026-04-16 18:29:57.500889869 +0000 UTC m=+0.393207224,LastTimestamp:2026-04-16 18:29:57.500889869 +0000 UTC m=+0.393207224,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-10-0-142-225.ec2.internal,}" Apr 16 18:29:57.547956 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.547940 2569 manager.go:341] "Starting Device Plugin manager" Apr 16 18:29:57.567509 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:29:57.547976 2569 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 18:29:57.567509 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.547989 2569 server.go:85] "Starting device plugin registration server" Apr 16 18:29:57.567509 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.548224 2569 
eviction_manager.go:189] "Eviction manager: starting control loop" Apr 16 18:29:57.567509 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.548235 2569 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 18:29:57.567509 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.548319 2569 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Apr 16 18:29:57.567509 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.548425 2569 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Apr 16 18:29:57.567509 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.548435 2569 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 16 18:29:57.567509 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:29:57.548885 2569 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\"" Apr 16 18:29:57.567509 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:29:57.548928 2569 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-10-0-142-225.ec2.internal\" not found" Apr 16 18:29:57.567509 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.552111 2569 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 16 18:29:57.567509 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.552133 2569 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 16 18:29:57.567509 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.552149 2569 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 16 18:29:57.567509 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.552155 2569 kubelet.go:2451] "Starting kubelet main sync loop" Apr 16 18:29:57.567509 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:29:57.552182 2569 kubelet.go:2475] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Apr 16 18:29:57.567509 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.555839 2569 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Apr 16 18:29:57.649089 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.649037 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 16 18:29:57.649840 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.649826 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-225.ec2.internal" event="NodeHasSufficientMemory" Apr 16 18:29:57.649887 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.649855 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-225.ec2.internal" event="NodeHasNoDiskPressure" Apr 16 18:29:57.649887 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.649865 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-225.ec2.internal" event="NodeHasSufficientPID" Apr 16 18:29:57.649887 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.649886 2569 kubelet_node_status.go:78] "Attempting to register node" node="ip-10-0-142-225.ec2.internal" Apr 16 18:29:57.652959 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.652944 2569 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-225.ec2.internal","kube-system/kube-apiserver-proxy-ip-10-0-142-225.ec2.internal"] Apr 16 18:29:57.653010 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.653002 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume 
controller attach/detach" Apr 16 18:29:57.653652 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.653639 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-225.ec2.internal" event="NodeHasSufficientMemory" Apr 16 18:29:57.653717 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.653664 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-225.ec2.internal" event="NodeHasNoDiskPressure" Apr 16 18:29:57.653717 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.653674 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-225.ec2.internal" event="NodeHasSufficientPID" Apr 16 18:29:57.655984 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.655973 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 16 18:29:57.656109 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.656096 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-225.ec2.internal" Apr 16 18:29:57.656143 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.656134 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 16 18:29:57.656650 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.656637 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-225.ec2.internal" event="NodeHasSufficientMemory" Apr 16 18:29:57.656719 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.656656 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-225.ec2.internal" event="NodeHasNoDiskPressure" Apr 16 18:29:57.656719 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.656667 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-225.ec2.internal" event="NodeHasSufficientPID" Apr 16 18:29:57.656719 ip-10-0-142-225 kubenswrapper[2569]: 
I0416 18:29:57.656641 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-225.ec2.internal" event="NodeHasSufficientMemory" Apr 16 18:29:57.656719 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.656693 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-225.ec2.internal" event="NodeHasNoDiskPressure" Apr 16 18:29:57.656719 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.656704 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-225.ec2.internal" event="NodeHasSufficientPID" Apr 16 18:29:57.656901 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.656881 2569 kubelet_node_status.go:81] "Successfully registered node" node="ip-10-0-142-225.ec2.internal" Apr 16 18:29:57.656901 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:29:57.656894 2569 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"ip-10-0-142-225.ec2.internal\": node \"ip-10-0-142-225.ec2.internal\" not found" Apr 16 18:29:57.659327 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.659314 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-142-225.ec2.internal" Apr 16 18:29:57.659397 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.659340 2569 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Apr 16 18:29:57.659989 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.659973 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-225.ec2.internal" event="NodeHasSufficientMemory" Apr 16 18:29:57.660051 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.660006 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-225.ec2.internal" event="NodeHasNoDiskPressure" Apr 16 18:29:57.660051 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.660020 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-225.ec2.internal" event="NodeHasSufficientPID" Apr 16 18:29:57.672452 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:29:57.672439 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-225.ec2.internal\" not found" Apr 16 18:29:57.685218 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.685200 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/73d3305f726a432b08c0adb8461c873e-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-142-225.ec2.internal\" (UID: \"73d3305f726a432b08c0adb8461c873e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-225.ec2.internal" Apr 16 18:29:57.685270 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.685224 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/73d3305f726a432b08c0adb8461c873e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-142-225.ec2.internal\" (UID: 
\"73d3305f726a432b08c0adb8461c873e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-225.ec2.internal" Apr 16 18:29:57.685270 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.685241 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46a218914bddbe8acb8c3bd91619bc9c-config\") pod \"kube-apiserver-proxy-ip-10-0-142-225.ec2.internal\" (UID: \"46a218914bddbe8acb8c3bd91619bc9c\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-142-225.ec2.internal" Apr 16 18:29:57.686971 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:29:57.686955 2569 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-142-225.ec2.internal\" not found" node="ip-10-0-142-225.ec2.internal" Apr 16 18:29:57.690257 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:29:57.690239 2569 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-10-0-142-225.ec2.internal\" not found" node="ip-10-0-142-225.ec2.internal" Apr 16 18:29:57.773052 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:29:57.773032 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-225.ec2.internal\" not found" Apr 16 18:29:57.786269 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.786252 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/73d3305f726a432b08c0adb8461c873e-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-142-225.ec2.internal\" (UID: \"73d3305f726a432b08c0adb8461c873e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-225.ec2.internal" Apr 16 18:29:57.786370 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.786274 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/73d3305f726a432b08c0adb8461c873e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-142-225.ec2.internal\" (UID: \"73d3305f726a432b08c0adb8461c873e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-225.ec2.internal" Apr 16 18:29:57.786370 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.786291 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46a218914bddbe8acb8c3bd91619bc9c-config\") pod \"kube-apiserver-proxy-ip-10-0-142-225.ec2.internal\" (UID: \"46a218914bddbe8acb8c3bd91619bc9c\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-142-225.ec2.internal" Apr 16 18:29:57.786370 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.786334 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46a218914bddbe8acb8c3bd91619bc9c-config\") pod \"kube-apiserver-proxy-ip-10-0-142-225.ec2.internal\" (UID: \"46a218914bddbe8acb8c3bd91619bc9c\") " pod="kube-system/kube-apiserver-proxy-ip-10-0-142-225.ec2.internal" Apr 16 18:29:57.786370 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.786351 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/73d3305f726a432b08c0adb8461c873e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-ip-10-0-142-225.ec2.internal\" (UID: \"73d3305f726a432b08c0adb8461c873e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-225.ec2.internal" Apr 16 18:29:57.786532 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.786351 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/73d3305f726a432b08c0adb8461c873e-etc-kube\") pod \"kube-rbac-proxy-crio-ip-10-0-142-225.ec2.internal\" (UID: \"73d3305f726a432b08c0adb8461c873e\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-225.ec2.internal" Apr 16 18:29:57.873957 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:29:57.873929 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-225.ec2.internal\" not found" Apr 16 18:29:57.974691 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:29:57.974665 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-225.ec2.internal\" not found" Apr 16 18:29:57.990107 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.990093 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-225.ec2.internal" Apr 16 18:29:57.992657 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:57.992631 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/kube-apiserver-proxy-ip-10-0-142-225.ec2.internal" Apr 16 18:29:58.074849 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:29:58.074817 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-225.ec2.internal\" not found" Apr 16 18:29:58.175330 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:29:58.175299 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-225.ec2.internal\" not found" Apr 16 18:29:58.275901 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:29:58.275820 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-225.ec2.internal\" not found" Apr 16 18:29:58.286007 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:58.285983 2569 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Apr 16 18:29:58.376446 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:29:58.376413 2569 kubelet_node_status.go:515] "Error getting the current node from lister" 
err="node \"ip-10-0-142-225.ec2.internal\" not found" Apr 16 18:29:58.402964 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:58.402941 2569 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Apr 16 18:29:58.403410 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:58.403082 2569 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Apr 16 18:29:58.403410 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:58.403108 2569 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Apr 16 18:29:58.477557 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:29:58.477523 2569 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"ip-10-0-142-225.ec2.internal\" not found" Apr 16 18:29:58.478085 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:58.478066 2569 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Apr 16 18:29:58.484122 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:58.484100 2569 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-225.ec2.internal" Apr 16 18:29:58.484207 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:58.484138 2569 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Apr 16 18:29:58.495765 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:58.495742 2569 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" 
reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Apr 16 18:29:58.499505 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:58.499486 2569 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 16 18:29:58.500325 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:58.500310 2569 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-142-225.ec2.internal" Apr 16 18:29:58.513220 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:58.513189 2569 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2028-04-15 18:24:57 +0000 UTC" deadline="2027-09-10 12:17:12.180429509 +0000 UTC" Apr 16 18:29:58.513220 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:58.513214 2569 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="12281h47m13.667218453s" Apr 16 18:29:58.514065 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:58.514046 2569 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 16 18:29:58.514666 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:58.514642 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46a218914bddbe8acb8c3bd91619bc9c.slice/crio-c29504b8c3d13d7b6c0a5ef197cc91c18c49889004b3b0f557916849a68f4573 WatchSource:0}: Error finding container c29504b8c3d13d7b6c0a5ef197cc91c18c49889004b3b0f557916849a68f4573: Status 404 returned error can't find the container with id c29504b8c3d13d7b6c0a5ef197cc91c18c49889004b3b0f557916849a68f4573 Apr 16 18:29:58.514995 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:29:58.514973 2569 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73d3305f726a432b08c0adb8461c873e.slice/crio-953125c8f75e0e618928e121769984b848597f962c52828eb5ed3494bb2863dc WatchSource:0}: Error finding container 953125c8f75e0e618928e121769984b848597f962c52828eb5ed3494bb2863dc: Status 404 returned error can't find the container with id 953125c8f75e0e618928e121769984b848597f962c52828eb5ed3494bb2863dc Apr 16 18:29:58.518899 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:58.518885 2569 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 16 18:29:58.523038 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:58.523021 2569 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-f2pdg" Apr 16 18:29:58.530301 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:58.530255 2569 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-f2pdg" Apr 16 18:29:58.555298 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:58.555258 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-142-225.ec2.internal" event={"ID":"46a218914bddbe8acb8c3bd91619bc9c","Type":"ContainerStarted","Data":"c29504b8c3d13d7b6c0a5ef197cc91c18c49889004b3b0f557916849a68f4573"} Apr 16 18:29:58.556188 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:58.556168 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-225.ec2.internal" event={"ID":"73d3305f726a432b08c0adb8461c873e","Type":"ContainerStarted","Data":"953125c8f75e0e618928e121769984b848597f962c52828eb5ed3494bb2863dc"} Apr 16 18:29:59.051245 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.051180 2569 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Apr 16 18:29:59.461714 
ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.461683 2569 apiserver.go:52] "Watching apiserver" Apr 16 18:29:59.469058 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.469031 2569 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Apr 16 18:29:59.469449 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.469425 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-w5snm","openshift-cluster-node-tuning-operator/tuned-299vl","openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-225.ec2.internal","openshift-multus/multus-q9jcb","openshift-multus/network-metrics-daemon-p2hph","openshift-network-diagnostics/network-check-target-8mjdj","openshift-ovn-kubernetes/ovnkube-node-m5gbp","kube-system/kube-apiserver-proxy-ip-10-0-142-225.ec2.internal","openshift-image-registry/node-ca-r2w68","openshift-multus/multus-additional-cni-plugins-kmhkm","openshift-network-operator/iptables-alerter-fqdpz","kube-system/konnectivity-agent-24f2c"] Apr 16 18:29:59.472663 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.472641 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-w5snm" Apr 16 18:29:59.474746 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.474717 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-299vl" Apr 16 18:29:59.475426 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.475407 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"kube-root-ca.crt\"" Apr 16 18:29:59.475522 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.475433 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-csi-drivers\"/\"openshift-service-ca.crt\"" Apr 16 18:29:59.475522 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.475472 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-sa-dockercfg-qz94d\"" Apr 16 18:29:59.475631 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.475591 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-csi-drivers\"/\"aws-ebs-csi-driver-node-metrics-serving-cert\"" Apr 16 18:29:59.477008 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.476859 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-q9jcb" Apr 16 18:29:59.477719 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.477356 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"openshift-service-ca.crt\"" Apr 16 18:29:59.477719 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.477524 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"tuned-dockercfg-sdrgb\"" Apr 16 18:29:59.477719 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.477535 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-node-tuning-operator\"/\"kube-root-ca.crt\"" Apr 16 18:29:59.479198 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.479175 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Apr 16 18:29:59.479483 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.479334 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Apr 16 18:29:59.479483 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.479396 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Apr 16 18:29:59.479483 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.479412 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-xkbdg\"" Apr 16 18:29:59.479743 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.479727 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Apr 16 18:29:59.481559 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.481329 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hph" Apr 16 18:29:59.481559 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.481417 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8mjdj" Apr 16 18:29:59.481559 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:29:59.481469 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8mjdj" podUID="c1b6e68a-5279-411c-ba1c-fd6c274af91f" Apr 16 18:29:59.481559 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:29:59.481415 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hph" podUID="dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30" Apr 16 18:29:59.485477 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.485455 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" Apr 16 18:29:59.488469 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.488267 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Apr 16 18:29:59.488469 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.488302 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Apr 16 18:29:59.488469 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.488334 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Apr 16 18:29:59.488469 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.488461 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-qq9d8\"" Apr 16 18:29:59.488708 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.488494 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Apr 16 18:29:59.489305 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.489248 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Apr 16 18:29:59.489450 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.489432 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Apr 16 18:29:59.490261 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.490164 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-r2w68" Apr 16 18:29:59.490347 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.490318 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-kmhkm" Apr 16 18:29:59.493178 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.492741 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-fqdpz" Apr 16 18:29:59.493178 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.492764 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Apr 16 18:29:59.493178 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.492812 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Apr 16 18:29:59.493178 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.492875 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-s6bzs\"" Apr 16 18:29:59.493178 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.492995 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Apr 16 18:29:59.493178 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.493026 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Apr 16 18:29:59.493178 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.493133 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Apr 16 18:29:59.493630 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.493398 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-xn4bx\"" Apr 16 18:29:59.495085 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.495060 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/konnectivity-agent-24f2c" Apr 16 18:29:59.496172 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.496157 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Apr 16 18:29:59.496297 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.496276 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Apr 16 18:29:59.496527 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.496507 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-os-release\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb" Apr 16 18:29:59.496614 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.496545 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-host-var-lib-cni-bin\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb" Apr 16 18:29:59.496614 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.496563 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Apr 16 18:29:59.496614 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.496570 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-host-run-ovn-kubernetes\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" Apr 16 
18:29:59.496614 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.496598 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/127cd67a-6124-4bcc-baa8-d0ae87cd028f-etc-sysconfig\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl" Apr 16 18:29:59.496822 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.496620 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-host-run-k8s-cni-cncf-io\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb" Apr 16 18:29:59.496822 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.496643 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-multus-conf-dir\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb" Apr 16 18:29:59.496822 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.496681 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-etc-kubernetes\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb" Apr 16 18:29:59.496822 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.496756 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-system-cni-dir\") pod \"multus-q9jcb\" (UID: 
\"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb" Apr 16 18:29:59.496822 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.496785 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e4899c5b-5582-41f6-8785-b1420a447044-multus-daemon-config\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb" Apr 16 18:29:59.496822 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.496819 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/53ef616d-9ee1-4d5c-844a-05af3965cf4a-sys-fs\") pod \"aws-ebs-csi-driver-node-w5snm\" (UID: \"53ef616d-9ee1-4d5c-844a-05af3965cf4a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-w5snm" Apr 16 18:29:59.497142 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.496842 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-cnibin\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb" Apr 16 18:29:59.497142 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.496865 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-host-cni-bin\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" Apr 16 18:29:59.497142 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.496891 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/8a6cc626-96f9-4f30-a283-abdb6733cdac-ovnkube-config\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" Apr 16 18:29:59.497142 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.496915 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8a6cc626-96f9-4f30-a283-abdb6733cdac-env-overrides\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" Apr 16 18:29:59.497142 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.496939 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/127cd67a-6124-4bcc-baa8-d0ae87cd028f-etc-tuned\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl" Apr 16 18:29:59.497142 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.496963 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-hostroot\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb" Apr 16 18:29:59.497142 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.496987 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-etc-openvswitch\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" Apr 16 18:29:59.497142 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497010 2569 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrf8s\" (UniqueName: \"kubernetes.io/projected/8a6cc626-96f9-4f30-a283-abdb6733cdac-kube-api-access-wrf8s\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" Apr 16 18:29:59.497142 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497047 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-multus-socket-dir-parent\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb" Apr 16 18:29:59.497142 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497077 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/53ef616d-9ee1-4d5c-844a-05af3965cf4a-kubelet-dir\") pod \"aws-ebs-csi-driver-node-w5snm\" (UID: \"53ef616d-9ee1-4d5c-844a-05af3965cf4a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-w5snm" Apr 16 18:29:59.497142 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497099 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/53ef616d-9ee1-4d5c-844a-05af3965cf4a-device-dir\") pod \"aws-ebs-csi-driver-node-w5snm\" (UID: \"53ef616d-9ee1-4d5c-844a-05af3965cf4a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-w5snm" Apr 16 18:29:59.497142 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497129 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/53ef616d-9ee1-4d5c-844a-05af3965cf4a-etc-selinux\") pod \"aws-ebs-csi-driver-node-w5snm\" (UID: 
\"53ef616d-9ee1-4d5c-844a-05af3965cf4a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-w5snm" Apr 16 18:29:59.497142 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497143 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kube-system\"/\"konnectivity-ca-bundle\"" Apr 16 18:29:59.497845 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497157 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/127cd67a-6124-4bcc-baa8-d0ae87cd028f-etc-sysctl-d\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl" Apr 16 18:29:59.497845 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497184 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-systemd-units\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" Apr 16 18:29:59.497845 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497208 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/53ef616d-9ee1-4d5c-844a-05af3965cf4a-socket-dir\") pod \"aws-ebs-csi-driver-node-w5snm\" (UID: \"53ef616d-9ee1-4d5c-844a-05af3965cf4a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-w5snm" Apr 16 18:29:59.497845 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497233 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-var-lib-openvswitch\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp"
Apr 16 18:29:59.497845 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497253 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/127cd67a-6124-4bcc-baa8-d0ae87cd028f-etc-kubernetes\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl"
Apr 16 18:29:59.497845 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497272 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/127cd67a-6124-4bcc-baa8-d0ae87cd028f-sys\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl"
Apr 16 18:29:59.497845 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497292 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx2fl\" (UniqueName: \"kubernetes.io/projected/e4899c5b-5582-41f6-8785-b1420a447044-kube-api-access-kx2fl\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb"
Apr 16 18:29:59.497845 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497319 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-host-kubelet\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp"
Apr 16 18:29:59.497845 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497340 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/53ef616d-9ee1-4d5c-844a-05af3965cf4a-registration-dir\") pod \"aws-ebs-csi-driver-node-w5snm\" (UID: \"53ef616d-9ee1-4d5c-844a-05af3965cf4a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-w5snm"
Apr 16 18:29:59.497845 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497354 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/127cd67a-6124-4bcc-baa8-d0ae87cd028f-etc-modprobe-d\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl"
Apr 16 18:29:59.497845 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497386 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-host-var-lib-cni-multus\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb"
Apr 16 18:29:59.497845 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497405 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-run-ovn\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp"
Apr 16 18:29:59.497845 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497420 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-log-socket\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp"
Apr 16 18:29:59.497845 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497453 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/127cd67a-6124-4bcc-baa8-d0ae87cd028f-run\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl"
Apr 16 18:29:59.497845 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497461 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"default-dockercfg-947w9\""
Apr 16 18:29:59.497845 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497482 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/127cd67a-6124-4bcc-baa8-d0ae87cd028f-host\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl"
Apr 16 18:29:59.497845 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497485 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-dockercfg-wg4gm\""
Apr 16 18:29:59.497845 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497521 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"konnectivity-agent\""
Apr 16 18:29:59.498708 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497523 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/127cd67a-6124-4bcc-baa8-d0ae87cd028f-tmp\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl"
Apr 16 18:29:59.498708 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497556 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfcrh\" (UniqueName: \"kubernetes.io/projected/dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30-kube-api-access-jfcrh\") pod \"network-metrics-daemon-p2hph\" (UID: \"dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30\") " pod="openshift-multus/network-metrics-daemon-p2hph"
Apr 16 18:29:59.498708 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497581 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-host-run-netns\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp"
Apr 16 18:29:59.498708 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497620 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp"
Apr 16 18:29:59.498708 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497647 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/127cd67a-6124-4bcc-baa8-d0ae87cd028f-var-lib-kubelet\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl"
Apr 16 18:29:59.498708 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497671 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5psg4\" (UniqueName: \"kubernetes.io/projected/127cd67a-6124-4bcc-baa8-d0ae87cd028f-kube-api-access-5psg4\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl"
Apr 16 18:29:59.498708 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497709 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-host-var-lib-kubelet\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb"
Apr 16 18:29:59.498708 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497742 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30-metrics-certs\") pod \"network-metrics-daemon-p2hph\" (UID: \"dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30\") " pod="openshift-multus/network-metrics-daemon-p2hph"
Apr 16 18:29:59.498708 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497771 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n975s\" (UniqueName: \"kubernetes.io/projected/c1b6e68a-5279-411c-ba1c-fd6c274af91f-kube-api-access-n975s\") pod \"network-check-target-8mjdj\" (UID: \"c1b6e68a-5279-411c-ba1c-fd6c274af91f\") " pod="openshift-network-diagnostics/network-check-target-8mjdj"
Apr 16 18:29:59.498708 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497795 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-run-systemd\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp"
Apr 16 18:29:59.498708 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497816 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-run-openvswitch\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp"
Apr 16 18:29:59.498708 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497836 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6842\" (UniqueName: \"kubernetes.io/projected/53ef616d-9ee1-4d5c-844a-05af3965cf4a-kube-api-access-c6842\") pod \"aws-ebs-csi-driver-node-w5snm\" (UID: \"53ef616d-9ee1-4d5c-844a-05af3965cf4a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-w5snm"
Apr 16 18:29:59.498708 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497869 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/127cd67a-6124-4bcc-baa8-d0ae87cd028f-etc-sysctl-conf\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl"
Apr 16 18:29:59.498708 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497903 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/127cd67a-6124-4bcc-baa8-d0ae87cd028f-lib-modules\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl"
Apr 16 18:29:59.498708 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497930 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e4899c5b-5582-41f6-8785-b1420a447044-cni-binary-copy\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb"
Apr 16 18:29:59.498708 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497970 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-host-run-netns\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb"
Apr 16 18:29:59.499545 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.497987 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-host-run-multus-certs\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb"
Apr 16 18:29:59.499545 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.498135 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-host-cni-netd\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp"
Apr 16 18:29:59.499545 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.498165 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-multus-cni-dir\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb"
Apr 16 18:29:59.499545 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.498219 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-node-log\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp"
Apr 16 18:29:59.499545 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.498250 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8a6cc626-96f9-4f30-a283-abdb6733cdac-ovn-node-metrics-cert\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp"
Apr 16 18:29:59.499545 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.498273 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/127cd67a-6124-4bcc-baa8-d0ae87cd028f-etc-systemd\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl"
Apr 16 18:29:59.499545 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.498309 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-host-slash\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp"
Apr 16 18:29:59.499545 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.498335 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8a6cc626-96f9-4f30-a283-abdb6733cdac-ovnkube-script-lib\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp"
Apr 16 18:29:59.530933 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.530898 2569 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-15 18:24:58 +0000 UTC" deadline="2027-09-29 07:39:46.419085579 +0000 UTC"
Apr 16 18:29:59.530933 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.530925 2569 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="12733h9m46.888162562s"
Apr 16 18:29:59.585024 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.585004 2569 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 16 18:29:59.598732 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.598708 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/127cd67a-6124-4bcc-baa8-d0ae87cd028f-run\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl"
Apr 16 18:29:59.598843 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.598743 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/127cd67a-6124-4bcc-baa8-d0ae87cd028f-host\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl"
Apr 16 18:29:59.598843 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.598765 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/127cd67a-6124-4bcc-baa8-d0ae87cd028f-tmp\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl"
Apr 16 18:29:59.598843 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.598828 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/127cd67a-6124-4bcc-baa8-d0ae87cd028f-host\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl"
Apr 16 18:29:59.599000 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.598887 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/127cd67a-6124-4bcc-baa8-d0ae87cd028f-run\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl"
Apr 16 18:29:59.599000 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.598919 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jfcrh\" (UniqueName: \"kubernetes.io/projected/dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30-kube-api-access-jfcrh\") pod \"network-metrics-daemon-p2hph\" (UID: \"dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30\") " pod="openshift-multus/network-metrics-daemon-p2hph"
Apr 16 18:29:59.599000 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.598949 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-host-run-netns\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp"
Apr 16 18:29:59.599000 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.598971 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp"
Apr 16 18:29:59.599000 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.598991 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/127cd67a-6124-4bcc-baa8-d0ae87cd028f-var-lib-kubelet\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl"
Apr 16 18:29:59.599206 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599010 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5psg4\" (UniqueName: \"kubernetes.io/projected/127cd67a-6124-4bcc-baa8-d0ae87cd028f-kube-api-access-5psg4\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl"
Apr 16 18:29:59.599206 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599034 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-host-var-lib-kubelet\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb"
Apr 16 18:29:59.599206 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599063 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30-metrics-certs\") pod \"network-metrics-daemon-p2hph\" (UID: \"dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30\") " pod="openshift-multus/network-metrics-daemon-p2hph"
Apr 16 18:29:59.599206 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599058 2569 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Apr 16 18:29:59.599206 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599076 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp"
Apr 16 18:29:59.599206 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599087 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n975s\" (UniqueName: \"kubernetes.io/projected/c1b6e68a-5279-411c-ba1c-fd6c274af91f-kube-api-access-n975s\") pod \"network-check-target-8mjdj\" (UID: \"c1b6e68a-5279-411c-ba1c-fd6c274af91f\") " pod="openshift-network-diagnostics/network-check-target-8mjdj"
Apr 16 18:29:59.599206 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599088 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/127cd67a-6124-4bcc-baa8-d0ae87cd028f-var-lib-kubelet\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl"
Apr 16 18:29:59.599206 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599033 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-host-run-netns\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp"
Apr 16 18:29:59.599206 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599111 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-run-systemd\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp"
Apr 16 18:29:59.599206 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599130 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-host-var-lib-kubelet\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb"
Apr 16 18:29:59.599206 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599156 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-run-openvswitch\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp"
Apr 16 18:29:59.599206 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599183 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c6842\" (UniqueName: \"kubernetes.io/projected/53ef616d-9ee1-4d5c-844a-05af3965cf4a-kube-api-access-c6842\") pod \"aws-ebs-csi-driver-node-w5snm\" (UID: \"53ef616d-9ee1-4d5c-844a-05af3965cf4a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-w5snm"
Apr 16 18:29:59.599762 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599209 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/127cd67a-6124-4bcc-baa8-d0ae87cd028f-etc-sysctl-conf\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl"
Apr 16 18:29:59.599762 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599245 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/127cd67a-6124-4bcc-baa8-d0ae87cd028f-lib-modules\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl"
Apr 16 18:29:59.599762 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599273 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e4899c5b-5582-41f6-8785-b1420a447044-cni-binary-copy\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb"
Apr 16 18:29:59.599762 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599296 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-host-run-netns\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb"
Apr 16 18:29:59.599762 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599321 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-host-run-multus-certs\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb"
Apr 16 18:29:59.599762 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599344 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-host-cni-netd\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp"
Apr 16 18:29:59.599762 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:29:59.599350 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 16 18:29:59.599762 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599367 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-multus-cni-dir\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb"
Apr 16 18:29:59.599762 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599406 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-node-log\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp"
Apr 16 18:29:59.599762 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:29:59.599446 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30-metrics-certs podName:dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:00.099415651 +0000 UTC m=+2.991733019 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30-metrics-certs") pod "network-metrics-daemon-p2hph" (UID: "dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 16 18:29:59.599762 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599452 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-node-log\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp"
Apr 16 18:29:59.599762 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599466 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8a6cc626-96f9-4f30-a283-abdb6733cdac-ovn-node-metrics-cert\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp"
Apr 16 18:29:59.599762 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599495 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e5f5f6fc-73bc-4bc1-a607-dda6c5bbb1a0-serviceca\") pod \"node-ca-r2w68\" (UID: \"e5f5f6fc-73bc-4bc1-a607-dda6c5bbb1a0\") " pod="openshift-image-registry/node-ca-r2w68"
Apr 16 18:29:59.599762 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599501 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/127cd67a-6124-4bcc-baa8-d0ae87cd028f-etc-sysctl-conf\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl"
Apr 16 18:29:59.599762 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599517 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/671b786a-255d-4021-84d2-9c0ed65bd8da-konnectivity-ca\") pod \"konnectivity-agent-24f2c\" (UID: \"671b786a-255d-4021-84d2-9c0ed65bd8da\") " pod="kube-system/konnectivity-agent-24f2c"
Apr 16 18:29:59.599762 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599556 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-host-run-multus-certs\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb"
Apr 16 18:29:59.599762 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599565 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/127cd67a-6124-4bcc-baa8-d0ae87cd028f-lib-modules\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl"
Apr 16 18:29:59.600575 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599599 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-host-cni-netd\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp"
Apr 16 18:29:59.600575 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599627 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-host-run-netns\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb"
Apr 16 18:29:59.600575 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599654 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/c997d6be-0aad-4171-850c-d7aedaf7032f-iptables-alerter-script\") pod \"iptables-alerter-fqdpz\" (UID: \"c997d6be-0aad-4171-850c-d7aedaf7032f\") " pod="openshift-network-operator/iptables-alerter-fqdpz"
Apr 16 18:29:59.600575 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599689 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/127cd67a-6124-4bcc-baa8-d0ae87cd028f-etc-systemd\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl"
Apr 16 18:29:59.600575 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599712 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-multus-cni-dir\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb"
Apr 16 18:29:59.600575 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599714 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-host-slash\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp"
Apr 16 18:29:59.600575 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599760 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/127cd67a-6124-4bcc-baa8-d0ae87cd028f-etc-systemd\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl"
Apr 16 18:29:59.600575 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599816 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-host-slash\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp"
Apr 16 18:29:59.600575 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599824 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-run-openvswitch\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp"
Apr 16 18:29:59.600575 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599820 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-run-systemd\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp"
Apr 16 18:29:59.600575 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599852 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8a6cc626-96f9-4f30-a283-abdb6733cdac-ovnkube-script-lib\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp"
Apr 16 18:29:59.600575 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599888 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-os-release\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb"
Apr 16 18:29:59.600575 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599912 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-host-var-lib-cni-bin\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb"
Apr 16 18:29:59.600575 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599936 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-host-run-ovn-kubernetes\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp"
Apr 16 18:29:59.600575 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599956 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-os-release\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb"
Apr 16 18:29:59.600575 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.599962 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/127cd67a-6124-4bcc-baa8-d0ae87cd028f-etc-sysconfig\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl"
Apr 16 18:29:59.600575 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600012 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-host-var-lib-cni-bin\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb"
Apr 16 18:29:59.600575 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600066 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-host-run-k8s-cni-cncf-io\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb"
Apr 16 18:29:59.601404 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600093 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-multus-conf-dir\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb"
Apr 16 18:29:59.601404 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600095 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-host-run-k8s-cni-cncf-io\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb"
Apr 16 18:29:59.601404 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600135 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-host-run-ovn-kubernetes\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp"
Apr 16 18:29:59.601404 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600154 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-etc-kubernetes\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb"
Apr 16 18:29:59.601404 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600171 2569 operation_generator.go:615] "MountVolume.SetUp
succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-multus-conf-dir\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb" Apr 16 18:29:59.601404 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600206 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-system-cni-dir\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb" Apr 16 18:29:59.601404 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600242 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-etc-kubernetes\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb" Apr 16 18:29:59.601404 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600323 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-system-cni-dir\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb" Apr 16 18:29:59.601404 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600331 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e4899c5b-5582-41f6-8785-b1420a447044-multus-daemon-config\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb" Apr 16 18:29:59.601404 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600366 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: 
\"kubernetes.io/host-path/127cd67a-6124-4bcc-baa8-d0ae87cd028f-etc-sysconfig\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl" Apr 16 18:29:59.601404 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600421 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9494cb1a-9b64-48a6-ae24-14717ce0b8f0-os-release\") pod \"multus-additional-cni-plugins-kmhkm\" (UID: \"9494cb1a-9b64-48a6-ae24-14717ce0b8f0\") " pod="openshift-multus/multus-additional-cni-plugins-kmhkm" Apr 16 18:29:59.601404 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600440 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8a6cc626-96f9-4f30-a283-abdb6733cdac-ovnkube-script-lib\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" Apr 16 18:29:59.601404 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600461 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/53ef616d-9ee1-4d5c-844a-05af3965cf4a-sys-fs\") pod \"aws-ebs-csi-driver-node-w5snm\" (UID: \"53ef616d-9ee1-4d5c-844a-05af3965cf4a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-w5snm" Apr 16 18:29:59.601404 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600488 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-cnibin\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb" Apr 16 18:29:59.601404 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600514 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-host-cni-bin\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" Apr 16 18:29:59.601404 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600541 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8a6cc626-96f9-4f30-a283-abdb6733cdac-ovnkube-config\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" Apr 16 18:29:59.601404 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600549 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/53ef616d-9ee1-4d5c-844a-05af3965cf4a-sys-fs\") pod \"aws-ebs-csi-driver-node-w5snm\" (UID: \"53ef616d-9ee1-4d5c-844a-05af3965cf4a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-w5snm" Apr 16 18:29:59.601404 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600566 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8a6cc626-96f9-4f30-a283-abdb6733cdac-env-overrides\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" Apr 16 18:29:59.602288 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600589 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-host-cni-bin\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" Apr 16 18:29:59.602288 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600595 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9494cb1a-9b64-48a6-ae24-14717ce0b8f0-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kmhkm\" (UID: \"9494cb1a-9b64-48a6-ae24-14717ce0b8f0\") " pod="openshift-multus/multus-additional-cni-plugins-kmhkm" Apr 16 18:29:59.602288 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600621 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm6nj\" (UniqueName: \"kubernetes.io/projected/9494cb1a-9b64-48a6-ae24-14717ce0b8f0-kube-api-access-nm6nj\") pod \"multus-additional-cni-plugins-kmhkm\" (UID: \"9494cb1a-9b64-48a6-ae24-14717ce0b8f0\") " pod="openshift-multus/multus-additional-cni-plugins-kmhkm" Apr 16 18:29:59.602288 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600636 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-cnibin\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb" Apr 16 18:29:59.602288 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600649 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/127cd67a-6124-4bcc-baa8-d0ae87cd028f-etc-tuned\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl" Apr 16 18:29:59.602288 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600648 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e4899c5b-5582-41f6-8785-b1420a447044-cni-binary-copy\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb" Apr 16 18:29:59.602288 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600677 2569 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-hostroot\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb" Apr 16 18:29:59.602288 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600726 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-etc-openvswitch\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" Apr 16 18:29:59.602288 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600721 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-hostroot\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb" Apr 16 18:29:59.602288 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600763 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wrf8s\" (UniqueName: \"kubernetes.io/projected/8a6cc626-96f9-4f30-a283-abdb6733cdac-kube-api-access-wrf8s\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" Apr 16 18:29:59.602288 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600796 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9494cb1a-9b64-48a6-ae24-14717ce0b8f0-system-cni-dir\") pod \"multus-additional-cni-plugins-kmhkm\" (UID: \"9494cb1a-9b64-48a6-ae24-14717ce0b8f0\") " pod="openshift-multus/multus-additional-cni-plugins-kmhkm" Apr 16 18:29:59.602288 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600823 
2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9494cb1a-9b64-48a6-ae24-14717ce0b8f0-cnibin\") pod \"multus-additional-cni-plugins-kmhkm\" (UID: \"9494cb1a-9b64-48a6-ae24-14717ce0b8f0\") " pod="openshift-multus/multus-additional-cni-plugins-kmhkm" Apr 16 18:29:59.602288 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600826 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-etc-openvswitch\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" Apr 16 18:29:59.602288 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600845 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e4899c5b-5582-41f6-8785-b1420a447044-multus-daemon-config\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb" Apr 16 18:29:59.602288 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600852 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/9494cb1a-9b64-48a6-ae24-14717ce0b8f0-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-kmhkm\" (UID: \"9494cb1a-9b64-48a6-ae24-14717ce0b8f0\") " pod="openshift-multus/multus-additional-cni-plugins-kmhkm" Apr 16 18:29:59.602288 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600882 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-multus-socket-dir-parent\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " 
pod="openshift-multus/multus-q9jcb" Apr 16 18:29:59.602288 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600918 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9494cb1a-9b64-48a6-ae24-14717ce0b8f0-cni-binary-copy\") pod \"multus-additional-cni-plugins-kmhkm\" (UID: \"9494cb1a-9b64-48a6-ae24-14717ce0b8f0\") " pod="openshift-multus/multus-additional-cni-plugins-kmhkm" Apr 16 18:29:59.603104 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600949 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9494cb1a-9b64-48a6-ae24-14717ce0b8f0-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kmhkm\" (UID: \"9494cb1a-9b64-48a6-ae24-14717ce0b8f0\") " pod="openshift-multus/multus-additional-cni-plugins-kmhkm" Apr 16 18:29:59.603104 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600951 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-multus-socket-dir-parent\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb" Apr 16 18:29:59.603104 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.600988 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2thtl\" (UniqueName: \"kubernetes.io/projected/c997d6be-0aad-4171-850c-d7aedaf7032f-kube-api-access-2thtl\") pod \"iptables-alerter-fqdpz\" (UID: \"c997d6be-0aad-4171-850c-d7aedaf7032f\") " pod="openshift-network-operator/iptables-alerter-fqdpz" Apr 16 18:29:59.603104 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.601022 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/53ef616d-9ee1-4d5c-844a-05af3965cf4a-kubelet-dir\") pod \"aws-ebs-csi-driver-node-w5snm\" (UID: \"53ef616d-9ee1-4d5c-844a-05af3965cf4a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-w5snm" Apr 16 18:29:59.603104 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.601073 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/53ef616d-9ee1-4d5c-844a-05af3965cf4a-kubelet-dir\") pod \"aws-ebs-csi-driver-node-w5snm\" (UID: \"53ef616d-9ee1-4d5c-844a-05af3965cf4a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-w5snm" Apr 16 18:29:59.603104 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.601090 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/53ef616d-9ee1-4d5c-844a-05af3965cf4a-device-dir\") pod \"aws-ebs-csi-driver-node-w5snm\" (UID: \"53ef616d-9ee1-4d5c-844a-05af3965cf4a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-w5snm" Apr 16 18:29:59.603104 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.601117 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/53ef616d-9ee1-4d5c-844a-05af3965cf4a-etc-selinux\") pod \"aws-ebs-csi-driver-node-w5snm\" (UID: \"53ef616d-9ee1-4d5c-844a-05af3965cf4a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-w5snm" Apr 16 18:29:59.603104 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.601141 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/127cd67a-6124-4bcc-baa8-d0ae87cd028f-etc-sysctl-d\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl" Apr 16 18:29:59.603104 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.601166 2569 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-systemd-units\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" Apr 16 18:29:59.603104 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.601191 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e5f5f6fc-73bc-4bc1-a607-dda6c5bbb1a0-host\") pod \"node-ca-r2w68\" (UID: \"e5f5f6fc-73bc-4bc1-a607-dda6c5bbb1a0\") " pod="openshift-image-registry/node-ca-r2w68" Apr 16 18:29:59.603104 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.601213 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8a6cc626-96f9-4f30-a283-abdb6733cdac-ovnkube-config\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" Apr 16 18:29:59.603104 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.601215 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/53ef616d-9ee1-4d5c-844a-05af3965cf4a-socket-dir\") pod \"aws-ebs-csi-driver-node-w5snm\" (UID: \"53ef616d-9ee1-4d5c-844a-05af3965cf4a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-w5snm" Apr 16 18:29:59.603104 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.601244 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/127cd67a-6124-4bcc-baa8-d0ae87cd028f-etc-sysctl-d\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl" Apr 16 18:29:59.603104 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.601252 
2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-selinux\" (UniqueName: \"kubernetes.io/host-path/53ef616d-9ee1-4d5c-844a-05af3965cf4a-etc-selinux\") pod \"aws-ebs-csi-driver-node-w5snm\" (UID: \"53ef616d-9ee1-4d5c-844a-05af3965cf4a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-w5snm" Apr 16 18:29:59.603104 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.601271 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-var-lib-openvswitch\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" Apr 16 18:29:59.603104 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.601293 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-systemd-units\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" Apr 16 18:29:59.603104 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.601297 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/127cd67a-6124-4bcc-baa8-d0ae87cd028f-etc-kubernetes\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl" Apr 16 18:29:59.603818 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.601310 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/53ef616d-9ee1-4d5c-844a-05af3965cf4a-socket-dir\") pod \"aws-ebs-csi-driver-node-w5snm\" (UID: \"53ef616d-9ee1-4d5c-844a-05af3965cf4a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-w5snm" Apr 16 18:29:59.603818 ip-10-0-142-225 
kubenswrapper[2569]: I0416 18:29:59.601301 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/53ef616d-9ee1-4d5c-844a-05af3965cf4a-device-dir\") pod \"aws-ebs-csi-driver-node-w5snm\" (UID: \"53ef616d-9ee1-4d5c-844a-05af3965cf4a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-w5snm" Apr 16 18:29:59.603818 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.601321 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/127cd67a-6124-4bcc-baa8-d0ae87cd028f-sys\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl" Apr 16 18:29:59.603818 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.601333 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-var-lib-openvswitch\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" Apr 16 18:29:59.603818 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.601347 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kx2fl\" (UniqueName: \"kubernetes.io/projected/e4899c5b-5582-41f6-8785-b1420a447044-kube-api-access-kx2fl\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb" Apr 16 18:29:59.603818 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.601359 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/127cd67a-6124-4bcc-baa8-d0ae87cd028f-etc-kubernetes\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl" Apr 16 18:29:59.603818 ip-10-0-142-225 
kubenswrapper[2569]: I0416 18:29:59.601386 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-host-kubelet\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" Apr 16 18:29:59.603818 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.601394 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/127cd67a-6124-4bcc-baa8-d0ae87cd028f-sys\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl" Apr 16 18:29:59.603818 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.601416 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-host-kubelet\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" Apr 16 18:29:59.603818 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.601415 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/671b786a-255d-4021-84d2-9c0ed65bd8da-agent-certs\") pod \"konnectivity-agent-24f2c\" (UID: \"671b786a-255d-4021-84d2-9c0ed65bd8da\") " pod="kube-system/konnectivity-agent-24f2c" Apr 16 18:29:59.603818 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.601466 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c997d6be-0aad-4171-850c-d7aedaf7032f-host-slash\") pod \"iptables-alerter-fqdpz\" (UID: \"c997d6be-0aad-4171-850c-d7aedaf7032f\") " pod="openshift-network-operator/iptables-alerter-fqdpz" Apr 16 18:29:59.603818 ip-10-0-142-225 
kubenswrapper[2569]: I0416 18:29:59.601497 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/53ef616d-9ee1-4d5c-844a-05af3965cf4a-registration-dir\") pod \"aws-ebs-csi-driver-node-w5snm\" (UID: \"53ef616d-9ee1-4d5c-844a-05af3965cf4a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-w5snm" Apr 16 18:29:59.603818 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.601523 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/127cd67a-6124-4bcc-baa8-d0ae87cd028f-etc-modprobe-d\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl" Apr 16 18:29:59.603818 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.601564 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/53ef616d-9ee1-4d5c-844a-05af3965cf4a-registration-dir\") pod \"aws-ebs-csi-driver-node-w5snm\" (UID: \"53ef616d-9ee1-4d5c-844a-05af3965cf4a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-w5snm" Apr 16 18:29:59.603818 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.601568 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-host-var-lib-cni-multus\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb" Apr 16 18:29:59.603818 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.601605 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-run-ovn\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" Apr 16 18:29:59.603818 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.601627 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-log-socket\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" Apr 16 18:29:59.604425 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.601647 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/127cd67a-6124-4bcc-baa8-d0ae87cd028f-etc-modprobe-d\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl" Apr 16 18:29:59.604425 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.601672 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvjnd\" (UniqueName: \"kubernetes.io/projected/e5f5f6fc-73bc-4bc1-a607-dda6c5bbb1a0-kube-api-access-zvjnd\") pod \"node-ca-r2w68\" (UID: \"e5f5f6fc-73bc-4bc1-a607-dda6c5bbb1a0\") " pod="openshift-image-registry/node-ca-r2w68" Apr 16 18:29:59.604425 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.601692 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e4899c5b-5582-41f6-8785-b1420a447044-host-var-lib-cni-multus\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " pod="openshift-multus/multus-q9jcb" Apr 16 18:29:59.604425 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.601701 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-run-ovn\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" Apr 16 18:29:59.604425 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.601698 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8a6cc626-96f9-4f30-a283-abdb6733cdac-log-socket\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" Apr 16 18:29:59.604425 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.601918 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8a6cc626-96f9-4f30-a283-abdb6733cdac-env-overrides\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" Apr 16 18:29:59.604425 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.602661 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/127cd67a-6124-4bcc-baa8-d0ae87cd028f-tmp\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl" Apr 16 18:29:59.604425 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.602727 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8a6cc626-96f9-4f30-a283-abdb6733cdac-ovn-node-metrics-cert\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" Apr 16 18:29:59.604425 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.603611 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/127cd67a-6124-4bcc-baa8-d0ae87cd028f-etc-tuned\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl" Apr 
16 18:29:59.610096 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:29:59.610076 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 18:29:59.610096 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:29:59.610099 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 18:29:59.610278 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:29:59.610113 2569 projected.go:194] Error preparing data for projected volume kube-api-access-n975s for pod openshift-network-diagnostics/network-check-target-8mjdj: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:29:59.610278 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:29:59.610188 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1b6e68a-5279-411c-ba1c-fd6c274af91f-kube-api-access-n975s podName:c1b6e68a-5279-411c-ba1c-fd6c274af91f nodeName:}" failed. No retries permitted until 2026-04-16 18:30:00.110171892 +0000 UTC m=+3.002489253 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-n975s" (UniqueName: "kubernetes.io/projected/c1b6e68a-5279-411c-ba1c-fd6c274af91f-kube-api-access-n975s") pod "network-check-target-8mjdj" (UID: "c1b6e68a-5279-411c-ba1c-fd6c274af91f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:29:59.612366 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.612341 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5psg4\" (UniqueName: \"kubernetes.io/projected/127cd67a-6124-4bcc-baa8-d0ae87cd028f-kube-api-access-5psg4\") pod \"tuned-299vl\" (UID: \"127cd67a-6124-4bcc-baa8-d0ae87cd028f\") " pod="openshift-cluster-node-tuning-operator/tuned-299vl" Apr 16 18:29:59.613309 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.613274 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrf8s\" (UniqueName: \"kubernetes.io/projected/8a6cc626-96f9-4f30-a283-abdb6733cdac-kube-api-access-wrf8s\") pod \"ovnkube-node-m5gbp\" (UID: \"8a6cc626-96f9-4f30-a283-abdb6733cdac\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" Apr 16 18:29:59.613429 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.613354 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfcrh\" (UniqueName: \"kubernetes.io/projected/dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30-kube-api-access-jfcrh\") pod \"network-metrics-daemon-p2hph\" (UID: \"dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30\") " pod="openshift-multus/network-metrics-daemon-p2hph" Apr 16 18:29:59.615928 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.614934 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx2fl\" (UniqueName: \"kubernetes.io/projected/e4899c5b-5582-41f6-8785-b1420a447044-kube-api-access-kx2fl\") pod \"multus-q9jcb\" (UID: \"e4899c5b-5582-41f6-8785-b1420a447044\") " 
pod="openshift-multus/multus-q9jcb" Apr 16 18:29:59.616323 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.616303 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6842\" (UniqueName: \"kubernetes.io/projected/53ef616d-9ee1-4d5c-844a-05af3965cf4a-kube-api-access-c6842\") pod \"aws-ebs-csi-driver-node-w5snm\" (UID: \"53ef616d-9ee1-4d5c-844a-05af3965cf4a\") " pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-w5snm" Apr 16 18:29:59.702857 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.702828 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/671b786a-255d-4021-84d2-9c0ed65bd8da-agent-certs\") pod \"konnectivity-agent-24f2c\" (UID: \"671b786a-255d-4021-84d2-9c0ed65bd8da\") " pod="kube-system/konnectivity-agent-24f2c" Apr 16 18:29:59.702979 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.702865 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c997d6be-0aad-4171-850c-d7aedaf7032f-host-slash\") pod \"iptables-alerter-fqdpz\" (UID: \"c997d6be-0aad-4171-850c-d7aedaf7032f\") " pod="openshift-network-operator/iptables-alerter-fqdpz" Apr 16 18:29:59.702979 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.702894 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zvjnd\" (UniqueName: \"kubernetes.io/projected/e5f5f6fc-73bc-4bc1-a607-dda6c5bbb1a0-kube-api-access-zvjnd\") pod \"node-ca-r2w68\" (UID: \"e5f5f6fc-73bc-4bc1-a607-dda6c5bbb1a0\") " pod="openshift-image-registry/node-ca-r2w68" Apr 16 18:29:59.702979 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.702951 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e5f5f6fc-73bc-4bc1-a607-dda6c5bbb1a0-serviceca\") pod \"node-ca-r2w68\" (UID: 
\"e5f5f6fc-73bc-4bc1-a607-dda6c5bbb1a0\") " pod="openshift-image-registry/node-ca-r2w68" Apr 16 18:29:59.702979 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.702945 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c997d6be-0aad-4171-850c-d7aedaf7032f-host-slash\") pod \"iptables-alerter-fqdpz\" (UID: \"c997d6be-0aad-4171-850c-d7aedaf7032f\") " pod="openshift-network-operator/iptables-alerter-fqdpz" Apr 16 18:29:59.702979 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.702976 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/671b786a-255d-4021-84d2-9c0ed65bd8da-konnectivity-ca\") pod \"konnectivity-agent-24f2c\" (UID: \"671b786a-255d-4021-84d2-9c0ed65bd8da\") " pod="kube-system/konnectivity-agent-24f2c" Apr 16 18:29:59.703225 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.703147 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/c997d6be-0aad-4171-850c-d7aedaf7032f-iptables-alerter-script\") pod \"iptables-alerter-fqdpz\" (UID: \"c997d6be-0aad-4171-850c-d7aedaf7032f\") " pod="openshift-network-operator/iptables-alerter-fqdpz" Apr 16 18:29:59.703225 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.703175 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9494cb1a-9b64-48a6-ae24-14717ce0b8f0-os-release\") pod \"multus-additional-cni-plugins-kmhkm\" (UID: \"9494cb1a-9b64-48a6-ae24-14717ce0b8f0\") " pod="openshift-multus/multus-additional-cni-plugins-kmhkm" Apr 16 18:29:59.703225 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.703192 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/9494cb1a-9b64-48a6-ae24-14717ce0b8f0-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kmhkm\" (UID: \"9494cb1a-9b64-48a6-ae24-14717ce0b8f0\") " pod="openshift-multus/multus-additional-cni-plugins-kmhkm" Apr 16 18:29:59.703225 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.703209 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nm6nj\" (UniqueName: \"kubernetes.io/projected/9494cb1a-9b64-48a6-ae24-14717ce0b8f0-kube-api-access-nm6nj\") pod \"multus-additional-cni-plugins-kmhkm\" (UID: \"9494cb1a-9b64-48a6-ae24-14717ce0b8f0\") " pod="openshift-multus/multus-additional-cni-plugins-kmhkm" Apr 16 18:29:59.703433 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.703237 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9494cb1a-9b64-48a6-ae24-14717ce0b8f0-system-cni-dir\") pod \"multus-additional-cni-plugins-kmhkm\" (UID: \"9494cb1a-9b64-48a6-ae24-14717ce0b8f0\") " pod="openshift-multus/multus-additional-cni-plugins-kmhkm" Apr 16 18:29:59.703433 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.703261 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9494cb1a-9b64-48a6-ae24-14717ce0b8f0-cnibin\") pod \"multus-additional-cni-plugins-kmhkm\" (UID: \"9494cb1a-9b64-48a6-ae24-14717ce0b8f0\") " pod="openshift-multus/multus-additional-cni-plugins-kmhkm" Apr 16 18:29:59.703433 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.703287 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/9494cb1a-9b64-48a6-ae24-14717ce0b8f0-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-kmhkm\" (UID: \"9494cb1a-9b64-48a6-ae24-14717ce0b8f0\") " pod="openshift-multus/multus-additional-cni-plugins-kmhkm" Apr 16 
18:29:59.703433 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.703303 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9494cb1a-9b64-48a6-ae24-14717ce0b8f0-os-release\") pod \"multus-additional-cni-plugins-kmhkm\" (UID: \"9494cb1a-9b64-48a6-ae24-14717ce0b8f0\") " pod="openshift-multus/multus-additional-cni-plugins-kmhkm" Apr 16 18:29:59.703433 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.703312 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9494cb1a-9b64-48a6-ae24-14717ce0b8f0-cni-binary-copy\") pod \"multus-additional-cni-plugins-kmhkm\" (UID: \"9494cb1a-9b64-48a6-ae24-14717ce0b8f0\") " pod="openshift-multus/multus-additional-cni-plugins-kmhkm" Apr 16 18:29:59.703433 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.703337 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9494cb1a-9b64-48a6-ae24-14717ce0b8f0-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kmhkm\" (UID: \"9494cb1a-9b64-48a6-ae24-14717ce0b8f0\") " pod="openshift-multus/multus-additional-cni-plugins-kmhkm" Apr 16 18:29:59.703433 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.703361 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2thtl\" (UniqueName: \"kubernetes.io/projected/c997d6be-0aad-4171-850c-d7aedaf7032f-kube-api-access-2thtl\") pod \"iptables-alerter-fqdpz\" (UID: \"c997d6be-0aad-4171-850c-d7aedaf7032f\") " pod="openshift-network-operator/iptables-alerter-fqdpz" Apr 16 18:29:59.703433 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.703362 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9494cb1a-9b64-48a6-ae24-14717ce0b8f0-cnibin\") pod \"multus-additional-cni-plugins-kmhkm\" (UID: 
\"9494cb1a-9b64-48a6-ae24-14717ce0b8f0\") " pod="openshift-multus/multus-additional-cni-plugins-kmhkm" Apr 16 18:29:59.703433 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.703404 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e5f5f6fc-73bc-4bc1-a607-dda6c5bbb1a0-host\") pod \"node-ca-r2w68\" (UID: \"e5f5f6fc-73bc-4bc1-a607-dda6c5bbb1a0\") " pod="openshift-image-registry/node-ca-r2w68" Apr 16 18:29:59.703433 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.703422 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9494cb1a-9b64-48a6-ae24-14717ce0b8f0-system-cni-dir\") pod \"multus-additional-cni-plugins-kmhkm\" (UID: \"9494cb1a-9b64-48a6-ae24-14717ce0b8f0\") " pod="openshift-multus/multus-additional-cni-plugins-kmhkm" Apr 16 18:29:59.703881 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.703475 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e5f5f6fc-73bc-4bc1-a607-dda6c5bbb1a0-host\") pod \"node-ca-r2w68\" (UID: \"e5f5f6fc-73bc-4bc1-a607-dda6c5bbb1a0\") " pod="openshift-image-registry/node-ca-r2w68" Apr 16 18:29:59.703881 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.703517 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e5f5f6fc-73bc-4bc1-a607-dda6c5bbb1a0-serviceca\") pod \"node-ca-r2w68\" (UID: \"e5f5f6fc-73bc-4bc1-a607-dda6c5bbb1a0\") " pod="openshift-image-registry/node-ca-r2w68" Apr 16 18:29:59.703881 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.703527 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"konnectivity-ca\" (UniqueName: \"kubernetes.io/configmap/671b786a-255d-4021-84d2-9c0ed65bd8da-konnectivity-ca\") pod \"konnectivity-agent-24f2c\" (UID: \"671b786a-255d-4021-84d2-9c0ed65bd8da\") " 
pod="kube-system/konnectivity-agent-24f2c" Apr 16 18:29:59.703881 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.703761 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/c997d6be-0aad-4171-850c-d7aedaf7032f-iptables-alerter-script\") pod \"iptables-alerter-fqdpz\" (UID: \"c997d6be-0aad-4171-850c-d7aedaf7032f\") " pod="openshift-network-operator/iptables-alerter-fqdpz" Apr 16 18:29:59.703881 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.703855 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9494cb1a-9b64-48a6-ae24-14717ce0b8f0-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kmhkm\" (UID: \"9494cb1a-9b64-48a6-ae24-14717ce0b8f0\") " pod="openshift-multus/multus-additional-cni-plugins-kmhkm" Apr 16 18:29:59.704119 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.703893 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9494cb1a-9b64-48a6-ae24-14717ce0b8f0-cni-binary-copy\") pod \"multus-additional-cni-plugins-kmhkm\" (UID: \"9494cb1a-9b64-48a6-ae24-14717ce0b8f0\") " pod="openshift-multus/multus-additional-cni-plugins-kmhkm" Apr 16 18:29:59.704119 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.703957 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/9494cb1a-9b64-48a6-ae24-14717ce0b8f0-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-kmhkm\" (UID: \"9494cb1a-9b64-48a6-ae24-14717ce0b8f0\") " pod="openshift-multus/multus-additional-cni-plugins-kmhkm" Apr 16 18:29:59.704119 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.704063 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/9494cb1a-9b64-48a6-ae24-14717ce0b8f0-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kmhkm\" (UID: \"9494cb1a-9b64-48a6-ae24-14717ce0b8f0\") " pod="openshift-multus/multus-additional-cni-plugins-kmhkm" Apr 16 18:29:59.705253 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.705232 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"agent-certs\" (UniqueName: \"kubernetes.io/secret/671b786a-255d-4021-84d2-9c0ed65bd8da-agent-certs\") pod \"konnectivity-agent-24f2c\" (UID: \"671b786a-255d-4021-84d2-9c0ed65bd8da\") " pod="kube-system/konnectivity-agent-24f2c" Apr 16 18:29:59.716906 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.716847 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvjnd\" (UniqueName: \"kubernetes.io/projected/e5f5f6fc-73bc-4bc1-a607-dda6c5bbb1a0-kube-api-access-zvjnd\") pod \"node-ca-r2w68\" (UID: \"e5f5f6fc-73bc-4bc1-a607-dda6c5bbb1a0\") " pod="openshift-image-registry/node-ca-r2w68" Apr 16 18:29:59.717085 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.717056 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm6nj\" (UniqueName: \"kubernetes.io/projected/9494cb1a-9b64-48a6-ae24-14717ce0b8f0-kube-api-access-nm6nj\") pod \"multus-additional-cni-plugins-kmhkm\" (UID: \"9494cb1a-9b64-48a6-ae24-14717ce0b8f0\") " pod="openshift-multus/multus-additional-cni-plugins-kmhkm" Apr 16 18:29:59.718547 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.718530 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2thtl\" (UniqueName: \"kubernetes.io/projected/c997d6be-0aad-4171-850c-d7aedaf7032f-kube-api-access-2thtl\") pod \"iptables-alerter-fqdpz\" (UID: \"c997d6be-0aad-4171-850c-d7aedaf7032f\") " pod="openshift-network-operator/iptables-alerter-fqdpz" Apr 16 18:29:59.786545 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.786517 2569 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-w5snm" Apr 16 18:29:59.795241 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.795215 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-299vl" Apr 16 18:29:59.803948 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.803922 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-q9jcb" Apr 16 18:29:59.809499 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.809480 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" Apr 16 18:29:59.813535 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.813515 2569 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Apr 16 18:29:59.815441 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.815411 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-r2w68" Apr 16 18:29:59.821951 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.821932 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-kmhkm" Apr 16 18:29:59.829525 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.829501 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-fqdpz" Apr 16 18:29:59.835077 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:29:59.835059 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/konnectivity-agent-24f2c" Apr 16 18:30:00.105763 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:00.105672 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30-metrics-certs\") pod \"network-metrics-daemon-p2hph\" (UID: \"dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30\") " pod="openshift-multus/network-metrics-daemon-p2hph" Apr 16 18:30:00.105924 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:00.105818 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:30:00.105924 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:00.105891 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30-metrics-certs podName:dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:01.105869736 +0000 UTC m=+3.998187080 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30-metrics-certs") pod "network-metrics-daemon-p2hph" (UID: "dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:30:00.206867 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:00.206835 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n975s\" (UniqueName: \"kubernetes.io/projected/c1b6e68a-5279-411c-ba1c-fd6c274af91f-kube-api-access-n975s\") pod \"network-check-target-8mjdj\" (UID: \"c1b6e68a-5279-411c-ba1c-fd6c274af91f\") " pod="openshift-network-diagnostics/network-check-target-8mjdj" Apr 16 18:30:00.207021 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:00.206976 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 18:30:00.207021 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:00.206993 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 18:30:00.207021 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:00.207004 2569 projected.go:194] Error preparing data for projected volume kube-api-access-n975s for pod openshift-network-diagnostics/network-check-target-8mjdj: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:30:00.207236 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:00.207063 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1b6e68a-5279-411c-ba1c-fd6c274af91f-kube-api-access-n975s podName:c1b6e68a-5279-411c-ba1c-fd6c274af91f nodeName:}" failed. 
No retries permitted until 2026-04-16 18:30:01.207044825 +0000 UTC m=+4.099362170 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-n975s" (UniqueName: "kubernetes.io/projected/c1b6e68a-5279-411c-ba1c-fd6c274af91f-kube-api-access-n975s") pod "network-check-target-8mjdj" (UID: "c1b6e68a-5279-411c-ba1c-fd6c274af91f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:30:00.286450 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:30:00.286407 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod127cd67a_6124_4bcc_baa8_d0ae87cd028f.slice/crio-8c71600bb79dcdab424bfaa2c4dc0885d7d4cf94e9c3823ef8e8361a9dc28a28 WatchSource:0}: Error finding container 8c71600bb79dcdab424bfaa2c4dc0885d7d4cf94e9c3823ef8e8361a9dc28a28: Status 404 returned error can't find the container with id 8c71600bb79dcdab424bfaa2c4dc0885d7d4cf94e9c3823ef8e8361a9dc28a28 Apr 16 18:30:00.287175 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:30:00.287116 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53ef616d_9ee1_4d5c_844a_05af3965cf4a.slice/crio-52fd6399e48eb5231f9fffec296f20399b48c9a1dbb2c310177baa04a9815723 WatchSource:0}: Error finding container 52fd6399e48eb5231f9fffec296f20399b48c9a1dbb2c310177baa04a9815723: Status 404 returned error can't find the container with id 52fd6399e48eb5231f9fffec296f20399b48c9a1dbb2c310177baa04a9815723 Apr 16 18:30:00.291691 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:30:00.291663 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod671b786a_255d_4021_84d2_9c0ed65bd8da.slice/crio-67c608b21aa72d5c3313560bca34769c634d03106e4233e668467462802ee2bc WatchSource:0}: Error finding container 
67c608b21aa72d5c3313560bca34769c634d03106e4233e668467462802ee2bc: Status 404 returned error can't find the container with id 67c608b21aa72d5c3313560bca34769c634d03106e4233e668467462802ee2bc Apr 16 18:30:00.292216 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:30:00.292184 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5f5f6fc_73bc_4bc1_a607_dda6c5bbb1a0.slice/crio-3d06f3f2fd96de6180750d2def489ceed7d16c355524fc2957c362a1fc5c9375 WatchSource:0}: Error finding container 3d06f3f2fd96de6180750d2def489ceed7d16c355524fc2957c362a1fc5c9375: Status 404 returned error can't find the container with id 3d06f3f2fd96de6180750d2def489ceed7d16c355524fc2957c362a1fc5c9375 Apr 16 18:30:00.293192 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:30:00.292991 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a6cc626_96f9_4f30_a283_abdb6733cdac.slice/crio-a168247f86015ffb67afeb3355fdafe5e4cd7b876ff4e3a0346fd514098e4459 WatchSource:0}: Error finding container a168247f86015ffb67afeb3355fdafe5e4cd7b876ff4e3a0346fd514098e4459: Status 404 returned error can't find the container with id a168247f86015ffb67afeb3355fdafe5e4cd7b876ff4e3a0346fd514098e4459 Apr 16 18:30:00.293969 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:30:00.293917 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc997d6be_0aad_4171_850c_d7aedaf7032f.slice/crio-796b9e511faa0c7048f03b747605b00e3f46127a4f09bd879b660f760e8b29ff WatchSource:0}: Error finding container 796b9e511faa0c7048f03b747605b00e3f46127a4f09bd879b660f760e8b29ff: Status 404 returned error can't find the container with id 796b9e511faa0c7048f03b747605b00e3f46127a4f09bd879b660f760e8b29ff Apr 16 18:30:00.295496 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:30:00.294782 2569 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9494cb1a_9b64_48a6_ae24_14717ce0b8f0.slice/crio-194dd4270ee56c55edabcc0283b8b7c6f95998b753b31e3ed3c2fda50bf10c21 WatchSource:0}: Error finding container 194dd4270ee56c55edabcc0283b8b7c6f95998b753b31e3ed3c2fda50bf10c21: Status 404 returned error can't find the container with id 194dd4270ee56c55edabcc0283b8b7c6f95998b753b31e3ed3c2fda50bf10c21 Apr 16 18:30:00.295496 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:30:00.295454 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4899c5b_5582_41f6_8785_b1420a447044.slice/crio-0ace289fcc0214b8f714082fc5c5a12e68d746c47b8e7bc95c4673fb859a7c8c WatchSource:0}: Error finding container 0ace289fcc0214b8f714082fc5c5a12e68d746c47b8e7bc95c4673fb859a7c8c: Status 404 returned error can't find the container with id 0ace289fcc0214b8f714082fc5c5a12e68d746c47b8e7bc95c4673fb859a7c8c Apr 16 18:30:00.531221 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:00.531184 2569 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2028-04-15 18:24:58 +0000 UTC" deadline="2028-01-12 19:40:40.866949064 +0000 UTC" Apr 16 18:30:00.531221 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:00.531213 2569 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="15265h10m40.335738237s" Apr 16 18:30:00.559429 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:00.559400 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q9jcb" event={"ID":"e4899c5b-5582-41f6-8785-b1420a447044","Type":"ContainerStarted","Data":"0ace289fcc0214b8f714082fc5c5a12e68d746c47b8e7bc95c4673fb859a7c8c"} Apr 16 18:30:00.560846 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:00.560822 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-kmhkm" event={"ID":"9494cb1a-9b64-48a6-ae24-14717ce0b8f0","Type":"ContainerStarted","Data":"194dd4270ee56c55edabcc0283b8b7c6f95998b753b31e3ed3c2fda50bf10c21"} Apr 16 18:30:00.562497 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:00.562471 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-r2w68" event={"ID":"e5f5f6fc-73bc-4bc1-a607-dda6c5bbb1a0","Type":"ContainerStarted","Data":"3d06f3f2fd96de6180750d2def489ceed7d16c355524fc2957c362a1fc5c9375"} Apr 16 18:30:00.563325 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:00.563304 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-w5snm" event={"ID":"53ef616d-9ee1-4d5c-844a-05af3965cf4a","Type":"ContainerStarted","Data":"52fd6399e48eb5231f9fffec296f20399b48c9a1dbb2c310177baa04a9815723"} Apr 16 18:30:00.564144 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:00.564121 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-299vl" event={"ID":"127cd67a-6124-4bcc-baa8-d0ae87cd028f","Type":"ContainerStarted","Data":"8c71600bb79dcdab424bfaa2c4dc0885d7d4cf94e9c3823ef8e8361a9dc28a28"} Apr 16 18:30:00.566249 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:00.566228 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-proxy-ip-10-0-142-225.ec2.internal" event={"ID":"46a218914bddbe8acb8c3bd91619bc9c","Type":"ContainerStarted","Data":"02654f42ca7478e61330188cc50b53e96080e6e4cb52ab3f57a52efc91396449"} Apr 16 18:30:00.567268 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:00.567235 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-fqdpz" event={"ID":"c997d6be-0aad-4171-850c-d7aedaf7032f","Type":"ContainerStarted","Data":"796b9e511faa0c7048f03b747605b00e3f46127a4f09bd879b660f760e8b29ff"} Apr 16 18:30:00.568739 ip-10-0-142-225 
kubenswrapper[2569]: I0416 18:30:00.568720 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" event={"ID":"8a6cc626-96f9-4f30-a283-abdb6733cdac","Type":"ContainerStarted","Data":"a168247f86015ffb67afeb3355fdafe5e4cd7b876ff4e3a0346fd514098e4459"} Apr 16 18:30:00.569584 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:00.569566 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-24f2c" event={"ID":"671b786a-255d-4021-84d2-9c0ed65bd8da","Type":"ContainerStarted","Data":"67c608b21aa72d5c3313560bca34769c634d03106e4233e668467462802ee2bc"} Apr 16 18:30:00.580967 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:00.580928 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-proxy-ip-10-0-142-225.ec2.internal" podStartSLOduration=2.5809173530000002 podStartE2EDuration="2.580917353s" podCreationTimestamp="2026-04-16 18:29:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:30:00.580719482 +0000 UTC m=+3.473036844" watchObservedRunningTime="2026-04-16 18:30:00.580917353 +0000 UTC m=+3.473234709" Apr 16 18:30:00.680300 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:00.679934 2569 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Apr 16 18:30:01.044848 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:01.044817 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kube-system/global-pull-secret-syncer-8rfj7"] Apr 16 18:30:01.047942 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:01.047664 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-8rfj7" Apr 16 18:30:01.047942 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:01.047779 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-8rfj7" podUID="6ca02c8d-5554-44e4-9884-f9e0bcd462ed" Apr 16 18:30:01.115301 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:01.115265 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30-metrics-certs\") pod \"network-metrics-daemon-p2hph\" (UID: \"dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30\") " pod="openshift-multus/network-metrics-daemon-p2hph" Apr 16 18:30:01.115494 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:01.115341 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/6ca02c8d-5554-44e4-9884-f9e0bcd462ed-original-pull-secret\") pod \"global-pull-secret-syncer-8rfj7\" (UID: \"6ca02c8d-5554-44e4-9884-f9e0bcd462ed\") " pod="kube-system/global-pull-secret-syncer-8rfj7" Apr 16 18:30:01.115494 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:01.115387 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/6ca02c8d-5554-44e4-9884-f9e0bcd462ed-kubelet-config\") pod \"global-pull-secret-syncer-8rfj7\" (UID: \"6ca02c8d-5554-44e4-9884-f9e0bcd462ed\") " pod="kube-system/global-pull-secret-syncer-8rfj7" Apr 16 18:30:01.115494 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:01.115419 2569 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/6ca02c8d-5554-44e4-9884-f9e0bcd462ed-dbus\") pod \"global-pull-secret-syncer-8rfj7\" (UID: \"6ca02c8d-5554-44e4-9884-f9e0bcd462ed\") " pod="kube-system/global-pull-secret-syncer-8rfj7" Apr 16 18:30:01.124938 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:01.124852 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:30:01.124938 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:01.124941 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30-metrics-certs podName:dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:03.124920329 +0000 UTC m=+6.017237690 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30-metrics-certs") pod "network-metrics-daemon-p2hph" (UID: "dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:30:01.216307 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:01.216272 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n975s\" (UniqueName: \"kubernetes.io/projected/c1b6e68a-5279-411c-ba1c-fd6c274af91f-kube-api-access-n975s\") pod \"network-check-target-8mjdj\" (UID: \"c1b6e68a-5279-411c-ba1c-fd6c274af91f\") " pod="openshift-network-diagnostics/network-check-target-8mjdj" Apr 16 18:30:01.216480 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:01.216331 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/6ca02c8d-5554-44e4-9884-f9e0bcd462ed-original-pull-secret\") pod \"global-pull-secret-syncer-8rfj7\" (UID: 
\"6ca02c8d-5554-44e4-9884-f9e0bcd462ed\") " pod="kube-system/global-pull-secret-syncer-8rfj7" Apr 16 18:30:01.216480 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:01.216362 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/6ca02c8d-5554-44e4-9884-f9e0bcd462ed-kubelet-config\") pod \"global-pull-secret-syncer-8rfj7\" (UID: \"6ca02c8d-5554-44e4-9884-f9e0bcd462ed\") " pod="kube-system/global-pull-secret-syncer-8rfj7" Apr 16 18:30:01.216480 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:01.216400 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/6ca02c8d-5554-44e4-9884-f9e0bcd462ed-dbus\") pod \"global-pull-secret-syncer-8rfj7\" (UID: \"6ca02c8d-5554-44e4-9884-f9e0bcd462ed\") " pod="kube-system/global-pull-secret-syncer-8rfj7" Apr 16 18:30:01.216655 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:01.216580 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"dbus\" (UniqueName: \"kubernetes.io/host-path/6ca02c8d-5554-44e4-9884-f9e0bcd462ed-dbus\") pod \"global-pull-secret-syncer-8rfj7\" (UID: \"6ca02c8d-5554-44e4-9884-f9e0bcd462ed\") " pod="kube-system/global-pull-secret-syncer-8rfj7" Apr 16 18:30:01.216720 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:01.216701 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 18:30:01.216772 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:01.216727 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 18:30:01.216772 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:01.216741 2569 projected.go:194] Error preparing data for projected volume kube-api-access-n975s for pod 
openshift-network-diagnostics/network-check-target-8mjdj: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:30:01.216864 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:01.216797 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1b6e68a-5279-411c-ba1c-fd6c274af91f-kube-api-access-n975s podName:c1b6e68a-5279-411c-ba1c-fd6c274af91f nodeName:}" failed. No retries permitted until 2026-04-16 18:30:03.216777647 +0000 UTC m=+6.109095004 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-n975s" (UniqueName: "kubernetes.io/projected/c1b6e68a-5279-411c-ba1c-fd6c274af91f-kube-api-access-n975s") pod "network-check-target-8mjdj" (UID: "c1b6e68a-5279-411c-ba1c-fd6c274af91f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:30:01.217192 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:01.217174 2569 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 16 18:30:01.217257 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:01.217227 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ca02c8d-5554-44e4-9884-f9e0bcd462ed-original-pull-secret podName:6ca02c8d-5554-44e4-9884-f9e0bcd462ed nodeName:}" failed. No retries permitted until 2026-04-16 18:30:01.717212661 +0000 UTC m=+4.609530019 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/6ca02c8d-5554-44e4-9884-f9e0bcd462ed-original-pull-secret") pod "global-pull-secret-syncer-8rfj7" (UID: "6ca02c8d-5554-44e4-9884-f9e0bcd462ed") : object "kube-system"/"original-pull-secret" not registered Apr 16 18:30:01.217323 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:01.217280 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-config\" (UniqueName: \"kubernetes.io/host-path/6ca02c8d-5554-44e4-9884-f9e0bcd462ed-kubelet-config\") pod \"global-pull-secret-syncer-8rfj7\" (UID: \"6ca02c8d-5554-44e4-9884-f9e0bcd462ed\") " pod="kube-system/global-pull-secret-syncer-8rfj7" Apr 16 18:30:01.553596 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:01.553525 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hph" Apr 16 18:30:01.553596 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:01.553581 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8mjdj" Apr 16 18:30:01.554074 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:01.553754 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hph" podUID="dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30" Apr 16 18:30:01.554074 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:01.553827 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8mjdj" podUID="c1b6e68a-5279-411c-ba1c-fd6c274af91f" Apr 16 18:30:01.576604 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:01.576577 2569 generic.go:358] "Generic (PLEG): container finished" podID="73d3305f726a432b08c0adb8461c873e" containerID="cea7032b3c35f9aad323008850a66f6fdfa2c92298bbe24d533702fce451973f" exitCode=0 Apr 16 18:30:01.577521 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:01.577497 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-225.ec2.internal" event={"ID":"73d3305f726a432b08c0adb8461c873e","Type":"ContainerDied","Data":"cea7032b3c35f9aad323008850a66f6fdfa2c92298bbe24d533702fce451973f"} Apr 16 18:30:01.719901 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:01.719864 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/6ca02c8d-5554-44e4-9884-f9e0bcd462ed-original-pull-secret\") pod \"global-pull-secret-syncer-8rfj7\" (UID: \"6ca02c8d-5554-44e4-9884-f9e0bcd462ed\") " pod="kube-system/global-pull-secret-syncer-8rfj7" Apr 16 18:30:01.720069 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:01.720022 2569 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 16 18:30:01.720143 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:01.720083 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ca02c8d-5554-44e4-9884-f9e0bcd462ed-original-pull-secret podName:6ca02c8d-5554-44e4-9884-f9e0bcd462ed nodeName:}" failed. No retries permitted until 2026-04-16 18:30:02.720064803 +0000 UTC m=+5.612382151 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/6ca02c8d-5554-44e4-9884-f9e0bcd462ed-original-pull-secret") pod "global-pull-secret-syncer-8rfj7" (UID: "6ca02c8d-5554-44e4-9884-f9e0bcd462ed") : object "kube-system"/"original-pull-secret" not registered Apr 16 18:30:02.552482 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:02.552452 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-8rfj7" Apr 16 18:30:02.552664 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:02.552587 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-8rfj7" podUID="6ca02c8d-5554-44e4-9884-f9e0bcd462ed" Apr 16 18:30:02.582208 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:02.582158 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-225.ec2.internal" event={"ID":"73d3305f726a432b08c0adb8461c873e","Type":"ContainerStarted","Data":"2519ba9f2411ddf8ea9bf1c244f4c0ffb4c381030852cb2a7e28d3e3acc18c11"} Apr 16 18:30:02.727941 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:02.727909 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/6ca02c8d-5554-44e4-9884-f9e0bcd462ed-original-pull-secret\") pod \"global-pull-secret-syncer-8rfj7\" (UID: \"6ca02c8d-5554-44e4-9884-f9e0bcd462ed\") " pod="kube-system/global-pull-secret-syncer-8rfj7" Apr 16 18:30:02.728095 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:02.728057 2569 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not 
registered Apr 16 18:30:02.728167 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:02.728113 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ca02c8d-5554-44e4-9884-f9e0bcd462ed-original-pull-secret podName:6ca02c8d-5554-44e4-9884-f9e0bcd462ed nodeName:}" failed. No retries permitted until 2026-04-16 18:30:04.728096573 +0000 UTC m=+7.620413916 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/6ca02c8d-5554-44e4-9884-f9e0bcd462ed-original-pull-secret") pod "global-pull-secret-syncer-8rfj7" (UID: "6ca02c8d-5554-44e4-9884-f9e0bcd462ed") : object "kube-system"/"original-pull-secret" not registered Apr 16 18:30:03.130988 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:03.130437 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30-metrics-certs\") pod \"network-metrics-daemon-p2hph\" (UID: \"dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30\") " pod="openshift-multus/network-metrics-daemon-p2hph" Apr 16 18:30:03.130988 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:03.130618 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:30:03.130988 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:03.130673 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30-metrics-certs podName:dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:07.130656626 +0000 UTC m=+10.022973972 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30-metrics-certs") pod "network-metrics-daemon-p2hph" (UID: "dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:30:03.231618 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:03.230998 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n975s\" (UniqueName: \"kubernetes.io/projected/c1b6e68a-5279-411c-ba1c-fd6c274af91f-kube-api-access-n975s\") pod \"network-check-target-8mjdj\" (UID: \"c1b6e68a-5279-411c-ba1c-fd6c274af91f\") " pod="openshift-network-diagnostics/network-check-target-8mjdj" Apr 16 18:30:03.231618 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:03.231174 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 18:30:03.231618 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:03.231191 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 18:30:03.231618 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:03.231202 2569 projected.go:194] Error preparing data for projected volume kube-api-access-n975s for pod openshift-network-diagnostics/network-check-target-8mjdj: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:30:03.231618 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:03.231255 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1b6e68a-5279-411c-ba1c-fd6c274af91f-kube-api-access-n975s podName:c1b6e68a-5279-411c-ba1c-fd6c274af91f nodeName:}" failed. 
No retries permitted until 2026-04-16 18:30:07.231235649 +0000 UTC m=+10.123552993 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-n975s" (UniqueName: "kubernetes.io/projected/c1b6e68a-5279-411c-ba1c-fd6c274af91f-kube-api-access-n975s") pod "network-check-target-8mjdj" (UID: "c1b6e68a-5279-411c-ba1c-fd6c274af91f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:30:03.553393 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:03.553353 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hph" Apr 16 18:30:03.553580 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:03.553507 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hph" podUID="dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30" Apr 16 18:30:03.554399 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:03.553727 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8mjdj" Apr 16 18:30:03.554399 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:03.553827 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-8mjdj" podUID="c1b6e68a-5279-411c-ba1c-fd6c274af91f" Apr 16 18:30:04.552960 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:04.552430 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-8rfj7" Apr 16 18:30:04.552960 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:04.552565 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-8rfj7" podUID="6ca02c8d-5554-44e4-9884-f9e0bcd462ed" Apr 16 18:30:04.744172 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:04.744113 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/6ca02c8d-5554-44e4-9884-f9e0bcd462ed-original-pull-secret\") pod \"global-pull-secret-syncer-8rfj7\" (UID: \"6ca02c8d-5554-44e4-9884-f9e0bcd462ed\") " pod="kube-system/global-pull-secret-syncer-8rfj7" Apr 16 18:30:04.744365 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:04.744245 2569 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered Apr 16 18:30:04.744365 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:04.744320 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ca02c8d-5554-44e4-9884-f9e0bcd462ed-original-pull-secret podName:6ca02c8d-5554-44e4-9884-f9e0bcd462ed nodeName:}" failed. No retries permitted until 2026-04-16 18:30:08.744301147 +0000 UTC m=+11.636618513 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/6ca02c8d-5554-44e4-9884-f9e0bcd462ed-original-pull-secret") pod "global-pull-secret-syncer-8rfj7" (UID: "6ca02c8d-5554-44e4-9884-f9e0bcd462ed") : object "kube-system"/"original-pull-secret" not registered Apr 16 18:30:05.552871 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:05.552833 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8mjdj" Apr 16 18:30:05.553053 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:05.552884 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hph" Apr 16 18:30:05.553053 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:05.552988 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8mjdj" podUID="c1b6e68a-5279-411c-ba1c-fd6c274af91f" Apr 16 18:30:05.553493 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:05.553114 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hph" podUID="dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30" Apr 16 18:30:06.552407 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:06.552360 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-8rfj7" Apr 16 18:30:06.552586 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:06.552497 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-8rfj7" podUID="6ca02c8d-5554-44e4-9884-f9e0bcd462ed" Apr 16 18:30:07.165974 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:07.165939 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30-metrics-certs\") pod \"network-metrics-daemon-p2hph\" (UID: \"dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30\") " pod="openshift-multus/network-metrics-daemon-p2hph" Apr 16 18:30:07.166433 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:07.166096 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:30:07.166433 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:07.166158 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30-metrics-certs podName:dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:15.166142217 +0000 UTC m=+18.058459576 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30-metrics-certs") pod "network-metrics-daemon-p2hph" (UID: "dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:30:07.266811 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:07.266723 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n975s\" (UniqueName: \"kubernetes.io/projected/c1b6e68a-5279-411c-ba1c-fd6c274af91f-kube-api-access-n975s\") pod \"network-check-target-8mjdj\" (UID: \"c1b6e68a-5279-411c-ba1c-fd6c274af91f\") " pod="openshift-network-diagnostics/network-check-target-8mjdj" Apr 16 18:30:07.267035 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:07.266929 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 18:30:07.267035 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:07.266956 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 18:30:07.267035 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:07.266968 2569 projected.go:194] Error preparing data for projected volume kube-api-access-n975s for pod openshift-network-diagnostics/network-check-target-8mjdj: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:30:07.267203 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:07.267060 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1b6e68a-5279-411c-ba1c-fd6c274af91f-kube-api-access-n975s podName:c1b6e68a-5279-411c-ba1c-fd6c274af91f nodeName:}" failed. 
No retries permitted until 2026-04-16 18:30:15.267041214 +0000 UTC m=+18.159358561 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-n975s" (UniqueName: "kubernetes.io/projected/c1b6e68a-5279-411c-ba1c-fd6c274af91f-kube-api-access-n975s") pod "network-check-target-8mjdj" (UID: "c1b6e68a-5279-411c-ba1c-fd6c274af91f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:30:07.553757 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:07.553652 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hph" Apr 16 18:30:07.554011 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:07.553801 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hph" podUID="dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30" Apr 16 18:30:07.554199 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:07.554177 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8mjdj" Apr 16 18:30:07.554615 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:07.554591 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-8mjdj" podUID="c1b6e68a-5279-411c-ba1c-fd6c274af91f" Apr 16 18:30:08.034978 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:08.034930 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-ip-10-0-142-225.ec2.internal" podStartSLOduration=10.034914494 podStartE2EDuration="10.034914494s" podCreationTimestamp="2026-04-16 18:29:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:30:02.596958395 +0000 UTC m=+5.489275761" watchObservedRunningTime="2026-04-16 18:30:08.034914494 +0000 UTC m=+10.927231872" Apr 16 18:30:08.035718 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:08.035692 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-z27vt"] Apr 16 18:30:08.038513 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:08.038490 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-z27vt" Apr 16 18:30:08.042715 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:08.042691 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-xnp2l\"" Apr 16 18:30:08.042832 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:08.042697 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Apr 16 18:30:08.042832 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:08.042767 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Apr 16 18:30:08.074128 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:08.074093 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6b34a2b3-f5f1-4606-970d-5865032489f3-hosts-file\") pod \"node-resolver-z27vt\" (UID: \"6b34a2b3-f5f1-4606-970d-5865032489f3\") " pod="openshift-dns/node-resolver-z27vt" Apr 16 18:30:08.074323 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:08.074186 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6b34a2b3-f5f1-4606-970d-5865032489f3-tmp-dir\") pod \"node-resolver-z27vt\" (UID: \"6b34a2b3-f5f1-4606-970d-5865032489f3\") " pod="openshift-dns/node-resolver-z27vt" Apr 16 18:30:08.074323 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:08.074211 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nkhs\" (UniqueName: \"kubernetes.io/projected/6b34a2b3-f5f1-4606-970d-5865032489f3-kube-api-access-7nkhs\") pod \"node-resolver-z27vt\" (UID: \"6b34a2b3-f5f1-4606-970d-5865032489f3\") " pod="openshift-dns/node-resolver-z27vt" Apr 16 18:30:08.174732 ip-10-0-142-225 kubenswrapper[2569]: I0416 
18:30:08.174694 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6b34a2b3-f5f1-4606-970d-5865032489f3-hosts-file\") pod \"node-resolver-z27vt\" (UID: \"6b34a2b3-f5f1-4606-970d-5865032489f3\") " pod="openshift-dns/node-resolver-z27vt" Apr 16 18:30:08.175180 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:08.174774 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6b34a2b3-f5f1-4606-970d-5865032489f3-tmp-dir\") pod \"node-resolver-z27vt\" (UID: \"6b34a2b3-f5f1-4606-970d-5865032489f3\") " pod="openshift-dns/node-resolver-z27vt" Apr 16 18:30:08.175180 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:08.174797 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7nkhs\" (UniqueName: \"kubernetes.io/projected/6b34a2b3-f5f1-4606-970d-5865032489f3-kube-api-access-7nkhs\") pod \"node-resolver-z27vt\" (UID: \"6b34a2b3-f5f1-4606-970d-5865032489f3\") " pod="openshift-dns/node-resolver-z27vt" Apr 16 18:30:08.175180 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:08.174876 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6b34a2b3-f5f1-4606-970d-5865032489f3-hosts-file\") pod \"node-resolver-z27vt\" (UID: \"6b34a2b3-f5f1-4606-970d-5865032489f3\") " pod="openshift-dns/node-resolver-z27vt" Apr 16 18:30:08.175180 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:08.175156 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6b34a2b3-f5f1-4606-970d-5865032489f3-tmp-dir\") pod \"node-resolver-z27vt\" (UID: \"6b34a2b3-f5f1-4606-970d-5865032489f3\") " pod="openshift-dns/node-resolver-z27vt" Apr 16 18:30:08.186347 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:08.186293 2569 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7nkhs\" (UniqueName: \"kubernetes.io/projected/6b34a2b3-f5f1-4606-970d-5865032489f3-kube-api-access-7nkhs\") pod \"node-resolver-z27vt\" (UID: \"6b34a2b3-f5f1-4606-970d-5865032489f3\") " pod="openshift-dns/node-resolver-z27vt"
Apr 16 18:30:08.349204 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:08.349127 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-z27vt"
Apr 16 18:30:08.553137 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:08.553103 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-8rfj7"
Apr 16 18:30:08.553305 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:08.553241 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-8rfj7" podUID="6ca02c8d-5554-44e4-9884-f9e0bcd462ed"
Apr 16 18:30:08.779870 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:08.779836 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/6ca02c8d-5554-44e4-9884-f9e0bcd462ed-original-pull-secret\") pod \"global-pull-secret-syncer-8rfj7\" (UID: \"6ca02c8d-5554-44e4-9884-f9e0bcd462ed\") " pod="kube-system/global-pull-secret-syncer-8rfj7"
Apr 16 18:30:08.780046 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:08.779983 2569 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 16 18:30:08.780046 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:08.780044 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ca02c8d-5554-44e4-9884-f9e0bcd462ed-original-pull-secret podName:6ca02c8d-5554-44e4-9884-f9e0bcd462ed nodeName:}" failed. No retries permitted until 2026-04-16 18:30:16.780025869 +0000 UTC m=+19.672343211 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/6ca02c8d-5554-44e4-9884-f9e0bcd462ed-original-pull-secret") pod "global-pull-secret-syncer-8rfj7" (UID: "6ca02c8d-5554-44e4-9884-f9e0bcd462ed") : object "kube-system"/"original-pull-secret" not registered
Apr 16 18:30:09.553153 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:09.553122 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8mjdj"
Apr 16 18:30:09.553611 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:09.553240 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8mjdj" podUID="c1b6e68a-5279-411c-ba1c-fd6c274af91f"
Apr 16 18:30:09.553611 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:09.553281 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hph"
Apr 16 18:30:09.553611 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:09.553365 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hph" podUID="dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30"
Apr 16 18:30:10.552559 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:10.552523 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-8rfj7"
Apr 16 18:30:10.552740 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:10.552662 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-8rfj7" podUID="6ca02c8d-5554-44e4-9884-f9e0bcd462ed"
Apr 16 18:30:11.552589 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:11.552555 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8mjdj"
Apr 16 18:30:11.553039 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:11.552555 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hph"
Apr 16 18:30:11.553039 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:11.552674 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8mjdj" podUID="c1b6e68a-5279-411c-ba1c-fd6c274af91f"
Apr 16 18:30:11.553039 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:11.552747 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hph" podUID="dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30"
Apr 16 18:30:12.552568 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:12.552536 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-8rfj7"
Apr 16 18:30:12.552771 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:12.552663 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-8rfj7" podUID="6ca02c8d-5554-44e4-9884-f9e0bcd462ed"
Apr 16 18:30:13.553441 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:13.553405 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8mjdj"
Apr 16 18:30:13.553441 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:13.553430 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hph"
Apr 16 18:30:13.553882 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:13.553522 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8mjdj" podUID="c1b6e68a-5279-411c-ba1c-fd6c274af91f"
Apr 16 18:30:13.553882 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:13.553672 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hph" podUID="dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30"
Apr 16 18:30:14.553232 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:14.553204 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-8rfj7"
Apr 16 18:30:14.553500 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:14.553307 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-8rfj7" podUID="6ca02c8d-5554-44e4-9884-f9e0bcd462ed"
Apr 16 18:30:15.228920 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:15.228891 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30-metrics-certs\") pod \"network-metrics-daemon-p2hph\" (UID: \"dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30\") " pod="openshift-multus/network-metrics-daemon-p2hph"
Apr 16 18:30:15.229100 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:15.229032 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 16 18:30:15.229169 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:15.229103 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30-metrics-certs podName:dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:31.229080391 +0000 UTC m=+34.121397734 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30-metrics-certs") pod "network-metrics-daemon-p2hph" (UID: "dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30") : object "openshift-multus"/"metrics-daemon-secret" not registered
Apr 16 18:30:15.330200 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:15.330166 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n975s\" (UniqueName: \"kubernetes.io/projected/c1b6e68a-5279-411c-ba1c-fd6c274af91f-kube-api-access-n975s\") pod \"network-check-target-8mjdj\" (UID: \"c1b6e68a-5279-411c-ba1c-fd6c274af91f\") " pod="openshift-network-diagnostics/network-check-target-8mjdj"
Apr 16 18:30:15.330393 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:15.330309 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 16 18:30:15.330393 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:15.330331 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 16 18:30:15.330393 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:15.330347 2569 projected.go:194] Error preparing data for projected volume kube-api-access-n975s for pod openshift-network-diagnostics/network-check-target-8mjdj: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 16 18:30:15.330521 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:15.330422 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1b6e68a-5279-411c-ba1c-fd6c274af91f-kube-api-access-n975s podName:c1b6e68a-5279-411c-ba1c-fd6c274af91f nodeName:}" failed. No retries permitted until 2026-04-16 18:30:31.330403148 +0000 UTC m=+34.222720501 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-n975s" (UniqueName: "kubernetes.io/projected/c1b6e68a-5279-411c-ba1c-fd6c274af91f-kube-api-access-n975s") pod "network-check-target-8mjdj" (UID: "c1b6e68a-5279-411c-ba1c-fd6c274af91f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 16 18:30:15.552833 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:15.552752 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8mjdj"
Apr 16 18:30:15.552972 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:15.552879 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8mjdj" podUID="c1b6e68a-5279-411c-ba1c-fd6c274af91f"
Apr 16 18:30:15.552972 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:15.552903 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hph"
Apr 16 18:30:15.553083 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:15.553036 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hph" podUID="dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30"
Apr 16 18:30:16.553272 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:16.553237 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-8rfj7"
Apr 16 18:30:16.553721 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:16.553363 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-8rfj7" podUID="6ca02c8d-5554-44e4-9884-f9e0bcd462ed"
Apr 16 18:30:16.665163 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:30:16.665131 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b34a2b3_f5f1_4606_970d_5865032489f3.slice/crio-1c4f64fde732d5e3f5f586807478d5692cfcfb78dbe2a2300aa28eec9fddd7a5 WatchSource:0}: Error finding container 1c4f64fde732d5e3f5f586807478d5692cfcfb78dbe2a2300aa28eec9fddd7a5: Status 404 returned error can't find the container with id 1c4f64fde732d5e3f5f586807478d5692cfcfb78dbe2a2300aa28eec9fddd7a5
Apr 16 18:30:16.842829 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:16.842801 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/6ca02c8d-5554-44e4-9884-f9e0bcd462ed-original-pull-secret\") pod \"global-pull-secret-syncer-8rfj7\" (UID: \"6ca02c8d-5554-44e4-9884-f9e0bcd462ed\") " pod="kube-system/global-pull-secret-syncer-8rfj7"
Apr 16 18:30:16.842953 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:16.842921 2569 secret.go:189] Couldn't get secret kube-system/original-pull-secret: object "kube-system"/"original-pull-secret" not registered
Apr 16 18:30:16.843028 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:16.842979 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ca02c8d-5554-44e4-9884-f9e0bcd462ed-original-pull-secret podName:6ca02c8d-5554-44e4-9884-f9e0bcd462ed nodeName:}" failed. No retries permitted until 2026-04-16 18:30:32.842961385 +0000 UTC m=+35.735278729 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "original-pull-secret" (UniqueName: "kubernetes.io/secret/6ca02c8d-5554-44e4-9884-f9e0bcd462ed-original-pull-secret") pod "global-pull-secret-syncer-8rfj7" (UID: "6ca02c8d-5554-44e4-9884-f9e0bcd462ed") : object "kube-system"/"original-pull-secret" not registered
Apr 16 18:30:17.553246 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:17.552934 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8mjdj"
Apr 16 18:30:17.553338 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:17.553014 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hph"
Apr 16 18:30:17.553914 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:17.553335 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8mjdj" podUID="c1b6e68a-5279-411c-ba1c-fd6c274af91f"
Apr 16 18:30:17.553914 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:17.553428 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hph" podUID="dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30"
Apr 16 18:30:17.605001 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:17.604904 2569 generic.go:358] "Generic (PLEG): container finished" podID="9494cb1a-9b64-48a6-ae24-14717ce0b8f0" containerID="4f1aed9883adcbca8ee4b2bf9d9e892d737044f305773caa7a54230028e9cd34" exitCode=0
Apr 16 18:30:17.605001 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:17.604979 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kmhkm" event={"ID":"9494cb1a-9b64-48a6-ae24-14717ce0b8f0","Type":"ContainerDied","Data":"4f1aed9883adcbca8ee4b2bf9d9e892d737044f305773caa7a54230028e9cd34"}
Apr 16 18:30:17.606467 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:17.606443 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-r2w68" event={"ID":"e5f5f6fc-73bc-4bc1-a607-dda6c5bbb1a0","Type":"ContainerStarted","Data":"18d5ce7d6010082da6573e998dff01f72ca1aead439523894627aec8cf199cf9"}
Apr 16 18:30:17.607895 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:17.607876 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-w5snm" event={"ID":"53ef616d-9ee1-4d5c-844a-05af3965cf4a","Type":"ContainerStarted","Data":"5d95c2bca9e4c20ee9ab97656e390545728be7317732de99f30ea1b73530fd50"}
Apr 16 18:30:17.609211 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:17.609181 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-299vl" event={"ID":"127cd67a-6124-4bcc-baa8-d0ae87cd028f","Type":"ContainerStarted","Data":"eec76fcccce4c84f49dbedd594687f629cf9230aa6be5af63ebb38a9c08c5ad5"}
Apr 16 18:30:17.611483 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:17.611465 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5gbp_8a6cc626-96f9-4f30-a283-abdb6733cdac/ovn-acl-logging/0.log"
Apr 16 18:30:17.611822 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:17.611800 2569 generic.go:358] "Generic (PLEG): container finished" podID="8a6cc626-96f9-4f30-a283-abdb6733cdac" containerID="53deae1d3fa4ecd0df515b4dedddd9326108d07e88c766bac1a55c8e5338191d" exitCode=1
Apr 16 18:30:17.611905 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:17.611866 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" event={"ID":"8a6cc626-96f9-4f30-a283-abdb6733cdac","Type":"ContainerStarted","Data":"284e43ed13c922959abff43c0ea759090c8e6a9677c3533e5da1ed1158f260e8"}
Apr 16 18:30:17.611905 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:17.611887 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" event={"ID":"8a6cc626-96f9-4f30-a283-abdb6733cdac","Type":"ContainerStarted","Data":"a534f7f19dee7d26015eb307be6513c8eae6452e7fe145eb424f4beec24baae9"}
Apr 16 18:30:17.611905 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:17.611901 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" event={"ID":"8a6cc626-96f9-4f30-a283-abdb6733cdac","Type":"ContainerStarted","Data":"91977ea27c02eacc18f81f3df17cf42287999a08a338079e7c75793e507cc005"}
Apr 16 18:30:17.612049 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:17.611912 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" event={"ID":"8a6cc626-96f9-4f30-a283-abdb6733cdac","Type":"ContainerDied","Data":"53deae1d3fa4ecd0df515b4dedddd9326108d07e88c766bac1a55c8e5338191d"}
Apr 16 18:30:17.612049 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:17.611922 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" event={"ID":"8a6cc626-96f9-4f30-a283-abdb6733cdac","Type":"ContainerStarted","Data":"a573e1339b98f4678bb8d40a83aef5233d8b9fa896a209b6de56fd4480c472fa"}
Apr 16 18:30:17.613212 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:17.613191 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/konnectivity-agent-24f2c" event={"ID":"671b786a-255d-4021-84d2-9c0ed65bd8da","Type":"ContainerStarted","Data":"c70b63f761daf55f300540fb63d7e43a753e20458a839902b98f60aff404581c"}
Apr 16 18:30:17.614648 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:17.614617 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-z27vt" event={"ID":"6b34a2b3-f5f1-4606-970d-5865032489f3","Type":"ContainerStarted","Data":"abf35f5a1a2739946e5bc0be02886236708bb0a2a82c2f98b41dc909f8ca6cc5"}
Apr 16 18:30:17.614648 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:17.614646 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-z27vt" event={"ID":"6b34a2b3-f5f1-4606-970d-5865032489f3","Type":"ContainerStarted","Data":"1c4f64fde732d5e3f5f586807478d5692cfcfb78dbe2a2300aa28eec9fddd7a5"}
Apr 16 18:30:17.615984 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:17.615961 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q9jcb" event={"ID":"e4899c5b-5582-41f6-8785-b1420a447044","Type":"ContainerStarted","Data":"98dc535e0f19f4228feac000da4e48fae6ebb52c2885f3c82fcc6b047560a63e"}
Apr 16 18:30:17.642011 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:17.641979 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-299vl" podStartSLOduration=4.267389316 podStartE2EDuration="20.641965339s" podCreationTimestamp="2026-04-16 18:29:57 +0000 UTC" firstStartedPulling="2026-04-16 18:30:00.288260102 +0000 UTC m=+3.180577450" lastFinishedPulling="2026-04-16 18:30:16.662836125 +0000 UTC m=+19.555153473" observedRunningTime="2026-04-16 18:30:17.641520382 +0000 UTC m=+20.533837748" watchObservedRunningTime="2026-04-16 18:30:17.641965339 +0000 UTC m=+20.534282904"
Apr 16 18:30:17.653476 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:17.653422 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-r2w68" podStartSLOduration=8.675041448 podStartE2EDuration="20.653408937s" podCreationTimestamp="2026-04-16 18:29:57 +0000 UTC" firstStartedPulling="2026-04-16 18:30:00.294269195 +0000 UTC m=+3.186586542" lastFinishedPulling="2026-04-16 18:30:12.272636689 +0000 UTC m=+15.164954031" observedRunningTime="2026-04-16 18:30:17.653242591 +0000 UTC m=+20.545559992" watchObservedRunningTime="2026-04-16 18:30:17.653408937 +0000 UTC m=+20.545726303"
Apr 16 18:30:17.671604 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:17.671560 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/konnectivity-agent-24f2c" podStartSLOduration=4.304648833 podStartE2EDuration="20.671548207s" podCreationTimestamp="2026-04-16 18:29:57 +0000 UTC" firstStartedPulling="2026-04-16 18:30:00.293810312 +0000 UTC m=+3.186127669" lastFinishedPulling="2026-04-16 18:30:16.660709695 +0000 UTC m=+19.553027043" observedRunningTime="2026-04-16 18:30:17.671069042 +0000 UTC m=+20.563386407" watchObservedRunningTime="2026-04-16 18:30:17.671548207 +0000 UTC m=+20.563865572"
Apr 16 18:30:17.691183 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:17.688931 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-z27vt" podStartSLOduration=9.688917788 podStartE2EDuration="9.688917788s" podCreationTimestamp="2026-04-16 18:30:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:30:17.688459539 +0000 UTC m=+20.580776906" watchObservedRunningTime="2026-04-16 18:30:17.688917788 +0000 UTC m=+20.581235150"
Apr 16 18:30:17.709085 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:17.709041 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-q9jcb" podStartSLOduration=4.3079258320000005 podStartE2EDuration="20.709027181s" podCreationTimestamp="2026-04-16 18:29:57 +0000 UTC" firstStartedPulling="2026-04-16 18:30:00.297491758 +0000 UTC m=+3.189809101" lastFinishedPulling="2026-04-16 18:30:16.698593105 +0000 UTC m=+19.590910450" observedRunningTime="2026-04-16 18:30:17.708769172 +0000 UTC m=+20.601086539" watchObservedRunningTime="2026-04-16 18:30:17.709027181 +0000 UTC m=+20.601344547"
Apr 16 18:30:18.204558 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:18.204533 2569 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock"
Apr 16 18:30:18.552496 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:18.552468 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-8rfj7"
Apr 16 18:30:18.552647 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:18.552576 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-8rfj7" podUID="6ca02c8d-5554-44e4-9884-f9e0bcd462ed"
Apr 16 18:30:18.559929 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:18.559645 2569 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/ebs.csi.aws.com-reg.sock","Timestamp":"2026-04-16T18:30:18.204553528Z","UUID":"35a0955e-d909-4e31-acc7-7491619e6d43","Handler":null,"Name":"","Endpoint":""}
Apr 16 18:30:18.561316 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:18.561295 2569 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: ebs.csi.aws.com endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock versions: 1.0.0
Apr 16 18:30:18.561465 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:18.561322 2569 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: ebs.csi.aws.com at endpoint: /var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock
Apr 16 18:30:18.626040 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:18.626006 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-w5snm" event={"ID":"53ef616d-9ee1-4d5c-844a-05af3965cf4a","Type":"ContainerStarted","Data":"a1698eff0563b3b0b2bf4a67f9bf1271d2b59be34191f904aa943c9fa5de1642"}
Apr 16 18:30:18.627654 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:18.627610 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-fqdpz" event={"ID":"c997d6be-0aad-4171-850c-d7aedaf7032f","Type":"ContainerStarted","Data":"0e38ac655d18be55caf26fe218d208c4e41601883a7a94037e6431ff5f0b9341"}
Apr 16 18:30:18.630439 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:18.630407 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5gbp_8a6cc626-96f9-4f30-a283-abdb6733cdac/ovn-acl-logging/0.log"
Apr 16 18:30:18.630804 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:18.630777 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" event={"ID":"8a6cc626-96f9-4f30-a283-abdb6733cdac","Type":"ContainerStarted","Data":"358647ffc1012ad9fb5b8f2f0e755e9fa6eb5e6abf954dbfe2f12e9fd9cd8f7c"}
Apr 16 18:30:18.643056 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:18.643009 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/iptables-alerter-fqdpz" podStartSLOduration=5.277914209 podStartE2EDuration="21.642998102s" podCreationTimestamp="2026-04-16 18:29:57 +0000 UTC" firstStartedPulling="2026-04-16 18:30:00.29610914 +0000 UTC m=+3.188426486" lastFinishedPulling="2026-04-16 18:30:16.661193023 +0000 UTC m=+19.553510379" observedRunningTime="2026-04-16 18:30:18.64243975 +0000 UTC m=+21.534757117" watchObservedRunningTime="2026-04-16 18:30:18.642998102 +0000 UTC m=+21.535315467"
Apr 16 18:30:19.553226 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:19.553035 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hph"
Apr 16 18:30:19.553552 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:19.553095 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8mjdj"
Apr 16 18:30:19.553552 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:19.553343 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hph" podUID="dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30"
Apr 16 18:30:19.553552 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:19.553415 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8mjdj" podUID="c1b6e68a-5279-411c-ba1c-fd6c274af91f"
Apr 16 18:30:19.634635 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:19.634554 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-w5snm" event={"ID":"53ef616d-9ee1-4d5c-844a-05af3965cf4a","Type":"ContainerStarted","Data":"7f59fc15d98fc2036f25e7d1a0da2caa323f779e68621e53ab05a59c1632a712"}
Apr 16 18:30:19.662054 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:19.662012 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-w5snm" podStartSLOduration=3.615805324 podStartE2EDuration="22.66199984s" podCreationTimestamp="2026-04-16 18:29:57 +0000 UTC" firstStartedPulling="2026-04-16 18:30:00.289453677 +0000 UTC m=+3.181771023" lastFinishedPulling="2026-04-16 18:30:19.335648178 +0000 UTC m=+22.227965539" observedRunningTime="2026-04-16 18:30:19.661979725 +0000 UTC m=+22.554297091" watchObservedRunningTime="2026-04-16 18:30:19.66199984 +0000 UTC m=+22.554317205"
Apr 16 18:30:20.171823 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:20.171744 2569 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/konnectivity-agent-24f2c"
Apr 16 18:30:20.172494 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:20.172468 2569 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/konnectivity-agent-24f2c"
Apr 16 18:30:20.553212 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:20.553184 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-8rfj7"
Apr 16 18:30:20.553411 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:20.553306 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-8rfj7" podUID="6ca02c8d-5554-44e4-9884-f9e0bcd462ed"
Apr 16 18:30:20.639702 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:20.639671 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5gbp_8a6cc626-96f9-4f30-a283-abdb6733cdac/ovn-acl-logging/0.log"
Apr 16 18:30:20.640305 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:20.640033 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" event={"ID":"8a6cc626-96f9-4f30-a283-abdb6733cdac","Type":"ContainerStarted","Data":"522bfcd39b2358a05c21fad63f03a29882d28dc466f820ad039a1e11216b4410"}
Apr 16 18:30:20.640305 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:20.640291 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kube-system/konnectivity-agent-24f2c"
Apr 16 18:30:20.640993 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:20.640972 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/konnectivity-agent-24f2c"
Apr 16 18:30:21.552663 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:21.552631 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8mjdj"
Apr 16 18:30:21.552663 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:21.552651 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hph"
Apr 16 18:30:21.552902 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:21.552748 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8mjdj" podUID="c1b6e68a-5279-411c-ba1c-fd6c274af91f"
Apr 16 18:30:21.552902 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:21.552880 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hph" podUID="dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30"
Apr 16 18:30:22.553459 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:22.553277 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-8rfj7"
Apr 16 18:30:22.554104 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:22.553541 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-8rfj7" podUID="6ca02c8d-5554-44e4-9884-f9e0bcd462ed"
Apr 16 18:30:22.646139 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:22.646112 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5gbp_8a6cc626-96f9-4f30-a283-abdb6733cdac/ovn-acl-logging/0.log"
Apr 16 18:30:22.646495 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:22.646412 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" event={"ID":"8a6cc626-96f9-4f30-a283-abdb6733cdac","Type":"ContainerStarted","Data":"337867bdb3e1b9444271221d47177d5c872adc7e8e60ab798af342a9a8d0af50"}
Apr 16 18:30:22.646718 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:22.646704 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp"
Apr 16 18:30:22.646879 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:22.646860 2569 scope.go:117] "RemoveContainer" containerID="53deae1d3fa4ecd0df515b4dedddd9326108d07e88c766bac1a55c8e5338191d"
Apr 16 18:30:22.647917 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:22.647896 2569 generic.go:358] "Generic (PLEG): container finished" podID="9494cb1a-9b64-48a6-ae24-14717ce0b8f0" containerID="7b318c29e0af5a9e5d6f8108f0cc5812e2f49f32d05e6894ce9fd0ee75ed3331" exitCode=0
Apr 16 18:30:22.648025 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:22.647994 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kmhkm" event={"ID":"9494cb1a-9b64-48a6-ae24-14717ce0b8f0","Type":"ContainerDied","Data":"7b318c29e0af5a9e5d6f8108f0cc5812e2f49f32d05e6894ce9fd0ee75ed3331"}
Apr 16 18:30:22.662594 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:22.662528 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp"
Apr 16 18:30:23.553115 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:23.553088 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8mjdj"
Apr 16 18:30:23.553289 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:23.553088 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hph"
Apr 16 18:30:23.553289 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:23.553192 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8mjdj" podUID="c1b6e68a-5279-411c-ba1c-fd6c274af91f"
Apr 16 18:30:23.553289 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:23.553256 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-p2hph" podUID="dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30" Apr 16 18:30:23.652764 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:23.652739 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5gbp_8a6cc626-96f9-4f30-a283-abdb6733cdac/ovn-acl-logging/0.log" Apr 16 18:30:23.653154 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:23.653007 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" event={"ID":"8a6cc626-96f9-4f30-a283-abdb6733cdac","Type":"ContainerStarted","Data":"787428f36703b9dd435f23d270620367492c5540303d08c89a7b7c744a09bb5f"} Apr 16 18:30:23.653223 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:23.653208 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" Apr 16 18:30:23.653266 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:23.653231 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" Apr 16 18:30:23.666341 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:23.666317 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" Apr 16 18:30:23.682018 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:23.681975 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp" podStartSLOduration=9.984018306 podStartE2EDuration="26.681965569s" podCreationTimestamp="2026-04-16 18:29:57 +0000 UTC" firstStartedPulling="2026-04-16 18:30:00.295504451 +0000 UTC m=+3.187821802" lastFinishedPulling="2026-04-16 18:30:16.993451722 +0000 UTC m=+19.885769065" observedRunningTime="2026-04-16 18:30:23.681543885 +0000 UTC m=+26.573861273" watchObservedRunningTime="2026-04-16 18:30:23.681965569 +0000 UTC m=+26.574282934" Apr 16 18:30:24.063553 ip-10-0-142-225 
kubenswrapper[2569]: I0416 18:30:24.063516 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-8mjdj"] Apr 16 18:30:24.063706 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:24.063640 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8mjdj" Apr 16 18:30:24.063747 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:24.063714 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8mjdj" podUID="c1b6e68a-5279-411c-ba1c-fd6c274af91f" Apr 16 18:30:24.066943 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:24.066907 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-8rfj7"] Apr 16 18:30:24.067079 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:24.067043 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-8rfj7" Apr 16 18:30:24.067197 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:24.067163 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="kube-system/global-pull-secret-syncer-8rfj7" podUID="6ca02c8d-5554-44e4-9884-f9e0bcd462ed" Apr 16 18:30:24.067638 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:24.067612 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-p2hph"] Apr 16 18:30:24.067773 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:24.067757 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hph" Apr 16 18:30:24.067895 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:24.067875 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hph" podUID="dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30" Apr 16 18:30:24.656766 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:24.656737 2569 generic.go:358] "Generic (PLEG): container finished" podID="9494cb1a-9b64-48a6-ae24-14717ce0b8f0" containerID="a95ccc51a7aac86d7252a3e69774db2f571e38630b8384b9b92e8d28ef6ed8e4" exitCode=0 Apr 16 18:30:24.657102 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:24.656829 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kmhkm" event={"ID":"9494cb1a-9b64-48a6-ae24-14717ce0b8f0","Type":"ContainerDied","Data":"a95ccc51a7aac86d7252a3e69774db2f571e38630b8384b9b92e8d28ef6ed8e4"} Apr 16 18:30:25.552593 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:25.552564 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-8rfj7" Apr 16 18:30:25.552768 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:25.552564 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8mjdj" Apr 16 18:30:25.552768 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:25.552678 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-8rfj7" podUID="6ca02c8d-5554-44e4-9884-f9e0bcd462ed" Apr 16 18:30:25.552768 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:25.552744 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8mjdj" podUID="c1b6e68a-5279-411c-ba1c-fd6c274af91f" Apr 16 18:30:25.552768 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:25.552574 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hph" Apr 16 18:30:25.552956 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:25.552818 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-p2hph" podUID="dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30" Apr 16 18:30:26.662713 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:26.662683 2569 generic.go:358] "Generic (PLEG): container finished" podID="9494cb1a-9b64-48a6-ae24-14717ce0b8f0" containerID="2cc4bf76f749b3d10a6becbb5e9858e7944425fb528f93edb5e6e1ed4c37da50" exitCode=0 Apr 16 18:30:26.663067 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:26.662727 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kmhkm" event={"ID":"9494cb1a-9b64-48a6-ae24-14717ce0b8f0","Type":"ContainerDied","Data":"2cc4bf76f749b3d10a6becbb5e9858e7944425fb528f93edb5e6e1ed4c37da50"} Apr 16 18:30:27.553698 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:27.553489 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hph" Apr 16 18:30:27.553874 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:27.553571 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-8rfj7" Apr 16 18:30:27.553874 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:27.553805 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hph" podUID="dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30" Apr 16 18:30:27.553874 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:27.553595 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8mjdj" Apr 16 18:30:27.554041 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:27.553882 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-8rfj7" podUID="6ca02c8d-5554-44e4-9884-f9e0bcd462ed" Apr 16 18:30:27.554041 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:27.553954 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-8mjdj" podUID="c1b6e68a-5279-411c-ba1c-fd6c274af91f" Apr 16 18:30:29.553214 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:29.553180 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hph" Apr 16 18:30:29.553871 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:29.553180 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-8rfj7" Apr 16 18:30:29.553871 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:29.553301 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-p2hph" podUID="dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30" Apr 16 18:30:29.553871 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:29.553348 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="kube-system/global-pull-secret-syncer-8rfj7" podUID="6ca02c8d-5554-44e4-9884-f9e0bcd462ed" Apr 16 18:30:29.553871 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:29.553183 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8mjdj" Apr 16 18:30:29.553871 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:29.553445 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-8mjdj" podUID="c1b6e68a-5279-411c-ba1c-fd6c274af91f" Apr 16 18:30:30.952752 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:30.952720 2569 kubelet_node_status.go:736] "Recording event message for node" node="ip-10-0-142-225.ec2.internal" event="NodeReady" Apr 16 18:30:30.953283 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:30.952859 2569 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Apr 16 18:30:30.996994 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:30.996967 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-5cb6cf4cb4-m422p"] Apr 16 18:30:31.033163 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.033132 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-769c8c769b-pr8h9"] Apr 16 18:30:31.048335 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.048309 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7d54cdb6c7-8zskk"] Apr 16 18:30:31.048498 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.048460 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-m422p" Apr 16 18:30:31.048498 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.048478 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-769c8c769b-pr8h9" Apr 16 18:30:31.051544 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.051469 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-private-configuration\"" Apr 16 18:30:31.051697 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.051564 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Apr 16 18:30:31.052240 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.052221 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"default-dockercfg-zvvl6\"" Apr 16 18:30:31.052354 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.052218 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Apr 16 18:30:31.052354 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.052320 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Apr 16 18:30:31.052527 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.052425 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Apr 16 18:30:31.052527 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.052449 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-5jftk\"" Apr 16 18:30:31.061698 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.061677 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-6c84f9c746-flvss"] Apr 16 18:30:31.061807 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.061792 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7d54cdb6c7-8zskk" Apr 16 18:30:31.065026 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.064948 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"managed-serviceaccount-hub-kubeconfig\"" Apr 16 18:30:31.065026 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.065010 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"open-cluster-management-agent-addon\"/\"openshift-service-ca.crt\"" Apr 16 18:30:31.065202 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.065048 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"managed-serviceaccount-dockercfg-czj58\"" Apr 16 18:30:31.065202 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.064948 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"open-cluster-management-image-pull-credentials\"" Apr 16 18:30:31.065324 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.065312 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"open-cluster-management-agent-addon\"/\"kube-root-ca.crt\"" Apr 16 18:30:31.070664 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.070648 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Apr 16 18:30:31.079884 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.079865 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["open-cluster-management-agent-addon/cluster-proxy-proxy-agent-56cd457576-rnm9d"] Apr 16 18:30:31.080048 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.080030 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6c84f9c746-flvss" Apr 16 18:30:31.084001 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.083982 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"work-manager-hub-kubeconfig\"" Apr 16 18:30:31.100875 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.100854 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-5cb6cf4cb4-m422p"] Apr 16 18:30:31.100979 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.100883 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-hv956"] Apr 16 18:30:31.101032 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.101001 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-56cd457576-rnm9d" Apr 16 18:30:31.104185 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.104147 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-ca\"" Apr 16 18:30:31.104506 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.104484 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-hub-kubeconfig\"" Apr 16 18:30:31.104634 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.104613 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-service-proxy-server-certificates\"" Apr 16 18:30:31.104993 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.104976 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"open-cluster-management-agent-addon\"/\"cluster-proxy-open-cluster-management.io-proxy-agent-signer-client-cert\"" Apr 16 18:30:31.116683 
ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.116664 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-bhlgn"] Apr 16 18:30:31.116830 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.116814 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-hv956" Apr 16 18:30:31.119749 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.119725 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Apr 16 18:30:31.119749 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.119735 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-2pvvf\"" Apr 16 18:30:31.119898 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.119752 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Apr 16 18:30:31.120015 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.119999 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Apr 16 18:30:31.131657 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.131596 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7d54cdb6c7-8zskk"] Apr 16 18:30:31.131752 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.131677 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-6c84f9c746-flvss"] Apr 16 18:30:31.131752 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.131700 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/cluster-proxy-proxy-agent-56cd457576-rnm9d"] Apr 16 18:30:31.131752 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.131710 2569 
kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-769c8c769b-pr8h9"] Apr 16 18:30:31.131752 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.131718 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-hv956"] Apr 16 18:30:31.131752 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.131720 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-bhlgn" Apr 16 18:30:31.131752 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.131726 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-bhlgn"] Apr 16 18:30:31.134100 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.134067 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Apr 16 18:30:31.134345 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.134323 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Apr 16 18:30:31.134499 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.134323 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-h4pkb\"" Apr 16 18:30:31.158052 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.158027 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-ca-trust-extracted\") pod \"image-registry-769c8c769b-pr8h9\" (UID: \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " pod="openshift-image-registry/image-registry-769c8c769b-pr8h9" Apr 16 18:30:31.158161 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.158066 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-installation-pull-secrets\") pod \"image-registry-769c8c769b-pr8h9\" (UID: \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " pod="openshift-image-registry/image-registry-769c8c769b-pr8h9" Apr 16 18:30:31.158161 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.158097 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c6572ec9-4256-4030-a8db-98573ded7d80-nginx-conf\") pod \"networking-console-plugin-5cb6cf4cb4-m422p\" (UID: \"c6572ec9-4256-4030-a8db-98573ded7d80\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-m422p" Apr 16 18:30:31.158161 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.158144 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4bj5\" (UniqueName: \"kubernetes.io/projected/b103d799-6b1c-4451-9de5-97e25dd04337-kube-api-access-w4bj5\") pod \"managed-serviceaccount-addon-agent-7d54cdb6c7-8zskk\" (UID: \"b103d799-6b1c-4451-9de5-97e25dd04337\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7d54cdb6c7-8zskk" Apr 16 18:30:31.158296 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.158167 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c6572ec9-4256-4030-a8db-98573ded7d80-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-m422p\" (UID: \"c6572ec9-4256-4030-a8db-98573ded7d80\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-m422p" Apr 16 18:30:31.158296 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.158193 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-private-configuration\" (UniqueName: 
\"kubernetes.io/secret/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-image-registry-private-configuration\") pod \"image-registry-769c8c769b-pr8h9\" (UID: \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " pod="openshift-image-registry/image-registry-769c8c769b-pr8h9" Apr 16 18:30:31.158399 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.158286 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/b103d799-6b1c-4451-9de5-97e25dd04337-hub-kubeconfig\") pod \"managed-serviceaccount-addon-agent-7d54cdb6c7-8zskk\" (UID: \"b103d799-6b1c-4451-9de5-97e25dd04337\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7d54cdb6c7-8zskk" Apr 16 18:30:31.158399 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.158351 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-registry-tls\") pod \"image-registry-769c8c769b-pr8h9\" (UID: \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " pod="openshift-image-registry/image-registry-769c8c769b-pr8h9" Apr 16 18:30:31.158399 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.158393 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-bound-sa-token\") pod \"image-registry-769c8c769b-pr8h9\" (UID: \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " pod="openshift-image-registry/image-registry-769c8c769b-pr8h9" Apr 16 18:30:31.158552 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.158420 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqmp2\" (UniqueName: \"kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-kube-api-access-lqmp2\") pod \"image-registry-769c8c769b-pr8h9\" (UID: 
\"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " pod="openshift-image-registry/image-registry-769c8c769b-pr8h9" Apr 16 18:30:31.158552 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.158454 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-trusted-ca\") pod \"image-registry-769c8c769b-pr8h9\" (UID: \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " pod="openshift-image-registry/image-registry-769c8c769b-pr8h9" Apr 16 18:30:31.158552 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.158483 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-registry-certificates\") pod \"image-registry-769c8c769b-pr8h9\" (UID: \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " pod="openshift-image-registry/image-registry-769c8c769b-pr8h9" Apr 16 18:30:31.259520 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.259442 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/b103d799-6b1c-4451-9de5-97e25dd04337-hub-kubeconfig\") pod \"managed-serviceaccount-addon-agent-7d54cdb6c7-8zskk\" (UID: \"b103d799-6b1c-4451-9de5-97e25dd04337\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7d54cdb6c7-8zskk" Apr 16 18:30:31.259520 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.259492 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/077542aa-4ef3-4409-a13f-1196cff06904-cert\") pod \"ingress-canary-hv956\" (UID: \"077542aa-4ef3-4409-a13f-1196cff06904\") " pod="openshift-ingress-canary/ingress-canary-hv956" Apr 16 18:30:31.259733 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.259530 2569 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/718409d2-012e-47ae-bb45-151ceb86feb0-klusterlet-config\") pod \"klusterlet-addon-workmgr-6c84f9c746-flvss\" (UID: \"718409d2-012e-47ae-bb45-151ceb86feb0\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6c84f9c746-flvss" Apr 16 18:30:31.259733 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.259564 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mbtj\" (UniqueName: \"kubernetes.io/projected/077542aa-4ef3-4409-a13f-1196cff06904-kube-api-access-5mbtj\") pod \"ingress-canary-hv956\" (UID: \"077542aa-4ef3-4409-a13f-1196cff06904\") " pod="openshift-ingress-canary/ingress-canary-hv956" Apr 16 18:30:31.259733 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.259606 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-registry-tls\") pod \"image-registry-769c8c769b-pr8h9\" (UID: \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " pod="openshift-image-registry/image-registry-769c8c769b-pr8h9" Apr 16 18:30:31.259733 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.259661 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-bound-sa-token\") pod \"image-registry-769c8c769b-pr8h9\" (UID: \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " pod="openshift-image-registry/image-registry-769c8c769b-pr8h9" Apr 16 18:30:31.259733 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.259692 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-trusted-ca\") pod \"image-registry-769c8c769b-pr8h9\" 
(UID: \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " pod="openshift-image-registry/image-registry-769c8c769b-pr8h9" Apr 16 18:30:31.259733 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.259720 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/df4a1222-1cb9-4e57-890c-542f990453e1-hub-kubeconfig\") pod \"cluster-proxy-proxy-agent-56cd457576-rnm9d\" (UID: \"df4a1222-1cb9-4e57-890c-542f990453e1\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-56cd457576-rnm9d" Apr 16 18:30:31.260012 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.259748 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p698s\" (UniqueName: \"kubernetes.io/projected/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-kube-api-access-p698s\") pod \"dns-default-bhlgn\" (UID: \"3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6\") " pod="openshift-dns/dns-default-bhlgn" Apr 16 18:30:31.260012 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:31.259757 2569 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 16 18:30:31.260012 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:31.259785 2569 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-769c8c769b-pr8h9: secret "image-registry-tls" not found Apr 16 18:30:31.260012 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.259776 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p28j5\" (UniqueName: \"kubernetes.io/projected/718409d2-012e-47ae-bb45-151ceb86feb0-kube-api-access-p28j5\") pod \"klusterlet-addon-workmgr-6c84f9c746-flvss\" (UID: \"718409d2-012e-47ae-bb45-151ceb86feb0\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6c84f9c746-flvss" Apr 16 
18:30:31.260012 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:31.259858 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-registry-tls podName:b0479613-7fe1-4e4f-8f5b-6f46165c0dd7 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:31.759837412 +0000 UTC m=+34.652154771 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-registry-tls") pod "image-registry-769c8c769b-pr8h9" (UID: "b0479613-7fe1-4e4f-8f5b-6f46165c0dd7") : secret "image-registry-tls" not found Apr 16 18:30:31.260012 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.259850 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz6fn\" (UniqueName: \"kubernetes.io/projected/df4a1222-1cb9-4e57-890c-542f990453e1-kube-api-access-hz6fn\") pod \"cluster-proxy-proxy-agent-56cd457576-rnm9d\" (UID: \"df4a1222-1cb9-4e57-890c-542f990453e1\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-56cd457576-rnm9d" Apr 16 18:30:31.260012 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.259895 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca\" (UniqueName: \"kubernetes.io/secret/df4a1222-1cb9-4e57-890c-542f990453e1-ca\") pod \"cluster-proxy-proxy-agent-56cd457576-rnm9d\" (UID: \"df4a1222-1cb9-4e57-890c-542f990453e1\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-56cd457576-rnm9d" Apr 16 18:30:31.260012 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.259939 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-config-volume\") pod \"dns-default-bhlgn\" (UID: \"3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6\") " 
pod="openshift-dns/dns-default-bhlgn" Apr 16 18:30:31.260012 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.259983 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30-metrics-certs\") pod \"network-metrics-daemon-p2hph\" (UID: \"dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30\") " pod="openshift-multus/network-metrics-daemon-p2hph" Apr 16 18:30:31.260483 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.260031 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hub\" (UniqueName: \"kubernetes.io/secret/df4a1222-1cb9-4e57-890c-542f990453e1-hub\") pod \"cluster-proxy-proxy-agent-56cd457576-rnm9d\" (UID: \"df4a1222-1cb9-4e57-890c-542f990453e1\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-56cd457576-rnm9d" Apr 16 18:30:31.260483 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.260066 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-metrics-tls\") pod \"dns-default-bhlgn\" (UID: \"3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6\") " pod="openshift-dns/dns-default-bhlgn" Apr 16 18:30:31.260483 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:31.260097 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:30:31.260483 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.260103 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lqmp2\" (UniqueName: \"kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-kube-api-access-lqmp2\") pod \"image-registry-769c8c769b-pr8h9\" (UID: \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " pod="openshift-image-registry/image-registry-769c8c769b-pr8h9" Apr 16 18:30:31.260483 
ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.260140 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-tmp-dir\") pod \"dns-default-bhlgn\" (UID: \"3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6\") " pod="openshift-dns/dns-default-bhlgn" Apr 16 18:30:31.260483 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:31.260163 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30-metrics-certs podName:dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30 nodeName:}" failed. No retries permitted until 2026-04-16 18:31:03.260145302 +0000 UTC m=+66.152462645 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30-metrics-certs") pod "network-metrics-daemon-p2hph" (UID: "dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 16 18:30:31.260483 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.260211 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-registry-certificates\") pod \"image-registry-769c8c769b-pr8h9\" (UID: \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " pod="openshift-image-registry/image-registry-769c8c769b-pr8h9" Apr 16 18:30:31.260483 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.260251 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-ca-trust-extracted\") pod \"image-registry-769c8c769b-pr8h9\" (UID: \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " pod="openshift-image-registry/image-registry-769c8c769b-pr8h9" Apr 16 18:30:31.260483 ip-10-0-142-225 kubenswrapper[2569]: 
I0416 18:30:31.260277 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-installation-pull-secrets\") pod \"image-registry-769c8c769b-pr8h9\" (UID: \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " pod="openshift-image-registry/image-registry-769c8c769b-pr8h9" Apr 16 18:30:31.260483 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.260307 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/718409d2-012e-47ae-bb45-151ceb86feb0-tmp\") pod \"klusterlet-addon-workmgr-6c84f9c746-flvss\" (UID: \"718409d2-012e-47ae-bb45-151ceb86feb0\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6c84f9c746-flvss" Apr 16 18:30:31.260483 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.260334 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ocpservice-ca\" (UniqueName: \"kubernetes.io/configmap/df4a1222-1cb9-4e57-890c-542f990453e1-ocpservice-ca\") pod \"cluster-proxy-proxy-agent-56cd457576-rnm9d\" (UID: \"df4a1222-1cb9-4e57-890c-542f990453e1\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-56cd457576-rnm9d" Apr 16 18:30:31.260483 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.260361 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-proxy-server-cert\" (UniqueName: \"kubernetes.io/secret/df4a1222-1cb9-4e57-890c-542f990453e1-service-proxy-server-cert\") pod \"cluster-proxy-proxy-agent-56cd457576-rnm9d\" (UID: \"df4a1222-1cb9-4e57-890c-542f990453e1\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-56cd457576-rnm9d" Apr 16 18:30:31.260483 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.260406 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c6572ec9-4256-4030-a8db-98573ded7d80-nginx-conf\") pod \"networking-console-plugin-5cb6cf4cb4-m422p\" (UID: \"c6572ec9-4256-4030-a8db-98573ded7d80\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-m422p" Apr 16 18:30:31.260483 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.260425 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w4bj5\" (UniqueName: \"kubernetes.io/projected/b103d799-6b1c-4451-9de5-97e25dd04337-kube-api-access-w4bj5\") pod \"managed-serviceaccount-addon-agent-7d54cdb6c7-8zskk\" (UID: \"b103d799-6b1c-4451-9de5-97e25dd04337\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7d54cdb6c7-8zskk" Apr 16 18:30:31.260483 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.260445 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c6572ec9-4256-4030-a8db-98573ded7d80-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-m422p\" (UID: \"c6572ec9-4256-4030-a8db-98573ded7d80\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-m422p" Apr 16 18:30:31.261226 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.260462 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-image-registry-private-configuration\") pod \"image-registry-769c8c769b-pr8h9\" (UID: \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " pod="openshift-image-registry/image-registry-769c8c769b-pr8h9" Apr 16 18:30:31.261226 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.260682 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-trusted-ca\") pod 
\"image-registry-769c8c769b-pr8h9\" (UID: \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " pod="openshift-image-registry/image-registry-769c8c769b-pr8h9" Apr 16 18:30:31.261226 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.260799 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-registry-certificates\") pod \"image-registry-769c8c769b-pr8h9\" (UID: \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " pod="openshift-image-registry/image-registry-769c8c769b-pr8h9" Apr 16 18:30:31.261226 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:31.260897 2569 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Apr 16 18:30:31.261226 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.260915 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-ca-trust-extracted\") pod \"image-registry-769c8c769b-pr8h9\" (UID: \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " pod="openshift-image-registry/image-registry-769c8c769b-pr8h9" Apr 16 18:30:31.261226 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:31.260961 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c6572ec9-4256-4030-a8db-98573ded7d80-networking-console-plugin-cert podName:c6572ec9-4256-4030-a8db-98573ded7d80 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:31.760943326 +0000 UTC m=+34.653260673 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/c6572ec9-4256-4030-a8db-98573ded7d80-networking-console-plugin-cert") pod "networking-console-plugin-5cb6cf4cb4-m422p" (UID: "c6572ec9-4256-4030-a8db-98573ded7d80") : secret "networking-console-plugin-cert" not found Apr 16 18:30:31.261522 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.261314 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c6572ec9-4256-4030-a8db-98573ded7d80-nginx-conf\") pod \"networking-console-plugin-5cb6cf4cb4-m422p\" (UID: \"c6572ec9-4256-4030-a8db-98573ded7d80\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-m422p" Apr 16 18:30:31.264742 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.264718 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-installation-pull-secrets\") pod \"image-registry-769c8c769b-pr8h9\" (UID: \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " pod="openshift-image-registry/image-registry-769c8c769b-pr8h9" Apr 16 18:30:31.264845 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.264718 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-image-registry-private-configuration\") pod \"image-registry-769c8c769b-pr8h9\" (UID: \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " pod="openshift-image-registry/image-registry-769c8c769b-pr8h9" Apr 16 18:30:31.264911 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.264839 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/b103d799-6b1c-4451-9de5-97e25dd04337-hub-kubeconfig\") pod \"managed-serviceaccount-addon-agent-7d54cdb6c7-8zskk\" (UID: 
\"b103d799-6b1c-4451-9de5-97e25dd04337\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7d54cdb6c7-8zskk" Apr 16 18:30:31.276084 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.276059 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqmp2\" (UniqueName: \"kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-kube-api-access-lqmp2\") pod \"image-registry-769c8c769b-pr8h9\" (UID: \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " pod="openshift-image-registry/image-registry-769c8c769b-pr8h9" Apr 16 18:30:31.276774 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.276753 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4bj5\" (UniqueName: \"kubernetes.io/projected/b103d799-6b1c-4451-9de5-97e25dd04337-kube-api-access-w4bj5\") pod \"managed-serviceaccount-addon-agent-7d54cdb6c7-8zskk\" (UID: \"b103d799-6b1c-4451-9de5-97e25dd04337\") " pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7d54cdb6c7-8zskk" Apr 16 18:30:31.277904 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.277886 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-bound-sa-token\") pod \"image-registry-769c8c769b-pr8h9\" (UID: \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " pod="openshift-image-registry/image-registry-769c8c769b-pr8h9" Apr 16 18:30:31.361301 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.361262 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5mbtj\" (UniqueName: \"kubernetes.io/projected/077542aa-4ef3-4409-a13f-1196cff06904-kube-api-access-5mbtj\") pod \"ingress-canary-hv956\" (UID: \"077542aa-4ef3-4409-a13f-1196cff06904\") " pod="openshift-ingress-canary/ingress-canary-hv956" Apr 16 18:30:31.361498 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.361340 2569 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/df4a1222-1cb9-4e57-890c-542f990453e1-hub-kubeconfig\") pod \"cluster-proxy-proxy-agent-56cd457576-rnm9d\" (UID: \"df4a1222-1cb9-4e57-890c-542f990453e1\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-56cd457576-rnm9d" Apr 16 18:30:31.361498 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.361369 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p698s\" (UniqueName: \"kubernetes.io/projected/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-kube-api-access-p698s\") pod \"dns-default-bhlgn\" (UID: \"3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6\") " pod="openshift-dns/dns-default-bhlgn" Apr 16 18:30:31.361498 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.361416 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p28j5\" (UniqueName: \"kubernetes.io/projected/718409d2-012e-47ae-bb45-151ceb86feb0-kube-api-access-p28j5\") pod \"klusterlet-addon-workmgr-6c84f9c746-flvss\" (UID: \"718409d2-012e-47ae-bb45-151ceb86feb0\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6c84f9c746-flvss" Apr 16 18:30:31.361498 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.361449 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hz6fn\" (UniqueName: \"kubernetes.io/projected/df4a1222-1cb9-4e57-890c-542f990453e1-kube-api-access-hz6fn\") pod \"cluster-proxy-proxy-agent-56cd457576-rnm9d\" (UID: \"df4a1222-1cb9-4e57-890c-542f990453e1\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-56cd457576-rnm9d" Apr 16 18:30:31.361498 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.361477 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca\" (UniqueName: \"kubernetes.io/secret/df4a1222-1cb9-4e57-890c-542f990453e1-ca\") pod 
\"cluster-proxy-proxy-agent-56cd457576-rnm9d\" (UID: \"df4a1222-1cb9-4e57-890c-542f990453e1\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-56cd457576-rnm9d" Apr 16 18:30:31.361757 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.361509 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-config-volume\") pod \"dns-default-bhlgn\" (UID: \"3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6\") " pod="openshift-dns/dns-default-bhlgn" Apr 16 18:30:31.361757 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.361556 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n975s\" (UniqueName: \"kubernetes.io/projected/c1b6e68a-5279-411c-ba1c-fd6c274af91f-kube-api-access-n975s\") pod \"network-check-target-8mjdj\" (UID: \"c1b6e68a-5279-411c-ba1c-fd6c274af91f\") " pod="openshift-network-diagnostics/network-check-target-8mjdj" Apr 16 18:30:31.361757 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.361579 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hub\" (UniqueName: \"kubernetes.io/secret/df4a1222-1cb9-4e57-890c-542f990453e1-hub\") pod \"cluster-proxy-proxy-agent-56cd457576-rnm9d\" (UID: \"df4a1222-1cb9-4e57-890c-542f990453e1\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-56cd457576-rnm9d" Apr 16 18:30:31.361757 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.361611 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-metrics-tls\") pod \"dns-default-bhlgn\" (UID: \"3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6\") " pod="openshift-dns/dns-default-bhlgn" Apr 16 18:30:31.361757 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.361645 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-tmp-dir\") pod \"dns-default-bhlgn\" (UID: \"3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6\") " pod="openshift-dns/dns-default-bhlgn" Apr 16 18:30:31.361757 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.361683 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/718409d2-012e-47ae-bb45-151ceb86feb0-tmp\") pod \"klusterlet-addon-workmgr-6c84f9c746-flvss\" (UID: \"718409d2-012e-47ae-bb45-151ceb86feb0\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6c84f9c746-flvss" Apr 16 18:30:31.361757 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.361709 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ocpservice-ca\" (UniqueName: \"kubernetes.io/configmap/df4a1222-1cb9-4e57-890c-542f990453e1-ocpservice-ca\") pod \"cluster-proxy-proxy-agent-56cd457576-rnm9d\" (UID: \"df4a1222-1cb9-4e57-890c-542f990453e1\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-56cd457576-rnm9d" Apr 16 18:30:31.361757 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.361730 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-proxy-server-cert\" (UniqueName: \"kubernetes.io/secret/df4a1222-1cb9-4e57-890c-542f990453e1-service-proxy-server-cert\") pod \"cluster-proxy-proxy-agent-56cd457576-rnm9d\" (UID: \"df4a1222-1cb9-4e57-890c-542f990453e1\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-56cd457576-rnm9d" Apr 16 18:30:31.362126 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.362044 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-tmp-dir\") pod \"dns-default-bhlgn\" (UID: \"3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6\") " pod="openshift-dns/dns-default-bhlgn" Apr 16 18:30:31.362183 ip-10-0-142-225 
kubenswrapper[2569]: E0416 18:30:31.362146 2569 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 18:30:31.362237 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:31.362200 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-metrics-tls podName:3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:31.862180914 +0000 UTC m=+34.754498261 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-metrics-tls") pod "dns-default-bhlgn" (UID: "3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6") : secret "dns-default-metrics-tls" not found Apr 16 18:30:31.362237 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.362201 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/718409d2-012e-47ae-bb45-151ceb86feb0-tmp\") pod \"klusterlet-addon-workmgr-6c84f9c746-flvss\" (UID: \"718409d2-012e-47ae-bb45-151ceb86feb0\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6c84f9c746-flvss" Apr 16 18:30:31.362532 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.362453 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/077542aa-4ef3-4409-a13f-1196cff06904-cert\") pod \"ingress-canary-hv956\" (UID: \"077542aa-4ef3-4409-a13f-1196cff06904\") " pod="openshift-ingress-canary/ingress-canary-hv956" Apr 16 18:30:31.362532 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:31.362476 2569 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 16 18:30:31.362532 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:31.362499 2569 projected.go:289] Couldn't get 
configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 16 18:30:31.362532 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:31.362514 2569 projected.go:194] Error preparing data for projected volume kube-api-access-n975s for pod openshift-network-diagnostics/network-check-target-8mjdj: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:30:31.362532 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.362513 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/718409d2-012e-47ae-bb45-151ceb86feb0-klusterlet-config\") pod \"klusterlet-addon-workmgr-6c84f9c746-flvss\" (UID: \"718409d2-012e-47ae-bb45-151ceb86feb0\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6c84f9c746-flvss" Apr 16 18:30:31.362856 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:31.362569 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1b6e68a-5279-411c-ba1c-fd6c274af91f-kube-api-access-n975s podName:c1b6e68a-5279-411c-ba1c-fd6c274af91f nodeName:}" failed. No retries permitted until 2026-04-16 18:31:03.362553306 +0000 UTC m=+66.254870664 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-n975s" (UniqueName: "kubernetes.io/projected/c1b6e68a-5279-411c-ba1c-fd6c274af91f-kube-api-access-n975s") pod "network-check-target-8mjdj" (UID: "c1b6e68a-5279-411c-ba1c-fd6c274af91f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 16 18:30:31.362856 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.362622 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ocpservice-ca\" (UniqueName: \"kubernetes.io/configmap/df4a1222-1cb9-4e57-890c-542f990453e1-ocpservice-ca\") pod \"cluster-proxy-proxy-agent-56cd457576-rnm9d\" (UID: \"df4a1222-1cb9-4e57-890c-542f990453e1\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-56cd457576-rnm9d" Apr 16 18:30:31.362856 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:31.362639 2569 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 18:30:31.362856 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:31.362698 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/077542aa-4ef3-4409-a13f-1196cff06904-cert podName:077542aa-4ef3-4409-a13f-1196cff06904 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:31.862682782 +0000 UTC m=+34.755000144 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/077542aa-4ef3-4409-a13f-1196cff06904-cert") pod "ingress-canary-hv956" (UID: "077542aa-4ef3-4409-a13f-1196cff06904") : secret "canary-serving-cert" not found Apr 16 18:30:31.362856 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.362732 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-config-volume\") pod \"dns-default-bhlgn\" (UID: \"3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6\") " pod="openshift-dns/dns-default-bhlgn" Apr 16 18:30:31.364320 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.364299 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hub-kubeconfig\" (UniqueName: \"kubernetes.io/secret/df4a1222-1cb9-4e57-890c-542f990453e1-hub-kubeconfig\") pod \"cluster-proxy-proxy-agent-56cd457576-rnm9d\" (UID: \"df4a1222-1cb9-4e57-890c-542f990453e1\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-56cd457576-rnm9d" Apr 16 18:30:31.364435 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.364402 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hub\" (UniqueName: \"kubernetes.io/secret/df4a1222-1cb9-4e57-890c-542f990453e1-hub\") pod \"cluster-proxy-proxy-agent-56cd457576-rnm9d\" (UID: \"df4a1222-1cb9-4e57-890c-542f990453e1\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-56cd457576-rnm9d" Apr 16 18:30:31.365097 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.365076 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"klusterlet-config\" (UniqueName: \"kubernetes.io/secret/718409d2-012e-47ae-bb45-151ceb86feb0-klusterlet-config\") pod \"klusterlet-addon-workmgr-6c84f9c746-flvss\" (UID: \"718409d2-012e-47ae-bb45-151ceb86feb0\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6c84f9c746-flvss" Apr 16 18:30:31.370286 ip-10-0-142-225 
kubenswrapper[2569]: I0416 18:30:31.370261 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-proxy-server-cert\" (UniqueName: \"kubernetes.io/secret/df4a1222-1cb9-4e57-890c-542f990453e1-service-proxy-server-cert\") pod \"cluster-proxy-proxy-agent-56cd457576-rnm9d\" (UID: \"df4a1222-1cb9-4e57-890c-542f990453e1\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-56cd457576-rnm9d" Apr 16 18:30:31.370416 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.370320 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca\" (UniqueName: \"kubernetes.io/secret/df4a1222-1cb9-4e57-890c-542f990453e1-ca\") pod \"cluster-proxy-proxy-agent-56cd457576-rnm9d\" (UID: \"df4a1222-1cb9-4e57-890c-542f990453e1\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-56cd457576-rnm9d" Apr 16 18:30:31.375645 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.375617 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p28j5\" (UniqueName: \"kubernetes.io/projected/718409d2-012e-47ae-bb45-151ceb86feb0-kube-api-access-p28j5\") pod \"klusterlet-addon-workmgr-6c84f9c746-flvss\" (UID: \"718409d2-012e-47ae-bb45-151ceb86feb0\") " pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6c84f9c746-flvss" Apr 16 18:30:31.376269 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.376240 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p698s\" (UniqueName: \"kubernetes.io/projected/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-kube-api-access-p698s\") pod \"dns-default-bhlgn\" (UID: \"3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6\") " pod="openshift-dns/dns-default-bhlgn" Apr 16 18:30:31.378221 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.378201 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz6fn\" (UniqueName: 
\"kubernetes.io/projected/df4a1222-1cb9-4e57-890c-542f990453e1-kube-api-access-hz6fn\") pod \"cluster-proxy-proxy-agent-56cd457576-rnm9d\" (UID: \"df4a1222-1cb9-4e57-890c-542f990453e1\") " pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-56cd457576-rnm9d" Apr 16 18:30:31.383599 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.383577 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7d54cdb6c7-8zskk" Apr 16 18:30:31.384727 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.384703 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mbtj\" (UniqueName: \"kubernetes.io/projected/077542aa-4ef3-4409-a13f-1196cff06904-kube-api-access-5mbtj\") pod \"ingress-canary-hv956\" (UID: \"077542aa-4ef3-4409-a13f-1196cff06904\") " pod="openshift-ingress-canary/ingress-canary-hv956" Apr 16 18:30:31.392414 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.392397 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6c84f9c746-flvss" Apr 16 18:30:31.411018 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.410988 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-56cd457576-rnm9d" Apr 16 18:30:31.553114 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.553040 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8mjdj" Apr 16 18:30:31.553315 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.553048 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/global-pull-secret-syncer-8rfj7" Apr 16 18:30:31.553315 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.553185 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hph" Apr 16 18:30:31.555974 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.555941 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-wddp7\"" Apr 16 18:30:31.556113 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.556091 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Apr 16 18:30:31.556113 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.556103 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Apr 16 18:30:31.556252 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.556240 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Apr 16 18:30:31.556644 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.556623 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kube-system\"/\"original-pull-secret\"" Apr 16 18:30:31.557636 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.557613 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-pk24s\"" Apr 16 18:30:31.765985 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.765946 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c6572ec9-4256-4030-a8db-98573ded7d80-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-m422p\" (UID: \"c6572ec9-4256-4030-a8db-98573ded7d80\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-m422p" Apr 16 18:30:31.766165 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.766016 2569 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-registry-tls\") pod \"image-registry-769c8c769b-pr8h9\" (UID: \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " pod="openshift-image-registry/image-registry-769c8c769b-pr8h9" Apr 16 18:30:31.766165 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:31.766136 2569 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Apr 16 18:30:31.766260 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:31.766196 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c6572ec9-4256-4030-a8db-98573ded7d80-networking-console-plugin-cert podName:c6572ec9-4256-4030-a8db-98573ded7d80 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:32.766181357 +0000 UTC m=+35.658498705 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/c6572ec9-4256-4030-a8db-98573ded7d80-networking-console-plugin-cert") pod "networking-console-plugin-5cb6cf4cb4-m422p" (UID: "c6572ec9-4256-4030-a8db-98573ded7d80") : secret "networking-console-plugin-cert" not found Apr 16 18:30:31.766260 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:31.766138 2569 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 16 18:30:31.766260 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:31.766245 2569 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-769c8c769b-pr8h9: secret "image-registry-tls" not found Apr 16 18:30:31.766413 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:31.766285 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-registry-tls 
podName:b0479613-7fe1-4e4f-8f5b-6f46165c0dd7 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:32.766274181 +0000 UTC m=+35.658591525 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-registry-tls") pod "image-registry-769c8c769b-pr8h9" (UID: "b0479613-7fe1-4e4f-8f5b-6f46165c0dd7") : secret "image-registry-tls" not found Apr 16 18:30:31.866460 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.866371 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-metrics-tls\") pod \"dns-default-bhlgn\" (UID: \"3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6\") " pod="openshift-dns/dns-default-bhlgn" Apr 16 18:30:31.866460 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:31.866455 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/077542aa-4ef3-4409-a13f-1196cff06904-cert\") pod \"ingress-canary-hv956\" (UID: \"077542aa-4ef3-4409-a13f-1196cff06904\") " pod="openshift-ingress-canary/ingress-canary-hv956" Apr 16 18:30:31.866682 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:31.866516 2569 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 18:30:31.866682 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:31.866593 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-metrics-tls podName:3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:32.866573019 +0000 UTC m=+35.758890362 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-metrics-tls") pod "dns-default-bhlgn" (UID: "3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6") : secret "dns-default-metrics-tls" not found Apr 16 18:30:31.866682 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:31.866607 2569 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 18:30:31.866682 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:31.866669 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/077542aa-4ef3-4409-a13f-1196cff06904-cert podName:077542aa-4ef3-4409-a13f-1196cff06904 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:32.866652961 +0000 UTC m=+35.758970304 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/077542aa-4ef3-4409-a13f-1196cff06904-cert") pod "ingress-canary-hv956" (UID: "077542aa-4ef3-4409-a13f-1196cff06904") : secret "canary-serving-cert" not found Apr 16 18:30:32.271447 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:32.270635 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/cluster-proxy-proxy-agent-56cd457576-rnm9d"] Apr 16 18:30:32.274724 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:32.274664 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/klusterlet-addon-workmgr-6c84f9c746-flvss"] Apr 16 18:30:32.292328 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:32.292307 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7d54cdb6c7-8zskk"] Apr 16 18:30:32.414307 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:30:32.414275 2569 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf4a1222_1cb9_4e57_890c_542f990453e1.slice/crio-d86800928d15a68adef584360c62a84db60b8b7cf84351893af4303d5e4febf8 WatchSource:0}: Error finding container d86800928d15a68adef584360c62a84db60b8b7cf84351893af4303d5e4febf8: Status 404 returned error can't find the container with id d86800928d15a68adef584360c62a84db60b8b7cf84351893af4303d5e4febf8 Apr 16 18:30:32.414690 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:30:32.414654 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod718409d2_012e_47ae_bb45_151ceb86feb0.slice/crio-ba26c2931c6f37e11f2e632089455a17fada9ef486f7a1bca56824df48e09be7 WatchSource:0}: Error finding container ba26c2931c6f37e11f2e632089455a17fada9ef486f7a1bca56824df48e09be7: Status 404 returned error can't find the container with id ba26c2931c6f37e11f2e632089455a17fada9ef486f7a1bca56824df48e09be7 Apr 16 18:30:32.415576 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:30:32.415531 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb103d799_6b1c_4451_9de5_97e25dd04337.slice/crio-12ed3f156d2b6c4be381dca6b16667e4091eaf4f1c10257a9c1d7c0ee09e95a4 WatchSource:0}: Error finding container 12ed3f156d2b6c4be381dca6b16667e4091eaf4f1c10257a9c1d7c0ee09e95a4: Status 404 returned error can't find the container with id 12ed3f156d2b6c4be381dca6b16667e4091eaf4f1c10257a9c1d7c0ee09e95a4 Apr 16 18:30:32.674809 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:32.674741 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-56cd457576-rnm9d" event={"ID":"df4a1222-1cb9-4e57-890c-542f990453e1","Type":"ContainerStarted","Data":"d86800928d15a68adef584360c62a84db60b8b7cf84351893af4303d5e4febf8"} Apr 16 18:30:32.675799 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:32.675772 2569 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7d54cdb6c7-8zskk" event={"ID":"b103d799-6b1c-4451-9de5-97e25dd04337","Type":"ContainerStarted","Data":"12ed3f156d2b6c4be381dca6b16667e4091eaf4f1c10257a9c1d7c0ee09e95a4"} Apr 16 18:30:32.678411 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:32.678361 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kmhkm" event={"ID":"9494cb1a-9b64-48a6-ae24-14717ce0b8f0","Type":"ContainerStarted","Data":"c111b54e26bc6a2a2c486d09173c5685969342379cefc5811bfd62a595df5c46"} Apr 16 18:30:32.679460 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:32.679437 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6c84f9c746-flvss" event={"ID":"718409d2-012e-47ae-bb45-151ceb86feb0","Type":"ContainerStarted","Data":"ba26c2931c6f37e11f2e632089455a17fada9ef486f7a1bca56824df48e09be7"} Apr 16 18:30:32.775326 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:32.775300 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c6572ec9-4256-4030-a8db-98573ded7d80-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-m422p\" (UID: \"c6572ec9-4256-4030-a8db-98573ded7d80\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-m422p" Apr 16 18:30:32.775489 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:32.775464 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-registry-tls\") pod \"image-registry-769c8c769b-pr8h9\" (UID: \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " pod="openshift-image-registry/image-registry-769c8c769b-pr8h9" Apr 16 18:30:32.775601 ip-10-0-142-225 kubenswrapper[2569]: E0416 
18:30:32.775550 2569 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Apr 16 18:30:32.775655 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:32.775612 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c6572ec9-4256-4030-a8db-98573ded7d80-networking-console-plugin-cert podName:c6572ec9-4256-4030-a8db-98573ded7d80 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:34.775597485 +0000 UTC m=+37.667914827 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/c6572ec9-4256-4030-a8db-98573ded7d80-networking-console-plugin-cert") pod "networking-console-plugin-5cb6cf4cb4-m422p" (UID: "c6572ec9-4256-4030-a8db-98573ded7d80") : secret "networking-console-plugin-cert" not found Apr 16 18:30:32.775655 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:32.775623 2569 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 16 18:30:32.775655 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:32.775640 2569 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-769c8c769b-pr8h9: secret "image-registry-tls" not found Apr 16 18:30:32.775770 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:32.775688 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-registry-tls podName:b0479613-7fe1-4e4f-8f5b-6f46165c0dd7 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:34.775671893 +0000 UTC m=+37.667989259 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-registry-tls") pod "image-registry-769c8c769b-pr8h9" (UID: "b0479613-7fe1-4e4f-8f5b-6f46165c0dd7") : secret "image-registry-tls" not found Apr 16 18:30:32.876710 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:32.876680 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/6ca02c8d-5554-44e4-9884-f9e0bcd462ed-original-pull-secret\") pod \"global-pull-secret-syncer-8rfj7\" (UID: \"6ca02c8d-5554-44e4-9884-f9e0bcd462ed\") " pod="kube-system/global-pull-secret-syncer-8rfj7" Apr 16 18:30:32.876827 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:32.876749 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-metrics-tls\") pod \"dns-default-bhlgn\" (UID: \"3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6\") " pod="openshift-dns/dns-default-bhlgn" Apr 16 18:30:32.876827 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:32.876794 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/077542aa-4ef3-4409-a13f-1196cff06904-cert\") pod \"ingress-canary-hv956\" (UID: \"077542aa-4ef3-4409-a13f-1196cff06904\") " pod="openshift-ingress-canary/ingress-canary-hv956" Apr 16 18:30:32.876933 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:32.876913 2569 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 18:30:32.876979 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:32.876933 2569 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 18:30:32.876979 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:32.876975 2569 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/077542aa-4ef3-4409-a13f-1196cff06904-cert podName:077542aa-4ef3-4409-a13f-1196cff06904 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:34.876962808 +0000 UTC m=+37.769280151 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/077542aa-4ef3-4409-a13f-1196cff06904-cert") pod "ingress-canary-hv956" (UID: "077542aa-4ef3-4409-a13f-1196cff06904") : secret "canary-serving-cert" not found Apr 16 18:30:32.877061 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:32.876987 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-metrics-tls podName:3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:34.876980601 +0000 UTC m=+37.769297943 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-metrics-tls") pod "dns-default-bhlgn" (UID: "3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6") : secret "dns-default-metrics-tls" not found Apr 16 18:30:32.880045 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:32.880022 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"original-pull-secret\" (UniqueName: \"kubernetes.io/secret/6ca02c8d-5554-44e4-9884-f9e0bcd462ed-original-pull-secret\") pod \"global-pull-secret-syncer-8rfj7\" (UID: \"6ca02c8d-5554-44e4-9884-f9e0bcd462ed\") " pod="kube-system/global-pull-secret-syncer-8rfj7" Apr 16 18:30:33.074712 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:33.074634 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/global-pull-secret-syncer-8rfj7" Apr 16 18:30:33.233399 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:33.230603 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kube-system/global-pull-secret-syncer-8rfj7"] Apr 16 18:30:33.234260 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:30:33.234038 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ca02c8d_5554_44e4_9884_f9e0bcd462ed.slice/crio-d0ebc609506f425b5b1e8d918c60d0b378286605a0589b4c7682ca9f68af77db WatchSource:0}: Error finding container d0ebc609506f425b5b1e8d918c60d0b378286605a0589b4c7682ca9f68af77db: Status 404 returned error can't find the container with id d0ebc609506f425b5b1e8d918c60d0b378286605a0589b4c7682ca9f68af77db Apr 16 18:30:33.688440 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:33.688406 2569 generic.go:358] "Generic (PLEG): container finished" podID="9494cb1a-9b64-48a6-ae24-14717ce0b8f0" containerID="c111b54e26bc6a2a2c486d09173c5685969342379cefc5811bfd62a595df5c46" exitCode=0 Apr 16 18:30:33.688983 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:33.688516 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kmhkm" event={"ID":"9494cb1a-9b64-48a6-ae24-14717ce0b8f0","Type":"ContainerDied","Data":"c111b54e26bc6a2a2c486d09173c5685969342379cefc5811bfd62a595df5c46"} Apr 16 18:30:33.690801 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:33.690772 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-8rfj7" event={"ID":"6ca02c8d-5554-44e4-9884-f9e0bcd462ed","Type":"ContainerStarted","Data":"d0ebc609506f425b5b1e8d918c60d0b378286605a0589b4c7682ca9f68af77db"} Apr 16 18:30:34.699269 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:34.699007 2569 generic.go:358] "Generic (PLEG): container finished" podID="9494cb1a-9b64-48a6-ae24-14717ce0b8f0" 
containerID="14b3730df16da5fa601808869004c6155393add50c5eb3c89ae1181f2f30c2f6" exitCode=0 Apr 16 18:30:34.699878 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:34.699345 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kmhkm" event={"ID":"9494cb1a-9b64-48a6-ae24-14717ce0b8f0","Type":"ContainerDied","Data":"14b3730df16da5fa601808869004c6155393add50c5eb3c89ae1181f2f30c2f6"} Apr 16 18:30:34.798107 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:34.798074 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-registry-tls\") pod \"image-registry-769c8c769b-pr8h9\" (UID: \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " pod="openshift-image-registry/image-registry-769c8c769b-pr8h9" Apr 16 18:30:34.798289 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:34.798202 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c6572ec9-4256-4030-a8db-98573ded7d80-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-m422p\" (UID: \"c6572ec9-4256-4030-a8db-98573ded7d80\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-m422p" Apr 16 18:30:34.798360 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:34.798297 2569 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Apr 16 18:30:34.798360 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:34.798345 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c6572ec9-4256-4030-a8db-98573ded7d80-networking-console-plugin-cert podName:c6572ec9-4256-4030-a8db-98573ded7d80 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:38.798331399 +0000 UTC m=+41.690648742 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/c6572ec9-4256-4030-a8db-98573ded7d80-networking-console-plugin-cert") pod "networking-console-plugin-5cb6cf4cb4-m422p" (UID: "c6572ec9-4256-4030-a8db-98573ded7d80") : secret "networking-console-plugin-cert" not found Apr 16 18:30:34.798847 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:34.798825 2569 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found Apr 16 18:30:34.798847 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:34.798851 2569 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-769c8c769b-pr8h9: secret "image-registry-tls" not found Apr 16 18:30:34.799023 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:34.798903 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-registry-tls podName:b0479613-7fe1-4e4f-8f5b-6f46165c0dd7 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:38.79888703 +0000 UTC m=+41.691204374 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-registry-tls") pod "image-registry-769c8c769b-pr8h9" (UID: "b0479613-7fe1-4e4f-8f5b-6f46165c0dd7") : secret "image-registry-tls" not found Apr 16 18:30:34.898970 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:34.898915 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/077542aa-4ef3-4409-a13f-1196cff06904-cert\") pod \"ingress-canary-hv956\" (UID: \"077542aa-4ef3-4409-a13f-1196cff06904\") " pod="openshift-ingress-canary/ingress-canary-hv956" Apr 16 18:30:34.899139 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:34.899028 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-metrics-tls\") pod \"dns-default-bhlgn\" (UID: \"3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6\") " pod="openshift-dns/dns-default-bhlgn" Apr 16 18:30:34.899139 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:34.899068 2569 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 18:30:34.899264 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:34.899143 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/077542aa-4ef3-4409-a13f-1196cff06904-cert podName:077542aa-4ef3-4409-a13f-1196cff06904 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:38.899124074 +0000 UTC m=+41.791441422 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/077542aa-4ef3-4409-a13f-1196cff06904-cert") pod "ingress-canary-hv956" (UID: "077542aa-4ef3-4409-a13f-1196cff06904") : secret "canary-serving-cert" not found Apr 16 18:30:34.899264 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:34.899162 2569 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 18:30:34.899264 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:34.899234 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-metrics-tls podName:3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:38.899202169 +0000 UTC m=+41.791519527 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-metrics-tls") pod "dns-default-bhlgn" (UID: "3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6") : secret "dns-default-metrics-tls" not found Apr 16 18:30:38.829969 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:38.829930 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c6572ec9-4256-4030-a8db-98573ded7d80-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-m422p\" (UID: \"c6572ec9-4256-4030-a8db-98573ded7d80\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-m422p" Apr 16 18:30:38.830502 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:38.829997 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-registry-tls\") pod \"image-registry-769c8c769b-pr8h9\" (UID: \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " 
pod="openshift-image-registry/image-registry-769c8c769b-pr8h9"
Apr 16 18:30:38.830502 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:38.830081 2569 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found
Apr 16 18:30:38.830502 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:38.830146 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c6572ec9-4256-4030-a8db-98573ded7d80-networking-console-plugin-cert podName:c6572ec9-4256-4030-a8db-98573ded7d80 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:46.830128676 +0000 UTC m=+49.722446030 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/c6572ec9-4256-4030-a8db-98573ded7d80-networking-console-plugin-cert") pod "networking-console-plugin-5cb6cf4cb4-m422p" (UID: "c6572ec9-4256-4030-a8db-98573ded7d80") : secret "networking-console-plugin-cert" not found
Apr 16 18:30:38.830502 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:38.830186 2569 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found
Apr 16 18:30:38.830502 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:38.830208 2569 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-769c8c769b-pr8h9: secret "image-registry-tls" not found
Apr 16 18:30:38.830502 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:38.830269 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-registry-tls podName:b0479613-7fe1-4e4f-8f5b-6f46165c0dd7 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:46.830247665 +0000 UTC m=+49.722565009 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-registry-tls") pod "image-registry-769c8c769b-pr8h9" (UID: "b0479613-7fe1-4e4f-8f5b-6f46165c0dd7") : secret "image-registry-tls" not found
Apr 16 18:30:38.930904 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:38.930874 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/077542aa-4ef3-4409-a13f-1196cff06904-cert\") pod \"ingress-canary-hv956\" (UID: \"077542aa-4ef3-4409-a13f-1196cff06904\") " pod="openshift-ingress-canary/ingress-canary-hv956"
Apr 16 18:30:38.931069 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:38.930966 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-metrics-tls\") pod \"dns-default-bhlgn\" (UID: \"3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6\") " pod="openshift-dns/dns-default-bhlgn"
Apr 16 18:30:38.931069 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:38.931030 2569 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Apr 16 18:30:38.931069 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:38.931056 2569 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found
Apr 16 18:30:38.931203 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:38.931102 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/077542aa-4ef3-4409-a13f-1196cff06904-cert podName:077542aa-4ef3-4409-a13f-1196cff06904 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:46.931082663 +0000 UTC m=+49.823400025 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/077542aa-4ef3-4409-a13f-1196cff06904-cert") pod "ingress-canary-hv956" (UID: "077542aa-4ef3-4409-a13f-1196cff06904") : secret "canary-serving-cert" not found
Apr 16 18:30:38.931203 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:38.931121 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-metrics-tls podName:3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6 nodeName:}" failed. No retries permitted until 2026-04-16 18:30:46.931113253 +0000 UTC m=+49.823430602 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-metrics-tls") pod "dns-default-bhlgn" (UID: "3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6") : secret "dns-default-metrics-tls" not found
Apr 16 18:30:40.712498 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:40.712457 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kube-system/global-pull-secret-syncer-8rfj7" event={"ID":"6ca02c8d-5554-44e4-9884-f9e0bcd462ed","Type":"ContainerStarted","Data":"f97ceb85cdbd5208e2f56de116e6a1fb7445c3314e5c3cef5644322864146075"}
Apr 16 18:30:40.714828 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:40.714797 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-56cd457576-rnm9d" event={"ID":"df4a1222-1cb9-4e57-890c-542f990453e1","Type":"ContainerStarted","Data":"a5ce68d3368c8914001daffd3279d5824034fb54fe849f2065407a9ab2d232ae"}
Apr 16 18:30:40.718160 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:40.718137 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kmhkm" event={"ID":"9494cb1a-9b64-48a6-ae24-14717ce0b8f0","Type":"ContainerStarted","Data":"5c8f32735756bfa9c2f20dd538fc2c042ccc213efa0c62bc711119b8bed74cc3"}
Apr 16 18:30:40.719309 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:40.719292 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6c84f9c746-flvss" event={"ID":"718409d2-012e-47ae-bb45-151ceb86feb0","Type":"ContainerStarted","Data":"a4e96b43f34832dd62985f1959e29cdc0441f7309000117870e90a7298054f94"}
Apr 16 18:30:40.719478 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:40.719462 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6c84f9c746-flvss"
Apr 16 18:30:40.720530 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:40.720507 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7d54cdb6c7-8zskk" event={"ID":"b103d799-6b1c-4451-9de5-97e25dd04337","Type":"ContainerStarted","Data":"118d93eed304c151ccf2e456a381326966474ad58c0063d719e4b7a3d105eba6"}
Apr 16 18:30:40.721207 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:40.721190 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6c84f9c746-flvss"
Apr 16 18:30:40.728341 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:40.728303 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/global-pull-secret-syncer-8rfj7" podStartSLOduration=33.05900375 podStartE2EDuration="39.728292866s" podCreationTimestamp="2026-04-16 18:30:01 +0000 UTC" firstStartedPulling="2026-04-16 18:30:33.236427915 +0000 UTC m=+36.128745264" lastFinishedPulling="2026-04-16 18:30:39.905717024 +0000 UTC m=+42.798034380" observedRunningTime="2026-04-16 18:30:40.727570955 +0000 UTC m=+43.619888330" watchObservedRunningTime="2026-04-16 18:30:40.728292866 +0000 UTC m=+43.620610302"
Apr 16 18:30:40.744095 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:40.744032 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7d54cdb6c7-8zskk" podStartSLOduration=22.285175768 podStartE2EDuration="29.74401894s" podCreationTimestamp="2026-04-16 18:30:11 +0000 UTC" firstStartedPulling="2026-04-16 18:30:32.434170769 +0000 UTC m=+35.326488112" lastFinishedPulling="2026-04-16 18:30:39.89301393 +0000 UTC m=+42.785331284" observedRunningTime="2026-04-16 18:30:40.743964036 +0000 UTC m=+43.636281400" watchObservedRunningTime="2026-04-16 18:30:40.74401894 +0000 UTC m=+43.636336312"
Apr 16 18:30:40.765151 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:40.765108 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-kmhkm" podStartSLOduration=11.605201811 podStartE2EDuration="43.765095013s" podCreationTimestamp="2026-04-16 18:29:57 +0000 UTC" firstStartedPulling="2026-04-16 18:30:00.297654185 +0000 UTC m=+3.189971543" lastFinishedPulling="2026-04-16 18:30:32.457547394 +0000 UTC m=+35.349864745" observedRunningTime="2026-04-16 18:30:40.764019693 +0000 UTC m=+43.656337059" watchObservedRunningTime="2026-04-16 18:30:40.765095013 +0000 UTC m=+43.657412383"
Apr 16 18:30:43.728765 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:43.728719 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-56cd457576-rnm9d" event={"ID":"df4a1222-1cb9-4e57-890c-542f990453e1","Type":"ContainerStarted","Data":"45ba2df12d5c480b5a38b84d93be421f6e4a52acc3f68a2c7392f47284c12d90"}
Apr 16 18:30:43.729154 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:43.728768 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-56cd457576-rnm9d" event={"ID":"df4a1222-1cb9-4e57-890c-542f990453e1","Type":"ContainerStarted","Data":"453a62d4018fe2f4ec1cf78499357a9bd303facd6bf2ac0e810300ff21dc7b12"}
Apr 16 18:30:43.747368 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:43.747320 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-56cd457576-rnm9d" podStartSLOduration=22.503481792 podStartE2EDuration="32.747309312s" podCreationTimestamp="2026-04-16 18:30:11 +0000 UTC" firstStartedPulling="2026-04-16 18:30:32.434181035 +0000 UTC m=+35.326498382" lastFinishedPulling="2026-04-16 18:30:42.678008541 +0000 UTC m=+45.570325902" observedRunningTime="2026-04-16 18:30:43.746186555 +0000 UTC m=+46.638503920" watchObservedRunningTime="2026-04-16 18:30:43.747309312 +0000 UTC m=+46.639626676"
Apr 16 18:30:43.747893 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:43.747860 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6c84f9c746-flvss" podStartSLOduration=25.288802583 podStartE2EDuration="32.747852717s" podCreationTimestamp="2026-04-16 18:30:11 +0000 UTC" firstStartedPulling="2026-04-16 18:30:32.43413537 +0000 UTC m=+35.326452719" lastFinishedPulling="2026-04-16 18:30:39.893185492 +0000 UTC m=+42.785502853" observedRunningTime="2026-04-16 18:30:40.779809288 +0000 UTC m=+43.672126662" watchObservedRunningTime="2026-04-16 18:30:43.747852717 +0000 UTC m=+46.640170081"
Apr 16 18:30:46.893365 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:46.893330 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c6572ec9-4256-4030-a8db-98573ded7d80-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-m422p\" (UID: \"c6572ec9-4256-4030-a8db-98573ded7d80\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-m422p"
Apr 16 18:30:46.893365 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:46.893390 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-registry-tls\") pod \"image-registry-769c8c769b-pr8h9\" (UID: \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " pod="openshift-image-registry/image-registry-769c8c769b-pr8h9"
Apr 16 18:30:46.893767 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:46.893474 2569 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found
Apr 16 18:30:46.893767 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:46.893503 2569 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found
Apr 16 18:30:46.893767 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:46.893513 2569 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-769c8c769b-pr8h9: secret "image-registry-tls" not found
Apr 16 18:30:46.893767 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:46.893534 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c6572ec9-4256-4030-a8db-98573ded7d80-networking-console-plugin-cert podName:c6572ec9-4256-4030-a8db-98573ded7d80 nodeName:}" failed. No retries permitted until 2026-04-16 18:31:02.893518143 +0000 UTC m=+65.785835486 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/c6572ec9-4256-4030-a8db-98573ded7d80-networking-console-plugin-cert") pod "networking-console-plugin-5cb6cf4cb4-m422p" (UID: "c6572ec9-4256-4030-a8db-98573ded7d80") : secret "networking-console-plugin-cert" not found
Apr 16 18:30:46.893767 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:46.893554 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-registry-tls podName:b0479613-7fe1-4e4f-8f5b-6f46165c0dd7 nodeName:}" failed. No retries permitted until 2026-04-16 18:31:02.893543873 +0000 UTC m=+65.785861216 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-registry-tls") pod "image-registry-769c8c769b-pr8h9" (UID: "b0479613-7fe1-4e4f-8f5b-6f46165c0dd7") : secret "image-registry-tls" not found
Apr 16 18:30:46.993855 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:46.993833 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-metrics-tls\") pod \"dns-default-bhlgn\" (UID: \"3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6\") " pod="openshift-dns/dns-default-bhlgn"
Apr 16 18:30:46.993975 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:46.993917 2569 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found
Apr 16 18:30:46.993975 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:46.993940 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/077542aa-4ef3-4409-a13f-1196cff06904-cert\") pod \"ingress-canary-hv956\" (UID: \"077542aa-4ef3-4409-a13f-1196cff06904\") " pod="openshift-ingress-canary/ingress-canary-hv956"
Apr 16 18:30:46.993975 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:46.993947 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-metrics-tls podName:3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6 nodeName:}" failed. No retries permitted until 2026-04-16 18:31:02.993938372 +0000 UTC m=+65.886255715 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-metrics-tls") pod "dns-default-bhlgn" (UID: "3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6") : secret "dns-default-metrics-tls" not found
Apr 16 18:30:46.994076 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:46.994044 2569 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Apr 16 18:30:46.994108 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:30:46.994087 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/077542aa-4ef3-4409-a13f-1196cff06904-cert podName:077542aa-4ef3-4409-a13f-1196cff06904 nodeName:}" failed. No retries permitted until 2026-04-16 18:31:02.994074862 +0000 UTC m=+65.886392223 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/077542aa-4ef3-4409-a13f-1196cff06904-cert") pod "ingress-canary-hv956" (UID: "077542aa-4ef3-4409-a13f-1196cff06904") : secret "canary-serving-cert" not found
Apr 16 18:30:55.668666 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:30:55.668641 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-m5gbp"
Apr 16 18:31:02.914651 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:31:02.914613 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c6572ec9-4256-4030-a8db-98573ded7d80-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-m422p\" (UID: \"c6572ec9-4256-4030-a8db-98573ded7d80\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-m422p"
Apr 16 18:31:02.915003 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:31:02.914665 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-registry-tls\") pod \"image-registry-769c8c769b-pr8h9\" (UID: \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " pod="openshift-image-registry/image-registry-769c8c769b-pr8h9"
Apr 16 18:31:02.915003 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:31:02.914760 2569 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found
Apr 16 18:31:02.915003 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:31:02.914771 2569 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-769c8c769b-pr8h9: secret "image-registry-tls" not found
Apr 16 18:31:02.915003 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:31:02.914759 2569 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found
Apr 16 18:31:02.915003 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:31:02.914826 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-registry-tls podName:b0479613-7fe1-4e4f-8f5b-6f46165c0dd7 nodeName:}" failed. No retries permitted until 2026-04-16 18:31:34.914807953 +0000 UTC m=+97.807125296 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-registry-tls") pod "image-registry-769c8c769b-pr8h9" (UID: "b0479613-7fe1-4e4f-8f5b-6f46165c0dd7") : secret "image-registry-tls" not found
Apr 16 18:31:02.915003 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:31:02.914874 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c6572ec9-4256-4030-a8db-98573ded7d80-networking-console-plugin-cert podName:c6572ec9-4256-4030-a8db-98573ded7d80 nodeName:}" failed. No retries permitted until 2026-04-16 18:31:34.914857846 +0000 UTC m=+97.807175188 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/c6572ec9-4256-4030-a8db-98573ded7d80-networking-console-plugin-cert") pod "networking-console-plugin-5cb6cf4cb4-m422p" (UID: "c6572ec9-4256-4030-a8db-98573ded7d80") : secret "networking-console-plugin-cert" not found
Apr 16 18:31:03.015244 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:31:03.015211 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/077542aa-4ef3-4409-a13f-1196cff06904-cert\") pod \"ingress-canary-hv956\" (UID: \"077542aa-4ef3-4409-a13f-1196cff06904\") " pod="openshift-ingress-canary/ingress-canary-hv956"
Apr 16 18:31:03.015463 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:31:03.015289 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-metrics-tls\") pod \"dns-default-bhlgn\" (UID: \"3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6\") " pod="openshift-dns/dns-default-bhlgn"
Apr 16 18:31:03.015463 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:31:03.015368 2569 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found
Apr 16 18:31:03.015463 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:31:03.015395 2569 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Apr 16 18:31:03.015463 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:31:03.015445 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-metrics-tls podName:3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6 nodeName:}" failed. No retries permitted until 2026-04-16 18:31:35.015432446 +0000 UTC m=+97.907749789 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-metrics-tls") pod "dns-default-bhlgn" (UID: "3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6") : secret "dns-default-metrics-tls" not found
Apr 16 18:31:03.015463 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:31:03.015459 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/077542aa-4ef3-4409-a13f-1196cff06904-cert podName:077542aa-4ef3-4409-a13f-1196cff06904 nodeName:}" failed. No retries permitted until 2026-04-16 18:31:35.015453444 +0000 UTC m=+97.907770787 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/077542aa-4ef3-4409-a13f-1196cff06904-cert") pod "ingress-canary-hv956" (UID: "077542aa-4ef3-4409-a13f-1196cff06904") : secret "canary-serving-cert" not found
Apr 16 18:31:03.317072 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:31:03.317044 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30-metrics-certs\") pod \"network-metrics-daemon-p2hph\" (UID: \"dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30\") " pod="openshift-multus/network-metrics-daemon-p2hph"
Apr 16 18:31:03.319599 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:31:03.319581 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Apr 16 18:31:03.327616 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:31:03.327591 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Apr 16 18:31:03.327763 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:31:03.327656 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30-metrics-certs podName:dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30 nodeName:}" failed. No retries permitted until 2026-04-16 18:32:07.327636879 +0000 UTC m=+130.219954236 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30-metrics-certs") pod "network-metrics-daemon-p2hph" (UID: "dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30") : secret "metrics-daemon-secret" not found
Apr 16 18:31:03.418095 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:31:03.418061 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n975s\" (UniqueName: \"kubernetes.io/projected/c1b6e68a-5279-411c-ba1c-fd6c274af91f-kube-api-access-n975s\") pod \"network-check-target-8mjdj\" (UID: \"c1b6e68a-5279-411c-ba1c-fd6c274af91f\") " pod="openshift-network-diagnostics/network-check-target-8mjdj"
Apr 16 18:31:03.420832 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:31:03.420810 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Apr 16 18:31:03.430699 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:31:03.430674 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Apr 16 18:31:03.442519 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:31:03.442494 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n975s\" (UniqueName: \"kubernetes.io/projected/c1b6e68a-5279-411c-ba1c-fd6c274af91f-kube-api-access-n975s\") pod \"network-check-target-8mjdj\" (UID: \"c1b6e68a-5279-411c-ba1c-fd6c274af91f\") " pod="openshift-network-diagnostics/network-check-target-8mjdj"
Apr 16 18:31:03.670427 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:31:03.670345 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-diagnostics\"/\"default-dockercfg-pk24s\""
Apr 16 18:31:03.677492 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:31:03.677477 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-8mjdj"
Apr 16 18:31:03.786120 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:31:03.786093 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-8mjdj"]
Apr 16 18:31:03.789642 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:31:03.789617 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1b6e68a_5279_411c_ba1c_fd6c274af91f.slice/crio-426fa7c1e94635af3e3ff5b4b86b25ee3261c6f5a2fbf77fd321b5de4300764e WatchSource:0}: Error finding container 426fa7c1e94635af3e3ff5b4b86b25ee3261c6f5a2fbf77fd321b5de4300764e: Status 404 returned error can't find the container with id 426fa7c1e94635af3e3ff5b4b86b25ee3261c6f5a2fbf77fd321b5de4300764e
Apr 16 18:31:04.777528 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:31:04.777486 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-8mjdj" event={"ID":"c1b6e68a-5279-411c-ba1c-fd6c274af91f","Type":"ContainerStarted","Data":"426fa7c1e94635af3e3ff5b4b86b25ee3261c6f5a2fbf77fd321b5de4300764e"}
Apr 16 18:31:06.783855 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:31:06.783822 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-8mjdj" event={"ID":"c1b6e68a-5279-411c-ba1c-fd6c274af91f","Type":"ContainerStarted","Data":"da82ef0ad2f5393b5f20b005d90d2f28a3ebe906e7b9bd517330aa41d723c95b"}
Apr 16 18:31:06.784226 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:31:06.783968 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-8mjdj"
Apr 16 18:31:06.798806 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:31:06.798695 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-target-8mjdj" podStartSLOduration=67.187573667 podStartE2EDuration="1m9.798679262s" podCreationTimestamp="2026-04-16 18:29:57 +0000 UTC" firstStartedPulling="2026-04-16 18:31:03.791292046 +0000 UTC m=+66.683609388" lastFinishedPulling="2026-04-16 18:31:06.402397637 +0000 UTC m=+69.294714983" observedRunningTime="2026-04-16 18:31:06.798604554 +0000 UTC m=+69.690921920" watchObservedRunningTime="2026-04-16 18:31:06.798679262 +0000 UTC m=+69.690996627"
Apr 16 18:31:34.953675 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:31:34.953644 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c6572ec9-4256-4030-a8db-98573ded7d80-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-m422p\" (UID: \"c6572ec9-4256-4030-a8db-98573ded7d80\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-m422p"
Apr 16 18:31:34.954095 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:31:34.953687 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-registry-tls\") pod \"image-registry-769c8c769b-pr8h9\" (UID: \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " pod="openshift-image-registry/image-registry-769c8c769b-pr8h9"
Apr 16 18:31:34.954095 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:31:34.953771 2569 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found
Apr 16 18:31:34.954095 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:31:34.953771 2569 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found
Apr 16 18:31:34.954095 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:31:34.953780 2569 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-769c8c769b-pr8h9: secret "image-registry-tls" not found
Apr 16 18:31:34.954095 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:31:34.953826 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c6572ec9-4256-4030-a8db-98573ded7d80-networking-console-plugin-cert podName:c6572ec9-4256-4030-a8db-98573ded7d80 nodeName:}" failed. No retries permitted until 2026-04-16 18:32:38.953812502 +0000 UTC m=+161.846129844 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/c6572ec9-4256-4030-a8db-98573ded7d80-networking-console-plugin-cert") pod "networking-console-plugin-5cb6cf4cb4-m422p" (UID: "c6572ec9-4256-4030-a8db-98573ded7d80") : secret "networking-console-plugin-cert" not found
Apr 16 18:31:34.954095 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:31:34.953865 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-registry-tls podName:b0479613-7fe1-4e4f-8f5b-6f46165c0dd7 nodeName:}" failed. No retries permitted until 2026-04-16 18:32:38.953850131 +0000 UTC m=+161.846167474 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-registry-tls") pod "image-registry-769c8c769b-pr8h9" (UID: "b0479613-7fe1-4e4f-8f5b-6f46165c0dd7") : secret "image-registry-tls" not found
Apr 16 18:31:35.054276 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:31:35.054244 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-metrics-tls\") pod \"dns-default-bhlgn\" (UID: \"3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6\") " pod="openshift-dns/dns-default-bhlgn"
Apr 16 18:31:35.054464 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:31:35.054301 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/077542aa-4ef3-4409-a13f-1196cff06904-cert\") pod \"ingress-canary-hv956\" (UID: \"077542aa-4ef3-4409-a13f-1196cff06904\") " pod="openshift-ingress-canary/ingress-canary-hv956"
Apr 16 18:31:35.054464 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:31:35.054413 2569 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Apr 16 18:31:35.054551 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:31:35.054473 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/077542aa-4ef3-4409-a13f-1196cff06904-cert podName:077542aa-4ef3-4409-a13f-1196cff06904 nodeName:}" failed. No retries permitted until 2026-04-16 18:32:39.054460289 +0000 UTC m=+161.946777632 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/077542aa-4ef3-4409-a13f-1196cff06904-cert") pod "ingress-canary-hv956" (UID: "077542aa-4ef3-4409-a13f-1196cff06904") : secret "canary-serving-cert" not found
Apr 16 18:31:35.054551 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:31:35.054413 2569 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found
Apr 16 18:31:35.054551 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:31:35.054538 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-metrics-tls podName:3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6 nodeName:}" failed. No retries permitted until 2026-04-16 18:32:39.054525717 +0000 UTC m=+161.946843075 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-metrics-tls") pod "dns-default-bhlgn" (UID: "3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6") : secret "dns-default-metrics-tls" not found
Apr 16 18:31:37.788611 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:31:37.788582 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-8mjdj"
Apr 16 18:32:07.397647 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:07.397605 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30-metrics-certs\") pod \"network-metrics-daemon-p2hph\" (UID: \"dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30\") " pod="openshift-multus/network-metrics-daemon-p2hph"
Apr 16 18:32:07.398113 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:32:07.397754 2569 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Apr 16 18:32:07.398113 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:32:07.397821 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30-metrics-certs podName:dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30 nodeName:}" failed. No retries permitted until 2026-04-16 18:34:09.397804997 +0000 UTC m=+252.290122344 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30-metrics-certs") pod "network-metrics-daemon-p2hph" (UID: "dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30") : secret "metrics-daemon-secret" not found
Apr 16 18:32:32.961150 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:32.961118 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-z27vt_6b34a2b3-f5f1-4606-970d-5865032489f3/dns-node-resolver/0.log"
Apr 16 18:32:33.762217 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:33.762191 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-r2w68_e5f5f6fc-73bc-4bc1-a607-dda6c5bbb1a0/node-ca/0.log"
Apr 16 18:32:34.062301 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:32:34.062265 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[networking-console-plugin-cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-m422p" podUID="c6572ec9-4256-4030-a8db-98573ded7d80"
Apr 16 18:32:34.069354 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:32:34.069328 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[registry-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-image-registry/image-registry-769c8c769b-pr8h9" podUID="b0479613-7fe1-4e4f-8f5b-6f46165c0dd7"
Apr 16 18:32:34.133595 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:32:34.133573 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-ingress-canary/ingress-canary-hv956" podUID="077542aa-4ef3-4409-a13f-1196cff06904"
Apr 16 18:32:34.140675 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:32:34.140654 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[metrics-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-dns/dns-default-bhlgn" podUID="3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6"
Apr 16 18:32:34.580157 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:32:34.580118 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[metrics-certs], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-multus/network-metrics-daemon-p2hph" podUID="dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30"
Apr 16 18:32:34.988808 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:34.988778 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-bhlgn"
Apr 16 18:32:34.988976 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:34.988778 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-hv956"
Apr 16 18:32:34.988976 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:34.988778 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-769c8c769b-pr8h9"
Apr 16 18:32:34.989100 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:34.988778 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-m422p"
Apr 16 18:32:39.032702 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:39.032617 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c6572ec9-4256-4030-a8db-98573ded7d80-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-m422p\" (UID: \"c6572ec9-4256-4030-a8db-98573ded7d80\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-m422p"
Apr 16 18:32:39.032702 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:39.032674 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-registry-tls\") pod \"image-registry-769c8c769b-pr8h9\" (UID: \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " pod="openshift-image-registry/image-registry-769c8c769b-pr8h9"
Apr 16 18:32:39.033214 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:32:39.032757 2569 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found
Apr 16 18:32:39.033214 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:32:39.032785 2569 projected.go:264] Couldn't get secret openshift-image-registry/image-registry-tls: secret "image-registry-tls" not found
Apr 16 18:32:39.033214 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:32:39.032795 2569 projected.go:194] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-769c8c769b-pr8h9: secret "image-registry-tls" not found
Apr 16 18:32:39.033214 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:32:39.032822 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c6572ec9-4256-4030-a8db-98573ded7d80-networking-console-plugin-cert podName:c6572ec9-4256-4030-a8db-98573ded7d80 nodeName:}"
failed. No retries permitted until 2026-04-16 18:34:41.032807874 +0000 UTC m=+283.925125217 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/c6572ec9-4256-4030-a8db-98573ded7d80-networking-console-plugin-cert") pod "networking-console-plugin-5cb6cf4cb4-m422p" (UID: "c6572ec9-4256-4030-a8db-98573ded7d80") : secret "networking-console-plugin-cert" not found Apr 16 18:32:39.033214 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:32:39.032838 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-registry-tls podName:b0479613-7fe1-4e4f-8f5b-6f46165c0dd7 nodeName:}" failed. No retries permitted until 2026-04-16 18:34:41.032831012 +0000 UTC m=+283.925148355 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-registry-tls") pod "image-registry-769c8c769b-pr8h9" (UID: "b0479613-7fe1-4e4f-8f5b-6f46165c0dd7") : secret "image-registry-tls" not found Apr 16 18:32:39.133660 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:39.133629 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/077542aa-4ef3-4409-a13f-1196cff06904-cert\") pod \"ingress-canary-hv956\" (UID: \"077542aa-4ef3-4409-a13f-1196cff06904\") " pod="openshift-ingress-canary/ingress-canary-hv956" Apr 16 18:32:39.133800 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:39.133707 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-metrics-tls\") pod \"dns-default-bhlgn\" (UID: \"3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6\") " pod="openshift-dns/dns-default-bhlgn" Apr 16 18:32:39.133800 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:32:39.133760 2569 secret.go:189] Couldn't get 
secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Apr 16 18:32:39.133865 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:32:39.133801 2569 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Apr 16 18:32:39.133865 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:32:39.133817 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/077542aa-4ef3-4409-a13f-1196cff06904-cert podName:077542aa-4ef3-4409-a13f-1196cff06904 nodeName:}" failed. No retries permitted until 2026-04-16 18:34:41.133802949 +0000 UTC m=+284.026120291 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/077542aa-4ef3-4409-a13f-1196cff06904-cert") pod "ingress-canary-hv956" (UID: "077542aa-4ef3-4409-a13f-1196cff06904") : secret "canary-serving-cert" not found Apr 16 18:32:39.133865 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:32:39.133841 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-metrics-tls podName:3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6 nodeName:}" failed. No retries permitted until 2026-04-16 18:34:41.133826893 +0000 UTC m=+284.026144239 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-metrics-tls") pod "dns-default-bhlgn" (UID: "3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6") : secret "dns-default-metrics-tls" not found Apr 16 18:32:40.719929 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:40.719867 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6c84f9c746-flvss" podUID="718409d2-012e-47ae-bb45-151ceb86feb0" containerName="acm-agent" probeResult="failure" output="Get \"http://10.132.0.9:8000/readyz\": dial tcp 10.132.0.9:8000: connect: connection refused" Apr 16 18:32:41.003807 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:41.003718 2569 generic.go:358] "Generic (PLEG): container finished" podID="b103d799-6b1c-4451-9de5-97e25dd04337" containerID="118d93eed304c151ccf2e456a381326966474ad58c0063d719e4b7a3d105eba6" exitCode=255 Apr 16 18:32:41.003807 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:41.003792 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7d54cdb6c7-8zskk" event={"ID":"b103d799-6b1c-4451-9de5-97e25dd04337","Type":"ContainerDied","Data":"118d93eed304c151ccf2e456a381326966474ad58c0063d719e4b7a3d105eba6"} Apr 16 18:32:41.004133 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:41.004111 2569 scope.go:117] "RemoveContainer" containerID="118d93eed304c151ccf2e456a381326966474ad58c0063d719e4b7a3d105eba6" Apr 16 18:32:41.005021 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:41.005004 2569 generic.go:358] "Generic (PLEG): container finished" podID="718409d2-012e-47ae-bb45-151ceb86feb0" containerID="a4e96b43f34832dd62985f1959e29cdc0441f7309000117870e90a7298054f94" exitCode=1 Apr 16 18:32:41.005114 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:41.005043 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6c84f9c746-flvss" event={"ID":"718409d2-012e-47ae-bb45-151ceb86feb0","Type":"ContainerDied","Data":"a4e96b43f34832dd62985f1959e29cdc0441f7309000117870e90a7298054f94"} Apr 16 18:32:41.005347 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:41.005328 2569 scope.go:117] "RemoveContainer" containerID="a4e96b43f34832dd62985f1959e29cdc0441f7309000117870e90a7298054f94" Apr 16 18:32:41.384654 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:41.384576 2569 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7d54cdb6c7-8zskk" Apr 16 18:32:41.392808 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:41.392797 2569 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6c84f9c746-flvss" Apr 16 18:32:42.008293 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:42.008252 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6c84f9c746-flvss" event={"ID":"718409d2-012e-47ae-bb45-151ceb86feb0","Type":"ContainerStarted","Data":"d9276566ca6cab8732ae0f92e73e2a1fadb419cc13e1476e0309eb84b228b04e"} Apr 16 18:32:42.009047 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:42.009019 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6c84f9c746-flvss" Apr 16 18:32:42.010198 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:42.010175 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="open-cluster-management-agent-addon/klusterlet-addon-workmgr-6c84f9c746-flvss" Apr 16 18:32:42.012837 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:42.012807 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="open-cluster-management-agent-addon/managed-serviceaccount-addon-agent-7d54cdb6c7-8zskk" event={"ID":"b103d799-6b1c-4451-9de5-97e25dd04337","Type":"ContainerStarted","Data":"14e1b64b154e1aabc8cd8817803f6646bb1209d86d6c852e5c19c3391a44d48c"} Apr 16 18:32:46.552850 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:46.552818 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hph" Apr 16 18:32:58.434946 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:58.434917 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-runtime-extractor-8bw6b"] Apr 16 18:32:58.437924 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:58.437909 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-runtime-extractor-8bw6b" Apr 16 18:32:58.442187 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:58.442158 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"openshift-service-ca.crt\"" Apr 16 18:32:58.442329 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:58.442200 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-rbac-proxy\"" Apr 16 18:32:58.442329 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:58.442199 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-sa-dockercfg-ppqlw\"" Apr 16 18:32:58.442329 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:58.442220 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-insights\"/\"kube-root-ca.crt\"" Apr 16 18:32:58.442329 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:58.442236 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-insights\"/\"insights-runtime-extractor-tls\"" Apr 16 18:32:58.450696 ip-10-0-142-225 
kubenswrapper[2569]: I0416 18:32:58.450676 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-8bw6b"] Apr 16 18:32:58.585138 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:58.585108 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/64539562-3f22-45bd-b402-713b3d503522-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-8bw6b\" (UID: \"64539562-3f22-45bd-b402-713b3d503522\") " pod="openshift-insights/insights-runtime-extractor-8bw6b" Apr 16 18:32:58.585298 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:58.585160 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srg54\" (UniqueName: \"kubernetes.io/projected/64539562-3f22-45bd-b402-713b3d503522-kube-api-access-srg54\") pod \"insights-runtime-extractor-8bw6b\" (UID: \"64539562-3f22-45bd-b402-713b3d503522\") " pod="openshift-insights/insights-runtime-extractor-8bw6b" Apr 16 18:32:58.585298 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:58.585184 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/64539562-3f22-45bd-b402-713b3d503522-crio-socket\") pod \"insights-runtime-extractor-8bw6b\" (UID: \"64539562-3f22-45bd-b402-713b3d503522\") " pod="openshift-insights/insights-runtime-extractor-8bw6b" Apr 16 18:32:58.585298 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:58.585265 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/64539562-3f22-45bd-b402-713b3d503522-data-volume\") pod \"insights-runtime-extractor-8bw6b\" (UID: \"64539562-3f22-45bd-b402-713b3d503522\") " pod="openshift-insights/insights-runtime-extractor-8bw6b" Apr 16 18:32:58.585446 
ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:58.585297 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/64539562-3f22-45bd-b402-713b3d503522-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-8bw6b\" (UID: \"64539562-3f22-45bd-b402-713b3d503522\") " pod="openshift-insights/insights-runtime-extractor-8bw6b" Apr 16 18:32:58.685969 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:58.685885 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/64539562-3f22-45bd-b402-713b3d503522-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-8bw6b\" (UID: \"64539562-3f22-45bd-b402-713b3d503522\") " pod="openshift-insights/insights-runtime-extractor-8bw6b" Apr 16 18:32:58.685969 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:58.685947 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-srg54\" (UniqueName: \"kubernetes.io/projected/64539562-3f22-45bd-b402-713b3d503522-kube-api-access-srg54\") pod \"insights-runtime-extractor-8bw6b\" (UID: \"64539562-3f22-45bd-b402-713b3d503522\") " pod="openshift-insights/insights-runtime-extractor-8bw6b" Apr 16 18:32:58.686158 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:58.685978 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/64539562-3f22-45bd-b402-713b3d503522-crio-socket\") pod \"insights-runtime-extractor-8bw6b\" (UID: \"64539562-3f22-45bd-b402-713b3d503522\") " pod="openshift-insights/insights-runtime-extractor-8bw6b" Apr 16 18:32:58.686158 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:58.686009 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-volume\" (UniqueName: 
\"kubernetes.io/empty-dir/64539562-3f22-45bd-b402-713b3d503522-data-volume\") pod \"insights-runtime-extractor-8bw6b\" (UID: \"64539562-3f22-45bd-b402-713b3d503522\") " pod="openshift-insights/insights-runtime-extractor-8bw6b" Apr 16 18:32:58.686158 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:58.686071 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/64539562-3f22-45bd-b402-713b3d503522-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-8bw6b\" (UID: \"64539562-3f22-45bd-b402-713b3d503522\") " pod="openshift-insights/insights-runtime-extractor-8bw6b" Apr 16 18:32:58.686158 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:58.686094 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"crio-socket\" (UniqueName: \"kubernetes.io/host-path/64539562-3f22-45bd-b402-713b3d503522-crio-socket\") pod \"insights-runtime-extractor-8bw6b\" (UID: \"64539562-3f22-45bd-b402-713b3d503522\") " pod="openshift-insights/insights-runtime-extractor-8bw6b" Apr 16 18:32:58.686361 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:58.686341 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-volume\" (UniqueName: \"kubernetes.io/empty-dir/64539562-3f22-45bd-b402-713b3d503522-data-volume\") pod \"insights-runtime-extractor-8bw6b\" (UID: \"64539562-3f22-45bd-b402-713b3d503522\") " pod="openshift-insights/insights-runtime-extractor-8bw6b" Apr 16 18:32:58.686564 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:58.686549 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-rbac-proxy-cm\" (UniqueName: \"kubernetes.io/configmap/64539562-3f22-45bd-b402-713b3d503522-kube-rbac-proxy-cm\") pod \"insights-runtime-extractor-8bw6b\" (UID: \"64539562-3f22-45bd-b402-713b3d503522\") " pod="openshift-insights/insights-runtime-extractor-8bw6b" Apr 16 18:32:58.688194 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:58.688179 2569 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"insights-runtime-extractor-tls\" (UniqueName: \"kubernetes.io/secret/64539562-3f22-45bd-b402-713b3d503522-insights-runtime-extractor-tls\") pod \"insights-runtime-extractor-8bw6b\" (UID: \"64539562-3f22-45bd-b402-713b3d503522\") " pod="openshift-insights/insights-runtime-extractor-8bw6b" Apr 16 18:32:58.695463 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:58.695442 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-srg54\" (UniqueName: \"kubernetes.io/projected/64539562-3f22-45bd-b402-713b3d503522-kube-api-access-srg54\") pod \"insights-runtime-extractor-8bw6b\" (UID: \"64539562-3f22-45bd-b402-713b3d503522\") " pod="openshift-insights/insights-runtime-extractor-8bw6b" Apr 16 18:32:58.746202 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:58.746178 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-runtime-extractor-8bw6b" Apr 16 18:32:58.865974 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:58.865941 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-runtime-extractor-8bw6b"] Apr 16 18:32:58.869441 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:32:58.869412 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64539562_3f22_45bd_b402_713b3d503522.slice/crio-607077a7eee3c905c329570d64dd075bf11c7d327d07fd958d34408d175f70cb WatchSource:0}: Error finding container 607077a7eee3c905c329570d64dd075bf11c7d327d07fd958d34408d175f70cb: Status 404 returned error can't find the container with id 607077a7eee3c905c329570d64dd075bf11c7d327d07fd958d34408d175f70cb Apr 16 18:32:59.053110 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:59.053076 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-8bw6b" 
event={"ID":"64539562-3f22-45bd-b402-713b3d503522","Type":"ContainerStarted","Data":"9c9d8d433bd572134aed4a9808b47992fab82f2ea0e6f64dcd579130235c7fac"} Apr 16 18:32:59.053110 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:32:59.053110 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-8bw6b" event={"ID":"64539562-3f22-45bd-b402-713b3d503522","Type":"ContainerStarted","Data":"607077a7eee3c905c329570d64dd075bf11c7d327d07fd958d34408d175f70cb"} Apr 16 18:33:00.061123 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:00.061087 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-8bw6b" event={"ID":"64539562-3f22-45bd-b402-713b3d503522","Type":"ContainerStarted","Data":"a8a93c4dd61043db9b41997306db206155e5ea9d0e9c9ef9c376145c807ffc1b"} Apr 16 18:33:01.065512 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:01.065474 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-runtime-extractor-8bw6b" event={"ID":"64539562-3f22-45bd-b402-713b3d503522","Type":"ContainerStarted","Data":"3c47acb527e4aceacce7551fa28c4f891a19354f9c90ef5c4b158d38712c9d49"} Apr 16 18:33:01.086465 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:01.086422 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-runtime-extractor-8bw6b" podStartSLOduration=1.076063449 podStartE2EDuration="3.086407963s" podCreationTimestamp="2026-04-16 18:32:58 +0000 UTC" firstStartedPulling="2026-04-16 18:32:58.928249684 +0000 UTC m=+181.820567027" lastFinishedPulling="2026-04-16 18:33:00.938594182 +0000 UTC m=+183.830911541" observedRunningTime="2026-04-16 18:33:01.085428781 +0000 UTC m=+183.977746141" watchObservedRunningTime="2026-04-16 18:33:01.086407963 +0000 UTC m=+183.978725327" Apr 16 18:33:05.886915 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:05.886877 2569 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-monitoring/node-exporter-mxfc8"] Apr 16 18:33:05.890050 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:05.890027 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-mxfc8" Apr 16 18:33:05.892724 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:05.892703 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"metrics-client-ca\"" Apr 16 18:33:05.892959 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:05.892944 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-kube-rbac-proxy-config\"" Apr 16 18:33:05.893015 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:05.892958 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-dockercfg-rxsj5\"" Apr 16 18:33:05.893664 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:05.893650 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"kube-root-ca.crt\"" Apr 16 18:33:05.893725 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:05.893711 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-monitoring\"/\"node-exporter-tls\"" Apr 16 18:33:05.893771 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:05.893723 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"node-exporter-accelerators-collector-config\"" Apr 16 18:33:05.895883 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:05.895866 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-monitoring\"/\"openshift-service-ca.crt\"" Apr 16 18:33:05.935634 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:05.935612 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" 
(UniqueName: \"kubernetes.io/empty-dir/6a63a325-dd7c-4e50-b378-3f71641300c3-node-exporter-textfile\") pod \"node-exporter-mxfc8\" (UID: \"6a63a325-dd7c-4e50-b378-3f71641300c3\") " pod="openshift-monitoring/node-exporter-mxfc8" Apr 16 18:33:05.935708 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:05.935657 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6a63a325-dd7c-4e50-b378-3f71641300c3-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-mxfc8\" (UID: \"6a63a325-dd7c-4e50-b378-3f71641300c3\") " pod="openshift-monitoring/node-exporter-mxfc8" Apr 16 18:33:05.935754 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:05.935709 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/6a63a325-dd7c-4e50-b378-3f71641300c3-root\") pod \"node-exporter-mxfc8\" (UID: \"6a63a325-dd7c-4e50-b378-3f71641300c3\") " pod="openshift-monitoring/node-exporter-mxfc8" Apr 16 18:33:05.935754 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:05.935738 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/6a63a325-dd7c-4e50-b378-3f71641300c3-node-exporter-accelerators-collector-config\") pod \"node-exporter-mxfc8\" (UID: \"6a63a325-dd7c-4e50-b378-3f71641300c3\") " pod="openshift-monitoring/node-exporter-mxfc8" Apr 16 18:33:05.935815 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:05.935784 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6a63a325-dd7c-4e50-b378-3f71641300c3-metrics-client-ca\") pod \"node-exporter-mxfc8\" (UID: \"6a63a325-dd7c-4e50-b378-3f71641300c3\") " pod="openshift-monitoring/node-exporter-mxfc8" Apr 16 
18:33:05.935852 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:05.935816 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6a63a325-dd7c-4e50-b378-3f71641300c3-sys\") pod \"node-exporter-mxfc8\" (UID: \"6a63a325-dd7c-4e50-b378-3f71641300c3\") " pod="openshift-monitoring/node-exporter-mxfc8" Apr 16 18:33:05.935852 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:05.935831 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/6a63a325-dd7c-4e50-b378-3f71641300c3-node-exporter-wtmp\") pod \"node-exporter-mxfc8\" (UID: \"6a63a325-dd7c-4e50-b378-3f71641300c3\") " pod="openshift-monitoring/node-exporter-mxfc8" Apr 16 18:33:05.935909 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:05.935883 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/6a63a325-dd7c-4e50-b378-3f71641300c3-node-exporter-tls\") pod \"node-exporter-mxfc8\" (UID: \"6a63a325-dd7c-4e50-b378-3f71641300c3\") " pod="openshift-monitoring/node-exporter-mxfc8" Apr 16 18:33:05.935942 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:05.935912 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdjzh\" (UniqueName: \"kubernetes.io/projected/6a63a325-dd7c-4e50-b378-3f71641300c3-kube-api-access-sdjzh\") pod \"node-exporter-mxfc8\" (UID: \"6a63a325-dd7c-4e50-b378-3f71641300c3\") " pod="openshift-monitoring/node-exporter-mxfc8" Apr 16 18:33:06.036974 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:06.036951 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6a63a325-dd7c-4e50-b378-3f71641300c3-node-exporter-kube-rbac-proxy-config\") pod 
\"node-exporter-mxfc8\" (UID: \"6a63a325-dd7c-4e50-b378-3f71641300c3\") " pod="openshift-monitoring/node-exporter-mxfc8" Apr 16 18:33:06.037062 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:06.036994 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/6a63a325-dd7c-4e50-b378-3f71641300c3-root\") pod \"node-exporter-mxfc8\" (UID: \"6a63a325-dd7c-4e50-b378-3f71641300c3\") " pod="openshift-monitoring/node-exporter-mxfc8" Apr 16 18:33:06.037062 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:06.037013 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/6a63a325-dd7c-4e50-b378-3f71641300c3-node-exporter-accelerators-collector-config\") pod \"node-exporter-mxfc8\" (UID: \"6a63a325-dd7c-4e50-b378-3f71641300c3\") " pod="openshift-monitoring/node-exporter-mxfc8" Apr 16 18:33:06.037062 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:06.037049 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6a63a325-dd7c-4e50-b378-3f71641300c3-metrics-client-ca\") pod \"node-exporter-mxfc8\" (UID: \"6a63a325-dd7c-4e50-b378-3f71641300c3\") " pod="openshift-monitoring/node-exporter-mxfc8" Apr 16 18:33:06.037153 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:06.037061 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/6a63a325-dd7c-4e50-b378-3f71641300c3-root\") pod \"node-exporter-mxfc8\" (UID: \"6a63a325-dd7c-4e50-b378-3f71641300c3\") " pod="openshift-monitoring/node-exporter-mxfc8" Apr 16 18:33:06.037153 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:06.037072 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6a63a325-dd7c-4e50-b378-3f71641300c3-sys\") pod 
\"node-exporter-mxfc8\" (UID: \"6a63a325-dd7c-4e50-b378-3f71641300c3\") " pod="openshift-monitoring/node-exporter-mxfc8" Apr 16 18:33:06.037219 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:06.037159 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/6a63a325-dd7c-4e50-b378-3f71641300c3-node-exporter-wtmp\") pod \"node-exporter-mxfc8\" (UID: \"6a63a325-dd7c-4e50-b378-3f71641300c3\") " pod="openshift-monitoring/node-exporter-mxfc8" Apr 16 18:33:06.037219 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:06.037114 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6a63a325-dd7c-4e50-b378-3f71641300c3-sys\") pod \"node-exporter-mxfc8\" (UID: \"6a63a325-dd7c-4e50-b378-3f71641300c3\") " pod="openshift-monitoring/node-exporter-mxfc8" Apr 16 18:33:06.037219 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:06.037212 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/6a63a325-dd7c-4e50-b378-3f71641300c3-node-exporter-tls\") pod \"node-exporter-mxfc8\" (UID: \"6a63a325-dd7c-4e50-b378-3f71641300c3\") " pod="openshift-monitoring/node-exporter-mxfc8" Apr 16 18:33:06.037333 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:06.037237 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sdjzh\" (UniqueName: \"kubernetes.io/projected/6a63a325-dd7c-4e50-b378-3f71641300c3-kube-api-access-sdjzh\") pod \"node-exporter-mxfc8\" (UID: \"6a63a325-dd7c-4e50-b378-3f71641300c3\") " pod="openshift-monitoring/node-exporter-mxfc8" Apr 16 18:33:06.037333 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:06.037281 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: 
\"kubernetes.io/empty-dir/6a63a325-dd7c-4e50-b378-3f71641300c3-node-exporter-textfile\") pod \"node-exporter-mxfc8\" (UID: \"6a63a325-dd7c-4e50-b378-3f71641300c3\") " pod="openshift-monitoring/node-exporter-mxfc8" Apr 16 18:33:06.037333 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:06.037311 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/6a63a325-dd7c-4e50-b378-3f71641300c3-node-exporter-wtmp\") pod \"node-exporter-mxfc8\" (UID: \"6a63a325-dd7c-4e50-b378-3f71641300c3\") " pod="openshift-monitoring/node-exporter-mxfc8" Apr 16 18:33:06.037642 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:06.037623 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/6a63a325-dd7c-4e50-b378-3f71641300c3-node-exporter-textfile\") pod \"node-exporter-mxfc8\" (UID: \"6a63a325-dd7c-4e50-b378-3f71641300c3\") " pod="openshift-monitoring/node-exporter-mxfc8" Apr 16 18:33:06.037699 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:06.037671 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-accelerators-collector-config\" (UniqueName: \"kubernetes.io/configmap/6a63a325-dd7c-4e50-b378-3f71641300c3-node-exporter-accelerators-collector-config\") pod \"node-exporter-mxfc8\" (UID: \"6a63a325-dd7c-4e50-b378-3f71641300c3\") " pod="openshift-monitoring/node-exporter-mxfc8" Apr 16 18:33:06.037732 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:06.037701 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6a63a325-dd7c-4e50-b378-3f71641300c3-metrics-client-ca\") pod \"node-exporter-mxfc8\" (UID: \"6a63a325-dd7c-4e50-b378-3f71641300c3\") " pod="openshift-monitoring/node-exporter-mxfc8" Apr 16 18:33:06.039224 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:06.039206 2569 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6a63a325-dd7c-4e50-b378-3f71641300c3-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-mxfc8\" (UID: \"6a63a325-dd7c-4e50-b378-3f71641300c3\") " pod="openshift-monitoring/node-exporter-mxfc8" Apr 16 18:33:06.039355 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:06.039340 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/6a63a325-dd7c-4e50-b378-3f71641300c3-node-exporter-tls\") pod \"node-exporter-mxfc8\" (UID: \"6a63a325-dd7c-4e50-b378-3f71641300c3\") " pod="openshift-monitoring/node-exporter-mxfc8" Apr 16 18:33:06.047062 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:06.047043 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdjzh\" (UniqueName: \"kubernetes.io/projected/6a63a325-dd7c-4e50-b378-3f71641300c3-kube-api-access-sdjzh\") pod \"node-exporter-mxfc8\" (UID: \"6a63a325-dd7c-4e50-b378-3f71641300c3\") " pod="openshift-monitoring/node-exporter-mxfc8" Apr 16 18:33:06.198904 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:06.198882 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-mxfc8" Apr 16 18:33:06.206794 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:33:06.206769 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a63a325_dd7c_4e50_b378_3f71641300c3.slice/crio-9820a324d96b3a45404f03dcc79f6ea363e13733a5e7c879c8ce9f66aff8d60f WatchSource:0}: Error finding container 9820a324d96b3a45404f03dcc79f6ea363e13733a5e7c879c8ce9f66aff8d60f: Status 404 returned error can't find the container with id 9820a324d96b3a45404f03dcc79f6ea363e13733a5e7c879c8ce9f66aff8d60f Apr 16 18:33:07.081074 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:07.081041 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-mxfc8" event={"ID":"6a63a325-dd7c-4e50-b378-3f71641300c3","Type":"ContainerStarted","Data":"9820a324d96b3a45404f03dcc79f6ea363e13733a5e7c879c8ce9f66aff8d60f"} Apr 16 18:33:08.084698 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:08.084668 2569 generic.go:358] "Generic (PLEG): container finished" podID="6a63a325-dd7c-4e50-b378-3f71641300c3" containerID="a01fdf8aef82366cd8dac2f09a7f26e74aa462c7cece6a322b7430adcd661e24" exitCode=0 Apr 16 18:33:08.085102 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:08.084724 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-mxfc8" event={"ID":"6a63a325-dd7c-4e50-b378-3f71641300c3","Type":"ContainerDied","Data":"a01fdf8aef82366cd8dac2f09a7f26e74aa462c7cece6a322b7430adcd661e24"} Apr 16 18:33:09.089660 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:09.089626 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-mxfc8" event={"ID":"6a63a325-dd7c-4e50-b378-3f71641300c3","Type":"ContainerStarted","Data":"0d2f614ef5ffa80dccbe2f4b874de67255e4090d0f932618a234ab267eae3436"} Apr 16 18:33:09.089660 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:09.089662 2569 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-mxfc8" event={"ID":"6a63a325-dd7c-4e50-b378-3f71641300c3","Type":"ContainerStarted","Data":"99f4a50c0172aab72c53d12544e540f2aeff4eb1978a92fd5d9bb8f2e534b9e7"} Apr 16 18:33:09.111883 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:09.111838 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-mxfc8" podStartSLOduration=3.221108144 podStartE2EDuration="4.111826248s" podCreationTimestamp="2026-04-16 18:33:05 +0000 UTC" firstStartedPulling="2026-04-16 18:33:06.208619307 +0000 UTC m=+189.100936651" lastFinishedPulling="2026-04-16 18:33:07.099337396 +0000 UTC m=+189.991654755" observedRunningTime="2026-04-16 18:33:09.110153797 +0000 UTC m=+192.002471161" watchObservedRunningTime="2026-04-16 18:33:09.111826248 +0000 UTC m=+192.004143612" Apr 16 18:33:21.412055 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:21.412013 2569 prober.go:120] "Probe failed" probeType="Liveness" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-56cd457576-rnm9d" podUID="df4a1222-1cb9-4e57-890c-542f990453e1" containerName="service-proxy" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 16 18:33:31.412269 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:31.412232 2569 prober.go:120] "Probe failed" probeType="Liveness" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-56cd457576-rnm9d" podUID="df4a1222-1cb9-4e57-890c-542f990453e1" containerName="service-proxy" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 16 18:33:41.412305 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:41.412242 2569 prober.go:120] "Probe failed" probeType="Liveness" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-56cd457576-rnm9d" podUID="df4a1222-1cb9-4e57-890c-542f990453e1" containerName="service-proxy" probeResult="failure" output="HTTP probe failed with statuscode: 500" 
Apr 16 18:33:41.412849 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:41.412324 2569 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-56cd457576-rnm9d" Apr 16 18:33:41.412991 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:41.412956 2569 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="service-proxy" containerStatusID={"Type":"cri-o","ID":"45ba2df12d5c480b5a38b84d93be421f6e4a52acc3f68a2c7392f47284c12d90"} pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-56cd457576-rnm9d" containerMessage="Container service-proxy failed liveness probe, will be restarted" Apr 16 18:33:41.413054 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:41.413019 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-56cd457576-rnm9d" podUID="df4a1222-1cb9-4e57-890c-542f990453e1" containerName="service-proxy" containerID="cri-o://45ba2df12d5c480b5a38b84d93be421f6e4a52acc3f68a2c7392f47284c12d90" gracePeriod=30 Apr 16 18:33:42.173414 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:42.173353 2569 generic.go:358] "Generic (PLEG): container finished" podID="df4a1222-1cb9-4e57-890c-542f990453e1" containerID="45ba2df12d5c480b5a38b84d93be421f6e4a52acc3f68a2c7392f47284c12d90" exitCode=2 Apr 16 18:33:42.173580 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:42.173424 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-56cd457576-rnm9d" event={"ID":"df4a1222-1cb9-4e57-890c-542f990453e1","Type":"ContainerDied","Data":"45ba2df12d5c480b5a38b84d93be421f6e4a52acc3f68a2c7392f47284c12d90"} Apr 16 18:33:42.173580 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:42.173461 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="open-cluster-management-agent-addon/cluster-proxy-proxy-agent-56cd457576-rnm9d" 
event={"ID":"df4a1222-1cb9-4e57-890c-542f990453e1","Type":"ContainerStarted","Data":"c764da82678c41b8f92d84657f8826e4362d3b26ceecc535f21a8a48b1dc4eb0"} Apr 16 18:33:48.898901 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:48.898861 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-769c8c769b-pr8h9"] Apr 16 18:33:48.899308 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:33:48.899099 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[registry-tls], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-image-registry/image-registry-769c8c769b-pr8h9" podUID="b0479613-7fe1-4e4f-8f5b-6f46165c0dd7" Apr 16 18:33:49.191285 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:49.191192 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-769c8c769b-pr8h9" Apr 16 18:33:49.195129 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:49.195108 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-769c8c769b-pr8h9" Apr 16 18:33:49.258282 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:49.258261 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-installation-pull-secrets\") pod \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\" (UID: \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " Apr 16 18:33:49.258358 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:49.258294 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-registry-certificates\") pod \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\" (UID: \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " Apr 16 18:33:49.258358 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:49.258316 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-trusted-ca\") pod \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\" (UID: \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " Apr 16 18:33:49.258358 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:49.258340 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-ca-trust-extracted\") pod \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\" (UID: \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " Apr 16 18:33:49.258528 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:49.258463 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-image-registry-private-configuration\") pod \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\" (UID: 
\"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " Apr 16 18:33:49.258528 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:49.258497 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqmp2\" (UniqueName: \"kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-kube-api-access-lqmp2\") pod \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\" (UID: \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " Apr 16 18:33:49.258528 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:49.258516 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-bound-sa-token\") pod \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\" (UID: \"b0479613-7fe1-4e4f-8f5b-6f46165c0dd7\") " Apr 16 18:33:49.258781 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:49.258755 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "b0479613-7fe1-4e4f-8f5b-6f46165c0dd7" (UID: "b0479613-7fe1-4e4f-8f5b-6f46165c0dd7"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 18:33:49.258853 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:49.258805 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "b0479613-7fe1-4e4f-8f5b-6f46165c0dd7" (UID: "b0479613-7fe1-4e4f-8f5b-6f46165c0dd7"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 18:33:49.258853 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:49.258816 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "b0479613-7fe1-4e4f-8f5b-6f46165c0dd7" (UID: "b0479613-7fe1-4e4f-8f5b-6f46165c0dd7"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 18:33:49.260615 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:49.260591 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "b0479613-7fe1-4e4f-8f5b-6f46165c0dd7" (UID: "b0479613-7fe1-4e4f-8f5b-6f46165c0dd7"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 18:33:49.260715 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:49.260662 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-kube-api-access-lqmp2" (OuterVolumeSpecName: "kube-api-access-lqmp2") pod "b0479613-7fe1-4e4f-8f5b-6f46165c0dd7" (UID: "b0479613-7fe1-4e4f-8f5b-6f46165c0dd7"). InnerVolumeSpecName "kube-api-access-lqmp2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 18:33:49.260715 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:49.260668 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-image-registry-private-configuration" (OuterVolumeSpecName: "image-registry-private-configuration") pod "b0479613-7fe1-4e4f-8f5b-6f46165c0dd7" (UID: "b0479613-7fe1-4e4f-8f5b-6f46165c0dd7"). InnerVolumeSpecName "image-registry-private-configuration". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 18:33:49.260715 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:49.260692 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "b0479613-7fe1-4e4f-8f5b-6f46165c0dd7" (UID: "b0479613-7fe1-4e4f-8f5b-6f46165c0dd7"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 18:33:49.359034 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:49.359008 2569 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lqmp2\" (UniqueName: \"kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-kube-api-access-lqmp2\") on node \"ip-10-0-142-225.ec2.internal\" DevicePath \"\"" Apr 16 18:33:49.359034 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:49.359030 2569 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-bound-sa-token\") on node \"ip-10-0-142-225.ec2.internal\" DevicePath \"\"" Apr 16 18:33:49.359163 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:49.359039 2569 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-installation-pull-secrets\") on node \"ip-10-0-142-225.ec2.internal\" DevicePath \"\"" Apr 16 18:33:49.359163 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:49.359048 2569 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-registry-certificates\") on node \"ip-10-0-142-225.ec2.internal\" DevicePath \"\"" Apr 16 18:33:49.359163 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:49.359058 2569 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-trusted-ca\") on node \"ip-10-0-142-225.ec2.internal\" DevicePath \"\"" Apr 16 18:33:49.359163 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:49.359067 2569 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-ca-trust-extracted\") on node \"ip-10-0-142-225.ec2.internal\" DevicePath \"\"" Apr 16 18:33:49.359163 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:49.359076 2569 reconciler_common.go:299] "Volume detached for volume \"image-registry-private-configuration\" (UniqueName: \"kubernetes.io/secret/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-image-registry-private-configuration\") on node \"ip-10-0-142-225.ec2.internal\" DevicePath \"\"" Apr 16 18:33:50.193458 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:50.193429 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-769c8c769b-pr8h9" Apr 16 18:33:50.239361 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:50.239335 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-769c8c769b-pr8h9"] Apr 16 18:33:50.254118 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:50.254096 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-769c8c769b-pr8h9"] Apr 16 18:33:50.366515 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:50.366488 2569 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7-registry-tls\") on node \"ip-10-0-142-225.ec2.internal\" DevicePath \"\"" Apr 16 18:33:51.555586 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:33:51.555558 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0479613-7fe1-4e4f-8f5b-6f46165c0dd7" 
path="/var/lib/kubelet/pods/b0479613-7fe1-4e4f-8f5b-6f46165c0dd7/volumes" Apr 16 18:34:05.949688 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:05.949661 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-z27vt_6b34a2b3-f5f1-4606-970d-5865032489f3/dns-node-resolver/0.log" Apr 16 18:34:09.402942 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:09.402903 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30-metrics-certs\") pod \"network-metrics-daemon-p2hph\" (UID: \"dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30\") " pod="openshift-multus/network-metrics-daemon-p2hph" Apr 16 18:34:09.405559 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:09.405540 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30-metrics-certs\") pod \"network-metrics-daemon-p2hph\" (UID: \"dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30\") " pod="openshift-multus/network-metrics-daemon-p2hph" Apr 16 18:34:09.656489 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:09.656415 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-wddp7\"" Apr 16 18:34:09.664622 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:09.664598 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hph" Apr 16 18:34:09.779271 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:09.779242 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-p2hph"] Apr 16 18:34:10.241951 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:10.241913 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-p2hph" event={"ID":"dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30","Type":"ContainerStarted","Data":"60fc6ef95201abe3d0dc5103c2d7e17e616c198a775aa833e784930cb25c003c"} Apr 16 18:34:11.246236 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:11.246208 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-p2hph" event={"ID":"dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30","Type":"ContainerStarted","Data":"d8643b6fa0674bda97e4f31c1cdf2c10f87354501d91d770c0dd286b3a4841f0"} Apr 16 18:34:11.246236 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:11.246240 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-p2hph" event={"ID":"dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30","Type":"ContainerStarted","Data":"25871f434c2013bffe903ca9bccccc312562090e12178ccaedaf550c99233619"} Apr 16 18:34:11.271624 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:11.271579 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-p2hph" podStartSLOduration=253.347016118 podStartE2EDuration="4m14.271566718s" podCreationTimestamp="2026-04-16 18:29:57 +0000 UTC" firstStartedPulling="2026-04-16 18:34:09.785051057 +0000 UTC m=+252.677368404" lastFinishedPulling="2026-04-16 18:34:10.709601656 +0000 UTC m=+253.601919004" observedRunningTime="2026-04-16 18:34:11.271086799 +0000 UTC m=+254.163404164" watchObservedRunningTime="2026-04-16 18:34:11.271566718 +0000 UTC m=+254.163884082" Apr 16 18:34:37.990165 ip-10-0-142-225 
kubenswrapper[2569]: E0416 18:34:37.990124 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[metrics-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-dns/dns-default-bhlgn" podUID="3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6" Apr 16 18:34:37.990165 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:34:37.990130 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[networking-console-plugin-cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-m422p" podUID="c6572ec9-4256-4030-a8db-98573ded7d80" Apr 16 18:34:37.990665 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:34:37.990135 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-ingress-canary/ingress-canary-hv956" podUID="077542aa-4ef3-4409-a13f-1196cff06904" Apr 16 18:34:38.316258 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:38.316179 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-hv956" Apr 16 18:34:38.316437 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:38.316182 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-m422p" Apr 16 18:34:38.316437 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:38.316186 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-bhlgn" Apr 16 18:34:41.123338 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:41.123305 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c6572ec9-4256-4030-a8db-98573ded7d80-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-m422p\" (UID: \"c6572ec9-4256-4030-a8db-98573ded7d80\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-m422p" Apr 16 18:34:41.125613 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:41.125592 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c6572ec9-4256-4030-a8db-98573ded7d80-networking-console-plugin-cert\") pod \"networking-console-plugin-5cb6cf4cb4-m422p\" (UID: \"c6572ec9-4256-4030-a8db-98573ded7d80\") " pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-m422p" Apr 16 18:34:41.223824 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:41.223792 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/077542aa-4ef3-4409-a13f-1196cff06904-cert\") pod \"ingress-canary-hv956\" (UID: \"077542aa-4ef3-4409-a13f-1196cff06904\") " pod="openshift-ingress-canary/ingress-canary-hv956" Apr 16 18:34:41.223968 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:41.223845 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-metrics-tls\") pod \"dns-default-bhlgn\" (UID: \"3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6\") " pod="openshift-dns/dns-default-bhlgn" Apr 16 18:34:41.226104 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:41.226076 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/077542aa-4ef3-4409-a13f-1196cff06904-cert\") pod \"ingress-canary-hv956\" (UID: \"077542aa-4ef3-4409-a13f-1196cff06904\") " pod="openshift-ingress-canary/ingress-canary-hv956" Apr 16 18:34:41.226209 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:41.226153 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6-metrics-tls\") pod \"dns-default-bhlgn\" (UID: \"3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6\") " pod="openshift-dns/dns-default-bhlgn" Apr 16 18:34:41.320769 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:41.320741 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"default-dockercfg-zvvl6\"" Apr 16 18:34:41.320910 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:41.320742 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-2pvvf\"" Apr 16 18:34:41.320910 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:41.320742 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-h4pkb\"" Apr 16 18:34:41.327245 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:41.327219 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-hv956" Apr 16 18:34:41.327245 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:41.327231 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-m422p" Apr 16 18:34:41.327402 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:41.327259 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-bhlgn"
Apr 16 18:34:41.470311 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:41.470264 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-bhlgn"]
Apr 16 18:34:41.474211 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:34:41.474185 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e9ada21_637f_4e0a_bef4_dfb5ed41f9b6.slice/crio-d3b77a09869d6d720a6663e5ac4a56607f0e7417cd4926232e8a9ecb5a905ef8 WatchSource:0}: Error finding container d3b77a09869d6d720a6663e5ac4a56607f0e7417cd4926232e8a9ecb5a905ef8: Status 404 returned error can't find the container with id d3b77a09869d6d720a6663e5ac4a56607f0e7417cd4926232e8a9ecb5a905ef8
Apr 16 18:34:41.684666 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:41.684598 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-hv956"]
Apr 16 18:34:41.687816 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:41.687793 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-5cb6cf4cb4-m422p"]
Apr 16 18:34:41.688321 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:34:41.688296 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod077542aa_4ef3_4409_a13f_1196cff06904.slice/crio-0c402e9d58e9b5f8a5e74b790d160298de9fc39705dadc2d764a8ac41ffbf573 WatchSource:0}: Error finding container 0c402e9d58e9b5f8a5e74b790d160298de9fc39705dadc2d764a8ac41ffbf573: Status 404 returned error can't find the container with id 0c402e9d58e9b5f8a5e74b790d160298de9fc39705dadc2d764a8ac41ffbf573
Apr 16 18:34:41.690870 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:34:41.690846 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6572ec9_4256_4030_a8db_98573ded7d80.slice/crio-5d871d10960e08a2a6d7a6db520b12b07179a272a9dbb66d7b8cb45e383325b3 WatchSource:0}: Error finding container 5d871d10960e08a2a6d7a6db520b12b07179a272a9dbb66d7b8cb45e383325b3: Status 404 returned error can't find the container with id 5d871d10960e08a2a6d7a6db520b12b07179a272a9dbb66d7b8cb45e383325b3
Apr 16 18:34:42.328634 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:42.328579 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bhlgn" event={"ID":"3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6","Type":"ContainerStarted","Data":"d3b77a09869d6d720a6663e5ac4a56607f0e7417cd4926232e8a9ecb5a905ef8"}
Apr 16 18:34:42.330346 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:42.330296 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-hv956" event={"ID":"077542aa-4ef3-4409-a13f-1196cff06904","Type":"ContainerStarted","Data":"0c402e9d58e9b5f8a5e74b790d160298de9fc39705dadc2d764a8ac41ffbf573"}
Apr 16 18:34:42.332465 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:42.332430 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-m422p" event={"ID":"c6572ec9-4256-4030-a8db-98573ded7d80","Type":"ContainerStarted","Data":"5d871d10960e08a2a6d7a6db520b12b07179a272a9dbb66d7b8cb45e383325b3"}
Apr 16 18:34:44.341807 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:44.341768 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bhlgn" event={"ID":"3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6","Type":"ContainerStarted","Data":"a69a36033cf3a90be0c412475f09c36410f150d590f4e8dbd81243380b35842a"}
Apr 16 18:34:44.341807 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:44.341813 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bhlgn" event={"ID":"3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6","Type":"ContainerStarted","Data":"ccf85b74434c12e55e54328f3838b06c0ebde60bf42ebd3d9df95de687142a84"}
Apr 16 18:34:44.342256 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:44.341849 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-bhlgn"
Apr 16 18:34:44.343087 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:44.343052 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-hv956" event={"ID":"077542aa-4ef3-4409-a13f-1196cff06904","Type":"ContainerStarted","Data":"50961ced0867c215990d8ec4c02ba87d85046aeb6da374d20384499b17d3a96d"}
Apr 16 18:34:44.344177 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:44.344149 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-m422p" event={"ID":"c6572ec9-4256-4030-a8db-98573ded7d80","Type":"ContainerStarted","Data":"c9941dbc39b268d88f06360c55cae4ecac6359093d8bce6ffca170d3c14c2083"}
Apr 16 18:34:44.360798 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:44.360750 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-bhlgn" podStartSLOduration=252.331996786 podStartE2EDuration="4m14.360739207s" podCreationTimestamp="2026-04-16 18:30:30 +0000 UTC" firstStartedPulling="2026-04-16 18:34:41.476058654 +0000 UTC m=+284.368375998" lastFinishedPulling="2026-04-16 18:34:43.504801059 +0000 UTC m=+286.397118419" observedRunningTime="2026-04-16 18:34:44.359864016 +0000 UTC m=+287.252181381" watchObservedRunningTime="2026-04-16 18:34:44.360739207 +0000 UTC m=+287.253056550"
Apr 16 18:34:44.377015 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:44.376980 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-hv956" podStartSLOduration=252.557767814 podStartE2EDuration="4m14.376969335s" podCreationTimestamp="2026-04-16 18:30:30 +0000 UTC" firstStartedPulling="2026-04-16 18:34:41.690345962 +0000 UTC m=+284.582663305" lastFinishedPulling="2026-04-16 18:34:43.509547472 +0000 UTC m=+286.401864826" observedRunningTime="2026-04-16 18:34:44.376350242 +0000 UTC m=+287.268667607" watchObservedRunningTime="2026-04-16 18:34:44.376969335 +0000 UTC m=+287.269286678"
Apr 16 18:34:44.394879 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:44.394834 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-5cb6cf4cb4-m422p" podStartSLOduration=268.582613848 podStartE2EDuration="4m30.394820546s" podCreationTimestamp="2026-04-16 18:30:14 +0000 UTC" firstStartedPulling="2026-04-16 18:34:41.692670759 +0000 UTC m=+284.584988102" lastFinishedPulling="2026-04-16 18:34:43.504877458 +0000 UTC m=+286.397194800" observedRunningTime="2026-04-16 18:34:44.393465364 +0000 UTC m=+287.285782729" watchObservedRunningTime="2026-04-16 18:34:44.394820546 +0000 UTC m=+287.287137912"
Apr 16 18:34:54.348965 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:54.348932 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-bhlgn"
Apr 16 18:34:57.503593 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:57.503561 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5gbp_8a6cc626-96f9-4f30-a283-abdb6733cdac/ovn-acl-logging/0.log"
Apr 16 18:34:57.503968 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:57.503640 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5gbp_8a6cc626-96f9-4f30-a283-abdb6733cdac/ovn-acl-logging/0.log"
Apr 16 18:34:57.507084 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:34:57.507066 2569 kubelet.go:1628] "Image garbage collection succeeded"
Apr 16 18:39:30.131746 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:39:30.131713 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-keda/keda-metrics-apiserver-7c9f485588-xqcnm"]
Apr 16 18:39:30.134727 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:39:30.134711 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-xqcnm"
Apr 16 18:39:30.137443 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:39:30.137422 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"kedaorg-certs\""
Apr 16 18:39:30.138011 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:39:30.137992 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"keda-ocp-cabundle\""
Apr 16 18:39:30.138285 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:39:30.138269 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"openshift-service-ca.crt\""
Apr 16 18:39:30.139314 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:39:30.139297 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-keda\"/\"kube-root-ca.crt\""
Apr 16 18:39:30.139416 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:39:30.139400 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-operator-dockercfg-2zld6\""
Apr 16 18:39:30.139470 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:39:30.139445 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-keda\"/\"keda-metrics-apiserver-certs\""
Apr 16 18:39:30.146509 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:39:30.146483 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-metrics-apiserver-7c9f485588-xqcnm"]
Apr 16 18:39:30.245057 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:39:30.245031 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9nlt\" (UniqueName: \"kubernetes.io/projected/52707d1e-5398-42cc-a3d6-7a212d7ba804-kube-api-access-c9nlt\") pod \"keda-metrics-apiserver-7c9f485588-xqcnm\" (UID: \"52707d1e-5398-42cc-a3d6-7a212d7ba804\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-xqcnm"
Apr 16 18:39:30.245204 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:39:30.245067 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"temp-vol\" (UniqueName: \"kubernetes.io/empty-dir/52707d1e-5398-42cc-a3d6-7a212d7ba804-temp-vol\") pod \"keda-metrics-apiserver-7c9f485588-xqcnm\" (UID: \"52707d1e-5398-42cc-a3d6-7a212d7ba804\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-xqcnm"
Apr 16 18:39:30.245204 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:39:30.245141 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/52707d1e-5398-42cc-a3d6-7a212d7ba804-certificates\") pod \"keda-metrics-apiserver-7c9f485588-xqcnm\" (UID: \"52707d1e-5398-42cc-a3d6-7a212d7ba804\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-xqcnm"
Apr 16 18:39:30.346049 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:39:30.346017 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/52707d1e-5398-42cc-a3d6-7a212d7ba804-certificates\") pod \"keda-metrics-apiserver-7c9f485588-xqcnm\" (UID: \"52707d1e-5398-42cc-a3d6-7a212d7ba804\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-xqcnm"
Apr 16 18:39:30.346191 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:39:30.346066 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c9nlt\" (UniqueName: \"kubernetes.io/projected/52707d1e-5398-42cc-a3d6-7a212d7ba804-kube-api-access-c9nlt\") pod \"keda-metrics-apiserver-7c9f485588-xqcnm\" (UID: \"52707d1e-5398-42cc-a3d6-7a212d7ba804\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-xqcnm"
Apr 16 18:39:30.346191 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:39:30.346097 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"temp-vol\" (UniqueName: \"kubernetes.io/empty-dir/52707d1e-5398-42cc-a3d6-7a212d7ba804-temp-vol\") pod \"keda-metrics-apiserver-7c9f485588-xqcnm\" (UID: \"52707d1e-5398-42cc-a3d6-7a212d7ba804\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-xqcnm"
Apr 16 18:39:30.346191 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:39:30.346161 2569 secret.go:281] references non-existent secret key: tls.crt
Apr 16 18:39:30.346191 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:39:30.346182 2569 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt
Apr 16 18:39:30.346345 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:39:30.346199 2569 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-xqcnm: references non-existent secret key: tls.crt
Apr 16 18:39:30.346345 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:39:30.346261 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/52707d1e-5398-42cc-a3d6-7a212d7ba804-certificates podName:52707d1e-5398-42cc-a3d6-7a212d7ba804 nodeName:}" failed. No retries permitted until 2026-04-16 18:39:30.846244684 +0000 UTC m=+573.738562047 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/52707d1e-5398-42cc-a3d6-7a212d7ba804-certificates") pod "keda-metrics-apiserver-7c9f485588-xqcnm" (UID: "52707d1e-5398-42cc-a3d6-7a212d7ba804") : references non-existent secret key: tls.crt
Apr 16 18:39:30.346498 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:39:30.346470 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"temp-vol\" (UniqueName: \"kubernetes.io/empty-dir/52707d1e-5398-42cc-a3d6-7a212d7ba804-temp-vol\") pod \"keda-metrics-apiserver-7c9f485588-xqcnm\" (UID: \"52707d1e-5398-42cc-a3d6-7a212d7ba804\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-xqcnm"
Apr 16 18:39:30.358221 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:39:30.358199 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9nlt\" (UniqueName: \"kubernetes.io/projected/52707d1e-5398-42cc-a3d6-7a212d7ba804-kube-api-access-c9nlt\") pod \"keda-metrics-apiserver-7c9f485588-xqcnm\" (UID: \"52707d1e-5398-42cc-a3d6-7a212d7ba804\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-xqcnm"
Apr 16 18:39:30.849937 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:39:30.849910 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/52707d1e-5398-42cc-a3d6-7a212d7ba804-certificates\") pod \"keda-metrics-apiserver-7c9f485588-xqcnm\" (UID: \"52707d1e-5398-42cc-a3d6-7a212d7ba804\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-xqcnm"
Apr 16 18:39:30.850082 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:39:30.850015 2569 secret.go:281] references non-existent secret key: tls.crt
Apr 16 18:39:30.850082 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:39:30.850027 2569 projected.go:277] Couldn't get secret payload openshift-keda/kedaorg-certs: references non-existent secret key: tls.crt
Apr 16 18:39:30.850082 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:39:30.850045 2569 projected.go:194] Error preparing data for projected volume certificates for pod openshift-keda/keda-metrics-apiserver-7c9f485588-xqcnm: references non-existent secret key: tls.crt
Apr 16 18:39:30.850217 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:39:30.850105 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/52707d1e-5398-42cc-a3d6-7a212d7ba804-certificates podName:52707d1e-5398-42cc-a3d6-7a212d7ba804 nodeName:}" failed. No retries permitted until 2026-04-16 18:39:31.850087802 +0000 UTC m=+574.742405148 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "certificates" (UniqueName: "kubernetes.io/projected/52707d1e-5398-42cc-a3d6-7a212d7ba804-certificates") pod "keda-metrics-apiserver-7c9f485588-xqcnm" (UID: "52707d1e-5398-42cc-a3d6-7a212d7ba804") : references non-existent secret key: tls.crt
Apr 16 18:39:31.857120 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:39:31.857081 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/52707d1e-5398-42cc-a3d6-7a212d7ba804-certificates\") pod \"keda-metrics-apiserver-7c9f485588-xqcnm\" (UID: \"52707d1e-5398-42cc-a3d6-7a212d7ba804\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-xqcnm"
Apr 16 18:39:31.859514 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:39:31.859484 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certificates\" (UniqueName: \"kubernetes.io/projected/52707d1e-5398-42cc-a3d6-7a212d7ba804-certificates\") pod \"keda-metrics-apiserver-7c9f485588-xqcnm\" (UID: \"52707d1e-5398-42cc-a3d6-7a212d7ba804\") " pod="openshift-keda/keda-metrics-apiserver-7c9f485588-xqcnm"
Apr 16 18:39:31.944848 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:39:31.944823 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-xqcnm"
Apr 16 18:39:32.060027 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:39:32.059999 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-keda/keda-metrics-apiserver-7c9f485588-xqcnm"]
Apr 16 18:39:32.063015 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:39:32.062978 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52707d1e_5398_42cc_a3d6_7a212d7ba804.slice/crio-b1d8ecba54855991ec5d826bce212818dd513db201eadde0479acc1698329dd7 WatchSource:0}: Error finding container b1d8ecba54855991ec5d826bce212818dd513db201eadde0479acc1698329dd7: Status 404 returned error can't find the container with id b1d8ecba54855991ec5d826bce212818dd513db201eadde0479acc1698329dd7
Apr 16 18:39:32.064332 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:39:32.064310 2569 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 16 18:39:33.058335 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:39:33.058293 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-xqcnm" event={"ID":"52707d1e-5398-42cc-a3d6-7a212d7ba804","Type":"ContainerStarted","Data":"b1d8ecba54855991ec5d826bce212818dd513db201eadde0479acc1698329dd7"}
Apr 16 18:39:35.064580 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:39:35.064543 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-xqcnm" event={"ID":"52707d1e-5398-42cc-a3d6-7a212d7ba804","Type":"ContainerStarted","Data":"e19cfb2c34fca62a7c7c76fbdfe5bd17895b684fa1b36d0ba0eb0eea15e6789f"}
Apr 16 18:39:35.065020 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:39:35.064658 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-xqcnm"
Apr 16 18:39:35.083750 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:39:35.083697 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-xqcnm" podStartSLOduration=2.2827671560000002 podStartE2EDuration="5.083679296s" podCreationTimestamp="2026-04-16 18:39:30 +0000 UTC" firstStartedPulling="2026-04-16 18:39:32.064514724 +0000 UTC m=+574.956832074" lastFinishedPulling="2026-04-16 18:39:34.865426858 +0000 UTC m=+577.757744214" observedRunningTime="2026-04-16 18:39:35.082174409 +0000 UTC m=+577.974491785" watchObservedRunningTime="2026-04-16 18:39:35.083679296 +0000 UTC m=+577.975996665"
Apr 16 18:39:46.073245 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:39:46.073216 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-keda/keda-metrics-apiserver-7c9f485588-xqcnm"
Apr 16 18:39:57.520335 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:39:57.520312 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5gbp_8a6cc626-96f9-4f30-a283-abdb6733cdac/ovn-acl-logging/0.log"
Apr 16 18:39:57.520335 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:39:57.520319 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5gbp_8a6cc626-96f9-4f30-a283-abdb6733cdac/ovn-acl-logging/0.log"
Apr 16 18:40:38.186645 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:40:38.186615 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/kserve-controller-manager-7c68cb4fc8-2cbzb"]
Apr 16 18:40:38.190160 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:40:38.190146 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-7c68cb4fc8-2cbzb"
Apr 16 18:40:38.192714 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:40:38.192693 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"kserve-webhook-server-cert\""
Apr 16 18:40:38.192840 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:40:38.192713 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"openshift-service-ca.crt\""
Apr 16 18:40:38.192840 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:40:38.192830 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve\"/\"kube-root-ca.crt\""
Apr 16 18:40:38.193681 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:40:38.193663 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve\"/\"kserve-controller-manager-dockercfg-gzs4n\""
Apr 16 18:40:38.201231 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:40:38.201209 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-7c68cb4fc8-2cbzb"]
Apr 16 18:40:38.220611 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:40:38.220586 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2ms2\" (UniqueName: \"kubernetes.io/projected/c35e3326-203d-487e-bc0e-4b928bb3a82e-kube-api-access-g2ms2\") pod \"kserve-controller-manager-7c68cb4fc8-2cbzb\" (UID: \"c35e3326-203d-487e-bc0e-4b928bb3a82e\") " pod="kserve/kserve-controller-manager-7c68cb4fc8-2cbzb"
Apr 16 18:40:38.220725 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:40:38.220625 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c35e3326-203d-487e-bc0e-4b928bb3a82e-cert\") pod \"kserve-controller-manager-7c68cb4fc8-2cbzb\" (UID: \"c35e3326-203d-487e-bc0e-4b928bb3a82e\") " pod="kserve/kserve-controller-manager-7c68cb4fc8-2cbzb"
Apr 16 18:40:38.321654 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:40:38.321622 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g2ms2\" (UniqueName: \"kubernetes.io/projected/c35e3326-203d-487e-bc0e-4b928bb3a82e-kube-api-access-g2ms2\") pod \"kserve-controller-manager-7c68cb4fc8-2cbzb\" (UID: \"c35e3326-203d-487e-bc0e-4b928bb3a82e\") " pod="kserve/kserve-controller-manager-7c68cb4fc8-2cbzb"
Apr 16 18:40:38.321795 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:40:38.321665 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c35e3326-203d-487e-bc0e-4b928bb3a82e-cert\") pod \"kserve-controller-manager-7c68cb4fc8-2cbzb\" (UID: \"c35e3326-203d-487e-bc0e-4b928bb3a82e\") " pod="kserve/kserve-controller-manager-7c68cb4fc8-2cbzb"
Apr 16 18:40:38.321860 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:40:38.321795 2569 secret.go:189] Couldn't get secret kserve/kserve-webhook-server-cert: secret "kserve-webhook-server-cert" not found
Apr 16 18:40:38.321912 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:40:38.321891 2569 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c35e3326-203d-487e-bc0e-4b928bb3a82e-cert podName:c35e3326-203d-487e-bc0e-4b928bb3a82e nodeName:}" failed. No retries permitted until 2026-04-16 18:40:38.821869921 +0000 UTC m=+641.714187285 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c35e3326-203d-487e-bc0e-4b928bb3a82e-cert") pod "kserve-controller-manager-7c68cb4fc8-2cbzb" (UID: "c35e3326-203d-487e-bc0e-4b928bb3a82e") : secret "kserve-webhook-server-cert" not found
Apr 16 18:40:38.331206 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:40:38.331171 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2ms2\" (UniqueName: \"kubernetes.io/projected/c35e3326-203d-487e-bc0e-4b928bb3a82e-kube-api-access-g2ms2\") pod \"kserve-controller-manager-7c68cb4fc8-2cbzb\" (UID: \"c35e3326-203d-487e-bc0e-4b928bb3a82e\") " pod="kserve/kserve-controller-manager-7c68cb4fc8-2cbzb"
Apr 16 18:40:38.825053 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:40:38.825024 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c35e3326-203d-487e-bc0e-4b928bb3a82e-cert\") pod \"kserve-controller-manager-7c68cb4fc8-2cbzb\" (UID: \"c35e3326-203d-487e-bc0e-4b928bb3a82e\") " pod="kserve/kserve-controller-manager-7c68cb4fc8-2cbzb"
Apr 16 18:40:38.827099 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:40:38.827073 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c35e3326-203d-487e-bc0e-4b928bb3a82e-cert\") pod \"kserve-controller-manager-7c68cb4fc8-2cbzb\" (UID: \"c35e3326-203d-487e-bc0e-4b928bb3a82e\") " pod="kserve/kserve-controller-manager-7c68cb4fc8-2cbzb"
Apr 16 18:40:39.100499 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:40:39.100413 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-7c68cb4fc8-2cbzb"
Apr 16 18:40:39.213152 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:40:39.213124 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-7c68cb4fc8-2cbzb"]
Apr 16 18:40:39.215985 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:40:39.215955 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc35e3326_203d_487e_bc0e_4b928bb3a82e.slice/crio-1e4efbc244552abd03b4346f2314600aa80d3b4f63c8788b374611ec73791f56 WatchSource:0}: Error finding container 1e4efbc244552abd03b4346f2314600aa80d3b4f63c8788b374611ec73791f56: Status 404 returned error can't find the container with id 1e4efbc244552abd03b4346f2314600aa80d3b4f63c8788b374611ec73791f56
Apr 16 18:40:39.223601 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:40:39.223577 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-7c68cb4fc8-2cbzb" event={"ID":"c35e3326-203d-487e-bc0e-4b928bb3a82e","Type":"ContainerStarted","Data":"1e4efbc244552abd03b4346f2314600aa80d3b4f63c8788b374611ec73791f56"}
Apr 16 18:40:42.234662 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:40:42.234619 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-7c68cb4fc8-2cbzb" event={"ID":"c35e3326-203d-487e-bc0e-4b928bb3a82e","Type":"ContainerStarted","Data":"bb6636366cb0f72b28cc3733b5d90c4788ead11db665c2744bd00a88334d5a60"}
Apr 16 18:40:42.235084 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:40:42.234739 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/kserve-controller-manager-7c68cb4fc8-2cbzb"
Apr 16 18:40:42.252282 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:40:42.252220 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/kserve-controller-manager-7c68cb4fc8-2cbzb" podStartSLOduration=1.8665980709999999 podStartE2EDuration="4.252204214s" podCreationTimestamp="2026-04-16 18:40:38 +0000 UTC" firstStartedPulling="2026-04-16 18:40:39.217288647 +0000 UTC m=+642.109606005" lastFinishedPulling="2026-04-16 18:40:41.602894788 +0000 UTC m=+644.495212148" observedRunningTime="2026-04-16 18:40:42.251506661 +0000 UTC m=+645.143824027" watchObservedRunningTime="2026-04-16 18:40:42.252204214 +0000 UTC m=+645.144521579"
Apr 16 18:41:13.242224 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:41:13.242194 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/kserve-controller-manager-7c68cb4fc8-2cbzb"
Apr 16 18:41:13.921270 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:41:13.921238 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve/kserve-controller-manager-7c68cb4fc8-2cbzb"]
Apr 16 18:41:13.921529 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:41:13.921506 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve/kserve-controller-manager-7c68cb4fc8-2cbzb" podUID="c35e3326-203d-487e-bc0e-4b928bb3a82e" containerName="manager" containerID="cri-o://bb6636366cb0f72b28cc3733b5d90c4788ead11db665c2744bd00a88334d5a60" gracePeriod=10
Apr 16 18:41:13.961088 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:41:13.961060 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve/kserve-controller-manager-7c68cb4fc8-l2xrn"]
Apr 16 18:41:13.963984 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:41:13.963970 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-7c68cb4fc8-l2xrn"
Apr 16 18:41:13.975682 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:41:13.975659 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-7c68cb4fc8-l2xrn"]
Apr 16 18:41:14.042943 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:41:14.042919 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cj6p\" (UniqueName: \"kubernetes.io/projected/18f30541-9041-4ce9-a86d-852543730d0c-kube-api-access-5cj6p\") pod \"kserve-controller-manager-7c68cb4fc8-l2xrn\" (UID: \"18f30541-9041-4ce9-a86d-852543730d0c\") " pod="kserve/kserve-controller-manager-7c68cb4fc8-l2xrn"
Apr 16 18:41:14.043045 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:41:14.042957 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/18f30541-9041-4ce9-a86d-852543730d0c-cert\") pod \"kserve-controller-manager-7c68cb4fc8-l2xrn\" (UID: \"18f30541-9041-4ce9-a86d-852543730d0c\") " pod="kserve/kserve-controller-manager-7c68cb4fc8-l2xrn"
Apr 16 18:41:14.143794 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:41:14.143771 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5cj6p\" (UniqueName: \"kubernetes.io/projected/18f30541-9041-4ce9-a86d-852543730d0c-kube-api-access-5cj6p\") pod \"kserve-controller-manager-7c68cb4fc8-l2xrn\" (UID: \"18f30541-9041-4ce9-a86d-852543730d0c\") " pod="kserve/kserve-controller-manager-7c68cb4fc8-l2xrn"
Apr 16 18:41:14.143919 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:41:14.143820 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/18f30541-9041-4ce9-a86d-852543730d0c-cert\") pod \"kserve-controller-manager-7c68cb4fc8-l2xrn\" (UID: \"18f30541-9041-4ce9-a86d-852543730d0c\") " pod="kserve/kserve-controller-manager-7c68cb4fc8-l2xrn"
Apr 16 18:41:14.145998 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:41:14.145977 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/18f30541-9041-4ce9-a86d-852543730d0c-cert\") pod \"kserve-controller-manager-7c68cb4fc8-l2xrn\" (UID: \"18f30541-9041-4ce9-a86d-852543730d0c\") " pod="kserve/kserve-controller-manager-7c68cb4fc8-l2xrn"
Apr 16 18:41:14.147904 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:41:14.147886 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-7c68cb4fc8-2cbzb"
Apr 16 18:41:14.152576 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:41:14.152558 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cj6p\" (UniqueName: \"kubernetes.io/projected/18f30541-9041-4ce9-a86d-852543730d0c-kube-api-access-5cj6p\") pod \"kserve-controller-manager-7c68cb4fc8-l2xrn\" (UID: \"18f30541-9041-4ce9-a86d-852543730d0c\") " pod="kserve/kserve-controller-manager-7c68cb4fc8-l2xrn"
Apr 16 18:41:14.244815 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:41:14.244792 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2ms2\" (UniqueName: \"kubernetes.io/projected/c35e3326-203d-487e-bc0e-4b928bb3a82e-kube-api-access-g2ms2\") pod \"c35e3326-203d-487e-bc0e-4b928bb3a82e\" (UID: \"c35e3326-203d-487e-bc0e-4b928bb3a82e\") "
Apr 16 18:41:14.245133 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:41:14.244847 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c35e3326-203d-487e-bc0e-4b928bb3a82e-cert\") pod \"c35e3326-203d-487e-bc0e-4b928bb3a82e\" (UID: \"c35e3326-203d-487e-bc0e-4b928bb3a82e\") "
Apr 16 18:41:14.246776 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:41:14.246752 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c35e3326-203d-487e-bc0e-4b928bb3a82e-kube-api-access-g2ms2" (OuterVolumeSpecName: "kube-api-access-g2ms2") pod "c35e3326-203d-487e-bc0e-4b928bb3a82e" (UID: "c35e3326-203d-487e-bc0e-4b928bb3a82e"). InnerVolumeSpecName "kube-api-access-g2ms2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 16 18:41:14.246839 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:41:14.246772 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c35e3326-203d-487e-bc0e-4b928bb3a82e-cert" (OuterVolumeSpecName: "cert") pod "c35e3326-203d-487e-bc0e-4b928bb3a82e" (UID: "c35e3326-203d-487e-bc0e-4b928bb3a82e"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 16 18:41:14.290603 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:41:14.290586 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-7c68cb4fc8-l2xrn"
Apr 16 18:41:14.315187 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:41:14.315161 2569 generic.go:358] "Generic (PLEG): container finished" podID="c35e3326-203d-487e-bc0e-4b928bb3a82e" containerID="bb6636366cb0f72b28cc3733b5d90c4788ead11db665c2744bd00a88334d5a60" exitCode=0
Apr 16 18:41:14.315302 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:41:14.315220 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve/kserve-controller-manager-7c68cb4fc8-2cbzb"
Apr 16 18:41:14.315302 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:41:14.315249 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-7c68cb4fc8-2cbzb" event={"ID":"c35e3326-203d-487e-bc0e-4b928bb3a82e","Type":"ContainerDied","Data":"bb6636366cb0f72b28cc3733b5d90c4788ead11db665c2744bd00a88334d5a60"}
Apr 16 18:41:14.315302 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:41:14.315286 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-7c68cb4fc8-2cbzb" event={"ID":"c35e3326-203d-487e-bc0e-4b928bb3a82e","Type":"ContainerDied","Data":"1e4efbc244552abd03b4346f2314600aa80d3b4f63c8788b374611ec73791f56"}
Apr 16 18:41:14.315302 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:41:14.315303 2569 scope.go:117] "RemoveContainer" containerID="bb6636366cb0f72b28cc3733b5d90c4788ead11db665c2744bd00a88334d5a60"
Apr 16 18:41:14.322513 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:41:14.322496 2569 scope.go:117] "RemoveContainer" containerID="bb6636366cb0f72b28cc3733b5d90c4788ead11db665c2744bd00a88334d5a60"
Apr 16 18:41:14.322889 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:41:14.322763 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb6636366cb0f72b28cc3733b5d90c4788ead11db665c2744bd00a88334d5a60\": container with ID starting with bb6636366cb0f72b28cc3733b5d90c4788ead11db665c2744bd00a88334d5a60 not found: ID does not exist" containerID="bb6636366cb0f72b28cc3733b5d90c4788ead11db665c2744bd00a88334d5a60"
Apr 16 18:41:14.322889 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:41:14.322798 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb6636366cb0f72b28cc3733b5d90c4788ead11db665c2744bd00a88334d5a60"} err="failed to get container status \"bb6636366cb0f72b28cc3733b5d90c4788ead11db665c2744bd00a88334d5a60\": rpc error: code = NotFound desc = could not find container \"bb6636366cb0f72b28cc3733b5d90c4788ead11db665c2744bd00a88334d5a60\": container with ID starting with bb6636366cb0f72b28cc3733b5d90c4788ead11db665c2744bd00a88334d5a60 not found: ID does not exist"
Apr 16 18:41:14.345885 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:41:14.342193 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve/kserve-controller-manager-7c68cb4fc8-2cbzb"]
Apr 16 18:41:14.345885 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:41:14.345821 2569 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c35e3326-203d-487e-bc0e-4b928bb3a82e-cert\") on node \"ip-10-0-142-225.ec2.internal\" DevicePath \"\""
Apr 16 18:41:14.345885 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:41:14.345856 2569 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g2ms2\" (UniqueName: \"kubernetes.io/projected/c35e3326-203d-487e-bc0e-4b928bb3a82e-kube-api-access-g2ms2\") on node \"ip-10-0-142-225.ec2.internal\" DevicePath \"\""
Apr 16 18:41:14.349632 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:41:14.349584 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve/kserve-controller-manager-7c68cb4fc8-2cbzb"]
Apr 16 18:41:14.404909 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:41:14.404880 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve/kserve-controller-manager-7c68cb4fc8-l2xrn"]
Apr 16 18:41:14.407669 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:41:14.407646 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18f30541_9041_4ce9_a86d_852543730d0c.slice/crio-db4cfcee5f1fa8f1359b54978fc1f2f0bee04186d277037745ab4ae22bc82e9d WatchSource:0}: Error finding container db4cfcee5f1fa8f1359b54978fc1f2f0bee04186d277037745ab4ae22bc82e9d: Status 404 returned error can't find the container with id db4cfcee5f1fa8f1359b54978fc1f2f0bee04186d277037745ab4ae22bc82e9d
Apr 16 18:41:15.319266 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:41:15.319226 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-7c68cb4fc8-l2xrn" event={"ID":"18f30541-9041-4ce9-a86d-852543730d0c","Type":"ContainerStarted","Data":"e9f9a7854e53f7e43279d7f2f36e5f00e6d913de92d0785d82c4b36047ec910a"}
Apr 16 18:41:15.319266 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:41:15.319262 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve/kserve-controller-manager-7c68cb4fc8-l2xrn" event={"ID":"18f30541-9041-4ce9-a86d-852543730d0c","Type":"ContainerStarted","Data":"db4cfcee5f1fa8f1359b54978fc1f2f0bee04186d277037745ab4ae22bc82e9d"}
Apr 16 18:41:15.319778 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:41:15.319448 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve/kserve-controller-manager-7c68cb4fc8-l2xrn"
Apr 16 18:41:15.338825 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:41:15.338774 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve/kserve-controller-manager-7c68cb4fc8-l2xrn" podStartSLOduration=1.6075048330000001 podStartE2EDuration="2.338759888s" podCreationTimestamp="2026-04-16 18:41:13 +0000 UTC" firstStartedPulling="2026-04-16 18:41:14.408902345 +0000 UTC m=+677.301219689" lastFinishedPulling="2026-04-16 18:41:15.140157401 +0000 UTC m=+678.032474744" observedRunningTime="2026-04-16 18:41:15.337424786 +0000 UTC m=+678.229742147" watchObservedRunningTime="2026-04-16 18:41:15.338759888 +0000 UTC m=+678.231077253"
Apr 16 18:41:15.556208 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:41:15.556165 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c35e3326-203d-487e-bc0e-4b928bb3a82e" path="/var/lib/kubelet/pods/c35e3326-203d-487e-bc0e-4b928bb3a82e/volumes"
Apr 16 18:41:46.327814 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:41:46.327785
2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve/kserve-controller-manager-7c68cb4fc8-l2xrn" Apr 16 18:43:58.456445 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:43:58.456412 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-1a6d3-predictor-6ddf8d4b44-2vczr"] Apr 16 18:43:58.456895 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:43:58.456643 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c35e3326-203d-487e-bc0e-4b928bb3a82e" containerName="manager" Apr 16 18:43:58.456895 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:43:58.456653 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="c35e3326-203d-487e-bc0e-4b928bb3a82e" containerName="manager" Apr 16 18:43:58.456895 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:43:58.456726 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="c35e3326-203d-487e-bc0e-4b928bb3a82e" containerName="manager" Apr 16 18:43:58.459480 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:43:58.459460 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-1a6d3-predictor-6ddf8d4b44-2vczr" Apr 16 18:43:58.461881 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:43:58.461859 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"kserve-ci-e2e-test\"/\"default-dockercfg-rqcg5\"" Apr 16 18:43:58.469802 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:43:58.469781 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-1a6d3-predictor-6ddf8d4b44-2vczr"] Apr 16 18:43:58.574319 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:43:58.574292 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/90c5376f-d100-46e2-ad46-05e58f34b6af-kserve-provision-location\") pod \"isvc-xgboost-graph-raw-1a6d3-predictor-6ddf8d4b44-2vczr\" (UID: \"90c5376f-d100-46e2-ad46-05e58f34b6af\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-1a6d3-predictor-6ddf8d4b44-2vczr" Apr 16 18:43:58.675655 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:43:58.675626 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/90c5376f-d100-46e2-ad46-05e58f34b6af-kserve-provision-location\") pod \"isvc-xgboost-graph-raw-1a6d3-predictor-6ddf8d4b44-2vczr\" (UID: \"90c5376f-d100-46e2-ad46-05e58f34b6af\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-1a6d3-predictor-6ddf8d4b44-2vczr" Apr 16 18:43:58.675983 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:43:58.675965 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/90c5376f-d100-46e2-ad46-05e58f34b6af-kserve-provision-location\") pod \"isvc-xgboost-graph-raw-1a6d3-predictor-6ddf8d4b44-2vczr\" (UID: \"90c5376f-d100-46e2-ad46-05e58f34b6af\") " 
pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-1a6d3-predictor-6ddf8d4b44-2vczr" Apr 16 18:43:58.769876 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:43:58.769802 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-1a6d3-predictor-6ddf8d4b44-2vczr" Apr 16 18:43:58.882483 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:43:58.882461 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-1a6d3-predictor-6ddf8d4b44-2vczr"] Apr 16 18:43:58.885017 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:43:58.884985 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90c5376f_d100_46e2_ad46_05e58f34b6af.slice/crio-789cebe0f617b5487e8a3afe0fb1bd1a0c00cff83b6e22568822941d740246c0 WatchSource:0}: Error finding container 789cebe0f617b5487e8a3afe0fb1bd1a0c00cff83b6e22568822941d740246c0: Status 404 returned error can't find the container with id 789cebe0f617b5487e8a3afe0fb1bd1a0c00cff83b6e22568822941d740246c0 Apr 16 18:43:59.737864 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:43:59.737824 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-1a6d3-predictor-6ddf8d4b44-2vczr" event={"ID":"90c5376f-d100-46e2-ad46-05e58f34b6af","Type":"ContainerStarted","Data":"789cebe0f617b5487e8a3afe0fb1bd1a0c00cff83b6e22568822941d740246c0"} Apr 16 18:44:02.747993 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:44:02.747950 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-1a6d3-predictor-6ddf8d4b44-2vczr" event={"ID":"90c5376f-d100-46e2-ad46-05e58f34b6af","Type":"ContainerStarted","Data":"c2d51c9d6274af32242516fa035c91c9af36e6422a5dbe68037d2aec0ba4af19"} Apr 16 18:44:06.759823 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:44:06.759787 2569 generic.go:358] "Generic (PLEG): container finished" 
podID="90c5376f-d100-46e2-ad46-05e58f34b6af" containerID="c2d51c9d6274af32242516fa035c91c9af36e6422a5dbe68037d2aec0ba4af19" exitCode=0 Apr 16 18:44:06.760229 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:44:06.759861 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-1a6d3-predictor-6ddf8d4b44-2vczr" event={"ID":"90c5376f-d100-46e2-ad46-05e58f34b6af","Type":"ContainerDied","Data":"c2d51c9d6274af32242516fa035c91c9af36e6422a5dbe68037d2aec0ba4af19"} Apr 16 18:44:24.814302 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:44:24.814262 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-1a6d3-predictor-6ddf8d4b44-2vczr" event={"ID":"90c5376f-d100-46e2-ad46-05e58f34b6af","Type":"ContainerStarted","Data":"3500ea5d9d77643d87ccf13ba61d3efd3397e2afc2cc6bc9dece2cbd7d85e448"} Apr 16 18:44:24.814658 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:44:24.814537 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-1a6d3-predictor-6ddf8d4b44-2vczr" Apr 16 18:44:24.815683 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:44:24.815657 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-1a6d3-predictor-6ddf8d4b44-2vczr" podUID="90c5376f-d100-46e2-ad46-05e58f34b6af" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused" Apr 16 18:44:24.836189 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:44:24.836147 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-1a6d3-predictor-6ddf8d4b44-2vczr" podStartSLOduration=1.044304308 podStartE2EDuration="26.836134821s" podCreationTimestamp="2026-04-16 18:43:58 +0000 UTC" firstStartedPulling="2026-04-16 18:43:58.886977406 +0000 UTC m=+841.779294749" lastFinishedPulling="2026-04-16 18:44:24.678807919 +0000 UTC 
m=+867.571125262" observedRunningTime="2026-04-16 18:44:24.834642213 +0000 UTC m=+867.726959582" watchObservedRunningTime="2026-04-16 18:44:24.836134821 +0000 UTC m=+867.728452185" Apr 16 18:44:25.818064 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:44:25.818030 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-1a6d3-predictor-6ddf8d4b44-2vczr" podUID="90c5376f-d100-46e2-ad46-05e58f34b6af" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused" Apr 16 18:44:35.818714 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:44:35.818673 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-1a6d3-predictor-6ddf8d4b44-2vczr" podUID="90c5376f-d100-46e2-ad46-05e58f34b6af" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused" Apr 16 18:44:45.818609 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:44:45.818569 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-1a6d3-predictor-6ddf8d4b44-2vczr" podUID="90c5376f-d100-46e2-ad46-05e58f34b6af" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused" Apr 16 18:44:55.818469 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:44:55.818425 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-1a6d3-predictor-6ddf8d4b44-2vczr" podUID="90c5376f-d100-46e2-ad46-05e58f34b6af" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused" Apr 16 18:44:57.537491 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:44:57.537459 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5gbp_8a6cc626-96f9-4f30-a283-abdb6733cdac/ovn-acl-logging/0.log" Apr 16 18:44:57.538104 
ip-10-0-142-225 kubenswrapper[2569]: I0416 18:44:57.538085 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5gbp_8a6cc626-96f9-4f30-a283-abdb6733cdac/ovn-acl-logging/0.log" Apr 16 18:45:05.818417 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:05.818347 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-1a6d3-predictor-6ddf8d4b44-2vczr" podUID="90c5376f-d100-46e2-ad46-05e58f34b6af" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused" Apr 16 18:45:15.818393 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:15.818331 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-1a6d3-predictor-6ddf8d4b44-2vczr" podUID="90c5376f-d100-46e2-ad46-05e58f34b6af" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.17:8080: connect: connection refused" Apr 16 18:45:25.818875 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:25.818841 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-1a6d3-predictor-6ddf8d4b44-2vczr" Apr 16 18:45:28.440617 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:28.440588 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-1a6d3-78c446778d-kdhcl"] Apr 16 18:45:28.443542 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:28.443528 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-raw-1a6d3-78c446778d-kdhcl" Apr 16 18:45:28.446209 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:28.446188 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"openshift-service-ca.crt\"" Apr 16 18:45:28.453734 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:28.453710 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-1a6d3-78c446778d-kdhcl"] Apr 16 18:45:28.513812 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:28.513785 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5b114e0-d6bb-432d-889e-02ebc68c44a8-openshift-service-ca-bundle\") pod \"model-chainer-raw-1a6d3-78c446778d-kdhcl\" (UID: \"c5b114e0-d6bb-432d-889e-02ebc68c44a8\") " pod="kserve-ci-e2e-test/model-chainer-raw-1a6d3-78c446778d-kdhcl" Apr 16 18:45:28.614324 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:28.614296 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5b114e0-d6bb-432d-889e-02ebc68c44a8-openshift-service-ca-bundle\") pod \"model-chainer-raw-1a6d3-78c446778d-kdhcl\" (UID: \"c5b114e0-d6bb-432d-889e-02ebc68c44a8\") " pod="kserve-ci-e2e-test/model-chainer-raw-1a6d3-78c446778d-kdhcl" Apr 16 18:45:28.614867 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:28.614849 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5b114e0-d6bb-432d-889e-02ebc68c44a8-openshift-service-ca-bundle\") pod \"model-chainer-raw-1a6d3-78c446778d-kdhcl\" (UID: \"c5b114e0-d6bb-432d-889e-02ebc68c44a8\") " pod="kserve-ci-e2e-test/model-chainer-raw-1a6d3-78c446778d-kdhcl" Apr 16 18:45:28.753690 ip-10-0-142-225 kubenswrapper[2569]: I0416 
18:45:28.753665 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-raw-1a6d3-78c446778d-kdhcl" Apr 16 18:45:28.870371 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:28.870338 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-1a6d3-78c446778d-kdhcl"] Apr 16 18:45:28.873088 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:45:28.873059 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5b114e0_d6bb_432d_889e_02ebc68c44a8.slice/crio-dd98e5edabf0fb420d298969a0e9a402c302b7819b8e31bd30bc8dcc05e59dec WatchSource:0}: Error finding container dd98e5edabf0fb420d298969a0e9a402c302b7819b8e31bd30bc8dcc05e59dec: Status 404 returned error can't find the container with id dd98e5edabf0fb420d298969a0e9a402c302b7819b8e31bd30bc8dcc05e59dec Apr 16 18:45:28.874749 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:28.874732 2569 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 16 18:45:28.980259 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:28.980221 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-raw-1a6d3-78c446778d-kdhcl" event={"ID":"c5b114e0-d6bb-432d-889e-02ebc68c44a8","Type":"ContainerStarted","Data":"dd98e5edabf0fb420d298969a0e9a402c302b7819b8e31bd30bc8dcc05e59dec"} Apr 16 18:45:31.988977 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:31.988939 2569 generic.go:358] "Generic (PLEG): container finished" podID="c5b114e0-d6bb-432d-889e-02ebc68c44a8" containerID="92e942c77af03dd49858e289b95b0bb039f641e8ff83fcc9e3d2eea15ad319ac" exitCode=1 Apr 16 18:45:31.989348 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:31.989020 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-raw-1a6d3-78c446778d-kdhcl" 
event={"ID":"c5b114e0-d6bb-432d-889e-02ebc68c44a8","Type":"ContainerDied","Data":"92e942c77af03dd49858e289b95b0bb039f641e8ff83fcc9e3d2eea15ad319ac"} Apr 16 18:45:31.989348 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:31.989206 2569 scope.go:117] "RemoveContainer" containerID="92e942c77af03dd49858e289b95b0bb039f641e8ff83fcc9e3d2eea15ad319ac" Apr 16 18:45:32.992873 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:32.992843 2569 generic.go:358] "Generic (PLEG): container finished" podID="c5b114e0-d6bb-432d-889e-02ebc68c44a8" containerID="8cee3d70048c141baf65b02c4284417a6af0fde562dc3709b6b667d5be1b2db2" exitCode=1 Apr 16 18:45:32.993274 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:32.992885 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-raw-1a6d3-78c446778d-kdhcl" event={"ID":"c5b114e0-d6bb-432d-889e-02ebc68c44a8","Type":"ContainerDied","Data":"8cee3d70048c141baf65b02c4284417a6af0fde562dc3709b6b667d5be1b2db2"} Apr 16 18:45:32.993274 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:32.992914 2569 scope.go:117] "RemoveContainer" containerID="92e942c77af03dd49858e289b95b0bb039f641e8ff83fcc9e3d2eea15ad319ac" Apr 16 18:45:32.993274 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:32.993128 2569 scope.go:117] "RemoveContainer" containerID="8cee3d70048c141baf65b02c4284417a6af0fde562dc3709b6b667d5be1b2db2" Apr 16 18:45:32.993432 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:45:32.993309 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"model-chainer-raw-1a6d3\" with CrashLoopBackOff: \"back-off 10s restarting failed container=model-chainer-raw-1a6d3 pod=model-chainer-raw-1a6d3-78c446778d-kdhcl_kserve-ci-e2e-test(c5b114e0-d6bb-432d-889e-02ebc68c44a8)\"" pod="kserve-ci-e2e-test/model-chainer-raw-1a6d3-78c446778d-kdhcl" podUID="c5b114e0-d6bb-432d-889e-02ebc68c44a8" Apr 16 18:45:33.753922 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:33.753891 2569 kubelet.go:2658] "SyncLoop 
(probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/model-chainer-raw-1a6d3-78c446778d-kdhcl" Apr 16 18:45:33.996799 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:33.996765 2569 scope.go:117] "RemoveContainer" containerID="8cee3d70048c141baf65b02c4284417a6af0fde562dc3709b6b667d5be1b2db2" Apr 16 18:45:33.997168 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:45:33.996955 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"model-chainer-raw-1a6d3\" with CrashLoopBackOff: \"back-off 10s restarting failed container=model-chainer-raw-1a6d3 pod=model-chainer-raw-1a6d3-78c446778d-kdhcl_kserve-ci-e2e-test(c5b114e0-d6bb-432d-889e-02ebc68c44a8)\"" pod="kserve-ci-e2e-test/model-chainer-raw-1a6d3-78c446778d-kdhcl" podUID="c5b114e0-d6bb-432d-889e-02ebc68c44a8" Apr 16 18:45:34.999040 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:34.999016 2569 scope.go:117] "RemoveContainer" containerID="8cee3d70048c141baf65b02c4284417a6af0fde562dc3709b6b667d5be1b2db2" Apr 16 18:45:34.999421 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:45:34.999186 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"model-chainer-raw-1a6d3\" with CrashLoopBackOff: \"back-off 10s restarting failed container=model-chainer-raw-1a6d3 pod=model-chainer-raw-1a6d3-78c446778d-kdhcl_kserve-ci-e2e-test(c5b114e0-d6bb-432d-889e-02ebc68c44a8)\"" pod="kserve-ci-e2e-test/model-chainer-raw-1a6d3-78c446778d-kdhcl" podUID="c5b114e0-d6bb-432d-889e-02ebc68c44a8" Apr 16 18:45:38.482067 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:38.482031 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-1a6d3-78c446778d-kdhcl"] Apr 16 18:45:38.613920 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:38.613875 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-1a6d3-predictor-6ddf8d4b44-2vczr"] Apr 16 18:45:38.614192 ip-10-0-142-225 
kubenswrapper[2569]: I0416 18:45:38.614165 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-1a6d3-predictor-6ddf8d4b44-2vczr" podUID="90c5376f-d100-46e2-ad46-05e58f34b6af" containerName="kserve-container" containerID="cri-o://3500ea5d9d77643d87ccf13ba61d3efd3397e2afc2cc6bc9dece2cbd7d85e448" gracePeriod=30 Apr 16 18:45:38.614749 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:38.614733 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-raw-1a6d3-78c446778d-kdhcl" Apr 16 18:45:38.690828 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:38.690792 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5b114e0-d6bb-432d-889e-02ebc68c44a8-openshift-service-ca-bundle\") pod \"c5b114e0-d6bb-432d-889e-02ebc68c44a8\" (UID: \"c5b114e0-d6bb-432d-889e-02ebc68c44a8\") " Apr 16 18:45:38.691321 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:38.691288 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5b114e0-d6bb-432d-889e-02ebc68c44a8-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "c5b114e0-d6bb-432d-889e-02ebc68c44a8" (UID: "c5b114e0-d6bb-432d-889e-02ebc68c44a8"). InnerVolumeSpecName "openshift-service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 18:45:38.753988 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:38.753925 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-fd0ec-predictor-748987f945-xdm6p"] Apr 16 18:45:38.754181 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:38.754158 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c5b114e0-d6bb-432d-889e-02ebc68c44a8" containerName="model-chainer-raw-1a6d3" Apr 16 18:45:38.754181 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:38.754176 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5b114e0-d6bb-432d-889e-02ebc68c44a8" containerName="model-chainer-raw-1a6d3" Apr 16 18:45:38.754343 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:38.754238 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="c5b114e0-d6bb-432d-889e-02ebc68c44a8" containerName="model-chainer-raw-1a6d3" Apr 16 18:45:38.754343 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:38.754251 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="c5b114e0-d6bb-432d-889e-02ebc68c44a8" containerName="model-chainer-raw-1a6d3" Apr 16 18:45:38.754343 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:38.754334 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c5b114e0-d6bb-432d-889e-02ebc68c44a8" containerName="model-chainer-raw-1a6d3" Apr 16 18:45:38.754538 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:38.754346 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5b114e0-d6bb-432d-889e-02ebc68c44a8" containerName="model-chainer-raw-1a6d3" Apr 16 18:45:38.757331 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:38.757310 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-fd0ec-predictor-748987f945-xdm6p" Apr 16 18:45:38.766170 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:38.766147 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-fd0ec-predictor-748987f945-xdm6p"] Apr 16 18:45:38.791750 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:38.791726 2569 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5b114e0-d6bb-432d-889e-02ebc68c44a8-openshift-service-ca-bundle\") on node \"ip-10-0-142-225.ec2.internal\" DevicePath \"\"" Apr 16 18:45:38.892125 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:38.892096 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/0e5aea4b-904f-41e8-90e8-24605d00d302-kserve-provision-location\") pod \"isvc-xgboost-graph-raw-hpa-fd0ec-predictor-748987f945-xdm6p\" (UID: \"0e5aea4b-904f-41e8-90e8-24605d00d302\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-fd0ec-predictor-748987f945-xdm6p" Apr 16 18:45:38.992964 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:38.992935 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/0e5aea4b-904f-41e8-90e8-24605d00d302-kserve-provision-location\") pod \"isvc-xgboost-graph-raw-hpa-fd0ec-predictor-748987f945-xdm6p\" (UID: \"0e5aea4b-904f-41e8-90e8-24605d00d302\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-fd0ec-predictor-748987f945-xdm6p" Apr 16 18:45:38.993275 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:38.993256 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/0e5aea4b-904f-41e8-90e8-24605d00d302-kserve-provision-location\") pod 
\"isvc-xgboost-graph-raw-hpa-fd0ec-predictor-748987f945-xdm6p\" (UID: \"0e5aea4b-904f-41e8-90e8-24605d00d302\") " pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-fd0ec-predictor-748987f945-xdm6p" Apr 16 18:45:39.009314 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:39.009248 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-raw-1a6d3-78c446778d-kdhcl" event={"ID":"c5b114e0-d6bb-432d-889e-02ebc68c44a8","Type":"ContainerDied","Data":"dd98e5edabf0fb420d298969a0e9a402c302b7819b8e31bd30bc8dcc05e59dec"} Apr 16 18:45:39.009314 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:39.009290 2569 scope.go:117] "RemoveContainer" containerID="8cee3d70048c141baf65b02c4284417a6af0fde562dc3709b6b667d5be1b2db2" Apr 16 18:45:39.009314 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:39.009293 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-raw-1a6d3-78c446778d-kdhcl" Apr 16 18:45:39.027744 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:39.027722 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-1a6d3-78c446778d-kdhcl"] Apr 16 18:45:39.031208 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:39.031185 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-1a6d3-78c446778d-kdhcl"] Apr 16 18:45:39.069173 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:39.069152 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-fd0ec-predictor-748987f945-xdm6p" Apr 16 18:45:39.185717 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:39.185690 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-fd0ec-predictor-748987f945-xdm6p"] Apr 16 18:45:39.189043 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:45:39.189013 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e5aea4b_904f_41e8_90e8_24605d00d302.slice/crio-0187756919fd6beb873c5b28eb465c83db457ab68da29f3ff57d3aa704f0783a WatchSource:0}: Error finding container 0187756919fd6beb873c5b28eb465c83db457ab68da29f3ff57d3aa704f0783a: Status 404 returned error can't find the container with id 0187756919fd6beb873c5b28eb465c83db457ab68da29f3ff57d3aa704f0783a Apr 16 18:45:39.556258 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:39.556217 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5b114e0-d6bb-432d-889e-02ebc68c44a8" path="/var/lib/kubelet/pods/c5b114e0-d6bb-432d-889e-02ebc68c44a8/volumes" Apr 16 18:45:40.013188 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:40.013153 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-fd0ec-predictor-748987f945-xdm6p" event={"ID":"0e5aea4b-904f-41e8-90e8-24605d00d302","Type":"ContainerStarted","Data":"17bd6040ab2dedc16a6c93d5e7a353b17b8ec842d4d839c15f13c8845c4a27c4"} Apr 16 18:45:40.013188 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:40.013188 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-fd0ec-predictor-748987f945-xdm6p" event={"ID":"0e5aea4b-904f-41e8-90e8-24605d00d302","Type":"ContainerStarted","Data":"0187756919fd6beb873c5b28eb465c83db457ab68da29f3ff57d3aa704f0783a"} Apr 16 18:45:42.021305 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:42.021276 2569 
generic.go:358] "Generic (PLEG): container finished" podID="90c5376f-d100-46e2-ad46-05e58f34b6af" containerID="3500ea5d9d77643d87ccf13ba61d3efd3397e2afc2cc6bc9dece2cbd7d85e448" exitCode=0 Apr 16 18:45:42.021637 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:42.021338 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-1a6d3-predictor-6ddf8d4b44-2vczr" event={"ID":"90c5376f-d100-46e2-ad46-05e58f34b6af","Type":"ContainerDied","Data":"3500ea5d9d77643d87ccf13ba61d3efd3397e2afc2cc6bc9dece2cbd7d85e448"} Apr 16 18:45:42.062704 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:42.062682 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-1a6d3-predictor-6ddf8d4b44-2vczr" Apr 16 18:45:42.215584 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:42.215558 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/90c5376f-d100-46e2-ad46-05e58f34b6af-kserve-provision-location\") pod \"90c5376f-d100-46e2-ad46-05e58f34b6af\" (UID: \"90c5376f-d100-46e2-ad46-05e58f34b6af\") " Apr 16 18:45:42.215870 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:42.215849 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90c5376f-d100-46e2-ad46-05e58f34b6af-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "90c5376f-d100-46e2-ad46-05e58f34b6af" (UID: "90c5376f-d100-46e2-ad46-05e58f34b6af"). InnerVolumeSpecName "kserve-provision-location". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 18:45:42.316305 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:42.316277 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/90c5376f-d100-46e2-ad46-05e58f34b6af-kserve-provision-location\") on node \"ip-10-0-142-225.ec2.internal\" DevicePath \"\"" Apr 16 18:45:43.026158 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:43.026131 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-1a6d3-predictor-6ddf8d4b44-2vczr" Apr 16 18:45:43.026511 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:43.026130 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-1a6d3-predictor-6ddf8d4b44-2vczr" event={"ID":"90c5376f-d100-46e2-ad46-05e58f34b6af","Type":"ContainerDied","Data":"789cebe0f617b5487e8a3afe0fb1bd1a0c00cff83b6e22568822941d740246c0"} Apr 16 18:45:43.026511 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:43.026240 2569 scope.go:117] "RemoveContainer" containerID="3500ea5d9d77643d87ccf13ba61d3efd3397e2afc2cc6bc9dece2cbd7d85e448" Apr 16 18:45:43.027519 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:43.027498 2569 generic.go:358] "Generic (PLEG): container finished" podID="0e5aea4b-904f-41e8-90e8-24605d00d302" containerID="17bd6040ab2dedc16a6c93d5e7a353b17b8ec842d4d839c15f13c8845c4a27c4" exitCode=0 Apr 16 18:45:43.027598 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:43.027528 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-fd0ec-predictor-748987f945-xdm6p" event={"ID":"0e5aea4b-904f-41e8-90e8-24605d00d302","Type":"ContainerDied","Data":"17bd6040ab2dedc16a6c93d5e7a353b17b8ec842d4d839c15f13c8845c4a27c4"} Apr 16 18:45:43.033481 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:43.033463 2569 scope.go:117] "RemoveContainer" 
containerID="c2d51c9d6274af32242516fa035c91c9af36e6422a5dbe68037d2aec0ba4af19" Apr 16 18:45:43.059792 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:43.059756 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-1a6d3-predictor-6ddf8d4b44-2vczr"] Apr 16 18:45:43.064099 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:43.064071 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-1a6d3-predictor-6ddf8d4b44-2vczr"] Apr 16 18:45:43.556708 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:43.556669 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90c5376f-d100-46e2-ad46-05e58f34b6af" path="/var/lib/kubelet/pods/90c5376f-d100-46e2-ad46-05e58f34b6af/volumes" Apr 16 18:45:44.032400 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:44.032342 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-fd0ec-predictor-748987f945-xdm6p" event={"ID":"0e5aea4b-904f-41e8-90e8-24605d00d302","Type":"ContainerStarted","Data":"3b0710808d92c837fd3d231d72f88320a23e1a71f22d14dbbd27e8acc0a5ef5f"} Apr 16 18:45:44.032825 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:44.032690 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-fd0ec-predictor-748987f945-xdm6p" Apr 16 18:45:44.034055 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:44.034026 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-fd0ec-predictor-748987f945-xdm6p" podUID="0e5aea4b-904f-41e8-90e8-24605d00d302" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.19:8080: connect: connection refused" Apr 16 18:45:44.050046 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:44.050009 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-fd0ec-predictor-748987f945-xdm6p" podStartSLOduration=6.049997278 podStartE2EDuration="6.049997278s" podCreationTimestamp="2026-04-16 18:45:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:45:44.049125447 +0000 UTC m=+946.941442813" watchObservedRunningTime="2026-04-16 18:45:44.049997278 +0000 UTC m=+946.942314643" Apr 16 18:45:45.036598 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:45.036560 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-fd0ec-predictor-748987f945-xdm6p" podUID="0e5aea4b-904f-41e8-90e8-24605d00d302" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.19:8080: connect: connection refused" Apr 16 18:45:55.036746 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:45:55.036696 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-fd0ec-predictor-748987f945-xdm6p" podUID="0e5aea4b-904f-41e8-90e8-24605d00d302" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.19:8080: connect: connection refused" Apr 16 18:46:05.036838 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:46:05.036790 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-fd0ec-predictor-748987f945-xdm6p" podUID="0e5aea4b-904f-41e8-90e8-24605d00d302" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.19:8080: connect: connection refused" Apr 16 18:46:15.037055 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:46:15.036962 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-fd0ec-predictor-748987f945-xdm6p" podUID="0e5aea4b-904f-41e8-90e8-24605d00d302" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.19:8080: 
connect: connection refused" Apr 16 18:46:25.036763 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:46:25.036715 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-fd0ec-predictor-748987f945-xdm6p" podUID="0e5aea4b-904f-41e8-90e8-24605d00d302" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.19:8080: connect: connection refused" Apr 16 18:46:35.036815 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:46:35.036772 2569 prober.go:120] "Probe failed" probeType="Readiness" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-fd0ec-predictor-748987f945-xdm6p" podUID="0e5aea4b-904f-41e8-90e8-24605d00d302" containerName="kserve-container" probeResult="failure" output="dial tcp 10.132.0.19:8080: connect: connection refused" Apr 16 18:46:45.038561 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:46:45.038531 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-fd0ec-predictor-748987f945-xdm6p" Apr 16 18:47:08.760860 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:08.760829 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-hpa-fd0ec-79c8d5fc87-88r29"] Apr 16 18:47:08.761441 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:08.761216 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="90c5376f-d100-46e2-ad46-05e58f34b6af" containerName="storage-initializer" Apr 16 18:47:08.761441 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:08.761235 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="90c5376f-d100-46e2-ad46-05e58f34b6af" containerName="storage-initializer" Apr 16 18:47:08.761441 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:08.761258 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="90c5376f-d100-46e2-ad46-05e58f34b6af" containerName="kserve-container" Apr 16 18:47:08.761441 ip-10-0-142-225 
kubenswrapper[2569]: I0416 18:47:08.761267 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="90c5376f-d100-46e2-ad46-05e58f34b6af" containerName="kserve-container" Apr 16 18:47:08.761441 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:08.761339 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="90c5376f-d100-46e2-ad46-05e58f34b6af" containerName="kserve-container" Apr 16 18:47:08.764244 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:08.764219 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-fd0ec-79c8d5fc87-88r29" Apr 16 18:47:08.766785 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:08.766766 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"kserve-ci-e2e-test\"/\"openshift-service-ca.crt\"" Apr 16 18:47:08.773478 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:08.773454 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-hpa-fd0ec-79c8d5fc87-88r29"] Apr 16 18:47:08.807064 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:08.807041 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f376ce5e-0bdb-4112-9699-d3138855f6ba-openshift-service-ca-bundle\") pod \"model-chainer-raw-hpa-fd0ec-79c8d5fc87-88r29\" (UID: \"f376ce5e-0bdb-4112-9699-d3138855f6ba\") " pod="kserve-ci-e2e-test/model-chainer-raw-hpa-fd0ec-79c8d5fc87-88r29" Apr 16 18:47:08.907811 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:08.907786 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f376ce5e-0bdb-4112-9699-d3138855f6ba-openshift-service-ca-bundle\") pod \"model-chainer-raw-hpa-fd0ec-79c8d5fc87-88r29\" (UID: \"f376ce5e-0bdb-4112-9699-d3138855f6ba\") " 
pod="kserve-ci-e2e-test/model-chainer-raw-hpa-fd0ec-79c8d5fc87-88r29" Apr 16 18:47:08.908362 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:08.908340 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f376ce5e-0bdb-4112-9699-d3138855f6ba-openshift-service-ca-bundle\") pod \"model-chainer-raw-hpa-fd0ec-79c8d5fc87-88r29\" (UID: \"f376ce5e-0bdb-4112-9699-d3138855f6ba\") " pod="kserve-ci-e2e-test/model-chainer-raw-hpa-fd0ec-79c8d5fc87-88r29" Apr 16 18:47:09.075601 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:09.075521 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-fd0ec-79c8d5fc87-88r29" Apr 16 18:47:09.190046 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:09.190022 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-hpa-fd0ec-79c8d5fc87-88r29"] Apr 16 18:47:09.191767 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:47:09.191735 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf376ce5e_0bdb_4112_9699_d3138855f6ba.slice/crio-b92a51c107e9e18e717a42100c9caaf340166214da847b0280ae4f34148c9d01 WatchSource:0}: Error finding container b92a51c107e9e18e717a42100c9caaf340166214da847b0280ae4f34148c9d01: Status 404 returned error can't find the container with id b92a51c107e9e18e717a42100c9caaf340166214da847b0280ae4f34148c9d01 Apr 16 18:47:09.254125 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:09.254080 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-fd0ec-79c8d5fc87-88r29" event={"ID":"f376ce5e-0bdb-4112-9699-d3138855f6ba","Type":"ContainerStarted","Data":"22dff7273e322fd4b0e6d69d6d504dff8f2102a482941188a7a554eb0eafb802"} Apr 16 18:47:09.254263 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:09.254135 2569 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-fd0ec-79c8d5fc87-88r29" event={"ID":"f376ce5e-0bdb-4112-9699-d3138855f6ba","Type":"ContainerStarted","Data":"b92a51c107e9e18e717a42100c9caaf340166214da847b0280ae4f34148c9d01"} Apr 16 18:47:09.254263 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:09.254185 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-fd0ec-79c8d5fc87-88r29" Apr 16 18:47:09.273898 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:09.273855 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-fd0ec-79c8d5fc87-88r29" podStartSLOduration=1.2738385110000001 podStartE2EDuration="1.273838511s" podCreationTimestamp="2026-04-16 18:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:47:09.272703092 +0000 UTC m=+1032.165020455" watchObservedRunningTime="2026-04-16 18:47:09.273838511 +0000 UTC m=+1032.166155877" Apr 16 18:47:10.258218 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:10.258181 2569 generic.go:358] "Generic (PLEG): container finished" podID="f376ce5e-0bdb-4112-9699-d3138855f6ba" containerID="22dff7273e322fd4b0e6d69d6d504dff8f2102a482941188a7a554eb0eafb802" exitCode=1 Apr 16 18:47:10.258615 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:10.258246 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-fd0ec-79c8d5fc87-88r29" event={"ID":"f376ce5e-0bdb-4112-9699-d3138855f6ba","Type":"ContainerDied","Data":"22dff7273e322fd4b0e6d69d6d504dff8f2102a482941188a7a554eb0eafb802"} Apr 16 18:47:10.258615 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:10.258510 2569 scope.go:117] "RemoveContainer" containerID="22dff7273e322fd4b0e6d69d6d504dff8f2102a482941188a7a554eb0eafb802" Apr 16 18:47:11.262456 ip-10-0-142-225 kubenswrapper[2569]: 
I0416 18:47:11.262421 2569 generic.go:358] "Generic (PLEG): container finished" podID="f376ce5e-0bdb-4112-9699-d3138855f6ba" containerID="61407c22ff0d685af675e3dda39a5ebfeba4dcd61f533c9efb36ffc3cc54c3e2" exitCode=1 Apr 16 18:47:11.262862 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:11.262478 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-fd0ec-79c8d5fc87-88r29" event={"ID":"f376ce5e-0bdb-4112-9699-d3138855f6ba","Type":"ContainerDied","Data":"61407c22ff0d685af675e3dda39a5ebfeba4dcd61f533c9efb36ffc3cc54c3e2"} Apr 16 18:47:11.262862 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:11.262514 2569 scope.go:117] "RemoveContainer" containerID="22dff7273e322fd4b0e6d69d6d504dff8f2102a482941188a7a554eb0eafb802" Apr 16 18:47:11.262862 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:11.262678 2569 scope.go:117] "RemoveContainer" containerID="61407c22ff0d685af675e3dda39a5ebfeba4dcd61f533c9efb36ffc3cc54c3e2" Apr 16 18:47:11.262998 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:47:11.262862 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"model-chainer-raw-hpa-fd0ec\" with CrashLoopBackOff: \"back-off 10s restarting failed container=model-chainer-raw-hpa-fd0ec pod=model-chainer-raw-hpa-fd0ec-79c8d5fc87-88r29_kserve-ci-e2e-test(f376ce5e-0bdb-4112-9699-d3138855f6ba)\"" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-fd0ec-79c8d5fc87-88r29" podUID="f376ce5e-0bdb-4112-9699-d3138855f6ba" Apr 16 18:47:12.267018 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:12.266994 2569 scope.go:117] "RemoveContainer" containerID="61407c22ff0d685af675e3dda39a5ebfeba4dcd61f533c9efb36ffc3cc54c3e2" Apr 16 18:47:12.267406 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:47:12.267175 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"model-chainer-raw-hpa-fd0ec\" with CrashLoopBackOff: \"back-off 10s restarting failed container=model-chainer-raw-hpa-fd0ec 
pod=model-chainer-raw-hpa-fd0ec-79c8d5fc87-88r29_kserve-ci-e2e-test(f376ce5e-0bdb-4112-9699-d3138855f6ba)\"" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-fd0ec-79c8d5fc87-88r29" podUID="f376ce5e-0bdb-4112-9699-d3138855f6ba" Apr 16 18:47:14.254950 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:14.254893 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-fd0ec-79c8d5fc87-88r29" Apr 16 18:47:14.255464 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:14.255404 2569 scope.go:117] "RemoveContainer" containerID="61407c22ff0d685af675e3dda39a5ebfeba4dcd61f533c9efb36ffc3cc54c3e2" Apr 16 18:47:14.255634 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:47:14.255612 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"model-chainer-raw-hpa-fd0ec\" with CrashLoopBackOff: \"back-off 10s restarting failed container=model-chainer-raw-hpa-fd0ec pod=model-chainer-raw-hpa-fd0ec-79c8d5fc87-88r29_kserve-ci-e2e-test(f376ce5e-0bdb-4112-9699-d3138855f6ba)\"" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-fd0ec-79c8d5fc87-88r29" podUID="f376ce5e-0bdb-4112-9699-d3138855f6ba" Apr 16 18:47:18.851114 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:18.851081 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-hpa-fd0ec-79c8d5fc87-88r29"] Apr 16 18:47:18.971494 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:18.971466 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-fd0ec-predictor-748987f945-xdm6p"] Apr 16 18:47:18.971795 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:18.971760 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-fd0ec-predictor-748987f945-xdm6p" podUID="0e5aea4b-904f-41e8-90e8-24605d00d302" containerName="kserve-container" 
containerID="cri-o://3b0710808d92c837fd3d231d72f88320a23e1a71f22d14dbbd27e8acc0a5ef5f" gracePeriod=30 Apr 16 18:47:18.985540 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:18.985516 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-fd0ec-79c8d5fc87-88r29" Apr 16 18:47:19.026814 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:19.026784 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["kserve-ci-e2e-test/message-dumper-raw-79732-predictor-77bcc96986-82s7n"] Apr 16 18:47:19.027046 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:19.027033 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f376ce5e-0bdb-4112-9699-d3138855f6ba" containerName="model-chainer-raw-hpa-fd0ec" Apr 16 18:47:19.027089 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:19.027048 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="f376ce5e-0bdb-4112-9699-d3138855f6ba" containerName="model-chainer-raw-hpa-fd0ec" Apr 16 18:47:19.027089 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:19.027066 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f376ce5e-0bdb-4112-9699-d3138855f6ba" containerName="model-chainer-raw-hpa-fd0ec" Apr 16 18:47:19.027089 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:19.027073 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="f376ce5e-0bdb-4112-9699-d3138855f6ba" containerName="model-chainer-raw-hpa-fd0ec" Apr 16 18:47:19.027217 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:19.027118 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="f376ce5e-0bdb-4112-9699-d3138855f6ba" containerName="model-chainer-raw-hpa-fd0ec" Apr 16 18:47:19.031135 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:19.031115 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/message-dumper-raw-79732-predictor-77bcc96986-82s7n" Apr 16 18:47:19.039046 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:19.039022 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/message-dumper-raw-79732-predictor-77bcc96986-82s7n"] Apr 16 18:47:19.041683 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:19.041573 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/message-dumper-raw-79732-predictor-77bcc96986-82s7n" Apr 16 18:47:19.073431 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:19.073408 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f376ce5e-0bdb-4112-9699-d3138855f6ba-openshift-service-ca-bundle\") pod \"f376ce5e-0bdb-4112-9699-d3138855f6ba\" (UID: \"f376ce5e-0bdb-4112-9699-d3138855f6ba\") " Apr 16 18:47:19.073726 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:19.073706 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f376ce5e-0bdb-4112-9699-d3138855f6ba-openshift-service-ca-bundle" (OuterVolumeSpecName: "openshift-service-ca-bundle") pod "f376ce5e-0bdb-4112-9699-d3138855f6ba" (UID: "f376ce5e-0bdb-4112-9699-d3138855f6ba"). InnerVolumeSpecName "openshift-service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 18:47:19.160279 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:19.160248 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["kserve-ci-e2e-test/message-dumper-raw-79732-predictor-77bcc96986-82s7n"] Apr 16 18:47:19.163970 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:47:19.163935 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3ffb519_60f9_4431_baf6_36792845e4b2.slice/crio-d0da5882644cb5408e3876371fbcd400b189ba903307dac43b702e136a982c17 WatchSource:0}: Error finding container d0da5882644cb5408e3876371fbcd400b189ba903307dac43b702e136a982c17: Status 404 returned error can't find the container with id d0da5882644cb5408e3876371fbcd400b189ba903307dac43b702e136a982c17 Apr 16 18:47:19.174772 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:19.174750 2569 reconciler_common.go:299] "Volume detached for volume \"openshift-service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f376ce5e-0bdb-4112-9699-d3138855f6ba-openshift-service-ca-bundle\") on node \"ip-10-0-142-225.ec2.internal\" DevicePath \"\"" Apr 16 18:47:19.286069 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:19.286035 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/message-dumper-raw-79732-predictor-77bcc96986-82s7n" event={"ID":"e3ffb519-60f9-4431-baf6-36792845e4b2","Type":"ContainerStarted","Data":"d0da5882644cb5408e3876371fbcd400b189ba903307dac43b702e136a982c17"} Apr 16 18:47:19.287095 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:19.287063 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-fd0ec-79c8d5fc87-88r29" event={"ID":"f376ce5e-0bdb-4112-9699-d3138855f6ba","Type":"ContainerDied","Data":"b92a51c107e9e18e717a42100c9caaf340166214da847b0280ae4f34148c9d01"} Apr 16 18:47:19.287178 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:19.287101 2569 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/model-chainer-raw-hpa-fd0ec-79c8d5fc87-88r29" Apr 16 18:47:19.287212 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:19.287102 2569 scope.go:117] "RemoveContainer" containerID="61407c22ff0d685af675e3dda39a5ebfeba4dcd61f533c9efb36ffc3cc54c3e2" Apr 16 18:47:19.310830 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:19.310807 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-hpa-fd0ec-79c8d5fc87-88r29"] Apr 16 18:47:19.314708 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:19.314687 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/model-chainer-raw-hpa-fd0ec-79c8d5fc87-88r29"] Apr 16 18:47:19.556267 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:19.556241 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f376ce5e-0bdb-4112-9699-d3138855f6ba" path="/var/lib/kubelet/pods/f376ce5e-0bdb-4112-9699-d3138855f6ba/volumes" Apr 16 18:47:20.291410 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:20.291362 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/message-dumper-raw-79732-predictor-77bcc96986-82s7n" event={"ID":"e3ffb519-60f9-4431-baf6-36792845e4b2","Type":"ContainerStarted","Data":"69f8607b7fc1a21d4d578a9ba62286d6f8d8a8942555672710269b2189054ada"} Apr 16 18:47:20.291820 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:20.291585 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="kserve-ci-e2e-test/message-dumper-raw-79732-predictor-77bcc96986-82s7n" Apr 16 18:47:20.293438 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:20.293419 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="kserve-ci-e2e-test/message-dumper-raw-79732-predictor-77bcc96986-82s7n" Apr 16 18:47:20.307290 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:20.307240 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kserve-ci-e2e-test/message-dumper-raw-79732-predictor-77bcc96986-82s7n" podStartSLOduration=0.35814377 podStartE2EDuration="1.307227998s" podCreationTimestamp="2026-04-16 18:47:19 +0000 UTC" firstStartedPulling="2026-04-16 18:47:19.166035049 +0000 UTC m=+1042.058352393" lastFinishedPulling="2026-04-16 18:47:20.115119276 +0000 UTC m=+1043.007436621" observedRunningTime="2026-04-16 18:47:20.306466757 +0000 UTC m=+1043.198784134" watchObservedRunningTime="2026-04-16 18:47:20.307227998 +0000 UTC m=+1043.199545356" Apr 16 18:47:22.299975 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:22.299945 2569 generic.go:358] "Generic (PLEG): container finished" podID="0e5aea4b-904f-41e8-90e8-24605d00d302" containerID="3b0710808d92c837fd3d231d72f88320a23e1a71f22d14dbbd27e8acc0a5ef5f" exitCode=0 Apr 16 18:47:22.300419 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:22.299979 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-fd0ec-predictor-748987f945-xdm6p" event={"ID":"0e5aea4b-904f-41e8-90e8-24605d00d302","Type":"ContainerDied","Data":"3b0710808d92c837fd3d231d72f88320a23e1a71f22d14dbbd27e8acc0a5ef5f"} Apr 16 18:47:22.315305 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:22.315286 2569 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-fd0ec-predictor-748987f945-xdm6p" Apr 16 18:47:22.398610 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:22.398580 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/0e5aea4b-904f-41e8-90e8-24605d00d302-kserve-provision-location\") pod \"0e5aea4b-904f-41e8-90e8-24605d00d302\" (UID: \"0e5aea4b-904f-41e8-90e8-24605d00d302\") " Apr 16 18:47:22.398893 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:22.398871 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e5aea4b-904f-41e8-90e8-24605d00d302-kserve-provision-location" (OuterVolumeSpecName: "kserve-provision-location") pod "0e5aea4b-904f-41e8-90e8-24605d00d302" (UID: "0e5aea4b-904f-41e8-90e8-24605d00d302"). InnerVolumeSpecName "kserve-provision-location". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Apr 16 18:47:22.499250 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:22.499216 2569 reconciler_common.go:299] "Volume detached for volume \"kserve-provision-location\" (UniqueName: \"kubernetes.io/empty-dir/0e5aea4b-904f-41e8-90e8-24605d00d302-kserve-provision-location\") on node \"ip-10-0-142-225.ec2.internal\" DevicePath \"\"" Apr 16 18:47:23.303627 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:23.303596 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-fd0ec-predictor-748987f945-xdm6p" event={"ID":"0e5aea4b-904f-41e8-90e8-24605d00d302","Type":"ContainerDied","Data":"0187756919fd6beb873c5b28eb465c83db457ab68da29f3ff57d3aa704f0783a"} Apr 16 18:47:23.303627 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:23.303624 2569 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-fd0ec-predictor-748987f945-xdm6p" Apr 16 18:47:23.304068 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:23.303639 2569 scope.go:117] "RemoveContainer" containerID="3b0710808d92c837fd3d231d72f88320a23e1a71f22d14dbbd27e8acc0a5ef5f" Apr 16 18:47:23.313090 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:23.313075 2569 scope.go:117] "RemoveContainer" containerID="17bd6040ab2dedc16a6c93d5e7a353b17b8ec842d4d839c15f13c8845c4a27c4" Apr 16 18:47:23.325329 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:23.325303 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-fd0ec-predictor-748987f945-xdm6p"] Apr 16 18:47:23.331915 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:23.331895 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/isvc-xgboost-graph-raw-hpa-fd0ec-predictor-748987f945-xdm6p"] Apr 16 18:47:23.560773 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:47:23.557895 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e5aea4b-904f-41e8-90e8-24605d00d302" path="/var/lib/kubelet/pods/0e5aea4b-904f-41e8-90e8-24605d00d302/volumes" Apr 16 18:48:54.165663 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:48:54.165627 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve-ci-e2e-test_message-dumper-raw-79732-predictor-77bcc96986-82s7n_e3ffb519-60f9-4431-baf6-36792845e4b2/kserve-container/0.log" Apr 16 18:48:54.665816 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:48:54.665784 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/message-dumper-raw-79732-predictor-77bcc96986-82s7n"] Apr 16 18:48:54.666047 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:48:54.666008 2569 kuberuntime_container.go:864] "Killing container with a grace period" pod="kserve-ci-e2e-test/message-dumper-raw-79732-predictor-77bcc96986-82s7n" podUID="e3ffb519-60f9-4431-baf6-36792845e4b2" 
containerName="kserve-container" containerID="cri-o://69f8607b7fc1a21d4d578a9ba62286d6f8d8a8942555672710269b2189054ada" gracePeriod=30 Apr 16 18:48:54.898305 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:48:54.898284 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/message-dumper-raw-79732-predictor-77bcc96986-82s7n" Apr 16 18:48:55.552530 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:48:55.552497 2569 generic.go:358] "Generic (PLEG): container finished" podID="e3ffb519-60f9-4431-baf6-36792845e4b2" containerID="69f8607b7fc1a21d4d578a9ba62286d6f8d8a8942555672710269b2189054ada" exitCode=2 Apr 16 18:48:55.552926 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:48:55.552591 2569 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kserve-ci-e2e-test/message-dumper-raw-79732-predictor-77bcc96986-82s7n" Apr 16 18:48:55.555882 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:48:55.555856 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/message-dumper-raw-79732-predictor-77bcc96986-82s7n" event={"ID":"e3ffb519-60f9-4431-baf6-36792845e4b2","Type":"ContainerDied","Data":"69f8607b7fc1a21d4d578a9ba62286d6f8d8a8942555672710269b2189054ada"} Apr 16 18:48:55.556013 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:48:55.555896 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="kserve-ci-e2e-test/message-dumper-raw-79732-predictor-77bcc96986-82s7n" event={"ID":"e3ffb519-60f9-4431-baf6-36792845e4b2","Type":"ContainerDied","Data":"d0da5882644cb5408e3876371fbcd400b189ba903307dac43b702e136a982c17"} Apr 16 18:48:55.556013 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:48:55.555932 2569 scope.go:117] "RemoveContainer" containerID="69f8607b7fc1a21d4d578a9ba62286d6f8d8a8942555672710269b2189054ada" Apr 16 18:48:55.564274 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:48:55.564259 2569 scope.go:117] "RemoveContainer" 
containerID="69f8607b7fc1a21d4d578a9ba62286d6f8d8a8942555672710269b2189054ada" Apr 16 18:48:55.564526 ip-10-0-142-225 kubenswrapper[2569]: E0416 18:48:55.564507 2569 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69f8607b7fc1a21d4d578a9ba62286d6f8d8a8942555672710269b2189054ada\": container with ID starting with 69f8607b7fc1a21d4d578a9ba62286d6f8d8a8942555672710269b2189054ada not found: ID does not exist" containerID="69f8607b7fc1a21d4d578a9ba62286d6f8d8a8942555672710269b2189054ada" Apr 16 18:48:55.564573 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:48:55.564534 2569 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69f8607b7fc1a21d4d578a9ba62286d6f8d8a8942555672710269b2189054ada"} err="failed to get container status \"69f8607b7fc1a21d4d578a9ba62286d6f8d8a8942555672710269b2189054ada\": rpc error: code = NotFound desc = could not find container \"69f8607b7fc1a21d4d578a9ba62286d6f8d8a8942555672710269b2189054ada\": container with ID starting with 69f8607b7fc1a21d4d578a9ba62286d6f8d8a8942555672710269b2189054ada not found: ID does not exist" Apr 16 18:48:55.584926 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:48:55.584902 2569 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["kserve-ci-e2e-test/message-dumper-raw-79732-predictor-77bcc96986-82s7n"] Apr 16 18:48:55.592425 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:48:55.592407 2569 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["kserve-ci-e2e-test/message-dumper-raw-79732-predictor-77bcc96986-82s7n"] Apr 16 18:48:57.556074 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:48:57.556044 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3ffb519-60f9-4431-baf6-36792845e4b2" path="/var/lib/kubelet/pods/e3ffb519-60f9-4431-baf6-36792845e4b2/volumes" Apr 16 18:49:57.554888 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:49:57.554861 2569 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5gbp_8a6cc626-96f9-4f30-a283-abdb6733cdac/ovn-acl-logging/0.log" Apr 16 18:49:57.555994 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:49:57.555968 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5gbp_8a6cc626-96f9-4f30-a283-abdb6733cdac/ovn-acl-logging/0.log" Apr 16 18:54:57.573905 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:54:57.573878 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5gbp_8a6cc626-96f9-4f30-a283-abdb6733cdac/ovn-acl-logging/0.log" Apr 16 18:54:57.574414 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:54:57.574195 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5gbp_8a6cc626-96f9-4f30-a283-abdb6733cdac/ovn-acl-logging/0.log" Apr 16 18:56:10.761599 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:10.761564 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-h2sf7/must-gather-ffzbd"] Apr 16 18:56:10.762045 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:10.761800 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0e5aea4b-904f-41e8-90e8-24605d00d302" containerName="storage-initializer" Apr 16 18:56:10.762045 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:10.761812 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e5aea4b-904f-41e8-90e8-24605d00d302" containerName="storage-initializer" Apr 16 18:56:10.762045 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:10.761826 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0e5aea4b-904f-41e8-90e8-24605d00d302" containerName="kserve-container" Apr 16 18:56:10.762045 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:10.761832 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e5aea4b-904f-41e8-90e8-24605d00d302" containerName="kserve-container" Apr 16 18:56:10.762045 
ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:10.761843 2569 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e3ffb519-60f9-4431-baf6-36792845e4b2" containerName="kserve-container" Apr 16 18:56:10.762045 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:10.761852 2569 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3ffb519-60f9-4431-baf6-36792845e4b2" containerName="kserve-container" Apr 16 18:56:10.762045 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:10.761897 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="f376ce5e-0bdb-4112-9699-d3138855f6ba" containerName="model-chainer-raw-hpa-fd0ec" Apr 16 18:56:10.762045 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:10.761905 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="e3ffb519-60f9-4431-baf6-36792845e4b2" containerName="kserve-container" Apr 16 18:56:10.762045 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:10.761911 2569 memory_manager.go:356] "RemoveStaleState removing state" podUID="0e5aea4b-904f-41e8-90e8-24605d00d302" containerName="kserve-container" Apr 16 18:56:10.764686 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:10.764671 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-h2sf7/must-gather-ffzbd" Apr 16 18:56:10.767118 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:10.767096 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-h2sf7\"/\"kube-root-ca.crt\"" Apr 16 18:56:10.767250 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:10.767098 2569 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-h2sf7\"/\"openshift-service-ca.crt\"" Apr 16 18:56:10.767250 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:10.767099 2569 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-h2sf7\"/\"default-dockercfg-vvgvg\"" Apr 16 18:56:10.773601 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:10.773579 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-h2sf7/must-gather-ffzbd"] Apr 16 18:56:10.884816 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:10.884774 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgl6p\" (UniqueName: \"kubernetes.io/projected/c4393cb7-f712-44de-9314-900f4e7922f7-kube-api-access-bgl6p\") pod \"must-gather-ffzbd\" (UID: \"c4393cb7-f712-44de-9314-900f4e7922f7\") " pod="openshift-must-gather-h2sf7/must-gather-ffzbd" Apr 16 18:56:10.884816 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:10.884823 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c4393cb7-f712-44de-9314-900f4e7922f7-must-gather-output\") pod \"must-gather-ffzbd\" (UID: \"c4393cb7-f712-44de-9314-900f4e7922f7\") " pod="openshift-must-gather-h2sf7/must-gather-ffzbd" Apr 16 18:56:10.985164 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:10.985134 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bgl6p\" (UniqueName: 
\"kubernetes.io/projected/c4393cb7-f712-44de-9314-900f4e7922f7-kube-api-access-bgl6p\") pod \"must-gather-ffzbd\" (UID: \"c4393cb7-f712-44de-9314-900f4e7922f7\") " pod="openshift-must-gather-h2sf7/must-gather-ffzbd" Apr 16 18:56:10.985164 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:10.985169 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c4393cb7-f712-44de-9314-900f4e7922f7-must-gather-output\") pod \"must-gather-ffzbd\" (UID: \"c4393cb7-f712-44de-9314-900f4e7922f7\") " pod="openshift-must-gather-h2sf7/must-gather-ffzbd" Apr 16 18:56:10.985481 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:10.985464 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c4393cb7-f712-44de-9314-900f4e7922f7-must-gather-output\") pod \"must-gather-ffzbd\" (UID: \"c4393cb7-f712-44de-9314-900f4e7922f7\") " pod="openshift-must-gather-h2sf7/must-gather-ffzbd" Apr 16 18:56:10.993337 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:10.993315 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgl6p\" (UniqueName: \"kubernetes.io/projected/c4393cb7-f712-44de-9314-900f4e7922f7-kube-api-access-bgl6p\") pod \"must-gather-ffzbd\" (UID: \"c4393cb7-f712-44de-9314-900f4e7922f7\") " pod="openshift-must-gather-h2sf7/must-gather-ffzbd" Apr 16 18:56:11.073432 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:11.073350 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-h2sf7/must-gather-ffzbd" Apr 16 18:56:11.183320 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:11.183288 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-h2sf7/must-gather-ffzbd"] Apr 16 18:56:11.186353 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:56:11.186322 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4393cb7_f712_44de_9314_900f4e7922f7.slice/crio-3520149e7f0a2e18c67fd5de3aca2ce67b213838eb4ce277824d3e284b3eea50 WatchSource:0}: Error finding container 3520149e7f0a2e18c67fd5de3aca2ce67b213838eb4ce277824d3e284b3eea50: Status 404 returned error can't find the container with id 3520149e7f0a2e18c67fd5de3aca2ce67b213838eb4ce277824d3e284b3eea50 Apr 16 18:56:11.187997 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:11.187983 2569 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 16 18:56:11.645474 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:11.645444 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-h2sf7/must-gather-ffzbd" event={"ID":"c4393cb7-f712-44de-9314-900f4e7922f7","Type":"ContainerStarted","Data":"3520149e7f0a2e18c67fd5de3aca2ce67b213838eb4ce277824d3e284b3eea50"} Apr 16 18:56:12.651073 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:12.651014 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-h2sf7/must-gather-ffzbd" event={"ID":"c4393cb7-f712-44de-9314-900f4e7922f7","Type":"ContainerStarted","Data":"5fb1cb84b28486e1e0bd6d07dd1fdddcaf4164296d2e81372a76d14ea6efb53d"} Apr 16 18:56:12.651073 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:12.651056 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-h2sf7/must-gather-ffzbd" 
event={"ID":"c4393cb7-f712-44de-9314-900f4e7922f7","Type":"ContainerStarted","Data":"4742e8e9116a5727e518eb5345e28ba75f558e7222578a9051cc0b4b60e22b69"} Apr 16 18:56:12.669424 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:12.669350 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-h2sf7/must-gather-ffzbd" podStartSLOduration=1.833580116 podStartE2EDuration="2.6693332s" podCreationTimestamp="2026-04-16 18:56:10 +0000 UTC" firstStartedPulling="2026-04-16 18:56:11.18810479 +0000 UTC m=+1574.080422134" lastFinishedPulling="2026-04-16 18:56:12.023857866 +0000 UTC m=+1574.916175218" observedRunningTime="2026-04-16 18:56:12.668166538 +0000 UTC m=+1575.560483903" watchObservedRunningTime="2026-04-16 18:56:12.6693332 +0000 UTC m=+1575.561650566" Apr 16 18:56:13.485529 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:13.485495 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_global-pull-secret-syncer-8rfj7_6ca02c8d-5554-44e4-9884-f9e0bcd462ed/global-pull-secret-syncer/0.log" Apr 16 18:56:13.646538 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:13.646507 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_konnectivity-agent-24f2c_671b786a-255d-4021-84d2-9c0ed65bd8da/konnectivity-agent/0.log" Apr 16 18:56:13.772333 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:13.772251 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_kube-apiserver-proxy-ip-10-0-142-225.ec2.internal_46a218914bddbe8acb8c3bd91619bc9c/haproxy/0.log" Apr 16 18:56:17.556589 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:17.556547 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-mxfc8_6a63a325-dd7c-4e50-b378-3f71641300c3/node-exporter/0.log" Apr 16 18:56:17.586691 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:17.586662 2569 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-monitoring_node-exporter-mxfc8_6a63a325-dd7c-4e50-b378-3f71641300c3/kube-rbac-proxy/0.log" Apr 16 18:56:17.614053 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:17.614028 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-mxfc8_6a63a325-dd7c-4e50-b378-3f71641300c3/init-textfile/0.log" Apr 16 18:56:19.222128 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:19.222098 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-console_networking-console-plugin-5cb6cf4cb4-m422p_c6572ec9-4256-4030-a8db-98573ded7d80/networking-console-plugin/0.log" Apr 16 18:56:20.337456 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:20.337428 2569 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-h2sf7/perf-node-gather-daemonset-8f4qn"] Apr 16 18:56:20.341720 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:20.341691 2569 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-h2sf7/perf-node-gather-daemonset-8f4qn" Apr 16 18:56:20.349190 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:20.349140 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-h2sf7/perf-node-gather-daemonset-8f4qn"] Apr 16 18:56:20.361984 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:20.361955 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxk5s\" (UniqueName: \"kubernetes.io/projected/09d317fb-f4de-4bc7-a591-4309d0fc9205-kube-api-access-lxk5s\") pod \"perf-node-gather-daemonset-8f4qn\" (UID: \"09d317fb-f4de-4bc7-a591-4309d0fc9205\") " pod="openshift-must-gather-h2sf7/perf-node-gather-daemonset-8f4qn" Apr 16 18:56:20.362112 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:20.362000 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: 
\"kubernetes.io/host-path/09d317fb-f4de-4bc7-a591-4309d0fc9205-proc\") pod \"perf-node-gather-daemonset-8f4qn\" (UID: \"09d317fb-f4de-4bc7-a591-4309d0fc9205\") " pod="openshift-must-gather-h2sf7/perf-node-gather-daemonset-8f4qn" Apr 16 18:56:20.362112 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:20.362040 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/09d317fb-f4de-4bc7-a591-4309d0fc9205-podres\") pod \"perf-node-gather-daemonset-8f4qn\" (UID: \"09d317fb-f4de-4bc7-a591-4309d0fc9205\") " pod="openshift-must-gather-h2sf7/perf-node-gather-daemonset-8f4qn" Apr 16 18:56:20.362112 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:20.362083 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09d317fb-f4de-4bc7-a591-4309d0fc9205-lib-modules\") pod \"perf-node-gather-daemonset-8f4qn\" (UID: \"09d317fb-f4de-4bc7-a591-4309d0fc9205\") " pod="openshift-must-gather-h2sf7/perf-node-gather-daemonset-8f4qn" Apr 16 18:56:20.362267 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:20.362133 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/09d317fb-f4de-4bc7-a591-4309d0fc9205-sys\") pod \"perf-node-gather-daemonset-8f4qn\" (UID: \"09d317fb-f4de-4bc7-a591-4309d0fc9205\") " pod="openshift-must-gather-h2sf7/perf-node-gather-daemonset-8f4qn" Apr 16 18:56:20.462432 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:20.462398 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lxk5s\" (UniqueName: \"kubernetes.io/projected/09d317fb-f4de-4bc7-a591-4309d0fc9205-kube-api-access-lxk5s\") pod \"perf-node-gather-daemonset-8f4qn\" (UID: \"09d317fb-f4de-4bc7-a591-4309d0fc9205\") " pod="openshift-must-gather-h2sf7/perf-node-gather-daemonset-8f4qn" Apr 16 
18:56:20.462612 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:20.462444 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/09d317fb-f4de-4bc7-a591-4309d0fc9205-proc\") pod \"perf-node-gather-daemonset-8f4qn\" (UID: \"09d317fb-f4de-4bc7-a591-4309d0fc9205\") " pod="openshift-must-gather-h2sf7/perf-node-gather-daemonset-8f4qn" Apr 16 18:56:20.462612 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:20.462480 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/09d317fb-f4de-4bc7-a591-4309d0fc9205-podres\") pod \"perf-node-gather-daemonset-8f4qn\" (UID: \"09d317fb-f4de-4bc7-a591-4309d0fc9205\") " pod="openshift-must-gather-h2sf7/perf-node-gather-daemonset-8f4qn" Apr 16 18:56:20.462612 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:20.462521 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09d317fb-f4de-4bc7-a591-4309d0fc9205-lib-modules\") pod \"perf-node-gather-daemonset-8f4qn\" (UID: \"09d317fb-f4de-4bc7-a591-4309d0fc9205\") " pod="openshift-must-gather-h2sf7/perf-node-gather-daemonset-8f4qn" Apr 16 18:56:20.462612 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:20.462572 2569 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/09d317fb-f4de-4bc7-a591-4309d0fc9205-sys\") pod \"perf-node-gather-daemonset-8f4qn\" (UID: \"09d317fb-f4de-4bc7-a591-4309d0fc9205\") " pod="openshift-must-gather-h2sf7/perf-node-gather-daemonset-8f4qn" Apr 16 18:56:20.462612 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:20.462594 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/09d317fb-f4de-4bc7-a591-4309d0fc9205-proc\") pod \"perf-node-gather-daemonset-8f4qn\" (UID: 
\"09d317fb-f4de-4bc7-a591-4309d0fc9205\") " pod="openshift-must-gather-h2sf7/perf-node-gather-daemonset-8f4qn" Apr 16 18:56:20.462815 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:20.462642 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/09d317fb-f4de-4bc7-a591-4309d0fc9205-sys\") pod \"perf-node-gather-daemonset-8f4qn\" (UID: \"09d317fb-f4de-4bc7-a591-4309d0fc9205\") " pod="openshift-must-gather-h2sf7/perf-node-gather-daemonset-8f4qn" Apr 16 18:56:20.462815 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:20.462644 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/09d317fb-f4de-4bc7-a591-4309d0fc9205-podres\") pod \"perf-node-gather-daemonset-8f4qn\" (UID: \"09d317fb-f4de-4bc7-a591-4309d0fc9205\") " pod="openshift-must-gather-h2sf7/perf-node-gather-daemonset-8f4qn" Apr 16 18:56:20.462815 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:20.462712 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09d317fb-f4de-4bc7-a591-4309d0fc9205-lib-modules\") pod \"perf-node-gather-daemonset-8f4qn\" (UID: \"09d317fb-f4de-4bc7-a591-4309d0fc9205\") " pod="openshift-must-gather-h2sf7/perf-node-gather-daemonset-8f4qn" Apr 16 18:56:20.469951 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:20.469933 2569 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxk5s\" (UniqueName: \"kubernetes.io/projected/09d317fb-f4de-4bc7-a591-4309d0fc9205-kube-api-access-lxk5s\") pod \"perf-node-gather-daemonset-8f4qn\" (UID: \"09d317fb-f4de-4bc7-a591-4309d0fc9205\") " pod="openshift-must-gather-h2sf7/perf-node-gather-daemonset-8f4qn" Apr 16 18:56:20.655077 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:20.654121 2569 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-h2sf7/perf-node-gather-daemonset-8f4qn" Apr 16 18:56:20.802570 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:20.802540 2569 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-h2sf7/perf-node-gather-daemonset-8f4qn"] Apr 16 18:56:20.805849 ip-10-0-142-225 kubenswrapper[2569]: W0416 18:56:20.805818 2569 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod09d317fb_f4de_4bc7_a591_4309d0fc9205.slice/crio-34c8a474d6491953e642d079a220bdbd92e90f05dcbaf10975f71effba49f7c7 WatchSource:0}: Error finding container 34c8a474d6491953e642d079a220bdbd92e90f05dcbaf10975f71effba49f7c7: Status 404 returned error can't find the container with id 34c8a474d6491953e642d079a220bdbd92e90f05dcbaf10975f71effba49f7c7 Apr 16 18:56:20.983457 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:20.983431 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-bhlgn_3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6/dns/0.log" Apr 16 18:56:21.002595 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:21.002540 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-bhlgn_3e9ada21-637f-4e0a-bef4-dfb5ed41f9b6/kube-rbac-proxy/0.log" Apr 16 18:56:21.151664 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:21.151639 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-z27vt_6b34a2b3-f5f1-4606-970d-5865032489f3/dns-node-resolver/0.log" Apr 16 18:56:21.593503 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:21.593472 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-r2w68_e5f5f6fc-73bc-4bc1-a607-dda6c5bbb1a0/node-ca/0.log" Apr 16 18:56:21.687998 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:21.687965 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-h2sf7/perf-node-gather-daemonset-8f4qn" 
event={"ID":"09d317fb-f4de-4bc7-a591-4309d0fc9205","Type":"ContainerStarted","Data":"cc7081a9dcfafb39b766950f0dcd7db29431b6a70b8ae57a1f1a9a5b2d06de37"} Apr 16 18:56:21.688142 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:21.688003 2569 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-h2sf7/perf-node-gather-daemonset-8f4qn" event={"ID":"09d317fb-f4de-4bc7-a591-4309d0fc9205","Type":"ContainerStarted","Data":"34c8a474d6491953e642d079a220bdbd92e90f05dcbaf10975f71effba49f7c7"} Apr 16 18:56:21.688142 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:21.688035 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-must-gather-h2sf7/perf-node-gather-daemonset-8f4qn" Apr 16 18:56:21.703056 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:21.703014 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-h2sf7/perf-node-gather-daemonset-8f4qn" podStartSLOduration=1.7030018139999998 podStartE2EDuration="1.703001814s" podCreationTimestamp="2026-04-16 18:56:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 18:56:21.702780461 +0000 UTC m=+1584.595097830" watchObservedRunningTime="2026-04-16 18:56:21.703001814 +0000 UTC m=+1584.595319179" Apr 16 18:56:22.571859 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:22.571825 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-hv956_077542aa-4ef3-4409-a13f-1196cff06904/serve-healthcheck-canary/0.log" Apr 16 18:56:22.978622 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:22.978596 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-8bw6b_64539562-3f22-45bd-b402-713b3d503522/kube-rbac-proxy/0.log" Apr 16 18:56:22.997019 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:22.996993 2569 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-insights_insights-runtime-extractor-8bw6b_64539562-3f22-45bd-b402-713b3d503522/exporter/0.log" Apr 16 18:56:23.017331 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:23.017306 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-runtime-extractor-8bw6b_64539562-3f22-45bd-b402-713b3d503522/extractor/0.log" Apr 16 18:56:25.021703 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:25.021675 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/kserve_kserve-controller-manager-7c68cb4fc8-l2xrn_18f30541-9041-4ce9-a86d-852543730d0c/manager/0.log" Apr 16 18:56:27.700497 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:27.700473 2569 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-h2sf7/perf-node-gather-daemonset-8f4qn" Apr 16 18:56:30.649857 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:30.649830 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-kmhkm_9494cb1a-9b64-48a6-ae24-14717ce0b8f0/kube-multus-additional-cni-plugins/0.log" Apr 16 18:56:30.669783 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:30.669761 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-kmhkm_9494cb1a-9b64-48a6-ae24-14717ce0b8f0/egress-router-binary-copy/0.log" Apr 16 18:56:30.692269 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:30.692245 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-kmhkm_9494cb1a-9b64-48a6-ae24-14717ce0b8f0/cni-plugins/0.log" Apr 16 18:56:30.712274 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:30.712256 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-kmhkm_9494cb1a-9b64-48a6-ae24-14717ce0b8f0/bond-cni-plugin/0.log" Apr 16 18:56:30.734656 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:30.734632 2569 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-kmhkm_9494cb1a-9b64-48a6-ae24-14717ce0b8f0/routeoverride-cni/0.log" Apr 16 18:56:30.754854 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:30.754827 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-kmhkm_9494cb1a-9b64-48a6-ae24-14717ce0b8f0/whereabouts-cni-bincopy/0.log" Apr 16 18:56:30.776806 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:30.776777 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-kmhkm_9494cb1a-9b64-48a6-ae24-14717ce0b8f0/whereabouts-cni/0.log" Apr 16 18:56:30.809480 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:30.809450 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q9jcb_e4899c5b-5582-41f6-8785-b1420a447044/kube-multus/0.log" Apr 16 18:56:30.911773 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:30.911693 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-p2hph_dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30/network-metrics-daemon/0.log" Apr 16 18:56:30.935471 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:30.935448 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-p2hph_dcd33bb7-69d8-4b44-a7ab-2b812d5bfc30/kube-rbac-proxy/0.log" Apr 16 18:56:32.527875 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:32.527840 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5gbp_8a6cc626-96f9-4f30-a283-abdb6733cdac/ovn-controller/0.log" Apr 16 18:56:32.562279 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:32.562251 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5gbp_8a6cc626-96f9-4f30-a283-abdb6733cdac/ovn-acl-logging/0.log" Apr 16 18:56:32.577704 ip-10-0-142-225 
kubenswrapper[2569]: I0416 18:56:32.577676 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5gbp_8a6cc626-96f9-4f30-a283-abdb6733cdac/ovn-acl-logging/1.log" Apr 16 18:56:32.614252 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:32.614232 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5gbp_8a6cc626-96f9-4f30-a283-abdb6733cdac/kube-rbac-proxy-node/0.log" Apr 16 18:56:32.657549 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:32.657525 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5gbp_8a6cc626-96f9-4f30-a283-abdb6733cdac/kube-rbac-proxy-ovn-metrics/0.log" Apr 16 18:56:32.686461 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:32.686440 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5gbp_8a6cc626-96f9-4f30-a283-abdb6733cdac/northd/0.log" Apr 16 18:56:32.734272 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:32.734244 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5gbp_8a6cc626-96f9-4f30-a283-abdb6733cdac/nbdb/0.log" Apr 16 18:56:32.785816 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:32.785752 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5gbp_8a6cc626-96f9-4f30-a283-abdb6733cdac/sbdb/0.log" Apr 16 18:56:32.957959 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:32.957929 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5gbp_8a6cc626-96f9-4f30-a283-abdb6733cdac/ovnkube-controller/0.log" Apr 16 18:56:33.977566 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:33.977535 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-target-8mjdj_c1b6e68a-5279-411c-ba1c-fd6c274af91f/network-check-target-container/0.log" Apr 16 
18:56:34.860887 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:34.860855 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_iptables-alerter-fqdpz_c997d6be-0aad-4171-850c-d7aedaf7032f/iptables-alerter/0.log" Apr 16 18:56:35.504863 ip-10-0-142-225 kubenswrapper[2569]: I0416 18:56:35.504839 2569 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_tuned-299vl_127cd67a-6124-4bcc-baa8-d0ae87cd028f/tuned/0.log"