Bug 1854644 - Failed to create ISO image with error: cannot get resource "jobs" in API group "batch" in the namespace "assisted-installer"
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: assisted-installer
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: Michael Filanov
QA Contact: Udi Kalifon
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-07-07 20:13 UTC by Yuri Obshansky
Modified: 2020-07-08 18:23 UTC
CC: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-07-08 18:23:03 UTC
Target Upstream Version:
Embargoed:


Attachments
playbook (15.88 KB, text/plain), 2020-07-07 20:13 UTC, Yuri Obshansky
Snapshot (96.21 KB, image/png), 2020-07-07 20:13 UTC, Yuri Obshansky
Successful snapshot (42.62 KB, image/png), 2020-07-08 13:28 UTC, Yuri Obshansky

Description Yuri Obshansky 2020-07-07 20:13:04 UTC
Created attachment 1700204 [details]
playbook

Description of problem:
Installed the Assisted Installer on minikube using a playbook
(see the attached file).
Failed to create the ISO image
(see the attached snapshot).
# oc logs bm-inventory-bf9fcd8b6-8dx9d -n assisted-installer
time="2020-07-07T20:01:39Z" level=info msg="Starting bm service" func=main.main file="/home/runner/work/bm-inventory/bm-inventory/cmd/main.go:74"
time="2020-07-07T20:01:40Z" level=info msg="Started Cluster State Monitor" func="github.com/filanov/bm-inventory/pkg/thread.(*Thread).Start" file="/home/runner/work/bm-inventory/bm-inventory/pkg/thread/thread.go:40" pkg=cluster-monitor
time="2020-07-07T20:01:40Z" level=info msg="Started Host State Monitor" func="github.com/filanov/bm-inventory/pkg/thread.(*Thread).Start" file="/home/runner/work/bm-inventory/bm-inventory/pkg/thread/thread.go:40" pkg=host-monitor
time="2020-07-07T20:01:40Z" level=error msg="failed to generate dummy ISO image" func=github.com/filanov/bm-inventory/internal/bminventory.generateDummyISOImage file="/home/runner/work/bm-inventory/bm-inventory/internal/bminventory/inventory.go:162" error="jobs.batch is forbidden: User \"system:serviceaccount:assisted-installer:default\" cannot create resource \"jobs\" in API group \"batch\" in the namespace \"assisted-installer\"" pkg=Inventory
time="2020-07-07T20:04:24Z" level=info msg="Register cluster: ocp-cluster-assisted with id a5217dcc-9f63-4bad-b273-12453e8ebcb3" func="github.com/filanov/bm-inventory/internal/bminventory.(*bareMetalInventory).RegisterCluster" file="/home/runner/work/bm-inventory/bm-inventory/internal/bminventory/inventory.go:278" go-id=346 pkg=Inventory request_id=4332a5fb-0031-4655-915c-55031c6bda38
time="2020-07-07T20:04:50Z" level=info msg="prepare image for cluster a5217dcc-9f63-4bad-b273-12453e8ebcb3" func="github.com/filanov/bm-inventory/internal/bminventory.(*bareMetalInventory).GenerateClusterISO" file="/home/runner/work/bm-inventory/bm-inventory/internal/bminventory/inventory.go:387" go-id=432 pkg=Inventory request_id=1fb69657-6a2a-4002-93b1-a661563671f5
time="2020-07-07T20:04:50Z" level=info msg="Attempting to delete job %screateimage-a5217dcc-9f63-4bad-b273-12453e8ebcb3-00010101000000" func="github.com/filanov/bm-inventory/internal/bminventory.(*bareMetalInventory).GenerateClusterISO" file="/home/runner/work/bm-inventory/bm-inventory/internal/bminventory/inventory.go:450" go-id=432 pkg=Inventory request_id=1fb69657-6a2a-4002-93b1-a661563671f5
time="2020-07-07T20:05:20Z" level=error msg="Failed to get job <createimage-a5217dcc-9f63-4bad-b273-12453e8ebcb3-00010101000000> for deletion" func="github.com/filanov/bm-inventory/pkg/job.(*kubeJob).Delete" file="/home/runner/work/bm-inventory/bm-inventory/pkg/job/job.go:118" error="jobs.batch \"createimage-a5217dcc-9f63-4bad-b273-12453e8ebcb3-00010101000000\" is forbidden: User \"system:serviceaccount:assisted-installer:default\" cannot get resource \"jobs\" in API group \"batch\" in the namespace \"assisted-installer\"" go-id=432 pkg=k8s-job-wrapper request_id=1fb69657-6a2a-4002-93b1-a661563671f5
time="2020-07-07T20:05:20Z" level=error msg="failed to kill previous job in cluster a5217dcc-9f63-4bad-b273-12453e8ebcb3" func="github.com/filanov/bm-inventory/internal/bminventory.(*bareMetalInventory).GenerateClusterISO" file="/home/runner/work/bm-inventory/bm-inventory/internal/bminventory/inventory.go:452" error="failed to get job <createimage-a5217dcc-9f63-4bad-b273-12453e8ebcb3-00010101000000>: jobs.batch \"createimage-a5217dcc-9f63-4bad-b273-12453e8ebcb3-00010101000000\" is forbidden: User \"system:serviceaccount:assisted-installer:default\" cannot get resource \"jobs\" in API group \"batch\" in the namespace \"assisted-installer\"" go-id=432 pkg=Inventory request_id=1fb69657-6a2a-4002-93b1-a661563671f5


Version-Release number of selected component (if applicable):
latest


Comment 1 Yuri Obshansky 2020-07-07 20:13:29 UTC
Created attachment 1700205 [details]
Snapshot

Comment 2 Fred Rolland 2020-07-08 05:50:08 UTC
Yuri hi,
Did you run on OpenShift or minikube?

Thanks

Comment 3 Fred Rolland 2020-07-08 06:00:38 UTC
Yuri, can you check that the Role, RoleBinding, and ServiceAccount (Role+RB+SA) are deployed?
https://github.com/filanov/bm-inventory/blob/master/deploy/roles/default_role.yaml
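For reference, the log errors point at exactly those RBAC objects: the default ServiceAccount in the assisted-installer namespace has no permissions on batch/v1 Jobs. A minimal sketch of the kind of Role and RoleBinding it needs is below; the names, verbs, and layout here are an illustration only, and the authoritative manifest is the default_role.yaml linked above.

```yaml
# Sketch: grant the "default" ServiceAccount in the assisted-installer
# namespace access to Jobs in the "batch" API group, matching the
# "cannot get/create resource jobs" errors in the bm-inventory log.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: default
  namespace: assisted-installer
rules:
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["get", "list", "watch", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: default
  namespace: assisted-installer
subjects:
  - kind: ServiceAccount
    name: default
    namespace: assisted-installer
roleRef:
  kind: Role
  name: default
  apiGroup: rbac.authorization.k8s.io
```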

Comment 4 Yuri Obshansky 2020-07-08 13:03:46 UTC
It is running on minikube
(see the attached playbook to reproduce).

[root@seal12 deploy]# oc get roles -A
NAMESPACE     NAME                                             AGE
kube-public   kubeadm:bootstrap-signer-clusterinfo             17h
kube-public   system:controller:bootstrap-signer               17h
kube-system   extension-apiserver-authentication-reader        17h
kube-system   kube-proxy                                       17h
kube-system   kubeadm:kubelet-config-1.17                      17h
kube-system   kubeadm:nodes-kubeadm-config                     17h
kube-system   system::leader-locking-kube-controller-manager   17h
kube-system   system::leader-locking-kube-scheduler            17h
kube-system   system:controller:bootstrap-signer               17h
kube-system   system:controller:cloud-provider                 17h
kube-system   system:controller:token-cleaner                  17h
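
The output above shows no role at all in the assisted-installer namespace. A quick way to confirm the missing permission directly is `oc auth can-i` with service-account impersonation (assuming an oc/kubectl version that supports it; the service account name is taken from the log above):

```shell
# Ask the API server whether the bm-inventory service account
# may create Jobs in the assisted-installer namespace.
# Answers "no" while the Role/RoleBinding are missing,
# "yes" once default_role.yaml is applied.
oc auth can-i create jobs.batch \
  --as=system:serviceaccount:assisted-installer:default \
  -n assisted-installer
```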

Comment 5 Yuri Obshansky 2020-07-08 13:28:18 UTC
After creating the default roles, it is working.
See the attached snapshot.

# oc create -f default_role.yaml 
role.rbac.authorization.k8s.io/default created
rolebinding.rbac.authorization.k8s.io/default created
Error from server (AlreadyExists): error when creating "default_role.yaml": serviceaccounts "default" already exists
[root@seal12 roles]# oc get roles -A
NAMESPACE            NAME                                             AGE
assisted-installer   default                                          16s
kube-public          kubeadm:bootstrap-signer-clusterinfo             17h
kube-public          system:controller:bootstrap-signer               17h
kube-system          extension-apiserver-authentication-reader        17h
kube-system          kube-proxy                                       17h
kube-system          kubeadm:kubelet-config-1.17                      17h
kube-system          kubeadm:nodes-kubeadm-config                     17h
kube-system          system::leader-locking-kube-controller-manager   17h
kube-system          system::leader-locking-kube-scheduler            17h
kube-system          system:controller:bootstrap-signer               17h
kube-system          system:controller:cloud-provider                 17h
kube-system          system:controller:token-cleaner                  17h

Comment 6 Yuri Obshansky 2020-07-08 13:28:48 UTC
Created attachment 1700304 [details]
Successful snapshot

Comment 7 Yuri Obshansky 2020-07-08 13:29:54 UTC
What about this error?
Error from server (AlreadyExists): error when creating "default_role.yaml": serviceaccounts "default" already exists

