Document URL: https://access.redhat.com/documentation/en/red-hat-satellite/6.2/single/hammer-cli-guide/

Section Number and Name: Example 2.6

Describe the issue: There is a bash script to run to sync repos. If it's done this way, it will sync the repos one by one:

ORG="ACME"
for i in $(hammer --csv repository list --organization $ORG | grep -vi '^ID' | awk -F, '{print $1}')
do
  hammer repository synchronize --id ${i} --organization $ORG --async
done

It would be better to do something like this, which creates the sync tasks in parallel:

sync=$(hammer --csv repository list --organization-id 1 | egrep -vi 'id' | awk -F, '{print $1}')
for i in $sync; do hammer repository synchronize --async --id $i --organization-id 1; done

I haven't tested it, but perhaps the --async option needs to come before --id.
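The ID-extraction pipeline used in both scripts can be exercised without a live Satellite by feeding it a hypothetical sample of `hammer --csv repository list` output. The column layout and repository names below are assumptions for illustration only; only the leading ID column matters to the pipeline:

```shell
# Hypothetical sample of `hammer --csv repository list` output
# (assumed columns; real output may carry more fields).
csv='ID,Name,Product,Content Type
1,rhel-7-server-rpms,RHEL,yum
2,rhel-7-extras-rpms,RHEL,yum'

# Same pipeline as in the report: drop the header row, keep the first field.
ids=$(printf '%s\n' "$csv" | grep -vi '^ID' | awk -F, '{print $1}')
echo "$ids"
```

Running this prints the two repository IDs, one per line, which is exactly what the for loop then iterates over.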
Hello Jon,

Why is running the tasks in parallel better? I am guessing the limiting factor is going to be the speed of the machine or the package download speed when you run the command. I wonder whether the total time to complete the task differs between sequential and parallel execution. Has that been tested? Thank you.
Thinking about this more, it has pros and cons. If I run an automated script that installs, then creates and publishes content views, running the jobs one by one is better, because the next task cannot continue until the first is finished. It's just going to take a long time; i.e., if one sync takes 30 minutes to finish, the next one will kick off 30 minutes later, and so on. But if you don't have automated scripts that install everything, you'd want all the sync jobs running in parallel. These jobs could just be updates to existing repositories.
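The timing trade-off described above can be seen in miniature with plain shell jobs, where sleep stands in for a long-running hammer sync call (this is only a sketch of the sequential-vs-background pattern, not a hammer invocation):

```shell
#!/bin/bash
# Sequential: each "sync" must finish before the next one starts.
start=$(date +%s)
for i in 1 2 3; do sleep 1; done
seq_elapsed=$(( $(date +%s) - start ))

# Parallel: background each "sync" (the effect of --async), then wait for all.
start=$(date +%s)
for i in 1 2 3; do sleep 1 & done
wait
par_elapsed=$(( $(date +%s) - start ))

echo "sequential: ${seq_elapsed}s, parallel: ${par_elapsed}s"
```

The sequential loop takes roughly the sum of the job durations, while the backgrounded loop takes roughly the longest single job, which is the whole argument for firing the syncs off in parallel when nothing downstream depends on them.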
hammer repository synchronize --async --id 3 --organization-id 1

I can confirm that moving the --async option before --id enables multiple jobs to run in parallel:

for i in $(hammer --csv repository list --organization-id 1 | grep -vi '^ID' | awk -F, '{print $1}')
do
  hammer repository synchronize --async --id ${i} --organization-id 1
done

If the customer has 30 repos in hammer repository list --organization-id 1, then Pulp should be able to queue these jobs up so that they are run when the workers are free. I think you could add a line to the docs stating that these jobs will run one at a time, as a single task, until all tasks are complete, and that if the customer wants them in parallel, they can move --async before --id. It seems that putting tasks into the background depends on the position of the option; that may be a bug in itself.
Assigning to Charles for review.
Hello,

[1] I think `--organization $ORG` and `--organization-id 1` are the same; I prefer to use the ID because, as a personal choice, I can more easily remember one number than a string of characters.

[2] Point taken.

[3] Honestly, I think it's a bug if the behavior changes with the order of the --async argument when the command is actually asynchronous.

### Script with Andrew's adjustments, --async before --id --> Result: syncs run in parallel

#!/bin/bash
set -x
sync=$(hammer --csv repository list --organization-id 1 | egrep -vi 'id' | awk -F, '{print $1}')
for i in $sync; do hammer repository synchronize --async --id $i; done

##### Results ########################
+ for i in '$sync'
+ hammer repository synchronize --async --id 2
Repository is being synchronized in task bcdbcc40-6d12-425d-8581-864026f4f39f
+ for i in '$sync'
+ hammer repository synchronize --async --id 3
Repository is being synchronized in task 6e5a92ea-5b99-46db-a1ce-737db1a1eaee
+ for i in '$sync'
+ hammer repository synchronize --async --id 1
Repository is being synchronized in task d9497e53-a327-4349-bbdb-e5bc6b495909
#######################################

### Script with Andrew's adjustments, --async after --id --> Result: syncs run one at a time

#!/bin/bash
set -x
sync=$(hammer --csv repository list --organization-id 1 | egrep -vi 'id' | awk -F, '{print $1}')
for i in $sync; do hammer repository synchronize --id $i --async; done

---> Will update shortly...
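One side note on the filter itself: `egrep -vi 'id'` drops every line containing the string "id" anywhere, not just the header row, so a repository whose name happens to contain "id" would be silently skipped. Skipping exactly one header line with tail avoids that. A small sketch with made-up repository names (the names are assumptions for illustration):

```shell
# Made-up repository list; row 2's name happens to contain "id".
list='Id,Name
1,rhel-7-server-rpms
2,docker-sandbox-id'

# egrep -vi 'id' removes row 2 as well, because its name matches:
egrep_ids=$(printf '%s\n' "$list" | egrep -vi 'id' | awk -F, '{print $1}')

# tail -n +2 skips only the header row, keeping every data row:
tail_ids=$(printf '%s\n' "$list" | tail -n +2 | awk -F, '{print $1}')

echo "egrep: $egrep_ids"
echo "tail:  $tail_ids"
```

With the egrep variant only ID 1 survives; with tail both IDs 1 and 2 do, so tail -n +2 (or the original anchored `grep -vi '^ID'`) is the safer header skip for the docs example.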
I'm not able to reproduce with:

#!/bin/bash
set -x
sync=$(hammer --csv repository list --organization-id 1 | egrep -vi 'id' | awk -F, '{print $1}')
for i in $sync; do hammer repository synchronize --id $i --async; done

I think it's working as expected. We can close this off.
The content is now live on the customer portal.