Bug 1727727 - Build+Packaging Automation
Summary: Build+Packaging Automation
Keywords:
Status: CLOSED UPSTREAM
Alias: None
Product: GlusterFS
Classification: Community
Component: project-infrastructure
Version: mainline
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: Sheetal Pamecha
QA Contact:
URL:
Whiteboard:
Duplicates: 1727722 1727723 1727724
Depends On:
Blocks:
 
Reported: 2019-07-08 04:40 UTC by Sheetal Pamecha
Modified: 2020-03-17 03:20 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-03-17 03:20:55 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Sheetal Pamecha 2019-07-08 04:40:25 UTC
Description of problem:

Currently, for every release, packages for Debian (9 & 10) and Ubuntu (bionic, cosmic, eoan, disco, xenial) have to be built manually. This bug tracks automating that process in the same way the Fedora and CentOS builds are triggered.

Comment 1 Sheetal Pamecha 2019-07-10 06:18:28 UTC
*** Bug 1727722 has been marked as a duplicate of this bug. ***

Comment 2 Sheetal Pamecha 2019-07-10 06:18:51 UTC
*** Bug 1727723 has been marked as a duplicate of this bug. ***

Comment 3 Sheetal Pamecha 2019-07-10 06:19:14 UTC
*** Bug 1727724 has been marked as a duplicate of this bug. ***

Comment 4 hari gowtham 2019-07-25 11:41:34 UTC
To automate the packaging for Debian and Ubuntu, we have written two scripts.
packaging.sh is the one the Jenkins slave runs when one of us triggers the corresponding job.
It sshes to the respective lab machine and runs generic_script.sh there to build the packages.
(We will take care of making generic_script.sh available on the lab machine.)

The requirements from the infra side are:
1) check that the lab machines are reachable from the Jenkins slave
2) map the credentials that package.sh needs to ssh to the lab machines, so we can use a key instead of passing passwords
3) trigger permissions should be given only to a specific set of people:
Amar, Shyam, Kaleb, Sunny, Rinku, Shwetha, Sheetal, and Hari.

The scripts are available at:
https://github.com/Sheetalpamecha/packaging-scripts/

Will send the package.sh script as a patch to build-jobs once the login keys are available, and will work on the job simultaneously as well.
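(For illustration, a minimal sketch of the two-level flow described above, as run from the Jenkins slave; the hostname, ssh key path, and user are hypothetical placeholders, not the actual lab values:)

    #!/bin/bash
    # packaging.sh (sketch): runs on the Jenkins slave when the job is triggered.
    set -euo pipefail
    DISTRO="$1"      # e.g. stretch, buster, bionic
    VERSION="$2"     # release tag to build, e.g. 7.0
    BUILDER="builder-${DISTRO}.example.com"   # placeholder lab machine

    # ssh with a dedicated key instead of a password (requirement 2 above),
    # then run the generic build script on the lab machine itself.
    ssh -i ~/.ssh/gluster_packaging_key "glusterpackager@${BUILDER}" \
        "bash ./generic_script.sh ${DISTRO} ${VERSION}"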

Comment 5 Kaleb KEITHLEY 2019-07-25 12:42:08 UTC
(In reply to hari gowtham from comment #4)
> The requirements from the infra side are:
> 1) check if the lab machines are reachable from the jenkins' slave

They are not reachable.

You probably need four new build machines in Jenkins: Debian stretch/9, buster/10, and bullseye/11; and Ubuntu bionic/18.04. The Debian boxes need lots of disk space; the Ubuntu box needs slightly less, as packages aren't actually built on it: it sends them to Launchpad to build.

The machines _must_ be secure because, by necessity, they will have the gpg private keys used to sign the packages installed on them.

The machines should be apt update + apt upgraded periodically. There's a small amount of pbuilder setup (see the ~glusterpackager/HOWTO file on the current builders in the lab) that should be updated periodically as well.
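(A hedged sketch of that periodic maintenance, assuming the default pbuilder base tarball location:)

    # keep the builder itself current
    sudo apt update && sudo apt -y upgrade
    # refresh the pbuilder chroot tarball so builds start from an up-to-date base
    sudo pbuilder update --basetgz /var/cache/pbuilder/base.tgz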

Comment 6 M. Scherer 2019-07-25 13:32:25 UTC
We have Debian builders (well, we have one, but I can create more quite easily). How much is "lots of disk space"?

Also, I would prefer a separate step to sign, so we can run that on a separate server from the one where stuff is being built.

And yes, we are not going to build stuff in the lab; we can't ssh there.

Comment 7 hari gowtham 2019-07-29 06:57:50 UTC
@Misc, I see the current Debian machines have 20GB disks each and the Ubuntu machine has a 30GB disk. And we did recently come across the Debian machine not having enough space to build.
So a bit more for the new Jenkins machines would be better.

About signing, is it fine if we get the password to the machines as a parameter from the person who triggers the Jenkins job and proceed with it to sign?
This way we can make use of the same machine for building and signing the packages. But I'm not sure whether job parameters are logged somewhere, which would let others know the password.
If that's the case, do we have a workaround for it? If manual intervention is necessary for signing, it sort of defeats a portion of this effort.

@Kaleb, I wanted to ask about the steps for the initial environment setup. Thanks for the pointer; I will look into it once the machines are available.
I want to know whether the packages already built on the lab machines have to be moved to the new Jenkins machines as part of keeping track of packages.
And will there be any other work to be done as part of this move?

Thanks,
Hari.

Comment 8 M. Scherer 2019-07-29 07:54:17 UTC
We can increase the disk size if needed when we hit problems; that shouldn't be a worry.

About the password, we can also just store the key on an emulated smartcard on the builder side, and use a password stored on the builder. That way, no one has to type it or share it, and it can't be copied. And that permits automated setup.
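(A hedged sketch of the smartcard idea with stock gpg, assuming the key already exists in the builder's keyring; the key id is a placeholder:)

    gpg --edit-key <signing-key-id>
    gpg> keytocard      # move the (sub)key onto the (emulated) OpenPGP card
    gpg> save
    gpg --card-status   # confirm the key material now lives on the card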

Comment 9 hari gowtham 2019-08-06 10:38:09 UTC
Hi Misc,

Can you please create the machines as mentioned above, so we can set them up?

Comment 10 M. Scherer 2019-08-20 12:27:26 UTC
I am not sure I understand what you mean by "set them up".

I do expect the setup to be done with ansible, using our playbooks, and not to give direct access to people (because experience has shown that when people have a way to bypass automation, they do bypass it sooner or later, causing us trouble down the line).

So far, the only patch I found is https://review.gluster.org/#/c/build-jobs/+/23172/ which is not exactly something that should be merged, since that's a job that replicates the work of Jenkins. I kind of expect a job that just runs generic-package.sh on the builder, and that's it.

Comment 11 hari gowtham 2019-08-22 09:24:54 UTC
By setup I meant getting the following prerequisites in place.

These two items are the ones necessary as of now:
- `deb.packages.dot-gnupg.tgz`: has the ~/.gnupg dir with the keyring needed to build & sign packages
- packages required: build-essential pbuilder devscripts reprepro debhelper dpkg-sig
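
(A hedged sketch of putting those two prerequisites in place; the tarball source path is a placeholder, and the tarball is assumed to contain the .gnupg directory at its top level:)

    # install the build and signing tooling
    sudo apt-get install -y build-essential pbuilder devscripts reprepro debhelper dpkg-sig
    # unpack the keyring into the build user's home directory
    tar -xzf /path/to/deb.packages.dot-gnupg.tgz -C ~glusterpackager/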

And for the first time we need to do this:

# First time, create /var/cache/pbuilder/base.tgz
# on debian:
sudo pbuilder create --distribution wheezy \
    --mirror ftp://ftp.us.debian.org/debian/ \
    --debootstrapopts "--keyring=/usr/share/keyrings/debian-archive-keyring.gpg"
# on raspbian:
sudo pbuilder create --distribution wheezy \
    --mirror http://archive.raspbian.org/raspbian/ \
    --debootstrapopts "--keyring=/usr/share/keyrings/raspbian-archive-keyring.gpg"

NOTE:
If any change is made in future at
https://github.com/semiosis/glusterfs-debian/tree/wheezy-glusterfs-3.5/debian
then we might have to adjust this accordingly.

The reason for going with the above two-level implementation was that I wasn't aware of how to make the job run on a particular machine based on the arguments it gets.

For example, stretch has to be built on rhs-vm-16.storage-dev.lab.eng.bos.redhat.com (which will be one of the Jenkins Debian slaves), and we have to run the script on multiple machines, one per distribution we want to build.

Comment 12 M. Scherer 2019-08-22 10:17:30 UTC
Ok, so I will install the packages on the builder we have, and then have it added to Jenkins.
(And while at it, also set up a 2nd one, just in case.)

As for running different jobs on specific machines, that's indeed pretty annoying on Jenkins. I do not have enough experience with jjb, but JobTemplate is likely something that would help with that:
https://docs.openstack.org/infra/jenkins-job-builder/definition.html#id2

But afaik, gluster is not dependent on the kernel, so building it with pbuilder in a chroot should be sufficient no matter which Debian, as long as it is an up-to-date one, no?

Comment 13 hari gowtham 2019-08-22 10:30:41 UTC
(In reply to M. Scherer from comment #12)
> Ok, so I will install the packages on the builder we have, and then have it
> added to jenkins. 
> (and while on it, also have 2nd one, just in case)

Forgot to mention that this script file is also necessary:
https://github.com/Sheetalpamecha/packaging-scripts/blob/master/generic_package.sh
Will send a patch to have it in the repo.

> 
> As for running different job running on specific machine, that's indeed
> pretty annoying on jenkins. I do not have enough experience with jjb, but
> JobTemplate is likely something that would help for that:
> https://docs.openstack.org/infra/jenkins-job-builder/definition.html#id2 

Will look into it. I'm new to writing jobs for jenkins. 

> 
> But afaik, gluster is not dependent on the kernel, so building that with
> pbuilder in a chroot should be sufficient no matter what Debian, as long as
> it is a up to date one, no ?

Yes, gluster does not depend on the kernel, but I'm not familiar with using chroots for different Debian versions.
Kaleb would be the better person to answer this.
@kaleb, can you please answer?

Comment 14 M. Scherer 2019-08-22 10:42:48 UTC
Pbuilder does set up chroots, afaik, so it's kind of like mock, if you are more familiar with the Fedora/CentOS tooling. Now, maybe there are limitations and they do not work exactly the same, but I would expect a clean chroot to be created each time to build the package. I haven't done Debian packaging in a long time.

Comment 15 M. Scherer 2019-08-22 13:03:06 UTC
I did push the installation, and I would like to defer the gnupg integration for now, as it likely requires a bit more discussion (like how we distribute the keys, whether we rotate them, etc.).

And for the pbuilder cache, I would need to know the exact matrix of distributions we want to build for, and how. That part doesn't seem too hard:
https://wiki.debian.org/PbuilderTricks#How_to_build_for_different_distributions

And if we aim to build on unstable, we may also need to do some work to keep the chroot updated (same for stable, in fact).

Comment 16 Kaleb KEITHLEY 2019-08-22 13:14:29 UTC
Yes, pbuilder is a chroot tool, similar to mock. Each time you build, you get a clean chroot.

We are currently building for stretch/9, buster/10, and bullseye/unstable/11.

AFAIK the buildroot should be updated periodically for all of them; bullseye/unstable should probably be updated more frequently than the others.

I don't know anything about pbuilder apart from what I mentioned above, and specifically I don't know anything about how to use pbuilder to build for different distributions on a single machine. I've been using separate stretch, buster, and bullseye installs on dedicated boxes to build the packages for that release of Debian.
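(For illustration, the approach the PbuilderTricks page linked above describes boils down to keeping one base tarball per target distribution on a single machine. A hedged sketch follows; mirror, tarball names, and the package glob are placeholders, and some suites may still need the --debootstrapopts keyring arguments shown in comment 11:)

    # one clean-room base tarball per distribution we target
    for dist in stretch buster bullseye; do
        sudo pbuilder create --distribution "$dist" \
            --basetgz "/var/cache/pbuilder/${dist}-base.tgz" \
            --mirror http://deb.debian.org/debian/
    done

    # build a source package against a specific distribution's chroot
    sudo pbuilder build --basetgz /var/cache/pbuilder/buster-base.tgz glusterfs_*.dsc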

Comment 17 Kaleb KEITHLEY 2019-08-22 13:25:49 UTC
(In reply to M. Scherer from comment #15)
> I did push the installation and I would like to defer the gnupg integration
> for now, as it likely requires a bit more discussion (like, how do we
> distribute the keys, etc, do we rotate it).
> 
> And for the pbuilder cache, I would need to know the exact matrix of
> distribution we want to build and how. That part seems not too hard:
> https://wiki.debian.org/PbuilderTricks#How_to_build_for_different_distributions
> 
> And if we aim to build on unstable, we also may need to do some work to keep
> the chroot updated (same for stable in fact).

The keys that we've been using were generated on an internal machine and distributed to the build machines, which are all internal as well. 

We were using a new, different key for every major version through 4.1, but some people complained about that, so for 5.x, 6.x, and now 7.x we have been using the same key. As 4.1 is about to reach EOL that essentially means we are only using a single key now for all the packages we build.

AFAIK people expect the packages to be signed. And best practice suggests to me that they _must_ be signed.

Given that 7.0rc0 is now out and packages will be signed with the current key, that suggests to me that we must keep using that key for the life of 7.x. We can certainly create a new key for 8.x, when that rolls around.

And yes, we need a secure way to get the private key onto the jenkins build machines somehow.
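(A hedged sketch of what the signing step could look like on a builder once the key is in place; the key file path and key id are placeholders, and dpkg-sig is the tool already listed in the prerequisites in comment 11:)

    # import the shared release signing key into the build user's keyring
    gpg --import /path/to/glusterfs-release-signing.key
    # sign the freshly built packages
    dpkg-sig -k <key-id> --sign builder *.deb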

Comment 18 hari gowtham 2019-08-22 13:57:49 UTC
(In reply to hari gowtham from comment #13)
> (In reply to M. Scherer from comment #12)
> > Ok, so I will install the packages on the builder we have, and then have it
> > added to jenkins. 
> > (and while on it, also have 2nd one, just in case)
> 
> Forgot to mention that this script file is also necessary:
> https://github.com/Sheetalpamecha/packaging-scripts/blob/master/generic_package.sh
> Will send a patch to have it in the repo.

The above mentioned file is sent as a patch at: https://review.gluster.org/#/c/build-jobs/+/23289

Comment 19 M. Scherer 2020-01-27 16:28:55 UTC
So, following the meeting in Brno, we agreed (provided I missed nothing) that:
- we need a builder with a Unix user able to push to download.gluster.org, to a specific directory, over ssh
- we need the same Unix user to be able to push to GitHub, so we will reuse the gluster-ant GitHub bot

That's for the infra team; we will add that at https://github.com/gluster/gluster.org_ansible_configuration/tree/master/roles/debian_package_builder/tasks
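(For illustration, a hedged sketch of what the push step from the builder could look like; the remote user, key, local directory, and target path are placeholders for whatever the infra team provisions:)

    # push the finished packages from the builder to download.gluster.org
    rsync -av -e "ssh -i ~/.ssh/download_push_key" \
        ./debs/ packager@download.gluster.org:/srv/packages/debian/buster/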

Comment 20 Sheetal Pamecha 2020-02-19 10:12:21 UTC
Upstream patches:
Add package job and script - https://review.gluster.org/#/c/build-jobs/+/23172/
Add generic script - https://review.gluster.org/#/c/build-jobs/+/23289/
PR for installing required packages in builder - https://github.com/gluster/gluster.org_ansible_configuration/pull/56

Comment 21 Worker Ant 2020-03-17 03:20:55 UTC
This bug has been moved to https://github.com/gluster/project-infrastructure/issues/46 and will be tracked there from now on. Visit the GitHub issue URL for further details.

