Deepshikha has been doing excellent work and has demonstrated not just the ability to handle day-to-day tasks, but also to take on design and ownership of the CI module. This bug is being used to grant her maintainer access, and also covers the infra piece: granting her access to all the Jenkins infrastructure.
So, in practice, what access does that entail? (Because we only have "all access" and "no access" :/ )
REVIEW: https://review.gluster.org/19880 (maintainers: promote Deepshikha to maintainer) posted (#2) for review on master by Nigel Babu
Yeah, we have the option of either setting up two levels of access or giving all access and relying on trust. Right now, I'm looking for access to all Jenkins nodes, build.gluster.org, and the associated servers like ci-logs.gluster.org. I'm okay with all access if that's easier.
COMMIT: https://review.gluster.org/19880 committed in master by "Nigel Babu" <nigelb> with a commit message- maintainers: promote Deepshikha to maintainer Deepshikha has been doing excellent work across the CI system. She is now ready to co-maintain the Continuous Integration module and be responsible for the CI ecosystem in its entirety. Fixes: bz#1567880 Change-Id: If204301d26731f93b2dccfe8b6571ee748a47b26 Signed-off-by: Nigel Babu <nigelb>
I am also OK with all access, if someone explains the best practices regarding SSH security and such. (Ideally, a key on a smartcard/YubiKey, but at a minimum: a password-protected SSH key, an up-to-date system, nothing else running on it, etc.)
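As a minimal sketch of the password-protected-key suggestion above (the key path and comment string are illustrative, and the passphrase is passed via -N only so the example is non-interactive; normally you would omit -N and type it at the prompt):

```shell
# Generate a password-protected Ed25519 key. Omit -N in real use and
# enter the passphrase interactively so it never lands in shell history.
ssh-keygen -t ed25519 -N 'example passphrase' -f ~/.ssh/id_gluster_ci -C 'dkhandel' -q

# Show the key's fingerprint, e.g. to post alongside the public key on the bug.
ssh-keygen -l -f ~/.ssh/id_gluster_ci.pub
```

Only the public half (`~/.ssh/id_gluster_ci.pub`) needs to be shared; it is what gets appended to `authorized_keys` on the target hosts.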
Here's the public key: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCuCz0Uc6gRdnwt7M8c2RAzfq3dKOek1RuFmpb7srs0MlRZpxAUUBgLYUS3hvQUdUaa5PIK9mmfc2cAt9JbCru1YVmHuA+0Dii3G7Tb93cRUjS8Ca75dMcD4j/8MXEW7U/zikrpV7MDLYzfZ+SzCatrQRjtC8DWuIk39OhEf/knHG7iAtQ79vkDj67AGGNuB4S5a6mmCSvf+3O+4cROx9hIKAzw7YyWBN5G07JTMjNbottZErOtzBbBm5gpD05ARKy2S4eI8oAWlDEoxBywf9tAGJcbku7MvphBTiaikwMljDWiBdN7ww3A11SkivupBa/SVRwNSXpOBx0aNAiLKCk7c3w+ryFfb4+1w6FBBWsZdkSgssyOtC05B1J03X/L8S8KtxplTFtKIhGZFMUBlOZTZfPkgI6sJfsZtGiTHpFCplkVq8IxZFu4Z0VkAxIoQInfPIIMzVuUpWm1ONdMrJ9KSLzkTMmfqHXVkEu2QRmVsJTO8tlghIV4HPbCWiE4oFo7zcAyfpPu8od526706UL/TWSXFAul/uprYvrsLSz7tHb1dgJEzc+l8GhG1HUvgXjepoMGoMNFgubpLsff8RoeIdjwuqFAyvdowLHxRdPbZqY3QGDsESZOoBa0AvQlC/6dBb0C9zNj3A/1hbBMu9BZxxPqMkyDD7dOGrCTa7vnzQ== dkhandel
I added the access two days ago; it seems I forgot to announce it on this bug.
Can confirm this is now done for the most commonly needed hosts. Resolving the bug.
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-v4.1.0, please open a new bug report. glusterfs-v4.1.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://lists.gluster.org/pipermail/announce/2018-June/000102.html [2] https://www.gluster.org/pipermail/gluster-users/