Current koji client doesn't work with the rpmfusion koji instance.

Reproducible: Always

Steps to Reproduce:
1. koji -p koji-rpmfusion cancel $TASKID
or manually:
koji -s https://koji.rpmfusion.org/kojihub --cert=~/.rpmfusion.cert --authtype=ssl cancel $TASKID

Actual Results:
2023-07-25 19:21:27,435 [ERROR] koji: GenericError: Invalid method: getKojiVersion

Expected Results:
No error, task canceled.

It works OK with: koji-1.21.0-2.fc30.noarch
Reproduced with: koji-1.33.0-1.fc37.noarch
I'm not sure what version rpmfusion is running (it's old enough that it doesn't report it on the API page). I don't think koji claims interoperability between clients and hubs that are very far apart in version. Adding kwizart here for comment on the rpmfusion instance; perhaps it could be upgraded? If not, we should probably move this upstream and see if there's anything to be done to make the client more compatible with older hub versions.
Right, I think a recent change made any recent client start to fail against older EL7-based koji hubs. We could either backport the getKojiVersion method to the el7 branch, or fix the client to assume an older hub when getKojiVersion is not available. For now the workaround is to use the EL7 koji client to cancel tasks.
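The client-side fallback suggested above could look roughly like the following. This is a minimal sketch, not the actual koji patch; the `GenericError` stub and the `get_hub_version` helper and hub classes are hypothetical stand-ins used here only to illustrate the "treat a missing getKojiVersion method as an old hub" idea:

```python
class GenericError(Exception):
    """Hypothetical stand-in for koji.GenericError, used so this
    sketch is self-contained."""


def get_hub_version(session):
    """Return the hub's reported Koji version, or None when the hub
    is old enough that it predates the getKojiVersion API call.

    Older hubs answer an unknown method with
    'GenericError: Invalid method: getKojiVersion', so the client
    can catch that and fall back to old-hub behavior instead of
    aborting the whole command.
    """
    try:
        return session.getKojiVersion()
    except GenericError:
        # Hub predates getKojiVersion; version unknown, assume old.
        return None


# Illustrative fake sessions standing in for real hub connections:
class OldHub:
    def getKojiVersion(self):
        raise GenericError("Invalid method: getKojiVersion")


class NewHub:
    def getKojiVersion(self):
        return "1.33.0"
```

With this shape, `get_hub_version(OldHub())` returns None rather than propagating the error, so a command like `cancel` can proceed against an old hub.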
mike, would it be possible to cut a new upstream release based on the 1.21 branch, with getKojiVersion and other relevant fixes added? That would let us work with newer koji clients while still being EL7-based. Thanks in advance.
Fixed on the (newer) koji client side of things with https://pagure.io/koji/c/93a5ca5abe0e4d36b1e975f04978e9dfe430e37e?branch=master
Let's close this issue; it will be fixed in koji-1.34.0, hopefully not too far off.
We don't maintain older versions of Koji upstream, though we have in the past backported important CVE fixes. When this has come up before, we drew the line at about 2 years (so at this point 1.21 would be past even that). We strongly recommend that all Koji instances stay up to date with upstream releases. We do aim for the Koji CLI client to be compatible with older hubs within reason, and we are open to addressing such issues. Please file upstream bugs for such cases.