It would be quite useful, instead of parsing the HTML page generated by lighttpd. This is what we do now:

import bs4
import requests

# Parse results
rpms = []
for task in build.get_build_tasks():
    url_prefix = task.result_dir_url
    resp = requests.get(url_prefix)
    if resp.status_code != 200:
        raise Exception("Failed to fetch {!r}: {!s}".format(url_prefix, resp.text))
    soup = bs4.BeautifulSoup(resp.text, "lxml")
    for link in soup.find_all("a", href=True):
        href = link["href"]
        if href.endswith(".rpm") and not href.endswith(".src.rpm"):
            rpms.append("{}/{}".format(url_prefix, href))

I would like to avoid this as much as possible.
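For reference, the scraping above can be done with the standard library alone, dropping the bs4/lxml dependency. This is only a sketch of the same link-filtering logic; the class and function names (`RpmLinkParser`, `extract_rpm_links`) are illustrative, not part of any Copr API:

```python
from html.parser import HTMLParser


class RpmLinkParser(HTMLParser):
    """Collect hrefs ending in .rpm but not .src.rpm from <a> tags."""

    def __init__(self):
        super().__init__()
        self.rpms = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href", "")
        if href.endswith(".rpm") and not href.endswith(".src.rpm"):
            self.rpms.append(href)


def extract_rpm_links(html, url_prefix):
    # Parse a directory-listing page and return full URLs of binary RPMs.
    parser = RpmLinkParser()
    parser.feed(html)
    return ["{}/{}".format(url_prefix, h) for h in parser.rpms]
```

Either way this is still screen-scraping a directory listing, which is exactly what an API endpoint should make unnecessary.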
There is:

/api/coprs/build/<id>/ (CoprClient.get_build_details(...))

and also:

/api_2/builds/<id> (CoprClient.builds.get_one(...))

For direct retrieval of rpm filenames you can use `dnf repoquery` on the backend repository.
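The `dnf repoquery` suggestion could look roughly like the sketch below, shelling out with `--repofrompath` to point at the backend repository and `--location` to print download URLs. The helper name, repo id, and the idea of wrapping it in Python are all assumptions for illustration:

```python
import subprocess


def repoquery_urls(repo_url, run=subprocess.run):
    """Return RPM download URLs from an ad-hoc repo via `dnf repoquery` (sketch)."""
    cmd = [
        "dnf", "repoquery", "--quiet",
        # Define a throwaway repo without writing a .repo file:
        "--repofrompath", "copr,{}".format(repo_url),
        "--disablerepo=*", "--enablerepo=copr",
        # Print the download URL of each matching package:
        "--location",
    ]
    proc = run(cmd, capture_output=True, text=True, check=True)
    return proc.stdout.splitlines()
```

As the next comment points out, this queries the whole repository, not a single build, so it cannot answer "which RPMs did build N produce".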
(In reply to clime from comment #1)
> There is: /api/coprs/build/<id>/ (CoprClient.get_build_details(...))
>
> and also: /api_2/builds/<id> (CoprClient.builds.get_one(...))

Which returns no URLs to RPMs.

> For direct retrieval of rpm filenames you can use `dnf repoquery` on the
> backend repository.

1. repoquery is slow.
2. I want to get all built RPMs for an exact build number, and repoquery is not helpful there at all. It lists all RPMs (and actually without URLs), and it doesn't show duplicated versions, etc.
There's an upstream issue for this, so I'm closing this in favour of https://pagure.io/copr/copr/issue/1411