| Summary: | RFE: [API] Add method to get built RPMs from build # | | |
|---|---|---|---|
| Product: | [Community] Copr | Reporter: | Igor Gnatenko <ignatenko> |
| Component: | frontend | Assignee: | Copr Team <copr-team> |
| Status: | CLOSED UPSTREAM | QA Contact: | |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | unspecified | CC: | praiskup |
| Target Milestone: | --- | Keywords: | FutureFeature, Reopened |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Enhancement |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-12-14 09:54:52 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
There is: /api/coprs/build/<id>/ (CoprClient.get_build_details(...))

and also: /api_2/builds/<id> (CoprClient.builds.get_one(...))

For direct retrieval of RPM filenames you can use `dnf repoquery` on the backend repository.

(In reply to clime from comment #1)
> There is: /api/coprs/build/<id>/ (CoprClient.get_build_details(...))
>
> and also: /api_2/builds/<id> (CoprClient.builds.get_one(...))

Which returns no URLs to the RPMs.

> For direct retrieval of rpm filenames you can use `dnf repoquery` on the
> backend repository.

1. repoquery is slow.
2. I want to get all built RPMs for an exact build #, and repoquery is not helpful there at all: it lists all RPMs (and without URLs), and it does not show duplicated versions, etc.

There's an upstream issue for this, so I'm closing this in favour of https://pagure.io/copr/copr/issue/1411
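For context, this is roughly how the first endpoint above is queried with the legacy python-copr CoprClient; a minimal sketch, assuming API credentials in a standard `~/.config/copr` file, and, as noted in the reply, the returned build details do not include per-RPM URLs:

```python
# Minimal sketch using the legacy python-copr CoprClient (API v1).
# Assumes API credentials in ~/.config/copr; the build id is hypothetical.
from copr.client import CoprClient

client = CoprClient.create_from_file_config()

# /api/coprs/build/<id>/ -- build-level metadata (state, chroots, result
# directories), but no list of built RPM file names or URLs.
details = client.get_build_details(123456)
print(details)
```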
Having such a method would be quite useful, instead of parsing the HTML directory listing generated by lighttpd. This is what we do now:

```python
import bs4
import requests

# Parse results: scrape the per-chroot result directory listings and
# collect the URLs of all binary (non-source) RPMs.
rpms = []
for task in build.get_build_tasks():  # one task (result directory) per chroot
    url_prefix = task.result_dir_url
    resp = requests.get(url_prefix)
    if resp.status_code != 200:
        raise Exception("Failed to fetch {!r}: {!s}".format(url_prefix, resp.text))
    soup = bs4.BeautifulSoup(resp.text, "lxml")
    for link in soup.find_all("a", href=True):
        href = link["href"]
        if href.endswith(".rpm") and not href.endswith(".src.rpm"):
            rpms.append("{}/{}".format(url_prefix, href))
```

I would like to avoid this as much as possible.
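For what it's worth, newer python-copr releases ship an APIv3 client whose build proxy can list the packages produced by a given build, which covers much of this use case. The sketch below assumes that client and its get_built_packages() call; the exact shape of the returned data is an assumption rather than something confirmed in this report:

```python
# Sketch only: assumes a recent python-copr with the APIv3 Client and a
# build_proxy.get_built_packages() method; the chroot -> {"packages": [...]}
# result shape is an assumption. Note this lists built package NEVRAs per
# chroot, not download URLs.
from copr.v3 import Client

client = Client.create_from_config_file()  # reads ~/.config/copr

built = client.build_proxy.get_built_packages(123456)  # hypothetical build id
for chroot, info in built.items():
    for pkg in info["packages"]:
        print(chroot, pkg)
```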