Red Hat Bugzilla – Bug 419241
yum baseurl doesn't understand port designation in downloads
Last modified: 2014-01-21 18:01:15 EST
Description of problem:
Using a baseurl for a yum repository that includes a port designation fails to
download files (surprisingly, all yum metadata is first downloaded from the
server on the alternate port).
Downloading each file in the accepted update results in the error:
[Errno 4] IOError: [Errno ftp error] Int or String expected
Trying other mirror.
When no mirrors are available, each file results in a second error.
When the ftp server's port is reconfigured to 21 (the standard ftp port),
downloads succeed. This reconfiguration is not acceptable long term; yum should
accept standard URLs in the download stage as well, not just in the metadata
retrieval stage.
Version-Release number of selected component (if applicable):
If you run:
urlgrabber ftp://someserver:256/path/to/repository/some/file
does it work?
(In reply to comment #1)
> If you run:
> urlgrabber ftp://someserver:256/path/to/repository/some/file
> does it work?
and you're sure the baseurl syntax is identical?
Because yum uses urlgrabber internally, something is odd if urlgrabber works
from the cli but not from within yum.
I am sure the syntax is identical. Other ftp clients (e.g. ncftp) worked just
fine at the time to transfer the offending files to and from the same machines.
These tests are why I so boldly proclaimed that yum didn't understand the port
designation.
Given that it took _three_ months to elicit any kind of a response maybe some
updates incurred a collateral fix: I obviously haven't attempted to use yum in
this way because I have no reason to trust it to work.
I'll try to find time in the near future to reassemble the full repository, and
arrange it so the client has something to update to test it again.
In case it still doesn't work: I was wondering whether something in yum splits
apart URLs and reassembles them [for just the downloads, not the metadata]
before passing them to the urlgrabber infrastructure (maybe an RE substitution
that forgot to take complete port numbers into account?). I could imagine an
error like the reported one if the URL (silently) submitted has a colon but
only a slash after it, because yum forgot to reinsert the number.
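For what it's worth, the failure mode guessed at here can be sketched like this
(purely hypothetical; the URL and the reassembly step are illustrative, not
yum's actual code):

```python
from urllib.parse import urlparse, urlunparse

url = "ftp://someserver:256/path/to/repository/some/file"
parts = urlparse(url)

# Hypothetical reassembly bug: rebuilding the netloc from the hostname alone
# keeps the colon but drops the port number.
bad = parts._replace(netloc=parts.hostname + ":")
print(urlunparse(bad))  # → ftp://someserver:/path/to/repository/some/file
```

A URL mangled this way has exactly the "colon but only a slash after it" shape
described above.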
I'm sorry about the delay on responding to this bug. Until yesterday, however, I
had never seen it. I either deleted the email accidentally with some other bug
reports or I never received it.
This is why we have bug days to clean up things like this.
The ftp server you're using, is it public? Can I test things against it?
So I've just tried (using HEAD):
Add repo file with baseurl=ftp://host:256/path/to/repo
...run "nc -l 256" on "host", run yum ... at which point it hangs until I get:
ftp://host:256/path/to/repo/repodata/repomd.xml: [Errno 4] IOError: [Errno ftp
error] timed out
...at which point nc goes away. I've also tried strace-ing it (just to be
sure), and it does have "sin_port=htons(256)" in the connect line.
So without actually running an ftp server on a weird port, this looks like it
works to me.
If you can get an strace of yum when it's failing, feel free to re-open.
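For reference, a minimal repo file for a test like the one above might look
like this (host and path are the placeholders from the comment, not a real
server):

```ini
[porttest]
name=port 256 test repo
baseurl=ftp://host:256/path/to/repo
enabled=1
gpgcheck=0
```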
Because you are apparently unable to set up a real repository on port 256,
please try:
This should work (port 21 _is_ the default ftp port), but against a
default-configured ftp server it results in the same errors as I reported. The
metadata loads from the ftp server, but it won't download actual packages.
We may move on to strace if you can't reproduce this. At first blush it looks
like yum isn't even _trying_ to download the file before giving up - after
looking for the package file it does a dns lookup and then immediately starts
writing an error message that it couldn't get the file and is trying other
mirrors (which there aren't any).
(In reply to comment #7)
> Because you are apparently unable to set up a real repository on port 256,
> please try:
> This should work (port 21 _is_ the default ftp port), but for against a default
> configure ftp server results in the same errors as I reported. The metadata
> loads from the ftp server, but it won't download actual packages.
> We may move on to strace if you can't reproduce this. At first blush it looks
> like yum isn't even _trying_ to download the file before giving up - after
> looking for the package file
in the local yum cache unsuccessfully (to clarify)
> it does a dns lookup and then immediately starts
> writing an error message that it couldn't get the file and is trying other
> mirrors (which there aren't any).
Please also understand I would have expected the experiment in comment #6 to get
past the repodata download if the server was populated with real data. The
problem occurs _after_ downloading metadata. The experiment has to be done with
a real ftp server (specified by specific port) serving a real repository under
conditions where yum actually needs to download packages to update.
Ok, I've fixed this upstream. You can apply this locally if you want (against
urlgrabber's byterange.py, revision 1.12):
diff -u -r1.12 byterange.py
--- urlgrabber/byterange.py 20 Jul 2006 20:15:58 -0000 1.12
+++ urlgrabber/byterange.py 13 Mar 2008 17:00:12 -0000
@@ -272,6 +272,8 @@
         host, port = splitport(host)
         if port is None:
             port = ftplib.FTP_PORT
+        port = int(port)
         # username/password handling
         user, host = splituser(host)
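A minimal sketch of what the one-line fix addresses (the splitport here is a
stand-in mimicking urllib's helper, not the real urlgrabber code):

```python
import ftplib

def splitport(host):
    # Stand-in for urllib's splitport: any port comes back as a *string*.
    if ":" in host:
        h, p = host.rsplit(":", 1)
        return h, (p or None)
    return host, None

host, port = splitport("someserver:256")
if port is None:
    port = ftplib.FTP_PORT  # already an int (21) in this branch
# Before the patch, port is still the string "256" at this point, and the
# ftp path downstream failed with the reported
# "[Errno ftp error] Int or String expected".
port = int(port)  # the one-line upstream fix
print(host, port)  # → someserver 256
```

So a URL without an explicit port took the int default and worked, while an
explicit port left a string behind, which matches the observed behavior.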
Excellent! It solves the bug! I am still wondering why the metadata loaded fine
though... must be a different code path.