We'd like rlRun to provide a way to easily tell apart the stdout and stderr of the command being run. It should also write the output to the file specified by $OUTPUTFILE. I'm not sure whether the best solution is to add this functionality to rlRun, or whether some new "brand" of rlRun should be created.
Created attachment 355804 [details] First possibility - modified rlRun

The first of two possible solutions: rlRun is modified to tag stdout and stderr. Also, if $OUTPUTFILE is specified, the tagged stdout and stderr are tee-d into it.
Created attachment 355806 [details] Second possibility - new function rlRunTag

The second possible solution to this issue: I added a new function rlRunTag which behaves almost exactly like the standard rlRun, except for the new 'tagging' functionality. I also separated out the code for converting the expected return codes, so it can be reused inside rlRunTag.
This is an interesting idea. I'd vote for adding an optional parameter to rlRun - let the user choose whether this additional functionality should be on or off:

rlRun command [status[,status...]] [comment] [tag]

What's the point of redirecting the output to $OUTPUTFILE? All the output is captured in TESTOUT.log anyway, ready for further inspection when something goes wrong.
I suggest altering the original rlRun. Tagging the output is certainly useful, and does not alter beakerlib's behavior at all - currently we don't care about the output itself. As for OUTPUTFILE - I'm okay with teeing whatever you want to OUTPUTFILE; beakerlib at the moment circumvents OUTPUTFILE entirely. But it's worth thinking about its real usefulness - it just duplicates data which is already in TESTOUT.log.
I don't like the idea of another parameter... I would like to see this as the default - it would make TESTOUT.log a bit more readable IMO.
A few questions/ideas :)

1) Why are we tee-ing stuff to OUTPUTFILE: it's because of the rhts web interface. You are of course right about TESTOUT.log - everything is already in there. But that fact can make it quite hard to examine, and we usually do not want to examine _all_ the information, only some important parts. These parts we store in OUTPUTFILE, which is shown directly on the "Test Log for Job XYZ" page (for example http://rhts.redhat.com/cgi-bin/rhts/test_log.cgi?id=9110168 ). You say that beakerlib completely omits OUTPUTFILE - is there any way to do what I just described? The Test Log: part of the "Test Log for Job XYZ" page is pretty useful (at least for us).

2) On the other hand, I agree that logging into OUTPUTFILE should not be the default behavior. Ideally, one should be able to control it (maybe even the tagging) with some parameter. I think the best way is a transition to getopt for parsing the parameters (at least for rlRun, since it's getting complicated). With getopt, there will be virtually no problem parsing these additional optional parameters. What do you think about it?

3) Maybe it would be best if we could talk about this stuff face to face - I'd be glad if you agreed to a 20-minute meeting or so.
(In reply to comment #6) > But this fact can make it quite hard to examine, and we do not usually want to > examine _all_ the information, but only some important parts. These parts, we > store inside the OUTPUTFILE, which is shown directly on the "Test Log for Job > XYZ" page. (for example > http://rhts.redhat.com/cgi-bin/rhts/test_log.cgi?id=9110168 ) > > You say, that you completely omit the OUTPUTFILE in beakerlib - is there any > way to do what I just described? Since the Test Log: part of the "Test Log for > Job XYZ" page is pretty usefull (at least for us). Can't those important parts be analysed and distilled into asserts and logs? Because those will ultimately appear in "Test log" section of RHTS webUI (every rlPhaseEnd and rlJournalEnd replaces it with a journal printout)
> Can't those important parts be analysed and distilled into asserts and logs?
> Because those will ultimately appear in "Test log" section of RHTS webUI (every
> rlPhaseEnd and rlJournalEnd replaces it with a journal printout)

Well, at the moment I don't know how to easily put (e.g.) the whole output of some command into those, but even if it's possible, I don't see the improvement over using OUTPUTFILE - it appears to me that OUTPUTFILE is/was meant exactly for such a thing as 'logging stuff you want to see directly on that webUI page'. Maybe I misunderstood the purpose/concept of OUTPUTFILE and the corresponding section of the webUI log page, but for now it seems reasonable to me.
I'll try to clarify how the logging works in rhts:

There's 'rhts-report-result', which internally does something like 'submit-result $NAME $RESULT $LOGFILE $SCORE', with these four variables as arguments. Whatever is in LOGFILE appears in the GUI. This function is considered internal to rhts and shouldn't be used in tests.

There's the 'report-result' function, which should be used in tests. It is basically a wrapper calling "rhts-submit-result $NAME $RESULT $OUTPUTFILE $SCORE", where NAME, RESULT, and SCORE are arguments, and $OUTPUTFILE is the default. Test authors are advised to log their stuff to OUTPUTFILE, where it gets picked up by this function.

In beakerlib, we call report-result at the end of each phase. We dump the current journal to a file, and we set a *local* variable OUTPUTFILE (so we won't destroy it, just shadow it) to this file, so it gets picked up by rhts-report-result. Whatever anything else writes to OUTPUTFILE, beakerlib does not care about.

I usually do not care about the output of something I rlRun - definitely not enough to want to see it in the direct log of the test. But that's me. When I do, I usually run 'rlRun "dostuff > file"' and process/report the file separately.

I kinda like the idea of getopting, and on second thought, the new options would be:
default - current behavior
one option - submitting the output separately via your rlSubmitLog, which we process in the other bug
another option - something like 'while read line; do rlLog "$line"; done < output'
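The wrapper relationship described above can be sketched roughly like this. This is a simplified illustration based only on this comment - the stub stands in for the real rhts submission tool, and the function names use underscores (hyphens aren't valid in shell function names), so they are not the actual rhts commands:

```shell
# Stub standing in for the internal rhts submission tool (illustrative only)
rhts_report_result() {
    echo "submit: name=$1 result=$2 log=$3 score=$4"
}

# Default log file that test authors are advised to write to
OUTPUTFILE=/tmp/output.log

# Public wrapper used in tests: it passes $OUTPUTFILE as the log by default
report_result() {
    rhts_report_result "$1" "$2" "$OUTPUTFILE" "$3"
}

report_result mytest PASS 0
# prints: submit: name=mytest result=PASS log=/tmp/output.log score=0
```

Beakerlib's trick, as described above, is to point a shadowed OUTPUTFILE at its own journal dump before calling the wrapper, so the journal is what ends up in the GUI.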
Created attachment 356034 [details] Logging using rlLog, new parameters to control tagging and logging

I'm still not sure whether tagging stderr/stdout by default is for the best - IDK if it would break older tests. Anyway, new parameters -t and -l:
* -t: if specified, stdout/stderr is tagged
* -l: if specified, the output of the run command is logged

Parameters are now parsed using getopt, so -t and -l have no impact on old tests or on the other parameters of rlRun. I also switched from logging directly into OUTPUTFILE to logging via rlLog; I think we can both agree that it's the preferred way to do it.
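For reference, option parsing along these lines can be sketched with the shell's built-in getopts. The variable names (TAG, LOGOUT, CMD) are illustrative, not necessarily what the patch uses:

```shell
# Sketch of -t/-l option parsing for an rlRun-like function using getopts.
# TAG/LOGOUT/CMD are illustrative names, not beakerlib's actual variables.
rlrun_sketch() {
    local OPTIND=1 opt
    TAG=0; LOGOUT=0
    while getopts "tl" opt; do
        case "$opt" in
            t) TAG=1 ;;     # -t: tag stdout/stderr lines
            l) LOGOUT=1 ;;  # -l: log the command output
        esac
    done
    shift $((OPTIND - 1))
    CMD="$1"   # remaining arguments keep the old rlRun meaning
}

rlrun_sketch -t -l "true"
```

Because getopts stops at the first non-option argument, old-style calls such as `rlrun_sketch "somecommand" 0 "comment"` pass through unchanged, which is what makes the new flags backward compatible.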
Seems good to me. Let's push it.
Thx, patch pushed.
(In reply to comment #10)
> I'm still not sure, if default tagging of stderr/stdout is for the best - IDK
> if it won't break older tests. Anyway:

Should be ok - the testsuite found no regression in the rlRun function. BTW, could you please update the unit tests to cover these new features? https://wiki.test.redhat.com/BaseOs/Projects/RhtsLibrary/CodingStyle#UnitTests
Created attachment 357788 [details] rlRun - tests for the new features

Ok, so here is the promised unit test for the new rlRun features.
I've tested the new rlRun and discovered it has one very unpleasant effect: the output of the command is not displayed continuously but flushed only at the end. Because of this, the progress of commands cannot be watched, which can sometimes be very undesirable and annoying. Tested with:

rlRun -l -t "perl -e 'for (0..4) { print \"\$_\n\"; sleep 1 }'"
rlRun -l -t "perl -e 'for (0..4) { print \"\$_\n\"; sleep 1 }' 1>&2"

This behaviour can be seen even when -l and -t are not set -> not backward compatible. Furthermore, the -l option does not log output lines separately, so the result of rlJournalPrintText is not nicely formatted and looks like this:

::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
:: [ LOG ] :: Test
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
:: [ LOG ] :: perl -e 'for (0..4) { print "$_\n"; sleep 1 }'\nSTDOUT: 0
STDOUT: 1
STDOUT: 2
STDOUT: 3
STDOUT: 4
:: [ PASS ] :: Running 'perl -e 'for (0..4) { print "$_\n"; sleep 1 }''
:: [ LOG ] :: perl -e 'for (0..4) { print "$_\n"; sleep 1 }' 1>&2\nSTDERR: 0
STDERR: 1
STDERR: 2
STDERR: 3
STDERR: 4
:: [ PASS ] :: Running 'perl -e 'for (0..4) { print "$_\n"; sleep 1 }' 1>&2'
:: [ LOG ] :: Duration: 11s
:: [ LOG ] :: Assertions: 2 good, 0 bad
:: [ PASS ] :: RESULT: Test

I would rather expect something like the following:

::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
:: [ LOG ] :: Test
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
:: [ LOG ] :: STDOUT: 0
:: [ LOG ] :: STDOUT: 1
:: [ LOG ] :: STDOUT: 2
:: [ LOG ] :: STDOUT: 3
:: [ LOG ] :: STDOUT: 4
:: [ PASS ] :: Running 'perl -e 'for (0..4) { print "$_\n"; sleep 1 }''
:: [ LOG ] :: STDERR: 0
:: [ LOG ] :: STDERR: 1
:: [ LOG ] :: STDERR: 2
:: [ LOG ] :: STDERR: 3
:: [ LOG ] :: STDERR: 4
:: [ PASS ] :: Running 'perl -e 'for (0..4) { print "$_\n"; sleep 1 }' 1>&2'
:: [ LOG ] :: Duration: 11s
:: [ LOG ] :: Assertions: 2 good, 0 bad
:: [ PASS ] :: RESULT: Test
ad logging separate lines: I believe that logging as it is now is better, because the output is inside one XML node (I believe it's called message) in the XML journal (the native structure for the journalling backend in beakerlib), which is IMHO semantically better than several one-liners, since the log should be 'homogeneous'. The :: [ LOG ] :: prefixes can IMHO easily be added in journalling.py, and that is where they should be added if you prefer the one-line-one-":: [ LOG ] ::"-mark policy.

ad non-continuous output: I did not know that it was necessary to see the output continuously, so it never crossed my mind to test it. Do you have any idea how to make it continuous, or rather what is causing the non-continuity? I don't have any clue, to be honest, since both sed and tee are 'filters' which produce one line of output for one line of input. I can of course make it backward compatible by altering the eval chain when neither the -l nor -t option is specified, but I'd rather see this working even with these parameters.
(In reply to comment #16)
> ad logging separate lines:
> I belive, that logging as it's now is better, because now it's inside one XML
> Node (i belive it's called message) in XML journal (native structure for the
> journalling backend in beakerlib), which is IMHO better sematically, than
> several 'one liners' since the log should be 'homogenous'.
>
> The :: [ LOG ] :: lines can be IMHO easily added in the journalling.py, and
> they should be added here, if you prefer the one-line-one-":: [ LOG ] ::"-mark
> policy.

Agreed - rlJournalPrintText is a formatting function anyway, so it can do a bit more formatting: add the "...[log]..." prefix or whitespace to multi-line messages. I've created a bug for this feature: https://bugzilla.redhat.com/show_bug.cgi?id=518988
Created attachment 359013 [details] workaround to make rlRun output continuous

ad non-continuous output: I did some research and the whole problem is most likely caused by the pipe buffer (on my machine it's 4kiB). I tried to set it to zero using ulimit, but it did not work:

# ulimit -p 0
bash: ulimit: pipe size: cannot modify limit: Invalid argument

So for now, I altered the eval chain back to the old 'eval $command' when neither the -l nor -t parameter is set. If any of you has an idea how to make the pipe buffer hold only one line instead of 8kiB, please let me know.
Created attachment 359242 [details] Using unbuffer (from expect) to make rlRun output continuous

Hi, I did some more research and discovered the quite interesting command unbuffer (from the expect package), which can disable the pipe buffering. Using it, I was able to make the output continuous even with the -l and/or -t parameters. The only drawback I see is the dependency on expect - is that a problem? Can the beakerlib rpms depend on expect? For the sake of usability, I added a simple detection of the unbuffer command, and it is used only if present.
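The detection described here can be as simple as checking for the command and prefixing it conditionally (a sketch only; the variable name UNBUFFER is illustrative and the actual patch may differ):

```shell
# Use unbuffer (from the expect package) only when it is installed;
# otherwise fall back to running the command directly, accepting
# buffered output of the sed/tee pipeline.
if command -v unbuffer >/dev/null 2>&1; then
    UNBUFFER="unbuffer"
else
    UNBUFFER=""
fi

# $UNBUFFER expands to nothing when expect is absent, so this always runs
$UNBUFFER echo "hello"
```

One caveat worth remembering (it comes up later in this thread): running a command under a pty-allocating wrapper can change observable behavior, such as the reported exit status, so the wrapped path needs its own tests.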
I think the unbuffered output is not important enough to bring in a new dependency on expect. If there is the unbuffered fallback of "simple eval" when no -p/-l option is present (and possibly a comment in the documentation mentioning to install expect if you need unbuffered output with the -l/-p options), then everything should be fine, from my POV.
Created attachment 359378 [details] Using unbuffer command (if present) to produce unbuffered output

Ok, so I added a note to the rlRun documentation about buffered output and about installing expect as the solution. Please review the patch and let me know if I can push it.
Created attachment 359639 [details] unbuffer patch without duplicating comment

What about not duplicating the comment and putting it just once at the end? And perhaps the unbuffer tool detection could be done only when necessary. Attaching a modified patch. What do you think?
(In reply to comment #22) Your patch looks better to me. thx.
Ok, pushed to git: http://git.fedorahosted.org/git/beaker.git?p=beaker.git;a=commitdiff;h=f6a3115366e443818357fa0574f2bd248d07bb3d
This approach almost caused havoc when we attempted to deploy the new beakerlib to RHTS :/ The problem is this line:

eval "$command" 2> >(sed -e "s/^/$TAG_ERR/g" | tee -a $LOG_FILE) 1> >(sed -e "s/^/$TAG_OUT/g" | tee -a $LOG_FILE)

which is a valid statement. But some tests will fail with a message like this:

/usr/lib/beakerlib//testing.sh: line 673: syntax error near unexpected token `>'

The actual problem is that the line contains a bashism, which is supposed to be OK - our /bin/sh is just a symlink to bash. But the reality seems to be different: running 'bash runtest.sh' will succeed, while 'sh runtest.sh' will fail with a message like the above. The real problem is that we probably have some tests with a '#!/bin/sh' shebang, or no shebang at all (I've found some rhts-test-wizard originated tests which didn't have shebangs either - so there's probably a good chance there are quite a few of them), which means they will be run with /bin/sh too. So pushing this update will probably break a zillion tests, because a zillion tests are wrong :/ Nasty. Any idea how to proceed? You can try this yourself:

sh -c '. /usr/lib/beakerlib/beakerlib.sh'
Created attachment 361215 [details] Quick'n'dirty solution for the /bin/sh problem

I'm looking into it. Maybe the quick'n'dirty solution could be to move the actual eval line into a separate script with a /bin/bash shebang. But to bring more bad news: while messing around with the code, I discovered that the unbuffered mode (using the unbuffer command) messes up the return value of the evaled command. Sadly I did not notice it earlier, because I only ran the testsuite, which does not cover this (my fault :-/).
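The "separate script" idea could look roughly like this (a sketch only - the file handling and naming are illustrative, and it does not address the unbuffer return-value problem mentioned above):

```shell
# Write the bash-only eval line into a helper script with an explicit
# bash shebang, so the process substitution is always parsed by bash,
# even when the calling test itself runs under /bin/sh.
helper=$(mktemp)
cat > "$helper" <<'EOF'
#!/bin/bash
# the bash-only process substitution lives here, behind a bash shebang
eval "$1" 2> >(sed -e "s/^/STDERR: /") 1> >(sed -e "s/^/STDOUT: /")
EOF
chmod +x "$helper"

# Capturing via $() waits for the sed processes to close the pipe,
# so the tagged output is complete when we read it.
out=$("$helper" 'echo hello; echo oops >&2' 2>&1)
echo "$out"
```

The helper keeps testing.sh itself /bin/sh-parseable; only the helper needs bash.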
Seems things are getting too complicated here. What about just using regular redirection of stdout/stderr to some file(s), and after the command is done, simply cat their (tagged) content? This would get rid of the unbuffer hugger-mugger as well as the problem with tests run by /bin/sh. The output will be "buffered", but I think we can live with that. (There's always the good old regular rlRun when we need the test to behave more responsively.)
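This alternative can be sketched in a few lines of plain POSIX shell (illustrative names; note there is no process substitution, so it parses fine under /bin/sh):

```shell
# Run the command with stdout/stderr captured in temp files, then emit
# the tagged content once the command finishes. Buffered, but portable.
run_tagged() {
    _out=$(mktemp); _err=$(mktemp)
    eval "$1" > "$_out" 2> "$_err"
    _rc=$?
    sed -e 's/^/STDOUT: /' "$_out"    # tag captured stdout
    sed -e 's/^/STDERR: /' "$_err"    # tag captured stderr
    rm -f "$_out" "$_err"
    return $_rc                       # preserve the command's exit code
}

run_tagged 'echo hello; echo oops >&2'
# prints: STDOUT: hello
#         STDERR: oops
```

Unlike the pty/unbuffer route, the exit status of the evaled command passes through untouched, which sidesteps the return-value bug mentioned earlier in the thread.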
Some ideas... Besides redirecting the outputs to files as Petr proposed (which is of course a very good idea - simple and easy to implement) and the other workarounds stated above, redirection to named pipes could be used. Named pipes are slightly more complicated, but they could allow "realtime" processing of the outputs (btw, they are actually what hides behind the bashism used in the initial implementation). Python provides quite a nice interface implementing this, subprocess.Popen (though I doubt it allows the feature I called "realtime" processing in the previous paragraph; shell should match our needs without problems). The question is whether such effort makes sense, of course.
beakerlib-1.9-1.fc20 has been submitted as an update for Fedora 20. https://admin.fedoraproject.org/updates/beakerlib-1.9-1.fc20
Package beakerlib-1.9-1.fc20: * should fix your issue, * was pushed to the Fedora 20 testing repository, * should be available at your local mirror within two days. Update it with: # su -c 'yum update --enablerepo=updates-testing beakerlib-1.9-1.fc20' as soon as you are able to. Please go to the following url: https://admin.fedoraproject.org/updates/FEDORA-2014-7442/beakerlib-1.9-1.fc20 then log in and leave karma (feedback).
beakerlib-1.9-2.fc20 has been submitted as an update for Fedora 20. https://admin.fedoraproject.org/updates/beakerlib-1.9-2.fc20
Please consider the package fixing this bug available in Fedora stable repos once bz1116308 is closed and RHEL stable repos once bz1116317 is closed. Fixed in: beakerlib-1.9-2.fc19 beakerlib-1.9-2.fc20 beakerlib-1.9-2.fc21 beakerlib-1.9-3.el5 beakerlib-1.9-2.el6 beakerlib-1.9-2.el7
beakerlib-1.9-3.fc20 has been submitted as an update for Fedora 20. https://admin.fedoraproject.org/updates/beakerlib-1.9-3.fc20
beakerlib-1.9-3.fc20 has been pushed to the Fedora 20 stable repository. If problems still persist, please make note of it in this bug report.