python-priority fails to build with Python 3.8.0b1. See https://copr.fedorainfracloud.org/coprs/g/python/python3.8/package/python-priority/ for the actual build logs. This report is automated and not very verbose; I'll follow up here with details.
=================================== FAILURES ===================================
_______________ TestPriorityTreeOutput.test_period_of_repetition _______________

self = <test_priority.TestPriorityTreeOutput object at 0x7f697a9ed150>

    @given(STREAMS_AND_WEIGHTS)
>   def test_period_of_repetition(self, streams_and_weights):
        """
        The period of repetition of a priority sequence is given by the sum of
        the weights of the streams. Once that many values have been pulled out
        the sequence repeats identically.
        """

test/test_priority.py:492:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python2.7/site-packages/hypothesis/core.py:600: in execute
    % (test.__name__, text_repr[0])
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <hypothesis.core.StateForActualGivenExecution object at 0x7f697a9ed510>
message = 'Hypothesis test_period_of_repetition(self=<test_priority.TestPriorityTreeOutput at 0x7f697a9ed150>, streams_and_weigh... (257, 249),\n    (10715, 247)]) produces unreliable results: Falsified on the first call but did not on a subsequent one'

    def __flaky(self, message):
        if len(self.falsifying_examples) <= 1:
>           raise Flaky(message)
E           Flaky: Hypothesis test_period_of_repetition(self=<test_priority.TestPriorityTreeOutput at 0x7f697a9ed150>, streams_and_weights=[(29729, 157),
E             (2627, 16),
E             (17000, 17),
E             (13695, 160),
E             (90876124250409049, 71),
E             (23759, 4),
E             (1, 1),
E             (23041, 162),
E             (104, 81),
E             (80, 201),
E             (257, 249),
E             (10715, 247)]) produces unreliable results: Falsified on the first call but did not on a subsequent one

/usr/lib/python2.7/site-packages/hypothesis/core.py:770: Flaky
---------------------------------- Hypothesis ----------------------------------
Falsifying example: test_period_of_repetition(self=<test_priority.TestPriorityTreeOutput at 0x7f697a9ed150>, streams_and_weights=[(29729, 157),
 (2627, 16),
 (17000, 17),
 (13695, 160),
 (90876124250409049, 71),
 (23759, 4),
 (1, 1),
 (23041, 162),
 (104, 81),
 (80, 201),
 (257, 249),
 (10715, 247)])
Unreliable test timings! On an initial run, this test took 308.50ms, which
exceeded the deadline of 200.00ms, but on a subsequent run it took 169.27 ms,
which did not. If you expect this sort of variability in your test timings,
consider turning deadlines off for this test by setting deadline=None.

You can reproduce this example by temporarily adding
@reproduce_failure('4.23.4', 'AIUDVOhAnJ8DExSED9ICAITPEHICAGr9n4ID+QKFtrUBBRCxRrQEN7mcAwEBlQAAAQCSAMQHBIG0AKFtASnOUIMGBwBAnsg0AAECAPg+BwMAU7X2Aw==')
as a decorator on your test case
==================== 1 failed, 161 passed in 491.49 seconds ====================

Looks like a flaky test :(
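For what it's worth, Hypothesis itself points at the fix in the log above: the test was not actually falsified, it just exceeded the default 200 ms per-example deadline on one run and not on another, which Hypothesis reports as Flaky. A minimal sketch of the suggested workaround (assuming Hypothesis 4.x; the strategy and test body below are simple stand-ins, since the real STREAMS_AND_WEIGHTS strategy and assertions live in test/test_priority.py):

```python
from hypothesis import given, settings, strategies as st

# Stand-in for the project's STREAMS_AND_WEIGHTS strategy: non-empty lists
# of (stream_id, weight) pairs with weights in the HTTP/2 range 1..256.
streams_and_weights = st.lists(
    st.tuples(st.integers(min_value=1), st.integers(min_value=1, max_value=256)),
    min_size=1,
)

@settings(deadline=None)  # timings vary on loaded build machines, so drop the deadline
@given(streams_and_weights)
def test_period_of_repetition(saw):
    # Placeholder assertion; the real test checks that the priority
    # sequence repeats with a period equal to the sum of the weights.
    assert len(saw) >= 1

test_period_of_repetition()
```

The upstream fix could equally be `deadline=None` in a module-wide `settings` profile, so slow-but-correct property tests don't fail intermittently on Koji/Copr builders.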
*** This bug has been marked as a duplicate of bug 1709800 ***