Bug 107873 - r-c-x does not configure "dri" Radeon 9200
Summary: r-c-x does not configure "dri" Radeon 9200
Keywords:
Status: CLOSED DUPLICATE of bug 115672
Alias: None
Product: Fedora
Classification: Fedora
Component: redhat-config-xfree86
Version: 1
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Brent Fox
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2003-10-23 22:30 UTC by Warren Togami
Modified: 2007-11-30 22:10 UTC
CC: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2006-02-21 18:59:23 UTC
Type: ---
Embargoed:



Description Warren Togami 2003-10-23 22:30:30 UTC
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.5)
Gecko/20031007 Firebird/0.7

Description of problem:
(mharris said to file this.)

The latest redhat-config-xfree86-0.9.15-1 successfully configures my Radeon
9200; however, it does not add the line Load "dri".  If you need my hardware
details, including lspci output, please read Bug 107805.

Version-Release number of selected component (if applicable):
ftp://people.redhat.com/bfox/redhat-config-xfree86-0.9.15-1.noarch.rpm

Comment 1 Dams 2003-10-28 04:05:49 UTC
I've got a Radeon 7200. DRI always worked for me (I played Quake 3, Warcraft 3...) on
Shrike. redhat-config-xfree86 didn't put the 'Load "dri"' line in /etc/X11/XF86Config.

BTW, where is the checkbox to enable 3D accel in r-c-x? I don't see it anymore...
I'm using redhat-config-xfree86-0.9.15-1 too.

Comment 2 Mike A. Harris 2003-10-28 15:38:53 UTC
Brent:

The Load "dri" option is safe and correct to place in the config file
for all Radeon hardware, whether or not the hardware is supported by DRI,
as the video driver knows which hardware is DRI supported and will
properly disable DRI internally if the particular chip is not supported.  This
simplifies config tool choices, and means DRI will just work for people
if the driver is ever updated to support DRI on previously unsupported
hardware.  If the Load "dri" line breaks hardware that DRI doesn't support,
I'd like to get bug reports, as that is a very serious bug.  The Load "dri"
line doesn't technically enable DRI; it just loads the X server extension,
and should be harmless on all hardware.  We've also traditionally used it
as an easy and quick way to manually disable DRI, which is ok.
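
For reference, the line in question lives in the Module section of
/etc/X11/XF86Config.  A minimal illustrative fragment (the other Load
lines here are just examples, not a complete recommended set):

```
Section "Module"
    Load "dbe"
    Load "extmod"
    Load "glx"
    # Loads the DRI X server extension; harmless if the driver never uses it.
    Load "dri"
EndSection
```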

Anvil:

The 3D checkbox is gone from the tool because it is not possible to implement
a functional 3D checkbox.  First of all, "DRI" is not 3D acceleration; it is
an X extension which permits the drivers to utilize the DMA/IRQ capabilities
of the hardware through the use of a shared area of memory (SAREA).  This
can be used to implement 3D acceleration, 2D acceleration, Xvideo acceleration
and other features that particular hardware supports via DMA access.  3D
acceleration is the most common thing to be implemented using the DRI
interfaces, but it is very incorrect to treat DRI == 3D.  The Radeon driver,
for example, uses DRI for 2D acceleration as well; if you disable DRI,
the Radeon driver falls back to MMIO for 2D acceleration instead.

Some video drivers support DRI and some do not.  Of the drivers that do support
DRI, some use it for 3D; some for 3D and 2D; some for 3D, 2D and Xvideo.  Such
drivers may support DRI on some of their cards and not at all on others.  For
example, Radeon 9000 has DRI support but Radeon 9800 does not.

If we had a checkbox that said "[ ] Enable 3D acceleration", there would be no
way in XFree86 to actually enable or disable 3D acceleration, so that checkbox
would be bogus.
If the checkbox instead said "[ ] Enable DRI (Direct Rendering Infrastructure)",
that would be only a bit more accurate; it would not really be "enabling DRI",
but rather "loading the DRI X extension" by deciding whether or not to
Load "dri" in the config file.

Since a single video driver can support DRI on some chips and not support DRI
on other chips, there is no easy way the config tool can know which video
chips are supported by DRI and which ones are not in a given video driver.
Presenting the option "[ ] Enable 3D acceleration" to users of a Radeon 9000
would just happen to work, because the Radeon driver supports DRI on Radeon 9000,
and disabling the loading of the DRI extension has the side effect of disabling
3D acceleration.  However, users of a Radeon 9500/9600/9700/9800 could toggle
that checkbox on and off to no end and get absolutely no change.  The checkbox
would still cause the X server to load or not load the DRI extension, giving
the user the illusion that their card has 3D acceleration support and that
they just enabled 3D acceleration with that checkbox; but the driver doesn't
support 3D on their card, so the DRI extension loads into the X server, and
the Radeon driver's InitDRI disables DRI on unsupported cards.  The
"[ ] Enable 3D" checkbox is therefore misleading to the user.  For the checkbox
to be accurate, the configuration tool would have to internally contain a
complete list of the PCI IDs of every chip that has proper working DRI support,
3D acceleration included.  However, detecting based on PCI ID is not
enough, because PCI Radeon is not officially supported by DRI and requires
the ForcePCIRadeon config file option to be used in order to use DRI
experimentally (that option can also be used to force AGP Radeon to use
pcigart if AGP doesn't work).  This would mean the config tool now has to
walk the PCI bus, and find all video cards, then find the PCI extended
capabilities pointer, and walk the extended CAP list to find the AGP cap,
to determine if it is an AGP card or a PCI card - still not knowing if your
motherboard AGP is supported by agpgart or not either.
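
The capability-list walk described above is mechanical but fiddly; here is
an illustrative sketch (not code from any config tool) that decides whether
a card is AGP, given a raw 256-byte PCI config-space dump such as the one
modern kernels expose at /sys/bus/pci/devices/*/config:

```python
def has_agp_cap(cfg):
    """Walk the PCI capability list in a 256-byte config-space dump and
    report whether the AGP capability (cap ID 0x02) is present.

    Illustrative sketch only; offsets are from the PCI spec:
    status register at 0x06 (bit 4 = capability list exists),
    capabilities pointer at 0x34, each capability = (ID byte, next byte).
    """
    AGP_CAP_ID = 0x02
    status = cfg[0x06] | (cfg[0x07] << 8)
    if not status & 0x10:           # no capability list at all
        return False
    ptr = cfg[0x34] & 0xFC          # pointer is dword-aligned
    seen = set()
    while ptr and ptr not in seen:  # guard against malformed loops
        seen.add(ptr)
        cap_id, ptr = cfg[ptr], cfg[ptr + 1] & 0xFC
        if cap_id == AGP_CAP_ID:
            return True
    return False
```

Even with this, as noted above, finding the AGP capability on the card still
tells you nothing about whether the motherboard's AGP chipset is supported
by agpgart.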

Even if all of this maze of complexity could be put into the config tool, or
into database files of some sort that the config tool reads, what is true
support-wise on x86 isn't necessarily true on AMD64, ia64, ppc, etc., so we
would have to maintain separate databases of PCI IDs, PCI/AGP status, etc. per
architecture.  And even then, that would only determine "DRI" support for a
chip, not whether 3D acceleration is actually supported.

As soon as I update the Radeon driver, a chip that wasn't supported by DRI
before might become supported, and I'd have to update yet another database for
all architectures, and rerelease a new XFree86, new config tool, and new hwdata
packages.

In short, having a "[ ] Enable 3D acceleration" checkbox is nice in theory,
but it is next to impossible to actually implement due to the insane complexity
of the underlying system, and architectural portability.  If such a feature
can not be implemented 100% correctly and accurately enable/disable 3D on
a given chip, then users will file bug reports "the 3D checkbox doesn't work
on my card!", and we aren't able to easily fix that.  It isn't worth wasting
several weeks or more engineer hours for a kindof useless option that isn't
accurate or reliable anyway.

Long term, if each video driver had its own private metadata database file,
like Windows .INF files; if we implemented the ability to distinguish between
AGP and PCI cards in the tool as described above (or by getting that info from
the X server, which isn't currently possible); and if we also gave the metadata
files per-architecture configurability, then we might be able to implement
an "[ ] Enable DRI" checkbox some time down the line.  But even then: 3D is
implemented in X using DRI, yet DRI itself is not 3D acceleration.  Future
drivers may do 2D with DRI also, even without 3D acceleration support.

Hope this clarifies.





Comment 3 Dams 2003-10-28 17:24:22 UTC
Thanks Mike for this (quite long) explanation. :)

Comment 4 Mike A. Harris 2003-11-29 01:03:03 UTC
IMHO, the config tool should probably just put

    Load "dri"

in the config file for all video hardware always, on all architectures
that ship DRI enabled.  That just loads the DRI X server
extension, which is needed if a driver supports DRI; if a
given driver does not support DRI, the DRI extension just
gets loaded and sits unused, so no harm done.
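
Tool-side, that change amounts to something like the following sketch
(`ensure_load_dri` is a hypothetical helper for illustration, not actual
redhat-config-xfree86 code):

```python
import re

def ensure_load_dri(config_text):
    """Return config_text with 'Load "dri"' present in Section "Module".

    Hypothetical illustration of the proposal above: unconditionally make
    sure the dri extension module is loaded, leaving everything else alone.
    """
    # Already present (possibly indented)? Then the config is fine as-is.
    if re.search(r'^\s*Load\s+"dri"', config_text, re.MULTILINE):
        return config_text
    # Otherwise insert the line just before the EndSection that closes
    # the first Section "Module" block.
    pattern = re.compile(r'(Section\s+"Module".*?)(EndSection)', re.DOTALL)
    return pattern.sub(r'\1    Load "dri"\n\2', config_text, count=1)
```

The point of the sketch is that the tool needs no per-chip knowledge at all;
the operation is the same for every card.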

If a driver fails for a given card with this line present, whether
DRI is supported on that card or not, I'd kind of like people to
report that as a bug against XFree86 so it can detect the issue
and properly disable DRI inside the driver on incompatible hardware.

Brent, how does this sound to you for FC2?


Comment 5 Brent Fox 2004-01-19 23:12:11 UTC
I'm not sure I understand what you're saying.  Are you suggesting that
we remove all the 'LINE #Option "nodri"' lines from the Cards file in
hwdata?

Comment 6 Mike A. Harris 2004-01-20 02:34:57 UTC
Brent: No, that line is harmless, and is commented out by default
anyway.  Here's a basic summary of DRI configurations:

In a given X release, a particular driver may or may not support
DRI at all, or might support it only on certain architectures, or
we may have disabled it deliberately (i.e. RHEL/ppc).  So it is
best if we can avoid any assumptions in config tools and other
infrastructure as to whether or not a particular driver has DRI
support (for one or more of the cards it supports).  It's also
possible and common that a given driver with DRI support
only supports DRI on certain specific chips, and
not on all hardware the driver supports.  However, by updating
the driver(s) over time, and in new X releases, DRI support
might be added for a chip that previously had none.  An
example is Radeon 8500 being supported by
DRI in 4.3.0, but not previously.  So, we should avoid any
assumption in config tools and other infrastructure about
DRI support (or the lack of it) for a given specific chip/card.
That just simplifies everything greatly, and makes management of
DRI support/non support centralized, in this case in the driver
itself.

There are two aspects of DRI to take into account from a config
tool perspective, though only one of them is really important to
handle, IMHO.  The first is that all config files should
have the DRI module loaded, always, regardless of whether the
particular video hardware in the person's system has DRI support
or not.  The DRI module just loads the X server "dri" extension
module, to provide the server side infrastructure to be optionally
used by drivers which support DRI.  If a driver doesn't support
DRI on a particular card (or not at all), it just doesn't use
the DRI functions that are in the 'dri' module, however that module
doesn't do anything if the driver(s) don't use it, so it is harmless
to load the 'dri' module always.  The only case where the module
should not be loaded is if we disable DRI at compile time, such
as in our RHEL3/ppc release.  I'd have to double-check to be certain
(and I have no easy way to do that), but the dri module is not
shipped in our PPC release, so trying to load it would cause a
failure.  Other than that, DRI should be loaded in all
video hardware configurations on platforms we ship DRI
for (possible exceptions listed below).

If a driver has no DRI support at all, what happens is: the server
starts up, loads the dri extension, then loads the other modules and
the video driver.  The video driver does not support DRI, so
it never calls into the DRI extension routines, and no DRI
codepath is ever executed.  The driver just works normally.

If a driver has DRI support for some hardware, and your card is
among the hardware the driver supports DRI on, then as long as the
DRI extension is loaded, DRI will be enabled and you'll have working DRI.

If a driver has DRI support for some hardware, but your card does
NOT have DRI support in the driver, then the driver has a
built-in list of hardware it does not support DRI on; it
SHOULD disable DRI when it detects this, and log a message
to the log file stating "[dri] DRI disabled as this hardware does
not have DRI support yet." or some similar message to that effect.
If the driver hangs here, or tries to initialize or use DRI at all
on unsupported hardware, that very well can hang the server and/or
system, and that would be a rather serious driver bug that should
be bugzilla'd and fixed before release.

On hardware that does have DRI support, on a fresh OS install,
with X configured properly for DRI operation, there are some cases
in which DRI support has certain restrictions, such as the tdfx
driver only supporting DRI on certain cards in depth 16, and on
Voodoo 3/Banshee only supporting DRI up to 1024x768.  Other drivers
have similar restrictions in certain circumstances as well, however
in all cases where DRI is incompatible with a specific setup, the
driver itself is responsible for detecting the problematic
configuration, and it should disable DRI on its own during the
driver's DRIInit().  If it does not, it's a driver bug, should be
in bugzilla, etc...

The "NoDRI" option is intended to instruct a particular
driver which does contain DRI support for a given chip that you
wish to disable DRI.  In this case, it is perfectly fine to still
load the DRI extension module; you're just instructing the driver
not to use DRI.
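
As an illustrative config fragment (the Identifier value here is a
placeholder, not from this bug report):

```
Section "Device"
    Identifier "Videocard0"
    Driver     "radeon"
    # Driver skips its DRI codepaths even though the dri extension is loaded.
    Option     "NoDRI"
EndSection
```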

In the past, we have used NoDRI to try to disable DRI on problem
hardware, and we've also on occasion commented out the Load "dri"
line.  Both methods may work, but both are more of a hack
than anything, and can require config tool black magic, which
is ugly and presents us with other problems.  It also decentralizes
the problem, which has additional bad juju for us.



So, what I'd like to propose for the future, is the following:

For all OS releases, if we ship our kernel with DRM kernel modules,
then our XFree86 for that OS release should be compiled with DRI
support as well.  The config tools should put the Load "dri" line
in the config file always, no matter what hardware is present in
the system.  If our OS release does not ship kernel DRM modules,
then our XFree86 should be compiled without DRI support, the
'dri' extension won't be present, and the Load "dri" line should
not be present in the config file.  It would of course be simpler
if we could just put the line there always, and have the server
ignore errors when it can't find the module.  I don't know if it
will start up if the module is explicitly listed but not present;
however, if you can test this, it's possible we might be able to
just put the line in always and make things one notch simpler
all around.

Aside from that, assuming the above configuration, if any users
encounter problems with any particular hardware, using our
supplied kernel, X, and driver modules, in which DRI causes their
X server or system to hang, and they report bugs to us during beta,
etc., what I plan on doing is trying to determine the cause of the
problem, and doing one of the following:

- Find and fix bug in driver (or equivalent for kernel side bugs)

- Change driver(s) to autodetect the particular chip having a
  consistent DRI instability for all users (or majority of users),
  and I will make the driver itself default to DRI being off for
  this scenario, and require the user to use Option "dri" in the
  device section to override the default and force DRI to be on,
  caveat emptor.

- If a particular driver has totally broken DRI support that doesn't
  work at all, or has tonnes of problems and isn't well maintained,
  it makes sense either to hard-code DRI to be off by default in the
  driver and require the user to use the Option "dri" from above,
  or perhaps to not ship DRM for that hardware at all (we've done
  that for SiS, for example).
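
The per-device override mentioned above would look roughly like this
(sketch; the Identifier is a placeholder, and the boolean value follows
standard XF86Config option syntax):

```
Section "Device"
    Identifier "Videocard0"
    Driver     "radeon"
    # Force DRI on where the driver defaults it to off -- caveat emptor.
    Option     "dri" "true"
EndSection
```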


So, in summary, you shouldn't need to worry about the Option 'nodri'
stuff at all, as we shouldn't have it hardcoded anywhere in the
tools; however, it might be present in the Cards file on
occasion, temporarily, to test things.  It's ok if those lines are
present in Cards by default but commented out.  That just
makes the option more noticeable to users who might experience
problems, but where we want DRI available by default.  However,
the Cards file should not, IMHO, have any nodri lines which are not
commented out by default.  If it does, I consider that to be a
driver bug I need to attend to, and an ugly hack in Cards
which should go away.

In the past, we had many more crufty things in Cards, and it required
many config tool hacks and other black magic.  Back then, that
was the easiest way to work around problems in XFree86, and so that
was how it was done prior to me coming onboard.  Over time, I
determined that approach was hacky, and caused longer-term
maintenance and upgrade problems and ugly hacks.  While fixing
these things, or working around them inside the drivers, takes more
time and effort on my side, I believe it is "the right way",
and it makes life easier all around for everyone else as well, via
fewer incoming bugs against anaconda/r-c-x/etc. crashing when it is
really an X bug, etc.

Sorry for the long winded comment, but I've been thinking about
this stuff lately, and decided I might as well put my thoughts
into writing so we've got a reference for the future as well.  ;o)

Any questions/comments/feedback, feel free to continue here, or in
email or IRC.

Comment 7 Brent Fox 2004-03-04 20:13:38 UTC

*** This bug has been marked as a duplicate of 115672 ***

Comment 8 Red Hat Bugzilla 2006-02-21 18:59:23 UTC
Changed to 'CLOSED' state since 'RESOLVED' has been deprecated.

