Red Hat Bugzilla – Attachment 291178 Details for Bug 426743
savage: opengl programs fail on startup with wait event returned error -14
[patch]
DRM kernel modules update
drm-mm-git-071229.patch (text/plain), 1.40 MB, created by Samo Pogacnik on 2008-01-09 18:21:19 UTC
Description: DRM kernel modules update
Filename: drm-mm-git-071229.patch
MIME Type: text/plain
Creator: Samo Pogacnik
Created: 2008-01-09 18:21:19 UTC
Size: 1.40 MB
Flags: patch, obsolete
diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/ati_pcigart.c linux-2.6.23.i686/drivers/char/drm/ati_pcigart.c
--- linux-2.6.23.i686.orig/drivers/char/drm/ati_pcigart.c	2007-10-09 22:31:38.000000000 +0200
+++ linux-2.6.23.i686/drivers/char/drm/ati_pcigart.c	2008-01-06 09:24:57.000000000 +0100
@@ -51,8 +51,12 @@ static void *drm_ati_alloc_pcigart_table
 
 	page = virt_to_page(address);
 
-	for (i = 0; i < order; i++, page++)
+	for (i = 0; i < order; i++, page++) {
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,15)
+		get_page(page);
+#endif
 		SetPageReserved(page);
+	}
 
 	DRM_DEBUG("%s: returning 0x%08lx\n", __FUNCTION__, address);
 	return (void *)address;
@@ -67,8 +71,12 @@ static void drm_ati_free_pcigart_table(v
 
 	page = virt_to_page((unsigned long)address);
 
-	for (i = 0; i < num_pages; i++, page++)
+	for (i = 0; i < num_pages; i++, page++) {
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,15)
+		__put_page(page);
+#endif
 		ClearPageReserved(page);
+	}
 
 	free_pages((unsigned long)address, order);
 }
@@ -112,8 +120,10 @@ int drm_ati_pcigart_cleanup(struct drm_d
 		gart_info->bus_addr = 0;
 	}
 
+
 	if (gart_info->gart_table_location == DRM_ATI_GART_MAIN
 	    && gart_info->addr) {
+
 		drm_ati_free_pcigart_table(gart_info->addr, order);
 		gart_info->addr = NULL;
 	}
diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/Doxyfile linux-2.6.23.i686/drivers/char/drm/Doxyfile
--- linux-2.6.23.i686.orig/drivers/char/drm/Doxyfile	1970-01-01 01:00:00.000000000 +0100
+++ linux-2.6.23.i686/drivers/char/drm/Doxyfile	2008-01-06 09:24:57.000000000 +0100
@@ -0,0 +1,1161 @@
+# Doxyfile 1.3.8
+
+# This file describes the settings to be used by the documentation system
+# doxygen (www.doxygen.org) for a project
+#
+# All text after a hash (#) is considered a comment and will be ignored
+# The format is:
+#       TAG = value [value, ...]
+# For lists items can also be appended using:
+#       TAG += value [value, ...]
+# Values that contain spaces should be placed between quotes (" ")
+
+#---------------------------------------------------------------------------
+# Project related configuration options
+#---------------------------------------------------------------------------
+
+# The PROJECT_NAME tag is a single word (or a sequence of words surrounded
+# by quotes) that should identify the project.
+
+PROJECT_NAME = "Direct Rendering Module"
+
+# The PROJECT_NUMBER tag can be used to enter a project or revision number.
+# This could be handy for archiving the generated documentation or
+# if some version control system is used.
+
+PROJECT_NUMBER =
+
+# The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute)
+# base path where the generated documentation will be put.
+# If a relative path is entered, it will be relative to the location
+# where doxygen was started. If left blank the current directory will be used.
+
+OUTPUT_DIRECTORY =
+
+# If the CREATE_SUBDIRS tag is set to YES, then doxygen will create
+# 4096 sub-directories (in 2 levels) under the output directory of each output
+# format and will distribute the generated files over these directories.
+# Enabling this option can be useful when feeding doxygen a huge amount of source
+# files, where putting all generated files in the same directory would otherwise
+# cause performance problems for the file system.
+
+CREATE_SUBDIRS = NO
+
+# The OUTPUT_LANGUAGE tag is used to specify the language in which all
+# documentation generated by doxygen is written. Doxygen will use this
+# information to generate all constant output in the proper language.
+# The default language is English, other supported languages are:
+# Brazilian, Catalan, Chinese, Chinese-Traditional, Croatian, Czech, Danish,
+# Dutch, Finnish, French, German, Greek, Hungarian, Italian, Japanese,
+# Japanese-en (Japanese with English messages), Korean, Korean-en, Norwegian,
+# Polish, Portuguese, Romanian, Russian, Serbian, Slovak, Slovene, Spanish,
+# Swedish, and Ukrainian.
+
+OUTPUT_LANGUAGE = English
+
+# This tag can be used to specify the encoding used in the generated output.
+# The encoding is not always determined by the language that is chosen,
+# but also whether or not the output is meant for Windows or non-Windows users.
+# In case there is a difference, setting the USE_WINDOWS_ENCODING tag to YES
+# forces the Windows encoding (this is the default for the Windows binary),
+# whereas setting the tag to NO uses a Unix-style encoding (the default for
+# all platforms other than Windows).
+
+USE_WINDOWS_ENCODING = NO
+
+# If the BRIEF_MEMBER_DESC tag is set to YES (the default) Doxygen will
+# include brief member descriptions after the members that are listed in
+# the file and class documentation (similar to JavaDoc).
+# Set to NO to disable this.
+
+BRIEF_MEMBER_DESC = YES
+
+# If the REPEAT_BRIEF tag is set to YES (the default) Doxygen will prepend
+# the brief description of a member or function before the detailed description.
+# Note: if both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the
+# brief descriptions will be completely suppressed.
+
+REPEAT_BRIEF = YES
+
+# This tag implements a quasi-intelligent brief description abbreviator
+# that is used to form the text in various listings. Each string
+# in this list, if found as the leading text of the brief description, will be
+# stripped from the text and the result after processing the whole list, is used
+# as the annotated text. Otherwise, the brief description is used as-is. If left
+# blank, the following values are used ("$name" is automatically replaced with the
+# name of the entity): "The $name class" "The $name widget" "The $name file"
+# "is" "provides" "specifies" "contains" "represents" "a" "an" "the"
+
+ABBREVIATE_BRIEF =
+
+# If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then
+# Doxygen will generate a detailed section even if there is only a brief
+# description.
+
+ALWAYS_DETAILED_SEC = NO
+
+# If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all inherited
+# members of a class in the documentation of that class as if those members were
+# ordinary class members. Constructors, destructors and assignment operators of
+# the base classes will not be shown.
+
+INLINE_INHERITED_MEMB = NO
+
+# If the FULL_PATH_NAMES tag is set to YES then Doxygen will prepend the full
+# path before files name in the file list and in the header files. If set
+# to NO the shortest path that makes the file name unique will be used.
+
+FULL_PATH_NAMES = NO
+
+# If the FULL_PATH_NAMES tag is set to YES then the STRIP_FROM_PATH tag
+# can be used to strip a user-defined part of the path. Stripping is
+# only done if one of the specified strings matches the left-hand part of
+# the path. The tag can be used to show relative paths in the file list.
+# If left blank the directory from which doxygen is run is used as the
+# path to strip.
+
+STRIP_FROM_PATH =
+
+# The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of
+# the path mentioned in the documentation of a class, which tells
+# the reader which header file to include in order to use a class.
+# If left blank only the name of the header file containing the class
+# definition is used. Otherwise one should specify the include paths that
+# are normally passed to the compiler using the -I flag.
+
+STRIP_FROM_INC_PATH =
+
+# If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter
+# (but less readable) file names. This can be useful is your file systems
+# doesn't support long names like on DOS, Mac, or CD-ROM.
+
+SHORT_NAMES = NO
+
+# If the JAVADOC_AUTOBRIEF tag is set to YES then Doxygen
+# will interpret the first line (until the first dot) of a JavaDoc-style
+# comment as the brief description. If set to NO, the JavaDoc
+# comments will behave just like the Qt-style comments (thus requiring an
+# explicit @brief command for a brief description.
+
+JAVADOC_AUTOBRIEF = YES
+
+# The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make Doxygen
+# treat a multi-line C++ special comment block (i.e. a block of //! or ///
+# comments) as a brief description. This used to be the default behaviour.
+# The new default is to treat a multi-line C++ comment block as a detailed
+# description. Set this tag to YES if you prefer the old behaviour instead.
+
+MULTILINE_CPP_IS_BRIEF = NO
+
+# If the DETAILS_AT_TOP tag is set to YES then Doxygen
+# will output the detailed description near the top, like JavaDoc.
+# If set to NO, the detailed description appears after the member
+# documentation.
+
+DETAILS_AT_TOP = YES
+
+# If the INHERIT_DOCS tag is set to YES (the default) then an undocumented
+# member inherits the documentation from any documented member that it
+# re-implements.
+
+INHERIT_DOCS = YES
+
+# If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC
+# tag is set to YES, then doxygen will reuse the documentation of the first
+# member in the group (if any) for the other members of the group. By default
+# all members of a group must be documented explicitly.
+
+DISTRIBUTE_GROUP_DOC = NO
+
+# The TAB_SIZE tag can be used to set the number of spaces in a tab.
+# Doxygen uses this value to replace tabs by spaces in code fragments.
+
+TAB_SIZE = 8
+
+# This tag can be used to specify a number of aliases that acts
+# as commands in the documentation. An alias has the form "name=value".
+# For example adding "sideeffect=\par Side Effects:\n" will allow you to
+# put the command \sideeffect (or @sideeffect) in the documentation, which
+# will result in a user-defined paragraph with heading "Side Effects:".
+# You can put \n's in the value part of an alias to insert newlines.
+
+ALIASES =
+
+# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources
+# only. Doxygen will then generate output that is more tailored for C.
+# For instance, some of the names that are used will be different. The list
+# of all members will be omitted, etc.
+
+OPTIMIZE_OUTPUT_FOR_C = YES
+
+# Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java sources
+# only. Doxygen will then generate output that is more tailored for Java.
+# For instance, namespaces will be presented as packages, qualified scopes
+# will look different, etc.
+
+OPTIMIZE_OUTPUT_JAVA = NO
+
+# Set the SUBGROUPING tag to YES (the default) to allow class member groups of
+# the same type (for instance a group of public functions) to be put as a
+# subgroup of that type (e.g. under the Public Functions section). Set it to
+# NO to prevent subgrouping. Alternatively, this can be done per class using
+# the \nosubgrouping command.
+
+SUBGROUPING = YES
+
+#---------------------------------------------------------------------------
+# Build related configuration options
+#---------------------------------------------------------------------------
+
+# If the EXTRACT_ALL tag is set to YES doxygen will assume all entities in
+# documentation are documented, even if no documentation was available.
+# Private class members and static file members will be hidden unless
+# the EXTRACT_PRIVATE and EXTRACT_STATIC tags are set to YES
+
+EXTRACT_ALL = NO
+
+# If the EXTRACT_PRIVATE tag is set to YES all private members of a class
+# will be included in the documentation.
+
+EXTRACT_PRIVATE = YES
+
+# If the EXTRACT_STATIC tag is set to YES all static members of a file
+# will be included in the documentation.
+
+EXTRACT_STATIC = YES
+
+# If the EXTRACT_LOCAL_CLASSES tag is set to YES classes (and structs)
+# defined locally in source files will be included in the documentation.
+# If set to NO only classes defined in header files are included.
+
+EXTRACT_LOCAL_CLASSES = YES
+
+# This flag is only useful for Objective-C code. When set to YES local
+# methods, which are defined in the implementation section but not in
+# the interface are included in the documentation.
+# If set to NO (the default) only methods in the interface are included.
+
+EXTRACT_LOCAL_METHODS = NO
+
+# If the HIDE_UNDOC_MEMBERS tag is set to YES, Doxygen will hide all
+# undocumented members of documented classes, files or namespaces.
+# If set to NO (the default) these members will be included in the
+# various overviews, but no documentation section is generated.
+# This option has no effect if EXTRACT_ALL is enabled.
+
+HIDE_UNDOC_MEMBERS = NO
+
+# If the HIDE_UNDOC_CLASSES tag is set to YES, Doxygen will hide all
+# undocumented classes that are normally visible in the class hierarchy.
+# If set to NO (the default) these classes will be included in the various
+# overviews. This option has no effect if EXTRACT_ALL is enabled.
+
+HIDE_UNDOC_CLASSES = NO
+
+# If the HIDE_FRIEND_COMPOUNDS tag is set to YES, Doxygen will hide all
+# friend (class|struct|union) declarations.
+# If set to NO (the default) these declarations will be included in the
+# documentation.
+
+HIDE_FRIEND_COMPOUNDS = NO
+
+# If the HIDE_IN_BODY_DOCS tag is set to YES, Doxygen will hide any
+# documentation blocks found inside the body of a function.
+# If set to NO (the default) these blocks will be appended to the
+# function's detailed documentation block.
+
+HIDE_IN_BODY_DOCS = NO
+
+# The INTERNAL_DOCS tag determines if documentation
+# that is typed after a \internal command is included. If the tag is set
+# to NO (the default) then the documentation will be excluded.
+# Set it to YES to include the internal documentation.
+
+INTERNAL_DOCS = NO
+
+# If the CASE_SENSE_NAMES tag is set to NO then Doxygen will only generate
+# file names in lower-case letters. If set to YES upper-case letters are also
+# allowed. This is useful if you have classes or files whose names only differ
+# in case and if your file system supports case sensitive file names. Windows
+# and Mac users are advised to set this option to NO.
+
+CASE_SENSE_NAMES = YES
+
+# If the HIDE_SCOPE_NAMES tag is set to NO (the default) then Doxygen
+# will show members with their full class and namespace scopes in the
+# documentation. If set to YES the scope will be hidden.
+
+HIDE_SCOPE_NAMES = NO
+
+# If the SHOW_INCLUDE_FILES tag is set to YES (the default) then Doxygen
+# will put a list of the files that are included by a file in the documentation
+# of that file.
+
+SHOW_INCLUDE_FILES = NO
+
+# If the INLINE_INFO tag is set to YES (the default) then a tag [inline]
+# is inserted in the documentation for inline members.
+
+INLINE_INFO = YES
+
+# If the SORT_MEMBER_DOCS tag is set to YES (the default) then doxygen
+# will sort the (detailed) documentation of file and class members
+# alphabetically by member name. If set to NO the members will appear in
+# declaration order.
+
+SORT_MEMBER_DOCS = NO
+
+# If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the
+# brief documentation of file, namespace and class members alphabetically
+# by member name. If set to NO (the default) the members will appear in
+# declaration order.
+
+SORT_BRIEF_DOCS = NO
+
+# If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be
+# sorted by fully-qualified names, including namespaces. If set to
+# NO (the default), the class list will be sorted only by class name,
+# not including the namespace part.
+# Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES.
+# Note: This option applies only to the class list, not to the
+# alphabetical list.
+
+SORT_BY_SCOPE_NAME = NO
+
+# The GENERATE_TODOLIST tag can be used to enable (YES) or
+# disable (NO) the todo list. This list is created by putting \todo
+# commands in the documentation.
+
+GENERATE_TODOLIST = YES
+
+# The GENERATE_TESTLIST tag can be used to enable (YES) or
+# disable (NO) the test list. This list is created by putting \test
+# commands in the documentation.
+
+GENERATE_TESTLIST = YES
+
+# The GENERATE_BUGLIST tag can be used to enable (YES) or
+# disable (NO) the bug list. This list is created by putting \bug
+# commands in the documentation.
+
+GENERATE_BUGLIST = YES
+
+# The GENERATE_DEPRECATEDLIST tag can be used to enable (YES) or
+# disable (NO) the deprecated list. This list is created by putting
+# \deprecated commands in the documentation.
+
+GENERATE_DEPRECATEDLIST= YES
+
+# The ENABLED_SECTIONS tag can be used to enable conditional
+# documentation sections, marked by \if sectionname ... \endif.
+
+ENABLED_SECTIONS =
+
+# The MAX_INITIALIZER_LINES tag determines the maximum number of lines
+# the initial value of a variable or define consists of for it to appear in
+# the documentation. If the initializer consists of more lines than specified
+# here it will be hidden. Use a value of 0 to hide initializers completely.
+# The appearance of the initializer of individual variables and defines in the
+# documentation can be controlled using \showinitializer or \hideinitializer
+# command in the documentation regardless of this setting.
+
+MAX_INITIALIZER_LINES = 30
+
+# Set the SHOW_USED_FILES tag to NO to disable the list of files generated
+# at the bottom of the documentation of classes and structs. If set to YES the
+# list will mention the files that were used to generate the documentation.
+
+SHOW_USED_FILES = YES
+
+#---------------------------------------------------------------------------
+# configuration options related to warning and progress messages
+#---------------------------------------------------------------------------
+
+# The QUIET tag can be used to turn on/off the messages that are generated
+# by doxygen. Possible values are YES and NO. If left blank NO is used.
+
+QUIET = YES
+
+# The WARNINGS tag can be used to turn on/off the warning messages that are
+# generated by doxygen. Possible values are YES and NO. If left blank
+# NO is used.
+
+WARNINGS = YES
+
+# If WARN_IF_UNDOCUMENTED is set to YES, then doxygen will generate warnings
+# for undocumented members. If EXTRACT_ALL is set to YES then this flag will
+# automatically be disabled.
+
+WARN_IF_UNDOCUMENTED = NO
+
+# If WARN_IF_DOC_ERROR is set to YES, doxygen will generate warnings for
+# potential errors in the documentation, such as not documenting some
+# parameters in a documented function, or documenting parameters that
+# don't exist or using markup commands wrongly.
+
+WARN_IF_DOC_ERROR = YES
+
+# The WARN_FORMAT tag determines the format of the warning messages that
+# doxygen can produce. The string should contain the $file, $line, and $text
+# tags, which will be replaced by the file and line number from which the
+# warning originated and the warning text.
+
+WARN_FORMAT = "$file:$line: $text"
+
+# The WARN_LOGFILE tag can be used to specify a file to which warning
+# and error messages should be written. If left blank the output is written
+# to stderr.
+
+WARN_LOGFILE =
+
+#---------------------------------------------------------------------------
+# configuration options related to the input files
+#---------------------------------------------------------------------------
+
+# The INPUT tag can be used to specify the files and/or directories that contain
+# documented source files. You may enter file names like "myfile.cpp" or
+# directories like "/usr/src/myproject". Separate the files or directories
+# with spaces.
+
+INPUT = . \
+        ../shared-core
+
+# If the value of the INPUT tag contains directories, you can use the
+# FILE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp
+# and *.h) to filter out the source-files in the directories. If left
+# blank the following patterns are tested:
+# *.c *.cc *.cxx *.cpp *.c++ *.java *.ii *.ixx *.ipp *.i++ *.inl *.h *.hh *.hxx *.hpp
+# *.h++ *.idl *.odl *.cs *.php *.php3 *.inc *.m *.mm
+
+FILE_PATTERNS = *.c \
+                *.h
+
+# The RECURSIVE tag can be used to turn specify whether or not subdirectories
+# should be searched for input files as well. Possible values are YES and NO.
+# If left blank NO is used.
+
+RECURSIVE = NO
+
+# The EXCLUDE tag can be used to specify files and/or directories that should
+# excluded from the INPUT source files. This way you can easily exclude a
+# subdirectory from a directory tree whose root is specified with the INPUT tag.
+
+EXCLUDE =
+
+# The EXCLUDE_SYMLINKS tag can be used select whether or not files or directories
+# that are symbolic links (a Unix filesystem feature) are excluded from the input.
+
+EXCLUDE_SYMLINKS = YES
+
+# If the value of the INPUT tag contains directories, you can use the
+# EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude
+# certain files from those directories.
+
+EXCLUDE_PATTERNS =
+
+# The EXAMPLE_PATH tag can be used to specify one or more files or
+# directories that contain example code fragments that are included (see
+# the \include command).
+
+EXAMPLE_PATH =
+
+# If the value of the EXAMPLE_PATH tag contains directories, you can use the
+# EXAMPLE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp
+# and *.h) to filter out the source-files in the directories. If left
+# blank all files are included.
+
+EXAMPLE_PATTERNS =
+
+# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be
+# searched for input files to be used with the \include or \dontinclude
+# commands irrespective of the value of the RECURSIVE tag.
+# Possible values are YES and NO. If left blank NO is used.
+
+EXAMPLE_RECURSIVE = NO
+
+# The IMAGE_PATH tag can be used to specify one or more files or
+# directories that contain image that are included in the documentation (see
+# the \image command).
+
+IMAGE_PATH =
+
+# The INPUT_FILTER tag can be used to specify a program that doxygen should
+# invoke to filter for each input file. Doxygen will invoke the filter program
+# by executing (via popen()) the command <filter> <input-file>, where <filter>
+# is the value of the INPUT_FILTER tag, and <input-file> is the name of an
+# input file. Doxygen will then use the output that the filter program writes
+# to standard output. If FILTER_PATTERNS is specified, this tag will be
+# ignored.
+
+INPUT_FILTER =
+
+# The FILTER_PATTERNS tag can be used to specify filters on a per file pattern
+# basis. Doxygen will compare the file name with each pattern and apply the
+# filter if there is a match. The filters are a list of the form:
+# pattern=filter (like *.cpp=my_cpp_filter). See INPUT_FILTER for further
+# info on how filters are used. If FILTER_PATTERNS is empty, INPUT_FILTER
+# is applied to all files.
+
+FILTER_PATTERNS =
+
+# If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using
+# INPUT_FILTER) will be used to filter the input files when producing source
+# files to browse (i.e. when SOURCE_BROWSER is set to YES).
+
+FILTER_SOURCE_FILES = NO
+
+#---------------------------------------------------------------------------
+# configuration options related to source browsing
+#---------------------------------------------------------------------------
+
+# If the SOURCE_BROWSER tag is set to YES then a list of source files will
+# be generated. Documented entities will be cross-referenced with these sources.
+# Note: To get rid of all source code in the generated output, make sure also
+# VERBATIM_HEADERS is set to NO.
+
+SOURCE_BROWSER = NO
+
+# Setting the INLINE_SOURCES tag to YES will include the body
+# of functions and classes directly in the documentation.
+
+INLINE_SOURCES = NO
+
+# Setting the STRIP_CODE_COMMENTS tag to YES (the default) will instruct
+# doxygen to hide any special comment blocks from generated source code
+# fragments. Normal C and C++ comments will always remain visible.
+
+STRIP_CODE_COMMENTS = YES
+
+# If the REFERENCED_BY_RELATION tag is set to YES (the default)
+# then for each documented function all documented
+# functions referencing it will be listed.
+
+REFERENCED_BY_RELATION = YES
+
+# If the REFERENCES_RELATION tag is set to YES (the default)
+# then for each documented function all documented entities
+# called/used by that function will be listed.
+
+REFERENCES_RELATION = YES
+
+# If the VERBATIM_HEADERS tag is set to YES (the default) then Doxygen
+# will generate a verbatim copy of the header file for each class for
+# which an include is specified. Set to NO to disable this.
+
+VERBATIM_HEADERS = NO
+
+#---------------------------------------------------------------------------
+# configuration options related to the alphabetical class index
+#---------------------------------------------------------------------------
+
+# If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index
+# of all compounds will be generated. Enable this if the project
+# contains a lot of classes, structs, unions or interfaces.
+
+ALPHABETICAL_INDEX = NO
+
+# If the alphabetical index is enabled (see ALPHABETICAL_INDEX) then
+# the COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns
+# in which this list will be split (can be a number in the range [1..20])
+
+COLS_IN_ALPHA_INDEX = 5
+
+# In case all classes in a project start with a common prefix, all
+# classes will be put under the same header in the alphabetical index.
+# The IGNORE_PREFIX tag can be used to specify one or more prefixes that
+# should be ignored while generating the index headers.
+
+IGNORE_PREFIX =
+
+#---------------------------------------------------------------------------
+# configuration options related to the HTML output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_HTML tag is set to YES (the default) Doxygen will
+# generate HTML output.
+
+GENERATE_HTML = YES
+
+# The HTML_OUTPUT tag is used to specify where the HTML docs will be put.
+# If a relative path is entered the value of OUTPUT_DIRECTORY will be
+# put in front of it. If left blank `html' will be used as the default path.
+
+HTML_OUTPUT = html
+
+# The HTML_FILE_EXTENSION tag can be used to specify the file extension for
+# each generated HTML page (for example: .htm,.php,.asp). If it is left blank
+# doxygen will generate files with .html extension.
+
+HTML_FILE_EXTENSION = .html
+
+# The HTML_HEADER tag can be used to specify a personal HTML header for
+# each generated HTML page. If it is left blank doxygen will generate a
+# standard header.
+
+HTML_HEADER =
+
+# The HTML_FOOTER tag can be used to specify a personal HTML footer for
+# each generated HTML page. If it is left blank doxygen will generate a
+# standard footer.
+
+HTML_FOOTER =
+
+# The HTML_STYLESHEET tag can be used to specify a user-defined cascading
+# style sheet that is used by each HTML page. It can be used to
+# fine-tune the look of the HTML output. If the tag is left blank doxygen
+# will generate a default style sheet. Note that doxygen will try to copy
+# the style sheet file to the HTML output directory, so don't put your own
+# stylesheet in the HTML output directory as well, or it will be erased!
+
+HTML_STYLESHEET =
+
+# If the HTML_ALIGN_MEMBERS tag is set to YES, the members of classes,
+# files or namespaces will be aligned in HTML using tables. If set to
+# NO a bullet list will be used.
+
+HTML_ALIGN_MEMBERS = YES
+
+# If the GENERATE_HTMLHELP tag is set to YES, additional index files
+# will be generated that can be used as input for tools like the
+# Microsoft HTML help workshop to generate a compressed HTML help file (.chm)
+# of the generated HTML documentation.
+
+GENERATE_HTMLHELP = NO
+
+# If the GENERATE_HTMLHELP tag is set to YES, the CHM_FILE tag can
+# be used to specify the file name of the resulting .chm file. You
+# can add a path in front of the file if the result should not be
+# written to the html output directory.
+
+CHM_FILE =
+
+# If the GENERATE_HTMLHELP tag is set to YES, the HHC_LOCATION tag can
+# be used to specify the location (absolute path including file name) of
+# the HTML help compiler (hhc.exe). If non-empty doxygen will try to run
+# the HTML help compiler on the generated index.hhp.
+
+HHC_LOCATION =
+
+# If the GENERATE_HTMLHELP tag is set to YES, the GENERATE_CHI flag
+# controls if a separate .chi index file is generated (YES) or that
+# it should be included in the master .chm file (NO).
+
+GENERATE_CHI = NO
+
+# If the GENERATE_HTMLHELP tag is set to YES, the BINARY_TOC flag
+# controls whether a binary table of contents is generated (YES) or a
+# normal table of contents (NO) in the .chm file.
+
+BINARY_TOC = NO
+
+# The TOC_EXPAND flag can be set to YES to add extra items for group members
+# to the contents of the HTML help documentation and to the tree view.
+
+TOC_EXPAND = NO
+
+# The DISABLE_INDEX tag can be used to turn on/off the condensed index at
+# top of each HTML page. The value NO (the default) enables the index and
+# the value YES disables it.
+
+DISABLE_INDEX = NO
+
+# This tag can be used to set the number of enum values (range [1..20])
+# that doxygen will group on one line in the generated HTML documentation.
+
+ENUM_VALUES_PER_LINE = 4
+
+# If the GENERATE_TREEVIEW tag is set to YES, a side panel will be
+# generated containing a tree-like index structure (just like the one that
+# is generated for HTML Help). For this to work a browser that supports
+# JavaScript, DHTML, CSS and frames is required (for instance Mozilla 1.0+,
+# Netscape 6.0+, Internet explorer 5.0+, or Konqueror). Windows users are
+# probably better off using the HTML help feature.
+
+GENERATE_TREEVIEW = NO
+
+# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be
+# used to set the initial width (in pixels) of the frame in which the tree
+# is shown.
+
+TREEVIEW_WIDTH = 250
+
+#---------------------------------------------------------------------------
+# configuration options related to the LaTeX output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_LATEX tag is set to YES (the default) Doxygen will
+# generate Latex output.
+
+GENERATE_LATEX = NO
+
+# The LATEX_OUTPUT tag is used to specify where the LaTeX docs will be put.
+# If a relative path is entered the value of OUTPUT_DIRECTORY will be
+# put in front of it. If left blank `latex' will be used as the default path.
+
+LATEX_OUTPUT = latex
+
+# The LATEX_CMD_NAME tag can be used to specify the LaTeX command name to be
+# invoked. If left blank `latex' will be used as the default command name.
+
+LATEX_CMD_NAME = latex
+
+# The MAKEINDEX_CMD_NAME tag can be used to specify the command name to
+# generate index for LaTeX. If left blank `makeindex' will be used as the
+# default command name.
+
+MAKEINDEX_CMD_NAME = makeindex
+
+# If the COMPACT_LATEX tag is set to YES Doxygen generates more compact
+# LaTeX documents. This may be useful for small projects and may help to
+# save some trees in general.
+
+COMPACT_LATEX = NO
+
+# The PAPER_TYPE tag can be used to set the paper type that is used
+# by the printer. Possible values are: a4, a4wide, letter, legal and
+# executive. If left blank a4wide will be used.
+
+PAPER_TYPE = a4wide
+
+# The EXTRA_PACKAGES tag can be to specify one or more names of LaTeX
+# packages that should be included in the LaTeX output.
+
+EXTRA_PACKAGES =
+
+# The LATEX_HEADER tag can be used to specify a personal LaTeX header for
+# the generated latex document. The header should contain everything until
+# the first chapter. If it is left blank doxygen will generate a
+# standard header. Notice: only use this tag if you know what you are doing!
+
+LATEX_HEADER =
+
+# If the PDF_HYPERLINKS tag is set to YES, the LaTeX that is generated
+# is prepared for conversion to pdf (using ps2pdf). The pdf file will
+# contain links (just like the HTML output) instead of page references
+# This makes the output suitable for online browsing using a pdf viewer.
+
+PDF_HYPERLINKS = NO
+
+# If the USE_PDFLATEX tag is set to YES, pdflatex will be used instead of
+# plain latex in the generated Makefile. Set this option to YES to get a
+# higher quality PDF documentation.
+
+USE_PDFLATEX = NO
+
+# If the LATEX_BATCHMODE tag is set to YES, doxygen will add the \\batchmode.
+# command to the generated LaTeX files. This will instruct LaTeX to keep
+# running if errors occur, instead of asking the user for help.
+# This option is also used when generating formulas in HTML.
+
+LATEX_BATCHMODE = NO
+
+# If LATEX_HIDE_INDICES is set to YES then doxygen will not
+# include the index chapters (such as File Index, Compound Index, etc.)
+# in the output.
+
+LATEX_HIDE_INDICES = NO
+
+#---------------------------------------------------------------------------
+# configuration options related to the RTF output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_RTF tag is set to YES Doxygen will generate RTF output
+# The RTF output is optimized for Word 97 and may not look very pretty with
+# other RTF readers or editors.
+
+GENERATE_RTF = NO
+
+# The RTF_OUTPUT tag is used to specify where the RTF docs will be put.
+# If a relative path is entered the value of OUTPUT_DIRECTORY will be
+# put in front of it. If left blank `rtf' will be used as the default path.
+
+RTF_OUTPUT = rtf
+
+# If the COMPACT_RTF tag is set to YES Doxygen generates more compact
+# RTF documents. This may be useful for small projects and may help to
+# save some trees in general.
+
+COMPACT_RTF = NO
+
+# If the RTF_HYPERLINKS tag is set to YES, the RTF that is generated
+# will contain hyperlink fields. The RTF file will
+# contain links (just like the HTML output) instead of page references.
+# This makes the output suitable for online browsing using WORD or other
+# programs which support those fields.
+# Note: wordpad (write) and others do not support links.
+
+RTF_HYPERLINKS = NO
+
+# Load stylesheet definitions from file. Syntax is similar to doxygen's
+# config file, i.e. a series of assignments. You only have to provide
+# replacements, missing definitions are set to their default value.
+
+RTF_STYLESHEET_FILE =
+
+# Set optional variables used in the generation of an rtf document.
+# Syntax is similar to doxygen's config file.
+
+RTF_EXTENSIONS_FILE =
+
+#---------------------------------------------------------------------------
+# configuration options related to the man page output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_MAN tag is set to YES (the default) Doxygen will
+# generate man pages
+
+GENERATE_MAN = NO
+
+# The MAN_OUTPUT tag is used to specify where the man pages will be put.
+# If a relative path is entered the value of OUTPUT_DIRECTORY will be
+# put in front of it. If left blank `man' will be used as the default path.
+
+MAN_OUTPUT = man
+
+# The MAN_EXTENSION tag determines the extension that is added to
+# the generated man pages (default is the subroutine's section .3)
+
+MAN_EXTENSION = .3
+
+# If the MAN_LINKS tag is set to YES and Doxygen generates man output,
+# then it will generate one additional man file for each entity
+# documented in the real man page(s). These additional files
+# only source the real man page, but without them the man command
+# would be unable to find the correct page. The default is NO.
>+ >+MAN_LINKS = NO >+ >+#--------------------------------------------------------------------------- >+# configuration options related to the XML output >+#--------------------------------------------------------------------------- >+ >+# If the GENERATE_XML tag is set to YES Doxygen will >+# generate an XML file that captures the structure of >+# the code including all documentation. >+ >+GENERATE_XML = NO >+ >+# The XML_OUTPUT tag is used to specify where the XML pages will be put. >+# If a relative path is entered the value of OUTPUT_DIRECTORY will be >+# put in front of it. If left blank `xml' will be used as the default path. >+ >+XML_OUTPUT = xml >+ >+# The XML_SCHEMA tag can be used to specify an XML schema, >+# which can be used by a validating XML parser to check the >+# syntax of the XML files. >+ >+XML_SCHEMA = >+ >+# The XML_DTD tag can be used to specify an XML DTD, >+# which can be used by a validating XML parser to check the >+# syntax of the XML files. >+ >+XML_DTD = >+ >+# If the XML_PROGRAMLISTING tag is set to YES Doxygen will >+# dump the program listings (including syntax highlighting >+# and cross-referencing information) to the XML output. Note that >+# enabling this will significantly increase the size of the XML output. >+ >+XML_PROGRAMLISTING = YES >+ >+#--------------------------------------------------------------------------- >+# configuration options for the AutoGen Definitions output >+#--------------------------------------------------------------------------- >+ >+# If the GENERATE_AUTOGEN_DEF tag is set to YES Doxygen will >+# generate an AutoGen Definitions (see autogen.sf.net) file >+# that captures the structure of the code including all >+# documentation. Note that this feature is still experimental >+# and incomplete at the moment. 
>+ >+GENERATE_AUTOGEN_DEF = NO >+ >+#--------------------------------------------------------------------------- >+# configuration options related to the Perl module output >+#--------------------------------------------------------------------------- >+ >+# If the GENERATE_PERLMOD tag is set to YES Doxygen will >+# generate a Perl module file that captures the structure of >+# the code including all documentation. Note that this >+# feature is still experimental and incomplete at the >+# moment. >+ >+GENERATE_PERLMOD = NO >+ >+# If the PERLMOD_LATEX tag is set to YES Doxygen will generate >+# the necessary Makefile rules, Perl scripts and LaTeX code to be able >+# to generate PDF and DVI output from the Perl module output. >+ >+PERLMOD_LATEX = NO >+ >+# If the PERLMOD_PRETTY tag is set to YES the Perl module output will be >+# nicely formatted so it can be parsed by a human reader. This is useful >+# if you want to understand what is going on. On the other hand, if this >+# tag is set to NO the size of the Perl module output will be much smaller >+# and Perl will parse it just the same. >+ >+PERLMOD_PRETTY = YES >+ >+# The names of the make variables in the generated doxyrules.make file >+# are prefixed with the string contained in PERLMOD_MAKEVAR_PREFIX. >+# This is useful so different doxyrules.make files included by the same >+# Makefile don't overwrite each other's variables. >+ >+PERLMOD_MAKEVAR_PREFIX = >+ >+#--------------------------------------------------------------------------- >+# Configuration options related to the preprocessor >+#--------------------------------------------------------------------------- >+ >+# If the ENABLE_PREPROCESSING tag is set to YES (the default) Doxygen will >+# evaluate all C-preprocessor directives found in the sources and include >+# files. >+ >+ENABLE_PREPROCESSING = YES >+ >+# If the MACRO_EXPANSION tag is set to YES Doxygen will expand all macro >+# names in the source code. 
If set to NO (the default) only conditional
>+# compilation will be performed. Macro expansion can be done in a controlled
>+# way by setting EXPAND_ONLY_PREDEF to YES.
>+
>+MACRO_EXPANSION = YES
>+
>+# If the EXPAND_ONLY_PREDEF and MACRO_EXPANSION tags are both set to YES
>+# then the macro expansion is limited to the macros specified with the
>+# PREDEFINED and EXPAND_AS_PREDEFINED tags.
>+
>+EXPAND_ONLY_PREDEF = YES
>+
>+# If the SEARCH_INCLUDES tag is set to YES (the default) the include files
>+# in the INCLUDE_PATH (see below) will be searched if a #include is found.
>+
>+SEARCH_INCLUDES = YES
>+
>+# The INCLUDE_PATH tag can be used to specify one or more directories that
>+# contain include files that are not input files but should be processed by
>+# the preprocessor.
>+
>+INCLUDE_PATH =
>+
>+# You can use the INCLUDE_FILE_PATTERNS tag to specify one or more wildcard
>+# patterns (like *.h and *.hpp) to filter out the header-files in the
>+# directories. If left blank, the patterns specified with FILE_PATTERNS will
>+# be used.
>+
>+INCLUDE_FILE_PATTERNS =
>+
>+# The PREDEFINED tag can be used to specify one or more macro names that
>+# are defined before the preprocessor is started (similar to the -D option of
>+# gcc). The argument of the tag is a list of macros of the form: name
>+# or name=definition (no spaces). If the definition and the = are
>+# omitted =1 is assumed.
>+
>+PREDEFINED = __KERNEL__ \
>+ DRM(x)=x \
>+ __OS_HAS_AGP=1 \
>+ __OS_HAS_MTRR=1
>+
>+# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then
>+# this tag can be used to specify a list of macro names that should be expanded.
>+# The macro definition that is found in the sources will be used.
>+# Use the PREDEFINED tag if you want to use a different macro definition.
>+ >+EXPAND_AS_DEFINED = DRMFILE \ >+ DRM_IOCTL_ARGS \ >+ DRM_IRQ_ARGS \ >+ DRM_TASKQUEUE_ARGS >+ >+# If the SKIP_FUNCTION_MACROS tag is set to YES (the default) then >+# doxygen's preprocessor will remove all function-like macros that are alone >+# on a line, have an all uppercase name, and do not end with a semicolon. Such >+# function macros are typically used for boiler-plate code, and will confuse the >+# parser if not removed. >+ >+SKIP_FUNCTION_MACROS = YES >+ >+#--------------------------------------------------------------------------- >+# Configuration::additions related to external references >+#--------------------------------------------------------------------------- >+ >+# The TAGFILES option can be used to specify one or more tagfiles. >+# Optionally an initial location of the external documentation >+# can be added for each tagfile. The format of a tag file without >+# this location is as follows: >+# TAGFILES = file1 file2 ... >+# Adding location for the tag files is done as follows: >+# TAGFILES = file1=loc1 "file2 = loc2" ... >+# where "loc1" and "loc2" can be relative or absolute paths or >+# URLs. If a location is present for each tag, the installdox tool >+# does not have to be run to correct the links. >+# Note that each tag file must have a unique name >+# (where the name does NOT include the path) >+# If a tag file is not located in the directory in which doxygen >+# is run, you must also specify the path to the tagfile here. >+ >+TAGFILES = >+ >+# When a file name is specified after GENERATE_TAGFILE, doxygen will create >+# a tag file that is based on the input files it reads. >+ >+GENERATE_TAGFILE = >+ >+# If the ALLEXTERNALS tag is set to YES all external classes will be listed >+# in the class index. If set to NO only the inherited external classes >+# will be listed. >+ >+ALLEXTERNALS = NO >+ >+# If the EXTERNAL_GROUPS tag is set to YES all external groups will be listed >+# in the modules index. 
If set to NO, only the current project's groups will
>+# be listed.
>+
>+EXTERNAL_GROUPS = YES
>+
>+# The PERL_PATH should be the absolute path and name of the perl script
>+# interpreter (i.e. the result of `which perl').
>+
>+PERL_PATH = /usr/bin/perl
>+
>+#---------------------------------------------------------------------------
>+# Configuration options related to the dot tool
>+#---------------------------------------------------------------------------
>+
>+# If the CLASS_DIAGRAMS tag is set to YES (the default) Doxygen will
>+# generate an inheritance diagram (in HTML, RTF and LaTeX) for classes with base or
>+# super classes. Setting the tag to NO turns the diagrams off. Note that this
>+# option is superseded by the HAVE_DOT option below. This is only a fallback. It is
>+# recommended to install and use dot, since it yields more powerful graphs.
>+
>+CLASS_DIAGRAMS = YES
>+
>+# If set to YES, the inheritance and collaboration graphs will hide
>+# inheritance and usage relations if the target is undocumented
>+# or is not a class.
>+
>+HIDE_UNDOC_RELATIONS = YES
>+
>+# If you set the HAVE_DOT tag to YES then doxygen will assume the dot tool is
>+# available from the path. This tool is part of Graphviz, a graph visualization
>+# toolkit from AT&T and Lucent Bell Labs. The other options in this section
>+# have no effect if this option is set to NO (the default)
>+
>+HAVE_DOT = NO
>+
>+# If the CLASS_GRAPH and HAVE_DOT tags are set to YES then doxygen
>+# will generate a graph for each documented class showing the direct and
>+# indirect inheritance relations. Setting this tag to YES will force
>+# the CLASS_DIAGRAMS tag to NO.
>+
>+CLASS_GRAPH = YES
>+
>+# If the COLLABORATION_GRAPH and HAVE_DOT tags are set to YES then doxygen
>+# will generate a graph for each documented class showing the direct and
>+# indirect implementation dependencies (inheritance, containment, and
>+# class references variables) of the class with other documented classes.
>+
>+COLLABORATION_GRAPH = YES
>+
>+# If the UML_LOOK tag is set to YES doxygen will generate inheritance and
>+# collaboration diagrams in a style similar to the OMG's Unified Modeling
>+# Language.
>+
>+UML_LOOK = NO
>+
>+# If set to YES, the inheritance and collaboration graphs will show the
>+# relations between templates and their instances.
>+
>+TEMPLATE_RELATIONS = YES
>+
>+# If the ENABLE_PREPROCESSING, SEARCH_INCLUDES, INCLUDE_GRAPH, and HAVE_DOT
>+# tags are set to YES then doxygen will generate a graph for each documented
>+# file showing the direct and indirect include dependencies of the file with
>+# other documented files.
>+
>+INCLUDE_GRAPH = YES
>+
>+# If the ENABLE_PREPROCESSING, SEARCH_INCLUDES, INCLUDED_BY_GRAPH, and
>+# HAVE_DOT tags are set to YES then doxygen will generate a graph for each
>+# documented header file showing the documented files that directly or
>+# indirectly include this file.
>+
>+INCLUDED_BY_GRAPH = YES
>+
>+# If the CALL_GRAPH and HAVE_DOT tags are set to YES then doxygen will
>+# generate a call dependency graph for every global function or class method.
>+# Note that enabling this option will significantly increase the time of a run.
>+# So in most cases it will be better to enable call graphs for selected
>+# functions only using the \callgraph command.
>+
>+CALL_GRAPH = NO
>+
>+# If the GRAPHICAL_HIERARCHY and HAVE_DOT tags are set to YES then doxygen
>+# will show a graphical hierarchy of all classes instead of a textual one.
>+
>+GRAPHICAL_HIERARCHY = YES
>+
>+# The DOT_IMAGE_FORMAT tag can be used to set the image format of the images
>+# generated by dot. Possible values are png, jpg, or gif.
>+# If left blank png will be used.
>+
>+DOT_IMAGE_FORMAT = png
>+
>+# The tag DOT_PATH can be used to specify the path where the dot tool can be
>+# found. If left blank, it is assumed the dot tool can be found on the path.
>+
>+DOT_PATH =
>+
>+# The DOTFILE_DIRS tag can be used to specify one or more directories that
>+# contain dot files that are included in the documentation (see the
>+# \dotfile command).
>+
>+DOTFILE_DIRS =
>+
>+# The MAX_DOT_GRAPH_WIDTH tag can be used to set the maximum allowed width
>+# (in pixels) of the graphs generated by dot. If a graph becomes larger than
>+# this value, doxygen will try to truncate the graph, so that it fits within
>+# the specified constraint. Beware that most browsers cannot cope with very
>+# large images.
>+
>+MAX_DOT_GRAPH_WIDTH = 1024
>+
>+# The MAX_DOT_GRAPH_HEIGHT tag can be used to set the maximum allowed height
>+# (in pixels) of the graphs generated by dot. If a graph becomes larger than
>+# this value, doxygen will try to truncate the graph, so that it fits within
>+# the specified constraint. Beware that most browsers cannot cope with very
>+# large images.
>+
>+MAX_DOT_GRAPH_HEIGHT = 1024
>+
>+# The MAX_DOT_GRAPH_DEPTH tag can be used to set the maximum depth of the
>+# graphs generated by dot. A depth value of 3 means that only nodes reachable
>+# from the root by following a path via at most 3 edges will be shown. Nodes that
>+# lie further from the root node will be omitted. Note that setting this option to
>+# 1 or 2 may greatly reduce the computation time needed for large code bases. Also
>+# note that a graph may be further truncated if the graph's image dimensions are
>+# not sufficient to fit the graph (see MAX_DOT_GRAPH_WIDTH and MAX_DOT_GRAPH_HEIGHT).
>+# If 0 is used for the depth value (the default), the graph is not depth-constrained.
>+
>+MAX_DOT_GRAPH_DEPTH = 0
>+
>+# If the GENERATE_LEGEND tag is set to YES (the default) Doxygen will
>+# generate a legend page explaining the meaning of the various boxes and
>+# arrows in the dot generated graphs.
>+ >+GENERATE_LEGEND = YES >+ >+# If the DOT_CLEANUP tag is set to YES (the default) Doxygen will >+# remove the intermediate dot files that are used to generate >+# the various graphs. >+ >+DOT_CLEANUP = YES >+ >+#--------------------------------------------------------------------------- >+# Configuration::additions related to the search engine >+#--------------------------------------------------------------------------- >+ >+# The SEARCHENGINE tag specifies whether or not a search engine should be >+# used. If set to NO the values of all tags below this one will be ignored. >+ >+SEARCHENGINE = NO >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_agpsupport.c linux-2.6.23.i686/drivers/char/drm/drm_agpsupport.c >--- linux-2.6.23.i686.orig/drivers/char/drm/drm_agpsupport.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/drm_agpsupport.c 2008-01-06 09:24:57.000000000 +0100 >@@ -68,7 +68,6 @@ int drm_agp_info(struct drm_device *dev, > > return 0; > } >- > EXPORT_SYMBOL(drm_agp_info); > > int drm_agp_info_ioctl(struct drm_device *dev, void *data, >@@ -95,16 +94,25 @@ int drm_agp_info_ioctl(struct drm_device > */ > int drm_agp_acquire(struct drm_device * dev) > { >+#if LINUX_VERSION_CODE <= KERNEL_VERSION(2,6,11) >+ int retcode; >+#endif >+ > if (!dev->agp) > return -ENODEV; > if (dev->agp->acquired) > return -EBUSY; >+#if LINUX_VERSION_CODE <= KERNEL_VERSION(2,6,11) >+ if ((retcode = agp_backend_acquire())) >+ return retcode; >+#else > if (!(dev->agp->bridge = agp_backend_acquire(dev->pdev))) > return -ENODEV; >+#endif >+ > dev->agp->acquired = 1; > return 0; > } >- > EXPORT_SYMBOL(drm_agp_acquire); > > /** >@@ -133,13 +141,18 @@ int drm_agp_acquire_ioctl(struct drm_dev > * > * Verifies the AGP device has been acquired and calls \c agp_backend_release. 
> */ >-int drm_agp_release(struct drm_device * dev) >+int drm_agp_release(struct drm_device *dev) > { > if (!dev->agp || !dev->agp->acquired) > return -EINVAL; >+#if LINUX_VERSION_CODE <= KERNEL_VERSION(2,6,11) >+ agp_backend_release(); >+#else > agp_backend_release(dev->agp->bridge); >+#endif > dev->agp->acquired = 0; > return 0; >+ > } > EXPORT_SYMBOL(drm_agp_release); > >@@ -159,18 +172,20 @@ int drm_agp_release_ioctl(struct drm_dev > * Verifies the AGP device has been acquired but not enabled, and calls > * \c agp_enable. > */ >-int drm_agp_enable(struct drm_device * dev, struct drm_agp_mode mode) >+int drm_agp_enable(struct drm_device *dev, struct drm_agp_mode mode) > { > if (!dev->agp || !dev->agp->acquired) > return -EINVAL; > > dev->agp->mode = mode.mode; >+#if LINUX_VERSION_CODE <= KERNEL_VERSION(2,6,11) >+ agp_enable(mode.mode); >+#else > agp_enable(dev->agp->bridge, mode.mode); >- dev->agp->base = dev->agp->agp_info.aper_base; >+#endif > dev->agp->enabled = 1; > return 0; > } >- > EXPORT_SYMBOL(drm_agp_enable); > > int drm_agp_enable_ioctl(struct drm_device *dev, void *data, >@@ -296,6 +311,7 @@ int drm_agp_unbind_ioctl(struct drm_devi > return drm_agp_unbind(dev, request); > } > >+ > /** > * Bind AGP memory into the GATT (ioctl) > * >@@ -340,6 +356,7 @@ int drm_agp_bind_ioctl(struct drm_device > return drm_agp_bind(dev, request); > } > >+ > /** > * Free AGP memory (ioctl). > * >@@ -383,6 +400,7 @@ int drm_agp_free_ioctl(struct drm_device > return drm_agp_free(dev, request); > } > >+ > /** > * Initialize the AGP resources. 
> * >@@ -399,6 +417,10 @@ struct drm_agp_head *drm_agp_init(struct > if (!(head = drm_alloc(sizeof(*head), DRM_MEM_AGPLISTS))) > return NULL; > memset((void *)head, 0, sizeof(*head)); >+ >+#if LINUX_VERSION_CODE <= KERNEL_VERSION(2,6,11) >+ agp_copy_info(&head->agp_info); >+#else > head->bridge = agp_find_bridge(dev->pdev); > if (!head->bridge) { > if (!(head->bridge = agp_backend_acquire(dev->pdev))) { >@@ -410,6 +432,7 @@ struct drm_agp_head *drm_agp_init(struct > } else { > agp_copy_info(head->bridge, &head->agp_info); > } >+#endif > if (head->agp_info.chipset == NOT_SUPPORTED) { > drm_free(head, sizeof(*head), DRM_MEM_AGPLISTS); > return NULL; >@@ -417,16 +440,23 @@ struct drm_agp_head *drm_agp_init(struct > INIT_LIST_HEAD(&head->memory); > head->cant_use_aperture = head->agp_info.cant_use_aperture; > head->page_mask = head->agp_info.page_mask; >- >+ head->base = head->agp_info.aper_base; > return head; > } > > /** Calls agp_allocate_memory() */ >-DRM_AGP_MEM *drm_agp_allocate_memory(struct agp_bridge_data * bridge, >+#if LINUX_VERSION_CODE <= KERNEL_VERSION(2,6,11) >+DRM_AGP_MEM *drm_agp_allocate_memory(size_t pages, u32 type) >+{ >+ return agp_allocate_memory(pages, type); >+} >+#else >+DRM_AGP_MEM *drm_agp_allocate_memory(struct agp_bridge_data *bridge, > size_t pages, u32 type) > { > return agp_allocate_memory(bridge, pages, type); > } >+#endif > > /** Calls agp_free_memory() */ > int drm_agp_free_memory(DRM_AGP_MEM * handle) >@@ -444,6 +474,7 @@ int drm_agp_bind_memory(DRM_AGP_MEM * ha > return -EINVAL; > return agp_bind_memory(handle, start); > } >+EXPORT_SYMBOL(drm_agp_bind_memory); > > /** Calls agp_unbind_memory() */ > int drm_agp_unbind_memory(DRM_AGP_MEM * handle) >@@ -453,4 +484,189 @@ int drm_agp_unbind_memory(DRM_AGP_MEM * > return agp_unbind_memory(handle); > } > >+ >+ >+/* >+ * AGP ttm backend interface. 
>+ */ >+ >+#ifndef AGP_USER_TYPES >+#define AGP_USER_TYPES (1 << 16) >+#define AGP_USER_MEMORY (AGP_USER_TYPES) >+#define AGP_USER_CACHED_MEMORY (AGP_USER_TYPES + 1) >+#endif >+#define AGP_REQUIRED_MAJOR 0 >+#define AGP_REQUIRED_MINOR 102 >+ >+static int drm_agp_needs_unbind_cache_adjust(struct drm_ttm_backend *backend) >+{ >+ return ((backend->flags & DRM_BE_FLAG_BOUND_CACHED) ? 0 : 1); >+} >+ >+ >+static int drm_agp_populate(struct drm_ttm_backend *backend, >+ unsigned long num_pages, struct page **pages, >+ struct page *dummy_read_page) >+{ >+ struct drm_agp_ttm_backend *agp_be = >+ container_of(backend, struct drm_agp_ttm_backend, backend); >+ struct page **cur_page, **last_page = pages + num_pages; >+ DRM_AGP_MEM *mem; >+ int dummy_page_count = 0; >+ >+ if (drm_alloc_memctl(num_pages * sizeof(void *))) >+ return -1; >+ >+ DRM_DEBUG("drm_agp_populate_ttm\n"); >+#if LINUX_VERSION_CODE <= KERNEL_VERSION(2,6,11) >+ mem = drm_agp_allocate_memory(num_pages, AGP_USER_MEMORY); >+#else >+ mem = drm_agp_allocate_memory(agp_be->bridge, num_pages, AGP_USER_MEMORY); >+#endif >+ if (!mem) { >+ drm_free_memctl(num_pages * sizeof(void *)); >+ return -1; >+ } >+ >+ DRM_DEBUG("Current page count is %ld\n", (long) mem->page_count); >+ mem->page_count = 0; >+ for (cur_page = pages; cur_page < last_page; ++cur_page) { >+ struct page *page = *cur_page; >+ if (!page) { >+ page = dummy_read_page; >+ ++dummy_page_count; >+ } >+ mem->memory[mem->page_count++] = phys_to_gart(page_to_phys(page)); >+ } >+ if (dummy_page_count) >+ DRM_DEBUG("Mapped %d dummy pages\n", dummy_page_count); >+ agp_be->mem = mem; >+ return 0; >+} >+ >+static int drm_agp_bind_ttm(struct drm_ttm_backend *backend, >+ struct drm_bo_mem_reg *bo_mem) >+{ >+ struct drm_agp_ttm_backend *agp_be = >+ container_of(backend, struct drm_agp_ttm_backend, backend); >+ DRM_AGP_MEM *mem = agp_be->mem; >+ int ret; >+ int snooped = (bo_mem->flags & DRM_BO_FLAG_CACHED) && !(bo_mem->flags & DRM_BO_FLAG_CACHED_MAPPED); >+ >+ 
DRM_DEBUG("drm_agp_bind_ttm\n"); >+ mem->is_flushed = TRUE; >+ mem->type = AGP_USER_MEMORY; >+ /* CACHED MAPPED implies not snooped memory */ >+ if (snooped) >+ mem->type = AGP_USER_CACHED_MEMORY; >+ >+ ret = drm_agp_bind_memory(mem, bo_mem->mm_node->start); >+ if (ret) >+ DRM_ERROR("AGP Bind memory failed\n"); >+ >+ DRM_FLAG_MASKED(backend->flags, (bo_mem->flags & DRM_BO_FLAG_CACHED) ? >+ DRM_BE_FLAG_BOUND_CACHED : 0, >+ DRM_BE_FLAG_BOUND_CACHED); >+ return ret; >+} >+ >+static int drm_agp_unbind_ttm(struct drm_ttm_backend *backend) >+{ >+ struct drm_agp_ttm_backend *agp_be = >+ container_of(backend, struct drm_agp_ttm_backend, backend); >+ >+ DRM_DEBUG("drm_agp_unbind_ttm\n"); >+ if (agp_be->mem->is_bound) >+ return drm_agp_unbind_memory(agp_be->mem); >+ else >+ return 0; >+} >+ >+static void drm_agp_clear_ttm(struct drm_ttm_backend *backend) >+{ >+ struct drm_agp_ttm_backend *agp_be = >+ container_of(backend, struct drm_agp_ttm_backend, backend); >+ DRM_AGP_MEM *mem = agp_be->mem; >+ >+ DRM_DEBUG("drm_agp_clear_ttm\n"); >+ if (mem) { >+ unsigned long num_pages = mem->page_count; >+ backend->func->unbind(backend); >+ agp_free_memory(mem); >+ drm_free_memctl(num_pages * sizeof(void *)); >+ } >+ agp_be->mem = NULL; >+} >+ >+static void drm_agp_destroy_ttm(struct drm_ttm_backend *backend) >+{ >+ struct drm_agp_ttm_backend *agp_be; >+ >+ if (backend) { >+ DRM_DEBUG("drm_agp_destroy_ttm\n"); >+ agp_be = container_of(backend, struct drm_agp_ttm_backend, backend); >+ if (agp_be) { >+ if (agp_be->mem) >+ backend->func->clear(backend); >+ drm_ctl_free(agp_be, sizeof(*agp_be), DRM_MEM_TTM); >+ } >+ } >+} >+ >+static struct drm_ttm_backend_func agp_ttm_backend = { >+ .needs_ub_cache_adjust = drm_agp_needs_unbind_cache_adjust, >+ .populate = drm_agp_populate, >+ .clear = drm_agp_clear_ttm, >+ .bind = drm_agp_bind_ttm, >+ .unbind = drm_agp_unbind_ttm, >+ .destroy = drm_agp_destroy_ttm, >+}; >+ >+struct drm_ttm_backend *drm_agp_init_ttm(struct drm_device *dev) >+{ >+ >+ struct 
drm_agp_ttm_backend *agp_be;
>+ struct agp_kern_info *info;
>+
>+ if (!dev->agp) {
>+ DRM_ERROR("AGP is not initialized.\n");
>+ return NULL;
>+ }
>+ info = &dev->agp->agp_info;
>+
>+ if (info->version.major != AGP_REQUIRED_MAJOR ||
>+ info->version.minor < AGP_REQUIRED_MINOR) {
>+ DRM_ERROR("Wrong agpgart version %d.%d\n"
>+ "\tYou need at least version %d.%d.\n",
>+ info->version.major,
>+ info->version.minor,
>+ AGP_REQUIRED_MAJOR,
>+ AGP_REQUIRED_MINOR);
>+ return NULL;
>+ }
>+
>+
>+ agp_be = drm_ctl_calloc(1, sizeof(*agp_be), DRM_MEM_TTM);
>+ if (!agp_be)
>+ return NULL;
>+
>+ agp_be->mem = NULL;
>+
>+ agp_be->bridge = dev->agp->bridge;
>+ agp_be->populated = FALSE;
>+ agp_be->backend.func = &agp_ttm_backend;
>+ agp_be->backend.dev = dev;
>+
>+ return &agp_be->backend;
>+}
>+EXPORT_SYMBOL(drm_agp_init_ttm);
>+
>+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,25)
>+void drm_agp_chipset_flush(struct drm_device *dev)
>+{
>+ agp_flush_chipset(dev->agp->bridge);
>+}
>+EXPORT_SYMBOL(drm_agp_chipset_flush);
>+#endif
>+
> #endif /* __OS_HAS_AGP */
>diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_auth.c linux-2.6.23.i686/drivers/char/drm/drm_auth.c
>--- linux-2.6.23.i686.orig/drivers/char/drm/drm_auth.c 2008-01-06 18:54:41.000000000 +0100
>+++ linux-2.6.23.i686/drivers/char/drm/drm_auth.c 2008-01-06 09:24:57.000000000 +0100
>@@ -83,7 +83,6 @@ static int drm_add_magic(struct drm_devi
> return -ENOMEM;
> memset(entry, 0, sizeof(*entry));
> entry->priv = priv;
>-
> entry->hash_item.key = (unsigned long)magic;
> mutex_lock(&dev->struct_mutex);
> drm_ht_insert_item(&dev->magiclist, &entry->hash_item);
>diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_bo.c linux-2.6.23.i686/drivers/char/drm/drm_bo.c
>--- linux-2.6.23.i686.orig/drivers/char/drm/drm_bo.c 1970-01-01 01:00:00.000000000 +0100
>+++ linux-2.6.23.i686/drivers/char/drm/drm_bo.c 2008-01-06 09:24:57.000000000 +0100
>@@ -0,0 +1,2724 @@
>+/************************************************************************** >+ * >+ * Copyright (c) 2006-2007 Tungsten Graphics, Inc., Cedar Park, TX., USA >+ * All Rights Reserved. >+ * >+ * Permission is hereby granted, free of charge, to any person obtaining a >+ * copy of this software and associated documentation files (the >+ * "Software"), to deal in the Software without restriction, including >+ * without limitation the rights to use, copy, modify, merge, publish, >+ * distribute, sub license, and/or sell copies of the Software, and to >+ * permit persons to whom the Software is furnished to do so, subject to >+ * the following conditions: >+ * >+ * The above copyright notice and this permission notice (including the >+ * next paragraph) shall be included in all copies or substantial portions >+ * of the Software. >+ * >+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR >+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, >+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL >+ * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, >+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR >+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE >+ * USE OR OTHER DEALINGS IN THE SOFTWARE. >+ * >+ **************************************************************************/ >+/* >+ * Authors: Thomas Hellström <thomas-at-tungstengraphics-dot-com> >+ */ >+ >+#include "drmP.h" >+ >+/* >+ * Locking may look a bit complicated but isn't really: >+ * >+ * The buffer usage atomic_t needs to be protected by dev->struct_mutex >+ * when there is a chance that it can be zero before or after the operation. >+ * >+ * dev->struct_mutex also protects all lists and list heads, >+ * Hash tables and hash heads. >+ * >+ * bo->mutex protects the buffer object itself excluding the usage field. 
>+ * bo->mutex does also protect the buffer list heads, so to manipulate those, >+ * we need both the bo->mutex and the dev->struct_mutex. >+ * >+ * Locking order is bo->mutex, dev->struct_mutex. Therefore list traversal >+ * is a bit complicated. When dev->struct_mutex is released to grab bo->mutex, >+ * the list traversal will, in general, need to be restarted. >+ * >+ */ >+ >+static void drm_bo_destroy_locked(struct drm_buffer_object *bo); >+static int drm_bo_setup_vm_locked(struct drm_buffer_object *bo); >+static void drm_bo_takedown_vm_locked(struct drm_buffer_object *bo); >+static void drm_bo_unmap_virtual(struct drm_buffer_object *bo); >+ >+static inline uint64_t drm_bo_type_flags(unsigned type) >+{ >+ return (1ULL << (24 + type)); >+} >+ >+/* >+ * bo locked. dev->struct_mutex locked. >+ */ >+ >+void drm_bo_add_to_pinned_lru(struct drm_buffer_object *bo) >+{ >+ struct drm_mem_type_manager *man; >+ >+ DRM_ASSERT_LOCKED(&bo->dev->struct_mutex); >+ DRM_ASSERT_LOCKED(&bo->mutex); >+ >+ man = &bo->dev->bm.man[bo->pinned_mem_type]; >+ list_add_tail(&bo->pinned_lru, &man->pinned); >+} >+ >+void drm_bo_add_to_lru(struct drm_buffer_object *bo) >+{ >+ struct drm_mem_type_manager *man; >+ >+ DRM_ASSERT_LOCKED(&bo->dev->struct_mutex); >+ >+ if (!(bo->mem.proposed_flags & (DRM_BO_FLAG_NO_MOVE | DRM_BO_FLAG_NO_EVICT)) >+ || bo->mem.mem_type != bo->pinned_mem_type) { >+ man = &bo->dev->bm.man[bo->mem.mem_type]; >+ list_add_tail(&bo->lru, &man->lru); >+ } else { >+ INIT_LIST_HEAD(&bo->lru); >+ } >+} >+ >+static int drm_bo_vm_pre_move(struct drm_buffer_object *bo, int old_is_pci) >+{ >+#ifdef DRM_ODD_MM_COMPAT >+ int ret; >+ >+ if (!bo->map_list.map) >+ return 0; >+ >+ ret = drm_bo_lock_kmm(bo); >+ if (ret) >+ return ret; >+ drm_bo_unmap_virtual(bo); >+ if (old_is_pci) >+ drm_bo_finish_unmap(bo); >+#else >+ if (!bo->map_list.map) >+ return 0; >+ >+ drm_bo_unmap_virtual(bo); >+#endif >+ return 0; >+} >+ >+static void drm_bo_vm_post_move(struct drm_buffer_object *bo) >+{ 
>+#ifdef DRM_ODD_MM_COMPAT >+ int ret; >+ >+ if (!bo->map_list.map) >+ return; >+ >+ ret = drm_bo_remap_bound(bo); >+ if (ret) { >+ DRM_ERROR("Failed to remap a bound buffer object.\n" >+ "\tThis might cause a sigbus later.\n"); >+ } >+ drm_bo_unlock_kmm(bo); >+#endif >+} >+ >+/* >+ * Call bo->mutex locked. >+ */ >+ >+static int drm_bo_add_ttm(struct drm_buffer_object *bo) >+{ >+ struct drm_device *dev = bo->dev; >+ int ret = 0; >+ uint32_t page_flags = 0; >+ >+ DRM_ASSERT_LOCKED(&bo->mutex); >+ bo->ttm = NULL; >+ >+ if (bo->mem.proposed_flags & DRM_BO_FLAG_WRITE) >+ page_flags |= DRM_TTM_PAGE_WRITE; >+ >+ switch (bo->type) { >+ case drm_bo_type_device: >+ case drm_bo_type_kernel: >+ bo->ttm = drm_ttm_create(dev, bo->num_pages << PAGE_SHIFT, >+ page_flags, dev->bm.dummy_read_page); >+ if (!bo->ttm) >+ ret = -ENOMEM; >+ break; >+ case drm_bo_type_user: >+ bo->ttm = drm_ttm_create(dev, bo->num_pages << PAGE_SHIFT, >+ page_flags | DRM_TTM_PAGE_USER, >+ dev->bm.dummy_read_page); >+ if (!bo->ttm) { >+ /* Bail out before drm_ttm_set_user dereferences a NULL ttm */ >+ ret = -ENOMEM; >+ break; >+ } >+ >+ ret = drm_ttm_set_user(bo->ttm, current, >+ bo->buffer_start, >+ bo->num_pages); >+ if (ret) >+ return ret; >+ >+ break; >+ default: >+ DRM_ERROR("Illegal buffer object type\n"); >+ ret = -EINVAL; >+ break; >+ } >+ >+ return ret; >+} >+ >+static int drm_bo_handle_move_mem(struct drm_buffer_object *bo, >+ struct drm_bo_mem_reg *mem, >+ int evict, int no_wait) >+{ >+ struct drm_device *dev = bo->dev; >+ struct drm_buffer_manager *bm = &dev->bm; >+ int old_is_pci = drm_mem_reg_is_pci(dev, &bo->mem); >+ int new_is_pci = drm_mem_reg_is_pci(dev, mem); >+ struct drm_mem_type_manager *old_man = &bm->man[bo->mem.mem_type]; >+ struct drm_mem_type_manager *new_man = &bm->man[mem->mem_type]; >+ int ret = 0; >+ >+ if (old_is_pci || new_is_pci || >+ ((mem->flags ^ bo->mem.flags) & DRM_BO_FLAG_CACHED)) >+ ret = drm_bo_vm_pre_move(bo, old_is_pci); >+ if (ret) >+ return ret; >+ >+ /* >+ * Create and bind a ttm if required. 
>+ */ >+ >+ if (!(new_man->flags & _DRM_FLAG_MEMTYPE_FIXED) && (bo->ttm == NULL)) { >+ ret = drm_bo_add_ttm(bo); >+ if (ret) >+ goto out_err; >+ >+ if (mem->mem_type != DRM_BO_MEM_LOCAL) { >+ ret = drm_ttm_bind(bo->ttm, mem); >+ if (ret) >+ goto out_err; >+ } >+ } >+ >+ if ((bo->mem.mem_type == DRM_BO_MEM_LOCAL) && bo->ttm == NULL) { >+ >+ struct drm_bo_mem_reg *old_mem = &bo->mem; >+ uint64_t save_flags = old_mem->flags; >+ uint64_t save_proposed_flags = old_mem->proposed_flags; >+ >+ *old_mem = *mem; >+ mem->mm_node = NULL; >+ old_mem->proposed_flags = save_proposed_flags; >+ DRM_FLAG_MASKED(save_flags, mem->flags, DRM_BO_MASK_MEMTYPE); >+ >+ } else if (!(old_man->flags & _DRM_FLAG_MEMTYPE_FIXED) && >+ !(new_man->flags & _DRM_FLAG_MEMTYPE_FIXED)) { >+ >+ ret = drm_bo_move_ttm(bo, evict, no_wait, mem); >+ >+ } else if (dev->driver->bo_driver->move) { >+ ret = dev->driver->bo_driver->move(bo, evict, no_wait, mem); >+ >+ } else { >+ >+ ret = drm_bo_move_memcpy(bo, evict, no_wait, mem); >+ >+ } >+ >+ if (ret) >+ goto out_err; >+ >+ if (old_is_pci || new_is_pci) >+ drm_bo_vm_post_move(bo); >+ >+ if (bo->priv_flags & _DRM_BO_FLAG_EVICTED) { >+ ret = >+ dev->driver->bo_driver->invalidate_caches(dev, >+ bo->mem.flags); >+ if (ret) >+ DRM_ERROR("Can not flush read caches\n"); >+ } >+ >+ DRM_FLAG_MASKED(bo->priv_flags, >+ (evict) ? _DRM_BO_FLAG_EVICTED : 0, >+ _DRM_BO_FLAG_EVICTED); >+ >+ if (bo->mem.mm_node) >+ bo->offset = (bo->mem.mm_node->start << PAGE_SHIFT) + >+ bm->man[bo->mem.mem_type].gpu_offset; >+ >+ >+ return 0; >+ >+out_err: >+ if (old_is_pci || new_is_pci) >+ drm_bo_vm_post_move(bo); >+ >+ new_man = &bm->man[bo->mem.mem_type]; >+ if ((new_man->flags & _DRM_FLAG_MEMTYPE_FIXED) && bo->ttm) { >+ drm_ttm_unbind(bo->ttm); >+ drm_ttm_destroy(bo->ttm); >+ bo->ttm = NULL; >+ } >+ >+ return ret; >+} >+ >+/* >+ * Call bo->mutex locked. >+ * Wait until the buffer is idle. 
>+ */ >+ >+int drm_bo_wait(struct drm_buffer_object *bo, int lazy, int ignore_signals, >+ int no_wait) >+{ >+ int ret; >+ >+ DRM_ASSERT_LOCKED(&bo->mutex); >+ >+ if (bo->fence) { >+ if (drm_fence_object_signaled(bo->fence, bo->fence_type, 0)) { >+ drm_fence_usage_deref_unlocked(&bo->fence); >+ return 0; >+ } >+ if (no_wait) >+ return -EBUSY; >+ >+ ret = drm_fence_object_wait(bo->fence, lazy, ignore_signals, >+ bo->fence_type); >+ if (ret) >+ return ret; >+ >+ drm_fence_usage_deref_unlocked(&bo->fence); >+ } >+ return 0; >+} >+EXPORT_SYMBOL(drm_bo_wait); >+ >+static int drm_bo_expire_fence(struct drm_buffer_object *bo, int allow_errors) >+{ >+ struct drm_device *dev = bo->dev; >+ struct drm_buffer_manager *bm = &dev->bm; >+ >+ if (bo->fence) { >+ if (bm->nice_mode) { >+ unsigned long _end = jiffies + 3 * DRM_HZ; >+ int ret; >+ do { >+ ret = drm_bo_wait(bo, 0, 1, 0); >+ if (ret && allow_errors) >+ return ret; >+ >+ } while (ret && !time_after_eq(jiffies, _end)); >+ >+ if (bo->fence) { >+ bm->nice_mode = 0; >+ DRM_ERROR("Detected GPU lockup or " >+ "fence driver was taken down. " >+ "Evicting buffer.\n"); >+ } >+ } >+ if (bo->fence) >+ drm_fence_usage_deref_unlocked(&bo->fence); >+ } >+ return 0; >+} >+ >+/* >+ * Call dev->struct_mutex locked. >+ * Attempts to remove all private references to a buffer by expiring its >+ * fence object and removing from lru lists and memory managers. 
>+ */ >+ >+static void drm_bo_cleanup_refs(struct drm_buffer_object *bo, int remove_all) >+{ >+ struct drm_device *dev = bo->dev; >+ struct drm_buffer_manager *bm = &dev->bm; >+ >+ DRM_ASSERT_LOCKED(&dev->struct_mutex); >+ >+ atomic_inc(&bo->usage); >+ mutex_unlock(&dev->struct_mutex); >+ mutex_lock(&bo->mutex); >+ >+ DRM_FLAG_MASKED(bo->priv_flags, 0, _DRM_BO_FLAG_UNFENCED); >+ >+ if (bo->fence && drm_fence_object_signaled(bo->fence, >+ bo->fence_type, 0)) >+ drm_fence_usage_deref_unlocked(&bo->fence); >+ >+ if (bo->fence && remove_all) >+ (void)drm_bo_expire_fence(bo, 0); >+ >+ mutex_lock(&dev->struct_mutex); >+ >+ if (!atomic_dec_and_test(&bo->usage)) >+ goto out; >+ >+ if (!bo->fence) { >+ list_del_init(&bo->lru); >+ if (bo->mem.mm_node) { >+ drm_mm_put_block(bo->mem.mm_node); >+ if (bo->pinned_node == bo->mem.mm_node) >+ bo->pinned_node = NULL; >+ bo->mem.mm_node = NULL; >+ } >+ list_del_init(&bo->pinned_lru); >+ if (bo->pinned_node) { >+ drm_mm_put_block(bo->pinned_node); >+ bo->pinned_node = NULL; >+ } >+ list_del_init(&bo->ddestroy); >+ mutex_unlock(&bo->mutex); >+ drm_bo_destroy_locked(bo); >+ return; >+ } >+ >+ if (list_empty(&bo->ddestroy)) { >+ drm_fence_object_flush(bo->fence, bo->fence_type); >+ list_add_tail(&bo->ddestroy, &bm->ddestroy); >+ schedule_delayed_work(&bm->wq, >+ ((DRM_HZ / 100) < 1) ? 1 : DRM_HZ / 100); >+ } >+ >+out: >+ mutex_unlock(&bo->mutex); >+ return; >+} >+ >+/* >+ * Verify that refcount is 0 and that there are no internal references >+ * to the buffer object. Then destroy it. 
>+ */ >+ >+static void drm_bo_destroy_locked(struct drm_buffer_object *bo) >+{ >+ struct drm_device *dev = bo->dev; >+ struct drm_buffer_manager *bm = &dev->bm; >+ >+ DRM_ASSERT_LOCKED(&dev->struct_mutex); >+ >+ if (list_empty(&bo->lru) && bo->mem.mm_node == NULL && >+ list_empty(&bo->pinned_lru) && bo->pinned_node == NULL && >+ list_empty(&bo->ddestroy) && atomic_read(&bo->usage) == 0) { >+ if (bo->fence != NULL) { >+ DRM_ERROR("Fence was non-zero.\n"); >+ drm_bo_cleanup_refs(bo, 0); >+ return; >+ } >+ >+#ifdef DRM_ODD_MM_COMPAT >+ BUG_ON(!list_empty(&bo->vma_list)); >+ BUG_ON(!list_empty(&bo->p_mm_list)); >+#endif >+ >+ if (bo->ttm) { >+ drm_ttm_unbind(bo->ttm); >+ drm_ttm_destroy(bo->ttm); >+ bo->ttm = NULL; >+ } >+ >+ atomic_dec(&bm->count); >+ >+ drm_ctl_free(bo, sizeof(*bo), DRM_MEM_BUFOBJ); >+ >+ return; >+ } >+ >+ /* >+ * Some stuff is still trying to reference the buffer object. >+ * Get rid of those references. >+ */ >+ >+ drm_bo_cleanup_refs(bo, 0); >+ >+ return; >+} >+ >+/* >+ * Call dev->struct_mutex locked. 
>+ */ >+ >+static void drm_bo_delayed_delete(struct drm_device *dev, int remove_all) >+{ >+ struct drm_buffer_manager *bm = &dev->bm; >+ >+ struct drm_buffer_object *entry, *nentry; >+ struct list_head *list, *next; >+ >+ list_for_each_safe(list, next, &bm->ddestroy) { >+ entry = list_entry(list, struct drm_buffer_object, ddestroy); >+ >+ nentry = NULL; >+ if (next != &bm->ddestroy) { >+ nentry = list_entry(next, struct drm_buffer_object, >+ ddestroy); >+ atomic_inc(&nentry->usage); >+ } >+ >+ drm_bo_cleanup_refs(entry, remove_all); >+ >+ if (nentry) >+ atomic_dec(&nentry->usage); >+ } >+} >+ >+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,20) >+static void drm_bo_delayed_workqueue(void *data) >+#else >+static void drm_bo_delayed_workqueue(struct work_struct *work) >+#endif >+{ >+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,20) >+ struct drm_device *dev = (struct drm_device *) data; >+ struct drm_buffer_manager *bm = &dev->bm; >+#else >+ struct drm_buffer_manager *bm = >+ container_of(work, struct drm_buffer_manager, wq.work); >+ struct drm_device *dev = container_of(bm, struct drm_device, bm); >+#endif >+ >+ DRM_DEBUG("Delayed delete Worker\n"); >+ >+ mutex_lock(&dev->struct_mutex); >+ if (!bm->initialized) { >+ mutex_unlock(&dev->struct_mutex); >+ return; >+ } >+ drm_bo_delayed_delete(dev, 0); >+ if (bm->initialized && !list_empty(&bm->ddestroy)) { >+ schedule_delayed_work(&bm->wq, >+ ((DRM_HZ / 100) < 1) ? 
1 : DRM_HZ / 100); >+ } >+ mutex_unlock(&dev->struct_mutex); >+} >+ >+void drm_bo_usage_deref_locked(struct drm_buffer_object **bo) >+{ >+ struct drm_buffer_object *tmp_bo = *bo; >+ *bo = NULL; >+ >+ DRM_ASSERT_LOCKED(&tmp_bo->dev->struct_mutex); >+ >+ if (atomic_dec_and_test(&tmp_bo->usage)) >+ drm_bo_destroy_locked(tmp_bo); >+} >+EXPORT_SYMBOL(drm_bo_usage_deref_locked); >+ >+static void drm_bo_base_deref_locked(struct drm_file *file_priv, >+ struct drm_user_object *uo) >+{ >+ struct drm_buffer_object *bo = >+ drm_user_object_entry(uo, struct drm_buffer_object, base); >+ >+ DRM_ASSERT_LOCKED(&bo->dev->struct_mutex); >+ >+ drm_bo_takedown_vm_locked(bo); >+ drm_bo_usage_deref_locked(&bo); >+} >+ >+void drm_bo_usage_deref_unlocked(struct drm_buffer_object **bo) >+{ >+ struct drm_buffer_object *tmp_bo = *bo; >+ struct drm_device *dev = tmp_bo->dev; >+ >+ *bo = NULL; >+ if (atomic_dec_and_test(&tmp_bo->usage)) { >+ mutex_lock(&dev->struct_mutex); >+ if (atomic_read(&tmp_bo->usage) == 0) >+ drm_bo_destroy_locked(tmp_bo); >+ mutex_unlock(&dev->struct_mutex); >+ } >+} >+EXPORT_SYMBOL(drm_bo_usage_deref_unlocked); >+ >+void drm_putback_buffer_objects(struct drm_device *dev) >+{ >+ struct drm_buffer_manager *bm = &dev->bm; >+ struct list_head *list = &bm->unfenced; >+ struct drm_buffer_object *entry, *next; >+ >+ mutex_lock(&dev->struct_mutex); >+ list_for_each_entry_safe(entry, next, list, lru) { >+ atomic_inc(&entry->usage); >+ mutex_unlock(&dev->struct_mutex); >+ >+ mutex_lock(&entry->mutex); >+ BUG_ON(!(entry->priv_flags & _DRM_BO_FLAG_UNFENCED)); >+ mutex_lock(&dev->struct_mutex); >+ >+ list_del_init(&entry->lru); >+ DRM_FLAG_MASKED(entry->priv_flags, 0, _DRM_BO_FLAG_UNFENCED); >+ DRM_WAKEUP(&entry->event_queue); >+ >+ /* >+ * FIXME: Might want to put back on head of list >+ * instead of tail here. 
>+ */ >+ >+ drm_bo_add_to_lru(entry); >+ mutex_unlock(&entry->mutex); >+ drm_bo_usage_deref_locked(&entry); >+ } >+ mutex_unlock(&dev->struct_mutex); >+} >+EXPORT_SYMBOL(drm_putback_buffer_objects); >+ >+ >+/* >+ * Note. The caller has to register (if applicable) >+ * and deregister fence object usage. >+ */ >+ >+int drm_fence_buffer_objects(struct drm_device *dev, >+ struct list_head *list, >+ uint32_t fence_flags, >+ struct drm_fence_object *fence, >+ struct drm_fence_object **used_fence) >+{ >+ struct drm_buffer_manager *bm = &dev->bm; >+ struct drm_buffer_object *entry; >+ uint32_t fence_type = 0; >+ uint32_t fence_class = ~0; >+ int count = 0; >+ int ret = 0; >+ struct list_head *l; >+ >+ mutex_lock(&dev->struct_mutex); >+ >+ if (!list) >+ list = &bm->unfenced; >+ >+ if (fence) >+ fence_class = fence->fence_class; >+ >+ list_for_each_entry(entry, list, lru) { >+ BUG_ON(!(entry->priv_flags & _DRM_BO_FLAG_UNFENCED)); >+ fence_type |= entry->new_fence_type; >+ if (fence_class == ~0) >+ fence_class = entry->new_fence_class; >+ else if (entry->new_fence_class != fence_class) { >+ DRM_ERROR("Unmatching fence classes on unfenced list: " >+ "%d and %d.\n", >+ fence_class, >+ entry->new_fence_class); >+ ret = -EINVAL; >+ goto out; >+ } >+ count++; >+ } >+ >+ if (!count) { >+ ret = -EINVAL; >+ goto out; >+ } >+ >+ if (fence) { >+ if ((fence_type & fence->type) != fence_type || >+ (fence->fence_class != fence_class)) { >+ DRM_ERROR("Given fence doesn't match buffers " >+ "on unfenced list.\n"); >+ ret = -EINVAL; >+ goto out; >+ } >+ } else { >+ mutex_unlock(&dev->struct_mutex); >+ ret = drm_fence_object_create(dev, fence_class, fence_type, >+ fence_flags | DRM_FENCE_FLAG_EMIT, >+ &fence); >+ mutex_lock(&dev->struct_mutex); >+ if (ret) >+ goto out; >+ } >+ >+ count = 0; >+ l = list->next; >+ while (l != list) { >+ prefetch(l->next); >+ entry = list_entry(l, struct drm_buffer_object, lru); >+ atomic_inc(&entry->usage); >+ mutex_unlock(&dev->struct_mutex); >+ 
mutex_lock(&entry->mutex); >+ mutex_lock(&dev->struct_mutex); >+ list_del_init(l); >+ if (entry->priv_flags & _DRM_BO_FLAG_UNFENCED) { >+ count++; >+ if (entry->fence) >+ drm_fence_usage_deref_locked(&entry->fence); >+ entry->fence = drm_fence_reference_locked(fence); >+ entry->fence_class = entry->new_fence_class; >+ entry->fence_type = entry->new_fence_type; >+ DRM_FLAG_MASKED(entry->priv_flags, 0, >+ _DRM_BO_FLAG_UNFENCED); >+ DRM_WAKEUP(&entry->event_queue); >+ drm_bo_add_to_lru(entry); >+ } >+ mutex_unlock(&entry->mutex); >+ drm_bo_usage_deref_locked(&entry); >+ l = list->next; >+ } >+ DRM_DEBUG("Fenced %d buffers\n", count); >+out: >+ mutex_unlock(&dev->struct_mutex); >+ *used_fence = fence; >+ return ret; >+} >+EXPORT_SYMBOL(drm_fence_buffer_objects); >+ >+/* >+ * bo->mutex locked >+ */ >+ >+static int drm_bo_evict(struct drm_buffer_object *bo, unsigned mem_type, >+ int no_wait) >+{ >+ int ret = 0; >+ struct drm_device *dev = bo->dev; >+ struct drm_bo_mem_reg evict_mem; >+ >+ /* >+ * Someone might have modified the buffer before we took the >+ * buffer mutex. 
>+ */ >+ >+ if (bo->priv_flags & _DRM_BO_FLAG_UNFENCED) >+ goto out; >+ if (bo->mem.mem_type != mem_type) >+ goto out; >+ >+ ret = drm_bo_wait(bo, 0, 0, no_wait); >+ >+ if (ret && ret != -EAGAIN) { >+ DRM_ERROR("Failed to expire fence before " >+ "buffer eviction.\n"); >+ goto out; >+ } >+ >+ evict_mem = bo->mem; >+ evict_mem.mm_node = NULL; >+ evict_mem.proposed_flags = dev->driver->bo_driver->evict_flags(bo); >+ ret = drm_bo_mem_space(bo, &evict_mem, no_wait); >+ >+ if (ret) { >+ if (ret != -EAGAIN) >+ DRM_ERROR("Failed to find memory space for " >+ "buffer 0x%p eviction.\n", bo); >+ goto out; >+ } >+ >+ ret = drm_bo_handle_move_mem(bo, &evict_mem, 1, no_wait); >+ >+ if (ret) { >+ if (ret != -EAGAIN) >+ DRM_ERROR("Buffer eviction failed\n"); >+ goto out; >+ } >+ >+ mutex_lock(&dev->struct_mutex); >+ if (evict_mem.mm_node) { >+ if (evict_mem.mm_node != bo->pinned_node) >+ drm_mm_put_block(evict_mem.mm_node); >+ evict_mem.mm_node = NULL; >+ } >+ list_del(&bo->lru); >+ drm_bo_add_to_lru(bo); >+ mutex_unlock(&dev->struct_mutex); >+ >+ DRM_FLAG_MASKED(bo->priv_flags, _DRM_BO_FLAG_EVICTED, >+ _DRM_BO_FLAG_EVICTED); >+ >+out: >+ return ret; >+} >+ >+/** >+ * Repeatedly evict memory from the LRU for @mem_type until we create enough >+ * space, or we've evicted everything and there isn't enough space. 
>+ */ >+static int drm_bo_mem_force_space(struct drm_device *dev, >+ struct drm_bo_mem_reg *mem, >+ uint32_t mem_type, int no_wait) >+{ >+ struct drm_mm_node *node; >+ struct drm_buffer_manager *bm = &dev->bm; >+ struct drm_buffer_object *entry; >+ struct drm_mem_type_manager *man = &bm->man[mem_type]; >+ struct list_head *lru; >+ unsigned long num_pages = mem->num_pages; >+ int ret; >+ >+ mutex_lock(&dev->struct_mutex); >+ do { >+ node = drm_mm_search_free(&man->manager, num_pages, >+ mem->page_alignment, 1); >+ if (node) >+ break; >+ >+ lru = &man->lru; >+ if (lru->next == lru) >+ break; >+ >+ entry = list_entry(lru->next, struct drm_buffer_object, lru); >+ atomic_inc(&entry->usage); >+ mutex_unlock(&dev->struct_mutex); >+ mutex_lock(&entry->mutex); >+ BUG_ON(entry->mem.flags & (DRM_BO_FLAG_NO_MOVE | DRM_BO_FLAG_NO_EVICT)); >+ >+ ret = drm_bo_evict(entry, mem_type, no_wait); >+ mutex_unlock(&entry->mutex); >+ drm_bo_usage_deref_unlocked(&entry); >+ if (ret) >+ return ret; >+ mutex_lock(&dev->struct_mutex); >+ } while (1); >+ >+ if (!node) { >+ mutex_unlock(&dev->struct_mutex); >+ return -ENOMEM; >+ } >+ >+ node = drm_mm_get_block(node, num_pages, mem->page_alignment); >+ mutex_unlock(&dev->struct_mutex); >+ mem->mm_node = node; >+ mem->mem_type = mem_type; >+ return 0; >+} >+ >+static int drm_bo_mt_compatible(struct drm_mem_type_manager *man, >+ int disallow_fixed, >+ uint32_t mem_type, >+ uint64_t mask, uint32_t *res_mask) >+{ >+ uint64_t cur_flags = drm_bo_type_flags(mem_type); >+ uint64_t flag_diff; >+ >+ if ((man->flags & _DRM_FLAG_MEMTYPE_FIXED) && disallow_fixed) >+ return 0; >+ if (man->flags & _DRM_FLAG_MEMTYPE_CACHED) >+ cur_flags |= DRM_BO_FLAG_CACHED; >+ if (man->flags & _DRM_FLAG_MEMTYPE_MAPPABLE) >+ cur_flags |= DRM_BO_FLAG_MAPPABLE; >+ if (man->flags & _DRM_FLAG_MEMTYPE_CSELECT) >+ DRM_FLAG_MASKED(cur_flags, mask, DRM_BO_FLAG_CACHED); >+ >+ if ((cur_flags & mask & DRM_BO_MASK_MEM) == 0) >+ return 0; >+ >+ if (mem_type == DRM_BO_MEM_LOCAL) { >+ 
*res_mask = cur_flags; >+ return 1; >+ } >+ >+ flag_diff = (mask ^ cur_flags); >+ if (flag_diff & DRM_BO_FLAG_CACHED_MAPPED) >+ cur_flags |= DRM_BO_FLAG_CACHED_MAPPED; >+ >+ if ((flag_diff & DRM_BO_FLAG_CACHED) && >+ (!(mask & DRM_BO_FLAG_CACHED) || >+ (mask & DRM_BO_FLAG_FORCE_CACHING))) >+ return 0; >+ >+ if ((flag_diff & DRM_BO_FLAG_MAPPABLE) && >+ ((mask & DRM_BO_FLAG_MAPPABLE) || >+ (mask & DRM_BO_FLAG_FORCE_MAPPABLE))) >+ return 0; >+ >+ *res_mask = cur_flags; >+ return 1; >+} >+ >+/** >+ * Creates space for memory region @mem according to its type. >+ * >+ * This function first searches for free space in compatible memory types in >+ * the priority order defined by the driver. If free space isn't found, then >+ * drm_bo_mem_force_space is attempted in priority order to evict and find >+ * space. >+ */ >+int drm_bo_mem_space(struct drm_buffer_object *bo, >+ struct drm_bo_mem_reg *mem, int no_wait) >+{ >+ struct drm_device *dev = bo->dev; >+ struct drm_buffer_manager *bm = &dev->bm; >+ struct drm_mem_type_manager *man; >+ >+ uint32_t num_prios = dev->driver->bo_driver->num_mem_type_prio; >+ const uint32_t *prios = dev->driver->bo_driver->mem_type_prio; >+ uint32_t i; >+ uint32_t mem_type = DRM_BO_MEM_LOCAL; >+ uint32_t cur_flags; >+ int type_found = 0; >+ int type_ok = 0; >+ int has_eagain = 0; >+ struct drm_mm_node *node = NULL; >+ int ret; >+ >+ mem->mm_node = NULL; >+ for (i = 0; i < num_prios; ++i) { >+ mem_type = prios[i]; >+ man = &bm->man[mem_type]; >+ >+ type_ok = drm_bo_mt_compatible(man, >+ bo->type == drm_bo_type_user, >+ mem_type, mem->proposed_flags, >+ &cur_flags); >+ >+ if (!type_ok) >+ continue; >+ >+ if (mem_type == DRM_BO_MEM_LOCAL) >+ break; >+ >+ if ((mem_type == bo->pinned_mem_type) && >+ (bo->pinned_node != NULL)) { >+ node = bo->pinned_node; >+ break; >+ } >+ >+ mutex_lock(&dev->struct_mutex); >+ if (man->has_type && man->use_type) { >+ type_found = 1; >+ node = drm_mm_search_free(&man->manager, mem->num_pages, >+ mem->page_alignment, 
1); >+ if (node) >+ node = drm_mm_get_block(node, mem->num_pages, >+ mem->page_alignment); >+ } >+ mutex_unlock(&dev->struct_mutex); >+ if (node) >+ break; >+ } >+ >+ if ((type_ok && (mem_type == DRM_BO_MEM_LOCAL)) || node) { >+ mem->mm_node = node; >+ mem->mem_type = mem_type; >+ mem->flags = cur_flags; >+ return 0; >+ } >+ >+ if (!type_found) >+ return -EINVAL; >+ >+ num_prios = dev->driver->bo_driver->num_mem_busy_prio; >+ prios = dev->driver->bo_driver->mem_busy_prio; >+ >+ for (i = 0; i < num_prios; ++i) { >+ mem_type = prios[i]; >+ man = &bm->man[mem_type]; >+ >+ if (!man->has_type) >+ continue; >+ >+ if (!drm_bo_mt_compatible(man, >+ bo->type == drm_bo_type_user, >+ mem_type, >+ mem->proposed_flags, >+ &cur_flags)) >+ continue; >+ >+ ret = drm_bo_mem_force_space(dev, mem, mem_type, no_wait); >+ >+ if (ret == 0 && mem->mm_node) { >+ mem->flags = cur_flags; >+ return 0; >+ } >+ >+ if (ret == -EAGAIN) >+ has_eagain = 1; >+ } >+ >+ ret = (has_eagain) ? -EAGAIN : -ENOMEM; >+ return ret; >+} >+EXPORT_SYMBOL(drm_bo_mem_space); >+ >+/* >+ * drm_bo_modify_proposed_flags: >+ * >+ * @bo: the buffer object getting new flags >+ * >+ * @new_flags: the new set of proposed flag bits >+ * >+ * @new_mask: the mask of bits changed in new_flags >+ * >+ * Modify the proposed_flag bits in @bo >+ */ >+static int drm_bo_modify_proposed_flags (struct drm_buffer_object *bo, >+ uint64_t new_flags, uint64_t new_mask) >+{ >+ uint32_t new_access; >+ >+ /* Copy unchanging bits from existing proposed_flags */ >+ DRM_FLAG_MASKED(new_flags, bo->mem.proposed_flags, ~new_mask); >+ >+ if (bo->type == drm_bo_type_user && >+ ((new_flags & (DRM_BO_FLAG_CACHED | DRM_BO_FLAG_FORCE_CACHING)) != >+ (DRM_BO_FLAG_CACHED | DRM_BO_FLAG_FORCE_CACHING))) { >+ DRM_ERROR("User buffers require cache-coherent memory.\n"); >+ return -EINVAL; >+ } >+ >+ if ((new_mask & DRM_BO_FLAG_NO_EVICT) && !DRM_SUSER(DRM_CURPROC)) { >+ DRM_ERROR("DRM_BO_FLAG_NO_EVICT is only available to privileged processes.\n"); >+ return -EPERM; 
>+ } >+ >+ if ((new_flags & DRM_BO_FLAG_NO_MOVE)) { >+ DRM_ERROR("DRM_BO_FLAG_NO_MOVE is not properly implemented yet.\n"); >+ return -EPERM; >+ } >+ >+ new_access = new_flags & (DRM_BO_FLAG_EXE | DRM_BO_FLAG_WRITE | >+ DRM_BO_FLAG_READ); >+ >+ if (new_access == 0) { >+ DRM_ERROR("Invalid buffer object rwx properties\n"); >+ return -EINVAL; >+ } >+ >+ bo->mem.proposed_flags = new_flags; >+ return 0; >+} >+ >+/* >+ * Call dev->struct_mutex locked. >+ */ >+ >+struct drm_buffer_object *drm_lookup_buffer_object(struct drm_file *file_priv, >+ uint32_t handle, int check_owner) >+{ >+ struct drm_user_object *uo; >+ struct drm_buffer_object *bo; >+ >+ uo = drm_lookup_user_object(file_priv, handle); >+ >+ if (!uo || (uo->type != drm_buffer_type)) { >+ DRM_ERROR("Could not find buffer object 0x%08x\n", handle); >+ return NULL; >+ } >+ >+ if (check_owner && file_priv != uo->owner) { >+ if (!drm_lookup_ref_object(file_priv, uo, _DRM_REF_USE)) >+ return NULL; >+ } >+ >+ bo = drm_user_object_entry(uo, struct drm_buffer_object, base); >+ atomic_inc(&bo->usage); >+ return bo; >+} >+EXPORT_SYMBOL(drm_lookup_buffer_object); >+ >+/* >+ * Call bo->mutex locked. >+ * Returns 1 if the buffer is currently rendered to or from. 0 otherwise. >+ * Doesn't do any fence flushing as opposed to the drm_bo_busy function. >+ */ >+ >+static int drm_bo_quick_busy(struct drm_buffer_object *bo) >+{ >+ struct drm_fence_object *fence = bo->fence; >+ >+ BUG_ON(bo->priv_flags & _DRM_BO_FLAG_UNFENCED); >+ if (fence) { >+ if (drm_fence_object_signaled(fence, bo->fence_type, 0)) { >+ drm_fence_usage_deref_unlocked(&bo->fence); >+ return 0; >+ } >+ return 1; >+ } >+ return 0; >+} >+ >+/* >+ * Call bo->mutex locked. >+ * Returns 1 if the buffer is currently rendered to or from. 0 otherwise. 
>+ */ >+ >+static int drm_bo_busy(struct drm_buffer_object *bo) >+{ >+ struct drm_fence_object *fence = bo->fence; >+ >+ BUG_ON(bo->priv_flags & _DRM_BO_FLAG_UNFENCED); >+ if (fence) { >+ if (drm_fence_object_signaled(fence, bo->fence_type, 0)) { >+ drm_fence_usage_deref_unlocked(&bo->fence); >+ return 0; >+ } >+ drm_fence_object_flush(fence, DRM_FENCE_TYPE_EXE); >+ if (drm_fence_object_signaled(fence, bo->fence_type, 0)) { >+ drm_fence_usage_deref_unlocked(&bo->fence); >+ return 0; >+ } >+ return 1; >+ } >+ return 0; >+} >+ >+static int drm_bo_evict_cached(struct drm_buffer_object *bo) >+{ >+ int ret = 0; >+ >+ BUG_ON(bo->priv_flags & _DRM_BO_FLAG_UNFENCED); >+ if (bo->mem.mm_node) >+ ret = drm_bo_evict(bo, DRM_BO_MEM_TT, 1); >+ return ret; >+} >+ >+/* >+ * Wait until a buffer is unmapped. >+ */ >+ >+static int drm_bo_wait_unmapped(struct drm_buffer_object *bo, int no_wait) >+{ >+ int ret = 0; >+ >+ if ((atomic_read(&bo->mapped) >= 0) && no_wait) >+ return -EBUSY; >+ >+ DRM_WAIT_ON(ret, bo->event_queue, 3 * DRM_HZ, >+ atomic_read(&bo->mapped) == -1); >+ >+ if (ret == -EINTR) >+ ret = -EAGAIN; >+ >+ return ret; >+} >+ >+static int drm_bo_check_unfenced(struct drm_buffer_object *bo) >+{ >+ int ret; >+ >+ mutex_lock(&bo->mutex); >+ ret = (bo->priv_flags & _DRM_BO_FLAG_UNFENCED); >+ mutex_unlock(&bo->mutex); >+ return ret; >+} >+ >+/* >+ * Wait until a buffer, scheduled to be fenced moves off the unfenced list. >+ * Until then, we cannot really do anything with it except delete it. 
>+ */ >+ >+static int drm_bo_wait_unfenced(struct drm_buffer_object *bo, int no_wait, >+ int eagain_if_wait) >+{ >+ int ret = (bo->priv_flags & _DRM_BO_FLAG_UNFENCED); >+ >+ if (ret && no_wait) >+ return -EBUSY; >+ else if (!ret) >+ return 0; >+ >+ ret = 0; >+ mutex_unlock(&bo->mutex); >+ DRM_WAIT_ON (ret, bo->event_queue, 3 * DRM_HZ, >+ !drm_bo_check_unfenced(bo)); >+ mutex_lock(&bo->mutex); >+ if (ret == -EINTR) >+ return -EAGAIN; >+ ret = (bo->priv_flags & _DRM_BO_FLAG_UNFENCED); >+ if (ret) { >+ DRM_ERROR("Timeout waiting for buffer to become fenced\n"); >+ return -EBUSY; >+ } >+ if (eagain_if_wait) >+ return -EAGAIN; >+ >+ return 0; >+} >+ >+/* >+ * Fill in the ioctl reply argument with buffer info. >+ * Bo locked. >+ */ >+ >+static void drm_bo_fill_rep_arg(struct drm_buffer_object *bo, >+ struct drm_bo_info_rep *rep) >+{ >+ if (!rep) >+ return; >+ >+ rep->handle = bo->base.hash.key; >+ rep->flags = bo->mem.flags; >+ rep->size = bo->num_pages * PAGE_SIZE; >+ rep->offset = bo->offset; >+ >+ /* >+ * drm_bo_type_device buffers have user-visible >+ * handles which can be used to share across >+ * processes. Hand that back to the application >+ */ >+ if (bo->type == drm_bo_type_device) >+ rep->arg_handle = bo->map_list.user_token; >+ else >+ rep->arg_handle = 0; >+ >+ rep->proposed_flags = bo->mem.proposed_flags; >+ rep->buffer_start = bo->buffer_start; >+ rep->fence_flags = bo->fence_type; >+ rep->rep_flags = 0; >+ rep->page_alignment = bo->mem.page_alignment; >+ >+ if ((bo->priv_flags & _DRM_BO_FLAG_UNFENCED) || drm_bo_quick_busy(bo)) { >+ DRM_FLAG_MASKED(rep->rep_flags, DRM_BO_REP_BUSY, >+ DRM_BO_REP_BUSY); >+ } >+} >+ >+/* >+ * Wait for buffer idle and register that we've mapped the buffer. >+ * Mapping is registered as a drm_ref_object with type _DRM_REF_TYPE1, >+ * so that if the client dies, the mapping is automatically >+ * unregistered. 
>+ */ >+ >+static int drm_buffer_object_map(struct drm_file *file_priv, uint32_t handle, >+ uint32_t map_flags, unsigned hint, >+ struct drm_bo_info_rep *rep) >+{ >+ struct drm_buffer_object *bo; >+ struct drm_device *dev = file_priv->head->dev; >+ int ret = 0; >+ int no_wait = hint & DRM_BO_HINT_DONT_BLOCK; >+ >+ mutex_lock(&dev->struct_mutex); >+ bo = drm_lookup_buffer_object(file_priv, handle, 1); >+ mutex_unlock(&dev->struct_mutex); >+ >+ if (!bo) >+ return -EINVAL; >+ >+ mutex_lock(&bo->mutex); >+ ret = drm_bo_wait_unfenced(bo, no_wait, 0); >+ if (ret) >+ goto out; >+ >+ /* >+ * If this returns true, we are currently unmapped. >+ * We need to do this test, because unmapping can >+ * be done without the bo->mutex held. >+ */ >+ >+ while (1) { >+ if (atomic_inc_and_test(&bo->mapped)) { >+ if (no_wait && drm_bo_busy(bo)) { >+ atomic_dec(&bo->mapped); >+ ret = -EBUSY; >+ goto out; >+ } >+ ret = drm_bo_wait(bo, 0, 0, no_wait); >+ if (ret) { >+ atomic_dec(&bo->mapped); >+ goto out; >+ } >+ >+ if (bo->mem.flags & DRM_BO_FLAG_CACHED_MAPPED) >+ drm_bo_evict_cached(bo); >+ >+ break; >+ } else if (bo->mem.flags & DRM_BO_FLAG_CACHED_MAPPED) { >+ >+ /* >+ * We are already mapped with different flags. >+ * need to wait for unmap. 
>+ */ >+ >+ ret = drm_bo_wait_unmapped(bo, no_wait); >+ if (ret) >+ goto out; >+ >+ continue; >+ } >+ break; >+ } >+ >+ mutex_lock(&dev->struct_mutex); >+ ret = drm_add_ref_object(file_priv, &bo->base, _DRM_REF_TYPE1); >+ mutex_unlock(&dev->struct_mutex); >+ if (ret) { >+ if (atomic_add_negative(-1, &bo->mapped)) >+ DRM_WAKEUP(&bo->event_queue); >+ >+ } else >+ drm_bo_fill_rep_arg(bo, rep); >+out: >+ mutex_unlock(&bo->mutex); >+ drm_bo_usage_deref_unlocked(&bo); >+ return ret; >+} >+ >+static int drm_buffer_object_unmap(struct drm_file *file_priv, uint32_t handle) >+{ >+ struct drm_device *dev = file_priv->head->dev; >+ struct drm_buffer_object *bo; >+ struct drm_ref_object *ro; >+ int ret = 0; >+ >+ mutex_lock(&dev->struct_mutex); >+ >+ bo = drm_lookup_buffer_object(file_priv, handle, 1); >+ if (!bo) { >+ ret = -EINVAL; >+ goto out; >+ } >+ >+ ro = drm_lookup_ref_object(file_priv, &bo->base, _DRM_REF_TYPE1); >+ if (!ro) { >+ ret = -EINVAL; >+ goto out; >+ } >+ >+ drm_remove_ref_object(file_priv, ro); >+ drm_bo_usage_deref_locked(&bo); >+out: >+ mutex_unlock(&dev->struct_mutex); >+ return ret; >+} >+ >+/* >+ * Call struct-sem locked. >+ */ >+ >+static void drm_buffer_user_object_unmap(struct drm_file *file_priv, >+ struct drm_user_object *uo, >+ enum drm_ref_type action) >+{ >+ struct drm_buffer_object *bo = >+ drm_user_object_entry(uo, struct drm_buffer_object, base); >+ >+ /* >+ * We DON'T want to take the bo->lock here, because we want to >+ * hold it when we wait for unmapped buffer. >+ */ >+ >+ BUG_ON(action != _DRM_REF_TYPE1); >+ >+ if (atomic_add_negative(-1, &bo->mapped)) >+ DRM_WAKEUP(&bo->event_queue); >+} >+ >+/* >+ * bo->mutex locked. >+ * Note that new_mem_flags are NOT transferred to the bo->mem.proposed_flags. 
>+ */ >+ >+int drm_bo_move_buffer(struct drm_buffer_object *bo, uint64_t new_mem_flags, >+ int no_wait, int move_unfenced) >+{ >+ struct drm_device *dev = bo->dev; >+ struct drm_buffer_manager *bm = &dev->bm; >+ int ret = 0; >+ struct drm_bo_mem_reg mem; >+ /* >+ * Flush outstanding fences. >+ */ >+ >+ drm_bo_busy(bo); >+ >+ /* >+ * Wait for outstanding fences. >+ */ >+ >+ ret = drm_bo_wait(bo, 0, 0, no_wait); >+ if (ret) >+ return ret; >+ >+ mem.num_pages = bo->num_pages; >+ mem.size = mem.num_pages << PAGE_SHIFT; >+ mem.proposed_flags = new_mem_flags; >+ mem.page_alignment = bo->mem.page_alignment; >+ >+ mutex_lock(&bm->evict_mutex); >+ mutex_lock(&dev->struct_mutex); >+ list_del_init(&bo->lru); >+ mutex_unlock(&dev->struct_mutex); >+ >+ /* >+ * Determine where to move the buffer. >+ */ >+ ret = drm_bo_mem_space(bo, &mem, no_wait); >+ if (ret) >+ goto out_unlock; >+ >+ ret = drm_bo_handle_move_mem(bo, &mem, 0, no_wait); >+ >+out_unlock: >+ mutex_lock(&dev->struct_mutex); >+ if (ret || !move_unfenced) { >+ if (mem.mm_node) { >+ if (mem.mm_node != bo->pinned_node) >+ drm_mm_put_block(mem.mm_node); >+ mem.mm_node = NULL; >+ } >+ drm_bo_add_to_lru(bo); >+ if (bo->priv_flags & _DRM_BO_FLAG_UNFENCED) { >+ DRM_WAKEUP(&bo->event_queue); >+ DRM_FLAG_MASKED(bo->priv_flags, 0, >+ _DRM_BO_FLAG_UNFENCED); >+ } >+ } else { >+ list_add_tail(&bo->lru, &bm->unfenced); >+ DRM_FLAG_MASKED(bo->priv_flags, _DRM_BO_FLAG_UNFENCED, >+ _DRM_BO_FLAG_UNFENCED); >+ } >+ mutex_unlock(&dev->struct_mutex); >+ mutex_unlock(&bm->evict_mutex); >+ return ret; >+} >+ >+static int drm_bo_mem_compat(struct drm_bo_mem_reg *mem) >+{ >+ uint32_t flag_diff = (mem->proposed_flags ^ mem->flags); >+ >+ if ((mem->proposed_flags & mem->flags & DRM_BO_MASK_MEM) == 0) >+ return 0; >+ if ((flag_diff & DRM_BO_FLAG_CACHED) && >+ (/* !(mem->proposed_flags & DRM_BO_FLAG_CACHED) ||*/ >+ (mem->proposed_flags & DRM_BO_FLAG_FORCE_CACHING))) >+ return 0; >+ >+ if ((flag_diff & DRM_BO_FLAG_MAPPABLE) && >+ 
((mem->proposed_flags & DRM_BO_FLAG_MAPPABLE) || >+ (mem->proposed_flags & DRM_BO_FLAG_FORCE_MAPPABLE))) >+ return 0; >+ return 1; >+} >+ >+/** >+ * drm_buffer_object_validate: >+ * >+ * @bo: the buffer object to modify >+ * >+ * @fence_class: the new fence class covering this buffer >+ * >+ * @move_unfenced: a boolean indicating whether switching the >+ * memory space of this buffer should cause the buffer to >+ * be placed on the unfenced list. >+ * >+ * @no_wait: whether this function should return -EBUSY instead >+ * of waiting. >+ * >+ * Change buffer access parameters. This can involve moving >+ * the buffer to the correct memory type, pinning the buffer >+ * or changing the class/type of fence covering this buffer >+ * >+ * Must be called with bo locked. >+ */ >+ >+static int drm_buffer_object_validate(struct drm_buffer_object *bo, >+ uint32_t fence_class, >+ int move_unfenced, int no_wait) >+{ >+ struct drm_device *dev = bo->dev; >+ struct drm_buffer_manager *bm = &dev->bm; >+ struct drm_bo_driver *driver = dev->driver->bo_driver; >+ uint32_t ftype; >+ int ret; >+ >+ DRM_DEBUG("Proposed flags 0x%016llx, Old flags 0x%016llx\n", >+ (unsigned long long) bo->mem.proposed_flags, >+ (unsigned long long) bo->mem.flags); >+ >+ ret = driver->fence_type(bo, &fence_class, &ftype); >+ >+ if (ret) { >+ DRM_ERROR("Driver did not support given buffer permissions\n"); >+ return ret; >+ } >+ >+ /* >+ * We're switching command submission mechanism, >+ * or cannot simply rely on the hardware serializing for us. >+ * >+ * Wait for buffer idle. 
>+ */ >+ >+ if ((fence_class != bo->fence_class) || >+ ((ftype ^ bo->fence_type) & bo->fence_type)) { >+ >+ ret = drm_bo_wait(bo, 0, 0, no_wait); >+ >+ if (ret) >+ return ret; >+ >+ } >+ >+ bo->new_fence_class = fence_class; >+ bo->new_fence_type = ftype; >+ >+ ret = drm_bo_wait_unmapped(bo, no_wait); >+ if (ret) { >+ DRM_ERROR("Timed out waiting for buffer unmap.\n"); >+ return ret; >+ } >+ >+ /* >+ * Check whether we need to move buffer. >+ */ >+ >+ if (!drm_bo_mem_compat(&bo->mem)) { >+ ret = drm_bo_move_buffer(bo, bo->mem.proposed_flags, no_wait, >+ move_unfenced); >+ if (ret) { >+ if (ret != -EAGAIN) >+ DRM_ERROR("Failed moving buffer.\n"); >+ return ret; >+ } >+ } >+ >+ /* >+ * Pinned buffers. >+ */ >+ >+ if (bo->mem.proposed_flags & (DRM_BO_FLAG_NO_EVICT | DRM_BO_FLAG_NO_MOVE)) { >+ bo->pinned_mem_type = bo->mem.mem_type; >+ mutex_lock(&dev->struct_mutex); >+ list_del_init(&bo->pinned_lru); >+ drm_bo_add_to_pinned_lru(bo); >+ >+ if (bo->pinned_node != bo->mem.mm_node) { >+ if (bo->pinned_node != NULL) >+ drm_mm_put_block(bo->pinned_node); >+ bo->pinned_node = bo->mem.mm_node; >+ } >+ >+ mutex_unlock(&dev->struct_mutex); >+ >+ } else if (bo->pinned_node != NULL) { >+ >+ mutex_lock(&dev->struct_mutex); >+ >+ if (bo->pinned_node != bo->mem.mm_node) >+ drm_mm_put_block(bo->pinned_node); >+ >+ list_del_init(&bo->pinned_lru); >+ bo->pinned_node = NULL; >+ mutex_unlock(&dev->struct_mutex); >+ >+ } >+ >+ /* >+ * We might need to add a TTM. >+ */ >+ >+ if (bo->mem.mem_type == DRM_BO_MEM_LOCAL && bo->ttm == NULL) { >+ ret = drm_bo_add_ttm(bo); >+ if (ret) >+ return ret; >+ } >+ /* >+ * Validation has succeeded, move the access and other >+ * non-mapping-related flag bits from the proposed flags to >+ * the active flags >+ */ >+ >+ DRM_FLAG_MASKED(bo->mem.flags, bo->mem.proposed_flags, ~DRM_BO_MASK_MEMTYPE); >+ >+ /* >+ * Finally, adjust lru to be sure. 
>+ */ >+ >+ mutex_lock(&dev->struct_mutex); >+ list_del(&bo->lru); >+ if (move_unfenced) { >+ list_add_tail(&bo->lru, &bm->unfenced); >+ DRM_FLAG_MASKED(bo->priv_flags, _DRM_BO_FLAG_UNFENCED, >+ _DRM_BO_FLAG_UNFENCED); >+ } else { >+ drm_bo_add_to_lru(bo); >+ if (bo->priv_flags & _DRM_BO_FLAG_UNFENCED) { >+ DRM_WAKEUP(&bo->event_queue); >+ DRM_FLAG_MASKED(bo->priv_flags, 0, >+ _DRM_BO_FLAG_UNFENCED); >+ } >+ } >+ mutex_unlock(&dev->struct_mutex); >+ >+ return 0; >+} >+ >+/** >+ * drm_bo_do_validate: >+ * >+ * @bo: the buffer object >+ * >+ * @flags: access rights, mapping parameters and cacheability. See >+ * the DRM_BO_FLAG_* values in drm.h >+ * >+ * @mask: Which flag values to change; this allows callers to modify >+ * things without knowing the current state of other flags. >+ * >+ * @hint: changes the procedure for this operation, see the DRM_BO_HINT_* >+ * values in drm.h. >+ * >+ * @fence_class: a driver-specific way of doing fences. Presumably, >+ * this would be used if the driver had more than one submission and >+ * fencing mechanism. At this point, there isn't any use of this >+ * from the user mode code. >+ * >+ * @rep: To be stuffed with the reply from validation >+ * >+ * 'validate' a buffer object. This changes where the buffer is >+ * located, along with changing access modes. 
>+ */ >+ >+int drm_bo_do_validate(struct drm_buffer_object *bo, >+ uint64_t flags, uint64_t mask, uint32_t hint, >+ uint32_t fence_class, >+ struct drm_bo_info_rep *rep) >+{ >+ int ret; >+ int no_wait = (hint & DRM_BO_HINT_DONT_BLOCK) != 0; >+ >+ mutex_lock(&bo->mutex); >+ ret = drm_bo_wait_unfenced(bo, no_wait, 0); >+ >+ if (ret) >+ goto out; >+ >+ ret = drm_bo_modify_proposed_flags (bo, flags, mask); >+ if (ret) >+ goto out; >+ >+ ret = drm_buffer_object_validate(bo, >+ fence_class, >+ !(hint & DRM_BO_HINT_DONT_FENCE), >+ no_wait); >+out: >+ if (rep) >+ drm_bo_fill_rep_arg(bo, rep); >+ >+ mutex_unlock(&bo->mutex); >+ return ret; >+} >+EXPORT_SYMBOL(drm_bo_do_validate); >+ >+/** >+ * drm_bo_handle_validate >+ * >+ * @file_priv: the drm file private, used to get a handle to the user context >+ * >+ * @handle: the buffer object handle >+ * >+ * @flags: access rights, mapping parameters and cacheability. See >+ * the DRM_BO_FLAG_* values in drm.h >+ * >+ * @mask: Which flag values to change; this allows callers to modify >+ * things without knowing the current state of other flags. >+ * >+ * @hint: changes the procedure for this operation, see the DRM_BO_HINT_* >+ * values in drm.h. >+ * >+ * @fence_class: a driver-specific way of doing fences. Presumably, >+ * this would be used if the driver had more than one submission and >+ * fencing mechanism. At this point, there isn't any use of this >+ * from the user mode code. >+ * >+ * @use_old_fence_class: don't change fence class, pull it from the buffer object >+ * >+ * @rep: To be stuffed with the reply from validation >+ * >+ * @bo_rep: To be stuffed with the buffer object pointer >+ * >+ * Perform drm_bo_do_validate on a buffer referenced by a user-space handle. >+ * Some permissions checking is done on the parameters, otherwise this >+ * is a thin wrapper. 
>+ */ >+ >+int drm_bo_handle_validate(struct drm_file *file_priv, uint32_t handle, >+ uint64_t flags, uint64_t mask, >+ uint32_t hint, >+ uint32_t fence_class, >+ int use_old_fence_class, >+ struct drm_bo_info_rep *rep, >+ struct drm_buffer_object **bo_rep) >+{ >+ struct drm_device *dev = file_priv->head->dev; >+ struct drm_buffer_object *bo; >+ int ret; >+ >+ mutex_lock(&dev->struct_mutex); >+ bo = drm_lookup_buffer_object(file_priv, handle, 1); >+ mutex_unlock(&dev->struct_mutex); >+ >+ if (!bo) >+ return -EINVAL; >+ >+ if (use_old_fence_class) >+ fence_class = bo->fence_class; >+ >+ /* >+ * Only allow creator to change shared buffer mask. >+ */ >+ >+ if (bo->base.owner != file_priv) >+ mask &= ~(DRM_BO_FLAG_NO_EVICT | DRM_BO_FLAG_NO_MOVE); >+ >+ >+ ret = drm_bo_do_validate(bo, flags, mask, hint, fence_class, rep); >+ >+ if (!ret && bo_rep) >+ *bo_rep = bo; >+ else >+ drm_bo_usage_deref_unlocked(&bo); >+ >+ return ret; >+} >+EXPORT_SYMBOL(drm_bo_handle_validate); >+ >+static int drm_bo_handle_info(struct drm_file *file_priv, uint32_t handle, >+ struct drm_bo_info_rep *rep) >+{ >+ struct drm_device *dev = file_priv->head->dev; >+ struct drm_buffer_object *bo; >+ >+ mutex_lock(&dev->struct_mutex); >+ bo = drm_lookup_buffer_object(file_priv, handle, 1); >+ mutex_unlock(&dev->struct_mutex); >+ >+ if (!bo) >+ return -EINVAL; >+ >+ mutex_lock(&bo->mutex); >+ if (!(bo->priv_flags & _DRM_BO_FLAG_UNFENCED)) >+ (void)drm_bo_busy(bo); >+ drm_bo_fill_rep_arg(bo, rep); >+ mutex_unlock(&bo->mutex); >+ drm_bo_usage_deref_unlocked(&bo); >+ return 0; >+} >+ >+static int drm_bo_handle_wait(struct drm_file *file_priv, uint32_t handle, >+ uint32_t hint, >+ struct drm_bo_info_rep *rep) >+{ >+ struct drm_device *dev = file_priv->head->dev; >+ struct drm_buffer_object *bo; >+ int no_wait = hint & DRM_BO_HINT_DONT_BLOCK; >+ int ret; >+ >+ mutex_lock(&dev->struct_mutex); >+ bo = drm_lookup_buffer_object(file_priv, handle, 1); >+ mutex_unlock(&dev->struct_mutex); >+ >+ if (!bo) >+ return 
-EINVAL; >+ >+ mutex_lock(&bo->mutex); >+ ret = drm_bo_wait_unfenced(bo, no_wait, 0); >+ if (ret) >+ goto out; >+ ret = drm_bo_wait(bo, hint & DRM_BO_HINT_WAIT_LAZY, 0, no_wait); >+ if (ret) >+ goto out; >+ >+ drm_bo_fill_rep_arg(bo, rep); >+ >+out: >+ mutex_unlock(&bo->mutex); >+ drm_bo_usage_deref_unlocked(&bo); >+ return ret; >+} >+ >+int drm_buffer_object_create(struct drm_device *dev, >+ unsigned long size, >+ enum drm_bo_type type, >+ uint64_t flags, >+ uint32_t hint, >+ uint32_t page_alignment, >+ unsigned long buffer_start, >+ struct drm_buffer_object **buf_obj) >+{ >+ struct drm_buffer_manager *bm = &dev->bm; >+ struct drm_buffer_object *bo; >+ int ret = 0; >+ unsigned long num_pages; >+ >+ size += buffer_start & ~PAGE_MASK; >+ num_pages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT; >+ if (num_pages == 0) { >+ DRM_ERROR("Illegal buffer object size.\n"); >+ return -EINVAL; >+ } >+ >+ bo = drm_ctl_calloc(1, sizeof(*bo), DRM_MEM_BUFOBJ); >+ >+ if (!bo) >+ return -ENOMEM; >+ >+ mutex_init(&bo->mutex); >+ mutex_lock(&bo->mutex); >+ >+ atomic_set(&bo->usage, 1); >+ atomic_set(&bo->mapped, -1); >+ DRM_INIT_WAITQUEUE(&bo->event_queue); >+ INIT_LIST_HEAD(&bo->lru); >+ INIT_LIST_HEAD(&bo->pinned_lru); >+ INIT_LIST_HEAD(&bo->ddestroy); >+#ifdef DRM_ODD_MM_COMPAT >+ INIT_LIST_HEAD(&bo->p_mm_list); >+ INIT_LIST_HEAD(&bo->vma_list); >+#endif >+ bo->dev = dev; >+ bo->type = type; >+ bo->num_pages = num_pages; >+ bo->mem.mem_type = DRM_BO_MEM_LOCAL; >+ bo->mem.num_pages = bo->num_pages; >+ bo->mem.mm_node = NULL; >+ bo->mem.page_alignment = page_alignment; >+ bo->buffer_start = buffer_start & PAGE_MASK; >+ bo->priv_flags = 0; >+ bo->mem.flags = (DRM_BO_FLAG_MEM_LOCAL | DRM_BO_FLAG_CACHED | >+ DRM_BO_FLAG_MAPPABLE); >+ bo->mem.proposed_flags = 0; >+ atomic_inc(&bm->count); >+ /* >+ * Use drm_bo_modify_proposed_flags to error-check the proposed flags >+ */ >+ ret = drm_bo_modify_proposed_flags (bo, flags, flags); >+ if (ret) >+ goto out_err; >+ >+ /* >+ * For drm_bo_type_device 
buffers, allocate >+ * address space from the device so that applications >+ * can mmap the buffer from there >+ */ >+ if (bo->type == drm_bo_type_device) { >+ mutex_lock(&dev->struct_mutex); >+ ret = drm_bo_setup_vm_locked(bo); >+ mutex_unlock(&dev->struct_mutex); >+ if (ret) >+ goto out_err; >+ } >+ >+ ret = drm_buffer_object_validate(bo, 0, 0, hint & DRM_BO_HINT_DONT_BLOCK); >+ if (ret) >+ goto out_err; >+ >+ mutex_unlock(&bo->mutex); >+ *buf_obj = bo; >+ return 0; >+ >+out_err: >+ mutex_unlock(&bo->mutex); >+ >+ drm_bo_usage_deref_unlocked(&bo); >+ return ret; >+} >+EXPORT_SYMBOL(drm_buffer_object_create); >+ >+ >+static int drm_bo_add_user_object(struct drm_file *file_priv, >+ struct drm_buffer_object *bo, int shareable) >+{ >+ struct drm_device *dev = file_priv->head->dev; >+ int ret; >+ >+ mutex_lock(&dev->struct_mutex); >+ ret = drm_add_user_object(file_priv, &bo->base, shareable); >+ if (ret) >+ goto out; >+ >+ bo->base.remove = drm_bo_base_deref_locked; >+ bo->base.type = drm_buffer_type; >+ bo->base.ref_struct_locked = NULL; >+ bo->base.unref = drm_buffer_user_object_unmap; >+ >+out: >+ mutex_unlock(&dev->struct_mutex); >+ return ret; >+} >+ >+int drm_bo_create_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) >+{ >+ struct drm_bo_create_arg *arg = data; >+ struct drm_bo_create_req *req = &arg->d.req; >+ struct drm_bo_info_rep *rep = &arg->d.rep; >+ struct drm_buffer_object *entry; >+ enum drm_bo_type bo_type; >+ int ret = 0; >+ >+ DRM_DEBUG("drm_bo_create_ioctl: %dkb, %dkb align\n", >+ (int)(req->size / 1024), req->page_alignment * 4); >+ >+ if (!dev->bm.initialized) { >+ DRM_ERROR("Buffer object manager is not initialized.\n"); >+ return -EINVAL; >+ } >+ >+ /* >+ * If the buffer creation request comes in with a starting address, >+ * that points at the desired user pages to map. Otherwise, create >+ * a drm_bo_type_device buffer, which uses pages allocated from the kernel >+ */ >+ bo_type = (req->buffer_start) ? 
drm_bo_type_user : drm_bo_type_device; >+ >+ /* >+ * User buffers cannot be shared >+ */ >+ if (bo_type == drm_bo_type_user) >+ req->flags &= ~DRM_BO_FLAG_SHAREABLE; >+ >+ ret = drm_buffer_object_create(file_priv->head->dev, >+ req->size, bo_type, req->flags, >+ req->hint, req->page_alignment, >+ req->buffer_start, &entry); >+ if (ret) >+ goto out; >+ >+ ret = drm_bo_add_user_object(file_priv, entry, >+ req->flags & DRM_BO_FLAG_SHAREABLE); >+ if (ret) { >+ drm_bo_usage_deref_unlocked(&entry); >+ goto out; >+ } >+ >+ mutex_lock(&entry->mutex); >+ drm_bo_fill_rep_arg(entry, rep); >+ mutex_unlock(&entry->mutex); >+ >+out: >+ return ret; >+} >+ >+int drm_bo_setstatus_ioctl(struct drm_device *dev, >+ void *data, struct drm_file *file_priv) >+{ >+ struct drm_bo_map_wait_idle_arg *arg = data; >+ struct drm_bo_info_req *req = &arg->d.req; >+ struct drm_bo_info_rep *rep = &arg->d.rep; >+ int ret; >+ >+ if (!dev->bm.initialized) { >+ DRM_ERROR("Buffer object manager is not initialized.\n"); >+ return -EINVAL; >+ } >+ >+ ret = drm_bo_read_lock(&dev->bm.bm_lock); >+ if (ret) >+ return ret; >+ >+ /* >+ * validate the buffer. note that 'fence_class' will be unused >+ * as we pass use_old_fence_class=1 here. Note also that >+ * the libdrm API doesn't pass fence_class to the kernel, >+ * so it's a good thing it isn't used here. 
>+ */ >+ ret = drm_bo_handle_validate(file_priv, req->handle, >+ req->flags, >+ req->mask, >+ req->hint | DRM_BO_HINT_DONT_FENCE, >+ req->fence_class, 1, >+ rep, NULL); >+ >+ (void) drm_bo_read_unlock(&dev->bm.bm_lock); >+ if (ret) >+ return ret; >+ >+ return 0; >+} >+ >+int drm_bo_map_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) >+{ >+ struct drm_bo_map_wait_idle_arg *arg = data; >+ struct drm_bo_info_req *req = &arg->d.req; >+ struct drm_bo_info_rep *rep = &arg->d.rep; >+ int ret; >+ if (!dev->bm.initialized) { >+ DRM_ERROR("Buffer object manager is not initialized.\n"); >+ return -EINVAL; >+ } >+ >+ ret = drm_buffer_object_map(file_priv, req->handle, req->mask, >+ req->hint, rep); >+ if (ret) >+ return ret; >+ >+ return 0; >+} >+ >+int drm_bo_unmap_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) >+{ >+ struct drm_bo_handle_arg *arg = data; >+ int ret; >+ if (!dev->bm.initialized) { >+ DRM_ERROR("Buffer object manager is not initialized.\n"); >+ return -EINVAL; >+ } >+ >+ ret = drm_buffer_object_unmap(file_priv, arg->handle); >+ return ret; >+} >+ >+ >+int drm_bo_reference_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) >+{ >+ struct drm_bo_reference_info_arg *arg = data; >+ struct drm_bo_handle_arg *req = &arg->d.req; >+ struct drm_bo_info_rep *rep = &arg->d.rep; >+ struct drm_user_object *uo; >+ int ret; >+ >+ if (!dev->bm.initialized) { >+ DRM_ERROR("Buffer object manager is not initialized.\n"); >+ return -EINVAL; >+ } >+ >+ ret = drm_user_object_ref(file_priv, req->handle, >+ drm_buffer_type, &uo); >+ if (ret) >+ return ret; >+ >+ ret = drm_bo_handle_info(file_priv, req->handle, rep); >+ if (ret) >+ return ret; >+ >+ return 0; >+} >+ >+int drm_bo_unreference_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) >+{ >+ struct drm_bo_handle_arg *arg = data; >+ int ret = 0; >+ >+ if (!dev->bm.initialized) { >+ DRM_ERROR("Buffer object manager is not initialized.\n"); >+ return 
-EINVAL; >+ } >+ >+ ret = drm_user_object_unref(file_priv, arg->handle, drm_buffer_type); >+ return ret; >+} >+ >+int drm_bo_info_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) >+{ >+ struct drm_bo_reference_info_arg *arg = data; >+ struct drm_bo_handle_arg *req = &arg->d.req; >+ struct drm_bo_info_rep *rep = &arg->d.rep; >+ int ret; >+ >+ if (!dev->bm.initialized) { >+ DRM_ERROR("Buffer object manager is not initialized.\n"); >+ return -EINVAL; >+ } >+ >+ ret = drm_bo_handle_info(file_priv, req->handle, rep); >+ if (ret) >+ return ret; >+ >+ return 0; >+} >+ >+int drm_bo_wait_idle_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) >+{ >+ struct drm_bo_map_wait_idle_arg *arg = data; >+ struct drm_bo_info_req *req = &arg->d.req; >+ struct drm_bo_info_rep *rep = &arg->d.rep; >+ int ret; >+ if (!dev->bm.initialized) { >+ DRM_ERROR("Buffer object manager is not initialized.\n"); >+ return -EINVAL; >+ } >+ >+ ret = drm_bo_handle_wait(file_priv, req->handle, >+ req->hint, rep); >+ if (ret) >+ return ret; >+ >+ return 0; >+} >+ >+static int drm_bo_leave_list(struct drm_buffer_object *bo, >+ uint32_t mem_type, >+ int free_pinned, >+ int allow_errors) >+{ >+ struct drm_device *dev = bo->dev; >+ int ret = 0; >+ >+ mutex_lock(&bo->mutex); >+ >+ ret = drm_bo_expire_fence(bo, allow_errors); >+ if (ret) >+ goto out; >+ >+ if (free_pinned) { >+ DRM_FLAG_MASKED(bo->mem.flags, 0, DRM_BO_FLAG_NO_MOVE); >+ mutex_lock(&dev->struct_mutex); >+ list_del_init(&bo->pinned_lru); >+ if (bo->pinned_node == bo->mem.mm_node) >+ bo->pinned_node = NULL; >+ if (bo->pinned_node != NULL) { >+ drm_mm_put_block(bo->pinned_node); >+ bo->pinned_node = NULL; >+ } >+ mutex_unlock(&dev->struct_mutex); >+ } >+ >+ if (bo->mem.flags & DRM_BO_FLAG_NO_EVICT) { >+ DRM_ERROR("A DRM_BO_NO_EVICT buffer present at " >+ "cleanup. 
Removing flag and evicting.\n"); >+ bo->mem.flags &= ~DRM_BO_FLAG_NO_EVICT; >+ bo->mem.proposed_flags &= ~DRM_BO_FLAG_NO_EVICT; >+ } >+ >+ if (bo->mem.mem_type == mem_type) >+ ret = drm_bo_evict(bo, mem_type, 0); >+ >+ if (ret) { >+ if (allow_errors) { >+ goto out; >+ } else { >+ ret = 0; >+ DRM_ERROR("Cleanup eviction failed\n"); >+ } >+ } >+ >+out: >+ mutex_unlock(&bo->mutex); >+ return ret; >+} >+ >+ >+static struct drm_buffer_object *drm_bo_entry(struct list_head *list, >+ int pinned_list) >+{ >+ if (pinned_list) >+ return list_entry(list, struct drm_buffer_object, pinned_lru); >+ else >+ return list_entry(list, struct drm_buffer_object, lru); >+} >+ >+/* >+ * dev->struct_mutex locked. >+ */ >+ >+static int drm_bo_force_list_clean(struct drm_device *dev, >+ struct list_head *head, >+ unsigned mem_type, >+ int free_pinned, >+ int allow_errors, >+ int pinned_list) >+{ >+ struct list_head *list, *next, *prev; >+ struct drm_buffer_object *entry, *nentry; >+ int ret; >+ int do_restart; >+ >+ /* >+ * The list traversal is a bit odd here, because an item may >+ * disappear from the list when we release the struct_mutex or >+ * when we decrease the usage count. Also we're not guaranteed >+ * to drain pinned lists, so we can't always restart. >+ */ >+ >+restart: >+ nentry = NULL; >+ list_for_each_safe(list, next, head) { >+ prev = list->prev; >+ >+ entry = (nentry != NULL) ? nentry: drm_bo_entry(list, pinned_list); >+ atomic_inc(&entry->usage); >+ if (nentry) { >+ atomic_dec(&nentry->usage); >+ nentry = NULL; >+ } >+ >+ /* >+ * Protect the next item from destruction, so we can check >+ * its list pointers later on. 
>+ */ >+ >+ if (next != head) { >+ nentry = drm_bo_entry(next, pinned_list); >+ atomic_inc(&nentry->usage); >+ } >+ mutex_unlock(&dev->struct_mutex); >+ >+ ret = drm_bo_leave_list(entry, mem_type, free_pinned, >+ allow_errors); >+ mutex_lock(&dev->struct_mutex); >+ >+ drm_bo_usage_deref_locked(&entry); >+ if (ret) >+ return ret; >+ >+ /* >+ * Has the next item disappeared from the list? >+ */ >+ >+ do_restart = ((next->prev != list) && (next->prev != prev)); >+ >+ if (nentry != NULL && do_restart) >+ drm_bo_usage_deref_locked(&nentry); >+ >+ if (do_restart) >+ goto restart; >+ } >+ return 0; >+} >+ >+int drm_bo_clean_mm(struct drm_device *dev, unsigned mem_type) >+{ >+ struct drm_buffer_manager *bm = &dev->bm; >+ struct drm_mem_type_manager *man = &bm->man[mem_type]; >+ int ret = -EINVAL; >+ >+ if (mem_type >= DRM_BO_MEM_TYPES) { >+ DRM_ERROR("Illegal memory type %d\n", mem_type); >+ return ret; >+ } >+ >+ if (!man->has_type) { >+ DRM_ERROR("Trying to take down uninitialized " >+ "memory manager type %u\n", mem_type); >+ return ret; >+ } >+ man->use_type = 0; >+ man->has_type = 0; >+ >+ ret = 0; >+ if (mem_type > 0) { >+ BUG_ON(!list_empty(&bm->unfenced)); >+ drm_bo_force_list_clean(dev, &man->lru, mem_type, 1, 0, 0); >+ drm_bo_force_list_clean(dev, &man->pinned, mem_type, 1, 0, 1); >+ >+ if (drm_mm_clean(&man->manager)) { >+ drm_mm_takedown(&man->manager); >+ } else { >+ ret = -EBUSY; >+ } >+ } >+ >+ return ret; >+} >+EXPORT_SYMBOL(drm_bo_clean_mm); >+ >+/** >+ *Evict all buffers of a particular mem_type, but leave memory manager >+ *regions for NO_MOVE buffers intact. New buffers cannot be added at this >+ *point since we have the hardware lock. 
>+ */ >+ >+static int drm_bo_lock_mm(struct drm_device *dev, unsigned mem_type) >+{ >+ int ret; >+ struct drm_buffer_manager *bm = &dev->bm; >+ struct drm_mem_type_manager *man = &bm->man[mem_type]; >+ >+ if (mem_type == 0 || mem_type >= DRM_BO_MEM_TYPES) { >+ DRM_ERROR("Illegal memory manager memory type %u.\n", mem_type); >+ return -EINVAL; >+ } >+ >+ if (!man->has_type) { >+ DRM_ERROR("Memory type %u has not been initialized.\n", >+ mem_type); >+ return 0; >+ } >+ >+ ret = drm_bo_force_list_clean(dev, &man->lru, mem_type, 0, 1, 0); >+ if (ret) >+ return ret; >+ ret = drm_bo_force_list_clean(dev, &man->pinned, mem_type, 0, 1, 1); >+ >+ return ret; >+} >+ >+int drm_bo_init_mm(struct drm_device *dev, >+ unsigned type, >+ unsigned long p_offset, unsigned long p_size) >+{ >+ struct drm_buffer_manager *bm = &dev->bm; >+ int ret = -EINVAL; >+ struct drm_mem_type_manager *man; >+ >+ if (type >= DRM_BO_MEM_TYPES) { >+ DRM_ERROR("Illegal memory type %d\n", type); >+ return ret; >+ } >+ >+ man = &bm->man[type]; >+ if (man->has_type) { >+ DRM_ERROR("Memory manager already initialized for type %d\n", >+ type); >+ return ret; >+ } >+ >+ ret = dev->driver->bo_driver->init_mem_type(dev, type, man); >+ if (ret) >+ return ret; >+ >+ ret = 0; >+ if (type != DRM_BO_MEM_LOCAL) { >+ if (!p_size) { >+ DRM_ERROR("Zero size memory manager type %d\n", type); >+ return ret; >+ } >+ ret = drm_mm_init(&man->manager, p_offset, p_size); >+ if (ret) >+ return ret; >+ } >+ man->has_type = 1; >+ man->use_type = 1; >+ >+ INIT_LIST_HEAD(&man->lru); >+ INIT_LIST_HEAD(&man->pinned); >+ >+ return 0; >+} >+EXPORT_SYMBOL(drm_bo_init_mm); >+ >+/* >+ * This function is intended to be called on drm driver unload. >+ * If you decide to call it from lastclose, you must protect the call >+ * from a potentially racing drm_bo_driver_init in firstopen. >+ * (This may happen on X server restart). 
>+ */ >+ >+int drm_bo_driver_finish(struct drm_device *dev) >+{ >+ struct drm_buffer_manager *bm = &dev->bm; >+ int ret = 0; >+ unsigned i = DRM_BO_MEM_TYPES; >+ struct drm_mem_type_manager *man; >+ >+ mutex_lock(&dev->struct_mutex); >+ >+ if (!bm->initialized) >+ goto out; >+ bm->initialized = 0; >+ >+ while (i--) { >+ man = &bm->man[i]; >+ if (man->has_type) { >+ man->use_type = 0; >+ if ((i != DRM_BO_MEM_LOCAL) && drm_bo_clean_mm(dev, i)) { >+ ret = -EBUSY; >+ DRM_ERROR("DRM memory manager type %d " >+ "is not clean.\n", i); >+ } >+ man->has_type = 0; >+ } >+ } >+ mutex_unlock(&dev->struct_mutex); >+ >+ if (!cancel_delayed_work(&bm->wq)) >+ flush_scheduled_work(); >+ >+ mutex_lock(&dev->struct_mutex); >+ drm_bo_delayed_delete(dev, 1); >+ if (list_empty(&bm->ddestroy)) >+ DRM_DEBUG("Delayed destroy list was clean\n"); >+ >+ if (list_empty(&bm->man[0].lru)) >+ DRM_DEBUG("Swap list was clean\n"); >+ >+ if (list_empty(&bm->man[0].pinned)) >+ DRM_DEBUG("NO_MOVE list was clean\n"); >+ >+ if (list_empty(&bm->unfenced)) >+ DRM_DEBUG("Unfenced list was clean\n"); >+ >+#if (LINUX_VERSION_CODE < KERNEL_VERSION(2,6,15)) >+ ClearPageReserved(bm->dummy_read_page); >+#endif >+ __free_page(bm->dummy_read_page); >+ >+out: >+ mutex_unlock(&dev->struct_mutex); >+ return ret; >+} >+ >+/* >+ * This function is intended to be called on drm driver load. >+ * If you decide to call it from firstopen, you must protect the call >+ * from a potentially racing drm_bo_driver_finish in lastclose. >+ * (This may happen on X server restart). 
>+ */ >+ >+int drm_bo_driver_init(struct drm_device *dev) >+{ >+ struct drm_bo_driver *driver = dev->driver->bo_driver; >+ struct drm_buffer_manager *bm = &dev->bm; >+ int ret = -EINVAL; >+ >+ bm->dummy_read_page = NULL; >+ drm_bo_init_lock(&bm->bm_lock); >+ mutex_lock(&dev->struct_mutex); >+ if (!driver) >+ goto out_unlock; >+ >+ bm->dummy_read_page = alloc_page(__GFP_ZERO | GFP_DMA32); >+ if (!bm->dummy_read_page) { >+ ret = -ENOMEM; >+ goto out_unlock; >+ } >+ >+#if (LINUX_VERSION_CODE < KERNEL_VERSION(2,6,15)) >+ SetPageReserved(bm->dummy_read_page); >+#endif >+ >+ /* >+ * Initialize the system memory buffer type. >+ * Other types need to be driver / IOCTL initialized. >+ */ >+ ret = drm_bo_init_mm(dev, DRM_BO_MEM_LOCAL, 0, 0); >+ if (ret) >+ goto out_unlock; >+ >+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,20) >+ INIT_WORK(&bm->wq, &drm_bo_delayed_workqueue, dev); >+#else >+ INIT_DELAYED_WORK(&bm->wq, drm_bo_delayed_workqueue); >+#endif >+ bm->initialized = 1; >+ bm->nice_mode = 1; >+ atomic_set(&bm->count, 0); >+ bm->cur_pages = 0; >+ INIT_LIST_HEAD(&bm->unfenced); >+ INIT_LIST_HEAD(&bm->ddestroy); >+out_unlock: >+ mutex_unlock(&dev->struct_mutex); >+ return ret; >+} >+EXPORT_SYMBOL(drm_bo_driver_init); >+ >+int drm_mm_init_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) >+{ >+ struct drm_mm_init_arg *arg = data; >+ struct drm_buffer_manager *bm = &dev->bm; >+ struct drm_bo_driver *driver = dev->driver->bo_driver; >+ int ret; >+ >+ if (!driver) { >+ DRM_ERROR("Buffer objects are not supported by this driver\n"); >+ return -EINVAL; >+ } >+ >+ ret = drm_bo_write_lock(&bm->bm_lock, file_priv); >+ if (ret) >+ return ret; >+ >+ ret = -EINVAL; >+ if (arg->magic != DRM_BO_INIT_MAGIC) { >+ DRM_ERROR("You are using an old libdrm that is not compatible with\n" >+ "\tthe kernel DRM module. 
Please upgrade your libdrm.\n"); >+ return -EINVAL; >+ } >+ if (arg->major != DRM_BO_INIT_MAJOR) { >+ DRM_ERROR("libdrm and kernel DRM buffer object interface major\n" >+ "\tversion don't match. Got %d, expected %d.\n", >+ arg->major, DRM_BO_INIT_MAJOR); >+ return -EINVAL; >+ } >+ >+ mutex_lock(&dev->struct_mutex); >+ if (!bm->initialized) { >+ DRM_ERROR("DRM memory manager was not initialized.\n"); >+ goto out; >+ } >+ if (arg->mem_type == 0) { >+ DRM_ERROR("System memory buffers already initialized.\n"); >+ goto out; >+ } >+ ret = drm_bo_init_mm(dev, arg->mem_type, >+ arg->p_offset, arg->p_size); >+ >+out: >+ mutex_unlock(&dev->struct_mutex); >+ (void) drm_bo_write_unlock(&bm->bm_lock, file_priv); >+ >+ if (ret) >+ return ret; >+ >+ return 0; >+} >+ >+int drm_mm_takedown_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) >+{ >+ struct drm_mm_type_arg *arg = data; >+ struct drm_buffer_manager *bm = &dev->bm; >+ struct drm_bo_driver *driver = dev->driver->bo_driver; >+ int ret; >+ >+ if (!driver) { >+ DRM_ERROR("Buffer objects are not supported by this driver\n"); >+ return -EINVAL; >+ } >+ >+ ret = drm_bo_write_lock(&bm->bm_lock, file_priv); >+ if (ret) >+ return ret; >+ >+ mutex_lock(&dev->struct_mutex); >+ ret = -EINVAL; >+ if (!bm->initialized) { >+ DRM_ERROR("DRM memory manager was not initialized\n"); >+ goto out; >+ } >+ if (arg->mem_type == 0) { >+ DRM_ERROR("No takedown for System memory buffers.\n"); >+ goto out; >+ } >+ ret = 0; >+ if (drm_bo_clean_mm(dev, arg->mem_type)) { >+ DRM_ERROR("Memory manager type %d not clean. 
" >+ "Delaying takedown\n", arg->mem_type); >+ } >+out: >+ mutex_unlock(&dev->struct_mutex); >+ (void) drm_bo_write_unlock(&bm->bm_lock, file_priv); >+ >+ if (ret) >+ return ret; >+ >+ return 0; >+} >+ >+int drm_mm_lock_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) >+{ >+ struct drm_mm_type_arg *arg = data; >+ struct drm_bo_driver *driver = dev->driver->bo_driver; >+ int ret; >+ >+ if (!driver) { >+ DRM_ERROR("Buffer objects are not supported by this driver\n"); >+ return -EINVAL; >+ } >+ >+ if (arg->lock_flags & DRM_BO_LOCK_IGNORE_NO_EVICT) { >+ DRM_ERROR("Lock flag DRM_BO_LOCK_IGNORE_NO_EVICT not supported yet.\n"); >+ return -EINVAL; >+ } >+ >+ if (arg->lock_flags & DRM_BO_LOCK_UNLOCK_BM) { >+ ret = drm_bo_write_lock(&dev->bm.bm_lock, file_priv); >+ if (ret) >+ return ret; >+ } >+ >+ mutex_lock(&dev->struct_mutex); >+ ret = drm_bo_lock_mm(dev, arg->mem_type); >+ mutex_unlock(&dev->struct_mutex); >+ if (ret) { >+ (void) drm_bo_write_unlock(&dev->bm.bm_lock, file_priv); >+ return ret; >+ } >+ >+ return 0; >+} >+ >+int drm_mm_unlock_ioctl(struct drm_device *dev, >+ void *data, >+ struct drm_file *file_priv) >+{ >+ struct drm_mm_type_arg *arg = data; >+ struct drm_bo_driver *driver = dev->driver->bo_driver; >+ int ret; >+ >+ if (!driver) { >+ DRM_ERROR("Buffer objects are not supported by this driver\n"); >+ return -EINVAL; >+ } >+ >+ if (arg->lock_flags & DRM_BO_LOCK_UNLOCK_BM) { >+ ret = drm_bo_write_unlock(&dev->bm.bm_lock, file_priv); >+ if (ret) >+ return ret; >+ } >+ >+ return 0; >+} >+ >+/* >+ * buffer object vm functions. 
>+ */ >+ >+int drm_mem_reg_is_pci(struct drm_device *dev, struct drm_bo_mem_reg *mem) >+{ >+ struct drm_buffer_manager *bm = &dev->bm; >+ struct drm_mem_type_manager *man = &bm->man[mem->mem_type]; >+ >+ if (!(man->flags & _DRM_FLAG_MEMTYPE_FIXED)) { >+ if (mem->mem_type == DRM_BO_MEM_LOCAL) >+ return 0; >+ >+ if (man->flags & _DRM_FLAG_MEMTYPE_CMA) >+ return 0; >+ >+ if (mem->flags & DRM_BO_FLAG_CACHED) >+ return 0; >+ } >+ return 1; >+} >+EXPORT_SYMBOL(drm_mem_reg_is_pci); >+ >+/** >+ * \c Get the PCI offset for the buffer object memory. >+ * >+ * \param bo The buffer object. >+ * \param bus_base On return the base of the PCI region >+ * \param bus_offset On return the byte offset into the PCI region >+ * \param bus_size On return the byte size of the buffer object or zero if >+ * the buffer object memory is not accessible through a PCI region. >+ * \return Failure indication. >+ * >+ * Returns -EINVAL if the buffer object is currently not mappable. >+ * Otherwise returns zero. >+ */ >+ >+int drm_bo_pci_offset(struct drm_device *dev, >+ struct drm_bo_mem_reg *mem, >+ unsigned long *bus_base, >+ unsigned long *bus_offset, unsigned long *bus_size) >+{ >+ struct drm_buffer_manager *bm = &dev->bm; >+ struct drm_mem_type_manager *man = &bm->man[mem->mem_type]; >+ >+ *bus_size = 0; >+ if (!(man->flags & _DRM_FLAG_MEMTYPE_MAPPABLE)) >+ return -EINVAL; >+ >+ if (drm_mem_reg_is_pci(dev, mem)) { >+ *bus_offset = mem->mm_node->start << PAGE_SHIFT; >+ *bus_size = mem->num_pages << PAGE_SHIFT; >+ *bus_base = man->io_offset; >+ } >+ >+ return 0; >+} >+ >+/** >+ * \c Kill all user-space virtual mappings of this buffer object. >+ * >+ * \param bo The buffer object. >+ * >+ * Call bo->mutex locked. 
>+ */ >+ >+void drm_bo_unmap_virtual(struct drm_buffer_object *bo) >+{ >+ struct drm_device *dev = bo->dev; >+ loff_t offset = ((loff_t) bo->map_list.hash.key) << PAGE_SHIFT; >+ loff_t holelen = ((loff_t) bo->mem.num_pages) << PAGE_SHIFT; >+ >+ if (!dev->dev_mapping) >+ return; >+ >+ unmap_mapping_range(dev->dev_mapping, offset, holelen, 1); >+} >+ >+/** >+ * drm_bo_takedown_vm_locked: >+ * >+ * @bo: the buffer object to remove any drm device mapping >+ * >+ * Remove any associated vm mapping on the drm device node that >+ * would have been created for a drm_bo_type_device buffer >+ */ >+static void drm_bo_takedown_vm_locked(struct drm_buffer_object *bo) >+{ >+ struct drm_map_list *list; >+ drm_local_map_t *map; >+ struct drm_device *dev = bo->dev; >+ >+ DRM_ASSERT_LOCKED(&dev->struct_mutex); >+ if (bo->type != drm_bo_type_device) >+ return; >+ >+ list = &bo->map_list; >+ if (list->user_token) { >+ drm_ht_remove_item(&dev->map_hash, &list->hash); >+ list->user_token = 0; >+ } >+ if (list->file_offset_node) { >+ drm_mm_put_block(list->file_offset_node); >+ list->file_offset_node = NULL; >+ } >+ >+ map = list->map; >+ if (!map) >+ return; >+ >+ drm_ctl_free(map, sizeof(*map), DRM_MEM_BUFOBJ); >+ list->map = NULL; >+ list->user_token = 0ULL; >+ drm_bo_usage_deref_locked(&bo); >+} >+ >+/** >+ * drm_bo_setup_vm_locked: >+ * >+ * @bo: the buffer to allocate address space for >+ * >+ * Allocate address space in the drm device so that applications >+ * can mmap the buffer and access the contents. This only >+ * applies to drm_bo_type_device objects as others are not >+ * placed in the drm device address space. 
>+ */ >+static int drm_bo_setup_vm_locked(struct drm_buffer_object *bo) >+{ >+ struct drm_map_list *list = &bo->map_list; >+ drm_local_map_t *map; >+ struct drm_device *dev = bo->dev; >+ >+ DRM_ASSERT_LOCKED(&dev->struct_mutex); >+ list->map = drm_ctl_calloc(1, sizeof(*map), DRM_MEM_BUFOBJ); >+ if (!list->map) >+ return -ENOMEM; >+ >+ map = list->map; >+ map->offset = 0; >+ map->type = _DRM_TTM; >+ map->flags = _DRM_REMOVABLE; >+ map->size = bo->mem.num_pages * PAGE_SIZE; >+ atomic_inc(&bo->usage); >+ map->handle = (void *)bo; >+ >+ list->file_offset_node = drm_mm_search_free(&dev->offset_manager, >+ bo->mem.num_pages, 0, 0); >+ >+ if (!list->file_offset_node) { >+ drm_bo_takedown_vm_locked(bo); >+ return -ENOMEM; >+ } >+ >+ list->file_offset_node = drm_mm_get_block(list->file_offset_node, >+ bo->mem.num_pages, 0); >+ >+ list->hash.key = list->file_offset_node->start; >+ if (drm_ht_insert_item(&dev->map_hash, &list->hash)) { >+ drm_bo_takedown_vm_locked(bo); >+ return -ENOMEM; >+ } >+ >+ list->user_token = ((uint64_t) list->hash.key) << PAGE_SHIFT; >+ >+ return 0; >+} >+ >+int drm_bo_version_ioctl(struct drm_device *dev, void *data, >+ struct drm_file *file_priv) >+{ >+ struct drm_bo_version_arg *arg = (struct drm_bo_version_arg *)data; >+ >+ arg->major = DRM_BO_INIT_MAJOR; >+ arg->minor = DRM_BO_INIT_MINOR; >+ arg->patchlevel = DRM_BO_INIT_PATCH; >+ >+ return 0; >+} >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_bo_lock.c linux-2.6.23.i686/drivers/char/drm/drm_bo_lock.c >--- linux-2.6.23.i686.orig/drivers/char/drm/drm_bo_lock.c 1970-01-01 01:00:00.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/drm_bo_lock.c 2008-01-06 09:24:57.000000000 +0100 >@@ -0,0 +1,175 @@ >+/************************************************************************** >+ * >+ * Copyright (c) 2007 Tungsten Graphics, Inc., Cedar Park, TX., USA >+ * All Rights Reserved. 
>+ * >+ * Permission is hereby granted, free of charge, to any person obtaining a >+ * copy of this software and associated documentation files (the >+ * "Software"), to deal in the Software without restriction, including >+ * without limitation the rights to use, copy, modify, merge, publish, >+ * distribute, sub license, and/or sell copies of the Software, and to >+ * permit persons to whom the Software is furnished to do so, subject to >+ * the following conditions: >+ * >+ * The above copyright notice and this permission notice (including the >+ * next paragraph) shall be included in all copies or substantial portions >+ * of the Software. >+ * >+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR >+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, >+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL >+ * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, >+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR >+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE >+ * USE OR OTHER DEALINGS IN THE SOFTWARE. >+ * >+ **************************************************************************/ >+/* >+ * Authors: Thomas Hellström <thomas-at-tungstengraphics-dot-com> >+ */ >+ >+/* >+ * This file implements a simple replacement for the buffer manager use >+ * of the heavyweight hardware lock. >+ * The lock is a read-write lock. Taking it in read mode is fast, and >+ * intended for in-kernel use only. >+ * Taking it in write mode is slow. >+ * >+ * The write mode is used only when there is a need to block all >+ * user-space processes from allocating a >+ * new memory area. >+ * Typical use in write mode is X server VT switching, and it's allowed >+ * to leave kernel space with the write lock held. If a user-space process >+ * dies while having the write-lock, it will be released during the file >+ * descriptor release. 
>+ * >+ * The read lock is typically placed at the start of an IOCTL- or >+ * user-space callable function that may end up allocating a memory area. >+ * This includes setstatus, super-ioctls and no_pfn; the latter may move >+ * unmappable regions to mappable. It's a bug to leave kernel space with the >+ * read lock held. >+ * >+ * Both read- and write lock taking is interruptible for low signal-delivery >+ * latency. The locking functions will return -EAGAIN if interrupted by a >+ * signal. >+ * >+ * Locking order: The lock should be taken BEFORE any kernel mutexes >+ * or spinlocks. >+ */ >+ >+#include "drmP.h" >+ >+void drm_bo_init_lock(struct drm_bo_lock *lock) >+{ >+ DRM_INIT_WAITQUEUE(&lock->queue); >+ atomic_set(&lock->write_lock_pending, 0); >+ atomic_set(&lock->readers, 0); >+} >+ >+void drm_bo_read_unlock(struct drm_bo_lock *lock) >+{ >+ if (unlikely(atomic_add_negative(-1, &lock->readers))) >+ BUG(); >+ if (atomic_read(&lock->readers) == 0) >+ wake_up_interruptible(&lock->queue); >+} >+EXPORT_SYMBOL(drm_bo_read_unlock); >+ >+int drm_bo_read_lock(struct drm_bo_lock *lock) >+{ >+ while (unlikely(atomic_read(&lock->write_lock_pending) != 0)) { >+ int ret; >+ ret = wait_event_interruptible >+ (lock->queue, atomic_read(&lock->write_lock_pending) == 0); >+ if (ret) >+ return -EAGAIN; >+ } >+ >+ while (unlikely(!atomic_add_unless(&lock->readers, 1, -1))) { >+ int ret; >+ ret = wait_event_interruptible >+ (lock->queue, atomic_add_unless(&lock->readers, 1, -1)); >+ if (ret) >+ return -EAGAIN; >+ } >+ return 0; >+} >+EXPORT_SYMBOL(drm_bo_read_lock); >+ >+static int __drm_bo_write_unlock(struct drm_bo_lock *lock) >+{ >+ if (unlikely(atomic_cmpxchg(&lock->readers, -1, 0) != -1)) >+ return -EINVAL; >+ if (unlikely(atomic_cmpxchg(&lock->write_lock_pending, 1, 0) != 1)) >+ return -EINVAL; >+ wake_up_interruptible(&lock->queue); >+ return 0; >+} >+ >+static void drm_bo_write_lock_remove(struct drm_file *file_priv, >+ struct drm_user_object *item) >+{ >+ struct 
drm_bo_lock *lock = container_of(item, struct drm_bo_lock, base); >+ int ret; >+ >+ ret = __drm_bo_write_unlock(lock); >+ BUG_ON(ret); >+} >+ >+int drm_bo_write_lock(struct drm_bo_lock *lock, struct drm_file *file_priv) >+{ >+ int ret = 0; >+ struct drm_device *dev; >+ >+ if (unlikely(atomic_cmpxchg(&lock->write_lock_pending, 0, 1) != 0)) >+ return -EINVAL; >+ >+ while (unlikely(atomic_cmpxchg(&lock->readers, 0, -1) != 0)) { >+ ret = wait_event_interruptible >+ (lock->queue, atomic_cmpxchg(&lock->readers, 0, -1) == 0); >+ >+ if (ret) { >+ atomic_set(&lock->write_lock_pending, 0); >+ wake_up_interruptible(&lock->queue); >+ return -EAGAIN; >+ } >+ } >+ >+ /* >+ * Add a dummy user-object, the destructor of which will >+ * make sure the lock is released if the client dies >+ * while holding it. >+ */ >+ >+ dev = file_priv->head->dev; >+ mutex_lock(&dev->struct_mutex); >+ ret = drm_add_user_object(file_priv, &lock->base, 0); >+ lock->base.remove = &drm_bo_write_lock_remove; >+ lock->base.type = drm_lock_type; >+ if (ret) >+ (void)__drm_bo_write_unlock(lock); >+ >+ mutex_unlock(&dev->struct_mutex); >+ >+ return ret; >+} >+ >+int drm_bo_write_unlock(struct drm_bo_lock *lock, struct drm_file *file_priv) >+{ >+ struct drm_device *dev = file_priv->head->dev; >+ struct drm_ref_object *ro; >+ >+ mutex_lock(&dev->struct_mutex); >+ >+ if (lock->base.owner != file_priv) { >+ mutex_unlock(&dev->struct_mutex); >+ return -EINVAL; >+ } >+ ro = drm_lookup_ref_object(file_priv, &lock->base, _DRM_REF_USE); >+ BUG_ON(!ro); >+ drm_remove_ref_object(file_priv, ro); >+ lock->base.owner = NULL; >+ >+ mutex_unlock(&dev->struct_mutex); >+ return 0; >+} >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_bo_move.c linux-2.6.23.i686/drivers/char/drm/drm_bo_move.c >--- linux-2.6.23.i686.orig/drivers/char/drm/drm_bo_move.c 1970-01-01 01:00:00.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/drm_bo_move.c 2008-01-06 09:24:57.000000000 +0100 >@@ -0,0 +1,597 @@ 
>+/************************************************************************** >+ * >+ * Copyright (c) 2007 Tungsten Graphics, Inc., Cedar Park, TX., USA >+ * All Rights Reserved. >+ * >+ * Permission is hereby granted, free of charge, to any person obtaining a >+ * copy of this software and associated documentation files (the >+ * "Software"), to deal in the Software without restriction, including >+ * without limitation the rights to use, copy, modify, merge, publish, >+ * distribute, sub license, and/or sell copies of the Software, and to >+ * permit persons to whom the Software is furnished to do so, subject to >+ * the following conditions: >+ * >+ * The above copyright notice and this permission notice (including the >+ * next paragraph) shall be included in all copies or substantial portions >+ * of the Software. >+ * >+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR >+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, >+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL >+ * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, >+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR >+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE >+ * USE OR OTHER DEALINGS IN THE SOFTWARE. >+ * >+ **************************************************************************/ >+/* >+ * Authors: Thomas Hellström <thomas-at-tungstengraphics-dot-com> >+ */ >+ >+#include "drmP.h" >+ >+/** >+ * Free the old memory node unless it's a pinned region and we >+ * have not been requested to free also pinned regions. 
>+ */ >+ >+static void drm_bo_free_old_node(struct drm_buffer_object *bo) >+{ >+ struct drm_bo_mem_reg *old_mem = &bo->mem; >+ >+ if (old_mem->mm_node && (old_mem->mm_node != bo->pinned_node)) { >+ mutex_lock(&bo->dev->struct_mutex); >+ drm_mm_put_block(old_mem->mm_node); >+ old_mem->mm_node = NULL; >+ mutex_unlock(&bo->dev->struct_mutex); >+ } >+ old_mem->mm_node = NULL; >+} >+ >+int drm_bo_move_ttm(struct drm_buffer_object *bo, >+ int evict, int no_wait, struct drm_bo_mem_reg *new_mem) >+{ >+ struct drm_ttm *ttm = bo->ttm; >+ struct drm_bo_mem_reg *old_mem = &bo->mem; >+ uint64_t save_flags = old_mem->flags; >+ uint64_t save_proposed_flags = old_mem->proposed_flags; >+ int ret; >+ >+ if (old_mem->mem_type == DRM_BO_MEM_TT) { >+ if (evict) >+ drm_ttm_evict(ttm); >+ else >+ drm_ttm_unbind(ttm); >+ >+ drm_bo_free_old_node(bo); >+ DRM_FLAG_MASKED(old_mem->flags, >+ DRM_BO_FLAG_CACHED | DRM_BO_FLAG_MAPPABLE | >+ DRM_BO_FLAG_MEM_LOCAL, DRM_BO_MASK_MEMTYPE); >+ old_mem->mem_type = DRM_BO_MEM_LOCAL; >+ save_flags = old_mem->flags; >+ } >+ if (new_mem->mem_type != DRM_BO_MEM_LOCAL) { >+ ret = drm_ttm_bind(ttm, new_mem); >+ if (ret) >+ return ret; >+ } >+ >+ *old_mem = *new_mem; >+ new_mem->mm_node = NULL; >+ old_mem->proposed_flags = save_proposed_flags; >+ DRM_FLAG_MASKED(save_flags, new_mem->flags, DRM_BO_MASK_MEMTYPE); >+ return 0; >+} >+EXPORT_SYMBOL(drm_bo_move_ttm); >+ >+/** >+ * \c Return a kernel virtual address to the buffer object PCI memory. >+ * >+ * \param bo The buffer object. >+ * \return Failure indication. >+ * >+ * Returns -EINVAL if the buffer object is currently not mappable. >+ * Returns -ENOMEM if the ioremap operation failed. >+ * Otherwise returns zero. >+ * >+ * After a successful call, bo->iomap contains the virtual address, or NULL >+ * if the buffer object content is not accessible through PCI space. >+ * Call bo->mutex locked.
>+ */ >+ >+int drm_mem_reg_ioremap(struct drm_device *dev, struct drm_bo_mem_reg *mem, >+ void **virtual) >+{ >+ struct drm_buffer_manager *bm = &dev->bm; >+ struct drm_mem_type_manager *man = &bm->man[mem->mem_type]; >+ unsigned long bus_offset; >+ unsigned long bus_size; >+ unsigned long bus_base; >+ int ret; >+ void *addr; >+ >+ *virtual = NULL; >+ ret = drm_bo_pci_offset(dev, mem, &bus_base, &bus_offset, &bus_size); >+ if (ret || bus_size == 0) >+ return ret; >+ >+ if (!(man->flags & _DRM_FLAG_NEEDS_IOREMAP)) >+ addr = (void *)(((u8 *) man->io_addr) + bus_offset); >+ else { >+ addr = ioremap_nocache(bus_base + bus_offset, bus_size); >+ if (!addr) >+ return -ENOMEM; >+ } >+ *virtual = addr; >+ return 0; >+} >+EXPORT_SYMBOL(drm_mem_reg_ioremap); >+ >+/** >+ * \c Unmap mapping obtained using drm_bo_ioremap >+ * >+ * \param bo The buffer object. >+ * >+ * Call bo->mutex locked. >+ */ >+ >+void drm_mem_reg_iounmap(struct drm_device *dev, struct drm_bo_mem_reg *mem, >+ void *virtual) >+{ >+ struct drm_buffer_manager *bm; >+ struct drm_mem_type_manager *man; >+ >+ bm = &dev->bm; >+ man = &bm->man[mem->mem_type]; >+ >+ if (virtual && (man->flags & _DRM_FLAG_NEEDS_IOREMAP)) >+ iounmap(virtual); >+} >+ >+static int drm_copy_io_page(void *dst, void *src, unsigned long page) >+{ >+ uint32_t *dstP = >+ (uint32_t *) ((unsigned long)dst + (page << PAGE_SHIFT)); >+ uint32_t *srcP = >+ (uint32_t *) ((unsigned long)src + (page << PAGE_SHIFT)); >+ >+ int i; >+ for (i = 0; i < PAGE_SIZE / sizeof(uint32_t); ++i) >+ iowrite32(ioread32(srcP++), dstP++); >+ return 0; >+} >+ >+static int drm_copy_io_ttm_page(struct drm_ttm *ttm, void *src, >+ unsigned long page) >+{ >+ struct page *d = drm_ttm_get_page(ttm, page); >+ void *dst; >+ >+ if (!d) >+ return -ENOMEM; >+ >+ src = (void *)((unsigned long)src + (page << PAGE_SHIFT)); >+ dst = kmap(d); >+ if (!dst) >+ return -ENOMEM; >+ >+ memcpy_fromio(dst, src, PAGE_SIZE); >+ kunmap(d); >+ return 0; >+} >+ >+static int 
drm_copy_ttm_io_page(struct drm_ttm *ttm, void *dst, unsigned long page) >+{ >+ struct page *s = drm_ttm_get_page(ttm, page); >+ void *src; >+ >+ if (!s) >+ return -ENOMEM; >+ >+ dst = (void *)((unsigned long)dst + (page << PAGE_SHIFT)); >+ src = kmap(s); >+ if (!src) >+ return -ENOMEM; >+ >+ memcpy_toio(dst, src, PAGE_SIZE); >+ kunmap(s); >+ return 0; >+} >+ >+int drm_bo_move_memcpy(struct drm_buffer_object *bo, >+ int evict, int no_wait, struct drm_bo_mem_reg *new_mem) >+{ >+ struct drm_device *dev = bo->dev; >+ struct drm_mem_type_manager *man = &dev->bm.man[new_mem->mem_type]; >+ struct drm_ttm *ttm = bo->ttm; >+ struct drm_bo_mem_reg *old_mem = &bo->mem; >+ struct drm_bo_mem_reg old_copy = *old_mem; >+ void *old_iomap; >+ void *new_iomap; >+ int ret; >+ uint64_t save_flags = old_mem->flags; >+ uint64_t save_proposed_flags = old_mem->proposed_flags; >+ unsigned long i; >+ unsigned long page; >+ unsigned long add = 0; >+ int dir; >+ >+ ret = drm_mem_reg_ioremap(dev, old_mem, &old_iomap); >+ if (ret) >+ return ret; >+ ret = drm_mem_reg_ioremap(dev, new_mem, &new_iomap); >+ if (ret) >+ goto out; >+ >+ if (old_iomap == NULL && new_iomap == NULL) >+ goto out2; >+ if (old_iomap == NULL && ttm == NULL) >+ goto out2; >+ >+ add = 0; >+ dir = 1; >+ >+ if ((old_mem->mem_type == new_mem->mem_type) && >+ (new_mem->mm_node->start < >+ old_mem->mm_node->start + old_mem->mm_node->size)) { >+ dir = -1; >+ add = new_mem->num_pages - 1; >+ } >+ >+ for (i = 0; i < new_mem->num_pages; ++i) { >+ page = i * dir + add; >+ if (old_iomap == NULL) >+ ret = drm_copy_ttm_io_page(ttm, new_iomap, page); >+ else if (new_iomap == NULL) >+ ret = drm_copy_io_ttm_page(ttm, old_iomap, page); >+ else >+ ret = drm_copy_io_page(new_iomap, old_iomap, page); >+ if (ret) >+ goto out1; >+ } >+ mb(); >+out2: >+ drm_bo_free_old_node(bo); >+ >+ *old_mem = *new_mem; >+ new_mem->mm_node = NULL; >+ old_mem->proposed_flags = save_proposed_flags; >+ DRM_FLAG_MASKED(save_flags, new_mem->flags, 
DRM_BO_MASK_MEMTYPE); >+ >+ if ((man->flags & _DRM_FLAG_MEMTYPE_FIXED) && (ttm != NULL)) { >+ drm_ttm_unbind(ttm); >+ drm_ttm_destroy(ttm); >+ bo->ttm = NULL; >+ } >+ >+out1: >+ drm_mem_reg_iounmap(dev, new_mem, new_iomap); >+out: >+ drm_mem_reg_iounmap(dev, &old_copy, old_iomap); >+ return ret; >+} >+EXPORT_SYMBOL(drm_bo_move_memcpy); >+ >+/* >+ * Transfer a buffer object's memory and LRU status to a newly >+ * created object. User-space references remain with the old >+ * object. Call bo->mutex locked. >+ */ >+ >+int drm_buffer_object_transfer(struct drm_buffer_object *bo, >+ struct drm_buffer_object **new_obj) >+{ >+ struct drm_buffer_object *fbo; >+ struct drm_device *dev = bo->dev; >+ struct drm_buffer_manager *bm = &dev->bm; >+ >+ fbo = drm_ctl_calloc(1, sizeof(*fbo), DRM_MEM_BUFOBJ); >+ if (!fbo) >+ return -ENOMEM; >+ >+ *fbo = *bo; >+ mutex_init(&fbo->mutex); >+ mutex_lock(&fbo->mutex); >+ mutex_lock(&dev->struct_mutex); >+ >+ DRM_INIT_WAITQUEUE(&bo->event_queue); >+ INIT_LIST_HEAD(&fbo->ddestroy); >+ INIT_LIST_HEAD(&fbo->lru); >+ INIT_LIST_HEAD(&fbo->pinned_lru); >+#ifdef DRM_ODD_MM_COMPAT >+ INIT_LIST_HEAD(&fbo->vma_list); >+ INIT_LIST_HEAD(&fbo->p_mm_list); >+#endif >+ >+ fbo->fence = drm_fence_reference_locked(bo->fence); >+ fbo->pinned_node = NULL; >+ fbo->mem.mm_node->private = (void *)fbo; >+ atomic_set(&fbo->usage, 1); >+ atomic_inc(&bm->count); >+ mutex_unlock(&dev->struct_mutex); >+ mutex_unlock(&fbo->mutex); >+ >+ *new_obj = fbo; >+ return 0; >+} >+ >+/* >+ * Since move is underway, we need to block signals in this function. >+ * We cannot restart until it has finished.
>+ */ >+ >+int drm_bo_move_accel_cleanup(struct drm_buffer_object *bo, >+ int evict, int no_wait, uint32_t fence_class, >+ uint32_t fence_type, uint32_t fence_flags, >+ struct drm_bo_mem_reg *new_mem) >+{ >+ struct drm_device *dev = bo->dev; >+ struct drm_mem_type_manager *man = &dev->bm.man[new_mem->mem_type]; >+ struct drm_bo_mem_reg *old_mem = &bo->mem; >+ int ret; >+ uint64_t save_flags = old_mem->flags; >+ uint64_t save_proposed_flags = old_mem->proposed_flags; >+ struct drm_buffer_object *old_obj; >+ >+ if (bo->fence) >+ drm_fence_usage_deref_unlocked(&bo->fence); >+ ret = drm_fence_object_create(dev, fence_class, fence_type, >+ fence_flags | DRM_FENCE_FLAG_EMIT, >+ &bo->fence); >+ bo->fence_type = fence_type; >+ if (ret) >+ return ret; >+ >+#ifdef DRM_ODD_MM_COMPAT >+ /* >+ * In this mode, we don't allow pipelining a copy blit, >+ * since the buffer will be accessible from user space >+ * the moment we return and rebuild the page tables. >+ * >+ * With normal vm operation, page tables are rebuilt >+ * on demand using fault(), which waits for buffer idle. >+ */ >+ if (1) >+#else >+ if (evict || ((bo->mem.mm_node == bo->pinned_node) && >+ bo->mem.mm_node != NULL)) >+#endif >+ { >+ ret = drm_bo_wait(bo, 0, 1, 0); >+ if (ret) >+ return ret; >+ >+ drm_bo_free_old_node(bo); >+ >+ if ((man->flags & _DRM_FLAG_MEMTYPE_FIXED) && (bo->ttm != NULL)) { >+ drm_ttm_unbind(bo->ttm); >+ drm_ttm_destroy(bo->ttm); >+ bo->ttm = NULL; >+ } >+ } else { >+ >+ /* This should help pipeline ordinary buffer moves. >+ * >+ * Hang old buffer memory on a new buffer object, >+ * and leave it to be released when the GPU >+ * operation has completed. 
>+ */ >+ >+ ret = drm_buffer_object_transfer(bo, &old_obj); >+ >+ if (ret) >+ return ret; >+ >+ if (!(man->flags & _DRM_FLAG_MEMTYPE_FIXED)) >+ old_obj->ttm = NULL; >+ else >+ bo->ttm = NULL; >+ >+ mutex_lock(&dev->struct_mutex); >+ list_del_init(&old_obj->lru); >+ DRM_FLAG_MASKED(bo->priv_flags, 0, _DRM_BO_FLAG_UNFENCED); >+ drm_bo_add_to_lru(old_obj); >+ >+ drm_bo_usage_deref_locked(&old_obj); >+ mutex_unlock(&dev->struct_mutex); >+ >+ } >+ >+ *old_mem = *new_mem; >+ new_mem->mm_node = NULL; >+ old_mem->proposed_flags = save_proposed_flags; >+ DRM_FLAG_MASKED(save_flags, new_mem->flags, DRM_BO_MASK_MEMTYPE); >+ return 0; >+} >+EXPORT_SYMBOL(drm_bo_move_accel_cleanup); >+ >+int drm_bo_same_page(unsigned long offset, >+ unsigned long offset2) >+{ >+ return (offset & PAGE_MASK) == (offset2 & PAGE_MASK); >+} >+EXPORT_SYMBOL(drm_bo_same_page); >+ >+unsigned long drm_bo_offset_end(unsigned long offset, >+ unsigned long end) >+{ >+ offset = (offset + PAGE_SIZE) & PAGE_MASK; >+ return (end < offset) ? 
end : offset; >+} >+EXPORT_SYMBOL(drm_bo_offset_end); >+ >+static pgprot_t drm_kernel_io_prot(uint32_t map_type) >+{ >+ pgprot_t tmp = PAGE_KERNEL; >+ >+#if defined(__i386__) || defined(__x86_64__) >+#ifdef USE_PAT_WC >+#warning using pat >+ if (drm_use_pat() && map_type == _DRM_TTM) { >+ pgprot_val(tmp) |= _PAGE_PAT; >+ return tmp; >+ } >+#endif >+ if (boot_cpu_data.x86 > 3 && map_type != _DRM_AGP) { >+ pgprot_val(tmp) |= _PAGE_PCD; >+ pgprot_val(tmp) &= ~_PAGE_PWT; >+ } >+#elif defined(__powerpc__) >+ pgprot_val(tmp) |= _PAGE_NO_CACHE; >+ if (map_type == _DRM_REGISTERS) >+ pgprot_val(tmp) |= _PAGE_GUARDED; >+#endif >+#if defined(__ia64__) >+ if (map_type == _DRM_TTM) >+ tmp = pgprot_writecombine(tmp); >+ else >+ tmp = pgprot_noncached(tmp); >+#endif >+ return tmp; >+} >+ >+static int drm_bo_ioremap(struct drm_buffer_object *bo, unsigned long bus_base, >+ unsigned long bus_offset, unsigned long bus_size, >+ struct drm_bo_kmap_obj *map) >+{ >+ struct drm_device *dev = bo->dev; >+ struct drm_bo_mem_reg *mem = &bo->mem; >+ struct drm_mem_type_manager *man = &dev->bm.man[mem->mem_type]; >+ >+ if (!(man->flags & _DRM_FLAG_NEEDS_IOREMAP)) { >+ map->bo_kmap_type = bo_map_premapped; >+ map->virtual = (void *)(((u8 *) man->io_addr) + bus_offset); >+ } else { >+ map->bo_kmap_type = bo_map_iomap; >+ map->virtual = ioremap_nocache(bus_base + bus_offset, bus_size); >+ } >+ return (!map->virtual) ? -ENOMEM : 0; >+} >+ >+static int drm_bo_kmap_ttm(struct drm_buffer_object *bo, >+ unsigned long start_page, unsigned long num_pages, >+ struct drm_bo_kmap_obj *map) >+{ >+ struct drm_device *dev = bo->dev; >+ struct drm_bo_mem_reg *mem = &bo->mem; >+ struct drm_mem_type_manager *man = &dev->bm.man[mem->mem_type]; >+ pgprot_t prot; >+ struct drm_ttm *ttm = bo->ttm; >+ struct page *d; >+ int i; >+ >+ BUG_ON(!ttm); >+ >+ if (num_pages == 1 && (mem->flags & DRM_BO_FLAG_CACHED)) { >+ >+ /* >+ * We're mapping a single page, and the desired >+ * page protection is consistent with the bo. 
>+ */ >+ >+ map->bo_kmap_type = bo_map_kmap; >+ map->page = drm_ttm_get_page(ttm, start_page); >+ map->virtual = kmap(map->page); >+ } else { >+ /* >+ * Populate the part we're mapping. >+ */ >+ >+ for (i = start_page; i < start_page + num_pages; ++i) { >+ d = drm_ttm_get_page(ttm, i); >+ if (!d) >+ return -ENOMEM; >+ } >+ >+ /* >+ * We need to use vmap to get the desired page protection >+ * or to make the buffer object look contiguous. >+ */ >+ >+ prot = (mem->flags & DRM_BO_FLAG_CACHED) ? >+ PAGE_KERNEL : >+ drm_kernel_io_prot(man->drm_bus_maptype); >+ map->bo_kmap_type = bo_map_vmap; >+ map->virtual = vmap(ttm->pages + start_page, >+ num_pages, 0, prot); >+ } >+ return (!map->virtual) ? -ENOMEM : 0; >+} >+ >+/* >+ * This function is to be used for kernel mapping of buffer objects. >+ * It chooses the appropriate mapping method depending on the memory type >+ * and caching policy the buffer currently has. >+ * Mapping multiple pages or buffers that live in io memory is a bit slow and >+ * consumes vmalloc space. Be restrictive with such mappings. >+ * Mapping single pages usually returns the logical kernel address, >+ * (which is fast) >+ * but may use slower temporary mappings for high memory pages or >+ * uncached / write-combined pages. >+ * >+ * The function fills in a drm_bo_kmap_obj which can be used to return the >+ * kernel virtual address of the buffer. >+ * >+ * Code servicing a non-privileged user request is only allowed to map one >+ * page at a time. We might need to implement a better scheme to stop such >+ * processes from consuming all vmalloc space.
>+ */ >+ >+int drm_bo_kmap(struct drm_buffer_object *bo, unsigned long start_page, >+ unsigned long num_pages, struct drm_bo_kmap_obj *map) >+{ >+ int ret; >+ unsigned long bus_base; >+ unsigned long bus_offset; >+ unsigned long bus_size; >+ >+ map->virtual = NULL; >+ >+ if (num_pages > bo->num_pages) >+ return -EINVAL; >+ if (start_page > bo->num_pages) >+ return -EINVAL; >+#if 0 >+ if (num_pages > 1 && !DRM_SUSER(DRM_CURPROC)) >+ return -EPERM; >+#endif >+ ret = drm_bo_pci_offset(bo->dev, &bo->mem, &bus_base, >+ &bus_offset, &bus_size); >+ >+ if (ret) >+ return ret; >+ >+ if (bus_size == 0) { >+ return drm_bo_kmap_ttm(bo, start_page, num_pages, map); >+ } else { >+ bus_offset += start_page << PAGE_SHIFT; >+ bus_size = num_pages << PAGE_SHIFT; >+ return drm_bo_ioremap(bo, bus_base, bus_offset, bus_size, map); >+ } >+} >+EXPORT_SYMBOL(drm_bo_kmap); >+ >+void drm_bo_kunmap(struct drm_bo_kmap_obj *map) >+{ >+ if (!map->virtual) >+ return; >+ >+ switch (map->bo_kmap_type) { >+ case bo_map_iomap: >+ iounmap(map->virtual); >+ break; >+ case bo_map_vmap: >+ vunmap(map->virtual); >+ break; >+ case bo_map_kmap: >+ kunmap(map->page); >+ break; >+ case bo_map_premapped: >+ break; >+ default: >+ BUG(); >+ } >+ map->virtual = NULL; >+ map->page = NULL; >+} >+EXPORT_SYMBOL(drm_bo_kunmap); >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_bufs.c linux-2.6.23.i686/drivers/char/drm/drm_bufs.c >--- linux-2.6.23.i686.orig/drivers/char/drm/drm_bufs.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/drm_bufs.c 2008-01-06 09:24:57.000000000 +0100 >@@ -46,11 +46,9 @@ unsigned long drm_get_resource_len(struc > { > return pci_resource_len(dev->pdev, resource); > } >- > EXPORT_SYMBOL(drm_get_resource_len); > >-struct drm_map_list *drm_find_matching_map(struct drm_device *dev, >- drm_local_map_t *map) >+struct drm_map_list *drm_find_matching_map(struct drm_device *dev, drm_local_map_t *map) > { > struct drm_map_list *entry; > list_for_each_entry(entry, 
&dev->maplist, head) { >@@ -69,6 +67,7 @@ static int drm_map_handle(struct drm_dev > unsigned long user_token, int hashed_handle) > { > int use_hashed_handle; >+ > #if (BITS_PER_LONG == 64) > use_hashed_handle = ((user_token & 0xFFFFFFFF00000000UL) || hashed_handle); > #elif (BITS_PER_LONG == 32) >@@ -102,10 +101,10 @@ static int drm_map_handle(struct drm_dev > * type. Adds the map to the map list drm_device::maplist. Adds MTRR's where > * applicable and if supported by the kernel. > */ >-static int drm_addmap_core(struct drm_device * dev, unsigned int offset, >+static int drm_addmap_core(struct drm_device *dev, unsigned int offset, > unsigned int size, enum drm_map_type type, > enum drm_map_flags flags, >- struct drm_map_list ** maplist) >+ struct drm_map_list **maplist) > { > struct drm_map *map; > struct drm_map_list *list; >@@ -143,7 +142,7 @@ static int drm_addmap_core(struct drm_de > case _DRM_REGISTERS: > case _DRM_FRAME_BUFFER: > #if !defined(__sparc__) && !defined(__alpha__) && !defined(__ia64__) && !defined(__powerpc64__) && !defined(__x86_64__) >- if (map->offset + (map->size-1) < map->offset || >+ if (map->offset + (map->size - 1) < map->offset || > map->offset < virt_to_phys(high_memory)) { > drm_free(map, sizeof(*map), DRM_MEM_MAPS); > return -EINVAL; >@@ -185,15 +184,14 @@ static int drm_addmap_core(struct drm_de > return -ENOMEM; > } > } >- > break; > case _DRM_SHM: > list = drm_find_matching_map(dev, map); > if (list != NULL) { > if(list->map->size != map->size) { > DRM_DEBUG("Matching maps of type %d with " >- "mismatched sizes, (%ld vs %ld)\n", >- map->type, map->size, list->map->size); >+ "mismatched sizes, (%ld vs %ld)\n", >+ map->type, map->size, list->map->size); > list->map->size = map->size; > } > >@@ -230,11 +228,17 @@ static int drm_addmap_core(struct drm_de > #ifdef __alpha__ > map->offset += dev->hose->mem_space->start; > #endif >- /* Note: dev->agp->base may actually be 0 when the DRM >- * is not in control of AGP space. 
But if user space is >- * it should already have added the AGP base itself. >+ /* In some cases (i810 driver), user space may have already >+ * added the AGP base itself, because dev->agp->base previously >+ * only got set during AGP enable. So, only add the base >+ * address if the map's offset isn't already within the >+ * aperture. > */ >- map->offset += dev->agp->base; >+ if (map->offset < dev->agp->base || >+ map->offset > dev->agp->base + >+ dev->agp->agp_info.aper_size * 1024 * 1024 - 1) { >+ map->offset += dev->agp->base; >+ } > map->mtrr = dev->agp->agp_mtrr; /* for getmap */ > > /* This assumes the DRM is in total control of AGP space. >@@ -255,7 +259,6 @@ static int drm_addmap_core(struct drm_de > return -EPERM; > } > DRM_DEBUG("AGP offset = 0x%08lx, size = 0x%08lx\n", map->offset, map->size); >- > break; > } > case _DRM_SCATTER_GATHER: >@@ -298,10 +301,11 @@ static int drm_addmap_core(struct drm_de > list_add(&list->head, &dev->maplist); > > /* Assign a 32-bit handle */ >- /* We do it here so that dev->struct_mutex protects the increment */ >- user_token = (map->type == _DRM_SHM) ? (unsigned long)map->handle : >+ >+ user_token = (map->type == _DRM_SHM) ? (unsigned long) map->handle : > map->offset; > ret = drm_map_handle(dev, &list->hash, user_token, 0); >+ > if (ret) { > if (map->type == _DRM_REGISTERS) > iounmap(map->handle); >@@ -316,9 +320,9 @@ static int drm_addmap_core(struct drm_de > > *maplist = list; > return 0; >- } >+} > >-int drm_addmap(struct drm_device * dev, unsigned int offset, >+int drm_addmap(struct drm_device *dev, unsigned int offset, > unsigned int size, enum drm_map_type type, > enum drm_map_flags flags, drm_local_map_t ** map_ptr) > { >@@ -391,6 +395,10 @@ int drm_rmmap_locked(struct drm_device * > if (!found) > return -EINVAL; > >+ /* List has wrapped around to the head pointer, or it's empty and we >+ * didn't find anything. 
>+ */ >+ > switch (map->type) { > case _DRM_REGISTERS: > iounmap(map->handle); >@@ -414,11 +422,14 @@ int drm_rmmap_locked(struct drm_device * > dmah.size = map->size; > __drm_pci_free(dev, &dmah); > break; >+ case _DRM_TTM: >+ BUG_ON(1); > } > drm_free(map, sizeof(*map), DRM_MEM_MAPS); > > return 0; > } >+EXPORT_SYMBOL(drm_rmmap_locked); > > int drm_rmmap(struct drm_device *dev, drm_local_map_t *map) > { >@@ -488,8 +499,8 @@ int drm_rmmap_ioctl(struct drm_device *d > * > * Frees any pages and buffers associated with the given entry. > */ >-static void drm_cleanup_buf_error(struct drm_device * dev, >- struct drm_buf_entry * entry) >+static void drm_cleanup_buf_error(struct drm_device *dev, >+ struct drm_buf_entry *entry) > { > int i; > >@@ -534,7 +545,7 @@ static void drm_cleanup_buf_error(struct > * reallocates the buffer list of the same size order to accommodate the new > * buffers. > */ >-int drm_addbufs_agp(struct drm_device * dev, struct drm_buf_desc * request) >+int drm_addbufs_agp(struct drm_device *dev, struct drm_buf_desc *request) > { > struct drm_device_dma *dma = dev->dma; > struct drm_buf_entry *entry; >@@ -704,7 +715,7 @@ int drm_addbufs_agp(struct drm_device * > EXPORT_SYMBOL(drm_addbufs_agp); > #endif /* __OS_HAS_AGP */ > >-int drm_addbufs_pci(struct drm_device * dev, struct drm_buf_desc * request) >+int drm_addbufs_pci(struct drm_device *dev, struct drm_buf_desc *request) > { > struct drm_device_dma *dma = dev->dma; > int count; >@@ -816,9 +827,9 @@ int drm_addbufs_pci(struct drm_device * > page_count = 0; > > while (entry->buf_count < count) { >- >+ > dmah = drm_pci_alloc(dev, PAGE_SIZE << page_order, 0x1000, 0xfffffffful); >- >+ > if (!dmah) { > /* Set count correctly so we free the proper amount. 
*/ > entry->buf_count = count; >@@ -930,7 +941,7 @@ int drm_addbufs_pci(struct drm_device * > } > EXPORT_SYMBOL(drm_addbufs_pci); > >-static int drm_addbufs_sg(struct drm_device * dev, struct drm_buf_desc * request) >+static int drm_addbufs_sg(struct drm_device *dev, struct drm_buf_desc *request) > { > struct drm_device_dma *dma = dev->dma; > struct drm_buf_entry *entry; >@@ -1092,7 +1103,7 @@ static int drm_addbufs_sg(struct drm_dev > return 0; > } > >-static int drm_addbufs_fb(struct drm_device * dev, struct drm_buf_desc * request) >+int drm_addbufs_fb(struct drm_device *dev, struct drm_buf_desc *request) > { > struct drm_device_dma *dma = dev->dma; > struct drm_buf_entry *entry; >@@ -1251,6 +1262,7 @@ static int drm_addbufs_fb(struct drm_dev > atomic_dec(&dev->buf_alloc); > return 0; > } >+EXPORT_SYMBOL(drm_addbufs_fb); > > > /** >@@ -1594,5 +1606,3 @@ int drm_order(unsigned long size) > return order; > } > EXPORT_SYMBOL(drm_order); >- >- >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_compat.c linux-2.6.23.i686/drivers/char/drm/drm_compat.c >--- linux-2.6.23.i686.orig/drivers/char/drm/drm_compat.c 1970-01-01 01:00:00.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/drm_compat.c 2008-01-06 09:24:57.000000000 +0100 >@@ -0,0 +1,731 @@ >+/************************************************************************** >+ * >+ * This kernel module is free software; you can redistribute it and/or >+ * modify it under the terms of the GNU General Public License as >+ * published by the Free Software Foundation; either version 2 of the >+ * License, or (at your option) any later version. >+ * >+ * This program is distributed in the hope that it will be useful, but >+ * WITHOUT ANY WARRANTY; without even the implied warranty of >+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU >+ * General Public License for more details. 
>+ * >+ * You should have received a copy of the GNU General Public License >+ * along with this program; if not, write to the Free Software >+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. >+ * >+ **************************************************************************/ >+/* >+ * This code provides access to unexported mm kernel features. It is necessary >+ * to use the new DRM memory manager code with kernels that don't support it >+ * directly. >+ * >+ * Authors: Thomas Hellstrom <thomas-at-tungstengraphics-dot-com> >+ * Linux kernel mm subsystem authors. >+ * (Most code taken from there). >+ */ >+ >+#include "drmP.h" >+ >+#if defined(CONFIG_X86) && (LINUX_VERSION_CODE < KERNEL_VERSION(2,6,15)) >+ >+/* >+ * These have bad performance in the AGP module for the indicated kernel versions. >+ */ >+ >+int drm_map_page_into_agp(struct page *page) >+{ >+ int i; >+ i = change_page_attr(page, 1, PAGE_KERNEL_NOCACHE); >+ /* Caller's responsibility to call global_flush_tlb() for >+ * performance reasons */ >+ return i; >+} >+ >+int drm_unmap_page_from_agp(struct page *page) >+{ >+ int i; >+ i = change_page_attr(page, 1, PAGE_KERNEL); >+ /* Caller's responsibility to call global_flush_tlb() for >+ * performance reasons */ >+ return i; >+} >+#endif >+ >+ >+#if (LINUX_VERSION_CODE < KERNEL_VERSION(2,6,19)) >+ >+/* >+ * The protection map was exported in 2.6.19 >+ */ >+ >+pgprot_t vm_get_page_prot(unsigned long vm_flags) >+{ >+#ifdef MODULE >+ static pgprot_t drm_protection_map[16] = { >+ __P000, __P001, __P010, __P011, __P100, __P101, __P110, __P111, >+ __S000, __S001, __S010, __S011, __S100, __S101, __S110, __S111 >+ }; >+ >+ return drm_protection_map[vm_flags & 0x0F]; >+#else >+ extern pgprot_t protection_map[]; >+ return protection_map[vm_flags & 0x0F]; >+#endif >+}; >+#endif >+ >+ >+#if (LINUX_VERSION_CODE < KERNEL_VERSION(2,6,15)) >+ >+/* >+ * vm code for kernels below 2.6.15 in which version a major vm write >+ * occurred.
This implement a simple straightforward >+ * version similar to what's going to be >+ * in kernel 2.6.19+ >+ * Kernels below 2.6.15 use nopage whereas 2.6.19 and upwards use >+ * nopfn. >+ */ >+ >+static struct { >+ spinlock_t lock; >+ struct page *dummy_page; >+ atomic_t present; >+} drm_np_retry = >+{SPIN_LOCK_UNLOCKED, NOPAGE_OOM, ATOMIC_INIT(0)}; >+ >+ >+static struct page *drm_bo_vm_fault(struct vm_area_struct *vma, >+ struct fault_data *data); >+ >+ >+struct page * get_nopage_retry(void) >+{ >+ if (atomic_read(&drm_np_retry.present) == 0) { >+ struct page *page = alloc_page(GFP_KERNEL); >+ if (!page) >+ return NOPAGE_OOM; >+ spin_lock(&drm_np_retry.lock); >+ drm_np_retry.dummy_page = page; >+ atomic_set(&drm_np_retry.present,1); >+ spin_unlock(&drm_np_retry.lock); >+ } >+ get_page(drm_np_retry.dummy_page); >+ return drm_np_retry.dummy_page; >+} >+ >+void free_nopage_retry(void) >+{ >+ if (atomic_read(&drm_np_retry.present) == 1) { >+ spin_lock(&drm_np_retry.lock); >+ __free_page(drm_np_retry.dummy_page); >+ drm_np_retry.dummy_page = NULL; >+ atomic_set(&drm_np_retry.present, 0); >+ spin_unlock(&drm_np_retry.lock); >+ } >+} >+ >+struct page *drm_bo_vm_nopage(struct vm_area_struct *vma, >+ unsigned long address, >+ int *type) >+{ >+ struct fault_data data; >+ >+ if (type) >+ *type = VM_FAULT_MINOR; >+ >+ data.address = address; >+ data.vma = vma; >+ drm_bo_vm_fault(vma, &data); >+ switch (data.type) { >+ case VM_FAULT_OOM: >+ return NOPAGE_OOM; >+ case VM_FAULT_SIGBUS: >+ return NOPAGE_SIGBUS; >+ default: >+ break; >+ } >+ >+ return NOPAGE_REFAULT; >+} >+ >+#endif >+ >+#if !defined(DRM_FULL_MM_COMPAT) && \ >+ ((LINUX_VERSION_CODE < KERNEL_VERSION(2,6,15)) || \ >+ (LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,19))) >+ >+static int drm_pte_is_clear(struct vm_area_struct *vma, >+ unsigned long addr) >+{ >+ struct mm_struct *mm = vma->vm_mm; >+ int ret = 1; >+ pte_t *pte; >+ pmd_t *pmd; >+ pud_t *pud; >+ pgd_t *pgd; >+ >+ spin_lock(&mm->page_table_lock); >+ pgd = 
pgd_offset(mm, addr); >+ if (pgd_none(*pgd)) >+ goto unlock; >+ pud = pud_offset(pgd, addr); >+ if (pud_none(*pud)) >+ goto unlock; >+ pmd = pmd_offset(pud, addr); >+ if (pmd_none(*pmd)) >+ goto unlock; >+ pte = pte_offset_map(pmd, addr); >+ if (!pte) >+ goto unlock; >+ ret = pte_none(*pte); >+ pte_unmap(pte); >+ unlock: >+ spin_unlock(&mm->page_table_lock); >+ return ret; >+} >+ >+static int vm_insert_pfn(struct vm_area_struct *vma, unsigned long addr, >+ unsigned long pfn) >+{ >+ int ret; >+ if (!drm_pte_is_clear(vma, addr)) >+ return -EBUSY; >+ >+ ret = io_remap_pfn_range(vma, addr, pfn, PAGE_SIZE, vma->vm_page_prot); >+ return ret; >+} >+ >+ >+static struct page *drm_bo_vm_fault(struct vm_area_struct *vma, >+ struct fault_data *data) >+{ >+ unsigned long address = data->address; >+ struct drm_buffer_object *bo = (struct drm_buffer_object *) vma->vm_private_data; >+ unsigned long page_offset; >+ struct page *page = NULL; >+ struct drm_ttm *ttm; >+ struct drm_device *dev; >+ unsigned long pfn; >+ int err; >+ unsigned long bus_base; >+ unsigned long bus_offset; >+ unsigned long bus_size; >+ >+ dev = bo->dev; >+ while(drm_bo_read_lock(&dev->bm.bm_lock)); >+ >+ mutex_lock(&bo->mutex); >+ >+ err = drm_bo_wait(bo, 0, 1, 0); >+ if (err) { >+ data->type = (err == -EAGAIN) ? >+ VM_FAULT_MINOR : VM_FAULT_SIGBUS; >+ goto out_unlock; >+ } >+ >+ >+ /* >+ * If buffer happens to be in a non-mappable location, >+ * move it to a mappable. 
>+ */ >+ >+ if (!(bo->mem.flags & DRM_BO_FLAG_MAPPABLE)) { >+ unsigned long _end = jiffies + 3*DRM_HZ; >+ uint32_t new_mask = bo->mem.mask | >+ DRM_BO_FLAG_MAPPABLE | >+ DRM_BO_FLAG_FORCE_MAPPABLE; >+ >+ do { >+ err = drm_bo_move_buffer(bo, new_mask, 0, 0); >+ } while((err == -EAGAIN) && !time_after_eq(jiffies, _end)); >+ >+ if (err) { >+ DRM_ERROR("Timeout moving buffer to mappable location.\n"); >+ data->type = VM_FAULT_SIGBUS; >+ goto out_unlock; >+ } >+ } >+ >+ if (address > vma->vm_end) { >+ data->type = VM_FAULT_SIGBUS; >+ goto out_unlock; >+ } >+ >+ dev = bo->dev; >+ err = drm_bo_pci_offset(dev, &bo->mem, &bus_base, &bus_offset, >+ &bus_size); >+ >+ if (err) { >+ data->type = VM_FAULT_SIGBUS; >+ goto out_unlock; >+ } >+ >+ page_offset = (address - vma->vm_start) >> PAGE_SHIFT; >+ >+ if (bus_size) { >+ struct drm_mem_type_manager *man = &dev->bm.man[bo->mem.mem_type]; >+ >+ pfn = ((bus_base + bus_offset) >> PAGE_SHIFT) + page_offset; >+ vma->vm_page_prot = drm_io_prot(man->drm_bus_maptype, vma); >+ } else { >+ ttm = bo->ttm; >+ >+ drm_ttm_fixup_caching(ttm); >+ page = drm_ttm_get_page(ttm, page_offset); >+ if (!page) { >+ data->type = VM_FAULT_OOM; >+ goto out_unlock; >+ } >+ pfn = page_to_pfn(page); >+ vma->vm_page_prot = (bo->mem.flags & DRM_BO_FLAG_CACHED) ? 
>+ vm_get_page_prot(vma->vm_flags) : >+ drm_io_prot(_DRM_TTM, vma); >+ } >+ >+ err = vm_insert_pfn(vma, address, pfn); >+ >+ if (!err || err == -EBUSY) >+ data->type = VM_FAULT_MINOR; >+ else >+ data->type = VM_FAULT_OOM; >+out_unlock: >+ mutex_unlock(&bo->mutex); >+ drm_bo_read_unlock(&dev->bm.bm_lock); >+ return NULL; >+} >+ >+#endif >+ >+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,19)) && \ >+ !defined(DRM_FULL_MM_COMPAT) >+ >+/** >+ */ >+ >+unsigned long drm_bo_vm_nopfn(struct vm_area_struct * vma, >+ unsigned long address) >+{ >+ struct fault_data data; >+ data.address = address; >+ >+ (void) drm_bo_vm_fault(vma, &data); >+ if (data.type == VM_FAULT_OOM) >+ return NOPFN_OOM; >+ else if (data.type == VM_FAULT_SIGBUS) >+ return NOPFN_SIGBUS; >+ >+ /* >+ * pfn already set. >+ */ >+ >+ return 0; >+} >+#endif >+ >+ >+#ifdef DRM_ODD_MM_COMPAT >+ >+/* >+ * VM compatibility code for 2.6.15-2.6.18. This code implements a complicated >+ * workaround for a single BUG statement in do_no_page in these versions. The >+ * tricky thing is that we need to take the mmap_sem in exclusive mode for _all_ >+ * vmas mapping the ttm, before dev->struct_mutex is taken. The way we do this is to >+ * first take the dev->struct_mutex, and then trylock all mmap_sems. If this >+ * fails for a single mmap_sem, we have to release all sems and the dev->struct_mutex, >+ * release the cpu and retry. We also need to keep track of all vmas mapping the ttm. >+ * phew.
>+ */ >+ >+typedef struct p_mm_entry { >+ struct list_head head; >+ struct mm_struct *mm; >+ atomic_t refcount; >+ int locked; >+} p_mm_entry_t; >+ >+typedef struct vma_entry { >+ struct list_head head; >+ struct vm_area_struct *vma; >+} vma_entry_t; >+ >+ >+struct page *drm_bo_vm_nopage(struct vm_area_struct *vma, >+ unsigned long address, >+ int *type) >+{ >+ struct drm_buffer_object *bo = (struct drm_buffer_object *) vma->vm_private_data; >+ unsigned long page_offset; >+ struct page *page; >+ struct drm_ttm *ttm; >+ struct drm_device *dev; >+ >+ mutex_lock(&bo->mutex); >+ >+ if (type) >+ *type = VM_FAULT_MINOR; >+ >+ if (address > vma->vm_end) { >+ page = NOPAGE_SIGBUS; >+ goto out_unlock; >+ } >+ >+ dev = bo->dev; >+ >+ if (drm_mem_reg_is_pci(dev, &bo->mem)) { >+ DRM_ERROR("Invalid compat nopage.\n"); >+ page = NOPAGE_SIGBUS; >+ goto out_unlock; >+ } >+ >+ ttm = bo->ttm; >+ drm_ttm_fixup_caching(ttm); >+ page_offset = (address - vma->vm_start) >> PAGE_SHIFT; >+ page = drm_ttm_get_page(ttm, page_offset); >+ if (!page) { >+ page = NOPAGE_OOM; >+ goto out_unlock; >+ } >+ >+ get_page(page); >+out_unlock: >+ mutex_unlock(&bo->mutex); >+ return page; >+} >+ >+ >+ >+ >+int drm_bo_map_bound(struct vm_area_struct *vma) >+{ >+ struct drm_buffer_object *bo = (struct drm_buffer_object *)vma->vm_private_data; >+ int ret = 0; >+ unsigned long bus_base; >+ unsigned long bus_offset; >+ unsigned long bus_size; >+ >+ ret = drm_bo_pci_offset(bo->dev, &bo->mem, &bus_base, >+ &bus_offset, &bus_size); >+ BUG_ON(ret); >+ >+ if (bus_size) { >+ struct drm_mem_type_manager *man = &bo->dev->bm.man[bo->mem.mem_type]; >+ unsigned long pfn = (bus_base + bus_offset) >> PAGE_SHIFT; >+ pgprot_t pgprot = drm_io_prot(man->drm_bus_maptype, vma); >+ ret = io_remap_pfn_range(vma, vma->vm_start, pfn, >+ vma->vm_end - vma->vm_start, >+ pgprot); >+ } >+ >+ return ret; >+} >+ >+ >+int drm_bo_add_vma(struct drm_buffer_object * bo, struct vm_area_struct *vma) >+{ >+ p_mm_entry_t *entry, *n_entry; >+ 
vma_entry_t *v_entry; >+ struct mm_struct *mm = vma->vm_mm; >+ >+ v_entry = drm_ctl_alloc(sizeof(*v_entry), DRM_MEM_BUFOBJ); >+ if (!v_entry) { >+ DRM_ERROR("Allocation of vma pointer entry failed\n"); >+ return -ENOMEM; >+ } >+ v_entry->vma = vma; >+ >+ list_add_tail(&v_entry->head, &bo->vma_list); >+ >+ list_for_each_entry(entry, &bo->p_mm_list, head) { >+ if (mm == entry->mm) { >+ atomic_inc(&entry->refcount); >+ return 0; >+ } else if ((unsigned long)mm < (unsigned long)entry->mm) ; >+ } >+ >+ n_entry = drm_ctl_alloc(sizeof(*n_entry), DRM_MEM_BUFOBJ); >+ if (!n_entry) { >+ DRM_ERROR("Allocation of process mm pointer entry failed\n"); >+ return -ENOMEM; >+ } >+ INIT_LIST_HEAD(&n_entry->head); >+ n_entry->mm = mm; >+ n_entry->locked = 0; >+ atomic_set(&n_entry->refcount, 0); >+ list_add_tail(&n_entry->head, &entry->head); >+ >+ return 0; >+} >+ >+void drm_bo_delete_vma(struct drm_buffer_object * bo, struct vm_area_struct *vma) >+{ >+ p_mm_entry_t *entry, *n; >+ vma_entry_t *v_entry, *v_n; >+ int found = 0; >+ struct mm_struct *mm = vma->vm_mm; >+ >+ list_for_each_entry_safe(v_entry, v_n, &bo->vma_list, head) { >+ if (v_entry->vma == vma) { >+ found = 1; >+ list_del(&v_entry->head); >+ drm_ctl_free(v_entry, sizeof(*v_entry), DRM_MEM_BUFOBJ); >+ break; >+ } >+ } >+ BUG_ON(!found); >+ >+ list_for_each_entry_safe(entry, n, &bo->p_mm_list, head) { >+ if (mm == entry->mm) { >+ if (atomic_add_negative(-1, &entry->refcount)) { >+ list_del(&entry->head); >+ BUG_ON(entry->locked); >+ drm_ctl_free(entry, sizeof(*entry), DRM_MEM_BUFOBJ); >+ } >+ return; >+ } >+ } >+ BUG_ON(1); >+} >+ >+ >+ >+int drm_bo_lock_kmm(struct drm_buffer_object * bo) >+{ >+ p_mm_entry_t *entry; >+ int lock_ok = 1; >+ >+ list_for_each_entry(entry, &bo->p_mm_list, head) { >+ BUG_ON(entry->locked); >+ if (!down_write_trylock(&entry->mm->mmap_sem)) { >+ lock_ok = 0; >+ break; >+ } >+ entry->locked = 1; >+ } >+ >+ if (lock_ok) >+ return 0; >+ >+ list_for_each_entry(entry, &bo->p_mm_list, head) { >+ if 
(!entry->locked) >+ break; >+ up_write(&entry->mm->mmap_sem); >+ entry->locked = 0; >+ } >+ >+ /* >+ * Possible deadlock. Try again. Our callers should handle this >+ * and restart. >+ */ >+ >+ return -EAGAIN; >+} >+ >+void drm_bo_unlock_kmm(struct drm_buffer_object * bo) >+{ >+ p_mm_entry_t *entry; >+ >+ list_for_each_entry(entry, &bo->p_mm_list, head) { >+ BUG_ON(!entry->locked); >+ up_write(&entry->mm->mmap_sem); >+ entry->locked = 0; >+ } >+} >+ >+int drm_bo_remap_bound(struct drm_buffer_object *bo) >+{ >+ vma_entry_t *v_entry; >+ int ret = 0; >+ >+ if (drm_mem_reg_is_pci(bo->dev, &bo->mem)) { >+ list_for_each_entry(v_entry, &bo->vma_list, head) { >+ ret = drm_bo_map_bound(v_entry->vma); >+ if (ret) >+ break; >+ } >+ } >+ >+ return ret; >+} >+ >+void drm_bo_finish_unmap(struct drm_buffer_object *bo) >+{ >+ vma_entry_t *v_entry; >+ >+ list_for_each_entry(v_entry, &bo->vma_list, head) { >+ v_entry->vma->vm_flags &= ~VM_PFNMAP; >+ } >+} >+ >+#endif >+ >+#ifdef DRM_IDR_COMPAT_FN >+/* only called when idp->lock is held */ >+static void __free_layer(struct idr *idp, struct idr_layer *p) >+{ >+ p->ary[0] = idp->id_free; >+ idp->id_free = p; >+ idp->id_free_cnt++; >+} >+ >+static void free_layer(struct idr *idp, struct idr_layer *p) >+{ >+ unsigned long flags; >+ >+ /* >+ * Depends on the return element being zeroed. >+ */ >+ spin_lock_irqsave(&idp->lock, flags); >+ __free_layer(idp, p); >+ spin_unlock_irqrestore(&idp->lock, flags); >+} >+ >+/** >+ * idr_for_each - iterate through all stored pointers >+ * @idp: idr handle >+ * @fn: function to be called for each pointer >+ * @data: data passed back to callback function >+ * >+ * Iterate over the pointers registered with the given idr. The >+ * callback function will be called for each pointer currently >+ * registered, passing the id, the pointer and the data pointer passed >+ * to this function. 
It is not safe to modify the idr tree while in >+ * the callback, so functions such as idr_get_new and idr_remove are >+ * not allowed. >+ * >+ * We check the return of @fn each time. If it returns anything other >+ * than 0, we break out and return that value. >+ * >+ * The caller must serialize idr_find() vs idr_get_new() and idr_remove(). >+ */ >+int idr_for_each(struct idr *idp, >+ int (*fn)(int id, void *p, void *data), void *data) >+{ >+ int n, id, max, error = 0; >+ struct idr_layer *p; >+ struct idr_layer *pa[MAX_LEVEL]; >+ struct idr_layer **paa = &pa[0]; >+ >+ n = idp->layers * IDR_BITS; >+ p = idp->top; >+ max = 1 << n; >+ >+ id = 0; >+ while (id < max) { >+ while (n > 0 && p) { >+ n -= IDR_BITS; >+ *paa++ = p; >+ p = p->ary[(id >> n) & IDR_MASK]; >+ } >+ >+ if (p) { >+ error = fn(id, (void *)p, data); >+ if (error) >+ break; >+ } >+ >+ id += 1 << n; >+ while (n < fls(id)) { >+ n += IDR_BITS; >+ p = *--paa; >+ } >+ } >+ >+ return error; >+} >+EXPORT_SYMBOL(idr_for_each); >+ >+/** >+ * idr_remove_all - remove all ids from the given idr tree >+ * @idp: idr handle >+ * >+ * idr_destroy() only frees up unused, cached idp_layers, but this >+ * function will remove all id mappings and leave all idp_layers >+ * unused. >+ * >+ * A typical clean-up sequence for objects stored in an idr tree will >+ * use idr_for_each() to free all objects, if necessary, then >+ * idr_remove_all() to remove all ids, and idr_destroy() to free >+ * up the cached idr_layers.
>+ */ >+void idr_remove_all(struct idr *idp) >+{ >+ int n, id, max, error = 0; >+ struct idr_layer *p; >+ struct idr_layer *pa[MAX_LEVEL]; >+ struct idr_layer **paa = &pa[0]; >+ >+ n = idp->layers * IDR_BITS; >+ p = idp->top; >+ max = 1 << n; >+ >+ id = 0; >+ while (id < max && !error) { >+ while (n > IDR_BITS && p) { >+ n -= IDR_BITS; >+ *paa++ = p; >+ p = p->ary[(id >> n) & IDR_MASK]; >+ } >+ >+ id += 1 << n; >+ while (n < fls(id)) { >+ if (p) { >+ memset(p, 0, sizeof *p); >+ free_layer(idp, p); >+ } >+ n += IDR_BITS; >+ p = *--paa; >+ } >+ } >+ idp->top = NULL; >+ idp->layers = 0; >+} >+EXPORT_SYMBOL(idr_remove_all); >+ >+#endif /* DRM_IDR_COMPAT_FN */ >+ >+ >+ >+#if (LINUX_VERSION_CODE < KERNEL_VERSION(2,6,18)) >+/** >+ * idr_replace - replace pointer for given id >+ * @idp: idr handle >+ * @ptr: pointer you want associated with the id >+ * @id: lookup key >+ * >+ * Replace the pointer registered with an id and return the old value. >+ * A -ENOENT return indicates that @id was not found. >+ * A -EINVAL return indicates that @id was not within valid constraints. >+ * >+ * The caller must serialize vs idr_find(), idr_get_new(), and idr_remove(). 
>+ */ >+void *idr_replace(struct idr *idp, void *ptr, int id) >+{ >+ int n; >+ struct idr_layer *p, *old_p; >+ >+ n = idp->layers * IDR_BITS; >+ p = idp->top; >+ >+ id &= MAX_ID_MASK; >+ >+ if (id >= (1 << n)) >+ return ERR_PTR(-EINVAL); >+ >+ n -= IDR_BITS; >+ while ((n > 0) && p) { >+ p = p->ary[(id >> n) & IDR_MASK]; >+ n -= IDR_BITS; >+ } >+ >+ n = id & IDR_MASK; >+ if (unlikely(p == NULL || !test_bit(n, &p->bitmap))) >+ return ERR_PTR(-ENOENT); >+ >+ old_p = p->ary[n]; >+ p->ary[n] = ptr; >+ >+ return (void *)old_p; >+} >+EXPORT_SYMBOL(idr_replace); >+#endif >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_compat.h linux-2.6.23.i686/drivers/char/drm/drm_compat.h >--- linux-2.6.23.i686.orig/drivers/char/drm/drm_compat.h 1970-01-01 01:00:00.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/drm_compat.h 2008-01-06 09:24:57.000000000 +0100 >@@ -0,0 +1,331 @@ >+/** >+ * \file drm_compat.h >+ * Backward compatibility definitions for Direct Rendering Manager >+ * >+ * \author Rickard E. (Rik) Faith <faith@valinux.com> >+ * \author Gareth Hughes <gareth@valinux.com> >+ */ >+ >+/* >+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas. >+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California. >+ * All rights reserved. >+ * >+ * Permission is hereby granted, free of charge, to any person obtaining a >+ * copy of this software and associated documentation files (the "Software"), >+ * to deal in the Software without restriction, including without limitation >+ * the rights to use, copy, modify, merge, publish, distribute, sublicense, >+ * and/or sell copies of the Software, and to permit persons to whom the >+ * Software is furnished to do so, subject to the following conditions: >+ * >+ * The above copyright notice and this permission notice (including the next >+ * paragraph) shall be included in all copies or substantial portions of the >+ * Software.
>+ * >+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR >+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, >+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL >+ * VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR >+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, >+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR >+ * OTHER DEALINGS IN THE SOFTWARE. >+ */ >+ >+#ifndef _DRM_COMPAT_H_ >+#define _DRM_COMPAT_H_ >+ >+#ifndef minor >+#define minor(x) MINOR((x)) >+#endif >+ >+#ifndef MODULE_LICENSE >+#define MODULE_LICENSE(x) >+#endif >+ >+#ifndef preempt_disable >+#define preempt_disable() >+#define preempt_enable() >+#endif >+ >+#ifndef pte_offset_map >+#define pte_offset_map pte_offset >+#define pte_unmap(pte) >+#endif >+ >+#ifndef module_param >+#define module_param(name, type, perm) >+#endif >+ >+/* older kernels had different irq args */ >+#if (LINUX_VERSION_CODE < KERNEL_VERSION(2,6,19)) >+#undef DRM_IRQ_ARGS >+#define DRM_IRQ_ARGS int irq, void *arg, struct pt_regs *regs >+#endif >+ >+#ifndef list_for_each_safe >+#define list_for_each_safe(pos, n, head) \ >+ for (pos = (head)->next, n = pos->next; pos != (head); \ >+ pos = n, n = pos->next) >+#endif >+ >+#ifndef list_for_each_entry >+#define list_for_each_entry(pos, head, member) \ >+ for (pos = list_entry((head)->next, typeof(*pos), member), \ >+ prefetch(pos->member.next); \ >+ &pos->member != (head); \ >+ pos = list_entry(pos->member.next, typeof(*pos), member), \ >+ prefetch(pos->member.next)) >+#endif >+ >+#ifndef list_for_each_entry_safe >+#define list_for_each_entry_safe(pos, n, head, member) \ >+ for (pos = list_entry((head)->next, typeof(*pos), member), \ >+ n = list_entry(pos->member.next, typeof(*pos), member); \ >+ &pos->member != (head); \ >+ pos = n, n = list_entry(n->member.next, typeof(*n), member)) >+#endif >+ >+#ifndef __user >+#define __user 
>+#endif >+ >+#if !defined(__put_page) >+#define __put_page(p) atomic_dec(&(p)->count) >+#endif >+ >+#if !defined(__GFP_COMP) >+#define __GFP_COMP 0 >+#endif >+ >+#if !defined(IRQF_SHARED) >+#define IRQF_SHARED SA_SHIRQ >+#endif >+ >+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,10) >+static inline int remap_pfn_range(struct vm_area_struct *vma, unsigned long from, unsigned long pfn, unsigned long size, pgprot_t pgprot) >+{ >+ return remap_page_range(vma, from, >+ pfn << PAGE_SHIFT, >+ size, >+ pgprot); >+} >+ >+static __inline__ void *kcalloc(size_t nmemb, size_t size, int flags) >+{ >+ void *addr; >+ >+ addr = kmalloc(size * nmemb, flags); >+ if (addr != NULL) >+ memset((void *)addr, 0, size * nmemb); >+ >+ return addr; >+} >+#endif >+ >+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,16) >+#define mutex_lock down >+#define mutex_unlock up >+ >+#define mutex semaphore >+ >+#define mutex_init(a) sema_init((a), 1) >+ >+#endif >+ >+#ifndef DEFINE_SPINLOCK >+#define DEFINE_SPINLOCK(x) spinlock_t x = SPIN_LOCK_UNLOCKED >+#endif >+ >+/* old architectures */ >+#ifdef __AMD64__ >+#define __x86_64__ >+#endif >+ >+/* sysfs __ATTR macro */ >+#ifndef __ATTR >+#define __ATTR(_name,_mode,_show,_store) { \ >+ .attr = {.name = __stringify(_name), .mode = _mode, .owner = THIS_MODULE }, \ >+ .show = _show, \ >+ .store = _store, \ >+} >+#endif >+ >+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,18) >+#define vmalloc_user(_size) ({void * tmp = vmalloc(_size); \ >+ if (tmp) memset(tmp, 0, _size); \ >+ (tmp);}) >+#endif >+ >+#ifndef list_for_each_entry_safe_reverse >+#define list_for_each_entry_safe_reverse(pos, n, head, member) \ >+ for (pos = list_entry((head)->prev, typeof(*pos), member), \ >+ n = list_entry(pos->member.prev, typeof(*pos), member); \ >+ &pos->member != (head); \ >+ pos = n, n = list_entry(n->member.prev, typeof(*n), member)) >+#endif >+ >+#include <linux/mm.h> >+#include <asm/page.h> >+ >+#if ((LINUX_VERSION_CODE < KERNEL_VERSION(2,6,19)) && \ >+ (LINUX_VERSION_CODE >=
KERNEL_VERSION(2,6,15))) >+#define DRM_ODD_MM_COMPAT >+#endif >+ >+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,21)) >+#define DRM_FULL_MM_COMPAT >+#endif >+ >+ >+/* >+ * Flush relevant caches and clear a VMA structure so that page references >+ * will cause a page fault. Don't flush tlbs. >+ */ >+ >+extern void drm_clear_vma(struct vm_area_struct *vma, >+ unsigned long addr, unsigned long end); >+ >+/* >+ * Return the PTE protection map entries for the VMA flags given by >+ * flags. This is a functional interface to the kernel's protection map. >+ */ >+ >+extern pgprot_t vm_get_page_prot(unsigned long vm_flags); >+ >+#ifndef GFP_DMA32 >+#define GFP_DMA32 GFP_KERNEL >+#endif >+#ifndef __GFP_DMA32 >+#define __GFP_DMA32 GFP_KERNEL >+#endif >+ >+#if defined(CONFIG_X86) && (LINUX_VERSION_CODE < KERNEL_VERSION(2,6,15)) >+ >+/* >+ * These are too slow in earlier kernels. >+ */ >+ >+extern int drm_unmap_page_from_agp(struct page *page); >+extern int drm_map_page_into_agp(struct page *page); >+ >+#define map_page_into_agp drm_map_page_into_agp >+#define unmap_page_from_agp drm_unmap_page_from_agp >+#endif >+ >+#if (LINUX_VERSION_CODE < KERNEL_VERSION(2,6,15)) >+extern struct page *get_nopage_retry(void); >+extern void free_nopage_retry(void); >+ >+#define NOPAGE_REFAULT get_nopage_retry() >+#endif >+ >+ >+#ifndef DRM_FULL_MM_COMPAT >+ >+/* >+ * For now, just return a dummy page that we've allocated out of >+ * static space. The page will be put by do_nopage() since we've already >+ * filled out the pte. 
>+ */ >+ >+struct fault_data { >+ struct vm_area_struct *vma; >+ unsigned long address; >+ pgoff_t pgoff; >+ unsigned int flags; >+ >+ int type; >+}; >+ >+#if (LINUX_VERSION_CODE < KERNEL_VERSION(2,6,19)) >+extern struct page *drm_bo_vm_nopage(struct vm_area_struct *vma, >+ unsigned long address, >+ int *type); >+#elif (LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,19)) && \ >+ !defined(DRM_FULL_MM_COMPAT) >+extern unsigned long drm_bo_vm_nopfn(struct vm_area_struct *vma, >+ unsigned long address); >+#endif /* (LINUX_VERSION_CODE < KERNEL_VERSION(2,6,19)) */ >+#endif /* ndef DRM_FULL_MM_COMPAT */ >+ >+#ifdef DRM_ODD_MM_COMPAT >+ >+struct drm_buffer_object; >+ >+ >+/* >+ * Add a vma to the ttm vma list, and the >+ * process mm pointer to the ttm mm list. Needs the ttm mutex. >+ */ >+ >+extern int drm_bo_add_vma(struct drm_buffer_object * bo, >+ struct vm_area_struct *vma); >+/* >+ * Delete a vma and the corresponding mm pointer from the >+ * ttm lists. Needs the ttm mutex. >+ */ >+extern void drm_bo_delete_vma(struct drm_buffer_object * bo, >+ struct vm_area_struct *vma); >+ >+/* >+ * Attempts to lock all relevant mmap_sems for a ttm, while >+ * not releasing the ttm mutex. May return -EAGAIN to avoid >+ * deadlocks. In that case the caller shall release the ttm mutex, >+ * schedule() and try again. >+ */ >+ >+extern int drm_bo_lock_kmm(struct drm_buffer_object * bo); >+ >+/* >+ * Unlock all relevant mmap_sems for a ttm. >+ */ >+extern void drm_bo_unlock_kmm(struct drm_buffer_object * bo); >+ >+/* >+ * If the ttm was bound to the aperture, this function shall be called >+ * with all relevant mmap sems held. It deletes the flag VM_PFNMAP from all >+ * vmas mapping this ttm. This is needed just after unmapping the ptes of >+ * the vma, otherwise the do_nopage() function will bug :(. The function >+ * releases the mmap_sems for this ttm. 
>+ */ >+ >+extern void drm_bo_finish_unmap(struct drm_buffer_object *bo); >+ >+/* >+ * Remap all vmas of this ttm using io_remap_pfn_range. We cannot >+ * fault these pfns in, because the first one will set the vma VM_PFNMAP >+ * flag, which will make the next fault bug in do_nopage(). The function >+ * releases the mmap_sems for this ttm. >+ */ >+ >+extern int drm_bo_remap_bound(struct drm_buffer_object *bo); >+ >+ >+/* >+ * Remap a vma for a bound ttm. Call with the ttm mutex held and >+ * the relevant mmap_sem locked. >+ */ >+extern int drm_bo_map_bound(struct vm_area_struct *vma); >+ >+#endif >+ >+/* fixme when functions are upstreamed - upstreamed for 2.6.23 */ >+#if (LINUX_VERSION_CODE < KERNEL_VERSION(2,6,23)) >+#define DRM_IDR_COMPAT_FN >+#endif >+#ifdef DRM_IDR_COMPAT_FN >+int idr_for_each(struct idr *idp, >+ int (*fn)(int id, void *p, void *data), void *data); >+void idr_remove_all(struct idr *idp); >+#endif >+ >+ >+#if (LINUX_VERSION_CODE < KERNEL_VERSION(2,6,18)) >+void *idr_replace(struct idr *idp, void *ptr, int id); >+#endif >+ >+#if (LINUX_VERSION_CODE < KERNEL_VERSION(2,6,19)) >+typedef _Bool bool; >+#endif >+ >+#endif >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_context.c linux-2.6.23.i686/drivers/char/drm/drm_context.c >--- linux-2.6.23.i686.orig/drivers/char/drm/drm_context.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/drm_context.c 2008-01-06 09:24:57.000000000 +0100 >@@ -56,7 +56,7 @@ > * in drm_device::ctx_idr, while holding the drm_device::struct_mutex > * lock. > */ >-void drm_ctxbitmap_free(struct drm_device * dev, int ctx_handle) >+void drm_ctxbitmap_free(struct drm_device *dev, int ctx_handle) > { > mutex_lock(&dev->struct_mutex); > idr_remove(&dev->ctx_idr, ctx_handle); >@@ -72,7 +72,7 @@ void drm_ctxbitmap_free(struct drm_devic > * Allocate a new idr from drm_device::ctx_idr while holding the > * drm_device::struct_mutex lock. 
> */ >-static int drm_ctxbitmap_next(struct drm_device * dev) >+static int drm_ctxbitmap_next(struct drm_device *dev) > { > int new_id; > int ret; >@@ -89,6 +89,7 @@ again: > mutex_unlock(&dev->struct_mutex); > goto again; > } >+ > mutex_unlock(&dev->struct_mutex); > return new_id; > } >@@ -100,7 +101,7 @@ again: > * > * Initialise the drm_device::ctx_idr > */ >-int drm_ctxbitmap_init(struct drm_device * dev) >+int drm_ctxbitmap_init(struct drm_device *dev) > { > idr_init(&dev->ctx_idr); > return 0; >@@ -114,7 +115,7 @@ int drm_ctxbitmap_init(struct drm_device > * Free all idr members using drm_ctx_sarea_free helper function > * while holding the drm_device::struct_mutex lock. > */ >-void drm_ctxbitmap_cleanup(struct drm_device * dev) >+void drm_ctxbitmap_cleanup(struct drm_device *dev) > { > mutex_lock(&dev->struct_mutex); > idr_remove_all(&dev->ctx_idr); >@@ -159,7 +160,7 @@ int drm_getsareactx(struct drm_device *d > request->handle = NULL; > list_for_each_entry(_entry, &dev->maplist, head) { > if (_entry->map == map) { >- request->handle = >+ request->handle = > (void *)(unsigned long)_entry->user_token; > break; > } >@@ -228,7 +229,7 @@ int drm_setsareactx(struct drm_device *d > * > * Attempt to set drm_device::context_flag. > */ >-static int drm_context_switch(struct drm_device * dev, int old, int new) >+static int drm_context_switch(struct drm_device *dev, int old, int new) > { > if (test_and_set_bit(0, &dev->context_flag)) { > DRM_ERROR("Reentering -- FIXME\n"); >@@ -256,7 +257,7 @@ static int drm_context_switch(struct drm > * hardware lock is held, clears the drm_device::context_flag and wakes up > * drm_device::context_wait. > */ >-static int drm_context_switch_complete(struct drm_device * dev, int new) >+static int drm_context_switch_complete(struct drm_device *dev, int new) > { > dev->last_context = new; /* PRE/POST: This is the _only_ writer. 
*/ > dev->last_switch = jiffies; >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_core.h linux-2.6.23.i686/drivers/char/drm/drm_core.h >--- linux-2.6.23.i686.orig/drivers/char/drm/drm_core.h 2007-10-09 22:31:38.000000000 +0200 >+++ linux-2.6.23.i686/drivers/char/drm/drm_core.h 2008-01-06 09:24:57.000000000 +0100 >@@ -20,6 +20,7 @@ > * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER > * DEALINGS IN THE SOFTWARE. > */ >+ > #define CORE_AUTHOR "Gareth Hughes, Leif Delgass, José Fonseca, Jon Smirl" > > #define CORE_NAME "drm" >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_dma.c linux-2.6.23.i686/drivers/char/drm/drm_dma.c >--- linux-2.6.23.i686.orig/drivers/char/drm/drm_dma.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/drm_dma.c 2008-01-06 09:24:57.000000000 +0100 >@@ -129,7 +129,7 @@ void drm_dma_takedown(struct drm_device > * > * Resets the fields of \p buf. > */ >-void drm_free_buffer(struct drm_device *dev, struct drm_buf * buf) >+void drm_free_buffer(struct drm_device *dev, struct drm_buf *buf) > { > if (!buf) > return; >@@ -176,5 +176,4 @@ void drm_core_reclaim_buffers(struct drm > } > } > } >- > EXPORT_SYMBOL(drm_core_reclaim_buffers); >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_drv.c linux-2.6.23.i686/drivers/char/drm/drm_drv.c >--- linux-2.6.23.i686.orig/drivers/char/drm/drm_drv.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/drm_drv.c 2008-01-06 09:24:57.000000000 +0100 >@@ -45,10 +45,12 @@ > * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR > * OTHER DEALINGS IN THE SOFTWARE. 
> */ >- > #include "drmP.h" > #include "drm_core.h" > >+static void drm_cleanup(struct drm_device * dev); >+int drm_fb_loaded = 0; >+ > static int drm_version(struct drm_device *dev, void *data, > struct drm_file *file_priv); > >@@ -116,11 +118,43 @@ static struct drm_ioctl_desc drm_ioctls[ > > DRM_IOCTL_DEF(DRM_IOCTL_WAIT_VBLANK, drm_wait_vblank, 0), > >+ // DRM_IOCTL_DEF(DRM_IOCTL_BUFOBJ, drm_bo_ioctl, DRM_AUTH), >+ > DRM_IOCTL_DEF(DRM_IOCTL_UPDATE_DRAW, drm_update_drawable_info, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), >+ >+ >+ DRM_IOCTL_DEF(DRM_IOCTL_MM_INIT, drm_mm_init_ioctl, >+ DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), >+ DRM_IOCTL_DEF(DRM_IOCTL_MM_TAKEDOWN, drm_mm_takedown_ioctl, >+ DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), >+ DRM_IOCTL_DEF(DRM_IOCTL_MM_LOCK, drm_mm_lock_ioctl, >+ DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), >+ DRM_IOCTL_DEF(DRM_IOCTL_MM_UNLOCK, drm_mm_unlock_ioctl, >+ DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), >+ >+ DRM_IOCTL_DEF(DRM_IOCTL_FENCE_CREATE, drm_fence_create_ioctl, DRM_AUTH), >+ DRM_IOCTL_DEF(DRM_IOCTL_FENCE_REFERENCE, drm_fence_reference_ioctl, DRM_AUTH), >+ DRM_IOCTL_DEF(DRM_IOCTL_FENCE_UNREFERENCE, drm_fence_unreference_ioctl, DRM_AUTH), >+ DRM_IOCTL_DEF(DRM_IOCTL_FENCE_SIGNALED, drm_fence_signaled_ioctl, DRM_AUTH), >+ DRM_IOCTL_DEF(DRM_IOCTL_FENCE_FLUSH, drm_fence_flush_ioctl, DRM_AUTH), >+ DRM_IOCTL_DEF(DRM_IOCTL_FENCE_WAIT, drm_fence_wait_ioctl, DRM_AUTH), >+ DRM_IOCTL_DEF(DRM_IOCTL_FENCE_EMIT, drm_fence_emit_ioctl, DRM_AUTH), >+ DRM_IOCTL_DEF(DRM_IOCTL_FENCE_BUFFERS, drm_fence_buffers_ioctl, DRM_AUTH), >+ >+ DRM_IOCTL_DEF(DRM_IOCTL_BO_CREATE, drm_bo_create_ioctl, DRM_AUTH), >+ DRM_IOCTL_DEF(DRM_IOCTL_BO_MAP, drm_bo_map_ioctl, DRM_AUTH), >+ DRM_IOCTL_DEF(DRM_IOCTL_BO_UNMAP, drm_bo_unmap_ioctl, DRM_AUTH), >+ DRM_IOCTL_DEF(DRM_IOCTL_BO_REFERENCE, drm_bo_reference_ioctl, DRM_AUTH), >+ DRM_IOCTL_DEF(DRM_IOCTL_BO_UNREFERENCE, drm_bo_unreference_ioctl, DRM_AUTH), >+ DRM_IOCTL_DEF(DRM_IOCTL_BO_SETSTATUS, drm_bo_setstatus_ioctl, DRM_AUTH), >+ 
DRM_IOCTL_DEF(DRM_IOCTL_BO_INFO, drm_bo_info_ioctl, DRM_AUTH), >+ DRM_IOCTL_DEF(DRM_IOCTL_BO_WAIT_IDLE, drm_bo_wait_idle_ioctl, DRM_AUTH), >+ DRM_IOCTL_DEF(DRM_IOCTL_BO_VERSION, drm_bo_version_ioctl, 0), > }; > > #define DRM_CORE_IOCTL_COUNT ARRAY_SIZE( drm_ioctls ) > >+ > /** > * Take down the DRM device. > * >@@ -139,6 +173,12 @@ int drm_lastclose(struct drm_device * de > > DRM_DEBUG("\n"); > >+ /* >+ * We can't do much about this function failing. >+ */ >+ >+ drm_bo_driver_finish(dev); >+ > if (dev->driver->lastclose) > dev->driver->lastclose(dev); > DRM_DEBUG("driver lastclose completed\n"); >@@ -152,13 +192,18 @@ int drm_lastclose(struct drm_device * de > if (dev->irq_enabled) > drm_irq_uninstall(dev); > >+ /* Free drawable information memory */ > mutex_lock(&dev->struct_mutex); > >- /* Free drawable information memory */ > drm_drawable_free_all(dev); > del_timer(&dev->timer); > >- /* Clear pid list */ >+ if (dev->unique) { >+ drm_free(dev->unique, strlen(dev->unique) + 1, DRM_MEM_DRIVER); >+ dev->unique = NULL; >+ dev->unique_len = 0; >+ } >+ > if (dev->magicfree.next) { > list_for_each_entry_safe(pt, next, &dev->magicfree, head) { > list_del(&pt->head); >@@ -168,6 +213,7 @@ int drm_lastclose(struct drm_device * de > drm_ht_remove(&dev->magiclist); > } > >+ > /* Clear AGP information */ > if (drm_core_has_AGP(dev) && dev->agp) { > struct drm_agp_mem *entry, *tempe; >@@ -196,16 +242,19 @@ int drm_lastclose(struct drm_device * de > /* Clear vma list (only built for debugging) */ > list_for_each_entry_safe(vma, vma_temp, &dev->vmalist, head) { > list_del(&vma->head); >- drm_free(vma, sizeof(*vma), DRM_MEM_VMAS); >+ drm_ctl_free(vma, sizeof(*vma), DRM_MEM_VMAS); > } > > list_for_each_entry_safe(r_list, list_t, &dev->maplist, head) { >- drm_rmmap_locked(dev, r_list->map); >- r_list = NULL; >+ if (!(r_list->map->flags & _DRM_DRIVER)) { >+ drm_rmmap_locked(dev, r_list->map); >+ r_list = NULL; >+ } > } > > if (drm_core_check_feature(dev, DRIVER_DMA_QUEUE) && 
dev->queuelist) { > for (i = 0; i < dev->queue_count; i++) { >+ > if (dev->queuelist[i]) { > drm_free(dev->queuelist[i], > sizeof(*dev->queuelist[0]), >@@ -228,12 +277,24 @@ int drm_lastclose(struct drm_device * de > dev->lock.file_priv = NULL; > wake_up_interruptible(&dev->lock.lock_queue); > } >+ dev->dev_mapping = NULL; > mutex_unlock(&dev->struct_mutex); > > DRM_DEBUG("lastclose completed\n"); > return 0; > } > >+void drm_cleanup_pci(struct pci_dev *pdev) >+{ >+ struct drm_device *dev = pci_get_drvdata(pdev); >+ >+ pci_set_drvdata(pdev, NULL); >+ pci_release_regions(pdev); >+ if (dev) >+ drm_cleanup(dev); >+} >+EXPORT_SYMBOL(drm_cleanup_pci); >+ > /** > * Module initialization. Called via init_module at module load time, or via > * linux/init/main.c (this is not currently supported). >@@ -247,32 +308,71 @@ int drm_lastclose(struct drm_device * de > * Expands the \c DRIVER_PREINIT and \c DRIVER_POST_INIT macros before and > * after the initialization for driver customization. > */ >-int drm_init(struct drm_driver *driver) >+int drm_init(struct drm_driver *driver, >+ struct pci_device_id *pciidlist) > { >- struct pci_dev *pdev = NULL; >+ struct pci_dev *pdev; > struct pci_device_id *pid; >- int i; >+ int rc, i; > > DRM_DEBUG("\n"); > >- drm_mem_init(); >- >- for (i = 0; driver->pci_driver.id_table[i].vendor != 0; i++) { >- pid = (struct pci_device_id *)&driver->pci_driver.id_table[i]; >+ for (i = 0; (pciidlist[i].vendor != 0) && !drm_fb_loaded; i++) { >+ pid = &pciidlist[i]; > > pdev = NULL; > /* pass back in pdev to account for multiple identical cards */ > while ((pdev = > pci_get_subsys(pid->vendor, pid->device, pid->subvendor, >- pid->subdevice, pdev)) != NULL) { >- /* stealth mode requires a manual probe */ >- pci_dev_get(pdev); >- drm_get_dev(pdev, pid, driver); >+ pid->subdevice, pdev))) { >+ /* Are there device class requirements? 
*/ >+ if ((pid->class != 0) >+ && ((pdev->class & pid->class_mask) != pid->class)) { >+ continue; >+ } >+ /* is there already a driver loaded, or (short circuit saves work) */ >+ /* does something like VesaFB have control of the memory region? */ >+ if (pci_dev_driver(pdev) >+ || pci_request_regions(pdev, "DRM scan")) { >+ /* go into stealth mode */ >+ drm_fb_loaded = 1; >+ pci_dev_put(pdev); >+ break; >+ } >+ /* no fbdev or vesadev, put things back and wait for normal probe */ >+ pci_release_regions(pdev); >+ } >+ } >+ >+ if (!drm_fb_loaded) >+ return pci_register_driver(&driver->pci_driver); >+ else { >+ for (i = 0; pciidlist[i].vendor != 0; i++) { >+ pid = &pciidlist[i]; >+ >+ pdev = NULL; >+ /* pass back in pdev to account for multiple identical cards */ >+ while ((pdev = >+ pci_get_subsys(pid->vendor, pid->device, >+ pid->subvendor, pid->subdevice, >+ pdev))) { >+ /* Are there device class requirements? */ >+ if ((pid->class != 0) >+ && ((pdev->class & pid->class_mask) != pid->class)) { >+ continue; >+ } >+ /* stealth mode requires a manual probe */ >+ pci_dev_get(pdev); >+ if ((rc = drm_get_dev(pdev, &pciidlist[i], driver))) { >+ pci_dev_put(pdev); >+ return rc; >+ } >+ } > } >+ DRM_INFO("Used old pci detect: framebuffer loaded\n"); > } > return 0; > } >- > EXPORT_SYMBOL(drm_init); > > /** >@@ -284,21 +384,18 @@ EXPORT_SYMBOL(drm_init); > */ > static void drm_cleanup(struct drm_device * dev) > { >- DRM_DEBUG("\n"); > >+ DRM_DEBUG("\n"); > if (!dev) { > DRM_ERROR("cleanup called no dev\n"); > return; > } > > drm_lastclose(dev); >+ drm_fence_manager_takedown(dev); > >- drm_ht_remove(&dev->map_hash); >- >- drm_ctxbitmap_cleanup(dev); >- >- if (drm_core_has_MTRR(dev) && drm_core_has_AGP(dev) && >- dev->agp && dev->agp->agp_mtrr >= 0) { >+ if (drm_core_has_MTRR(dev) && drm_core_has_AGP(dev) && dev->agp >+ && dev->agp->agp_mtrr >= 0) { > int retval; > retval = mtrr_del(dev->agp->agp_mtrr, > dev->agp->agp_info.aper_base, >@@ -310,10 +407,17 @@ static void 
drm_cleanup(struct drm_devic > drm_free(dev->agp, sizeof(*dev->agp), DRM_MEM_AGPLISTS); > dev->agp = NULL; > } >- > if (dev->driver->unload) > dev->driver->unload(dev); > >+ if (!drm_fb_loaded) >+ pci_disable_device(dev->pdev); >+ >+ drm_ctxbitmap_cleanup(dev); >+ drm_ht_remove(&dev->map_hash); >+ drm_mm_takedown(&dev->offset_manager); >+ drm_ht_remove(&dev->object_hash); >+ > drm_put_head(&dev->primary); > if (drm_put_dev(dev)) > DRM_ERROR("Cannot unload module\n"); >@@ -326,26 +430,30 @@ void drm_exit(struct drm_driver *driver) > struct drm_head *head; > > DRM_DEBUG("\n"); >- >- for (i = 0; i < drm_cards_limit; i++) { >- head = drm_heads[i]; >- if (!head) >- continue; >- if (!head->dev) >- continue; >- if (head->dev->driver != driver) >- continue; >- dev = head->dev; >- if (dev) { >- /* release the pci driver */ >- if (dev->pdev) >- pci_dev_put(dev->pdev); >- drm_cleanup(dev); >+ if (drm_fb_loaded) { >+ for (i = 0; i < drm_cards_limit; i++) { >+ head = drm_heads[i]; >+ if (!head) >+ continue; >+ if (!head->dev) >+ continue; >+ if (head->dev->driver != driver) >+ continue; >+ dev = head->dev; >+ if (dev) { >+ /* release the pci driver */ >+ if (dev->pdev) >+ pci_dev_put(dev->pdev); >+ drm_cleanup(dev); >+ } > } >- } >+ } else >+ pci_unregister_driver(&driver->pci_driver); >+#if (LINUX_VERSION_CODE < KERNEL_VERSION(2,6,15)) >+ free_nopage_retry(); >+#endif > DRM_INFO("Module unloaded\n"); > } >- > EXPORT_SYMBOL(drm_exit); > > /** File operations structure */ >@@ -356,13 +464,35 @@ static const struct file_operations drm_ > > static int __init drm_core_init(void) > { >- int ret = -ENOMEM; >+ int ret; >+ struct sysinfo si; >+ unsigned long avail_memctl_mem; >+ unsigned long max_memctl_mem; >+ >+ si_meminfo(&si); >+ >+ /* >+ * AGP only allows low / DMA32 memory ATM. 
>+ */ > >+ avail_memctl_mem = si.totalram - si.totalhigh; >+ >+ /* >+ * Avoid overflows >+ */ >+ >+ max_memctl_mem = 1UL << (32 - PAGE_SHIFT); >+ max_memctl_mem = (max_memctl_mem / si.mem_unit) * PAGE_SIZE; >+ >+ if (avail_memctl_mem >= max_memctl_mem) >+ avail_memctl_mem = max_memctl_mem; >+ >+ drm_init_memctl(avail_memctl_mem/2, avail_memctl_mem*3/4, si.mem_unit); >+ >+ ret = -ENOMEM; > drm_cards_limit = >- (drm_cards_limit < >- DRM_MAX_MINOR + 1 ? drm_cards_limit : DRM_MAX_MINOR + 1); >- drm_heads = >- drm_calloc(drm_cards_limit, sizeof(*drm_heads), DRM_MEM_STUB); >+ (drm_cards_limit < DRM_MAX_MINOR + 1 ? drm_cards_limit : DRM_MAX_MINOR + 1); >+ drm_heads = drm_calloc(drm_cards_limit, sizeof(*drm_heads), DRM_MEM_STUB); > if (!drm_heads) > goto err_p1; > >@@ -383,22 +513,25 @@ static int __init drm_core_init(void) > goto err_p3; > } > >+ drm_mem_init(); >+ > DRM_INFO("Initialized %s %d.%d.%d %s\n", >- CORE_NAME, CORE_MAJOR, CORE_MINOR, CORE_PATCHLEVEL, CORE_DATE); >+ CORE_NAME, >+ CORE_MAJOR, CORE_MINOR, CORE_PATCHLEVEL, CORE_DATE); > return 0; >- err_p3: >- drm_sysfs_destroy(drm_class); >- err_p2: >+err_p3: >+ drm_sysfs_destroy(); >+err_p2: > unregister_chrdev(DRM_MAJOR, "drm"); > drm_free(drm_heads, sizeof(*drm_heads) * drm_cards_limit, DRM_MEM_STUB); >- err_p1: >+err_p1: > return ret; > } > > static void __exit drm_core_exit(void) > { > remove_proc_entry("dri", NULL); >- drm_sysfs_destroy(drm_class); >+ drm_sysfs_destroy(); > > unregister_chrdev(DRM_MAJOR, "drm"); > >@@ -412,7 +545,7 @@ module_exit(drm_core_exit); > * Get version information > * > * \param inode device inode. >- * \param filp file pointer. >+ * \param file_priv DRM file private. > * \param cmd command. > * \param arg user argument, pointing to a drm_version structure. > * \return zero on success or negative number on failure. 
>@@ -446,43 +579,73 @@ static int drm_version(struct drm_device > * > * Looks up the ioctl function in the ::ioctls table, checking for root > * previleges if so required, and dispatches to the respective function. >+ * >+ * Copies data in and out according to the size and direction given in cmd, >+ * which must match the ioctl cmd known by the kernel. The kernel uses a 512 >+ * byte stack buffer to store the ioctl arguments in kernel space. Should we >+ * ever need much larger ioctl arguments, we may need to allocate memory. > */ > int drm_ioctl(struct inode *inode, struct file *filp, > unsigned int cmd, unsigned long arg) > { >+ return drm_unlocked_ioctl(filp, cmd, arg); >+} >+EXPORT_SYMBOL(drm_ioctl); >+ >+long drm_unlocked_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) >+{ > struct drm_file *file_priv = filp->private_data; > struct drm_device *dev = file_priv->head->dev; > struct drm_ioctl_desc *ioctl; > drm_ioctl_t *func; > unsigned int nr = DRM_IOCTL_NR(cmd); > int retcode = -EINVAL; >- char *kdata = NULL; >+ char kdata[512]; > > atomic_inc(&dev->ioctl_count); > atomic_inc(&dev->counts[_DRM_STAT_IOCTLS]); > ++file_priv->ioctl_count; > > DRM_DEBUG("pid=%d, cmd=0x%02x, nr=0x%02x, dev 0x%lx, auth=%d\n", >- current->pid, cmd, nr, >- (long)old_encode_dev(file_priv->head->device), >+ current->pid, cmd, nr, (long)old_encode_dev(file_priv->head->device), > file_priv->authenticated); > > if ((nr >= DRM_CORE_IOCTL_COUNT) && > ((nr < DRM_COMMAND_BASE) || (nr >= DRM_COMMAND_END))) > goto err_i1; >- if ((nr >= DRM_COMMAND_BASE) && (nr < DRM_COMMAND_END) && >- (nr < DRM_COMMAND_BASE + dev->driver->num_ioctls)) >+ if ((nr >= DRM_COMMAND_BASE) && (nr < DRM_COMMAND_END) >+ && (nr < DRM_COMMAND_BASE + dev->driver->num_ioctls)) > ioctl = &dev->driver->ioctls[nr - DRM_COMMAND_BASE]; > else if ((nr >= DRM_COMMAND_END) || (nr < DRM_COMMAND_BASE)) > ioctl = &drm_ioctls[nr]; >- else >+ else { >+ retcode = -EINVAL; > goto err_i1; >- >+ } >+#if 0 >+ /* >+ * This check 
is disabled, because driver private ioctl->cmd >+ * are not the ioctl commands with size and direction bits but >+ * just the indices. The DRM core ioctl->cmd are the proper ioctl >+ * commands. The drivers' ioctl tables need to be fixed. >+ */ >+ if (ioctl->cmd != cmd) { >+ retcode = -EINVAL; >+ goto err_i1; >+ } >+#endif > func = ioctl->func; > /* is there a local override? */ > if ((nr == DRM_IOCTL_NR(DRM_IOCTL_DMA)) && dev->driver->dma_ioctl) > func = dev->driver->dma_ioctl; > >+ if (cmd & IOC_IN) { >+ if (copy_from_user(kdata, (void __user *)arg, >+ _IOC_SIZE(cmd)) != 0) { >+ retcode = -EACCES; >+ goto err_i1; >+ } >+ } > > if (!func) { > DRM_DEBUG("no function\n"); >@@ -492,38 +655,22 @@ int drm_ioctl(struct inode *inode, struc > ((ioctl->flags & DRM_MASTER) && !file_priv->master)) { > retcode = -EACCES; > } else { >- if (cmd & (IOC_IN | IOC_OUT)) { >- kdata = kmalloc(_IOC_SIZE(cmd), GFP_KERNEL); >- if (!kdata) >- return -ENOMEM; >- } >- >- if (cmd & IOC_IN) { >- if (copy_from_user(kdata, (void __user *)arg, >- _IOC_SIZE(cmd)) != 0) { >- retcode = -EACCES; >- goto err_i1; >- } >- } > retcode = func(dev, kdata, file_priv); >+ } > >- if (cmd & IOC_OUT) { >- if (copy_to_user((void __user *)arg, kdata, >- _IOC_SIZE(cmd)) != 0) >- retcode = -EACCES; >- } >+ if ((retcode == 0) && (cmd & IOC_OUT)) { >+ if (copy_to_user((void __user *)arg, kdata, >+ _IOC_SIZE(cmd)) != 0) >+ retcode = -EACCES; > } > >- err_i1: >- if (kdata) >- kfree(kdata); >+err_i1: > atomic_dec(&dev->ioctl_count); > if (retcode) >- DRM_DEBUG("ret = %x\n", retcode); >+ DRM_DEBUG("ret = %d\n", retcode); > return retcode; > } >- >-EXPORT_SYMBOL(drm_ioctl); >+EXPORT_SYMBOL(drm_unlocked_ioctl); > > drm_local_map_t *drm_getsarea(struct drm_device *dev) > { >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_fence.c linux-2.6.23.i686/drivers/char/drm/drm_fence.c >--- linux-2.6.23.i686.orig/drivers/char/drm/drm_fence.c 1970-01-01 01:00:00.000000000 +0100 >+++ 
linux-2.6.23.i686/drivers/char/drm/drm_fence.c 2008-01-06 09:24:57.000000000 +0100 >@@ -0,0 +1,847 @@ >+/************************************************************************** >+ * >+ * Copyright (c) 2006-2007 Tungsten Graphics, Inc., Cedar Park, TX., USA >+ * All Rights Reserved. >+ * >+ * Permission is hereby granted, free of charge, to any person obtaining a >+ * copy of this software and associated documentation files (the >+ * "Software"), to deal in the Software without restriction, including >+ * without limitation the rights to use, copy, modify, merge, publish, >+ * distribute, sub license, and/or sell copies of the Software, and to >+ * permit persons to whom the Software is furnished to do so, subject to >+ * the following conditions: >+ * >+ * The above copyright notice and this permission notice (including the >+ * next paragraph) shall be included in all copies or substantial portions >+ * of the Software. >+ * >+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR >+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, >+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL >+ * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, >+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR >+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE >+ * USE OR OTHER DEALINGS IN THE SOFTWARE. >+ * >+ **************************************************************************/ >+/* >+ * Authors: Thomas Hellström <thomas-at-tungstengraphics-dot-com> >+ */ >+ >+#include "drmP.h" >+ >+/* >+ * Typically called by the IRQ handler. 
>+ */ >+ >+void drm_fence_handler(struct drm_device *dev, uint32_t fence_class, >+ uint32_t sequence, uint32_t type, uint32_t error) >+{ >+ int wake = 0; >+ uint32_t diff; >+ uint32_t relevant; >+ struct drm_fence_manager *fm = &dev->fm; >+ struct drm_fence_class_manager *fc = &fm->fence_class[fence_class]; >+ struct drm_fence_driver *driver = dev->driver->fence_driver; >+ struct list_head *head; >+ struct drm_fence_object *fence, *next; >+ int found = 0; >+ int is_exe = (type & DRM_FENCE_TYPE_EXE); >+ int ge_last_exe; >+ >+ >+ diff = (sequence - fc->exe_flush_sequence) & driver->sequence_mask; >+ >+ if (fc->pending_exe_flush && is_exe && diff < driver->wrap_diff) >+ fc->pending_exe_flush = 0; >+ >+ diff = (sequence - fc->last_exe_flush) & driver->sequence_mask; >+ ge_last_exe = diff < driver->wrap_diff; >+ >+ if (is_exe && ge_last_exe) >+ fc->last_exe_flush = sequence; >+ >+ if (list_empty(&fc->ring)) >+ return; >+ >+ list_for_each_entry(fence, &fc->ring, ring) { >+ diff = (sequence - fence->sequence) & driver->sequence_mask; >+ if (diff > driver->wrap_diff) { >+ found = 1; >+ break; >+ } >+ } >+ >+ fc->pending_flush &= ~type; >+ head = (found) ? 
&fence->ring : &fc->ring; >+ >+ list_for_each_entry_safe_reverse(fence, next, head, ring) { >+ if (&fence->ring == &fc->ring) >+ break; >+ >+ if (error) { >+ fence->error = error; >+ fence->signaled = fence->type; >+ fence->submitted_flush = fence->type; >+ fence->flush_mask = fence->type; >+ list_del_init(&fence->ring); >+ wake = 1; >+ break; >+ } >+ >+ if (is_exe) >+ type |= fence->native_type; >+ >+ relevant = type & fence->type; >+ >+ if ((fence->signaled | relevant) != fence->signaled) { >+ fence->signaled |= relevant; >+ fence->flush_mask |= relevant; >+ fence->submitted_flush |= relevant; >+ DRM_DEBUG("Fence 0x%08lx signaled 0x%08x\n", >+ fence->base.hash.key, fence->signaled); >+ wake = 1; >+ } >+ >+ relevant = fence->flush_mask & >+ ~(fence->submitted_flush | fence->signaled); >+ >+ fc->pending_flush |= relevant; >+ fence->submitted_flush |= relevant; >+ >+ if (!(fence->type & ~fence->signaled)) { >+ DRM_DEBUG("Fence completely signaled 0x%08lx\n", >+ fence->base.hash.key); >+ list_del_init(&fence->ring); >+ } >+ >+ } >+ >+ /* >+ * Reinstate lost flush flags. 
>+ */ >+ >+ if ((fc->pending_flush & type) != type) { >+ head = head->prev; >+ list_for_each_entry(fence, head, ring) { >+ if (&fence->ring == &fc->ring) >+ break; >+ diff = (fc->last_exe_flush - fence->sequence) & >+ driver->sequence_mask; >+ if (diff > driver->wrap_diff) >+ break; >+ >+ relevant = fence->submitted_flush & ~fence->signaled; >+ fc->pending_flush |= relevant; >+ } >+ } >+ >+ if (wake) { >+ DRM_WAKEUP(&fc->fence_queue); >+ } >+} >+EXPORT_SYMBOL(drm_fence_handler); >+ >+static void drm_fence_unring(struct drm_device *dev, struct list_head *ring) >+{ >+ struct drm_fence_manager *fm = &dev->fm; >+ unsigned long flags; >+ >+ write_lock_irqsave(&fm->lock, flags); >+ list_del_init(ring); >+ write_unlock_irqrestore(&fm->lock, flags); >+} >+ >+void drm_fence_usage_deref_locked(struct drm_fence_object **fence) >+{ >+ struct drm_fence_object *tmp_fence = *fence; >+ struct drm_device *dev = tmp_fence->dev; >+ struct drm_fence_manager *fm = &dev->fm; >+ >+ DRM_ASSERT_LOCKED(&dev->struct_mutex); >+ *fence = NULL; >+ if (atomic_dec_and_test(&tmp_fence->usage)) { >+ drm_fence_unring(dev, &tmp_fence->ring); >+ DRM_DEBUG("Destroyed a fence object 0x%08lx\n", >+ tmp_fence->base.hash.key); >+ atomic_dec(&fm->count); >+ BUG_ON(!list_empty(&tmp_fence->base.list)); >+ drm_ctl_free(tmp_fence, sizeof(*tmp_fence), DRM_MEM_FENCE); >+ } >+} >+EXPORT_SYMBOL(drm_fence_usage_deref_locked); >+ >+void drm_fence_usage_deref_unlocked(struct drm_fence_object **fence) >+{ >+ struct drm_fence_object *tmp_fence = *fence; >+ struct drm_device *dev = tmp_fence->dev; >+ struct drm_fence_manager *fm = &dev->fm; >+ >+ *fence = NULL; >+ if (atomic_dec_and_test(&tmp_fence->usage)) { >+ mutex_lock(&dev->struct_mutex); >+ if (atomic_read(&tmp_fence->usage) == 0) { >+ drm_fence_unring(dev, &tmp_fence->ring); >+ atomic_dec(&fm->count); >+ BUG_ON(!list_empty(&tmp_fence->base.list)); >+ drm_ctl_free(tmp_fence, sizeof(*tmp_fence), DRM_MEM_FENCE); >+ } >+ mutex_unlock(&dev->struct_mutex); >+ } >+} 
>+EXPORT_SYMBOL(drm_fence_usage_deref_unlocked); >+ >+struct drm_fence_object >+*drm_fence_reference_locked(struct drm_fence_object *src) >+{ >+ DRM_ASSERT_LOCKED(&src->dev->struct_mutex); >+ >+ atomic_inc(&src->usage); >+ return src; >+} >+ >+void drm_fence_reference_unlocked(struct drm_fence_object **dst, >+ struct drm_fence_object *src) >+{ >+ mutex_lock(&src->dev->struct_mutex); >+ *dst = src; >+ atomic_inc(&src->usage); >+ mutex_unlock(&src->dev->struct_mutex); >+} >+EXPORT_SYMBOL(drm_fence_reference_unlocked); >+ >+static void drm_fence_object_destroy(struct drm_file *priv, >+ struct drm_user_object *base) >+{ >+ struct drm_fence_object *fence = >+ drm_user_object_entry(base, struct drm_fence_object, base); >+ >+ drm_fence_usage_deref_locked(&fence); >+} >+ >+int drm_fence_object_signaled(struct drm_fence_object *fence, >+ uint32_t mask, int poke_flush) >+{ >+ unsigned long flags; >+ int signaled; >+ struct drm_device *dev = fence->dev; >+ struct drm_fence_manager *fm = &dev->fm; >+ struct drm_fence_driver *driver = dev->driver->fence_driver; >+ >+ if (poke_flush) >+ driver->poke_flush(dev, fence->fence_class); >+ read_lock_irqsave(&fm->lock, flags); >+ signaled = >+ (fence->type & mask & fence->signaled) == (fence->type & mask); >+ read_unlock_irqrestore(&fm->lock, flags); >+ >+ return signaled; >+} >+EXPORT_SYMBOL(drm_fence_object_signaled); >+ >+static void drm_fence_flush_exe(struct drm_fence_class_manager *fc, >+ struct drm_fence_driver *driver, >+ uint32_t sequence) >+{ >+ uint32_t diff; >+ >+ if (!fc->pending_exe_flush) { >+ fc->exe_flush_sequence = sequence; >+ fc->pending_exe_flush = 1; >+ } else { >+ diff = (sequence - fc->exe_flush_sequence) & driver->sequence_mask; >+ if (diff < driver->wrap_diff) >+ fc->exe_flush_sequence = sequence; >+ } >+} >+ >+int drm_fence_object_flush(struct drm_fence_object *fence, >+ uint32_t type) >+{ >+ struct drm_device *dev = fence->dev; >+ struct drm_fence_manager *fm = &dev->fm; >+ struct drm_fence_class_manager *fc 
= &fm->fence_class[fence->fence_class]; >+ struct drm_fence_driver *driver = dev->driver->fence_driver; >+ unsigned long flags; >+ >+ if (type & ~fence->type) { >+ DRM_ERROR("Flush trying to extend fence type, " >+ "0x%x, 0x%x\n", type, fence->type); >+ return -EINVAL; >+ } >+ >+ write_lock_irqsave(&fm->lock, flags); >+ fence->flush_mask |= type; >+ if ((fence->submitted_flush & fence->signaled) >+ == fence->submitted_flush) { >+ if ((fence->type & DRM_FENCE_TYPE_EXE) && >+ !(fence->submitted_flush & DRM_FENCE_TYPE_EXE)) { >+ drm_fence_flush_exe(fc, driver, fence->sequence); >+ fence->submitted_flush |= DRM_FENCE_TYPE_EXE; >+ } else { >+ fc->pending_flush |= (fence->flush_mask & >+ ~fence->submitted_flush); >+ fence->submitted_flush = fence->flush_mask; >+ } >+ } >+ write_unlock_irqrestore(&fm->lock, flags); >+ driver->poke_flush(dev, fence->fence_class); >+ return 0; >+} >+ >+/* >+ * Make sure old fence objects are signaled before their fence sequences are >+ * wrapped around and reused. 
>+ */ >+ >+void drm_fence_flush_old(struct drm_device *dev, uint32_t fence_class, >+ uint32_t sequence) >+{ >+ struct drm_fence_manager *fm = &dev->fm; >+ struct drm_fence_class_manager *fc = &fm->fence_class[fence_class]; >+ struct drm_fence_driver *driver = dev->driver->fence_driver; >+ uint32_t old_sequence; >+ unsigned long flags; >+ struct drm_fence_object *fence; >+ uint32_t diff; >+ >+ write_lock_irqsave(&fm->lock, flags); >+ old_sequence = (sequence - driver->flush_diff) & driver->sequence_mask; >+ diff = (old_sequence - fc->last_exe_flush) & driver->sequence_mask; >+ >+ if ((diff < driver->wrap_diff) && !fc->pending_exe_flush) { >+ fc->pending_exe_flush = 1; >+ fc->exe_flush_sequence = sequence - (driver->flush_diff / 2); >+ } >+ write_unlock_irqrestore(&fm->lock, flags); >+ >+ mutex_lock(&dev->struct_mutex); >+ read_lock_irqsave(&fm->lock, flags); >+ >+ if (list_empty(&fc->ring)) { >+ read_unlock_irqrestore(&fm->lock, flags); >+ mutex_unlock(&dev->struct_mutex); >+ return; >+ } >+ fence = drm_fence_reference_locked(list_entry(fc->ring.next, struct drm_fence_object, ring)); >+ mutex_unlock(&dev->struct_mutex); >+ diff = (old_sequence - fence->sequence) & driver->sequence_mask; >+ read_unlock_irqrestore(&fm->lock, flags); >+ if (diff < driver->wrap_diff) >+ drm_fence_object_flush(fence, fence->type); >+ drm_fence_usage_deref_unlocked(&fence); >+} >+EXPORT_SYMBOL(drm_fence_flush_old); >+ >+static int drm_fence_lazy_wait(struct drm_fence_object *fence, >+ int ignore_signals, >+ uint32_t mask) >+{ >+ struct drm_device *dev = fence->dev; >+ struct drm_fence_manager *fm = &dev->fm; >+ struct drm_fence_class_manager *fc = &fm->fence_class[fence->fence_class]; >+ int signaled; >+ unsigned long _end = jiffies + 3*DRM_HZ; >+ int ret = 0; >+ >+ do { >+ DRM_WAIT_ON(ret, fc->fence_queue, 3 * DRM_HZ, >+ (signaled = drm_fence_object_signaled(fence, mask, 1))); >+ if (signaled) >+ return 0; >+ if (time_after_eq(jiffies, _end)) >+ break; >+ } while (ret == -EINTR && 
ignore_signals); >+ if (drm_fence_object_signaled(fence, mask, 0)) >+ return 0; >+ if (time_after_eq(jiffies, _end)) >+ ret = -EBUSY; >+ if (ret) { >+ if (ret == -EBUSY) { >+ DRM_ERROR("Fence timeout. " >+ "GPU lockup or fence driver was " >+ "taken down. %d 0x%08x 0x%02x 0x%02x 0x%02x\n", >+ fence->fence_class, >+ fence->sequence, >+ fence->type, >+ mask, >+ fence->signaled); >+ DRM_ERROR("Pending exe flush %d 0x%08x\n", >+ fc->pending_exe_flush, >+ fc->exe_flush_sequence); >+ } >+ return ((ret == -EINTR) ? -EAGAIN : ret); >+ } >+ return 0; >+} >+ >+int drm_fence_object_wait(struct drm_fence_object *fence, >+ int lazy, int ignore_signals, uint32_t mask) >+{ >+ struct drm_device *dev = fence->dev; >+ struct drm_fence_driver *driver = dev->driver->fence_driver; >+ int ret = 0; >+ unsigned long _end; >+ int signaled; >+ >+ if (mask & ~fence->type) { >+ DRM_ERROR("Wait trying to extend fence type" >+ " 0x%08x 0x%08x\n", mask, fence->type); >+ BUG(); >+ return -EINVAL; >+ } >+ >+ if (drm_fence_object_signaled(fence, mask, 0)) >+ return 0; >+ >+ _end = jiffies + 3 * DRM_HZ; >+ >+ drm_fence_object_flush(fence, mask); >+ >+ if (lazy && driver->lazy_capable) { >+ >+ ret = drm_fence_lazy_wait(fence, ignore_signals, mask); >+ if (ret) >+ return ret; >+ >+ } else { >+ >+ if (driver->has_irq(dev, fence->fence_class, >+ DRM_FENCE_TYPE_EXE)) { >+ ret = drm_fence_lazy_wait(fence, ignore_signals, >+ DRM_FENCE_TYPE_EXE); >+ if (ret) >+ return ret; >+ } >+ >+ if (driver->has_irq(dev, fence->fence_class, >+ mask & ~DRM_FENCE_TYPE_EXE)) { >+ ret = drm_fence_lazy_wait(fence, ignore_signals, >+ mask); >+ if (ret) >+ return ret; >+ } >+ } >+ if (drm_fence_object_signaled(fence, mask, 0)) >+ return 0; >+ >+ /* >+ * Avoid kernel-space busy-waits. 
>+ */ >+ if (!ignore_signals) >+ return -EAGAIN; >+ >+ do { >+ schedule(); >+ signaled = drm_fence_object_signaled(fence, mask, 1); >+ } while (!signaled && !time_after_eq(jiffies, _end)); >+ >+ if (!signaled) >+ return -EBUSY; >+ >+ return 0; >+} >+EXPORT_SYMBOL(drm_fence_object_wait); >+ >+int drm_fence_object_emit(struct drm_fence_object *fence, uint32_t fence_flags, >+ uint32_t fence_class, uint32_t type) >+{ >+ struct drm_device *dev = fence->dev; >+ struct drm_fence_manager *fm = &dev->fm; >+ struct drm_fence_driver *driver = dev->driver->fence_driver; >+ struct drm_fence_class_manager *fc = &fm->fence_class[fence->fence_class]; >+ unsigned long flags; >+ uint32_t sequence; >+ uint32_t native_type; >+ int ret; >+ >+ drm_fence_unring(dev, &fence->ring); >+ ret = driver->emit(dev, fence_class, fence_flags, &sequence, >+ &native_type); >+ if (ret) >+ return ret; >+ >+ write_lock_irqsave(&fm->lock, flags); >+ fence->fence_class = fence_class; >+ fence->type = type; >+ fence->flush_mask = 0x00; >+ fence->submitted_flush = 0x00; >+ fence->signaled = 0x00; >+ fence->sequence = sequence; >+ fence->native_type = native_type; >+ if (list_empty(&fc->ring)) >+ fc->last_exe_flush = sequence - 1; >+ list_add_tail(&fence->ring, &fc->ring); >+ write_unlock_irqrestore(&fm->lock, flags); >+ return 0; >+} >+EXPORT_SYMBOL(drm_fence_object_emit); >+ >+static int drm_fence_object_init(struct drm_device *dev, uint32_t fence_class, >+ uint32_t type, >+ uint32_t fence_flags, >+ struct drm_fence_object *fence) >+{ >+ int ret = 0; >+ unsigned long flags; >+ struct drm_fence_manager *fm = &dev->fm; >+ >+ mutex_lock(&dev->struct_mutex); >+ atomic_set(&fence->usage, 1); >+ mutex_unlock(&dev->struct_mutex); >+ >+ write_lock_irqsave(&fm->lock, flags); >+ INIT_LIST_HEAD(&fence->ring); >+ >+ /* >+ * Avoid hitting BUG() for kernel-only fence objects. 
>+ */ >+ >+ INIT_LIST_HEAD(&fence->base.list); >+ fence->fence_class = fence_class; >+ fence->type = type; >+ fence->flush_mask = 0; >+ fence->submitted_flush = 0; >+ fence->signaled = 0; >+ fence->sequence = 0; >+ fence->dev = dev; >+ write_unlock_irqrestore(&fm->lock, flags); >+ if (fence_flags & DRM_FENCE_FLAG_EMIT) { >+ ret = drm_fence_object_emit(fence, fence_flags, >+ fence->fence_class, type); >+ } >+ return ret; >+} >+ >+int drm_fence_add_user_object(struct drm_file *priv, >+ struct drm_fence_object *fence, int shareable) >+{ >+ struct drm_device *dev = priv->head->dev; >+ int ret; >+ >+ mutex_lock(&dev->struct_mutex); >+ ret = drm_add_user_object(priv, &fence->base, shareable); >+ if (ret) >+ goto out; >+ atomic_inc(&fence->usage); >+ fence->base.type = drm_fence_type; >+ fence->base.remove = &drm_fence_object_destroy; >+ DRM_DEBUG("Fence 0x%08lx created\n", fence->base.hash.key); >+out: >+ mutex_unlock(&dev->struct_mutex); >+ return ret; >+} >+EXPORT_SYMBOL(drm_fence_add_user_object); >+ >+int drm_fence_object_create(struct drm_device *dev, uint32_t fence_class, >+ uint32_t type, unsigned flags, >+ struct drm_fence_object **c_fence) >+{ >+ struct drm_fence_object *fence; >+ int ret; >+ struct drm_fence_manager *fm = &dev->fm; >+ >+ fence = drm_ctl_calloc(1, sizeof(*fence), DRM_MEM_FENCE); >+ if (!fence) >+ return -ENOMEM; >+ ret = drm_fence_object_init(dev, fence_class, type, flags, fence); >+ if (ret) { >+ drm_fence_usage_deref_unlocked(&fence); >+ return ret; >+ } >+ *c_fence = fence; >+ atomic_inc(&fm->count); >+ >+ return 0; >+} >+EXPORT_SYMBOL(drm_fence_object_create); >+ >+void drm_fence_manager_init(struct drm_device *dev) >+{ >+ struct drm_fence_manager *fm = &dev->fm; >+ struct drm_fence_class_manager *fence_class; >+ struct drm_fence_driver *fed = dev->driver->fence_driver; >+ int i; >+ unsigned long flags; >+ >+ rwlock_init(&fm->lock); >+ write_lock_irqsave(&fm->lock, flags); >+ fm->initialized = 0; >+ if (!fed) >+ goto out_unlock; >+ >+ 
fm->initialized = 1; >+ fm->num_classes = fed->num_classes; >+ BUG_ON(fm->num_classes > _DRM_FENCE_CLASSES); >+ >+ for (i = 0; i < fm->num_classes; ++i) { >+ fence_class = &fm->fence_class[i]; >+ >+ INIT_LIST_HEAD(&fence_class->ring); >+ fence_class->pending_flush = 0; >+ DRM_INIT_WAITQUEUE(&fence_class->fence_queue); >+ } >+ >+ atomic_set(&fm->count, 0); >+ out_unlock: >+ write_unlock_irqrestore(&fm->lock, flags); >+} >+ >+void drm_fence_fill_arg(struct drm_fence_object *fence, >+ struct drm_fence_arg *arg) >+{ >+ struct drm_device *dev = fence->dev; >+ struct drm_fence_manager *fm = &dev->fm; >+ unsigned long irq_flags; >+ >+ read_lock_irqsave(&fm->lock, irq_flags); >+ arg->handle = fence->base.hash.key; >+ arg->fence_class = fence->fence_class; >+ arg->type = fence->type; >+ arg->signaled = fence->signaled; >+ arg->error = fence->error; >+ arg->sequence = fence->sequence; >+ read_unlock_irqrestore(&fm->lock, irq_flags); >+} >+EXPORT_SYMBOL(drm_fence_fill_arg); >+ >+void drm_fence_manager_takedown(struct drm_device *dev) >+{ >+} >+ >+struct drm_fence_object *drm_lookup_fence_object(struct drm_file *priv, >+ uint32_t handle) >+{ >+ struct drm_device *dev = priv->head->dev; >+ struct drm_user_object *uo; >+ struct drm_fence_object *fence; >+ >+ mutex_lock(&dev->struct_mutex); >+ uo = drm_lookup_user_object(priv, handle); >+ if (!uo || (uo->type != drm_fence_type)) { >+ mutex_unlock(&dev->struct_mutex); >+ return NULL; >+ } >+ fence = drm_fence_reference_locked(drm_user_object_entry(uo, struct drm_fence_object, base)); >+ mutex_unlock(&dev->struct_mutex); >+ return fence; >+} >+ >+int drm_fence_create_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) >+{ >+ int ret; >+ struct drm_fence_manager *fm = &dev->fm; >+ struct drm_fence_arg *arg = data; >+ struct drm_fence_object *fence; >+ ret = 0; >+ >+ if (!fm->initialized) { >+ DRM_ERROR("The DRM driver does not support fencing.\n"); >+ return -EINVAL; >+ } >+ >+ if (arg->flags & DRM_FENCE_FLAG_EMIT) 
>+	LOCK_TEST_WITH_RETURN(dev, file_priv);
>+	ret = drm_fence_object_create(dev, arg->fence_class,
>+				      arg->type, arg->flags, &fence);
>+	if (ret)
>+		return ret;
>+	ret = drm_fence_add_user_object(file_priv, fence,
>+					arg->flags &
>+					DRM_FENCE_FLAG_SHAREABLE);
>+	if (ret) {
>+		drm_fence_usage_deref_unlocked(&fence);
>+		return ret;
>+	}
>+
>+	/*
>+	 * usage > 0. No need to lock dev->struct_mutex;
>+	 */
>+
>+	arg->handle = fence->base.hash.key;
>+
>+	drm_fence_fill_arg(fence, arg);
>+	drm_fence_usage_deref_unlocked(&fence);
>+
>+	return ret;
>+}
>+
>+int drm_fence_reference_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv)
>+{
>+	int ret;
>+	struct drm_fence_manager *fm = &dev->fm;
>+	struct drm_fence_arg *arg = data;
>+	struct drm_fence_object *fence;
>+	struct drm_user_object *uo;
>+	ret = 0;
>+
>+	if (!fm->initialized) {
>+		DRM_ERROR("The DRM driver does not support fencing.\n");
>+		return -EINVAL;
>+	}
>+
>+	ret = drm_user_object_ref(file_priv, arg->handle, drm_fence_type, &uo);
>+	if (ret)
>+		return ret;
>+	fence = drm_lookup_fence_object(file_priv, arg->handle);
>+	drm_fence_fill_arg(fence, arg);
>+	drm_fence_usage_deref_unlocked(&fence);
>+
>+	return ret;
>+}
>+
>+
>+int drm_fence_unreference_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv)
>+{
>+	int ret;
>+	struct drm_fence_manager *fm = &dev->fm;
>+	struct drm_fence_arg *arg = data;
>+	ret = 0;
>+
>+	if (!fm->initialized) {
>+		DRM_ERROR("The DRM driver does not support fencing.\n");
>+		return -EINVAL;
>+	}
>+
>+	return drm_user_object_unref(file_priv, arg->handle, drm_fence_type);
>+}
>+
>+int drm_fence_signaled_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv)
>+{
>+	int ret;
>+	struct drm_fence_manager *fm = &dev->fm;
>+	struct drm_fence_arg *arg = data;
>+	struct drm_fence_object *fence;
>+	ret = 0;
>+
>+	if (!fm->initialized) {
>+		DRM_ERROR("The DRM driver does not support fencing.\n");
>+		return -EINVAL;
>+	}
>+
>+	fence = drm_lookup_fence_object(file_priv, arg->handle);
>+	if (!fence)
>+		return -EINVAL;
>+
>+	drm_fence_fill_arg(fence, arg);
>+	drm_fence_usage_deref_unlocked(&fence);
>+
>+	return ret;
>+}
>+
>+int drm_fence_flush_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv)
>+{
>+	int ret;
>+	struct drm_fence_manager *fm = &dev->fm;
>+	struct drm_fence_arg *arg = data;
>+	struct drm_fence_object *fence;
>+	ret = 0;
>+
>+	if (!fm->initialized) {
>+		DRM_ERROR("The DRM driver does not support fencing.\n");
>+		return -EINVAL;
>+	}
>+
>+	fence = drm_lookup_fence_object(file_priv, arg->handle);
>+	if (!fence)
>+		return -EINVAL;
>+	ret = drm_fence_object_flush(fence, arg->type);
>+
>+	drm_fence_fill_arg(fence, arg);
>+	drm_fence_usage_deref_unlocked(&fence);
>+
>+	return ret;
>+}
>+
>+
>+int drm_fence_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv)
>+{
>+	int ret;
>+	struct drm_fence_manager *fm = &dev->fm;
>+	struct drm_fence_arg *arg = data;
>+	struct drm_fence_object *fence;
>+	ret = 0;
>+
>+	if (!fm->initialized) {
>+		DRM_ERROR("The DRM driver does not support fencing.\n");
>+		return -EINVAL;
>+	}
>+
>+	fence = drm_lookup_fence_object(file_priv, arg->handle);
>+	if (!fence)
>+		return -EINVAL;
>+	ret = drm_fence_object_wait(fence,
>+				    arg->flags & DRM_FENCE_FLAG_WAIT_LAZY,
>+				    0, arg->type);
>+
>+	drm_fence_fill_arg(fence, arg);
>+	drm_fence_usage_deref_unlocked(&fence);
>+
>+	return ret;
>+}
>+
>+
>+int drm_fence_emit_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv)
>+{
>+	int ret;
>+	struct drm_fence_manager *fm = &dev->fm;
>+	struct drm_fence_arg *arg = data;
>+	struct drm_fence_object *fence;
>+	ret = 0;
>+
>+	if (!fm->initialized) {
>+		DRM_ERROR("The DRM driver does not support fencing.\n");
>+		return -EINVAL;
>+	}
>+
>+	LOCK_TEST_WITH_RETURN(dev, file_priv);
>+	fence = drm_lookup_fence_object(file_priv, arg->handle);
>+	if (!fence)
>+		return -EINVAL;
>+	ret = drm_fence_object_emit(fence, arg->flags, arg->fence_class,
>+				    arg->type);
>+
>+	drm_fence_fill_arg(fence, arg);
>+	drm_fence_usage_deref_unlocked(&fence);
>+
>+	return ret;
>+}
>+
>+int drm_fence_buffers_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv)
>+{
>+	int ret;
>+	struct drm_fence_manager *fm = &dev->fm;
>+	struct drm_fence_arg *arg = data;
>+	struct drm_fence_object *fence;
>+	ret = 0;
>+
>+	if (!fm->initialized) {
>+		DRM_ERROR("The DRM driver does not support fencing.\n");
>+		return -EINVAL;
>+	}
>+
>+	if (!dev->bm.initialized) {
>+		DRM_ERROR("Buffer object manager is not initialized\n");
>+		return -EINVAL;
>+	}
>+	LOCK_TEST_WITH_RETURN(dev, file_priv);
>+	ret = drm_fence_buffer_objects(dev, NULL, arg->flags,
>+				       NULL, &fence);
>+	if (ret)
>+		return ret;
>+
>+	if (!(arg->flags & DRM_FENCE_FLAG_NO_USER)) {
>+		ret = drm_fence_add_user_object(file_priv, fence,
>+						arg->flags &
>+						DRM_FENCE_FLAG_SHAREABLE);
>+		if (ret)
>+			return ret;
>+	}
>+
>+	arg->handle = fence->base.hash.key;
>+
>+	drm_fence_fill_arg(fence, arg);
>+	drm_fence_usage_deref_unlocked(&fence);
>+
>+	return ret;
>+}
>diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_fops.c linux-2.6.23.i686/drivers/char/drm/drm_fops.c
>--- linux-2.6.23.i686.orig/drivers/char/drm/drm_fops.c	2008-01-06 18:54:41.000000000 +0100
>+++ linux-2.6.23.i686/drivers/char/drm/drm_fops.c	2008-01-06 09:24:57.000000000 +0100
>@@ -46,7 +46,7 @@ static int drm_setup(struct drm_device *
> 	drm_local_map_t *map;
> 	int i;
> 	int ret;
>-	u32 sareapage;
>+	int sareapage;
> 
> 	if (dev->driver->firstopen) {
> 		ret = dev->driver->firstopen(dev);
>@@ -57,7 +57,7 @@ static int drm_setup(struct drm_device *
> 	dev->magicfree.next = NULL;
> 
> 	/* prebuild the SAREA */
>-	sareapage = max_t(unsigned, SAREA_MAX, PAGE_SIZE);
>+	sareapage = max(SAREA_MAX, PAGE_SIZE);
> 	i = drm_addmap(dev, 0, sareapage, _DRM_SHM, _DRM_CONTAINS_LOCK, &map);
> 	if (i != 0)
> 		return i;
>@@ -85,7 +85,6 @@ static int drm_setup(struct drm_device *
> 	dev->queue_reserved = 0;
> 	dev->queue_slots = 0;
> 	dev->queuelist = NULL;
>-	dev->irq_enabled = 0;
> 	dev->context_flag = 0;
> 	dev->interrupt_flag = 0;
> 	dev->dma_flag = 0;
>@@ -147,11 +146,20 @@ int drm_open(struct inode *inode, struct
> 		spin_lock(&dev->count_lock);
> 		if (!dev->open_count++) {
> 			spin_unlock(&dev->count_lock);
>-			return drm_setup(dev);
>+			retcode = drm_setup(dev);
>+			goto out;
> 		}
> 		spin_unlock(&dev->count_lock);
> 	}
> 
>+out:
>+	mutex_lock(&dev->struct_mutex);
>+	BUG_ON((dev->dev_mapping != NULL) &&
>+	       (dev->dev_mapping != inode->i_mapping));
>+	if (dev->dev_mapping == NULL)
>+		dev->dev_mapping = inode->i_mapping;
>+	mutex_unlock(&dev->struct_mutex);
>+
> 	return retcode;
> }
> EXPORT_SYMBOL(drm_open);
>@@ -228,6 +236,7 @@ static int drm_open_helper(struct inode
> 	int minor = iminor(inode);
> 	struct drm_file *priv;
> 	int ret;
>+	int i, j;
> 
> 	if (filp->f_flags & O_EXCL)
> 		return -EBUSY;	/* No exclusive opens */
>@@ -253,6 +262,20 @@ static int drm_open_helper(struct inode
> 	priv->lock_count = 0;
> 
> 	INIT_LIST_HEAD(&priv->lhead);
>+	INIT_LIST_HEAD(&priv->refd_objects);
>+
>+	for (i = 0; i < _DRM_NO_REF_TYPES; ++i) {
>+		ret = drm_ht_create(&priv->refd_object_hash[i],
>+				    DRM_FILE_HASH_ORDER);
>+		if (ret)
>+			break;
>+	}
>+
>+	if (ret) {
>+		for (j = 0; j < i; ++j)
>+			drm_ht_remove(&priv->refd_object_hash[j]);
>+		goto out_free;
>+	}
> 
> 	if (dev->driver->open) {
> 		ret = dev->driver->open(dev, priv);
>@@ -309,6 +332,33 @@ int drm_fasync(int fd, struct file *filp
> }
> EXPORT_SYMBOL(drm_fasync);
> 
>+static void drm_object_release(struct file *filp)
>+{
>+	struct drm_file *priv = filp->private_data;
>+	struct list_head *head;
>+	struct drm_ref_object *ref_object;
>+	int i;
>+
>+	/*
>+	 * Free leftover ref objects created by me. Note that we cannot use
>+	 * list_for_each() here, as the struct_mutex may be temporarily
>+	 * released by the remove_() functions, and thus the lists may be
>+	 * altered.
>+	 * Also, a drm_remove_ref_object() will not remove it
>+	 * from the list unless its refcount is 1.
>+	 */
>+
>+	head = &priv->refd_objects;
>+	while (head->next != head) {
>+		ref_object = list_entry(head->next, struct drm_ref_object, list);
>+		drm_remove_ref_object(priv, ref_object);
>+		head = &priv->refd_objects;
>+	}
>+
>+	for (i = 0; i < _DRM_NO_REF_TYPES; ++i)
>+		drm_ht_remove(&priv->refd_object_hash[i]);
>+}
>+
> /**
>  * Release file.
>  *
>@@ -400,6 +450,7 @@ int drm_release(struct inode *inode, str
> 	drm_fasync(-1, filp, 0);
> 
> 	mutex_lock(&dev->ctxlist_mutex);
>+
> 	if (!list_empty(&dev->ctxlist)) {
> 		struct drm_ctx_list *pos, *n;
> 
>@@ -421,6 +472,7 @@ int drm_release(struct inode *inode, str
> 	mutex_unlock(&dev->ctxlist_mutex);
> 
> 	mutex_lock(&dev->struct_mutex);
>+	drm_object_release(filp);
> 	if (file_priv->remove_auth_on_close == 1) {
> 		struct drm_file *temp;
> 
>@@ -461,8 +513,17 @@ int drm_release(struct inode *inode, str
> EXPORT_SYMBOL(drm_release);
> 
> /** No-op. */
>+/* This is to deal with older X servers that believe 0 means data is
>+ * available which is not the correct return for a poll function.
>+ * This cannot be fixed until the Xserver is fixed. Xserver will need
>+ * to set a newer interface version to avoid breaking older Xservers.
>+ * Without fixing the Xserver you get: "WaitForSomething(): select: errno=22"
>+ * http://freedesktop.org/bugzilla/show_bug.cgi?id=1505 if you try
>+ * to return the correct response.
>+ */
> unsigned int drm_poll(struct file *filp, struct poll_table_struct *wait)
> {
>+	/* return (POLLIN | POLLOUT | POLLRDNORM | POLLWRNORM); */
> 	return 0;
> }
> EXPORT_SYMBOL(drm_poll);
>diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm.h linux-2.6.23.i686/drivers/char/drm/drm.h
>--- linux-2.6.23.i686.orig/drivers/char/drm/drm.h	2008-01-06 18:54:41.000000000 +0100
>+++ linux-2.6.23.i686/drivers/char/drm/drm.h	2008-01-06 09:24:57.000000000 +0100
>@@ -33,12 +33,45 @@
>  * OTHER DEALINGS IN THE SOFTWARE.
>  */
> 
>+/**
>+ * \mainpage
>+ *
>+ * The Direct Rendering Manager (DRM) is a device-independent kernel-level
>+ * device driver that provides support for the XFree86 Direct Rendering
>+ * Infrastructure (DRI).
>+ *
>+ * The DRM supports the Direct Rendering Infrastructure (DRI) in four major
>+ * ways:
>+ *     -# The DRM provides synchronized access to the graphics hardware via
>+ *        the use of an optimized two-tiered lock.
>+ *     -# The DRM enforces the DRI security policy for access to the graphics
>+ *        hardware by only allowing authenticated X11 clients access to
>+ *        restricted regions of memory.
>+ *     -# The DRM provides a generic DMA engine, complete with multiple
>+ *        queues and the ability to detect the need for an OpenGL context
>+ *        switch.
>+ *     -# The DRM is extensible via the use of small device-specific modules
>+ *        that rely extensively on the API exported by the DRM module.
>+ *
>+ */
>+
> #ifndef _DRM_H_
> #define _DRM_H_
> 
>-#if defined(__linux__)
>-#if defined(__KERNEL__)
>+#ifndef __user
>+#define __user
> #endif
>+#ifndef __iomem
>+#define __iomem
>+#endif
>+
>+#ifdef __GNUC__
>+# define DEPRECATED  __attribute__ ((deprecated))
>+#else
>+# define DEPRECATED
>+#endif
>+
>+#if defined(__linux__)
> #include <asm/ioctl.h>		/* For _IO* macros */
> #define DRM_IOCTL_NR(n)		_IOC_NR(n)
> #define DRM_IOC_VOID		_IOC_NONE
>@@ -46,15 +79,8 @@
> #define DRM_IOC_WRITE		_IOC_WRITE
> #define DRM_IOC_READWRITE	_IOC_READ|_IOC_WRITE
> #define DRM_IOC(dir, group, nr, size) _IOC(dir, group, nr, size)
>-#elif defined(__FreeBSD__) || defined(__NetBSD__) || defined(__OpenBSD__)
>-#if defined(__FreeBSD__) && defined(IN_MODULE)
>-/* Prevent name collision when including sys/ioccom.h */
>-#undef ioctl
>+#elif defined(__FreeBSD__) || defined(__FreeBSD_kernel__) || defined(__NetBSD__) || defined(__OpenBSD__) || defined(__DragonFly__)
> #include <sys/ioccom.h>
>-#define ioctl(a,b,c)		xf86ioctl(a,b,c)
>-#else
>-#include <sys/ioccom.h>
>-#endif				/* __FreeBSD__ && xf86ioctl */
> #define DRM_IOCTL_NR(n)		((n) & 0xff)
> #define DRM_IOC_VOID		IOC_VOID
> #define DRM_IOC_READ		IOC_OUT
>@@ -63,7 +89,12 @@
> #define DRM_IOC(dir, group, nr, size) _IOC(dir, group, nr, size)
> #endif
> 
>+#ifdef __OpenBSD__
>+#define DRM_MAJOR       81
>+#endif
>+#if defined(__linux__) || defined(__NetBSD__)
> #define DRM_MAJOR       226
>+#endif
> #define DRM_MAX_MINOR   15
> 
> #define DRM_NAME	"drm"	  /**< Name in kernel, /dev, and /proc */
>@@ -77,15 +108,20 @@
> #define _DRM_LOCK_IS_CONT(lock)	   ((lock) & _DRM_LOCK_CONT)
> #define _DRM_LOCKING_CONTEXT(lock) ((lock) & ~(_DRM_LOCK_HELD|_DRM_LOCK_CONT))
> 
>+#if defined(__linux__)
> typedef unsigned int drm_handle_t;
>-typedef unsigned int drm_context_t;
>+#else
>+#include <sys/types.h>
>+typedef unsigned long drm_handle_t;	/**< To mapped regions */
>+#endif
>+typedef unsigned int drm_context_t;	/**< GLXContext handle */
> typedef unsigned int drm_drawable_t;
>-typedef unsigned int drm_magic_t;
>+typedef unsigned int drm_magic_t;	/**< Magic for authentication */
> 
> /**
>  * Cliprect.
>  *
>- * \warning: If you change this structure, make sure you change
>+ * \warning If you change this structure, make sure you change
>  * XF86DRIClipRectRec in the server as well
>  *
>  * \note KW: Actually it's illegal to change either for
>@@ -99,14 +135,6 @@ struct drm_clip_rect {
> };
> 
> /**
>- * Drawable information.
>- */
>-struct drm_drawable_info {
>-	unsigned int num_rects;
>-	struct drm_clip_rect *rects;
>-};
>-
>-/**
>  * Texture region,
>  */
> struct drm_tex_region {
>@@ -129,6 +157,22 @@ struct drm_hw_lock {
> 	char padding[60];	/**< Pad to cache line */
> };
> 
>+/* This is beyond ugly, and only works on GCC.  However, it allows me to use
>+ * drm.h in places (i.e., in the X-server) where I can't use size_t.  The real
>+ * fix is to use uint32_t instead of size_t, but that fix will break existing
>+ * LP64 (i.e., PowerPC64, SPARC64, IA-64, Alpha, etc.) systems.  That *will*
>+ * eventually happen, though.  I chose 'unsigned long' to be the fallback type
>+ * because that works on all the platforms I know about.  Hopefully, the
>+ * real fix will happen before that bites us.
>+ */
>+
>+#ifdef __SIZE_TYPE__
>+# define DRM_SIZE_T __SIZE_TYPE__
>+#else
>+# warning "__SIZE_TYPE__ not defined.  Assuming sizeof(size_t) == sizeof(unsigned long)!"
>+# define DRM_SIZE_T unsigned long
>+#endif
>+
> /**
>  * DRM_IOCTL_VERSION ioctl argument type.
>  *
>@@ -138,12 +182,12 @@ struct drm_version {
> 	int version_major;	  /**< Major version */
> 	int version_minor;	  /**< Minor version */
> 	int version_patchlevel;	  /**< Patch level */
>-	size_t name_len;	  /**< Length of name buffer */
>-	char __user *name;	  /**< Name of driver */
>-	size_t date_len;	  /**< Length of date buffer */
>-	char __user *date;	  /**< User-space buffer to hold date */
>-	size_t desc_len;	  /**< Length of desc buffer */
>-	char __user *desc;	  /**< User-space buffer to hold desc */
>+	DRM_SIZE_T name_len;	  /**< Length of name buffer */
>+	char __user *name;	  /**< Name of driver */
>+	DRM_SIZE_T date_len;	  /**< Length of date buffer */
>+	char __user *date;	  /**< User-space buffer to hold date */
>+	DRM_SIZE_T desc_len;	  /**< Length of desc buffer */
>+	char __user *desc;	  /**< User-space buffer to hold desc */
> };
> 
> /**
>@@ -152,10 +196,12 @@ struct drm_version {
>  * \sa drmGetBusid() and drmSetBusId().
>  */
> struct drm_unique {
>-	size_t unique_len;	  /**< Length of unique */
>-	char __user *unique;	  /**< Unique name for driver instantiation */
>+	DRM_SIZE_T unique_len;	  /**< Length of unique */
>+	char __user *unique;	  /**< Unique name for driver instantiation */
> };
> 
>+#undef DRM_SIZE_T
>+
> struct drm_list {
> 	int count;		  /**< Length of user-space structures */
> 	struct drm_version __user *version;
>@@ -190,6 +236,7 @@ enum drm_map_type {
> 	_DRM_AGP = 3,		  /**< AGP/GART */
> 	_DRM_SCATTER_GATHER = 4,  /**< Scatter/gather memory for PCI DMA */
> 	_DRM_CONSISTENT = 5,	  /**< Consistent memory for PCI DMA */
>+	_DRM_TTM = 6
> };
> 
> /**
>@@ -202,7 +249,8 @@ enum drm_map_flags {
> 	_DRM_KERNEL = 0x08,	     /**< kernel requires access */
> 	_DRM_WRITE_COMBINING = 0x10, /**< use write-combining if available */
> 	_DRM_CONTAINS_LOCK = 0x20,   /**< SHM page that contains lock */
>-	_DRM_REMOVABLE = 0x40	     /**< Removable mapping */
>+	_DRM_REMOVABLE = 0x40,	     /**< Removable mapping */
>+	_DRM_DRIVER = 0x80	     /**< Managed by driver */
> };
> 
> struct drm_ctx_priv_map {
>@@ -337,8 +385,8 @@ struct drm_buf_desc {
> 	enum {
> 		_DRM_PAGE_ALIGN = 0x01,	/**< Align on page boundaries for DMA */
> 		_DRM_AGP_BUFFER = 0x02,	/**< Buffer is in AGP space */
>-		_DRM_SG_BUFFER  = 0x04,	/**< Scatter/gather memory buffer */
>-		_DRM_FB_BUFFER  = 0x08,	/**< Buffer is in frame buffer */
>+		_DRM_SG_BUFFER = 0x04,	/**< Scatter/gather memory buffer */
>+		_DRM_FB_BUFFER = 0x08,	/**< Buffer is in frame buffer */
> 		_DRM_PCI_BUFFER_RO = 0x10 /**< Map PCI DMA buffer read-only */
> 	} flags;
> 	unsigned long agp_start; /**<
>@@ -351,8 +399,8 @@ struct drm_buf_desc {
>  * DRM_IOCTL_INFO_BUFS ioctl argument type.
>  */
> struct drm_buf_info {
>-	int count;		/**< Entries in list */
>-	struct drm_buf_desc __user *list;
>+	int count;		/**< Number of buffers described in list */
>+	struct drm_buf_desc __user *list; /**< List of buffer descriptions */
> };
> 
> /**
>@@ -380,7 +428,11 @@ struct drm_buf_pub {
>  */
> struct drm_buf_map {
> 	int count;		/**< Length of the buffer list */
>+#if defined(__cplusplus)
>+	void __user *c_virtual;
>+#else
> 	void __user *virtual;		/**< Mmap'd area in user-virtual */
>+#endif
> 	struct drm_buf_pub __user *list;	/**< Buffer information */
> };
> 
>@@ -399,7 +451,7 @@ struct drm_dma {
> 	enum drm_dma_flags flags;	  /**< Flags */
> 	int request_count;	  /**< Number of buffers requested */
> 	int request_size;	  /**< Desired size for buffers */
>-	int __user *request_indices;	 /**< Buffer information */
>+	int __user *request_indices;	/**< Buffer information */
> 	int __user *request_sizes;
> 	int granted_count;	  /**< Number of buffers granted */
> };
>@@ -470,6 +522,7 @@ struct drm_irq_busid {
> enum drm_vblank_seq_type {
> 	_DRM_VBLANK_ABSOLUTE = 0x0,	/**< Wait for specific vblank sequence number */
> 	_DRM_VBLANK_RELATIVE = 0x1,	/**< Wait for given number of vblanks */
>+	_DRM_VBLANK_FLIP = 0x8000000,	/**< Scheduled buffer swap should flip */
> 	_DRM_VBLANK_NEXTONMISS = 0x10000000,	/**< If missed, wait for next vblank */
> 	_DRM_VBLANK_SECONDARY = 0x20000000,	/**< Secondary display controller */
> 	_DRM_VBLANK_SIGNAL = 0x40000000	/**< Send signal instead of blocking */
>@@ -544,14 +597,16 @@ struct drm_agp_info {
> 	int agp_version_major;
> 	int agp_version_minor;
> 	unsigned long mode;
>-	unsigned long aperture_base;	/* physical address */
>-	unsigned long aperture_size;	/* bytes */
>-	unsigned long memory_allowed;	/* bytes */
>+	unsigned long aperture_base;   /**< physical address */
>+	unsigned long aperture_size;   /**< bytes */
>+	unsigned long memory_allowed;  /**< bytes */
> 	unsigned long memory_used;
> 
>-	/* PCI information */
>+	/** \name PCI information */
>+	/*@{ */
> 	unsigned short id_vendor;
> 	unsigned short id_device;
>+	/*@} */
> };
> 
> /**
>@@ -572,6 +627,312 @@ struct drm_set_version {
> 	int drm_dd_minor;
> };
> 
>+
>+#define DRM_FENCE_FLAG_EMIT                0x00000001
>+#define DRM_FENCE_FLAG_SHAREABLE           0x00000002
>+#define DRM_FENCE_FLAG_WAIT_LAZY           0x00000004
>+#define DRM_FENCE_FLAG_WAIT_IGNORE_SIGNALS 0x00000008
>+#define DRM_FENCE_FLAG_NO_USER             0x00000010
>+
>+/* Reserved for driver use */
>+#define DRM_FENCE_MASK_DRIVER              0xFF000000
>+
>+#define DRM_FENCE_TYPE_EXE                 0x00000001
>+
>+struct drm_fence_arg {
>+	unsigned int handle;
>+	unsigned int fence_class;
>+	unsigned int type;
>+	unsigned int flags;
>+	unsigned int signaled;
>+	unsigned int error;
>+	unsigned int sequence;
>+	unsigned int pad64;
>+	uint64_t expand_pad[2]; /*Future expansion */
>+};
>+
>+/* Buffer permissions, referring to how the GPU uses the buffers.
>+ * these translate to fence types used for the buffers.
>+ * Typically a texture buffer is read, A destination buffer is write and
>+ * a command (batch-) buffer is exe. Can be or-ed together.
>+ */
>+
>+#define DRM_BO_FLAG_READ        (1ULL << 0)
>+#define DRM_BO_FLAG_WRITE       (1ULL << 1)
>+#define DRM_BO_FLAG_EXE         (1ULL << 2)
>+
>+/*
>+ * All of the bits related to access mode
>+ */
>+#define DRM_BO_MASK_ACCESS	(DRM_BO_FLAG_READ | DRM_BO_FLAG_WRITE | DRM_BO_FLAG_EXE)
>+/*
>+ * Status flags. Can be read to determine the actual state of a buffer.
>+ * Can also be set in the buffer mask before validation.
>+ */
>+
>+/*
>+ * Mask: Never evict this buffer. Not even with force. This type of buffer is only
>+ * available to root and must be manually removed before buffer manager shutdown
>+ * or lock.
>+ * Flags: Acknowledge
>+ */
>+#define DRM_BO_FLAG_NO_EVICT    (1ULL << 4)
>+
>+/*
>+ * Mask: Require that the buffer is placed in mappable memory when validated.
>+ * If not set the buffer may or may not be in mappable memory when validated.
>+ * Flags: If set, the buffer is in mappable memory.
>+ */
>+#define DRM_BO_FLAG_MAPPABLE    (1ULL << 5)
>+
>+/* Mask: The buffer should be shareable with other processes.
>+ * Flags: The buffer is shareable with other processes.
>+ */
>+#define DRM_BO_FLAG_SHAREABLE   (1ULL << 6)
>+
>+/* Mask: If set, place the buffer in cache-coherent memory if available.
>+ * If clear, never place the buffer in cache coherent memory if validated.
>+ * Flags: The buffer is currently in cache-coherent memory.
>+ */
>+#define DRM_BO_FLAG_CACHED      (1ULL << 7)
>+
>+/* Mask: Make sure that every time this buffer is validated,
>+ * it ends up on the same location provided that the memory mask is the same.
>+ * The buffer will also not be evicted when claiming space for
>+ * other buffers. Basically a pinned buffer but it may be thrown out as
>+ * part of buffer manager shutdown or locking.
>+ * Flags: Acknowledge.
>+ */
>+#define DRM_BO_FLAG_NO_MOVE     (1ULL << 8)
>+
>+/* Mask: Make sure the buffer is in cached memory when mapped
>+ * Flags: Acknowledge.
>+ * Buffers allocated with this flag should not be used for suballocators
>+ * This type may have issues on CPUs with over-aggressive caching
>+ * http://marc.info/?l=linux-kernel&m=102376926732464&w=2
>+ */
>+#define DRM_BO_FLAG_CACHED_MAPPED    (1ULL << 19)
>+
>+
>+/* Mask: Force DRM_BO_FLAG_CACHED flag strictly also if it is set.
>+ * Flags: Acknowledge.
>+ */
>+#define DRM_BO_FLAG_FORCE_CACHING    (1ULL << 13)
>+
>+/*
>+ * Mask: Force DRM_BO_FLAG_MAPPABLE flag strictly also if it is clear.
>+ * Flags: Acknowledge.
>+ */
>+#define DRM_BO_FLAG_FORCE_MAPPABLE   (1ULL << 14)
>+#define DRM_BO_FLAG_TILE             (1ULL << 15)
>+
>+/*
>+ * Memory type flags that can be or'ed together in the mask, but only
>+ * one appears in flags.
>+ */
>+
>+/* System memory */
>+#define DRM_BO_FLAG_MEM_LOCAL  (1ULL << 24)
>+/* Translation table memory */
>+#define DRM_BO_FLAG_MEM_TT    (1ULL << 25)
>+/* Vram memory */
>+#define DRM_BO_FLAG_MEM_VRAM  (1ULL << 26)
>+/* Up to the driver to define. */
>+#define DRM_BO_FLAG_MEM_PRIV0 (1ULL << 27)
>+#define DRM_BO_FLAG_MEM_PRIV1 (1ULL << 28)
>+#define DRM_BO_FLAG_MEM_PRIV2 (1ULL << 29)
>+#define DRM_BO_FLAG_MEM_PRIV3 (1ULL << 30)
>+#define DRM_BO_FLAG_MEM_PRIV4 (1ULL << 31)
>+/* We can add more of these now with a 64-bit flag type */
>+
>+/*
>+ * This is a mask covering all of the memory type flags; easier to just
>+ * use a single constant than a bunch of | values. It covers
>+ * DRM_BO_FLAG_MEM_LOCAL through DRM_BO_FLAG_MEM_PRIV4
>+ */
>+#define DRM_BO_MASK_MEM         0x00000000FF000000ULL
>+/*
>+ * This adds all of the CPU-mapping options in with the memory
>+ * type to label all bits which change how the page gets mapped
>+ */
>+#define DRM_BO_MASK_MEMTYPE     (DRM_BO_MASK_MEM | \
>+				 DRM_BO_FLAG_CACHED_MAPPED | \
>+				 DRM_BO_FLAG_CACHED | \
>+				 DRM_BO_FLAG_MAPPABLE)
>+
>+/* Driver-private flags */
>+#define DRM_BO_MASK_DRIVER      0xFFFF000000000000ULL
>+
>+/*
>+ * Don't block on validate and map. Instead, return EBUSY.
>+ */
>+#define DRM_BO_HINT_DONT_BLOCK  0x00000002
>+/*
>+ * Don't place this buffer on the unfenced list. This means
>+ * that the buffer will not end up having a fence associated
>+ * with it as a result of this operation
>+ */
>+#define DRM_BO_HINT_DONT_FENCE  0x00000004
>+/*
>+ * Sleep while waiting for the operation to complete.
>+ * Without this flag, the kernel will, instead, spin
>+ * until this operation has completed. I'm not sure
>+ * why you would ever want this, so please always
>+ * provide DRM_BO_HINT_WAIT_LAZY to any operation
>+ * which may block
>+ */
>+#define DRM_BO_HINT_WAIT_LAZY   0x00000008
>+/*
>+ * The client has compute relocations refering to this buffer using the
>+ * offset in the presumed_offset field. If that offset ends up matching
>+ * where this buffer lands, the kernel is free to skip executing those
>+ * relocations
>+ */
>+#define DRM_BO_HINT_PRESUMED_OFFSET 0x00000010
>+
>+#define DRM_BO_INIT_MAGIC 0xfe769812
>+#define DRM_BO_INIT_MAJOR 1
>+#define DRM_BO_INIT_MINOR 0
>+#define DRM_BO_INIT_PATCH 0
>+
>+
>+struct drm_bo_info_req {
>+	uint64_t mask;
>+	uint64_t flags;
>+	unsigned int handle;
>+	unsigned int hint;
>+	unsigned int fence_class;
>+	unsigned int desired_tile_stride;
>+	unsigned int tile_info;
>+	unsigned int pad64;
>+	uint64_t presumed_offset;
>+};
>+
>+struct drm_bo_create_req {
>+	uint64_t flags;
>+	uint64_t size;
>+	uint64_t buffer_start;
>+	unsigned int hint;
>+	unsigned int page_alignment;
>+};
>+
>+
>+/*
>+ * Reply flags
>+ */
>+
>+#define DRM_BO_REP_BUSY 0x00000001
>+
>+struct drm_bo_info_rep {
>+	uint64_t flags;
>+	uint64_t proposed_flags;
>+	uint64_t size;
>+	uint64_t offset;
>+	uint64_t arg_handle;
>+	uint64_t buffer_start;
>+	unsigned int handle;
>+	unsigned int fence_flags;
>+	unsigned int rep_flags;
>+	unsigned int page_alignment;
>+	unsigned int desired_tile_stride;
>+	unsigned int hw_tile_stride;
>+	unsigned int tile_info;
>+	unsigned int pad64;
>+	uint64_t expand_pad[4]; /*Future expansion */
>+};
>+
>+struct drm_bo_arg_rep {
>+	struct drm_bo_info_rep bo_info;
>+	int ret;
>+	unsigned int pad64;
>+};
>+
>+struct drm_bo_create_arg {
>+	union {
>+		struct drm_bo_create_req req;
>+		struct drm_bo_info_rep rep;
>+	} d;
>+};
>+
>+struct drm_bo_handle_arg {
>+	unsigned int handle;
>+};
>+
>+struct drm_bo_reference_info_arg {
>+	union {
>+		struct drm_bo_handle_arg req;
>+		struct drm_bo_info_rep rep;
>+	} d;
>+};
>+
>+struct drm_bo_map_wait_idle_arg {
>+	union {
>+		struct drm_bo_info_req req;
>+		struct drm_bo_info_rep rep;
>+	} d;
>+};
>+
>+struct drm_bo_op_req {
>+	enum {
>+		drm_bo_validate,
>+		drm_bo_fence,
>+		drm_bo_ref_fence,
>+	} op;
>+	unsigned int arg_handle;
>+	struct drm_bo_info_req bo_req;
>+};
>+
>+
>+struct drm_bo_op_arg {
>+	uint64_t next;
>+	union {
>+		struct drm_bo_op_req req;
>+		struct drm_bo_arg_rep rep;
>+	} d;
>+	int handled;
>+	unsigned int pad64;
>+};
>+
>+
>+#define DRM_BO_MEM_LOCAL 0
>+#define DRM_BO_MEM_TT 1
>+#define DRM_BO_MEM_VRAM 2
>+#define DRM_BO_MEM_PRIV0 3
>+#define DRM_BO_MEM_PRIV1 4
>+#define DRM_BO_MEM_PRIV2 5
>+#define DRM_BO_MEM_PRIV3 6
>+#define DRM_BO_MEM_PRIV4 7
>+
>+#define DRM_BO_MEM_TYPES 8 /* For now. */
>+
>+#define DRM_BO_LOCK_UNLOCK_BM       (1 << 0)
>+#define DRM_BO_LOCK_IGNORE_NO_EVICT (1 << 1)
>+
>+struct drm_bo_version_arg {
>+	uint32_t major;
>+	uint32_t minor;
>+	uint32_t patchlevel;
>+};
>+
>+struct drm_mm_type_arg {
>+	unsigned int mem_type;
>+	unsigned int lock_flags;
>+};
>+
>+struct drm_mm_init_arg {
>+	unsigned int magic;
>+	unsigned int major;
>+	unsigned int minor;
>+	unsigned int mem_type;
>+	uint64_t p_offset;
>+	uint64_t p_size;
>+};
>+
>+/**
>+ * \name Ioctls Definitions
>+ */
>+/*@{*/
>+
> #define DRM_IOCTL_BASE			'd'
> #define DRM_IO(nr)			_IO(DRM_IOCTL_BASE,nr)
> #define DRM_IOR(nr,type)		_IOR(DRM_IOCTL_BASE,nr,type)
>@@ -602,7 +963,7 @@ struct drm_set_version {
> #define DRM_IOCTL_RM_MAP		DRM_IOW( 0x1b, struct drm_map)
> 
> #define DRM_IOCTL_SET_SAREA_CTX		DRM_IOW( 0x1c, struct drm_ctx_priv_map)
>-#define DRM_IOCTL_GET_SAREA_CTX 	DRM_IOWR(0x1d, struct drm_ctx_priv_map)
>+#define DRM_IOCTL_GET_SAREA_CTX		DRM_IOWR(0x1d, struct drm_ctx_priv_map)
> 
> #define DRM_IOCTL_ADD_CTX		DRM_IOWR(0x20, struct drm_ctx)
> #define DRM_IOCTL_RM_CTX		DRM_IOWR(0x21, struct drm_ctx)
>@@ -632,7 +993,34 @@ struct drm_set_version {
> 
> #define DRM_IOCTL_WAIT_VBLANK		DRM_IOWR(0x3a, union drm_wait_vblank)
> 
>-#define DRM_IOCTL_UPDATE_DRAW		DRM_IOW(0x3f, struct drm_update_draw)
>+#define DRM_IOCTL_UPDATE_DRAW		DRM_IOW(0x3f, struct drm_update_draw)
>+
>+#define DRM_IOCTL_MM_INIT               DRM_IOWR(0xc0, struct drm_mm_init_arg)
>+#define DRM_IOCTL_MM_TAKEDOWN           DRM_IOWR(0xc1, struct drm_mm_type_arg)
>+#define DRM_IOCTL_MM_LOCK               DRM_IOWR(0xc2, struct drm_mm_type_arg)
>+#define DRM_IOCTL_MM_UNLOCK             DRM_IOWR(0xc3, struct drm_mm_type_arg)
>+
>+#define DRM_IOCTL_FENCE_CREATE          DRM_IOWR(0xc4, struct drm_fence_arg)
>+#define DRM_IOCTL_FENCE_REFERENCE       DRM_IOWR(0xc6, struct drm_fence_arg)
>+#define DRM_IOCTL_FENCE_UNREFERENCE     DRM_IOWR(0xc7, struct drm_fence_arg)
>+#define DRM_IOCTL_FENCE_SIGNALED        DRM_IOWR(0xc8, struct drm_fence_arg)
>+#define DRM_IOCTL_FENCE_FLUSH           DRM_IOWR(0xc9, struct drm_fence_arg)
>+#define DRM_IOCTL_FENCE_WAIT            DRM_IOWR(0xca, struct drm_fence_arg)
>+#define DRM_IOCTL_FENCE_EMIT            DRM_IOWR(0xcb, struct drm_fence_arg)
>+#define DRM_IOCTL_FENCE_BUFFERS         DRM_IOWR(0xcc, struct drm_fence_arg)
>+
>+#define DRM_IOCTL_BO_CREATE             DRM_IOWR(0xcd, struct drm_bo_create_arg)
>+#define DRM_IOCTL_BO_MAP                DRM_IOWR(0xcf, struct drm_bo_map_wait_idle_arg)
>+#define DRM_IOCTL_BO_UNMAP              DRM_IOWR(0xd0, struct drm_bo_handle_arg)
>+#define DRM_IOCTL_BO_REFERENCE          DRM_IOWR(0xd1, struct drm_bo_reference_info_arg)
>+#define DRM_IOCTL_BO_UNREFERENCE        DRM_IOWR(0xd2, struct drm_bo_handle_arg)
>+#define DRM_IOCTL_BO_SETSTATUS          DRM_IOWR(0xd3, struct drm_bo_map_wait_idle_arg)
>+#define DRM_IOCTL_BO_INFO               DRM_IOWR(0xd4, struct drm_bo_reference_info_arg)
>+#define DRM_IOCTL_BO_WAIT_IDLE          DRM_IOWR(0xd5, struct drm_bo_map_wait_idle_arg)
>+#define DRM_IOCTL_BO_VERSION            DRM_IOR(0xd6, struct drm_bo_version_arg)
>+
>+
>+/*@}*/
> 
> /**
>  * Device specific ioctls should only be in their respective headers
>@@ -643,12 +1031,11 @@ struct drm_set_version {
>  * drmCommandReadWrite().
>  */
> #define DRM_COMMAND_BASE                0x40
>-#define DRM_COMMAND_END		0xA0
>+#define DRM_COMMAND_END			0xA0
> 
> /* typedef area */
>-#ifndef __KERNEL__
>+#if !defined(__KERNEL__) || defined(__FreeBSD__) || defined(__OpenBSD__) || defined(__NetBSD__)
> typedef struct drm_clip_rect drm_clip_rect_t;
>-typedef struct drm_drawable_info drm_drawable_info_t;
> typedef struct drm_tex_region drm_tex_region_t;
> typedef struct drm_hw_lock drm_hw_lock_t;
> typedef struct drm_version drm_version_t;
>@@ -682,12 +1069,16 @@ typedef struct drm_update_draw drm_updat
> typedef struct drm_auth drm_auth_t;
> typedef struct drm_irq_busid drm_irq_busid_t;
> typedef enum drm_vblank_seq_type drm_vblank_seq_type_t;
>-
> typedef struct drm_agp_buffer drm_agp_buffer_t;
> typedef struct drm_agp_binding drm_agp_binding_t;
> typedef struct drm_agp_info drm_agp_info_t;
> typedef struct drm_scatter_gather drm_scatter_gather_t;
> typedef struct drm_set_version drm_set_version_t;
>+
>+typedef struct drm_fence_arg drm_fence_arg_t;
>+typedef struct drm_mm_type_arg drm_mm_type_arg_t;
>+typedef struct drm_mm_init_arg drm_mm_init_arg_t;
>+typedef enum drm_bo_type drm_bo_type_t;
> #endif
> 
> #endif
>diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_hashtab.c linux-2.6.23.i686/drivers/char/drm/drm_hashtab.c
>--- linux-2.6.23.i686.orig/drivers/char/drm/drm_hashtab.c	2007-10-09 22:31:38.000000000 +0200
>+++ linux-2.6.23.i686/drivers/char/drm/drm_hashtab.c	2008-01-06 09:24:57.000000000 +0100
>@@ -51,13 +51,13 @@ int drm_ht_create(struct drm_open_hash *
> 	}
> 	if (!ht->table) {
> 		ht->use_vmalloc = 1;
>-		ht->table = vmalloc(ht->size*sizeof(*ht->table));
>+		ht->table = vmalloc(ht->size * sizeof(*ht->table));
> 	}
> 	if (!ht->table) {
> 		DRM_ERROR("Out of memory for hash table\n");
> 		return -ENOMEM;
> 	}
>-	for (i=0; i< ht->size; ++i) {
>+	for (i = 0; i < ht->size; ++i) {
> 		INIT_HLIST_HEAD(&ht->table[i]);
> 	}
> 	return 0;
>@@ -80,7 +80,7 @@ void drm_ht_verbose_list(struct drm_open
> 	}
> }
> 
>-static struct hlist_node *drm_ht_find_key(struct drm_open_hash *ht,
>+static struct hlist_node *drm_ht_find_key(struct drm_open_hash *ht,
> 					  unsigned long key)
> {
> 	struct drm_hash_item *entry;
>@@ -100,7 +100,6 @@ static struct hlist_node *drm_ht_find_ke
> 	return NULL;
> }
> 
>-
> int drm_ht_insert_item(struct drm_open_hash *ht, struct drm_hash_item *item)
> {
> 	struct drm_hash_item *entry;
>@@ -129,10 +128,11 @@ int drm_ht_insert_item(struct drm_open_h
> }
> 
> /*
>- * Just insert an item and return any "bits" bit key that hasn't been
>+ * Just insert an item and return any "bits" bit key that hasn't been
>  * used before.
>  */
>-int drm_ht_just_insert_please(struct drm_open_hash *ht, struct drm_hash_item *item,
>+int drm_ht_just_insert_please(struct drm_open_hash *ht,
>+			      struct drm_hash_item *item,
> 			      unsigned long seed, int bits, int shift,
> 			      unsigned long add)
> {
>@@ -147,7 +147,7 @@ int drm_ht_just_insert_please(struct drm
> 		ret = drm_ht_insert_item(ht, item);
> 		if (ret)
> 			unshifted_key = (unshifted_key + 1) & mask;
>-	} while(ret && (unshifted_key != first));
>+	} while (ret && (unshifted_key != first));
> 
> 	if (ret) {
> 		DRM_ERROR("Available key bit space exhausted\n");
>@@ -200,4 +200,3 @@ void drm_ht_remove(struct drm_open_hash
> 		ht->table = NULL;
> 	}
> }
>-
>diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_hashtab.h linux-2.6.23.i686/drivers/char/drm/drm_hashtab.h
>--- linux-2.6.23.i686.orig/drivers/char/drm/drm_hashtab.h	2007-10-09 22:31:38.000000000 +0200
>+++ linux-2.6.23.i686/drivers/char/drm/drm_hashtab.h	2008-01-06 09:24:57.000000000 +0100
>@@ -65,4 +65,3 @@ extern void drm_ht_remove(struct drm_ope
> 
> 
> #endif
>-
>diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_internal.h linux-2.6.23.i686/drivers/char/drm/drm_internal.h
>--- linux-2.6.23.i686.orig/drivers/char/drm/drm_internal.h	1970-01-01 01:00:00.000000000 +0100
>+++ linux-2.6.23.i686/drivers/char/drm/drm_internal.h	2008-01-06 09:24:57.000000000 +0100
>@@ -0,0 +1,40 @@
>+/*
>+ * Copyright 2007 Red Hat, Inc
>+ * All rights reserved.
>+ *
>+ * Permission is hereby granted, free of charge, to any person obtaining a
>+ * copy of this software and associated documentation files (the "Software"),
>+ * to deal in the Software without restriction, including without limitation
>+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
>+ * and/or sell copies of the Software, and to permit persons to whom the
>+ * Software is furnished to do so, subject to the following conditions:
>+ *
>+ * The above copyright notice and this permission notice (including the next
>+ * paragraph) shall be included in all copies or substantial portions of the
>+ * Software.
>+ *
>+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
>+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
>+ * VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
>+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
>+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
>+ * OTHER DEALINGS IN THE SOFTWARE.
>+ */
>+
>+/* This header file holds function prototypes and data types that are
>+ * internal to the drm (not exported to user space) but shared across
>+ * drivers and platforms */
>+
>+#ifndef __DRM_INTERNAL_H__
>+#define __DRM_INTERNAL_H__
>+
>+/**
>+ * Drawable information.
>+ */
>+struct drm_drawable_info {
>+	unsigned int num_rects;
>+	struct drm_clip_rect *rects;
>+};
>+
>+#endif
>diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_ioc32.c linux-2.6.23.i686/drivers/char/drm/drm_ioc32.c
>--- linux-2.6.23.i686.orig/drivers/char/drm/drm_ioc32.c	2008-01-06 18:54:41.000000000 +0100
>+++ linux-2.6.23.i686/drivers/char/drm/drm_ioc32.c	2008-01-06 09:24:57.000000000 +0100
>@@ -69,7 +69,7 @@
> typedef struct drm_version_32 {
> 	int version_major;	  /**< Major version */
> 	int version_minor;	  /**< Minor version */
>-	int version_patchlevel;	   /**< Patch level */
>+	int version_patchlevel;	  /**< Patch level */
> 	u32 name_len;		  /**< Length of name buffer */
> 	u32 name;		  /**< Name of driver */
> 	u32 date_len;		  /**< Length of date buffer */
>@@ -102,7 +102,7 @@ static int compat_drm_version(struct fil
> 			  &version->desc))
> 		return -EFAULT;
> 
>-	err = drm_ioctl(file->f_path.dentry->d_inode, file,
>+	err = drm_ioctl(file->f_dentry->d_inode, file,
> 			DRM_IOCTL_VERSION, (unsigned long)version);
> 	if (err)
> 		return err;
>@@ -143,7 +143,7 @@ static int compat_drm_getunique(struct f
> 			  &u->unique))
> 		return -EFAULT;
> 
>-	err = drm_ioctl(file->f_path.dentry->d_inode, file,
>+	err = drm_ioctl(file->f_dentry->d_inode, file,
> 			DRM_IOCTL_GET_UNIQUE, (unsigned long)u);
> 	if (err)
> 		return err;
>@@ -172,7 +172,7 @@ static int compat_drm_setunique(struct f
> 			  &u->unique))
> 		return -EFAULT;
> 
>-	return drm_ioctl(file->f_path.dentry->d_inode, file,
>+	return drm_ioctl(file->f_dentry->d_inode, file,
> 			 DRM_IOCTL_SET_UNIQUE, (unsigned long)u);
> }
> 
>@@ -203,7 +203,7 @@ static int compat_drm_getmap(struct file
> 	if (__put_user(idx, &map->offset))
> 		return -EFAULT;
> 
>-	err = drm_ioctl(file->f_path.dentry->d_inode, file,
>+	err = drm_ioctl(file->f_dentry->d_inode, file,
> 			DRM_IOCTL_GET_MAP, (unsigned long)map);
> 	if (err)
> 		return err;
>@@ -244,7 +244,7 @@ static int compat_drm_addmap(struct file
> 	    || __put_user(m32.flags, &map->flags))
> 		return -EFAULT;
> 
>-	err = drm_ioctl(file->f_path.dentry->d_inode, file,
>+	err = drm_ioctl(file->f_dentry->d_inode, file,
> 			DRM_IOCTL_ADD_MAP, (unsigned long)map);
> 	if (err)
> 		return err;
>@@ -282,7 +282,7 @@ static int compat_drm_rmmap(struct file
> 	if (__put_user((void *)(unsigned long)handle, &map->handle))
> 		return -EFAULT;
> 
>-	return drm_ioctl(file->f_path.dentry->d_inode, file,
>+	return drm_ioctl(file->f_dentry->d_inode, file,
> 			 DRM_IOCTL_RM_MAP, (unsigned long)map);
> }
> 
>@@ -312,7 +312,7 @@ static int compat_drm_getclient(struct f
> 	if (__put_user(idx, &client->idx))
> 		return -EFAULT;
> 
>-	err = drm_ioctl(file->f_path.dentry->d_inode, file,
>+	err = drm_ioctl(file->f_dentry->d_inode, file,
> 			DRM_IOCTL_GET_CLIENT, (unsigned long)client);
> 	if (err)
> 		return err;
>@@ -349,7 +349,7 @@ static int compat_drm_getstats(struct fi
> 	if (!access_ok(VERIFY_WRITE, stats, sizeof(*stats)))
> 		return -EFAULT;
> 
>-	err = drm_ioctl(file->f_path.dentry->d_inode, file,
>+	err = drm_ioctl(file->f_dentry->d_inode, file,
> 			DRM_IOCTL_GET_STATS, (unsigned long)stats);
> 	if (err)
> 		return err;
>@@ -393,7 +393,7 @@ static int compat_drm_addbufs(struct fil
> 	    || __put_user(agp_start, &buf->agp_start))
> 		return -EFAULT;
> 
>-	err = drm_ioctl(file->f_path.dentry->d_inode, file,
>+	err = drm_ioctl(file->f_dentry->d_inode, file,
> 			DRM_IOCTL_ADD_BUFS, (unsigned long)buf);
> 	if (err)
> 		return err;
>@@ -425,7 +425,7 @@ static int compat_drm_markbufs(struct fi
> 	    || __put_user(b32.high_mark, &buf->high_mark))
> 		return -EFAULT;
> 
>-	return drm_ioctl(file->f_path.dentry->d_inode, file,
>+	return drm_ioctl(file->f_dentry->d_inode, file,
> 			 DRM_IOCTL_MARK_BUFS, (unsigned long)buf);
> }
> 
>@@ -450,7 +450,7 @@ static int compat_drm_infobufs(struct fi
> 		return -EFAULT;
> 
> 	count = req32.count;
>-	to = (drm_buf_desc32_t __user *) (unsigned long)req32.list;
>+	to = (drm_buf_desc32_t __user *)(unsigned long)req32.list;
> 	if (count < 0)
> 		count = 0;
> 	if (count > 0
>@@ -467,7 +467,7 @@ static int compat_drm_infobufs(struct fi
> 
|| __put_user(list, &request->list)) > return -EFAULT; > >- err = drm_ioctl(file->f_path.dentry->d_inode, file, >+ err = drm_ioctl(file->f_dentry->d_inode, file, > DRM_IOCTL_INFO_BUFS, (unsigned long)request); > if (err) > return err; >@@ -529,7 +529,7 @@ static int compat_drm_mapbufs(struct fil > || __put_user(list, &request->list)) > return -EFAULT; > >- err = drm_ioctl(file->f_path.dentry->d_inode, file, >+ err = drm_ioctl(file->f_dentry->d_inode, file, > DRM_IOCTL_MAP_BUFS, (unsigned long)request); > if (err) > return err; >@@ -576,7 +576,7 @@ static int compat_drm_freebufs(struct fi > &request->list)) > return -EFAULT; > >- return drm_ioctl(file->f_path.dentry->d_inode, file, >+ return drm_ioctl(file->f_dentry->d_inode, file, > DRM_IOCTL_FREE_BUFS, (unsigned long)request); > } > >@@ -603,7 +603,7 @@ static int compat_drm_setsareactx(struct > &request->handle)) > return -EFAULT; > >- return drm_ioctl(file->f_path.dentry->d_inode, file, >+ return drm_ioctl(file->f_dentry->d_inode, file, > DRM_IOCTL_SET_SAREA_CTX, (unsigned long)request); > } > >@@ -626,7 +626,7 @@ static int compat_drm_getsareactx(struct > if (__put_user(ctx_id, &request->ctx_id)) > return -EFAULT; > >- err = drm_ioctl(file->f_path.dentry->d_inode, file, >+ err = drm_ioctl(file->f_dentry->d_inode, file, > DRM_IOCTL_GET_SAREA_CTX, (unsigned long)request); > if (err) > return err; >@@ -662,7 +662,7 @@ static int compat_drm_resctx(struct file > &res->contexts)) > return -EFAULT; > >- err = drm_ioctl(file->f_path.dentry->d_inode, file, >+ err = drm_ioctl(file->f_dentry->d_inode, file, > DRM_IOCTL_RES_CTX, (unsigned long)res); > if (err) > return err; >@@ -716,7 +716,7 @@ static int compat_drm_dma(struct file *f > &d->request_sizes)) > return -EFAULT; > >- err = drm_ioctl(file->f_path.dentry->d_inode, file, >+ err = drm_ioctl(file->f_dentry->d_inode, file, > DRM_IOCTL_DMA, (unsigned long)d); > if (err) > return err; >@@ -749,7 +749,7 @@ static int compat_drm_agp_enable(struct > if (put_user(m32.mode, 
&mode->mode)) > return -EFAULT; > >- return drm_ioctl(file->f_path.dentry->d_inode, file, >+ return drm_ioctl(file->f_dentry->d_inode, file, > DRM_IOCTL_AGP_ENABLE, (unsigned long)mode); > } > >@@ -779,7 +779,7 @@ static int compat_drm_agp_info(struct fi > if (!access_ok(VERIFY_WRITE, info, sizeof(*info))) > return -EFAULT; > >- err = drm_ioctl(file->f_path.dentry->d_inode, file, >+ err = drm_ioctl(file->f_dentry->d_inode, file, > DRM_IOCTL_AGP_INFO, (unsigned long)info); > if (err) > return err; >@@ -825,7 +825,7 @@ static int compat_drm_agp_alloc(struct f > || __put_user(req32.type, &request->type)) > return -EFAULT; > >- err = drm_ioctl(file->f_path.dentry->d_inode, file, >+ err = drm_ioctl(file->f_dentry->d_inode, file, > DRM_IOCTL_AGP_ALLOC, (unsigned long)request); > if (err) > return err; >@@ -833,7 +833,7 @@ static int compat_drm_agp_alloc(struct f > if (__get_user(req32.handle, &request->handle) > || __get_user(req32.physical, &request->physical) > || copy_to_user(argp, &req32, sizeof(req32))) { >- drm_ioctl(file->f_path.dentry->d_inode, file, >+ drm_ioctl(file->f_dentry->d_inode, file, > DRM_IOCTL_AGP_FREE, (unsigned long)request); > return -EFAULT; > } >@@ -854,7 +854,7 @@ static int compat_drm_agp_free(struct fi > || __put_user(handle, &request->handle)) > return -EFAULT; > >- return drm_ioctl(file->f_path.dentry->d_inode, file, >+ return drm_ioctl(file->f_dentry->d_inode, file, > DRM_IOCTL_AGP_FREE, (unsigned long)request); > } > >@@ -879,7 +879,7 @@ static int compat_drm_agp_bind(struct fi > || __put_user(req32.offset, &request->offset)) > return -EFAULT; > >- return drm_ioctl(file->f_path.dentry->d_inode, file, >+ return drm_ioctl(file->f_dentry->d_inode, file, > DRM_IOCTL_AGP_BIND, (unsigned long)request); > } > >@@ -896,7 +896,7 @@ static int compat_drm_agp_unbind(struct > || __put_user(handle, &request->handle)) > return -EFAULT; > >- return drm_ioctl(file->f_path.dentry->d_inode, file, >+ return drm_ioctl(file->f_dentry->d_inode, file, > 
DRM_IOCTL_AGP_UNBIND, (unsigned long)request); > } > #endif /* __OS_HAS_AGP */ >@@ -921,7 +921,7 @@ static int compat_drm_sg_alloc(struct fi > || __put_user(x, &request->size)) > return -EFAULT; > >- err = drm_ioctl(file->f_path.dentry->d_inode, file, >+ err = drm_ioctl(file->f_dentry->d_inode, file, > DRM_IOCTL_SG_ALLOC, (unsigned long)request); > if (err) > return err; >@@ -948,7 +948,7 @@ static int compat_drm_sg_free(struct fil > || __put_user(x << PAGE_SHIFT, &request->handle)) > return -EFAULT; > >- return drm_ioctl(file->f_path.dentry->d_inode, file, >+ return drm_ioctl(file->f_dentry->d_inode, file, > DRM_IOCTL_SG_FREE, (unsigned long)request); > } > >@@ -988,7 +988,7 @@ static int compat_drm_wait_vblank(struct > || __put_user(req32.request.signal, &request->request.signal)) > return -EFAULT; > >- err = drm_ioctl(file->f_path.dentry->d_inode, file, >+ err = drm_ioctl(file->f_dentry->d_inode, file, > DRM_IOCTL_WAIT_VBLANK, (unsigned long)request); > if (err) > return err; >@@ -1051,19 +1051,23 @@ long drm_compat_ioctl(struct file *filp, > drm_ioctl_compat_t *fn; > int ret; > >- if (nr >= ARRAY_SIZE(drm_compat_ioctls)) >- return -ENOTTY; >+ >+ /* Assume that ioctls without an explicit compat routine will "just >+ * work". This may not always be a good assumption, but it's better >+ * than always failing. 
>+ */ >+ if (nr >= DRM_ARRAY_SIZE(drm_compat_ioctls)) >+ return drm_ioctl(filp->f_dentry->d_inode, filp, cmd, arg); > > fn = drm_compat_ioctls[nr]; > > lock_kernel(); /* XXX for now */ > if (fn != NULL) >- ret = (*fn) (filp, cmd, arg); >+ ret = (*fn)(filp, cmd, arg); > else >- ret = drm_ioctl(filp->f_path.dentry->d_inode, filp, cmd, arg); >+ ret = drm_ioctl(filp->f_dentry->d_inode, filp, cmd, arg); > unlock_kernel(); > > return ret; > } >- > EXPORT_SYMBOL(drm_compat_ioctl); >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_ioctl.c linux-2.6.23.i686/drivers/char/drm/drm_ioctl.c >--- linux-2.6.23.i686.orig/drivers/char/drm/drm_ioctl.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/drm_ioctl.c 2008-01-06 09:24:57.000000000 +0100 >@@ -128,9 +128,8 @@ int drm_setunique(struct drm_device *dev > static int drm_set_busid(struct drm_device * dev) > { > int len; >- > if (dev->unique != NULL) >- return 0; >+ return -EBUSY; > > dev->unique_len = 40; > dev->unique = drm_alloc(dev->unique_len + 1, DRM_MEM_DRIVER); >@@ -138,12 +137,12 @@ static int drm_set_busid(struct drm_devi > return -ENOMEM; > > len = snprintf(dev->unique, dev->unique_len, "pci:%04x:%02x:%02x.%d", >- drm_get_pci_domain(dev), dev->pdev->bus->number, >+ drm_get_pci_domain(dev), >+ dev->pdev->bus->number, > PCI_SLOT(dev->pdev->devfn), > PCI_FUNC(dev->pdev->devfn)); >- > if (len > dev->unique_len) >- DRM_ERROR("Unique buffer overflowed\n"); >+ DRM_ERROR("buffer overflow"); > > dev->devname = > drm_alloc(strlen(dev->driver->pci_driver.name) + dev->unique_len + >@@ -234,26 +233,23 @@ int drm_getclient(struct drm_device *dev > > idx = client->idx; > mutex_lock(&dev->struct_mutex); >- >- if (list_empty(&dev->filelist)) { >- mutex_unlock(&dev->struct_mutex); >- return -EINVAL; >- } > > i = 0; > list_for_each_entry(pt, &dev->filelist, lhead) { >- if (i++ >= idx) >- break; >- } >+ if (i++ >= idx) { >+ client->auth = pt->authenticated; >+ client->pid = pt->pid; >+ client->uid = pt->uid; 
>+ client->magic = pt->magic; >+ client->iocs = pt->ioctl_count; >+ mutex_unlock(&dev->struct_mutex); > >- client->auth = pt->authenticated; >- client->pid = pt->pid; >- client->uid = pt->uid; >- client->magic = pt->magic; >- client->iocs = pt->ioctl_count; >+ return 0; >+ } >+ } > mutex_unlock(&dev->struct_mutex); > >- return 0; >+ return -EINVAL; > } > > /** >@@ -272,7 +268,7 @@ int drm_getstats(struct drm_device *dev, > struct drm_stats *stats = data; > int i; > >- memset(stats, 0, sizeof(stats)); >+ memset(stats, 0, sizeof(*stats)); > > mutex_lock(&dev->struct_mutex); > >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_irq.c linux-2.6.23.i686/drivers/char/drm/drm_irq.c >--- linux-2.6.23.i686.orig/drivers/char/drm/drm_irq.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/drm_irq.c 2008-01-06 09:24:57.000000000 +0100 >@@ -75,7 +75,6 @@ int drm_irq_by_busid(struct drm_device * > * Install IRQ handler. > * > * \param dev DRM device. >- * \param irq IRQ number. > * > * Initializes the IRQ related data, and setups drm_device::vbl_queue. 
Installs the handler, calling the driver > * \c drm_driver_irq_preinstall() and \c drm_driver_irq_postinstall() functions >@@ -175,7 +174,6 @@ int drm_irq_uninstall(struct drm_device > > return 0; > } >- > EXPORT_SYMBOL(drm_irq_uninstall); > > /** >@@ -385,7 +383,6 @@ void drm_vbl_send_signals(struct drm_dev > > spin_unlock_irqrestore(&dev->vbl_lock, flags); > } >- > EXPORT_SYMBOL(drm_vbl_send_signals); > > /** >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_lock.c linux-2.6.23.i686/drivers/char/drm/drm_lock.c >--- linux-2.6.23.i686.orig/drivers/char/drm/drm_lock.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/drm_lock.c 2008-01-06 09:24:57.000000000 +0100 >@@ -225,6 +225,7 @@ int drm_lock_take(struct drm_lock_data * > > if ((_DRM_LOCKING_CONTEXT(new)) == context && (new & _DRM_LOCK_HELD)) { > /* Have lock */ >+ > return 1; > } > return 0; >@@ -383,6 +384,7 @@ EXPORT_SYMBOL(drm_idlelock_release); > > int drm_i_have_hw_lock(struct drm_device *dev, struct drm_file *file_priv) > { >+ > return (file_priv->lock_count && dev->lock.hw_lock && > _DRM_LOCK_IS_HELD(dev->lock.hw_lock->lock) && > dev->lock.file_priv == file_priv); >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_memory.c linux-2.6.23.i686/drivers/char/drm/drm_memory.c >--- linux-2.6.23.i686.orig/drivers/char/drm/drm_memory.c 2007-10-09 22:31:38.000000000 +0200 >+++ linux-2.6.23.i686/drivers/char/drm/drm_memory.c 2008-01-06 09:24:57.000000000 +0100 >@@ -36,9 +36,77 @@ > #include <linux/highmem.h> > #include "drmP.h" > >-#ifdef DEBUG_MEMORY >-#include "drm_memory_debug.h" >-#else >+static struct { >+ spinlock_t lock; >+ uint64_t cur_used; >+ uint64_t low_threshold; >+ uint64_t high_threshold; >+} drm_memctl = { >+ .lock = SPIN_LOCK_UNLOCKED >+}; >+ >+static inline size_t drm_size_align(size_t size) >+{ >+ size_t tmpSize = 4; >+ if (size > PAGE_SIZE) >+ return PAGE_ALIGN(size); >+ >+ while (tmpSize < size) >+ tmpSize <<= 1; >+ >+ return (size_t) tmpSize; >+} >+ >+int 
drm_alloc_memctl(size_t size) >+{ >+ int ret; >+ unsigned long a_size = drm_size_align(size); >+ >+ spin_lock(&drm_memctl.lock); >+ ret = ((drm_memctl.cur_used + a_size) > drm_memctl.high_threshold) ? >+ -ENOMEM : 0; >+ if (!ret) >+ drm_memctl.cur_used += a_size; >+ spin_unlock(&drm_memctl.lock); >+ return ret; >+} >+EXPORT_SYMBOL(drm_alloc_memctl); >+ >+void drm_free_memctl(size_t size) >+{ >+ unsigned long a_size = drm_size_align(size); >+ >+ spin_lock(&drm_memctl.lock); >+ drm_memctl.cur_used -= a_size; >+ spin_unlock(&drm_memctl.lock); >+} >+EXPORT_SYMBOL(drm_free_memctl); >+ >+void drm_query_memctl(uint64_t *cur_used, >+ uint64_t *low_threshold, >+ uint64_t *high_threshold) >+{ >+ spin_lock(&drm_memctl.lock); >+ *cur_used = drm_memctl.cur_used; >+ *low_threshold = drm_memctl.low_threshold; >+ *high_threshold = drm_memctl.high_threshold; >+ spin_unlock(&drm_memctl.lock); >+} >+EXPORT_SYMBOL(drm_query_memctl); >+ >+void drm_init_memctl(size_t p_low_threshold, >+ size_t p_high_threshold, >+ size_t unit_size) >+{ >+ spin_lock(&drm_memctl.lock); >+ drm_memctl.cur_used = 0; >+ drm_memctl.low_threshold = p_low_threshold * unit_size; >+ drm_memctl.high_threshold = p_high_threshold * unit_size; >+ spin_unlock(&drm_memctl.lock); >+} >+ >+ >+#ifndef DEBUG_MEMORY > > /** No-op. */ > void drm_mem_init(void) >@@ -64,6 +132,13 @@ int drm_mem_info(char *buf, char **start > return 0; > } > >+/** Wrapper around kmalloc() */ >+void *drm_calloc(size_t nmemb, size_t size, int area) >+{ >+ return kcalloc(nmemb, size, GFP_KERNEL); >+} >+EXPORT_SYMBOL(drm_calloc); >+ > /** Wrapper around kmalloc() and kfree() */ > void *drm_realloc(void *oldpt, size_t oldsize, size_t size, int area) > { >@@ -78,9 +153,68 @@ void *drm_realloc(void *oldpt, size_t ol > return pt; > } > >+/** >+ * Allocate pages. >+ * >+ * \param order size order. >+ * \param area memory area. (Not used.) >+ * \return page address on success, or zero on failure. >+ * >+ * Allocate and reserve free pages. 
>+ */ >+unsigned long drm_alloc_pages(int order, int area) >+{ >+ unsigned long address; >+ unsigned long bytes = PAGE_SIZE << order; >+ unsigned long addr; >+ unsigned int sz; >+ >+ address = __get_free_pages(GFP_KERNEL, order); >+ if (!address) >+ return 0; >+ >+ /* Zero */ >+ memset((void *)address, 0, bytes); >+ >+ /* Reserve */ >+ for (addr = address, sz = bytes; >+ sz > 0; addr += PAGE_SIZE, sz -= PAGE_SIZE) { >+ SetPageReserved(virt_to_page(addr)); >+ } >+ >+ return address; >+} >+ >+/** >+ * Free pages. >+ * >+ * \param address address of the pages to free. >+ * \param order size order. >+ * \param area memory area. (Not used.) >+ * >+ * Unreserve and free pages allocated by alloc_pages(). >+ */ >+void drm_free_pages(unsigned long address, int order, int area) >+{ >+ unsigned long bytes = PAGE_SIZE << order; >+ unsigned long addr; >+ unsigned int sz; >+ >+ if (!address) >+ return; >+ >+ /* Unreserve */ >+ for (addr = address, sz = bytes; >+ sz > 0; addr += PAGE_SIZE, sz -= PAGE_SIZE) { >+ ClearPageReserved(virt_to_page(addr)); >+ } >+ >+ free_pages(address, order); >+} >+ > #if __OS_HAS_AGP > static void *agp_remap(unsigned long offset, unsigned long size, >- struct drm_device * dev) >+ struct drm_device * dev) > { > unsigned long *phys_addr_map, i, num_pages = > PAGE_ALIGN(size) / PAGE_SIZE; >@@ -123,10 +257,17 @@ static void *agp_remap(unsigned long off > } > > /** Wrapper around agp_allocate_memory() */ >-DRM_AGP_MEM *drm_alloc_agp(struct drm_device * dev, int pages, u32 type) >+#if LINUX_VERSION_CODE <= KERNEL_VERSION(2,6,11) >+DRM_AGP_MEM *drm_alloc_agp(struct drm_device *dev, int pages, u32 type) >+{ >+ return drm_agp_allocate_memory(pages, type); >+} >+#else >+DRM_AGP_MEM *drm_alloc_agp(struct drm_device *dev, int pages, u32 type) > { > return drm_agp_allocate_memory(dev->agp->bridge, pages, type); > } >+#endif > > /** Wrapper around agp_free_memory() */ > int drm_free_agp(DRM_AGP_MEM * handle, int pages) >@@ -146,13 +287,12 @@ int 
drm_unbind_agp(DRM_AGP_MEM * handle) > return drm_agp_unbind_memory(handle); > } > >-#else /* __OS_HAS_AGP */ >-static inline void *agp_remap(unsigned long offset, unsigned long size, >- struct drm_device * dev) >+#else /* __OS_HAS_AGP*/ >+static void *agp_remap(unsigned long offset, unsigned long size, >+ struct drm_device * dev) > { > return NULL; > } >- > #endif /* agp */ > > #endif /* debug_memory */ >@@ -165,7 +305,7 @@ void drm_core_ioremap(struct drm_map *ma > else > map->handle = ioremap(map->offset, map->size); > } >-EXPORT_SYMBOL(drm_core_ioremap); >+EXPORT_SYMBOL_GPL(drm_core_ioremap); > > void drm_core_ioremapfree(struct drm_map *map, struct drm_device *dev) > { >@@ -178,5 +318,4 @@ void drm_core_ioremapfree(struct drm_map > else > iounmap(map->handle); > } >-EXPORT_SYMBOL(drm_core_ioremapfree); >- >+EXPORT_SYMBOL_GPL(drm_core_ioremapfree); >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_memory_debug.c linux-2.6.23.i686/drivers/char/drm/drm_memory_debug.c >--- linux-2.6.23.i686.orig/drivers/char/drm/drm_memory_debug.c 1970-01-01 01:00:00.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/drm_memory_debug.c 2008-01-06 09:24:57.000000000 +0100 >@@ -0,0 +1,403 @@ >+/** >+ * \file drm_memory_debug.c >+ * Memory management wrappers for DRM. >+ * >+ * \author Rickard E. (Rik) Faith <faith@valinux.com> >+ * \author Gareth Hughes <gareth@valinux.com> >+ */ >+ >+/* >+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas. >+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California. >+ * All Rights Reserved. 
>+ * >+ * Permission is hereby granted, free of charge, to any person obtaining a >+ * copy of this software and associated documentation files (the "Software"), >+ * to deal in the Software without restriction, including without limitation >+ * the rights to use, copy, modify, merge, publish, distribute, sublicense, >+ * and/or sell copies of the Software, and to permit persons to whom the >+ * Software is furnished to do so, subject to the following conditions: >+ * >+ * The above copyright notice and this permission notice (including the next >+ * paragraph) shall be included in all copies or substantial portions of the >+ * Software. >+ * >+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR >+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, >+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL >+ * VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR >+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, >+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR >+ * OTHER DEALINGS IN THE SOFTWARE. 
>+ */ >+ >+#include "drmP.h" >+ >+#ifdef DEBUG_MEMORY >+ >+typedef struct drm_mem_stats { >+ const char *name; >+ int succeed_count; >+ int free_count; >+ int fail_count; >+ unsigned long bytes_allocated; >+ unsigned long bytes_freed; >+} drm_mem_stats_t; >+ >+static spinlock_t drm_mem_lock = SPIN_LOCK_UNLOCKED; >+static unsigned long drm_ram_available = 0; /* In pages */ >+static unsigned long drm_ram_used = 0; >+static drm_mem_stats_t drm_mem_stats[] = { >+ [DRM_MEM_DMA] = {"dmabufs"}, >+ [DRM_MEM_SAREA] = {"sareas"}, >+ [DRM_MEM_DRIVER] = {"driver"}, >+ [DRM_MEM_MAGIC] = {"magic"}, >+ [DRM_MEM_IOCTLS] = {"ioctltab"}, >+ [DRM_MEM_MAPS] = {"maplist"}, >+ [DRM_MEM_VMAS] = {"vmalist"}, >+ [DRM_MEM_BUFS] = {"buflist"}, >+ [DRM_MEM_SEGS] = {"seglist"}, >+ [DRM_MEM_PAGES] = {"pagelist"}, >+ [DRM_MEM_FILES] = {"files"}, >+ [DRM_MEM_QUEUES] = {"queues"}, >+ [DRM_MEM_CMDS] = {"commands"}, >+ [DRM_MEM_MAPPINGS] = {"mappings"}, >+ [DRM_MEM_BUFLISTS] = {"buflists"}, >+ [DRM_MEM_AGPLISTS] = {"agplist"}, >+ [DRM_MEM_SGLISTS] = {"sglist"}, >+ [DRM_MEM_TOTALAGP] = {"totalagp"}, >+ [DRM_MEM_BOUNDAGP] = {"boundagp"}, >+ [DRM_MEM_CTXBITMAP] = {"ctxbitmap"}, >+ [DRM_MEM_CTXLIST] = {"ctxlist"}, >+ [DRM_MEM_STUB] = {"stub"}, >+ {NULL, 0,} /* Last entry must be null */ >+}; >+ >+void drm_mem_init(void) >+{ >+ drm_mem_stats_t *mem; >+ struct sysinfo si; >+ >+ for (mem = drm_mem_stats; mem->name; ++mem) { >+ mem->succeed_count = 0; >+ mem->free_count = 0; >+ mem->fail_count = 0; >+ mem->bytes_allocated = 0; >+ mem->bytes_freed = 0; >+ } >+ >+ si_meminfo(&si); >+ drm_ram_available = si.totalram; >+ drm_ram_used = 0; >+} >+ >+/* drm_mem_info is called whenever a process reads /dev/drm/mem. 
*/ >+ >+static int drm__mem_info(char *buf, char **start, off_t offset, >+ int request, int *eof, void *data) >+{ >+ drm_mem_stats_t *pt; >+ int len = 0; >+ >+ if (offset > DRM_PROC_LIMIT) { >+ *eof = 1; >+ return 0; >+ } >+ >+ *eof = 0; >+ *start = &buf[offset]; >+ >+ DRM_PROC_PRINT(" total counts " >+ " | outstanding \n"); >+ DRM_PROC_PRINT("type alloc freed fail bytes freed" >+ " | allocs bytes\n\n"); >+ DRM_PROC_PRINT("%-9.9s %5d %5d %4d %10lu kB |\n", >+ "system", 0, 0, 0, >+ drm_ram_available << (PAGE_SHIFT - 10)); >+ DRM_PROC_PRINT("%-9.9s %5d %5d %4d %10lu kB |\n", >+ "locked", 0, 0, 0, drm_ram_used >> 10); >+ DRM_PROC_PRINT("\n"); >+ for (pt = drm_mem_stats; pt->name; pt++) { >+ DRM_PROC_PRINT("%-9.9s %5d %5d %4d %10lu %10lu | %6d %10ld\n", >+ pt->name, >+ pt->succeed_count, >+ pt->free_count, >+ pt->fail_count, >+ pt->bytes_allocated, >+ pt->bytes_freed, >+ pt->succeed_count - pt->free_count, >+ (long)pt->bytes_allocated >+ - (long)pt->bytes_freed); >+ } >+ >+ if (len > request + offset) >+ return request; >+ *eof = 1; >+ return len - offset; >+} >+ >+int drm_mem_info(char *buf, char **start, off_t offset, >+ int len, int *eof, void *data) >+{ >+ int ret; >+ >+ spin_lock(&drm_mem_lock); >+ ret = drm__mem_info(buf, start, offset, len, eof, data); >+ spin_unlock(&drm_mem_lock); >+ return ret; >+} >+ >+void *drm_alloc(size_t size, int area) >+{ >+ void *pt; >+ >+ if (!size) { >+ DRM_MEM_ERROR(area, "Allocating 0 bytes\n"); >+ return NULL; >+ } >+ >+ if (!(pt = kmalloc(size, GFP_KERNEL))) { >+ spin_lock(&drm_mem_lock); >+ ++drm_mem_stats[area].fail_count; >+ spin_unlock(&drm_mem_lock); >+ return NULL; >+ } >+ spin_lock(&drm_mem_lock); >+ ++drm_mem_stats[area].succeed_count; >+ drm_mem_stats[area].bytes_allocated += size; >+ spin_unlock(&drm_mem_lock); >+ return pt; >+} >+EXPORT_SYMBOL(drm_alloc); >+ >+void *drm_calloc(size_t nmemb, size_t size, int area) >+{ >+ void *addr; >+ >+ addr = drm_alloc(nmemb * size, area); >+ if (addr != NULL) >+ memset((void 
*)addr, 0, size * nmemb); >+ >+ return addr; >+} >+EXPORT_SYMBOL(drm_calloc); >+ >+void *drm_realloc(void *oldpt, size_t oldsize, size_t size, int area) >+{ >+ void *pt; >+ >+ if (!(pt = drm_alloc(size, area))) >+ return NULL; >+ if (oldpt && oldsize) { >+ memcpy(pt, oldpt, oldsize); >+ drm_free(oldpt, oldsize, area); >+ } >+ return pt; >+} >+EXPORT_SYMBOL(drm_realloc); >+ >+void drm_free(void *pt, size_t size, int area) >+{ >+ int alloc_count; >+ int free_count; >+ >+ if (!pt) >+ DRM_MEM_ERROR(area, "Attempt to free NULL pointer\n"); >+ else >+ kfree(pt); >+ spin_lock(&drm_mem_lock); >+ drm_mem_stats[area].bytes_freed += size; >+ free_count = ++drm_mem_stats[area].free_count; >+ alloc_count = drm_mem_stats[area].succeed_count; >+ spin_unlock(&drm_mem_lock); >+ if (free_count > alloc_count) { >+ DRM_MEM_ERROR(area, "Excess frees: %d frees, %d allocs\n", >+ free_count, alloc_count); >+ } >+} >+EXPORT_SYMBOL(drm_free); >+ >+unsigned long drm_alloc_pages(int order, int area) >+{ >+ unsigned long address; >+ unsigned long bytes = PAGE_SIZE << order; >+ unsigned long addr; >+ unsigned int sz; >+ >+ spin_lock(&drm_mem_lock); >+ if ((drm_ram_used >> PAGE_SHIFT) >+ > (DRM_RAM_PERCENT * drm_ram_available) / 100) { >+ spin_unlock(&drm_mem_lock); >+ return 0; >+ } >+ spin_unlock(&drm_mem_lock); >+ >+ address = __get_free_pages(GFP_KERNEL, order); >+ if (!address) { >+ spin_lock(&drm_mem_lock); >+ ++drm_mem_stats[area].fail_count; >+ spin_unlock(&drm_mem_lock); >+ return 0; >+ } >+ spin_lock(&drm_mem_lock); >+ ++drm_mem_stats[area].succeed_count; >+ drm_mem_stats[area].bytes_allocated += bytes; >+ drm_ram_used += bytes; >+ spin_unlock(&drm_mem_lock); >+ >+ /* Zero outside the lock */ >+ memset((void *)address, 0, bytes); >+ >+ /* Reserve */ >+ for (addr = address, sz = bytes; >+ sz > 0; addr += PAGE_SIZE, sz -= PAGE_SIZE) { >+ SetPageReserved(virt_to_page(addr)); >+ } >+ >+ return address; >+} >+ >+void drm_free_pages(unsigned long address, int order, int area) >+{ >+ unsigned 
long bytes = PAGE_SIZE << order; >+ int alloc_count; >+ int free_count; >+ unsigned long addr; >+ unsigned int sz; >+ >+ if (!address) { >+ DRM_MEM_ERROR(area, "Attempt to free address 0\n"); >+ } else { >+ /* Unreserve */ >+ for (addr = address, sz = bytes; >+ sz > 0; addr += PAGE_SIZE, sz -= PAGE_SIZE) { >+ ClearPageReserved(virt_to_page(addr)); >+ } >+ free_pages(address, order); >+ } >+ >+ spin_lock(&drm_mem_lock); >+ free_count = ++drm_mem_stats[area].free_count; >+ alloc_count = drm_mem_stats[area].succeed_count; >+ drm_mem_stats[area].bytes_freed += bytes; >+ drm_ram_used -= bytes; >+ spin_unlock(&drm_mem_lock); >+ if (free_count > alloc_count) { >+ DRM_MEM_ERROR(area, >+ "Excess frees: %d frees, %d allocs\n", >+ free_count, alloc_count); >+ } >+} >+ >+#if __OS_HAS_AGP >+ >+DRM_AGP_MEM *drm_alloc_agp(struct drm_device *dev, int pages, u32 type) >+{ >+ DRM_AGP_MEM *handle; >+ >+ if (!pages) { >+ DRM_MEM_ERROR(DRM_MEM_TOTALAGP, "Allocating 0 pages\n"); >+ return NULL; >+ } >+ >+#if LINUX_VERSION_CODE <= KERNEL_VERSION(2,6,11) >+ if ((handle = drm_agp_allocate_memory(pages, type))) { >+#else >+ if ((handle = drm_agp_allocate_memory(dev->agp->bridge, pages, type))) { >+#endif >+ spin_lock(&drm_mem_lock); >+ ++drm_mem_stats[DRM_MEM_TOTALAGP].succeed_count; >+ drm_mem_stats[DRM_MEM_TOTALAGP].bytes_allocated >+ += pages << PAGE_SHIFT; >+ spin_unlock(&drm_mem_lock); >+ return handle; >+ } >+ spin_lock(&drm_mem_lock); >+ ++drm_mem_stats[DRM_MEM_TOTALAGP].fail_count; >+ spin_unlock(&drm_mem_lock); >+ return NULL; >+} >+ >+int drm_free_agp(DRM_AGP_MEM * handle, int pages) >+{ >+ int alloc_count; >+ int free_count; >+ int retval = -EINVAL; >+ >+ if (!handle) { >+ DRM_MEM_ERROR(DRM_MEM_TOTALAGP, >+ "Attempt to free NULL AGP handle\n"); >+ return retval; >+ } >+ >+ if (drm_agp_free_memory(handle)) { >+ spin_lock(&drm_mem_lock); >+ free_count = ++drm_mem_stats[DRM_MEM_TOTALAGP].free_count; >+ alloc_count = drm_mem_stats[DRM_MEM_TOTALAGP].succeed_count; >+ 
drm_mem_stats[DRM_MEM_TOTALAGP].bytes_freed >+ += pages << PAGE_SHIFT; >+ spin_unlock(&drm_mem_lock); >+ if (free_count > alloc_count) { >+ DRM_MEM_ERROR(DRM_MEM_TOTALAGP, >+ "Excess frees: %d frees, %d allocs\n", >+ free_count, alloc_count); >+ } >+ return 0; >+ } >+ return retval; >+} >+ >+int drm_bind_agp(DRM_AGP_MEM * handle, unsigned int start) >+{ >+ int retcode = -EINVAL; >+ >+ if (!handle) { >+ DRM_MEM_ERROR(DRM_MEM_BOUNDAGP, >+ "Attempt to bind NULL AGP handle\n"); >+ return retcode; >+ } >+ >+ if (!(retcode = drm_agp_bind_memory(handle, start))) { >+ spin_lock(&drm_mem_lock); >+ ++drm_mem_stats[DRM_MEM_BOUNDAGP].succeed_count; >+ drm_mem_stats[DRM_MEM_BOUNDAGP].bytes_allocated >+ += handle->page_count << PAGE_SHIFT; >+ spin_unlock(&drm_mem_lock); >+ return retcode; >+ } >+ spin_lock(&drm_mem_lock); >+ ++drm_mem_stats[DRM_MEM_BOUNDAGP].fail_count; >+ spin_unlock(&drm_mem_lock); >+ return retcode; >+} >+ >+int drm_unbind_agp(DRM_AGP_MEM * handle) >+{ >+ int alloc_count; >+ int free_count; >+ int retcode = -EINVAL; >+ >+ if (!handle) { >+ DRM_MEM_ERROR(DRM_MEM_BOUNDAGP, >+ "Attempt to unbind NULL AGP handle\n"); >+ return retcode; >+ } >+ >+ if ((retcode = drm_agp_unbind_memory(handle))) >+ return retcode; >+ spin_lock(&drm_mem_lock); >+ free_count = ++drm_mem_stats[DRM_MEM_BOUNDAGP].free_count; >+ alloc_count = drm_mem_stats[DRM_MEM_BOUNDAGP].succeed_count; >+ drm_mem_stats[DRM_MEM_BOUNDAGP].bytes_freed >+ += handle->page_count << PAGE_SHIFT; >+ spin_unlock(&drm_mem_lock); >+ if (free_count > alloc_count) { >+ DRM_MEM_ERROR(DRM_MEM_BOUNDAGP, >+ "Excess frees: %d frees, %d allocs\n", >+ free_count, alloc_count); >+ } >+ return retcode; >+} >+ >+#endif >+#endif >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_memory_debug.h linux-2.6.23.i686/drivers/char/drm/drm_memory_debug.h >--- linux-2.6.23.i686.orig/drivers/char/drm/drm_memory_debug.h 2007-10-09 22:31:38.000000000 +0200 >+++ linux-2.6.23.i686/drivers/char/drm/drm_memory_debug.h 2008-01-06 
09:24:57.000000000 +0100 >@@ -42,7 +42,7 @@ typedef struct drm_mem_stats { > unsigned long bytes_freed; > } drm_mem_stats_t; > >-static DEFINE_SPINLOCK(drm_mem_lock); >+static spinlock_t drm_mem_lock = SPIN_LOCK_UNLOCKED; > static unsigned long drm_ram_available = 0; /* In pages */ > static unsigned long drm_ram_used = 0; > static drm_mem_stats_t drm_mem_stats[] = >@@ -205,9 +205,79 @@ void drm_free (void *pt, size_t size, in > } > } > >+unsigned long drm_alloc_pages (int order, int area) { >+ unsigned long address; >+ unsigned long bytes = PAGE_SIZE << order; >+ unsigned long addr; >+ unsigned int sz; >+ >+ spin_lock(&drm_mem_lock); >+ if ((drm_ram_used >> PAGE_SHIFT) >+ > (DRM_RAM_PERCENT * drm_ram_available) / 100) { >+ spin_unlock(&drm_mem_lock); >+ return 0; >+ } >+ spin_unlock(&drm_mem_lock); >+ >+ address = __get_free_pages(GFP_KERNEL, order); >+ if (!address) { >+ spin_lock(&drm_mem_lock); >+ ++drm_mem_stats[area].fail_count; >+ spin_unlock(&drm_mem_lock); >+ return 0; >+ } >+ spin_lock(&drm_mem_lock); >+ ++drm_mem_stats[area].succeed_count; >+ drm_mem_stats[area].bytes_allocated += bytes; >+ drm_ram_used += bytes; >+ spin_unlock(&drm_mem_lock); >+ >+ /* Zero outside the lock */ >+ memset((void *)address, 0, bytes); >+ >+ /* Reserve */ >+ for (addr = address, sz = bytes; >+ sz > 0; addr += PAGE_SIZE, sz -= PAGE_SIZE) { >+ SetPageReserved(virt_to_page(addr)); >+ } >+ >+ return address; >+} >+ >+void drm_free_pages (unsigned long address, int order, int area) { >+ unsigned long bytes = PAGE_SIZE << order; >+ int alloc_count; >+ int free_count; >+ unsigned long addr; >+ unsigned int sz; >+ >+ if (!address) { >+ DRM_MEM_ERROR(area, "Attempt to free address 0\n"); >+ } else { >+ /* Unreserve */ >+ for (addr = address, sz = bytes; >+ sz > 0; addr += PAGE_SIZE, sz -= PAGE_SIZE) { >+ ClearPageReserved(virt_to_page(addr)); >+ } >+ free_pages(address, order); >+ } >+ >+ spin_lock(&drm_mem_lock); >+ free_count = ++drm_mem_stats[area].free_count; >+ alloc_count = 
drm_mem_stats[area].succeed_count; >+ drm_mem_stats[area].bytes_freed += bytes; >+ drm_ram_used -= bytes; >+ spin_unlock(&drm_mem_lock); >+ if (free_count > alloc_count) { >+ DRM_MEM_ERROR(area, >+ "Excess frees: %d frees, %d allocs\n", >+ free_count, alloc_count); >+ } >+} >+ > #if __OS_HAS_AGP > >-DRM_AGP_MEM *drm_alloc_agp (drm_device_t *dev, int pages, u32 type) { >+DRM_AGP_MEM *drm_alloc_agp (struct drm_device *dev, int pages, u32 type) { > DRM_AGP_MEM *handle; > > if (!pages) { >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_mm.c linux-2.6.23.i686/drivers/char/drm/drm_mm.c >--- linux-2.6.23.i686.orig/drivers/char/drm/drm_mm.c 2007-10-09 22:31:38.000000000 +0200 >+++ linux-2.6.23.i686/drivers/char/drm/drm_mm.c 2008-01-06 09:24:57.000000000 +0100 >@@ -82,7 +82,7 @@ static int drm_mm_create_tail_node(struc > struct drm_mm_node *child; > > child = (struct drm_mm_node *) >- drm_alloc(sizeof(*child), DRM_MEM_MM); >+ drm_ctl_alloc(sizeof(*child), DRM_MEM_MM); > if (!child) > return -ENOMEM; > >@@ -118,7 +118,7 @@ static struct drm_mm_node *drm_mm_split_ > struct drm_mm_node *child; > > child = (struct drm_mm_node *) >- drm_alloc(sizeof(*child), DRM_MEM_MM); >+ drm_ctl_alloc(sizeof(*child), DRM_MEM_MM); > if (!child) > return NULL; > >@@ -137,8 +137,6 @@ static struct drm_mm_node *drm_mm_split_ > return child; > } > >- >- > struct drm_mm_node *drm_mm_get_block(struct drm_mm_node * parent, > unsigned long size, unsigned alignment) > { >@@ -200,8 +198,8 @@ void drm_mm_put_block(struct drm_mm_node > prev_node->size += next_node->size; > list_del(&next_node->ml_entry); > list_del(&next_node->fl_entry); >- drm_free(next_node, sizeof(*next_node), >- DRM_MEM_MM); >+ drm_ctl_free(next_node, sizeof(*next_node), >+ DRM_MEM_MM); > } else { > next_node->size += cur->size; > next_node->start = cur->start; >@@ -214,9 +212,10 @@ void drm_mm_put_block(struct drm_mm_node > list_add(&cur->fl_entry, &mm->fl_entry); > } else { > list_del(&cur->ml_entry); >- drm_free(cur, 
sizeof(*cur), DRM_MEM_MM); >+ drm_ctl_free(cur, sizeof(*cur), DRM_MEM_MM); > } > } >+EXPORT_SYMBOL(drm_mm_put_block); > > struct drm_mm_node *drm_mm_search_free(const struct drm_mm * mm, > unsigned long size, >@@ -274,6 +273,7 @@ int drm_mm_init(struct drm_mm * mm, unsi > return drm_mm_create_tail_node(mm, start, size); > } > >+EXPORT_SYMBOL(drm_mm_init); > > void drm_mm_takedown(struct drm_mm * mm) > { >@@ -290,7 +290,7 @@ void drm_mm_takedown(struct drm_mm * mm) > > list_del(&entry->fl_entry); > list_del(&entry->ml_entry); >- >- drm_free(entry, sizeof(*entry), DRM_MEM_MM); >+ drm_ctl_free(entry, sizeof(*entry), DRM_MEM_MM); > } > >+EXPORT_SYMBOL(drm_mm_takedown); >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_object.c linux-2.6.23.i686/drivers/char/drm/drm_object.c >--- linux-2.6.23.i686.orig/drivers/char/drm/drm_object.c 1970-01-01 01:00:00.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/drm_object.c 2008-01-06 09:24:57.000000000 +0100 >@@ -0,0 +1,294 @@ >+/************************************************************************** >+ * >+ * Copyright (c) 2006-2007 Tungsten Graphics, Inc., Cedar Park, TX., USA >+ * All Rights Reserved. >+ * >+ * Permission is hereby granted, free of charge, to any person obtaining a >+ * copy of this software and associated documentation files (the >+ * "Software"), to deal in the Software without restriction, including >+ * without limitation the rights to use, copy, modify, merge, publish, >+ * distribute, sub license, and/or sell copies of the Software, and to >+ * permit persons to whom the Software is furnished to do so, subject to >+ * the following conditions: >+ * >+ * The above copyright notice and this permission notice (including the >+ * next paragraph) shall be included in all copies or substantial portions >+ * of the Software. 
>+ * >+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR >+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, >+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL >+ * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, >+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR >+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE >+ * USE OR OTHER DEALINGS IN THE SOFTWARE. >+ * >+ **************************************************************************/ >+/* >+ * Authors: Thomas Hellström <thomas-at-tungstengraphics-dot-com> >+ */ >+ >+#include "drmP.h" >+ >+int drm_add_user_object(struct drm_file *priv, struct drm_user_object *item, >+ int shareable) >+{ >+ struct drm_device *dev = priv->head->dev; >+ int ret; >+ >+ DRM_ASSERT_LOCKED(&dev->struct_mutex); >+ >+ /* The refcount will be bumped to 1 when we add the ref object below. */ >+ atomic_set(&item->refcount, 0); >+ item->shareable = shareable; >+ item->owner = priv; >+ >+ ret = drm_ht_just_insert_please(&dev->object_hash, &item->hash, >+ (unsigned long)item, 32, 0, 0); >+ if (ret) >+ return ret; >+ >+ ret = drm_add_ref_object(priv, item, _DRM_REF_USE); >+ if (ret) >+ ret = drm_ht_remove_item(&dev->object_hash, &item->hash); >+ >+ return ret; >+} >+EXPORT_SYMBOL(drm_add_user_object); >+ >+struct drm_user_object *drm_lookup_user_object(struct drm_file *priv, uint32_t key) >+{ >+ struct drm_device *dev = priv->head->dev; >+ struct drm_hash_item *hash; >+ int ret; >+ struct drm_user_object *item; >+ >+ DRM_ASSERT_LOCKED(&dev->struct_mutex); >+ >+ ret = drm_ht_find_item(&dev->object_hash, key, &hash); >+ if (ret) >+ return NULL; >+ >+ item = drm_hash_entry(hash, struct drm_user_object, hash); >+ >+ if (priv != item->owner) { >+ struct drm_open_hash *ht = &priv->refd_object_hash[_DRM_REF_USE]; >+ ret = drm_ht_find_item(ht, (unsigned long)item, &hash); >+ if (ret) { >+ 
DRM_ERROR("Object not registered for usage\n"); >+ return NULL; >+ } >+ } >+ return item; >+} >+EXPORT_SYMBOL(drm_lookup_user_object); >+ >+static void drm_deref_user_object(struct drm_file *priv, struct drm_user_object *item) >+{ >+ struct drm_device *dev = priv->head->dev; >+ int ret; >+ >+ if (atomic_dec_and_test(&item->refcount)) { >+ ret = drm_ht_remove_item(&dev->object_hash, &item->hash); >+ BUG_ON(ret); >+ item->remove(priv, item); >+ } >+} >+ >+static int drm_object_ref_action(struct drm_file *priv, struct drm_user_object *ro, >+ enum drm_ref_type action) >+{ >+ int ret = 0; >+ >+ switch (action) { >+ case _DRM_REF_USE: >+ atomic_inc(&ro->refcount); >+ break; >+ default: >+ if (!ro->ref_struct_locked) { >+ break; >+ } else { >+ ro->ref_struct_locked(priv, ro, action); >+ } >+ } >+ return ret; >+} >+ >+int drm_add_ref_object(struct drm_file *priv, struct drm_user_object *referenced_object, >+ enum drm_ref_type ref_action) >+{ >+ int ret = 0; >+ struct drm_ref_object *item; >+ struct drm_open_hash *ht = &priv->refd_object_hash[ref_action]; >+ >+ DRM_ASSERT_LOCKED(&priv->head->dev->struct_mutex); >+ if (!referenced_object->shareable && priv != referenced_object->owner) { >+ DRM_ERROR("Not allowed to reference this object\n"); >+ return -EINVAL; >+ } >+ >+ /* >+ * If this is not a usage reference, check that usage has been registered >+ * first. Otherwise strange things may happen on destruction.
>+ */ >+ >+ if ((ref_action != _DRM_REF_USE) && priv != referenced_object->owner) { >+ item = >+ drm_lookup_ref_object(priv, referenced_object, >+ _DRM_REF_USE); >+ if (!item) { >+ DRM_ERROR >+ ("Object not registered for usage by this client\n"); >+ return -EINVAL; >+ } >+ } >+ >+ if (NULL != >+ (item = >+ drm_lookup_ref_object(priv, referenced_object, ref_action))) { >+ atomic_inc(&item->refcount); >+ return drm_object_ref_action(priv, referenced_object, >+ ref_action); >+ } >+ >+ item = drm_ctl_calloc(1, sizeof(*item), DRM_MEM_OBJECTS); >+ if (item == NULL) { >+ DRM_ERROR("Could not allocate reference object\n"); >+ return -ENOMEM; >+ } >+ >+ atomic_set(&item->refcount, 1); >+ item->hash.key = (unsigned long)referenced_object; >+ ret = drm_ht_insert_item(ht, &item->hash); >+ item->unref_action = ref_action; >+ >+ if (ret) >+ goto out; >+ >+ list_add(&item->list, &priv->refd_objects); >+ ret = drm_object_ref_action(priv, referenced_object, ref_action); >+out: >+ return ret; >+} >+ >+struct drm_ref_object *drm_lookup_ref_object(struct drm_file *priv, >+ struct drm_user_object *referenced_object, >+ enum drm_ref_type ref_action) >+{ >+ struct drm_hash_item *hash; >+ int ret; >+ >+ DRM_ASSERT_LOCKED(&priv->head->dev->struct_mutex); >+ ret = drm_ht_find_item(&priv->refd_object_hash[ref_action], >+ (unsigned long)referenced_object, &hash); >+ if (ret) >+ return NULL; >+ >+ return drm_hash_entry(hash, struct drm_ref_object, hash); >+} >+EXPORT_SYMBOL(drm_lookup_ref_object); >+ >+static void drm_remove_other_references(struct drm_file *priv, >+ struct drm_user_object *ro) >+{ >+ int i; >+ struct drm_open_hash *ht; >+ struct drm_hash_item *hash; >+ struct drm_ref_object *item; >+ >+ for (i = _DRM_REF_USE + 1; i < _DRM_NO_REF_TYPES; ++i) { >+ ht = &priv->refd_object_hash[i]; >+ while (!drm_ht_find_item(ht, (unsigned long)ro, &hash)) { >+ item = drm_hash_entry(hash, struct drm_ref_object, hash); >+ drm_remove_ref_object(priv, item); >+ } >+ } >+} >+ >+void 
drm_remove_ref_object(struct drm_file *priv, struct drm_ref_object *item) >+{ >+ int ret; >+ struct drm_user_object *user_object = (struct drm_user_object *) item->hash.key; >+ struct drm_open_hash *ht = &priv->refd_object_hash[item->unref_action]; >+ enum drm_ref_type unref_action; >+ >+ DRM_ASSERT_LOCKED(&priv->head->dev->struct_mutex); >+ unref_action = item->unref_action; >+ if (atomic_dec_and_test(&item->refcount)) { >+ ret = drm_ht_remove_item(ht, &item->hash); >+ BUG_ON(ret); >+ list_del_init(&item->list); >+ if (unref_action == _DRM_REF_USE) >+ drm_remove_other_references(priv, user_object); >+ drm_ctl_free(item, sizeof(*item), DRM_MEM_OBJECTS); >+ } >+ >+ switch (unref_action) { >+ case _DRM_REF_USE: >+ drm_deref_user_object(priv, user_object); >+ break; >+ default: >+ BUG_ON(!user_object->unref); >+ user_object->unref(priv, user_object, unref_action); >+ break; >+ } >+ >+} >+EXPORT_SYMBOL(drm_remove_ref_object); >+ >+int drm_user_object_ref(struct drm_file *priv, uint32_t user_token, >+ enum drm_object_type type, struct drm_user_object **object) >+{ >+ struct drm_device *dev = priv->head->dev; >+ struct drm_user_object *uo; >+ struct drm_hash_item *hash; >+ int ret; >+ >+ mutex_lock(&dev->struct_mutex); >+ ret = drm_ht_find_item(&dev->object_hash, user_token, &hash); >+ if (ret) { >+ DRM_ERROR("Could not find user object to reference.\n"); >+ goto out_err; >+ } >+ uo = drm_hash_entry(hash, struct drm_user_object, hash); >+ if (uo->type != type) { >+ ret = -EINVAL; >+ goto out_err; >+ } >+ ret = drm_add_ref_object(priv, uo, _DRM_REF_USE); >+ if (ret) >+ goto out_err; >+ mutex_unlock(&dev->struct_mutex); >+ *object = uo; >+ return 0; >+out_err: >+ mutex_unlock(&dev->struct_mutex); >+ return ret; >+} >+ >+int drm_user_object_unref(struct drm_file *priv, uint32_t user_token, >+ enum drm_object_type type) >+{ >+ struct drm_device *dev = priv->head->dev; >+ struct drm_user_object *uo; >+ struct drm_ref_object *ro; >+ int ret; >+ >+ 
mutex_lock(&dev->struct_mutex); >+ uo = drm_lookup_user_object(priv, user_token); >+ if (!uo || (uo->type != type)) { >+ ret = -EINVAL; >+ goto out_err; >+ } >+ ro = drm_lookup_ref_object(priv, uo, _DRM_REF_USE); >+ if (!ro) { >+ ret = -EINVAL; >+ goto out_err; >+ } >+ drm_remove_ref_object(priv, ro); >+ mutex_unlock(&dev->struct_mutex); >+ return 0; >+out_err: >+ mutex_unlock(&dev->struct_mutex); >+ return ret; >+} >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_objects.h linux-2.6.23.i686/drivers/char/drm/drm_objects.h >--- linux-2.6.23.i686.orig/drivers/char/drm/drm_objects.h 1970-01-01 01:00:00.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/drm_objects.h 2008-01-06 09:24:57.000000000 +0100 >@@ -0,0 +1,747 @@ >+/************************************************************************** >+ * >+ * Copyright (c) 2006-2007 Tungsten Graphics, Inc., Cedar Park, TX., USA >+ * All Rights Reserved. >+ * >+ * Permission is hereby granted, free of charge, to any person obtaining a >+ * copy of this software and associated documentation files (the >+ * "Software"), to deal in the Software without restriction, including >+ * without limitation the rights to use, copy, modify, merge, publish, >+ * distribute, sub license, and/or sell copies of the Software, and to >+ * permit persons to whom the Software is furnished to do so, subject to >+ * the following conditions: >+ * >+ * The above copyright notice and this permission notice (including the >+ * next paragraph) shall be included in all copies or substantial portions >+ * of the Software. >+ * >+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR >+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, >+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL >+ * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, >+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR >+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE >+ * USE OR OTHER DEALINGS IN THE SOFTWARE. >+ * >+ **************************************************************************/ >+/* >+ * Authors: Thomas Hellström <thomas-at-tungstengraphics-dot-com> >+ */ >+ >+#ifndef _DRM_OBJECTS_H >+#define _DRM_OBJECTS_H >+ >+struct drm_device; >+struct drm_bo_mem_reg; >+ >+/*************************************************** >+ * User space objects. (drm_object.c) >+ */ >+ >+#define drm_user_object_entry(_ptr, _type, _member) container_of(_ptr, _type, _member) >+ >+enum drm_object_type { >+ drm_fence_type, >+ drm_buffer_type, >+ drm_lock_type, >+ /* >+ * Add other user space object types here. >+ */ >+ drm_driver_type0 = 256, >+ drm_driver_type1, >+ drm_driver_type2, >+ drm_driver_type3, >+ drm_driver_type4 >+}; >+ >+/* >+ * A user object is a structure that helps the drm give out user handles >+ * to kernel internal objects and to keep track of these objects so that >+ * they can be destroyed, for example when the user space process exits. >+ * Designed to be accessible using a user space 32-bit handle. 
>+ */ >+ >+struct drm_user_object { >+ struct drm_hash_item hash; >+ struct list_head list; >+ enum drm_object_type type; >+ atomic_t refcount; >+ int shareable; >+ struct drm_file *owner; >+ void (*ref_struct_locked) (struct drm_file *priv, >+ struct drm_user_object *obj, >+ enum drm_ref_type ref_action); >+ void (*unref) (struct drm_file *priv, struct drm_user_object *obj, >+ enum drm_ref_type unref_action); >+ void (*remove) (struct drm_file *priv, struct drm_user_object *obj); >+}; >+ >+/* >+ * A ref object is a structure which is used to >+ * keep track of references to user objects and to keep track of these >+ * references so that they can be destroyed for example when the user space >+ * process exits. Designed to be accessible using a pointer to the _user_ object. >+ */ >+ >+struct drm_ref_object { >+ struct drm_hash_item hash; >+ struct list_head list; >+ atomic_t refcount; >+ enum drm_ref_type unref_action; >+}; >+ >+/** >+ * Must be called with the struct_mutex held. >+ */ >+ >+extern int drm_add_user_object(struct drm_file *priv, struct drm_user_object *item, >+ int shareable); >+/** >+ * Must be called with the struct_mutex held. >+ */ >+ >+extern struct drm_user_object *drm_lookup_user_object(struct drm_file *priv, >+ uint32_t key); >+ >+/* >+ * Must be called with the struct_mutex held. May temporarily release it. >+ */ >+ >+extern int drm_add_ref_object(struct drm_file *priv, >+ struct drm_user_object *referenced_object, >+ enum drm_ref_type ref_action); >+ >+/* >+ * Must be called with the struct_mutex held. >+ */ >+ >+struct drm_ref_object *drm_lookup_ref_object(struct drm_file *priv, >+ struct drm_user_object *referenced_object, >+ enum drm_ref_type ref_action); >+/* >+ * Must be called with the struct_mutex held. >+ * If "item" has been obtained by a call to drm_lookup_ref_object. You may not >+ * release the struct_mutex before calling drm_remove_ref_object. >+ * This function may temporarily release the struct_mutex. 
>+ */ >+ >+extern void drm_remove_ref_object(struct drm_file *priv, struct drm_ref_object *item); >+extern int drm_user_object_ref(struct drm_file *priv, uint32_t user_token, >+ enum drm_object_type type, >+ struct drm_user_object **object); >+extern int drm_user_object_unref(struct drm_file *priv, uint32_t user_token, >+ enum drm_object_type type); >+ >+/*************************************************** >+ * Fence objects. (drm_fence.c) >+ */ >+ >+struct drm_fence_object { >+ struct drm_user_object base; >+ struct drm_device *dev; >+ atomic_t usage; >+ >+ /* >+ * The below three fields are protected by the fence manager spinlock. >+ */ >+ >+ struct list_head ring; >+ int fence_class; >+ uint32_t native_type; >+ uint32_t type; >+ uint32_t signaled; >+ uint32_t sequence; >+ uint32_t flush_mask; >+ uint32_t submitted_flush; >+ uint32_t error; >+}; >+ >+#define _DRM_FENCE_CLASSES 8 >+#define _DRM_FENCE_TYPE_EXE 0x00 >+ >+struct drm_fence_class_manager { >+ struct list_head ring; >+ uint32_t pending_flush; >+ wait_queue_head_t fence_queue; >+ int pending_exe_flush; >+ uint32_t last_exe_flush; >+ uint32_t exe_flush_sequence; >+}; >+ >+struct drm_fence_manager { >+ int initialized; >+ rwlock_t lock; >+ struct drm_fence_class_manager fence_class[_DRM_FENCE_CLASSES]; >+ uint32_t num_classes; >+ atomic_t count; >+}; >+ >+struct drm_fence_driver { >+ uint32_t num_classes; >+ uint32_t wrap_diff; >+ uint32_t flush_diff; >+ uint32_t sequence_mask; >+ int lazy_capable; >+ int (*has_irq) (struct drm_device *dev, uint32_t fence_class, >+ uint32_t flags); >+ int (*emit) (struct drm_device *dev, uint32_t fence_class, >+ uint32_t flags, uint32_t *breadcrumb, >+ uint32_t *native_type); >+ void (*poke_flush) (struct drm_device *dev, uint32_t fence_class); >+}; >+ >+extern void drm_fence_handler(struct drm_device *dev, uint32_t fence_class, >+ uint32_t sequence, uint32_t type, >+ uint32_t error); >+extern void drm_fence_manager_init(struct drm_device *dev); >+extern void 
drm_fence_manager_takedown(struct drm_device *dev); >+extern void drm_fence_flush_old(struct drm_device *dev, uint32_t fence_class, >+ uint32_t sequence); >+extern int drm_fence_object_flush(struct drm_fence_object *fence, >+ uint32_t type); >+extern int drm_fence_object_signaled(struct drm_fence_object *fence, >+ uint32_t type, int flush); >+extern void drm_fence_usage_deref_locked(struct drm_fence_object **fence); >+extern void drm_fence_usage_deref_unlocked(struct drm_fence_object **fence); >+extern struct drm_fence_object *drm_fence_reference_locked(struct drm_fence_object *src); >+extern void drm_fence_reference_unlocked(struct drm_fence_object **dst, >+ struct drm_fence_object *src); >+extern int drm_fence_object_wait(struct drm_fence_object *fence, >+ int lazy, int ignore_signals, uint32_t mask); >+extern int drm_fence_object_create(struct drm_device *dev, uint32_t type, >+ uint32_t fence_flags, uint32_t fence_class, >+ struct drm_fence_object **c_fence); >+extern int drm_fence_object_emit(struct drm_fence_object *fence, >+ uint32_t fence_flags, uint32_t class, >+ uint32_t type); >+extern void drm_fence_fill_arg(struct drm_fence_object *fence, >+ struct drm_fence_arg *arg); >+ >+extern int drm_fence_add_user_object(struct drm_file *priv, >+ struct drm_fence_object *fence, >+ int shareable); >+ >+extern int drm_fence_create_ioctl(struct drm_device *dev, void *data, >+ struct drm_file *file_priv); >+extern int drm_fence_destroy_ioctl(struct drm_device *dev, void *data, >+ struct drm_file *file_priv); >+extern int drm_fence_reference_ioctl(struct drm_device *dev, void *data, >+ struct drm_file *file_priv); >+extern int drm_fence_unreference_ioctl(struct drm_device *dev, void *data, >+ struct drm_file *file_priv); >+extern int drm_fence_signaled_ioctl(struct drm_device *dev, void *data, >+ struct drm_file *file_priv); >+extern int drm_fence_flush_ioctl(struct drm_device *dev, void *data, >+ struct drm_file *file_priv); >+extern int drm_fence_wait_ioctl(struct 
drm_device *dev, void *data, >+ struct drm_file *file_priv); >+extern int drm_fence_emit_ioctl(struct drm_device *dev, void *data, >+ struct drm_file *file_priv); >+extern int drm_fence_buffers_ioctl(struct drm_device *dev, void *data, >+ struct drm_file *file_priv); >+/************************************************** >+ *TTMs >+ */ >+ >+/* >+ * The ttm backend GTT interface. (In our case AGP). >+ * Any similar type of device (PCIE?) >+ * needs only to implement these functions to be usable with the TTM interface. >+ * The AGP backend implementation lives in drm_agpsupport.c >+ * basically maps these calls to available functions in agpgart. >+ * Each drm device driver gets an >+ * additional function pointer that creates these types, >+ * so that the device can choose the correct aperture. >+ * (Multiple AGP apertures, etc.) >+ * Most device drivers will let this point to the standard AGP implementation. >+ */ >+ >+#define DRM_BE_FLAG_NEEDS_FREE 0x00000001 >+#define DRM_BE_FLAG_BOUND_CACHED 0x00000002 >+ >+struct drm_ttm_backend; >+struct drm_ttm_backend_func { >+ int (*needs_ub_cache_adjust) (struct drm_ttm_backend *backend); >+ int (*populate) (struct drm_ttm_backend *backend, >+ unsigned long num_pages, struct page **pages, >+ struct page *dummy_read_page); >+ void (*clear) (struct drm_ttm_backend *backend); >+ int (*bind) (struct drm_ttm_backend *backend, >+ struct drm_bo_mem_reg *bo_mem); >+ int (*unbind) (struct drm_ttm_backend *backend); >+ void (*destroy) (struct drm_ttm_backend *backend); >+}; >+ >+ >+struct drm_ttm_backend { >+ struct drm_device *dev; >+ uint32_t flags; >+ struct drm_ttm_backend_func *func; >+}; >+ >+struct drm_ttm { >+ struct page *dummy_read_page; >+ struct page **pages; >+ uint32_t page_flags; >+ unsigned long num_pages; >+ atomic_t vma_count; >+ struct drm_device *dev; >+ int destroy; >+ uint32_t mapping_offset; >+ struct drm_ttm_backend *be; >+ enum { >+ ttm_bound, >+ ttm_evicted, >+ ttm_unbound, >+ ttm_unpopulated, >+ } state; >+ 
>+}; >+ >+extern struct drm_ttm *drm_ttm_create(struct drm_device *dev, unsigned long size, >+ uint32_t page_flags, >+ struct page *dummy_read_page); >+extern int drm_ttm_bind(struct drm_ttm *ttm, struct drm_bo_mem_reg *bo_mem); >+extern void drm_ttm_unbind(struct drm_ttm *ttm); >+extern void drm_ttm_evict(struct drm_ttm *ttm); >+extern void drm_ttm_fixup_caching(struct drm_ttm *ttm); >+extern struct page *drm_ttm_get_page(struct drm_ttm *ttm, int index); >+extern void drm_ttm_cache_flush(void); >+extern int drm_ttm_populate(struct drm_ttm *ttm); >+extern int drm_ttm_set_user(struct drm_ttm *ttm, >+ struct task_struct *tsk, >+ unsigned long start, >+ unsigned long num_pages); >+ >+/* >+ * Destroy a ttm. The user normally calls drmRmMap or a similar IOCTL to do >+ * this which calls this function iff there are no vmas referencing it anymore. >+ * Otherwise it is called when the last vma exits. >+ */ >+ >+extern int drm_ttm_destroy(struct drm_ttm *ttm); >+ >+#define DRM_FLAG_MASKED(_old, _new, _mask) {\ >+(_old) ^= (((_old) ^ (_new)) & (_mask)); \ >+} >+ >+#define DRM_TTM_MASK_FLAGS ((1 << PAGE_SHIFT) - 1) >+#define DRM_TTM_MASK_PFN (0xFFFFFFFFU - DRM_TTM_MASK_FLAGS) >+ >+/* >+ * Page flags. >+ */ >+ >+/* >+ * This ttm should not be cached by the CPU >+ */ >+#define DRM_TTM_PAGE_UNCACHED (1 << 0) >+/* >+ * This flag is not used at this time; I don't know what the >+ * intent was >+ */ >+#define DRM_TTM_PAGE_USED (1 << 1) >+/* >+ * This flag is not used at this time; I don't know what the >+ * intent was >+ */ >+#define DRM_TTM_PAGE_BOUND (1 << 2) >+/* >+ * This flag is not used at this time; I don't know what the >+ * intent was >+ */ >+#define DRM_TTM_PAGE_PRESENT (1 << 3) >+/* >+ * The array of page pointers was allocated with vmalloc >+ * instead of drm_calloc. 
>+ */ >+#define DRM_TTM_PAGE_VMALLOC (1 << 4) >+/* >+ * This ttm is mapped from user space >+ */ >+#define DRM_TTM_PAGE_USER (1 << 5) >+/* >+ * This ttm will be written to by the GPU >+ */ >+#define DRM_TTM_PAGE_WRITE (1 << 6) >+/* >+ * This ttm was mapped to the GPU, and so the contents may have >+ * been modified >+ */ >+#define DRM_TTM_PAGE_USER_DIRTY (1 << 7) >+/* >+ * This flag is not used at this time; I don't know what the >+ * intent was. >+ */ >+#define DRM_TTM_PAGE_USER_DMA (1 << 8) >+ >+/*************************************************** >+ * Buffer objects. (drm_bo.c, drm_bo_move.c) >+ */ >+ >+struct drm_bo_mem_reg { >+ struct drm_mm_node *mm_node; >+ unsigned long size; >+ unsigned long num_pages; >+ uint32_t page_alignment; >+ uint32_t mem_type; >+ /* >+ * Current buffer status flags, indicating >+ * where the buffer is located and which >+ * access modes are in effect >+ */ >+ uint64_t flags; >+ /** >+ * These are the flags proposed for >+ * a validate operation. If the >+ * validate succeeds, they'll get moved >+ * into the flags field >+ */ >+ uint64_t proposed_flags; >+ >+ uint32_t desired_tile_stride; >+ uint32_t hw_tile_stride; >+}; >+ >+enum drm_bo_type { >+ /* >+ * drm_bo_type_device are 'normal' drm allocations, >+ * pages are allocated from within the kernel automatically >+ * and the objects can be mmap'd from the drm device. Each >+ * drm_bo_type_device object has a unique name which can be >+ * used by other processes to share access to the underlying >+ * buffer. >+ */ >+ drm_bo_type_device, >+ /* >+ * drm_bo_type_user are buffers of pages that already exist >+ * in the process address space. They are more limited than >+ * drm_bo_type_device buffers in that they must always >+ * remain cached (as we assume the user pages are mapped cached), >+ * and they are not sharable to other processes through DRM >+ * (although, regular shared memory should still work fine). 
>+ */ >+ drm_bo_type_user, >+ /* >+ * drm_bo_type_kernel are buffers that exist solely for use >+ * within the kernel. The pages cannot be mapped into the >+ * process. One obvious use would be for the ring >+ * buffer where user access would not (ideally) be required. >+ */ >+ drm_bo_type_kernel, >+}; >+ >+struct drm_buffer_object { >+ struct drm_device *dev; >+ struct drm_user_object base; >+ >+ /* >+ * If there is a possibility that the usage variable is zero, >+ * then dev->struct_mutex should be locked before incrementing it. >+ */ >+ >+ atomic_t usage; >+ unsigned long buffer_start; >+ enum drm_bo_type type; >+ unsigned long offset; >+ atomic_t mapped; >+ struct drm_bo_mem_reg mem; >+ >+ struct list_head lru; >+ struct list_head ddestroy; >+ >+ uint32_t fence_type; >+ uint32_t fence_class; >+ uint32_t new_fence_type; >+ uint32_t new_fence_class; >+ struct drm_fence_object *fence; >+ uint32_t priv_flags; >+ wait_queue_head_t event_queue; >+ struct mutex mutex; >+ unsigned long num_pages; >+ >+ /* For pinned buffers */ >+ struct drm_mm_node *pinned_node; >+ uint32_t pinned_mem_type; >+ struct list_head pinned_lru; >+ >+ /* For vm */ >+ struct drm_ttm *ttm; >+ struct drm_map_list map_list; >+ uint32_t memory_type; >+ unsigned long bus_offset; >+ uint32_t vm_flags; >+ void *iomap; >+ >+#ifdef DRM_ODD_MM_COMPAT >+ /* dev->struct_mutex only protected. 
*/ >+ struct list_head vma_list; >+ struct list_head p_mm_list; >+#endif >+ >+}; >+ >+#define _DRM_BO_FLAG_UNFENCED 0x00000001 >+#define _DRM_BO_FLAG_EVICTED 0x00000002 >+ >+struct drm_mem_type_manager { >+ int has_type; >+ int use_type; >+ struct drm_mm manager; >+ struct list_head lru; >+ struct list_head pinned; >+ uint32_t flags; >+ uint32_t drm_bus_maptype; >+ unsigned long gpu_offset; >+ unsigned long io_offset; >+ unsigned long io_size; >+ void *io_addr; >+}; >+ >+struct drm_bo_lock { >+ struct drm_user_object base; >+ wait_queue_head_t queue; >+ atomic_t write_lock_pending; >+ atomic_t readers; >+}; >+ >+#define _DRM_FLAG_MEMTYPE_FIXED 0x00000001 /* Fixed (on-card) PCI memory */ >+#define _DRM_FLAG_MEMTYPE_MAPPABLE 0x00000002 /* Memory mappable */ >+#define _DRM_FLAG_MEMTYPE_CACHED 0x00000004 /* Cached binding */ >+#define _DRM_FLAG_NEEDS_IOREMAP 0x00000008 /* Fixed memory needs ioremap >+ before kernel access. */ >+#define _DRM_FLAG_MEMTYPE_CMA 0x00000010 /* Can't map aperture */ >+#define _DRM_FLAG_MEMTYPE_CSELECT 0x00000020 /* Select caching */ >+ >+struct drm_buffer_manager { >+ struct drm_bo_lock bm_lock; >+ struct mutex evict_mutex; >+ int nice_mode; >+ int initialized; >+ struct drm_file *last_to_validate; >+ struct drm_mem_type_manager man[DRM_BO_MEM_TYPES]; >+ struct list_head unfenced; >+ struct list_head ddestroy; >+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,20) >+ struct work_struct wq; >+#else >+ struct delayed_work wq; >+#endif >+ uint32_t fence_type; >+ unsigned long cur_pages; >+ atomic_t count; >+ struct page *dummy_read_page; >+}; >+ >+struct drm_bo_driver { >+ const uint32_t *mem_type_prio; >+ const uint32_t *mem_busy_prio; >+ uint32_t num_mem_type_prio; >+ uint32_t num_mem_busy_prio; >+ struct drm_ttm_backend *(*create_ttm_backend_entry) >+ (struct drm_device *dev); >+ int (*fence_type) (struct drm_buffer_object *bo, uint32_t *fclass, >+ uint32_t *type); >+ int (*invalidate_caches) (struct drm_device *dev, uint64_t flags); >+ int 
(*init_mem_type) (struct drm_device *dev, uint32_t type, >+ struct drm_mem_type_manager *man); >+ /* >+ * evict_flags: >+ * >+ * @bo: the buffer object to be evicted >+ * >+ * Return the bo flags for a buffer which is not mapped to the hardware. >+ * These will be placed in proposed_flags so that when the move is >+ * finished, they'll end up in bo->mem.flags >+ */ >+ uint64_t(*evict_flags) (struct drm_buffer_object *bo); >+ /* >+ * move: >+ * >+ * @bo: the buffer to move >+ * >+ * @evict: whether this motion is evicting the buffer from >+ * the graphics address space >+ * >+ * @no_wait: whether this should give up and return -EBUSY >+ * if this move would require sleeping >+ * >+ * @new_mem: the new memory region receiving the buffer >+ * >+ * Move a buffer between two memory regions. >+ */ >+ int (*move) (struct drm_buffer_object *bo, >+ int evict, int no_wait, struct drm_bo_mem_reg *new_mem); >+ /* >+ * ttm_cache_flush >+ */ >+ void (*ttm_cache_flush)(struct drm_ttm *ttm); >+}; >+ >+/* >+ * buffer objects (drm_bo.c) >+ */ >+ >+extern int drm_bo_create_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv); >+extern int drm_bo_destroy_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv); >+extern int drm_bo_map_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv); >+extern int drm_bo_unmap_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv); >+extern int drm_bo_reference_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv); >+extern int drm_bo_unreference_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv); >+extern int drm_bo_wait_idle_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv); >+extern int drm_bo_info_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv); >+extern int drm_bo_setstatus_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv); >+extern int drm_mm_init_ioctl(struct drm_device *dev, void 
*data, struct drm_file *file_priv);
>+extern int drm_mm_takedown_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv);
>+extern int drm_mm_lock_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv);
>+extern int drm_mm_unlock_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv);
>+extern int drm_bo_version_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv);
>+extern int drm_bo_driver_finish(struct drm_device *dev);
>+extern int drm_bo_driver_init(struct drm_device *dev);
>+extern int drm_bo_pci_offset(struct drm_device *dev,
>+ struct drm_bo_mem_reg *mem,
>+ unsigned long *bus_base,
>+ unsigned long *bus_offset,
>+ unsigned long *bus_size);
>+extern int drm_mem_reg_is_pci(struct drm_device *dev, struct drm_bo_mem_reg *mem);
>+
>+extern void drm_bo_usage_deref_locked(struct drm_buffer_object **bo);
>+extern void drm_bo_usage_deref_unlocked(struct drm_buffer_object **bo);
>+extern void drm_putback_buffer_objects(struct drm_device *dev);
>+extern int drm_fence_buffer_objects(struct drm_device *dev,
>+ struct list_head *list,
>+ uint32_t fence_flags,
>+ struct drm_fence_object *fence,
>+ struct drm_fence_object **used_fence);
>+extern void drm_bo_add_to_lru(struct drm_buffer_object *bo);
>+extern int drm_buffer_object_create(struct drm_device *dev, unsigned long size,
>+ enum drm_bo_type type, uint64_t flags,
>+ uint32_t hint, uint32_t page_alignment,
>+ unsigned long buffer_start,
>+ struct drm_buffer_object **bo);
>+extern int drm_bo_wait(struct drm_buffer_object *bo, int lazy, int ignore_signals,
>+ int no_wait);
>+extern int drm_bo_mem_space(struct drm_buffer_object *bo,
>+ struct drm_bo_mem_reg *mem, int no_wait);
>+extern int drm_bo_move_buffer(struct drm_buffer_object *bo,
>+ uint64_t new_mem_flags,
>+ int no_wait, int move_unfenced);
>+extern int drm_bo_clean_mm(struct drm_device *dev, unsigned mem_type);
>+extern int drm_bo_init_mm(struct drm_device *dev, unsigned type,
>+ unsigned long p_offset, unsigned long p_size);
>+extern int drm_bo_handle_validate(struct drm_file *file_priv, uint32_t handle,
>+ uint64_t flags, uint64_t mask, uint32_t hint,
>+ uint32_t fence_class, int use_old_fence_class,
>+ struct drm_bo_info_rep *rep,
>+ struct drm_buffer_object **bo_rep);
>+extern struct drm_buffer_object *drm_lookup_buffer_object(struct drm_file *file_priv,
>+ uint32_t handle,
>+ int check_owner);
>+extern int drm_bo_do_validate(struct drm_buffer_object *bo,
>+ uint64_t flags, uint64_t mask, uint32_t hint,
>+ uint32_t fence_class,
>+ struct drm_bo_info_rep *rep);
>+
>+/*
>+ * Buffer object memory move- and map helpers.
>+ * drm_bo_move.c
>+ */
>+
>+extern int drm_bo_move_ttm(struct drm_buffer_object *bo,
>+ int evict, int no_wait,
>+ struct drm_bo_mem_reg *new_mem);
>+extern int drm_bo_move_memcpy(struct drm_buffer_object *bo,
>+ int evict,
>+ int no_wait, struct drm_bo_mem_reg *new_mem);
>+extern int drm_bo_move_accel_cleanup(struct drm_buffer_object *bo,
>+ int evict, int no_wait,
>+ uint32_t fence_class, uint32_t fence_type,
>+ uint32_t fence_flags,
>+ struct drm_bo_mem_reg *new_mem);
>+extern int drm_bo_same_page(unsigned long offset, unsigned long offset2);
>+extern unsigned long drm_bo_offset_end(unsigned long offset,
>+ unsigned long end);
>+
>+struct drm_bo_kmap_obj {
>+ void *virtual;
>+ struct page *page;
>+ enum {
>+ bo_map_iomap,
>+ bo_map_vmap,
>+ bo_map_kmap,
>+ bo_map_premapped,
>+ } bo_kmap_type;
>+};
>+
>+static inline void *drm_bmo_virtual(struct drm_bo_kmap_obj *map, int *is_iomem)
>+{
>+ *is_iomem = (map->bo_kmap_type == bo_map_iomap ||
>+ map->bo_kmap_type == bo_map_premapped);
>+ return map->virtual;
>+}
>+extern void drm_bo_kunmap(struct drm_bo_kmap_obj *map);
>+extern int drm_bo_kmap(struct drm_buffer_object *bo, unsigned long start_page,
>+ unsigned long num_pages, struct drm_bo_kmap_obj *map);
>+
>+
>+/*
>+ * drm_regman.c
>+ */
>+
>+struct drm_reg {
>+ struct list_head head;
>+ struct drm_fence_object *fence;
>+ uint32_t fence_type;
>+ uint32_t new_fence_type;
>+};
>+
>+struct drm_reg_manager {
>+ struct list_head free;
>+ struct list_head lru;
>+ struct list_head unfenced;
>+
>+ int (*reg_reusable)(const struct drm_reg *reg, const void *data);
>+ void (*reg_destroy)(struct drm_reg *reg);
>+};
>+
>+extern int drm_regs_alloc(struct drm_reg_manager *manager,
>+ const void *data,
>+ uint32_t fence_class,
>+ uint32_t fence_type,
>+ int interruptible,
>+ int no_wait,
>+ struct drm_reg **reg);
>+
>+extern void drm_regs_fence(struct drm_reg_manager *regs,
>+ struct drm_fence_object *fence);
>+
>+extern void drm_regs_free(struct drm_reg_manager *manager);
>+extern void drm_regs_add(struct drm_reg_manager *manager, struct drm_reg *reg);
>+extern void drm_regs_init(struct drm_reg_manager *manager,
>+ int (*reg_reusable)(const struct drm_reg *,
>+ const void *),
>+ void (*reg_destroy)(struct drm_reg *));
>+
>+/*
>+ * drm_bo_lock.c
>+ * Simple replacement for the hardware lock on buffer manager init and clean.
>+ */
>+
>+
>+extern void drm_bo_init_lock(struct drm_bo_lock *lock);
>+extern void drm_bo_read_unlock(struct drm_bo_lock *lock);
>+extern int drm_bo_read_lock(struct drm_bo_lock *lock);
>+extern int drm_bo_write_lock(struct drm_bo_lock *lock,
>+ struct drm_file *file_priv);
>+
>+extern int drm_bo_write_unlock(struct drm_bo_lock *lock,
>+ struct drm_file *file_priv);
>+
>+#ifdef CONFIG_DEBUG_MUTEXES
>+#define DRM_ASSERT_LOCKED(_mutex) \
>+ BUG_ON(!mutex_is_locked(_mutex) || \
>+ ((_mutex)->owner != current_thread_info()))
>+#else
>+#define DRM_ASSERT_LOCKED(_mutex)
>+#endif
>+#endif
>diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_os_linux.h linux-2.6.23.i686/drivers/char/drm/drm_os_linux.h
>--- linux-2.6.23.i686.orig/drivers/char/drm/drm_os_linux.h 2008-01-06 18:54:41.000000000 +0100
>+++ linux-2.6.23.i686/drivers/char/drm/drm_os_linux.h 2008-01-06 09:24:57.000000000 +0100
>@@ -10,18 +10,36 @@
> #define DRM_CURRENTPID current->pid
> #define DRM_SUSER(p) capable(CAP_SYS_ADMIN)
> #define DRM_UDELAY(d) udelay(d)
>+#if LINUX_VERSION_CODE <= 0x020608 /* KERNEL_VERSION(2,6,8) */
>+#ifndef __iomem
>+#define __iomem
>+#endif
> /** Read a byte from a MMIO region */
> #define DRM_READ8(map, offset) readb(((void __iomem *)(map)->handle) + (offset))
> /** Read a word from a MMIO region */
>-#define DRM_READ16(map, offset) readw(((void __iomem *)(map)->handle) + (offset))
>+#define DRM_READ16(map, offset) readw(((void __iomem *)(map)->handle) + (offset))
> /** Read a dword from a MMIO region */
> #define DRM_READ32(map, offset) readl(((void __iomem *)(map)->handle) + (offset))
> /** Write a byte into a MMIO region */
> #define DRM_WRITE8(map, offset, val) writeb(val, ((void __iomem *)(map)->handle) + (offset))
> /** Write a word into a MMIO region */
>-#define DRM_WRITE16(map, offset, val) writew(val, ((void __iomem *)(map)->handle) + (offset))
>+#define DRM_WRITE16(map, offset, val) writew(val, ((void __iomem *)(map)->handle) + (offset))
> /** Write a dword into a MMIO region */
> #define DRM_WRITE32(map, offset, val) writel(val, ((void __iomem *)(map)->handle) + (offset))
>+#else
>+/** Read a byte from a MMIO region */
>+#define DRM_READ8(map, offset) readb((map)->handle + (offset))
>+/** Read a word from a MMIO region */
>+#define DRM_READ16(map, offset) readw((map)->handle + (offset))
>+/** Read a dword from a MMIO region */
>+#define DRM_READ32(map, offset) readl((map)->handle + (offset))
>+/** Write a byte into a MMIO region */
>+#define DRM_WRITE8(map, offset, val) writeb(val, (map)->handle + (offset))
>+/** Write a word into a MMIO region */
>+#define DRM_WRITE16(map, offset, val) writew(val, (map)->handle + (offset))
>+/** Write a dword into a MMIO region */
>+#define DRM_WRITE32(map, offset, val) writel(val, (map)->handle + (offset))
>+#endif
> /** Read memory barrier */
> #define DRM_READMEMORYBARRIER() rmb()
> /** Write memory barrier */
>@@ -31,6 +49,12 @@
> 
> /** IRQ handler arguments and return type and values */
> #define DRM_IRQ_ARGS int irq, void *arg
>+/** backwards compatibility with old irq return values */
>+#ifndef IRQ_HANDLED
>+typedef void irqreturn_t;
>+#define IRQ_HANDLED /* nothing */
>+#define IRQ_NONE /* nothing */
>+#endif
> 
> /** AGP types */
> #if __OS_HAS_AGP
>@@ -42,8 +66,8 @@ struct no_agp_kern {
> unsigned long aper_base;
> unsigned long aper_size;
> };
>-#define DRM_AGP_MEM int
>-#define DRM_AGP_KERN struct no_agp_kern
>+#define DRM_AGP_MEM int
>+#define DRM_AGP_KERN struct no_agp_kern
> #endif
> 
> #if !(__OS_HAS_MTRR)
>@@ -59,17 +83,8 @@ static __inline__ int mtrr_del(int reg,
> }
> 
> #define MTRR_TYPE_WRCOMB 1
>-
> #endif
> 
>-/** For data going into the kernel through the ioctl argument */
>-#define DRM_COPY_FROM_USER_IOCTL(arg1, arg2, arg3) \
>- if ( copy_from_user(&arg1, arg2, arg3) ) \
>- return -EFAULT
>-/** For data going from the kernel through the ioctl argument */
>-#define DRM_COPY_TO_USER_IOCTL(arg1, arg2, arg3) \
>- if ( copy_to_user(arg1, &arg2, arg3) ) \
>- return -EFAULT
> /** Other copying of data to kernel space */
> #define DRM_COPY_FROM_USER(arg1, arg2, arg3) \
> copy_from_user(arg1, arg2, arg3)
>@@ -77,9 +92,9 @@ static __inline__ int mtrr_del(int reg,
> #define DRM_COPY_TO_USER(arg1, arg2, arg3) \
> copy_to_user(arg1, arg2, arg3)
> /* Macros for copyfrom user, but checking readability only once */
>-#define DRM_VERIFYAREA_READ( uaddr, size ) \
>- (access_ok( VERIFY_READ, uaddr, size ) ? 0 : -EFAULT)
>-#define DRM_COPY_FROM_USER_UNCHECKED(arg1, arg2, arg3) \
>+#define DRM_VERIFYAREA_READ( uaddr, size ) \
>+ (access_ok( VERIFY_READ, uaddr, size) ? 0 : -EFAULT)
>+#define DRM_COPY_FROM_USER_UNCHECKED(arg1, arg2, arg3) \
> __copy_from_user(arg1, arg2, arg3)
> #define DRM_COPY_TO_USER_UNCHECKED(arg1, arg2, arg3) \
> __copy_to_user(arg1, arg2, arg3)
>@@ -114,3 +129,17 @@ do { \
> 
> #define DRM_WAKEUP( queue ) wake_up_interruptible( queue )
> #define DRM_INIT_WAITQUEUE( queue ) init_waitqueue_head( queue )
>+
>+/** Type for the OS's non-sleepable mutex lock */
>+#define DRM_SPINTYPE spinlock_t
>+/**
>+ * Initialize the lock for use. name is an optional string describing the
>+ * lock
>+ */
>+#define DRM_SPININIT(l,name) spin_lock_init(l)
>+#define DRM_SPINUNINIT(l)
>+#define DRM_SPINLOCK(l) spin_lock(l)
>+#define DRM_SPINUNLOCK(l) spin_unlock(l)
>+#define DRM_SPINLOCK_IRQSAVE(l, _flags) spin_lock_irqsave(l, _flags);
>+#define DRM_SPINUNLOCK_IRQRESTORE(l, _flags) spin_unlock_irqrestore(l, _flags);
>+#define DRM_SPINLOCK_ASSERT(l) do {} while (0)
>diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_pci.c linux-2.6.23.i686/drivers/char/drm/drm_pci.c
>--- linux-2.6.23.i686.orig/drivers/char/drm/drm_pci.c 2007-10-09 22:31:38.000000000 +0200
>+++ linux-2.6.23.i686/drivers/char/drm/drm_pci.c 2008-01-06 09:24:57.000000000 +0100
>@@ -51,10 +51,8 @@ drm_dma_handle_t *drm_pci_alloc(struct d
> dma_addr_t maxaddr)
> {
> drm_dma_handle_t *dmah;
>-#if 1
> unsigned long addr;
> size_t sz;
>-#endif
> #ifdef DRM_DEBUG_MEMORY
> int area = DRM_MEM_DMA;
> 
>@@ -118,7 +116,6 @@ drm_dma_handle_t *drm_pci_alloc(struct d
> 
> return dmah;
> }
>-
> EXPORT_SYMBOL(drm_pci_alloc);
> 
> /**
>@@ -126,12 +123,10 @@ EXPORT_SYMBOL(drm_pci_alloc);
> *
> * This function is for internal use in the Linux-specific DRM core code.
> */
>-void __drm_pci_free(struct drm_device * dev, drm_dma_handle_t * dmah)
>+void __drm_pci_free(struct drm_device *dev, drm_dma_handle_t *dmah)
> {
>-#if 1
> unsigned long addr;
> size_t sz;
>-#endif
> #ifdef DRM_DEBUG_MEMORY
> int area = DRM_MEM_DMA;
> int alloc_count;
>@@ -172,12 +167,11 @@ void __drm_pci_free(struct drm_device *
> /**
> * \brief Free a PCI consistent memory block
> */
>-void drm_pci_free(struct drm_device * dev, drm_dma_handle_t * dmah)
>+void drm_pci_free(struct drm_device *dev, drm_dma_handle_t *dmah)
> {
> __drm_pci_free(dev, dmah);
> kfree(dmah);
> }
>-
> EXPORT_SYMBOL(drm_pci_free);
> 
> /*@}*/
>diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_pciids.h linux-2.6.23.i686/drivers/char/drm/drm_pciids.h
>--- linux-2.6.23.i686.orig/drivers/char/drm/drm_pciids.h 2008-01-06 18:54:41.000000000 +0100
>+++ linux-2.6.23.i686/drivers/char/drm/drm_pciids.h 2008-01-06 18:49:12.000000000 +0100
>@@ -52,7 +52,7 @@
> {0x1002, 0x4C59, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV100|RADEON_IS_MOBILITY}, \
> {0x1002, 0x4C5A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV100|RADEON_IS_MOBILITY}, \
> {0x1002, 0x4C64, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV250|RADEON_IS_MOBILITY}, \
>- {0x1002, 0x4C66, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV250|RADEON_IS_MOBILITY}, \
>+ {0x1002, 0x4C66, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV250}, \
> {0x1002, 0x4C67, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV250|RADEON_IS_MOBILITY}, \
> {0x1002, 0x4E44, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R300}, \
> {0x1002, 0x4E45, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R300}, \
>@@ -213,7 +213,7 @@
> {0x1002, 0x4c4e, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
> {0, 0, 0}
> 
>-#define sisdrv_PCI_IDS \
>+#define sis_PCI_IDS \
> {0x1039, 0x0300, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
> {0x1039, 0x5300, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
> {0x1039, 0x6300, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
>@@ -236,10 +236,8 @@
> {0x1106, 0x3022, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
> {0x1106, 0x3118, PCI_ANY_ID, PCI_ANY_ID, 0, 0, VIA_PRO_GROUP_A}, \
> {0x1106, 0x3122, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
>- {0x1106, 0x7204, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
> {0x1106, 0x7205, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
> {0x1106, 0x3108, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
>- {0x1106, 0x3304, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
> {0x1106, 0x3344, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
> {0x1106, 0x3343, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
> {0x1106, 0x3230, PCI_ANY_ID, PCI_ANY_ID, 0, 0, VIA_DX9_0}, \
>@@ -294,285 +292,215 @@
> {0, 0, 0}
> 
> #define i915_PCI_IDS \
>- {0x8086, 0x3577, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
>- {0x8086, 0x2562, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
>- {0x8086, 0x3582, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
>- {0x8086, 0x2572, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
>- {0x8086, 0x2582, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
>- {0x8086, 0x2592, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
>- {0x8086, 0x2772, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
>- {0x8086, 0x27a2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
>- {0x8086, 0x27ae, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
>- {0x8086, 0x2972, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
>- {0x8086, 0x2982, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
>- {0x8086, 0x2992, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
>- {0x8086, 0x29a2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
>- {0x8086, 0x29b2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
>- {0x8086, 0x29c2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
>- {0x8086, 0x29d2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
>- {0x8086, 0x2a02, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
>- {0x8086, 0x2a12, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
>- {0, 0, 0}
>-
>-#define nouveau_PCI_IDS \
>- {0x10de, 0x0008, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_03}, \
>- {0x10de, 0x0009, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_03}, \
>- {0x10de, 0x0010, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_03}, \
>- {0x10de, 0x0020, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_04}, \
>- {0x10de, 0x0028, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_04}, \
>- {0x10de, 0x0029, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_04}, \
>- {0x10de, 0x002a, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_04}, \
>- {0x10de, 0x002b, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_04}, \
>- {0x10de, 0x002c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_04}, \
>- {0x10de, 0x002d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_04}, \
>- {0x10de, 0x002e, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_04}, \
>- {0x10de, 0x002f, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_04}, \
>- {0x10de, 0x0040, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0041, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0042, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0043, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0044, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0045, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0046, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0047, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0048, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0049, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x004d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x004e, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0090, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0091, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0092, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0093, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0095, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0098, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0099, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x009d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x00a0, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_04}, \
>- {0x10de, 0x00c0, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x00c1, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x00c2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x00c3, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x00c8, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x00c9, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x00cc, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x00cd, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x00ce, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x00f0, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x00f1, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x00f2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x00f3, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x00f4, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x00f5, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x00f6, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x00f8, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x00f9, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x00fa, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x00fb, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x00fc, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x00fd, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x00fe, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x00ff, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_17}, \
>- {0x10de, 0x0100, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_10}, \
>- {0x10de, 0x0101, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_10}, \
>- {0x10de, 0x0103, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_10}, \
>- {0x10de, 0x0110, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_11}, \
>- {0x10de, 0x0111, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_11}, \
>- {0x10de, 0x0112, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_11}, \
>- {0x10de, 0x0113, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_11}, \
>- {0x10de, 0x0140, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0141, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0142, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0143, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0144, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0145, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0146, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0147, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0148, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0149, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x014a, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x014c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x014d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_17}, \
>- {0x10de, 0x014e, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x014f, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0150, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_15}, \
>- {0x10de, 0x0151, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_15}, \
>- {0x10de, 0x0152, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_15}, \
>- {0x10de, 0x0153, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_15}, \
>- {0x10de, 0x0160, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_44}, \
>- {0x10de, 0x0161, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_44}, \
>- {0x10de, 0x0162, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_44}, \
>- {0x10de, 0x0163, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_44}, \
>- {0x10de, 0x0164, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_44}, \
>- {0x10de, 0x0165, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_44}, \
>- {0x10de, 0x0166, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_44}, \
>- {0x10de, 0x0167, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_44}, \
>- {0x10de, 0x0168, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_44}, \
>- {0x10de, 0x0169, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_44}, \
>- {0x10de, 0x0170, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_17}, \
>- {0x10de, 0x0171, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_17}, \
>- {0x10de, 0x0172, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_17}, \
>- {0x10de, 0x0173, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_17}, \
>- {0x10de, 0x0174, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_17}, \
>- {0x10de, 0x0175, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_17}, \
>- {0x10de, 0x0176, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_17}, \
>- {0x10de, 0x0177, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_17}, \
>- {0x10de, 0x0178, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_17}, \
>- {0x10de, 0x0179, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_17}, \
>- {0x10de, 0x017a, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_17}, \
>- {0x10de, 0x017b, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_17}, \
>- {0x10de, 0x017c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_17}, \
>- {0x10de, 0x017d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_17}, \
>- {0x10de, 0x0181, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_17}, \
>- {0x10de, 0x0182, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_17}, \
>- {0x10de, 0x0183, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_17}, \
>- {0x10de, 0x0185, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_17}, \
>- {0x10de, 0x0186, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_17}, \
>- {0x10de, 0x0187, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_17}, \
>- {0x10de, 0x0188, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_17}, \
>- {0x10de, 0x018a, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_17}, \
>- {0x10de, 0x018b, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_17}, \
>- {0x10de, 0x018c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_17}, \
>- {0x10de, 0x018d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_17}, \
>- {0x10de, 0x0191, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_50}, \
>- {0x10de, 0x0193, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_50}, \
>- {0x10de, 0x0194, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_50}, \
>- {0x10de, 0x019d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_50}, \
>- {0x10de, 0x019e, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_50}, \
>- {0x10de, 0x01a0, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_11|NV_NFORCE}, \
>- {0x10de, 0x01d1, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_44}, \
>- {0x10de, 0x01d3, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_44}, \
>- {0x10de, 0x01d6, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_44}, \
>- {0x10de, 0x01d7, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_44}, \
>- {0x10de, 0x01d8, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_44}, \
>- {0x10de, 0x01d9, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_44}, \
>- {0x10de, 0x01da, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_44}, \
>- {0x10de, 0x01db, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_44}, \
>- {0x10de, 0x01dc, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_44}, \
>- {0x10de, 0x01dd, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_44}, \
>- {0x10de, 0x01de, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_44}, \
>- {0x10de, 0x01df, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_44}, \
>- {0x10de, 0x01f0, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_17|NV_NFORCE2}, \
>- {0x10de, 0x0200, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_20}, \
>- {0x10de, 0x0201, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_20}, \
>- {0x10de, 0x0202, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_20}, \
>- {0x10de, 0x0203, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_20}, \
>- {0x10de, 0x0211, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0212, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0215, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0218, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0221, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_44}, \
>- {0x10de, 0x0222, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_44}, \
>- {0x10de, 0x0240, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_44}, \
>- {0x10de, 0x0241, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_44}, \
>- {0x10de, 0x0242, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_44}, \
>- {0x10de, 0x0244, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_44}, \
>- {0x10de, 0x0247, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_44}, \
>- {0x10de, 0x0250, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_25}, \
>- {0x10de, 0x0251, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_25}, \
>- {0x10de, 0x0252, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_25}, \
>- {0x10de, 0x0253, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_25}, \
>- {0x10de, 0x0258, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_25}, \
>- {0x10de, 0x0259, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_25}, \
>- {0x10de, 0x025b, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_25}, \
>- {0x10de, 0x0280, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_25}, \
>- {0x10de, 0x0281, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_25}, \
>- {0x10de, 0x0282, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_25}, \
>- {0x10de, 0x0286, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_25}, \
>- {0x10de, 0x0288, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_25}, \
>- {0x10de, 0x0289, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_25}, \
>- {0x10de, 0x028c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_25}, \
>- {0x10de, 0x0290, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0291, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0292, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0298, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0299, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x029a, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x029b, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x029c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x029d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x029e, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x029f, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x02a0, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_20}, \
>- {0x10de, 0x02e1, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0300, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x0301, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x0302, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x0308, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x0309, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x0311, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x0312, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x0313, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x0314, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x0316, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x0317, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x031a, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x031b, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x031d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x031e, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x031f, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x0320, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_34}, \
>- {0x10de, 0x0321, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_34}, \
>- {0x10de, 0x0322, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_34}, \
>- {0x10de, 0x0323, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_34}, \
>- {0x10de, 0x0324, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_34}, \
>- {0x10de, 0x0325, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_34}, \
>- {0x10de, 0x0326, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_34}, \
>- {0x10de, 0x0327, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_34}, \
>- {0x10de, 0x0328, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_34}, \
>- {0x10de, 0x0329, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_34}, \
>- {0x10de, 0x032a, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_34}, \
>- {0x10de, 0x032b, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_34}, \
>- {0x10de, 0x032c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_34}, \
>- {0x10de, 0x032d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_34}, \
>- {0x10de, 0x032f, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_34}, \
>- {0x10de, 0x0330, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x0331, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x0332, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x0333, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x0334, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x0338, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x033f, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x0341, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x0342, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x0343, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x0344, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x0345, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x0347, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x0348, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x0349, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x034b, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x034c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x034e, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x034f, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_30}, \
>- {0x10de, 0x0391, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0392, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0393, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0394, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0395, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0397, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0398, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x0399, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x039a, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x039b, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x039c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x039e, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_40}, \
>- {0x10de, 0x03d0, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_44}, \
>- {0x10de, 0x03d1, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_44}, \
>- {0x10de, 0x03d2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_44}, \
>- {0x10de, 0x03d5, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_44}, \
>- {0x10de, 0x0400, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_50}, \
>- {0x10de, 0x0402, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_50}, \
>- {0x10de, 0x0421, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_50}, \
>- {0x10de, 0x0422, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_50}, \
>- {0x10de, 0x0423, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_50}, \
>- {0x12d2, 0x0008, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_03}, \
>- {0x12d2, 0x0009, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_03}, \
>- {0x12d2, 0x0018, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_03}, \
>- {0x12d2, 0x0019, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_03}, \
>- {0x12d2, 0x0020, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_04}, \
>- {0x12d2, 0x0028, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_04}, \
>- {0x12d2, 0x0029, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_04}, \
>- {0x12d2, 0x002c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_04}, \
>- {0x12d2, 0x00a0, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV_04}, \
>+ {0x8086, 0x3577, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_I8XX}, \
>+ {0x8086, 0x2562, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_I8XX}, \
>+ {0x8086, 0x3582, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_I8XX}, \
>+ {0x8086, 0x2572, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_I8XX}, \
>+ {0x8086, 0x2582, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_I9XX|CHIP_I915}, \
>+ {0x8086, 0x2592, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_I9XX|CHIP_I915}, \
>+ {0x8086, 0x2772, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_I9XX|CHIP_I915}, \
>+ {0x8086, 0x27A2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_I9XX|CHIP_I915}, \
>+ {0x8086, 0x27AE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_I9XX|CHIP_I915}, \
>+ {0x8086, 0x2972, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_I9XX|CHIP_I965}, \
>+ {0x8086, 0x2982, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_I9XX|CHIP_I965}, \
>+ {0x8086, 0x2992, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_I9XX|CHIP_I965}, \
>+ {0x8086, 0x29A2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_I9XX|CHIP_I965}, \
>+ {0x8086, 0x2A02, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_I9XX|CHIP_I965}, \
>+ {0x8086, 0x2A12, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_I9XX|CHIP_I965}, \
>+ {0x8086, 0x29C2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_I9XX|CHIP_I915}, \
>+ {0x8086, 0x29B2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_I9XX|CHIP_I915}, \
>+ {0x8086, 0x29D2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_I9XX|CHIP_I915}, \
>+ {0, 0, 0}
>+
>+#define imagine_PCI_IDS \
>+ {0x105d, 0x2309, PCI_ANY_ID, PCI_ANY_ID, 0, 0, IMAGINE_128}, \
>+ {0x105d, 0x2339, PCI_ANY_ID, PCI_ANY_ID, 0, 0, IMAGINE_128_2}, \
>+ {0x105d, 0x493d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, IMAGINE_T2R}, \
>+ {0x105d, 0x5348, PCI_ANY_ID, PCI_ANY_ID, 0, 0, IMAGINE_REV4}, \
>+ {0, 0, 0}
>+
>+#define nv_PCI_IDS \
>+ {0x10DE, 0x0020, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV04}, \
>+ {0x10DE, 0x0028, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV04}, \
>+ {0x10DE, 0x002A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV04}, \
>+ {0x10DE, 0x002C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV04}, \
>+ {0x10DE, 0x0029, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV04}, \
>+ {0x10DE, 0x002D, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV04}, \
>+ {0x10DE, 0x00A0, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV04}, \
>+ {0x10DE, 0x0100, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x0101, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x0103, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x0110, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x0111, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x0112, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x0113, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x0150, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x0151, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x0152, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x0153, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x0170, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x0171, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x0172, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x0173, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x0174, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x0175, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x0176, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x0177, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x0178, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x0179, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x017A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x017C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x017D, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x0181, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x0182, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x0183, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x0185, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x0186, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x0187, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x0188, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x0189, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x018A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x018B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x018C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x018D, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x01A0, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x01F0, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV10}, \
>+ {0x10DE, 0x0200, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV20}, \
>+ {0x10DE, 0x0201, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV20}, \
>+ {0x10DE, 0x0202, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV20}, \
>+ {0x10DE, 0x0203, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV20}, \
>+ {0x10DE, 0x0250, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV20}, \
>+ {0x10DE, 0x0251, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV20}, \
>+ {0x10DE, 0x0252, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV20}, \
>+ {0x10DE, 0x0253, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV20}, \
>+ {0x10DE, 0x0258, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV20}, \
>+ {0x10DE, 0x0259, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV20}, \
>+ {0x10DE, 0x025B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV20}, \
>+ {0x10DE, 0x0280, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV20}, \
>+ {0x10DE, 0x0281, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV20}, \
>+ {0x10DE, 0x0282, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV20}, \
>+ {0x10DE, 0x0286, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV20}, \
>+ {0x10DE, 0x028C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV20}, \
>+ {0x10DE, 0x0288, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV20}, \
>+ {0x10DE, 0x0289, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV20}, \
>+ {0x10DE, 0x0301, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x0302, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x0308, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x0309, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x0311, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x0312, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x0313, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x0314, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x0316, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x0317, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x031A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x031B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x031C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x031D, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x031E, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x031F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x0320, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x0321, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x0322, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x0323, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x0324, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x0325, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x0326, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x0327, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x0328, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x0329, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x032A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x032B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x032C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x032D, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x032F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x0330, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x0331, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x0332, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x0333, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x033F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x0334, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x0338, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x0341, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x0342, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x0343, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x0344, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x0345, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x0347, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x0348, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x0349, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x034B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x034C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x034E, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x034F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV30}, \
>+ {0x10DE, 0x0040, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x0041, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x0042, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x0043, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x0045, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x0046, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x0049, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x004E, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x00C0, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x00C1, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x00C2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x00C8, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x00C9, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x00CC, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x00CD, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x00CE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10de, 0x00f0, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10de, 0x00f1, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x0140, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x0141, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x0142, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x0143, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x0144, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x0145, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x0146, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x0147, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x0148, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x0149, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x014B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x014C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x014D, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x014E, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x014F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x0160, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x0161, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x0162, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x0163, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x0164, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x0165, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x0166, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x0167, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \
>+ {0x10DE, 0x0168, PCI_ANY_ID, PCI_ANY_ID, 
0, 0, NV40}, \ >+ {0x10DE, 0x0169, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \ >+ {0x10DE, 0x016B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \ >+ {0x10DE, 0x016C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \ >+ {0x10DE, 0x016D, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \ >+ {0x10DE, 0x016E, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \ >+ {0x10DE, 0x0210, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \ >+ {0x10DE, 0x0211, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \ >+ {0x10DE, 0x0212, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \ >+ {0x10DE, 0x0215, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \ >+ {0x10DE, 0x0220, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \ >+ {0x10DE, 0x0221, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \ >+ {0x10DE, 0x0222, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \ >+ {0x10DE, 0x0228, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \ >+ {0x10DE, 0x0090, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \ >+ {0x10DE, 0x0091, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \ >+ {0x10DE, 0x0092, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \ >+ {0x10DE, 0x0093, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \ >+ {0x10DE, 0x0094, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \ >+ {0x10DE, 0x0098, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \ >+ {0x10DE, 0x0099, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \ >+ {0x10DE, 0x009C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \ >+ {0x10DE, 0x009D, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \ >+ {0x10DE, 0x009E, PCI_ANY_ID, PCI_ANY_ID, 0, 0, NV40}, \ >+ {0, 0, 0} >+ >+#define xgi_PCI_IDS \ >+ {0x18ca, 0x2200, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \ >+ {0x18ca, 0x0047, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \ > {0, 0, 0} >- >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drmP.h linux-2.6.23.i686/drivers/char/drm/drmP.h >--- linux-2.6.23.i686.orig/drivers/char/drm/drmP.h 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/drmP.h 2008-01-06 09:24:57.000000000 +0100 >@@ -34,9 +34,6 @@ > #ifndef _DRM_P_H_ > #define _DRM_P_H_ > >-/* If you want the memory alloc debug functionality, change define below */ >-/* #define DEBUG_MEMORY */ >- > #ifdef 
__KERNEL__ > #ifdef __alpha__ > /* add include of current.h so that "current" is defined >@@ -52,11 +49,14 @@ > #include <linux/init.h> > #include <linux/file.h> > #include <linux/pci.h> >-#include <linux/jiffies.h> >+#include <linux/version.h> >+#include <linux/sched.h> > #include <linux/smp_lock.h> /* For (un)lock_kernel */ > #include <linux/mm.h> >-#include <linux/cdev.h> >+#include <linux/pagemap.h> >+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,16) > #include <linux/mutex.h> >+#endif > #if defined(__alpha__) || defined(__powerpc__) > #include <asm/pgtable.h> /* For pte_wrprotect */ > #endif >@@ -66,6 +66,7 @@ > #ifdef CONFIG_MTRR > #include <asm/mtrr.h> > #endif >+#include <asm/agp.h> > #if defined(CONFIG_AGP) || defined(CONFIG_AGP_MODULE) > #include <linux/types.h> > #include <linux/agp_backend.h> >@@ -74,17 +75,21 @@ > #include <linux/poll.h> > #include <asm/pgalloc.h> > #include "drm.h" >- >+#include <linux/slab.h> > #include <linux/idr.h> > > #define __OS_HAS_AGP (defined(CONFIG_AGP) || (defined(CONFIG_AGP_MODULE) && defined(MODULE))) > #define __OS_HAS_MTRR (defined(CONFIG_MTRR)) > >-struct drm_file; >-struct drm_device; >- > #include "drm_os_linux.h" > #include "drm_hashtab.h" >+#include "drm_internal.h" >+ >+struct drm_device; >+struct drm_file; >+ >+/* If you want the memory alloc debug functionality, change define below */ >+/* #define DEBUG_MEMORY */ > > /***********************************************************************/ > /** \name DRM template customization defaults */ >@@ -104,6 +109,9 @@ struct drm_device; > #define DRIVER_FB_DMA 0x400 > #define DRIVER_IRQ_VBL2 0x800 > >+ >+/*@}*/ >+ > /***********************************************************************/ > /** \name Begin the DRM... */ > /*@{*/ >@@ -111,7 +119,7 @@ struct drm_device; > #define DRM_DEBUG_CODE 2 /**< Include debugging code if > 1, then > also include looping detection. */ > >-#define DRM_MAGIC_HASH_ORDER 4 /**< Size of key hash table. Must be power of 2. 
*/ >+#define DRM_MAGIC_HASH_ORDER 4 /**< Size of key hash table. Must be power of 2. */ > #define DRM_KERNEL_CONTEXT 0 /**< Change drm_resctx if changed */ > #define DRM_RESERVED_CONTEXTS 1 /**< Change drm_resctx if changed */ > #define DRM_LOOPING_LIMIT 5000000 >@@ -144,12 +152,29 @@ struct drm_device; > #define DRM_MEM_CTXLIST 21 > #define DRM_MEM_MM 22 > #define DRM_MEM_HASHTAB 23 >+#define DRM_MEM_OBJECTS 24 >+#define DRM_MEM_FENCE 25 >+#define DRM_MEM_TTM 26 >+#define DRM_MEM_BUFOBJ 27 > > #define DRM_MAX_CTXBITMAP (PAGE_SIZE * 8) > #define DRM_MAP_HASH_OFFSET 0x10000000 >+#define DRM_MAP_HASH_ORDER 12 >+#define DRM_OBJECT_HASH_ORDER 12 >+#define DRM_FILE_PAGE_OFFSET_START ((0xFFFFFFFFUL >> PAGE_SHIFT) + 1) >+#define DRM_FILE_PAGE_OFFSET_SIZE ((0xFFFFFFFFUL >> PAGE_SHIFT) * 16) >+/* >+ * This should be small enough to allow the use of kmalloc for hash tables >+ * instead of vmalloc. >+ */ >+ >+#define DRM_FILE_HASH_ORDER 8 >+#define DRM_MM_INIT_MAX_PAGES 256 > > /*@}*/ > >+#include "drm_compat.h" >+ > /***********************************************************************/ > /** \name Macros to make printk easier */ > /*@{*/ >@@ -173,7 +198,6 @@ struct drm_device; > #define DRM_MEM_ERROR(area, fmt, arg...) \ > printk(KERN_ERR "[" DRM_NAME ":%s:%s] *ERROR* " fmt , __FUNCTION__, \ > drm_mem_stats[area].name , ##arg) >- > #define DRM_INFO(fmt, arg...) printk(KERN_INFO "[" DRM_NAME "] " fmt , ##arg) > > /** >@@ -185,9 +209,9 @@ struct drm_device; > #if DRM_DEBUG_CODE > #define DRM_DEBUG(fmt, arg...) 
\ > do { \ >- if ( drm_debug ) \ >+ if ( drm_debug ) \ > printk(KERN_DEBUG \ >- "[" DRM_NAME ":%s] " fmt , \ >+ "[" DRM_NAME ":%s] " fmt , \ > __FUNCTION__ , ##arg); \ > } while (0) > #else >@@ -211,6 +235,8 @@ struct drm_device; > /*@{*/ > > #define DRM_ARRAY_SIZE(x) ARRAY_SIZE(x) >+#define DRM_MIN(a,b) min(a,b) >+#define DRM_MAX(a,b) max(a,b) > > #define DRM_LEFTCOUNT(x) (((x)->rp + (x)->count - (x)->wp) % ((x)->count + 1)) > #define DRM_BUFCOUNT(x) ((x)->count - DRM_LEFTCOUNT(x)) >@@ -232,7 +258,7 @@ struct drm_device; > * Test that the hardware lock is held by the caller, returning otherwise. > * > * \param dev DRM device. >- * \param filp file pointer of the caller. >+ * \param file_priv DRM file private pointer of the caller. > */ > #define LOCK_TEST_WITH_RETURN( dev, file_priv ) \ > do { \ >@@ -260,10 +286,10 @@ do { \ > /** > * Ioctl function type. > * >- * \param inode device inode. >+ * \param dev DRM device structure >+ * \param data pointer to kernel-space stored data, copied in and out according >+ * to ioctl description. > * \param file_priv DRM file private pointer. >- * \param cmd command. >- * \param arg argument. > */ > typedef int drm_ioctl_t(struct drm_device *dev, void *data, > struct drm_file *file_priv); >@@ -271,16 +297,15 @@ typedef int drm_ioctl_t(struct drm_devic > typedef int drm_ioctl_compat_t(struct file *filp, unsigned int cmd, > unsigned long arg); > >-#define DRM_AUTH 0x1 >-#define DRM_MASTER 0x2 >-#define DRM_ROOT_ONLY 0x4 >+#define DRM_AUTH 0x1 >+#define DRM_MASTER 0x2 >+#define DRM_ROOT_ONLY 0x4 > > struct drm_ioctl_desc { > unsigned int cmd; > drm_ioctl_t *func; > int flags; > }; >- > /** > * Creates a driver or general drm_ioctl_desc array entry for the given > * ioctl, for use by drm_ioctl(). 
>@@ -292,7 +317,6 @@ struct drm_magic_entry {
> struct list_head head;
> struct drm_hash_item hash_item;
> struct drm_file *priv;
>- struct drm_magic_entry *next;
> };
>
> struct drm_vma_entry {
>@@ -328,8 +352,8 @@ struct drm_buf {
> DRM_LIST_RECLAIM = 5
> } list; /**< Which list we're on */
>
>- int dev_priv_size; /**< Size of buffer private storage */
>- void *dev_private; /**< Per-buffer private storage */
>+ int dev_priv_size; /**< Size of buffer private storage */
>+ void *dev_private; /**< Per-buffer private storage */
> };
>
> /** bufs is one longer than it has to be */
>@@ -371,10 +395,17 @@ struct drm_buf_entry {
> int seg_count;
> int page_order;
> struct drm_dma_handle **seglist;
>-
> struct drm_freelist freelist;
> };
>
>+
>+enum drm_ref_type {
>+ _DRM_REF_USE = 0,
>+ _DRM_REF_TYPE1,
>+ _DRM_NO_REF_TYPES
>+};
>+
>+
> /** File private data */
> struct drm_file {
> int authenticated;
>@@ -388,8 +419,19 @@ struct drm_file {
> struct drm_head *head;
> int remove_auth_on_close;
> unsigned long lock_count;
>- void *driver_priv;
>+
>+ /*
>+ * The user object hash table is global and resides in the
>+ * drm_device structure. We protect the lists and hash tables with the
>+ * device struct_mutex. A bit coarse-grained but probably the best
>+ * option.
>+ */
>+
>+ struct list_head refd_objects;
>+
>+ struct drm_open_hash refd_object_hash[_DRM_NO_REF_TYPES];
> struct file *filp;
>+ void *driver_priv;
> };
>
> /** Wait queue */
>@@ -415,8 +457,9 @@ struct drm_queue {
> * Lock data.
> */
> struct drm_lock_data {
>- struct drm_hw_lock *hw_lock; /**< Hardware lock */
>- struct drm_file *file_priv; /**< File descr of lock holder (0=kernel) */
>+ struct drm_hw_lock *hw_lock; /**< Hardware lock */
>+ /** Private of lock holder's file (NULL=kernel) */
>+ struct drm_file *file_priv;
> wait_queue_head_t lock_queue; /**< Queue of blocked processes */
> unsigned long lock_time; /**< Time of last lock in jiffies */
> spinlock_t spinlock;
>@@ -466,7 +509,9 @@ struct drm_agp_head {
> DRM_AGP_KERN agp_info; /**< AGP device information */
> struct list_head memory;
> unsigned long mode; /**< AGP mode */
>+#if LINUX_VERSION_CODE > KERNEL_VERSION(2,6,11)
> struct agp_bridge_data *bridge;
>+#endif
> int enabled; /**< whether the AGP bus as been enabled */
> int acquired; /**< whether the AGP device has been acquired */
> unsigned long base;
>@@ -491,6 +536,27 @@ struct drm_sigdata {
> struct drm_hw_lock *lock;
> };
>
>+
>+/*
>+ * Generic memory manager structs
>+ */
>+
>+struct drm_mm_node {
>+ struct list_head fl_entry;
>+ struct list_head ml_entry;
>+ int free;
>+ unsigned long start;
>+ unsigned long size;
>+ struct drm_mm *mm;
>+ void *private;
>+};
>+
>+struct drm_mm {
>+ struct list_head fl_entry;
>+ struct list_head ml_entry;
>+};
>+
>+
> /**
> * Mappings list
> */
>@@ -498,7 +564,8 @@ struct drm_map_list {
> struct list_head head; /**< list head */
> struct drm_hash_item hash;
> struct drm_map *map; /**< mapping */
>- unsigned int user_token;
>+ uint64_t user_token;
>+ struct drm_mm_node *file_offset_node;
> };
>
> typedef struct drm_map drm_local_map_t;
>@@ -536,30 +603,13 @@ struct drm_ati_pcigart_info {
> int table_size;
> };
>
>-/*
>- * Generic memory manager structs
>- */
>-struct drm_mm_node {
>- struct list_head fl_entry;
>- struct list_head ml_entry;
>- int free;
>- unsigned long start;
>- unsigned long size;
>- struct drm_mm *mm;
>- void *private;
>-};
>-
>-struct drm_mm {
>- struct list_head fl_entry;
>- struct list_head ml_entry;
>-};
>+#include "drm_objects.h"
>
> /**
> * DRM driver structure. This structure represent the common code for
> * a family of cards. There will one drm_device for each card present
> * in this family
> */
>-struct drm_device;
>
> struct drm_driver {
> int (*load) (struct drm_device *, unsigned long flags);
>@@ -569,6 +619,8 @@ struct drm_driver {
> void (*postclose) (struct drm_device *, struct drm_file *);
> void (*lastclose) (struct drm_device *);
> int (*unload) (struct drm_device *);
>+ int (*suspend) (struct drm_device *);
>+ int (*resume) (struct drm_device *);
> int (*dma_ioctl) (struct drm_device *dev, void *data, struct drm_file *file_priv);
> void (*dma_ready) (struct drm_device *);
> int (*dma_quiescent) (struct drm_device *);
>@@ -579,7 +631,7 @@ struct drm_driver {
> void (*kernel_context_switch_unlock) (struct drm_device *dev);
> int (*vblank_wait) (struct drm_device *dev, unsigned int *sequence);
> int (*vblank_wait2) (struct drm_device *dev, unsigned int *sequence);
>- int (*dri_library_name) (struct drm_device *dev, char *buf);
>+ int (*dri_library_name) (struct drm_device *dev, char * buf);
>
> /**
> * Called by \c drm_device_is_agp. Typically used to determine if a
>@@ -594,23 +646,25 @@
> */
> int (*device_is_agp) (struct drm_device *dev);
>
>- /* these have to be filled in */
>-
>- irqreturn_t(*irq_handler) (DRM_IRQ_ARGS);
>+/* these have to be filled in */
>+ irqreturn_t(*irq_handler) (DRM_IRQ_ARGS);
> void (*irq_preinstall) (struct drm_device *dev);
> void (*irq_postinstall) (struct drm_device *dev);
> void (*irq_uninstall) (struct drm_device *dev);
> void (*reclaim_buffers) (struct drm_device *dev,
>- struct drm_file * file_priv);
>+ struct drm_file *file_priv);
> void (*reclaim_buffers_locked) (struct drm_device *dev,
> struct drm_file *file_priv);
> void (*reclaim_buffers_idlelocked) (struct drm_device *dev,
> struct drm_file *file_priv);
>- unsigned long (*get_map_ofs) (struct drm_map * map);
>+ unsigned long (*get_map_ofs) (struct drm_map *map);
> unsigned long (*get_reg_ofs) (struct drm_device *dev);
> void (*set_version) (struct drm_device *dev,
> struct drm_set_version *sv);
>
>+ struct drm_fence_driver *fence_driver;
>+ struct drm_bo_driver *bo_driver;
>+
> int major;
> int minor;
> int patchlevel;
>@@ -618,6 +672,7 @@ struct drm_driver {
> char *desc;
> char *date;
>
>+/* variables */
> u32 driver_features;
> int dev_priv_size;
> struct drm_ioctl_desc *ioctls;
>@@ -639,11 +694,13 @@ struct drm_head {
> struct class_device *dev_class;
> };
>
>+
> /**
> * DRM device structure. This structure represent a complete card that
> * may contain multiple heads.
> */
> struct drm_device {
>+ struct device dev; /**< Linux device */
> char *unique; /**< Unique identifier: e.g., busid */
> int unique_len; /**< Length of unique field */
> char *devname; /**< For /proc/interrupts */
>@@ -676,7 +733,7 @@ struct drm_device {
> /** \name Authentication */
> /*@{ */
> struct list_head filelist;
>- struct drm_open_hash magiclist; /**< magic hash table */
>+ struct drm_open_hash magiclist;
> struct list_head magicfree;
> /*@} */
>
>@@ -684,7 +741,11 @@ struct drm_device {
> /*@{ */
> struct list_head maplist; /**< Linked list of regions */
> int map_count; /**< Number of mappable regions */
>- struct drm_open_hash map_hash; /**< User token hash table for maps */
>+ struct drm_open_hash map_hash; /**< User token hash table for maps */
>+ struct drm_mm offset_manager; /**< User token manager */
>+ struct drm_open_hash object_hash; /**< User token hash table for objects */
>+ struct address_space *dev_mapping; /**< For unmap_mapping_range() */
>+ struct page *ttm_dummy_page;
>
> /** \name Context handle management */
> /*@{ */
>@@ -695,13 +756,13 @@ struct drm_device {
> struct idr ctx_idr;
>
> struct list_head vmalist; /**< List of vmas (for debugging) */
>- struct drm_lock_data lock; /**< Information on hardware lock */
>+ struct drm_lock_data lock; /**< Information on hardware lock */
> /*@} */
>
> /** \name DMA queues (contexts) */
> /*@{ */
> int queue_count; /**< Number of active DMA queues */
>- int queue_reserved; /**< Number of reserved DMA queues */
>+ int queue_reserved; /**< Number of reserved DMA queues */
> int queue_slots; /**< Actual length of queuelist */
> struct drm_queue **queuelist; /**< Vector of pointers to DMA queues */
> struct drm_device_dma *dma; /**< Optional pointer for DMA support */
>@@ -722,6 +783,7 @@ struct drm_device {
> /*@} */
>
> struct work_struct work;
>+
> /** \name VBLANK IRQ support */
> /*@{ */
>
>@@ -743,7 +805,7 @@ struct drm_device {
> wait_queue_head_t buf_readers; /**< Processes waiting to read */
> wait_queue_head_t buf_writers; /**< Processes waiting to ctx switch */
>
>- struct drm_agp_head *agp; /**< AGP data */
>+ struct drm_agp_head *agp; /**< AGP data */
>
> struct pci_dev *pdev; /**< PCI device structure */
> int pci_vendor; /**< PCI vendor id */
>@@ -751,10 +813,9 @@ struct drm_device {
> #ifdef __alpha__
> struct pci_controller *hose;
> #endif
>- struct drm_sg_mem *sg; /**< Scatter gather memory */
>- unsigned long *ctx_bitmap; /**< context bitmap */
>+ struct drm_sg_mem *sg; /**< Scatter gather memory */
> void *dev_private; /**< device private data */
>- struct drm_sigdata sigdata; /**< For block_all_signals */
>+ struct drm_sigdata sigdata; /**< For block_all_signals */
> sigset_t sigmask;
>
> struct drm_driver *driver;
>@@ -762,6 +823,9 @@ struct drm_device {
> unsigned int agp_buffer_token;
> struct drm_head primary; /**< primary screen head */
>
>+ struct drm_fence_manager fm;
>+ struct drm_buffer_manager bm;
>+
> /** \name Drawable information */
> /*@{ */
> spinlock_t drw_lock;
>@@ -769,6 +833,16 @@ struct drm_device {
> /*@} */
> };
>
>+#if __OS_HAS_AGP
>+struct drm_agp_ttm_backend {
>+ struct drm_ttm_backend backend;
>+ DRM_AGP_MEM *mem;
>+ struct agp_bridge_data *bridge;
>+ int populated;
>+};
>+#endif
>+
>+
> static __inline__ int drm_core_check_feature(struct drm_device *dev,
> int feature)
> {
>@@ -811,34 +885,40 @@ static inline int drm_mtrr_del(int handl
> }
>
> #else
>-#define drm_core_has_MTRR(dev) (0)
>-
>-#define DRM_MTRR_WC 0
>-
> static inline int drm_mtrr_add(unsigned long offset, unsigned long size,
> unsigned int flags)
> {
>- return 0;
>+ return -ENODEV;
> }
>
> static inline int drm_mtrr_del(int handle, unsigned long offset,
> unsigned long size, unsigned int flags)
> {
>- return 0;
>+ return -ENODEV;
> }
>+
>+#define drm_core_has_MTRR(dev) (0)
>+#define DRM_MTRR_WC 0
> #endif
>
>+
> /******************************************************************/
> /** \name Internal function definitions */
> /*@{*/
>
> /* Driver support (drm_drv.h) */
>-extern int drm_init(struct drm_driver *driver);
>+extern int drm_fb_loaded;
>+extern int drm_init(struct drm_driver *driver,
>+ struct pci_device_id *pciidlist);
> extern void drm_exit(struct drm_driver *driver);
>+extern void drm_cleanup_pci(struct pci_dev *pdev);
> extern int drm_ioctl(struct inode *inode, struct file *filp,
> unsigned int cmd, unsigned long arg);
>+extern long drm_unlocked_ioctl(struct file *filp,
>+ unsigned int cmd, unsigned long arg);
> extern long drm_compat_ioctl(struct file *filp,
> unsigned int cmd, unsigned long arg);
>+
> extern int drm_lastclose(struct drm_device *dev);
>
> /* Device support (drm_fops.h) */
>@@ -846,23 +926,37 @@ extern int drm_open(struct inode *inode,
> extern int drm_stub_open(struct inode *inode, struct file *filp);
> extern int drm_fasync(int fd, struct file *filp, int on);
> extern int drm_release(struct inode *inode, struct file *filp);
>+unsigned int drm_poll(struct file *filp, struct poll_table_struct *wait);
>
> /* Mapping support (drm_vm.h) */
> extern int drm_mmap(struct file *filp, struct vm_area_struct *vma);
>-extern unsigned int drm_poll(struct file *filp, struct poll_table_struct *wait);
>+extern unsigned long drm_core_get_map_ofs(struct drm_map * map);
>+extern unsigned long drm_core_get_reg_ofs(struct drm_device *dev);
>+extern pgprot_t drm_io_prot(uint32_t map_type, struct vm_area_struct *vma);
>
> /* Memory management support (drm_memory.h) */
> #include "drm_memory.h"
> extern void drm_mem_init(void);
> extern int drm_mem_info(char *buf, char **start, off_t offset,
> int request, int *eof, void *data);
>+extern void *drm_calloc(size_t nmemb, size_t size, int area);
> extern void *drm_realloc(void *oldpt, size_t oldsize, size_t size, int area);
>-
>+extern unsigned long drm_alloc_pages(int order, int area);
>+extern void drm_free_pages(unsigned long address, int order, int area);
> extern DRM_AGP_MEM *drm_alloc_agp(struct drm_device *dev, int pages, u32 type);
> extern int drm_free_agp(DRM_AGP_MEM * handle, int pages);
> extern int drm_bind_agp(DRM_AGP_MEM * handle, unsigned int start);
> extern int drm_unbind_agp(DRM_AGP_MEM * handle);
>
>+extern void drm_free_memctl(size_t size);
>+extern int drm_alloc_memctl(size_t size);
>+extern void drm_query_memctl(uint64_t *cur_used,
>+ uint64_t *low_threshold,
>+ uint64_t *high_threshold);
>+extern void drm_init_memctl(size_t low_threshold,
>+ size_t high_threshold,
>+ size_t unit_size);
>+
> /* Misc. IOCTL support (drm_ioctl.h) */
> extern int drm_irq_by_busid(struct drm_device *dev, void *data,
> struct drm_file *file_priv);
>@@ -914,7 +1008,7 @@ extern int drm_rmdraw(struct drm_device
> extern int drm_update_drawable_info(struct drm_device *dev, void *data,
> struct drm_file *file_priv);
> extern struct drm_drawable_info *drm_get_drawable_info(struct drm_device *dev,
>- drm_drawable_t id);
>+ drm_drawable_t id);
> extern void drm_drawable_free_all(struct drm_device *dev);
>
> /* Authentication IOCTL support (drm_auth.h) */
>@@ -938,11 +1032,13 @@ extern void drm_idlelock_release(struct
> * DMA quiscent + idle. DMA quiescent usually requires the hardware lock.
> */
>
>-extern int drm_i_have_hw_lock(struct drm_device *dev, struct drm_file *file_priv);
>+extern int drm_i_have_hw_lock(struct drm_device *dev,
>+ struct drm_file *file_priv);
>
> /* Buffer management support (drm_bufs.h) */
> extern int drm_addbufs_agp(struct drm_device *dev, struct drm_buf_desc * request);
> extern int drm_addbufs_pci(struct drm_device *dev, struct drm_buf_desc * request);
>+extern int drm_addbufs_fb (struct drm_device *dev, struct drm_buf_desc * request);
> extern int drm_addmap(struct drm_device *dev, unsigned int offset,
> unsigned int size, enum drm_map_type type,
> enum drm_map_flags flags, drm_local_map_t ** map_ptr);
>@@ -967,8 +1063,10 @@ extern unsigned long drm_get_resource_st
> unsigned int resource);
> extern unsigned long drm_get_resource_len(struct drm_device *dev,
> unsigned int resource);
>-struct drm_map_list *drm_find_matching_map(struct drm_device *dev,
>- drm_local_map_t *map);
>+extern struct drm_map_list *drm_find_matching_map(struct drm_device *dev,
>+ drm_local_map_t *map);
>+
>+
> /* DMA support (drm_dma.h) */
> extern int drm_dma_setup(struct drm_device *dev);
> extern void drm_dma_takedown(struct drm_device *dev);
>@@ -980,7 +1078,7 @@ extern void drm_core_reclaim_buffers(str
> extern int drm_control(struct drm_device *dev, void *data,
> struct drm_file *file_priv);
> extern irqreturn_t drm_irq_handler(DRM_IRQ_ARGS);
>-extern int drm_irq_install(struct drm_device * dev);
>+extern int drm_irq_install(struct drm_device *dev);
> extern int drm_irq_uninstall(struct drm_device *dev);
> extern void drm_driver_irq_preinstall(struct drm_device *dev);
> extern void drm_driver_irq_postinstall(struct drm_device *dev);
>@@ -1018,17 +1116,22 @@ extern int drm_agp_unbind_ioctl(struct d
> extern int drm_agp_bind(struct drm_device *dev, struct drm_agp_binding *request);
> extern int drm_agp_bind_ioctl(struct drm_device *dev, void *data,
> struct drm_file *file_priv);
>+#if LINUX_VERSION_CODE <= KERNEL_VERSION(2,6,11)
>+extern DRM_AGP_MEM *drm_agp_allocate_memory(size_t pages, u32 type);
>+#else
> extern DRM_AGP_MEM *drm_agp_allocate_memory(struct agp_bridge_data *bridge, size_t pages, u32 type);
>+#endif
> extern int drm_agp_free_memory(DRM_AGP_MEM * handle);
> extern int drm_agp_bind_memory(DRM_AGP_MEM * handle, off_t start);
> extern int drm_agp_unbind_memory(DRM_AGP_MEM * handle);
>-
>+extern struct drm_ttm_backend *drm_agp_init_ttm(struct drm_device *dev);
>+extern void drm_agp_chipset_flush(struct drm_device *dev);
> /* Stub support (drm_stub.h) */
> extern int drm_get_dev(struct pci_dev *pdev, const struct pci_device_id *ent,
>- struct drm_driver *driver);
>+ struct drm_driver *driver);
> extern int drm_put_dev(struct drm_device *dev);
>-extern int drm_put_head(struct drm_head *head);
>-extern unsigned int drm_debug;
>+extern int drm_put_head(struct drm_head * head);
>+extern unsigned int drm_debug; /* 1 to enable debug output */
> extern unsigned int drm_cards_limit;
> extern struct drm_head **drm_heads;
> extern struct class *drm_class;
>@@ -1054,32 +1157,30 @@ extern int drm_sg_free(struct drm_device
> struct drm_file *file_priv);
>
> /* ATI PCIGART support (ati_pcigart.h) */
>-extern int drm_ati_pcigart_init(struct drm_device *dev,
>- struct drm_ati_pcigart_info * gart_info);
>-extern int drm_ati_pcigart_cleanup(struct drm_device *dev,
>- struct drm_ati_pcigart_info * gart_info);
>+extern int drm_ati_pcigart_init(struct drm_device *dev, struct drm_ati_pcigart_info *gart_info);
>+extern int drm_ati_pcigart_cleanup(struct drm_device *dev, struct drm_ati_pcigart_info *gart_info);
>
> extern drm_dma_handle_t *drm_pci_alloc(struct drm_device *dev, size_t size,
>- size_t align, dma_addr_t maxaddr);
>-extern void __drm_pci_free(struct drm_device *dev, drm_dma_handle_t * dmah);
>-extern void drm_pci_free(struct drm_device *dev, drm_dma_handle_t * dmah);
>+ size_t align, dma_addr_t maxaddr);
>+extern void __drm_pci_free(struct drm_device *dev, drm_dma_handle_t *dmah);
>+extern void drm_pci_free(struct drm_device *dev, drm_dma_handle_t *dmah);
>
> /* sysfs support (drm_sysfs.c) */
>+struct drm_sysfs_class;
> extern struct class *drm_sysfs_create(struct module *owner, char *name);
>-extern void drm_sysfs_destroy(struct class *cs);
>-extern struct class_device *drm_sysfs_device_add(struct class *cs,
>- struct drm_head *head);
>-extern void drm_sysfs_device_remove(struct class_device *class_dev);
>+extern void drm_sysfs_destroy(void);
>+extern int drm_sysfs_device_add(struct drm_device *dev, struct drm_head *head);
>+extern void drm_sysfs_device_remove(struct drm_device *dev);
>
> /*
> * Basic memory manager support (drm_mm.c)
> */
>-extern struct drm_mm_node *drm_mm_get_block(struct drm_mm_node * parent,
>- unsigned long size,
>- unsigned alignment);
>-void drm_mm_put_block(struct drm_mm_node * cur);
>+
>+extern struct drm_mm_node * drm_mm_get_block(struct drm_mm_node * parent, unsigned long size,
>+ unsigned alignment);
>+extern void drm_mm_put_block(struct drm_mm_node *cur);
> extern struct drm_mm_node *drm_mm_search_free(const struct drm_mm *mm, unsigned long size,
>- unsigned alignment, int best_match);
>+ unsigned alignment, int best_match);
> extern int drm_mm_init(struct drm_mm *mm, unsigned long start, unsigned long size);
> extern void drm_mm_takedown(struct drm_mm *mm);
> extern int drm_mm_clean(struct drm_mm *mm);
>@@ -1087,6 +1188,11 @@ extern unsigned long drm_mm_tail_space(s
> extern int drm_mm_remove_space_from_tail(struct drm_mm *mm, unsigned long size);
> extern int drm_mm_add_space_to_tail(struct drm_mm *mm, unsigned long size);
>
>+static inline struct drm_mm *drm_get_mm(struct drm_mm_node *block)
>+{
>+ return block->mm;
>+}
>+
> extern void drm_core_ioremap(struct drm_map *map, struct drm_device *dev);
> extern void drm_core_ioremapfree(struct drm_map *map, struct drm_device *dev);
>
>@@ -1095,15 +1201,15 @@ static __inline__ struct drm_map *drm_co
> {
> struct drm_map_list *_entry;
> list_for_each_entry(_entry, &dev->maplist, head)
>- if (_entry->user_token == token)
>- return _entry->map;
>+ if (_entry->user_token == token)
>+ return _entry->map;
> return NULL;
> }
>
> static __inline__ int drm_device_is_agp(struct drm_device *dev)
> {
>- if (dev->driver->device_is_agp != NULL) {
>- int err = (*dev->driver->device_is_agp) (dev);
>+ if ( dev->driver->device_is_agp != NULL ) {
>+ int err = (*dev->driver->device_is_agp)(dev);
>
> if (err != 2) {
> return err;
>@@ -1134,22 +1240,45 @@ static __inline__ void drm_free(void *pt
> {
> kfree(pt);
> }
>-
>-/** Wrapper around kcalloc() */
>-static __inline__ void *drm_calloc(size_t nmemb, size_t size, int area)
>-{
>- return kcalloc(nmemb, size, GFP_KERNEL);
>-}
> #else
> extern void *drm_alloc(size_t size, int area);
> extern void drm_free(void *pt, size_t size, int area);
>-extern void *drm_calloc(size_t nmemb, size_t size, int area);
> #endif
>
>-/*@}*/
>+/*
>+ * Accounting variants of standard calls.
>+ */
>
>-extern unsigned long drm_core_get_map_ofs(struct drm_map * map);
>-extern unsigned long drm_core_get_reg_ofs(struct drm_device *dev);
>+static inline void *drm_ctl_alloc(size_t size, int area)
>+{
>+ void *ret;
>+ if (drm_alloc_memctl(size))
>+ return NULL;
>+ ret = drm_alloc(size, area);
>+ if (!ret)
>+ drm_free_memctl(size);
>+ return ret;
>+}
>+
>+static inline void *drm_ctl_calloc(size_t nmemb, size_t size, int area)
>+{
>+ void *ret;
>+
>+ if (drm_alloc_memctl(nmemb*size))
>+ return NULL;
>+ ret = drm_calloc(nmemb, size, area);
>+ if (!ret)
>+ drm_free_memctl(nmemb*size);
>+ return ret;
>+}
>+
>+static inline void drm_ctl_free(void *pt, size_t size, int area)
>+{
>+ drm_free(pt, size, area);
>+ drm_free_memctl(size);
>+}
>+
>+/*@}*/
>
> #endif /* __KERNEL__ */
> #endif
>diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_proc.c linux-2.6.23.i686/drivers/char/drm/drm_proc.c
>--- linux-2.6.23.i686.orig/drivers/char/drm/drm_proc.c 2007-10-09 22:31:38.000000000 +0200
>+++ linux-2.6.23.i686/drivers/char/drm/drm_proc.c 2008-01-06 09:24:57.000000000 +0100
>@@ -49,6 +49,8 @@ static int drm_queues_info(char *buf, ch
> int request, int *eof, void *data);
> static int drm_bufs_info(char *buf, char **start, off_t offset,
> int request, int *eof, void *data);
>+static int drm_objects_info(char *buf, char **start, off_t offset,
>+ int request, int *eof, void *data);
> #if DRM_DEBUG_CODE
> static int drm_vma_info(char *buf, char **start, off_t offset,
> int request, int *eof, void *data);
>@@ -67,6 +69,7 @@ static struct drm_proc_list {
> {"clients", drm_clients_info},
> {"queues", drm_queues_info},
> {"bufs", drm_bufs_info},
>+ {"objects", drm_objects_info},
> #if DRM_DEBUG_CODE
> {"vma", drm_vma_info},
> #endif
>@@ -116,7 +119,6 @@ int drm_proc_init(struct drm_device * de
> ent->read_proc = drm_proc_list[i].f;
> ent->data = dev;
> }
>-
> return 0;
> }
>
>@@ -211,8 +213,8 @@ static int drm__vm_info(char *buf, char
> struct drm_map_list *r_list;
>
> /* Hardcoded from _DRM_FRAME_BUFFER,
>- _DRM_REGISTERS, _DRM_SHM, _DRM_AGP, and
>- _DRM_SCATTER_GATHER and _DRM_CONSISTENT */
>+ _DRM_REGISTERS, _DRM_SHM, _DRM_AGP,
>+ _DRM_SCATTER_GATHER, and _DRM_CONSISTENT. */
> const char *types[] = { "FB", "REG", "SHM", "AGP", "SG", "PCI" };
> const char *type;
> int i;
>@@ -236,11 +238,12 @@ static int drm__vm_info(char *buf, char
> type = "??";
> else
> type = types[map->type];
>- DRM_PROC_PRINT("%4d 0x%08lx 0x%08lx %4.4s 0x%02x 0x%08x ",
>+ DRM_PROC_PRINT("%4d 0x%08lx 0x%08lx %4.4s 0x%02x 0x%08lx ",
> i,
> map->offset,
> map->size, type, map->flags,
>- r_list->user_token);
>+ (unsigned long) r_list->user_token);
>+
> if (map->mtrr < 0) {
> DRM_PROC_PRINT("none\n");
> } else {
>@@ -416,6 +419,93 @@ static int drm_bufs_info(char *buf, char
> }
>
> /**
>+ * Called when "/proc/dri/.../objects" is read.
>+ *
>+ * \param buf output buffer.
>+ * \param start start of output data.
>+ * \param offset requested start offset.
>+ * \param request requested number of bytes.
>+ * \param eof whether there is no more data to return. >+ * \param data private data. >+ * \return number of written bytes. >+ */ >+static int drm__objects_info(char *buf, char **start, off_t offset, int request, >+ int *eof, void *data) >+{ >+ struct drm_device *dev = (struct drm_device *) data; >+ int len = 0; >+ struct drm_buffer_manager *bm = &dev->bm; >+ struct drm_fence_manager *fm = &dev->fm; >+ uint64_t used_mem; >+ uint64_t low_mem; >+ uint64_t high_mem; >+ >+ >+ if (offset > DRM_PROC_LIMIT) { >+ *eof = 1; >+ return 0; >+ } >+ >+ *start = &buf[offset]; >+ *eof = 0; >+ >+ DRM_PROC_PRINT("Object accounting:\n\n"); >+ if (fm->initialized) { >+ DRM_PROC_PRINT("Number of active fence objects: %d.\n", >+ atomic_read(&fm->count)); >+ } else { >+ DRM_PROC_PRINT("Fence objects are not supported by this driver\n"); >+ } >+ >+ if (bm->initialized) { >+ DRM_PROC_PRINT("Number of active buffer objects: %d.\n\n", >+ atomic_read(&bm->count)); >+ } >+ DRM_PROC_PRINT("Memory accounting:\n\n"); >+ if (bm->initialized) { >+ DRM_PROC_PRINT("Number of locked GATT pages: %lu.\n", bm->cur_pages); >+ } else { >+ DRM_PROC_PRINT("Buffer objects are not supported by this driver.\n"); >+ } >+ >+ drm_query_memctl(&used_mem, &low_mem, &high_mem); >+ >+ if (used_mem > 16*PAGE_SIZE) { >+ DRM_PROC_PRINT("Used object memory is %lu pages.\n", >+ (unsigned long) (used_mem >> PAGE_SHIFT)); >+ } else { >+ DRM_PROC_PRINT("Used object memory is %lu bytes.\n", >+ (unsigned long) used_mem); >+ } >+ DRM_PROC_PRINT("Soft object memory usage threshold is %lu pages.\n", >+ (unsigned long) (low_mem >> PAGE_SHIFT)); >+ DRM_PROC_PRINT("Hard object memory usage threshold is %lu pages.\n", >+ (unsigned long) (high_mem >> PAGE_SHIFT)); >+ >+ DRM_PROC_PRINT("\n"); >+ >+ if (len > request + offset) >+ return request; >+ *eof = 1; >+ return len - offset; >+} >+ >+/** >+ * Simply calls _objects_info() while holding the drm_device::struct_mutex lock. 
>+ */ >+static int drm_objects_info(char *buf, char **start, off_t offset, int request, >+ int *eof, void *data) >+{ >+ struct drm_device *dev = (struct drm_device *) data; >+ int ret; >+ >+ mutex_lock(&dev->struct_mutex); >+ ret = drm__objects_info(buf, start, offset, request, eof, data); >+ mutex_unlock(&dev->struct_mutex); >+ return ret; >+} >+ >+/** > * Called when "/proc/dri/.../clients" is read. > * > * \param buf output buffer. >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_regman.c linux-2.6.23.i686/drivers/char/drm/drm_regman.c >--- linux-2.6.23.i686.orig/drivers/char/drm/drm_regman.c 1970-01-01 01:00:00.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/drm_regman.c 2008-01-06 09:24:57.000000000 +0100 >@@ -0,0 +1,200 @@ >+/************************************************************************** >+ * Copyright (c) 2007 Tungsten Graphics, Inc., Cedar Park, TX., USA >+ * All Rights Reserved. >+ * >+ * Permission is hereby granted, free of charge, to any person obtaining a >+ * copy of this software and associated documentation files (the >+ * "Software"), to deal in the Software without restriction, including >+ * without limitation the rights to use, copy, modify, merge, publish, >+ * distribute, sub license, and/or sell copies of the Software, and to >+ * permit persons to whom the Software is furnished to do so, subject to >+ * the following conditions: >+ * >+ * The above copyright notice and this permission notice (including the >+ * next paragraph) shall be included in all copies or substantial portions >+ * of the Software. >+ * >+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR >+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, >+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL >+ * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, >+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR >+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE >+ * USE OR OTHER DEALINGS IN THE SOFTWARE. >+ * >+ **************************************************************************/ >+/* >+ * An allocate-fence manager implementation intended for sets of base-registers >+ * or tiling-registers. >+ */ >+ >+#include "drmP.h" >+ >+/* >+ * Allocate a compatible register and put it on the unfenced list. >+ */ >+ >+int drm_regs_alloc(struct drm_reg_manager *manager, >+ const void *data, >+ uint32_t fence_class, >+ uint32_t fence_type, >+ int interruptible, int no_wait, struct drm_reg **reg) >+{ >+ struct drm_reg *entry, *next_entry; >+ int ret; >+ >+ *reg = NULL; >+ >+ /* >+ * Search the unfenced list. >+ */ >+ >+ list_for_each_entry(entry, &manager->unfenced, head) { >+ if (manager->reg_reusable(entry, data)) { >+ entry->new_fence_type |= fence_type; >+ goto out; >+ } >+ } >+ >+ /* >+ * Search the lru list. >+ */ >+ >+ list_for_each_entry_safe(entry, next_entry, &manager->lru, head) { >+ struct drm_fence_object *fence = entry->fence; >+ if (fence->fence_class == fence_class && >+ (entry->fence_type & fence_type) == entry->fence_type && >+ manager->reg_reusable(entry, data)) { >+ list_del(&entry->head); >+ entry->new_fence_type = fence_type; >+ list_add_tail(&entry->head, &manager->unfenced); >+ goto out; >+ } >+ } >+ >+ /* >+ * Search the free list. >+ */ >+ >+ list_for_each_entry(entry, &manager->free, head) { >+ list_del(&entry->head); >+ entry->new_fence_type = fence_type; >+ list_add_tail(&entry->head, &manager->unfenced); >+ goto out; >+ } >+ >+ if (no_wait) >+ return -EBUSY; >+ >+ /* >+ * Go back to the lru list and try to expire fences. 
>+ */ >+ >+ list_for_each_entry_safe(entry, next_entry, &manager->lru, head) { >+ BUG_ON(!entry->fence); >+ ret = drm_fence_object_wait(entry->fence, 0, !interruptible, >+ entry->fence_type); >+ if (ret) >+ return ret; >+ >+ drm_fence_usage_deref_unlocked(&entry->fence); >+ list_del(&entry->head); >+ entry->new_fence_type = fence_type; >+ list_add_tail(&entry->head, &manager->unfenced); >+ goto out; >+ } >+ >+ /* >+ * Oops. All registers are used up :(. >+ */ >+ >+ return -EBUSY; >+out: >+ *reg = entry; >+ return 0; >+} >+EXPORT_SYMBOL(drm_regs_alloc); >+ >+void drm_regs_fence(struct drm_reg_manager *manager, >+ struct drm_fence_object *fence) >+{ >+ struct drm_reg *entry; >+ struct drm_reg *next_entry; >+ >+ if (!fence) { >+ >+ /* >+ * Old fence (if any) is still valid. >+ * Put back on free and lru lists. >+ */ >+ >+ list_for_each_entry_safe_reverse(entry, next_entry, >+ &manager->unfenced, head) { >+ list_del(&entry->head); >+ list_add(&entry->head, (entry->fence) ? >+ &manager->lru : &manager->free); >+ } >+ } else { >+ >+ /* >+ * Fence with a new fence and put on lru list. 
>+ */ >+ >+ list_for_each_entry_safe(entry, next_entry, &manager->unfenced, >+ head) { >+ list_del(&entry->head); >+ if (entry->fence) >+ drm_fence_usage_deref_unlocked(&entry->fence); >+ drm_fence_reference_unlocked(&entry->fence, fence); >+ >+ entry->fence_type = entry->new_fence_type; >+ BUG_ON((entry->fence_type & fence->type) != >+ entry->fence_type); >+ >+ list_add_tail(&entry->head, &manager->lru); >+ } >+ } >+} >+EXPORT_SYMBOL(drm_regs_fence); >+ >+void drm_regs_free(struct drm_reg_manager *manager) >+{ >+ struct drm_reg *entry; >+ struct drm_reg *next_entry; >+ >+ drm_regs_fence(manager, NULL); >+ >+ list_for_each_entry_safe(entry, next_entry, &manager->free, head) { >+ list_del(&entry->head); >+ manager->reg_destroy(entry); >+ } >+ >+ list_for_each_entry_safe(entry, next_entry, &manager->lru, head) { >+ >+ (void)drm_fence_object_wait(entry->fence, 1, 1, >+ entry->fence_type); >+ list_del(&entry->head); >+ drm_fence_usage_deref_unlocked(&entry->fence); >+ manager->reg_destroy(entry); >+ } >+} >+EXPORT_SYMBOL(drm_regs_free); >+ >+void drm_regs_add(struct drm_reg_manager *manager, struct drm_reg *reg) >+{ >+ reg->fence = NULL; >+ list_add_tail(®->head, &manager->free); >+} >+EXPORT_SYMBOL(drm_regs_add); >+ >+void drm_regs_init(struct drm_reg_manager *manager, >+ int (*reg_reusable) (const struct drm_reg *, const void *), >+ void (*reg_destroy) (struct drm_reg *)) >+{ >+ INIT_LIST_HEAD(&manager->free); >+ INIT_LIST_HEAD(&manager->lru); >+ INIT_LIST_HEAD(&manager->unfenced); >+ manager->reg_reusable = reg_reusable; >+ manager->reg_destroy = reg_destroy; >+} >+EXPORT_SYMBOL(drm_regs_init); >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_sarea.h linux-2.6.23.i686/drivers/char/drm/drm_sarea.h >--- linux-2.6.23.i686.orig/drivers/char/drm/drm_sarea.h 2007-10-09 22:31:38.000000000 +0200 >+++ linux-2.6.23.i686/drivers/char/drm/drm_sarea.h 2008-01-06 09:24:57.000000000 +0100 >@@ -2,7 +2,7 @@ > * \file drm_sarea.h > * \brief SAREA definitions > * >- * \author 
Michel Dänzer <michel@daenzer.net> >+ * \author Michel Dänzer <michel@daenzer.net> > */ > > /* >@@ -41,11 +41,11 @@ > #define SAREA_MAX 0x10000 /* 64kB */ > #else > /* Intel 830M driver needs at least 8k SAREA */ >-#define SAREA_MAX 0x2000 >+#define SAREA_MAX 0x2000UL > #endif > > /** Maximum number of drawables in the SAREA */ >-#define SAREA_MAX_DRAWABLES 256 >+#define SAREA_MAX_DRAWABLES 256 > > #define SAREA_DRAWABLE_CLAIMED_ENTRY 0x80000000 > >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_scatter.c linux-2.6.23.i686/drivers/char/drm/drm_scatter.c >--- linux-2.6.23.i686.orig/drivers/char/drm/drm_scatter.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/drm_scatter.c 2008-01-06 09:24:57.000000000 +0100 >@@ -36,7 +36,7 @@ > > #define DEBUG_SCATTER 0 > >-void drm_sg_cleanup(struct drm_sg_mem * entry) >+void drm_sg_cleanup(struct drm_sg_mem *entry) > { > struct page *page; > int i; >@@ -55,6 +55,7 @@ void drm_sg_cleanup(struct drm_sg_mem * > entry->pages * sizeof(*entry->pagelist), DRM_MEM_PAGES); > drm_free(entry, sizeof(*entry), DRM_MEM_SGLISTS); > } >+EXPORT_SYMBOL(drm_sg_cleanup); > > #ifdef _LP64 > # define ScatterHandle(x) (unsigned int)((x >> 32) + (x & ((1L << 32) - 1))) >@@ -182,10 +183,10 @@ int drm_sg_alloc(struct drm_device *dev, > failed: > drm_sg_cleanup(entry); > return -ENOMEM; >+ > } > EXPORT_SYMBOL(drm_sg_alloc); > >- > int drm_sg_alloc_ioctl(struct drm_device *dev, void *data, > struct drm_file *file_priv) > { >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_stub.c linux-2.6.23.i686/drivers/char/drm/drm_stub.c >--- linux-2.6.23.i686.orig/drivers/char/drm/drm_stub.c 2007-10-09 22:31:38.000000000 +0200 >+++ linux-2.6.23.i686/drivers/char/drm/drm_stub.c 2008-01-06 09:24:57.000000000 +0100 >@@ -1,5 +1,5 @@ > /** >- * \file drm_stub.h >+ * \file drm_stub.c > * Stub support > * > * \author Rickard E.
(Rik) Faith <faith@valinux.com> >@@ -33,11 +33,12 @@ > > #include <linux/module.h> > #include <linux/moduleparam.h> >+ > #include "drmP.h" > #include "drm_core.h" > > unsigned int drm_cards_limit = 16; /* Enough for one machine */ >-unsigned int drm_debug = 0; /* 1 to enable debug output */ >+unsigned int drm_debug = 0; /* 1 to enable debug output */ > EXPORT_SYMBOL(drm_debug); > > MODULE_AUTHOR(CORE_AUTHOR); >@@ -71,6 +72,7 @@ static int drm_fill_in_dev(struct drm_de > init_timer(&dev->timer); > mutex_init(&dev->struct_mutex); > mutex_init(&dev->ctxlist_mutex); >+ mutex_init(&dev->bm.evict_mutex); > > idr_init(&dev->drw_idr); > >@@ -82,12 +84,24 @@ static int drm_fill_in_dev(struct drm_de > dev->hose = pdev->sysdata; > #endif > dev->irq = pdev->irq; >+ dev->irq_enabled = 0; >+ >+ if (drm_ht_create(&dev->map_hash, DRM_MAP_HASH_ORDER)) { >+ return -ENOMEM; >+ } >+ if (drm_mm_init(&dev->offset_manager, DRM_FILE_PAGE_OFFSET_START, >+ DRM_FILE_PAGE_OFFSET_SIZE)) { >+ drm_ht_remove(&dev->map_hash); >+ return -ENOMEM; >+ } > >- if (drm_ht_create(&dev->map_hash, 12)) { >+ if (drm_ht_create(&dev->object_hash, DRM_OBJECT_HASH_ORDER)) { >+ drm_ht_remove(&dev->map_hash); >+ drm_mm_takedown(&dev->offset_manager); > return -ENOMEM; > } > >- /* the DRM has 6 basic counters */ >+ /* the DRM has 6 counters */ > dev->counters = 6; > dev->types[0] = _DRM_STAT_LOCK; > dev->types[1] = _DRM_STAT_OPENS; >@@ -98,10 +112,6 @@ static int drm_fill_in_dev(struct drm_de > > dev->driver = driver; > >- if (dev->driver->load) >- if ((retcode = dev->driver->load(dev, ent->driver_data))) >- goto error_out_unreg; >- > if (drm_core_has_AGP(dev)) { > if (drm_device_is_agp(dev)) > dev->agp = drm_agp_init(dev); >@@ -111,6 +121,7 @@ static int drm_fill_in_dev(struct drm_de > retcode = -EINVAL; > goto error_out_unreg; > } >+ > if (drm_core_has_MTRR(dev)) { > if (dev->agp) > dev->agp->agp_mtrr = >@@ -120,20 +131,25 @@ static int drm_fill_in_dev(struct drm_de > } > } > >+ if (dev->driver->load) >+ if 
((retcode = dev->driver->load(dev, ent->driver_data))) >+ goto error_out_unreg; >+ >+ > retcode = drm_ctxbitmap_init(dev); > if (retcode) { > DRM_ERROR("Cannot allocate memory for context bitmap.\n"); > goto error_out_unreg; > } > >+ drm_fence_manager_init(dev); > return 0; > >- error_out_unreg: >+error_out_unreg: > drm_lastclose(dev); > return retcode; > } > >- > /** > * Get a secondary minor number. > * >@@ -157,9 +173,10 @@ static int drm_get_head(struct drm_devic > if (!*heads) { > > *head = (struct drm_head) { >- .dev = dev,.device = >- MKDEV(DRM_MAJOR, minor),.minor = minor,}; >- >+ .dev = dev, >+ .device = MKDEV(DRM_MAJOR, minor), >+ .minor = minor, >+ }; > if ((ret = > drm_proc_init(dev, minor, drm_proc_root, > &head->dev_root))) { >@@ -168,11 +185,10 @@ static int drm_get_head(struct drm_devic > goto err_g1; > } > >- head->dev_class = drm_sysfs_device_add(drm_class, head); >- if (IS_ERR(head->dev_class)) { >+ ret = drm_sysfs_device_add(dev, head); >+ if (ret) { > printk(KERN_ERR > "DRM: Error sysfs_device_add.\n"); >- ret = PTR_ERR(head->dev_class); > goto err_g2; > } > *heads = head; >@@ -183,11 +199,11 @@ static int drm_get_head(struct drm_devic > } > DRM_ERROR("out of minors\n"); > return -ENOMEM; >- err_g2: >+err_g2: > drm_proc_cleanup(minor, drm_proc_root, head->dev_root); >- err_g1: >+err_g1: > *head = (struct drm_head) { >- .dev = NULL}; >+ .dev = NULL}; > return ret; > } > >@@ -214,29 +230,47 @@ int drm_get_dev(struct pci_dev *pdev, co > if (!dev) > return -ENOMEM; > >+ if (!drm_fb_loaded) { >+ pci_set_drvdata(pdev, dev); >+ ret = pci_request_regions(pdev, driver->pci_driver.name); >+ if (ret) >+ goto err_g1; >+ } >+ > ret = pci_enable_device(pdev); > if (ret) >- goto err_g1; >+ goto err_g2; >+ pci_set_master(pdev); > > if ((ret = drm_fill_in_dev(dev, pdev, ent, driver))) { >- printk(KERN_ERR "DRM: Fill_in_dev failed.\n"); >- goto err_g2; >+ printk(KERN_ERR "DRM: fill_in_dev failed\n"); >+ goto err_g3; > } > if ((ret = drm_get_head(dev, 
&dev->primary))) >- goto err_g2; >- >+ goto err_g3; >+ > DRM_INFO("Initialized %s %d.%d.%d %s on minor %d\n", > driver->name, driver->major, driver->minor, driver->patchlevel, > driver->date, dev->primary.minor); > > return 0; > >-err_g2: >- pci_disable_device(pdev); >-err_g1: >+ err_g3: >+ if (!drm_fb_loaded) >+ pci_disable_device(pdev); >+ err_g2: >+ if (!drm_fb_loaded) >+ pci_release_regions(pdev); >+ err_g1: >+ if (!drm_fb_loaded) >+ pci_set_drvdata(pdev, NULL); >+ > drm_free(dev, sizeof(*dev), DRM_MEM_STUB); >+ printk(KERN_ERR "DRM: drm_get_dev failed.\n"); > return ret; > } >+EXPORT_SYMBOL(drm_get_dev); >+ > > /** > * Put a device minor number. >@@ -283,11 +317,10 @@ int drm_put_head(struct drm_head * head) > DRM_DEBUG("release secondary minor %d\n", minor); > > drm_proc_cleanup(minor, drm_proc_root, head->dev_root); >- drm_sysfs_device_remove(head->dev_class); >+ drm_sysfs_device_remove(head->dev); > > *head = (struct drm_head) {.dev = NULL}; > > drm_heads[minor] = NULL; >- > return 0; > } >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_sysfs.c linux-2.6.23.i686/drivers/char/drm/drm_sysfs.c >--- linux-2.6.23.i686.orig/drivers/char/drm/drm_sysfs.c 2007-10-09 22:31:38.000000000 +0200 >+++ linux-2.6.23.i686/drivers/char/drm/drm_sysfs.c 2008-01-06 09:24:57.000000000 +0100 >@@ -19,6 +19,45 @@ > #include "drm_core.h" > #include "drmP.h" > >+#define to_drm_device(d) container_of(d, struct drm_device, dev) >+ >+/** >+ * drm_sysfs_suspend - DRM class suspend hook >+ * @dev: Linux device to suspend >+ * @state: power state to enter >+ * >+ * Just figures out what the actual struct drm_device associated with >+ * @dev is and calls its suspend hook, if present. 
>+ */ >+static int drm_sysfs_suspend(struct device *dev, pm_message_t state) >+{ >+ struct drm_device *drm_dev = to_drm_device(dev); >+ >+ printk(KERN_ERR "%s\n", __FUNCTION__); >+ >+ if (drm_dev->driver->suspend) >+ return drm_dev->driver->suspend(drm_dev); >+ >+ return 0; >+} >+ >+/** >+ * drm_sysfs_resume - DRM class resume hook >+ * @dev: Linux device to resume >+ * >+ * Just figures out what the actual struct drm_device associated with >+ * @dev is and calls its resume hook, if present. >+ */ >+static int drm_sysfs_resume(struct device *dev) >+{ >+ struct drm_device *drm_dev = to_drm_device(dev); >+ >+ if (drm_dev->driver->resume) >+ return drm_dev->driver->resume(drm_dev); >+ >+ return 0; >+} >+ > /* Display the version of drm_core. This doesn't work right in current design */ > static ssize_t version_show(struct class *dev, char *buf) > { >@@ -33,7 +72,7 @@ static CLASS_ATTR(version, S_IRUGO, vers > * @owner: pointer to the module that is to "own" this struct drm_sysfs_class > * @name: pointer to a string for the name of this class. > * >- * This is used to create a struct drm_sysfs_class pointer that can then be used >+ * This is used to create DRM class pointer that can then be used > * in calls to drm_sysfs_device_add(). > * > * Note, the pointer created here is to be destroyed when finished by making a >@@ -50,6 +89,11 @@ struct class *drm_sysfs_create(struct mo > goto err_out; > } > >+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,22)) >+ class->suspend = drm_sysfs_suspend; >+ class->resume = drm_sysfs_resume; >+#endif >+ > err = class_create_file(class, &class_attr_version); > if (err) > goto err_out_class; >@@ -63,94 +107,100 @@ err_out: > } > > /** >- * drm_sysfs_destroy - destroys a struct drm_sysfs_class structure >- * @cs: pointer to the struct drm_sysfs_class that is to be destroyed >+ * drm_sysfs_destroy - destroys DRM class > * >- * Note, the pointer to be destroyed must have been created with a call to >- * drm_sysfs_create(). 
>+ * Destroy the DRM device class. > */ >-void drm_sysfs_destroy(struct class *class) >+void drm_sysfs_destroy(void) > { >- if ((class == NULL) || (IS_ERR(class))) >+ if ((drm_class == NULL) || (IS_ERR(drm_class))) > return; >- >- class_remove_file(class, &class_attr_version); >- class_destroy(class); >+ class_remove_file(drm_class, &class_attr_version); >+ class_destroy(drm_class); > } > >-static ssize_t show_dri(struct class_device *class_device, char *buf) >+static ssize_t show_dri(struct device *device, struct device_attribute *attr, >+ char *buf) > { >- struct drm_device * dev = ((struct drm_head *)class_get_devdata(class_device))->dev; >+ struct drm_device *dev = to_drm_device(device); > if (dev->driver->dri_library_name) > return dev->driver->dri_library_name(dev, buf); > return snprintf(buf, PAGE_SIZE, "%s\n", dev->driver->pci_driver.name); > } > >-static struct class_device_attribute class_device_attrs[] = { >+static struct device_attribute device_attrs[] = { > __ATTR(dri_library_name, S_IRUGO, show_dri, NULL), > }; > > /** >+ * drm_sysfs_device_release - do nothing >+ * @dev: Linux device >+ * >+ * Normally, this would free the DRM device associated with @dev, along >+ * with cleaning up any other stuff. But we do that in the DRM core, so >+ * this function can just return and hope that the core does its job. >+ */ >+static void drm_sysfs_device_release(struct device *dev) >+{ >+ return; >+} >+ >+/** > * drm_sysfs_device_add - adds a class device to sysfs for a character driver >- * @cs: pointer to the struct class that this device should be registered to. >- * @dev: the dev_t for the device to be added. >- * @device: a pointer to a struct device that is assiociated with this class device. >- * @fmt: string for the class device's name >- * >- * A struct class_device will be created in sysfs, registered to the specified >- * class. A "dev" file will be created, showing the dev_t for the device. 
The >- * pointer to the struct class_device will be returned from the call. Any further >- * sysfs files that might be required can be created using this pointer. >- * Note: the struct class passed to this function must have previously been >- * created with a call to drm_sysfs_create(). >- */ >-struct class_device *drm_sysfs_device_add(struct class *cs, struct drm_head *head) >-{ >- struct class_device *class_dev; >- int i, j, err; >- >- class_dev = class_device_create(cs, NULL, >- MKDEV(DRM_MAJOR, head->minor), >- &(head->dev->pdev)->dev, >- "card%d", head->minor); >- if (IS_ERR(class_dev)) { >- err = PTR_ERR(class_dev); >+ * @dev: DRM device to be added >+ * @head: DRM head in question >+ * >+ * Add a DRM device to the DRM's device model class. We use @dev's PCI device >+ * as the parent for the Linux device, and make sure it has a file containing >+ * the driver we're using (for userspace compatibility). >+ */ >+int drm_sysfs_device_add(struct drm_device *dev, struct drm_head *head) >+{ >+ int err; >+ int i, j; >+ >+ dev->dev.parent = &dev->pdev->dev; >+ dev->dev.class = drm_class; >+ dev->dev.release = drm_sysfs_device_release; >+ dev->dev.devt = head->device; >+ snprintf(dev->dev.bus_id, BUS_ID_SIZE, "card%d", head->minor); >+ >+ err = device_register(&dev->dev); >+ if (err) { >+ DRM_ERROR("device add failed: %d\n", err); > goto err_out; > } > >- class_set_devdata(class_dev, head); >- >- for (i = 0; i < ARRAY_SIZE(class_device_attrs); i++) { >- err = class_device_create_file(class_dev, >- &class_device_attrs[i]); >+ for (i = 0; i < ARRAY_SIZE(device_attrs); i++) { >+ err = device_create_file(&dev->dev, &device_attrs[i]); > if (err) > goto err_out_files; > } > >- return class_dev; >+ return 0; > > err_out_files: > if (i > 0) > for (j = 0; j < i; j++) >- class_device_remove_file(class_dev, >- &class_device_attrs[i]); >- class_device_unregister(class_dev); >+ device_remove_file(&dev->dev, &device_attrs[i]); >+ device_unregister(&dev->dev); > err_out: >- return 
ERR_PTR(err); >+ >+ return err; > } > > /** >- * drm_sysfs_device_remove - removes a class device that was created with drm_sysfs_device_add() >- * @dev: the dev_t of the device that was previously registered. >+ * drm_sysfs_device_remove - remove DRM device >+ * @dev: DRM device to remove > * > * This call unregisters and cleans up a class device that was created with a > * call to drm_sysfs_device_add() > */ >-void drm_sysfs_device_remove(struct class_device *class_dev) >+void drm_sysfs_device_remove(struct drm_device *dev) > { > int i; > >- for (i = 0; i < ARRAY_SIZE(class_device_attrs); i++) >- class_device_remove_file(class_dev, &class_device_attrs[i]); >- class_device_unregister(class_dev); >+ for (i = 0; i < ARRAY_SIZE(device_attrs); i++) >+ device_remove_file(&dev->dev, &device_attrs[i]); >+ device_unregister(&dev->dev); > } >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/drm_ttm.c linux-2.6.23.i686/drivers/char/drm/drm_ttm.c >--- linux-2.6.23.i686.orig/drivers/char/drm/drm_ttm.c 1970-01-01 01:00:00.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/drm_ttm.c 2008-01-06 09:24:57.000000000 +0100 >@@ -0,0 +1,472 @@ >+/************************************************************************** >+ * >+ * Copyright (c) 2006-2007 Tungsten Graphics, Inc., Cedar Park, TX., USA >+ * All Rights Reserved. >+ * >+ * Permission is hereby granted, free of charge, to any person obtaining a >+ * copy of this software and associated documentation files (the >+ * "Software"), to deal in the Software without restriction, including >+ * without limitation the rights to use, copy, modify, merge, publish, >+ * distribute, sub license, and/or sell copies of the Software, and to >+ * permit persons to whom the Software is furnished to do so, subject to >+ * the following conditions: >+ * >+ * The above copyright notice and this permission notice (including the >+ * next paragraph) shall be included in all copies or substantial portions >+ * of the Software. 
>+ * >+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR >+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, >+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL >+ * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, >+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR >+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE >+ * USE OR OTHER DEALINGS IN THE SOFTWARE. >+ * >+ **************************************************************************/ >+/* >+ * Authors: Thomas Hellström <thomas-at-tungstengraphics-dot-com> >+ */ >+ >+#include "drmP.h" >+ >+static void drm_ttm_ipi_handler(void *null) >+{ >+ flush_agp_cache(); >+} >+ >+void drm_ttm_cache_flush(void) >+{ >+ if (on_each_cpu(drm_ttm_ipi_handler, NULL, 1, 1) != 0) >+ DRM_ERROR("Timed out waiting for drm cache flush.\n"); >+} >+EXPORT_SYMBOL(drm_ttm_cache_flush); >+ >+/* >+ * Use kmalloc if possible. Otherwise fall back to vmalloc. 
>+ */ >+ >+static void drm_ttm_alloc_pages(struct drm_ttm *ttm) >+{ >+ unsigned long size = ttm->num_pages * sizeof(*ttm->pages); >+ ttm->pages = NULL; >+ >+ if (drm_alloc_memctl(size)) >+ return; >+ >+ if (size <= PAGE_SIZE) >+ ttm->pages = drm_calloc(1, size, DRM_MEM_TTM); >+ >+ if (!ttm->pages) { >+ ttm->pages = vmalloc_user(size); >+ if (ttm->pages) >+ ttm->page_flags |= DRM_TTM_PAGE_VMALLOC; >+ } >+ if (!ttm->pages) >+ drm_free_memctl(size); >+} >+ >+static void drm_ttm_free_pages(struct drm_ttm *ttm) >+{ >+ unsigned long size = ttm->num_pages * sizeof(*ttm->pages); >+ >+ if (ttm->page_flags & DRM_TTM_PAGE_VMALLOC) { >+ vfree(ttm->pages); >+ ttm->page_flags &= ~DRM_TTM_PAGE_VMALLOC; >+ } else { >+ drm_free(ttm->pages, size, DRM_MEM_TTM); >+ } >+ drm_free_memctl(size); >+ ttm->pages = NULL; >+} >+ >+static struct page *drm_ttm_alloc_page(void) >+{ >+ struct page *page; >+ >+ if (drm_alloc_memctl(PAGE_SIZE)) >+ return NULL; >+ >+ page = alloc_page(GFP_KERNEL | __GFP_ZERO | GFP_DMA32); >+ if (!page) { >+ drm_free_memctl(PAGE_SIZE); >+ return NULL; >+ } >+#if (LINUX_VERSION_CODE < KERNEL_VERSION(2,6,15)) >+ SetPageReserved(page); >+#endif >+ return page; >+} >+ >+/* >+ * Change caching policy for the linear kernel map >+ * for range of pages in a ttm. 
>+ */ >+ >+static int drm_ttm_set_caching(struct drm_ttm *ttm, int noncached) >+{ >+ int i; >+ struct page **cur_page; >+ int do_tlbflush = 0; >+ >+ if ((ttm->page_flags & DRM_TTM_PAGE_UNCACHED) == noncached) >+ return 0; >+ >+ if (noncached) >+ drm_ttm_cache_flush(); >+ >+ for (i = 0; i < ttm->num_pages; ++i) { >+ cur_page = ttm->pages + i; >+ if (*cur_page) { >+ if (!PageHighMem(*cur_page)) { >+ if (noncached) { >+ map_page_into_agp(*cur_page); >+ } else { >+ unmap_page_from_agp(*cur_page); >+ } >+ do_tlbflush = 1; >+ } >+ } >+ } >+ if (do_tlbflush) >+ flush_agp_mappings(); >+ >+ DRM_FLAG_MASKED(ttm->page_flags, noncached, DRM_TTM_PAGE_UNCACHED); >+ >+ return 0; >+} >+ >+ >+static void drm_ttm_free_user_pages(struct drm_ttm *ttm) >+{ >+ int write; >+ int dirty; >+ struct page *page; >+ int i; >+ >+ BUG_ON(!(ttm->page_flags & DRM_TTM_PAGE_USER)); >+ write = ((ttm->page_flags & DRM_TTM_PAGE_WRITE) != 0); >+ dirty = ((ttm->page_flags & DRM_TTM_PAGE_USER_DIRTY) != 0); >+ >+ for (i = 0; i < ttm->num_pages; ++i) { >+ page = ttm->pages[i]; >+ if (page == NULL) >+ continue; >+ >+ if (page == ttm->dummy_read_page) { >+ BUG_ON(write); >+ continue; >+ } >+ >+ if (write && dirty && !PageReserved(page)) >+ set_page_dirty_lock(page); >+ >+ ttm->pages[i] = NULL; >+ put_page(page); >+ } >+} >+ >+static void drm_ttm_free_alloced_pages(struct drm_ttm *ttm) >+{ >+ int i; >+ struct drm_buffer_manager *bm = &ttm->dev->bm; >+ struct page **cur_page; >+ >+ for (i = 0; i < ttm->num_pages; ++i) { >+ cur_page = ttm->pages + i; >+ if (*cur_page) { >+#if (LINUX_VERSION_CODE < KERNEL_VERSION(2,6,15)) >+ ClearPageReserved(*cur_page); >+#endif >+ if (page_count(*cur_page) != 1) >+ DRM_ERROR("Erroneous page count. Leaking pages.\n"); >+ if (page_mapped(*cur_page)) >+ DRM_ERROR("Erroneous map count. Leaking page mappings.\n"); >+ __free_page(*cur_page); >+ drm_free_memctl(PAGE_SIZE); >+ --bm->cur_pages; >+ } >+ } >+} >+ >+/* >+ * Free all resources associated with a ttm. 
>+ */ >+ >+int drm_ttm_destroy(struct drm_ttm *ttm) >+{ >+ struct drm_ttm_backend *be; >+ >+ if (!ttm) >+ return 0; >+ >+ be = ttm->be; >+ if (be) { >+ be->func->destroy(be); >+ ttm->be = NULL; >+ } >+ >+ if (ttm->pages) { >+ if (ttm->page_flags & DRM_TTM_PAGE_UNCACHED) >+ drm_ttm_set_caching(ttm, 0); >+ >+ if (ttm->page_flags & DRM_TTM_PAGE_USER) >+ drm_ttm_free_user_pages(ttm); >+ else >+ drm_ttm_free_alloced_pages(ttm); >+ >+ drm_ttm_free_pages(ttm); >+ } >+ >+ drm_ctl_free(ttm, sizeof(*ttm), DRM_MEM_TTM); >+ return 0; >+} >+ >+struct page *drm_ttm_get_page(struct drm_ttm *ttm, int index) >+{ >+ struct page *p; >+ struct drm_buffer_manager *bm = &ttm->dev->bm; >+ >+ p = ttm->pages[index]; >+ if (!p) { >+ p = drm_ttm_alloc_page(); >+ if (!p) >+ return NULL; >+ ttm->pages[index] = p; >+ ++bm->cur_pages; >+ } >+ return p; >+} >+EXPORT_SYMBOL(drm_ttm_get_page); >+ >+/** >+ * drm_ttm_set_user: >+ * >+ * @ttm: the ttm to map pages to. This must always be >+ * a freshly created ttm. >+ * >+ * @tsk: a pointer to the address space from which to map >+ * pages. >+ * >+ * @write: a boolean indicating that write access is desired >+ * >+ * start: the starting address >+ * >+ * Map a range of user addresses to a new ttm object. This >+ * provides access to user memory from the graphics device. 
>+ */ >+int drm_ttm_set_user(struct drm_ttm *ttm, >+ struct task_struct *tsk, >+ unsigned long start, >+ unsigned long num_pages) >+{ >+ struct mm_struct *mm = tsk->mm; >+ int ret; >+ int write = (ttm->page_flags & DRM_TTM_PAGE_WRITE) != 0; >+ >+ BUG_ON(num_pages != ttm->num_pages); >+ BUG_ON((ttm->page_flags & DRM_TTM_PAGE_USER) == 0); >+ >+ down_read(&mm->mmap_sem); >+ ret = get_user_pages(tsk, mm, start, num_pages, >+ write, 0, ttm->pages, NULL); >+ up_read(&mm->mmap_sem); >+ >+ if (ret != num_pages && write) { >+ drm_ttm_free_user_pages(ttm); >+ return -ENOMEM; >+ } >+ >+ return 0; >+} >+ >+/** >+ * drm_ttm_populate: >+ * >+ * @ttm: the object to allocate pages for >+ * >+ * Allocate pages for all unset page entries, then >+ * call the backend to create the hardware mappings >+ */ >+int drm_ttm_populate(struct drm_ttm *ttm) >+{ >+ struct page *page; >+ unsigned long i; >+ struct drm_ttm_backend *be; >+ >+ if (ttm->state != ttm_unpopulated) >+ return 0; >+ >+ be = ttm->be; >+ if (ttm->page_flags & DRM_TTM_PAGE_WRITE) { >+ for (i = 0; i < ttm->num_pages; ++i) { >+ page = drm_ttm_get_page(ttm, i); >+ if (!page) >+ return -ENOMEM; >+ } >+ } >+ be->func->populate(be, ttm->num_pages, ttm->pages, ttm->dummy_read_page); >+ ttm->state = ttm_unbound; >+ return 0; >+} >+ >+/** >+ * drm_ttm_create: >+ * >+ * @dev: the drm_device >+ * >+ * @size: The size (in bytes) of the desired object >+ * >+ * @page_flags: various DRM_TTM_PAGE_* flags. See drm_object.h. 
>+ * >+ * Allocate and initialize a ttm, leaving it unpopulated at this time >+ */ >+ >+struct drm_ttm *drm_ttm_create(struct drm_device *dev, unsigned long size, >+ uint32_t page_flags, struct page *dummy_read_page) >+{ >+ struct drm_bo_driver *bo_driver = dev->driver->bo_driver; >+ struct drm_ttm *ttm; >+ >+ if (!bo_driver) >+ return NULL; >+ >+ ttm = drm_ctl_calloc(1, sizeof(*ttm), DRM_MEM_TTM); >+ if (!ttm) >+ return NULL; >+ >+ ttm->dev = dev; >+ atomic_set(&ttm->vma_count, 0); >+ >+ ttm->destroy = 0; >+ ttm->num_pages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT; >+ >+ ttm->page_flags = page_flags; >+ >+ ttm->dummy_read_page = dummy_read_page; >+ >+ /* >+ * Account also for AGP module memory usage. >+ */ >+ >+ drm_ttm_alloc_pages(ttm); >+ if (!ttm->pages) { >+ drm_ttm_destroy(ttm); >+ DRM_ERROR("Failed allocating page table\n"); >+ return NULL; >+ } >+ ttm->be = bo_driver->create_ttm_backend_entry(dev); >+ if (!ttm->be) { >+ drm_ttm_destroy(ttm); >+ DRM_ERROR("Failed creating ttm backend entry\n"); >+ return NULL; >+ } >+ ttm->state = ttm_unpopulated; >+ return ttm; >+} >+ >+/** >+ * drm_ttm_evict: >+ * >+ * @ttm: the object to be unbound from the aperture. >+ * >+ * Transition a ttm from bound to evicted, where it >+ * isn't present in the aperture, but various caches may >+ * not be consistent. >+ */ >+void drm_ttm_evict(struct drm_ttm *ttm) >+{ >+ struct drm_ttm_backend *be = ttm->be; >+ int ret; >+ >+ if (ttm->state == ttm_bound) { >+ ret = be->func->unbind(be); >+ BUG_ON(ret); >+ } >+ >+ ttm->state = ttm_evicted; >+} >+ >+/** >+ * drm_ttm_fixup_caching: >+ * >+ * @ttm: the object to set unbound >+ * >+ * XXX this function is misnamed. Transition a ttm from evicted to >+ * unbound, flushing caches as appropriate. 
>+ */ >+void drm_ttm_fixup_caching(struct drm_ttm *ttm) >+{ >+ >+ if (ttm->state == ttm_evicted) { >+ struct drm_ttm_backend *be = ttm->be; >+ if (be->func->needs_ub_cache_adjust(be)) >+ drm_ttm_set_caching(ttm, 0); >+ ttm->state = ttm_unbound; >+ } >+} >+ >+/** >+ * drm_ttm_unbind: >+ * >+ * @ttm: the object to unbind from the graphics device >+ * >+ * Unbind an object from the aperture. This removes the mappings >+ * from the graphics device and flushes caches if necessary. >+ */ >+void drm_ttm_unbind(struct drm_ttm *ttm) >+{ >+ if (ttm->state == ttm_bound) >+ drm_ttm_evict(ttm); >+ >+ drm_ttm_fixup_caching(ttm); >+} >+ >+/** >+ * drm_ttm_bind: >+ * >+ * @ttm: the ttm object to bind to the graphics device >+ * >+ * @bo_mem: the aperture memory region which will hold the object >+ * >+ * Bind a ttm object to the aperture. This ensures that the necessary >+ * pages are allocated, flushes CPU caches as needed and marks a >+ * user-backed ttm as DRM_TTM_PAGE_USER_DIRTY to indicate that it may >+ * have been modified by the GPU. >+ */ >+int drm_ttm_bind(struct drm_ttm *ttm, struct drm_bo_mem_reg *bo_mem) >+{ >+ struct drm_bo_driver *bo_driver; >+ int ret = 0; >+ struct drm_ttm_backend *be; >+ >+ if (!ttm) >+ return -EINVAL; >+ bo_driver = ttm->dev->driver->bo_driver; >+ if (ttm->state == ttm_bound) >+ return 0; >+ >+ be = ttm->be; >+ >+ ret = drm_ttm_populate(ttm); >+ if (ret) >+ return ret; >+ >+ if (ttm->state == ttm_unbound && !(bo_mem->flags & DRM_BO_FLAG_CACHED)) >+ drm_ttm_set_caching(ttm, DRM_TTM_PAGE_UNCACHED); >+ else if ((bo_mem->flags & DRM_BO_FLAG_CACHED_MAPPED) && >+ bo_driver->ttm_cache_flush) >+ bo_driver->ttm_cache_flush(ttm); >+ >+ ret = be->func->bind(be, bo_mem); >+ if (ret) { >+ ttm->state = ttm_evicted; >+ DRM_ERROR("Couldn't bind backend.\n"); >+ return ret; >+ } >+ >+ ttm->state = ttm_bound; >+ if (ttm->page_flags & DRM_TTM_PAGE_USER) >+ ttm->page_flags |= DRM_TTM_PAGE_USER_DIRTY; >+ return 0; >+} >+EXPORT_SYMBOL(drm_ttm_bind); >diff -Nurp
linux-2.6.23.i686.orig/drivers/char/drm/drm_vm.c linux-2.6.23.i686/drivers/char/drm/drm_vm.c >--- linux-2.6.23.i686.orig/drivers/char/drm/drm_vm.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/drm_vm.c 2008-01-06 09:24:57.000000000 +0100 >@@ -34,14 +34,19 @@ > */ > > #include "drmP.h" >+ > #if defined(__ia64__) > #include <linux/efi.h> > #endif > > static void drm_vm_open(struct vm_area_struct *vma); > static void drm_vm_close(struct vm_area_struct *vma); >+static int drm_bo_mmap_locked(struct vm_area_struct *vma, >+ struct file *filp, >+ drm_local_map_t *map); >+ > >-static pgprot_t drm_io_prot(uint32_t map_type, struct vm_area_struct *vma) >+pgprot_t drm_io_prot(uint32_t map_type, struct vm_area_struct *vma) > { > pgprot_t tmp = vm_get_page_prot(vma->vm_flags); > >@@ -65,6 +70,7 @@ static pgprot_t drm_io_prot(uint32_t map > return tmp; > } > >+ > /** > * \c nopage method for AGP virtual memory. > * >@@ -132,10 +138,13 @@ static __inline__ struct page *drm_do_vm > page = virt_to_page(__va(agpmem->memory->memory[offset])); > get_page(page); > >+#if 0 >+ /* page_count() not defined everywhere */ > DRM_DEBUG > ("baddr = 0x%lx page = 0x%p, offset = 0x%lx, count=%d\n", > baddr, __va(agpmem->memory->memory[offset]), offset, > page_count(page)); >+#endif > > return page; > } >@@ -213,10 +222,9 @@ static void drm_vm_shm_close(struct vm_a > found_maps++; > if (pt->vma == vma) { > list_del(&pt->head); >- drm_free(pt, sizeof(*pt), DRM_MEM_VMAS); >+ drm_ctl_free(pt, sizeof(*pt), DRM_MEM_VMAS); > } > } >- > /* We were the only map that was found */ > if (found_maps == 1 && map->flags & _DRM_REMOVABLE) { > /* Check to see if we are in the maplist, if we are not, then >@@ -255,6 +263,9 @@ static void drm_vm_shm_close(struct vm_a > dmah.size = map->size; > __drm_pci_free(dev, &dmah); > break; >+ case _DRM_TTM: >+ BUG_ON(1); >+ break; > } > drm_free(map, sizeof(*map), DRM_MEM_MAPS); > } >@@ -319,6 +330,7 @@ static __inline__ struct page *drm_do_vm > 
unsigned long page_offset; > struct page *page; > >+ DRM_DEBUG("\n"); > if (!entry) > return NOPAGE_SIGBUS; /* Error */ > if (address > vma->vm_end) >@@ -367,6 +379,7 @@ static struct page *drm_vm_sg_nopage(str > return drm_do_vm_sg_nopage(vma, address); > } > >+ > /** AGP virtual memory operations */ > static struct vm_operations_struct drm_vm_ops = { > .nopage = drm_vm_nopage, >@@ -413,7 +426,7 @@ static void drm_vm_open_locked(struct vm > vma->vm_start, vma->vm_end - vma->vm_start); > atomic_inc(&dev->vma_count); > >- vma_entry = drm_alloc(sizeof(*vma_entry), DRM_MEM_VMAS); >+ vma_entry = drm_ctl_alloc(sizeof(*vma_entry), DRM_MEM_VMAS); > if (vma_entry) { > vma_entry->vma = vma; > vma_entry->pid = current->pid; >@@ -453,13 +466,14 @@ static void drm_vm_close(struct vm_area_ > list_for_each_entry_safe(pt, temp, &dev->vmalist, head) { > if (pt->vma == vma) { > list_del(&pt->head); >- drm_free(pt, sizeof(*pt), DRM_MEM_VMAS); >+ drm_ctl_free(pt, sizeof(*pt), DRM_MEM_VMAS); > break; > } > } > mutex_unlock(&dev->struct_mutex); > } > >+ > /** > * mmap DMA memory. 
> * >@@ -487,8 +501,7 @@ static int drm_mmap_dma(struct file *fil > return -EINVAL; > } > >- if (!capable(CAP_SYS_ADMIN) && >- (dma->flags & _DRM_DMA_USE_PCI_RO)) { >+ if (!capable(CAP_SYS_ADMIN) && (dma->flags & _DRM_DMA_USE_PCI_RO)) { > vma->vm_flags &= ~(VM_WRITE | VM_MAYWRITE); > #if defined(__i386__) || defined(__x86_64__) > pgprot_val(vma->vm_page_prot) &= ~_PAGE_RW; >@@ -504,7 +517,6 @@ static int drm_mmap_dma(struct file *fil > } > > vma->vm_ops = &drm_vm_dma_ops; >- > vma->vm_flags |= VM_RESERVED; /* Don't swap */ > > vma->vm_file = filp; /* Needed for drm_vm_open() */ >@@ -516,7 +528,6 @@ unsigned long drm_core_get_map_ofs(struc > { > return map->offset; > } >- > EXPORT_SYMBOL(drm_core_get_map_ofs); > > unsigned long drm_core_get_reg_ofs(struct drm_device *dev) >@@ -527,7 +538,6 @@ unsigned long drm_core_get_reg_ofs(struc > return 0; > #endif > } >- > EXPORT_SYMBOL(drm_core_get_reg_ofs); > > /** >@@ -561,6 +571,7 @@ static int drm_mmap_locked(struct file * > * the AGP mapped at physical address 0 > * --BenH. > */ >+ > if (!vma->vm_pgoff > #if __OS_HAS_AGP > && (!dev->agp >@@ -651,6 +662,8 @@ static int drm_mmap_locked(struct file * > vma->vm_private_data = (void *)map; > vma->vm_flags |= VM_RESERVED; > break; >+ case _DRM_TTM: >+ return drm_bo_mmap_locked(vma, filp, map); > default: > return -EINVAL; /* This should never happen. */ > } >@@ -674,3 +687,210 @@ int drm_mmap(struct file *filp, struct v > return ret; > } > EXPORT_SYMBOL(drm_mmap); >+ >+/** >+ * buffer object vm functions. >+ */ >+ >+/** >+ * \c Pagefault method for buffer objects. >+ * >+ * \param vma Virtual memory area. >+ * \param address Faulting user virtual address. >+ * \return Error or refault. The pfn is manually inserted. >+ * >+ * It's important that pfns are inserted while holding the bo->mutex lock, >+ * otherwise we might race with unmap_mapping_range() which is always >+ * called with the bo->mutex lock held.
>+ * >+ * We're modifying the page attribute bits of the vma->vm_page_prot field, >+ * without holding the mmap_sem in write mode. Only in read mode. >+ * These bits are not used by the mm subsystem code, and we consider them >+ * protected by the bo->mutex lock. >+ */ >+ >+#ifdef DRM_FULL_MM_COMPAT >+static unsigned long drm_bo_vm_nopfn(struct vm_area_struct *vma, >+ unsigned long address) >+{ >+ struct drm_buffer_object *bo = (struct drm_buffer_object *) vma->vm_private_data; >+ unsigned long page_offset; >+ struct page *page = NULL; >+ struct drm_ttm *ttm; >+ struct drm_device *dev; >+ unsigned long pfn; >+ int err; >+ unsigned long bus_base; >+ unsigned long bus_offset; >+ unsigned long bus_size; >+ unsigned long ret = NOPFN_REFAULT; >+ >+ if (address > vma->vm_end) >+ return NOPFN_SIGBUS; >+ >+ dev = bo->dev; >+ err = drm_bo_read_lock(&dev->bm.bm_lock); >+ if (err) >+ return NOPFN_REFAULT; >+ >+ err = mutex_lock_interruptible(&bo->mutex); >+ if (err) { >+ drm_bo_read_unlock(&dev->bm.bm_lock); >+ return NOPFN_REFAULT; >+ } >+ >+ err = drm_bo_wait(bo, 0, 0, 0); >+ if (err) { >+ ret = (err != -EAGAIN) ? NOPFN_SIGBUS : NOPFN_REFAULT; >+ goto out_unlock; >+ } >+ >+ /* >+ * If buffer happens to be in a non-mappable location, >+ * move it to a mappable. >+ */ >+ >+ if (!(bo->mem.flags & DRM_BO_FLAG_MAPPABLE)) { >+ uint32_t new_flags = bo->mem.proposed_flags | >+ DRM_BO_FLAG_MAPPABLE | >+ DRM_BO_FLAG_FORCE_MAPPABLE; >+ err = drm_bo_move_buffer(bo, new_flags, 0, 0); >+ if (err) { >+ ret = (err != -EAGAIN) ? 
NOPFN_SIGBUS : NOPFN_REFAULT; >+ goto out_unlock; >+ } >+ } >+ >+ err = drm_bo_pci_offset(dev, &bo->mem, &bus_base, &bus_offset, >+ &bus_size); >+ >+ if (err) { >+ ret = NOPFN_SIGBUS; >+ goto out_unlock; >+ } >+ >+ page_offset = (address - vma->vm_start) >> PAGE_SHIFT; >+ >+ if (bus_size) { >+ struct drm_mem_type_manager *man = &dev->bm.man[bo->mem.mem_type]; >+ >+ pfn = ((bus_base + bus_offset) >> PAGE_SHIFT) + page_offset; >+ vma->vm_page_prot = drm_io_prot(man->drm_bus_maptype, vma); >+ } else { >+ ttm = bo->ttm; >+ >+ drm_ttm_fixup_caching(ttm); >+ page = drm_ttm_get_page(ttm, page_offset); >+ if (!page) { >+ ret = NOPFN_OOM; >+ goto out_unlock; >+ } >+ pfn = page_to_pfn(page); >+ vma->vm_page_prot = (bo->mem.flags & DRM_BO_FLAG_CACHED) ? >+ vm_get_page_prot(vma->vm_flags) : >+ drm_io_prot(_DRM_TTM, vma); >+ } >+ >+ err = vm_insert_pfn(vma, address, pfn); >+ if (err) { >+ ret = (err != -EAGAIN) ? NOPFN_OOM : NOPFN_REFAULT; >+ goto out_unlock; >+ } >+out_unlock: >+ mutex_unlock(&bo->mutex); >+ drm_bo_read_unlock(&dev->bm.bm_lock); >+ return ret; >+} >+#endif >+ >+static void drm_bo_vm_open_locked(struct vm_area_struct *vma) >+{ >+ struct drm_buffer_object *bo = (struct drm_buffer_object *) vma->vm_private_data; >+ >+ drm_vm_open_locked(vma); >+ atomic_inc(&bo->usage); >+#ifdef DRM_ODD_MM_COMPAT >+ drm_bo_add_vma(bo, vma); >+#endif >+} >+ >+/** >+ * \c vma open method for buffer objects. >+ * >+ * \param vma virtual memory area. >+ */ >+ >+static void drm_bo_vm_open(struct vm_area_struct *vma) >+{ >+ struct drm_buffer_object *bo = (struct drm_buffer_object *) vma->vm_private_data; >+ struct drm_device *dev = bo->dev; >+ >+ mutex_lock(&dev->struct_mutex); >+ drm_bo_vm_open_locked(vma); >+ mutex_unlock(&dev->struct_mutex); >+} >+ >+/** >+ * \c vma close method for buffer objects. >+ * >+ * \param vma virtual memory area. 
>+ */ >+ >+static void drm_bo_vm_close(struct vm_area_struct *vma) >+{ >+ struct drm_buffer_object *bo = (struct drm_buffer_object *) vma->vm_private_data; >+ struct drm_device *dev = bo->dev; >+ >+ drm_vm_close(vma); >+ if (bo) { >+ mutex_lock(&dev->struct_mutex); >+#ifdef DRM_ODD_MM_COMPAT >+ drm_bo_delete_vma(bo, vma); >+#endif >+ drm_bo_usage_deref_locked((struct drm_buffer_object **) >+ &vma->vm_private_data); >+ mutex_unlock(&dev->struct_mutex); >+ } >+ return; >+} >+ >+static struct vm_operations_struct drm_bo_vm_ops = { >+#ifdef DRM_FULL_MM_COMPAT >+ .nopfn = drm_bo_vm_nopfn, >+#else >+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,19)) >+ .nopfn = drm_bo_vm_nopfn, >+#else >+ .nopage = drm_bo_vm_nopage, >+#endif >+#endif >+ .open = drm_bo_vm_open, >+ .close = drm_bo_vm_close, >+}; >+ >+/** >+ * mmap buffer object memory. >+ * >+ * \param vma virtual memory area. >+ * \param file_priv DRM file private. >+ * \param map The buffer object drm map. >+ * \return zero on success or a negative number on failure. >+ */ >+ >+int drm_bo_mmap_locked(struct vm_area_struct *vma, >+ struct file *filp, >+ drm_local_map_t *map) >+{ >+ vma->vm_ops = &drm_bo_vm_ops; >+ vma->vm_private_data = map->handle; >+ vma->vm_file = filp; >+ vma->vm_flags |= VM_RESERVED | VM_IO; >+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,19)) >+ vma->vm_flags |= VM_PFNMAP; >+#endif >+ drm_bo_vm_open_locked(vma); >+#ifdef DRM_ODD_MM_COMPAT >+ drm_bo_map_bound(vma); >+#endif >+ return 0; >+} >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/ffb_context.c linux-2.6.23.i686/drivers/char/drm/ffb_context.c >--- linux-2.6.23.i686.orig/drivers/char/drm/ffb_context.c 1970-01-01 01:00:00.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/ffb_context.c 2008-01-06 09:24:57.000000000 +0100 >@@ -0,0 +1,582 @@ >+/* $Id$ >+ * ffb_context.c: Creator/Creator3D DRI/DRM context switching. >+ * >+ * Copyright (C) 2000 David S. 
Miller (davem@redhat.com) >+ * >+ * Almost entirely stolen from tdfx_context.c, see there >+ * for authors. >+ */ >+ >+#include <linux/sched.h> >+#include <asm/upa.h> >+ >+#include "drmP.h" >+#include "ffb_drv.h" >+ >+static int ffb_alloc_queue(struct drm_device * dev, int is_2d_only) { >+ ffb_dev_priv_t *fpriv = (ffb_dev_priv_t *) dev->dev_private; >+ int i; >+ >+ for (i = 0; i < FFB_MAX_CTXS; i++) { >+ if (fpriv->hw_state[i] == NULL) >+ break; >+ } >+ if (i == FFB_MAX_CTXS) >+ return -1; >+ >+ fpriv->hw_state[i] = kmalloc(sizeof(struct ffb_hw_context), GFP_KERNEL); >+ if (fpriv->hw_state[i] == NULL) >+ return -1; >+ >+ fpriv->hw_state[i]->is_2d_only = is_2d_only; >+ >+ /* Plus one because 0 is the special DRM_KERNEL_CONTEXT. */ >+ return i + 1; >+} >+ >+static void ffb_save_context(ffb_dev_priv_t * fpriv, int idx) >+{ >+ ffb_fbcPtr ffb = fpriv->regs; >+ struct ffb_hw_context *ctx; >+ int i; >+ >+ ctx = (idx == 0) ? NULL : fpriv->hw_state[idx - 1]; >+ if (ctx == NULL) >+ return; >+ >+ if (ctx->is_2d_only) { >+ /* 2D applications only care about certain pieces >+ * of state. >+ */ >+ ctx->drawop = upa_readl(&ffb->drawop); >+ ctx->ppc = upa_readl(&ffb->ppc); >+ ctx->wid = upa_readl(&ffb->wid); >+ ctx->fg = upa_readl(&ffb->fg); >+ ctx->bg = upa_readl(&ffb->bg); >+ ctx->xclip = upa_readl(&ffb->xclip); >+ ctx->fbc = upa_readl(&ffb->fbc); >+ ctx->rop = upa_readl(&ffb->rop); >+ ctx->cmp = upa_readl(&ffb->cmp); >+ ctx->matchab = upa_readl(&ffb->matchab); >+ ctx->magnab = upa_readl(&ffb->magnab); >+ ctx->pmask = upa_readl(&ffb->pmask); >+ ctx->xpmask = upa_readl(&ffb->xpmask); >+ ctx->lpat = upa_readl(&ffb->lpat); >+ ctx->fontxy = upa_readl(&ffb->fontxy); >+ ctx->fontw = upa_readl(&ffb->fontw); >+ ctx->fontinc = upa_readl(&ffb->fontinc); >+ >+ /* stencil/stencilctl only exists on FFB2+ and later >+ * due to the introduction of 3DRAM-III.
>+ */ >+ if (fpriv->ffb_type == ffb2_vertical_plus || >+ fpriv->ffb_type == ffb2_horizontal_plus) { >+ ctx->stencil = upa_readl(&ffb->stencil); >+ ctx->stencilctl = upa_readl(&ffb->stencilctl); >+ } >+ >+ for (i = 0; i < 32; i++) >+ ctx->area_pattern[i] = upa_readl(&ffb->pattern[i]); >+ ctx->ucsr = upa_readl(&ffb->ucsr); >+ return; >+ } >+ >+ /* Fetch drawop. */ >+ ctx->drawop = upa_readl(&ffb->drawop); >+ >+ /* If we were saving the vertex registers, this is where >+ * we would do it. We would save 32 32-bit words starting >+ * at ffb->suvtx. >+ */ >+ >+ /* Capture rendering attributes. */ >+ >+ ctx->ppc = upa_readl(&ffb->ppc); /* Pixel Processor Control */ >+ ctx->wid = upa_readl(&ffb->wid); /* Current WID */ >+ ctx->fg = upa_readl(&ffb->fg); /* Constant FG color */ >+ ctx->bg = upa_readl(&ffb->bg); /* Constant BG color */ >+ ctx->consty = upa_readl(&ffb->consty); /* Constant Y */ >+ ctx->constz = upa_readl(&ffb->constz); /* Constant Z */ >+ ctx->xclip = upa_readl(&ffb->xclip); /* X plane clip */ >+ ctx->dcss = upa_readl(&ffb->dcss); /* Depth Cue Scale Slope */ >+ ctx->vclipmin = upa_readl(&ffb->vclipmin); /* Primary XY clip, minimum */ >+ ctx->vclipmax = upa_readl(&ffb->vclipmax); /* Primary XY clip, maximum */ >+ ctx->vclipzmin = upa_readl(&ffb->vclipzmin); /* Primary Z clip, minimum */ >+ ctx->vclipzmax = upa_readl(&ffb->vclipzmax); /* Primary Z clip, maximum */ >+ ctx->dcsf = upa_readl(&ffb->dcsf); /* Depth Cue Scale Front Bound */ >+ ctx->dcsb = upa_readl(&ffb->dcsb); /* Depth Cue Scale Back Bound */ >+ ctx->dczf = upa_readl(&ffb->dczf); /* Depth Cue Scale Z Front */ >+ ctx->dczb = upa_readl(&ffb->dczb); /* Depth Cue Scale Z Back */ >+ ctx->blendc = upa_readl(&ffb->blendc); /* Alpha Blend Control */ >+ ctx->blendc1 = upa_readl(&ffb->blendc1); /* Alpha Blend Color 1 */ >+ ctx->blendc2 = upa_readl(&ffb->blendc2); /* Alpha Blend Color 2 */ >+ ctx->fbc = upa_readl(&ffb->fbc); /* Frame Buffer Control */ >+ ctx->rop = upa_readl(&ffb->rop); /* Raster Operation */ 
>+ ctx->cmp = upa_readl(&ffb->cmp); /* Compare Controls */ >+ ctx->matchab = upa_readl(&ffb->matchab); /* Buffer A/B Match Ops */ >+ ctx->matchc = upa_readl(&ffb->matchc); /* Buffer C Match Ops */ >+ ctx->magnab = upa_readl(&ffb->magnab); /* Buffer A/B Magnitude Ops */ >+ ctx->magnc = upa_readl(&ffb->magnc); /* Buffer C Magnitude Ops */ >+ ctx->pmask = upa_readl(&ffb->pmask); /* RGB Plane Mask */ >+ ctx->xpmask = upa_readl(&ffb->xpmask); /* X Plane Mask */ >+ ctx->ypmask = upa_readl(&ffb->ypmask); /* Y Plane Mask */ >+ ctx->zpmask = upa_readl(&ffb->zpmask); /* Z Plane Mask */ >+ >+ /* Auxiliary Clips. */ >+ ctx->auxclip0min = upa_readl(&ffb->auxclip[0].min); >+ ctx->auxclip0max = upa_readl(&ffb->auxclip[0].max); >+ ctx->auxclip1min = upa_readl(&ffb->auxclip[1].min); >+ ctx->auxclip1max = upa_readl(&ffb->auxclip[1].max); >+ ctx->auxclip2min = upa_readl(&ffb->auxclip[2].min); >+ ctx->auxclip2max = upa_readl(&ffb->auxclip[2].max); >+ ctx->auxclip3min = upa_readl(&ffb->auxclip[3].min); >+ ctx->auxclip3max = upa_readl(&ffb->auxclip[3].max); >+ >+ ctx->lpat = upa_readl(&ffb->lpat); /* Line Pattern */ >+ ctx->fontxy = upa_readl(&ffb->fontxy); /* XY Font Coordinate */ >+ ctx->fontw = upa_readl(&ffb->fontw); /* Font Width */ >+ ctx->fontinc = upa_readl(&ffb->fontinc); /* Font X/Y Increment */ >+ >+ /* These registers/features only exist on FFB2 and later chips. 
*/ >+ if (fpriv->ffb_type >= ffb2_prototype) { >+ ctx->dcss1 = upa_readl(&ffb->dcss1); /* Depth Cue Scale Slope 1 */ >+ ctx->dcss2 = upa_readl(&ffb->dcss2); /* Depth Cue Scale Slope 2 */ >+ ctx->dcss3 = upa_readl(&ffb->dcss3); /* Depth Cue Scale Slope 3 */ >+ ctx->dcs2 = upa_readl(&ffb->dcs2); /* Depth Cue Scale 2 */ >+ ctx->dcs3 = upa_readl(&ffb->dcs3); /* Depth Cue Scale 3 */ >+ ctx->dcs4 = upa_readl(&ffb->dcs4); /* Depth Cue Scale 4 */ >+ ctx->dcd2 = upa_readl(&ffb->dcd2); /* Depth Cue Depth 2 */ >+ ctx->dcd3 = upa_readl(&ffb->dcd3); /* Depth Cue Depth 3 */ >+ ctx->dcd4 = upa_readl(&ffb->dcd4); /* Depth Cue Depth 4 */ >+ >+ /* And stencil/stencilctl only exists on FFB2+ and later >+ * due to the introduction of 3DRAM-III. >+ */ >+ if (fpriv->ffb_type == ffb2_vertical_plus || >+ fpriv->ffb_type == ffb2_horizontal_plus) { >+ ctx->stencil = upa_readl(&ffb->stencil); >+ ctx->stencilctl = upa_readl(&ffb->stencilctl); >+ } >+ } >+ >+ /* Save the 32x32 area pattern. */ >+ for (i = 0; i < 32; i++) >+ ctx->area_pattern[i] = upa_readl(&ffb->pattern[i]); >+ >+ /* Finally, stash away the User Control/Status Register. */ >+ ctx->ucsr = upa_readl(&ffb->ucsr); >+} >+ >+static void ffb_restore_context(ffb_dev_priv_t * fpriv, int old, int idx) >+{ >+ ffb_fbcPtr ffb = fpriv->regs; >+ struct ffb_hw_context *ctx; >+ int i; >+ >+ ctx = (idx == 0) ? NULL : fpriv->hw_state[idx - 1]; >+ if (ctx == NULL) >+ return; >+ >+ if (ctx->is_2d_only) { >+ /* 2D applications only care about certain pieces >+ * of state. >+ */ >+ upa_writel(ctx->drawop, &ffb->drawop); >+ >+ /* If we were restoring the vertex registers, this is where >+ * we would do it. We would restore 32 32-bit words starting >+ * at ffb->suvtx.
>+ */ >+ >+ upa_writel(ctx->ppc, &ffb->ppc); >+ upa_writel(ctx->wid, &ffb->wid); >+ upa_writel(ctx->fg, &ffb->fg); >+ upa_writel(ctx->bg, &ffb->bg); >+ upa_writel(ctx->xclip, &ffb->xclip); >+ upa_writel(ctx->fbc, &ffb->fbc); >+ upa_writel(ctx->rop, &ffb->rop); >+ upa_writel(ctx->cmp, &ffb->cmp); >+ upa_writel(ctx->matchab, &ffb->matchab); >+ upa_writel(ctx->magnab, &ffb->magnab); >+ upa_writel(ctx->pmask, &ffb->pmask); >+ upa_writel(ctx->xpmask, &ffb->xpmask); >+ upa_writel(ctx->lpat, &ffb->lpat); >+ upa_writel(ctx->fontxy, &ffb->fontxy); >+ upa_writel(ctx->fontw, &ffb->fontw); >+ upa_writel(ctx->fontinc, &ffb->fontinc); >+ >+ /* stencil/stencilctl only exists on FFB2+ and later >+ * due to the introduction of 3DRAM-III. >+ */ >+ if (fpriv->ffb_type == ffb2_vertical_plus || >+ fpriv->ffb_type == ffb2_horizontal_plus) { >+ upa_writel(ctx->stencil, &ffb->stencil); >+ upa_writel(ctx->stencilctl, &ffb->stencilctl); >+ upa_writel(0x80000000, &ffb->fbc); >+ upa_writel((ctx->stencilctl | 0x80000), >+ &ffb->rawstencilctl); >+ upa_writel(ctx->fbc, &ffb->fbc); >+ } >+ >+ for (i = 0; i < 32; i++) >+ upa_writel(ctx->area_pattern[i], &ffb->pattern[i]); >+ upa_writel((ctx->ucsr & 0xf0000), &ffb->ucsr); >+ return; >+ } >+ >+ /* Restore drawop. */ >+ upa_writel(ctx->drawop, &ffb->drawop); >+ >+ /* If we were restoring the vertex registers, this is where >+ * we would do it. We would restore 32 32-bit words starting >+ * at ffb->suvtx. >+ */ >+ >+ /* Restore rendering attributes. 
*/ >+ >+ upa_writel(ctx->ppc, &ffb->ppc); /* Pixel Processor Control */ >+ upa_writel(ctx->wid, &ffb->wid); /* Current WID */ >+ upa_writel(ctx->fg, &ffb->fg); /* Constant FG color */ >+ upa_writel(ctx->bg, &ffb->bg); /* Constant BG color */ >+ upa_writel(ctx->consty, &ffb->consty); /* Constant Y */ >+ upa_writel(ctx->constz, &ffb->constz); /* Constant Z */ >+ upa_writel(ctx->xclip, &ffb->xclip); /* X plane clip */ >+ upa_writel(ctx->dcss, &ffb->dcss); /* Depth Cue Scale Slope */ >+ upa_writel(ctx->vclipmin, &ffb->vclipmin); /* Primary XY clip, minimum */ >+ upa_writel(ctx->vclipmax, &ffb->vclipmax); /* Primary XY clip, maximum */ >+ upa_writel(ctx->vclipzmin, &ffb->vclipzmin); /* Primary Z clip, minimum */ >+ upa_writel(ctx->vclipzmax, &ffb->vclipzmax); /* Primary Z clip, maximum */ >+ upa_writel(ctx->dcsf, &ffb->dcsf); /* Depth Cue Scale Front Bound */ >+ upa_writel(ctx->dcsb, &ffb->dcsb); /* Depth Cue Scale Back Bound */ >+ upa_writel(ctx->dczf, &ffb->dczf); /* Depth Cue Scale Z Front */ >+ upa_writel(ctx->dczb, &ffb->dczb); /* Depth Cue Scale Z Back */ >+ upa_writel(ctx->blendc, &ffb->blendc); /* Alpha Blend Control */ >+ upa_writel(ctx->blendc1, &ffb->blendc1); /* Alpha Blend Color 1 */ >+ upa_writel(ctx->blendc2, &ffb->blendc2); /* Alpha Blend Color 2 */ >+ upa_writel(ctx->fbc, &ffb->fbc); /* Frame Buffer Control */ >+ upa_writel(ctx->rop, &ffb->rop); /* Raster Operation */ >+ upa_writel(ctx->cmp, &ffb->cmp); /* Compare Controls */ >+ upa_writel(ctx->matchab, &ffb->matchab); /* Buffer A/B Match Ops */ >+ upa_writel(ctx->matchc, &ffb->matchc); /* Buffer C Match Ops */ >+ upa_writel(ctx->magnab, &ffb->magnab); /* Buffer A/B Magnitude Ops */ >+ upa_writel(ctx->magnc, &ffb->magnc); /* Buffer C Magnitude Ops */ >+ upa_writel(ctx->pmask, &ffb->pmask); /* RGB Plane Mask */ >+ upa_writel(ctx->xpmask, &ffb->xpmask); /* X Plane Mask */ >+ upa_writel(ctx->ypmask, &ffb->ypmask); /* Y Plane Mask */ >+ upa_writel(ctx->zpmask, &ffb->zpmask); /* Z Plane Mask */ >+ >+ /* 
Auxiliary Clips. */ >+ upa_writel(ctx->auxclip0min, &ffb->auxclip[0].min); >+ upa_writel(ctx->auxclip0max, &ffb->auxclip[0].max); >+ upa_writel(ctx->auxclip1min, &ffb->auxclip[1].min); >+ upa_writel(ctx->auxclip1max, &ffb->auxclip[1].max); >+ upa_writel(ctx->auxclip2min, &ffb->auxclip[2].min); >+ upa_writel(ctx->auxclip2max, &ffb->auxclip[2].max); >+ upa_writel(ctx->auxclip3min, &ffb->auxclip[3].min); >+ upa_writel(ctx->auxclip3max, &ffb->auxclip[3].max); >+ >+ upa_writel(ctx->lpat, &ffb->lpat); /* Line Pattern */ >+ upa_writel(ctx->fontxy, &ffb->fontxy); /* XY Font Coordinate */ >+ upa_writel(ctx->fontw, &ffb->fontw); /* Font Width */ >+ upa_writel(ctx->fontinc, &ffb->fontinc); /* Font X/Y Increment */ >+ >+ /* These registers/features only exist on FFB2 and later chips. */ >+ if (fpriv->ffb_type >= ffb2_prototype) { >+ upa_writel(ctx->dcss1, &ffb->dcss1); /* Depth Cue Scale Slope 1 */ >+ upa_writel(ctx->dcss2, &ffb->dcss2); /* Depth Cue Scale Slope 2 */ >+ upa_writel(ctx->dcss3, &ffb->dcss3); /* Depth Cue Scale Slope 3 */ >+ upa_writel(ctx->dcs2, &ffb->dcs2); /* Depth Cue Scale 2 */ >+ upa_writel(ctx->dcs3, &ffb->dcs3); /* Depth Cue Scale 3 */ >+ upa_writel(ctx->dcs4, &ffb->dcs4); /* Depth Cue Scale 4 */ >+ upa_writel(ctx->dcd2, &ffb->dcd2); /* Depth Cue Depth 2 */ >+ upa_writel(ctx->dcd3, &ffb->dcd3); /* Depth Cue Depth 3 */ >+ upa_writel(ctx->dcd4, &ffb->dcd4); /* Depth Cue Depth 4 */ >+ >+ /* And stencil/stencilctl only exists on FFB2+ and later >+ * due to the introduction of 3DRAM-III. >+ */ >+ if (fpriv->ffb_type == ffb2_vertical_plus || >+ fpriv->ffb_type == ffb2_horizontal_plus) { >+ /* Unfortunately, there is a hardware bug on >+ * the FFB2+ chips which prevents a normal write >+ * to the stencil control register from working >+ * as it should. >+ * >+ * The state controlled by the FFB stencilctl register >+ * really gets transferred to the per-buffer instances >+ * of the stencilctl register in the 3DRAM chips.
>+ * >+ * The bug is that FFB does not update buffer C correctly, >+ * so we have to do it by hand for them. >+ */ >+ >+ /* This will update buffers A and B. */ >+ upa_writel(ctx->stencil, &ffb->stencil); >+ upa_writel(ctx->stencilctl, &ffb->stencilctl); >+ >+ /* Force FFB to use buffer C 3dram regs. */ >+ upa_writel(0x80000000, &ffb->fbc); >+ upa_writel((ctx->stencilctl | 0x80000), >+ &ffb->rawstencilctl); >+ >+ /* Now restore the correct FBC controls. */ >+ upa_writel(ctx->fbc, &ffb->fbc); >+ } >+ } >+ >+ /* Restore the 32x32 area pattern. */ >+ for (i = 0; i < 32; i++) >+ upa_writel(ctx->area_pattern[i], &ffb->pattern[i]); >+ >+ /* Finally, restore the User Control/Status Register. >+ * The only state we really preserve here is the picking >+ * control. >+ */ >+ upa_writel((ctx->ucsr & 0xf0000), &ffb->ucsr); >+} >+ >+#define FFB_UCSR_FB_BUSY 0x01000000 >+#define FFB_UCSR_RP_BUSY 0x02000000 >+#define FFB_UCSR_ALL_BUSY (FFB_UCSR_RP_BUSY|FFB_UCSR_FB_BUSY) >+ >+static void FFBWait(ffb_fbcPtr ffb) >+{ >+ int limit = 100000; >+ >+ do { >+ u32 regval = upa_readl(&ffb->ucsr); >+ >+ if ((regval & FFB_UCSR_ALL_BUSY) == 0) >+ break; >+ } while (--limit); >+} >+ >+int ffb_context_switch(struct drm_device * dev, int old, int new) { >+ ffb_dev_priv_t *fpriv = (ffb_dev_priv_t *) dev->dev_private; >+ >+#if DRM_DMA_HISTOGRAM >+ dev->ctx_start = get_cycles(); >+#endif >+ >+ DRM_DEBUG("Context switch from %d to %d\n", old, new); >+ >+ if (new == dev->last_context || dev->last_context == 0) { >+ dev->last_context = new; >+ return 0; >+ } >+ >+ FFBWait(fpriv->regs); >+ ffb_save_context(fpriv, old); >+ ffb_restore_context(fpriv, old, new); >+ FFBWait(fpriv->regs); >+ >+ dev->last_context = new; >+ >+ return 0; >+} >+ >+int ffb_resctx(struct inode * inode, struct file * filp, unsigned int cmd, >+ unsigned long arg) { >+ drm_ctx_res_t res; >+ drm_ctx_t ctx; >+ int i; >+ >+ DRM_DEBUG("%d\n", DRM_RESERVED_CONTEXTS); >+ if (copy_from_user(&res, (drm_ctx_res_t __user *) arg,
sizeof(res))) >+ return -EFAULT; >+ if (res.count >= DRM_RESERVED_CONTEXTS) { >+ memset(&ctx, 0, sizeof(ctx)); >+ for (i = 0; i < DRM_RESERVED_CONTEXTS; i++) { >+ ctx.handle = i; >+ if (copy_to_user(&res.contexts[i], &i, sizeof(i))) >+ return -EFAULT; >+ } >+ } >+ res.count = DRM_RESERVED_CONTEXTS; >+ if (copy_to_user((drm_ctx_res_t __user *) arg, &res, sizeof(res))) >+ return -EFAULT; >+ return 0; >+} >+ >+int ffb_addctx(struct inode * inode, struct file * filp, unsigned int cmd, >+ unsigned long arg) { >+ drm_file_t *priv = filp->private_data; >+ struct drm_device *dev = priv->dev; >+ drm_ctx_t ctx; >+ int idx; >+ >+ if (copy_from_user(&ctx, (drm_ctx_t __user *) arg, sizeof(ctx))) >+ return -EFAULT; >+ idx = ffb_alloc_queue(dev, (ctx.flags & _DRM_CONTEXT_2DONLY)); >+ if (idx < 0) >+ return -ENFILE; >+ >+ DRM_DEBUG("%d\n", ctx.handle); >+ ctx.handle = idx; >+ if (copy_to_user((drm_ctx_t __user *) arg, &ctx, sizeof(ctx))) >+ return -EFAULT; >+ return 0; >+} >+ >+int ffb_modctx(struct inode * inode, struct file * filp, unsigned int cmd, >+ unsigned long arg) { >+ drm_file_t *priv = filp->private_data; >+ struct drm_device *dev = priv->dev; >+ ffb_dev_priv_t *fpriv = (ffb_dev_priv_t *) dev->dev_private; >+ struct ffb_hw_context *hwctx; >+ drm_ctx_t ctx; >+ int idx; >+ >+ if (copy_from_user(&ctx, (drm_ctx_t __user *) arg, sizeof(ctx))) >+ return -EFAULT; >+ >+ idx = ctx.handle; >+ if (idx <= 0 || idx >= FFB_MAX_CTXS) >+ return -EINVAL; >+ >+ hwctx = fpriv->hw_state[idx - 1]; >+ if (hwctx == NULL) >+ return -EINVAL; >+ >+ if ((ctx.flags & _DRM_CONTEXT_2DONLY) == 0) >+ hwctx->is_2d_only = 0; >+ else >+ hwctx->is_2d_only = 1; >+ >+ return 0; >+} >+ >+int ffb_getctx(struct inode * inode, struct file * filp, unsigned int cmd, >+ unsigned long arg) { >+ drm_file_t *priv = filp->private_data; >+ struct drm_device *dev = priv->dev; >+ ffb_dev_priv_t *fpriv = (ffb_dev_priv_t *) dev->dev_private; >+ struct ffb_hw_context *hwctx; >+ drm_ctx_t ctx; >+ int idx; >+ >+ if 
(copy_from_user(&ctx, (drm_ctx_t __user *) arg, sizeof(ctx))) >+ return -EFAULT; >+ >+ idx = ctx.handle; >+ if (idx <= 0 || idx >= FFB_MAX_CTXS) >+ return -EINVAL; >+ >+ hwctx = fpriv->hw_state[idx - 1]; >+ if (hwctx == NULL) >+ return -EINVAL; >+ >+ if (hwctx->is_2d_only != 0) >+ ctx.flags = _DRM_CONTEXT_2DONLY; >+ else >+ ctx.flags = 0; >+ >+ if (copy_to_user((drm_ctx_t __user *) arg, &ctx, sizeof(ctx))) >+ return -EFAULT; >+ >+ return 0; >+} >+ >+int ffb_switchctx(struct inode * inode, struct file * filp, unsigned int cmd, >+ unsigned long arg) { >+ drm_file_t *priv = filp->private_data; >+ struct drm_device *dev = priv->dev; >+ drm_ctx_t ctx; >+ >+ if (copy_from_user(&ctx, (drm_ctx_t __user *) arg, sizeof(ctx))) >+ return -EFAULT; >+ DRM_DEBUG("%d\n", ctx.handle); >+ return ffb_context_switch(dev, dev->last_context, ctx.handle); >+} >+ >+int ffb_newctx(struct inode * inode, struct file * filp, unsigned int cmd, >+ unsigned long arg) { >+ drm_ctx_t ctx; >+ >+ if (copy_from_user(&ctx, (drm_ctx_t __user *) arg, sizeof(ctx))) >+ return -EFAULT; >+ DRM_DEBUG("%d\n", ctx.handle); >+ >+ return 0; >+} >+ >+int ffb_rmctx(struct inode * inode, struct file * filp, unsigned int cmd, >+ unsigned long arg) { >+ drm_ctx_t ctx; >+ drm_file_t *priv = filp->private_data; >+ struct drm_device *dev = priv->dev; >+ ffb_dev_priv_t *fpriv = (ffb_dev_priv_t *) dev->dev_private; >+ int idx; >+ >+ if (copy_from_user(&ctx, (drm_ctx_t __user *) arg, sizeof(ctx))) >+ return -EFAULT; >+ DRM_DEBUG("%d\n", ctx.handle); >+ >+ idx = ctx.handle - 1; >+ if (idx < 0 || idx >= FFB_MAX_CTXS) >+ return -EINVAL; >+ >+ if (fpriv->hw_state[idx] != NULL) { >+ kfree(fpriv->hw_state[idx]); >+ fpriv->hw_state[idx] = NULL; >+ } >+ return 0; >+} >+ >+static void ffb_driver_reclaim_buffers_locked(struct drm_device * dev) >+{ >+ ffb_dev_priv_t *fpriv = (ffb_dev_priv_t *) dev->dev_private; >+ int context = _DRM_LOCKING_CONTEXT(dev->lock.hw_lock->lock); >+ int idx; >+ >+ idx = context - 1; >+ if (fpriv && >+ 
context != DRM_KERNEL_CONTEXT && fpriv->hw_state[idx] != NULL) { >+ kfree(fpriv->hw_state[idx]); >+ fpriv->hw_state[idx] = NULL; >+ } >+} >+ >+static void ffb_driver_lastclose(struct drm_device * dev) >+{ >+ if (dev->dev_private) >+ kfree(dev->dev_private); >+} >+ >+static void ffb_driver_unload(struct drm_device * dev) >+{ >+ if (ffb_position != NULL) >+ kfree(ffb_position); >+} >+ >+static int ffb_driver_kernel_context_switch_unlock(struct drm_device *dev) >+{ >+ dev->lock.filp = 0; >+ { >+ __volatile__ unsigned int *plock = &dev->lock.hw_lock->lock; >+ unsigned int old, new, prev, ctx; >+ >+ ctx = lock.context; >+ do { >+ old = *plock; >+ new = ctx; >+ prev = cmpxchg(plock, old, new); >+ } while (prev != old); >+ } >+ wake_up_interruptible(&dev->lock.lock_queue); >+} >+ >+unsigned long ffb_driver_get_map_ofs(drm_map_t * map) >+{ >+ return (map->offset & 0xffffffff); >+} >+ >+unsigned long ffb_driver_get_reg_ofs(struct drm_device * dev) >+{ >+ ffb_dev_priv_t *ffb_priv = (ffb_dev_priv_t *) dev->dev_private; >+ >+ if (ffb_priv) >+ return ffb_priv->card_phys_base; >+ >+ return 0; >+} >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/ffb_drv.c linux-2.6.23.i686/drivers/char/drm/ffb_drv.c >--- linux-2.6.23.i686.orig/drivers/char/drm/ffb_drv.c 1970-01-01 01:00:00.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/ffb_drv.c 2008-01-06 09:24:57.000000000 +0100 >@@ -0,0 +1,329 @@ >+/* $Id$ >+ * ffb_drv.c: Creator/Creator3D direct rendering driver. >+ * >+ * Copyright (C) 2000 David S. Miller (davem@redhat.com) >+ */ >+ >+#include <linux/sched.h> >+#include <linux/smp_lock.h> >+#include <asm/shmparam.h> >+#include <asm/oplib.h> >+#include <asm/upa.h> >+ >+#include "drmP.h" >+#include "ffb_drv.h" >+ >+#define DRIVER_AUTHOR "David S. 
Miller" >+ >+#define DRIVER_NAME "ffb" >+#define DRIVER_DESC "Creator/Creator3D" >+#define DRIVER_DATE "20000517" >+ >+#define DRIVER_MAJOR 0 >+#define DRIVER_MINOR 0 >+#define DRIVER_PATCHLEVEL 1 >+ >+typedef struct _ffb_position_t { >+ int node; >+ int root; >+} ffb_position_t; >+ >+static ffb_position_t *ffb_position; >+ >+static void get_ffb_type(ffb_dev_priv_t *ffb_priv, int instance) >+{ >+ volatile unsigned char *strap_bits; >+ unsigned char val; >+ >+ strap_bits = (volatile unsigned char *) >+ (ffb_priv->card_phys_base + 0x00200000UL); >+ >+ /* Don't ask, you have to read the value twice for whatever >+ * reason to get correct contents. >+ */ >+ val = upa_readb(strap_bits); >+ val = upa_readb(strap_bits); >+ switch (val & 0x78) { >+ case (0x0 << 5) | (0x0 << 3): >+ ffb_priv->ffb_type = ffb1_prototype; >+ printk("ffb%d: Detected FFB1 pre-FCS prototype\n", instance); >+ break; >+ case (0x0 << 5) | (0x1 << 3): >+ ffb_priv->ffb_type = ffb1_standard; >+ printk("ffb%d: Detected FFB1\n", instance); >+ break; >+ case (0x0 << 5) | (0x3 << 3): >+ ffb_priv->ffb_type = ffb1_speedsort; >+ printk("ffb%d: Detected FFB1-SpeedSort\n", instance); >+ break; >+ case (0x1 << 5) | (0x0 << 3): >+ ffb_priv->ffb_type = ffb2_prototype; >+ printk("ffb%d: Detected FFB2/vertical pre-FCS prototype\n", instance); >+ break; >+ case (0x1 << 5) | (0x1 << 3): >+ ffb_priv->ffb_type = ffb2_vertical; >+ printk("ffb%d: Detected FFB2/vertical\n", instance); >+ break; >+ case (0x1 << 5) | (0x2 << 3): >+ ffb_priv->ffb_type = ffb2_vertical_plus; >+ printk("ffb%d: Detected FFB2+/vertical\n", instance); >+ break; >+ case (0x2 << 5) | (0x0 << 3): >+ ffb_priv->ffb_type = ffb2_horizontal; >+ printk("ffb%d: Detected FFB2/horizontal\n", instance); >+ break; >+ case (0x2 << 5) | (0x2 << 3): >+ ffb_priv->ffb_type = ffb2_horizontal; >+ printk("ffb%d: Detected FFB2+/horizontal\n", instance); >+ break; >+ default: >+ ffb_priv->ffb_type = ffb2_vertical; >+ printk("ffb%d: Unknown boardID[%08x], assuming FFB2\n", 
instance, val); >+ break; >+ }; >+} >+ >+static void ffb_apply_upa_parent_ranges(int parent, >+ struct linux_prom64_registers *regs) >+{ >+ struct linux_prom64_ranges ranges[PROMREG_MAX]; >+ char name[128]; >+ int len, i; >+ >+ prom_getproperty(parent, "name", name, sizeof(name)); >+ if (strcmp(name, "upa") != 0) >+ return; >+ >+ len = prom_getproperty(parent, "ranges", (void *) ranges, sizeof(ranges)); >+ if (len <= 0) >+ return; >+ >+ len /= sizeof(struct linux_prom64_ranges); >+ for (i = 0; i < len; i++) { >+ struct linux_prom64_ranges *rng = &ranges[i]; >+ u64 phys_addr = regs->phys_addr; >+ >+ if (phys_addr >= rng->ot_child_base && >+ phys_addr < (rng->ot_child_base + rng->or_size)) { >+ regs->phys_addr -= rng->ot_child_base; >+ regs->phys_addr += rng->ot_parent_base; >+ return; >+ } >+ } >+ >+ return; >+} >+ >+static int ffb_init_one(struct drm_device *dev, int prom_node, int parent_node, >+ int instance) >+{ >+ struct linux_prom64_registers regs[2*PROMREG_MAX]; >+ ffb_dev_priv_t *ffb_priv = (ffb_dev_priv_t *)dev->dev_private; >+ int i; >+ >+ ffb_priv->prom_node = prom_node; >+ if (prom_getproperty(ffb_priv->prom_node, "reg", >+ (void *)regs, sizeof(regs)) <= 0) { >+ return -EINVAL; >+ } >+ ffb_apply_upa_parent_ranges(parent_node, ®s[0]); >+ ffb_priv->card_phys_base = regs[0].phys_addr; >+ ffb_priv->regs = (ffb_fbcPtr) >+ (regs[0].phys_addr + 0x00600000UL); >+ get_ffb_type(ffb_priv, instance); >+ for (i = 0; i < FFB_MAX_CTXS; i++) >+ ffb_priv->hw_state[i] = NULL; >+ >+ return 0; >+} >+ >+static int __init ffb_count_siblings(int root) >+{ >+ int node, child, count = 0; >+ >+ child = prom_getchild(root); >+ for (node = prom_searchsiblings(child, "SUNW,ffb"); node; >+ node = prom_searchsiblings(prom_getsibling(node), "SUNW,ffb")) >+ count++; >+ >+ return count; >+} >+ >+static int __init ffb_scan_siblings(int root, int instance) >+{ >+ int node, child; >+ >+ child = prom_getchild(root); >+ for (node = prom_searchsiblings(child, "SUNW,ffb"); node; >+ node = 
prom_searchsiblings(prom_getsibling(node), "SUNW,ffb")) { >+ ffb_position[instance].node = node; >+ ffb_position[instance].root = root; >+ instance++; >+ } >+ >+ return instance; >+} >+ >+static drm_map_t *ffb_find_map(struct file *filp, unsigned long off) >+{ >+ drm_file_t *priv = filp->private_data; >+ struct drm_device *dev; >+ drm_map_list_t *r_list; >+ struct list_head *list; >+ drm_map_t *map; >+ >+ if (!priv || (dev = priv->dev) == NULL) >+ return NULL; >+ >+ list_for_each(list, &dev->maplist->head) { >+ unsigned long uoff; >+ >+ r_list = (drm_map_list_t *)list; >+ map = r_list->map; >+ if (!map) >+ continue; >+ uoff = (map->offset & 0xffffffff); >+ if (uoff == off) >+ return map; >+ } >+ >+ return NULL; >+} >+ >+unsigned long ffb_get_unmapped_area(struct file *filp, >+ unsigned long hint, >+ unsigned long len, >+ unsigned long pgoff, >+ unsigned long flags) >+{ >+ drm_map_t *map = ffb_find_map(filp, pgoff << PAGE_SHIFT); >+ unsigned long addr = -ENOMEM; >+ >+ if (!map) >+ return get_unmapped_area(NULL, hint, len, pgoff, flags); >+ >+ if (map->type == _DRM_FRAME_BUFFER || >+ map->type == _DRM_REGISTERS) { >+#ifdef HAVE_ARCH_FB_UNMAPPED_AREA >+ addr = get_fb_unmapped_area(filp, hint, len, pgoff, flags); >+#else >+ addr = get_unmapped_area(NULL, hint, len, pgoff, flags); >+#endif >+ } else if (map->type == _DRM_SHM && SHMLBA > PAGE_SIZE) { >+ unsigned long slack = SHMLBA - PAGE_SIZE; >+ >+ addr = get_unmapped_area(NULL, hint, len + slack, pgoff, flags); >+ if (!(addr & ~PAGE_MASK)) { >+ unsigned long kvirt = (unsigned long) map->handle; >+ >+ if ((kvirt & (SHMLBA - 1)) != (addr & (SHMLBA - 1))) { >+ unsigned long koff, aoff; >+ >+ koff = kvirt & (SHMLBA - 1); >+ aoff = addr & (SHMLBA - 1); >+ if (koff < aoff) >+ koff += SHMLBA; >+ >+ addr += (koff - aoff); >+ } >+ } >+ } else { >+ addr = get_unmapped_area(NULL, hint, len, pgoff, flags); >+ } >+ >+ return addr; >+} >+ >+/* This functions must be here since it references drm_numdevs) >+ * which drm_drv.h 
 declares. >+ */ >+static int ffb_driver_firstopen(struct drm_device *dev) >+{ >+ ffb_dev_priv_t *ffb_priv; >+ struct drm_device *temp_dev; >+ int ret = 0; >+ int i; >+ >+ /* Check for the case where no device was found. */ >+ if (ffb_position == NULL) >+ return -ENODEV; >+ >+ /* Find our instance number by finding our device in dev structure */ >+ for (i = 0; i < drm_numdevs; i++) { >+ temp_dev = &(drm_device[i]); >+ if(temp_dev == dev) >+ break; >+ } >+ >+ if (i == drm_numdevs) >+ return -ENODEV; >+ >+ ffb_priv = kmalloc(sizeof(ffb_dev_priv_t), GFP_KERNEL); >+ if (!ffb_priv) >+ return -ENOMEM; >+ memset(ffb_priv, 0, sizeof(*ffb_priv)); >+ dev->dev_private = ffb_priv; >+ >+ ret = ffb_init_one(dev, >+ ffb_position[i].node, >+ ffb_position[i].root, >+ i); >+ return ret; >+} >+ >+#include "drm_pciids.h" >+ >+static struct pci_device_id pciidlist[] = { >+ ffb_PCI_IDS >+}; >+ >+static struct drm_driver ffb_driver = { >+ .release = ffb_driver_reclaim_buffers_locked, >+ .firstopen = ffb_driver_firstopen, >+ .lastclose = ffb_driver_lastclose, >+ .unload = ffb_driver_unload, >+ .kernel_context_switch = ffb_context_switch, >+ .kernel_context_switch_unlock = ffb_driver_kernel_context_switch_unlock, >+ .get_map_ofs = ffb_driver_get_map_ofs, >+ .get_reg_ofs = ffb_driver_get_reg_ofs, >+ .reclaim_buffers = drm_core_reclaim_buffers, >+ .fops = { >+ .owner = THIS_MODULE, >+ .open = drm_open, >+ .release = drm_release, >+ .ioctl = drm_ioctl, >+ .mmap = drm_mmap, >+ .fasync = drm_fasync, >+ .poll = drm_poll, >+ .get_unmapped_area = ffb_get_unmapped_area, >+ }, >+}; >+ >+static int probe(struct pci_dev *pdev, const struct pci_device_id *ent) >+{ >+ return drm_probe(pdev, ent, &ffb_driver); >+} >+ >+static struct pci_driver pci_driver = { >+ .name = DRIVER_NAME, >+ .id_table = pciidlist, >+ .probe = probe, >+ .remove = __devexit_p(drm_cleanup_pci), >+}; >+ >+static int __init ffb_init(void) >+{ >+ return drm_init(&pci_driver, pciidlist, &ffb_driver); >+} >+ >+static void __exit ffb_exit(void) 
>+{ >+ drm_exit(&pci_driver); >+} >+ >+module_init(ffb_init); >+module_exit(ffb_exit); >+ >+MODULE_AUTHOR( DRIVER_AUTHOR ); >+MODULE_DESCRIPTION( DRIVER_DESC ); >+MODULE_LICENSE("GPL and additional rights"); >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/ffb_drv.h linux-2.6.23.i686/drivers/char/drm/ffb_drv.h >--- linux-2.6.23.i686.orig/drivers/char/drm/ffb_drv.h 1970-01-01 01:00:00.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/ffb_drv.h 2008-01-06 09:24:57.000000000 +0100 >@@ -0,0 +1,284 @@ >+/* $Id$ >+ * ffb_drv.h: Creator/Creator3D direct rendering driver. >+ * >+ * Copyright (C) 2000 David S. Miller (davem@redhat.com) >+ */ >+ >+/* Auxilliary clips. */ >+typedef struct { >+ volatile unsigned int min; >+ volatile unsigned int max; >+} ffb_auxclip, *ffb_auxclipPtr; >+ >+/* FFB register set. */ >+typedef struct _ffb_fbc { >+ /* Next vertex registers, on the right we list which drawops >+ * use said register and the logical name the register has in >+ * that context. >+ */ /* DESCRIPTION DRAWOP(NAME) */ >+/*0x00*/unsigned int pad1[3]; /* Reserved */ >+/*0x0c*/volatile unsigned int alpha; /* ALPHA Transparency */ >+/*0x10*/volatile unsigned int red; /* RED */ >+/*0x14*/volatile unsigned int green; /* GREEN */ >+/*0x18*/volatile unsigned int blue; /* BLUE */ >+/*0x1c*/volatile unsigned int z; /* DEPTH */ >+/*0x20*/volatile unsigned int y; /* Y triangle(DOYF) */ >+ /* aadot(DYF) */ >+ /* ddline(DYF) */ >+ /* aaline(DYF) */ >+/*0x24*/volatile unsigned int x; /* X triangle(DOXF) */ >+ /* aadot(DXF) */ >+ /* ddline(DXF) */ >+ /* aaline(DXF) */ >+/*0x28*/unsigned int pad2[2]; /* Reserved */ >+/*0x30*/volatile unsigned int ryf; /* Y (alias to DOYF) ddline(RYF) */ >+ /* aaline(RYF) */ >+ /* triangle(RYF) */ >+/*0x34*/volatile unsigned int rxf; /* X ddline(RXF) */ >+ /* aaline(RXF) */ >+ /* triangle(RXF) */ >+/*0x38*/unsigned int pad3[2]; /* Reserved */ >+/*0x40*/volatile unsigned int dmyf; /* Y (alias to DOYF) triangle(DMYF) */ >+/*0x44*/volatile unsigned int 
dmxf; /* X triangle(DMXF) */ >+/*0x48*/unsigned int pad4[2]; /* Reserved */ >+/*0x50*/volatile unsigned int ebyi; /* Y (alias to RYI) polygon(EBYI) */ >+/*0x54*/volatile unsigned int ebxi; /* X polygon(EBXI) */ >+/*0x58*/unsigned int pad5[2]; /* Reserved */ >+/*0x60*/volatile unsigned int by; /* Y brline(RYI) */ >+ /* fastfill(OP) */ >+ /* polygon(YI) */ >+ /* rectangle(YI) */ >+ /* bcopy(SRCY) */ >+ /* vscroll(SRCY) */ >+/*0x64*/volatile unsigned int bx; /* X brline(RXI) */ >+ /* polygon(XI) */ >+ /* rectangle(XI) */ >+ /* bcopy(SRCX) */ >+ /* vscroll(SRCX) */ >+ /* fastfill(GO) */ >+/*0x68*/volatile unsigned int dy; /* destination Y fastfill(DSTY) */ >+ /* bcopy(DSRY) */ >+ /* vscroll(DSRY) */ >+/*0x6c*/volatile unsigned int dx; /* destination X fastfill(DSTX) */ >+ /* bcopy(DSTX) */ >+ /* vscroll(DSTX) */ >+/*0x70*/volatile unsigned int bh; /* Y (alias to RYI) brline(DYI) */ >+ /* dot(DYI) */ >+ /* polygon(ETYI) */ >+ /* Height fastfill(H) */ >+ /* bcopy(H) */ >+ /* vscroll(H) */ >+ /* Y count fastfill(NY) */ >+/*0x74*/volatile unsigned int bw; /* X dot(DXI) */ >+ /* brline(DXI) */ >+ /* polygon(ETXI) */ >+ /* fastfill(W) */ >+ /* bcopy(W) */ >+ /* vscroll(W) */ >+ /* fastfill(NX) */ >+/*0x78*/unsigned int pad6[2]; /* Reserved */ >+/*0x80*/unsigned int pad7[32]; /* Reserved */ >+ >+ /* Setup Unit's vertex state register */ >+/*100*/ volatile unsigned int suvtx; >+/*104*/ unsigned int pad8[63]; /* Reserved */ >+ >+ /* Frame Buffer Control Registers */ >+/*200*/ volatile unsigned int ppc; /* Pixel Processor Control */ >+/*204*/ volatile unsigned int wid; /* Current WID */ >+/*208*/ volatile unsigned int fg; /* FG data */ >+/*20c*/ volatile unsigned int bg; /* BG data */ >+/*210*/ volatile unsigned int consty; /* Constant Y */ >+/*214*/ volatile unsigned int constz; /* Constant Z */ >+/*218*/ volatile unsigned int xclip; /* X Clip */ >+/*21c*/ volatile unsigned int dcss; /* Depth Cue Scale Slope */ >+/*220*/ volatile unsigned int vclipmin; /* Viewclip XY Min Bounds 
*/ >+/*224*/ volatile unsigned int vclipmax; /* Viewclip XY Max Bounds */ >+/*228*/ volatile unsigned int vclipzmin; /* Viewclip Z Min Bounds */ >+/*22c*/ volatile unsigned int vclipzmax; /* Viewclip Z Max Bounds */ >+/*230*/ volatile unsigned int dcsf; /* Depth Cue Scale Front Bound */ >+/*234*/ volatile unsigned int dcsb; /* Depth Cue Scale Back Bound */ >+/*238*/ volatile unsigned int dczf; /* Depth Cue Z Front */ >+/*23c*/ volatile unsigned int dczb; /* Depth Cue Z Back */ >+/*240*/ unsigned int pad9; /* Reserved */ >+/*244*/ volatile unsigned int blendc; /* Alpha Blend Control */ >+/*248*/ volatile unsigned int blendc1; /* Alpha Blend Color 1 */ >+/*24c*/ volatile unsigned int blendc2; /* Alpha Blend Color 2 */ >+/*250*/ volatile unsigned int fbramitc; /* FB RAM Interleave Test Control */ >+/*254*/ volatile unsigned int fbc; /* Frame Buffer Control */ >+/*258*/ volatile unsigned int rop; /* Raster OPeration */ >+/*25c*/ volatile unsigned int cmp; /* Frame Buffer Compare */ >+/*260*/ volatile unsigned int matchab; /* Buffer AB Match Mask */ >+/*264*/ volatile unsigned int matchc; /* Buffer C(YZ) Match Mask */ >+/*268*/ volatile unsigned int magnab; /* Buffer AB Magnitude Mask */ >+/*26c*/ volatile unsigned int magnc; /* Buffer C(YZ) Magnitude Mask */ >+/*270*/ volatile unsigned int fbcfg0; /* Frame Buffer Config 0 */ >+/*274*/ volatile unsigned int fbcfg1; /* Frame Buffer Config 1 */ >+/*278*/ volatile unsigned int fbcfg2; /* Frame Buffer Config 2 */ >+/*27c*/ volatile unsigned int fbcfg3; /* Frame Buffer Config 3 */ >+/*280*/ volatile unsigned int ppcfg; /* Pixel Processor Config */ >+/*284*/ volatile unsigned int pick; /* Picking Control */ >+/*288*/ volatile unsigned int fillmode; /* FillMode */ >+/*28c*/ volatile unsigned int fbramwac; /* FB RAM Write Address Control */ >+/*290*/ volatile unsigned int pmask; /* RGB PlaneMask */ >+/*294*/ volatile unsigned int xpmask; /* X PlaneMask */ >+/*298*/ volatile unsigned int ypmask; /* Y PlaneMask */ >+/*29c*/ 
volatile unsigned int zpmask; /* Z PlaneMask */ >+/*2a0*/ ffb_auxclip auxclip[4]; /* Auxilliary Viewport Clip */ >+ >+ /* New 3dRAM III support regs */ >+/*2c0*/ volatile unsigned int rawblend2; >+/*2c4*/ volatile unsigned int rawpreblend; >+/*2c8*/ volatile unsigned int rawstencil; >+/*2cc*/ volatile unsigned int rawstencilctl; >+/*2d0*/ volatile unsigned int threedram1; >+/*2d4*/ volatile unsigned int threedram2; >+/*2d8*/ volatile unsigned int passin; >+/*2dc*/ volatile unsigned int rawclrdepth; >+/*2e0*/ volatile unsigned int rawpmask; >+/*2e4*/ volatile unsigned int rawcsrc; >+/*2e8*/ volatile unsigned int rawmatch; >+/*2ec*/ volatile unsigned int rawmagn; >+/*2f0*/ volatile unsigned int rawropblend; >+/*2f4*/ volatile unsigned int rawcmp; >+/*2f8*/ volatile unsigned int rawwac; >+/*2fc*/ volatile unsigned int fbramid; >+ >+/*300*/ volatile unsigned int drawop; /* Draw OPeration */ >+/*304*/ unsigned int pad10[2]; /* Reserved */ >+/*30c*/ volatile unsigned int lpat; /* Line Pattern control */ >+/*310*/ unsigned int pad11; /* Reserved */ >+/*314*/ volatile unsigned int fontxy; /* XY Font coordinate */ >+/*318*/ volatile unsigned int fontw; /* Font Width */ >+/*31c*/ volatile unsigned int fontinc; /* Font Increment */ >+/*320*/ volatile unsigned int font; /* Font bits */ >+/*324*/ unsigned int pad12[3]; /* Reserved */ >+/*330*/ volatile unsigned int blend2; >+/*334*/ volatile unsigned int preblend; >+/*338*/ volatile unsigned int stencil; >+/*33c*/ volatile unsigned int stencilctl; >+ >+/*340*/ unsigned int pad13[4]; /* Reserved */ >+/*350*/ volatile unsigned int dcss1; /* Depth Cue Scale Slope 1 */ >+/*354*/ volatile unsigned int dcss2; /* Depth Cue Scale Slope 2 */ >+/*358*/ volatile unsigned int dcss3; /* Depth Cue Scale Slope 3 */ >+/*35c*/ volatile unsigned int widpmask; >+/*360*/ volatile unsigned int dcs2; >+/*364*/ volatile unsigned int dcs3; >+/*368*/ volatile unsigned int dcs4; >+/*36c*/ unsigned int pad14; /* Reserved */ >+/*370*/ volatile unsigned 
int dcd2; >+/*374*/ volatile unsigned int dcd3; >+/*378*/ volatile unsigned int dcd4; >+/*37c*/ unsigned int pad15; /* Reserved */ >+/*380*/ volatile unsigned int pattern[32]; /* area Pattern */ >+/*400*/ unsigned int pad16[8]; /* Reserved */ >+/*420*/ volatile unsigned int reset; /* chip RESET */ >+/*424*/ unsigned int pad17[247]; /* Reserved */ >+/*800*/ volatile unsigned int devid; /* Device ID */ >+/*804*/ unsigned int pad18[63]; /* Reserved */ >+/*900*/ volatile unsigned int ucsr; /* User Control & Status Register */ >+/*904*/ unsigned int pad19[31]; /* Reserved */ >+/*980*/ volatile unsigned int mer; /* Mode Enable Register */ >+/*984*/ unsigned int pad20[1439]; /* Reserved */ >+} ffb_fbc, *ffb_fbcPtr; >+ >+struct ffb_hw_context { >+ int is_2d_only; >+ >+ unsigned int ppc; >+ unsigned int wid; >+ unsigned int fg; >+ unsigned int bg; >+ unsigned int consty; >+ unsigned int constz; >+ unsigned int xclip; >+ unsigned int dcss; >+ unsigned int vclipmin; >+ unsigned int vclipmax; >+ unsigned int vclipzmin; >+ unsigned int vclipzmax; >+ unsigned int dcsf; >+ unsigned int dcsb; >+ unsigned int dczf; >+ unsigned int dczb; >+ unsigned int blendc; >+ unsigned int blendc1; >+ unsigned int blendc2; >+ unsigned int fbc; >+ unsigned int rop; >+ unsigned int cmp; >+ unsigned int matchab; >+ unsigned int matchc; >+ unsigned int magnab; >+ unsigned int magnc; >+ unsigned int pmask; >+ unsigned int xpmask; >+ unsigned int ypmask; >+ unsigned int zpmask; >+ unsigned int auxclip0min; >+ unsigned int auxclip0max; >+ unsigned int auxclip1min; >+ unsigned int auxclip1max; >+ unsigned int auxclip2min; >+ unsigned int auxclip2max; >+ unsigned int auxclip3min; >+ unsigned int auxclip3max; >+ unsigned int drawop; >+ unsigned int lpat; >+ unsigned int fontxy; >+ unsigned int fontw; >+ unsigned int fontinc; >+ unsigned int area_pattern[32]; >+ unsigned int ucsr; >+ unsigned int stencil; >+ unsigned int stencilctl; >+ unsigned int dcss1; >+ unsigned int dcss2; >+ unsigned int dcss3; >+ 
unsigned int dcs2; >+ unsigned int dcs3; >+ unsigned int dcs4; >+ unsigned int dcd2; >+ unsigned int dcd3; >+ unsigned int dcd4; >+ unsigned int mer; >+}; >+ >+#define FFB_MAX_CTXS 32 >+ >+enum ffb_chip_type { >+ ffb1_prototype = 0, /* Early pre-FCS FFB */ >+ ffb1_standard, /* First FCS FFB, 100Mhz UPA, 66MHz gclk */ >+ ffb1_speedsort, /* Second FCS FFB, 100Mhz UPA, 75MHz gclk */ >+ ffb2_prototype, /* Early pre-FCS vertical FFB2 */ >+ ffb2_vertical, /* First FCS FFB2/vertical, 100Mhz UPA, 100MHZ gclk, >+ 75(SingleBuffer)/83(DoubleBuffer) MHz fclk */ >+ ffb2_vertical_plus, /* Second FCS FFB2/vertical, same timings */ >+ ffb2_horizontal, /* First FCS FFB2/horizontal, same timings as FFB2/vert */ >+ ffb2_horizontal_plus, /* Second FCS FFB2/horizontal, same timings */ >+ afb_m3, /* FCS Elite3D, 3 float chips */ >+ afb_m6 /* FCS Elite3D, 6 float chips */ >+}; >+ >+typedef struct ffb_dev_priv { >+ /* Misc software state. */ >+ int prom_node; >+ enum ffb_chip_type ffb_type; >+ u64 card_phys_base; >+ struct miscdevice miscdev; >+ >+ /* Controller registers. */ >+ ffb_fbcPtr regs; >+ >+ /* Context table. 
*/ >+ struct ffb_hw_context *hw_state[FFB_MAX_CTXS]; >+} ffb_dev_priv_t; >+ >+extern unsigned long ffb_get_unmapped_area(struct file *filp, >+ unsigned long hint, >+ unsigned long len, >+ unsigned long pgoff, >+ unsigned long flags); >+extern unsigned long ffb_driver_get_map_ofs(drm_map_t *map); >+extern unsigned long ffb_driver_get_reg_ofs(struct drm_device *dev); >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/i810_dma.c linux-2.6.23.i686/drivers/char/drm/i810_dma.c >--- linux-2.6.23.i686.orig/drivers/char/drm/i810_dma.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/i810_dma.c 2008-01-06 09:24:57.000000000 +0100 >@@ -30,21 +30,40 @@ > * > */ > >+#include <linux/interrupt.h> /* For task queue support */ >+#include <linux/delay.h> >+#include <linux/pagemap.h> >+ > #include "drmP.h" > #include "drm.h" > #include "i810_drm.h" > #include "i810_drv.h" >-#include <linux/interrupt.h> /* For task queue support */ >-#include <linux/delay.h> >-#include <linux/pagemap.h> > > #define I810_BUF_FREE 2 > #define I810_BUF_CLIENT 1 >-#define I810_BUF_HARDWARE 0 >+#define I810_BUF_HARDWARE 0 > > #define I810_BUF_UNMAPPED 0 > #define I810_BUF_MAPPED 1 > >+static inline void i810_print_status_page(struct drm_device * dev) >+{ >+ struct drm_device_dma *dma = dev->dma; >+ drm_i810_private_t *dev_priv = dev->dev_private; >+ u32 *temp = dev_priv->hw_status_page; >+ int i; >+ >+ DRM_DEBUG("hw_status: Interrupt Status : %x\n", temp[0]); >+ DRM_DEBUG("hw_status: LpRing Head ptr : %x\n", temp[1]); >+ DRM_DEBUG("hw_status: IRing Head ptr : %x\n", temp[2]); >+ DRM_DEBUG("hw_status: Reserved : %x\n", temp[3]); >+ DRM_DEBUG("hw_status: Last Render: %x\n", temp[4]); >+ DRM_DEBUG("hw_status: Driver Counter : %d\n", temp[5]); >+ for (i = 6; i < dma->buf_count + 6; i++) { >+ DRM_DEBUG("buffer status idx : %d used: %d\n", i - 6, temp[i]); >+ } >+} >+ > static struct drm_buf *i810_freelist_get(struct drm_device * dev) > { > struct drm_device_dma *dma = dev->dma; >@@ 
-848,7 +867,7 @@ static void i810_dma_quiescent(struct dr > drm_i810_private_t *dev_priv = dev->dev_private; > RING_LOCALS; > >-/* printk("%s\n", __FUNCTION__); */ >+/* printk("%s\n", __FUNCTION__); */ > > i810_kernel_lost_context(dev); > >@@ -869,7 +888,7 @@ static int i810_flush_queue(struct drm_d > int i, ret = 0; > RING_LOCALS; > >-/* printk("%s\n", __FUNCTION__); */ >+/* printk("%s\n", __FUNCTION__); */ > > i810_kernel_lost_context(dev); > >@@ -897,7 +916,7 @@ static int i810_flush_queue(struct drm_d > } > > /* Must be called with the lock held */ >-static void i810_reclaim_buffers(struct drm_device * dev, >+static void i810_reclaim_buffers(struct drm_device *dev, > struct drm_file *file_priv) > { > struct drm_device_dma *dma = dev->dma; >@@ -1166,7 +1185,6 @@ static int i810_ov0_flip(struct drm_devi > drm_i810_private_t *dev_priv = (drm_i810_private_t *) dev->dev_private; > > LOCK_TEST_WITH_RETURN(dev, file_priv); >- > //Tell the overlay to update > I810_WRITE(0x30000, dev_priv->overlay_physical | 0x80000000); > >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/i810_drm.h linux-2.6.23.i686/drivers/char/drm/i810_drm.h >--- linux-2.6.23.i686.orig/drivers/char/drm/i810_drm.h 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/i810_drm.h 2008-01-06 09:24:57.000000000 +0100 >@@ -119,24 +119,6 @@ typedef struct _drm_i810_init { > unsigned int pitch_bits; > } drm_i810_init_t; > >-/* This is the init structure prior to v1.2 */ >-typedef struct _drm_i810_pre12_init { >- drm_i810_init_func_t func; >- unsigned int mmio_offset; >- unsigned int buffers_offset; >- int sarea_priv_offset; >- unsigned int ring_start; >- unsigned int ring_end; >- unsigned int ring_size; >- unsigned int front_offset; >- unsigned int back_offset; >- unsigned int depth_offset; >- unsigned int w; >- unsigned int h; >- unsigned int pitch; >- unsigned int pitch_bits; >-} drm_i810_pre12_init_t; >- > /* Warning: If you change the SAREA structure you must change the Xserver > 
* structure as well */ > >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/i810_drv.c linux-2.6.23.i686/drivers/char/drm/i810_drv.c >--- linux-2.6.23.i686.orig/drivers/char/drm/i810_drv.c 2007-10-09 22:31:38.000000000 +0200 >+++ linux-2.6.23.i686/drivers/char/drm/i810_drv.c 2008-01-06 09:24:57.000000000 +0100 >@@ -41,9 +41,10 @@ static struct pci_device_id pciidlist[] > i810_PCI_IDS > }; > >+static int probe(struct pci_dev *pdev, const struct pci_device_id *ent); > static struct drm_driver driver = { > .driver_features = >- DRIVER_USE_AGP | DRIVER_REQUIRE_AGP | DRIVER_USE_MTRR | >+ DRIVER_USE_AGP | DRIVER_REQUIRE_AGP | /* DRIVER_USE_MTRR | */ > DRIVER_HAVE_DMA | DRIVER_DMA_QUEUE, > .dev_priv_size = sizeof(drm_i810_buf_priv_t), > .load = i810_driver_load, >@@ -56,19 +57,20 @@ static struct drm_driver driver = { > .get_reg_ofs = drm_core_get_reg_ofs, > .ioctls = i810_ioctls, > .fops = { >- .owner = THIS_MODULE, >- .open = drm_open, >- .release = drm_release, >- .ioctl = drm_ioctl, >- .mmap = drm_mmap, >- .poll = drm_poll, >- .fasync = drm_fasync, >- }, >- >+ .owner = THIS_MODULE, >+ .open = drm_open, >+ .release = drm_release, >+ .ioctl = drm_ioctl, >+ .mmap = drm_mmap, >+ .poll = drm_poll, >+ .fasync = drm_fasync, >+ }, > .pci_driver = { >- .name = DRIVER_NAME, >- .id_table = pciidlist, >- }, >+ .name = DRIVER_NAME, >+ .id_table = pciidlist, >+ .probe = probe, >+ .remove = __devexit_p(drm_cleanup_pci), >+ }, > > .name = DRIVER_NAME, > .desc = DRIVER_DESC, >@@ -78,10 +80,15 @@ static struct drm_driver driver = { > .patchlevel = DRIVER_PATCHLEVEL, > }; > >+static int probe(struct pci_dev *pdev, const struct pci_device_id *ent) >+{ >+ return drm_get_dev(pdev, ent, &driver); >+} >+ > static int __init i810_init(void) > { > driver.num_ioctls = i810_max_ioctl; >- return drm_init(&driver); >+ return drm_init(&driver, pciidlist); > } > > static void __exit i810_exit(void) >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/i810_drv.h 
linux-2.6.23.i686/drivers/char/drm/i810_drv.h >--- linux-2.6.23.i686.orig/drivers/char/drm/i810_drv.h 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/i810_drv.h 2008-01-06 09:24:57.000000000 +0100 >@@ -25,7 +25,7 @@ > * DEALINGS IN THE SOFTWARE. > * > * Authors: Rickard E. (Rik) Faith <faith@valinux.com> >- * Jeff Hartmann <jhartmann@valinux.com> >+ * Jeff Hartmann <jhartmann@valinux.com> > * > */ > >@@ -134,7 +134,7 @@ extern int i810_max_ioctl; > #define I810_ADDR(reg) (I810_BASE(reg) + reg) > #define I810_DEREF(reg) *(__volatile__ int *)I810_ADDR(reg) > #define I810_READ(reg) I810_DEREF(reg) >-#define I810_WRITE(reg,val) do { I810_DEREF(reg) = val; } while (0) >+#define I810_WRITE(reg,val) do { I810_DEREF(reg) = val; } while (0) > #define I810_DEREF16(reg) *(__volatile__ u16 *)I810_ADDR(reg) > #define I810_READ16(reg) I810_DEREF16(reg) > #define I810_WRITE16(reg,val) do { I810_DEREF16(reg) = val; } while (0) >@@ -144,8 +144,8 @@ extern int i810_max_ioctl; > volatile char *virt; > > #define BEGIN_LP_RING(n) do { \ >- if (I810_VERBOSE) \ >- DRM_DEBUG("BEGIN_LP_RING(%d) in %s\n", n, __FUNCTION__); \ >+ if (I810_VERBOSE) \ >+ DRM_DEBUG("BEGIN_LP_RING(%d) in %s\n", n, __FUNCTION__);\ > if (dev_priv->ring.space < n*4) \ > i810_wait_ring(dev, n*4); \ > dev_priv->ring.space -= n*4; \ >@@ -154,20 +154,20 @@ extern int i810_max_ioctl; > virt = dev_priv->ring.virtual_start; \ > } while (0) > >-#define ADVANCE_LP_RING() do { \ >- if (I810_VERBOSE) DRM_DEBUG("ADVANCE_LP_RING\n"); \ >- dev_priv->ring.tail = outring; \ >- I810_WRITE(LP_RING + RING_TAIL, outring); \ >+#define ADVANCE_LP_RING() do { \ >+ if (I810_VERBOSE) DRM_DEBUG("ADVANCE_LP_RING\n"); \ >+ dev_priv->ring.tail = outring; \ >+ I810_WRITE(LP_RING + RING_TAIL, outring); \ > } while(0) > >-#define OUT_RING(n) do { \ >+#define OUT_RING(n) do { \ > if (I810_VERBOSE) DRM_DEBUG(" OUT_RING %x\n", (int)(n)); \ >- *(volatile unsigned int *)(virt + outring) = n; \ >- outring += 4; \ >- outring 
&= ringmask; \ >+ *(volatile unsigned int *)(virt + outring) = n; \ >+ outring += 4; \ >+ outring &= ringmask; \ > } while (0) > >-#define GFX_OP_USER_INTERRUPT ((0<<29)|(2<<23)) >+#define GFX_OP_USER_INTERRUPT ((0<<29)|(2<<23)) > #define GFX_OP_BREAKPOINT_INTERRUPT ((0<<29)|(1<<23)) > #define CMD_REPORT_HEAD (7<<23) > #define CMD_STORE_DWORD_IDX ((0x21<<23) | 0x1) >@@ -184,28 +184,28 @@ extern int i810_max_ioctl; > > #define I810REG_HWSTAM 0x02098 > #define I810REG_INT_IDENTITY_R 0x020a4 >-#define I810REG_INT_MASK_R 0x020a8 >+#define I810REG_INT_MASK_R 0x020a8 > #define I810REG_INT_ENABLE_R 0x020a0 > >-#define LP_RING 0x2030 >-#define HP_RING 0x2040 >-#define RING_TAIL 0x00 >+#define LP_RING 0x2030 >+#define HP_RING 0x2040 >+#define RING_TAIL 0x00 > #define TAIL_ADDR 0x000FFFF8 >-#define RING_HEAD 0x04 >-#define HEAD_WRAP_COUNT 0xFFE00000 >-#define HEAD_WRAP_ONE 0x00200000 >-#define HEAD_ADDR 0x001FFFFC >-#define RING_START 0x08 >-#define START_ADDR 0x00FFFFF8 >-#define RING_LEN 0x0C >-#define RING_NR_PAGES 0x000FF000 >-#define RING_REPORT_MASK 0x00000006 >-#define RING_REPORT_64K 0x00000002 >-#define RING_REPORT_128K 0x00000004 >-#define RING_NO_REPORT 0x00000000 >-#define RING_VALID_MASK 0x00000001 >-#define RING_VALID 0x00000001 >-#define RING_INVALID 0x00000000 >+#define RING_HEAD 0x04 >+#define HEAD_WRAP_COUNT 0xFFE00000 >+#define HEAD_WRAP_ONE 0x00200000 >+#define HEAD_ADDR 0x001FFFFC >+#define RING_START 0x08 >+#define START_ADDR 0x00FFFFF8 >+#define RING_LEN 0x0C >+#define RING_NR_PAGES 0x000FF000 >+#define RING_REPORT_MASK 0x00000006 >+#define RING_REPORT_64K 0x00000002 >+#define RING_REPORT_128K 0x00000004 >+#define RING_NO_REPORT 0x00000000 >+#define RING_VALID_MASK 0x00000001 >+#define RING_VALID 0x00000001 >+#define RING_INVALID 0x00000000 > > #define GFX_OP_SCISSOR ((0x3<<29)|(0x1c<<24)|(0x10<<19)) > #define SC_UPDATE_SCISSOR (0x1<<1) >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/i830_dma.c linux-2.6.23.i686/drivers/char/drm/i830_dma.c >--- 
linux-2.6.23.i686.orig/drivers/char/drm/i830_dma.c 2008-01-06 18:54:41.000000000 +0100
>+++ linux-2.6.23.i686/drivers/char/drm/i830_dma.c 1970-01-01 01:00:00.000000000 +0100
>@@ -1,1553 +0,0 @@
>-/* i830_dma.c -- DMA support for the I830 -*- linux-c -*-
>- * Created: Mon Dec 13 01:50:01 1999 by jhartmann@precisioninsight.com
>- *
>- * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
>- * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
>- * All Rights Reserved.
>- *
>- * Permission is hereby granted, free of charge, to any person obtaining a
>- * copy of this software and associated documentation files (the "Software"),
>- * to deal in the Software without restriction, including without limitation
>- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
>- * and/or sell copies of the Software, and to permit persons to whom the
>- * Software is furnished to do so, subject to the following conditions:
>- *
>- * The above copyright notice and this permission notice (including the next
>- * paragraph) shall be included in all copies or substantial portions of the
>- * Software.
>- *
>- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
>- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
>- * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
>- * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
>- * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
>- * DEALINGS IN THE SOFTWARE.
>- *
>- * Authors: Rickard E. (Rik) Faith <faith@valinux.com>
>- * Jeff Hartmann <jhartmann@valinux.com>
>- * Keith Whitwell <keith@tungstengraphics.com>
>- * Abraham vd Merwe <abraham@2d3d.co.za>
>- *
>- */
>-
>-#include "drmP.h"
>-#include "drm.h"
>-#include "i830_drm.h"
>-#include "i830_drv.h"
>-#include <linux/interrupt.h> /* For task queue support */
>-#include <linux/pagemap.h> /* For FASTCALL on unlock_page() */
>-#include <linux/delay.h>
>-#include <asm/uaccess.h>
>-
>-#define I830_BUF_FREE 2
>-#define I830_BUF_CLIENT 1
>-#define I830_BUF_HARDWARE 0
>-
>-#define I830_BUF_UNMAPPED 0
>-#define I830_BUF_MAPPED 1
>-
>-static struct drm_buf *i830_freelist_get(struct drm_device * dev)
>-{
>- struct drm_device_dma *dma = dev->dma;
>- int i;
>- int used;
>-
>- /* Linear search might not be the best solution */
>-
>- for (i = 0; i < dma->buf_count; i++) {
>- struct drm_buf *buf = dma->buflist[i];
>- drm_i830_buf_priv_t *buf_priv = buf->dev_private;
>- /* In use is already a pointer */
>- used = cmpxchg(buf_priv->in_use, I830_BUF_FREE,
>- I830_BUF_CLIENT);
>- if (used == I830_BUF_FREE) {
>- return buf;
>- }
>- }
>- return NULL;
>-}
>-
>-/* This should only be called if the buffer is not sent to the hardware
>- * yet, the hardware updates in use for us once its on the ring buffer.
>- */
>-
>-static int i830_freelist_put(struct drm_device * dev, struct drm_buf * buf)
>-{
>- drm_i830_buf_priv_t *buf_priv = buf->dev_private;
>- int used;
>-
>- /* In use is already a pointer */
>- used = cmpxchg(buf_priv->in_use, I830_BUF_CLIENT, I830_BUF_FREE);
>- if (used != I830_BUF_CLIENT) {
>- DRM_ERROR("Freeing buffer thats not in use : %d\n", buf->idx);
>- return -EINVAL;
>- }
>-
>- return 0;
>-}
>-
>-static int i830_mmap_buffers(struct file *filp, struct vm_area_struct *vma)
>-{
>- struct drm_file *priv = filp->private_data;
>- struct drm_device *dev;
>- drm_i830_private_t *dev_priv;
>- struct drm_buf *buf;
>- drm_i830_buf_priv_t *buf_priv;
>-
>- lock_kernel();
>- dev = priv->head->dev;
>- dev_priv = dev->dev_private;
>- buf = dev_priv->mmap_buffer;
>- buf_priv = buf->dev_private;
>-
>- vma->vm_flags |= (VM_IO | VM_DONTCOPY);
>- vma->vm_file = filp;
>-
>- buf_priv->currently_mapped = I830_BUF_MAPPED;
>- unlock_kernel();
>-
>- if (io_remap_pfn_range(vma, vma->vm_start,
>- vma->vm_pgoff,
>- vma->vm_end - vma->vm_start, vma->vm_page_prot))
>- return -EAGAIN;
>- return 0;
>-}
>-
>-static const struct file_operations i830_buffer_fops = {
>- .open = drm_open,
>- .release = drm_release,
>- .ioctl = drm_ioctl,
>- .mmap = i830_mmap_buffers,
>- .fasync = drm_fasync,
>-};
>-
>-static int i830_map_buffer(struct drm_buf * buf, struct drm_file *file_priv)
>-{
>- struct drm_device *dev = file_priv->head->dev;
>- drm_i830_buf_priv_t *buf_priv = buf->dev_private;
>- drm_i830_private_t *dev_priv = dev->dev_private;
>- const struct file_operations *old_fops;
>- unsigned long virtual;
>- int retcode = 0;
>-
>- if (buf_priv->currently_mapped == I830_BUF_MAPPED)
>- return -EINVAL;
>-
>- down_write(&current->mm->mmap_sem);
>- old_fops = file_priv->filp->f_op;
>- file_priv->filp->f_op = &i830_buffer_fops;
>- dev_priv->mmap_buffer = buf;
>- virtual = do_mmap(file_priv->filp, 0, buf->total, PROT_READ | PROT_WRITE,
>- MAP_SHARED, buf->bus_address);
>- dev_priv->mmap_buffer = NULL;
>- file_priv->filp->f_op = old_fops;
>- if (IS_ERR((void *)virtual)) { /* ugh */
>- /* Real error */
>- DRM_ERROR("mmap error\n");
>- retcode = PTR_ERR((void *)virtual);
>- buf_priv->virtual = NULL;
>- } else {
>- buf_priv->virtual = (void __user *)virtual;
>- }
>- up_write(&current->mm->mmap_sem);
>-
>- return retcode;
>-}
>-
>-static int i830_unmap_buffer(struct drm_buf * buf)
>-{
>- drm_i830_buf_priv_t *buf_priv = buf->dev_private;
>- int retcode = 0;
>-
>- if (buf_priv->currently_mapped != I830_BUF_MAPPED)
>- return -EINVAL;
>-
>- down_write(&current->mm->mmap_sem);
>- retcode = do_munmap(current->mm,
>- (unsigned long)buf_priv->virtual,
>- (size_t) buf->total);
>- up_write(&current->mm->mmap_sem);
>-
>- buf_priv->currently_mapped = I830_BUF_UNMAPPED;
>- buf_priv->virtual = NULL;
>-
>- return retcode;
>-}
>-
>-static int i830_dma_get_buffer(struct drm_device * dev, drm_i830_dma_t * d,
>- struct drm_file *file_priv)
>-{
>- struct drm_buf *buf;
>- drm_i830_buf_priv_t *buf_priv;
>- int retcode = 0;
>-
>- buf = i830_freelist_get(dev);
>- if (!buf) {
>- retcode = -ENOMEM;
>- DRM_DEBUG("retcode=%d\n", retcode);
>- return retcode;
>- }
>-
>- retcode = i830_map_buffer(buf, file_priv);
>- if (retcode) {
>- i830_freelist_put(dev, buf);
>- DRM_ERROR("mapbuf failed, retcode %d\n", retcode);
>- return retcode;
>- }
>- buf->file_priv = file_priv;
>- buf_priv = buf->dev_private;
>- d->granted = 1;
>- d->request_idx = buf->idx;
>- d->request_size = buf->total;
>- d->virtual = buf_priv->virtual;
>-
>- return retcode;
>-}
>-
>-static int i830_dma_cleanup(struct drm_device * dev)
>-{
>- struct drm_device_dma *dma = dev->dma;
>-
>- /* Make sure interrupts are disabled here because the uninstall ioctl
>- * may not have been called from userspace and after dev_private
>- * is freed, it's too late.
>- */
>- if (dev->irq_enabled)
>- drm_irq_uninstall(dev);
>-
>- if (dev->dev_private) {
>- int i;
>- drm_i830_private_t *dev_priv =
>- (drm_i830_private_t *) dev->dev_private;
>-
>- if (dev_priv->ring.virtual_start) {
>- drm_core_ioremapfree(&dev_priv->ring.map, dev);
>- }
>- if (dev_priv->hw_status_page) {
>- pci_free_consistent(dev->pdev, PAGE_SIZE,
>- dev_priv->hw_status_page,
>- dev_priv->dma_status_page);
>- /* Need to rewrite hardware status page */
>- I830_WRITE(0x02080, 0x1ffff000);
>- }
>-
>- drm_free(dev->dev_private, sizeof(drm_i830_private_t),
>- DRM_MEM_DRIVER);
>- dev->dev_private = NULL;
>-
>- for (i = 0; i < dma->buf_count; i++) {
>- struct drm_buf *buf = dma->buflist[i];
>- drm_i830_buf_priv_t *buf_priv = buf->dev_private;
>- if (buf_priv->kernel_virtual && buf->total)
>- drm_core_ioremapfree(&buf_priv->map, dev);
>- }
>- }
>- return 0;
>-}
>-
>-int i830_wait_ring(struct drm_device * dev, int n, const char *caller)
>-{
>- drm_i830_private_t *dev_priv = dev->dev_private;
>- drm_i830_ring_buffer_t *ring = &(dev_priv->ring);
>- int iters = 0;
>- unsigned long end;
>- unsigned int last_head = I830_READ(LP_RING + RING_HEAD) & HEAD_ADDR;
>-
>- end = jiffies + (HZ * 3);
>- while (ring->space < n) {
>- ring->head = I830_READ(LP_RING + RING_HEAD) & HEAD_ADDR;
>- ring->space = ring->head - (ring->tail + 8);
>- if (ring->space < 0)
>- ring->space += ring->Size;
>-
>- if (ring->head != last_head) {
>- end = jiffies + (HZ * 3);
>- last_head = ring->head;
>- }
>-
>- iters++;
>- if (time_before(end, jiffies)) {
>- DRM_ERROR("space: %d wanted %d\n", ring->space, n);
>- DRM_ERROR("lockup\n");
>- goto out_wait_ring;
>- }
>- udelay(1);
>- dev_priv->sarea_priv->perf_boxes |= I830_BOX_WAIT;
>- }
>-
>- out_wait_ring:
>- return iters;
>-}
>-
>-static void i830_kernel_lost_context(struct drm_device * dev)
>-{
>- drm_i830_private_t *dev_priv = dev->dev_private;
>- drm_i830_ring_buffer_t *ring = &(dev_priv->ring);
>-
>- ring->head = I830_READ(LP_RING + RING_HEAD) &
HEAD_ADDR;
>- ring->tail = I830_READ(LP_RING + RING_TAIL) & TAIL_ADDR;
>- ring->space = ring->head - (ring->tail + 8);
>- if (ring->space < 0)
>- ring->space += ring->Size;
>-
>- if (ring->head == ring->tail)
>- dev_priv->sarea_priv->perf_boxes |= I830_BOX_RING_EMPTY;
>-}
>-
>-static int i830_freelist_init(struct drm_device * dev, drm_i830_private_t * dev_priv)
>-{
>- struct drm_device_dma *dma = dev->dma;
>- int my_idx = 36;
>- u32 *hw_status = (u32 *) (dev_priv->hw_status_page + my_idx);
>- int i;
>-
>- if (dma->buf_count > 1019) {
>- /* Not enough space in the status page for the freelist */
>- return -EINVAL;
>- }
>-
>- for (i = 0; i < dma->buf_count; i++) {
>- struct drm_buf *buf = dma->buflist[i];
>- drm_i830_buf_priv_t *buf_priv = buf->dev_private;
>-
>- buf_priv->in_use = hw_status++;
>- buf_priv->my_use_idx = my_idx;
>- my_idx += 4;
>-
>- *buf_priv->in_use = I830_BUF_FREE;
>-
>- buf_priv->map.offset = buf->bus_address;
>- buf_priv->map.size = buf->total;
>- buf_priv->map.type = _DRM_AGP;
>- buf_priv->map.flags = 0;
>- buf_priv->map.mtrr = 0;
>-
>- drm_core_ioremap(&buf_priv->map, dev);
>- buf_priv->kernel_virtual = buf_priv->map.handle;
>- }
>- return 0;
>-}
>-
>-static int i830_dma_initialize(struct drm_device * dev,
>- drm_i830_private_t * dev_priv,
>- drm_i830_init_t * init)
>-{
>- struct drm_map_list *r_list;
>-
>- memset(dev_priv, 0, sizeof(drm_i830_private_t));
>-
>- list_for_each_entry(r_list, &dev->maplist, head) {
>- if (r_list->map &&
>- r_list->map->type == _DRM_SHM &&
>- r_list->map->flags & _DRM_CONTAINS_LOCK) {
>- dev_priv->sarea_map = r_list->map;
>- break;
>- }
>- }
>-
>- if (!dev_priv->sarea_map) {
>- dev->dev_private = (void *)dev_priv;
>- i830_dma_cleanup(dev);
>- DRM_ERROR("can not find sarea!\n");
>- return -EINVAL;
>- }
>- dev_priv->mmio_map = drm_core_findmap(dev, init->mmio_offset);
>- if (!dev_priv->mmio_map) {
>- dev->dev_private = (void *)dev_priv;
>- i830_dma_cleanup(dev);
>- DRM_ERROR("can not find mmio map!\n");
>- return -EINVAL;
>- }
>- dev->agp_buffer_token = init->buffers_offset;
>- dev->agp_buffer_map = drm_core_findmap(dev, init->buffers_offset);
>- if (!dev->agp_buffer_map) {
>- dev->dev_private = (void *)dev_priv;
>- i830_dma_cleanup(dev);
>- DRM_ERROR("can not find dma buffer map!\n");
>- return -EINVAL;
>- }
>-
>- dev_priv->sarea_priv = (drm_i830_sarea_t *)
>- ((u8 *) dev_priv->sarea_map->handle + init->sarea_priv_offset);
>-
>- dev_priv->ring.Start = init->ring_start;
>- dev_priv->ring.End = init->ring_end;
>- dev_priv->ring.Size = init->ring_size;
>-
>- dev_priv->ring.map.offset = dev->agp->base + init->ring_start;
>- dev_priv->ring.map.size = init->ring_size;
>- dev_priv->ring.map.type = _DRM_AGP;
>- dev_priv->ring.map.flags = 0;
>- dev_priv->ring.map.mtrr = 0;
>-
>- drm_core_ioremap(&dev_priv->ring.map, dev);
>-
>- if (dev_priv->ring.map.handle == NULL) {
>- dev->dev_private = (void *)dev_priv;
>- i830_dma_cleanup(dev);
>- DRM_ERROR("can not ioremap virtual address for"
>- " ring buffer\n");
>- return -ENOMEM;
>- }
>-
>- dev_priv->ring.virtual_start = dev_priv->ring.map.handle;
>-
>- dev_priv->ring.tail_mask = dev_priv->ring.Size - 1;
>-
>- dev_priv->w = init->w;
>- dev_priv->h = init->h;
>- dev_priv->pitch = init->pitch;
>- dev_priv->back_offset = init->back_offset;
>- dev_priv->depth_offset = init->depth_offset;
>- dev_priv->front_offset = init->front_offset;
>-
>- dev_priv->front_di1 = init->front_offset | init->pitch_bits;
>- dev_priv->back_di1 = init->back_offset | init->pitch_bits;
>- dev_priv->zi1 = init->depth_offset | init->pitch_bits;
>-
>- DRM_DEBUG("front_di1 %x\n", dev_priv->front_di1);
>- DRM_DEBUG("back_offset %x\n", dev_priv->back_offset);
>- DRM_DEBUG("back_di1 %x\n", dev_priv->back_di1);
>- DRM_DEBUG("pitch_bits %x\n", init->pitch_bits);
>-
>- dev_priv->cpp = init->cpp;
>- /* We are using separate values as placeholders for mechanisms for
>- * private backbuffer/depthbuffer usage.
>- */
>-
>- dev_priv->back_pitch = init->back_pitch;
>- dev_priv->depth_pitch = init->depth_pitch;
>- dev_priv->do_boxes = 0;
>- dev_priv->use_mi_batchbuffer_start = 0;
>-
>- /* Program Hardware Status Page */
>- dev_priv->hw_status_page =
>- pci_alloc_consistent(dev->pdev, PAGE_SIZE,
>- &dev_priv->dma_status_page);
>- if (!dev_priv->hw_status_page) {
>- dev->dev_private = (void *)dev_priv;
>- i830_dma_cleanup(dev);
>- DRM_ERROR("Can not allocate hardware status page\n");
>- return -ENOMEM;
>- }
>- memset(dev_priv->hw_status_page, 0, PAGE_SIZE);
>- DRM_DEBUG("hw status page @ %p\n", dev_priv->hw_status_page);
>-
>- I830_WRITE(0x02080, dev_priv->dma_status_page);
>- DRM_DEBUG("Enabled hardware status page\n");
>-
>- /* Now we need to init our freelist */
>- if (i830_freelist_init(dev, dev_priv) != 0) {
>- dev->dev_private = (void *)dev_priv;
>- i830_dma_cleanup(dev);
>- DRM_ERROR("Not enough space in the status page for"
>- " the freelist\n");
>- return -ENOMEM;
>- }
>- dev->dev_private = (void *)dev_priv;
>-
>- return 0;
>-}
>-
>-static int i830_dma_init(struct drm_device *dev, void *data,
>- struct drm_file *file_priv)
>-{
>- drm_i830_private_t *dev_priv;
>- drm_i830_init_t *init = data;
>- int retcode = 0;
>-
>- switch (init->func) {
>- case I830_INIT_DMA:
>- dev_priv = drm_alloc(sizeof(drm_i830_private_t),
>- DRM_MEM_DRIVER);
>- if (dev_priv == NULL)
>- return -ENOMEM;
>- retcode = i830_dma_initialize(dev, dev_priv, init);
>- break;
>- case I830_CLEANUP_DMA:
>- retcode = i830_dma_cleanup(dev);
>- break;
>- default:
>- retcode = -EINVAL;
>- break;
>- }
>-
>- return retcode;
>-}
>-
>-#define GFX_OP_STIPPLE ((0x3<<29)|(0x1d<<24)|(0x83<<16))
>-#define ST1_ENABLE (1<<16)
>-#define ST1_MASK (0xffff)
>-
>-/* Most efficient way to verify state for the i830 is as it is
>- * emitted. Non-conformant state is silently dropped.
>- */
>-static void i830EmitContextVerified(struct drm_device * dev, unsigned int *code)
>-{
>- drm_i830_private_t *dev_priv = dev->dev_private;
>- int i, j = 0;
>- unsigned int tmp;
>- RING_LOCALS;
>-
>- BEGIN_LP_RING(I830_CTX_SETUP_SIZE + 4);
>-
>- for (i = 0; i < I830_CTXREG_BLENDCOLR0; i++) {
>- tmp = code[i];
>- if ((tmp & (7 << 29)) == CMD_3D &&
>- (tmp & (0x1f << 24)) < (0x1d << 24)) {
>- OUT_RING(tmp);
>- j++;
>- } else {
>- DRM_ERROR("Skipping %d\n", i);
>- }
>- }
>-
>- OUT_RING(STATE3D_CONST_BLEND_COLOR_CMD);
>- OUT_RING(code[I830_CTXREG_BLENDCOLR]);
>- j += 2;
>-
>- for (i = I830_CTXREG_VF; i < I830_CTXREG_MCSB0; i++) {
>- tmp = code[i];
>- if ((tmp & (7 << 29)) == CMD_3D &&
>- (tmp & (0x1f << 24)) < (0x1d << 24)) {
>- OUT_RING(tmp);
>- j++;
>- } else {
>- DRM_ERROR("Skipping %d\n", i);
>- }
>- }
>-
>- OUT_RING(STATE3D_MAP_COORD_SETBIND_CMD);
>- OUT_RING(code[I830_CTXREG_MCSB1]);
>- j += 2;
>-
>- if (j & 1)
>- OUT_RING(0);
>-
>- ADVANCE_LP_RING();
>-}
>-
>-static void i830EmitTexVerified(struct drm_device * dev, unsigned int *code)
>-{
>- drm_i830_private_t *dev_priv = dev->dev_private;
>- int i, j = 0;
>- unsigned int tmp;
>- RING_LOCALS;
>-
>- if (code[I830_TEXREG_MI0] == GFX_OP_MAP_INFO ||
>- (code[I830_TEXREG_MI0] & ~(0xf * LOAD_TEXTURE_MAP0)) ==
>- (STATE3D_LOAD_STATE_IMMEDIATE_2 | 4)) {
>-
>- BEGIN_LP_RING(I830_TEX_SETUP_SIZE);
>-
>- OUT_RING(code[I830_TEXREG_MI0]); /* TM0LI */
>- OUT_RING(code[I830_TEXREG_MI1]); /* TM0S0 */
>- OUT_RING(code[I830_TEXREG_MI2]); /* TM0S1 */
>- OUT_RING(code[I830_TEXREG_MI3]); /* TM0S2 */
>- OUT_RING(code[I830_TEXREG_MI4]); /* TM0S3 */
>- OUT_RING(code[I830_TEXREG_MI5]); /* TM0S4 */
>-
>- for (i = 6; i < I830_TEX_SETUP_SIZE; i++) {
>- tmp = code[i];
>- OUT_RING(tmp);
>- j++;
>- }
>-
>- if (j & 1)
>- OUT_RING(0);
>-
>- ADVANCE_LP_RING();
>- } else
>- printk("rejected packet %x\n", code[0]);
>-}
>-
>-static void i830EmitTexBlendVerified(struct drm_device * dev,
>- unsigned int *code, unsigned int num)
>-{
>- drm_i830_private_t *dev_priv = dev->dev_private;
>- int i, j = 0;
>- unsigned int tmp;
>- RING_LOCALS;
>-
>- if (!num)
>- return;
>-
>- BEGIN_LP_RING(num + 1);
>-
>- for (i = 0; i < num; i++) {
>- tmp = code[i];
>- OUT_RING(tmp);
>- j++;
>- }
>-
>- if (j & 1)
>- OUT_RING(0);
>-
>- ADVANCE_LP_RING();
>-}
>-
>-static void i830EmitTexPalette(struct drm_device * dev,
>- unsigned int *palette, int number, int is_shared)
>-{
>- drm_i830_private_t *dev_priv = dev->dev_private;
>- int i;
>- RING_LOCALS;
>-
>- return;
>-
>- BEGIN_LP_RING(258);
>-
>- if (is_shared == 1) {
>- OUT_RING(CMD_OP_MAP_PALETTE_LOAD |
>- MAP_PALETTE_NUM(0) | MAP_PALETTE_BOTH);
>- } else {
>- OUT_RING(CMD_OP_MAP_PALETTE_LOAD | MAP_PALETTE_NUM(number));
>- }
>- for (i = 0; i < 256; i++) {
>- OUT_RING(palette[i]);
>- }
>- OUT_RING(0);
>- /* KW: WHERE IS THE ADVANCE_LP_RING? This is effectively a noop!
>- */
>-}
>-
>-/* Need to do some additional checking when setting the dest buffer.
>- */
>-static void i830EmitDestVerified(struct drm_device * dev, unsigned int *code)
>-{
>- drm_i830_private_t *dev_priv = dev->dev_private;
>- unsigned int tmp;
>- RING_LOCALS;
>-
>- BEGIN_LP_RING(I830_DEST_SETUP_SIZE + 10);
>-
>- tmp = code[I830_DESTREG_CBUFADDR];
>- if (tmp == dev_priv->front_di1 || tmp == dev_priv->back_di1) {
>- if (((int)outring) & 8) {
>- OUT_RING(0);
>- OUT_RING(0);
>- }
>-
>- OUT_RING(CMD_OP_DESTBUFFER_INFO);
>- OUT_RING(BUF_3D_ID_COLOR_BACK |
>- BUF_3D_PITCH(dev_priv->back_pitch * dev_priv->cpp) |
>- BUF_3D_USE_FENCE);
>- OUT_RING(tmp);
>- OUT_RING(0);
>-
>- OUT_RING(CMD_OP_DESTBUFFER_INFO);
>- OUT_RING(BUF_3D_ID_DEPTH | BUF_3D_USE_FENCE |
>- BUF_3D_PITCH(dev_priv->depth_pitch * dev_priv->cpp));
>- OUT_RING(dev_priv->zi1);
>- OUT_RING(0);
>- } else {
>- DRM_ERROR("bad di1 %x (allow %x or %x)\n",
>- tmp, dev_priv->front_di1, dev_priv->back_di1);
>- }
>-
>- /* invarient:
>- */
>-
>- OUT_RING(GFX_OP_DESTBUFFER_VARS);
>- OUT_RING(code[I830_DESTREG_DV1]);
>-
>- OUT_RING(GFX_OP_DRAWRECT_INFO);
>-
OUT_RING(code[I830_DESTREG_DR1]);
>- OUT_RING(code[I830_DESTREG_DR2]);
>- OUT_RING(code[I830_DESTREG_DR3]);
>- OUT_RING(code[I830_DESTREG_DR4]);
>-
>- /* Need to verify this */
>- tmp = code[I830_DESTREG_SENABLE];
>- if ((tmp & ~0x3) == GFX_OP_SCISSOR_ENABLE) {
>- OUT_RING(tmp);
>- } else {
>- DRM_ERROR("bad scissor enable\n");
>- OUT_RING(0);
>- }
>-
>- OUT_RING(GFX_OP_SCISSOR_RECT);
>- OUT_RING(code[I830_DESTREG_SR1]);
>- OUT_RING(code[I830_DESTREG_SR2]);
>- OUT_RING(0);
>-
>- ADVANCE_LP_RING();
>-}
>-
>-static void i830EmitStippleVerified(struct drm_device * dev, unsigned int *code)
>-{
>- drm_i830_private_t *dev_priv = dev->dev_private;
>- RING_LOCALS;
>-
>- BEGIN_LP_RING(2);
>- OUT_RING(GFX_OP_STIPPLE);
>- OUT_RING(code[1]);
>- ADVANCE_LP_RING();
>-}
>-
>-static void i830EmitState(struct drm_device * dev)
>-{
>- drm_i830_private_t *dev_priv = dev->dev_private;
>- drm_i830_sarea_t *sarea_priv = dev_priv->sarea_priv;
>- unsigned int dirty = sarea_priv->dirty;
>-
>- DRM_DEBUG("%s %x\n", __FUNCTION__, dirty);
>-
>- if (dirty & I830_UPLOAD_BUFFERS) {
>- i830EmitDestVerified(dev, sarea_priv->BufferState);
>- sarea_priv->dirty &= ~I830_UPLOAD_BUFFERS;
>- }
>-
>- if (dirty & I830_UPLOAD_CTX) {
>- i830EmitContextVerified(dev, sarea_priv->ContextState);
>- sarea_priv->dirty &= ~I830_UPLOAD_CTX;
>- }
>-
>- if (dirty & I830_UPLOAD_TEX0) {
>- i830EmitTexVerified(dev, sarea_priv->TexState[0]);
>- sarea_priv->dirty &= ~I830_UPLOAD_TEX0;
>- }
>-
>- if (dirty & I830_UPLOAD_TEX1) {
>- i830EmitTexVerified(dev, sarea_priv->TexState[1]);
>- sarea_priv->dirty &= ~I830_UPLOAD_TEX1;
>- }
>-
>- if (dirty & I830_UPLOAD_TEXBLEND0) {
>- i830EmitTexBlendVerified(dev, sarea_priv->TexBlendState[0],
>- sarea_priv->TexBlendStateWordsUsed[0]);
>- sarea_priv->dirty &= ~I830_UPLOAD_TEXBLEND0;
>- }
>-
>- if (dirty & I830_UPLOAD_TEXBLEND1) {
>- i830EmitTexBlendVerified(dev, sarea_priv->TexBlendState[1],
>- sarea_priv->TexBlendStateWordsUsed[1]);
>- sarea_priv->dirty &= ~I830_UPLOAD_TEXBLEND1;
>- }
>-
>- if (dirty & I830_UPLOAD_TEX_PALETTE_SHARED) {
>- i830EmitTexPalette(dev, sarea_priv->Palette[0], 0, 1);
>- } else {
>- if (dirty & I830_UPLOAD_TEX_PALETTE_N(0)) {
>- i830EmitTexPalette(dev, sarea_priv->Palette[0], 0, 0);
>- sarea_priv->dirty &= ~I830_UPLOAD_TEX_PALETTE_N(0);
>- }
>- if (dirty & I830_UPLOAD_TEX_PALETTE_N(1)) {
>- i830EmitTexPalette(dev, sarea_priv->Palette[1], 1, 0);
>- sarea_priv->dirty &= ~I830_UPLOAD_TEX_PALETTE_N(1);
>- }
>-
>- /* 1.3:
>- */
>-#if 0
>- if (dirty & I830_UPLOAD_TEX_PALETTE_N(2)) {
>- i830EmitTexPalette(dev, sarea_priv->Palette2[0], 0, 0);
>- sarea_priv->dirty &= ~I830_UPLOAD_TEX_PALETTE_N(2);
>- }
>- if (dirty & I830_UPLOAD_TEX_PALETTE_N(3)) {
>- i830EmitTexPalette(dev, sarea_priv->Palette2[1], 1, 0);
>- sarea_priv->dirty &= ~I830_UPLOAD_TEX_PALETTE_N(2);
>- }
>-#endif
>- }
>-
>- /* 1.3:
>- */
>- if (dirty & I830_UPLOAD_STIPPLE) {
>- i830EmitStippleVerified(dev, sarea_priv->StippleState);
>- sarea_priv->dirty &= ~I830_UPLOAD_STIPPLE;
>- }
>-
>- if (dirty & I830_UPLOAD_TEX2) {
>- i830EmitTexVerified(dev, sarea_priv->TexState2);
>- sarea_priv->dirty &= ~I830_UPLOAD_TEX2;
>- }
>-
>- if (dirty & I830_UPLOAD_TEX3) {
>- i830EmitTexVerified(dev, sarea_priv->TexState3);
>- sarea_priv->dirty &= ~I830_UPLOAD_TEX3;
>- }
>-
>- if (dirty & I830_UPLOAD_TEXBLEND2) {
>- i830EmitTexBlendVerified(dev,
>- sarea_priv->TexBlendState2,
>- sarea_priv->TexBlendStateWordsUsed2);
>-
>- sarea_priv->dirty &= ~I830_UPLOAD_TEXBLEND2;
>- }
>-
>- if (dirty & I830_UPLOAD_TEXBLEND3) {
>- i830EmitTexBlendVerified(dev,
>- sarea_priv->TexBlendState3,
>- sarea_priv->TexBlendStateWordsUsed3);
>- sarea_priv->dirty &= ~I830_UPLOAD_TEXBLEND3;
>- }
>-}
>-
>-/* ================================================================
>- * Performance monitoring functions
>- */
>-
>-static void i830_fill_box(struct drm_device * dev,
>- int x, int y, int w, int h, int r, int g, int b)
>-{
>- drm_i830_private_t *dev_priv = dev->dev_private;
>- u32 color;
>- unsigned int BR13, CMD;
>- RING_LOCALS;
>-
>- BR13 = (0xF0 << 16) | (dev_priv->pitch * dev_priv->cpp) | (1 << 24);
>- CMD = XY_COLOR_BLT_CMD;
>- x += dev_priv->sarea_priv->boxes[0].x1;
>- y += dev_priv->sarea_priv->boxes[0].y1;
>-
>- if (dev_priv->cpp == 4) {
>- BR13 |= (1 << 25);
>- CMD |= (XY_COLOR_BLT_WRITE_ALPHA | XY_COLOR_BLT_WRITE_RGB);
>- color = (((0xff) << 24) | (r << 16) | (g << 8) | b);
>- } else {
>- color = (((r & 0xf8) << 8) |
>- ((g & 0xfc) << 3) | ((b & 0xf8) >> 3));
>- }
>-
>- BEGIN_LP_RING(6);
>- OUT_RING(CMD);
>- OUT_RING(BR13);
>- OUT_RING((y << 16) | x);
>- OUT_RING(((y + h) << 16) | (x + w));
>-
>- if (dev_priv->current_page == 1) {
>- OUT_RING(dev_priv->front_offset);
>- } else {
>- OUT_RING(dev_priv->back_offset);
>- }
>-
>- OUT_RING(color);
>- ADVANCE_LP_RING();
>-}
>-
>-static void i830_cp_performance_boxes(struct drm_device * dev)
>-{
>- drm_i830_private_t *dev_priv = dev->dev_private;
>-
>- /* Purple box for page flipping
>- */
>- if (dev_priv->sarea_priv->perf_boxes & I830_BOX_FLIP)
>- i830_fill_box(dev, 4, 4, 8, 8, 255, 0, 255);
>-
>- /* Red box if we have to wait for idle at any point
>- */
>- if (dev_priv->sarea_priv->perf_boxes & I830_BOX_WAIT)
>- i830_fill_box(dev, 16, 4, 8, 8, 255, 0, 0);
>-
>- /* Blue box: lost context?
>- */
>- if (dev_priv->sarea_priv->perf_boxes & I830_BOX_LOST_CONTEXT)
>- i830_fill_box(dev, 28, 4, 8, 8, 0, 0, 255);
>-
>- /* Yellow box for texture swaps
>- */
>- if (dev_priv->sarea_priv->perf_boxes & I830_BOX_TEXTURE_LOAD)
>- i830_fill_box(dev, 40, 4, 8, 8, 255, 255, 0);
>-
>- /* Green box if hardware never idles (as far as we can tell)
>- */
>- if (!(dev_priv->sarea_priv->perf_boxes & I830_BOX_RING_EMPTY))
>- i830_fill_box(dev, 64, 4, 8, 8, 0, 255, 0);
>-
>- /* Draw bars indicating number of buffers allocated
>- * (not a great measure, easily confused)
>- */
>- if (dev_priv->dma_used) {
>- int bar = dev_priv->dma_used / 10240;
>- if (bar > 100)
>- bar = 100;
>- if (bar < 1)
>- bar = 1;
>- i830_fill_box(dev, 4, 16, bar, 4, 196, 128, 128);
>- dev_priv->dma_used = 0;
>- }
>-
>- dev_priv->sarea_priv->perf_boxes = 0;
>-}
>-
>-static void i830_dma_dispatch_clear(struct drm_device * dev, int flags,
>- unsigned int clear_color,
>- unsigned int clear_zval,
>- unsigned int clear_depthmask)
>-{
>- drm_i830_private_t *dev_priv = dev->dev_private;
>- drm_i830_sarea_t *sarea_priv = dev_priv->sarea_priv;
>- int nbox = sarea_priv->nbox;
>- struct drm_clip_rect *pbox = sarea_priv->boxes;
>- int pitch = dev_priv->pitch;
>- int cpp = dev_priv->cpp;
>- int i;
>- unsigned int BR13, CMD, D_CMD;
>- RING_LOCALS;
>-
>- if (dev_priv->current_page == 1) {
>- unsigned int tmp = flags;
>-
>- flags &= ~(I830_FRONT | I830_BACK);
>- if (tmp & I830_FRONT)
>- flags |= I830_BACK;
>- if (tmp & I830_BACK)
>- flags |= I830_FRONT;
>- }
>-
>- i830_kernel_lost_context(dev);
>-
>- switch (cpp) {
>- case 2:
>- BR13 = (0xF0 << 16) | (pitch * cpp) | (1 << 24);
>- D_CMD = CMD = XY_COLOR_BLT_CMD;
>- break;
>- case 4:
>- BR13 = (0xF0 << 16) | (pitch * cpp) | (1 << 24) | (1 << 25);
>- CMD = (XY_COLOR_BLT_CMD | XY_COLOR_BLT_WRITE_ALPHA |
>- XY_COLOR_BLT_WRITE_RGB);
>- D_CMD = XY_COLOR_BLT_CMD;
>- if (clear_depthmask & 0x00ffffff)
>- D_CMD |= XY_COLOR_BLT_WRITE_RGB;
>- if (clear_depthmask & 0xff000000)
>- D_CMD |= XY_COLOR_BLT_WRITE_ALPHA;
>- break;
>- default:
>- BR13 = (0xF0 << 16) | (pitch * cpp) | (1 << 24);
>- D_CMD = CMD = XY_COLOR_BLT_CMD;
>- break;
>- }
>-
>- if (nbox > I830_NR_SAREA_CLIPRECTS)
>- nbox = I830_NR_SAREA_CLIPRECTS;
>-
>- for (i = 0; i < nbox; i++, pbox++) {
>- if (pbox->x1 > pbox->x2 ||
>- pbox->y1 > pbox->y2 ||
>- pbox->x2 > dev_priv->w || pbox->y2 > dev_priv->h)
>- continue;
>-
>- if (flags & I830_FRONT) {
>- DRM_DEBUG("clear front\n");
>- BEGIN_LP_RING(6);
>- OUT_RING(CMD);
>- OUT_RING(BR13);
>- OUT_RING((pbox->y1 << 16) | pbox->x1);
>- OUT_RING((pbox->y2 << 16) | pbox->x2);
>- OUT_RING(dev_priv->front_offset);
>- OUT_RING(clear_color);
>- ADVANCE_LP_RING();
>- }
>-
>- if (flags & I830_BACK) {
>- DRM_DEBUG("clear back\n");
>- BEGIN_LP_RING(6);
>- OUT_RING(CMD);
>- OUT_RING(BR13);
>- OUT_RING((pbox->y1 << 16) | pbox->x1);
>- OUT_RING((pbox->y2 << 16) | pbox->x2);
>- OUT_RING(dev_priv->back_offset);
>- OUT_RING(clear_color);
>- ADVANCE_LP_RING();
>- }
>-
>- if (flags & I830_DEPTH) {
>- DRM_DEBUG("clear depth\n");
>- BEGIN_LP_RING(6);
>- OUT_RING(D_CMD);
>- OUT_RING(BR13);
>- OUT_RING((pbox->y1 << 16) | pbox->x1);
>- OUT_RING((pbox->y2 << 16) | pbox->x2);
>- OUT_RING(dev_priv->depth_offset);
>- OUT_RING(clear_zval);
>- ADVANCE_LP_RING();
>- }
>- }
>-}
>-
>-static void i830_dma_dispatch_swap(struct drm_device * dev)
>-{
>- drm_i830_private_t *dev_priv = dev->dev_private;
>- drm_i830_sarea_t *sarea_priv = dev_priv->sarea_priv;
>- int nbox = sarea_priv->nbox;
>- struct drm_clip_rect *pbox = sarea_priv->boxes;
>- int pitch = dev_priv->pitch;
>- int cpp = dev_priv->cpp;
>- int i;
>- unsigned int CMD, BR13;
>- RING_LOCALS;
>-
>- DRM_DEBUG("swapbuffers\n");
>-
>- i830_kernel_lost_context(dev);
>-
>- if (dev_priv->do_boxes)
>- i830_cp_performance_boxes(dev);
>-
>- switch (cpp) {
>- case 2:
>- BR13 = (pitch * cpp) | (0xCC << 16) | (1 << 24);
>- CMD = XY_SRC_COPY_BLT_CMD;
>- break;
>- case 4:
>- BR13 = (pitch * cpp) | (0xCC << 16) | (1 << 24) | (1 <<
25);
>- CMD = (XY_SRC_COPY_BLT_CMD | XY_SRC_COPY_BLT_WRITE_ALPHA |
>- XY_SRC_COPY_BLT_WRITE_RGB);
>- break;
>- default:
>- BR13 = (pitch * cpp) | (0xCC << 16) | (1 << 24);
>- CMD = XY_SRC_COPY_BLT_CMD;
>- break;
>- }
>-
>- if (nbox > I830_NR_SAREA_CLIPRECTS)
>- nbox = I830_NR_SAREA_CLIPRECTS;
>-
>- for (i = 0; i < nbox; i++, pbox++) {
>- if (pbox->x1 > pbox->x2 ||
>- pbox->y1 > pbox->y2 ||
>- pbox->x2 > dev_priv->w || pbox->y2 > dev_priv->h)
>- continue;
>-
>- DRM_DEBUG("dispatch swap %d,%d-%d,%d!\n",
>- pbox->x1, pbox->y1, pbox->x2, pbox->y2);
>-
>- BEGIN_LP_RING(8);
>- OUT_RING(CMD);
>- OUT_RING(BR13);
>- OUT_RING((pbox->y1 << 16) | pbox->x1);
>- OUT_RING((pbox->y2 << 16) | pbox->x2);
>-
>- if (dev_priv->current_page == 0)
>- OUT_RING(dev_priv->front_offset);
>- else
>- OUT_RING(dev_priv->back_offset);
>-
>- OUT_RING((pbox->y1 << 16) | pbox->x1);
>- OUT_RING(BR13 & 0xffff);
>-
>- if (dev_priv->current_page == 0)
>- OUT_RING(dev_priv->back_offset);
>- else
>- OUT_RING(dev_priv->front_offset);
>-
>- ADVANCE_LP_RING();
>- }
>-}
>-
>-static void i830_dma_dispatch_flip(struct drm_device * dev)
>-{
>- drm_i830_private_t *dev_priv = dev->dev_private;
>- RING_LOCALS;
>-
>- DRM_DEBUG("%s: page=%d pfCurrentPage=%d\n",
>- __FUNCTION__,
>- dev_priv->current_page,
>- dev_priv->sarea_priv->pf_current_page);
>-
>- i830_kernel_lost_context(dev);
>-
>- if (dev_priv->do_boxes) {
>- dev_priv->sarea_priv->perf_boxes |= I830_BOX_FLIP;
>- i830_cp_performance_boxes(dev);
>- }
>-
>- BEGIN_LP_RING(2);
>- OUT_RING(INST_PARSER_CLIENT | INST_OP_FLUSH | INST_FLUSH_MAP_CACHE);
>- OUT_RING(0);
>- ADVANCE_LP_RING();
>-
>- BEGIN_LP_RING(6);
>- OUT_RING(CMD_OP_DISPLAYBUFFER_INFO | ASYNC_FLIP);
>- OUT_RING(0);
>- if (dev_priv->current_page == 0) {
>- OUT_RING(dev_priv->back_offset);
>- dev_priv->current_page = 1;
>- } else {
>- OUT_RING(dev_priv->front_offset);
>- dev_priv->current_page = 0;
>- }
>- OUT_RING(0);
>- ADVANCE_LP_RING();
>-
>- BEGIN_LP_RING(2);
>- OUT_RING(MI_WAIT_FOR_EVENT | MI_WAIT_FOR_PLANE_A_FLIP);
>- OUT_RING(0);
>- ADVANCE_LP_RING();
>-
>- dev_priv->sarea_priv->pf_current_page = dev_priv->current_page;
>-}
>-
>-static void i830_dma_dispatch_vertex(struct drm_device * dev,
>- struct drm_buf * buf, int discard, int used)
>-{
>- drm_i830_private_t *dev_priv = dev->dev_private;
>- drm_i830_buf_priv_t *buf_priv = buf->dev_private;
>- drm_i830_sarea_t *sarea_priv = dev_priv->sarea_priv;
>- struct drm_clip_rect *box = sarea_priv->boxes;
>- int nbox = sarea_priv->nbox;
>- unsigned long address = (unsigned long)buf->bus_address;
>- unsigned long start = address - dev->agp->base;
>- int i = 0, u;
>- RING_LOCALS;
>-
>- i830_kernel_lost_context(dev);
>-
>- if (nbox > I830_NR_SAREA_CLIPRECTS)
>- nbox = I830_NR_SAREA_CLIPRECTS;
>-
>- if (discard) {
>- u = cmpxchg(buf_priv->in_use, I830_BUF_CLIENT,
>- I830_BUF_HARDWARE);
>- if (u != I830_BUF_CLIENT) {
>- DRM_DEBUG("xxxx 2\n");
>- }
>- }
>-
>- if (used > 4 * 1023)
>- used = 0;
>-
>- if (sarea_priv->dirty)
>- i830EmitState(dev);
>-
>- DRM_DEBUG("dispatch vertex addr 0x%lx, used 0x%x nbox %d\n",
>- address, used, nbox);
>-
>- dev_priv->counter++;
>- DRM_DEBUG("dispatch counter : %ld\n", dev_priv->counter);
>- DRM_DEBUG("i830_dma_dispatch\n");
>- DRM_DEBUG("start : %lx\n", start);
>- DRM_DEBUG("used : %d\n", used);
>- DRM_DEBUG("start + used - 4 : %ld\n", start + used - 4);
>-
>- if (buf_priv->currently_mapped == I830_BUF_MAPPED) {
>- u32 *vp = buf_priv->kernel_virtual;
>-
>- vp[0] = (GFX_OP_PRIMITIVE |
>- sarea_priv->vertex_prim | ((used / 4) - 2));
>-
>- if (dev_priv->use_mi_batchbuffer_start) {
>- vp[used / 4] = MI_BATCH_BUFFER_END;
>- used += 4;
>- }
>-
>- if (used & 4) {
>- vp[used / 4] = 0;
>- used += 4;
>- }
>-
>- i830_unmap_buffer(buf);
>- }
>-
>- if (used) {
>- do {
>- if (i < nbox) {
>- BEGIN_LP_RING(6);
>- OUT_RING(GFX_OP_DRAWRECT_INFO);
>- OUT_RING(sarea_priv->
>- BufferState[I830_DESTREG_DR1]);
>- OUT_RING(box[i].x1 | (box[i].y1 << 16));
>- OUT_RING(box[i].x2 | (box[i].y2 << 16));
>- OUT_RING(sarea_priv->
>- BufferState[I830_DESTREG_DR4]);
>- OUT_RING(0);
>- ADVANCE_LP_RING();
>- }
>-
>- if (dev_priv->use_mi_batchbuffer_start) {
>- BEGIN_LP_RING(2);
>- OUT_RING(MI_BATCH_BUFFER_START | (2 << 6));
>- OUT_RING(start | MI_BATCH_NON_SECURE);
>- ADVANCE_LP_RING();
>- } else {
>- BEGIN_LP_RING(4);
>- OUT_RING(MI_BATCH_BUFFER);
>- OUT_RING(start | MI_BATCH_NON_SECURE);
>- OUT_RING(start + used - 4);
>- OUT_RING(0);
>- ADVANCE_LP_RING();
>- }
>-
>- } while (++i < nbox);
>- }
>-
>- if (discard) {
>- dev_priv->counter++;
>-
>- (void)cmpxchg(buf_priv->in_use, I830_BUF_CLIENT,
>- I830_BUF_HARDWARE);
>-
>- BEGIN_LP_RING(8);
>- OUT_RING(CMD_STORE_DWORD_IDX);
>- OUT_RING(20);
>- OUT_RING(dev_priv->counter);
>- OUT_RING(CMD_STORE_DWORD_IDX);
>- OUT_RING(buf_priv->my_use_idx);
>- OUT_RING(I830_BUF_FREE);
>- OUT_RING(CMD_REPORT_HEAD);
>- OUT_RING(0);
>- ADVANCE_LP_RING();
>- }
>-}
>-
>-static void i830_dma_quiescent(struct drm_device * dev)
>-{
>- drm_i830_private_t *dev_priv = dev->dev_private;
>- RING_LOCALS;
>-
>- i830_kernel_lost_context(dev);
>-
>- BEGIN_LP_RING(4);
>- OUT_RING(INST_PARSER_CLIENT | INST_OP_FLUSH | INST_FLUSH_MAP_CACHE);
>- OUT_RING(CMD_REPORT_HEAD);
>- OUT_RING(0);
>- OUT_RING(0);
>- ADVANCE_LP_RING();
>-
>- i830_wait_ring(dev, dev_priv->ring.Size - 8, __FUNCTION__);
>-}
>-
>-static int i830_flush_queue(struct drm_device * dev)
>-{
>- drm_i830_private_t *dev_priv = dev->dev_private;
>- struct drm_device_dma *dma = dev->dma;
>- int i, ret = 0;
>- RING_LOCALS;
>-
>- i830_kernel_lost_context(dev);
>-
>- BEGIN_LP_RING(2);
>- OUT_RING(CMD_REPORT_HEAD);
>- OUT_RING(0);
>- ADVANCE_LP_RING();
>-
>- i830_wait_ring(dev, dev_priv->ring.Size - 8, __FUNCTION__);
>-
>- for (i = 0; i < dma->buf_count; i++) {
>- struct drm_buf *buf = dma->buflist[i];
>- drm_i830_buf_priv_t *buf_priv = buf->dev_private;
>-
>- int used = cmpxchg(buf_priv->in_use, I830_BUF_HARDWARE,
>- I830_BUF_FREE);
>-
>- if (used == I830_BUF_HARDWARE)
>- DRM_DEBUG("reclaimed from HARDWARE\n");
>- if (used == I830_BUF_CLIENT)
>- DRM_DEBUG("still on client\n");
>- }
>-
>- return ret;
>-}
>-
>-/* Must be called with the lock held */
>-static void i830_reclaim_buffers(struct drm_device * dev, struct drm_file *file_priv)
>-{
>- struct drm_device_dma *dma = dev->dma;
>- int i;
>-
>- if (!dma)
>- return;
>- if (!dev->dev_private)
>- return;
>- if (!dma->buflist)
>- return;
>-
>- i830_flush_queue(dev);
>-
>- for (i = 0; i < dma->buf_count; i++) {
>- struct drm_buf *buf = dma->buflist[i];
>- drm_i830_buf_priv_t *buf_priv = buf->dev_private;
>-
>- if (buf->file_priv == file_priv && buf_priv) {
>- int used = cmpxchg(buf_priv->in_use, I830_BUF_CLIENT,
>- I830_BUF_FREE);
>-
>- if (used == I830_BUF_CLIENT)
>- DRM_DEBUG("reclaimed from client\n");
>- if (buf_priv->currently_mapped == I830_BUF_MAPPED)
>- buf_priv->currently_mapped = I830_BUF_UNMAPPED;
>- }
>- }
>-}
>-
>-static int i830_flush_ioctl(struct drm_device *dev, void *data,
>- struct drm_file *file_priv)
>-{
>- LOCK_TEST_WITH_RETURN(dev, file_priv);
>-
>- i830_flush_queue(dev);
>- return 0;
>-}
>-
>-static int i830_dma_vertex(struct drm_device *dev, void *data,
>- struct drm_file *file_priv)
>-{
>- struct drm_device_dma *dma = dev->dma;
>- drm_i830_private_t *dev_priv = (drm_i830_private_t *) dev->dev_private;
>- u32 *hw_status = dev_priv->hw_status_page;
>- drm_i830_sarea_t *sarea_priv = (drm_i830_sarea_t *)
>- dev_priv->sarea_priv;
>- drm_i830_vertex_t *vertex = data;
>-
>- LOCK_TEST_WITH_RETURN(dev, file_priv);
>-
>- DRM_DEBUG("i830 dma vertex, idx %d used %d discard %d\n",
>- vertex->idx, vertex->used, vertex->discard);
>-
>- if (vertex->idx < 0 || vertex->idx > dma->buf_count)
>- return -EINVAL;
>-
>- i830_dma_dispatch_vertex(dev,
>- dma->buflist[vertex->idx],
>- vertex->discard, vertex->used);
>-
>- sarea_priv->last_enqueue = dev_priv->counter - 1;
>- sarea_priv->last_dispatch = (int)hw_status[5];
>-
>- return 0;
>-}
>-
>-static int i830_clear_bufs(struct drm_device *dev, void *data,
>- struct drm_file *file_priv)
>-{
>- drm_i830_clear_t *clear = data;
>-
>- LOCK_TEST_WITH_RETURN(dev, file_priv);
>-
>- /* GH: Someone's doing nasty things... */
>- if (!dev->dev_private) {
>- return -EINVAL;
>- }
>-
>- i830_dma_dispatch_clear(dev, clear->flags,
>- clear->clear_color,
>- clear->clear_depth, clear->clear_depthmask);
>- return 0;
>-}
>-
>-static int i830_swap_bufs(struct drm_device *dev, void *data,
>- struct drm_file *file_priv)
>-{
>- DRM_DEBUG("i830_swap_bufs\n");
>-
>- LOCK_TEST_WITH_RETURN(dev, file_priv);
>-
>- i830_dma_dispatch_swap(dev);
>- return 0;
>-}
>-
>-/* Not sure why this isn't set all the time:
>- */
>-static void i830_do_init_pageflip(struct drm_device * dev)
>-{
>- drm_i830_private_t *dev_priv = dev->dev_private;
>-
>- DRM_DEBUG("%s\n", __FUNCTION__);
>- dev_priv->page_flipping = 1;
>- dev_priv->current_page = 0;
>- dev_priv->sarea_priv->pf_current_page = dev_priv->current_page;
>-}
>-
>-static int i830_do_cleanup_pageflip(struct drm_device * dev)
>-{
>- drm_i830_private_t *dev_priv = dev->dev_private;
>-
>- DRM_DEBUG("%s\n", __FUNCTION__);
>- if (dev_priv->current_page != 0)
>- i830_dma_dispatch_flip(dev);
>-
>- dev_priv->page_flipping = 0;
>- return 0;
>-}
>-
>-static int i830_flip_bufs(struct drm_device *dev, void *data,
>- struct drm_file *file_priv)
>-{
>- drm_i830_private_t *dev_priv = dev->dev_private;
>-
>- DRM_DEBUG("%s\n", __FUNCTION__);
>-
>- LOCK_TEST_WITH_RETURN(dev, file_priv);
>-
>- if (!dev_priv->page_flipping)
>- i830_do_init_pageflip(dev);
>-
>- i830_dma_dispatch_flip(dev);
>- return 0;
>-}
>-
>-static int i830_getage(struct drm_device *dev, void *data,
>- struct drm_file *file_priv)
>-{
>- drm_i830_private_t *dev_priv = (drm_i830_private_t *) dev->dev_private;
>- u32 *hw_status = dev_priv->hw_status_page;
>- drm_i830_sarea_t *sarea_priv = (drm_i830_sarea_t *)
>- dev_priv->sarea_priv;
>-
>- sarea_priv->last_dispatch = (int)hw_status[5];
>- return 0;
>-}
>-
>-static int i830_getbuf(struct drm_device *dev, void *data,
>- struct drm_file *file_priv) >-{ >- int retcode = 0; >- drm_i830_dma_t *d = data; >- drm_i830_private_t *dev_priv = (drm_i830_private_t *) dev->dev_private; >- u32 *hw_status = dev_priv->hw_status_page; >- drm_i830_sarea_t *sarea_priv = (drm_i830_sarea_t *) >- dev_priv->sarea_priv; >- >- DRM_DEBUG("getbuf\n"); >- >- LOCK_TEST_WITH_RETURN(dev, file_priv); >- >- d->granted = 0; >- >- retcode = i830_dma_get_buffer(dev, d, file_priv); >- >- DRM_DEBUG("i830_dma: %d returning %d, granted = %d\n", >- current->pid, retcode, d->granted); >- >- sarea_priv->last_dispatch = (int)hw_status[5]; >- >- return retcode; >-} >- >-static int i830_copybuf(struct drm_device *dev, void *data, >- struct drm_file *file_priv) >-{ >- /* Never copy - 2.4.x doesn't need it */ >- return 0; >-} >- >-static int i830_docopy(struct drm_device *dev, void *data, >- struct drm_file *file_priv) >-{ >- return 0; >-} >- >-static int i830_getparam(struct drm_device *dev, void *data, >- struct drm_file *file_priv) >-{ >- drm_i830_private_t *dev_priv = dev->dev_private; >- drm_i830_getparam_t *param = data; >- int value; >- >- if (!dev_priv) { >- DRM_ERROR("%s called with no initialization\n", __FUNCTION__); >- return -EINVAL; >- } >- >- switch (param->param) { >- case I830_PARAM_IRQ_ACTIVE: >- value = dev->irq_enabled; >- break; >- default: >- return -EINVAL; >- } >- >- if (copy_to_user(param->value, &value, sizeof(int))) { >- DRM_ERROR("copy_to_user\n"); >- return -EFAULT; >- } >- >- return 0; >-} >- >-static int i830_setparam(struct drm_device *dev, void *data, >- struct drm_file *file_priv) >-{ >- drm_i830_private_t *dev_priv = dev->dev_private; >- drm_i830_setparam_t *param = data; >- >- if (!dev_priv) { >- DRM_ERROR("%s called with no initialization\n", __FUNCTION__); >- return -EINVAL; >- } >- >- switch (param->param) { >- case I830_SETPARAM_USE_MI_BATCHBUFFER_START: >- dev_priv->use_mi_batchbuffer_start = param->value; >- break; >- default: >- return -EINVAL; >- } >- >- return 0; >-} >- >-int 
i830_driver_load(struct drm_device *dev, unsigned long flags) >-{ >- /* i830 has 4 more counters */ >- dev->counters += 4; >- dev->types[6] = _DRM_STAT_IRQ; >- dev->types[7] = _DRM_STAT_PRIMARY; >- dev->types[8] = _DRM_STAT_SECONDARY; >- dev->types[9] = _DRM_STAT_DMA; >- >- return 0; >-} >- >-void i830_driver_lastclose(struct drm_device * dev) >-{ >- i830_dma_cleanup(dev); >-} >- >-void i830_driver_preclose(struct drm_device * dev, struct drm_file *file_priv) >-{ >- if (dev->dev_private) { >- drm_i830_private_t *dev_priv = dev->dev_private; >- if (dev_priv->page_flipping) { >- i830_do_cleanup_pageflip(dev); >- } >- } >-} >- >-void i830_driver_reclaim_buffers_locked(struct drm_device * dev, struct drm_file *file_priv) >-{ >- i830_reclaim_buffers(dev, file_priv); >-} >- >-int i830_driver_dma_quiescent(struct drm_device * dev) >-{ >- i830_dma_quiescent(dev); >- return 0; >-} >- >-struct drm_ioctl_desc i830_ioctls[] = { >- DRM_IOCTL_DEF(DRM_I830_INIT, i830_dma_init, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), >- DRM_IOCTL_DEF(DRM_I830_VERTEX, i830_dma_vertex, DRM_AUTH), >- DRM_IOCTL_DEF(DRM_I830_CLEAR, i830_clear_bufs, DRM_AUTH), >- DRM_IOCTL_DEF(DRM_I830_FLUSH, i830_flush_ioctl, DRM_AUTH), >- DRM_IOCTL_DEF(DRM_I830_GETAGE, i830_getage, DRM_AUTH), >- DRM_IOCTL_DEF(DRM_I830_GETBUF, i830_getbuf, DRM_AUTH), >- DRM_IOCTL_DEF(DRM_I830_SWAP, i830_swap_bufs, DRM_AUTH), >- DRM_IOCTL_DEF(DRM_I830_COPY, i830_copybuf, DRM_AUTH), >- DRM_IOCTL_DEF(DRM_I830_DOCOPY, i830_docopy, DRM_AUTH), >- DRM_IOCTL_DEF(DRM_I830_FLIP, i830_flip_bufs, DRM_AUTH), >- DRM_IOCTL_DEF(DRM_I830_IRQ_EMIT, i830_irq_emit, DRM_AUTH), >- DRM_IOCTL_DEF(DRM_I830_IRQ_WAIT, i830_irq_wait, DRM_AUTH), >- DRM_IOCTL_DEF(DRM_I830_GETPARAM, i830_getparam, DRM_AUTH), >- DRM_IOCTL_DEF(DRM_I830_SETPARAM, i830_setparam, DRM_AUTH) >-}; >- >-int i830_max_ioctl = DRM_ARRAY_SIZE(i830_ioctls); >- >-/** >- * Determine if the device really is AGP or not. 
>- * >- * All Intel graphics chipsets are treated as AGP, even if they are really >- * PCI-e. >- * >- * \param dev The device to be tested. >- * >- * \returns >- * A value of 1 is always retured to indictate every i8xx is AGP. >- */ >-int i830_driver_device_is_agp(struct drm_device * dev) >-{ >- return 1; >-} >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/i830_drm.h linux-2.6.23.i686/drivers/char/drm/i830_drm.h >--- linux-2.6.23.i686.orig/drivers/char/drm/i830_drm.h 2007-10-09 22:31:38.000000000 +0200 >+++ linux-2.6.23.i686/drivers/char/drm/i830_drm.h 1970-01-01 01:00:00.000000000 +0100 >@@ -1,342 +0,0 @@ >-#ifndef _I830_DRM_H_ >-#define _I830_DRM_H_ >- >-/* WARNING: These defines must be the same as what the Xserver uses. >- * if you change them, you must change the defines in the Xserver. >- * >- * KW: Actually, you can't ever change them because doing so would >- * break backwards compatibility. >- */ >- >-#ifndef _I830_DEFINES_ >-#define _I830_DEFINES_ >- >-#define I830_DMA_BUF_ORDER 12 >-#define I830_DMA_BUF_SZ (1<<I830_DMA_BUF_ORDER) >-#define I830_DMA_BUF_NR 256 >-#define I830_NR_SAREA_CLIPRECTS 8 >- >-/* Each region is a minimum of 64k, and there are at most 64 of them. >- */ >-#define I830_NR_TEX_REGIONS 64 >-#define I830_LOG_MIN_TEX_REGION_SIZE 16 >- >-/* KW: These aren't correct but someone set them to two and then >- * released the module. Now we can't change them as doing so would >- * break backwards compatibility. 
>- */ >-#define I830_TEXTURE_COUNT 2 >-#define I830_TEXBLEND_COUNT I830_TEXTURE_COUNT >- >-#define I830_TEXBLEND_SIZE 12 /* (4 args + op) * 2 + COLOR_FACTOR */ >- >-#define I830_UPLOAD_CTX 0x1 >-#define I830_UPLOAD_BUFFERS 0x2 >-#define I830_UPLOAD_CLIPRECTS 0x4 >-#define I830_UPLOAD_TEX0_IMAGE 0x100 /* handled clientside */ >-#define I830_UPLOAD_TEX0_CUBE 0x200 /* handled clientside */ >-#define I830_UPLOAD_TEX1_IMAGE 0x400 /* handled clientside */ >-#define I830_UPLOAD_TEX1_CUBE 0x800 /* handled clientside */ >-#define I830_UPLOAD_TEX2_IMAGE 0x1000 /* handled clientside */ >-#define I830_UPLOAD_TEX2_CUBE 0x2000 /* handled clientside */ >-#define I830_UPLOAD_TEX3_IMAGE 0x4000 /* handled clientside */ >-#define I830_UPLOAD_TEX3_CUBE 0x8000 /* handled clientside */ >-#define I830_UPLOAD_TEX_N_IMAGE(n) (0x100 << (n * 2)) >-#define I830_UPLOAD_TEX_N_CUBE(n) (0x200 << (n * 2)) >-#define I830_UPLOAD_TEXIMAGE_MASK 0xff00 >-#define I830_UPLOAD_TEX0 0x10000 >-#define I830_UPLOAD_TEX1 0x20000 >-#define I830_UPLOAD_TEX2 0x40000 >-#define I830_UPLOAD_TEX3 0x80000 >-#define I830_UPLOAD_TEX_N(n) (0x10000 << (n)) >-#define I830_UPLOAD_TEX_MASK 0xf0000 >-#define I830_UPLOAD_TEXBLEND0 0x100000 >-#define I830_UPLOAD_TEXBLEND1 0x200000 >-#define I830_UPLOAD_TEXBLEND2 0x400000 >-#define I830_UPLOAD_TEXBLEND3 0x800000 >-#define I830_UPLOAD_TEXBLEND_N(n) (0x100000 << (n)) >-#define I830_UPLOAD_TEXBLEND_MASK 0xf00000 >-#define I830_UPLOAD_TEX_PALETTE_N(n) (0x1000000 << (n)) >-#define I830_UPLOAD_TEX_PALETTE_SHARED 0x4000000 >-#define I830_UPLOAD_STIPPLE 0x8000000 >- >-/* Indices into buf.Setup where various bits of state are mirrored per >- * context and per buffer. These can be fired at the card as a unit, >- * or in a piecewise fashion as required. >- */ >- >-/* Destbuffer state >- * - backbuffer linear offset and pitch -- invarient in the current dri >- * - zbuffer linear offset and pitch -- also invarient >- * - drawing origin in back and depth buffers. 
>- * >- * Keep the depth/back buffer state here to accommodate private buffers >- * in the future. >- */ >- >-#define I830_DESTREG_CBUFADDR 0 >-#define I830_DESTREG_DBUFADDR 1 >-#define I830_DESTREG_DV0 2 >-#define I830_DESTREG_DV1 3 >-#define I830_DESTREG_SENABLE 4 >-#define I830_DESTREG_SR0 5 >-#define I830_DESTREG_SR1 6 >-#define I830_DESTREG_SR2 7 >-#define I830_DESTREG_DR0 8 >-#define I830_DESTREG_DR1 9 >-#define I830_DESTREG_DR2 10 >-#define I830_DESTREG_DR3 11 >-#define I830_DESTREG_DR4 12 >-#define I830_DEST_SETUP_SIZE 13 >- >-/* Context state >- */ >-#define I830_CTXREG_STATE1 0 >-#define I830_CTXREG_STATE2 1 >-#define I830_CTXREG_STATE3 2 >-#define I830_CTXREG_STATE4 3 >-#define I830_CTXREG_STATE5 4 >-#define I830_CTXREG_IALPHAB 5 >-#define I830_CTXREG_STENCILTST 6 >-#define I830_CTXREG_ENABLES_1 7 >-#define I830_CTXREG_ENABLES_2 8 >-#define I830_CTXREG_AA 9 >-#define I830_CTXREG_FOGCOLOR 10 >-#define I830_CTXREG_BLENDCOLR0 11 >-#define I830_CTXREG_BLENDCOLR 12 /* Dword 1 of 2 dword command */ >-#define I830_CTXREG_VF 13 >-#define I830_CTXREG_VF2 14 >-#define I830_CTXREG_MCSB0 15 >-#define I830_CTXREG_MCSB1 16 >-#define I830_CTX_SETUP_SIZE 17 >- >-/* 1.3: Stipple state >- */ >-#define I830_STPREG_ST0 0 >-#define I830_STPREG_ST1 1 >-#define I830_STP_SETUP_SIZE 2 >- >-/* Texture state (per tex unit) >- */ >- >-#define I830_TEXREG_MI0 0 /* GFX_OP_MAP_INFO (6 dwords) */ >-#define I830_TEXREG_MI1 1 >-#define I830_TEXREG_MI2 2 >-#define I830_TEXREG_MI3 3 >-#define I830_TEXREG_MI4 4 >-#define I830_TEXREG_MI5 5 >-#define I830_TEXREG_MF 6 /* GFX_OP_MAP_FILTER */ >-#define I830_TEXREG_MLC 7 /* GFX_OP_MAP_LOD_CTL */ >-#define I830_TEXREG_MLL 8 /* GFX_OP_MAP_LOD_LIMITS */ >-#define I830_TEXREG_MCS 9 /* GFX_OP_MAP_COORD_SETS */ >-#define I830_TEX_SETUP_SIZE 10 >- >-#define I830_TEXREG_TM0LI 0 /* load immediate 2 texture map n */ >-#define I830_TEXREG_TM0S0 1 >-#define I830_TEXREG_TM0S1 2 >-#define I830_TEXREG_TM0S2 3 >-#define I830_TEXREG_TM0S3 4 >-#define 
I830_TEXREG_TM0S4 5 >-#define I830_TEXREG_NOP0 6 /* noop */ >-#define I830_TEXREG_NOP1 7 /* noop */ >-#define I830_TEXREG_NOP2 8 /* noop */ >-#define __I830_TEXREG_MCS 9 /* GFX_OP_MAP_COORD_SETS -- shared */ >-#define __I830_TEX_SETUP_SIZE 10 >- >-#define I830_FRONT 0x1 >-#define I830_BACK 0x2 >-#define I830_DEPTH 0x4 >- >-#endif /* _I830_DEFINES_ */ >- >-typedef struct _drm_i830_init { >- enum { >- I830_INIT_DMA = 0x01, >- I830_CLEANUP_DMA = 0x02 >- } func; >- unsigned int mmio_offset; >- unsigned int buffers_offset; >- int sarea_priv_offset; >- unsigned int ring_start; >- unsigned int ring_end; >- unsigned int ring_size; >- unsigned int front_offset; >- unsigned int back_offset; >- unsigned int depth_offset; >- unsigned int w; >- unsigned int h; >- unsigned int pitch; >- unsigned int pitch_bits; >- unsigned int back_pitch; >- unsigned int depth_pitch; >- unsigned int cpp; >-} drm_i830_init_t; >- >-/* Warning: If you change the SAREA structure you must change the Xserver >- * structure as well */ >- >-typedef struct _drm_i830_tex_region { >- unsigned char next, prev; /* indices to form a circular LRU */ >- unsigned char in_use; /* owned by a client, or free? */ >- int age; /* tracked by clients to update local LRU's */ >-} drm_i830_tex_region_t; >- >-typedef struct _drm_i830_sarea { >- unsigned int ContextState[I830_CTX_SETUP_SIZE]; >- unsigned int BufferState[I830_DEST_SETUP_SIZE]; >- unsigned int TexState[I830_TEXTURE_COUNT][I830_TEX_SETUP_SIZE]; >- unsigned int TexBlendState[I830_TEXBLEND_COUNT][I830_TEXBLEND_SIZE]; >- unsigned int TexBlendStateWordsUsed[I830_TEXBLEND_COUNT]; >- unsigned int Palette[2][256]; >- unsigned int dirty; >- >- unsigned int nbox; >- struct drm_clip_rect boxes[I830_NR_SAREA_CLIPRECTS]; >- >- /* Maintain an LRU of contiguous regions of texture space. If >- * you think you own a region of texture memory, and it has an >- * age different to the one you set, then you are mistaken and >- * it has been stolen by another client. 
If global texAge >- * hasn't changed, there is no need to walk the list. >- * >- * These regions can be used as a proxy for the fine-grained >- * texture information of other clients - by maintaining them >- * in the same lru which is used to age their own textures, >- * clients have an approximate lru for the whole of global >- * texture space, and can make informed decisions as to which >- * areas to kick out. There is no need to choose whether to >- * kick out your own texture or someone else's - simply eject >- * them all in LRU order. >- */ >- >- drm_i830_tex_region_t texList[I830_NR_TEX_REGIONS + 1]; >- /* Last elt is sentinal */ >- int texAge; /* last time texture was uploaded */ >- int last_enqueue; /* last time a buffer was enqueued */ >- int last_dispatch; /* age of the most recently dispatched buffer */ >- int last_quiescent; /* */ >- int ctxOwner; /* last context to upload state */ >- >- int vertex_prim; >- >- int pf_enabled; /* is pageflipping allowed? */ >- int pf_active; >- int pf_current_page; /* which buffer is being displayed? */ >- >- int perf_boxes; /* performance boxes to be displayed */ >- >- /* Here's the state for texunits 2,3: >- */ >- unsigned int TexState2[I830_TEX_SETUP_SIZE]; >- unsigned int TexBlendState2[I830_TEXBLEND_SIZE]; >- unsigned int TexBlendStateWordsUsed2; >- >- unsigned int TexState3[I830_TEX_SETUP_SIZE]; >- unsigned int TexBlendState3[I830_TEXBLEND_SIZE]; >- unsigned int TexBlendStateWordsUsed3; >- >- unsigned int StippleState[I830_STP_SETUP_SIZE]; >-} drm_i830_sarea_t; >- >-/* Flags for perf_boxes >- */ >-#define I830_BOX_RING_EMPTY 0x1 /* populated by kernel */ >-#define I830_BOX_FLIP 0x2 /* populated by kernel */ >-#define I830_BOX_WAIT 0x4 /* populated by kernel & client */ >-#define I830_BOX_TEXTURE_LOAD 0x8 /* populated by kernel */ >-#define I830_BOX_LOST_CONTEXT 0x10 /* populated by client */ >- >-/* I830 specific ioctls >- * The device specific ioctl range is 0x40 to 0x79. 
>- */ >-#define DRM_I830_INIT 0x00 >-#define DRM_I830_VERTEX 0x01 >-#define DRM_I830_CLEAR 0x02 >-#define DRM_I830_FLUSH 0x03 >-#define DRM_I830_GETAGE 0x04 >-#define DRM_I830_GETBUF 0x05 >-#define DRM_I830_SWAP 0x06 >-#define DRM_I830_COPY 0x07 >-#define DRM_I830_DOCOPY 0x08 >-#define DRM_I830_FLIP 0x09 >-#define DRM_I830_IRQ_EMIT 0x0a >-#define DRM_I830_IRQ_WAIT 0x0b >-#define DRM_I830_GETPARAM 0x0c >-#define DRM_I830_SETPARAM 0x0d >- >-#define DRM_IOCTL_I830_INIT DRM_IOW( DRM_COMMAND_BASE + DRM_IOCTL_I830_INIT, drm_i830_init_t) >-#define DRM_IOCTL_I830_VERTEX DRM_IOW( DRM_COMMAND_BASE + DRM_IOCTL_I830_VERTEX, drm_i830_vertex_t) >-#define DRM_IOCTL_I830_CLEAR DRM_IOW( DRM_COMMAND_BASE + DRM_IOCTL_I830_CLEAR, drm_i830_clear_t) >-#define DRM_IOCTL_I830_FLUSH DRM_IO ( DRM_COMMAND_BASE + DRM_IOCTL_I830_FLUSH) >-#define DRM_IOCTL_I830_GETAGE DRM_IO ( DRM_COMMAND_BASE + DRM_IOCTL_I830_GETAGE) >-#define DRM_IOCTL_I830_GETBUF DRM_IOWR(DRM_COMMAND_BASE + DRM_IOCTL_I830_GETBUF, drm_i830_dma_t) >-#define DRM_IOCTL_I830_SWAP DRM_IO ( DRM_COMMAND_BASE + DRM_IOCTL_I830_SWAP) >-#define DRM_IOCTL_I830_COPY DRM_IOW( DRM_COMMAND_BASE + DRM_IOCTL_I830_COPY, drm_i830_copy_t) >-#define DRM_IOCTL_I830_DOCOPY DRM_IO ( DRM_COMMAND_BASE + DRM_IOCTL_I830_DOCOPY) >-#define DRM_IOCTL_I830_FLIP DRM_IO ( DRM_COMMAND_BASE + DRM_IOCTL_I830_FLIP) >-#define DRM_IOCTL_I830_IRQ_EMIT DRM_IOWR(DRM_COMMAND_BASE + DRM_IOCTL_I830_IRQ_EMIT, drm_i830_irq_emit_t) >-#define DRM_IOCTL_I830_IRQ_WAIT DRM_IOW( DRM_COMMAND_BASE + DRM_IOCTL_I830_IRQ_WAIT, drm_i830_irq_wait_t) >-#define DRM_IOCTL_I830_GETPARAM DRM_IOWR(DRM_COMMAND_BASE + DRM_IOCTL_I830_GETPARAM, drm_i830_getparam_t) >-#define DRM_IOCTL_I830_SETPARAM DRM_IOWR(DRM_COMMAND_BASE + DRM_IOCTL_I830_SETPARAM, drm_i830_setparam_t) >- >-typedef struct _drm_i830_clear { >- int clear_color; >- int clear_depth; >- int flags; >- unsigned int clear_colormask; >- unsigned int clear_depthmask; >-} drm_i830_clear_t; >- >-/* These may be placeholders if we have more 
cliprects than >- * I830_NR_SAREA_CLIPRECTS. In that case, the client sets discard to >- * false, indicating that the buffer will be dispatched again with a >- * new set of cliprects. >- */ >-typedef struct _drm_i830_vertex { >- int idx; /* buffer index */ >- int used; /* nr bytes in use */ >- int discard; /* client is finished with the buffer? */ >-} drm_i830_vertex_t; >- >-typedef struct _drm_i830_copy_t { >- int idx; /* buffer index */ >- int used; /* nr bytes in use */ >- void __user *address; /* Address to copy from */ >-} drm_i830_copy_t; >- >-typedef struct drm_i830_dma { >- void __user *virtual; >- int request_idx; >- int request_size; >- int granted; >-} drm_i830_dma_t; >- >-/* 1.3: Userspace can request & wait on irq's: >- */ >-typedef struct drm_i830_irq_emit { >- int __user *irq_seq; >-} drm_i830_irq_emit_t; >- >-typedef struct drm_i830_irq_wait { >- int irq_seq; >-} drm_i830_irq_wait_t; >- >-/* 1.3: New ioctl to query kernel params: >- */ >-#define I830_PARAM_IRQ_ACTIVE 1 >- >-typedef struct drm_i830_getparam { >- int param; >- int __user *value; >-} drm_i830_getparam_t; >- >-/* 1.3: New ioctl to set kernel params: >- */ >-#define I830_SETPARAM_USE_MI_BATCHBUFFER_START 1 >- >-typedef struct drm_i830_setparam { >- int param; >- int value; >-} drm_i830_setparam_t; >- >-#endif /* _I830_DRM_H_ */ >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/i830_drv.c linux-2.6.23.i686/drivers/char/drm/i830_drv.c >--- linux-2.6.23.i686.orig/drivers/char/drm/i830_drv.c 2007-10-09 22:31:38.000000000 +0200 >+++ linux-2.6.23.i686/drivers/char/drm/i830_drv.c 1970-01-01 01:00:00.000000000 +0100 >@@ -1,108 +0,0 @@ >-/* i830_drv.c -- I810 driver -*- linux-c -*- >- * Created: Mon Dec 13 01:56:22 1999 by jhartmann@precisioninsight.com >- * >- * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas. >- * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California. >- * All Rights Reserved. 
>- * >- * Permission is hereby granted, free of charge, to any person obtaining a >- * copy of this software and associated documentation files (the "Software"), >- * to deal in the Software without restriction, including without limitation >- * the rights to use, copy, modify, merge, publish, distribute, sublicense, >- * and/or sell copies of the Software, and to permit persons to whom the >- * Software is furnished to do so, subject to the following conditions: >- * >- * The above copyright notice and this permission notice (including the next >- * paragraph) shall be included in all copies or substantial portions of the >- * Software. >- * >- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR >- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, >- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL >- * VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR >- * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, >- * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR >- * OTHER DEALINGS IN THE SOFTWARE. >- * >- * Authors: >- * Rickard E. 
(Rik) Faith <faith@valinux.com> >- * Jeff Hartmann <jhartmann@valinux.com> >- * Gareth Hughes <gareth@valinux.com> >- * Abraham vd Merwe <abraham@2d3d.co.za> >- * Keith Whitwell <keith@tungstengraphics.com> >- */ >- >-#include "drmP.h" >-#include "drm.h" >-#include "i830_drm.h" >-#include "i830_drv.h" >- >-#include "drm_pciids.h" >- >-static struct pci_device_id pciidlist[] = { >- i830_PCI_IDS >-}; >- >-static struct drm_driver driver = { >- .driver_features = >- DRIVER_USE_AGP | DRIVER_REQUIRE_AGP | DRIVER_USE_MTRR | >- DRIVER_HAVE_DMA | DRIVER_DMA_QUEUE, >-#if USE_IRQS >- .driver_features |= DRIVER_HAVE_IRQ | DRIVER_SHARED_IRQ, >-#endif >- .dev_priv_size = sizeof(drm_i830_buf_priv_t), >- .load = i830_driver_load, >- .lastclose = i830_driver_lastclose, >- .preclose = i830_driver_preclose, >- .device_is_agp = i830_driver_device_is_agp, >- .reclaim_buffers_locked = i830_driver_reclaim_buffers_locked, >- .dma_quiescent = i830_driver_dma_quiescent, >- .get_map_ofs = drm_core_get_map_ofs, >- .get_reg_ofs = drm_core_get_reg_ofs, >-#if USE_IRQS >- .irq_preinstall = i830_driver_irq_preinstall, >- .irq_postinstall = i830_driver_irq_postinstall, >- .irq_uninstall = i830_driver_irq_uninstall, >- .irq_handler = i830_driver_irq_handler, >-#endif >- .ioctls = i830_ioctls, >- .fops = { >- .owner = THIS_MODULE, >- .open = drm_open, >- .release = drm_release, >- .ioctl = drm_ioctl, >- .mmap = drm_mmap, >- .poll = drm_poll, >- .fasync = drm_fasync, >- }, >- >- .pci_driver = { >- .name = DRIVER_NAME, >- .id_table = pciidlist, >- }, >- >- .name = DRIVER_NAME, >- .desc = DRIVER_DESC, >- .date = DRIVER_DATE, >- .major = DRIVER_MAJOR, >- .minor = DRIVER_MINOR, >- .patchlevel = DRIVER_PATCHLEVEL, >-}; >- >-static int __init i830_init(void) >-{ >- driver.num_ioctls = i830_max_ioctl; >- return drm_init(&driver); >-} >- >-static void __exit i830_exit(void) >-{ >- drm_exit(&driver); >-} >- >-module_init(i830_init); >-module_exit(i830_exit); >- >-MODULE_AUTHOR(DRIVER_AUTHOR); 
>-MODULE_DESCRIPTION(DRIVER_DESC); >-MODULE_LICENSE("GPL and additional rights"); >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/i830_drv.h linux-2.6.23.i686/drivers/char/drm/i830_drv.h >--- linux-2.6.23.i686.orig/drivers/char/drm/i830_drv.h 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/i830_drv.h 1970-01-01 01:00:00.000000000 +0100 >@@ -1,293 +0,0 @@ >-/* i830_drv.h -- Private header for the I830 driver -*- linux-c -*- >- * Created: Mon Dec 13 01:50:01 1999 by jhartmann@precisioninsight.com >- * >- * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas. >- * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California. >- * All rights reserved. >- * >- * Permission is hereby granted, free of charge, to any person obtaining a >- * copy of this software and associated documentation files (the "Software"), >- * to deal in the Software without restriction, including without limitation >- * the rights to use, copy, modify, merge, publish, distribute, sublicense, >- * and/or sell copies of the Software, and to permit persons to whom the >- * Software is furnished to do so, subject to the following conditions: >- * >- * The above copyright notice and this permission notice (including the next >- * paragraph) shall be included in all copies or substantial portions of the >- * Software. >- * >- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR >- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, >- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL >- * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR >- * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, >- * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER >- * DEALINGS IN THE SOFTWARE. >- * >- * Authors: Rickard E. 
(Rik) Faith <faith@valinux.com> >- * Jeff Hartmann <jhartmann@valinux.com> >- * >- */ >- >-#ifndef _I830_DRV_H_ >-#define _I830_DRV_H_ >- >-/* General customization: >- */ >- >-#define DRIVER_AUTHOR "VA Linux Systems Inc." >- >-#define DRIVER_NAME "i830" >-#define DRIVER_DESC "Intel 830M" >-#define DRIVER_DATE "20021108" >- >-/* Interface history: >- * >- * 1.1: Original. >- * 1.2: ? >- * 1.3: New irq emit/wait ioctls. >- * New pageflip ioctl. >- * New getparam ioctl. >- * State for texunits 3&4 in sarea. >- * New (alternative) layout for texture state. >- */ >-#define DRIVER_MAJOR 1 >-#define DRIVER_MINOR 3 >-#define DRIVER_PATCHLEVEL 2 >- >-/* Driver will work either way: IRQ's save cpu time when waiting for >- * the card, but are subject to subtle interactions between bios, >- * hardware and the driver. >- */ >-/* XXX: Add vblank support? */ >-#define USE_IRQS 0 >- >-typedef struct drm_i830_buf_priv { >- u32 *in_use; >- int my_use_idx; >- int currently_mapped; >- void __user *virtual; >- void *kernel_virtual; >- drm_local_map_t map; >-} drm_i830_buf_priv_t; >- >-typedef struct _drm_i830_ring_buffer { >- int tail_mask; >- unsigned long Start; >- unsigned long End; >- unsigned long Size; >- u8 *virtual_start; >- int head; >- int tail; >- int space; >- drm_local_map_t map; >-} drm_i830_ring_buffer_t; >- >-typedef struct drm_i830_private { >- struct drm_map *sarea_map; >- struct drm_map *mmio_map; >- >- drm_i830_sarea_t *sarea_priv; >- drm_i830_ring_buffer_t ring; >- >- void *hw_status_page; >- unsigned long counter; >- >- dma_addr_t dma_status_page; >- >- struct drm_buf *mmap_buffer; >- >- u32 front_di1, back_di1, zi1; >- >- int back_offset; >- int depth_offset; >- int front_offset; >- int w, h; >- int pitch; >- int back_pitch; >- int depth_pitch; >- unsigned int cpp; >- >- int do_boxes; >- int dma_used; >- >- int current_page; >- int page_flipping; >- >- wait_queue_head_t irq_queue; >- atomic_t irq_received; >- atomic_t irq_emitted; >- >- int 
use_mi_batchbuffer_start;
>-
>-} drm_i830_private_t;
>-
>-extern struct drm_ioctl_desc i830_ioctls[];
>-extern int i830_max_ioctl;
>-
>-/* i830_irq.c */
>-extern int i830_irq_emit(struct drm_device *dev, void *data,
>-             struct drm_file *file_priv);
>-extern int i830_irq_wait(struct drm_device *dev, void *data,
>-             struct drm_file *file_priv);
>-
>-extern irqreturn_t i830_driver_irq_handler(DRM_IRQ_ARGS);
>-extern void i830_driver_irq_preinstall(struct drm_device * dev);
>-extern void i830_driver_irq_postinstall(struct drm_device * dev);
>-extern void i830_driver_irq_uninstall(struct drm_device * dev);
>-extern int i830_driver_load(struct drm_device *, unsigned long flags);
>-extern void i830_driver_preclose(struct drm_device * dev,
>-                 struct drm_file *file_priv);
>-extern void i830_driver_lastclose(struct drm_device * dev);
>-extern void i830_driver_reclaim_buffers_locked(struct drm_device * dev,
>-                           struct drm_file *file_priv);
>-extern int i830_driver_dma_quiescent(struct drm_device * dev);
>-extern int i830_driver_device_is_agp(struct drm_device * dev);
>-
>-#define I830_READ(reg)      DRM_READ32(dev_priv->mmio_map, reg)
>-#define I830_WRITE(reg,val) DRM_WRITE32(dev_priv->mmio_map, reg, val)
>-#define I830_READ16(reg)    DRM_READ16(dev_priv->mmio_map, reg)
>-#define I830_WRITE16(reg,val)   DRM_WRITE16(dev_priv->mmio_map, reg, val)
>-
>-#define I830_VERBOSE 0
>-
>-#define RING_LOCALS unsigned int outring, ringmask, outcount; \
>-            volatile char *virt;
>-
>-#define BEGIN_LP_RING(n) do {               \
>-    if (I830_VERBOSE)               \
>-        printk("BEGIN_LP_RING(%d) in %s\n", \
>-               n, __FUNCTION__);        \
>-    if (dev_priv->ring.space < n*4)         \
>-        i830_wait_ring(dev, n*4, __FUNCTION__); \
>-    outcount = 0;                   \
>-    outring = dev_priv->ring.tail;          \
>-    ringmask = dev_priv->ring.tail_mask;        \
>-    virt = dev_priv->ring.virtual_start;        \
>-} while (0)
>-
>-#define OUT_RING(n) do {                    \
>-    if (I830_VERBOSE) printk(" OUT_RING %x\n", (int)(n));   \
>-    *(volatile unsigned int *)(virt + outring) = n;     \
>-    outcount++;                     \
>-    outring += 4;                   \
>-    outring &= ringmask;                \
>-} while (0)
>-
>-#define ADVANCE_LP_RING() do {                  \
>-    if (I830_VERBOSE) printk("ADVANCE_LP_RING %x\n", outring);  \
>-    dev_priv->ring.tail = outring;              \
>-    dev_priv->ring.space -= outcount * 4;           \
>-    I830_WRITE(LP_RING + RING_TAIL, outring);       \
>-} while(0)
>-
>-extern int i830_wait_ring(struct drm_device * dev, int n, const char *caller);
>-
>-#define GFX_OP_USER_INTERRUPT       ((0<<29)|(2<<23))
>-#define GFX_OP_BREAKPOINT_INTERRUPT ((0<<29)|(1<<23))
>-#define CMD_REPORT_HEAD         (7<<23)
>-#define CMD_STORE_DWORD_IDX     ((0x21<<23) | 0x1)
>-#define CMD_OP_BATCH_BUFFER     ((0x0<<29)|(0x30<<23)|0x1)
>-
>-#define STATE3D_LOAD_STATE_IMMEDIATE_2  ((0x3<<29)|(0x1d<<24)|(0x03<<16))
>-#define LOAD_TEXTURE_MAP0       (1<<11)
>-
>-#define INST_PARSER_CLIENT      0x00000000
>-#define INST_OP_FLUSH           0x02000000
>-#define INST_FLUSH_MAP_CACHE        0x00000001
>-
>-#define BB1_START_ADDR_MASK     (~0x7)
>-#define BB1_PROTECTED           (1<<0)
>-#define BB1_UNPROTECTED         (0<<0)
>-#define BB2_END_ADDR_MASK       (~0x7)
>-
>-#define I830REG_HWSTAM      0x02098
>-#define I830REG_INT_IDENTITY_R  0x020a4
>-#define I830REG_INT_MASK_R  0x020a8
>-#define I830REG_INT_ENABLE_R    0x020a0
>-
>-#define I830_IRQ_RESERVED   ((1<<13)|(3<<2))
>-
>-#define LP_RING         0x2030
>-#define HP_RING         0x2040
>-#define RING_TAIL       0x00
>-#define TAIL_ADDR       0x001FFFF8
>-#define RING_HEAD       0x04
>-#define HEAD_WRAP_COUNT     0xFFE00000
>-#define HEAD_WRAP_ONE       0x00200000
>-#define HEAD_ADDR       0x001FFFFC
>-#define RING_START      0x08
>-#define START_ADDR      0x0xFFFFF000
>-#define RING_LEN        0x0C
>-#define RING_NR_PAGES       0x001FF000
>-#define RING_REPORT_MASK    0x00000006
>-#define RING_REPORT_64K     0x00000002
>-#define RING_REPORT_128K    0x00000004
>-#define RING_NO_REPORT      0x00000000
>-#define RING_VALID_MASK     0x00000001
>-#define RING_VALID      0x00000001
>-#define RING_INVALID        0x00000000
>-
>-#define GFX_OP_SCISSOR      ((0x3<<29)|(0x1c<<24)|(0x10<<19))
>-#define SC_UPDATE_SCISSOR   (0x1<<1)
>-#define SC_ENABLE_MASK      (0x1<<0)
>-#define SC_ENABLE       (0x1<<0)
>-
>-#define GFX_OP_SCISSOR_INFO ((0x3<<29)|(0x1d<<24)|(0x81<<16)|(0x1))
>-#define SCI_YMIN_MASK       (0xffff<<16)
>-#define SCI_XMIN_MASK       (0xffff<<0)
>-#define SCI_YMAX_MASK       (0xffff<<16)
>-#define SCI_XMAX_MASK       (0xffff<<0)
>-
>-#define GFX_OP_SCISSOR_ENABLE   ((0x3<<29)|(0x1c<<24)|(0x10<<19))
>-#define GFX_OP_SCISSOR_RECT ((0x3<<29)|(0x1d<<24)|(0x81<<16)|1)
>-#define GFX_OP_COLOR_FACTOR ((0x3<<29)|(0x1d<<24)|(0x1<<16)|0x0)
>-#define GFX_OP_STIPPLE      ((0x3<<29)|(0x1d<<24)|(0x83<<16))
>-#define GFX_OP_MAP_INFO     ((0x3<<29)|(0x1d<<24)|0x4)
>-#define GFX_OP_DESTBUFFER_VARS  ((0x3<<29)|(0x1d<<24)|(0x85<<16)|0x0)
>-#define GFX_OP_DRAWRECT_INFO    ((0x3<<29)|(0x1d<<24)|(0x80<<16)|(0x3))
>-#define GFX_OP_PRIMITIVE    ((0x3<<29)|(0x1f<<24))
>-
>-#define CMD_OP_DESTBUFFER_INFO  ((0x3<<29)|(0x1d<<24)|(0x8e<<16)|1)
>-
>-#define CMD_OP_DISPLAYBUFFER_INFO ((0x0<<29)|(0x14<<23)|2)
>-#define ASYNC_FLIP      (1<<22)
>-
>-#define CMD_3D          (0x3<<29)
>-#define STATE3D_CONST_BLEND_COLOR_CMD   (CMD_3D|(0x1d<<24)|(0x88<<16))
>-#define STATE3D_MAP_COORD_SETBIND_CMD   (CMD_3D|(0x1d<<24)|(0x02<<16))
>-
>-#define BR00_BITBLT_CLIENT  0x40000000
>-#define BR00_OP_COLOR_BLT   0x10000000
>-#define BR00_OP_SRC_COPY_BLT    0x10C00000
>-#define BR13_SOLID_PATTERN  0x80000000
>-
>-#define BUF_3D_ID_COLOR_BACK    (0x3<<24)
>-#define BUF_3D_ID_DEPTH     (0x7<<24)
>-#define BUF_3D_USE_FENCE    (1<<23)
>-#define BUF_3D_PITCH(x)     (((x)/4)<<2)
>-
>-#define CMD_OP_MAP_PALETTE_LOAD ((3<<29)|(0x1d<<24)|(0x82<<16)|255)
>-#define MAP_PALETTE_NUM(x)  ((x<<8) & (1<<8))
>-#define MAP_PALETTE_BOTH    (1<<11)
>-
>-#define XY_COLOR_BLT_CMD    ((2<<29)|(0x50<<22)|0x4)
>-#define XY_COLOR_BLT_WRITE_ALPHA    (1<<21)
>-#define XY_COLOR_BLT_WRITE_RGB      (1<<20)
>-
>-#define XY_SRC_COPY_BLT_CMD     ((2<<29)|(0x53<<22)|6)
>-#define XY_SRC_COPY_BLT_WRITE_ALPHA (1<<21)
>-#define XY_SRC_COPY_BLT_WRITE_RGB   (1<<20)
>-
>-#define MI_BATCH_BUFFER     ((0x30<<23)|1)
>-#define MI_BATCH_BUFFER_START   (0x31<<23)
>-#define MI_BATCH_BUFFER_END (0xA<<23)
>-#define MI_BATCH_NON_SECURE (1)
>-
>-#define MI_WAIT_FOR_EVENT   ((0x3<<23))
>-#define MI_WAIT_FOR_PLANE_A_FLIP    (1<<2)
>-#define MI_WAIT_FOR_PLANE_A_SCANLINES   (1<<1)
>-
>-#define MI_LOAD_SCAN_LINES_INCL ((0x12<<23))
>-
>-#endif
>diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/i830_irq.c linux-2.6.23.i686/drivers/char/drm/i830_irq.c
>--- linux-2.6.23.i686.orig/drivers/char/drm/i830_irq.c 2008-01-06 18:54:41.000000000 +0100
>+++ linux-2.6.23.i686/drivers/char/drm/i830_irq.c 1970-01-01 01:00:00.000000000 +0100
>@@ -1,186 +0,0 @@
>-/* i830_dma.c -- DMA support for the I830 -*- linux-c -*-
>- *
>- * Copyright 2002 Tungsten Graphics, Inc.
>- * All Rights Reserved.
>- *
>- * Permission is hereby granted, free of charge, to any person obtaining a
>- * copy of this software and associated documentation files (the "Software"),
>- * to deal in the Software without restriction, including without limitation
>- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
>- * and/or sell copies of the Software, and to permit persons to whom the
>- * Software is furnished to do so, subject to the following conditions:
>- *
>- * The above copyright notice and this permission notice (including the next
>- * paragraph) shall be included in all copies or substantial portions of the
>- * Software.
>- *
>- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
>- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
>- * TUNGSTEN GRAPHICS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
>- * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
>- * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
>- * DEALINGS IN THE SOFTWARE.
>- *
>- * Authors: Keith Whitwell <keith@tungstengraphics.com>
>- *
>- */
>-
>-#include "drmP.h"
>-#include "drm.h"
>-#include "i830_drm.h"
>-#include "i830_drv.h"
>-#include <linux/interrupt.h>    /* For task queue support */
>-#include <linux/delay.h>
>-
>-irqreturn_t i830_driver_irq_handler(DRM_IRQ_ARGS)
>-{
>-    struct drm_device *dev = (struct drm_device *) arg;
>-    drm_i830_private_t *dev_priv = (drm_i830_private_t *) dev->dev_private;
>-    u16 temp;
>-
>-    temp = I830_READ16(I830REG_INT_IDENTITY_R);
>-    DRM_DEBUG("%x\n", temp);
>-
>-    if (!(temp & 2))
>-        return IRQ_NONE;
>-
>-    I830_WRITE16(I830REG_INT_IDENTITY_R, temp);
>-
>-    atomic_inc(&dev_priv->irq_received);
>-    wake_up_interruptible(&dev_priv->irq_queue);
>-
>-    return IRQ_HANDLED;
>-}
>-
>-static int i830_emit_irq(struct drm_device * dev)
>-{
>-    drm_i830_private_t *dev_priv = dev->dev_private;
>-    RING_LOCALS;
>-
>-    DRM_DEBUG("%s\n", __FUNCTION__);
>-
>-    atomic_inc(&dev_priv->irq_emitted);
>-
>-    BEGIN_LP_RING(2);
>-    OUT_RING(0);
>-    OUT_RING(GFX_OP_USER_INTERRUPT);
>-    ADVANCE_LP_RING();
>-
>-    return atomic_read(&dev_priv->irq_emitted);
>-}
>-
>-static int i830_wait_irq(struct drm_device * dev, int irq_nr)
>-{
>-    drm_i830_private_t *dev_priv = (drm_i830_private_t *) dev->dev_private;
>-    DECLARE_WAITQUEUE(entry, current);
>-    unsigned long end = jiffies + HZ * 3;
>-    int ret = 0;
>-
>-    DRM_DEBUG("%s\n", __FUNCTION__);
>-
>-    if (atomic_read(&dev_priv->irq_received) >= irq_nr)
>-        return 0;
>-
>-    dev_priv->sarea_priv->perf_boxes |= I830_BOX_WAIT;
>-
>-    add_wait_queue(&dev_priv->irq_queue, &entry);
>-
>-    for (;;) {
>-        __set_current_state(TASK_INTERRUPTIBLE);
>-        if (atomic_read(&dev_priv->irq_received) >= irq_nr)
>-            break;
>-        if ((signed)(end - jiffies) <= 0) {
>-            DRM_ERROR("timeout iir %x imr %x ier %x hwstam %x\n",
>-                  I830_READ16(I830REG_INT_IDENTITY_R),
>-                  I830_READ16(I830REG_INT_MASK_R),
>-                  I830_READ16(I830REG_INT_ENABLE_R),
>-                  I830_READ16(I830REG_HWSTAM));
>-
>-            ret = -EBUSY;   /* Lockup? Missed irq? */
>-            break;
>-        }
>-        schedule_timeout(HZ * 3);
>-        if (signal_pending(current)) {
>-            ret = -EINTR;
>-            break;
>-        }
>-    }
>-
>-    __set_current_state(TASK_RUNNING);
>-    remove_wait_queue(&dev_priv->irq_queue, &entry);
>-    return ret;
>-}
>-
>-/* Needs the lock as it touches the ring.
>- */
>-int i830_irq_emit(struct drm_device *dev, void *data,
>-          struct drm_file *file_priv)
>-{
>-    drm_i830_private_t *dev_priv = dev->dev_private;
>-    drm_i830_irq_emit_t *emit = data;
>-    int result;
>-
>-    LOCK_TEST_WITH_RETURN(dev, file_priv);
>-
>-    if (!dev_priv) {
>-        DRM_ERROR("%s called with no initialization\n", __FUNCTION__);
>-        return -EINVAL;
>-    }
>-
>-    result = i830_emit_irq(dev);
>-
>-    if (copy_to_user(emit->irq_seq, &result, sizeof(int))) {
>-        DRM_ERROR("copy_to_user\n");
>-        return -EFAULT;
>-    }
>-
>-    return 0;
>-}
>-
>-/* Doesn't need the hardware lock.
>- */
>-int i830_irq_wait(struct drm_device *dev, void *data,
>-          struct drm_file *file_priv)
>-{
>-    drm_i830_private_t *dev_priv = dev->dev_private;
>-    drm_i830_irq_wait_t *irqwait = data;
>-
>-    if (!dev_priv) {
>-        DRM_ERROR("%s called with no initialization\n", __FUNCTION__);
>-        return -EINVAL;
>-    }
>-
>-    return i830_wait_irq(dev, irqwait->irq_seq);
>-}
>-
>-/* drm_dma.h hooks
>-*/
>-void i830_driver_irq_preinstall(struct drm_device * dev)
>-{
>-    drm_i830_private_t *dev_priv = (drm_i830_private_t *) dev->dev_private;
>-
>-    I830_WRITE16(I830REG_HWSTAM, 0xffff);
>-    I830_WRITE16(I830REG_INT_MASK_R, 0x0);
>-    I830_WRITE16(I830REG_INT_ENABLE_R, 0x0);
>-    atomic_set(&dev_priv->irq_received, 0);
>-    atomic_set(&dev_priv->irq_emitted, 0);
>-    init_waitqueue_head(&dev_priv->irq_queue);
>-}
>-
>-void i830_driver_irq_postinstall(struct drm_device * dev)
>-{
>-    drm_i830_private_t *dev_priv = (drm_i830_private_t *) dev->dev_private;
>-
>-    I830_WRITE16(I830REG_INT_ENABLE_R, 0x2);
>-}
>-
>-void i830_driver_irq_uninstall(struct drm_device * dev)
>-{
>-    drm_i830_private_t *dev_priv = (drm_i830_private_t *) dev->dev_private;
>-    if (!dev_priv)
>-        return;
>-
>-    I830_WRITE16(I830REG_INT_MASK_R, 0xffff);
>-    I830_WRITE16(I830REG_INT_ENABLE_R, 0x0);
>-}
>diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/i915_buffer.c linux-2.6.23.i686/drivers/char/drm/i915_buffer.c
>--- linux-2.6.23.i686.orig/drivers/char/drm/i915_buffer.c 1970-01-01 01:00:00.000000000 +0100
>+++ linux-2.6.23.i686/drivers/char/drm/i915_buffer.c 2008-01-06 09:24:57.000000000 +0100
>@@ -0,0 +1,292 @@
>+/**************************************************************************
>+ *
>+ * Copyright 2006 Tungsten Graphics, Inc., Bismarck, ND., USA
>+ * All Rights Reserved.
>+ *
>+ * Permission is hereby granted, free of charge, to any person obtaining a
>+ * copy of this software and associated documentation files (the
>+ * "Software"), to deal in the Software without restriction, including
>+ * without limitation the rights to use, copy, modify, merge, publish,
>+ * distribute, sub license, and/or sell copies of the Software, and to
>+ * permit persons to whom the Software is furnished to do so, subject to
>+ * the following conditions:
>+ *
>+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
>+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
>+ * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
>+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
>+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
>+ * USE OR OTHER DEALINGS IN THE SOFTWARE.
>+ *
>+ * The above copyright notice and this permission notice (including the
>+ * next paragraph) shall be included in all copies or substantial portions
>+ * of the Software.
>+ *
>+ *
>+ **************************************************************************/
>+/*
>+ * Authors: Thomas Hellström <thomas-at-tungstengraphics-dot-com>
>+ */
>+
>+#include "drmP.h"
>+#include "i915_drm.h"
>+#include "i915_drv.h"
>+
>+struct drm_ttm_backend *i915_create_ttm_backend_entry(struct drm_device *dev)
>+{
>+    return drm_agp_init_ttm(dev);
>+}
>+
>+int i915_fence_type(struct drm_buffer_object *bo,
>+            uint32_t *fclass,
>+            uint32_t *type)
>+{
>+    if (bo->mem.proposed_flags & (DRM_BO_FLAG_READ | DRM_BO_FLAG_WRITE))
>+        *type = 3;
>+    else
>+        *type = 1;
>+    return 0;
>+}
>+
>+int i915_invalidate_caches(struct drm_device *dev, uint64_t flags)
>+{
>+    /*
>+     * FIXME: Only emit once per batchbuffer submission.
>+     */
>+
>+    uint32_t flush_cmd = MI_NO_WRITE_FLUSH;
>+
>+    if (flags & DRM_BO_FLAG_READ)
>+        flush_cmd |= MI_READ_FLUSH;
>+    if (flags & DRM_BO_FLAG_EXE)
>+        flush_cmd |= MI_EXE_FLUSH;
>+
>+    return i915_emit_mi_flush(dev, flush_cmd);
>+}
>+
>+int i915_init_mem_type(struct drm_device *dev, uint32_t type,
>+               struct drm_mem_type_manager *man)
>+{
>+    switch (type) {
>+    case DRM_BO_MEM_LOCAL:
>+        man->flags = _DRM_FLAG_MEMTYPE_MAPPABLE |
>+            _DRM_FLAG_MEMTYPE_CACHED;
>+        man->drm_bus_maptype = 0;
>+        man->gpu_offset = 0;
>+        break;
>+    case DRM_BO_MEM_TT:
>+        if (!(drm_core_has_AGP(dev) && dev->agp)) {
>+            DRM_ERROR("AGP is not enabled for memory type %u\n",
>+                  (unsigned)type);
>+            return -EINVAL;
>+        }
>+        man->io_offset = dev->agp->agp_info.aper_base;
>+        man->io_size = dev->agp->agp_info.aper_size * 1024 * 1024;
>+        man->io_addr = NULL;
>+        man->flags = _DRM_FLAG_MEMTYPE_MAPPABLE |
>+            _DRM_FLAG_MEMTYPE_CSELECT | _DRM_FLAG_NEEDS_IOREMAP;
>+        man->drm_bus_maptype = _DRM_AGP;
>+        man->gpu_offset = 0;
>+        break;
>+    case DRM_BO_MEM_PRIV0:
>+        if (!(drm_core_has_AGP(dev) && dev->agp)) {
>+            DRM_ERROR("AGP is not enabled for memory type %u\n",
>+                  (unsigned)type);
>+            return -EINVAL;
>+        }
>+        man->io_offset = dev->agp->agp_info.aper_base;
>+        man->io_size = dev->agp->agp_info.aper_size * 1024 * 1024;
>+        man->io_addr = NULL;
>+        man->flags = _DRM_FLAG_MEMTYPE_MAPPABLE |
>+            _DRM_FLAG_MEMTYPE_FIXED | _DRM_FLAG_NEEDS_IOREMAP;
>+        man->drm_bus_maptype = _DRM_AGP;
>+        man->gpu_offset = 0;
>+        break;
>+    default:
>+        DRM_ERROR("Unsupported memory type %u\n", (unsigned)type);
>+        return -EINVAL;
>+    }
>+    return 0;
>+}
>+
>+/*
>+ * i915_evict_flags:
>+ *
>+ * @bo: the buffer object to be evicted
>+ *
>+ * Return the bo flags for a buffer which is not mapped to the hardware.
>+ * These will be placed in proposed_flags so that when the move is
>+ * finished, they'll end up in bo->mem.flags
>+ */
>+uint64_t i915_evict_flags(struct drm_buffer_object *bo)
>+{
>+    switch (bo->mem.mem_type) {
>+    case DRM_BO_MEM_LOCAL:
>+    case DRM_BO_MEM_TT:
>+        return DRM_BO_FLAG_MEM_LOCAL;
>+    default:
>+        return DRM_BO_FLAG_MEM_TT | DRM_BO_FLAG_CACHED;
>+    }
>+}
>+
>+#if 0 /* See comment below */
>+
>+static void i915_emit_copy_blit(struct drm_device * dev,
>+                uint32_t src_offset,
>+                uint32_t dst_offset,
>+                uint32_t pages, int direction)
>+{
>+    uint32_t cur_pages;
>+    uint32_t stride = PAGE_SIZE;
>+    drm_i915_private_t *dev_priv = dev->dev_private;
>+    RING_LOCALS;
>+
>+    if (!dev_priv)
>+        return;
>+
>+    i915_kernel_lost_context(dev);
>+    while (pages > 0) {
>+        cur_pages = pages;
>+        if (cur_pages > 2048)
>+            cur_pages = 2048;
>+        pages -= cur_pages;
>+
>+        BEGIN_LP_RING(6);
>+        OUT_RING(SRC_COPY_BLT_CMD | XY_SRC_COPY_BLT_WRITE_ALPHA |
>+             XY_SRC_COPY_BLT_WRITE_RGB);
>+        OUT_RING((stride & 0xffff) | (0xcc << 16) | (1 << 24) |
>+             (1 << 25) | (direction ? (1 << 30) : 0));
>+        OUT_RING((cur_pages << 16) | PAGE_SIZE);
>+        OUT_RING(dst_offset);
>+        OUT_RING(stride & 0xffff);
>+        OUT_RING(src_offset);
>+        ADVANCE_LP_RING();
>+    }
>+    return;
>+}
>+
>+static int i915_move_blit(struct drm_buffer_object * bo,
>+              int evict, int no_wait, struct drm_bo_mem_reg * new_mem)
>+{
>+    struct drm_bo_mem_reg *old_mem = &bo->mem;
>+    int dir = 0;
>+
>+    if ((old_mem->mem_type == new_mem->mem_type) &&
>+        (new_mem->mm_node->start <
>+         old_mem->mm_node->start + old_mem->mm_node->size)) {
>+        dir = 1;
>+    }
>+
>+    i915_emit_copy_blit(bo->dev,
>+                old_mem->mm_node->start << PAGE_SHIFT,
>+                new_mem->mm_node->start << PAGE_SHIFT,
>+                new_mem->num_pages, dir);
>+
>+    i915_emit_mi_flush(bo->dev, MI_READ_FLUSH | MI_EXE_FLUSH);
>+
>+    return drm_bo_move_accel_cleanup(bo, evict, no_wait, 0,
>+                     DRM_FENCE_TYPE_EXE |
>+                     DRM_I915_FENCE_TYPE_RW,
>+                     DRM_I915_FENCE_FLAG_FLUSHED, new_mem);
>+}
>+
>+/*
>+ * Flip destination ttm into cached-coherent AGP,
>+ * then blit and subsequently move out again.
>+ */
>+
>+static int i915_move_flip(struct drm_buffer_object * bo,
>+              int evict, int no_wait, struct drm_bo_mem_reg * new_mem)
>+{
>+    struct drm_device *dev = bo->dev;
>+    struct drm_bo_mem_reg tmp_mem;
>+    int ret;
>+
>+    tmp_mem = *new_mem;
>+    tmp_mem.mm_node = NULL;
>+    tmp_mem.mask = DRM_BO_FLAG_MEM_TT |
>+        DRM_BO_FLAG_CACHED | DRM_BO_FLAG_FORCE_CACHING;
>+
>+    ret = drm_bo_mem_space(bo, &tmp_mem, no_wait);
>+    if (ret)
>+        return ret;
>+
>+    ret = drm_bind_ttm(bo->ttm, &tmp_mem);
>+    if (ret)
>+        goto out_cleanup;
>+
>+    ret = i915_move_blit(bo, 1, no_wait, &tmp_mem);
>+    if (ret)
>+        goto out_cleanup;
>+
>+    ret = drm_bo_move_ttm(bo, evict, no_wait, new_mem);
>+out_cleanup:
>+    if (tmp_mem.mm_node) {
>+        mutex_lock(&dev->struct_mutex);
>+        if (tmp_mem.mm_node != bo->pinned_node)
>+            drm_mm_put_block(tmp_mem.mm_node);
>+        tmp_mem.mm_node = NULL;
>+        mutex_unlock(&dev->struct_mutex);
>+    }
>+    return ret;
>+}
>+
>+#endif
>+
>+/*
>+ * Disable i915_move_flip for now, since we can't guarantee that the hardware
>+ * lock is held here. To re-enable we need to make sure either
>+ * a) The X server is using DRM to submit commands to the ring, or
>+ * b) DRM can use the HP ring for these blits. This means i915 needs to
>+ *    implement a new ring submission mechanism and fence class.
>+ */
>+int i915_move(struct drm_buffer_object *bo,
>+          int evict, int no_wait, struct drm_bo_mem_reg *new_mem)
>+{
>+    struct drm_bo_mem_reg *old_mem = &bo->mem;
>+
>+    if (old_mem->mem_type == DRM_BO_MEM_LOCAL) {
>+        return drm_bo_move_memcpy(bo, evict, no_wait, new_mem);
>+    } else if (new_mem->mem_type == DRM_BO_MEM_LOCAL) {
>+        if (0) /*i915_move_flip(bo, evict, no_wait, new_mem)*/
>+            return drm_bo_move_memcpy(bo, evict, no_wait, new_mem);
>+    } else {
>+        if (0) /*i915_move_blit(bo, evict, no_wait, new_mem)*/
>+            return drm_bo_move_memcpy(bo, evict, no_wait, new_mem);
>+    }
>+    return 0;
>+}
>+
>+#if (LINUX_VERSION_CODE < KERNEL_VERSION(2,6,24))
>+static inline void clflush(volatile void *__p)
>+{
>+    asm volatile("clflush %0" : "+m" (*(char __force *)__p));
>+}
>+#endif
>+
>+static inline void drm_cache_flush_addr(void *virt)
>+{
>+    int i;
>+
>+    for (i = 0; i < PAGE_SIZE; i += boot_cpu_data.x86_clflush_size)
>+        clflush(virt+i);
>+}
>+
>+static inline void drm_cache_flush_page(struct page *p)
>+{
>+    drm_cache_flush_addr(page_address(p));
>+}
>+
>+void i915_flush_ttm(struct drm_ttm *ttm)
>+{
>+    int i;
>+
>+    if (!ttm)
>+        return;
>+
>+    DRM_MEMORYBARRIER();
>+    for (i = ttm->num_pages - 1; i >= 0; i--)
>+        drm_cache_flush_page(drm_ttm_get_page(ttm, i));
>+    DRM_MEMORYBARRIER();
>+}
>diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/i915_compat.c linux-2.6.23.i686/drivers/char/drm/i915_compat.c
>--- linux-2.6.23.i686.orig/drivers/char/drm/i915_compat.c 1970-01-01 01:00:00.000000000 +0100
>+++ linux-2.6.23.i686/drivers/char/drm/i915_compat.c 2008-01-06 09:24:57.000000000 +0100
>@@ -0,0 +1,205 @@
>+#include "drmP.h"
>+
>+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,25)
>+
>+#include "i915_drm.h"
>+#include "i915_drv.h"
>+
>+#define PCI_DEVICE_ID_INTEL_82946GZ_HB  0x2970
>+#define PCI_DEVICE_ID_INTEL_82965G_1_HB 0x2980
>+#define PCI_DEVICE_ID_INTEL_82965Q_HB   0x2990
>+#define PCI_DEVICE_ID_INTEL_82965G_HB   0x29A0
>+#define PCI_DEVICE_ID_INTEL_82965GM_HB  0x2A00
>+#define PCI_DEVICE_ID_INTEL_82965GME_HB 0x2A10
>+#define PCI_DEVICE_ID_INTEL_82945GME_HB 0x27AC
>+#define PCI_DEVICE_ID_INTEL_G33_HB  0x29C0
>+#define PCI_DEVICE_ID_INTEL_Q35_HB  0x29B0
>+#define PCI_DEVICE_ID_INTEL_Q33_HB  0x29D0
>+
>+#define I915_IFPADDR    0x60
>+#define I965_IFPADDR    0x70
>+
>+static struct _i9xx_private_compat {
>+    void __iomem *flush_page;
>+    struct resource ifp_resource;
>+} i9xx_private;
>+
>+static struct _i8xx_private_compat {
>+    void *flush_page;
>+    struct page *page;
>+} i8xx_private;
>+
>+static void
>+intel_compat_align_resource(void *data, struct resource *res,
>+                resource_size_t size, resource_size_t align)
>+{
>+    return;
>+}
>+
>+
>+static int intel_alloc_chipset_flush_resource(struct pci_dev *pdev)
>+{
>+    int ret;
>+    ret = pci_bus_alloc_resource(pdev->bus, &i9xx_private.ifp_resource, PAGE_SIZE,
>+                     PAGE_SIZE, PCIBIOS_MIN_MEM, 0,
>+                     intel_compat_align_resource, pdev);
>+    if (ret != 0)
>+        return ret;
>+
>+    return 0;
>+}
>+
>+static void intel_i915_setup_chipset_flush(struct pci_dev *pdev)
>+{
>+    int ret;
>+    u32 temp;
>+
>+    pci_read_config_dword(pdev, I915_IFPADDR, &temp);
>+    if (!(temp & 0x1)) {
>+        intel_alloc_chipset_flush_resource(pdev);
>+
>+        pci_write_config_dword(pdev, I915_IFPADDR, (i9xx_private.ifp_resource.start & 0xffffffff) | 0x1);
>+    } else {
>+        temp &= ~1;
>+
>+        i9xx_private.ifp_resource.start = temp;
>+        i9xx_private.ifp_resource.end = temp + PAGE_SIZE;
>+        ret = request_resource(&iomem_resource, &i9xx_private.ifp_resource);
>+        if (ret) {
>+            i9xx_private.ifp_resource.start = 0;
>+            printk("Failed inserting resource into tree\n");
>+        }
>+    }
>+}
>+
>+static void intel_i965_g33_setup_chipset_flush(struct pci_dev *pdev)
>+{
>+    u32 temp_hi, temp_lo;
>+    int ret;
>+
>+    pci_read_config_dword(pdev, I965_IFPADDR + 4, &temp_hi);
>+    pci_read_config_dword(pdev, I965_IFPADDR, &temp_lo);
>+
>+    if (!(temp_lo & 0x1)) {
>+
>+        intel_alloc_chipset_flush_resource(pdev);
>+
>+        pci_write_config_dword(pdev, I965_IFPADDR + 4,
>+            upper_32_bits(i9xx_private.ifp_resource.start));
>+        pci_write_config_dword(pdev, I965_IFPADDR, (i9xx_private.ifp_resource.start & 0xffffffff) | 0x1);
>+    } else {
>+        u64 l64;
>+
>+        temp_lo &= ~0x1;
>+        l64 = ((u64)temp_hi << 32) | temp_lo;
>+
>+        i9xx_private.ifp_resource.start = l64;
>+        i9xx_private.ifp_resource.end = l64 + PAGE_SIZE;
>+        ret = request_resource(&iomem_resource, &i9xx_private.ifp_resource);
>+        if (!ret) {
>+            i9xx_private.ifp_resource.start = 0;
>+            printk("Failed inserting resource into tree\n");
>+        }
>+    }
>+}
>+
>+static void intel_i8xx_fini_flush(struct drm_device *dev)
>+{
>+    kunmap(i8xx_private.page);
>+    i8xx_private.flush_page = NULL;
>+    unmap_page_from_agp(i8xx_private.page);
>+    flush_agp_mappings();
>+
>+    __free_page(i8xx_private.page);
>+}
>+
>+static void intel_i8xx_setup_flush(struct drm_device *dev)
>+{
>+
>+    i8xx_private.page = alloc_page(GFP_KERNEL | __GFP_ZERO | GFP_DMA32);
>+    if (!i8xx_private.page) {
>+        return;
>+    }
>+
>+    /* make page uncached */
>+    map_page_into_agp(i8xx_private.page);
>+    flush_agp_mappings();
>+
>+    i8xx_private.flush_page = kmap(i8xx_private.page);
>+    if (!i8xx_private.flush_page)
>+        intel_i8xx_fini_flush(dev);
>+}
>+
>+
>+static void intel_i8xx_flush_page(struct drm_device *dev)
>+{
>+    unsigned int *pg = i8xx_private.flush_page;
>+    int i;
>+
>+    /* HAI NUT CAN I HAZ HAMMER?? */
>+    for (i = 0; i < 256; i++)
>+        *(pg + i) = i;
>+
>+    DRM_MEMORYBARRIER();
>+}
>+
>+static void intel_i9xx_setup_flush(struct drm_device *dev)
>+{
>+    struct pci_dev *agp_dev = dev->agp->agp_info.device;
>+
>+    i9xx_private.ifp_resource.name = "GMCH IFPBAR";
>+    i9xx_private.ifp_resource.flags = IORESOURCE_MEM;
>+
>+    /* Setup chipset flush for 915 */
>+    if (IS_I965G(dev) || IS_G33(dev)) {
>+        intel_i965_g33_setup_chipset_flush(agp_dev);
>+    } else {
>+        intel_i915_setup_chipset_flush(agp_dev);
>+    }
>+
>+    if (i9xx_private.ifp_resource.start) {
>+        i9xx_private.flush_page = ioremap_nocache(i9xx_private.ifp_resource.start, PAGE_SIZE);
>+        if (!i9xx_private.flush_page)
>+            printk("unable to ioremap flush page - no chipset flushing");
>+    }
>+}
>+
>+static void intel_i9xx_fini_flush(struct drm_device *dev)
>+{
>+    iounmap(i9xx_private.flush_page);
>+    release_resource(&i9xx_private.ifp_resource);
>+}
>+
>+static void intel_i9xx_flush_page(struct drm_device *dev)
>+{
>+    if (i9xx_private.flush_page)
>+        writel(1, i9xx_private.flush_page);
>+}
>+
>+void intel_init_chipset_flush_compat(struct drm_device *dev)
>+{
>+    /* not flush on i8xx */
>+    if (IS_I9XX(dev))
>+        intel_i9xx_setup_flush(dev);
>+    else
>+        intel_i8xx_setup_flush(dev);
>+
>+}
>+
>+void intel_fini_chipset_flush_compat(struct drm_device *dev)
>+{
>+    /* not flush on i8xx */
>+    if (IS_I9XX(dev))
>+        intel_i9xx_fini_flush(dev);
>+    else
>+        intel_i8xx_fini_flush(dev);
>+}
>+
>+void drm_agp_chipset_flush(struct drm_device *dev)
>+{
>+    if (IS_I9XX(dev))
>+        intel_i9xx_flush_page(dev);
>+    else
>+        intel_i8xx_flush_page(dev);
>+}
>+#endif
>diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/i915_dma.c linux-2.6.23.i686/drivers/char/drm/i915_dma.c
>--- linux-2.6.23.i686.orig/drivers/char/drm/i915_dma.c 2008-01-06 18:54:41.000000000 +0100
>+++ linux-2.6.23.i686/drivers/char/drm/i915_dma.c 2008-01-06 09:24:57.000000000 +0100
>@@ -31,17 +31,6 @@
> #include "i915_drm.h"
> #include "i915_drv.h"
>
>-#define IS_I965G(dev) (dev->pci_device == 0x2972 || \
>-               dev->pci_device == 0x2982 || \
>-               dev->pci_device == 0x2992 || \
>-               dev->pci_device == 0x29A2 || \
>-               dev->pci_device == 0x2A02 || \
>-               dev->pci_device == 0x2A12)
>-
>-#define IS_G33(dev) (dev->pci_device == 0x29b2 || \
>-             dev->pci_device == 0x29c2 || \
>-             dev->pci_device == 0x29d2)
>-
> /* Really want an OS-independent resettable timer. Would like to have
>  * this loop run for (eg) 3 sec, but have the timer reset every time
>  * the head pointer changes, so that EBUSY only happens if the ring
>@@ -62,12 +51,11 @@ int i915_wait_ring(struct drm_device * d
>         if (ring->space >= n)
>             return 0;
>
>-        dev_priv->sarea_priv->perf_boxes |= I915_BOX_WAIT;
>-
>         if (ring->head != last_head)
>             i = 0;
>
>         last_head = ring->head;
>+        DRM_UDELAY(1);
>     }
>
>     return -EBUSY;
>@@ -83,13 +71,11 @@ void i915_kernel_lost_context(struct drm
>     ring->space = ring->head - (ring->tail + 8);
>     if (ring->space < 0)
>         ring->space += ring->Size;
>-
>-    if (ring->head == ring->tail)
>-        dev_priv->sarea_priv->perf_boxes |= I915_BOX_RING_EMPTY;
> }
>
> static int i915_dma_cleanup(struct drm_device * dev)
> {
>+    drm_i915_private_t *dev_priv = dev->dev_private;
>     /* Make sure interrupts are disabled here because the uninstall ioctl
>      * may not have been called from userspace and after dev_private
>      * is freed, it's too late.
>@@ -97,57 +83,51 @@ static int i915_dma_cleanup(struct drm_d
>     if (dev->irq)
>         drm_irq_uninstall(dev);
>
>-    if (dev->dev_private) {
>-        drm_i915_private_t *dev_priv =
>-            (drm_i915_private_t *) dev->dev_private;
>-
>-        if (dev_priv->ring.virtual_start) {
>-            drm_core_ioremapfree(&dev_priv->ring.map, dev);
>-        }
>-
>-        if (dev_priv->status_page_dmah) {
>-            drm_pci_free(dev, dev_priv->status_page_dmah);
>-            /* Need to rewrite hardware status page */
>-            I915_WRITE(0x02080, 0x1ffff000);
>-        }
>-
>-        if (dev_priv->status_gfx_addr) {
>-            dev_priv->status_gfx_addr = 0;
>-            drm_core_ioremapfree(&dev_priv->hws_map, dev);
>-            I915_WRITE(0x2080, 0x1ffff000);
>-        }
>+    if (dev_priv->ring.virtual_start) {
>+        drm_core_ioremapfree(&dev_priv->ring.map, dev);
>+        dev_priv->ring.virtual_start = 0;
>+        dev_priv->ring.map.handle = 0;
>+        dev_priv->ring.map.size = 0;
>+    }
>
>-        drm_free(dev->dev_private, sizeof(drm_i915_private_t),
>-             DRM_MEM_DRIVER);
>+    if (dev_priv->status_page_dmah) {
>+        drm_pci_free(dev, dev_priv->status_page_dmah);
>+        dev_priv->status_page_dmah = NULL;
>+        /* Need to rewrite hardware status page */
>+        I915_WRITE(0x02080, 0x1ffff000);
>+    }
>
>-        dev->dev_private = NULL;
>+    if (dev_priv->status_gfx_addr) {
>+        dev_priv->status_gfx_addr = 0;
>+        drm_core_ioremapfree(&dev_priv->hws_map, dev);
>+        I915_WRITE(0x02080, 0x1ffff000);
>     }
>
>     return 0;
> }
>
>-static int i915_initialize(struct drm_device * dev,
>-               drm_i915_private_t * dev_priv,
>-               drm_i915_init_t * init)
>+static int i915_initialize(struct drm_device * dev, drm_i915_init_t * init)
> {
>-    memset(dev_priv, 0, sizeof(drm_i915_private_t));
>+    drm_i915_private_t *dev_priv = dev->dev_private;
>
>     dev_priv->sarea = drm_getsarea(dev);
>     if (!dev_priv->sarea) {
>         DRM_ERROR("can not find sarea!\n");
>-        dev->dev_private = (void *)dev_priv;
>         i915_dma_cleanup(dev);
>         return -EINVAL;
>     }
>
>     dev_priv->mmio_map = drm_core_findmap(dev, init->mmio_offset);
>     if (!dev_priv->mmio_map) {
>-        dev->dev_private = (void *)dev_priv;
>         i915_dma_cleanup(dev);
>         DRM_ERROR("can not find mmio map!\n");
>         return -EINVAL;
>     }
>
>+#ifdef I915_HAVE_BUFFER
>+    dev_priv->max_validate_buffers = I915_MAX_VALIDATE_BUFFERS;
>+#endif
>+
>     dev_priv->sarea_priv = (drm_i915_sarea_t *)
>         ((u8 *) dev_priv->sarea->handle + init->sarea_priv_offset);
>
>@@ -165,7 +145,6 @@ static int i915_initialize(struct drm_de
>     drm_core_ioremap(&dev_priv->ring.map, dev);
>
>     if (dev_priv->ring.map.handle == NULL) {
>-        dev->dev_private = (void *)dev_priv;
>         i915_dma_cleanup(dev);
>         DRM_ERROR("can not ioremap virtual address for"
>               " ring buffer\n");
>@@ -175,10 +154,7 @@ static int i915_initialize(struct drm_de
>     dev_priv->ring.virtual_start = dev_priv->ring.map.handle;
>
>     dev_priv->cpp = init->cpp;
>-    dev_priv->back_offset = init->back_offset;
>-    dev_priv->front_offset = init->front_offset;
>-    dev_priv->current_page = 0;
>-    dev_priv->sarea_priv->pf_current_page = dev_priv->current_page;
>+    dev_priv->sarea_priv->pf_current_page = 0;
>
>     /* We are using separate values as placeholders for mechanisms for
>      * private backbuffer/depthbuffer usage.
>@@ -191,13 +167,16 @@ static int i915_initialize(struct drm_de
>      */
>     dev_priv->allow_batchbuffer = 1;
>
>+    /* Enable vblank on pipe A for older X servers
>+     */
>+    dev_priv->vblank_pipe = DRM_I915_VBLANK_PIPE_A;
>+
>     /* Program Hardware Status Page */
>     if (!IS_G33(dev)) {
>         dev_priv->status_page_dmah =
>             drm_pci_alloc(dev, PAGE_SIZE, PAGE_SIZE, 0xffffffff);
>
>         if (!dev_priv->status_page_dmah) {
>-            dev->dev_private = (void *)dev_priv;
>             i915_dma_cleanup(dev);
>             DRM_ERROR("Can not allocate hardware status page\n");
>             return -ENOMEM;
>@@ -206,10 +185,13 @@ static int i915_initialize(struct drm_de
>         dev_priv->dma_status_page = dev_priv->status_page_dmah->busaddr;
>
>         memset(dev_priv->hw_status_page, 0, PAGE_SIZE);
>+
>         I915_WRITE(0x02080, dev_priv->dma_status_page);
>     }
>     DRM_DEBUG("Enabled hardware status page\n");
>-    dev->dev_private = (void *)dev_priv;
>+#ifdef I915_HAVE_BUFFER
>+    mutex_init(&dev_priv->cmdbuf_mutex);
>+#endif
>     return 0;
> }
>
>@@ -217,7 +199,7 @@ static int i915_dma_resume(struct drm_de
> {
>     drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private;
>
>-    DRM_DEBUG("%s\n", __FUNCTION__);
>+    DRM_DEBUG("\n");
>
>     if (!dev_priv->sarea) {
>         DRM_ERROR("can not find sarea!\n");
>@@ -254,17 +236,12 @@ static int i915_dma_resume(struct drm_de
> static int i915_dma_init(struct drm_device *dev, void *data,
>              struct drm_file *file_priv)
> {
>-    drm_i915_private_t *dev_priv;
>     drm_i915_init_t *init = data;
>     int retcode = 0;
>
>     switch (init->func) {
>     case I915_INIT_DMA:
>-        dev_priv = drm_alloc(sizeof(drm_i915_private_t),
>-                     DRM_MEM_DRIVER);
>-        if (dev_priv == NULL)
>-            return -ENOMEM;
>-        retcode = i915_initialize(dev, dev_priv, init);
>+        retcode = i915_initialize(dev, init);
>         break;
>     case I915_CLEANUP_DMA:
>         retcode = i915_dma_cleanup(dev);
>@@ -351,12 +328,13 @@ static int validate_cmd(int cmd)
> {
>     int ret = do_validate_cmd(cmd);
>
>-/* printk("validate_cmd( %x ): %d\n", cmd, ret); */
>+/* printk("validate_cmd( %x ): %d\n", cmd, ret); */
>
>     return ret;
> }
>
>-static int i915_emit_cmds(struct drm_device * dev, int __user * buffer, int dwords)
>+static int i915_emit_cmds(struct drm_device *dev, int __user *buffer,
>+              int dwords)
> {
>     drm_i915_private_t *dev_priv = dev->dev_private;
>     int i;
>@@ -438,15 +416,17 @@ static int i915_emit_box(struct drm_devi
>  * emit. For now, do it in both places:
>  */
>
>-static void i915_emit_breadcrumb(struct drm_device *dev)
>+void i915_emit_breadcrumb(struct drm_device *dev)
> {
>     drm_i915_private_t *dev_priv = dev->dev_private;
>     RING_LOCALS;
>
>-    dev_priv->sarea_priv->last_enqueue = ++dev_priv->counter;
>+    if (++dev_priv->counter > BREADCRUMB_MASK) {
>+        dev_priv->counter = 1;
>+        DRM_DEBUG("Breadcrumb counter wrapped around\n");
>+    }
>
>-    if (dev_priv->counter > 0x7FFFFFFFUL)
>-        dev_priv->sarea_priv->last_enqueue = dev_priv->counter = 1;
>+    dev_priv->sarea_priv->last_enqueue = dev_priv->counter;
>
>     BEGIN_LP_RING(4);
>     OUT_RING(CMD_STORE_DWORD_IDX);
>@@ -456,14 +436,39 @@ static void i915_emit_breadcrumb(struct
>     ADVANCE_LP_RING();
> }
>
>+
>+int i915_emit_mi_flush(struct drm_device *dev, uint32_t flush)
>+{
>+    drm_i915_private_t *dev_priv = dev->dev_private;
>+    uint32_t flush_cmd = CMD_MI_FLUSH;
>+    RING_LOCALS;
>+
>+    flush_cmd |= flush;
>+
>+    i915_kernel_lost_context(dev);
>+
>+    BEGIN_LP_RING(4);
>+    OUT_RING(flush_cmd);
>+    OUT_RING(0);
>+    OUT_RING(0);
>+    OUT_RING(0);
>+    ADVANCE_LP_RING();
>+
>+    return 0;
>+}
>+
>+
> static int i915_dispatch_cmdbuffer(struct drm_device * dev,
>                    drm_i915_cmdbuffer_t * cmd)
> {
>+#ifdef I915_HAVE_FENCE
>+    drm_i915_private_t *dev_priv = dev->dev_private;
>+#endif
>     int nbox = cmd->num_cliprects;
>     int i = 0, count, ret;
>
>     if (cmd->sz & 0x3) {
>-        DRM_ERROR("alignment");
>+        DRM_ERROR("alignment\n");
>         return -EINVAL;
>     }
>
>@@ -485,6 +490,9 @@ static int i915_dispatch_cmdbuffer(struc
>     }
>
>     i915_emit_breadcrumb(dev);
>+#ifdef I915_HAVE_FENCE
>+    drm_fence_flush_old(dev, 0, dev_priv->counter);
>+#endif
>     return 0;
> }
>
>@@ -498,7 +506,7 @@ static int i915_dispatch_batchbuffer(str
>     RING_LOCALS;
>
>     if ((batch->start | batch->used) & 0x7) {
>-        DRM_ERROR("alignment");
>+        DRM_ERROR("alignment\n");
>         return -EINVAL;
>     }
>
>@@ -524,6 +532,7 @@ static int i915_dispatch_batchbuffer(str
>             OUT_RING(batch->start | MI_BATCH_NON_SECURE);
>         }
>         ADVANCE_LP_RING();
>+
>     } else {
>         BEGIN_LP_RING(4);
>         OUT_RING(MI_BATCH_BUFFER);
>@@ -535,59 +544,86 @@ static int i915_dispatch_batchbuffer(str
>     }
>
>     i915_emit_breadcrumb(dev);
>-
>+#ifdef I915_HAVE_FENCE
>+    drm_fence_flush_old(dev, 0, dev_priv->counter);
>+#endif
>     return 0;
> }
>
>-static int i915_dispatch_flip(struct drm_device * dev)
>+static void i915_do_dispatch_flip(struct drm_device * dev, int plane, int sync)
> {
>     drm_i915_private_t *dev_priv = dev->dev_private;
>+    u32 num_pages, current_page, next_page, dspbase;
>+    int shift = 2 * plane, x, y;
>     RING_LOCALS;
>
>-    DRM_DEBUG("%s: page=%d pfCurrentPage=%d\n",
>-          __FUNCTION__,
>-          dev_priv->current_page,
>-          dev_priv->sarea_priv->pf_current_page);
>+    /* Calculate display base offset */
>+    num_pages = dev_priv->sarea_priv->third_handle ? 3 : 2;
>+    current_page = (dev_priv->sarea_priv->pf_current_page >> shift) & 0x3;
>+    next_page = (current_page + 1) % num_pages;
>
>-    i915_kernel_lost_context(dev);
>-
>-    BEGIN_LP_RING(2);
>-    OUT_RING(INST_PARSER_CLIENT | INST_OP_FLUSH | INST_FLUSH_MAP_CACHE);
>-    OUT_RING(0);
>-    ADVANCE_LP_RING();
>+    switch (next_page) {
>+    default:
>+    case 0:
>+        dspbase = dev_priv->sarea_priv->front_offset;
>+        break;
>+    case 1:
>+        dspbase = dev_priv->sarea_priv->back_offset;
>+        break;
>+    case 2:
>+        dspbase = dev_priv->sarea_priv->third_offset;
>+        break;
>+    }
>
>-    BEGIN_LP_RING(6);
>-    OUT_RING(CMD_OP_DISPLAYBUFFER_INFO | ASYNC_FLIP);
>-    OUT_RING(0);
>-    if (dev_priv->current_page == 0) {
>-        OUT_RING(dev_priv->back_offset);
>-        dev_priv->current_page = 1;
>+    if (plane == 0) {
>+        x = dev_priv->sarea_priv->planeA_x;
>+        y = dev_priv->sarea_priv->planeA_y;
>     } else {
>-        OUT_RING(dev_priv->front_offset);
>-        dev_priv->current_page = 0;
>+        x = dev_priv->sarea_priv->planeB_x;
>+        y = dev_priv->sarea_priv->planeB_y;
>     }
>-    OUT_RING(0);
>-    ADVANCE_LP_RING();
>
>-    BEGIN_LP_RING(2);
>-    OUT_RING(MI_WAIT_FOR_EVENT | MI_WAIT_FOR_PLANE_A_FLIP);
>-    OUT_RING(0);
>-    ADVANCE_LP_RING();
>+    dspbase += (y * dev_priv->sarea_priv->pitch + x) * dev_priv->cpp;
>
>-    dev_priv->sarea_priv->last_enqueue = dev_priv->counter++;
>+    DRM_DEBUG("plane=%d current_page=%d dspbase=0x%x\n", plane, current_page,
>+          dspbase);
>
>     BEGIN_LP_RING(4);
>-    OUT_RING(CMD_STORE_DWORD_IDX);
>-    OUT_RING(20);
>-    OUT_RING(dev_priv->counter);
>-    OUT_RING(0);
>+    OUT_RING(sync ? 0 :
>+         (MI_WAIT_FOR_EVENT | (plane ? MI_WAIT_FOR_PLANE_B_FLIP :
>+                       MI_WAIT_FOR_PLANE_A_FLIP)));
>+    OUT_RING(CMD_OP_DISPLAYBUFFER_INFO | (sync ? 0 : ASYNC_FLIP) |
>+         (plane ? DISPLAY_PLANE_B : DISPLAY_PLANE_A));
>+    OUT_RING(dev_priv->sarea_priv->pitch * dev_priv->cpp);
>+    OUT_RING(dspbase);
>     ADVANCE_LP_RING();
>
>-    dev_priv->sarea_priv->pf_current_page = dev_priv->current_page;
>-    return 0;
>+    dev_priv->sarea_priv->pf_current_page &= ~(0x3 << shift);
>+    dev_priv->sarea_priv->pf_current_page |= next_page << shift;
>+}
>+
>+void i915_dispatch_flip(struct drm_device * dev, int planes, int sync)
>+{
>+    drm_i915_private_t *dev_priv = dev->dev_private;
>+    int i;
>+
>+    DRM_DEBUG("planes=0x%x pfCurrentPage=%d\n",
>+          planes, dev_priv->sarea_priv->pf_current_page);
>+
>+    i915_emit_mi_flush(dev, MI_READ_FLUSH | MI_EXE_FLUSH);
>+
>+    for (i = 0; i < 2; i++)
>+        if (planes & (1 << i))
>+            i915_do_dispatch_flip(dev, i, sync);
>+
>+    i915_emit_breadcrumb(dev);
>+#ifdef I915_HAVE_FENCE
>+    if (!sync)
>+        drm_fence_flush_old(dev, 0, dev_priv->counter);
>+#endif
> }
>
>-static int i915_quiescent(struct drm_device * dev)
>+static int i915_quiescent(struct drm_device *dev)
> {
>     drm_i915_private_t *dev_priv = dev->dev_private;
>
>@@ -598,6 +634,7 @@ static int i915_quiescent(struct drm_dev
> static int i915_flush_ioctl(struct drm_device *dev, void *data,
>                 struct drm_file *file_priv)
> {
>+
>     LOCK_TEST_WITH_RETURN(dev, file_priv);
>
>     return i915_quiescent(dev);
>@@ -607,7 +644,6 @@ static int i915_batchbuffer(struct drm_d
>                 struct drm_file *file_priv)
> {
>     drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private;
>-    u32 *hw_status = dev_priv->hw_status_page;
>     drm_i915_sarea_t *sarea_priv = (drm_i915_sarea_t *)
>         dev_priv->sarea_priv;
>     drm_i915_batchbuffer_t *batch = data;
>@@ -624,13 +660,13 @@ static int i915_batchbuffer(struct drm_d
>     LOCK_TEST_WITH_RETURN(dev, file_priv);
>
>     if (batch->num_cliprects && DRM_VERIFYAREA_READ(batch->cliprects,
>-                            batch->num_cliprects *
>-                            sizeof(struct drm_clip_rect)))
>+                           batch->num_cliprects *
>+                           sizeof(struct drm_clip_rect)))
>         return -EFAULT;
>
>     ret = i915_dispatch_batchbuffer(dev, batch);
>
>-
sarea_priv->last_dispatch = (int)hw_status[5]; >+ sarea_priv->last_dispatch = READ_BREADCRUMB(dev_priv); > return ret; > } > >@@ -638,7 +674,6 @@ static int i915_cmdbuffer(struct drm_dev > struct drm_file *file_priv) > { > drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private; >- u32 *hw_status = dev_priv->hw_status_page; > drm_i915_sarea_t *sarea_priv = (drm_i915_sarea_t *) > dev_priv->sarea_priv; > drm_i915_cmdbuffer_t *cmdbuf = data; >@@ -663,20 +698,492 @@ static int i915_cmdbuffer(struct drm_dev > return ret; > } > >- sarea_priv->last_dispatch = (int)hw_status[5]; >+ sarea_priv->last_dispatch = READ_BREADCRUMB(dev_priv); > return 0; > } > >-static int i915_flip_bufs(struct drm_device *dev, void *data, >- struct drm_file *file_priv) >+#if DRM_DEBUG_CODE >+#define DRM_DEBUG_RELOCATION (drm_debug != 0) >+#else >+#define DRM_DEBUG_RELOCATION 0 >+#endif >+ >+#ifdef I915_HAVE_BUFFER >+ >+struct i915_relocatee_info { >+ struct drm_buffer_object *buf; >+ unsigned long offset; >+ u32 *data_page; >+ unsigned page_offset; >+ struct drm_bo_kmap_obj kmap; >+ int is_iomem; >+}; >+ >+struct drm_i915_validate_buffer { >+ struct drm_buffer_object *buffer; >+ int presumed_offset_correct; >+}; >+ >+static void i915_dereference_buffers_locked(struct drm_i915_validate_buffer *buffers, >+ unsigned num_buffers) >+{ >+ while (num_buffers--) >+ drm_bo_usage_deref_locked(&buffers[num_buffers].buffer); >+} >+ >+int i915_apply_reloc(struct drm_file *file_priv, int num_buffers, >+ struct drm_i915_validate_buffer *buffers, >+ struct i915_relocatee_info *relocatee, >+ uint32_t *reloc) > { >- DRM_DEBUG("%s\n", __FUNCTION__); >+ unsigned index; >+ unsigned long new_cmd_offset; >+ u32 val; >+ int ret; >+ >+ if (reloc[2] >= num_buffers) { >+ DRM_ERROR("Illegal relocation buffer %08X\n", reloc[2]); >+ return -EINVAL; >+ } >+ >+ /* >+ * Short-circuit relocations that were correctly >+ * guessed by the client >+ */ >+ if (buffers[reloc[2]].presumed_offset_correct && 
!DRM_DEBUG_RELOCATION) >+ return 0; >+ >+ new_cmd_offset = reloc[0]; >+ if (!relocatee->data_page || >+ !drm_bo_same_page(relocatee->offset, new_cmd_offset)) { >+ drm_bo_kunmap(&relocatee->kmap); >+ relocatee->offset = new_cmd_offset; >+ mutex_lock (&relocatee->buf->mutex); >+ ret = drm_bo_wait (relocatee->buf, 0, 0, FALSE); >+ mutex_unlock (&relocatee->buf->mutex); >+ if (ret) { >+ DRM_ERROR("Could not wait for buffer to apply relocs\n %08lx", new_cmd_offset); >+ return ret; >+ } >+ ret = drm_bo_kmap(relocatee->buf, new_cmd_offset >> PAGE_SHIFT, >+ 1, &relocatee->kmap); >+ if (ret) { >+ DRM_ERROR("Could not map command buffer to apply relocs\n %08lx", new_cmd_offset); >+ return ret; >+ } >+ >+ relocatee->data_page = drm_bmo_virtual(&relocatee->kmap, >+ &relocatee->is_iomem); >+ relocatee->page_offset = (relocatee->offset & PAGE_MASK); >+ } >+ >+ val = buffers[reloc[2]].buffer->offset; >+ index = (reloc[0] - relocatee->page_offset) >> 2; >+ >+ /* add in validate */ >+ val = val + reloc[1]; >+ >+ if (DRM_DEBUG_RELOCATION) { >+ if (buffers[reloc[2]].presumed_offset_correct && >+ relocatee->data_page[index] != val) { >+ DRM_DEBUG ("Relocation mismatch source %d target %d buffer %d user %08x kernel %08x\n", >+ reloc[0], reloc[1], reloc[2], relocatee->data_page[index], val); >+ } >+ } >+ relocatee->data_page[index] = val; >+ return 0; >+} >+ >+int i915_process_relocs(struct drm_file *file_priv, >+ uint32_t buf_handle, >+ uint32_t *reloc_buf_handle, >+ struct i915_relocatee_info *relocatee, >+ struct drm_i915_validate_buffer *buffers, >+ uint32_t num_buffers) >+{ >+ struct drm_device *dev = file_priv->head->dev; >+ struct drm_buffer_object *reloc_list_object; >+ uint32_t cur_handle = *reloc_buf_handle; >+ uint32_t *reloc_page; >+ int ret, reloc_is_iomem, reloc_stride; >+ uint32_t num_relocs, reloc_offset, reloc_end, reloc_page_offset, next_offset, cur_offset; >+ struct drm_bo_kmap_obj reloc_kmap; >+ >+ memset(&reloc_kmap, 0, sizeof(reloc_kmap)); >+ >+ 
mutex_lock(&dev->struct_mutex); >+ reloc_list_object = drm_lookup_buffer_object(file_priv, cur_handle, 1); >+ mutex_unlock(&dev->struct_mutex); >+ if (!reloc_list_object) >+ return -EINVAL; >+ >+ ret = drm_bo_kmap(reloc_list_object, 0, 1, &reloc_kmap); >+ if (ret) { >+ DRM_ERROR("Could not map relocation buffer.\n"); >+ goto out; >+ } >+ >+ reloc_page = drm_bmo_virtual(&reloc_kmap, &reloc_is_iomem); >+ num_relocs = reloc_page[0] & 0xffff; >+ >+ if ((reloc_page[0] >> 16) & 0xffff) { >+ DRM_ERROR("Unsupported relocation type requested\n"); >+ goto out; >+ } >+ >+ /* get next relocate buffer handle */ >+ *reloc_buf_handle = reloc_page[1]; >+ reloc_stride = I915_RELOC0_STRIDE * sizeof(uint32_t); /* may be different for other types of relocs */ >+ >+ DRM_DEBUG("num relocs is %d, next is %08X\n", num_relocs, reloc_page[1]); >+ >+ reloc_page_offset = 0; >+ reloc_offset = I915_RELOC_HEADER * sizeof(uint32_t); >+ reloc_end = reloc_offset + (num_relocs * reloc_stride); >+ >+ do { >+ next_offset = drm_bo_offset_end(reloc_offset, reloc_end); >+ >+ do { >+ cur_offset = ((reloc_offset + reloc_page_offset) & ~PAGE_MASK) / sizeof(uint32_t); >+ ret = i915_apply_reloc(file_priv, num_buffers, >+ buffers, relocatee, &reloc_page[cur_offset]); >+ if (ret) >+ goto out; >+ >+ reloc_offset += reloc_stride; >+ } while (reloc_offset < next_offset); >+ >+ drm_bo_kunmap(&reloc_kmap); >+ >+ reloc_offset = next_offset; >+ if (reloc_offset != reloc_end) { >+ ret = drm_bo_kmap(reloc_list_object, reloc_offset >> PAGE_SHIFT, 1, &reloc_kmap); >+ if (ret) { >+ DRM_ERROR("Could not map relocation buffer.\n"); >+ goto out; >+ } >+ >+ reloc_page = drm_bmo_virtual(&reloc_kmap, &reloc_is_iomem); >+ reloc_page_offset = reloc_offset & ~PAGE_MASK; >+ } >+ >+ } while (reloc_offset != reloc_end); >+out: >+ drm_bo_kunmap(&relocatee->kmap); >+ relocatee->data_page = NULL; >+ >+ drm_bo_kunmap(&reloc_kmap); >+ >+ mutex_lock(&dev->struct_mutex); >+ drm_bo_usage_deref_locked(&reloc_list_object); >+ 
mutex_unlock(&dev->struct_mutex); >+ >+ return ret; >+} >+ >+static int i915_exec_reloc(struct drm_file *file_priv, drm_handle_t buf_handle, >+ drm_handle_t buf_reloc_handle, >+ struct drm_i915_validate_buffer *buffers, >+ uint32_t buf_count) >+{ >+ struct drm_device *dev = file_priv->head->dev; >+ struct i915_relocatee_info relocatee; >+ int ret = 0; >+ int b; >+ >+ /* >+ * Short circuit relocations when all previous >+ * buffers offsets were correctly guessed by >+ * the client >+ */ >+ if (!DRM_DEBUG_RELOCATION) { >+ for (b = 0; b < buf_count; b++) >+ if (!buffers[b].presumed_offset_correct) >+ break; >+ >+ if (b == buf_count) >+ return 0; >+ } >+ >+ memset(&relocatee, 0, sizeof(relocatee)); >+ >+ mutex_lock(&dev->struct_mutex); >+ relocatee.buf = drm_lookup_buffer_object(file_priv, buf_handle, 1); >+ mutex_unlock(&dev->struct_mutex); >+ if (!relocatee.buf) { >+ DRM_DEBUG("relocatee buffer invalid %08x\n", buf_handle); >+ ret = -EINVAL; >+ goto out_err; >+ } >+ >+ while (buf_reloc_handle) { >+ ret = i915_process_relocs(file_priv, buf_handle, &buf_reloc_handle, &relocatee, buffers, buf_count); >+ if (ret) { >+ DRM_ERROR("process relocs failed\n"); >+ break; >+ } >+ } >+ >+ mutex_lock(&dev->struct_mutex); >+ drm_bo_usage_deref_locked(&relocatee.buf); >+ mutex_unlock(&dev->struct_mutex); >+ >+out_err: >+ return ret; >+} >+ >+/* >+ * Validate, add fence and relocate a block of bos from a userspace list >+ */ >+int i915_validate_buffer_list(struct drm_file *file_priv, >+ unsigned int fence_class, uint64_t data, >+ struct drm_i915_validate_buffer *buffers, >+ uint32_t *num_buffers) >+{ >+ struct drm_i915_op_arg arg; >+ struct drm_bo_op_req *req = &arg.d.req; >+ struct drm_bo_arg_rep rep; >+ unsigned long next = 0; >+ int ret = 0; >+ unsigned buf_count = 0; >+ struct drm_device *dev = file_priv->head->dev; >+ uint32_t buf_reloc_handle, buf_handle; >+ >+ >+ do { >+ if (buf_count >= *num_buffers) { >+ DRM_ERROR("Buffer count exceeded %d\n.", *num_buffers); >+ ret = 
-EINVAL; >+ goto out_err; >+ } >+ >+ buffers[buf_count].buffer = NULL; >+ buffers[buf_count].presumed_offset_correct = 0; >+ >+ if (copy_from_user(&arg, (void __user *)(unsigned long)data, sizeof(arg))) { >+ ret = -EFAULT; >+ goto out_err; >+ } >+ >+ if (arg.handled) { >+ data = arg.next; >+ mutex_lock(&dev->struct_mutex); >+ buffers[buf_count].buffer = drm_lookup_buffer_object(file_priv, req->arg_handle, 1); >+ mutex_unlock(&dev->struct_mutex); >+ buf_count++; >+ continue; >+ } >+ >+ rep.ret = 0; >+ if (req->op != drm_bo_validate) { >+ DRM_ERROR >+ ("Buffer object operation wasn't \"validate\".\n"); >+ rep.ret = -EINVAL; >+ goto out_err; >+ } >+ >+ buf_handle = req->bo_req.handle; >+ buf_reloc_handle = arg.reloc_handle; >+ >+ if (buf_reloc_handle) { >+ ret = i915_exec_reloc(file_priv, buf_handle, buf_reloc_handle, buffers, buf_count); >+ if (ret) >+ goto out_err; >+ DRM_MEMORYBARRIER(); >+ } >+ >+ rep.ret = drm_bo_handle_validate(file_priv, req->bo_req.handle, >+ req->bo_req.flags, req->bo_req.mask, >+ req->bo_req.hint, >+ req->bo_req.fence_class, 0, >+ &rep.bo_info, >+ &buffers[buf_count].buffer); >+ >+ if (rep.ret) { >+ DRM_ERROR("error on handle validate %d\n", rep.ret); >+ goto out_err; >+ } >+ /* >+ * If the user provided a presumed offset hint, check whether >+ * the buffer is in the same place, if so, relocations relative to >+ * this buffer need not be performed >+ */ >+ if ((req->bo_req.hint & DRM_BO_HINT_PRESUMED_OFFSET) && >+ buffers[buf_count].buffer->offset == req->bo_req.presumed_offset) { >+ buffers[buf_count].presumed_offset_correct = 1; >+ } >+ >+ next = arg.next; >+ arg.handled = 1; >+ arg.d.rep = rep; >+ >+ if (copy_to_user((void __user *)(unsigned long)data, &arg, sizeof(arg))) >+ return -EFAULT; >+ >+ data = next; >+ buf_count++; >+ >+ } while (next != 0); >+ *num_buffers = buf_count; >+ return 0; >+out_err: >+ mutex_lock(&dev->struct_mutex); >+ i915_dereference_buffers_locked(buffers, buf_count); >+ mutex_unlock(&dev->struct_mutex); >+ 
*num_buffers = 0; >+ return (ret) ? ret : rep.ret; >+} >+ >+static int i915_execbuffer(struct drm_device *dev, void *data, >+ struct drm_file *file_priv) >+{ >+ drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private; >+ drm_i915_sarea_t *sarea_priv = (drm_i915_sarea_t *) >+ dev_priv->sarea_priv; >+ struct drm_i915_execbuffer *exec_buf = data; >+ struct _drm_i915_batchbuffer *batch = &exec_buf->batch; >+ struct drm_fence_arg *fence_arg = &exec_buf->fence_arg; >+ int num_buffers; >+ int ret; >+ struct drm_i915_validate_buffer *buffers; >+ struct drm_fence_object *fence; >+ >+ if (!dev_priv->allow_batchbuffer) { >+ DRM_ERROR("Batchbuffer ioctl disabled\n"); >+ return -EINVAL; >+ } >+ >+ >+ if (batch->num_cliprects && DRM_VERIFYAREA_READ(batch->cliprects, >+ batch->num_cliprects * >+ sizeof(struct drm_clip_rect))) >+ return -EFAULT; >+ >+ if (exec_buf->num_buffers > dev_priv->max_validate_buffers) >+ return -EINVAL; >+ >+ >+ ret = drm_bo_read_lock(&dev->bm.bm_lock); >+ if (ret) >+ return ret; >+ >+ /* >+ * The cmdbuf_mutex makes sure the validate-submit-fence >+ * operation is atomic. 
>+ */ >+ >+ ret = mutex_lock_interruptible(&dev_priv->cmdbuf_mutex); >+ if (ret) { >+ drm_bo_read_unlock(&dev->bm.bm_lock); >+ return -EAGAIN; >+ } >+ >+ num_buffers = exec_buf->num_buffers; >+ >+ buffers = drm_calloc(num_buffers, sizeof(struct drm_i915_validate_buffer), DRM_MEM_DRIVER); >+ if (!buffers) { >+ drm_bo_read_unlock(&dev->bm.bm_lock); >+ mutex_unlock(&dev_priv->cmdbuf_mutex); >+ return -ENOMEM; >+ } >+ >+ /* validate buffer list + fixup relocations */ >+ ret = i915_validate_buffer_list(file_priv, 0, exec_buf->ops_list, >+ buffers, &num_buffers); >+ if (ret) >+ goto out_free; >+ >+ /* make sure all previous memory operations have passed */ >+ DRM_MEMORYBARRIER(); >+ drm_agp_chipset_flush(dev); >+ >+ /* submit buffer */ >+ batch->start = buffers[num_buffers-1].buffer->offset; >+ >+ DRM_DEBUG("i915 exec batchbuffer, start %x used %d cliprects %d\n", >+ batch->start, batch->used, batch->num_cliprects); >+ >+ ret = i915_dispatch_batchbuffer(dev, batch); >+ if (ret) >+ goto out_err0; >+ >+ sarea_priv->last_dispatch = READ_BREADCRUMB(dev_priv); >+ >+ /* fence */ >+ ret = drm_fence_buffer_objects(dev, NULL, 0, NULL, &fence); >+ if (ret) >+ goto out_err0; >+ >+ if (!(fence_arg->flags & DRM_FENCE_FLAG_NO_USER)) { >+ ret = drm_fence_add_user_object(file_priv, fence, fence_arg->flags & DRM_FENCE_FLAG_SHAREABLE); >+ if (!ret) { >+ fence_arg->handle = fence->base.hash.key; >+ fence_arg->fence_class = fence->fence_class; >+ fence_arg->type = fence->type; >+ fence_arg->signaled = fence->signaled; >+ } >+ } >+ drm_fence_usage_deref_unlocked(&fence); >+out_err0: >+ >+ /* handle errors */ >+ mutex_lock(&dev->struct_mutex); >+ i915_dereference_buffers_locked(buffers, num_buffers); >+ mutex_unlock(&dev->struct_mutex); >+ >+out_free: >+ drm_free(buffers, (exec_buf->num_buffers * sizeof(struct drm_buffer_object *)), DRM_MEM_DRIVER); >+ >+ mutex_unlock(&dev_priv->cmdbuf_mutex); >+ drm_bo_read_unlock(&dev->bm.bm_lock); >+ return ret; >+} >+#endif >+ >+static int 
i915_do_cleanup_pageflip(struct drm_device * dev) >+{ >+ drm_i915_private_t *dev_priv = dev->dev_private; >+ int i, planes, num_pages = dev_priv->sarea_priv->third_handle ? 3 : 2; >+ >+ DRM_DEBUG("\n"); >+ >+ for (i = 0, planes = 0; i < 2; i++) >+ if (dev_priv->sarea_priv->pf_current_page & (0x3 << (2 * i))) { >+ dev_priv->sarea_priv->pf_current_page = >+ (dev_priv->sarea_priv->pf_current_page & >+ ~(0x3 << (2 * i))) | ((num_pages - 1) << (2 * i)); >+ >+ planes |= 1 << i; >+ } >+ >+ if (planes) >+ i915_dispatch_flip(dev, planes, 0); >+ >+ return 0; >+} >+ >+static int i915_flip_bufs(struct drm_device *dev, void *data, struct drm_file *file_priv) >+{ >+ drm_i915_flip_t *param = data; >+ >+ DRM_DEBUG("\n"); > > LOCK_TEST_WITH_RETURN(dev, file_priv); > >- return i915_dispatch_flip(dev); >+ /* This is really planes */ >+ if (param->pipes & ~0x3) { >+ DRM_ERROR("Invalid planes 0x%x, only <= 0x3 is valid\n", >+ param->pipes); >+ return -EINVAL; >+ } >+ >+ i915_dispatch_flip(dev, param->pipes, 0); >+ >+ return 0; > } > >+ > static int i915_getparam(struct drm_device *dev, void *data, > struct drm_file *file_priv) > { >@@ -742,6 +1249,63 @@ static int i915_setparam(struct drm_devi > return 0; > } > >+drm_i915_mmio_entry_t mmio_table[] = { >+ [MMIO_REGS_PS_DEPTH_COUNT] = { >+ I915_MMIO_MAY_READ|I915_MMIO_MAY_WRITE, >+ 0x2350, >+ 8 >+ } >+}; >+ >+static int mmio_table_size = sizeof(mmio_table)/sizeof(drm_i915_mmio_entry_t); >+ >+static int i915_mmio(struct drm_device *dev, void *data, >+ struct drm_file *file_priv) >+{ >+ uint32_t buf[8]; >+ drm_i915_private_t *dev_priv = dev->dev_private; >+ drm_i915_mmio_entry_t *e; >+ drm_i915_mmio_t *mmio = data; >+ void __iomem *base; >+ int i; >+ >+ if (!dev_priv) { >+ DRM_ERROR("%s called with no initialization\n", __FUNCTION__); >+ return -EINVAL; >+ } >+ >+ if (mmio->reg >= mmio_table_size) >+ return -EINVAL; >+ >+ e = &mmio_table[mmio->reg]; >+ base = (u8 *) dev_priv->mmio_map->handle + e->offset; >+ >+ switch (mmio->read_write) { 
>+ case I915_MMIO_READ: >+ if (!(e->flag & I915_MMIO_MAY_READ)) >+ return -EINVAL; >+ for (i = 0; i < e->size / 4; i++) >+ buf[i] = I915_READ(e->offset + i * 4); >+ if (DRM_COPY_TO_USER(mmio->data, buf, e->size)) { >+ DRM_ERROR("DRM_COPY_TO_USER failed\n"); >+ return -EFAULT; >+ } >+ break; >+ >+ case I915_MMIO_WRITE: >+ if (!(e->flag & I915_MMIO_MAY_WRITE)) >+ return -EINVAL; >+ if (DRM_COPY_FROM_USER(buf, mmio->data, e->size)) { >+ DRM_ERROR("DRM_COPY_TO_USER failed\n"); >+ return -EFAULT; >+ } >+ for (i = 0; i < e->size / 4; i++) >+ I915_WRITE(e->offset + i * 4, buf[i]); >+ break; >+ } >+ return 0; >+} >+ > static int i915_set_status_page(struct drm_device *dev, void *data, > struct drm_file *file_priv) > { >@@ -752,12 +1316,11 @@ static int i915_set_status_page(struct d > DRM_ERROR("%s called with no initialization\n", __FUNCTION__); > return -EINVAL; > } >- >- printk(KERN_DEBUG "set status page addr 0x%08x\n", (u32)hws->addr); >+ DRM_DEBUG("set status page addr 0x%08x\n", (u32)hws->addr); > > dev_priv->status_gfx_addr = hws->addr & (0x1ffff<<12); > >- dev_priv->hws_map.offset = dev->agp->agp_info.aper_base + hws->addr; >+ dev_priv->hws_map.offset = dev->agp->base + hws->addr; > dev_priv->hws_map.size = 4*1024; > dev_priv->hws_map.type = 0; > dev_priv->hws_map.flags = 0; >@@ -765,7 +1328,6 @@ static int i915_set_status_page(struct d > > drm_core_ioremap(&dev_priv->hws_map, dev); > if (dev_priv->hws_map.handle == NULL) { >- dev->dev_private = (void *)dev_priv; > i915_dma_cleanup(dev); > dev_priv->status_gfx_addr = 0; > DRM_ERROR("can not ioremap virtual address for" >@@ -784,6 +1346,10 @@ static int i915_set_status_page(struct d > > int i915_driver_load(struct drm_device *dev, unsigned long flags) > { >+ struct drm_i915_private *dev_priv = dev->dev_private; >+ unsigned long base, size; >+ int ret = 0, mmio_bar = IS_I9XX(dev) ? 
0 : 1; >+ > /* i915 has 4 more counters */ > dev->counters += 4; > dev->types[6] = _DRM_STAT_IRQ; >@@ -791,24 +1357,63 @@ int i915_driver_load(struct drm_device * > dev->types[8] = _DRM_STAT_SECONDARY; > dev->types[9] = _DRM_STAT_DMA; > >+ dev_priv = drm_alloc(sizeof(drm_i915_private_t), DRM_MEM_DRIVER); >+ if (dev_priv == NULL) >+ return -ENOMEM; >+ >+ memset(dev_priv, 0, sizeof(drm_i915_private_t)); >+ >+ dev->dev_private = (void *)dev_priv; >+ >+ /* Add register map (needed for suspend/resume) */ >+ base = drm_get_resource_start(dev, mmio_bar); >+ size = drm_get_resource_len(dev, mmio_bar); >+ >+ ret = drm_addmap(dev, base, size, _DRM_REGISTERS, >+ _DRM_KERNEL | _DRM_DRIVER, &dev_priv->mmio_map); >+ >+#ifdef __linux__ >+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,25) >+ intel_init_chipset_flush_compat(dev); >+#endif >+#endif >+ >+ return ret; >+} >+ >+int i915_driver_unload(struct drm_device *dev) >+{ >+ struct drm_i915_private *dev_priv = dev->dev_private; >+ >+ if (dev_priv->mmio_map) >+ drm_rmmap(dev, dev_priv->mmio_map); >+ >+ drm_free(dev->dev_private, sizeof(drm_i915_private_t), >+ DRM_MEM_DRIVER); >+#ifdef __linux__ >+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,25) >+ intel_fini_chipset_flush_compat(dev); >+#endif >+#endif > return 0; > } > > void i915_driver_lastclose(struct drm_device * dev) > { >- if (dev->dev_private) { >- drm_i915_private_t *dev_priv = dev->dev_private; >+ drm_i915_private_t *dev_priv = dev->dev_private; >+ >+ if (drm_getsarea(dev) && dev_priv->sarea_priv) >+ i915_do_cleanup_pageflip(dev); >+ if (dev_priv->agp_heap) > i915_mem_takedown(&(dev_priv->agp_heap)); >- } >+ > i915_dma_cleanup(dev); > } > > void i915_driver_preclose(struct drm_device * dev, struct drm_file *file_priv) > { >- if (dev->dev_private) { >- drm_i915_private_t *dev_priv = dev->dev_private; >- i915_mem_release(dev, file_priv, dev_priv->agp_heap); >- } >+ drm_i915_private_t *dev_priv = dev->dev_private; >+ i915_mem_release(dev, file_priv, dev_priv->agp_heap); > } > > 
struct drm_ioctl_desc i915_ioctls[] = { >@@ -828,7 +1433,11 @@ struct drm_ioctl_desc i915_ioctls[] = { > DRM_IOCTL_DEF(DRM_I915_SET_VBLANK_PIPE, i915_vblank_pipe_set, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY ), > DRM_IOCTL_DEF(DRM_I915_GET_VBLANK_PIPE, i915_vblank_pipe_get, DRM_AUTH ), > DRM_IOCTL_DEF(DRM_I915_VBLANK_SWAP, i915_vblank_swap, DRM_AUTH), >+ DRM_IOCTL_DEF(DRM_I915_MMIO, i915_mmio, DRM_AUTH), > DRM_IOCTL_DEF(DRM_I915_HWS_ADDR, i915_set_status_page, DRM_AUTH), >+#ifdef I915_HAVE_BUFFER >+ DRM_IOCTL_DEF(DRM_I915_EXECBUFFER, i915_execbuffer, DRM_AUTH), >+#endif > }; > > int i915_max_ioctl = DRM_ARRAY_SIZE(i915_ioctls); >@@ -848,3 +1457,11 @@ int i915_driver_device_is_agp(struct drm > { > return 1; > } >+ >+int i915_driver_firstopen(struct drm_device *dev) >+{ >+#ifdef I915_HAVE_BUFFER >+ drm_bo_driver_init(dev); >+#endif >+ return 0; >+} >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/i915_drm.h linux-2.6.23.i686/drivers/char/drm/i915_drm.h >--- linux-2.6.23.i686.orig/drivers/char/drm/i915_drm.h 2007-10-09 22:31:38.000000000 +0200 >+++ linux-2.6.23.i686/drivers/char/drm/i915_drm.h 2008-01-06 09:24:57.000000000 +0100 >@@ -105,16 +105,32 @@ typedef struct _drm_i915_sarea { > unsigned int rotated_tiled; > unsigned int rotated2_tiled; > >- int pipeA_x; >- int pipeA_y; >- int pipeA_w; >- int pipeA_h; >- int pipeB_x; >- int pipeB_y; >- int pipeB_w; >- int pipeB_h; >+ int planeA_x; >+ int planeA_y; >+ int planeA_w; >+ int planeA_h; >+ int planeB_x; >+ int planeB_y; >+ int planeB_w; >+ int planeB_h; >+ >+ /* Triple buffering */ >+ drm_handle_t third_handle; >+ int third_offset; >+ int third_size; >+ unsigned int third_tiled; > } drm_i915_sarea_t; > >+/* Driver specific fence types and classes. 
>+ */ >+ >+/* The only fence class we support */ >+#define DRM_I915_FENCE_CLASS_ACCEL 0 >+/* Fence type that guarantees read-write flush */ >+#define DRM_I915_FENCE_TYPE_RW 2 >+/* MI_FLUSH programmed just before the fence */ >+#define DRM_I915_FENCE_FLAG_FLUSHED 0x01000000 >+ > /* Flags for perf_boxes > */ > #define I915_BOX_RING_EMPTY 0x1 >@@ -142,11 +158,13 @@ typedef struct _drm_i915_sarea { > #define DRM_I915_SET_VBLANK_PIPE 0x0d > #define DRM_I915_GET_VBLANK_PIPE 0x0e > #define DRM_I915_VBLANK_SWAP 0x0f >+#define DRM_I915_MMIO 0x10 > #define DRM_I915_HWS_ADDR 0x11 >+#define DRM_I915_EXECBUFFER 0x12 > > #define DRM_IOCTL_I915_INIT DRM_IOW( DRM_COMMAND_BASE + DRM_I915_INIT, drm_i915_init_t) > #define DRM_IOCTL_I915_FLUSH DRM_IO ( DRM_COMMAND_BASE + DRM_I915_FLUSH) >-#define DRM_IOCTL_I915_FLIP DRM_IO ( DRM_COMMAND_BASE + DRM_I915_FLIP) >+#define DRM_IOCTL_I915_FLIP DRM_IOW( DRM_COMMAND_BASE + DRM_I915_FLIP, drm_i915_flip_t) > #define DRM_IOCTL_I915_BATCHBUFFER DRM_IOW( DRM_COMMAND_BASE + DRM_I915_BATCHBUFFER, drm_i915_batchbuffer_t) > #define DRM_IOCTL_I915_IRQ_EMIT DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_IRQ_EMIT, drm_i915_irq_emit_t) > #define DRM_IOCTL_I915_IRQ_WAIT DRM_IOW( DRM_COMMAND_BASE + DRM_I915_IRQ_WAIT, drm_i915_irq_wait_t) >@@ -160,6 +178,20 @@ typedef struct _drm_i915_sarea { > #define DRM_IOCTL_I915_SET_VBLANK_PIPE DRM_IOW( DRM_COMMAND_BASE + DRM_I915_SET_VBLANK_PIPE, drm_i915_vblank_pipe_t) > #define DRM_IOCTL_I915_GET_VBLANK_PIPE DRM_IOR( DRM_COMMAND_BASE + DRM_I915_GET_VBLANK_PIPE, drm_i915_vblank_pipe_t) > #define DRM_IOCTL_I915_VBLANK_SWAP DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_VBLANK_SWAP, drm_i915_vblank_swap_t) >+#define DRM_IOCTL_I915_MMIO DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_MMIO, drm_i915_mmio) >+#define DRM_IOCTL_I915_EXECBUFFER DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_EXECBUFFER, struct drm_i915_execbuffer) >+ >+/* Asynchronous page flipping: >+ */ >+typedef struct drm_i915_flip { >+ /* >+ * This is really talking about planes, and we could 
rename it >+ * except for the fact that some of the duplicated i915_drm.h files >+ * out there check for HAVE_I915_FLIP and so might pick up this >+ * version. >+ */ >+ int pipes; >+} drm_i915_flip_t; > > /* Allow drivers to submit batchbuffers directly to hardware, relying > * on the security mechanisms provided by hardware. >@@ -263,8 +295,73 @@ typedef struct drm_i915_vblank_swap { > unsigned int sequence; > } drm_i915_vblank_swap_t; > >+#define I915_MMIO_READ 0 >+#define I915_MMIO_WRITE 1 >+ >+#define I915_MMIO_MAY_READ 0x1 >+#define I915_MMIO_MAY_WRITE 0x2 >+ >+#define MMIO_REGS_IA_PRIMATIVES_COUNT 0 >+#define MMIO_REGS_IA_VERTICES_COUNT 1 >+#define MMIO_REGS_VS_INVOCATION_COUNT 2 >+#define MMIO_REGS_GS_PRIMITIVES_COUNT 3 >+#define MMIO_REGS_GS_INVOCATION_COUNT 4 >+#define MMIO_REGS_CL_PRIMITIVES_COUNT 5 >+#define MMIO_REGS_CL_INVOCATION_COUNT 6 >+#define MMIO_REGS_PS_INVOCATION_COUNT 7 >+#define MMIO_REGS_PS_DEPTH_COUNT 8 >+ >+typedef struct drm_i915_mmio_entry { >+ unsigned int flag; >+ unsigned int offset; >+ unsigned int size; >+} drm_i915_mmio_entry_t; >+ >+typedef struct drm_i915_mmio { >+ unsigned int read_write:1; >+ unsigned int reg:31; >+ void __user *data; >+} drm_i915_mmio_t; >+ > typedef struct drm_i915_hws_addr { > uint64_t addr; > } drm_i915_hws_addr_t; > >+/* >+ * Relocation header is 4 uint32_ts >+ * 0 - (16-bit relocation type << 16)| 16 bit reloc count >+ * 1 - buffer handle for another list of relocs >+ * 2-3 - spare. >+ */ >+#define I915_RELOC_HEADER 4 >+ >+/* >+ * type 0 relocation has 4-uint32_t stride >+ * 0 - offset into buffer >+ * 1 - delta to add in >+ * 2 - index into buffer list >+ * 3 - reserved (for optimisations later). 
>+ */ >+#define I915_RELOC_TYPE_0 0 >+#define I915_RELOC0_STRIDE 4 >+ >+struct drm_i915_op_arg { >+ uint64_t next; >+ uint32_t reloc_handle; >+ int handled; >+ union { >+ struct drm_bo_op_req req; >+ struct drm_bo_arg_rep rep; >+ } d; >+ >+}; >+ >+struct drm_i915_execbuffer { >+ uint64_t ops_list; >+ uint32_t num_buffers; >+ struct _drm_i915_batchbuffer batch; >+ drm_context_t context; /* for lockless use in the future */ >+ struct drm_fence_arg fence_arg; >+}; >+ > #endif /* _I915_DRM_H_ */ >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/i915_drv.c linux-2.6.23.i686/drivers/char/drm/i915_drv.c >--- linux-2.6.23.i686.orig/drivers/char/drm/i915_drv.c 2007-10-09 22:31:38.000000000 +0200 >+++ linux-2.6.23.i686/drivers/char/drm/i915_drv.c 2008-01-06 09:24:57.000000000 +0100 >@@ -38,17 +38,513 @@ static struct pci_device_id pciidlist[] > i915_PCI_IDS > }; > >+#ifdef I915_HAVE_FENCE >+static struct drm_fence_driver i915_fence_driver = { >+ .num_classes = 1, >+ .wrap_diff = (1U << (BREADCRUMB_BITS - 1)), >+ .flush_diff = (1U << (BREADCRUMB_BITS - 2)), >+ .sequence_mask = BREADCRUMB_MASK, >+ .lazy_capable = 1, >+ .emit = i915_fence_emit_sequence, >+ .poke_flush = i915_poke_flush, >+ .has_irq = i915_fence_has_irq, >+}; >+#endif >+#ifdef I915_HAVE_BUFFER >+ >+static uint32_t i915_mem_prios[] = {DRM_BO_MEM_PRIV0, DRM_BO_MEM_TT, DRM_BO_MEM_LOCAL}; >+static uint32_t i915_busy_prios[] = {DRM_BO_MEM_TT, DRM_BO_MEM_PRIV0, DRM_BO_MEM_LOCAL}; >+ >+static struct drm_bo_driver i915_bo_driver = { >+ .mem_type_prio = i915_mem_prios, >+ .mem_busy_prio = i915_busy_prios, >+ .num_mem_type_prio = sizeof(i915_mem_prios)/sizeof(uint32_t), >+ .num_mem_busy_prio = sizeof(i915_busy_prios)/sizeof(uint32_t), >+ .create_ttm_backend_entry = i915_create_ttm_backend_entry, >+ .fence_type = i915_fence_type, >+ .invalidate_caches = i915_invalidate_caches, >+ .init_mem_type = i915_init_mem_type, >+ .evict_flags = i915_evict_flags, >+ .move = i915_move, >+ .ttm_cache_flush = i915_flush_ttm, >+}; 
>+#endif >+ >+enum pipe { >+ PIPE_A = 0, >+ PIPE_B, >+}; >+ >+static bool i915_pipe_enabled(struct drm_device *dev, enum pipe pipe) >+{ >+ struct drm_i915_private *dev_priv = dev->dev_private; >+ >+ if (pipe == PIPE_A) >+ return (I915_READ(DPLL_A) & DPLL_VCO_ENABLE); >+ else >+ return (I915_READ(DPLL_B) & DPLL_VCO_ENABLE); >+} >+ >+static void i915_save_palette(struct drm_device *dev, enum pipe pipe) >+{ >+ struct drm_i915_private *dev_priv = dev->dev_private; >+ unsigned long reg = (pipe == PIPE_A ? PALETTE_A : PALETTE_B); >+ u32 *array; >+ int i; >+ >+ if (!i915_pipe_enabled(dev, pipe)) >+ return; >+ >+ if (pipe == PIPE_A) >+ array = dev_priv->save_palette_a; >+ else >+ array = dev_priv->save_palette_b; >+ >+ for(i = 0; i < 256; i++) >+ array[i] = I915_READ(reg + (i << 2)); >+} >+ >+static void i915_restore_palette(struct drm_device *dev, enum pipe pipe) >+{ >+ struct drm_i915_private *dev_priv = dev->dev_private; >+ unsigned long reg = (pipe == PIPE_A ? PALETTE_A : PALETTE_B); >+ u32 *array; >+ int i; >+ >+ if (!i915_pipe_enabled(dev, pipe)) >+ return; >+ >+ if (pipe == PIPE_A) >+ array = dev_priv->save_palette_a; >+ else >+ array = dev_priv->save_palette_b; >+ >+ for(i = 0; i < 256; i++) >+ I915_WRITE(reg + (i << 2), array[i]); >+} >+ >+static u8 i915_read_indexed(u16 index_port, u16 data_port, u8 reg) >+{ >+ outb(reg, index_port); >+ return inb(data_port); >+} >+ >+static u8 i915_read_ar(u16 st01, u8 reg, u16 palette_enable) >+{ >+ inb(st01); >+ outb(palette_enable | reg, VGA_AR_INDEX); >+ return inb(VGA_AR_DATA_READ); >+} >+ >+static void i915_write_ar(u8 st01, u8 reg, u8 val, u16 palette_enable) >+{ >+ inb(st01); >+ outb(palette_enable | reg, VGA_AR_INDEX); >+ outb(val, VGA_AR_DATA_WRITE); >+} >+ >+static void i915_write_indexed(u16 index_port, u16 data_port, u8 reg, u8 val) >+{ >+ outb(reg, index_port); >+ outb(val, data_port); >+} >+ >+static void i915_save_vga(struct drm_device *dev) >+{ >+ struct drm_i915_private *dev_priv = dev->dev_private; >+ int i; 
>+ u16 cr_index, cr_data, st01; >+ >+ /* VGA color palette registers */ >+ dev_priv->saveDACMASK = inb(VGA_DACMASK); >+ /* DACCRX automatically increments during read */ >+ outb(0, VGA_DACRX); >+ /* Read 3 bytes of color data from each index */ >+ for (i = 0; i < 256 * 3; i++) >+ dev_priv->saveDACDATA[i] = inb(VGA_DACDATA); >+ >+ /* MSR bits */ >+ dev_priv->saveMSR = inb(VGA_MSR_READ); >+ if (dev_priv->saveMSR & VGA_MSR_CGA_MODE) { >+ cr_index = VGA_CR_INDEX_CGA; >+ cr_data = VGA_CR_DATA_CGA; >+ st01 = VGA_ST01_CGA; >+ } else { >+ cr_index = VGA_CR_INDEX_MDA; >+ cr_data = VGA_CR_DATA_MDA; >+ st01 = VGA_ST01_MDA; >+ } >+ >+ /* CRT controller regs */ >+ i915_write_indexed(cr_index, cr_data, 0x11, >+ i915_read_indexed(cr_index, cr_data, 0x11) & >+ (~0x80)); >+ for (i = 0; i < 0x24; i++) >+ dev_priv->saveCR[i] = >+ i915_read_indexed(cr_index, cr_data, i); >+ /* Make sure we don't turn off CR group 0 writes */ >+ dev_priv->saveCR[0x11] &= ~0x80; >+ >+ /* Attribute controller registers */ >+ inb(st01); >+ dev_priv->saveAR_INDEX = inb(VGA_AR_INDEX); >+ for (i = 0; i < 20; i++) >+ dev_priv->saveAR[i] = i915_read_ar(st01, i, 0); >+ inb(st01); >+ outb(dev_priv->saveAR_INDEX, VGA_AR_INDEX); >+ >+ /* Graphics controller registers */ >+ for (i = 0; i < 9; i++) >+ dev_priv->saveGR[i] = >+ i915_read_indexed(VGA_GR_INDEX, VGA_GR_DATA, i); >+ >+ dev_priv->saveGR[0x10] = >+ i915_read_indexed(VGA_GR_INDEX, VGA_GR_DATA, 0x10); >+ dev_priv->saveGR[0x11] = >+ i915_read_indexed(VGA_GR_INDEX, VGA_GR_DATA, 0x11); >+ dev_priv->saveGR[0x18] = >+ i915_read_indexed(VGA_GR_INDEX, VGA_GR_DATA, 0x18); >+ >+ /* Sequencer registers */ >+ for (i = 0; i < 8; i++) >+ dev_priv->saveSR[i] = >+ i915_read_indexed(VGA_SR_INDEX, VGA_SR_DATA, i); >+} >+ >+static void i915_restore_vga(struct drm_device *dev) >+{ >+ struct drm_i915_private *dev_priv = dev->dev_private; >+ int i; >+ u16 cr_index, cr_data, st01; >+ >+ /* MSR bits */ >+ outb(dev_priv->saveMSR, VGA_MSR_WRITE); >+ if (dev_priv->saveMSR & 
VGA_MSR_CGA_MODE) { >+ cr_index = VGA_CR_INDEX_CGA; >+ cr_data = VGA_CR_DATA_CGA; >+ st01 = VGA_ST01_CGA; >+ } else { >+ cr_index = VGA_CR_INDEX_MDA; >+ cr_data = VGA_CR_DATA_MDA; >+ st01 = VGA_ST01_MDA; >+ } >+ >+ /* Sequencer registers, don't write SR07 */ >+ for (i = 0; i < 7; i++) >+ i915_write_indexed(VGA_SR_INDEX, VGA_SR_DATA, i, >+ dev_priv->saveSR[i]); >+ >+ /* CRT controller regs */ >+ /* Enable CR group 0 writes */ >+ i915_write_indexed(cr_index, cr_data, 0x11, dev_priv->saveCR[0x11]); >+ for (i = 0; i < 0x24; i++) >+ i915_write_indexed(cr_index, cr_data, i, dev_priv->saveCR[i]); >+ >+ /* Graphics controller regs */ >+ for (i = 0; i < 9; i++) >+ i915_write_indexed(VGA_GR_INDEX, VGA_GR_DATA, i, >+ dev_priv->saveGR[i]); >+ >+ i915_write_indexed(VGA_GR_INDEX, VGA_GR_DATA, 0x10, >+ dev_priv->saveGR[0x10]); >+ i915_write_indexed(VGA_GR_INDEX, VGA_GR_DATA, 0x11, >+ dev_priv->saveGR[0x11]); >+ i915_write_indexed(VGA_GR_INDEX, VGA_GR_DATA, 0x18, >+ dev_priv->saveGR[0x18]); >+ >+ /* Attribute controller registers */ >+ for (i = 0; i < 20; i++) >+ i915_write_ar(st01, i, dev_priv->saveAR[i], 0); >+ inb(st01); /* switch back to index mode */ >+ outb(dev_priv->saveAR_INDEX | 0x20, VGA_AR_INDEX); >+ >+ /* VGA color palette registers */ >+ outb(dev_priv->saveDACMASK, VGA_DACMASK); >+ /* DACWX automatically increments during write */ >+ outb(0, VGA_DACWX); >+ /* Write 3 bytes of color data to each index */ >+ for (i = 0; i < 256 * 3; i++) >+ outb(dev_priv->saveDACDATA[i], VGA_DACDATA); >+ >+} >+ >+static int i915_suspend(struct drm_device *dev) >+{ >+ struct drm_i915_private *dev_priv = dev->dev_private; >+ int i; >+ >+ if (!dev || !dev_priv) { >+ printk(KERN_ERR "dev: %p, dev_priv: %p\n", dev, dev_priv); >+ printk(KERN_ERR "DRM not initialized, aborting suspend.\n"); >+ return -ENODEV; >+ } >+ >+ pci_save_state(dev->pdev); >+ pci_read_config_byte(dev->pdev, LBB, &dev_priv->saveLBB); >+ >+ /* Pipe & plane A info */ >+ dev_priv->savePIPEACONF = I915_READ(PIPEACONF); >+ 
dev_priv->savePIPEASRC = I915_READ(PIPEASRC); >+ dev_priv->saveFPA0 = I915_READ(FPA0); >+ dev_priv->saveFPA1 = I915_READ(FPA1); >+ dev_priv->saveDPLL_A = I915_READ(DPLL_A); >+ if (IS_I965G(dev)) >+ dev_priv->saveDPLL_A_MD = I915_READ(DPLL_A_MD); >+ dev_priv->saveHTOTAL_A = I915_READ(HTOTAL_A); >+ dev_priv->saveHBLANK_A = I915_READ(HBLANK_A); >+ dev_priv->saveHSYNC_A = I915_READ(HSYNC_A); >+ dev_priv->saveVTOTAL_A = I915_READ(VTOTAL_A); >+ dev_priv->saveVBLANK_A = I915_READ(VBLANK_A); >+ dev_priv->saveVSYNC_A = I915_READ(VSYNC_A); >+ dev_priv->saveBCLRPAT_A = I915_READ(BCLRPAT_A); >+ >+ dev_priv->saveDSPACNTR = I915_READ(DSPACNTR); >+ dev_priv->saveDSPASTRIDE = I915_READ(DSPASTRIDE); >+ dev_priv->saveDSPASIZE = I915_READ(DSPASIZE); >+ dev_priv->saveDSPAPOS = I915_READ(DSPAPOS); >+ dev_priv->saveDSPABASE = I915_READ(DSPABASE); >+ if (IS_I965G(dev)) { >+ dev_priv->saveDSPASURF = I915_READ(DSPASURF); >+ dev_priv->saveDSPATILEOFF = I915_READ(DSPATILEOFF); >+ } >+ i915_save_palette(dev, PIPE_A); >+ >+ /* Pipe & plane B info */ >+ dev_priv->savePIPEBCONF = I915_READ(PIPEBCONF); >+ dev_priv->savePIPEBSRC = I915_READ(PIPEBSRC); >+ dev_priv->saveFPB0 = I915_READ(FPB0); >+ dev_priv->saveFPB1 = I915_READ(FPB1); >+ dev_priv->saveDPLL_B = I915_READ(DPLL_B); >+ if (IS_I965G(dev)) >+ dev_priv->saveDPLL_B_MD = I915_READ(DPLL_B_MD); >+ dev_priv->saveHTOTAL_B = I915_READ(HTOTAL_B); >+ dev_priv->saveHBLANK_B = I915_READ(HBLANK_B); >+ dev_priv->saveHSYNC_B = I915_READ(HSYNC_B); >+ dev_priv->saveVTOTAL_B = I915_READ(VTOTAL_B); >+ dev_priv->saveVBLANK_B = I915_READ(VBLANK_B); >+ dev_priv->saveVSYNC_B = I915_READ(VSYNC_B); >+ dev_priv->saveBCLRPAT_B = I915_READ(BCLRPAT_B); >+ >+ dev_priv->saveDSPBCNTR = I915_READ(DSPBCNTR); >+ dev_priv->saveDSPBSTRIDE = I915_READ(DSPBSTRIDE); >+ dev_priv->saveDSPBSIZE = I915_READ(DSPBSIZE); >+ dev_priv->saveDSPBPOS = I915_READ(DSPBPOS); >+ dev_priv->saveDSPBBASE = I915_READ(DSPBBASE); >+ if (IS_I965G(dev)) { >+ dev_priv->saveDSPBSURF = 
I915_READ(DSPBSURF); >+ dev_priv->saveDSPBTILEOFF = I915_READ(DSPBTILEOFF); >+ } >+ i915_save_palette(dev, PIPE_B); >+ >+ /* CRT state */ >+ dev_priv->saveADPA = I915_READ(ADPA); >+ >+ /* LVDS state */ >+ dev_priv->savePP_CONTROL = I915_READ(PP_CONTROL); >+ dev_priv->savePFIT_PGM_RATIOS = I915_READ(PFIT_PGM_RATIOS); >+ dev_priv->saveBLC_PWM_CTL = I915_READ(BLC_PWM_CTL); >+ if (IS_I965G(dev)) >+ dev_priv->saveBLC_PWM_CTL2 = I915_READ(BLC_PWM_CTL2); >+ if (IS_MOBILE(dev) && !IS_I830(dev)) >+ dev_priv->saveLVDS = I915_READ(LVDS); >+ if (!IS_I830(dev) && !IS_845G(dev)) >+ dev_priv->savePFIT_CONTROL = I915_READ(PFIT_CONTROL); >+ dev_priv->saveLVDSPP_ON = I915_READ(LVDSPP_ON); >+ dev_priv->saveLVDSPP_OFF = I915_READ(LVDSPP_OFF); >+ dev_priv->savePP_CYCLE = I915_READ(PP_CYCLE); >+ >+ /* FIXME: save TV & SDVO state */ >+ >+ /* FBC state */ >+ dev_priv->saveFBC_CFB_BASE = I915_READ(FBC_CFB_BASE); >+ dev_priv->saveFBC_LL_BASE = I915_READ(FBC_LL_BASE); >+ dev_priv->saveFBC_CONTROL2 = I915_READ(FBC_CONTROL2); >+ dev_priv->saveFBC_CONTROL = I915_READ(FBC_CONTROL); >+ >+ /* VGA state */ >+ dev_priv->saveVCLK_DIVISOR_VGA0 = I915_READ(VCLK_DIVISOR_VGA0); >+ dev_priv->saveVCLK_DIVISOR_VGA1 = I915_READ(VCLK_DIVISOR_VGA1); >+ dev_priv->saveVCLK_POST_DIV = I915_READ(VCLK_POST_DIV); >+ dev_priv->saveVGACNTRL = I915_READ(VGACNTRL); >+ >+ /* Scratch space */ >+ for (i = 0; i < 16; i++) { >+ dev_priv->saveSWF0[i] = I915_READ(SWF0 + (i << 2)); >+ dev_priv->saveSWF1[i] = I915_READ(SWF10 + (i << 2)); >+ } >+ for (i = 0; i < 3; i++) >+ dev_priv->saveSWF2[i] = I915_READ(SWF30 + (i << 2)); >+ >+ i915_save_vga(dev); >+ >+ /* Shut down the device */ >+ pci_disable_device(dev->pdev); >+ pci_set_power_state(dev->pdev, PCI_D3hot); >+ >+ return 0; >+} >+ >+static int i915_resume(struct drm_device *dev) >+{ >+ struct drm_i915_private *dev_priv = dev->dev_private; >+ int i; >+ >+ pci_set_power_state(dev->pdev, PCI_D0); >+ pci_restore_state(dev->pdev); >+ if (pci_enable_device(dev->pdev)) >+ return -1; 
>+ >+ pci_write_config_byte(dev->pdev, LBB, dev_priv->saveLBB); >+ >+ /* Pipe & plane A info */ >+ /* Prime the clock */ >+ if (dev_priv->saveDPLL_A & DPLL_VCO_ENABLE) { >+ I915_WRITE(DPLL_A, dev_priv->saveDPLL_A & >+ ~DPLL_VCO_ENABLE); >+ udelay(150); >+ } >+ I915_WRITE(FPA0, dev_priv->saveFPA0); >+ I915_WRITE(FPA1, dev_priv->saveFPA1); >+ /* Actually enable it */ >+ I915_WRITE(DPLL_A, dev_priv->saveDPLL_A); >+ udelay(150); >+ if (IS_I965G(dev)) >+ I915_WRITE(DPLL_A_MD, dev_priv->saveDPLL_A_MD); >+ udelay(150); >+ >+ /* Restore mode */ >+ I915_WRITE(HTOTAL_A, dev_priv->saveHTOTAL_A); >+ I915_WRITE(HBLANK_A, dev_priv->saveHBLANK_A); >+ I915_WRITE(HSYNC_A, dev_priv->saveHSYNC_A); >+ I915_WRITE(VTOTAL_A, dev_priv->saveVTOTAL_A); >+ I915_WRITE(VBLANK_A, dev_priv->saveVBLANK_A); >+ I915_WRITE(VSYNC_A, dev_priv->saveVSYNC_A); >+ I915_WRITE(BCLRPAT_A, dev_priv->saveBCLRPAT_A); >+ >+ /* Restore plane info */ >+ I915_WRITE(DSPASIZE, dev_priv->saveDSPASIZE); >+ I915_WRITE(DSPAPOS, dev_priv->saveDSPAPOS); >+ I915_WRITE(PIPEASRC, dev_priv->savePIPEASRC); >+ I915_WRITE(DSPABASE, dev_priv->saveDSPABASE); >+ I915_WRITE(DSPASTRIDE, dev_priv->saveDSPASTRIDE); >+ if (IS_I965G(dev)) { >+ I915_WRITE(DSPASURF, dev_priv->saveDSPASURF); >+ I915_WRITE(DSPATILEOFF, dev_priv->saveDSPATILEOFF); >+ } >+ >+ if ((dev_priv->saveDPLL_A & DPLL_VCO_ENABLE) && >+ (dev_priv->saveDPLL_A & DPLL_VGA_MODE_DIS)) >+ I915_WRITE(PIPEACONF, dev_priv->savePIPEACONF); >+ >+ i915_restore_palette(dev, PIPE_A); >+ /* Enable the plane */ >+ I915_WRITE(DSPACNTR, dev_priv->saveDSPACNTR); >+ I915_WRITE(DSPABASE, I915_READ(DSPABASE)); >+ >+ /* Pipe & plane B info */ >+ if (dev_priv->saveDPLL_B & DPLL_VCO_ENABLE) { >+ I915_WRITE(DPLL_B, dev_priv->saveDPLL_B & >+ ~DPLL_VCO_ENABLE); >+ udelay(150); >+ } >+ I915_WRITE(FPB0, dev_priv->saveFPB0); >+ I915_WRITE(FPB1, dev_priv->saveFPB1); >+ /* Actually enable it */ >+ I915_WRITE(DPLL_B, dev_priv->saveDPLL_B); >+ udelay(150); >+ if (IS_I965G(dev)) >+ I915_WRITE(DPLL_B_MD, 
dev_priv->saveDPLL_B_MD); >+ udelay(150); >+ >+ /* Restore mode */ >+ I915_WRITE(HTOTAL_B, dev_priv->saveHTOTAL_B); >+ I915_WRITE(HBLANK_B, dev_priv->saveHBLANK_B); >+ I915_WRITE(HSYNC_B, dev_priv->saveHSYNC_B); >+ I915_WRITE(VTOTAL_B, dev_priv->saveVTOTAL_B); >+ I915_WRITE(VBLANK_B, dev_priv->saveVBLANK_B); >+ I915_WRITE(VSYNC_B, dev_priv->saveVSYNC_B); >+ I915_WRITE(BCLRPAT_B, dev_priv->saveBCLRPAT_B); >+ >+ /* Restore plane info */ >+ I915_WRITE(DSPBSIZE, dev_priv->saveDSPBSIZE); >+ I915_WRITE(DSPBPOS, dev_priv->saveDSPBPOS); >+ I915_WRITE(PIPEBSRC, dev_priv->savePIPEBSRC); >+ I915_WRITE(DSPBBASE, dev_priv->saveDSPBBASE); >+ I915_WRITE(DSPBSTRIDE, dev_priv->saveDSPBSTRIDE); >+ if (IS_I965G(dev)) { >+ I915_WRITE(DSPBSURF, dev_priv->saveDSPBSURF); >+ I915_WRITE(DSPBTILEOFF, dev_priv->saveDSPBTILEOFF); >+ } >+ >+ if ((dev_priv->saveDPLL_B & DPLL_VCO_ENABLE) && >+ (dev_priv->saveDPLL_B & DPLL_VGA_MODE_DIS)) >+ I915_WRITE(PIPEBCONF, dev_priv->savePIPEBCONF); >+ i915_restore_palette(dev, PIPE_B); >+ /* Enable the plane */ >+ I915_WRITE(DSPBCNTR, dev_priv->saveDSPBCNTR); >+ I915_WRITE(DSPBBASE, I915_READ(DSPBBASE)); >+ >+ /* CRT state */ >+ I915_WRITE(ADPA, dev_priv->saveADPA); >+ >+ /* LVDS state */ >+ if (IS_I965G(dev)) >+ I915_WRITE(BLC_PWM_CTL2, dev_priv->saveBLC_PWM_CTL2); >+ if (IS_MOBILE(dev) && !IS_I830(dev)) >+ I915_WRITE(LVDS, dev_priv->saveLVDS); >+ if (!IS_I830(dev) && !IS_845G(dev)) >+ I915_WRITE(PFIT_CONTROL, dev_priv->savePFIT_CONTROL); >+ >+ I915_WRITE(PFIT_PGM_RATIOS, dev_priv->savePFIT_PGM_RATIOS); >+ I915_WRITE(BLC_PWM_CTL, dev_priv->saveBLC_PWM_CTL); >+ I915_WRITE(LVDSPP_ON, dev_priv->saveLVDSPP_ON); >+ I915_WRITE(LVDSPP_OFF, dev_priv->saveLVDSPP_OFF); >+ I915_WRITE(PP_CYCLE, dev_priv->savePP_CYCLE); >+ I915_WRITE(PP_CONTROL, dev_priv->savePP_CONTROL); >+ >+ /* FIXME: restore TV & SDVO state */ >+ >+ /* FBC info */ >+ I915_WRITE(FBC_CFB_BASE, dev_priv->saveFBC_CFB_BASE); >+ I915_WRITE(FBC_LL_BASE, dev_priv->saveFBC_LL_BASE); >+ 
I915_WRITE(FBC_CONTROL2, dev_priv->saveFBC_CONTROL2); >+ I915_WRITE(FBC_CONTROL, dev_priv->saveFBC_CONTROL); >+ >+ /* VGA state */ >+ I915_WRITE(VGACNTRL, dev_priv->saveVGACNTRL); >+ I915_WRITE(VCLK_DIVISOR_VGA0, dev_priv->saveVCLK_DIVISOR_VGA0); >+ I915_WRITE(VCLK_DIVISOR_VGA1, dev_priv->saveVCLK_DIVISOR_VGA1); >+ I915_WRITE(VCLK_POST_DIV, dev_priv->saveVCLK_POST_DIV); >+ udelay(150); >+ >+ for (i = 0; i < 16; i++) { >+ I915_WRITE(SWF0 + (i << 2), dev_priv->saveSWF0[i]); >+ I915_WRITE(SWF10 + (i << 2), dev_priv->saveSWF1[i]); >+ } >+ for (i = 0; i < 3; i++) >+ I915_WRITE(SWF30 + (i << 2), dev_priv->saveSWF2[i]); >+ >+ i915_restore_vga(dev); >+ >+ return 0; >+} >+ >+static int probe(struct pci_dev *pdev, const struct pci_device_id *ent); > static struct drm_driver driver = { > /* don't use mtrr's here, the Xserver or user space app should > * deal with them for intel hardware. > */ > .driver_features = >- DRIVER_USE_AGP | DRIVER_REQUIRE_AGP | /* DRIVER_USE_MTRR |*/ >+ DRIVER_USE_AGP | DRIVER_REQUIRE_AGP | /* DRIVER_USE_MTRR | */ > DRIVER_HAVE_IRQ | DRIVER_IRQ_SHARED | DRIVER_IRQ_VBL | > DRIVER_IRQ_VBL2, > .load = i915_driver_load, >+ .unload = i915_driver_unload, >+ .firstopen = i915_driver_firstopen, > .lastclose = i915_driver_lastclose, > .preclose = i915_driver_preclose, >+ .suspend = i915_suspend, >+ .resume = i915_resume, > .device_is_agp = i915_driver_device_is_agp, > .vblank_wait = i915_driver_vblank_wait, > .vblank_wait2 = i915_driver_vblank_wait2, >@@ -61,23 +557,29 @@ static struct drm_driver driver = { > .get_reg_ofs = drm_core_get_reg_ofs, > .ioctls = i915_ioctls, > .fops = { >- .owner = THIS_MODULE, >- .open = drm_open, >- .release = drm_release, >- .ioctl = drm_ioctl, >- .mmap = drm_mmap, >- .poll = drm_poll, >- .fasync = drm_fasync, >-#ifdef CONFIG_COMPAT >- .compat_ioctl = i915_compat_ioctl, >+ .owner = THIS_MODULE, >+ .open = drm_open, >+ .release = drm_release, >+ .ioctl = drm_ioctl, >+ .mmap = drm_mmap, >+ .poll = drm_poll, >+ .fasync = 
drm_fasync, >+#if defined(CONFIG_COMPAT) && LINUX_VERSION_CODE > KERNEL_VERSION(2,6,9) >+ .compat_ioctl = i915_compat_ioctl, > #endif >- }, >- >+ }, > .pci_driver = { >- .name = DRIVER_NAME, >- .id_table = pciidlist, >- }, >- >+ .name = DRIVER_NAME, >+ .id_table = pciidlist, >+ .probe = probe, >+ .remove = __devexit_p(drm_cleanup_pci), >+ }, >+#ifdef I915_HAVE_FENCE >+ .fence_driver = &i915_fence_driver, >+#endif >+#ifdef I915_HAVE_BUFFER >+ .bo_driver = &i915_bo_driver, >+#endif > .name = DRIVER_NAME, > .desc = DRIVER_DESC, > .date = DRIVER_DATE, >@@ -86,10 +588,15 @@ static struct drm_driver driver = { > .patchlevel = DRIVER_PATCHLEVEL, > }; > >+static int probe(struct pci_dev *pdev, const struct pci_device_id *ent) >+{ >+ return drm_get_dev(pdev, ent, &driver); >+} >+ > static int __init i915_init(void) > { > driver.num_ioctls = i915_max_ioctl; >- return drm_init(&driver); >+ return drm_init(&driver, pciidlist); > } > > static void __exit i915_exit(void) >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/i915_drv.h linux-2.6.23.i686/drivers/char/drm/i915_drv.h >--- linux-2.6.23.i686.orig/drivers/char/drm/i915_drv.h 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/i915_drv.h 2008-01-06 09:24:57.000000000 +0100 >@@ -37,7 +37,12 @@ > > #define DRIVER_NAME "i915" > #define DRIVER_DESC "Intel Graphics" >-#define DRIVER_DATE "20060119" >+#define DRIVER_DATE "20070209" >+ >+#if defined(__linux__) >+#define I915_HAVE_FENCE >+#define I915_HAVE_BUFFER >+#endif > > /* Interface history: > * >@@ -48,11 +53,24 @@ > * 1.5: Add vblank pipe configuration > * 1.6: - New ioctl for scheduling buffer swaps on vertical blank > * - Support vertical blank on secondary display pipe >+ * 1.8: New ioctl for ARB_Occlusion_Query >+ * 1.9: Usable page flipping and triple buffering >+ * 1.10: Plane/pipe disentangling >+ * 1.11: TTM superioctl >+ * 1.12: TTM relocation optimization > */ > #define DRIVER_MAJOR 1 >+#if defined(I915_HAVE_FENCE) && 
defined(I915_HAVE_BUFFER) >+#define DRIVER_MINOR 12 >+#else > #define DRIVER_MINOR 6 >+#endif > #define DRIVER_PATCHLEVEL 0 > >+#ifdef I915_HAVE_BUFFER >+#define I915_MAX_VALIDATE_BUFFERS 4096 >+#endif >+ > typedef struct _drm_i915_ring_buffer { > int tail_mask; > unsigned long Start; >@@ -76,8 +94,9 @@ struct mem_block { > typedef struct _drm_i915_vbl_swap { > struct list_head head; > drm_drawable_t drw_id; >- unsigned int pipe; >+ unsigned int plane; > unsigned int sequence; >+ int flip; > } drm_i915_vbl_swap_t; > > typedef struct drm_i915_private { >@@ -90,15 +109,11 @@ typedef struct drm_i915_private { > drm_dma_handle_t *status_page_dmah; > void *hw_status_page; > dma_addr_t dma_status_page; >- unsigned long counter; >+ uint32_t counter; > unsigned int status_gfx_addr; > drm_local_map_t hws_map; > > unsigned int cpp; >- int back_offset; >- int front_offset; >- int current_page; >- int page_flipping; > int use_mi_batchbuffer_start; > > wait_queue_head_t irq_queue; >@@ -110,24 +125,132 @@ typedef struct drm_i915_private { > struct mem_block *agp_heap; > unsigned int sr01, adpa, ppcr, dvob, dvoc, lvds; > int vblank_pipe; >+ DRM_SPINTYPE user_irq_lock; >+ int user_irq_refcount; >+ int fence_irq_on; >+ uint32_t irq_enable_reg; >+ int irq_enabled; >+ >+#ifdef I915_HAVE_FENCE >+ uint32_t flush_sequence; >+ uint32_t flush_flags; >+ uint32_t flush_pending; >+ uint32_t saved_flush_status; >+#endif >+#ifdef I915_HAVE_BUFFER >+ void *agp_iomap; >+ unsigned int max_validate_buffers; >+ struct mutex cmdbuf_mutex; >+#endif > >- spinlock_t swaps_lock; >+ DRM_SPINTYPE swaps_lock; > drm_i915_vbl_swap_t vbl_swaps; > unsigned int swaps_pending; >+ >+ /* Register state */ >+ u8 saveLBB; >+ u32 saveDSPACNTR; >+ u32 saveDSPBCNTR; >+ u32 savePIPEACONF; >+ u32 savePIPEBCONF; >+ u32 savePIPEASRC; >+ u32 savePIPEBSRC; >+ u32 saveFPA0; >+ u32 saveFPA1; >+ u32 saveDPLL_A; >+ u32 saveDPLL_A_MD; >+ u32 saveHTOTAL_A; >+ u32 saveHBLANK_A; >+ u32 saveHSYNC_A; >+ u32 saveVTOTAL_A; >+ u32 
saveVBLANK_A; >+ u32 saveVSYNC_A; >+ u32 saveBCLRPAT_A; >+ u32 saveDSPASTRIDE; >+ u32 saveDSPASIZE; >+ u32 saveDSPAPOS; >+ u32 saveDSPABASE; >+ u32 saveDSPASURF; >+ u32 saveDSPATILEOFF; >+ u32 savePFIT_PGM_RATIOS; >+ u32 saveBLC_PWM_CTL; >+ u32 saveBLC_PWM_CTL2; >+ u32 saveFPB0; >+ u32 saveFPB1; >+ u32 saveDPLL_B; >+ u32 saveDPLL_B_MD; >+ u32 saveHTOTAL_B; >+ u32 saveHBLANK_B; >+ u32 saveHSYNC_B; >+ u32 saveVTOTAL_B; >+ u32 saveVBLANK_B; >+ u32 saveVSYNC_B; >+ u32 saveBCLRPAT_B; >+ u32 saveDSPBSTRIDE; >+ u32 saveDSPBSIZE; >+ u32 saveDSPBPOS; >+ u32 saveDSPBBASE; >+ u32 saveDSPBSURF; >+ u32 saveDSPBTILEOFF; >+ u32 saveVCLK_DIVISOR_VGA0; >+ u32 saveVCLK_DIVISOR_VGA1; >+ u32 saveVCLK_POST_DIV; >+ u32 saveVGACNTRL; >+ u32 saveADPA; >+ u32 saveLVDS; >+ u32 saveLVDSPP_ON; >+ u32 saveLVDSPP_OFF; >+ u32 saveDVOA; >+ u32 saveDVOB; >+ u32 saveDVOC; >+ u32 savePP_ON; >+ u32 savePP_OFF; >+ u32 savePP_CONTROL; >+ u32 savePP_CYCLE; >+ u32 savePFIT_CONTROL; >+ u32 save_palette_a[256]; >+ u32 save_palette_b[256]; >+ u32 saveFBC_CFB_BASE; >+ u32 saveFBC_LL_BASE; >+ u32 saveFBC_CONTROL; >+ u32 saveFBC_CONTROL2; >+ u32 saveSWF0[16]; >+ u32 saveSWF1[16]; >+ u32 saveSWF2[3]; >+ u8 saveMSR; >+ u8 saveSR[8]; >+ u8 saveGR[25]; /* indexed up to GR18 (0x18) */ >+ u8 saveAR_INDEX; >+ u8 saveAR[20]; >+ u8 saveDACMASK; >+ u8 saveDACDATA[256*3]; /* 256 3-byte colors */ >+ u8 saveCR[36]; > } drm_i915_private_t; > >+enum intel_chip_family { >+ CHIP_I8XX = 0x01, >+ CHIP_I9XX = 0x02, >+ CHIP_I915 = 0x04, >+ CHIP_I965 = 0x08, >+}; >+ > extern struct drm_ioctl_desc i915_ioctls[]; > extern int i915_max_ioctl; > > /* i915_dma.c */ > extern void i915_kernel_lost_context(struct drm_device * dev); > extern int i915_driver_load(struct drm_device *, unsigned long flags); >+extern int i915_driver_unload(struct drm_device *); > extern void i915_driver_lastclose(struct drm_device * dev); > extern void i915_driver_preclose(struct drm_device *dev, > struct drm_file *file_priv); > extern int i915_driver_device_is_agp(struct drm_device * dev); > 
extern long i915_compat_ioctl(struct file *filp, unsigned int cmd, > unsigned long arg); >+extern void i915_emit_breadcrumb(struct drm_device *dev); >+extern void i915_dispatch_flip(struct drm_device * dev, int pipes, int sync); >+extern int i915_emit_mi_flush(struct drm_device *dev, uint32_t flush); >+extern int i915_driver_firstopen(struct drm_device *dev); > > /* i915_irq.c */ > extern int i915_irq_emit(struct drm_device *dev, void *data, >@@ -145,6 +268,9 @@ extern int i915_vblank_pipe_set(struct d > struct drm_file *file_priv); > extern int i915_vblank_pipe_get(struct drm_device *dev, void *data, > struct drm_file *file_priv); >+extern int i915_emit_irq(struct drm_device *dev); >+extern void i915_user_irq_on(drm_i915_private_t *dev_priv); >+extern void i915_user_irq_off(drm_i915_private_t *dev_priv); > extern int i915_vblank_swap(struct drm_device *dev, void *data, > struct drm_file *file_priv); > >@@ -159,24 +285,58 @@ extern int i915_mem_destroy_heap(struct > struct drm_file *file_priv); > extern void i915_mem_takedown(struct mem_block **heap); > extern void i915_mem_release(struct drm_device * dev, >- struct drm_file *file_priv, struct mem_block *heap); >+ struct drm_file *file_priv, >+ struct mem_block *heap); >+#ifdef I915_HAVE_FENCE >+/* i915_fence.c */ >+ >+ >+extern void i915_fence_handler(struct drm_device *dev); >+extern int i915_fence_emit_sequence(struct drm_device *dev, uint32_t class, >+ uint32_t flags, >+ uint32_t *sequence, >+ uint32_t *native_type); >+extern void i915_poke_flush(struct drm_device *dev, uint32_t class); >+extern int i915_fence_has_irq(struct drm_device *dev, uint32_t class, uint32_t flags); >+#endif >+ >+#ifdef I915_HAVE_BUFFER >+/* i915_buffer.c */ >+extern struct drm_ttm_backend *i915_create_ttm_backend_entry(struct drm_device *dev); >+extern int i915_fence_type(struct drm_buffer_object *bo, uint32_t *fclass, >+ uint32_t *type); >+extern int i915_invalidate_caches(struct drm_device *dev, uint64_t buffer_flags); >+extern int 
i915_init_mem_type(struct drm_device *dev, uint32_t type, >+ struct drm_mem_type_manager *man); >+extern uint64_t i915_evict_flags(struct drm_buffer_object *bo); >+extern int i915_move(struct drm_buffer_object *bo, int evict, >+ int no_wait, struct drm_bo_mem_reg *new_mem); >+void i915_flush_ttm(struct drm_ttm *ttm); >+#endif >+ >+#ifdef __linux__ >+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,25) >+extern void intel_init_chipset_flush_compat(struct drm_device *dev); >+extern void intel_fini_chipset_flush_compat(struct drm_device *dev); >+#endif >+#endif > > #define I915_READ(reg) DRM_READ32(dev_priv->mmio_map, (reg)) > #define I915_WRITE(reg,val) DRM_WRITE32(dev_priv->mmio_map, (reg), (val)) >-#define I915_READ16(reg) DRM_READ16(dev_priv->mmio_map, (reg)) >+#define I915_READ16(reg) DRM_READ16(dev_priv->mmio_map, (reg)) > #define I915_WRITE16(reg,val) DRM_WRITE16(dev_priv->mmio_map, (reg), (val)) > > #define I915_VERBOSE 0 > > #define RING_LOCALS unsigned int outring, ringmask, outcount; \ >- volatile char *virt; >+ volatile char *virt; > > #define BEGIN_LP_RING(n) do { \ > if (I915_VERBOSE) \ > DRM_DEBUG("BEGIN_LP_RING(%d) in %s\n", \ >- (n), __FUNCTION__); \ >- if (dev_priv->ring.space < (n)*4) \ >- i915_wait_ring(dev, (n)*4, __FUNCTION__); \ >+ (n), __FUNCTION__); \ >+ if (dev_priv->ring.space < (n)*4) \ >+ i915_wait_ring(dev, (n)*4, __FUNCTION__); \ > outcount = 0; \ > outring = dev_priv->ring.tail; \ > ringmask = dev_priv->ring.tail_mask; \ >@@ -185,8 +345,8 @@ extern void i915_mem_release(struct drm_ > > #define OUT_RING(n) do { \ > if (I915_VERBOSE) DRM_DEBUG(" OUT_RING %x\n", (int)(n)); \ >- *(volatile unsigned int *)(virt + outring) = (n); \ >- outcount++; \ >+ *(volatile unsigned int *)(virt + outring) = (n); \ >+ outcount++; \ > outring += 4; \ > outring &= ringmask; \ > } while (0) >@@ -200,25 +360,115 @@ extern void i915_mem_release(struct drm_ > > extern int i915_wait_ring(struct drm_device * dev, int n, const char *caller); > >-#define 
GFX_OP_USER_INTERRUPT ((0<<29)|(2<<23)) >+/* Extended config space */ >+#define LBB 0xf4 >+ >+/* VGA stuff */ >+ >+#define VGA_ST01_MDA 0x3ba >+#define VGA_ST01_CGA 0x3da >+ >+#define VGA_MSR_WRITE 0x3c2 >+#define VGA_MSR_READ 0x3cc >+#define VGA_MSR_MEM_EN (1<<1) >+#define VGA_MSR_CGA_MODE (1<<0) >+ >+#define VGA_SR_INDEX 0x3c4 >+#define VGA_SR_DATA 0x3c5 >+ >+#define VGA_AR_INDEX 0x3c0 >+#define VGA_AR_VID_EN (1<<5) >+#define VGA_AR_DATA_WRITE 0x3c0 >+#define VGA_AR_DATA_READ 0x3c1 >+ >+#define VGA_GR_INDEX 0x3ce >+#define VGA_GR_DATA 0x3cf >+/* GR05 */ >+#define VGA_GR_MEM_READ_MODE_SHIFT 3 >+#define VGA_GR_MEM_READ_MODE_PLANE 1 >+/* GR06 */ >+#define VGA_GR_MEM_MODE_MASK 0xc >+#define VGA_GR_MEM_MODE_SHIFT 2 >+#define VGA_GR_MEM_A0000_AFFFF 0 >+#define VGA_GR_MEM_A0000_BFFFF 1 >+#define VGA_GR_MEM_B0000_B7FFF 2 >+#define VGA_GR_MEM_B0000_BFFFF 3 >+ >+#define VGA_DACMASK 0x3c6 >+#define VGA_DACRX 0x3c7 >+#define VGA_DACWX 0x3c8 >+#define VGA_DACDATA 0x3c9 >+ >+#define VGA_CR_INDEX_MDA 0x3b4 >+#define VGA_CR_DATA_MDA 0x3b5 >+#define VGA_CR_INDEX_CGA 0x3d4 >+#define VGA_CR_DATA_CGA 0x3d5 >+ >+#define GFX_OP_USER_INTERRUPT ((0<<29)|(2<<23)) > #define GFX_OP_BREAKPOINT_INTERRUPT ((0<<29)|(1<<23)) > #define CMD_REPORT_HEAD (7<<23) > #define CMD_STORE_DWORD_IDX ((0x21<<23) | 0x1) > #define CMD_OP_BATCH_BUFFER ((0x0<<29)|(0x30<<23)|0x1) > >-#define INST_PARSER_CLIENT 0x00000000 >-#define INST_OP_FLUSH 0x02000000 >-#define INST_FLUSH_MAP_CACHE 0x00000001 >+#define CMD_MI_FLUSH (0x04 << 23) >+#define MI_NO_WRITE_FLUSH (1 << 2) >+#define MI_READ_FLUSH (1 << 0) >+#define MI_EXE_FLUSH (1 << 1) >+#define MI_END_SCENE (1 << 4) /* flush binner and incr scene count */ >+#define MI_SCENE_COUNT (1 << 3) /* just increment scene count */ >+ >+/* Packet to load a register value from the ring/batch command stream: >+ */ >+#define CMD_MI_LOAD_REGISTER_IMM ((0x22 << 23)|0x1) > > #define BB1_START_ADDR_MASK (~0x7) > #define BB1_PROTECTED (1<<0) > #define BB1_UNPROTECTED (0<<0) > #define 
BB2_END_ADDR_MASK (~0x7) > >+/* Framebuffer compression */ >+#define FBC_CFB_BASE 0x03200 /* 4k page aligned */ >+#define FBC_LL_BASE 0x03204 /* 4k page aligned */ >+#define FBC_CONTROL 0x03208 >+#define FBC_CTL_EN (1<<31) >+#define FBC_CTL_PERIODIC (1<<30) >+#define FBC_CTL_INTERVAL_SHIFT (16) >+#define FBC_CTL_UNCOMPRESSIBLE (1<<14) >+#define FBC_CTL_STRIDE_SHIFT (5) >+#define FBC_CTL_FENCENO (1<<0) >+#define FBC_COMMAND 0x0320c >+#define FBC_CMD_COMPRESS (1<<0) >+#define FBC_STATUS 0x03210 >+#define FBC_STAT_COMPRESSING (1<<31) >+#define FBC_STAT_COMPRESSED (1<<30) >+#define FBC_STAT_MODIFIED (1<<29) >+#define FBC_STAT_CURRENT_LINE (1<<0) >+#define FBC_CONTROL2 0x03214 >+#define FBC_CTL_FENCE_DBL (0<<4) >+#define FBC_CTL_IDLE_IMM (0<<2) >+#define FBC_CTL_IDLE_FULL (1<<2) >+#define FBC_CTL_IDLE_LINE (2<<2) >+#define FBC_CTL_IDLE_DEBUG (3<<2) >+#define FBC_CTL_CPU_FENCE (1<<1) >+#define FBC_CTL_PLANEA (0<<0) >+#define FBC_CTL_PLANEB (1<<0) >+#define FBC_FENCE_OFF 0x0321b >+ >+#define FBC_LL_SIZE (1536) >+#define FBC_LL_PAD (32) >+ >+/* Interrupt bits: >+ */ >+#define USER_INT_FLAG (1<<1) >+#define VSYNC_PIPEB_FLAG (1<<5) >+#define VSYNC_PIPEA_FLAG (1<<7) >+#define HWB_OOM_FLAG (1<<13) /* binner out of memory */ >+ > #define I915REG_HWSTAM 0x02098 > #define I915REG_INT_IDENTITY_R 0x020a4 >-#define I915REG_INT_MASK_R 0x020a8 >+#define I915REG_INT_MASK_R 0x020a8 > #define I915REG_INT_ENABLE_R 0x020a0 >+#define I915REG_INSTPM 0x020c0 > > #define I915REG_PIPEASTAT 0x70024 > #define I915REG_PIPEBSTAT 0x71024 >@@ -229,7 +479,7 @@ extern int i915_wait_ring(struct drm_dev > #define SRX_INDEX 0x3c4 > #define SRX_DATA 0x3c5 > #define SR01 1 >-#define SR01_SCREEN_OFF (1<<5) >+#define SR01_SCREEN_OFF (1<<5) > > #define PPCR 0x61204 > #define PPCR_ON (1<<0) >@@ -249,31 +499,129 @@ extern int i915_wait_ring(struct drm_dev > #define ADPA_DPMS_OFF (3<<10) > > #define NOPID 0x2094 >-#define LP_RING 0x2030 >-#define HP_RING 0x2040 >-#define RING_TAIL 0x00 >+#define LP_RING 0x2030 
>+#define HP_RING 0x2040 >+/* The binner has its own ring buffer: >+ */ >+#define HWB_RING 0x2400 >+ >+#define RING_TAIL 0x00 > #define TAIL_ADDR 0x001FFFF8 >-#define RING_HEAD 0x04 >-#define HEAD_WRAP_COUNT 0xFFE00000 >-#define HEAD_WRAP_ONE 0x00200000 >-#define HEAD_ADDR 0x001FFFFC >-#define RING_START 0x08 >-#define START_ADDR 0x0xFFFFF000 >-#define RING_LEN 0x0C >-#define RING_NR_PAGES 0x001FF000 >-#define RING_REPORT_MASK 0x00000006 >-#define RING_REPORT_64K 0x00000002 >-#define RING_REPORT_128K 0x00000004 >-#define RING_NO_REPORT 0x00000000 >-#define RING_VALID_MASK 0x00000001 >-#define RING_VALID 0x00000001 >-#define RING_INVALID 0x00000000 >+#define RING_HEAD 0x04 >+#define HEAD_WRAP_COUNT 0xFFE00000 >+#define HEAD_WRAP_ONE 0x00200000 >+#define HEAD_ADDR 0x001FFFFC >+#define RING_START 0x08 >+#define START_ADDR 0xFFFFF000 >+#define RING_LEN 0x0C >+#define RING_NR_PAGES 0x001FF000 >+#define RING_REPORT_MASK 0x00000006 >+#define RING_REPORT_64K 0x00000002 >+#define RING_REPORT_128K 0x00000004 >+#define RING_NO_REPORT 0x00000000 >+#define RING_VALID_MASK 0x00000001 >+#define RING_VALID 0x00000001 >+#define RING_INVALID 0x00000000 >+ >+/* Instruction parser error reg: >+ */ >+#define IPEIR 0x2088 >+ >+/* Scratch pad debug 0 reg: >+ */ >+#define SCPD0 0x209c >+ >+/* Error status reg: >+ */ >+#define ESR 0x20b8 >+ >+/* Secondary DMA fetch address debug reg: >+ */ >+#define DMA_FADD_S 0x20d4 >+ >+/* Cache mode 0 reg. >+ * - Manipulating render cache behaviour is central >+ * to the concept of zone rendering, tuning this reg can help avoid >+ * unnecessary render cache reads and even writes (for z/stencil) >+ * at beginning and end of scene. >+ * >+ * - To change a bit, write to this reg with a mask bit set and the >+ * bit of interest either set or cleared. EG: (BIT<<16) | BIT to set. 
>+ */ >+#define Cache_Mode_0 0x2120 >+#define CM0_MASK_SHIFT 16 >+#define CM0_IZ_OPT_DISABLE (1<<6) >+#define CM0_ZR_OPT_DISABLE (1<<5) >+#define CM0_DEPTH_EVICT_DISABLE (1<<4) >+#define CM0_COLOR_EVICT_DISABLE (1<<3) >+#define CM0_DEPTH_WRITE_DISABLE (1<<1) >+#define CM0_RC_OP_FLUSH_DISABLE (1<<0) >+ >+ >+/* Graphics flush control. A CPU write flushes the GWB of all writes. >+ * The data is discarded. >+ */ >+#define GFX_FLSH_CNTL 0x2170 >+ >+/* Binner control. Defines the location of the bin pointer list: >+ */ >+#define BINCTL 0x2420 >+#define BC_MASK (1 << 9) >+ >+/* Binned scene info. >+ */ >+#define BINSCENE 0x2428 >+#define BS_OP_LOAD (1 << 8) >+#define BS_MASK (1 << 22) >+ >+/* Bin command parser debug reg: >+ */ >+#define BCPD 0x2480 >+ >+/* Bin memory control debug reg: >+ */ >+#define BMCD 0x2484 >+ >+/* Bin data cache debug reg: >+ */ >+#define BDCD 0x2488 >+ >+/* Binner pointer cache debug reg: >+ */ >+#define BPCD 0x248c >+ >+/* Binner scratch pad debug reg: >+ */ >+#define BINSKPD 0x24f0 >+ >+/* HWB scratch pad debug reg: >+ */ >+#define HWBSKPD 0x24f4 >+ >+/* Binner memory pool reg: >+ */ >+#define BMP_BUFFER 0x2430 >+#define BMP_PAGE_SIZE_4K (0 << 10) >+#define BMP_BUFFER_SIZE_SHIFT 1 >+#define BMP_ENABLE (1 << 0) >+ >+/* Get/put memory from the binner memory pool: >+ */ >+#define BMP_GET 0x2438 >+#define BMP_PUT 0x2440 >+#define BMP_OFFSET_SHIFT 5 >+ >+/* 3D state packets: >+ */ >+#define GFX_OP_RASTER_RULES ((0x3<<29)|(0x7<<24)) > > #define GFX_OP_SCISSOR ((0x3<<29)|(0x1c<<24)|(0x10<<19)) > #define SC_UPDATE_SCISSOR (0x1<<1) > #define SC_ENABLE_MASK (0x1<<0) > #define SC_ENABLE (0x1<<0) > >+#define GFX_OP_LOAD_INDIRECT ((0x3<<29)|(0x1d<<24)|(0x7<<16)) >+ > #define GFX_OP_SCISSOR_INFO ((0x3<<29)|(0x1d<<24)|(0x81<<16)|(0x1)) > #define SCI_YMIN_MASK (0xffff<<16) > #define SCI_XMIN_MASK (0xffff<<0) >@@ -288,19 +636,22 @@ extern int i915_wait_ring(struct drm_dev > #define GFX_OP_DESTBUFFER_VARS ((0x3<<29)|(0x1d<<24)|(0x85<<16)|0x0) > #define 
GFX_OP_DRAWRECT_INFO ((0x3<<29)|(0x1d<<24)|(0x80<<16)|(0x3)) > >-#define GFX_OP_DRAWRECT_INFO_I965 ((0x7900<<16)|0x2) >+#define GFX_OP_DRAWRECT_INFO_I965 ((0x7900<<16)|0x2) > >+#define SRC_COPY_BLT_CMD ((2<<29)|(0x43<<22)|4) > #define XY_SRC_COPY_BLT_CMD ((2<<29)|(0x53<<22)|6) > #define XY_SRC_COPY_BLT_WRITE_ALPHA (1<<21) > #define XY_SRC_COPY_BLT_WRITE_RGB (1<<20) > >-#define MI_BATCH_BUFFER ((0x30<<23)|1) >-#define MI_BATCH_BUFFER_START (0x31<<23) >-#define MI_BATCH_BUFFER_END (0xA<<23) >+#define MI_BATCH_BUFFER ((0x30<<23)|1) >+#define MI_BATCH_BUFFER_START (0x31<<23) >+#define MI_BATCH_BUFFER_END (0xA<<23) > #define MI_BATCH_NON_SECURE (1) >+ > #define MI_BATCH_NON_SECURE_I965 (1<<8) > > #define MI_WAIT_FOR_EVENT ((0x3<<23)) >+#define MI_WAIT_FOR_PLANE_B_FLIP (1<<6) > #define MI_WAIT_FOR_PLANE_A_FLIP (1<<2) > #define MI_WAIT_FOR_PLANE_A_SCANLINES (1<<1) > >@@ -308,9 +659,528 @@ extern int i915_wait_ring(struct drm_dev > > #define CMD_OP_DISPLAYBUFFER_INFO ((0x0<<29)|(0x14<<23)|2) > #define ASYNC_FLIP (1<<22) >+#define DISPLAY_PLANE_A (0<<20) >+#define DISPLAY_PLANE_B (1<<20) >+ >+/* Display regs */ >+#define DSPACNTR 0x70180 >+#define DSPBCNTR 0x71180 >+#define DISPPLANE_SEL_PIPE_MASK (1<<24) >+ >+/* Define the region of interest for the binner: >+ */ >+#define CMD_OP_BIN_CONTROL ((0x3<<29)|(0x1d<<24)|(0x84<<16)|4) > > #define CMD_OP_DESTBUFFER_INFO ((0x3<<29)|(0x1d<<24)|(0x8e<<16)|1) > >-#define READ_BREADCRUMB(dev_priv) (((u32 *)(dev_priv->hw_status_page))[5]) >+#define BREADCRUMB_BITS 31 >+#define BREADCRUMB_MASK ((1U << BREADCRUMB_BITS) - 1) >+ >+#define READ_BREADCRUMB(dev_priv) (((volatile u32*)(dev_priv->hw_status_page))[5]) >+#define READ_HWSP(dev_priv, reg) (((volatile u32*)(dev_priv->hw_status_page))[reg]) >+ >+#define BLC_PWM_CTL 0x61254 >+#define BACKLIGHT_MODULATION_FREQ_SHIFT (17) >+ >+#define BLC_PWM_CTL2 0x61250 >+/** >+ * This is the most significant 15 bits of the number of backlight cycles in a >+ * complete cycle of the modulated backlight 
control. >+ * >+ * The actual value is this field multiplied by two. >+ */ >+#define BACKLIGHT_MODULATION_FREQ_MASK (0x7fff << 17) >+#define BLM_LEGACY_MODE (1 << 16) >+/** >+ * This is the number of cycles out of the backlight modulation cycle for which >+ * the backlight is on. >+ * >+ * This field must be no greater than the number of cycles in the complete >+ * backlight modulation cycle. >+ */ >+#define BACKLIGHT_DUTY_CYCLE_SHIFT (0) >+#define BACKLIGHT_DUTY_CYCLE_MASK (0xffff) >+ >+#define I915_GCFGC 0xf0 >+#define I915_LOW_FREQUENCY_ENABLE (1 << 7) >+#define I915_DISPLAY_CLOCK_190_200_MHZ (0 << 4) >+#define I915_DISPLAY_CLOCK_333_MHZ (4 << 4) >+#define I915_DISPLAY_CLOCK_MASK (7 << 4) >+ >+#define I855_HPLLCC 0xc0 >+#define I855_CLOCK_CONTROL_MASK (3 << 0) >+#define I855_CLOCK_133_200 (0 << 0) >+#define I855_CLOCK_100_200 (1 << 0) >+#define I855_CLOCK_100_133 (2 << 0) >+#define I855_CLOCK_166_250 (3 << 0) >+ >+/* p317, 319 >+ */ >+#define VCLK2_VCO_M 0x6008 /* treat as 16 bit? (includes msbs) */ >+#define VCLK2_VCO_N 0x600a >+#define VCLK2_VCO_DIV_SEL 0x6012 >+ >+#define VCLK_DIVISOR_VGA0 0x6000 >+#define VCLK_DIVISOR_VGA1 0x6004 >+#define VCLK_POST_DIV 0x6010 >+/** Selects a post divisor of 4 instead of 2. */ >+# define VGA1_PD_P2_DIV_4 (1 << 15) >+/** Overrides the p2 post divisor field */ >+# define VGA1_PD_P1_DIV_2 (1 << 13) >+# define VGA1_PD_P1_SHIFT 8 >+/** P1 value is 2 greater than this field */ >+# define VGA1_PD_P1_MASK (0x1f << 8) >+/** Selects a post divisor of 4 instead of 2. 
*/ >+# define VGA0_PD_P2_DIV_4 (1 << 7) >+/** Overrides the p2 post divisor field */ >+# define VGA0_PD_P1_DIV_2 (1 << 5) >+# define VGA0_PD_P1_SHIFT 0 >+/** P1 value is 2 greater than this field */ >+# define VGA0_PD_P1_MASK (0x1f << 0) >+ >+/* I830 CRTC registers */ >+#define HTOTAL_A 0x60000 >+#define HBLANK_A 0x60004 >+#define HSYNC_A 0x60008 >+#define VTOTAL_A 0x6000c >+#define VBLANK_A 0x60010 >+#define VSYNC_A 0x60014 >+#define PIPEASRC 0x6001c >+#define BCLRPAT_A 0x60020 >+#define VSYNCSHIFT_A 0x60028 >+ >+#define HTOTAL_B 0x61000 >+#define HBLANK_B 0x61004 >+#define HSYNC_B 0x61008 >+#define VTOTAL_B 0x6100c >+#define VBLANK_B 0x61010 >+#define VSYNC_B 0x61014 >+#define PIPEBSRC 0x6101c >+#define BCLRPAT_B 0x61020 >+#define VSYNCSHIFT_B 0x61028 >+ >+#define PP_STATUS 0x61200 >+# define PP_ON (1 << 31) >+/** >+ * Indicates that all dependencies of the panel are on: >+ * >+ * - PLL enabled >+ * - pipe enabled >+ * - LVDS/DVOB/DVOC on >+ */ >+# define PP_READY (1 << 30) >+# define PP_SEQUENCE_NONE (0 << 28) >+# define PP_SEQUENCE_ON (1 << 28) >+# define PP_SEQUENCE_OFF (2 << 28) >+# define PP_SEQUENCE_MASK 0x30000000 >+#define PP_CONTROL 0x61204 >+# define POWER_TARGET_ON (1 << 0) >+ >+#define LVDSPP_ON 0x61208 >+#define LVDSPP_OFF 0x6120c >+#define PP_CYCLE 0x61210 >+ >+#define PFIT_CONTROL 0x61230 >+# define PFIT_ENABLE (1 << 31) >+# define PFIT_PIPE_MASK (3 << 29) >+# define PFIT_PIPE_SHIFT 29 >+# define VERT_INTERP_DISABLE (0 << 10) >+# define VERT_INTERP_BILINEAR (1 << 10) >+# define VERT_INTERP_MASK (3 << 10) >+# define VERT_AUTO_SCALE (1 << 9) >+# define HORIZ_INTERP_DISABLE (0 << 6) >+# define HORIZ_INTERP_BILINEAR (1 << 6) >+# define HORIZ_INTERP_MASK (3 << 6) >+# define HORIZ_AUTO_SCALE (1 << 5) >+# define PANEL_8TO6_DITHER_ENABLE (1 << 3) >+ >+#define PFIT_PGM_RATIOS 0x61234 >+# define PFIT_VERT_SCALE_MASK 0xfff00000 >+# define PFIT_HORIZ_SCALE_MASK 0x0000fff0 >+ >+#define PFIT_AUTO_RATIOS 0x61238 >+ >+ >+#define DPLL_A 0x06014 >+#define DPLL_B 
0x06018 >+# define DPLL_VCO_ENABLE (1 << 31) >+# define DPLL_DVO_HIGH_SPEED (1 << 30) >+# define DPLL_SYNCLOCK_ENABLE (1 << 29) >+# define DPLL_VGA_MODE_DIS (1 << 28) >+# define DPLLB_MODE_DAC_SERIAL (1 << 26) /* i915 */ >+# define DPLLB_MODE_LVDS (2 << 26) /* i915 */ >+# define DPLL_MODE_MASK (3 << 26) >+# define DPLL_DAC_SERIAL_P2_CLOCK_DIV_10 (0 << 24) /* i915 */ >+# define DPLL_DAC_SERIAL_P2_CLOCK_DIV_5 (1 << 24) /* i915 */ >+# define DPLLB_LVDS_P2_CLOCK_DIV_14 (0 << 24) /* i915 */ >+# define DPLLB_LVDS_P2_CLOCK_DIV_7 (1 << 24) /* i915 */ >+# define DPLL_P2_CLOCK_DIV_MASK 0x03000000 /* i915 */ >+# define DPLL_FPA01_P1_POST_DIV_MASK 0x00ff0000 /* i915 */ >+/** >+ * The i830 generation, in DAC/serial mode, defines p1 as two plus this >+ * bitfield, or just 2 if PLL_P1_DIVIDE_BY_TWO is set. >+ */ >+# define DPLL_FPA01_P1_POST_DIV_MASK_I830 0x001f0000 >+/** >+ * The i830 generation, in LVDS mode, defines P1 as the bit number set within >+ * this field (only one bit may be set). >+ */ >+# define DPLL_FPA01_P1_POST_DIV_MASK_I830_LVDS 0x003f0000 >+# define DPLL_FPA01_P1_POST_DIV_SHIFT 16 >+# define PLL_P2_DIVIDE_BY_4 (1 << 23) /* i830, required in DVO non-gang */ >+# define PLL_P1_DIVIDE_BY_TWO (1 << 21) /* i830 */ >+# define PLL_REF_INPUT_DREFCLK (0 << 13) >+# define PLL_REF_INPUT_TVCLKINA (1 << 13) /* i830 */ >+# define PLL_REF_INPUT_TVCLKINBC (2 << 13) /* SDVO TVCLKIN */ >+# define PLLB_REF_INPUT_SPREADSPECTRUMIN (3 << 13) >+# define PLL_REF_INPUT_MASK (3 << 13) >+# define PLL_LOAD_PULSE_PHASE_SHIFT 9 >+/* >+ * Parallel to Serial Load Pulse phase selection. >+ * Selects the phase for the 10X DPLL clock for the PCIe >+ * digital display port. The range is 4 to 13; 10 or more >+ * is just a flip delay. The default is 6 >+ */ >+# define PLL_LOAD_PULSE_PHASE_MASK (0xf << PLL_LOAD_PULSE_PHASE_SHIFT) >+# define DISPLAY_RATE_SELECT_FPA1 (1 << 8) >+ >+/** >+ * SDVO multiplier for 945G/GM. Not used on 965. 
>+ * >+ * \sa DPLL_MD_UDI_MULTIPLIER_MASK >+ */ >+# define SDVO_MULTIPLIER_MASK 0x000000ff >+# define SDVO_MULTIPLIER_SHIFT_HIRES 4 >+# define SDVO_MULTIPLIER_SHIFT_VGA 0 >+ >+/** @defgroup DPLL_MD >+ * @{ >+ */ >+/** Pipe A SDVO/UDI clock multiplier/divider register for G965. */ >+#define DPLL_A_MD 0x0601c >+/** Pipe B SDVO/UDI clock multiplier/divider register for G965. */ >+#define DPLL_B_MD 0x06020 >+/** >+ * UDI pixel divider, controlling how many pixels are stuffed into a packet. >+ * >+ * Value is pixels minus 1. Must be set to 1 pixel for SDVO. >+ */ >+# define DPLL_MD_UDI_DIVIDER_MASK 0x3f000000 >+# define DPLL_MD_UDI_DIVIDER_SHIFT 24 >+/** UDI pixel divider for VGA, same as DPLL_MD_UDI_DIVIDER_MASK. */ >+# define DPLL_MD_VGA_UDI_DIVIDER_MASK 0x003f0000 >+# define DPLL_MD_VGA_UDI_DIVIDER_SHIFT 16 >+/** >+ * SDVO/UDI pixel multiplier. >+ * >+ * SDVO requires that the bus clock rate be between 1 and 2 Ghz, and the bus >+ * clock rate is 10 times the DPLL clock. At low resolution/refresh rate >+ * modes, the bus rate would be below the limits, so SDVO allows for stuffing >+ * dummy bytes in the datastream at an increased clock rate, with both sides of >+ * the link knowing how many bytes are fill. >+ * >+ * So, for a mode with a dotclock of 65Mhz, we would want to double the clock >+ * rate to 130Mhz to get a bus rate of 1.30Ghz. The DPLL clock rate would be >+ * set to 130Mhz, and the SDVO multiplier set to 2x in this register and >+ * through an SDVO command. >+ * >+ * This register field has values of multiplication factor minus 1, with >+ * a maximum multiplier of 5 for SDVO. >+ */ >+# define DPLL_MD_UDI_MULTIPLIER_MASK 0x00003f00 >+# define DPLL_MD_UDI_MULTIPLIER_SHIFT 8 >+/** SDVO/UDI pixel multiplier for VGA, same as DPLL_MD_UDI_MULTIPLIER_MASK. >+ * This best be set to the default value (3) or the CRT won't work. No, >+ * I don't entirely understand what this does... 
>+ */ >+# define DPLL_MD_VGA_UDI_MULTIPLIER_MASK 0x0000003f >+# define DPLL_MD_VGA_UDI_MULTIPLIER_SHIFT 0 >+/** @} */ >+ >+#define DPLL_TEST 0x606c >+# define DPLLB_TEST_SDVO_DIV_1 (0 << 22) >+# define DPLLB_TEST_SDVO_DIV_2 (1 << 22) >+# define DPLLB_TEST_SDVO_DIV_4 (2 << 22) >+# define DPLLB_TEST_SDVO_DIV_MASK (3 << 22) >+# define DPLLB_TEST_N_BYPASS (1 << 19) >+# define DPLLB_TEST_M_BYPASS (1 << 18) >+# define DPLLB_INPUT_BUFFER_ENABLE (1 << 16) >+# define DPLLA_TEST_N_BYPASS (1 << 3) >+# define DPLLA_TEST_M_BYPASS (1 << 2) >+# define DPLLA_INPUT_BUFFER_ENABLE (1 << 0) >+ >+#define ADPA 0x61100 >+#define ADPA_DAC_ENABLE (1<<31) >+#define ADPA_DAC_DISABLE 0 >+#define ADPA_PIPE_SELECT_MASK (1<<30) >+#define ADPA_PIPE_A_SELECT 0 >+#define ADPA_PIPE_B_SELECT (1<<30) >+#define ADPA_USE_VGA_HVPOLARITY (1<<15) >+#define ADPA_SETS_HVPOLARITY 0 >+#define ADPA_VSYNC_CNTL_DISABLE (1<<11) >+#define ADPA_VSYNC_CNTL_ENABLE 0 >+#define ADPA_HSYNC_CNTL_DISABLE (1<<10) >+#define ADPA_HSYNC_CNTL_ENABLE 0 >+#define ADPA_VSYNC_ACTIVE_HIGH (1<<4) >+#define ADPA_VSYNC_ACTIVE_LOW 0 >+#define ADPA_HSYNC_ACTIVE_HIGH (1<<3) >+#define ADPA_HSYNC_ACTIVE_LOW 0 >+ >+#define FPA0 0x06040 >+#define FPA1 0x06044 >+#define FPB0 0x06048 >+#define FPB1 0x0604c >+# define FP_N_DIV_MASK 0x003f0000 >+# define FP_N_DIV_SHIFT 16 >+# define FP_M1_DIV_MASK 0x00003f00 >+# define FP_M1_DIV_SHIFT 8 >+# define FP_M2_DIV_MASK 0x0000003f >+# define FP_M2_DIV_SHIFT 0 >+ >+ >+#define PORT_HOTPLUG_EN 0x61110 >+# define SDVOB_HOTPLUG_INT_EN (1 << 26) >+# define SDVOC_HOTPLUG_INT_EN (1 << 25) >+# define TV_HOTPLUG_INT_EN (1 << 18) >+# define CRT_HOTPLUG_INT_EN (1 << 9) >+# define CRT_HOTPLUG_FORCE_DETECT (1 << 3) >+ >+#define PORT_HOTPLUG_STAT 0x61114 >+# define CRT_HOTPLUG_INT_STATUS (1 << 11) >+# define TV_HOTPLUG_INT_STATUS (1 << 10) >+# define CRT_HOTPLUG_MONITOR_MASK (3 << 8) >+# define CRT_HOTPLUG_MONITOR_COLOR (3 << 8) >+# define CRT_HOTPLUG_MONITOR_MONO (2 << 8) >+# define CRT_HOTPLUG_MONITOR_NONE (0 << 8) 
>+# define SDVOC_HOTPLUG_INT_STATUS (1 << 7) >+# define SDVOB_HOTPLUG_INT_STATUS (1 << 6) >+ >+#define SDVOB 0x61140 >+#define SDVOC 0x61160 >+#define SDVO_ENABLE (1 << 31) >+#define SDVO_PIPE_B_SELECT (1 << 30) >+#define SDVO_STALL_SELECT (1 << 29) >+#define SDVO_INTERRUPT_ENABLE (1 << 26) >+/** >+ * 915G/GM SDVO pixel multiplier. >+ * >+ * Programmed value is multiplier - 1, up to 5x. >+ * >+ * \sa DPLL_MD_UDI_MULTIPLIER_MASK >+ */ >+#define SDVO_PORT_MULTIPLY_MASK (7 << 23) >+#define SDVO_PORT_MULTIPLY_SHIFT 23 >+#define SDVO_PHASE_SELECT_MASK (15 << 19) >+#define SDVO_PHASE_SELECT_DEFAULT (6 << 19) >+#define SDVO_CLOCK_OUTPUT_INVERT (1 << 18) >+#define SDVOC_GANG_MODE (1 << 16) >+#define SDVO_BORDER_ENABLE (1 << 7) >+#define SDVOB_PCIE_CONCURRENCY (1 << 3) >+#define SDVO_DETECTED (1 << 2) >+/* Bits to be preserved when writing */ >+#define SDVOB_PRESERVE_MASK ((1 << 17) | (1 << 16) | (1 << 14)) >+#define SDVOC_PRESERVE_MASK (1 << 17) >+ >+/** @defgroup LVDS >+ * @{ >+ */ >+/** >+ * This register controls the LVDS output enable, pipe selection, and data >+ * format selection. >+ * >+ * All of the clock/data pairs are force powered down by power sequencing. >+ */ >+#define LVDS 0x61180 >+/** >+ * Enables the LVDS port. This bit must be set before DPLLs are enabled, as >+ * the DPLL semantics change when the LVDS is assigned to that pipe. >+ */ >+# define LVDS_PORT_EN (1 << 31) >+/** Selects pipe B for LVDS data. Must be set on pre-965. */ >+# define LVDS_PIPEB_SELECT (1 << 30) >+ >+/** >+ * Enables the A0-A2 data pairs and CLKA, containing 18 bits of color data per >+ * pixel. >+ */ >+# define LVDS_A0A2_CLKA_POWER_MASK (3 << 8) >+# define LVDS_A0A2_CLKA_POWER_DOWN (0 << 8) >+# define LVDS_A0A2_CLKA_POWER_UP (3 << 8) >+/** >+ * Controls the A3 data pair, which contains the additional LSBs for 24 bit >+ * mode. Only enabled if LVDS_A0A2_CLKA_POWER_UP also indicates it should be >+ * on. 
>+ */ >+# define LVDS_A3_POWER_MASK (3 << 6) >+# define LVDS_A3_POWER_DOWN (0 << 6) >+# define LVDS_A3_POWER_UP (3 << 6) >+/** >+ * Controls the CLKB pair. This should only be set when LVDS_B0B3_POWER_UP >+ * is set. >+ */ >+# define LVDS_CLKB_POWER_MASK (3 << 4) >+# define LVDS_CLKB_POWER_DOWN (0 << 4) >+# define LVDS_CLKB_POWER_UP (3 << 4) >+ >+/** >+ * Controls the B0-B3 data pairs. This must be set to match the DPLL p2 >+ * setting for whether we are in dual-channel mode. The B3 pair will >+ * additionally only be powered up when LVDS_A3_POWER_UP is set. >+ */ >+# define LVDS_B0B3_POWER_MASK (3 << 2) >+# define LVDS_B0B3_POWER_DOWN (0 << 2) >+# define LVDS_B0B3_POWER_UP (3 << 2) >+ >+#define PIPEACONF 0x70008 >+#define PIPEACONF_ENABLE (1<<31) >+#define PIPEACONF_DISABLE 0 >+#define PIPEACONF_DOUBLE_WIDE (1<<30) >+#define I965_PIPECONF_ACTIVE (1<<30) >+#define PIPEACONF_SINGLE_WIDE 0 >+#define PIPEACONF_PIPE_UNLOCKED 0 >+#define PIPEACONF_PIPE_LOCKED (1<<25) >+#define PIPEACONF_PALETTE 0 >+#define PIPEACONF_GAMMA (1<<24) >+#define PIPECONF_FORCE_BORDER (1<<25) >+#define PIPECONF_PROGRESSIVE (0 << 21) >+#define PIPECONF_INTERLACE_W_FIELD_INDICATION (6 << 21) >+#define PIPECONF_INTERLACE_FIELD_0_ONLY (7 << 21) >+ >+#define PIPEBCONF 0x71008 >+#define PIPEBCONF_ENABLE (1<<31) >+#define PIPEBCONF_DISABLE 0 >+#define PIPEBCONF_DOUBLE_WIDE (1<<30) >+#define PIPEBCONF_DISABLE 0 >+#define PIPEBCONF_GAMMA (1<<24) >+#define PIPEBCONF_PALETTE 0 >+ >+#define PIPEBGCMAXRED 0x71010 >+#define PIPEBGCMAXGREEN 0x71014 >+#define PIPEBGCMAXBLUE 0x71018 >+#define PIPEBSTAT 0x71024 >+#define PIPEBFRAMEHIGH 0x71040 >+#define PIPEBFRAMEPIXEL 0x71044 >+ >+#define DSPACNTR 0x70180 >+#define DSPBCNTR 0x71180 >+#define DISPLAY_PLANE_ENABLE (1<<31) >+#define DISPLAY_PLANE_DISABLE 0 >+#define DISPPLANE_GAMMA_ENABLE (1<<30) >+#define DISPPLANE_GAMMA_DISABLE 0 >+#define DISPPLANE_PIXFORMAT_MASK (0xf<<26) >+#define DISPPLANE_8BPP (0x2<<26) >+#define DISPPLANE_15_16BPP (0x4<<26) >+#define 
DISPPLANE_16BPP (0x5<<26) >+#define DISPPLANE_32BPP_NO_ALPHA (0x6<<26) >+#define DISPPLANE_32BPP (0x7<<26) >+#define DISPPLANE_STEREO_ENABLE (1<<25) >+#define DISPPLANE_STEREO_DISABLE 0 >+#define DISPPLANE_SEL_PIPE_MASK (1<<24) >+#define DISPPLANE_SEL_PIPE_A 0 >+#define DISPPLANE_SEL_PIPE_B (1<<24) >+#define DISPPLANE_SRC_KEY_ENABLE (1<<22) >+#define DISPPLANE_SRC_KEY_DISABLE 0 >+#define DISPPLANE_LINE_DOUBLE (1<<20) >+#define DISPPLANE_NO_LINE_DOUBLE 0 >+#define DISPPLANE_STEREO_POLARITY_FIRST 0 >+#define DISPPLANE_STEREO_POLARITY_SECOND (1<<18) >+/* plane B only */ >+#define DISPPLANE_ALPHA_TRANS_ENABLE (1<<15) >+#define DISPPLANE_ALPHA_TRANS_DISABLE 0 >+#define DISPPLANE_SPRITE_ABOVE_DISPLAYA 0 >+#define DISPPLANE_SPRITE_ABOVE_OVERLAY (1) >+ >+#define DSPABASE 0x70184 >+#define DSPASTRIDE 0x70188 >+ >+#define DSPBBASE 0x71184 >+#define DSPBADDR DSPBBASE >+#define DSPBSTRIDE 0x71188 >+ >+#define DSPAKEYVAL 0x70194 >+#define DSPAKEYMASK 0x70198 >+ >+#define DSPAPOS 0x7018C /* reserved */ >+#define DSPASIZE 0x70190 >+#define DSPBPOS 0x7118C >+#define DSPBSIZE 0x71190 >+ >+#define DSPASURF 0x7019C >+#define DSPATILEOFF 0x701A4 >+ >+#define DSPBSURF 0x7119C >+#define DSPBTILEOFF 0x711A4 >+ >+#define VGACNTRL 0x71400 >+# define VGA_DISP_DISABLE (1 << 31) >+# define VGA_2X_MODE (1 << 30) >+# define VGA_PIPE_B_SELECT (1 << 29) >+ >+/* >+ * Some BIOS scratch area registers. The 845 (and 830?) store the amount >+ * of video memory available to the BIOS in SWF1. >+ */ >+ >+#define SWF0 0x71410 >+ >+/* >+ * 855 scratch registers. >+ */ >+#define SWF10 0x70410 >+ >+#define SWF30 0x72414 >+ >+/* >+ * Overlay registers. These are overlay registers accessed via MMIO. >+ * Those loaded via the overlay register page are defined in i830_video.c. 
>+ */ >+#define OVADD 0x30000 >+ >+#define DOVSTA 0x30008 >+#define OC_BUF (0x3<<20) >+ >+#define OGAMC5 0x30010 >+#define OGAMC4 0x30014 >+#define OGAMC3 0x30018 >+#define OGAMC2 0x3001c >+#define OGAMC1 0x30020 >+#define OGAMC0 0x30024 >+/* >+ * Palette registers >+ */ >+#define PALETTE_A 0x0a000 >+#define PALETTE_B 0x0a800 >+ >+#define IS_I830(dev) ((dev)->pci_device == 0x3577) >+#define IS_845G(dev) ((dev)->pci_device == 0x2562) >+#define IS_I85X(dev) ((dev)->pci_device == 0x3582) >+#define IS_I855(dev) ((dev)->pci_device == 0x3582) >+#define IS_I865G(dev) ((dev)->pci_device == 0x2572) >+ >+#define IS_I915G(dev) (dev->pci_device == 0x2582)/* || dev->pci_device == PCI_DEVICE_ID_INTELPCI_CHIP_E7221_G)*/ >+#define IS_I915GM(dev) ((dev)->pci_device == 0x2592) >+#define IS_I945G(dev) ((dev)->pci_device == 0x2772) >+#define IS_I945GM(dev) ((dev)->pci_device == 0x27A2) >+ >+#define IS_I965G(dev) ((dev)->pci_device == 0x2972 || \ >+ (dev)->pci_device == 0x2982 || \ >+ (dev)->pci_device == 0x2992 || \ >+ (dev)->pci_device == 0x29A2 || \ >+ (dev)->pci_device == 0x2A02 || \ >+ (dev)->pci_device == 0x2A12) >+ >+#define IS_I965GM(dev) ((dev)->pci_device == 0x2A02) >+ >+#define IS_G33(dev) ((dev)->pci_device == 0x29C2 || \ >+ (dev)->pci_device == 0x29B2 || \ >+ (dev)->pci_device == 0x29D2) >+ >+#define IS_I9XX(dev) (IS_I915G(dev) || IS_I915GM(dev) || IS_I945G(dev) || \ >+ IS_I945GM(dev) || IS_I965G(dev) || IS_G33(dev)) >+ >+#define IS_MOBILE(dev) (IS_I830(dev) || IS_I85X(dev) || IS_I915GM(dev) || \ >+ IS_I945GM(dev) || IS_I965GM(dev)) >+ >+#define PRIMARY_RINGBUFFER_SIZE (128*1024) > > #endif >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/i915_fence.c linux-2.6.23.i686/drivers/char/drm/i915_fence.c >--- linux-2.6.23.i686.orig/drivers/char/drm/i915_fence.c 1970-01-01 01:00:00.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/i915_fence.c 2008-01-06 09:24:57.000000000 +0100 >@@ -0,0 +1,159 @@ 
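The new i915_fence.c below advances fences by comparing hardware breadcrumb sequence numbers that live in a 31-bit counter (`BREADCRUMB_BITS`/`BREADCRUMB_MASK` above) and therefore wrap around. A hedged, standalone sketch of that wrap-safe ordering test — `WRAP_DIFF` here is an assumed stand-in for the driver's `driver->wrap_diff` value, not a constant taken from the patch:

```c
#include <stdint.h>

/* Mirrors BREADCRUMB_BITS/BREADCRUMB_MASK from the i915_drv.h hunk above. */
#define BREADCRUMB_BITS 31
#define BREADCRUMB_MASK ((1U << BREADCRUMB_BITS) - 1)

/* Assumed threshold: masked differences at or beyond this are treated as
 * the counter having wrapped backwards (stands in for driver->wrap_diff). */
#define WRAP_DIFF (1U << (BREADCRUMB_BITS - 1))

/* Wrap-safe ordering test in the style of i915_perform_flush(): a nonzero
 * masked difference below WRAP_DIFF means `sequence` has advanced past
 * `last` even if the raw 31-bit counter wrapped in between. */
int breadcrumb_advanced(uint32_t sequence, uint32_t last)
{
	uint32_t diff = (sequence - last) & BREADCRUMB_MASK;

	return diff != 0 && diff < WRAP_DIFF;
}
```

Because the subtraction is done in unsigned arithmetic and then masked to the counter width, a sequence of 1 still reads as "ahead of" a last value near `BREADCRUMB_MASK`, which is exactly what the fence code needs when the breadcrumb rolls over.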
>+/************************************************************************** >+ * >+ * Copyright 2006 Tungsten Graphics, Inc., Bismarck, ND., USA >+ * All Rights Reserved. >+ * >+ * Permission is hereby granted, free of charge, to any person obtaining a >+ * copy of this software and associated documentation files (the >+ * "Software"), to deal in the Software without restriction, including >+ * without limitation the rights to use, copy, modify, merge, publish, >+ * distribute, sub license, and/or sell copies of the Software, and to >+ * permit persons to whom the Software is furnished to do so, subject to >+ * the following conditions: >+ * >+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR >+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, >+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL >+ * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, >+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR >+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE >+ * USE OR OTHER DEALINGS IN THE SOFTWARE. >+ * >+ * The above copyright notice and this permission notice (including the >+ * next paragraph) shall be included in all copies or substantial portions >+ * of the Software. >+ * >+ * >+ **************************************************************************/ >+/* >+ * Authors: Thomas Hellström <thomas-at-tungstengraphics-dot-com> >+ */ >+ >+#include "drmP.h" >+#include "drm.h" >+#include "i915_drm.h" >+#include "i915_drv.h" >+ >+/* >+ * Implements an intel sync flush operation. 
>+ */ >+ >+static void i915_perform_flush(struct drm_device *dev) >+{ >+ drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private; >+ struct drm_fence_manager *fm = &dev->fm; >+ struct drm_fence_class_manager *fc = &fm->fence_class[0]; >+ struct drm_fence_driver *driver = dev->driver->fence_driver; >+ uint32_t flush_flags = 0; >+ uint32_t flush_sequence = 0; >+ uint32_t i_status; >+ uint32_t diff; >+ uint32_t sequence; >+ int rwflush; >+ >+ if (!dev_priv) >+ return; >+ >+ if (fc->pending_exe_flush) { >+ sequence = READ_BREADCRUMB(dev_priv); >+ >+ /* >+ * First update fences with the current breadcrumb. >+ */ >+ >+ diff = (sequence - fc->last_exe_flush) & BREADCRUMB_MASK; >+ if (diff < driver->wrap_diff && diff != 0) { >+ drm_fence_handler(dev, 0, sequence, >+ DRM_FENCE_TYPE_EXE, 0); >+ } >+ >+ if (dev_priv->fence_irq_on && !fc->pending_exe_flush) { >+ i915_user_irq_off(dev_priv); >+ dev_priv->fence_irq_on = 0; >+ } else if (!dev_priv->fence_irq_on && fc->pending_exe_flush) { >+ i915_user_irq_on(dev_priv); >+ dev_priv->fence_irq_on = 1; >+ } >+ } >+ >+ if (dev_priv->flush_pending) { >+ i_status = READ_HWSP(dev_priv, 0); >+ if ((i_status & (1 << 12)) != >+ (dev_priv->saved_flush_status & (1 << 12))) { >+ flush_flags = dev_priv->flush_flags; >+ flush_sequence = dev_priv->flush_sequence; >+ dev_priv->flush_pending = 0; >+ drm_fence_handler(dev, 0, flush_sequence, flush_flags, 0); >+ } >+ } >+ >+ rwflush = fc->pending_flush & DRM_I915_FENCE_TYPE_RW; >+ if (rwflush && !dev_priv->flush_pending) { >+ dev_priv->flush_sequence = (uint32_t) READ_BREADCRUMB(dev_priv); >+ dev_priv->flush_flags = fc->pending_flush; >+ dev_priv->saved_flush_status = READ_HWSP(dev_priv, 0); >+ I915_WRITE(I915REG_INSTPM, (1 << 5) | (1 << 21)); >+ dev_priv->flush_pending = 1; >+ fc->pending_flush &= ~DRM_I915_FENCE_TYPE_RW; >+ } >+ >+ if (dev_priv->flush_pending) { >+ i_status = READ_HWSP(dev_priv, 0); >+ if ((i_status & (1 << 12)) != >+ (dev_priv->saved_flush_status & (1 << 12))) { >+ 
flush_flags = dev_priv->flush_flags; >+ flush_sequence = dev_priv->flush_sequence; >+ dev_priv->flush_pending = 0; >+ drm_fence_handler(dev, 0, flush_sequence, flush_flags, 0); >+ } >+ } >+ >+} >+ >+void i915_poke_flush(struct drm_device *dev, uint32_t class) >+{ >+ struct drm_fence_manager *fm = &dev->fm; >+ unsigned long flags; >+ >+ write_lock_irqsave(&fm->lock, flags); >+ i915_perform_flush(dev); >+ write_unlock_irqrestore(&fm->lock, flags); >+} >+ >+int i915_fence_emit_sequence(struct drm_device *dev, uint32_t class, >+ uint32_t flags, uint32_t *sequence, >+ uint32_t *native_type) >+{ >+ drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private; >+ if (!dev_priv) >+ return -EINVAL; >+ >+ i915_emit_irq(dev); >+ *sequence = (uint32_t) dev_priv->counter; >+ *native_type = DRM_FENCE_TYPE_EXE; >+ if (flags & DRM_I915_FENCE_FLAG_FLUSHED) >+ *native_type |= DRM_I915_FENCE_TYPE_RW; >+ >+ return 0; >+} >+ >+void i915_fence_handler(struct drm_device *dev) >+{ >+ struct drm_fence_manager *fm = &dev->fm; >+ >+ write_lock(&fm->lock); >+ i915_perform_flush(dev); >+ write_unlock(&fm->lock); >+} >+ >+int i915_fence_has_irq(struct drm_device *dev, uint32_t class, uint32_t flags) >+{ >+ /* >+ * We have an irq that tells us when we have a new breadcrumb. 
>+ */ >+ >+ if (class == 0 && flags == DRM_FENCE_TYPE_EXE) >+ return 1; >+ >+ return 0; >+} >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/i915_ioc32.c linux-2.6.23.i686/drivers/char/drm/i915_ioc32.c >--- linux-2.6.23.i686.orig/drivers/char/drm/i915_ioc32.c 2007-10-09 22:31:38.000000000 +0200 >+++ linux-2.6.23.i686/drivers/char/drm/i915_ioc32.c 2008-01-06 09:24:57.000000000 +0100 >@@ -34,6 +34,7 @@ > #include "drmP.h" > #include "drm.h" > #include "i915_drm.h" >+#include "i915_drv.h" > > typedef struct _drm_i915_batchbuffer32 { > int start; /* agp offset */ >@@ -41,7 +42,7 @@ typedef struct _drm_i915_batchbuffer32 { > int DR1; /* hw flags for GFX_OP_DRAWRECT_INFO */ > int DR4; /* window origin for GFX_OP_DRAWRECT_INFO */ > int num_cliprects; /* mulitpass with multiple cliprects? */ >- u32 cliprects; /* pointer to userspace cliprects */ >+ u32 cliprects; /* pointer to userspace cliprects */ > } drm_i915_batchbuffer32_t; > > static int compat_i915_batchbuffer(struct file *file, unsigned int cmd, >@@ -66,18 +67,18 @@ static int compat_i915_batchbuffer(struc > &batchbuffer->cliprects)) > return -EFAULT; > >- return drm_ioctl(file->f_path.dentry->d_inode, file, >+ return drm_ioctl(file->f_dentry->d_inode, file, > DRM_IOCTL_I915_BATCHBUFFER, >- (unsigned long)batchbuffer); >+ (unsigned long) batchbuffer); > } > > typedef struct _drm_i915_cmdbuffer32 { >- u32 buf; /* pointer to userspace command buffer */ >+ u32 buf; /* pointer to userspace command buffer */ > int sz; /* nr bytes in buf */ > int DR1; /* hw flags for GFX_OP_DRAWRECT_INFO */ > int DR4; /* window origin for GFX_OP_DRAWRECT_INFO */ > int num_cliprects; /* mulitpass with multiple cliprects? 
*/ >- u32 cliprects; /* pointer to userspace cliprects */ >+ u32 cliprects; /* pointer to userspace cliprects */ > } drm_i915_cmdbuffer32_t; > > static int compat_i915_cmdbuffer(struct file *file, unsigned int cmd, >@@ -102,8 +103,8 @@ static int compat_i915_cmdbuffer(struct > &cmdbuffer->cliprects)) > return -EFAULT; > >- return drm_ioctl(file->f_path.dentry->d_inode, file, >- DRM_IOCTL_I915_CMDBUFFER, (unsigned long)cmdbuffer); >+ return drm_ioctl(file->f_dentry->d_inode, file, >+ DRM_IOCTL_I915_CMDBUFFER, (unsigned long) cmdbuffer); > } > > typedef struct drm_i915_irq_emit32 { >@@ -116,7 +117,7 @@ static int compat_i915_irq_emit(struct f > drm_i915_irq_emit32_t req32; > drm_i915_irq_emit_t __user *request; > >- if (copy_from_user(&req32, (void __user *)arg, sizeof(req32))) >+ if (copy_from_user(&req32, (void __user *) arg, sizeof(req32))) > return -EFAULT; > > request = compat_alloc_user_space(sizeof(*request)); >@@ -125,8 +126,8 @@ static int compat_i915_irq_emit(struct f > &request->irq_seq)) > return -EFAULT; > >- return drm_ioctl(file->f_path.dentry->d_inode, file, >- DRM_IOCTL_I915_IRQ_EMIT, (unsigned long)request); >+ return drm_ioctl(file->f_dentry->d_inode, file, >+ DRM_IOCTL_I915_IRQ_EMIT, (unsigned long) request); > } > typedef struct drm_i915_getparam32 { > int param; >@@ -139,7 +140,7 @@ static int compat_i915_getparam(struct f > drm_i915_getparam32_t req32; > drm_i915_getparam_t __user *request; > >- if (copy_from_user(&req32, (void __user *)arg, sizeof(req32))) >+ if (copy_from_user(&req32, (void __user *) arg, sizeof(req32))) > return -EFAULT; > > request = compat_alloc_user_space(sizeof(*request)); >@@ -149,8 +150,8 @@ static int compat_i915_getparam(struct f > &request->value)) > return -EFAULT; > >- return drm_ioctl(file->f_path.dentry->d_inode, file, >- DRM_IOCTL_I915_GETPARAM, (unsigned long)request); >+ return drm_ioctl(file->f_dentry->d_inode, file, >+ DRM_IOCTL_I915_GETPARAM, (unsigned long) request); > } > > typedef struct 
drm_i915_mem_alloc32 { >@@ -166,7 +167,7 @@ static int compat_i915_alloc(struct file > drm_i915_mem_alloc32_t req32; > drm_i915_mem_alloc_t __user *request; > >- if (copy_from_user(&req32, (void __user *)arg, sizeof(req32))) >+ if (copy_from_user(&req32, (void __user *) arg, sizeof(req32))) > return -EFAULT; > > request = compat_alloc_user_space(sizeof(*request)); >@@ -178,16 +179,77 @@ static int compat_i915_alloc(struct file > &request->region_offset)) > return -EFAULT; > >- return drm_ioctl(file->f_path.dentry->d_inode, file, >- DRM_IOCTL_I915_ALLOC, (unsigned long)request); >+ return drm_ioctl(file->f_dentry->d_inode, file, >+ DRM_IOCTL_I915_ALLOC, (unsigned long) request); > } > >+typedef struct drm_i915_execbuffer32 { >+ uint64_t ops_list; >+ uint32_t num_buffers; >+ struct _drm_i915_batchbuffer32 batch; >+ drm_context_t context; >+ struct drm_fence_arg fence_arg; >+} drm_i915_execbuffer32_t; >+ >+static int compat_i915_execbuffer(struct file *file, unsigned int cmd, >+ unsigned long arg) >+{ >+ drm_i915_execbuffer32_t req32; >+ struct drm_i915_execbuffer __user *request; >+ int err; >+ >+ if (copy_from_user(&req32, (void __user *) arg, sizeof(req32))) >+ return -EFAULT; >+ >+ request = compat_alloc_user_space(sizeof(*request)); >+ >+ if (!access_ok(VERIFY_WRITE, request, sizeof(*request)) >+ || __put_user(req32.ops_list, &request->ops_list) >+ || __put_user(req32.num_buffers, &request->num_buffers) >+ || __put_user(req32.context, &request->context) >+ || __copy_to_user(&request->fence_arg, &req32.fence_arg, >+ sizeof(req32.fence_arg)) >+ || __put_user(req32.batch.start, &request->batch.start) >+ || __put_user(req32.batch.used, &request->batch.used) >+ || __put_user(req32.batch.DR1, &request->batch.DR1) >+ || __put_user(req32.batch.DR4, &request->batch.DR4) >+ || __put_user(req32.batch.num_cliprects, >+ &request->batch.num_cliprects) >+ || __put_user((int __user *)(unsigned long)req32.batch.cliprects, >+ &request->batch.cliprects)) >+ return -EFAULT; >+ >+ 
err = drm_ioctl(file->f_dentry->d_inode, file, >+ DRM_IOCTL_I915_EXECBUFFER, (unsigned long)request); >+ >+ if (err) >+ return err; >+ >+ if (__get_user(req32.fence_arg.handle, &request->fence_arg.handle) >+ || __get_user(req32.fence_arg.fence_class, &request->fence_arg.fence_class) >+ || __get_user(req32.fence_arg.type, &request->fence_arg.type) >+ || __get_user(req32.fence_arg.flags, &request->fence_arg.flags) >+ || __get_user(req32.fence_arg.signaled, &request->fence_arg.signaled) >+ || __get_user(req32.fence_arg.error, &request->fence_arg.error) >+ || __get_user(req32.fence_arg.sequence, &request->fence_arg.sequence)) >+ return -EFAULT; >+ >+ if (copy_to_user((void __user *)arg, &req32, sizeof(req32))) >+ return -EFAULT; >+ >+ return 0; >+} >+ >+ > drm_ioctl_compat_t *i915_compat_ioctls[] = { > [DRM_I915_BATCHBUFFER] = compat_i915_batchbuffer, > [DRM_I915_CMDBUFFER] = compat_i915_cmdbuffer, > [DRM_I915_GETPARAM] = compat_i915_getparam, > [DRM_I915_IRQ_EMIT] = compat_i915_irq_emit, >- [DRM_I915_ALLOC] = compat_i915_alloc >+ [DRM_I915_ALLOC] = compat_i915_alloc, >+#ifdef I915_HAVE_BUFFER >+ [DRM_I915_EXECBUFFER] = compat_i915_execbuffer, >+#endif > }; > > /** >@@ -213,9 +275,9 @@ long i915_compat_ioctl(struct file *filp > > lock_kernel(); /* XXX for now */ > if (fn != NULL) >- ret = (*fn)(filp, cmd, arg); >+ ret = (*fn)(filp, cmd, arg); > else >- ret = drm_ioctl(filp->f_path.dentry->d_inode, filp, cmd, arg); >+ ret = drm_ioctl(filp->f_dentry->d_inode, filp, cmd, arg); > unlock_kernel(); > > return ret; >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/i915_irq.c linux-2.6.23.i686/drivers/char/drm/i915_irq.c >--- linux-2.6.23.i686.orig/drivers/char/drm/i915_irq.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/i915_irq.c 2008-01-06 09:24:57.000000000 +0100 >@@ -38,6 +38,73 @@ > #define MAX_NOPID ((u32)~0) > > /** >+ * i915_get_pipe - return the pipe associated with a given plane >+ * @dev: DRM device >+ * @plane: plane to look
for >+ * >+ * We need to get the pipe associated with a given plane to correctly perform >+ * vblank driven swapping, and they may not always be equal. So look up the >+ * pipe associated with @plane here. >+ */ >+static int >+i915_get_pipe(struct drm_device *dev, int plane) >+{ >+ drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private; >+ u32 dspcntr; >+ >+ dspcntr = plane ? I915_READ(DSPBCNTR) : I915_READ(DSPACNTR); >+ >+ return dspcntr & DISPPLANE_SEL_PIPE_MASK ? 1 : 0; >+} >+ >+/** >+ * Emit a synchronous flip. >+ * >+ * This function must be called with the drawable spinlock held. >+ */ >+static void >+i915_dispatch_vsync_flip(struct drm_device *dev, struct drm_drawable_info *drw, >+ int plane) >+{ >+ drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private; >+ drm_i915_sarea_t *sarea_priv = dev_priv->sarea_priv; >+ u16 x1, y1, x2, y2; >+ int pf_planes = 1 << plane; >+ >+ DRM_SPINLOCK_ASSERT(&dev->drw_lock); >+ >+ /* If the window is visible on the other plane, we have to flip on that >+ * plane as well. >+ */ >+ if (plane == 1) { >+ x1 = sarea_priv->planeA_x; >+ y1 = sarea_priv->planeA_y; >+ x2 = x1 + sarea_priv->planeA_w; >+ y2 = y1 + sarea_priv->planeA_h; >+ } else { >+ x1 = sarea_priv->planeB_x; >+ y1 = sarea_priv->planeB_y; >+ x2 = x1 + sarea_priv->planeB_w; >+ y2 = y1 + sarea_priv->planeB_h; >+ } >+ >+ if (x2 > 0 && y2 > 0) { >+ int i, num_rects = drw->num_rects; >+ struct drm_clip_rect *rect = drw->rects; >+ >+ for (i = 0; i < num_rects; i++) >+ if (!(rect[i].x1 >= x2 || rect[i].y1 >= y2 || >+ rect[i].x2 <= x1 || rect[i].y2 <= y1)) { >+ pf_planes = 0x3; >+ >+ break; >+ } >+ } >+ >+ i915_dispatch_flip(dev, pf_planes, 1); >+} >+ >+/** > * Emit blits for scheduled buffer swaps. > * > * This function will be called with the HW lock held. 
>@@ -45,14 +112,13 @@ > static void i915_vblank_tasklet(struct drm_device *dev) > { > drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private; >- unsigned long irqflags; > struct list_head *list, *tmp, hits, *hit; >- int nhits, nrects, slice[2], upper[2], lower[2], i; >+ int nhits, nrects, slice[2], upper[2], lower[2], i, num_pages; > unsigned counter[2] = { atomic_read(&dev->vbl_received), > atomic_read(&dev->vbl_received2) }; > struct drm_drawable_info *drw; > drm_i915_sarea_t *sarea_priv = dev_priv->sarea_priv; >- u32 cpp = dev_priv->cpp; >+ u32 cpp = dev_priv->cpp, offsets[3]; > u32 cmd = (cpp == 4) ? (XY_SRC_COPY_BLT_CMD | > XY_SRC_COPY_BLT_WRITE_ALPHA | > XY_SRC_COPY_BLT_WRITE_RGB) >@@ -67,28 +133,34 @@ static void i915_vblank_tasklet(struct d > > nhits = nrects = 0; > >- spin_lock_irqsave(&dev_priv->swaps_lock, irqflags); >+ /* No irqsave/restore necessary. This tasklet may be run in an >+ * interrupt context or normal context, but we don't have to worry >+ * about getting interrupted by something acquiring the lock, because >+ * we are the interrupt context thing that acquires the lock. 
>+ */ >+ DRM_SPINLOCK(&dev_priv->swaps_lock); > > /* Find buffer swaps scheduled for this vertical blank */ > list_for_each_safe(list, tmp, &dev_priv->vbl_swaps.head) { > drm_i915_vbl_swap_t *vbl_swap = > list_entry(list, drm_i915_vbl_swap_t, head); >+ int pipe = i915_get_pipe(dev, vbl_swap->plane); > >- if ((counter[vbl_swap->pipe] - vbl_swap->sequence) > (1<<23)) >+ if ((counter[pipe] - vbl_swap->sequence) > (1<<23)) > continue; > > list_del(list); > dev_priv->swaps_pending--; > >- spin_unlock(&dev_priv->swaps_lock); >- spin_lock(&dev->drw_lock); >+ DRM_SPINUNLOCK(&dev_priv->swaps_lock); >+ DRM_SPINLOCK(&dev->drw_lock); > > drw = drm_get_drawable_info(dev, vbl_swap->drw_id); > > if (!drw) { >- spin_unlock(&dev->drw_lock); >+ DRM_SPINUNLOCK(&dev->drw_lock); > drm_free(vbl_swap, sizeof(*vbl_swap), DRM_MEM_DRIVER); >- spin_lock(&dev_priv->swaps_lock); >+ DRM_SPINLOCK(&dev_priv->swaps_lock); > continue; > } > >@@ -105,7 +177,7 @@ static void i915_vblank_tasklet(struct d > } > } > >- spin_unlock(&dev->drw_lock); >+ DRM_SPINUNLOCK(&dev->drw_lock); > > /* List of hits was empty, or we reached the end of it */ > if (hit == &hits) >@@ -113,38 +185,29 @@ static void i915_vblank_tasklet(struct d > > nhits++; > >- spin_lock(&dev_priv->swaps_lock); >+ DRM_SPINLOCK(&dev_priv->swaps_lock); > } > >+ DRM_SPINUNLOCK(&dev_priv->swaps_lock); >+ > if (nhits == 0) { >- spin_unlock_irqrestore(&dev_priv->swaps_lock, irqflags); > return; > } > >- spin_unlock(&dev_priv->swaps_lock); >- > i915_kernel_lost_context(dev); > >- BEGIN_LP_RING(6); >- >- OUT_RING(GFX_OP_DRAWRECT_INFO); >- OUT_RING(0); >- OUT_RING(0); >- OUT_RING(sarea_priv->width | sarea_priv->height << 16); >- OUT_RING(sarea_priv->width | sarea_priv->height << 16); >- OUT_RING(0); >- >- ADVANCE_LP_RING(); >- >- sarea_priv->ctxOwner = DRM_KERNEL_CONTEXT; >- > upper[0] = upper[1] = 0; >- slice[0] = max(sarea_priv->pipeA_h / nhits, 1); >- slice[1] = max(sarea_priv->pipeB_h / nhits, 1); >- lower[0] = sarea_priv->pipeA_y + slice[0]; 
>- lower[1] = sarea_priv->pipeB_y + slice[0]; >+ slice[0] = max(sarea_priv->planeA_h / nhits, 1); >+ slice[1] = max(sarea_priv->planeB_h / nhits, 1); >+ lower[0] = sarea_priv->planeA_y + slice[0]; >+ lower[1] = sarea_priv->planeB_y + slice[0]; >+ >+ offsets[0] = sarea_priv->front_offset; >+ offsets[1] = sarea_priv->back_offset; >+ offsets[2] = sarea_priv->third_offset; >+ num_pages = sarea_priv->third_handle ? 3 : 2; > >- spin_lock(&dev->drw_lock); >+ DRM_SPINLOCK(&dev->drw_lock); > > /* Emit blits for buffer swaps, partitioning both outputs into as many > * slices as there are buffer swaps scheduled in order to avoid tearing >@@ -154,6 +217,8 @@ static void i915_vblank_tasklet(struct d > for (i = 0; i++ < nhits; > upper[0] = lower[0], lower[0] += slice[0], > upper[1] = lower[1], lower[1] += slice[1]) { >+ int init_drawrect = 1; >+ > if (i == nhits) > lower[0] = lower[1] = sarea_priv->height; > >@@ -161,7 +226,7 @@ static void i915_vblank_tasklet(struct d > drm_i915_vbl_swap_t *swap_hit = > list_entry(hit, drm_i915_vbl_swap_t, head); > struct drm_clip_rect *rect; >- int num_rects, pipe; >+ int num_rects, plane, front, back; > unsigned short top, bottom; > > drw = drm_get_drawable_info(dev, swap_hit->drw_id); >@@ -169,10 +234,37 @@ static void i915_vblank_tasklet(struct d > if (!drw) > continue; > >+ plane = swap_hit->plane; >+ >+ if (swap_hit->flip) { >+ i915_dispatch_vsync_flip(dev, drw, plane); >+ continue; >+ } >+ >+ if (init_drawrect) { >+ BEGIN_LP_RING(6); >+ >+ OUT_RING(GFX_OP_DRAWRECT_INFO); >+ OUT_RING(0); >+ OUT_RING(0); >+ OUT_RING(sarea_priv->width | sarea_priv->height << 16); >+ OUT_RING(sarea_priv->width | sarea_priv->height << 16); >+ OUT_RING(0); >+ >+ ADVANCE_LP_RING(); >+ >+ sarea_priv->ctxOwner = DRM_KERNEL_CONTEXT; >+ >+ init_drawrect = 0; >+ } >+ > rect = drw->rects; >- pipe = swap_hit->pipe; >- top = upper[pipe]; >- bottom = lower[pipe]; >+ top = upper[plane]; >+ bottom = lower[plane]; >+ >+ front = (dev_priv->sarea_priv->pf_current_page >> >+ 
(2 * plane)) & 0x3; >+ back = (front + 1) % num_pages; > > for (num_rects = drw->num_rects; num_rects--; rect++) { > int y1 = max(rect->y1, top); >@@ -187,17 +279,17 @@ static void i915_vblank_tasklet(struct d > OUT_RING(pitchropcpp); > OUT_RING((y1 << 16) | rect->x1); > OUT_RING((y2 << 16) | rect->x2); >- OUT_RING(sarea_priv->front_offset); >+ OUT_RING(offsets[front]); > OUT_RING((y1 << 16) | rect->x1); > OUT_RING(pitchropcpp & 0xffff); >- OUT_RING(sarea_priv->back_offset); >+ OUT_RING(offsets[back]); > > ADVANCE_LP_RING(); > } > } > } > >- spin_unlock_irqrestore(&dev->drw_lock, irqflags); >+ DRM_SPINUNLOCK(&dev->drw_lock); > > list_for_each_safe(hit, tmp, &hits) { > drm_i915_vbl_swap_t *swap_hit = >@@ -220,11 +312,11 @@ irqreturn_t i915_driver_irq_handler(DRM_ > pipeb_stats = I915_READ(I915REG_PIPEBSTAT); > > temp = I915_READ16(I915REG_INT_IDENTITY_R); >+ temp &= (dev_priv->irq_enable_reg | USER_INT_FLAG); > >- temp &= (USER_INT_FLAG | VSYNC_PIPEA_FLAG | VSYNC_PIPEB_FLAG); >- >+#if 0 > DRM_DEBUG("%s flag=%08x\n", __FUNCTION__, temp); >- >+#endif > if (temp == 0) > return IRQ_NONE; > >@@ -234,8 +326,12 @@ irqreturn_t i915_driver_irq_handler(DRM_ > > dev_priv->sarea_priv->last_dispatch = READ_BREADCRUMB(dev_priv); > >- if (temp & USER_INT_FLAG) >+ if (temp & USER_INT_FLAG) { > DRM_WAKEUP(&dev_priv->irq_queue); >+#ifdef I915_HAVE_FENCE >+ i915_fence_handler(dev); >+#endif >+ } > > if (temp & (VSYNC_PIPEA_FLAG | VSYNC_PIPEB_FLAG)) { > int vblank_pipe = dev_priv->vblank_pipe; >@@ -269,7 +365,7 @@ irqreturn_t i915_driver_irq_handler(DRM_ > return IRQ_HANDLED; > } > >-static int i915_emit_irq(struct drm_device * dev) >+int i915_emit_irq(struct drm_device *dev) > { > drm_i915_private_t *dev_priv = dev->dev_private; > RING_LOCALS; >@@ -278,23 +374,38 @@ static int i915_emit_irq(struct drm_devi > > DRM_DEBUG("%s\n", __FUNCTION__); > >- dev_priv->sarea_priv->last_enqueue = ++dev_priv->counter; >+ i915_emit_breadcrumb(dev); > >- if (dev_priv->counter > 0x7FFFFFFFUL) >- 
dev_priv->sarea_priv->last_enqueue = dev_priv->counter = 1; >- >- BEGIN_LP_RING(6); >- OUT_RING(CMD_STORE_DWORD_IDX); >- OUT_RING(20); >- OUT_RING(dev_priv->counter); >- OUT_RING(0); >+ BEGIN_LP_RING(2); > OUT_RING(0); > OUT_RING(GFX_OP_USER_INTERRUPT); > ADVANCE_LP_RING(); >- >+ > return dev_priv->counter; > } > >+void i915_user_irq_on(drm_i915_private_t *dev_priv) >+{ >+ DRM_SPINLOCK(&dev_priv->user_irq_lock); >+ if (dev_priv->irq_enabled && (++dev_priv->user_irq_refcount == 1)){ >+ dev_priv->irq_enable_reg |= USER_INT_FLAG; >+ I915_WRITE16(I915REG_INT_ENABLE_R, dev_priv->irq_enable_reg); >+ } >+ DRM_SPINUNLOCK(&dev_priv->user_irq_lock); >+ >+} >+ >+void i915_user_irq_off(drm_i915_private_t *dev_priv) >+{ >+ DRM_SPINLOCK(&dev_priv->user_irq_lock); >+ if (dev_priv->irq_enabled && (--dev_priv->user_irq_refcount == 0)) { >+ // dev_priv->irq_enable_reg &= ~USER_INT_FLAG; >+ // I915_WRITE16(I915REG_INT_ENABLE_R, dev_priv->irq_enable_reg); >+ } >+ DRM_SPINUNLOCK(&dev_priv->user_irq_lock); >+} >+ >+ > static int i915_wait_irq(struct drm_device * dev, int irq_nr) > { > drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private; >@@ -306,10 +417,10 @@ static int i915_wait_irq(struct drm_devi > if (READ_BREADCRUMB(dev_priv) >= irq_nr) > return 0; > >- dev_priv->sarea_priv->perf_boxes |= I915_BOX_WAIT; >- >+ i915_user_irq_on(dev_priv); > DRM_WAIT_ON(ret, dev_priv->irq_queue, 3 * DRM_HZ, > READ_BREADCRUMB(dev_priv) >= irq_nr); >+ i915_user_irq_off(dev_priv); > > if (ret == -EBUSY) { > DRM_ERROR("%s: EBUSY -- rec: %d emitted: %d\n", >@@ -321,7 +432,8 @@ static int i915_wait_irq(struct drm_devi > return ret; > } > >-static int i915_driver_vblank_do_wait(struct drm_device *dev, unsigned int *sequence, >+static int i915_driver_vblank_do_wait(struct drm_device *dev, >+ unsigned int *sequence, > atomic_t *counter) > { > drm_i915_private_t *dev_priv = dev->dev_private; >@@ -336,21 +448,33 @@ static int i915_driver_vblank_do_wait(st > DRM_WAIT_ON(ret, dev->vbl_queue, 3 * 
DRM_HZ, > (((cur_vblank = atomic_read(counter)) > - *sequence) <= (1<<23))); >- >+ > *sequence = cur_vblank; > > return ret; > } > >- > int i915_driver_vblank_wait(struct drm_device *dev, unsigned int *sequence) > { >- return i915_driver_vblank_do_wait(dev, sequence, &dev->vbl_received); >+ atomic_t *counter; >+ >+ if (i915_get_pipe(dev, 0) == 0) >+ counter = &dev->vbl_received; >+ else >+ counter = &dev->vbl_received2; >+ return i915_driver_vblank_do_wait(dev, sequence, counter); > } > > int i915_driver_vblank_wait2(struct drm_device *dev, unsigned int *sequence) > { >- return i915_driver_vblank_do_wait(dev, sequence, &dev->vbl_received2); >+ atomic_t *counter; >+ >+ if (i915_get_pipe(dev, 1) == 0) >+ counter = &dev->vbl_received; >+ else >+ counter = &dev->vbl_received2; >+ >+ return i915_driver_vblank_do_wait(dev, sequence, counter); > } > > /* Needs the lock as it touches the ring. >@@ -382,7 +506,7 @@ int i915_irq_emit(struct drm_device *dev > /* Doesn't need the hardware lock. > */ > int i915_irq_wait(struct drm_device *dev, void *data, >- struct drm_file *file_priv) >+ struct drm_file *file_priv) > { > drm_i915_private_t *dev_priv = dev->dev_private; > drm_i915_irq_wait_t *irqwait = data; >@@ -398,15 +522,15 @@ int i915_irq_wait(struct drm_device *dev > static void i915_enable_interrupt (struct drm_device *dev) > { > drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private; >- u16 flag; > >- flag = 0; >+ dev_priv->irq_enable_reg = USER_INT_FLAG; > if (dev_priv->vblank_pipe & DRM_I915_VBLANK_PIPE_A) >- flag |= VSYNC_PIPEA_FLAG; >+ dev_priv->irq_enable_reg |= VSYNC_PIPEA_FLAG; > if (dev_priv->vblank_pipe & DRM_I915_VBLANK_PIPE_B) >- flag |= VSYNC_PIPEB_FLAG; >+ dev_priv->irq_enable_reg |= VSYNC_PIPEB_FLAG; > >- I915_WRITE16(I915REG_INT_ENABLE_R, USER_INT_FLAG | flag); >+ I915_WRITE16(I915REG_INT_ENABLE_R, dev_priv->irq_enable_reg); >+ dev_priv->irq_enabled = 1; > } > > /* Set the vblank monitor pipe >@@ -423,7 +547,7 @@ int 
i915_vblank_pipe_set(struct drm_devi > } > > if (pipe->pipe & ~(DRM_I915_VBLANK_PIPE_A|DRM_I915_VBLANK_PIPE_B)) { >- DRM_ERROR("%s called with invalid pipe 0x%x\n", >+ DRM_ERROR("%s called with invalid pipe 0x%x\n", > __FUNCTION__, pipe->pipe); > return -EINVAL; > } >@@ -466,7 +590,7 @@ int i915_vblank_swap(struct drm_device * > drm_i915_private_t *dev_priv = dev->dev_private; > drm_i915_vblank_swap_t *swap = data; > drm_i915_vbl_swap_t *vbl_swap; >- unsigned int pipe, seqtype, curseq; >+ unsigned int pipe, seqtype, curseq, plane; > unsigned long irqflags; > struct list_head *list; > >@@ -481,12 +605,14 @@ int i915_vblank_swap(struct drm_device * > } > > if (swap->seqtype & ~(_DRM_VBLANK_RELATIVE | _DRM_VBLANK_ABSOLUTE | >- _DRM_VBLANK_SECONDARY | _DRM_VBLANK_NEXTONMISS)) { >+ _DRM_VBLANK_SECONDARY | _DRM_VBLANK_NEXTONMISS | >+ _DRM_VBLANK_FLIP)) { > DRM_ERROR("Invalid sequence type 0x%x\n", swap->seqtype); > return -EINVAL; > } > >- pipe = (swap->seqtype & _DRM_VBLANK_SECONDARY) ? 1 : 0; >+ plane = (swap->seqtype & _DRM_VBLANK_SECONDARY) ? 1 : 0; >+ pipe = i915_get_pipe(dev, plane); > > seqtype = swap->seqtype & (_DRM_VBLANK_RELATIVE | _DRM_VBLANK_ABSOLUTE); > >@@ -495,15 +621,20 @@ int i915_vblank_swap(struct drm_device * > return -EINVAL; > } > >- spin_lock_irqsave(&dev->drw_lock, irqflags); >+ DRM_SPINLOCK_IRQSAVE(&dev->drw_lock, irqflags); > >+ /* It makes no sense to schedule a swap for a drawable that doesn't have >+ * valid information at this point. E.g. this could mean that the X >+ * server is too old to push drawable information to the DRM, in which >+ * case all such swaps would become ineffective. 
>+ */ > if (!drm_get_drawable_info(dev, swap->drawable)) { >- spin_unlock_irqrestore(&dev->drw_lock, irqflags); >+ DRM_SPINUNLOCK_IRQRESTORE(&dev->drw_lock, irqflags); > DRM_DEBUG("Invalid drawable ID %d\n", swap->drawable); > return -EINVAL; > } > >- spin_unlock_irqrestore(&dev->drw_lock, irqflags); >+ DRM_SPINUNLOCK_IRQRESTORE(&dev->drw_lock, irqflags); > > curseq = atomic_read(pipe ? &dev->vbl_received2 : &dev->vbl_received); > >@@ -519,21 +650,50 @@ int i915_vblank_swap(struct drm_device * > } > } > >- spin_lock_irqsave(&dev_priv->swaps_lock, irqflags); >+ if (swap->seqtype & _DRM_VBLANK_FLIP) { >+ swap->sequence--; >+ >+ if ((curseq - swap->sequence) <= (1<<23)) { >+ struct drm_drawable_info *drw; >+ >+ LOCK_TEST_WITH_RETURN(dev, file_priv); >+ >+ DRM_SPINLOCK_IRQSAVE(&dev->drw_lock, irqflags); >+ >+ drw = drm_get_drawable_info(dev, swap->drawable); >+ >+ if (!drw) { >+ DRM_SPINUNLOCK_IRQRESTORE(&dev->drw_lock, >+ irqflags); >+ DRM_DEBUG("Invalid drawable ID %d\n", >+ swap->drawable); >+ return -EINVAL; >+ } >+ >+ i915_dispatch_vsync_flip(dev, drw, plane); >+ >+ DRM_SPINUNLOCK_IRQRESTORE(&dev->drw_lock, irqflags); >+ >+ return 0; >+ } >+ } >+ >+ DRM_SPINLOCK_IRQSAVE(&dev_priv->swaps_lock, irqflags); > > list_for_each(list, &dev_priv->vbl_swaps.head) { > vbl_swap = list_entry(list, drm_i915_vbl_swap_t, head); > > if (vbl_swap->drw_id == swap->drawable && >- vbl_swap->pipe == pipe && >+ vbl_swap->plane == plane && > vbl_swap->sequence == swap->sequence) { >- spin_unlock_irqrestore(&dev_priv->swaps_lock, irqflags); >+ vbl_swap->flip = (swap->seqtype & _DRM_VBLANK_FLIP); >+ DRM_SPINUNLOCK_IRQRESTORE(&dev_priv->swaps_lock, irqflags); > DRM_DEBUG("Already scheduled\n"); > return 0; > } > } > >- spin_unlock_irqrestore(&dev_priv->swaps_lock, irqflags); >+ DRM_SPINUNLOCK_IRQRESTORE(&dev_priv->swaps_lock, irqflags); > > if (dev_priv->swaps_pending >= 100) { > DRM_DEBUG("Too many swaps queued\n"); >@@ -550,15 +710,19 @@ int i915_vblank_swap(struct drm_device * > 
DRM_DEBUG("\n"); > > vbl_swap->drw_id = swap->drawable; >- vbl_swap->pipe = pipe; >+ vbl_swap->plane = plane; > vbl_swap->sequence = swap->sequence; >+ vbl_swap->flip = (swap->seqtype & _DRM_VBLANK_FLIP); > >- spin_lock_irqsave(&dev_priv->swaps_lock, irqflags); >+ if (vbl_swap->flip) >+ swap->sequence++; > >- list_add_tail((struct list_head *)vbl_swap, &dev_priv->vbl_swaps.head); >+ DRM_SPINLOCK_IRQSAVE(&dev_priv->swaps_lock, irqflags); >+ >+ list_add_tail(&vbl_swap->head, &dev_priv->vbl_swaps.head); > dev_priv->swaps_pending++; > >- spin_unlock_irqrestore(&dev_priv->swaps_lock, irqflags); >+ DRM_SPINUNLOCK_IRQRESTORE(&dev_priv->swaps_lock, irqflags); > > return 0; > } >@@ -569,7 +733,7 @@ void i915_driver_irq_preinstall(struct d > { > drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private; > >- I915_WRITE16(I915REG_HWSTAM, 0xfffe); >+ I915_WRITE16(I915REG_HWSTAM, 0xeffe); > I915_WRITE16(I915REG_INT_MASK_R, 0x0); > I915_WRITE16(I915REG_INT_ENABLE_R, 0x0); > } >@@ -578,14 +742,21 @@ void i915_driver_irq_postinstall(struct > { > drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private; > >- spin_lock_init(&dev_priv->swaps_lock); >+ DRM_SPININIT(&dev_priv->swaps_lock, "swap"); > INIT_LIST_HEAD(&dev_priv->vbl_swaps.head); > dev_priv->swaps_pending = 0; > >- if (!dev_priv->vblank_pipe) >- dev_priv->vblank_pipe = DRM_I915_VBLANK_PIPE_A; >+ DRM_SPININIT(&dev_priv->user_irq_lock, "userirq"); >+ dev_priv->user_irq_refcount = 0; >+ > i915_enable_interrupt(dev); > DRM_INIT_WAITQUEUE(&dev_priv->irq_queue); >+ >+ /* >+ * Initialize the hardware status page IRQ location. 
>+ */ >+ >+ I915_WRITE(I915REG_INSTPM, (1 << 5) | (1 << 21)); > } > > void i915_driver_irq_uninstall(struct drm_device * dev) >@@ -596,6 +767,7 @@ void i915_driver_irq_uninstall(struct dr > if (!dev_priv) > return; > >+ dev_priv->irq_enabled = 0; > I915_WRITE16(I915REG_HWSTAM, 0xffff); > I915_WRITE16(I915REG_INT_MASK_R, 0xffff); > I915_WRITE16(I915REG_INT_ENABLE_R, 0x0); >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/i915_mem.c linux-2.6.23.i686/drivers/char/drm/i915_mem.c >--- linux-2.6.23.i686.orig/drivers/char/drm/i915_mem.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/i915_mem.c 2008-01-06 09:24:57.000000000 +0100 >@@ -375,7 +375,7 @@ int i915_mem_destroy_heap( struct drm_de > DRM_ERROR("get_heap failed"); > return -EFAULT; > } >- >+ > if (!*heap) { > DRM_ERROR("heap not initialized?"); > return -EFAULT; >@@ -384,4 +384,3 @@ int i915_mem_destroy_heap( struct drm_de > i915_mem_takedown( heap ); > return 0; > } >- >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/imagine_drv.c linux-2.6.23.i686/drivers/char/drm/imagine_drv.c >--- linux-2.6.23.i686.orig/drivers/char/drm/imagine_drv.c 1970-01-01 01:00:00.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/imagine_drv.c 2008-01-06 09:24:57.000000000 +0100 >@@ -0,0 +1,85 @@ >+/* >+ * Copyright 2005 Adam Jackson. >+ * >+ * Permission is hereby granted, free of charge, to any person obtaining a >+ * copy of this software and associated documentation files (the "Software"), >+ * to deal in the Software without restriction, including without limitation >+ * on the rights to use, copy, modify, merge, publish, distribute, sub >+ * license, and/or sell copies of the Software, and to permit persons to whom >+ * the Software is furnished to do so, subject to the following conditions: >+ * >+ * The above copyright notice and this permission notice (including the next >+ * paragraph) shall be included in all copies or substantial portions of the >+ * Software. 
>+ * >+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR >+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, >+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL >+ * ADAM JACKSON BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER >+ * IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN >+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. >+ */ >+ >+/* derived from tdfx_drv.c */ >+ >+#include "drmP.h" >+#include "imagine_drv.h" >+ >+#include "drm_pciids.h" >+ >+static struct drm_driver driver; >+ >+static struct pci_device_id pciidlist[] = { >+ imagine_PCI_IDS >+}; >+ >+static int probe(struct pci_dev *pdev, const struct pci_device_id *ent) >+{ >+ return drm_get_dev(pdev, ent, &driver); >+} >+ >+static struct drm_driver driver = { >+ .driver_features = DRIVER_USE_MTRR, >+ .reclaim_buffers = drm_core_reclaim_buffers, >+ .get_map_ofs = drm_core_get_map_ofs, >+ .get_reg_ofs = drm_core_get_reg_ofs, >+ .fops = { >+ .owner = THIS_MODULE, >+ .open = drm_open, >+ .release = drm_release, >+ .ioctl = drm_ioctl, >+ .mmap = drm_mmap, >+ .poll = drm_poll, >+ .fasync = drm_fasync, >+ }, >+ .pci_driver = { >+ .name = DRIVER_NAME, >+ .id_table = pciidlist, >+ .probe = probe, >+ .remove = __devexit_p(drm_cleanup_pci), >+ }, >+ >+ .name = DRIVER_NAME, >+ .desc = DRIVER_DESC, >+ .date = DRIVER_DATE, >+ .major = DRIVER_MAJOR, >+ .minor = DRIVER_MINOR, >+ .patchlevel = DRIVER_PATCHLEVEL, >+}; >+ >+static int __init imagine_init(void) >+{ >+ return drm_init(&driver, pciidlist); >+} >+ >+static void __exit imagine_exit(void) >+{ >+ drm_exit(&driver); >+} >+ >+module_init(imagine_init); >+module_exit(imagine_exit); >+ >+MODULE_AUTHOR(DRIVER_AUTHOR); >+MODULE_DESCRIPTION(DRIVER_DESC); >+MODULE_LICENSE("GPL and additional rights"); >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/Kconfig linux-2.6.23.i686/drivers/char/drm/Kconfig >--- 
linux-2.6.23.i686.orig/drivers/char/drm/Kconfig 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/Kconfig 2008-01-06 09:24:57.000000000 +0100 >@@ -37,8 +37,8 @@ config DRM_RADEON > help > Choose this option if you have an ATI Radeon graphics card. There > are both PCI and AGP versions. You don't need to choose this to >- run the Radeon in plain VGA mode. >- >+ run the Radeon in plain VGA mode. There is a product page at >+ <http://www.ati.com/na/pages/products/pc/radeon32/index.html>. > If M is selected, the module will be called radeon. > > config DRM_I810 >@@ -54,49 +54,35 @@ choice > depends on DRM && AGP && AGP_INTEL > optional > >-config DRM_I830 >- tristate "i830 driver" >- help >- Choose this option if you have a system that has Intel 830M, 845G, >- 852GM, 855GM or 865G integrated graphics. If M is selected, the >- module will be called i830. AGP support is required for this driver >- to work. This driver is used by the older X releases X.org 6.7 and >- XFree86 4.3. If unsure, build this and i915 as modules and the X server >- will load the correct one. >- > config DRM_I915 > tristate "i915 driver" > help > Choose this option if you have a system that has Intel 830M, 845G, >- 852GM, 855GM 865G or 915G integrated graphics. If M is selected, the >- module will be called i915. AGP support is required for this driver >- to work. This driver is used by the Intel driver in X.org 6.8 and >- XFree86 4.4 and above. If unsure, build this and i830 as modules and >- the X server will load the correct one. >+ 852GM, 855GM, 865G, 915G, 915GM, 945G, 945GM and 965G integrated >+ graphics. If M is selected, the module will be called i915. >+ AGP support is required for this driver to work. > > endchoice > > config DRM_MGA > tristate "Matrox g200/g400" >- depends on DRM >+ depends on DRM && (!X86_64 || BROKEN) && (!PPC || BROKEN) > help >- Choose this option if you have a Matrox G200, G400 or G450 graphics >- card. 
If M is selected, the module will be called mga. AGP >- support is required for this driver to work. >+ Choose this option if you have a Matrox G200, G400, G450 or G550 >+ graphics card. If M is selected, the module will be called mga. > > config DRM_SIS > tristate "SiS video cards" >- depends on DRM && AGP >+ depends on DRM > help > Choose this option if you have a SiS 630 or compatible video >- chipset. If M is selected the module will be called sis. AGP >- support is required for this driver to work. >+ chipset. If M is selected the module will be called sis. > > config DRM_VIA > tristate "Via unichrome video cards" >- depends on DRM >+ depends on DRM > help >- Choose this option if you have a Via unichrome or compatible video >+ Choose this option if you have a Via unichrome or compatible video > chipset. If M is selected the module will be called via. > > config DRM_SAVAGE >@@ -110,4 +96,17 @@ config DRM_NOUVEAU > tristate "Nvidia video cards" > depends on DRM > help >- Choose this for nvidia open source 3d driver >+ Choose this for nvidia open source 3d driver >+ >+#config DRM_MACH64 >+# tristate "ATI Rage Pro (Mach64)" >+# depends on DRM && PCI >+# help >+# Choose this option if you have an ATI Rage Pro (mach64 chipset) >+# graphics card. Example cards include: 3D Rage Pro, Xpert 98, >+# 3D Rage LT Pro, 3D Rage XL/XC, and 3D Rage Mobility (P/M, M1). >+# Cards earlier than ATI Rage Pro (e.g. Rage II) are not supported. >+# If M is selected, the module will be called mach64. AGP support for >+# this card is strongly suggested (unless you have a PCI version). 
>+ >+ >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/mach64_dma.c linux-2.6.23.i686/drivers/char/drm/mach64_dma.c >--- linux-2.6.23.i686.orig/drivers/char/drm/mach64_dma.c 1970-01-01 01:00:00.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/mach64_dma.c 2008-01-06 09:24:57.000000000 +0100 >@@ -0,0 +1,1785 @@ >+/* mach64_dma.c -- DMA support for mach64 (Rage Pro) driver -*- linux-c -*- */ >+/** >+ * \file mach64_dma.c >+ * DMA support for mach64 (Rage Pro) driver >+ * >+ * \author Gareth Hughes <gareth@valinux.com> >+ * \author Frank C. Earl <fearl@airmail.net> >+ * \author Leif Delgass <ldelgass@retinalburn.net> >+ * \author José Fonseca <j_r_fonseca@yahoo.co.uk> >+ */ >+ >+/* >+ * Copyright 2000 Gareth Hughes >+ * Copyright 2002 Frank C. Earl >+ * Copyright 2002-2003 Leif Delgass >+ * All Rights Reserved. >+ * >+ * Permission is hereby granted, free of charge, to any person obtaining a >+ * copy of this software and associated documentation files (the "Software"), >+ * to deal in the Software without restriction, including without limitation >+ * the rights to use, copy, modify, merge, publish, distribute, sublicense, >+ * and/or sell copies of the Software, and to permit persons to whom the >+ * Software is furnished to do so, subject to the following conditions: >+ * >+ * The above copyright notice and this permission notice (including the next >+ * paragraph) shall be included in all copies or substantial portions of the >+ * Software. >+ * >+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR >+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, >+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL >+ * THE COPYRIGHT OWNER(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER >+ * IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN >+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
>+ */ >+ >+#include "drmP.h" >+#include "drm.h" >+#include "mach64_drm.h" >+#include "mach64_drv.h" >+ >+/*******************************************************************/ >+/** \name Engine, FIFO control */ >+/*@{*/ >+ >+/** >+ * Waits for free entries in the FIFO. >+ * >+ * \note Most writes to Mach64 registers are automatically routed through >+ * the command FIFO, which is 16 entries deep. Prior to writing to any draw engine >+ * register one has to ensure that enough FIFO entries are available by calling >+ * this function. Failure to do so may cause the engine to lock. >+ * >+ * \param dev_priv pointer to device private data structure. >+ * \param entries number of free entries in the FIFO to wait for. >+ * >+ * \returns zero on success, or -EBUSY if the timeout (specified by >+ * drm_mach64_private::usec_timeout) occurs. >+ */ >+int mach64_do_wait_for_fifo(drm_mach64_private_t * dev_priv, int entries) >+{ >+ int slots = 0, i; >+ >+ for (i = 0; i < dev_priv->usec_timeout; i++) { >+ slots = (MACH64_READ(MACH64_FIFO_STAT) & MACH64_FIFO_SLOT_MASK); >+ if (slots <= (0x8000 >> entries)) >+ return 0; >+ DRM_UDELAY(1); >+ } >+ >+ DRM_INFO("%s failed! slots=%d entries=%d\n", __FUNCTION__, slots, >+ entries); >+ return -EBUSY; >+} >+ >+/** >+ * Wait for the draw engine to be idle. >+ */ >+int mach64_do_wait_for_idle(drm_mach64_private_t * dev_priv) >+{ >+ int i, ret; >+ >+ ret = mach64_do_wait_for_fifo(dev_priv, 16); >+ if (ret < 0) >+ return ret; >+ >+ for (i = 0; i < dev_priv->usec_timeout; i++) { >+ if (!(MACH64_READ(MACH64_GUI_STAT) & MACH64_GUI_ACTIVE)) { >+ return 0; >+ } >+ DRM_UDELAY(1); >+ } >+ >+ DRM_INFO("%s failed! GUI_STAT=0x%08x\n", __FUNCTION__, >+ MACH64_READ(MACH64_GUI_STAT)); >+ mach64_dump_ring_info(dev_priv); >+ return -EBUSY; >+} >+ >+/** >+ * Wait for free entries in the ring buffer. 
>+ * >+ * The Mach64 bus master can be configured to act as a virtual FIFO, using a >+ * circular buffer (commonly referred to as a "ring buffer" in other drivers) with >+ * pointers to engine commands. This allows the CPU to do other things while >+ * the graphics engine is busy, i.e., DMA mode. >+ * >+ * This function should be called before writing new entries to the ring >+ * buffer. >+ * >+ * \param dev_priv pointer to device private data structure. >+ * \param n number of free entries in the ring buffer to wait for. >+ * >+ * \returns zero on success, or -EBUSY if the timeout (specified by >+ * drm_mach64_private_t::usec_timeout) occurs. >+ * >+ * \sa mach64_dump_ring_info() >+ */ >+int mach64_wait_ring(drm_mach64_private_t * dev_priv, int n) >+{ >+ drm_mach64_descriptor_ring_t *ring = &dev_priv->ring; >+ int i; >+ >+ for (i = 0; i < dev_priv->usec_timeout; i++) { >+ mach64_update_ring_snapshot(dev_priv); >+ if (ring->space >= n) { >+ if (i > 0) { >+ DRM_DEBUG("%s: %d usecs\n", __FUNCTION__, i); >+ } >+ return 0; >+ } >+ DRM_UDELAY(1); >+ } >+ >+ /* FIXME: This is being ignored... */ >+ DRM_ERROR("failed!\n"); >+ mach64_dump_ring_info(dev_priv); >+ return -EBUSY; >+} >+ >+/** >+ * Wait until all DMA requests have been processed... >+ * >+ * \sa mach64_wait_ring() >+ */ >+static int mach64_ring_idle(drm_mach64_private_t * dev_priv) >+{ >+ drm_mach64_descriptor_ring_t *ring = &dev_priv->ring; >+ u32 head; >+ int i; >+ >+ head = ring->head; >+ i = 0; >+ while (i < dev_priv->usec_timeout) { >+ mach64_update_ring_snapshot(dev_priv); >+ if (ring->head == ring->tail && >+ !(MACH64_READ(MACH64_GUI_STAT) & MACH64_GUI_ACTIVE)) { >+ if (i > 0) { >+ DRM_DEBUG("%s: %d usecs\n", __FUNCTION__, i); >+ } >+ return 0; >+ } >+ if (ring->head == head) { >+ ++i; >+ } else { >+ head = ring->head; >+ i = 0; >+ } >+ DRM_UDELAY(1); >+ } >+ >+ DRM_INFO("%s failed! 
GUI_STAT=0x%08x\n", __FUNCTION__, >+ MACH64_READ(MACH64_GUI_STAT)); >+ mach64_dump_ring_info(dev_priv); >+ return -EBUSY; >+} >+ >+/** >+ * Reset the ring buffer descriptors. >+ * >+ * \sa mach64_do_engine_reset() >+ */ >+static void mach64_ring_reset(drm_mach64_private_t * dev_priv) >+{ >+ drm_mach64_descriptor_ring_t *ring = &dev_priv->ring; >+ >+ mach64_do_release_used_buffers(dev_priv); >+ ring->head_addr = ring->start_addr; >+ ring->head = ring->tail = 0; >+ ring->space = ring->size; >+ >+ MACH64_WRITE(MACH64_BM_GUI_TABLE_CMD, >+ ring->head_addr | MACH64_CIRCULAR_BUF_SIZE_16KB); >+ >+ dev_priv->ring_running = 0; >+} >+ >+/** >+ * Ensure that all the queued commands will be processed. >+ */ >+int mach64_do_dma_flush(drm_mach64_private_t * dev_priv) >+{ >+ /* FIXME: It's not necessary to wait for idle when flushing >+ * we just need to ensure the ring will be completely processed >+ * in finite time without another ioctl >+ */ >+ return mach64_ring_idle(dev_priv); >+} >+ >+/** >+ * Stop all DMA activity. >+ */ >+int mach64_do_dma_idle(drm_mach64_private_t * dev_priv) >+{ >+ int ret; >+ >+ /* wait for completion */ >+ if ((ret = mach64_ring_idle(dev_priv)) < 0) { >+ DRM_ERROR("%s failed BM_GUI_TABLE=0x%08x tail: %u\n", >+ __FUNCTION__, MACH64_READ(MACH64_BM_GUI_TABLE), >+ dev_priv->ring.tail); >+ return ret; >+ } >+ >+ mach64_ring_stop(dev_priv); >+ >+ /* clean up after pass */ >+ mach64_do_release_used_buffers(dev_priv); >+ return 0; >+} >+ >+/** >+ * Reset the engine. This will stop the DMA if it is running. >+ */ >+int mach64_do_engine_reset(drm_mach64_private_t * dev_priv) >+{ >+ u32 tmp; >+ >+ DRM_DEBUG("%s\n", __FUNCTION__); >+ >+ /* Kill off any outstanding DMA transfers. >+ */ >+ tmp = MACH64_READ(MACH64_BUS_CNTL); >+ MACH64_WRITE(MACH64_BUS_CNTL, tmp | MACH64_BUS_MASTER_DIS); >+ >+ /* Reset the GUI engine (high to low transition). 
>+ */ >+ tmp = MACH64_READ(MACH64_GEN_TEST_CNTL); >+ MACH64_WRITE(MACH64_GEN_TEST_CNTL, tmp & ~MACH64_GUI_ENGINE_ENABLE); >+ /* Enable the GUI engine >+ */ >+ tmp = MACH64_READ(MACH64_GEN_TEST_CNTL); >+ MACH64_WRITE(MACH64_GEN_TEST_CNTL, tmp | MACH64_GUI_ENGINE_ENABLE); >+ >+ /* ensure engine is not locked up by clearing any FIFO or HOST errors >+ */ >+ tmp = MACH64_READ(MACH64_BUS_CNTL); >+ MACH64_WRITE(MACH64_BUS_CNTL, tmp | 0x00a00000); >+ >+ /* Once GUI engine is restored, disable bus mastering */ >+ MACH64_WRITE(MACH64_SRC_CNTL, 0); >+ >+ /* Reset descriptor ring */ >+ mach64_ring_reset(dev_priv); >+ >+ return 0; >+} >+ >+/*@}*/ >+ >+ >+/*******************************************************************/ >+/** \name Debugging output */ >+/*@{*/ >+ >+/** >+ * Dump engine registers values. >+ */ >+void mach64_dump_engine_info(drm_mach64_private_t * dev_priv) >+{ >+ DRM_INFO("\n"); >+ if (!dev_priv->is_pci) { >+ DRM_INFO(" AGP_BASE = 0x%08x\n", >+ MACH64_READ(MACH64_AGP_BASE)); >+ DRM_INFO(" AGP_CNTL = 0x%08x\n", >+ MACH64_READ(MACH64_AGP_CNTL)); >+ } >+ DRM_INFO(" ALPHA_TST_CNTL = 0x%08x\n", >+ MACH64_READ(MACH64_ALPHA_TST_CNTL)); >+ DRM_INFO("\n"); >+ DRM_INFO(" BM_COMMAND = 0x%08x\n", >+ MACH64_READ(MACH64_BM_COMMAND)); >+ DRM_INFO("BM_FRAME_BUF_OFFSET = 0x%08x\n", >+ MACH64_READ(MACH64_BM_FRAME_BUF_OFFSET)); >+ DRM_INFO(" BM_GUI_TABLE = 0x%08x\n", >+ MACH64_READ(MACH64_BM_GUI_TABLE)); >+ DRM_INFO(" BM_STATUS = 0x%08x\n", >+ MACH64_READ(MACH64_BM_STATUS)); >+ DRM_INFO(" BM_SYSTEM_MEM_ADDR = 0x%08x\n", >+ MACH64_READ(MACH64_BM_SYSTEM_MEM_ADDR)); >+ DRM_INFO(" BM_SYSTEM_TABLE = 0x%08x\n", >+ MACH64_READ(MACH64_BM_SYSTEM_TABLE)); >+ DRM_INFO(" BUS_CNTL = 0x%08x\n", >+ MACH64_READ(MACH64_BUS_CNTL)); >+ DRM_INFO("\n"); >+ /* DRM_INFO( " CLOCK_CNTL = 0x%08x\n", MACH64_READ( MACH64_CLOCK_CNTL ) ); */ >+ DRM_INFO(" CLR_CMP_CLR = 0x%08x\n", >+ MACH64_READ(MACH64_CLR_CMP_CLR)); >+ DRM_INFO(" CLR_CMP_CNTL = 0x%08x\n", >+ MACH64_READ(MACH64_CLR_CMP_CNTL)); >+ /* 
DRM_INFO( " CLR_CMP_MSK = 0x%08x\n", MACH64_READ( MACH64_CLR_CMP_MSK ) ); */ >+ DRM_INFO(" CONFIG_CHIP_ID = 0x%08x\n", >+ MACH64_READ(MACH64_CONFIG_CHIP_ID)); >+ DRM_INFO(" CONFIG_CNTL = 0x%08x\n", >+ MACH64_READ(MACH64_CONFIG_CNTL)); >+ DRM_INFO(" CONFIG_STAT0 = 0x%08x\n", >+ MACH64_READ(MACH64_CONFIG_STAT0)); >+ DRM_INFO(" CONFIG_STAT1 = 0x%08x\n", >+ MACH64_READ(MACH64_CONFIG_STAT1)); >+ DRM_INFO(" CONFIG_STAT2 = 0x%08x\n", >+ MACH64_READ(MACH64_CONFIG_STAT2)); >+ DRM_INFO(" CRC_SIG = 0x%08x\n", MACH64_READ(MACH64_CRC_SIG)); >+ DRM_INFO(" CUSTOM_MACRO_CNTL = 0x%08x\n", >+ MACH64_READ(MACH64_CUSTOM_MACRO_CNTL)); >+ DRM_INFO("\n"); >+ /* DRM_INFO( " DAC_CNTL = 0x%08x\n", MACH64_READ( MACH64_DAC_CNTL ) ); */ >+ /* DRM_INFO( " DAC_REGS = 0x%08x\n", MACH64_READ( MACH64_DAC_REGS ) ); */ >+ DRM_INFO(" DP_BKGD_CLR = 0x%08x\n", >+ MACH64_READ(MACH64_DP_BKGD_CLR)); >+ DRM_INFO(" DP_FRGD_CLR = 0x%08x\n", >+ MACH64_READ(MACH64_DP_FRGD_CLR)); >+ DRM_INFO(" DP_MIX = 0x%08x\n", MACH64_READ(MACH64_DP_MIX)); >+ DRM_INFO(" DP_PIX_WIDTH = 0x%08x\n", >+ MACH64_READ(MACH64_DP_PIX_WIDTH)); >+ DRM_INFO(" DP_SRC = 0x%08x\n", MACH64_READ(MACH64_DP_SRC)); >+ DRM_INFO(" DP_WRITE_MASK = 0x%08x\n", >+ MACH64_READ(MACH64_DP_WRITE_MASK)); >+ DRM_INFO(" DSP_CONFIG = 0x%08x\n", >+ MACH64_READ(MACH64_DSP_CONFIG)); >+ DRM_INFO(" DSP_ON_OFF = 0x%08x\n", >+ MACH64_READ(MACH64_DSP_ON_OFF)); >+ DRM_INFO(" DST_CNTL = 0x%08x\n", >+ MACH64_READ(MACH64_DST_CNTL)); >+ DRM_INFO(" DST_OFF_PITCH = 0x%08x\n", >+ MACH64_READ(MACH64_DST_OFF_PITCH)); >+ DRM_INFO("\n"); >+ /* DRM_INFO( " EXT_DAC_REGS = 0x%08x\n", MACH64_READ( MACH64_EXT_DAC_REGS ) ); */ >+ DRM_INFO(" EXT_MEM_CNTL = 0x%08x\n", >+ MACH64_READ(MACH64_EXT_MEM_CNTL)); >+ DRM_INFO("\n"); >+ DRM_INFO(" FIFO_STAT = 0x%08x\n", >+ MACH64_READ(MACH64_FIFO_STAT)); >+ DRM_INFO("\n"); >+ DRM_INFO(" GEN_TEST_CNTL = 0x%08x\n", >+ MACH64_READ(MACH64_GEN_TEST_CNTL)); >+ /* DRM_INFO( " GP_IO = 0x%08x\n", MACH64_READ( MACH64_GP_IO ) ); */ >+ DRM_INFO(" 
GUI_CMDFIFO_DATA = 0x%08x\n", >+ MACH64_READ(MACH64_GUI_CMDFIFO_DATA)); >+ DRM_INFO(" GUI_CMDFIFO_DEBUG = 0x%08x\n", >+ MACH64_READ(MACH64_GUI_CMDFIFO_DEBUG)); >+ DRM_INFO(" GUI_CNTL = 0x%08x\n", >+ MACH64_READ(MACH64_GUI_CNTL)); >+ DRM_INFO(" GUI_STAT = 0x%08x\n", >+ MACH64_READ(MACH64_GUI_STAT)); >+ DRM_INFO(" GUI_TRAJ_CNTL = 0x%08x\n", >+ MACH64_READ(MACH64_GUI_TRAJ_CNTL)); >+ DRM_INFO("\n"); >+ DRM_INFO(" HOST_CNTL = 0x%08x\n", >+ MACH64_READ(MACH64_HOST_CNTL)); >+ DRM_INFO(" HW_DEBUG = 0x%08x\n", >+ MACH64_READ(MACH64_HW_DEBUG)); >+ DRM_INFO("\n"); >+ DRM_INFO(" MEM_ADDR_CONFIG = 0x%08x\n", >+ MACH64_READ(MACH64_MEM_ADDR_CONFIG)); >+ DRM_INFO(" MEM_BUF_CNTL = 0x%08x\n", >+ MACH64_READ(MACH64_MEM_BUF_CNTL)); >+ DRM_INFO("\n"); >+ DRM_INFO(" PAT_REG0 = 0x%08x\n", >+ MACH64_READ(MACH64_PAT_REG0)); >+ DRM_INFO(" PAT_REG1 = 0x%08x\n", >+ MACH64_READ(MACH64_PAT_REG1)); >+ DRM_INFO("\n"); >+ DRM_INFO(" SC_LEFT = 0x%08x\n", MACH64_READ(MACH64_SC_LEFT)); >+ DRM_INFO(" SC_RIGHT = 0x%08x\n", >+ MACH64_READ(MACH64_SC_RIGHT)); >+ DRM_INFO(" SC_TOP = 0x%08x\n", MACH64_READ(MACH64_SC_TOP)); >+ DRM_INFO(" SC_BOTTOM = 0x%08x\n", >+ MACH64_READ(MACH64_SC_BOTTOM)); >+ DRM_INFO("\n"); >+ DRM_INFO(" SCALE_3D_CNTL = 0x%08x\n", >+ MACH64_READ(MACH64_SCALE_3D_CNTL)); >+ DRM_INFO(" SCRATCH_REG0 = 0x%08x\n", >+ MACH64_READ(MACH64_SCRATCH_REG0)); >+ DRM_INFO(" SCRATCH_REG1 = 0x%08x\n", >+ MACH64_READ(MACH64_SCRATCH_REG1)); >+ DRM_INFO(" SETUP_CNTL = 0x%08x\n", >+ MACH64_READ(MACH64_SETUP_CNTL)); >+ DRM_INFO(" SRC_CNTL = 0x%08x\n", >+ MACH64_READ(MACH64_SRC_CNTL)); >+ DRM_INFO("\n"); >+ DRM_INFO(" TEX_CNTL = 0x%08x\n", >+ MACH64_READ(MACH64_TEX_CNTL)); >+ DRM_INFO(" TEX_SIZE_PITCH = 0x%08x\n", >+ MACH64_READ(MACH64_TEX_SIZE_PITCH)); >+ DRM_INFO(" TIMER_CONFIG = 0x%08x\n", >+ MACH64_READ(MACH64_TIMER_CONFIG)); >+ DRM_INFO("\n"); >+ DRM_INFO(" Z_CNTL = 0x%08x\n", MACH64_READ(MACH64_Z_CNTL)); >+ DRM_INFO(" Z_OFF_PITCH = 0x%08x\n", >+ MACH64_READ(MACH64_Z_OFF_PITCH)); >+ DRM_INFO("\n"); >+} 
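Editor's note, not part of the patch: mach64_dump_engine_info() above prints each register with an individual DRM_INFO() call. The same dump can be driven by a name/offset table, which keeps the format string in one place and makes the register list easier to extend. The sketch below is a minimal, self-contained model; `fake_regs`, the `MACH64_READ` stand-in, and the offsets are invented for illustration, not the driver's real definitions.

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Stand-in MMIO space and accessor -- the real driver reads through
 * its MACH64_READ() macro over the mapped register aperture. */
static uint32_t fake_regs[256];
#define MACH64_READ(off) (fake_regs[(off) / 4])

struct reg_entry {
	const char *name;
	unsigned offset;
};

/* Illustrative entries only; a real table would list every register
 * that mach64_dump_engine_info() prints. */
static const struct reg_entry dump_table[] = {
	{ "BUS_CNTL",  0x0a0 },
	{ "FIFO_STAT", 0x310 },
	{ "GUI_STAT",  0x338 },
};

/* Format the whole dump into buf, one "NAME = 0xVALUE" line per entry. */
static void dump_engine_info(char *buf, size_t len)
{
	size_t pos = 0, i;

	for (i = 0; i < sizeof(dump_table) / sizeof(dump_table[0]); i++)
		pos += snprintf(buf + pos, len - pos, "%19s = 0x%08x\n",
				dump_table[i].name,
				MACH64_READ(dump_table[i].offset));
}
```

A table like this trades a little indirection for a single point of maintenance; the in-patch version keeps the explicit calls, which also allows the interleaved blank-line grouping seen above.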
>+ >+#define MACH64_DUMP_CONTEXT 3 >+ >+/** >+ * Used by mach64_dump_ring_info() to dump the contents of the current buffer >+ * pointed by the ring head. >+ */ >+static void mach64_dump_buf_info(drm_mach64_private_t * dev_priv, >+ struct drm_buf * buf) >+{ >+ u32 addr = GETBUFADDR(buf); >+ u32 used = buf->used >> 2; >+ u32 sys_addr = MACH64_READ(MACH64_BM_SYSTEM_MEM_ADDR); >+ u32 *p = GETBUFPTR(buf); >+ int skipped = 0; >+ >+ DRM_INFO("buffer contents:\n"); >+ >+ while (used) { >+ u32 reg, count; >+ >+ reg = le32_to_cpu(*p++); >+ if (addr <= GETBUFADDR(buf) + MACH64_DUMP_CONTEXT * 4 || >+ (addr >= sys_addr - MACH64_DUMP_CONTEXT * 4 && >+ addr <= sys_addr + MACH64_DUMP_CONTEXT * 4) || >+ addr >= >+ GETBUFADDR(buf) + buf->used - MACH64_DUMP_CONTEXT * 4) { >+ DRM_INFO("%08x: 0x%08x\n", addr, reg); >+ } >+ addr += 4; >+ used--; >+ >+ count = (reg >> 16) + 1; >+ reg = reg & 0xffff; >+ reg = MMSELECT(reg); >+ while (count && used) { >+ if (addr <= GETBUFADDR(buf) + MACH64_DUMP_CONTEXT * 4 || >+ (addr >= sys_addr - MACH64_DUMP_CONTEXT * 4 && >+ addr <= sys_addr + MACH64_DUMP_CONTEXT * 4) || >+ addr >= >+ GETBUFADDR(buf) + buf->used - >+ MACH64_DUMP_CONTEXT * 4) { >+ DRM_INFO("%08x: 0x%04x = 0x%08x\n", addr, >+ reg, le32_to_cpu(*p)); >+ skipped = 0; >+ } else { >+ if (!skipped) { >+ DRM_INFO(" ...\n"); >+ skipped = 1; >+ } >+ } >+ p++; >+ addr += 4; >+ used--; >+ >+ reg += 4; >+ count--; >+ } >+ } >+ >+ DRM_INFO("\n"); >+} >+ >+/** >+ * Dump the ring state and contents, including the contents of the buffer being >+ * processed by the graphics engine. 
>+ */ >+void mach64_dump_ring_info(drm_mach64_private_t * dev_priv) >+{ >+ drm_mach64_descriptor_ring_t *ring = &dev_priv->ring; >+ int i, skipped; >+ >+ DRM_INFO("\n"); >+ >+ DRM_INFO("ring contents:\n"); >+ DRM_INFO(" head_addr: 0x%08x head: %u tail: %u\n\n", >+ ring->head_addr, ring->head, ring->tail); >+ >+ skipped = 0; >+ for (i = 0; i < ring->size / sizeof(u32); i += 4) { >+ if (i <= MACH64_DUMP_CONTEXT * 4 || >+ i >= ring->size / sizeof(u32) - MACH64_DUMP_CONTEXT * 4 || >+ (i >= ring->tail - MACH64_DUMP_CONTEXT * 4 && >+ i <= ring->tail + MACH64_DUMP_CONTEXT * 4) || >+ (i >= ring->head - MACH64_DUMP_CONTEXT * 4 && >+ i <= ring->head + MACH64_DUMP_CONTEXT * 4)) { >+ DRM_INFO(" 0x%08x: 0x%08x 0x%08x 0x%08x 0x%08x%s%s\n", >+ (u32)(ring->start_addr + i * sizeof(u32)), >+ le32_to_cpu(((u32 *) ring->start)[i + 0]), >+ le32_to_cpu(((u32 *) ring->start)[i + 1]), >+ le32_to_cpu(((u32 *) ring->start)[i + 2]), >+ le32_to_cpu(((u32 *) ring->start)[i + 3]), >+ i == ring->head ? " (head)" : "", >+ i == ring->tail ? 
" (tail)" : ""); >+ skipped = 0; >+ } else { >+ if (!skipped) { >+ DRM_INFO(" ...\n"); >+ skipped = 1; >+ } >+ } >+ } >+ >+ DRM_INFO("\n"); >+ >+ if (ring->head >= 0 && ring->head < ring->size / sizeof(u32)) { >+ struct list_head *ptr; >+ u32 addr = le32_to_cpu(((u32 *) ring->start)[ring->head + 1]); >+ >+ list_for_each(ptr, &dev_priv->pending) { >+ drm_mach64_freelist_t *entry = >+ list_entry(ptr, drm_mach64_freelist_t, list); >+ struct drm_buf *buf = entry->buf; >+ >+ u32 buf_addr = GETBUFADDR(buf); >+ >+ if (buf_addr <= addr && addr < buf_addr + buf->used) { >+ mach64_dump_buf_info(dev_priv, buf); >+ } >+ } >+ } >+ >+ DRM_INFO("\n"); >+ DRM_INFO(" BM_GUI_TABLE = 0x%08x\n", >+ MACH64_READ(MACH64_BM_GUI_TABLE)); >+ DRM_INFO("\n"); >+ DRM_INFO("BM_FRAME_BUF_OFFSET = 0x%08x\n", >+ MACH64_READ(MACH64_BM_FRAME_BUF_OFFSET)); >+ DRM_INFO(" BM_SYSTEM_MEM_ADDR = 0x%08x\n", >+ MACH64_READ(MACH64_BM_SYSTEM_MEM_ADDR)); >+ DRM_INFO(" BM_COMMAND = 0x%08x\n", >+ MACH64_READ(MACH64_BM_COMMAND)); >+ DRM_INFO("\n"); >+ DRM_INFO(" BM_STATUS = 0x%08x\n", >+ MACH64_READ(MACH64_BM_STATUS)); >+ DRM_INFO(" BUS_CNTL = 0x%08x\n", >+ MACH64_READ(MACH64_BUS_CNTL)); >+ DRM_INFO(" FIFO_STAT = 0x%08x\n", >+ MACH64_READ(MACH64_FIFO_STAT)); >+ DRM_INFO(" GUI_STAT = 0x%08x\n", >+ MACH64_READ(MACH64_GUI_STAT)); >+ DRM_INFO(" SRC_CNTL = 0x%08x\n", >+ MACH64_READ(MACH64_SRC_CNTL)); >+} >+ >+/*@}*/ >+ >+ >+/*******************************************************************/ >+/** \name DMA descriptor ring macros */ >+/*@{*/ >+ >+/** >+ * Add the end mark to the ring's new tail position. >+ * >+ * The bus master engine will keep processing the DMA buffers listed in the ring >+ * until it finds this mark, making it stop. 
>+ * >+ * \sa mach64_clear_dma_eol >+ */ >+static __inline__ void mach64_set_dma_eol(volatile u32 * addr) >+{ >+#if defined(__i386__) >+ int nr = 31; >+ >+ /* Taken from include/asm-i386/bitops.h linux header */ >+ __asm__ __volatile__("lock;" "btsl %1,%0":"=m"(*addr) >+ :"Ir"(nr)); >+#elif defined(__powerpc__) >+ u32 old; >+ u32 mask = cpu_to_le32(MACH64_DMA_EOL); >+ >+ /* Taken from the include/asm-ppc/bitops.h linux header */ >+ __asm__ __volatile__("\n\ >+1: lwarx %0,0,%3 \n\ >+ or %0,%0,%2 \n\ >+ stwcx. %0,0,%3 \n\ >+ bne- 1b":"=&r"(old), "=m"(*addr) >+ :"r"(mask), "r"(addr), "m"(*addr) >+ :"cc"); >+#elif defined(__alpha__) >+ u32 temp; >+ u32 mask = MACH64_DMA_EOL; >+ >+ /* Taken from the include/asm-alpha/bitops.h linux header */ >+ __asm__ __volatile__("1: ldl_l %0,%3\n" >+ " bis %0,%2,%0\n" >+ " stl_c %0,%1\n" >+ " beq %0,2f\n" >+ ".subsection 2\n" >+ "2: br 1b\n" >+ ".previous":"=&r"(temp), "=m"(*addr) >+ :"Ir"(mask), "m"(*addr)); >+#else >+ u32 mask = cpu_to_le32(MACH64_DMA_EOL); >+ >+ *addr |= mask; >+#endif >+} >+ >+/** >+ * Remove the end mark from the ring's old tail position. >+ * >+ * It should be called after calling mach64_set_dma_eol to mark the ring's new >+ * tail position. >+ * >+ * We update the end marks while the bus master engine is in operation. Since >+ * the bus master engine may potentially be reading from the same position >+ * that we write, we must change atomically to avoid having intermediary bad >+ * data. >+ */ >+static __inline__ void mach64_clear_dma_eol(volatile u32 * addr) >+{ >+#if defined(__i386__) >+ int nr = 31; >+ >+ /* Taken from include/asm-i386/bitops.h linux header */ >+ __asm__ __volatile__("lock;" "btrl %1,%0":"=m"(*addr) >+ :"Ir"(nr)); >+#elif defined(__powerpc__) >+ u32 old; >+ u32 mask = cpu_to_le32(MACH64_DMA_EOL); >+ >+ /* Taken from the include/asm-ppc/bitops.h linux header */ >+ __asm__ __volatile__("\n\ >+1: lwarx %0,0,%3 \n\ >+ andc %0,%0,%2 \n\ >+ stwcx. 
%0,0,%3 \n\ >+ bne- 1b":"=&r"(old), "=m"(*addr) >+ :"r"(mask), "r"(addr), "m"(*addr) >+ :"cc"); >+#elif defined(__alpha__) >+ u32 temp; >+ u32 mask = ~MACH64_DMA_EOL; >+ >+ /* Taken from the include/asm-alpha/bitops.h linux header */ >+ __asm__ __volatile__("1: ldl_l %0,%3\n" >+ " and %0,%2,%0\n" >+ " stl_c %0,%1\n" >+ " beq %0,2f\n" >+ ".subsection 2\n" >+ "2: br 1b\n" >+ ".previous":"=&r"(temp), "=m"(*addr) >+ :"Ir"(mask), "m"(*addr)); >+#else >+ u32 mask = cpu_to_le32(~MACH64_DMA_EOL); >+ >+ *addr &= mask; >+#endif >+} >+ >+#define RING_LOCALS \ >+ int _ring_tail, _ring_write; unsigned int _ring_mask; volatile u32 *_ring >+ >+#define RING_WRITE_OFS _ring_write >+ >+#define BEGIN_RING( n ) \ >+do { \ >+ if ( MACH64_VERBOSE ) { \ >+ DRM_INFO( "BEGIN_RING( %d ) in %s\n", \ >+ (n), __FUNCTION__ ); \ >+ } \ >+ if ( dev_priv->ring.space <= (n) * sizeof(u32) ) { \ >+ int ret; \ >+ if ((ret=mach64_wait_ring( dev_priv, (n) * sizeof(u32))) < 0 ) { \ >+ DRM_ERROR( "wait_ring failed, resetting engine\n"); \ >+ mach64_dump_engine_info( dev_priv ); \ >+ mach64_do_engine_reset( dev_priv ); \ >+ return ret; \ >+ } \ >+ } \ >+ dev_priv->ring.space -= (n) * sizeof(u32); \ >+ _ring = (u32 *) dev_priv->ring.start; \ >+ _ring_tail = _ring_write = dev_priv->ring.tail; \ >+ _ring_mask = dev_priv->ring.tail_mask; \ >+} while (0) >+ >+#define OUT_RING( x ) \ >+do { \ >+ if ( MACH64_VERBOSE ) { \ >+ DRM_INFO( " OUT_RING( 0x%08x ) at 0x%x\n", \ >+ (unsigned int)(x), _ring_write ); \ >+ } \ >+ _ring[_ring_write++] = cpu_to_le32( x ); \ >+ _ring_write &= _ring_mask; \ >+} while (0) >+ >+#define ADVANCE_RING() \ >+do { \ >+ if ( MACH64_VERBOSE ) { \ >+ DRM_INFO( "ADVANCE_RING() wr=0x%06x tail=0x%06x\n", \ >+ _ring_write, _ring_tail ); \ >+ } \ >+ DRM_MEMORYBARRIER(); \ >+ mach64_clear_dma_eol( &_ring[(_ring_tail - 2) & _ring_mask] ); \ >+ DRM_MEMORYBARRIER(); \ >+ dev_priv->ring.tail = _ring_write; \ >+ mach64_ring_tick( dev_priv, &(dev_priv)->ring ); \ >+} while (0) >+ >+/** >+ * Queue a 
DMA buffer of register writes into the ring buffer.
>+ */
>+int mach64_add_buf_to_ring(drm_mach64_private_t *dev_priv,
>+ drm_mach64_freelist_t *entry)
>+{
>+ int bytes, pages, remainder;
>+ u32 address, page;
>+ int i;
>+ struct drm_buf *buf = entry->buf;
>+ RING_LOCALS;
>+
>+ bytes = buf->used;
>+ address = GETBUFADDR( buf );
>+ pages = (bytes + MACH64_DMA_CHUNKSIZE - 1) / MACH64_DMA_CHUNKSIZE;
>+
>+ BEGIN_RING( pages * 4 );
>+
>+ for ( i = 0 ; i < pages-1 ; i++ ) {
>+ page = address + i * MACH64_DMA_CHUNKSIZE;
>+ OUT_RING( MACH64_APERTURE_OFFSET + MACH64_BM_ADDR );
>+ OUT_RING( page );
>+ OUT_RING( MACH64_DMA_CHUNKSIZE | MACH64_DMA_HOLD_OFFSET );
>+ OUT_RING( 0 );
>+ }
>+
>+ /* generate the final descriptor for any remaining commands in this buffer */
>+ page = address + i * MACH64_DMA_CHUNKSIZE;
>+ remainder = bytes - i * MACH64_DMA_CHUNKSIZE;
>+
>+ /* Save dword offset of last descriptor for this buffer.
>+ * This is needed to check for completion of the buffer in freelist_get
>+ */
>+ entry->ring_ofs = RING_WRITE_OFS;
>+
>+ OUT_RING( MACH64_APERTURE_OFFSET + MACH64_BM_ADDR );
>+ OUT_RING( page );
>+ OUT_RING( remainder | MACH64_DMA_HOLD_OFFSET | MACH64_DMA_EOL );
>+ OUT_RING( 0 );
>+
>+ ADVANCE_RING();
>+
>+ return 0;
>+}
>+
>+/**
>+ * Queue a DMA buffer controlling host data transfers (e.g., blit).
>+ *
>+ * Almost identical to mach64_add_buf_to_ring.
>+ */
>+int mach64_add_hostdata_buf_to_ring(drm_mach64_private_t *dev_priv,
>+ drm_mach64_freelist_t *entry)
>+{
>+ int bytes, pages, remainder;
>+ u32 address, page;
>+ int i;
>+ struct drm_buf *buf = entry->buf;
>+ RING_LOCALS;
>+
>+ bytes = buf->used - MACH64_HOSTDATA_BLIT_OFFSET;
>+ pages = (bytes + MACH64_DMA_CHUNKSIZE - 1) / MACH64_DMA_CHUNKSIZE;
>+ address = GETBUFADDR( buf );
>+
>+ BEGIN_RING( 4 + pages * 4 );
>+
>+ OUT_RING( MACH64_APERTURE_OFFSET + MACH64_BM_ADDR );
>+ OUT_RING( address );
>+ OUT_RING( MACH64_HOSTDATA_BLIT_OFFSET | MACH64_DMA_HOLD_OFFSET );
>+ OUT_RING( 0 );
>+ address += MACH64_HOSTDATA_BLIT_OFFSET;
>+
>+ for ( i = 0 ; i < pages-1 ; i++ ) {
>+ page = address + i * MACH64_DMA_CHUNKSIZE;
>+ OUT_RING( MACH64_APERTURE_OFFSET + MACH64_BM_HOSTDATA );
>+ OUT_RING( page );
>+ OUT_RING( MACH64_DMA_CHUNKSIZE | MACH64_DMA_HOLD_OFFSET );
>+ OUT_RING( 0 );
>+ }
>+
>+ /* generate the final descriptor for any remaining commands in this buffer */
>+ page = address + i * MACH64_DMA_CHUNKSIZE;
>+ remainder = bytes - i * MACH64_DMA_CHUNKSIZE;
>+
>+ /* Save dword offset of last descriptor for this buffer.
>+ * This is needed to check for completion of the buffer in freelist_get
>+ */
>+ entry->ring_ofs = RING_WRITE_OFS;
>+
>+ OUT_RING( MACH64_APERTURE_OFFSET + MACH64_BM_HOSTDATA );
>+ OUT_RING( page );
>+ OUT_RING( remainder | MACH64_DMA_HOLD_OFFSET | MACH64_DMA_EOL );
>+ OUT_RING( 0 );
>+
>+ ADVANCE_RING();
>+
>+ return 0;
>+}
>+
>+/*@}*/
>+
>+
>+/*******************************************************************/
>+/** \name DMA test and initialization */
>+/*@{*/
>+
>+/**
>+ * Perform a simple DMA operation using the pattern registers to test whether
>+ * DMA works.
>+ *
>+ * \return zero if successful.
>+ *
>+ * \note This function was the testbed for many experiments regarding Mach64
>+ * DMA operation. It is left here since it is so tricky to get DMA operating
>+ * properly on some architectures and hardware.
>+ */
>+static int mach64_bm_dma_test(struct drm_device * dev)
>+{
>+ drm_mach64_private_t *dev_priv = dev->dev_private;
>+ drm_dma_handle_t *cpu_addr_dmah;
>+ u32 data_addr;
>+ u32 *table, *data;
>+ u32 expected[2];
>+ u32 src_cntl, pat_reg0, pat_reg1;
>+ int i, count, failed;
>+
>+ DRM_DEBUG("%s\n", __FUNCTION__);
>+
>+ table = (u32 *) dev_priv->ring.start;
>+
>+ /* FIXME: get a dma buffer from the freelist here */
>+ DRM_DEBUG("Allocating data memory ...\n");
>+ cpu_addr_dmah =
>+ drm_pci_alloc(dev, 0x1000, 0x1000, 0xfffffffful);
>+ if (!cpu_addr_dmah) {
>+ DRM_INFO("data-memory allocation failed!\n");
>+ return -ENOMEM;
>+ } else {
>+ data = (u32 *) cpu_addr_dmah->vaddr;
>+ data_addr = (u32) cpu_addr_dmah->busaddr;
>+ }
>+
>+ /* Save the X server's value for SRC_CNTL and restore it
>+ * in case our test fails. This prevents the X server
>+ * from disabling its cache for this register
>+ */
>+ src_cntl = MACH64_READ(MACH64_SRC_CNTL);
>+ pat_reg0 = MACH64_READ(MACH64_PAT_REG0);
>+ pat_reg1 = MACH64_READ(MACH64_PAT_REG1);
>+
>+ mach64_do_wait_for_fifo(dev_priv, 3);
>+
>+ MACH64_WRITE(MACH64_SRC_CNTL, 0);
>+ MACH64_WRITE(MACH64_PAT_REG0, 0x11111111);
>+ MACH64_WRITE(MACH64_PAT_REG1, 0x11111111);
>+
>+ mach64_do_wait_for_idle(dev_priv);
>+
>+ for (i = 0; i < 2; i++) {
>+ u32 reg;
>+ reg = MACH64_READ((MACH64_PAT_REG0 + i * 4));
>+ DRM_DEBUG("(Before DMA Transfer) reg %d = 0x%08x\n", i, reg);
>+ if (reg != 0x11111111) {
>+ DRM_INFO("Error initializing test registers\n");
>+ DRM_INFO("resetting engine ...\n");
>+ mach64_do_engine_reset(dev_priv);
>+ DRM_INFO("freeing data buffer memory.\n");
>+ drm_pci_free(dev, cpu_addr_dmah);
>+ return -EIO;
>+ }
>+ }
>+
>+ /* fill up a buffer with sets of 2 consecutive writes starting with PAT_REG0 */
>+ count = 0;
>+
>+ data[count++] = cpu_to_le32(DMAREG(MACH64_PAT_REG0) | (1 << 16));
>+ data[count++] = expected[0] = 0x22222222;
>+ data[count++] = expected[1] = 0xaaaaaaaa;
>+
>+ while (count < 1020) {
>+ data[count++] =
>+
cpu_to_le32(DMAREG(MACH64_PAT_REG0) | (1 << 16)); >+ data[count++] = 0x22222222; >+ data[count++] = 0xaaaaaaaa; >+ } >+ data[count++] = cpu_to_le32(DMAREG(MACH64_SRC_CNTL) | (0 << 16)); >+ data[count++] = 0; >+ >+ DRM_DEBUG("Preparing table ...\n"); >+ table[MACH64_DMA_FRAME_BUF_OFFSET] = cpu_to_le32(MACH64_BM_ADDR + >+ MACH64_APERTURE_OFFSET); >+ table[MACH64_DMA_SYS_MEM_ADDR] = cpu_to_le32(data_addr); >+ table[MACH64_DMA_COMMAND] = cpu_to_le32(count * sizeof(u32) >+ | MACH64_DMA_HOLD_OFFSET >+ | MACH64_DMA_EOL); >+ table[MACH64_DMA_RESERVED] = 0; >+ >+ DRM_DEBUG("table[0] = 0x%08x\n", table[0]); >+ DRM_DEBUG("table[1] = 0x%08x\n", table[1]); >+ DRM_DEBUG("table[2] = 0x%08x\n", table[2]); >+ DRM_DEBUG("table[3] = 0x%08x\n", table[3]); >+ >+ for (i = 0; i < 6; i++) { >+ DRM_DEBUG(" data[%d] = 0x%08x\n", i, data[i]); >+ } >+ DRM_DEBUG(" ...\n"); >+ for (i = count - 5; i < count; i++) { >+ DRM_DEBUG(" data[%d] = 0x%08x\n", i, data[i]); >+ } >+ >+ DRM_MEMORYBARRIER(); >+ >+ DRM_DEBUG("waiting for idle...\n"); >+ if ((i = mach64_do_wait_for_idle(dev_priv))) { >+ DRM_INFO("mach64_do_wait_for_idle failed (result=%d)\n", i); >+ DRM_INFO("resetting engine ...\n"); >+ mach64_do_engine_reset(dev_priv); >+ mach64_do_wait_for_fifo(dev_priv, 3); >+ MACH64_WRITE(MACH64_SRC_CNTL, src_cntl); >+ MACH64_WRITE(MACH64_PAT_REG0, pat_reg0); >+ MACH64_WRITE(MACH64_PAT_REG1, pat_reg1); >+ DRM_INFO("freeing data buffer memory.\n"); >+ drm_pci_free(dev, cpu_addr_dmah); >+ return i; >+ } >+ DRM_DEBUG("waiting for idle...done\n"); >+ >+ DRM_DEBUG("BUS_CNTL = 0x%08x\n", MACH64_READ(MACH64_BUS_CNTL)); >+ DRM_DEBUG("SRC_CNTL = 0x%08x\n", MACH64_READ(MACH64_SRC_CNTL)); >+ DRM_DEBUG("\n"); >+ DRM_DEBUG("data bus addr = 0x%08x\n", data_addr); >+ DRM_DEBUG("table bus addr = 0x%08x\n", dev_priv->ring.start_addr); >+ >+ DRM_DEBUG("starting DMA transfer...\n"); >+ MACH64_WRITE(MACH64_BM_GUI_TABLE_CMD, >+ dev_priv->ring.start_addr | MACH64_CIRCULAR_BUF_SIZE_16KB); >+ >+ MACH64_WRITE(MACH64_SRC_CNTL, >+ 
MACH64_SRC_BM_ENABLE | MACH64_SRC_BM_SYNC | >+ MACH64_SRC_BM_OP_SYSTEM_TO_REG); >+ >+ /* Kick off the transfer */ >+ DRM_DEBUG("starting DMA transfer... done.\n"); >+ MACH64_WRITE(MACH64_DST_HEIGHT_WIDTH, 0); >+ >+ DRM_DEBUG("waiting for idle...\n"); >+ >+ if ((i = mach64_do_wait_for_idle(dev_priv))) { >+ /* engine locked up, dump register state and reset */ >+ DRM_INFO("mach64_do_wait_for_idle failed (result=%d)\n", i); >+ mach64_dump_engine_info(dev_priv); >+ DRM_INFO("resetting engine ...\n"); >+ mach64_do_engine_reset(dev_priv); >+ mach64_do_wait_for_fifo(dev_priv, 3); >+ MACH64_WRITE(MACH64_SRC_CNTL, src_cntl); >+ MACH64_WRITE(MACH64_PAT_REG0, pat_reg0); >+ MACH64_WRITE(MACH64_PAT_REG1, pat_reg1); >+ DRM_INFO("freeing data buffer memory.\n"); >+ drm_pci_free(dev, cpu_addr_dmah); >+ return i; >+ } >+ >+ DRM_DEBUG("waiting for idle...done\n"); >+ >+ /* restore SRC_CNTL */ >+ mach64_do_wait_for_fifo(dev_priv, 1); >+ MACH64_WRITE(MACH64_SRC_CNTL, src_cntl); >+ >+ failed = 0; >+ >+ /* Check register values to see if the GUI master operation succeeded */ >+ for (i = 0; i < 2; i++) { >+ u32 reg; >+ reg = MACH64_READ((MACH64_PAT_REG0 + i * 4)); >+ DRM_DEBUG("(After DMA Transfer) reg %d = 0x%08x\n", i, reg); >+ if (reg != expected[i]) { >+ failed = -1; >+ } >+ } >+ >+ /* restore pattern registers */ >+ mach64_do_wait_for_fifo(dev_priv, 2); >+ MACH64_WRITE(MACH64_PAT_REG0, pat_reg0); >+ MACH64_WRITE(MACH64_PAT_REG1, pat_reg1); >+ >+ DRM_DEBUG("freeing data buffer memory.\n"); >+ drm_pci_free(dev, cpu_addr_dmah); >+ DRM_DEBUG("returning ...\n"); >+ >+ return failed; >+} >+ >+/** >+ * Called during the DMA initialization ioctl to initialize all the necessary >+ * software and hardware state for DMA operation. 
>+ */ >+static int mach64_do_dma_init(struct drm_device * dev, drm_mach64_init_t * init) >+{ >+ drm_mach64_private_t *dev_priv; >+ u32 tmp; >+ int i, ret; >+ >+ DRM_DEBUG("%s\n", __FUNCTION__); >+ >+ dev_priv = drm_alloc(sizeof(drm_mach64_private_t), DRM_MEM_DRIVER); >+ if (dev_priv == NULL) >+ return -ENOMEM; >+ >+ memset(dev_priv, 0, sizeof(drm_mach64_private_t)); >+ >+ dev_priv->is_pci = init->is_pci; >+ >+ dev_priv->fb_bpp = init->fb_bpp; >+ dev_priv->front_offset = init->front_offset; >+ dev_priv->front_pitch = init->front_pitch; >+ dev_priv->back_offset = init->back_offset; >+ dev_priv->back_pitch = init->back_pitch; >+ >+ dev_priv->depth_bpp = init->depth_bpp; >+ dev_priv->depth_offset = init->depth_offset; >+ dev_priv->depth_pitch = init->depth_pitch; >+ >+ dev_priv->front_offset_pitch = (((dev_priv->front_pitch / 8) << 22) | >+ (dev_priv->front_offset >> 3)); >+ dev_priv->back_offset_pitch = (((dev_priv->back_pitch / 8) << 22) | >+ (dev_priv->back_offset >> 3)); >+ dev_priv->depth_offset_pitch = (((dev_priv->depth_pitch / 8) << 22) | >+ (dev_priv->depth_offset >> 3)); >+ >+ dev_priv->usec_timeout = 1000000; >+ >+ /* Set up the freelist, placeholder list and pending list */ >+ INIT_LIST_HEAD(&dev_priv->free_list); >+ INIT_LIST_HEAD(&dev_priv->placeholders); >+ INIT_LIST_HEAD(&dev_priv->pending); >+ >+ dev_priv->sarea = drm_getsarea(dev); >+ if (!dev_priv->sarea) { >+ DRM_ERROR("can not find sarea!\n"); >+ dev->dev_private = (void *)dev_priv; >+ mach64_do_cleanup_dma(dev); >+ return -EINVAL; >+ } >+ dev_priv->fb = drm_core_findmap(dev, init->fb_offset); >+ if (!dev_priv->fb) { >+ DRM_ERROR("can not find frame buffer map!\n"); >+ dev->dev_private = (void *)dev_priv; >+ mach64_do_cleanup_dma(dev); >+ return -EINVAL; >+ } >+ dev_priv->mmio = drm_core_findmap(dev, init->mmio_offset); >+ if (!dev_priv->mmio) { >+ DRM_ERROR("can not find mmio map!\n"); >+ dev->dev_private = (void *)dev_priv; >+ mach64_do_cleanup_dma(dev); >+ return -EINVAL; >+ } >+ >+ 
dev_priv->ring_map = drm_core_findmap(dev, init->ring_offset); >+ if (!dev_priv->ring_map) { >+ DRM_ERROR("can not find ring map!\n"); >+ dev->dev_private = (void *)dev_priv; >+ mach64_do_cleanup_dma(dev); >+ return -EINVAL; >+ } >+ >+ dev_priv->sarea_priv = (drm_mach64_sarea_t *) >+ ((u8 *) dev_priv->sarea->handle + init->sarea_priv_offset); >+ >+ if (!dev_priv->is_pci) { >+ drm_core_ioremap(dev_priv->ring_map, dev); >+ if (!dev_priv->ring_map->handle) { >+ DRM_ERROR("can not ioremap virtual address for" >+ " descriptor ring\n"); >+ dev->dev_private = (void *)dev_priv; >+ mach64_do_cleanup_dma(dev); >+ return -ENOMEM; >+ } >+ dev->agp_buffer_token = init->buffers_offset; >+ dev->agp_buffer_map = >+ drm_core_findmap(dev, init->buffers_offset); >+ if (!dev->agp_buffer_map) { >+ DRM_ERROR("can not find dma buffer map!\n"); >+ dev->dev_private = (void *)dev_priv; >+ mach64_do_cleanup_dma(dev); >+ return -EINVAL; >+ } >+ /* there might be a nicer way to do this - >+ dev isn't passed all the way though the mach64 - DA */ >+ dev_priv->dev_buffers = dev->agp_buffer_map; >+ >+ drm_core_ioremap(dev->agp_buffer_map, dev); >+ if (!dev->agp_buffer_map->handle) { >+ DRM_ERROR("can not ioremap virtual address for" >+ " dma buffer\n"); >+ dev->dev_private = (void *)dev_priv; >+ mach64_do_cleanup_dma(dev); >+ return -ENOMEM; >+ } >+ dev_priv->agp_textures = >+ drm_core_findmap(dev, init->agp_textures_offset); >+ if (!dev_priv->agp_textures) { >+ DRM_ERROR("can not find agp texture region!\n"); >+ dev->dev_private = (void *)dev_priv; >+ mach64_do_cleanup_dma(dev); >+ return -EINVAL; >+ } >+ } >+ >+ dev->dev_private = (void *)dev_priv; >+ >+ dev_priv->driver_mode = init->dma_mode; >+ >+ /* changing the FIFO size from the default causes problems with DMA */ >+ tmp = MACH64_READ(MACH64_GUI_CNTL); >+ if ((tmp & MACH64_CMDFIFO_SIZE_MASK) != MACH64_CMDFIFO_SIZE_128) { >+ DRM_INFO("Setting FIFO size to 128 entries\n"); >+ /* FIFO must be empty to change the FIFO depth */ >+ if ((ret = 
mach64_do_wait_for_idle(dev_priv))) { >+ DRM_ERROR >+ ("wait for idle failed before changing FIFO depth!\n"); >+ mach64_do_cleanup_dma(dev); >+ return ret; >+ } >+ MACH64_WRITE(MACH64_GUI_CNTL, ((tmp & ~MACH64_CMDFIFO_SIZE_MASK) >+ | MACH64_CMDFIFO_SIZE_128)); >+ /* need to read GUI_STAT for proper sync according to docs */ >+ if ((ret = mach64_do_wait_for_idle(dev_priv))) { >+ DRM_ERROR >+ ("wait for idle failed when changing FIFO depth!\n"); >+ mach64_do_cleanup_dma(dev); >+ return ret; >+ } >+ } >+ >+ dev_priv->ring.size = 0x4000; /* 16KB */ >+ dev_priv->ring.start = dev_priv->ring_map->handle; >+ dev_priv->ring.start_addr = (u32) dev_priv->ring_map->offset; >+ >+ memset(dev_priv->ring.start, 0, dev_priv->ring.size); >+ DRM_INFO("descriptor ring: cpu addr %p, bus addr: 0x%08x\n", >+ dev_priv->ring.start, dev_priv->ring.start_addr); >+ >+ ret = 0; >+ if (dev_priv->driver_mode != MACH64_MODE_MMIO) { >+ >+ /* enable block 1 registers and bus mastering */ >+ MACH64_WRITE(MACH64_BUS_CNTL, ((MACH64_READ(MACH64_BUS_CNTL) >+ | MACH64_BUS_EXT_REG_EN) >+ & ~MACH64_BUS_MASTER_DIS)); >+ >+ /* try a DMA GUI-mastering pass and fall back to MMIO if it fails */ >+ DRM_DEBUG("Starting DMA test...\n"); >+ if ((ret = mach64_bm_dma_test(dev))) { >+ dev_priv->driver_mode = MACH64_MODE_MMIO; >+ } >+ } >+ >+ switch (dev_priv->driver_mode) { >+ case MACH64_MODE_MMIO: >+ MACH64_WRITE(MACH64_BUS_CNTL, (MACH64_READ(MACH64_BUS_CNTL) >+ | MACH64_BUS_EXT_REG_EN >+ | MACH64_BUS_MASTER_DIS)); >+ if (init->dma_mode == MACH64_MODE_MMIO) >+ DRM_INFO("Forcing pseudo-DMA mode\n"); >+ else >+ DRM_INFO >+ ("DMA test failed (ret=%d), using pseudo-DMA mode\n", >+ ret); >+ break; >+ case MACH64_MODE_DMA_SYNC: >+ DRM_INFO("DMA test succeeded, using synchronous DMA mode\n"); >+ break; >+ case MACH64_MODE_DMA_ASYNC: >+ default: >+ DRM_INFO("DMA test succeeded, using asynchronous DMA mode\n"); >+ } >+ >+ dev_priv->ring_running = 0; >+ >+ /* setup offsets for physical address of table start and end */ >+ 
dev_priv->ring.head_addr = dev_priv->ring.start_addr; >+ dev_priv->ring.head = dev_priv->ring.tail = 0; >+ dev_priv->ring.tail_mask = (dev_priv->ring.size / sizeof(u32)) - 1; >+ dev_priv->ring.space = dev_priv->ring.size; >+ >+ /* setup physical address and size of descriptor table */ >+ mach64_do_wait_for_fifo(dev_priv, 1); >+ MACH64_WRITE(MACH64_BM_GUI_TABLE_CMD, >+ (dev_priv->ring. >+ head_addr | MACH64_CIRCULAR_BUF_SIZE_16KB)); >+ >+ /* init frame counter */ >+ dev_priv->sarea_priv->frames_queued = 0; >+ for (i = 0; i < MACH64_MAX_QUEUED_FRAMES; i++) { >+ dev_priv->frame_ofs[i] = ~0; /* All ones indicates placeholder */ >+ } >+ >+ /* Allocate the DMA buffer freelist */ >+ if ((ret = mach64_init_freelist(dev))) { >+ DRM_ERROR("Freelist allocation failed\n"); >+ mach64_do_cleanup_dma(dev); >+ return ret; >+ } >+ >+ return 0; >+} >+ >+/*******************************************************************/ >+/** MMIO Pseudo-DMA (intended primarily for debugging, not performance) >+ */ >+ >+int mach64_do_dispatch_pseudo_dma(drm_mach64_private_t * dev_priv) >+{ >+ drm_mach64_descriptor_ring_t *ring = &dev_priv->ring; >+ volatile u32 *ring_read; >+ struct list_head *ptr; >+ drm_mach64_freelist_t *entry; >+ struct drm_buf *buf = NULL; >+ u32 *buf_ptr; >+ u32 used, reg, target; >+ int fifo, count, found, ret, no_idle_wait; >+ >+ fifo = count = reg = no_idle_wait = 0; >+ target = MACH64_BM_ADDR; >+ >+ if ((ret = mach64_do_wait_for_idle(dev_priv)) < 0) { >+ DRM_INFO >+ ("%s: idle failed before pseudo-dma dispatch, resetting engine\n", >+ __FUNCTION__); >+ mach64_dump_engine_info(dev_priv); >+ mach64_do_engine_reset(dev_priv); >+ return ret; >+ } >+ >+ ring_read = (u32 *) ring->start; >+ >+ while (ring->tail != ring->head) { >+ u32 buf_addr, new_target, offset; >+ u32 bytes, remaining, head, eol; >+ >+ head = ring->head; >+ >+ new_target = >+ le32_to_cpu(ring_read[head++]) - MACH64_APERTURE_OFFSET; >+ buf_addr = le32_to_cpu(ring_read[head++]); >+ eol = 
le32_to_cpu(ring_read[head]) & MACH64_DMA_EOL; >+ bytes = le32_to_cpu(ring_read[head++]) >+ & ~(MACH64_DMA_HOLD_OFFSET | MACH64_DMA_EOL); >+ head++; >+ head &= ring->tail_mask; >+ >+ /* can't wait for idle between a blit setup descriptor >+ * and a HOSTDATA descriptor or the engine will lock >+ */ >+ if (new_target == MACH64_BM_HOSTDATA >+ && target == MACH64_BM_ADDR) >+ no_idle_wait = 1; >+ >+ target = new_target; >+ >+ found = 0; >+ offset = 0; >+ list_for_each(ptr, &dev_priv->pending) { >+ entry = list_entry(ptr, drm_mach64_freelist_t, list); >+ buf = entry->buf; >+ offset = buf_addr - GETBUFADDR(buf); >+ if (offset >= 0 && offset < MACH64_BUFFER_SIZE) { >+ found = 1; >+ break; >+ } >+ } >+ >+ if (!found || buf == NULL) { >+ DRM_ERROR >+ ("Couldn't find pending buffer: head: %u tail: %u buf_addr: 0x%08x %s\n", >+ head, ring->tail, buf_addr, (eol ? "eol" : "")); >+ mach64_dump_ring_info(dev_priv); >+ mach64_do_engine_reset(dev_priv); >+ return -EINVAL; >+ } >+ >+ /* Hand feed the buffer to the card via MMIO, waiting for the fifo >+ * every 16 writes >+ */ >+ DRM_DEBUG("target: (0x%08x) %s\n", target, >+ (target == >+ MACH64_BM_HOSTDATA ? "BM_HOSTDATA" : "BM_ADDR")); >+ DRM_DEBUG("offset: %u bytes: %u used: %u\n", offset, bytes, >+ buf->used); >+ >+ remaining = (buf->used - offset) >> 2; /* dwords remaining in buffer */ >+ used = bytes >> 2; /* dwords in buffer for this descriptor */ >+ buf_ptr = (u32 *) ((char *)GETBUFPTR(buf) + offset); >+ >+ while (used) { >+ >+ if (count == 0) { >+ if (target == MACH64_BM_HOSTDATA) { >+ reg = DMAREG(MACH64_HOST_DATA0); >+ count = >+ (remaining > 16) ? 
16 : remaining; >+ fifo = 0; >+ } else { >+ reg = le32_to_cpu(*buf_ptr++); >+ used--; >+ count = (reg >> 16) + 1; >+ } >+ >+ reg = reg & 0xffff; >+ reg = MMSELECT(reg); >+ } >+ while (count && used) { >+ if (!fifo) { >+ if (no_idle_wait) { >+ if ((ret = >+ mach64_do_wait_for_fifo >+ (dev_priv, 16)) < 0) { >+ no_idle_wait = 0; >+ return ret; >+ } >+ } else { >+ if ((ret = >+ mach64_do_wait_for_idle >+ (dev_priv)) < 0) { >+ return ret; >+ } >+ } >+ fifo = 16; >+ } >+ --fifo; >+ MACH64_WRITE(reg, le32_to_cpu(*buf_ptr++)); >+ used--; >+ remaining--; >+ >+ reg += 4; >+ count--; >+ } >+ } >+ ring->head = head; >+ ring->head_addr = ring->start_addr + (ring->head * sizeof(u32)); >+ ring->space += (4 * sizeof(u32)); >+ } >+ >+ if ((ret = mach64_do_wait_for_idle(dev_priv)) < 0) { >+ return ret; >+ } >+ MACH64_WRITE(MACH64_BM_GUI_TABLE_CMD, >+ ring->head_addr | MACH64_CIRCULAR_BUF_SIZE_16KB); >+ >+ DRM_DEBUG("%s completed\n", __FUNCTION__); >+ return 0; >+} >+ >+/*@}*/ >+ >+ >+/*******************************************************************/ >+/** \name DMA cleanup */ >+/*@{*/ >+ >+int mach64_do_cleanup_dma(struct drm_device * dev) >+{ >+ DRM_DEBUG("%s\n", __FUNCTION__); >+ >+ /* Make sure interrupts are disabled here because the uninstall ioctl >+ * may not have been called from userspace and after dev_private >+ * is freed, it's too late. 
>+ */ >+ if (dev->irq) >+ drm_irq_uninstall(dev); >+ >+ if (dev->dev_private) { >+ drm_mach64_private_t *dev_priv = dev->dev_private; >+ >+ if (!dev_priv->is_pci) { >+ if (dev_priv->ring_map) >+ drm_core_ioremapfree(dev_priv->ring_map, dev); >+ >+ if (dev->agp_buffer_map) { >+ drm_core_ioremapfree(dev->agp_buffer_map, dev); >+ dev->agp_buffer_map = NULL; >+ } >+ } >+ >+ mach64_destroy_freelist(dev); >+ >+ drm_free(dev_priv, sizeof(drm_mach64_private_t), >+ DRM_MEM_DRIVER); >+ dev->dev_private = NULL; >+ } >+ >+ return 0; >+} >+ >+/*@}*/ >+ >+ >+/*******************************************************************/ >+/** \name IOCTL handlers */ >+/*@{*/ >+ >+int mach64_dma_init(struct drm_device *dev, void *data, >+ struct drm_file *file_priv) >+{ >+ drm_mach64_init_t *init = data; >+ >+ DRM_DEBUG("%s\n", __FUNCTION__); >+ >+ LOCK_TEST_WITH_RETURN(dev, file_priv); >+ >+ switch (init->func) { >+ case DRM_MACH64_INIT_DMA: >+ return mach64_do_dma_init(dev, init); >+ case DRM_MACH64_CLEANUP_DMA: >+ return mach64_do_cleanup_dma(dev); >+ } >+ >+ return -EINVAL; >+} >+ >+int mach64_dma_idle(struct drm_device *dev, void *data, >+ struct drm_file *file_priv) >+{ >+ drm_mach64_private_t *dev_priv = dev->dev_private; >+ >+ DRM_DEBUG("%s\n", __FUNCTION__); >+ >+ LOCK_TEST_WITH_RETURN(dev, file_priv); >+ >+ return mach64_do_dma_idle(dev_priv); >+} >+ >+int mach64_dma_flush(struct drm_device *dev, void *data, >+ struct drm_file *file_priv) >+{ >+ drm_mach64_private_t *dev_priv = dev->dev_private; >+ >+ DRM_DEBUG("%s\n", __FUNCTION__); >+ >+ LOCK_TEST_WITH_RETURN(dev, file_priv); >+ >+ return mach64_do_dma_flush(dev_priv); >+} >+ >+int mach64_engine_reset(struct drm_device *dev, void *data, >+ struct drm_file *file_priv) >+{ >+ drm_mach64_private_t *dev_priv = dev->dev_private; >+ >+ DRM_DEBUG("%s\n", __FUNCTION__); >+ >+ LOCK_TEST_WITH_RETURN(dev, file_priv); >+ >+ return mach64_do_engine_reset(dev_priv); >+} >+ >+/*@}*/ >+ >+ 
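Editor's note, not part of the patch: the BEGIN_RING/OUT_RING/ADVANCE_RING macros and the ring setup in mach64_do_dma_init() depend on the descriptor ring size being a power of two (0x4000 bytes), so the write index wraps with a single AND against tail_mask instead of a modulo. Below is a minimal, self-contained model of that arithmetic; the struct and function names are invented for illustration, not the driver's own.

```c
#include <stdint.h>

/* 0x4000-byte ring / sizeof(u32), matching mach64_do_dma_init() */
#define RING_DWORDS 4096

struct mini_ring {
	uint32_t buf[RING_DWORDS];
	unsigned tail;      /* next write index, in dwords */
	unsigned tail_mask; /* RING_DWORDS - 1: valid because the size is a power of two */
	unsigned space;     /* free bytes, decremented as dwords are emitted */
};

static void ring_init(struct mini_ring *r)
{
	r->tail = 0;
	r->tail_mask = RING_DWORDS - 1;
	r->space = RING_DWORDS * sizeof(uint32_t);
}

/* Emit one dword and wrap, the way OUT_RING advances _ring_write. */
static void ring_out(struct mini_ring *r, uint32_t v)
{
	r->buf[r->tail] = v;
	r->tail = (r->tail + 1) & r->tail_mask;
	r->space -= sizeof(uint32_t);
}
```

Writing at index 4095 wraps the tail back to 0. The real macros additionally reserve the needed space up front (BEGIN_RING, waiting on the ring if it is short) and move the end-of-list mark atomically (ADVANCE_RING via mach64_clear_dma_eol).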
>+/*******************************************************************/ >+/** \name Freelist management */ >+/*@{*/ >+ >+int mach64_init_freelist(struct drm_device * dev) >+{ >+ struct drm_device_dma *dma = dev->dma; >+ drm_mach64_private_t *dev_priv = dev->dev_private; >+ drm_mach64_freelist_t *entry; >+ struct list_head *ptr; >+ int i; >+ >+ DRM_DEBUG("%s: adding %d buffers to freelist\n", __FUNCTION__, >+ dma->buf_count); >+ >+ for (i = 0; i < dma->buf_count; i++) { >+ if ((entry = >+ (drm_mach64_freelist_t *) >+ drm_alloc(sizeof(drm_mach64_freelist_t), >+ DRM_MEM_BUFLISTS)) == NULL) >+ return -ENOMEM; >+ memset(entry, 0, sizeof(drm_mach64_freelist_t)); >+ entry->buf = dma->buflist[i]; >+ ptr = &entry->list; >+ list_add_tail(ptr, &dev_priv->free_list); >+ } >+ >+ return 0; >+} >+ >+void mach64_destroy_freelist(struct drm_device * dev) >+{ >+ drm_mach64_private_t *dev_priv = dev->dev_private; >+ drm_mach64_freelist_t *entry; >+ struct list_head *ptr; >+ struct list_head *tmp; >+ >+ DRM_DEBUG("%s\n", __FUNCTION__); >+ >+ list_for_each_safe(ptr, tmp, &dev_priv->pending) { >+ list_del(ptr); >+ entry = list_entry(ptr, drm_mach64_freelist_t, list); >+ drm_free(entry, sizeof(*entry), DRM_MEM_BUFLISTS); >+ } >+ list_for_each_safe(ptr, tmp, &dev_priv->placeholders) { >+ list_del(ptr); >+ entry = list_entry(ptr, drm_mach64_freelist_t, list); >+ drm_free(entry, sizeof(*entry), DRM_MEM_BUFLISTS); >+ } >+ >+ list_for_each_safe(ptr, tmp, &dev_priv->free_list) { >+ list_del(ptr); >+ entry = list_entry(ptr, drm_mach64_freelist_t, list); >+ drm_free(entry, sizeof(*entry), DRM_MEM_BUFLISTS); >+ } >+} >+ >+/* IMPORTANT: This function should only be called when the engine is idle or locked up, >+ * as it assumes all buffers in the pending list have been completed by the hardware. 
>+ */ >+int mach64_do_release_used_buffers(drm_mach64_private_t * dev_priv) >+{ >+ struct list_head *ptr; >+ struct list_head *tmp; >+ drm_mach64_freelist_t *entry; >+ int i; >+ >+ if (list_empty(&dev_priv->pending)) >+ return 0; >+ >+ /* Iterate the pending list and move all buffers into the freelist... */ >+ i = 0; >+ list_for_each_safe(ptr, tmp, &dev_priv->pending) { >+ entry = list_entry(ptr, drm_mach64_freelist_t, list); >+ if (entry->discard) { >+ entry->buf->pending = 0; >+ list_del(ptr); >+ list_add_tail(ptr, &dev_priv->free_list); >+ i++; >+ } >+ } >+ >+ DRM_DEBUG("%s: released %d buffers from pending list\n", __FUNCTION__, >+ i); >+ >+ return 0; >+} >+ >+static int mach64_do_reclaim_completed(drm_mach64_private_t * dev_priv) >+{ >+ drm_mach64_descriptor_ring_t *ring = &dev_priv->ring; >+ struct list_head *ptr; >+ struct list_head *tmp; >+ drm_mach64_freelist_t *entry; >+ u32 head, tail, ofs; >+ >+ mach64_ring_tick(dev_priv, ring); >+ head = ring->head; >+ tail = ring->tail; >+ >+ if (head == tail) { >+#if MACH64_EXTRA_CHECKING >+ if (MACH64_READ(MACH64_GUI_STAT) & MACH64_GUI_ACTIVE) { >+ DRM_ERROR("Empty ring with non-idle engine!\n"); >+ mach64_dump_ring_info(dev_priv); >+ return -1; >+ } >+#endif >+ /* last pass is complete, so release everything */ >+ mach64_do_release_used_buffers(dev_priv); >+ DRM_DEBUG("%s: idle engine, freed all buffers.\n", >+ __FUNCTION__); >+ if (list_empty(&dev_priv->free_list)) { >+ DRM_ERROR("Freelist empty with idle engine\n"); >+ return -1; >+ } >+ return 0; >+ } >+ /* Look for a completed buffer and bail out of the loop >+ * as soon as we find one -- don't waste time trying >+ * to free extra bufs here, leave that to do_release_used_buffers >+ */ >+ list_for_each_safe(ptr, tmp, &dev_priv->pending) { >+ entry = list_entry(ptr, drm_mach64_freelist_t, list); >+ ofs = entry->ring_ofs; >+ if (entry->discard && >+ ((head < tail && (ofs < head || ofs >= tail)) || >+ (head > tail && (ofs < head && ofs >= tail)))) { >+#if 
MACH64_EXTRA_CHECKING >+ int i; >+ >+ for (i = head; i != tail; i = (i + 4) & ring->tail_mask) >+ { >+ u32 o1 = le32_to_cpu(((u32 *) ring-> >+ start)[i + 1]); >+ u32 o2 = GETBUFADDR(entry->buf); >+ >+ if (o1 == o2) { >+ DRM_ERROR >+ ("Attempting to free used buffer: " >+ "i=%d buf=0x%08x\n", >+ i, o1); >+ mach64_dump_ring_info(dev_priv); >+ return -1; >+ } >+ } >+#endif >+ /* found a processed buffer */ >+ entry->buf->pending = 0; >+ list_del(ptr); >+ list_add_tail(ptr, &dev_priv->free_list); >+ DRM_DEBUG >+ ("%s: freed processed buffer (head=%d tail=%d " >+ "buf ring ofs=%d).\n", >+ __FUNCTION__, head, tail, ofs); >+ return 0; >+ } >+ } >+ >+ return 1; >+} >+ >+struct drm_buf *mach64_freelist_get(drm_mach64_private_t * dev_priv) >+{ >+ drm_mach64_descriptor_ring_t *ring = &dev_priv->ring; >+ drm_mach64_freelist_t *entry; >+ struct list_head *ptr; >+ int t; >+ >+ if (list_empty(&dev_priv->free_list)) { >+ if (list_empty(&dev_priv->pending)) { >+ DRM_ERROR >+ ("Couldn't get buffer - pending and free lists empty\n"); >+ t = 0; >+ list_for_each(ptr, &dev_priv->placeholders) { >+ t++; >+ } >+ DRM_INFO("Placeholders: %d\n", t); >+ return NULL; >+ } >+ >+ for (t = 0; t < dev_priv->usec_timeout; t++) { >+ int ret; >+ >+ ret = mach64_do_reclaim_completed(dev_priv); >+ if (ret == 0) >+ goto _freelist_entry_found; >+ if (ret < 0) >+ return NULL; >+ >+ DRM_UDELAY(1); >+ } >+ mach64_dump_ring_info(dev_priv); >+ DRM_ERROR >+ ("timeout waiting for buffers: ring head_addr: 0x%08x head: %d tail: %d\n", >+ ring->head_addr, ring->head, ring->tail); >+ return NULL; >+ } >+ >+ _freelist_entry_found: >+ ptr = dev_priv->free_list.next; >+ list_del(ptr); >+ entry = list_entry(ptr, drm_mach64_freelist_t, list); >+ entry->buf->used = 0; >+ list_add_tail(ptr, &dev_priv->placeholders); >+ return entry->buf; >+} >+ >+int mach64_freelist_put(drm_mach64_private_t * dev_priv, struct drm_buf * copy_buf) >+{ >+ struct list_head *ptr; >+ drm_mach64_freelist_t *entry; >+ >+#if MACH64_EXTRA_CHECKING 
>+ list_for_each(ptr, &dev_priv->pending) { >+ entry = list_entry(ptr, drm_mach64_freelist_t, list); >+ if (copy_buf == entry->buf) { >+ DRM_ERROR("%s: Trying to release a pending buf\n", >+ __FUNCTION__); >+ return -EFAULT; >+ } >+ } >+#endif >+ ptr = dev_priv->placeholders.next; >+ entry = list_entry(ptr, drm_mach64_freelist_t, list); >+ copy_buf->pending = 0; >+ copy_buf->used = 0; >+ entry->buf = copy_buf; >+ entry->discard = 1; >+ list_del(ptr); >+ list_add_tail(ptr, &dev_priv->free_list); >+ >+ return 0; >+} >+ >+/*@}*/ >+ >+ >+/*******************************************************************/ >+/** \name DMA buffer request and submission IOCTL handler */ >+/*@{*/ >+ >+static int mach64_dma_get_buffers(struct drm_device *dev, >+ struct drm_file *file_priv, >+ struct drm_dma * d) >+{ >+ int i; >+ struct drm_buf *buf; >+ drm_mach64_private_t *dev_priv = dev->dev_private; >+ >+ for (i = d->granted_count; i < d->request_count; i++) { >+ buf = mach64_freelist_get(dev_priv); >+#if MACH64_EXTRA_CHECKING >+ if (!buf) >+ return -EFAULT; >+#else >+ if (!buf) >+ return -EAGAIN; >+#endif >+ >+ buf->file_priv = file_priv; >+ >+ if (DRM_COPY_TO_USER(&d->request_indices[i], &buf->idx, >+ sizeof(buf->idx))) >+ return -EFAULT; >+ if (DRM_COPY_TO_USER(&d->request_sizes[i], &buf->total, >+ sizeof(buf->total))) >+ return -EFAULT; >+ >+ d->granted_count++; >+ } >+ return 0; >+} >+ >+int mach64_dma_buffers(struct drm_device *dev, void *data, >+ struct drm_file *file_priv) >+{ >+ struct drm_device_dma *dma = dev->dma; >+ struct drm_dma *d = data; >+ int ret = 0; >+ >+ LOCK_TEST_WITH_RETURN(dev, file_priv); >+ >+ /* Please don't send us buffers. >+ */ >+ if (d->send_count != 0) { >+ DRM_ERROR("Process %d trying to send %d buffers via drmDMA\n", >+ DRM_CURRENTPID, d->send_count); >+ return -EINVAL; >+ } >+ >+ /* We'll send you buffers. 
>+ */ >+ if (d->request_count < 0 || d->request_count > dma->buf_count) { >+ DRM_ERROR("Process %d trying to get %d buffers (of %d max)\n", >+ DRM_CURRENTPID, d->request_count, dma->buf_count); >+ ret = -EINVAL; >+ } >+ >+ d->granted_count = 0; >+ >+ if (d->request_count) { >+ ret = mach64_dma_get_buffers(dev, file_priv, d); >+ } >+ >+ return ret; >+} >+ >+void mach64_driver_lastclose(struct drm_device * dev) >+{ >+ mach64_do_cleanup_dma(dev); >+} >+ >+/*@}*/ >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/mach64_drm.h linux-2.6.23.i686/drivers/char/drm/mach64_drm.h >--- linux-2.6.23.i686.orig/drivers/char/drm/mach64_drm.h 1970-01-01 01:00:00.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/mach64_drm.h 2008-01-06 09:24:57.000000000 +0100 >@@ -0,0 +1,256 @@ >+/* mach64_drm.h -- Public header for the mach64 driver -*- linux-c -*- >+ * Created: Thu Nov 30 20:04:32 2000 by gareth@valinux.com >+ */ >+/* >+ * Copyright 2000 Gareth Hughes >+ * Copyright 2002 Frank C. Earl >+ * Copyright 2002-2003 Leif Delgass >+ * All Rights Reserved. >+ * >+ * Permission is hereby granted, free of charge, to any person obtaining a >+ * copy of this software and associated documentation files (the "Software"), >+ * to deal in the Software without restriction, including without limitation >+ * the rights to use, copy, modify, merge, publish, distribute, sublicense, >+ * and/or sell copies of the Software, and to permit persons to whom the >+ * Software is furnished to do so, subject to the following conditions: >+ * >+ * The above copyright notice and this permission notice (including the next >+ * paragraph) shall be included in all copies or substantial portions of the >+ * Software. >+ * >+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR >+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, >+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL >+ * THE COPYRIGHT OWNER(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER >+ * IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN >+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. >+ * >+ * Authors: >+ * Gareth Hughes <gareth@valinux.com> >+ * Frank C. Earl <fearl@airmail.net> >+ * Leif Delgass <ldelgass@retinalburn.net> >+ */ >+ >+#ifndef __MACH64_DRM_H__ >+#define __MACH64_DRM_H__ >+ >+/* WARNING: If you change any of these defines, make sure to change the >+ * defines in the Xserver file (mach64_sarea.h) >+ */ >+#ifndef __MACH64_SAREA_DEFINES__ >+#define __MACH64_SAREA_DEFINES__ >+ >+/* What needs to be changed for the current vertex buffer? >+ * GH: We're going to be pedantic about this. We want the card to do as >+ * little as possible, so let's avoid having it fetch a whole bunch of >+ * register values that don't change all that often, if at all. >+ */ >+#define MACH64_UPLOAD_DST_OFF_PITCH 0x0001 >+#define MACH64_UPLOAD_Z_OFF_PITCH 0x0002 >+#define MACH64_UPLOAD_Z_ALPHA_CNTL 0x0004 >+#define MACH64_UPLOAD_SCALE_3D_CNTL 0x0008 >+#define MACH64_UPLOAD_DP_FOG_CLR 0x0010 >+#define MACH64_UPLOAD_DP_WRITE_MASK 0x0020 >+#define MACH64_UPLOAD_DP_PIX_WIDTH 0x0040 >+#define MACH64_UPLOAD_SETUP_CNTL 0x0080 >+#define MACH64_UPLOAD_MISC 0x0100 >+#define MACH64_UPLOAD_TEXTURE 0x0200 >+#define MACH64_UPLOAD_TEX0IMAGE 0x0400 >+#define MACH64_UPLOAD_TEX1IMAGE 0x0800 >+#define MACH64_UPLOAD_CLIPRECTS 0x1000 /* handled client-side */ >+#define MACH64_UPLOAD_CONTEXT 0x00ff >+#define MACH64_UPLOAD_ALL 0x1fff >+ >+/* DMA buffer size >+ */ >+#define MACH64_BUFFER_SIZE 16384 >+ >+/* Max number of swaps allowed on the ring >+ * before the client must wait >+ */ >+#define MACH64_MAX_QUEUED_FRAMES 3U >+ >+/* Byte offsets for host blit buffer data >+ */ >+#define MACH64_HOSTDATA_BLIT_OFFSET 104 >+ >+/* Keep these small for testing. 
>+ */ >+#define MACH64_NR_SAREA_CLIPRECTS 8 >+ >+#define MACH64_CARD_HEAP 0 >+#define MACH64_AGP_HEAP 1 >+#define MACH64_NR_TEX_HEAPS 2 >+#define MACH64_NR_TEX_REGIONS 64 >+#define MACH64_LOG_TEX_GRANULARITY 16 >+ >+#define MACH64_TEX_MAXLEVELS 1 >+ >+#define MACH64_NR_CONTEXT_REGS 15 >+#define MACH64_NR_TEXTURE_REGS 4 >+ >+#endif /* __MACH64_SAREA_DEFINES__ */ >+ >+typedef struct { >+ unsigned int dst_off_pitch; >+ >+ unsigned int z_off_pitch; >+ unsigned int z_cntl; >+ unsigned int alpha_tst_cntl; >+ >+ unsigned int scale_3d_cntl; >+ >+ unsigned int sc_left_right; >+ unsigned int sc_top_bottom; >+ >+ unsigned int dp_fog_clr; >+ unsigned int dp_write_mask; >+ unsigned int dp_pix_width; >+ unsigned int dp_mix; >+ unsigned int dp_src; >+ >+ unsigned int clr_cmp_cntl; >+ unsigned int gui_traj_cntl; >+ >+ unsigned int setup_cntl; >+ >+ unsigned int tex_size_pitch; >+ unsigned int tex_cntl; >+ unsigned int secondary_tex_off; >+ unsigned int tex_offset; >+} drm_mach64_context_regs_t; >+ >+typedef struct drm_mach64_sarea { >+ /* The channel for communication of state information to the kernel >+ * on firing a vertex dma buffer. >+ */ >+ drm_mach64_context_regs_t context_state; >+ unsigned int dirty; >+ unsigned int vertsize; >+ >+ /* The current cliprects, or a subset thereof. >+ */ >+ struct drm_clip_rect boxes[MACH64_NR_SAREA_CLIPRECTS]; >+ unsigned int nbox; >+ >+ /* Counters for client-side throttling of rendering clients. >+ */ >+ unsigned int frames_queued; >+ >+ /* Texture memory LRU. >+ */ >+ struct drm_tex_region tex_list[MACH64_NR_TEX_HEAPS][MACH64_NR_TEX_REGIONS + >+ 1]; >+ unsigned int tex_age[MACH64_NR_TEX_HEAPS]; >+ int ctx_owner; >+} drm_mach64_sarea_t; >+ >+/* WARNING: If you change any of these defines, make sure to change the >+ * defines in the Xserver file (mach64_common.h) >+ */ >+ >+/* Mach64 specific ioctls >+ * The device specific ioctl range is 0x40 to 0x79. 
>+ */ >+ >+#define DRM_MACH64_INIT 0x00 >+#define DRM_MACH64_IDLE 0x01 >+#define DRM_MACH64_RESET 0x02 >+#define DRM_MACH64_SWAP 0x03 >+#define DRM_MACH64_CLEAR 0x04 >+#define DRM_MACH64_VERTEX 0x05 >+#define DRM_MACH64_BLIT 0x06 >+#define DRM_MACH64_FLUSH 0x07 >+#define DRM_MACH64_GETPARAM 0x08 >+ >+#define DRM_IOCTL_MACH64_INIT DRM_IOW( DRM_COMMAND_BASE + DRM_MACH64_INIT, drm_mach64_init_t) >+#define DRM_IOCTL_MACH64_IDLE DRM_IO( DRM_COMMAND_BASE + DRM_MACH64_IDLE ) >+#define DRM_IOCTL_MACH64_RESET DRM_IO( DRM_COMMAND_BASE + DRM_MACH64_RESET ) >+#define DRM_IOCTL_MACH64_SWAP DRM_IO( DRM_COMMAND_BASE + DRM_MACH64_SWAP ) >+#define DRM_IOCTL_MACH64_CLEAR DRM_IOW( DRM_COMMAND_BASE + DRM_MACH64_CLEAR, drm_mach64_clear_t) >+#define DRM_IOCTL_MACH64_VERTEX DRM_IOW( DRM_COMMAND_BASE + DRM_MACH64_VERTEX, drm_mach64_vertex_t) >+#define DRM_IOCTL_MACH64_BLIT DRM_IOW( DRM_COMMAND_BASE + DRM_MACH64_BLIT, drm_mach64_blit_t) >+#define DRM_IOCTL_MACH64_FLUSH DRM_IO( DRM_COMMAND_BASE + DRM_MACH64_FLUSH ) >+#define DRM_IOCTL_MACH64_GETPARAM DRM_IOWR( DRM_COMMAND_BASE + DRM_MACH64_GETPARAM, drm_mach64_getparam_t) >+ >+/* Buffer flags for clears >+ */ >+#define MACH64_FRONT 0x1 >+#define MACH64_BACK 0x2 >+#define MACH64_DEPTH 0x4 >+ >+/* Primitive types for vertex buffers >+ */ >+#define MACH64_PRIM_POINTS 0x00000000 >+#define MACH64_PRIM_LINES 0x00000001 >+#define MACH64_PRIM_LINE_LOOP 0x00000002 >+#define MACH64_PRIM_LINE_STRIP 0x00000003 >+#define MACH64_PRIM_TRIANGLES 0x00000004 >+#define MACH64_PRIM_TRIANGLE_STRIP 0x00000005 >+#define MACH64_PRIM_TRIANGLE_FAN 0x00000006 >+#define MACH64_PRIM_QUADS 0x00000007 >+#define MACH64_PRIM_QUAD_STRIP 0x00000008 >+#define MACH64_PRIM_POLYGON 0x00000009 >+ >+typedef enum _drm_mach64_dma_mode_t { >+ MACH64_MODE_DMA_ASYNC, >+ MACH64_MODE_DMA_SYNC, >+ MACH64_MODE_MMIO >+} drm_mach64_dma_mode_t; >+ >+typedef struct drm_mach64_init { >+ enum { >+ DRM_MACH64_INIT_DMA = 0x01, >+ DRM_MACH64_CLEANUP_DMA = 0x02 >+ } func; >+ >+ unsigned long 
sarea_priv_offset; >+ int is_pci; >+ drm_mach64_dma_mode_t dma_mode; >+ >+ unsigned int fb_bpp; >+ unsigned int front_offset, front_pitch; >+ unsigned int back_offset, back_pitch; >+ >+ unsigned int depth_bpp; >+ unsigned int depth_offset, depth_pitch; >+ >+ unsigned long fb_offset; >+ unsigned long mmio_offset; >+ unsigned long ring_offset; >+ unsigned long buffers_offset; >+ unsigned long agp_textures_offset; >+} drm_mach64_init_t; >+ >+typedef struct drm_mach64_clear { >+ unsigned int flags; >+ int x, y, w, h; >+ unsigned int clear_color; >+ unsigned int clear_depth; >+} drm_mach64_clear_t; >+ >+typedef struct drm_mach64_vertex { >+ int prim; >+ void *buf; /* Address of vertex buffer */ >+ unsigned long used; /* Number of bytes in buffer */ >+ int discard; /* Client finished with buffer? */ >+} drm_mach64_vertex_t; >+ >+typedef struct drm_mach64_blit { >+ void *buf; >+ int pitch; >+ int offset; >+ int format; >+ unsigned short x, y; >+ unsigned short width, height; >+} drm_mach64_blit_t; >+ >+typedef struct drm_mach64_getparam { >+ enum { >+ MACH64_PARAM_FRAMES_QUEUED = 0x01, >+ MACH64_PARAM_IRQ_NR = 0x02 >+ } param; >+ void *value; >+} drm_mach64_getparam_t; >+ >+#endif >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/mach64_drv.c linux-2.6.23.i686/drivers/char/drm/mach64_drv.c >--- linux-2.6.23.i686.orig/drivers/char/drm/mach64_drv.c 1970-01-01 01:00:00.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/mach64_drv.c 2008-01-06 09:24:57.000000000 +0100 >@@ -0,0 +1,103 @@ >+/* mach64_drv.c -- mach64 (Rage Pro) driver -*- linux-c -*- >+ * Created: Fri Nov 24 18:34:32 2000 by gareth@valinux.com >+ * >+ * Copyright 2000 Gareth Hughes >+ * All Rights Reserved. 
>+ * >+ * Permission is hereby granted, free of charge, to any person obtaining a >+ * copy of this software and associated documentation files (the "Software"), >+ * to deal in the Software without restriction, including without limitation >+ * the rights to use, copy, modify, merge, publish, distribute, sublicense, >+ * and/or sell copies of the Software, and to permit persons to whom the >+ * Software is furnished to do so, subject to the following conditions: >+ * >+ * The above copyright notice and this permission notice (including the next >+ * paragraph) shall be included in all copies or substantial portions of the >+ * Software. >+ * >+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR >+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, >+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL >+ * GARETH HUGHES BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER >+ * IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN >+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
>+ * >+ * Authors: >+ * Gareth Hughes <gareth@valinux.com> >+ * Leif Delgass <ldelgass@retinalburn.net> >+ */ >+ >+#include "drmP.h" >+#include "drm.h" >+#include "mach64_drm.h" >+#include "mach64_drv.h" >+ >+#include "drm_pciids.h" >+ >+static struct pci_device_id pciidlist[] = { >+ mach64_PCI_IDS >+}; >+ >+static int probe(struct pci_dev *pdev, const struct pci_device_id *ent); >+static struct drm_driver driver = { >+ .driver_features = >+ DRIVER_USE_AGP | DRIVER_USE_MTRR | DRIVER_PCI_DMA | DRIVER_HAVE_DMA >+ | DRIVER_HAVE_IRQ | DRIVER_IRQ_SHARED | DRIVER_IRQ_VBL, >+ .lastclose = mach64_driver_lastclose, >+ .vblank_wait = mach64_driver_vblank_wait, >+ .irq_preinstall = mach64_driver_irq_preinstall, >+ .irq_postinstall = mach64_driver_irq_postinstall, >+ .irq_uninstall = mach64_driver_irq_uninstall, >+ .irq_handler = mach64_driver_irq_handler, >+ .reclaim_buffers = drm_core_reclaim_buffers, >+ .get_map_ofs = drm_core_get_map_ofs, >+ .get_reg_ofs = drm_core_get_reg_ofs, >+ .ioctls = mach64_ioctls, >+ .dma_ioctl = mach64_dma_buffers, >+ .fops = { >+ .owner = THIS_MODULE, >+ .open = drm_open, >+ .release = drm_release, >+ .ioctl = drm_ioctl, >+ .mmap = drm_mmap, >+ .poll = drm_poll, >+ .fasync = drm_fasync, >+ }, >+ .pci_driver = { >+ .name = DRIVER_NAME, >+ .id_table = pciidlist, >+ .probe = probe, >+ .remove = __devexit_p(drm_cleanup_pci), >+ }, >+ >+ .name = DRIVER_NAME, >+ .desc = DRIVER_DESC, >+ .date = DRIVER_DATE, >+ .major = DRIVER_MAJOR, >+ .minor = DRIVER_MINOR, >+ .patchlevel = DRIVER_PATCHLEVEL, >+}; >+ >+static int probe(struct pci_dev *pdev, const struct pci_device_id *ent) >+{ >+ return drm_get_dev(pdev, ent, &driver); >+} >+ >+ >+static int __init mach64_init(void) >+{ >+ driver.num_ioctls = mach64_max_ioctl; >+ return drm_init(&driver, pciidlist); >+} >+ >+static void __exit mach64_exit(void) >+{ >+ drm_exit(&driver); >+} >+ >+module_init(mach64_init); >+module_exit(mach64_exit); >+ >+MODULE_AUTHOR(DRIVER_AUTHOR); >+MODULE_DESCRIPTION(DRIVER_DESC); 
>+MODULE_LICENSE("GPL and additional rights"); >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/mach64_drv.h linux-2.6.23.i686/drivers/char/drm/mach64_drv.h >--- linux-2.6.23.i686.orig/drivers/char/drm/mach64_drv.h 1970-01-01 01:00:00.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/mach64_drv.h 2008-01-06 09:24:57.000000000 +0100 >@@ -0,0 +1,869 @@ >+/* mach64_drv.h -- Private header for mach64 driver -*- linux-c -*- >+ * Created: Fri Nov 24 22:07:58 2000 by gareth@valinux.com >+ */ >+/* >+ * Copyright 2000 Gareth Hughes >+ * Copyright 2002 Frank C. Earl >+ * Copyright 2002-2003 Leif Delgass >+ * All Rights Reserved. >+ * >+ * Permission is hereby granted, free of charge, to any person obtaining a >+ * copy of this software and associated documentation files (the "Software"), >+ * to deal in the Software without restriction, including without limitation >+ * the rights to use, copy, modify, merge, publish, distribute, sublicense, >+ * and/or sell copies of the Software, and to permit persons to whom the >+ * Software is furnished to do so, subject to the following conditions: >+ * >+ * The above copyright notice and this permission notice (including the next >+ * paragraph) shall be included in all copies or substantial portions of the >+ * Software. >+ * >+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR >+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, >+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL >+ * THE COPYRIGHT OWNER(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER >+ * IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN >+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. >+ * >+ * Authors: >+ * Gareth Hughes <gareth@valinux.com> >+ * Frank C. 
Earl <fearl@airmail.net> >+ * Leif Delgass <ldelgass@retinalburn.net> >+ * José Fonseca <j_r_fonseca@yahoo.co.uk> >+ */ >+ >+#ifndef __MACH64_DRV_H__ >+#define __MACH64_DRV_H__ >+ >+/* General customization: >+ */ >+ >+#define DRIVER_AUTHOR "Gareth Hughes, Leif Delgass, José Fonseca" >+ >+#define DRIVER_NAME "mach64" >+#define DRIVER_DESC "DRM module for the ATI Rage Pro" >+#define DRIVER_DATE "20060718" >+ >+#define DRIVER_MAJOR 2 >+#define DRIVER_MINOR 0 >+#define DRIVER_PATCHLEVEL 0 >+ >+/* FIXME: remove these when not needed */ >+/* Development driver options */ >+#define MACH64_EXTRA_CHECKING 0 /* Extra sanity checks for DMA/freelist management */ >+#define MACH64_VERBOSE 0 /* Verbose debugging output */ >+ >+typedef struct drm_mach64_freelist { >+ struct list_head list; /* List pointers for free_list, placeholders, or pending list */ >+ struct drm_buf *buf; /* Pointer to the buffer */ >+ int discard; /* This flag is set when we're done (re)using a buffer */ >+ u32 ring_ofs; /* dword offset in ring of last descriptor for this buffer */ >+} drm_mach64_freelist_t; >+ >+typedef struct drm_mach64_descriptor_ring { >+ void *start; /* write pointer (cpu address) to start of descriptor ring */ >+ u32 start_addr; /* bus address of beginning of descriptor ring */ >+ int size; /* size of ring in bytes */ >+ >+ u32 head_addr; /* bus address of descriptor ring head */ >+ u32 head; /* dword offset of descriptor ring head */ >+ u32 tail; /* dword offset of descriptor ring tail */ >+ u32 tail_mask; /* mask used to wrap ring */ >+ int space; /* number of free bytes in ring */ >+} drm_mach64_descriptor_ring_t; >+ >+typedef struct drm_mach64_private { >+ drm_mach64_sarea_t *sarea_priv; >+ >+ int is_pci; >+ drm_mach64_dma_mode_t driver_mode; /* Async DMA, sync DMA, or MMIO */ >+ >+ int usec_timeout; /* Timeout for the wait functions */ >+ >+ drm_mach64_descriptor_ring_t ring; /* DMA descriptor table (ring buffer) */ >+ int ring_running; /* Is bus mastering enabled? */ >+ >+ 
struct list_head free_list; /* Free-list head */ >+ struct list_head placeholders; /* Placeholder list for buffers held by clients */ >+ struct list_head pending; /* Buffers pending completion */ >+ >+ u32 frame_ofs[MACH64_MAX_QUEUED_FRAMES]; /* dword ring offsets of most recent frame swaps */ >+ >+ unsigned int fb_bpp; >+ unsigned int front_offset, front_pitch; >+ unsigned int back_offset, back_pitch; >+ >+ unsigned int depth_bpp; >+ unsigned int depth_offset, depth_pitch; >+ >+ u32 front_offset_pitch; >+ u32 back_offset_pitch; >+ u32 depth_offset_pitch; >+ >+ drm_local_map_t *sarea; >+ drm_local_map_t *fb; >+ drm_local_map_t *mmio; >+ drm_local_map_t *ring_map; >+ drm_local_map_t *dev_buffers; /* this is a pointer to a structure in dev */ >+ drm_local_map_t *agp_textures; >+} drm_mach64_private_t; >+ >+extern struct drm_ioctl_desc mach64_ioctls[]; >+extern int mach64_max_ioctl; >+ >+ /* mach64_dma.c */ >+extern int mach64_dma_init(struct drm_device *dev, void *data, >+ struct drm_file *file_priv); >+extern int mach64_dma_idle(struct drm_device *dev, void *data, >+ struct drm_file *file_priv); >+extern int mach64_dma_flush(struct drm_device *dev, void *data, >+ struct drm_file *file_priv); >+extern int mach64_engine_reset(struct drm_device *dev, void *data, >+ struct drm_file *file_priv); >+extern int mach64_dma_buffers(struct drm_device *dev, void *data, >+ struct drm_file *file_priv); >+extern void mach64_driver_lastclose(struct drm_device * dev); >+ >+extern int mach64_init_freelist(struct drm_device * dev); >+extern void mach64_destroy_freelist(struct drm_device * dev); >+extern struct drm_buf *mach64_freelist_get(drm_mach64_private_t * dev_priv); >+extern int mach64_freelist_put(drm_mach64_private_t * dev_priv, >+ struct drm_buf * copy_buf); >+ >+extern int mach64_do_wait_for_fifo(drm_mach64_private_t * dev_priv, >+ int entries); >+extern int mach64_do_wait_for_idle(drm_mach64_private_t * dev_priv); >+extern int mach64_wait_ring(drm_mach64_private_t * 
dev_priv, int n); >+extern int mach64_do_dispatch_pseudo_dma(drm_mach64_private_t * dev_priv); >+extern int mach64_do_release_used_buffers(drm_mach64_private_t * dev_priv); >+extern void mach64_dump_engine_info(drm_mach64_private_t * dev_priv); >+extern void mach64_dump_ring_info(drm_mach64_private_t * dev_priv); >+extern int mach64_do_engine_reset(drm_mach64_private_t * dev_priv); >+ >+extern int mach64_add_buf_to_ring(drm_mach64_private_t *dev_priv, >+ drm_mach64_freelist_t *_entry); >+extern int mach64_add_hostdata_buf_to_ring(drm_mach64_private_t *dev_priv, >+ drm_mach64_freelist_t *_entry); >+ >+extern int mach64_do_dma_idle(drm_mach64_private_t * dev_priv); >+extern int mach64_do_dma_flush(drm_mach64_private_t * dev_priv); >+extern int mach64_do_cleanup_dma(struct drm_device * dev); >+ >+ /* mach64_state.c */ >+extern int mach64_dma_clear(struct drm_device *dev, void *data, >+ struct drm_file *file_priv); >+extern int mach64_dma_swap(struct drm_device *dev, void *data, >+ struct drm_file *file_priv); >+extern int mach64_dma_vertex(struct drm_device *dev, void *data, >+ struct drm_file *file_priv); >+extern int mach64_dma_blit(struct drm_device *dev, void *data, >+ struct drm_file *file_priv); >+extern int mach64_get_param(struct drm_device *dev, void *data, >+ struct drm_file *file_priv); >+extern int mach64_driver_vblank_wait(struct drm_device * dev, >+ unsigned int *sequence); >+ >+extern irqreturn_t mach64_driver_irq_handler(DRM_IRQ_ARGS); >+extern void mach64_driver_irq_preinstall(struct drm_device * dev); >+extern void mach64_driver_irq_postinstall(struct drm_device * dev); >+extern void mach64_driver_irq_uninstall(struct drm_device * dev); >+ >+/* ================================================================ >+ * Registers >+ */ >+ >+#define MACH64_AGP_BASE 0x0148 >+#define MACH64_AGP_CNTL 0x014c >+#define MACH64_ALPHA_TST_CNTL 0x0550 >+ >+#define MACH64_DSP_CONFIG 0x0420 >+#define MACH64_DSP_ON_OFF 0x0424 >+#define MACH64_EXT_MEM_CNTL 0x04ac 
>+#define MACH64_GEN_TEST_CNTL 0x04d0
>+#define MACH64_HW_DEBUG 0x047c
>+#define MACH64_MEM_ADDR_CONFIG 0x0434
>+#define MACH64_MEM_BUF_CNTL 0x042c
>+#define MACH64_MEM_CNTL 0x04b0
>+
>+#define MACH64_BM_ADDR 0x0648
>+#define MACH64_BM_COMMAND 0x0188
>+#define MACH64_BM_DATA 0x0648
>+#define MACH64_BM_FRAME_BUF_OFFSET 0x0180
>+#define MACH64_BM_GUI_TABLE 0x01b8
>+#define MACH64_BM_GUI_TABLE_CMD 0x064c
>+# define MACH64_CIRCULAR_BUF_SIZE_16KB (0 << 0)
>+# define MACH64_CIRCULAR_BUF_SIZE_32KB (1 << 0)
>+# define MACH64_CIRCULAR_BUF_SIZE_64KB (2 << 0)
>+# define MACH64_CIRCULAR_BUF_SIZE_128KB (3 << 0)
>+# define MACH64_LAST_DESCRIPTOR (1 << 31)
>+#define MACH64_BM_HOSTDATA 0x0644
>+#define MACH64_BM_STATUS 0x018c
>+#define MACH64_BM_SYSTEM_MEM_ADDR 0x0184
>+#define MACH64_BM_SYSTEM_TABLE 0x01bc
>+#define MACH64_BUS_CNTL 0x04a0
>+# define MACH64_BUS_MSTR_RESET (1 << 1)
>+# define MACH64_BUS_APER_REG_DIS (1 << 4)
>+# define MACH64_BUS_FLUSH_BUF (1 << 2)
>+# define MACH64_BUS_MASTER_DIS (1 << 6)
>+# define MACH64_BUS_EXT_REG_EN (1 << 27)
>+
>+#define MACH64_CLR_CMP_CLR 0x0700
>+#define MACH64_CLR_CMP_CNTL 0x0708
>+#define MACH64_CLR_CMP_MASK 0x0704
>+#define MACH64_CONFIG_CHIP_ID 0x04e0
>+#define MACH64_CONFIG_CNTL 0x04dc
>+#define MACH64_CONFIG_STAT0 0x04e4
>+#define MACH64_CONFIG_STAT1 0x0494
>+#define MACH64_CONFIG_STAT2 0x0498
>+#define MACH64_CONTEXT_LOAD_CNTL 0x072c
>+#define MACH64_CONTEXT_MASK 0x0720
>+#define MACH64_COMPOSITE_SHADOW_ID 0x0798
>+#define MACH64_CRC_SIG 0x04e8
>+#define MACH64_CUSTOM_MACRO_CNTL 0x04d4
>+
>+#define MACH64_DP_BKGD_CLR 0x06c0
>+#define MACH64_DP_FOG_CLR 0x06c4
>+#define MACH64_DP_FGRD_BKGD_CLR 0x06e0
>+#define MACH64_DP_FRGD_CLR 0x06c4
>+#define MACH64_DP_FGRD_CLR_MIX 0x06dc
>+
>+#define MACH64_DP_MIX 0x06d4
>+# define BKGD_MIX_NOT_D (0 << 0)
>+# define BKGD_MIX_ZERO (1 << 0)
>+# define BKGD_MIX_ONE (2 << 0)
>+# define MACH64_BKGD_MIX_D (3 << 0)
>+# define BKGD_MIX_NOT_S (4 << 0)
>+# define BKGD_MIX_D_XOR_S (5 << 0)
>+# define BKGD_MIX_NOT_D_XOR_S (6 << 0)
>+# define MACH64_BKGD_MIX_S (7 << 0)
>+# define BKGD_MIX_NOT_D_OR_NOT_S (8 << 0)
>+# define BKGD_MIX_D_OR_NOT_S (9 << 0)
>+# define BKGD_MIX_NOT_D_OR_S (10 << 0)
>+# define BKGD_MIX_D_OR_S (11 << 0)
>+# define BKGD_MIX_D_AND_S (12 << 0)
>+# define BKGD_MIX_NOT_D_AND_S (13 << 0)
>+# define BKGD_MIX_D_AND_NOT_S (14 << 0)
>+# define BKGD_MIX_NOT_D_AND_NOT_S (15 << 0)
>+# define BKGD_MIX_D_PLUS_S_DIV2 (23 << 0)
>+# define FRGD_MIX_NOT_D (0 << 16)
>+# define FRGD_MIX_ZERO (1 << 16)
>+# define FRGD_MIX_ONE (2 << 16)
>+# define FRGD_MIX_D (3 << 16)
>+# define FRGD_MIX_NOT_S (4 << 16)
>+# define FRGD_MIX_D_XOR_S (5 << 16)
>+# define FRGD_MIX_NOT_D_XOR_S (6 << 16)
>+# define MACH64_FRGD_MIX_S (7 << 16)
>+# define FRGD_MIX_NOT_D_OR_NOT_S (8 << 16)
>+# define FRGD_MIX_D_OR_NOT_S (9 << 16)
>+# define FRGD_MIX_NOT_D_OR_S (10 << 16)
>+# define FRGD_MIX_D_OR_S (11 << 16)
>+# define FRGD_MIX_D_AND_S (12 << 16)
>+# define FRGD_MIX_NOT_D_AND_S (13 << 16)
>+# define FRGD_MIX_D_AND_NOT_S (14 << 16)
>+# define FRGD_MIX_NOT_D_AND_NOT_S (15 << 16)
>+# define FRGD_MIX_D_PLUS_S_DIV2 (23 << 16)
>+
>+#define MACH64_DP_PIX_WIDTH 0x06d0
>+# define MACH64_HOST_TRIPLE_ENABLE (1 << 13)
>+# define MACH64_BYTE_ORDER_MSB_TO_LSB (0 << 24)
>+# define MACH64_BYTE_ORDER_LSB_TO_MSB (1 << 24)
>+
>+#define MACH64_DP_SRC 0x06d8
>+# define MACH64_BKGD_SRC_BKGD_CLR (0 << 0)
>+# define MACH64_BKGD_SRC_FRGD_CLR (1 << 0)
>+# define MACH64_BKGD_SRC_HOST (2 << 0)
>+# define MACH64_BKGD_SRC_BLIT (3 << 0)
>+# define MACH64_BKGD_SRC_PATTERN (4 << 0)
>+# define MACH64_BKGD_SRC_3D (5 << 0)
>+# define MACH64_FRGD_SRC_BKGD_CLR (0 << 8)
>+# define MACH64_FRGD_SRC_FRGD_CLR (1 << 8)
>+# define MACH64_FRGD_SRC_HOST (2 << 8)
>+# define MACH64_FRGD_SRC_BLIT (3 << 8)
>+# define MACH64_FRGD_SRC_PATTERN (4 << 8)
>+# define MACH64_FRGD_SRC_3D (5 << 8)
>+# define MACH64_MONO_SRC_ONE (0 << 16)
>+# define MACH64_MONO_SRC_PATTERN (1 << 16)
>+# define MACH64_MONO_SRC_HOST (2 << 16)
>+# define MACH64_MONO_SRC_BLIT (3 << 16)
>+
>+#define MACH64_DP_WRITE_MASK 0x06c8
>+
>+#define MACH64_DST_CNTL 0x0530
>+# define MACH64_DST_X_RIGHT_TO_LEFT (0 << 0)
>+# define MACH64_DST_X_LEFT_TO_RIGHT (1 << 0)
>+# define MACH64_DST_Y_BOTTOM_TO_TOP (0 << 1)
>+# define MACH64_DST_Y_TOP_TO_BOTTOM (1 << 1)
>+# define MACH64_DST_X_MAJOR (0 << 2)
>+# define MACH64_DST_Y_MAJOR (1 << 2)
>+# define MACH64_DST_X_TILE (1 << 3)
>+# define MACH64_DST_Y_TILE (1 << 4)
>+# define MACH64_DST_LAST_PEL (1 << 5)
>+# define MACH64_DST_POLYGON_ENABLE (1 << 6)
>+# define MACH64_DST_24_ROTATION_ENABLE (1 << 7)
>+
>+#define MACH64_DST_HEIGHT_WIDTH 0x0518
>+#define MACH64_DST_OFF_PITCH 0x0500
>+#define MACH64_DST_WIDTH_HEIGHT 0x06ec
>+#define MACH64_DST_X_Y 0x06e8
>+#define MACH64_DST_Y_X 0x050c
>+
>+#define MACH64_FIFO_STAT 0x0710
>+# define MACH64_FIFO_SLOT_MASK 0x0000ffff
>+# define MACH64_FIFO_ERR (1 << 31)
>+
>+#define MACH64_GEN_TEST_CNTL 0x04d0
>+# define MACH64_GUI_ENGINE_ENABLE (1 << 8)
>+#define MACH64_GUI_CMDFIFO_DEBUG 0x0170
>+#define MACH64_GUI_CMDFIFO_DATA 0x0174
>+#define MACH64_GUI_CNTL 0x0178
>+# define MACH64_CMDFIFO_SIZE_MASK 0x00000003ul
>+# define MACH64_CMDFIFO_SIZE_192 0x00000000ul
>+# define MACH64_CMDFIFO_SIZE_128 0x00000001ul
>+# define MACH64_CMDFIFO_SIZE_64 0x00000002ul
>+#define MACH64_GUI_STAT 0x0738
>+# define MACH64_GUI_ACTIVE (1 << 0)
>+#define MACH64_GUI_TRAJ_CNTL 0x0730
>+
>+#define MACH64_HOST_CNTL 0x0640
>+#define MACH64_HOST_DATA0 0x0600
>+
>+#define MACH64_ONE_OVER_AREA 0x029c
>+#define MACH64_ONE_OVER_AREA_UC 0x0300
>+
>+#define MACH64_PAT_REG0 0x0680
>+#define MACH64_PAT_REG1 0x0684
>+
>+#define MACH64_SC_LEFT 0x06a0
>+#define MACH64_SC_RIGHT 0x06a4
>+#define MACH64_SC_LEFT_RIGHT 0x06a8
>+#define MACH64_SC_TOP 0x06ac
>+#define MACH64_SC_BOTTOM 0x06b0
>+#define MACH64_SC_TOP_BOTTOM 0x06b4
>+
>+#define MACH64_SCALE_3D_CNTL 0x05fc
>+#define MACH64_SCRATCH_REG0 0x0480
>+#define MACH64_SCRATCH_REG1 0x0484
>+#define MACH64_SECONDARY_TEX_OFF 0x0778
>+#define MACH64_SETUP_CNTL 0x0304
>+#define MACH64_SRC_CNTL 0x05b4
>+# define MACH64_SRC_BM_ENABLE (1 << 8)
>+# define MACH64_SRC_BM_SYNC (1 << 9)
>+# define MACH64_SRC_BM_OP_FRAME_TO_SYSTEM (0 << 10)
>+# define MACH64_SRC_BM_OP_SYSTEM_TO_FRAME (1 << 10)
>+# define MACH64_SRC_BM_OP_REG_TO_SYSTEM (2 << 10)
>+# define MACH64_SRC_BM_OP_SYSTEM_TO_REG (3 << 10)
>+#define MACH64_SRC_HEIGHT1 0x0594
>+#define MACH64_SRC_HEIGHT2 0x05ac
>+#define MACH64_SRC_HEIGHT1_WIDTH1 0x0598
>+#define MACH64_SRC_HEIGHT2_WIDTH2 0x05b0
>+#define MACH64_SRC_OFF_PITCH 0x0580
>+#define MACH64_SRC_WIDTH1 0x0590
>+#define MACH64_SRC_Y_X 0x058c
>+
>+#define MACH64_TEX_0_OFF 0x05c0
>+#define MACH64_TEX_CNTL 0x0774
>+#define MACH64_TEX_SIZE_PITCH 0x0770
>+#define MACH64_TIMER_CONFIG 0x0428
>+
>+#define MACH64_VERTEX_1_ARGB 0x0254
>+#define MACH64_VERTEX_1_S 0x0240
>+#define MACH64_VERTEX_1_SECONDARY_S 0x0328
>+#define MACH64_VERTEX_1_SECONDARY_T 0x032c
>+#define MACH64_VERTEX_1_SECONDARY_W 0x0330
>+#define MACH64_VERTEX_1_SPEC_ARGB 0x024c
>+#define MACH64_VERTEX_1_T 0x0244
>+#define MACH64_VERTEX_1_W 0x0248
>+#define MACH64_VERTEX_1_X_Y 0x0258
>+#define MACH64_VERTEX_1_Z 0x0250
>+#define MACH64_VERTEX_2_ARGB 0x0274
>+#define MACH64_VERTEX_2_S 0x0260
>+#define MACH64_VERTEX_2_SECONDARY_S 0x0334
>+#define MACH64_VERTEX_2_SECONDARY_T 0x0338
>+#define MACH64_VERTEX_2_SECONDARY_W 0x033c
>+#define MACH64_VERTEX_2_SPEC_ARGB 0x026c
>+#define MACH64_VERTEX_2_T 0x0264
>+#define MACH64_VERTEX_2_W 0x0268
>+#define MACH64_VERTEX_2_X_Y 0x0278
>+#define MACH64_VERTEX_2_Z 0x0270
>+#define MACH64_VERTEX_3_ARGB 0x0294
>+#define MACH64_VERTEX_3_S 0x0280
>+#define MACH64_VERTEX_3_SECONDARY_S 0x02a0
>+#define MACH64_VERTEX_3_SECONDARY_T 0x02a4
>+#define MACH64_VERTEX_3_SECONDARY_W 0x02a8
>+#define MACH64_VERTEX_3_SPEC_ARGB 0x028c
>+#define MACH64_VERTEX_3_T 0x0284
>+#define MACH64_VERTEX_3_W 0x0288
>+#define MACH64_VERTEX_3_X_Y 0x0298
>+#define MACH64_VERTEX_3_Z 0x0290
>+
>+#define MACH64_Z_CNTL 0x054c
>+#define MACH64_Z_OFF_PITCH 0x0548
>+
>+#define MACH64_CRTC_VLINE_CRNT_VLINE 0x0410
>+# define MACH64_CRTC_VLINE_MASK 0x000007ff
>+# define MACH64_CRTC_CRNT_VLINE_MASK 0x07ff0000
>+#define MACH64_CRTC_OFF_PITCH 0x0414
>+#define MACH64_CRTC_INT_CNTL 0x0418
>+# define MACH64_CRTC_VBLANK (1 << 0)
>+# define MACH64_CRTC_VBLANK_INT_EN (1 << 1)
>+# define MACH64_CRTC_VBLANK_INT (1 << 2)
>+# define MACH64_CRTC_VLINE_INT_EN (1 << 3)
>+# define MACH64_CRTC_VLINE_INT (1 << 4)
>+# define MACH64_CRTC_VLINE_SYNC (1 << 5) /* 0=even, 1=odd */
>+# define MACH64_CRTC_FRAME (1 << 6) /* 0=even, 1=odd */
>+# define MACH64_CRTC_SNAPSHOT_INT_EN (1 << 7)
>+# define MACH64_CRTC_SNAPSHOT_INT (1 << 8)
>+# define MACH64_CRTC_I2C_INT_EN (1 << 9)
>+# define MACH64_CRTC_I2C_INT (1 << 10)
>+# define MACH64_CRTC2_VBLANK (1 << 11) /* LT Pro */
>+# define MACH64_CRTC2_VBLANK_INT_EN (1 << 12) /* LT Pro */
>+# define MACH64_CRTC2_VBLANK_INT (1 << 13) /* LT Pro */
>+# define MACH64_CRTC2_VLINE_INT_EN (1 << 14) /* LT Pro */
>+# define MACH64_CRTC2_VLINE_INT (1 << 15) /* LT Pro */
>+# define MACH64_CRTC_CAPBUF0_INT_EN (1 << 16)
>+# define MACH64_CRTC_CAPBUF0_INT (1 << 17)
>+# define MACH64_CRTC_CAPBUF1_INT_EN (1 << 18)
>+# define MACH64_CRTC_CAPBUF1_INT (1 << 19)
>+# define MACH64_CRTC_OVERLAY_EOF_INT_EN (1 << 20)
>+# define MACH64_CRTC_OVERLAY_EOF_INT (1 << 21)
>+# define MACH64_CRTC_ONESHOT_CAP_INT_EN (1 << 22)
>+# define MACH64_CRTC_ONESHOT_CAP_INT (1 << 23)
>+# define MACH64_CRTC_BUSMASTER_EOL_INT_EN (1 << 24)
>+# define MACH64_CRTC_BUSMASTER_EOL_INT (1 << 25)
>+# define MACH64_CRTC_GP_INT_EN (1 << 26)
>+# define MACH64_CRTC_GP_INT (1 << 27)
>+# define MACH64_CRTC2_VLINE_SYNC (1 << 28) /* LT Pro */ /* 0=even, 1=odd */
>+# define MACH64_CRTC_SNAPSHOT2_INT_EN (1 << 29) /* LT Pro */
>+# define MACH64_CRTC_SNAPSHOT2_INT (1 << 30) /* LT Pro */
>+# define MACH64_CRTC_VBLANK2_INT (1 << 31)
>+# define MACH64_CRTC_INT_ENS \
>+ ( \
>+ MACH64_CRTC_VBLANK_INT_EN | \
>+ MACH64_CRTC_VLINE_INT_EN | \
>+ MACH64_CRTC_SNAPSHOT_INT_EN | \
>+ MACH64_CRTC_I2C_INT_EN | \
>+ MACH64_CRTC2_VBLANK_INT_EN | \
>+ MACH64_CRTC2_VLINE_INT_EN | \
>+ MACH64_CRTC_CAPBUF0_INT_EN | \
>+ MACH64_CRTC_CAPBUF1_INT_EN | \
>+ MACH64_CRTC_OVERLAY_EOF_INT_EN | \
>+ MACH64_CRTC_ONESHOT_CAP_INT_EN | \
>+ MACH64_CRTC_BUSMASTER_EOL_INT_EN | \
>+ MACH64_CRTC_GP_INT_EN | \
>+ MACH64_CRTC_SNAPSHOT2_INT_EN | \
>+ 0 \
>+ )
>+# define MACH64_CRTC_INT_ACKS \
>+ ( \
>+ MACH64_CRTC_VBLANK_INT | \
>+ MACH64_CRTC_VLINE_INT | \
>+ MACH64_CRTC_SNAPSHOT_INT | \
>+ MACH64_CRTC_I2C_INT | \
>+ MACH64_CRTC2_VBLANK_INT | \
>+ MACH64_CRTC2_VLINE_INT | \
>+ MACH64_CRTC_CAPBUF0_INT | \
>+ MACH64_CRTC_CAPBUF1_INT | \
>+ MACH64_CRTC_OVERLAY_EOF_INT | \
>+ MACH64_CRTC_ONESHOT_CAP_INT | \
>+ MACH64_CRTC_BUSMASTER_EOL_INT | \
>+ MACH64_CRTC_GP_INT | \
>+ MACH64_CRTC_SNAPSHOT2_INT | \
>+ MACH64_CRTC_VBLANK2_INT | \
>+ 0 \
>+ )
>+
>+#define MACH64_DATATYPE_CI8 2
>+#define MACH64_DATATYPE_ARGB1555 3
>+#define MACH64_DATATYPE_RGB565 4
>+#define MACH64_DATATYPE_ARGB8888 6
>+#define MACH64_DATATYPE_RGB332 7
>+#define MACH64_DATATYPE_Y8 8
>+#define MACH64_DATATYPE_RGB8 9
>+#define MACH64_DATATYPE_VYUY422 11
>+#define MACH64_DATATYPE_YVYU422 12
>+#define MACH64_DATATYPE_AYUV444 14
>+#define MACH64_DATATYPE_ARGB4444 15
>+
>+#define MACH64_READ(reg) DRM_READ32(dev_priv->mmio, (reg) )
>+#define MACH64_WRITE(reg,val) DRM_WRITE32(dev_priv->mmio, (reg), (val) )
>+
>+#define DWMREG0 0x0400
>+#define DWMREG0_END 0x07ff
>+#define DWMREG1 0x0000
>+#define DWMREG1_END 0x03ff
>+
>+#define ISREG0(r) (((r) >= DWMREG0) && ((r) <= DWMREG0_END))
>+#define DMAREG0(r) (((r) - DWMREG0) >> 2)
>+#define DMAREG1(r) ((((r) - DWMREG1) >> 2 ) | 0x0100)
>+#define DMAREG(r) (ISREG0(r) ? DMAREG0(r) : DMAREG1(r))
>+
>+#define MMREG0 0x0000
>+#define MMREG0_END 0x00ff
>+
>+#define ISMMREG0(r) (((r) >= MMREG0) && ((r) <= MMREG0_END))
>+#define MMSELECT0(r) (((r) << 2) + DWMREG0)
>+#define MMSELECT1(r) (((((r) & 0xff) << 2) + DWMREG1))
>+#define MMSELECT(r) (ISMMREG0(r) ? MMSELECT0(r) : MMSELECT1(r))
>+
>+/* ================================================================
>+ * DMA constants
>+ */
>+
>+/* DMA descriptor field indices:
>+ * The descriptor fields are loaded into the read-only
>+ * BM_* system bus master registers during a bus-master operation
>+ */
>+#define MACH64_DMA_FRAME_BUF_OFFSET 0 /* BM_FRAME_BUF_OFFSET */
>+#define MACH64_DMA_SYS_MEM_ADDR 1 /* BM_SYSTEM_MEM_ADDR */
>+#define MACH64_DMA_COMMAND 2 /* BM_COMMAND */
>+#define MACH64_DMA_RESERVED 3 /* BM_STATUS */
>+
>+/* BM_COMMAND descriptor field flags */
>+#define MACH64_DMA_HOLD_OFFSET (1<<30) /* Don't increment DMA_FRAME_BUF_OFFSET */
>+#define MACH64_DMA_EOL (1<<31) /* End of descriptor list flag */
>+
>+#define MACH64_DMA_CHUNKSIZE 0x1000 /* 4kB per DMA descriptor */
>+#define MACH64_APERTURE_OFFSET 0x7ff800 /* frame-buffer offset for gui-masters */
>+
>+/* ================================================================
>+ * Ring operations
>+ *
>+ * Since the Mach64 bus master engine requires polling, these functions end
>+ * up being called frequently, hence being inline.
>+ */
>+
>+static __inline__ void mach64_ring_start(drm_mach64_private_t * dev_priv)
>+{
>+    drm_mach64_descriptor_ring_t *ring = &dev_priv->ring;
>+
>+    DRM_DEBUG("%s: head_addr: 0x%08x head: %d tail: %d space: %d\n",
>+              __FUNCTION__,
>+              ring->head_addr, ring->head, ring->tail, ring->space);
>+
>+    if (mach64_do_wait_for_idle(dev_priv) < 0) {
>+        mach64_do_engine_reset(dev_priv);
>+    }
>+
>+    if (dev_priv->driver_mode != MACH64_MODE_MMIO) {
>+        /* enable bus mastering and block 1 registers */
>+        MACH64_WRITE(MACH64_BUS_CNTL,
>+                     (MACH64_READ(MACH64_BUS_CNTL) &
>+                      ~MACH64_BUS_MASTER_DIS)
>+                     | MACH64_BUS_EXT_REG_EN);
>+        mach64_do_wait_for_idle(dev_priv);
>+    }
>+
>+    /* reset descriptor table ring head */
>+    MACH64_WRITE(MACH64_BM_GUI_TABLE_CMD,
>+                 ring->head_addr | MACH64_CIRCULAR_BUF_SIZE_16KB);
>+
>+    dev_priv->ring_running = 1;
>+}
>+
>+static __inline__ void mach64_ring_resume(drm_mach64_private_t * dev_priv,
>+                                          drm_mach64_descriptor_ring_t * ring)
>+{
>+    DRM_DEBUG("%s: head_addr: 0x%08x head: %d tail: %d space: %d\n",
>+              __FUNCTION__,
>+              ring->head_addr, ring->head, ring->tail, ring->space);
>+
>+    /* reset descriptor table ring head */
>+    MACH64_WRITE(MACH64_BM_GUI_TABLE_CMD,
>+                 ring->head_addr | MACH64_CIRCULAR_BUF_SIZE_16KB);
>+
>+    if (dev_priv->driver_mode == MACH64_MODE_MMIO) {
>+        mach64_do_dispatch_pseudo_dma(dev_priv);
>+    } else {
>+        /* enable GUI bus mastering, and sync the bus master to the GUI */
>+        MACH64_WRITE(MACH64_SRC_CNTL,
>+                     MACH64_SRC_BM_ENABLE | MACH64_SRC_BM_SYNC |
>+                     MACH64_SRC_BM_OP_SYSTEM_TO_REG);
>+
>+        /* kick off the transfer */
>+        MACH64_WRITE(MACH64_DST_HEIGHT_WIDTH, 0);
>+        if (dev_priv->driver_mode == MACH64_MODE_DMA_SYNC) {
>+            if ((mach64_do_wait_for_idle(dev_priv)) < 0) {
>+                DRM_ERROR("%s: idle failed, resetting engine\n",
>+                          __FUNCTION__);
>+                mach64_dump_engine_info(dev_priv);
>+                mach64_do_engine_reset(dev_priv);
>+                return;
>+            }
>+            mach64_do_release_used_buffers(dev_priv);
>+        }
>+    }
>+}
>+
>+/**
>+ * Poll the ring head and make sure the bus master is alive.
>+ *
>+ * Mach64's bus master engine will stop if there are no more entries to process.
>+ * This function polls the engine for the last processed entry and calls
>+ * mach64_ring_resume if there is an unprocessed entry.
>+ *
>+ * Note also that, since we update the ring tail while the bus master engine is
>+ * in operation, it is possible that the last tail update was too late to be
>+ * processed, and the bus master engine stops at the previous tail position.
>+ * Therefore it is important to call this function frequently.
>+ */
>+static __inline__ void mach64_ring_tick(drm_mach64_private_t * dev_priv,
>+                                        drm_mach64_descriptor_ring_t * ring)
>+{
>+    DRM_DEBUG("%s: head_addr: 0x%08x head: %d tail: %d space: %d\n",
>+              __FUNCTION__,
>+              ring->head_addr, ring->head, ring->tail, ring->space);
>+
>+    if (!dev_priv->ring_running) {
>+        mach64_ring_start(dev_priv);
>+
>+        if (ring->head != ring->tail) {
>+            mach64_ring_resume(dev_priv, ring);
>+        }
>+    } else {
>+        /* GUI_ACTIVE must be read before BM_GUI_TABLE to
>+         * correctly determine the ring head
>+         */
>+        int gui_active =
>+            MACH64_READ(MACH64_GUI_STAT) & MACH64_GUI_ACTIVE;
>+
>+        ring->head_addr = MACH64_READ(MACH64_BM_GUI_TABLE) & 0xfffffff0;
>+
>+        if (gui_active) {
>+            /* If not idle, BM_GUI_TABLE points one descriptor
>+             * past the current head
>+             */
>+            if (ring->head_addr == ring->start_addr) {
>+                ring->head_addr += ring->size;
>+            }
>+            ring->head_addr -= 4 * sizeof(u32);
>+        }
>+
>+        if (ring->head_addr < ring->start_addr ||
>+            ring->head_addr >= ring->start_addr + ring->size) {
>+            DRM_ERROR("bad ring head address: 0x%08x\n",
>+                      ring->head_addr);
>+            mach64_dump_ring_info(dev_priv);
>+            mach64_do_engine_reset(dev_priv);
>+            return;
>+        }
>+
>+        ring->head = (ring->head_addr - ring->start_addr) / sizeof(u32);
>+
>+        if (!gui_active && ring->head != ring->tail) {
>+            mach64_ring_resume(dev_priv, ring);
>+        }
>+    }
>+}
>+
>+static __inline__ void mach64_ring_stop(drm_mach64_private_t * dev_priv)
>+{
>+    DRM_DEBUG("%s: head_addr: 0x%08x head: %d tail: %d space: %d\n",
>+              __FUNCTION__,
>+              dev_priv->ring.head_addr, dev_priv->ring.head,
>+              dev_priv->ring.tail, dev_priv->ring.space);
>+
>+    /* restore previous SRC_CNTL to disable busmastering */
>+    mach64_do_wait_for_fifo(dev_priv, 1);
>+    MACH64_WRITE(MACH64_SRC_CNTL, 0);
>+
>+    /* disable busmastering but keep the block 1 registers enabled */
>+    mach64_do_wait_for_idle(dev_priv);
>+    MACH64_WRITE(MACH64_BUS_CNTL, MACH64_READ(MACH64_BUS_CNTL)
>+                 | MACH64_BUS_MASTER_DIS | MACH64_BUS_EXT_REG_EN);
>+
>+    dev_priv->ring_running = 0;
>+}
>+
>+static __inline__ void
>+mach64_update_ring_snapshot(drm_mach64_private_t * dev_priv)
>+{
>+    drm_mach64_descriptor_ring_t *ring = &dev_priv->ring;
>+
>+    DRM_DEBUG("%s\n", __FUNCTION__);
>+
>+    mach64_ring_tick(dev_priv, ring);
>+
>+    ring->space = (ring->head - ring->tail) * sizeof(u32);
>+    if (ring->space <= 0) {
>+        ring->space += ring->size;
>+    }
>+}
>+
>+/* ================================================================
>+ * DMA macros
>+ *
>+ * Mach64's ring buffer doesn't take register writes directly. These
>+ * have to be written indirectly in DMA buffers. These macros simplify
>+ * the task of setting up a buffer, writing commands to it, and
>+ * queuing the buffer in the ring.
>+ */
>+
>+#define DMALOCALS \
>+    drm_mach64_freelist_t *_entry = NULL; \
>+    struct drm_buf *_buf = NULL; \
>+    u32 *_buf_wptr; int _outcount
>+
>+#define GETBUFPTR( __buf ) \
>+((dev_priv->is_pci) ? \
>+    ((u32 *)(__buf)->address) : \
>+    ((u32 *)((char *)dev_priv->dev_buffers->handle + (__buf)->offset)))
>+
>+#define GETBUFADDR( __buf ) ((u32)(__buf)->bus_address)
>+
>+#define GETRINGOFFSET() (_entry->ring_ofs)
>+
>+static __inline__ int mach64_find_pending_buf_entry(drm_mach64_private_t *
>+                                                    dev_priv,
>+                                                    drm_mach64_freelist_t **
>+                                                    entry, struct drm_buf * buf)
>+{
>+    struct list_head *ptr;
>+#if MACH64_EXTRA_CHECKING
>+    if (list_empty(&dev_priv->pending)) {
>+        DRM_ERROR("Empty pending list in %s\n", __FUNCTION__);
>+        return -EINVAL;
>+    }
>+#endif
>+    ptr = dev_priv->pending.prev;
>+    *entry = list_entry(ptr, drm_mach64_freelist_t, list);
>+    while ((*entry)->buf != buf) {
>+        if (ptr == &dev_priv->pending) {
>+            return -EFAULT;
>+        }
>+        ptr = ptr->prev;
>+        *entry = list_entry(ptr, drm_mach64_freelist_t, list);
>+    }
>+    return 0;
>+}
>+
>+#define DMASETPTR( _p ) \
>+do { \
>+    _buf = (_p); \
>+    _outcount = 0; \
>+    _buf_wptr = GETBUFPTR( _buf ); \
>+} while(0)
>+
>+/* FIXME: use a private set of smaller buffers for state emits, clears, and swaps? */
>+#define DMAGETPTR( file_priv, dev_priv, n ) \
>+do { \
>+    if ( MACH64_VERBOSE ) { \
>+        DRM_INFO( "DMAGETPTR( %d ) in %s\n", \
>+                  n, __FUNCTION__ ); \
>+    } \
>+    _buf = mach64_freelist_get( dev_priv ); \
>+    if (_buf == NULL) { \
>+        DRM_ERROR("%s: couldn't get buffer in DMAGETPTR\n", \
>+                  __FUNCTION__ ); \
>+        return -EAGAIN; \
>+    } \
>+    if (_buf->pending) { \
>+        DRM_ERROR("%s: pending buf in DMAGETPTR\n", \
>+                  __FUNCTION__ ); \
>+        return -EFAULT; \
>+    } \
>+    _buf->file_priv = file_priv; \
>+    _outcount = 0; \
>+    \
>+    _buf_wptr = GETBUFPTR( _buf ); \
>+} while (0)
>+
>+#define DMAOUTREG( reg, val ) \
>+do { \
>+    if ( MACH64_VERBOSE ) { \
>+        DRM_INFO( "   DMAOUTREG( 0x%x = 0x%08x )\n", \
>+                  reg, val ); \
>+    } \
>+    _buf_wptr[_outcount++] = cpu_to_le32(DMAREG(reg)); \
>+    _buf_wptr[_outcount++] = cpu_to_le32((val)); \
>+    _buf->used += 8; \
>+} while (0)
>+
>+#define DMAADVANCE( dev_priv, _discard ) \
>+do { \
>+    struct list_head *ptr; \
>+    int ret; \
>+    \
>+    if ( MACH64_VERBOSE ) { \
>+        DRM_INFO( "DMAADVANCE() in %s\n", __FUNCTION__ ); \
>+    } \
>+    \
>+    if (_buf->used <= 0) { \
>+        DRM_ERROR( "DMAADVANCE() in %s: sending empty buf %d\n", \
>+                   __FUNCTION__, _buf->idx ); \
>+        return -EFAULT; \
>+    } \
>+    if (_buf->pending) { \
>+        /* This is a reused buffer, so we need to find it in the pending list */ \
>+        if ( (ret=mach64_find_pending_buf_entry(dev_priv, &_entry, _buf)) ) { \
>+            DRM_ERROR( "DMAADVANCE() in %s: couldn't find pending buf %d\n", \
>+                       __FUNCTION__, _buf->idx ); \
>+            return ret; \
>+        } \
>+        if (_entry->discard) { \
>+            DRM_ERROR( "DMAADVANCE() in %s: sending discarded pending buf %d\n", \
>+                       __FUNCTION__, _buf->idx ); \
>+            return -EFAULT; \
>+        } \
>+    } else { \
>+        if (list_empty(&dev_priv->placeholders)) { \
>+            DRM_ERROR( "DMAADVANCE() in %s: empty placeholder list\n", \
>+                       __FUNCTION__ ); \
>+            return -EFAULT; \
>+        } \
>+        ptr = dev_priv->placeholders.next; \
>+        list_del(ptr); \
>+        _entry = list_entry(ptr, drm_mach64_freelist_t, list); \
>+        _buf->pending = 1; \
>+        _entry->buf = _buf; \
>+        list_add_tail(ptr, &dev_priv->pending); \
>+    } \
>+    _entry->discard = (_discard); \
>+    if ( (ret = mach64_add_buf_to_ring( dev_priv, _entry )) ) \
>+        return ret; \
>+} while (0)
>+
>+#define DMADISCARDBUF() \
>+do { \
>+    if (_entry == NULL) { \
>+        int ret; \
>+        if ( (ret=mach64_find_pending_buf_entry(dev_priv, &_entry, _buf)) ) { \
>+            DRM_ERROR( "%s: couldn't find pending buf %d\n", \
>+                       __FUNCTION__, _buf->idx ); \
>+            return ret; \
>+        } \
>+    } \
>+    _entry->discard = 1; \
>+} while(0)
>+
>+#define DMAADVANCEHOSTDATA( dev_priv ) \
>+do { \
>+    struct list_head *ptr; \
>+    int ret; \
>+    \
>+    if ( MACH64_VERBOSE ) { \
>+        DRM_INFO( "DMAADVANCEHOSTDATA() in %s\n", __FUNCTION__ ); \
>+    } \
>+    \
>+    if (_buf->used <= 0) { \
>+        DRM_ERROR( "DMAADVANCEHOSTDATA() in %s: sending empty buf %d\n", \
>+                   __FUNCTION__, _buf->idx ); \
>+        return -EFAULT; \
>+    } \
>+    if (list_empty(&dev_priv->placeholders)) { \
>+        DRM_ERROR( "%s: empty placeholder list in DMAADVANCEHOSTDATA()\n", \
>+                   __FUNCTION__ ); \
>+        return -EFAULT; \
>+    } \
>+    \
>+    ptr = dev_priv->placeholders.next; \
>+    list_del(ptr); \
>+    _entry = list_entry(ptr, drm_mach64_freelist_t, list); \
>+    _entry->buf = _buf; \
>+    _entry->buf->pending = 1; \
>+    list_add_tail(ptr, &dev_priv->pending); \
>+    _entry->discard = 1; \
>+    if ( (ret = mach64_add_hostdata_buf_to_ring( dev_priv, _entry )) ) \
>+        return ret; \
>+} while (0)
>+
>+#endif /* __MACH64_DRV_H__ */
>diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/mach64_irq.c linux-2.6.23.i686/drivers/char/drm/mach64_irq.c
>--- linux-2.6.23.i686.orig/drivers/char/drm/mach64_irq.c	1970-01-01 01:00:00.000000000 +0100
>+++ linux-2.6.23.i686/drivers/char/drm/mach64_irq.c	2008-01-06 09:24:57.000000000 +0100
>@@ -0,0 +1,136 @@
>+/* mach64_irq.c -- IRQ handling for ATI Mach64 -*- linux-c -*-
>+ * Created: Tue Feb 25, 2003 by Leif Delgass, based on radeon_irq.c/r128_irq.c
>+ */
>+/*-
>+ * Copyright (C) The Weather Channel, Inc.  2002.
>+ * Copyright 2003 Leif Delgass
>+ * All Rights Reserved.
>+ *
>+ * The Weather Channel (TM) funded Tungsten Graphics to develop the
>+ * initial release of the Radeon 8500 driver under the XFree86 license.
>+ * This notice must be preserved.
>+ *
>+ * Permission is hereby granted, free of charge, to any person obtaining a
>+ * copy of this software and associated documentation files (the "Software"),
>+ * to deal in the Software without restriction, including without limitation
>+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
>+ * and/or sell copies of the Software, and to permit persons to whom the
>+ * Software is furnished to do so, subject to the following conditions:
>+ *
>+ * The above copyright notice and this permission notice (including the next
>+ * paragraph) shall be included in all copies or substantial portions of the
>+ * Software.
>+ *
>+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
>+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
>+ * THE COPYRIGHT OWNER(S) AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
>+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
>+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
>+ * DEALINGS IN THE SOFTWARE.
>+ *
>+ * Authors:
>+ *    Keith Whitwell <keith@tungstengraphics.com>
>+ *    Eric Anholt <anholt@FreeBSD.org>
>+ *    Leif Delgass <ldelgass@retinalburn.net>
>+ */
>+
>+#include "drmP.h"
>+#include "drm.h"
>+#include "mach64_drm.h"
>+#include "mach64_drv.h"
>+
>+irqreturn_t mach64_driver_irq_handler(DRM_IRQ_ARGS)
>+{
>+    struct drm_device *dev = (struct drm_device *) arg;
>+    drm_mach64_private_t *dev_priv =
>+        (drm_mach64_private_t *) dev->dev_private;
>+    int status;
>+
>+    status = MACH64_READ(MACH64_CRTC_INT_CNTL);
>+
>+    /* VBLANK interrupt */
>+    if (status & MACH64_CRTC_VBLANK_INT) {
>+        /* Mask off all interrupt ack bits before setting the ack bit, since
>+         * there may be other handlers outside the DRM.
>+         *
>+         * NOTE: On mach64, you need to keep the enable bits set when doing
>+         * the ack, despite what the docs say about not acking and enabling
>+         * in a single write.
>+         */
>+        MACH64_WRITE(MACH64_CRTC_INT_CNTL,
>+                     (status & ~MACH64_CRTC_INT_ACKS)
>+                     | MACH64_CRTC_VBLANK_INT);
>+
>+        atomic_inc(&dev->vbl_received);
>+        DRM_WAKEUP(&dev->vbl_queue);
>+        drm_vbl_send_signals(dev);
>+        return IRQ_HANDLED;
>+    }
>+    return IRQ_NONE;
>+}
>+
>+int mach64_driver_vblank_wait(struct drm_device * dev, unsigned int *sequence)
>+{
>+    unsigned int cur_vblank;
>+    int ret = 0;
>+
>+    /* Assume that the user has missed the current sequence number
>+     * by about a day rather than that she wants to wait for years
>+     * using vertical blanks...
>+     */
>+    DRM_WAIT_ON(ret, dev->vbl_queue, 3 * DRM_HZ,
>+                (((cur_vblank = atomic_read(&dev->vbl_received))
>+                  - *sequence) <= (1 << 23)));
>+
>+    *sequence = cur_vblank;
>+
>+    return ret;
>+}
>+
>+/* drm_dma.h hooks
>+*/
>+void mach64_driver_irq_preinstall(struct drm_device * dev)
>+{
>+    drm_mach64_private_t *dev_priv =
>+        (drm_mach64_private_t *) dev->dev_private;
>+
>+    u32 status = MACH64_READ(MACH64_CRTC_INT_CNTL);
>+
>+    DRM_DEBUG("before install CRTC_INT_CNTL: 0x%08x\n", status);
>+
>+    /* Disable and clear VBLANK interrupt */
>+    MACH64_WRITE(MACH64_CRTC_INT_CNTL, (status & ~MACH64_CRTC_VBLANK_INT_EN)
>+                 | MACH64_CRTC_VBLANK_INT);
>+}
>+
>+void mach64_driver_irq_postinstall(struct drm_device * dev)
>+{
>+    drm_mach64_private_t *dev_priv =
>+        (drm_mach64_private_t *) dev->dev_private;
>+
>+    /* Turn on VBLANK interrupt */
>+    MACH64_WRITE(MACH64_CRTC_INT_CNTL, MACH64_READ(MACH64_CRTC_INT_CNTL)
>+                 | MACH64_CRTC_VBLANK_INT_EN);
>+
>+    DRM_DEBUG("after install CRTC_INT_CNTL: 0x%08x\n",
>+              MACH64_READ(MACH64_CRTC_INT_CNTL));
>+
>+}
>+
>+void mach64_driver_irq_uninstall(struct drm_device * dev)
>+{
>+    drm_mach64_private_t *dev_priv =
>+        (drm_mach64_private_t *) dev->dev_private;
>+    if (!dev_priv)
>+        return;
>+
>+    /* Disable and clear VBLANK interrupt */
>+    MACH64_WRITE(MACH64_CRTC_INT_CNTL,
>+                 (MACH64_READ(MACH64_CRTC_INT_CNTL) &
>+                  ~MACH64_CRTC_VBLANK_INT_EN)
>+                 | MACH64_CRTC_VBLANK_INT);
>+
>+    DRM_DEBUG("after uninstall CRTC_INT_CNTL: 0x%08x\n",
>+              MACH64_READ(MACH64_CRTC_INT_CNTL));
>+}
>diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/mach64_state.c linux-2.6.23.i686/drivers/char/drm/mach64_state.c
>--- linux-2.6.23.i686.orig/drivers/char/drm/mach64_state.c	1970-01-01 01:00:00.000000000 +0100
>+++ linux-2.6.23.i686/drivers/char/drm/mach64_state.c	2008-01-06 09:24:57.000000000 +0100
>@@ -0,0 +1,911 @@
>+/* mach64_state.c -- State support for mach64 (Rage Pro) driver -*- linux-c -*-
>+ * Created: Sun Dec 03 19:20:26 2000 by gareth@valinux.com
>+ */
>+/*
>+ * Copyright 2000 Gareth Hughes
>+ * Copyright 2002-2003 Leif Delgass
>+ * All Rights Reserved.
>+ *
>+ * Permission is hereby granted, free of charge, to any person obtaining a
>+ * copy of this software and associated documentation files (the "Software"),
>+ * to deal in the Software without restriction, including without limitation
>+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
>+ * and/or sell copies of the Software, and to permit persons to whom the
>+ * Software is furnished to do so, subject to the following conditions:
>+ *
>+ * The above copyright notice and this permission notice (including the next
>+ * paragraph) shall be included in all copies or substantial portions of the
>+ * Software.
>+ *
>+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
>+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
>+ * THE COPYRIGHT OWNER(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
>+ * IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
>+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
>+ *
>+ * Authors:
>+ *    Gareth Hughes <gareth@valinux.com>
>+ *    Leif Delgass <ldelgass@retinalburn.net>
>+ *    José Fonseca <j_r_fonseca@yahoo.co.uk>
>+ */
>+
>+#include "drmP.h"
>+#include "drm.h"
>+#include "mach64_drm.h"
>+#include "mach64_drv.h"
>+
>+/* Interface history:
>+ *
>+ * 1.0 - Initial mach64 DRM
>+ *
>+ */
>+struct drm_ioctl_desc mach64_ioctls[] = {
>+    DRM_IOCTL_DEF(DRM_MACH64_INIT, mach64_dma_init, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
>+    DRM_IOCTL_DEF(DRM_MACH64_CLEAR, mach64_dma_clear, DRM_AUTH),
>+    DRM_IOCTL_DEF(DRM_MACH64_SWAP, mach64_dma_swap, DRM_AUTH),
>+    DRM_IOCTL_DEF(DRM_MACH64_IDLE, mach64_dma_idle, DRM_AUTH),
>+    DRM_IOCTL_DEF(DRM_MACH64_RESET, mach64_engine_reset, DRM_AUTH),
>+    DRM_IOCTL_DEF(DRM_MACH64_VERTEX, mach64_dma_vertex, DRM_AUTH),
>+    DRM_IOCTL_DEF(DRM_MACH64_BLIT, mach64_dma_blit, DRM_AUTH),
>+    DRM_IOCTL_DEF(DRM_MACH64_FLUSH, mach64_dma_flush, DRM_AUTH),
>+    DRM_IOCTL_DEF(DRM_MACH64_GETPARAM, mach64_get_param, DRM_AUTH),
>+};
>+
>+int mach64_max_ioctl = DRM_ARRAY_SIZE(mach64_ioctls);
>+
>+/* ================================================================
>+ * DMA hardware state programming functions
>+ */
>+
>+static void mach64_print_dirty(const char *msg, unsigned int flags)
>+{
>+    DRM_DEBUG("%s: (0x%x) %s%s%s%s%s%s%s%s%s%s%s%s\n",
>+              msg,
>+              flags,
>+              (flags & MACH64_UPLOAD_DST_OFF_PITCH) ? "dst_off_pitch, " :
>+              "",
>+              (flags & MACH64_UPLOAD_Z_ALPHA_CNTL) ? "z_alpha_cntl, " : "",
>+              (flags & MACH64_UPLOAD_SCALE_3D_CNTL) ? "scale_3d_cntl, " :
>+              "", (flags & MACH64_UPLOAD_DP_FOG_CLR) ? "dp_fog_clr, " : "",
>+              (flags & MACH64_UPLOAD_DP_WRITE_MASK) ? "dp_write_mask, " :
>+              "",
>+              (flags & MACH64_UPLOAD_DP_PIX_WIDTH) ? "dp_pix_width, " : "",
>+              (flags & MACH64_UPLOAD_SETUP_CNTL) ? "setup_cntl, " : "",
>+              (flags & MACH64_UPLOAD_MISC) ? "misc, " : "",
>+              (flags & MACH64_UPLOAD_TEXTURE) ? "texture, " : "",
>+              (flags & MACH64_UPLOAD_TEX0IMAGE) ? "tex0 image, " : "",
>+              (flags & MACH64_UPLOAD_TEX1IMAGE) ? "tex1 image, " : "",
>+              (flags & MACH64_UPLOAD_CLIPRECTS) ? "cliprects, " : "");
>+}
>+
>+/* Mach64 doesn't have hardware cliprects, just one hardware scissor,
>+ * so the GL scissor is intersected with each cliprect here
>+ */
>+/* This function returns 0 on success, 1 for no intersection, and
>+ * negative for an error
>+ */
>+static int mach64_emit_cliprect(struct drm_file *file_priv,
>+                                drm_mach64_private_t * dev_priv,
>+                                struct drm_clip_rect * box)
>+{
>+    u32 sc_left_right, sc_top_bottom;
>+    struct drm_clip_rect scissor;
>+    drm_mach64_sarea_t *sarea_priv = dev_priv->sarea_priv;
>+    drm_mach64_context_regs_t *regs = &sarea_priv->context_state;
>+    DMALOCALS;
>+
>+    DRM_DEBUG("%s: box=%p\n", __FUNCTION__, box);
>+
>+    /* Get GL scissor */
>+    /* FIXME: store scissor in SAREA as a cliprect instead of in
>+     * hardware format, or do intersection client-side
>+     */
>+    scissor.x1 = regs->sc_left_right & 0xffff;
>+    scissor.x2 = (regs->sc_left_right & 0xffff0000) >> 16;
>+    scissor.y1 = regs->sc_top_bottom & 0xffff;
>+    scissor.y2 = (regs->sc_top_bottom & 0xffff0000) >> 16;
>+
>+    /* Intersect GL scissor with cliprect */
>+    if (box->x1 > scissor.x1)
>+        scissor.x1 = box->x1;
>+    if (box->y1 > scissor.y1)
>+        scissor.y1 = box->y1;
>+    if (box->x2 < scissor.x2)
>+        scissor.x2 = box->x2;
>+    if (box->y2 < scissor.y2)
>+        scissor.y2 = box->y2;
>+    /* positive return means skip */
>+    if (scissor.x1 >= scissor.x2)
>+        return 1;
>+    if (scissor.y1 >= scissor.y2)
>+        return 1;
>+
>+    DMAGETPTR(file_priv, dev_priv, 2);  /* returns on failure to get buffer */
>+
>+    sc_left_right = ((scissor.x1 << 0) | (scissor.x2 << 16));
>+    sc_top_bottom = ((scissor.y1 << 0) | (scissor.y2 << 16));
>+
>+    DMAOUTREG(MACH64_SC_LEFT_RIGHT, sc_left_right);
>+    DMAOUTREG(MACH64_SC_TOP_BOTTOM, sc_top_bottom);
>+
>+    DMAADVANCE(dev_priv, 1);
>+
>+    return 0;
>+}
>+
>+static __inline__ int mach64_emit_state(struct drm_file *file_priv,
>+                                        drm_mach64_private_t * dev_priv)
>+{
>+    drm_mach64_sarea_t *sarea_priv = dev_priv->sarea_priv;
>+    drm_mach64_context_regs_t *regs = &sarea_priv->context_state;
>+    unsigned int dirty = sarea_priv->dirty;
>+    u32 offset = ((regs->tex_size_pitch & 0xf0) >> 2);
>+    DMALOCALS;
>+
>+    if (MACH64_VERBOSE) {
>+        mach64_print_dirty(__FUNCTION__, dirty);
>+    } else {
>+        DRM_DEBUG("%s: dirty=0x%08x\n", __FUNCTION__, dirty);
>+    }
>+
>+    DMAGETPTR(file_priv, dev_priv, 17); /* returns on failure to get buffer */
>+
>+    if (dirty & MACH64_UPLOAD_MISC) {
>+        DMAOUTREG(MACH64_DP_MIX, regs->dp_mix);
>+        DMAOUTREG(MACH64_DP_SRC, regs->dp_src);
>+        DMAOUTREG(MACH64_CLR_CMP_CNTL, regs->clr_cmp_cntl);
>+        DMAOUTREG(MACH64_GUI_TRAJ_CNTL, regs->gui_traj_cntl);
>+        sarea_priv->dirty &= ~MACH64_UPLOAD_MISC;
>+    }
>+
>+    if (dirty & MACH64_UPLOAD_DST_OFF_PITCH) {
>+        DMAOUTREG(MACH64_DST_OFF_PITCH, regs->dst_off_pitch);
>+        sarea_priv->dirty &= ~MACH64_UPLOAD_DST_OFF_PITCH;
>+    }
>+    if (dirty & MACH64_UPLOAD_Z_OFF_PITCH) {
>+        DMAOUTREG(MACH64_Z_OFF_PITCH, regs->z_off_pitch);
>+        sarea_priv->dirty &= ~MACH64_UPLOAD_Z_OFF_PITCH;
>+    }
>+    if (dirty & MACH64_UPLOAD_Z_ALPHA_CNTL) {
>+        DMAOUTREG(MACH64_Z_CNTL, regs->z_cntl);
>+        DMAOUTREG(MACH64_ALPHA_TST_CNTL, regs->alpha_tst_cntl);
>+        sarea_priv->dirty &= ~MACH64_UPLOAD_Z_ALPHA_CNTL;
>+    }
>+    if (dirty & MACH64_UPLOAD_SCALE_3D_CNTL) {
>+        DMAOUTREG(MACH64_SCALE_3D_CNTL, regs->scale_3d_cntl);
>+        sarea_priv->dirty &= ~MACH64_UPLOAD_SCALE_3D_CNTL;
>+    }
>+    if (dirty & MACH64_UPLOAD_DP_FOG_CLR) {
>+        DMAOUTREG(MACH64_DP_FOG_CLR, regs->dp_fog_clr);
>+        sarea_priv->dirty &= ~MACH64_UPLOAD_DP_FOG_CLR;
>+    }
>+    if (dirty & MACH64_UPLOAD_DP_WRITE_MASK) {
>+        DMAOUTREG(MACH64_DP_WRITE_MASK, regs->dp_write_mask);
>+        sarea_priv->dirty &= ~MACH64_UPLOAD_DP_WRITE_MASK;
>+    }
>+    if (dirty & MACH64_UPLOAD_DP_PIX_WIDTH) {
>+        DMAOUTREG(MACH64_DP_PIX_WIDTH, regs->dp_pix_width);
>+        sarea_priv->dirty &= ~MACH64_UPLOAD_DP_PIX_WIDTH;
>+    }
>+    if (dirty & MACH64_UPLOAD_SETUP_CNTL) {
>+        DMAOUTREG(MACH64_SETUP_CNTL, regs->setup_cntl);
>+        sarea_priv->dirty &= ~MACH64_UPLOAD_SETUP_CNTL;
>+    }
>+
>+    if (dirty & MACH64_UPLOAD_TEXTURE) {
>+        DMAOUTREG(MACH64_TEX_SIZE_PITCH, regs->tex_size_pitch);
>+        DMAOUTREG(MACH64_TEX_CNTL, regs->tex_cntl);
>+        DMAOUTREG(MACH64_SECONDARY_TEX_OFF, regs->secondary_tex_off);
>+        DMAOUTREG(MACH64_TEX_0_OFF + offset, regs->tex_offset);
>+        sarea_priv->dirty &= ~MACH64_UPLOAD_TEXTURE;
>+    }
>+
>+    DMAADVANCE(dev_priv, 1);
>+
>+    sarea_priv->dirty &= MACH64_UPLOAD_CLIPRECTS;
>+
>+    return 0;
>+
>+}
>+
>+/* ================================================================
>+ * DMA command dispatch functions
>+ */
>+
>+static int mach64_dma_dispatch_clear(struct drm_device * dev,
>+                                     struct drm_file *file_priv,
>+                                     unsigned int flags,
>+                                     int cx, int cy, int cw, int ch,
>+                                     unsigned int clear_color,
>+                                     unsigned int clear_depth)
>+{
>+    drm_mach64_private_t *dev_priv = dev->dev_private;
>+    drm_mach64_sarea_t *sarea_priv = dev_priv->sarea_priv;
>+    drm_mach64_context_regs_t *ctx = &sarea_priv->context_state;
>+    int nbox = sarea_priv->nbox;
>+    struct drm_clip_rect *pbox = sarea_priv->boxes;
>+    u32 fb_bpp, depth_bpp;
>+    int i;
>+    DMALOCALS;
>+
>+    DRM_DEBUG("%s\n", __FUNCTION__);
>+
>+    switch (dev_priv->fb_bpp) {
>+    case 16:
>+        fb_bpp = MACH64_DATATYPE_RGB565;
>+        break;
>+    case 32:
>+        fb_bpp = MACH64_DATATYPE_ARGB8888;
>+        break;
>+    default:
>+        return -EINVAL;
>+    }
>+    switch (dev_priv->depth_bpp) {
>+    case 16:
>+        depth_bpp = MACH64_DATATYPE_RGB565;
>+        break;
>+    case 24:
>+    case 32:
>+        depth_bpp = MACH64_DATATYPE_ARGB8888;
>+        break;
>+    default:
>+        return -EINVAL;
>+    }
>+
>+    if (!nbox)
>+        return 0;
>+
>+    DMAGETPTR(file_priv, dev_priv, nbox * 31); /* returns on failure to get buffer */
>+
>+    for (i = 0; i < nbox; i++) {
>+        int x = pbox[i].x1;
>+        int y = pbox[i].y1;
>+        int w = pbox[i].x2 - x;
>+        int h = pbox[i].y2 - y;
>+
>+        DRM_DEBUG("dispatch clear %d,%d-%d,%d flags 0x%x\n",
>+                  pbox[i].x1, pbox[i].y1,
>+                  pbox[i].x2, pbox[i].y2, flags);
>+
>+        if (flags & (MACH64_FRONT | MACH64_BACK)) {
>+            /* Setup for color buffer clears
>+             */
>+
>+            DMAOUTREG(MACH64_Z_CNTL, 0);
>+            DMAOUTREG(MACH64_SCALE_3D_CNTL, 0);
>+
>+            DMAOUTREG(MACH64_SC_LEFT_RIGHT, ctx->sc_left_right);
>+            DMAOUTREG(MACH64_SC_TOP_BOTTOM, ctx->sc_top_bottom);
>+
>+            DMAOUTREG(MACH64_CLR_CMP_CNTL, 0);
>+            DMAOUTREG(MACH64_GUI_TRAJ_CNTL,
>+                      (MACH64_DST_X_LEFT_TO_RIGHT |
>+                       MACH64_DST_Y_TOP_TO_BOTTOM));
>+
>+            DMAOUTREG(MACH64_DP_PIX_WIDTH, ((fb_bpp << 0) |
>+                                            (fb_bpp << 4) |
>+                                            (fb_bpp << 8) |
>+                                            (fb_bpp << 16) |
>+                                            (fb_bpp << 28)));
>+
>+            DMAOUTREG(MACH64_DP_FRGD_CLR, clear_color);
>+            DMAOUTREG(MACH64_DP_WRITE_MASK, ctx->dp_write_mask);
>+            DMAOUTREG(MACH64_DP_MIX, (MACH64_BKGD_MIX_D |
>+                                      MACH64_FRGD_MIX_S));
>+            DMAOUTREG(MACH64_DP_SRC, (MACH64_BKGD_SRC_FRGD_CLR |
>+                                      MACH64_FRGD_SRC_FRGD_CLR |
>+                                      MACH64_MONO_SRC_ONE));
>+
>+        }
>+
>+        if (flags & MACH64_FRONT) {
>+
>+            DMAOUTREG(MACH64_DST_OFF_PITCH,
>+                      dev_priv->front_offset_pitch);
>+            DMAOUTREG(MACH64_DST_X_Y, (y << 16) | x);
>+            DMAOUTREG(MACH64_DST_WIDTH_HEIGHT, (h << 16) | w);
>+
>+        }
>+
>+        if (flags & MACH64_BACK) {
>+
>+            DMAOUTREG(MACH64_DST_OFF_PITCH,
>+                      dev_priv->back_offset_pitch);
>+            DMAOUTREG(MACH64_DST_X_Y, (y << 16) | x);
>+            DMAOUTREG(MACH64_DST_WIDTH_HEIGHT, (h << 16) | w);
>+
>+        }
>+
>+        if (flags & MACH64_DEPTH) {
>+            /* Setup for depth buffer clear
>+             */
>+            DMAOUTREG(MACH64_Z_CNTL, 0);
>+            DMAOUTREG(MACH64_SCALE_3D_CNTL, 0);
>+
>+            DMAOUTREG(MACH64_SC_LEFT_RIGHT, ctx->sc_left_right);
>+            DMAOUTREG(MACH64_SC_TOP_BOTTOM, ctx->sc_top_bottom);
>+
>+            DMAOUTREG(MACH64_CLR_CMP_CNTL, 0);
>+            DMAOUTREG(MACH64_GUI_TRAJ_CNTL,
>+                      (MACH64_DST_X_LEFT_TO_RIGHT |
>+                       MACH64_DST_Y_TOP_TO_BOTTOM));
>+
>+            DMAOUTREG(MACH64_DP_PIX_WIDTH, ((depth_bpp << 0) |
>+                                            (depth_bpp << 4) |
>+                                            (depth_bpp << 8) |
>+                                            (depth_bpp << 16) |
>+                                            (depth_bpp << 28)));
>+
>+            DMAOUTREG(MACH64_DP_FRGD_CLR, clear_depth);
>+            DMAOUTREG(MACH64_DP_WRITE_MASK, 0xffffffff);
>+            DMAOUTREG(MACH64_DP_MIX, (MACH64_BKGD_MIX_D |
>+                                      MACH64_FRGD_MIX_S));
>+            DMAOUTREG(MACH64_DP_SRC, (MACH64_BKGD_SRC_FRGD_CLR |
>+                                      MACH64_FRGD_SRC_FRGD_CLR
| >+ MACH64_MONO_SRC_ONE)); >+ >+ DMAOUTREG(MACH64_DST_OFF_PITCH, >+ dev_priv->depth_offset_pitch); >+ DMAOUTREG(MACH64_DST_X_Y, (y << 16) | x); >+ DMAOUTREG(MACH64_DST_WIDTH_HEIGHT, (h << 16) | w); >+ } >+ } >+ >+ DMAADVANCE(dev_priv, 1); >+ >+ return 0; >+} >+ >+static int mach64_dma_dispatch_swap(struct drm_device * dev, >+ struct drm_file *file_priv) >+{ >+ drm_mach64_private_t *dev_priv = dev->dev_private; >+ drm_mach64_sarea_t *sarea_priv = dev_priv->sarea_priv; >+ int nbox = sarea_priv->nbox; >+ struct drm_clip_rect *pbox = sarea_priv->boxes; >+ u32 fb_bpp; >+ int i; >+ DMALOCALS; >+ >+ DRM_DEBUG("%s\n", __FUNCTION__); >+ >+ switch (dev_priv->fb_bpp) { >+ case 16: >+ fb_bpp = MACH64_DATATYPE_RGB565; >+ break; >+ case 32: >+ default: >+ fb_bpp = MACH64_DATATYPE_ARGB8888; >+ break; >+ } >+ >+ if (!nbox) >+ return 0; >+ >+ DMAGETPTR(file_priv, dev_priv, 13 + nbox * 4); /* returns on failure to get buffer */ >+ >+ DMAOUTREG(MACH64_Z_CNTL, 0); >+ DMAOUTREG(MACH64_SCALE_3D_CNTL, 0); >+ >+ DMAOUTREG(MACH64_SC_LEFT_RIGHT, 0 | (8191 << 16)); /* no scissor */ >+ DMAOUTREG(MACH64_SC_TOP_BOTTOM, 0 | (16383 << 16)); >+ >+ DMAOUTREG(MACH64_CLR_CMP_CNTL, 0); >+ DMAOUTREG(MACH64_GUI_TRAJ_CNTL, (MACH64_DST_X_LEFT_TO_RIGHT | >+ MACH64_DST_Y_TOP_TO_BOTTOM)); >+ >+ DMAOUTREG(MACH64_DP_PIX_WIDTH, ((fb_bpp << 0) | >+ (fb_bpp << 4) | >+ (fb_bpp << 8) | >+ (fb_bpp << 16) | (fb_bpp << 28))); >+ >+ DMAOUTREG(MACH64_DP_WRITE_MASK, 0xffffffff); >+ DMAOUTREG(MACH64_DP_MIX, (MACH64_BKGD_MIX_D | MACH64_FRGD_MIX_S)); >+ DMAOUTREG(MACH64_DP_SRC, (MACH64_BKGD_SRC_BKGD_CLR | >+ MACH64_FRGD_SRC_BLIT | MACH64_MONO_SRC_ONE)); >+ >+ DMAOUTREG(MACH64_SRC_OFF_PITCH, dev_priv->back_offset_pitch); >+ DMAOUTREG(MACH64_DST_OFF_PITCH, dev_priv->front_offset_pitch); >+ >+ for (i = 0; i < nbox; i++) { >+ int x = pbox[i].x1; >+ int y = pbox[i].y1; >+ int w = pbox[i].x2 - x; >+ int h = pbox[i].y2 - y; >+ >+ DRM_DEBUG("dispatch swap %d,%d-%d,%d\n", >+ pbox[i].x1, pbox[i].y1, pbox[i].x2, pbox[i].y2); >+ >+ 
DMAOUTREG(MACH64_SRC_WIDTH1, w); >+ DMAOUTREG(MACH64_SRC_Y_X, (x << 16) | y); >+ DMAOUTREG(MACH64_DST_Y_X, (x << 16) | y); >+ DMAOUTREG(MACH64_DST_WIDTH_HEIGHT, (h << 16) | w); >+ >+ } >+ >+ DMAADVANCE(dev_priv, 1); >+ >+ if (dev_priv->driver_mode == MACH64_MODE_DMA_ASYNC) { >+ for (i = 0; i < MACH64_MAX_QUEUED_FRAMES - 1; i++) { >+ dev_priv->frame_ofs[i] = dev_priv->frame_ofs[i + 1]; >+ } >+ dev_priv->frame_ofs[i] = GETRINGOFFSET(); >+ >+ dev_priv->sarea_priv->frames_queued++; >+ } >+ >+ return 0; >+} >+ >+static int mach64_do_get_frames_queued(drm_mach64_private_t * dev_priv) >+{ >+ drm_mach64_descriptor_ring_t *ring = &dev_priv->ring; >+ drm_mach64_sarea_t *sarea_priv = dev_priv->sarea_priv; >+ int i, start; >+ u32 head, tail, ofs; >+ >+ DRM_DEBUG("%s\n", __FUNCTION__); >+ >+ if (sarea_priv->frames_queued == 0) >+ return 0; >+ >+ tail = ring->tail; >+ mach64_ring_tick(dev_priv, ring); >+ head = ring->head; >+ >+ start = (MACH64_MAX_QUEUED_FRAMES - >+ DRM_MIN(MACH64_MAX_QUEUED_FRAMES, sarea_priv->frames_queued)); >+ >+ if (head == tail) { >+ sarea_priv->frames_queued = 0; >+ for (i = start; i < MACH64_MAX_QUEUED_FRAMES; i++) { >+ dev_priv->frame_ofs[i] = ~0; >+ } >+ return 0; >+ } >+ >+ for (i = start; i < MACH64_MAX_QUEUED_FRAMES; i++) { >+ ofs = dev_priv->frame_ofs[i]; >+ DRM_DEBUG("frame_ofs[%d] ofs: %d\n", i, ofs); >+ if (ofs == ~0 || >+ (head < tail && (ofs < head || ofs >= tail)) || >+ (head > tail && (ofs < head && ofs >= tail))) { >+ sarea_priv->frames_queued = >+ (MACH64_MAX_QUEUED_FRAMES - 1) - i; >+ dev_priv->frame_ofs[i] = ~0; >+ } >+ } >+ >+ return sarea_priv->frames_queued; >+} >+ >+/* Copy and verify a client-submitted buffer. 
>+ * FIXME: Make an assembly optimized version >+ */ >+static __inline__ int copy_from_user_vertex(u32 *to, >+ const u32 __user *ufrom, >+ unsigned long bytes) >+{ >+ unsigned long n = bytes; /* dwords remaining in buffer */ >+ u32 *from, *orig_from; >+ >+ from = drm_alloc(bytes, DRM_MEM_DRIVER); >+ if (from == NULL) >+ return -ENOMEM; >+ >+ if (DRM_COPY_FROM_USER(from, ufrom, bytes)) { >+ drm_free(from, bytes, DRM_MEM_DRIVER); >+ return -EFAULT; >+ } >+ orig_from = from; /* we'll be modifying the "from" ptr, so save it */ >+ >+ n >>= 2; >+ >+ while (n > 1) { >+ u32 data, reg, count; >+ >+ data = *from++; >+ >+ n--; >+ >+ reg = le32_to_cpu(data); >+ count = (reg >> 16) + 1; >+ if (count <= n) { >+ n -= count; >+ reg &= 0xffff; >+ >+ /* This is an exact match of Mach64's Setup Engine registers, >+ * excluding SETUP_CNTL (1_C1). >+ */ >+ if ((reg >= 0x0190 && reg < 0x01c1) || >+ (reg >= 0x01ca && reg <= 0x01cf)) { >+ *to++ = data; >+ memcpy(to, from, count << 2); >+ from += count; >+ to += count; >+ } else { >+ DRM_ERROR("%s: Got bad command: 0x%04x\n", >+ __FUNCTION__, reg); >+ drm_free(orig_from, bytes, DRM_MEM_DRIVER); >+ return -EACCES; >+ } >+ } else { >+ DRM_ERROR >+ ("%s: Got bad command count(=%u) dwords remaining=%lu\n", >+ __FUNCTION__, count, n); >+ drm_free(orig_from, bytes, DRM_MEM_DRIVER); >+ return -EINVAL; >+ } >+ } >+ >+ drm_free(orig_from, bytes, DRM_MEM_DRIVER); >+ if (n == 0) >+ return 0; >+ else { >+ DRM_ERROR("%s: Bad buf->used(=%lu)\n", __FUNCTION__, bytes); >+ return -EINVAL; >+ } >+} >+ >+static int mach64_dma_dispatch_vertex(struct drm_device * dev, >+ struct drm_file *file_priv, >+ drm_mach64_vertex_t * vertex) >+{ >+ drm_mach64_private_t *dev_priv = dev->dev_private; >+ drm_mach64_sarea_t *sarea_priv = dev_priv->sarea_priv; >+ struct drm_buf *copy_buf; >+ void *buf = vertex->buf; >+ unsigned long used = vertex->used; >+ int ret = 0; >+ int i = 0; >+ int done = 0; >+ int verify_ret = 0; >+ DMALOCALS; >+ >+ DRM_DEBUG("%s: buf=%p used=%lu 
nbox=%d\n", >+ __FUNCTION__, buf, used, sarea_priv->nbox); >+ >+ if (!used) >+ goto _vertex_done; >+ >+ copy_buf = mach64_freelist_get(dev_priv); >+ if (copy_buf == NULL) { >+ DRM_ERROR("%s: couldn't get buffer\n", __FUNCTION__); >+ return -EAGAIN; >+ } >+ >+ /* Mach64's vertex data is actually register writes. To avoid security >+ * compromises these register writes have to be verified and copied from >+ * user space into a private DMA buffer. >+ */ >+ verify_ret = copy_from_user_vertex(GETBUFPTR(copy_buf), buf, used); >+ >+ if (verify_ret != 0) { >+ mach64_freelist_put(dev_priv, copy_buf); >+ goto _vertex_done; >+ } >+ >+ copy_buf->used = used; >+ >+ DMASETPTR(copy_buf); >+ >+ if (sarea_priv->dirty & ~MACH64_UPLOAD_CLIPRECTS) { >+ ret = mach64_emit_state(file_priv, dev_priv); >+ if (ret < 0) >+ return ret; >+ } >+ >+ do { >+ /* Emit the next cliprect */ >+ if (i < sarea_priv->nbox) { >+ ret = mach64_emit_cliprect(file_priv, dev_priv, >+ &sarea_priv->boxes[i]); >+ if (ret < 0) { >+ /* failed to get buffer */ >+ return ret; >+ } else if (ret != 0) { >+ /* null intersection with scissor */ >+ continue; >+ } >+ } >+ if ((i >= sarea_priv->nbox - 1)) >+ done = 1; >+ >+ /* Add the buffer to the DMA queue */ >+ DMAADVANCE(dev_priv, done); >+ >+ } while (++i < sarea_priv->nbox); >+ >+ if (!done) { >+ if (copy_buf->pending) { >+ DMADISCARDBUF(); >+ } else { >+ /* This buffer wasn't used (no cliprects), so place it >+ * back on the free list >+ */ >+ mach64_freelist_put(dev_priv, copy_buf); >+ } >+ } >+ >+_vertex_done: >+ sarea_priv->dirty &= ~MACH64_UPLOAD_CLIPRECTS; >+ sarea_priv->nbox = 0; >+ >+ return verify_ret; >+} >+ >+static __inline__ int copy_from_user_blit(u32 *to, >+ const u32 __user *ufrom, >+ unsigned long bytes) >+{ >+ to = (u32 *)((char *)to + MACH64_HOSTDATA_BLIT_OFFSET); >+ >+ if (DRM_COPY_FROM_USER(to, ufrom, bytes)) { >+ return -EFAULT; >+ } >+ >+ return 0; >+} >+ >+static int mach64_dma_dispatch_blit(struct drm_device * dev, >+ struct drm_file 
*file_priv, >+ drm_mach64_blit_t * blit) >+{ >+ drm_mach64_private_t *dev_priv = dev->dev_private; >+ int dword_shift, dwords; >+ unsigned long used; >+ struct drm_buf *copy_buf; >+ int verify_ret = 0; >+ DMALOCALS; >+ >+ /* The compiler won't optimize away a division by a variable, >+ * even if the only legal values are powers of two. Thus, we'll >+ * use a shift instead. >+ */ >+ switch (blit->format) { >+ case MACH64_DATATYPE_ARGB8888: >+ dword_shift = 0; >+ break; >+ case MACH64_DATATYPE_ARGB1555: >+ case MACH64_DATATYPE_RGB565: >+ case MACH64_DATATYPE_VYUY422: >+ case MACH64_DATATYPE_YVYU422: >+ case MACH64_DATATYPE_ARGB4444: >+ dword_shift = 1; >+ break; >+ case MACH64_DATATYPE_CI8: >+ case MACH64_DATATYPE_RGB8: >+ dword_shift = 2; >+ break; >+ default: >+ DRM_ERROR("invalid blit format %d\n", blit->format); >+ return -EINVAL; >+ } >+ >+ /* Set buf->used to the bytes of blit data based on the blit dimensions >+ * and verify the size. When the setup is emitted to the buffer with >+ * the DMA* macros below, buf->used is incremented to include the bytes >+ * used for setup as well as the blit data. >+ */ >+ dwords = (blit->width * blit->height) >> dword_shift; >+ used = dwords << 2; >+ if (used <= 0 || >+ used > MACH64_BUFFER_SIZE - MACH64_HOSTDATA_BLIT_OFFSET) { >+ DRM_ERROR("Invalid blit size: %lu bytes\n", used); >+ return -EINVAL; >+ } >+ >+ copy_buf = mach64_freelist_get(dev_priv); >+ if (copy_buf == NULL) { >+ DRM_ERROR("%s: couldn't get buffer\n", __FUNCTION__); >+ return -EAGAIN; >+ } >+ >+ /* Copy the blit data from userspace. >+ * >+ * XXX: This is overkill. The most efficient solution would be having >+ * two sets of buffers (one set private for vertex data, the other set >+ * client-writable for blits). However, that would bring more complexity >+ * and would break backward compatibility. 
The solution currently >+ * implemented is keeping all buffers private, making it possible to secure the >+ * driver, without increasing complexity at the expense of some speed >+ * transferring data. >+ */ >+ verify_ret = copy_from_user_blit(GETBUFPTR(copy_buf), blit->buf, used); >+ >+ if (verify_ret != 0) { >+ mach64_freelist_put(dev_priv, copy_buf); >+ goto _blit_done; >+ } >+ >+ copy_buf->used = used; >+ >+ /* FIXME: Use a last buffer flag and reduce the state emitted for subsequent, >+ * continuation buffers? >+ */ >+ >+ /* Blit via BM_HOSTDATA (gui-master) - like HOST_DATA[0-15], but doesn't require >+ * a register command every 16 dwords. State setup is added at the start of the >+ * buffer -- the client leaves space for this based on MACH64_HOSTDATA_BLIT_OFFSET >+ */ >+ DMASETPTR(copy_buf); >+ >+ DMAOUTREG(MACH64_Z_CNTL, 0); >+ DMAOUTREG(MACH64_SCALE_3D_CNTL, 0); >+ >+ DMAOUTREG(MACH64_SC_LEFT_RIGHT, 0 | (8191 << 16)); /* no scissor */ >+ DMAOUTREG(MACH64_SC_TOP_BOTTOM, 0 | (16383 << 16)); >+ >+ DMAOUTREG(MACH64_CLR_CMP_CNTL, 0); /* disable */ >+ DMAOUTREG(MACH64_GUI_TRAJ_CNTL, >+ MACH64_DST_X_LEFT_TO_RIGHT | MACH64_DST_Y_TOP_TO_BOTTOM); >+ >+ DMAOUTREG(MACH64_DP_PIX_WIDTH, (blit->format << 0) /* dst pix width */ >+ |(blit->format << 4) /* composite pix width */ >+ |(blit->format << 8) /* src pix width */ >+ |(blit->format << 16) /* host data pix width */ >+ |(blit->format << 28) /* scaler/3D pix width */ >+ ); >+ >+ DMAOUTREG(MACH64_DP_WRITE_MASK, 0xffffffff); /* enable all planes */ >+ DMAOUTREG(MACH64_DP_MIX, MACH64_BKGD_MIX_D | MACH64_FRGD_MIX_S); >+ DMAOUTREG(MACH64_DP_SRC, >+ MACH64_BKGD_SRC_BKGD_CLR >+ | MACH64_FRGD_SRC_HOST | MACH64_MONO_SRC_ONE); >+ >+ DMAOUTREG(MACH64_DST_OFF_PITCH, >+ (blit->pitch << 22) | (blit->offset >> 3)); >+ DMAOUTREG(MACH64_DST_X_Y, (blit->y << 16) | blit->x); >+ DMAOUTREG(MACH64_DST_WIDTH_HEIGHT, (blit->height << 16) | blit->width); >+ >+ DRM_DEBUG("%s: %lu bytes\n", __FUNCTION__, used); >+ >+ /* Add the buffer to the queue */ >+ 
DMAADVANCEHOSTDATA(dev_priv); >+ >+_blit_done: >+ return verify_ret; >+} >+ >+/* ================================================================ >+ * IOCTL functions >+ */ >+ >+int mach64_dma_clear(struct drm_device *dev, void *data, >+ struct drm_file *file_priv) >+{ >+ drm_mach64_private_t *dev_priv = dev->dev_private; >+ drm_mach64_sarea_t *sarea_priv = dev_priv->sarea_priv; >+ drm_mach64_clear_t *clear = data; >+ int ret; >+ >+ DRM_DEBUG("%s: pid=%d\n", __FUNCTION__, DRM_CURRENTPID); >+ >+ LOCK_TEST_WITH_RETURN(dev, file_priv); >+ >+ if (sarea_priv->nbox > MACH64_NR_SAREA_CLIPRECTS) >+ sarea_priv->nbox = MACH64_NR_SAREA_CLIPRECTS; >+ >+ ret = mach64_dma_dispatch_clear(dev, file_priv, clear->flags, >+ clear->x, clear->y, clear->w, clear->h, >+ clear->clear_color, >+ clear->clear_depth); >+ >+ /* Make sure we restore the 3D state next time. >+ */ >+ sarea_priv->dirty |= (MACH64_UPLOAD_CONTEXT | MACH64_UPLOAD_MISC); >+ return ret; >+} >+ >+int mach64_dma_swap(struct drm_device *dev, void *data, >+ struct drm_file *file_priv) >+{ >+ drm_mach64_private_t *dev_priv = dev->dev_private; >+ drm_mach64_sarea_t *sarea_priv = dev_priv->sarea_priv; >+ int ret; >+ >+ DRM_DEBUG("%s: pid=%d\n", __FUNCTION__, DRM_CURRENTPID); >+ >+ LOCK_TEST_WITH_RETURN(dev, file_priv); >+ >+ if (sarea_priv->nbox > MACH64_NR_SAREA_CLIPRECTS) >+ sarea_priv->nbox = MACH64_NR_SAREA_CLIPRECTS; >+ >+ ret = mach64_dma_dispatch_swap(dev, file_priv); >+ >+ /* Make sure we restore the 3D state next time. 
>+ */ >+ sarea_priv->dirty |= (MACH64_UPLOAD_CONTEXT | MACH64_UPLOAD_MISC); >+ return ret; >+} >+ >+int mach64_dma_vertex(struct drm_device *dev, void *data, >+ struct drm_file *file_priv) >+{ >+ drm_mach64_private_t *dev_priv = dev->dev_private; >+ drm_mach64_sarea_t *sarea_priv = dev_priv->sarea_priv; >+ drm_mach64_vertex_t *vertex = data; >+ >+ LOCK_TEST_WITH_RETURN(dev, file_priv); >+ >+ if (!dev_priv) { >+ DRM_ERROR("%s called with no initialization\n", __FUNCTION__); >+ return -EINVAL; >+ } >+ >+ DRM_DEBUG("%s: pid=%d buf=%p used=%lu discard=%d\n", >+ __FUNCTION__, DRM_CURRENTPID, >+ vertex->buf, vertex->used, vertex->discard); >+ >+ if (vertex->prim < 0 || vertex->prim > MACH64_PRIM_POLYGON) { >+ DRM_ERROR("buffer prim %d\n", vertex->prim); >+ return -EINVAL; >+ } >+ >+ if (vertex->used > MACH64_BUFFER_SIZE || (vertex->used & 3) != 0) { >+ DRM_ERROR("Invalid vertex buffer size: %lu bytes\n", >+ vertex->used); >+ return -EINVAL; >+ } >+ >+ if (sarea_priv->nbox > MACH64_NR_SAREA_CLIPRECTS) >+ sarea_priv->nbox = MACH64_NR_SAREA_CLIPRECTS; >+ >+ return mach64_dma_dispatch_vertex(dev, file_priv, vertex); >+} >+ >+int mach64_dma_blit(struct drm_device *dev, void *data, >+ struct drm_file *file_priv) >+{ >+ drm_mach64_private_t *dev_priv = dev->dev_private; >+ drm_mach64_sarea_t *sarea_priv = dev_priv->sarea_priv; >+ drm_mach64_blit_t *blit = data; >+ int ret; >+ >+ LOCK_TEST_WITH_RETURN(dev, file_priv); >+ >+ ret = mach64_dma_dispatch_blit(dev, file_priv, blit); >+ >+ /* Make sure we restore the 3D state next time. 
>+ */ >+ sarea_priv->dirty |= (MACH64_UPLOAD_CONTEXT | >+ MACH64_UPLOAD_MISC | MACH64_UPLOAD_CLIPRECTS); >+ >+ return ret; >+} >+ >+int mach64_get_param(struct drm_device *dev, void *data, >+ struct drm_file *file_priv) >+{ >+ drm_mach64_private_t *dev_priv = dev->dev_private; >+ drm_mach64_getparam_t *param = data; >+ int value; >+ >+ DRM_DEBUG("%s\n", __FUNCTION__); >+ >+ if (!dev_priv) { >+ DRM_ERROR("%s called with no initialization\n", __FUNCTION__); >+ return -EINVAL; >+ } >+ >+ switch (param->param) { >+ case MACH64_PARAM_FRAMES_QUEUED: >+ /* Needs lock since it calls mach64_ring_tick() */ >+ LOCK_TEST_WITH_RETURN(dev, file_priv); >+ value = mach64_do_get_frames_queued(dev_priv); >+ break; >+ case MACH64_PARAM_IRQ_NR: >+ value = dev->irq; >+ break; >+ default: >+ return -EINVAL; >+ } >+ >+ if (DRM_COPY_TO_USER(param->value, &value, sizeof(int))) { >+ DRM_ERROR("copy_to_user\n"); >+ return -EFAULT; >+ } >+ >+ return 0; >+} >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/Makefile linux-2.6.23.i686/drivers/char/drm/Makefile >--- linux-2.6.23.i686.orig/drivers/char/drm/Makefile 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/Makefile 2008-01-06 09:24:57.000000000 +0100 >@@ -1,33 +1,46 @@ > # > # Makefile for the drm device driver. This driver provides support for the > # Direct Rendering Infrastructure (DRI) in XFree86 4.1.0 and higher. >+# >+# Based on David Woodhouse's mtd build. 
>+# >+# $XFree86: xc/programs/Xserver/hw/xfree86/os-support/linux/drm/kernel/Makefile.kernel,v 1.18 2003/08/16 17:59:17 dawes Exp $ >+# > >-drm-objs := drm_auth.o drm_bufs.o drm_context.o drm_dma.o drm_drawable.o \ >+drm-objs := drm_auth.o drm_bufs.o drm_context.o drm_dma.o drm_drawable.o \ > drm_drv.o drm_fops.o drm_ioctl.o drm_irq.o \ > drm_lock.o drm_memory.o drm_proc.o drm_stub.o drm_vm.o \ >- drm_agpsupport.o drm_scatter.o ati_pcigart.o drm_pci.o \ >- drm_sysfs.o drm_hashtab.o drm_sman.o drm_mm.o >- >+ drm_sysfs.o drm_pci.o drm_agpsupport.o drm_scatter.o \ >+ drm_memory_debug.o ati_pcigart.o drm_sman.o \ >+ drm_hashtab.o drm_mm.o drm_object.o drm_compat.o \ >+ drm_fence.o drm_ttm.o drm_bo.o drm_bo_move.o drm_bo_lock.o \ >+ drm_regman.o > tdfx-objs := tdfx_drv.o > r128-objs := r128_drv.o r128_cce.o r128_state.o r128_irq.o > mga-objs := mga_drv.o mga_dma.o mga_state.o mga_warp.o mga_irq.o > i810-objs := i810_drv.o i810_dma.o >-i830-objs := i830_drv.o i830_dma.o i830_irq.o >-i915-objs := i915_drv.o i915_dma.o i915_irq.o i915_mem.o >+i915-objs := i915_drv.o i915_dma.o i915_irq.o i915_mem.o i915_fence.o \ >+ i915_buffer.o i915_compat.o > nouveau-objs := nouveau_drv.o nouveau_state.o nouveau_fifo.o nouveau_mem.o \ >- nouveau_object.o nouveau_irq.o nouveau_notifier.o \ >- nouveau_sgdma.o nouveau_dma.o \ >+ nouveau_object.o nouveau_irq.o nouveau_notifier.o nouveau_swmthd.o \ >+ nouveau_sgdma.o nouveau_dma.o nouveau_buffer.o nouveau_fence.o \ > nv04_timer.o \ > nv04_mc.o nv40_mc.o nv50_mc.o \ > nv04_fb.o nv10_fb.o nv40_fb.o \ > nv04_fifo.o nv10_fifo.o nv40_fifo.o nv50_fifo.o \ >- nv04_graph.o nv10_graph.o nv20_graph.o nv30_graph.o \ >+ nv04_graph.o nv10_graph.o nv20_graph.o \ > nv40_graph.o nv50_graph.o \ > nv04_instmem.o nv50_instmem.o > radeon-objs := radeon_drv.o radeon_cp.o radeon_state.o radeon_mem.o radeon_irq.o r300_cmdbuf.o > sis-objs := sis_drv.o sis_mm.o >+ffb-objs := ffb_drv.o ffb_context.o > savage-objs := savage_drv.o savage_bci.o savage_state.o >-via-objs 
:= via_irq.o via_drv.o via_map.o via_mm.o via_dma.o via_verifier.o via_video.o via_dmablit.o >+via-objs := via_irq.o via_drv.o via_map.o via_mm.o via_dma.o via_verifier.o \ >+ via_video.o via_dmablit.o via_fence.o via_buffer.o >+mach64-objs := mach64_drv.o mach64_dma.o mach64_irq.o mach64_state.o >+nv-objs := nv_drv.o >+xgi-objs := xgi_cmdlist.o xgi_drv.o xgi_fb.o xgi_misc.o xgi_pcie.o \ >+ xgi_fence.o > > ifeq ($(CONFIG_COMPAT),y) > drm-objs += drm_ioc32.o >@@ -35,7 +48,8 @@ radeon-objs += radeon_ioc32.o > mga-objs += mga_ioc32.o > r128-objs += r128_ioc32.o > i915-objs += i915_ioc32.o >-nouveau-objs += nouveau_ioc32.o >+nouveau-objs += nouveau_ioc32.o >+xgi-objs += xgi_ioc32.o > endif > > obj-$(CONFIG_DRM) += drm.o >@@ -44,11 +58,13 @@ obj-$(CONFIG_DRM_R128) += r128.o > obj-$(CONFIG_DRM_RADEON)+= radeon.o > obj-$(CONFIG_DRM_MGA) += mga.o > obj-$(CONFIG_DRM_I810) += i810.o >-obj-$(CONFIG_DRM_I830) += i830.o >-obj-$(CONFIG_DRM_I915) += i915.o >-obj-$(CONFIG_DRM_NOUVEAU) += nouveau.o >+obj-$(CONFIG_DRM_I915) += i915.o >+obj-$(CONFIG_DRM_NOUVEAU) += nouveau.o > obj-$(CONFIG_DRM_SIS) += sis.o >+obj-$(CONFIG_DRM_FFB) += ffb.o > obj-$(CONFIG_DRM_SAVAGE)+= savage.o >-obj-$(CONFIG_DRM_VIA) +=via.o >- >- >+obj-$(CONFIG_DRM_VIA) += via.o >+obj-$(CONFIG_DRM_MACH64)+= mach64.o >+obj-$(CONFIG_DRM_NV) += nv.o >+obj-$(CONFIG_DRM_NOUVEAU) += nouveau.o >+obj-$(CONFIG_DRM_XGI) += xgi.o >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/mga_dma.c linux-2.6.23.i686/drivers/char/drm/mga_dma.c >--- linux-2.6.23.i686.orig/drivers/char/drm/mga_dma.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/mga_dma.c 2008-01-06 09:24:57.000000000 +0100 >@@ -1,7 +1,7 @@ > /* mga_dma.c -- DMA support for mga g200/g400 -*- linux-c -*- > * Created: Mon Dec 13 01:50:01 1999 by jhartmann@precisioninsight.com >- * >- * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas. >+ */ >+/* Copyright 1999 Precision Insight, Inc., Cedar Park, Texas. 
> * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California. > * All Rights Reserved. > * >@@ -44,8 +44,8 @@ > #define MGA_DEFAULT_USEC_TIMEOUT 10000 > #define MGA_FREELIST_DEBUG 0 > >-#define MINIMAL_CLEANUP 0 >-#define FULL_CLEANUP 1 >+#define MINIMAL_CLEANUP 0 >+#define FULL_CLEANUP 1 > static int mga_do_cleanup_dma(struct drm_device *dev, int full_cleanup); > > /* ================================================================ >@@ -313,7 +313,7 @@ static void mga_freelist_cleanup(struct > */ > static void mga_freelist_reset(struct drm_device * dev) > { >- struct drm_device_dma *dma = dev->dma; >+ drm_device_dma_t *dma = dev->dma; > struct drm_buf *buf; > drm_mga_buf_priv_t *buf_priv; > int i; >@@ -393,7 +393,7 @@ int mga_freelist_put(struct drm_device * > * DMA initialization, cleanup > */ > >-int mga_driver_load(struct drm_device * dev, unsigned long flags) >+int mga_driver_load(struct drm_device *dev, unsigned long flags) > { > drm_mga_private_t *dev_priv; > >@@ -418,7 +418,6 @@ int mga_driver_load(struct drm_device * > return 0; > } > >-#if __OS_HAS_AGP > /** > * Bootstrap the driver for AGP DMA. 
> * >@@ -434,16 +433,16 @@ int mga_driver_load(struct drm_device * > * > * \sa mga_do_dma_bootstrap, mga_do_pci_dma_bootstrap > */ >-static int mga_do_agp_dma_bootstrap(struct drm_device * dev, >+static int mga_do_agp_dma_bootstrap(struct drm_device *dev, > drm_mga_dma_bootstrap_t * dma_bs) > { > drm_mga_private_t *const dev_priv = >- (drm_mga_private_t *) dev->dev_private; >+ (drm_mga_private_t *)dev->dev_private; > unsigned int warp_size = mga_warp_microcode_size(dev_priv); > int err; > unsigned offset; > const unsigned secondary_size = dma_bs->secondary_bin_count >- * dma_bs->secondary_bin_size; >+ * dma_bs->secondary_bin_size; > const unsigned agp_size = (dma_bs->agp_size << 20); > struct drm_buf_desc req; > struct drm_agp_mode mode; >@@ -493,13 +492,13 @@ static int mga_do_agp_dma_bootstrap(stru > dma_bs->agp_size); > return err; > } >- >+ > dev_priv->agp_size = agp_size; > dev_priv->agp_handle = agp_req.handle; > > bind_req.handle = agp_req.handle; > bind_req.offset = 0; >- err = drm_agp_bind(dev, &bind_req); >+ err = drm_agp_bind( dev, &bind_req ); > if (err) { > DRM_ERROR("Unable to bind AGP memory: %d\n", err); > return err; >@@ -521,7 +520,7 @@ static int mga_do_agp_dma_bootstrap(stru > > offset += warp_size; > err = drm_addmap(dev, offset, dma_bs->primary_size, >- _DRM_AGP, _DRM_READ_ONLY, &dev_priv->primary); >+ _DRM_AGP, _DRM_READ_ONLY, & dev_priv->primary); > if (err) { > DRM_ERROR("Unable to map primary DMA region: %d\n", err); > return err; >@@ -529,13 +528,13 @@ static int mga_do_agp_dma_bootstrap(stru > > offset += dma_bs->primary_size; > err = drm_addmap(dev, offset, secondary_size, >- _DRM_AGP, 0, &dev->agp_buffer_map); >+ _DRM_AGP, 0, & dev->agp_buffer_map); > if (err) { > DRM_ERROR("Unable to map secondary DMA region: %d\n", err); > return err; > } > >- (void)memset(&req, 0, sizeof(req)); >+ (void)memset( &req, 0, sizeof(req) ); > req.count = dma_bs->secondary_bin_count; > req.size = dma_bs->secondary_bin_size; > req.flags = _DRM_AGP_BUFFER; 
>@@ -547,10 +546,11 @@ static int mga_do_agp_dma_bootstrap(stru > return err; > } > >+#ifdef __linux__ > { > struct drm_map_list *_entry; > unsigned long agp_token = 0; >- >+ > list_for_each_entry(_entry, &dev->maplist, head) { > if (_entry->map == dev->agp_buffer_map) > agp_token = _entry->user_token; >@@ -560,12 +560,13 @@ static int mga_do_agp_dma_bootstrap(stru > > dev->agp_buffer_token = agp_token; > } >+#endif > > offset += secondary_size; > err = drm_addmap(dev, offset, agp_size - offset, >- _DRM_AGP, 0, &dev_priv->agp_textures); >+ _DRM_AGP, 0, & dev_priv->agp_textures); > if (err) { >- DRM_ERROR("Unable to map AGP texture region %d\n", err); >+ DRM_ERROR("Unable to map AGP texture region: %d\n", err); > return err; > } > >@@ -587,13 +588,6 @@ static int mga_do_agp_dma_bootstrap(stru > DRM_INFO("Initialized card for AGP DMA.\n"); > return 0; > } >-#else >-static int mga_do_agp_dma_bootstrap(struct drm_device * dev, >- drm_mga_dma_bootstrap_t * dma_bs) >-{ >- return -EINVAL; >-} >-#endif > > /** > * Bootstrap the driver for PCI DMA. 
>@@ -613,13 +607,14 @@ static int mga_do_pci_dma_bootstrap(stru > drm_mga_dma_bootstrap_t * dma_bs) > { > drm_mga_private_t *const dev_priv = >- (drm_mga_private_t *) dev->dev_private; >+ (drm_mga_private_t *) dev->dev_private; > unsigned int warp_size = mga_warp_microcode_size(dev_priv); > unsigned int primary_size; > unsigned int bin_count; > int err; > struct drm_buf_desc req; > >+ > if (dev->dma == NULL) { > DRM_ERROR("dev->dma is NULL\n"); > return -EFAULT; >@@ -646,7 +641,7 @@ static int mga_do_pci_dma_bootstrap(stru > */ > > for (primary_size = dma_bs->primary_size; primary_size != 0; >- primary_size >>= 1) { >+ primary_size >>= 1 ) { > /* The proper alignment for this mapping is 0x04 */ > err = drm_addmap(dev, 0, primary_size, _DRM_CONSISTENT, > _DRM_READ_ONLY, &dev_priv->primary); >@@ -667,7 +662,7 @@ static int mga_do_pci_dma_bootstrap(stru > } > > for (bin_count = dma_bs->secondary_bin_count; bin_count > 0; >- bin_count--) { >+ bin_count-- ) { > (void)memset(&req, 0, sizeof(req)); > req.count = bin_count; > req.size = dma_bs->secondary_bin_size; >@@ -699,13 +694,15 @@ static int mga_do_pci_dma_bootstrap(stru > return 0; > } > >-static int mga_do_dma_bootstrap(struct drm_device * dev, >- drm_mga_dma_bootstrap_t * dma_bs) >+ >+static int mga_do_dma_bootstrap(struct drm_device *dev, >+ drm_mga_dma_bootstrap_t *dma_bs) > { > const int is_agp = (dma_bs->agp_mode != 0) && drm_device_is_agp(dev); > int err; > drm_mga_private_t *const dev_priv = >- (drm_mga_private_t *) dev->dev_private; >+ (drm_mga_private_t *) dev->dev_private; >+ > > dev_priv->used_new_dma_init = 1; > >@@ -713,25 +710,28 @@ static int mga_do_dma_bootstrap(struct d > * the cards MMIO registers and map a status page. 
> */ > err = drm_addmap(dev, dev_priv->mmio_base, dev_priv->mmio_size, >- _DRM_REGISTERS, _DRM_READ_ONLY, &dev_priv->mmio); >+ _DRM_REGISTERS, _DRM_READ_ONLY, & dev_priv->mmio); > if (err) { > DRM_ERROR("Unable to map MMIO region: %d\n", err); > return err; > } > >+ > err = drm_addmap(dev, 0, SAREA_MAX, _DRM_SHM, > _DRM_READ_ONLY | _DRM_LOCKED | _DRM_KERNEL, >- &dev_priv->status); >+ & dev_priv->status); > if (err) { > DRM_ERROR("Unable to map status region: %d\n", err); > return err; > } > >+ > /* The DMA initialization procedure is slightly different for PCI and > * AGP cards. AGP cards just allocate a large block of AGP memory and > * carve off portions of it for internal uses. The remaining memory > * is returned to user-mode to be used for AGP textures. > */ >+ > if (is_agp) { > err = mga_do_agp_dma_bootstrap(dev, dma_bs); > } >@@ -744,6 +744,7 @@ static int mga_do_dma_bootstrap(struct d > mga_do_cleanup_dma(dev, MINIMAL_CLEANUP); > } > >+ > /* Not only do we want to try and initialized PCI cards for PCI DMA, > * but we also try to initialized AGP cards that could not be > * initialized for AGP DMA. 
This covers the case where we have an AGP >@@ -756,6 +757,7 @@ static int mga_do_dma_bootstrap(struct d > err = mga_do_pci_dma_bootstrap(dev, dma_bs); > } > >+ > return err; > } > >@@ -768,6 +770,7 @@ int mga_dma_bootstrap(struct drm_device > const drm_mga_private_t *const dev_priv = > (drm_mga_private_t *) dev->dev_private; > >+ > err = mga_do_dma_bootstrap(dev, bootstrap); > if (err) { > mga_do_cleanup_dma(dev, FULL_CLEANUP); >@@ -784,15 +787,17 @@ int mga_dma_bootstrap(struct drm_device > > bootstrap->agp_mode = modes[bootstrap->agp_mode & 0x07]; > >- return err; >+ return 0; > } > >+ > static int mga_do_init_dma(struct drm_device * dev, drm_mga_init_t * init) > { > drm_mga_private_t *dev_priv; > int ret; > DRM_DEBUG("\n"); > >+ > dev_priv = dev->dev_private; > > if (init->sgram) { >@@ -850,7 +855,7 @@ static int mga_do_init_dma(struct drm_de > } > dev->agp_buffer_token = init->buffers_offset; > dev->agp_buffer_map = >- drm_core_findmap(dev, init->buffers_offset); >+ drm_core_findmap(dev, init->buffers_offset); > if (!dev->agp_buffer_map) { > DRM_ERROR("failed to find dma buffer region!\n"); > return -EINVAL; >@@ -875,14 +880,14 @@ static int mga_do_init_dma(struct drm_de > } > > ret = mga_warp_install_microcode(dev_priv); >- if (ret < 0) { >- DRM_ERROR("failed to install WARP ucode!: %d\n", ret); >+ if (ret != 0) { >+ DRM_ERROR("failed to install WARP ucode: %d!\n", ret); > return ret; > } > > ret = mga_warp_init(dev_priv); >- if (ret < 0) { >- DRM_ERROR("failed to init WARP engine!: %d\n", ret); >+ if (ret != 0) { >+ DRM_ERROR("failed to init WARP engine: %d!\n", ret); > return ret; > } > >@@ -893,10 +898,6 @@ static int mga_do_init_dma(struct drm_de > /* Init the primary DMA registers. 
> */ > MGA_WRITE(MGA_PRIMADDRESS, dev_priv->primary->offset | MGA_DMA_GENERAL); >-#if 0 >- MGA_WRITE(MGA_PRIMPTR, virt_to_bus((void *)dev_priv->prim.status) | MGA_PRIMPTREN0 | /* Soft trap, SECEND, SETUPEND */ >- MGA_PRIMPTREN1); /* DWGSYNC */ >-#endif > > dev_priv->prim.start = (u8 *) dev_priv->primary->handle; > dev_priv->prim.end = ((u8 *) dev_priv->primary->handle >@@ -954,7 +955,6 @@ static int mga_do_cleanup_dma(struct drm > drm_core_ioremapfree(dev->agp_buffer_map, dev); > > if (dev_priv->used_new_dma_init) { >-#if __OS_HAS_AGP > if (dev_priv->agp_handle != 0) { > struct drm_agp_binding unbind_req; > struct drm_agp_buffer free_req; >@@ -964,7 +964,7 @@ static int mga_do_cleanup_dma(struct drm > > free_req.handle = dev_priv->agp_handle; > drm_agp_free(dev, &free_req); >- >+ > dev_priv->agp_textures = NULL; > dev_priv->agp_size = 0; > dev_priv->agp_handle = 0; >@@ -973,7 +973,6 @@ static int mga_do_cleanup_dma(struct drm > if ((dev->agp != NULL) && dev->agp->acquired) { > err = drm_agp_release(dev); > } >-#endif > } > > dev_priv->warp = NULL; >@@ -998,7 +997,7 @@ static int mga_do_cleanup_dma(struct drm > } > } > >- return 0; >+ return err; > } > > int mga_dma_init(struct drm_device *dev, void *data, >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/mga_drm.h linux-2.6.23.i686/drivers/char/drm/mga_drm.h >--- linux-2.6.23.i686.orig/drivers/char/drm/mga_drm.h 2007-10-09 22:31:38.000000000 +0200 >+++ linux-2.6.23.i686/drivers/char/drm/mga_drm.h 2008-01-06 09:24:57.000000000 +0100 >@@ -73,7 +73,7 @@ > > #define MGA_CARD_TYPE_G200 1 > #define MGA_CARD_TYPE_G400 2 >-#define MGA_CARD_TYPE_G450 3 /* not currently used */ >+#define MGA_CARD_TYPE_G450 3 /* not currently used */ > #define MGA_CARD_TYPE_G550 4 > > #define MGA_FRONT 0x1 >@@ -224,6 +224,7 @@ typedef struct _drm_mga_sarea { > int ctxOwner; > } drm_mga_sarea_t; > >+ > /* MGA specific ioctls > * The device specific ioctl range is 0x40 to 0x79. 
> */ >@@ -245,6 +246,7 @@ typedef struct _drm_mga_sarea { > #define DRM_MGA_WAIT_FENCE 0x0b > #define DRM_MGA_DMA_BOOTSTRAP 0x0c > >+ > #define DRM_IOCTL_MGA_INIT DRM_IOW( DRM_COMMAND_BASE + DRM_MGA_INIT, drm_mga_init_t) > #define DRM_IOCTL_MGA_FLUSH DRM_IOW( DRM_COMMAND_BASE + DRM_MGA_FLUSH, drm_lock_t) > #define DRM_IOCTL_MGA_RESET DRM_IO( DRM_COMMAND_BASE + DRM_MGA_RESET) >@@ -296,6 +298,7 @@ typedef struct drm_mga_init { > unsigned long buffers_offset; > } drm_mga_init_t; > >+ > typedef struct drm_mga_dma_bootstrap { > /** > * \name AGP texture region >@@ -308,10 +311,11 @@ typedef struct drm_mga_dma_bootstrap { > * is zero, it means that PCI memory (most likely through the use of > * an IOMMU) is being used for "AGP" textures. > */ >- /*@{ */ >- unsigned long texture_handle; /**< Handle used to map AGP textures. */ >- uint32_t texture_size; /**< Size of the AGP texture region. */ >- /*@} */ >+ /*@{*/ >+ unsigned long texture_handle; /**< Handle used to map AGP textures. */ >+ uint32_t texture_size; /**< Size of the AGP texture region. */ >+ /*@}*/ >+ > > /** > * Requested size of the primary DMA region. >@@ -321,6 +325,7 @@ typedef struct drm_mga_dma_bootstrap { > */ > uint32_t primary_size; > >+ > /** > * Requested number of secondary DMA buffers. > * >@@ -331,6 +336,7 @@ typedef struct drm_mga_dma_bootstrap { > */ > uint32_t secondary_bin_count; > >+ > /** > * Requested size of each secondary DMA buffer. > * >@@ -340,6 +346,7 @@ typedef struct drm_mga_dma_bootstrap { > */ > uint32_t secondary_bin_size; > >+ > /** > * Bit-wise mask of AGPSTAT2_* values. Currently only \c AGPSTAT2_1X, > * \c AGPSTAT2_2X, and \c AGPSTAT2_4X are supported. If this value is >@@ -352,6 +359,7 @@ typedef struct drm_mga_dma_bootstrap { > */ > uint32_t agp_mode; > >+ > /** > * Desired AGP GART size, measured in megabytes. 
> */ >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/mga_drv.c linux-2.6.23.i686/drivers/char/drm/mga_drv.c >--- linux-2.6.23.i686.orig/drivers/char/drm/mga_drv.c 2007-10-09 22:31:38.000000000 +0200 >+++ linux-2.6.23.i686/drivers/char/drm/mga_drv.c 2008-01-06 09:24:57.000000000 +0100 >@@ -42,12 +42,13 @@ static struct pci_device_id pciidlist[] > mga_PCI_IDS > }; > >+static int probe(struct pci_dev *pdev, const struct pci_device_id *ent); > static struct drm_driver driver = { > .driver_features = > DRIVER_USE_AGP | DRIVER_USE_MTRR | DRIVER_PCI_DMA | > DRIVER_HAVE_DMA | DRIVER_HAVE_IRQ | DRIVER_IRQ_SHARED | > DRIVER_IRQ_VBL, >- .dev_priv_size = sizeof(drm_mga_buf_priv_t), >+ .dev_priv_size = sizeof (drm_mga_buf_priv_t), > .load = mga_driver_load, > .unload = mga_driver_unload, > .lastclose = mga_driver_lastclose, >@@ -64,20 +65,22 @@ static struct drm_driver driver = { > .ioctls = mga_ioctls, > .dma_ioctl = mga_dma_buffers, > .fops = { >- .owner = THIS_MODULE, >- .open = drm_open, >- .release = drm_release, >- .ioctl = drm_ioctl, >- .mmap = drm_mmap, >- .poll = drm_poll, >- .fasync = drm_fasync, >-#ifdef CONFIG_COMPAT >- .compat_ioctl = mga_compat_ioctl, >+ .owner = THIS_MODULE, >+ .open = drm_open, >+ .release = drm_release, >+ .ioctl = drm_ioctl, >+ .mmap = drm_mmap, >+ .poll = drm_poll, >+ .fasync = drm_fasync, >+#if defined(CONFIG_COMPAT) && LINUX_VERSION_CODE > KERNEL_VERSION(2,6,9) >+ .compat_ioctl = mga_compat_ioctl, > #endif >- }, >+ }, > .pci_driver = { >- .name = DRIVER_NAME, >- .id_table = pciidlist, >+ .name = DRIVER_NAME, >+ .id_table = pciidlist, >+ .probe = probe, >+ .remove = __devexit_p(drm_cleanup_pci), > }, > > .name = DRIVER_NAME, >@@ -88,10 +91,16 @@ static struct drm_driver driver = { > .patchlevel = DRIVER_PATCHLEVEL, > }; > >+static int probe(struct pci_dev *pdev, const struct pci_device_id *ent) >+{ >+ return drm_get_dev(pdev, ent, &driver); >+} >+ >+ > static int __init mga_init(void) > { > driver.num_ioctls = mga_max_ioctl; >- return 
drm_init(&driver); >+ return drm_init(&driver, pciidlist); > } > > static void __exit mga_exit(void) >@@ -120,7 +129,8 @@ MODULE_LICENSE("GPL and additional right > */ > static int mga_driver_device_is_agp(struct drm_device * dev) > { >- const struct pci_dev *const pdev = dev->pdev; >+ const struct pci_dev * const pdev = dev->pdev; >+ > > /* There are PCI versions of the G450. These cards have the > * same PCI ID as the AGP G450, but have an additional PCI-to-PCI >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/mga_drv.h linux-2.6.23.i686/drivers/char/drm/mga_drv.h >--- linux-2.6.23.i686.orig/drivers/char/drm/mga_drv.h 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/mga_drv.h 2008-01-06 09:24:57.000000000 +0100 >@@ -38,11 +38,11 @@ > > #define DRIVER_NAME "mga" > #define DRIVER_DESC "Matrox G200/G400" >-#define DRIVER_DATE "20051102" >+#define DRIVER_DATE "20060319" > > #define DRIVER_MAJOR 3 > #define DRIVER_MINOR 2 >-#define DRIVER_PATCHLEVEL 1 >+#define DRIVER_PATCHLEVEL 2 > > typedef struct drm_mga_primary_buffer { > u8 *start; >@@ -112,10 +112,10 @@ typedef struct drm_mga_private { > * > * \sa drm_mga_private_t::mmio > */ >- /*@{ */ >- u32 mmio_base; /**< Bus address of base of MMIO. */ >- u32 mmio_size; /**< Size of the MMIO region. */ >- /*@} */ >+ /*@{*/ >+ u32 mmio_base; /**< Bus address of base of MMIO. */ >+ u32 mmio_size; /**< Size of the MMIO region. 
*/ >+ /*@}*/ > > u32 clear_cmd; > u32 maccess; >@@ -216,8 +216,8 @@ static inline u32 _MGA_READ(u32 * addr) > #define MGA_WRITE( reg, val ) DRM_WRITE32(dev_priv->mmio, (reg), (val)) > #endif > >-#define DWGREG0 0x1c00 >-#define DWGREG0_END 0x1dff >+#define DWGREG0 0x1c00 >+#define DWGREG0_END 0x1dff > #define DWGREG1 0x2c00 > #define DWGREG1_END 0x2dff > >@@ -394,22 +394,22 @@ do { \ > #define MGA_VINTCLR (1 << 4) > #define MGA_VINTEN (1 << 5) > >-#define MGA_ALPHACTRL 0x2c7c >-#define MGA_AR0 0x1c60 >-#define MGA_AR1 0x1c64 >-#define MGA_AR2 0x1c68 >-#define MGA_AR3 0x1c6c >-#define MGA_AR4 0x1c70 >-#define MGA_AR5 0x1c74 >-#define MGA_AR6 0x1c78 >+#define MGA_ALPHACTRL 0x2c7c >+#define MGA_AR0 0x1c60 >+#define MGA_AR1 0x1c64 >+#define MGA_AR2 0x1c68 >+#define MGA_AR3 0x1c6c >+#define MGA_AR4 0x1c70 >+#define MGA_AR5 0x1c74 >+#define MGA_AR6 0x1c78 > > #define MGA_CXBNDRY 0x1c80 >-#define MGA_CXLEFT 0x1ca0 >+#define MGA_CXLEFT 0x1ca0 > #define MGA_CXRIGHT 0x1ca4 > >-#define MGA_DMAPAD 0x1c54 >-#define MGA_DSTORG 0x2cb8 >-#define MGA_DWGCTL 0x1c00 >+#define MGA_DMAPAD 0x1c54 >+#define MGA_DSTORG 0x2cb8 >+#define MGA_DWGCTL 0x1c00 > # define MGA_OPCOD_MASK (15 << 0) > # define MGA_OPCOD_TRAP (4 << 0) > # define MGA_OPCOD_TEXTURE_TRAP (6 << 0) >@@ -455,27 +455,27 @@ do { \ > # define MGA_CLIPDIS (1 << 31) > #define MGA_DWGSYNC 0x2c4c > >-#define MGA_FCOL 0x1c24 >-#define MGA_FIFOSTATUS 0x1e10 >-#define MGA_FOGCOL 0x1cf4 >+#define MGA_FCOL 0x1c24 >+#define MGA_FIFOSTATUS 0x1e10 >+#define MGA_FOGCOL 0x1cf4 > #define MGA_FXBNDRY 0x1c84 >-#define MGA_FXLEFT 0x1ca8 >+#define MGA_FXLEFT 0x1ca8 > #define MGA_FXRIGHT 0x1cac > >-#define MGA_ICLEAR 0x1e18 >+#define MGA_ICLEAR 0x1e18 > # define MGA_SOFTRAPICLR (1 << 0) > # define MGA_VLINEICLR (1 << 5) >-#define MGA_IEN 0x1e1c >+#define MGA_IEN 0x1e1c > # define MGA_SOFTRAPIEN (1 << 0) > # define MGA_VLINEIEN (1 << 5) > >-#define MGA_LEN 0x1c5c >+#define MGA_LEN 0x1c5c > > #define MGA_MACCESS 0x1c04 > >-#define MGA_PITCH 0x1c8c 
>-#define MGA_PLNWT 0x1c1c >-#define MGA_PRIMADDRESS 0x1e58 >+#define MGA_PITCH 0x1c8c >+#define MGA_PLNWT 0x1c1c >+#define MGA_PRIMADDRESS 0x1e58 > # define MGA_DMA_GENERAL (0 << 0) > # define MGA_DMA_BLIT (1 << 0) > # define MGA_DMA_VECTOR (2 << 0) >@@ -487,43 +487,43 @@ do { \ > # define MGA_PRIMPTREN0 (1 << 0) > # define MGA_PRIMPTREN1 (1 << 1) > >-#define MGA_RST 0x1e40 >+#define MGA_RST 0x1e40 > # define MGA_SOFTRESET (1 << 0) > # define MGA_SOFTEXTRST (1 << 1) > >-#define MGA_SECADDRESS 0x2c40 >-#define MGA_SECEND 0x2c44 >-#define MGA_SETUPADDRESS 0x2cd0 >-#define MGA_SETUPEND 0x2cd4 >+#define MGA_SECADDRESS 0x2c40 >+#define MGA_SECEND 0x2c44 >+#define MGA_SETUPADDRESS 0x2cd0 >+#define MGA_SETUPEND 0x2cd4 > #define MGA_SGN 0x1c58 > #define MGA_SOFTRAP 0x2c48 >-#define MGA_SRCORG 0x2cb4 >+#define MGA_SRCORG 0x2cb4 > # define MGA_SRMMAP_MASK (1 << 0) > # define MGA_SRCMAP_FB (0 << 0) > # define MGA_SRCMAP_SYSMEM (1 << 0) > # define MGA_SRCACC_MASK (1 << 1) > # define MGA_SRCACC_PCI (0 << 1) > # define MGA_SRCACC_AGP (1 << 1) >-#define MGA_STATUS 0x1e14 >+#define MGA_STATUS 0x1e14 > # define MGA_SOFTRAPEN (1 << 0) > # define MGA_VSYNCPEN (1 << 4) > # define MGA_VLINEPEN (1 << 5) > # define MGA_DWGENGSTS (1 << 16) > # define MGA_ENDPRDMASTS (1 << 17) > #define MGA_STENCIL 0x2cc8 >-#define MGA_STENCILCTL 0x2ccc >+#define MGA_STENCILCTL 0x2ccc > >-#define MGA_TDUALSTAGE0 0x2cf8 >-#define MGA_TDUALSTAGE1 0x2cfc >-#define MGA_TEXBORDERCOL 0x2c5c >-#define MGA_TEXCTL 0x2c30 >+#define MGA_TDUALSTAGE0 0x2cf8 >+#define MGA_TDUALSTAGE1 0x2cfc >+#define MGA_TEXBORDERCOL 0x2c5c >+#define MGA_TEXCTL 0x2c30 > #define MGA_TEXCTL2 0x2c3c > # define MGA_DUALTEX (1 << 7) > # define MGA_G400_TC2_MAGIC (1 << 15) > # define MGA_MAP1_ENABLE (1 << 31) >-#define MGA_TEXFILTER 0x2c58 >-#define MGA_TEXHEIGHT 0x2c2c >-#define MGA_TEXORG 0x2c24 >+#define MGA_TEXFILTER 0x2c58 >+#define MGA_TEXHEIGHT 0x2c2c >+#define MGA_TEXORG 0x2c24 > # define MGA_TEXORGMAP_MASK (1 << 0) > # define 
MGA_TEXORGMAP_FB (0 << 0) > # define MGA_TEXORGMAP_SYSMEM (1 << 0) >@@ -534,45 +534,45 @@ do { \ > #define MGA_TEXORG2 0x2ca8 > #define MGA_TEXORG3 0x2cac > #define MGA_TEXORG4 0x2cb0 >-#define MGA_TEXTRANS 0x2c34 >-#define MGA_TEXTRANSHIGH 0x2c38 >-#define MGA_TEXWIDTH 0x2c28 >- >-#define MGA_WACCEPTSEQ 0x1dd4 >-#define MGA_WCODEADDR 0x1e6c >-#define MGA_WFLAG 0x1dc4 >-#define MGA_WFLAG1 0x1de0 >+#define MGA_TEXTRANS 0x2c34 >+#define MGA_TEXTRANSHIGH 0x2c38 >+#define MGA_TEXWIDTH 0x2c28 >+ >+#define MGA_WACCEPTSEQ 0x1dd4 >+#define MGA_WCODEADDR 0x1e6c >+#define MGA_WFLAG 0x1dc4 >+#define MGA_WFLAG1 0x1de0 > #define MGA_WFLAGNB 0x1e64 >-#define MGA_WFLAGNB1 0x1e08 >+#define MGA_WFLAGNB1 0x1e08 > #define MGA_WGETMSB 0x1dc8 >-#define MGA_WIADDR 0x1dc0 >+#define MGA_WIADDR 0x1dc0 > #define MGA_WIADDR2 0x1dd8 > # define MGA_WMODE_SUSPEND (0 << 0) > # define MGA_WMODE_RESUME (1 << 0) > # define MGA_WMODE_JUMP (2 << 0) > # define MGA_WMODE_START (3 << 0) > # define MGA_WAGP_ENABLE (1 << 2) >-#define MGA_WMISC 0x1e70 >+#define MGA_WMISC 0x1e70 > # define MGA_WUCODECACHE_ENABLE (1 << 0) > # define MGA_WMASTER_ENABLE (1 << 1) > # define MGA_WCACHEFLUSH_ENABLE (1 << 3) > #define MGA_WVRTXSZ 0x1dcc > >-#define MGA_YBOT 0x1c9c >-#define MGA_YDST 0x1c90 >+#define MGA_YBOT 0x1c9c >+#define MGA_YDST 0x1c90 > #define MGA_YDSTLEN 0x1c88 > #define MGA_YDSTORG 0x1c94 >-#define MGA_YTOP 0x1c98 >+#define MGA_YTOP 0x1c98 > >-#define MGA_ZORG 0x1c0c >+#define MGA_ZORG 0x1c0c > > /* This finishes the current batch of commands > */ >-#define MGA_EXEC 0x0100 >+#define MGA_EXEC 0x0100 > > /* AGP PLL encoding (for G200 only). 
> */ >-#define MGA_AGP_PLL 0x1e4c >+#define MGA_AGP_PLL 0x1e4c > # define MGA_AGP2XPLL_DISABLE (0 << 0) > # define MGA_AGP2XPLL_ENABLE (1 << 0) > >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/mga_ioc32.c linux-2.6.23.i686/drivers/char/drm/mga_ioc32.c >--- linux-2.6.23.i686.orig/drivers/char/drm/mga_ioc32.c 2007-10-09 22:31:38.000000000 +0200 >+++ linux-2.6.23.i686/drivers/char/drm/mga_ioc32.c 2008-01-06 09:24:57.000000000 +0100 >@@ -1,3 +1,4 @@ >+ > /** > * \file mga_ioc32.c > * >@@ -90,25 +91,25 @@ static int compat_mga_init(struct file * > || __put_user(init32.buffers_offset, &init->buffers_offset)) > return -EFAULT; > >- for (i = 0; i < MGA_NR_TEX_HEAPS; i++) { >- err |= >- __put_user(init32.texture_offset[i], >- &init->texture_offset[i]); >- err |= >- __put_user(init32.texture_size[i], &init->texture_size[i]); >+ for (i=0; i<MGA_NR_TEX_HEAPS; i++) >+ { >+ err |= __put_user(init32.texture_offset[i], &init->texture_offset[i]); >+ err |= __put_user(init32.texture_size[i], &init->texture_size[i]); > } > if (err) > return -EFAULT; > >- return drm_ioctl(file->f_path.dentry->d_inode, file, >- DRM_IOCTL_MGA_INIT, (unsigned long)init); >+ return drm_ioctl(file->f_dentry->d_inode, file, >+ DRM_IOCTL_MGA_INIT, (unsigned long) init); > } > >+ > typedef struct drm_mga_getparam32 { > int param; > u32 value; > } drm_mga_getparam32_t; > >+ > static int compat_mga_getparam(struct file *file, unsigned int cmd, > unsigned long arg) > { >@@ -121,11 +122,10 @@ static int compat_mga_getparam(struct fi > getparam = compat_alloc_user_space(sizeof(*getparam)); > if (!access_ok(VERIFY_WRITE, getparam, sizeof(*getparam)) > || __put_user(getparam32.param, &getparam->param) >- || __put_user((void __user *)(unsigned long)getparam32.value, >- &getparam->value)) >+ || __put_user((void __user *)(unsigned long)getparam32.value, &getparam->value)) > return -EFAULT; > >- return drm_ioctl(file->f_path.dentry->d_inode, file, >+ return drm_ioctl(file->f_dentry->d_inode, file, > 
DRM_IOCTL_MGA_GETPARAM, (unsigned long)getparam); > } > >@@ -166,7 +166,7 @@ static int compat_mga_dma_bootstrap(stru > || __put_user(dma_bootstrap32.agp_size, &dma_bootstrap->agp_size)) > return -EFAULT; > >- err = drm_ioctl(file->f_path.dentry->d_inode, file, >+ err = drm_ioctl(file->f_dentry->d_inode, file, > DRM_IOCTL_MGA_DMA_BOOTSTRAP, > (unsigned long)dma_bootstrap); > if (err) >@@ -182,8 +182,10 @@ static int compat_mga_dma_bootstrap(stru > &dma_bootstrap->secondary_bin_count) > || __get_user(dma_bootstrap32.secondary_bin_size, > &dma_bootstrap->secondary_bin_size) >- || __get_user(dma_bootstrap32.agp_mode, &dma_bootstrap->agp_mode) >- || __get_user(dma_bootstrap32.agp_size, &dma_bootstrap->agp_size)) >+ || __get_user(dma_bootstrap32.agp_mode, >+ &dma_bootstrap->agp_mode) >+ || __get_user(dma_bootstrap32.agp_size, >+ &dma_bootstrap->agp_size)) > return -EFAULT; > > if (copy_to_user((void __user *)arg, &dma_bootstrap32, >@@ -208,7 +210,8 @@ drm_ioctl_compat_t *mga_compat_ioctls[] > * \param arg user argument. > * \return zero on success or negative number on failure. 
> */ >-long mga_compat_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) >+long mga_compat_ioctl(struct file *filp, unsigned int cmd, >+ unsigned long arg) > { > unsigned int nr = DRM_IOCTL_NR(cmd); > drm_ioctl_compat_t *fn = NULL; >@@ -222,9 +225,9 @@ long mga_compat_ioctl(struct file *filp, > > lock_kernel(); /* XXX for now */ > if (fn != NULL) >- ret = (*fn) (filp, cmd, arg); >+ ret = (*fn)(filp, cmd, arg); > else >- ret = drm_ioctl(filp->f_path.dentry->d_inode, filp, cmd, arg); >+ ret = drm_ioctl(filp->f_dentry->d_inode, filp, cmd, arg); > unlock_kernel(); > > return ret; >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/mga_irq.c linux-2.6.23.i686/drivers/char/drm/mga_irq.c >--- linux-2.6.23.i686.orig/drivers/char/drm/mga_irq.c 2007-10-09 22:31:38.000000000 +0200 >+++ linux-2.6.23.i686/drivers/char/drm/mga_irq.c 2008-01-06 09:24:57.000000000 +0100 >@@ -1,5 +1,6 @@ > /* mga_irq.c -- IRQ handling for radeon -*- linux-c -*- >- * >+ */ >+/* > * Copyright (C) The Weather Channel, Inc. 2002. All Rights Reserved. 
> * > * The Weather Channel (TM) funded Tungsten Graphics to develop the >@@ -58,6 +59,7 @@ irqreturn_t mga_driver_irq_handler(DRM_I > const u32 prim_start = MGA_READ(MGA_PRIMADDRESS); > const u32 prim_end = MGA_READ(MGA_PRIMEND); > >+ > MGA_WRITE(MGA_ICLEAR, MGA_SOFTRAPICLR); > > /* In addition to clearing the interrupt-pending bit, we >@@ -72,9 +74,8 @@ irqreturn_t mga_driver_irq_handler(DRM_I > handled = 1; > } > >- if (handled) { >+ if (handled) > return IRQ_HANDLED; >- } > return IRQ_NONE; > } > >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/mga_state.c linux-2.6.23.i686/drivers/char/drm/mga_state.c >--- linux-2.6.23.i686.orig/drivers/char/drm/mga_state.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/mga_state.c 2008-01-06 09:24:57.000000000 +0100 >@@ -1,6 +1,7 @@ > /* mga_state.c -- State support for MGA G200/G400 -*- linux-c -*- > * Created: Thu Jan 27 02:53:43 2000 by jhartmann@precisioninsight.com >- * >+ */ >+/* > * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas. > * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California. > * All Rights Reserved. 
>@@ -99,19 +100,23 @@ static __inline__ void mga_g400_emit_con > > DMA_BLOCK(MGA_DSTORG, ctx->dstorg, > MGA_MACCESS, ctx->maccess, >- MGA_PLNWT, ctx->plnwt, MGA_DWGCTL, ctx->dwgctl); >+ MGA_PLNWT, ctx->plnwt, >+ MGA_DWGCTL, ctx->dwgctl); > > DMA_BLOCK(MGA_ALPHACTRL, ctx->alphactrl, > MGA_FOGCOL, ctx->fogcolor, >- MGA_WFLAG, ctx->wflag, MGA_ZORG, dev_priv->depth_offset); >+ MGA_WFLAG, ctx->wflag, >+ MGA_ZORG, dev_priv->depth_offset); > > DMA_BLOCK(MGA_WFLAG1, ctx->wflag, > MGA_TDUALSTAGE0, ctx->tdualstage0, >- MGA_TDUALSTAGE1, ctx->tdualstage1, MGA_FCOL, ctx->fcol); >+ MGA_TDUALSTAGE1, ctx->tdualstage1, >+ MGA_FCOL, ctx->fcol); > > DMA_BLOCK(MGA_STENCIL, ctx->stencil, > MGA_STENCILCTL, ctx->stencilctl, >- MGA_DMAPAD, 0x00000000, MGA_DMAPAD, 0x00000000); >+ MGA_DMAPAD, 0x00000000, >+ MGA_DMAPAD, 0x00000000); > > ADVANCE_DMA(); > } >@@ -131,15 +136,18 @@ static __inline__ void mga_g200_emit_tex > > DMA_BLOCK(MGA_TEXORG, tex->texorg, > MGA_TEXORG1, tex->texorg1, >- MGA_TEXORG2, tex->texorg2, MGA_TEXORG3, tex->texorg3); >+ MGA_TEXORG2, tex->texorg2, >+ MGA_TEXORG3, tex->texorg3); > > DMA_BLOCK(MGA_TEXORG4, tex->texorg4, > MGA_TEXWIDTH, tex->texwidth, >- MGA_TEXHEIGHT, tex->texheight, MGA_WR24, tex->texwidth); >+ MGA_TEXHEIGHT, tex->texheight, >+ MGA_WR24, tex->texwidth); > > DMA_BLOCK(MGA_WR34, tex->texheight, > MGA_TEXTRANS, 0x0000ffff, >- MGA_TEXTRANSHIGH, 0x0000ffff, MGA_DMAPAD, 0x00000000); >+ MGA_TEXTRANSHIGH, 0x0000ffff, >+ MGA_DMAPAD, 0x00000000); > > ADVANCE_DMA(); > } >@@ -150,8 +158,8 @@ static __inline__ void mga_g400_emit_tex > drm_mga_texture_regs_t *tex = &sarea_priv->tex_state[0]; > DMA_LOCALS; > >-/* printk("mga_g400_emit_tex0 %x %x %x\n", tex->texorg, */ >-/* tex->texctl, tex->texctl2); */ >+/* printk("mga_g400_emit_tex0 %x %x %x\n", tex->texorg, */ >+/* tex->texctl, tex->texctl2); */ > > BEGIN_DMA(6); > >@@ -162,15 +170,18 @@ static __inline__ void mga_g400_emit_tex > > DMA_BLOCK(MGA_TEXORG, tex->texorg, > MGA_TEXORG1, tex->texorg1, >- MGA_TEXORG2, 
tex->texorg2, MGA_TEXORG3, tex->texorg3); >+ MGA_TEXORG2, tex->texorg2, >+ MGA_TEXORG3, tex->texorg3); > > DMA_BLOCK(MGA_TEXORG4, tex->texorg4, > MGA_TEXWIDTH, tex->texwidth, >- MGA_TEXHEIGHT, tex->texheight, MGA_WR49, 0x00000000); >+ MGA_TEXHEIGHT, tex->texheight, >+ MGA_WR49, 0x00000000); > > DMA_BLOCK(MGA_WR57, 0x00000000, > MGA_WR53, 0x00000000, >- MGA_WR61, 0x00000000, MGA_WR52, MGA_G400_WR_MAGIC); >+ MGA_WR61, 0x00000000, >+ MGA_WR52, MGA_G400_WR_MAGIC); > > DMA_BLOCK(MGA_WR60, MGA_G400_WR_MAGIC, > MGA_WR54, tex->texwidth | MGA_G400_WR_MAGIC, >@@ -179,7 +190,8 @@ static __inline__ void mga_g400_emit_tex > > DMA_BLOCK(MGA_DMAPAD, 0x00000000, > MGA_DMAPAD, 0x00000000, >- MGA_TEXTRANS, 0x0000ffff, MGA_TEXTRANSHIGH, 0x0000ffff); >+ MGA_TEXTRANS, 0x0000ffff, >+ MGA_TEXTRANSHIGH, 0x0000ffff); > > ADVANCE_DMA(); > } >@@ -190,8 +202,8 @@ static __inline__ void mga_g400_emit_tex > drm_mga_texture_regs_t *tex = &sarea_priv->tex_state[1]; > DMA_LOCALS; > >-/* printk("mga_g400_emit_tex1 %x %x %x\n", tex->texorg, */ >-/* tex->texctl, tex->texctl2); */ >+/* printk("mga_g400_emit_tex1 %x %x %x\n", tex->texorg, */ >+/* tex->texctl, tex->texctl2); */ > > BEGIN_DMA(5); > >@@ -204,11 +216,13 @@ static __inline__ void mga_g400_emit_tex > > DMA_BLOCK(MGA_TEXORG, tex->texorg, > MGA_TEXORG1, tex->texorg1, >- MGA_TEXORG2, tex->texorg2, MGA_TEXORG3, tex->texorg3); >+ MGA_TEXORG2, tex->texorg2, >+ MGA_TEXORG3, tex->texorg3); > > DMA_BLOCK(MGA_TEXORG4, tex->texorg4, > MGA_TEXWIDTH, tex->texwidth, >- MGA_TEXHEIGHT, tex->texheight, MGA_WR49, 0x00000000); >+ MGA_TEXHEIGHT, tex->texheight, >+ MGA_WR49, 0x00000000); > > DMA_BLOCK(MGA_WR57, 0x00000000, > MGA_WR53, 0x00000000, >@@ -233,11 +247,13 @@ static __inline__ void mga_g200_emit_pip > > DMA_BLOCK(MGA_WIADDR, MGA_WMODE_SUSPEND, > MGA_WVRTXSZ, 0x00000007, >- MGA_WFLAG, 0x00000000, MGA_WR24, 0x00000000); >+ MGA_WFLAG, 0x00000000, >+ MGA_WR24, 0x00000000); > > DMA_BLOCK(MGA_WR25, 0x00000100, > MGA_WR34, 0x00000000, >- MGA_WR42, 0x0000ffff, 
MGA_WR60, 0x0000ffff); >+ MGA_WR42, 0x0000ffff, >+ MGA_WR60, 0x0000ffff); > > /* Padding required to to hardware bug. > */ >@@ -256,18 +272,20 @@ static __inline__ void mga_g400_emit_pip > unsigned int pipe = sarea_priv->warp_pipe; > DMA_LOCALS; > >-/* printk("mga_g400_emit_pipe %x\n", pipe); */ >+/* printk("mga_g400_emit_pipe %x\n", pipe); */ > > BEGIN_DMA(10); > > DMA_BLOCK(MGA_WIADDR2, MGA_WMODE_SUSPEND, > MGA_DMAPAD, 0x00000000, >- MGA_DMAPAD, 0x00000000, MGA_DMAPAD, 0x00000000); >+ MGA_DMAPAD, 0x00000000, >+ MGA_DMAPAD, 0x00000000); > > if (pipe & MGA_T2) { > DMA_BLOCK(MGA_WVRTXSZ, 0x00001e09, > MGA_DMAPAD, 0x00000000, >- MGA_DMAPAD, 0x00000000, MGA_DMAPAD, 0x00000000); >+ MGA_DMAPAD, 0x00000000, >+ MGA_DMAPAD, 0x00000000); > > DMA_BLOCK(MGA_WACCEPTSEQ, 0x00000000, > MGA_WACCEPTSEQ, 0x00000000, >@@ -295,7 +313,8 @@ static __inline__ void mga_g400_emit_pip > > DMA_BLOCK(MGA_WVRTXSZ, 0x00001807, > MGA_DMAPAD, 0x00000000, >- MGA_DMAPAD, 0x00000000, MGA_DMAPAD, 0x00000000); >+ MGA_DMAPAD, 0x00000000, >+ MGA_DMAPAD, 0x00000000); > > DMA_BLOCK(MGA_WACCEPTSEQ, 0x00000000, > MGA_WACCEPTSEQ, 0x00000000, >@@ -305,7 +324,8 @@ static __inline__ void mga_g400_emit_pip > > DMA_BLOCK(MGA_WFLAG, 0x00000000, > MGA_WFLAG1, 0x00000000, >- MGA_WR56, MGA_G400_WR56_MAGIC, MGA_DMAPAD, 0x00000000); >+ MGA_WR56, MGA_G400_WR56_MAGIC, >+ MGA_DMAPAD, 0x00000000); > > DMA_BLOCK(MGA_WR49, 0x00000000, /* tex0 */ > MGA_WR57, 0x00000000, /* tex0 */ >@@ -495,7 +515,8 @@ static void mga_dma_dispatch_clear(struc > > DMA_BLOCK(MGA_DMAPAD, 0x00000000, > MGA_DMAPAD, 0x00000000, >- MGA_DWGSYNC, 0x00007100, MGA_DWGSYNC, 0x00007000); >+ MGA_DWGSYNC, 0x00007100, >+ MGA_DWGSYNC, 0x00007000); > > ADVANCE_DMA(); > >@@ -561,7 +582,8 @@ static void mga_dma_dispatch_clear(struc > /* Force reset of DWGCTL */ > DMA_BLOCK(MGA_DMAPAD, 0x00000000, > MGA_DMAPAD, 0x00000000, >- MGA_PLNWT, ctx->plnwt, MGA_DWGCTL, ctx->dwgctl); >+ MGA_PLNWT, ctx->plnwt, >+ MGA_DWGCTL, ctx->dwgctl); > > ADVANCE_DMA(); > >@@ -586,7 
+608,8 @@ static void mga_dma_dispatch_swap(struct > > DMA_BLOCK(MGA_DMAPAD, 0x00000000, > MGA_DMAPAD, 0x00000000, >- MGA_DWGSYNC, 0x00007100, MGA_DWGSYNC, 0x00007000); >+ MGA_DWGSYNC, 0x00007100, >+ MGA_DWGSYNC, 0x00007000); > > DMA_BLOCK(MGA_DSTORG, dev_priv->front_offset, > MGA_MACCESS, dev_priv->maccess, >@@ -595,7 +618,8 @@ static void mga_dma_dispatch_swap(struct > > DMA_BLOCK(MGA_DMAPAD, 0x00000000, > MGA_DMAPAD, 0x00000000, >- MGA_PLNWT, 0xffffffff, MGA_DWGCTL, MGA_DWGCTL_COPY); >+ MGA_PLNWT, 0xffffffff, >+ MGA_DWGCTL, MGA_DWGCTL_COPY); > > for (i = 0; i < nbox; i++) { > struct drm_clip_rect *box = &pbox[i]; >@@ -613,7 +637,8 @@ static void mga_dma_dispatch_swap(struct > > DMA_BLOCK(MGA_DMAPAD, 0x00000000, > MGA_PLNWT, ctx->plnwt, >- MGA_SRCORG, dev_priv->front_offset, MGA_DWGCTL, ctx->dwgctl); >+ MGA_SRCORG, dev_priv->front_offset, >+ MGA_DWGCTL, ctx->dwgctl); > > ADVANCE_DMA(); > >@@ -724,8 +749,7 @@ static void mga_dma_dispatch_iload(struc > drm_mga_private_t *dev_priv = dev->dev_private; > drm_mga_buf_priv_t *buf_priv = buf->dev_private; > drm_mga_context_regs_t *ctx = &dev_priv->sarea_priv->context_state; >- u32 srcorg = >- buf->bus_address | dev_priv->dma_access | MGA_SRCMAP_SYSMEM; >+ u32 srcorg = buf->bus_address | dev_priv->dma_access | MGA_SRCMAP_SYSMEM; > u32 y2; > DMA_LOCALS; > DRM_DEBUG("buf=%d used=%d\n", buf->idx, buf->used); >@@ -736,22 +760,28 @@ static void mga_dma_dispatch_iload(struc > > DMA_BLOCK(MGA_DMAPAD, 0x00000000, > MGA_DMAPAD, 0x00000000, >- MGA_DWGSYNC, 0x00007100, MGA_DWGSYNC, 0x00007000); >+ MGA_DWGSYNC, 0x00007100, >+ MGA_DWGSYNC, 0x00007000); > > DMA_BLOCK(MGA_DSTORG, dstorg, >- MGA_MACCESS, 0x00000000, MGA_SRCORG, srcorg, MGA_AR5, 64); >+ MGA_MACCESS, 0x00000000, >+ MGA_SRCORG, srcorg, >+ MGA_AR5, 64); > > DMA_BLOCK(MGA_PITCH, 64, > MGA_PLNWT, 0xffffffff, >- MGA_DMAPAD, 0x00000000, MGA_DWGCTL, MGA_DWGCTL_COPY); >+ MGA_DMAPAD, 0x00000000, >+ MGA_DWGCTL, MGA_DWGCTL_COPY); > > DMA_BLOCK(MGA_AR0, 63, > MGA_AR3, 0, >- 
MGA_FXBNDRY, (63 << 16) | 0, MGA_YDSTLEN + MGA_EXEC, y2); >+ MGA_FXBNDRY, (63 << 16) | 0, >+ MGA_YDSTLEN + MGA_EXEC, y2); > > DMA_BLOCK(MGA_PLNWT, ctx->plnwt, > MGA_SRCORG, dev_priv->front_offset, >- MGA_PITCH, dev_priv->front_pitch, MGA_DWGSYNC, 0x00007000); >+ MGA_PITCH, dev_priv->front_pitch, >+ MGA_DWGSYNC, 0x00007000); > > ADVANCE_DMA(); > >@@ -781,11 +811,13 @@ static void mga_dma_dispatch_blit(struct > > DMA_BLOCK(MGA_DMAPAD, 0x00000000, > MGA_DMAPAD, 0x00000000, >- MGA_DWGSYNC, 0x00007100, MGA_DWGSYNC, 0x00007000); >+ MGA_DWGSYNC, 0x00007100, >+ MGA_DWGSYNC, 0x00007000); > > DMA_BLOCK(MGA_DWGCTL, MGA_DWGCTL_COPY, > MGA_PLNWT, blit->planemask, >- MGA_SRCORG, blit->srcorg, MGA_DSTORG, blit->dstorg); >+ MGA_SRCORG, blit->srcorg, >+ MGA_DSTORG, blit->dstorg); > > DMA_BLOCK(MGA_SGN, scandir, > MGA_MACCESS, dev_priv->maccess, >@@ -819,7 +851,8 @@ static void mga_dma_dispatch_blit(struct > /* Force reset of DWGCTL */ > DMA_BLOCK(MGA_DMAPAD, 0x00000000, > MGA_PLNWT, ctx->plnwt, >- MGA_PITCH, dev_priv->front_pitch, MGA_DWGCTL, ctx->dwgctl); >+ MGA_PITCH, dev_priv->front_pitch, >+ MGA_DWGCTL, ctx->dwgctl); > > ADVANCE_DMA(); > } >@@ -1062,14 +1095,14 @@ static int mga_set_fence(struct drm_devi > BEGIN_DMA(1); > DMA_BLOCK(MGA_DMAPAD, 0x00000000, > MGA_DMAPAD, 0x00000000, >- MGA_DMAPAD, 0x00000000, MGA_SOFTRAP, 0x00000000); >+ MGA_DMAPAD, 0x00000000, >+ MGA_SOFTRAP, 0x00000000); > ADVANCE_DMA(); > > return 0; > } > >-static int mga_wait_fence(struct drm_device *dev, void *data, struct drm_file * >-file_priv) >+static int mga_wait_fence(struct drm_device *dev, void *data, struct drm_file *file_priv) > { > drm_mga_private_t *dev_priv = dev->dev_private; > u32 *fence = data; >@@ -1082,6 +1115,7 @@ file_priv) > DRM_DEBUG("pid=%d\n", DRM_CURRENTPID); > > mga_driver_fence_wait(dev, fence); >+ > return 0; > } > >@@ -1099,6 +1133,7 @@ struct drm_ioctl_desc mga_ioctls[] = { > DRM_IOCTL_DEF(DRM_MGA_SET_FENCE, mga_set_fence, DRM_AUTH), > DRM_IOCTL_DEF(DRM_MGA_WAIT_FENCE, 
mga_wait_fence, DRM_AUTH), > DRM_IOCTL_DEF(DRM_MGA_DMA_BOOTSTRAP, mga_dma_bootstrap, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), >+ > }; > > int mga_max_ioctl = DRM_ARRAY_SIZE(mga_ioctls); >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/mga_warp.c linux-2.6.23.i686/drivers/char/drm/mga_warp.c >--- linux-2.6.23.i686.orig/drivers/char/drm/mga_warp.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/mga_warp.c 2008-01-06 09:24:57.000000000 +0100 >@@ -1,6 +1,7 @@ > /* mga_warp.c -- Matrox G200/G400 WARP engine management -*- linux-c -*- > * Created: Thu Jan 11 21:29:32 2001 by gareth@valinux.com >- * >+ */ >+/* > * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California. > * All Rights Reserved. > * >@@ -48,30 +49,33 @@ do { \ > } while (0) > > static const unsigned int mga_warp_g400_microcode_size = >- (WARP_UCODE_SIZE(warp_g400_tgz) + >- WARP_UCODE_SIZE(warp_g400_tgza) + >- WARP_UCODE_SIZE(warp_g400_tgzaf) + >- WARP_UCODE_SIZE(warp_g400_tgzf) + >- WARP_UCODE_SIZE(warp_g400_tgzs) + >- WARP_UCODE_SIZE(warp_g400_tgzsa) + >- WARP_UCODE_SIZE(warp_g400_tgzsaf) + >- WARP_UCODE_SIZE(warp_g400_tgzsf) + >- WARP_UCODE_SIZE(warp_g400_t2gz) + >- WARP_UCODE_SIZE(warp_g400_t2gza) + >- WARP_UCODE_SIZE(warp_g400_t2gzaf) + >- WARP_UCODE_SIZE(warp_g400_t2gzf) + >- WARP_UCODE_SIZE(warp_g400_t2gzs) + >- WARP_UCODE_SIZE(warp_g400_t2gzsa) + >- WARP_UCODE_SIZE(warp_g400_t2gzsaf) + WARP_UCODE_SIZE(warp_g400_t2gzsf)); >+ (WARP_UCODE_SIZE(warp_g400_tgz) + >+ WARP_UCODE_SIZE(warp_g400_tgza) + >+ WARP_UCODE_SIZE(warp_g400_tgzaf) + >+ WARP_UCODE_SIZE(warp_g400_tgzf) + >+ WARP_UCODE_SIZE(warp_g400_tgzs) + >+ WARP_UCODE_SIZE(warp_g400_tgzsa) + >+ WARP_UCODE_SIZE(warp_g400_tgzsaf) + >+ WARP_UCODE_SIZE(warp_g400_tgzsf) + >+ WARP_UCODE_SIZE(warp_g400_t2gz) + >+ WARP_UCODE_SIZE(warp_g400_t2gza) + >+ WARP_UCODE_SIZE(warp_g400_t2gzaf) + >+ WARP_UCODE_SIZE(warp_g400_t2gzf) + >+ WARP_UCODE_SIZE(warp_g400_t2gzs) + >+ WARP_UCODE_SIZE(warp_g400_t2gzsa) + >+ 
WARP_UCODE_SIZE(warp_g400_t2gzsaf) + >+ WARP_UCODE_SIZE(warp_g400_t2gzsf)); > > static const unsigned int mga_warp_g200_microcode_size = >- (WARP_UCODE_SIZE(warp_g200_tgz) + >- WARP_UCODE_SIZE(warp_g200_tgza) + >- WARP_UCODE_SIZE(warp_g200_tgzaf) + >- WARP_UCODE_SIZE(warp_g200_tgzf) + >- WARP_UCODE_SIZE(warp_g200_tgzs) + >- WARP_UCODE_SIZE(warp_g200_tgzsa) + >- WARP_UCODE_SIZE(warp_g200_tgzsaf) + WARP_UCODE_SIZE(warp_g200_tgzsf)); >+ (WARP_UCODE_SIZE(warp_g200_tgz) + >+ WARP_UCODE_SIZE(warp_g200_tgza) + >+ WARP_UCODE_SIZE(warp_g200_tgzaf) + >+ WARP_UCODE_SIZE(warp_g200_tgzf) + >+ WARP_UCODE_SIZE(warp_g200_tgzs) + >+ WARP_UCODE_SIZE(warp_g200_tgzsa) + >+ WARP_UCODE_SIZE(warp_g200_tgzsaf) + >+ WARP_UCODE_SIZE(warp_g200_tgzsf)); >+ > > unsigned int mga_warp_microcode_size(const drm_mga_private_t * dev_priv) > { >@@ -82,6 +86,7 @@ unsigned int mga_warp_microcode_size(con > case MGA_CARD_TYPE_G200: > return PAGE_ALIGN(mga_warp_g200_microcode_size); > default: >+ DRM_ERROR("Unknown chipset value: 0x%x\n", dev_priv->chipset); > return 0; > } > } >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/nouveau_buffer.c linux-2.6.23.i686/drivers/char/drm/nouveau_buffer.c >--- linux-2.6.23.i686.orig/drivers/char/drm/nouveau_buffer.c 1970-01-01 01:00:00.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/nouveau_buffer.c 2008-01-06 09:24:57.000000000 +0100 >@@ -0,0 +1,298 @@ >+/* >+ * Copyright 2007 Dave Airlied >+ * All Rights Reserved. 
>+ * >+ * Permission is hereby granted, free of charge, to any person obtaining a >+ * copy of this software and associated documentation files (the "Software"), >+ * to deal in the Software without restriction, including without limitation >+ * the rights to use, copy, modify, merge, publish, distribute, sublicense, >+ * and/or sell copies of the Software, and to permit persons to whom the >+ * Software is furnished to do so, subject to the following conditions: >+ * >+ * The above copyright notice and this permission notice (including the next >+ * paragraph) shall be included in all copies or substantial portions of the >+ * Software. >+ * >+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR >+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, >+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL >+ * VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR >+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, >+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR >+ * OTHER DEALINGS IN THE SOFTWARE. 
>+ */ >+/* >+ * Authors: Dave Airlied <airlied@linux.ie> >+ * Ben Skeggs <darktama@iinet.net.au> >+ * Jeremy Kolb <jkolb@brandeis.edu> >+ */ >+ >+#include "drmP.h" >+#include "nouveau_drm.h" >+#include "nouveau_drv.h" >+#include "nouveau_dma.h" >+ >+static struct drm_ttm_backend * >+nouveau_bo_create_ttm_backend_entry(struct drm_device * dev) >+{ >+ struct drm_nouveau_private *dev_priv = dev->dev_private; >+ >+ switch (dev_priv->gart_info.type) { >+ case NOUVEAU_GART_AGP: >+ return drm_agp_init_ttm(dev); >+ case NOUVEAU_GART_SGDMA: >+ return nouveau_sgdma_init_ttm(dev); >+ default: >+ DRM_ERROR("Unknown GART type %d\n", dev_priv->gart_info.type); >+ break; >+ } >+ >+ return NULL; >+} >+ >+static int >+nouveau_bo_fence_type(struct drm_buffer_object *bo, >+ uint32_t *fclass, uint32_t *type) >+{ >+ /* When we get called, *fclass is set to the requested fence class */ >+ >+ if (bo->mem.proposed_flags & (DRM_BO_FLAG_READ | DRM_BO_FLAG_WRITE)) >+ *type = 3; >+ else >+ *type = 1; >+ return 0; >+ >+} >+ >+static int >+nouveau_bo_invalidate_caches(struct drm_device *dev, uint64_t buffer_flags) >+{ >+ /* We'll do this from user space. 
*/ >+ return 0; >+} >+ >+static int >+nouveau_bo_init_mem_type(struct drm_device *dev, uint32_t type, >+ struct drm_mem_type_manager *man) >+{ >+ struct drm_nouveau_private *dev_priv = dev->dev_private; >+ >+ switch (type) { >+ case DRM_BO_MEM_LOCAL: >+ man->flags = _DRM_FLAG_MEMTYPE_MAPPABLE | >+ _DRM_FLAG_MEMTYPE_CACHED; >+ man->drm_bus_maptype = 0; >+ break; >+ case DRM_BO_MEM_VRAM: >+ man->flags = _DRM_FLAG_MEMTYPE_FIXED | >+ _DRM_FLAG_MEMTYPE_MAPPABLE | >+ _DRM_FLAG_NEEDS_IOREMAP; >+ man->io_addr = NULL; >+ man->drm_bus_maptype = _DRM_FRAME_BUFFER; >+ man->io_offset = drm_get_resource_start(dev, 1); >+ man->io_size = drm_get_resource_len(dev, 1); >+ if (man->io_size > nouveau_mem_fb_amount(dev)) >+ man->io_size = nouveau_mem_fb_amount(dev); >+ break; >+ case DRM_BO_MEM_PRIV0: >+ /* Unmappable VRAM */ >+ man->flags = _DRM_FLAG_MEMTYPE_CMA; >+ man->drm_bus_maptype = 0; >+ break; >+ case DRM_BO_MEM_TT: >+ switch (dev_priv->gart_info.type) { >+ case NOUVEAU_GART_AGP: >+ man->flags = _DRM_FLAG_MEMTYPE_MAPPABLE | >+ _DRM_FLAG_MEMTYPE_CSELECT | >+ _DRM_FLAG_NEEDS_IOREMAP; >+ man->drm_bus_maptype = _DRM_AGP; >+ break; >+ case NOUVEAU_GART_SGDMA: >+ man->flags = _DRM_FLAG_MEMTYPE_MAPPABLE | >+ _DRM_FLAG_MEMTYPE_CSELECT | >+ _DRM_FLAG_MEMTYPE_CMA; >+ man->drm_bus_maptype = _DRM_SCATTER_GATHER; >+ break; >+ default: >+ DRM_ERROR("Unknown GART type: %d\n", >+ dev_priv->gart_info.type); >+ return -EINVAL; >+ } >+ >+ man->io_offset = dev_priv->gart_info.aper_base; >+ man->io_size = dev_priv->gart_info.aper_size; >+ man->io_addr = NULL; >+ break; >+ default: >+ DRM_ERROR("Unsupported memory type %u\n", (unsigned)type); >+ return -EINVAL; >+ } >+ return 0; >+} >+ >+static uint64_t >+nouveau_bo_evict_flags(struct drm_buffer_object *bo) >+{ >+ switch (bo->mem.mem_type) { >+ case DRM_BO_MEM_LOCAL: >+ case DRM_BO_MEM_TT: >+ return DRM_BO_FLAG_MEM_LOCAL; >+ default: >+ return DRM_BO_FLAG_MEM_TT | DRM_BO_FLAG_CACHED; >+ } >+ return 0; >+} >+ >+ >+/* GPU-assisted copy using 
NV_MEMORY_TO_MEMORY_FORMAT, can access >+ * DRM_BO_MEM_{VRAM,PRIV0,TT} directly. >+ */ >+static int >+nouveau_bo_move_m2mf(struct drm_buffer_object *bo, int evict, int no_wait, >+ struct drm_bo_mem_reg *new_mem) >+{ >+ struct drm_device *dev = bo->dev; >+ struct drm_nouveau_private *dev_priv = dev->dev_private; >+ struct nouveau_drm_channel *dchan = &dev_priv->channel; >+ struct drm_bo_mem_reg *old_mem = &bo->mem; >+ uint32_t srch, dsth, page_count; >+ >+ /* Can happen during init/takedown */ >+ if (!dchan->chan) >+ return -EINVAL; >+ >+ srch = old_mem->mem_type == DRM_BO_MEM_TT ? NvDmaTT : NvDmaFB; >+ dsth = new_mem->mem_type == DRM_BO_MEM_TT ? NvDmaTT : NvDmaFB; >+ if (srch != dchan->m2mf_dma_source || dsth != dchan->m2mf_dma_destin) { >+ dchan->m2mf_dma_source = srch; >+ dchan->m2mf_dma_destin = dsth; >+ >+ BEGIN_RING(NvSubM2MF, >+ NV_MEMORY_TO_MEMORY_FORMAT_SET_DMA_SOURCE, 2); >+ OUT_RING (dchan->m2mf_dma_source); >+ OUT_RING (dchan->m2mf_dma_destin); >+ } >+ >+ page_count = new_mem->num_pages; >+ while (page_count) { >+ int line_count = (page_count > 2047) ? 2047 : page_count; >+ >+ BEGIN_RING(NvSubM2MF, NV_MEMORY_TO_MEMORY_FORMAT_OFFSET_IN, 8); >+ OUT_RING (old_mem->mm_node->start << PAGE_SHIFT); >+ OUT_RING (new_mem->mm_node->start << PAGE_SHIFT); >+ OUT_RING (PAGE_SIZE); /* src_pitch */ >+ OUT_RING (PAGE_SIZE); /* dst_pitch */ >+ OUT_RING (PAGE_SIZE); /* line_length */ >+ OUT_RING (line_count); >+ OUT_RING ((1<<8)|(1<<0)); >+ OUT_RING (0); >+ BEGIN_RING(NvSubM2MF, NV_MEMORY_TO_MEMORY_FORMAT_NOP, 1); >+ OUT_RING (0); >+ >+ page_count -= line_count; >+ } >+ >+ return drm_bo_move_accel_cleanup(bo, evict, no_wait, dchan->chan->id, >+ DRM_FENCE_TYPE_EXE, 0, new_mem); >+} >+ >+/* Flip pages into the GART and move if we can. 
*/ >+static int >+nouveau_bo_move_gart(struct drm_buffer_object *bo, int evict, int no_wait, >+ struct drm_bo_mem_reg *new_mem) >+{ >+ struct drm_device *dev = bo->dev; >+ struct drm_bo_mem_reg tmp_mem; >+ int ret; >+ >+ tmp_mem = *new_mem; >+ tmp_mem.mm_node = NULL; >+ tmp_mem.proposed_flags = (DRM_BO_FLAG_MEM_TT | >+ DRM_BO_FLAG_CACHED | >+ DRM_BO_FLAG_FORCE_CACHING); >+ >+ ret = drm_bo_mem_space(bo, &tmp_mem, no_wait); >+ >+ if (ret) >+ return ret; >+ >+ ret = drm_ttm_bind (bo->ttm, &tmp_mem); >+ if (ret) >+ goto out_cleanup; >+ >+ ret = nouveau_bo_move_m2mf(bo, 1, no_wait, &tmp_mem); >+ if (ret) >+ goto out_cleanup; >+ >+ ret = drm_bo_move_ttm(bo, evict, no_wait, new_mem); >+ >+out_cleanup: >+ if (tmp_mem.mm_node) { >+ mutex_lock(&dev->struct_mutex); >+ if (tmp_mem.mm_node != bo->pinned_node) >+ drm_mm_put_block(tmp_mem.mm_node); >+ tmp_mem.mm_node = NULL; >+ mutex_unlock(&dev->struct_mutex); >+ } >+ return ret; >+} >+ >+static int >+nouveau_bo_move(struct drm_buffer_object *bo, int evict, int no_wait, >+ struct drm_bo_mem_reg *new_mem) >+{ >+ struct drm_bo_mem_reg *old_mem = &bo->mem; >+ >+ if (new_mem->mem_type == DRM_BO_MEM_LOCAL) { >+ if (old_mem->mem_type == DRM_BO_MEM_LOCAL) >+ return drm_bo_move_memcpy(bo, evict, no_wait, new_mem); >+#if 0 >+ if (!nouveau_bo_move_to_gart(bo, evict, no_wait, new_mem)) >+#endif >+ return drm_bo_move_memcpy(bo, evict, no_wait, new_mem); >+ } >+ else >+ if (old_mem->mem_type == DRM_BO_MEM_LOCAL) { >+#if 0 >+ if (nouveau_bo_move_to_gart(bo, evict, no_wait, new_mem)) >+#endif >+ return drm_bo_move_memcpy(bo, evict, no_wait, new_mem); >+ } >+ else { >+// if (nouveau_bo_move_m2mf(bo, evict, no_wait, new_mem)) >+ return drm_bo_move_memcpy(bo, evict, no_wait, new_mem); >+ } >+ return 0; >+} >+ >+static void >+nouveau_bo_flush_ttm(struct drm_ttm *ttm) >+{ >+} >+ >+static uint32_t nouveau_mem_prios[] = { >+ DRM_BO_MEM_PRIV0, >+ DRM_BO_MEM_VRAM, >+ DRM_BO_MEM_TT, >+ DRM_BO_MEM_LOCAL >+}; >+static uint32_t nouveau_busy_prios[] = { >+ 
DRM_BO_MEM_TT, >+ DRM_BO_MEM_PRIV0, >+ DRM_BO_MEM_VRAM, >+ DRM_BO_MEM_LOCAL >+}; >+ >+struct drm_bo_driver nouveau_bo_driver = { >+ .mem_type_prio = nouveau_mem_prios, >+ .mem_busy_prio = nouveau_busy_prios, >+ .num_mem_type_prio = sizeof(nouveau_mem_prios)/sizeof(uint32_t), >+ .num_mem_busy_prio = sizeof(nouveau_busy_prios)/sizeof(uint32_t), >+ .create_ttm_backend_entry = nouveau_bo_create_ttm_backend_entry, >+ .fence_type = nouveau_bo_fence_type, >+ .invalidate_caches = nouveau_bo_invalidate_caches, >+ .init_mem_type = nouveau_bo_init_mem_type, >+ .evict_flags = nouveau_bo_evict_flags, >+ .move = nouveau_bo_move, >+ .ttm_cache_flush= nouveau_bo_flush_ttm >+}; >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/nouveau_dma.c linux-2.6.23.i686/drivers/char/drm/nouveau_dma.c >--- linux-2.6.23.i686.orig/drivers/char/drm/nouveau_dma.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/nouveau_dma.c 2008-01-06 09:24:57.000000000 +0100 >@@ -29,6 +29,9 @@ > #include "nouveau_drv.h" > #include "nouveau_dma.h" > >+/* FIXME : should go into a nouveau_drm.h define ? 
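
[Reviewer's aside, not part of the patch: `nouveau_bo_move_m2mf` above batches a GPU copy into M2MF submissions of at most 2047 lines, one PAGE_SIZE line per page. A minimal sketch of just that batching arithmetic, with a hypothetical helper name so it can be checked in isolation:]

```c
#include <stddef.h>

/* Sketch of the batching in nouveau_bo_move_m2mf: the M2MF line_count
 * field is capped at 2047, so page_count pages go out in ceiling
 * (page_count / 2047) submissions.  Helper name is hypothetical. */
static size_t m2mf_batches(size_t page_count, size_t *largest)
{
	size_t batches = 0, max_seen = 0;

	while (page_count) {
		size_t line_count = page_count > 2047 ? 2047 : page_count;

		if (line_count > max_seen)
			max_seen = line_count;
		page_count -= line_count;
		batches++;
	}
	if (largest)
		*largest = max_seen;
	return batches;
}
```

[A 5000-page move, for example, takes three submissions: 2047 + 2047 + 906.]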
>+ * (it's shared between DRI & DDX & DRM) >+ */ > #define SKIPS 8 > > int >@@ -130,10 +133,10 @@ nouveau_dma_channel_takedown(struct drm_ > > #define RING_SKIPS 8 > >-#define READ_GET() ((NV_READ(NV03_FIFO_REGS_DMAGET(dchan->chan->id)) - \ >- dchan->chan->pushbuf_base) >> 2) >+#define READ_GET() ((NV_READ(dchan->chan->get) - \ >+ dchan->chan->pushbuf_base) >> 2) > #define WRITE_PUT(val) do { \ >- NV_WRITE(NV03_FIFO_REGS_DMAPUT(dchan->chan->id), \ >+ NV_WRITE(dchan->chan->put, \ > ((val) << 2) + dchan->chan->pushbuf_base); \ > } while(0) > >@@ -174,4 +177,3 @@ nouveau_dma_wait(struct drm_device *dev, > > return 0; > } >- >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/nouveau_dma.h linux-2.6.23.i686/drivers/char/drm/nouveau_dma.h >--- linux-2.6.23.i686.orig/drivers/char/drm/nouveau_dma.h 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/nouveau_dma.h 2008-01-06 09:24:57.000000000 +0100 >@@ -89,10 +89,8 @@ typedef enum { > if (dchan->cur != dchan->put) { \ > DRM_MEMORYBARRIER(); \ > dchan->put = dchan->cur; \ >- NV_WRITE(NV03_FIFO_REGS_DMAPUT(dchan->chan->id), \ >- (dchan->put<<2)); \ >+ NV_WRITE(dchan->chan->put, dchan->put << 2); \ > } \ > } while(0) > > #endif >- >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/nouveau_drm.h linux-2.6.23.i686/drivers/char/drm/nouveau_drm.h >--- linux-2.6.23.i686.orig/drivers/char/drm/nouveau_drm.h 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/nouveau_drm.h 2008-01-06 09:24:57.000000000 +0100 >@@ -119,18 +119,13 @@ struct drm_nouveau_setparam { > > enum nouveau_card_type { > NV_UNKNOWN =0, >- NV_01 =1, >- NV_03 =3, > NV_04 =4, > NV_05 =5, > NV_10 =10, > NV_11 =11, >- NV_15 =11, > NV_17 =17, > NV_20 =20, >- NV_25 =20, > NV_30 =30, >- NV_34 =30, > NV_40 =40, > NV_44 =44, > NV_50 =50, >@@ -163,4 +158,3 @@ struct drm_nouveau_sarea { > #define DRM_NOUVEAU_MEM_FREE 0x09 > > #endif /* __NOUVEAU_DRM_H__ */ >- >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/nouveau_drv.c 
linux-2.6.23.i686/drivers/char/drm/nouveau_drv.c >--- linux-2.6.23.i686.orig/drivers/char/drm/nouveau_drv.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/nouveau_drv.c 2008-01-06 09:24:57.000000000 +0100 >@@ -29,12 +29,22 @@ > #include "drm_pciids.h" > > static struct pci_device_id pciidlist[] = { >- nouveau_PCI_IDS >+ { >+ PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID), >+ .class = PCI_BASE_CLASS_DISPLAY << 16, >+ .class_mask = 0xff << 16, >+ }, >+ { >+ PCI_DEVICE(PCI_VENDOR_ID_NVIDIA_SGS, PCI_ANY_ID), >+ .class = PCI_BASE_CLASS_DISPLAY << 16, >+ .class_mask = 0xff << 16, >+ } > }; > > extern struct drm_ioctl_desc nouveau_ioctls[]; > extern int nouveau_max_ioctl; > >+static int probe(struct pci_dev *pdev, const struct pci_device_id *ent); > static struct drm_driver driver = { > .driver_features = > DRIVER_USE_AGP | DRIVER_PCI_DMA | DRIVER_SG | >@@ -60,15 +70,20 @@ static struct drm_driver driver = { > .mmap = drm_mmap, > .poll = drm_poll, > .fasync = drm_fasync, >-#ifdef CONFIG_COMPAT >+#if defined(CONFIG_COMPAT) && LINUX_VERSION_CODE > KERNEL_VERSION(2,6,9) > .compat_ioctl = nouveau_compat_ioctl, > #endif > }, > .pci_driver = { > .name = DRIVER_NAME, > .id_table = pciidlist, >+ .probe = probe, >+ .remove = __devexit_p(drm_cleanup_pci), > }, > >+ .bo_driver = &nouveau_bo_driver, >+ .fence_driver = &nouveau_fence_driver, >+ > .name = DRIVER_NAME, > .desc = DRIVER_DESC, > .date = DRIVER_DATE, >@@ -77,10 +92,15 @@ static struct drm_driver driver = { > .patchlevel = DRIVER_PATCHLEVEL, > }; > >+static int probe(struct pci_dev *pdev, const struct pci_device_id *ent) >+{ >+ return drm_get_dev(pdev, ent, &driver); >+} >+ > static int __init nouveau_init(void) > { > driver.num_ioctls = nouveau_max_ioctl; >- return drm_init(&driver); >+ return drm_init(&driver, pciidlist); > } > > static void __exit nouveau_exit(void) >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/nouveau_drv.h linux-2.6.23.i686/drivers/char/drm/nouveau_drv.h >--- 
linux-2.6.23.i686.orig/drivers/char/drm/nouveau_drv.h 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/nouveau_drv.h 2008-01-06 09:24:57.000000000 +0100 >@@ -59,7 +59,7 @@ enum nouveau_flags { > }; > > #define NVOBJ_ENGINE_SW 0 >-#define NVOBJ_ENGINE_GR 1 >+#define NVOBJ_ENGINE_GR 1 > #define NVOBJ_ENGINE_INT 0xdeadbeef > > #define NVOBJ_FLAG_ALLOW_NO_REFS (1 << 0) >@@ -106,11 +106,20 @@ struct nouveau_channel > /* mapping of the regs controling the fifo */ > drm_local_map_t *regs; > >+ /* Fencing */ >+ uint32_t next_sequence; >+ > /* DMA push buffer */ > struct nouveau_gpuobj_ref *pushbuf; > struct mem_block *pushbuf_mem; > uint32_t pushbuf_base; > >+ /* FIFO user control regs */ >+ uint32_t user, user_size; >+ uint32_t put; >+ uint32_t get; >+ uint32_t ref_cnt; >+ > /* Notifier memory */ > struct mem_block *notifier_block; > struct mem_block *notifier_heap; >@@ -120,8 +129,9 @@ struct nouveau_channel > struct nouveau_gpuobj_ref *ramfc; > > /* PGRAPH context */ >+ /* XXX may be merge 2 pointers as private data ??? */ > struct nouveau_gpuobj_ref *ramin_grctx; >- uint32_t pgraph_ctx [340]; /* XXX dynamic alloc ? 
*/ >+ void *pgraph_ctx; > > /* NV50 VM */ > struct nouveau_gpuobj *vm_pd; >@@ -189,9 +199,13 @@ struct nouveau_fb_engine { > struct nouveau_fifo_engine { > void *priv; > >+ int channels; >+ > int (*init)(struct drm_device *); > void (*takedown)(struct drm_device *); > >+ int (*channel_id)(struct drm_device *); >+ > int (*create_context)(struct nouveau_channel *); > void (*destroy_context)(struct nouveau_channel *); > int (*load_context)(struct nouveau_channel *); >@@ -217,6 +231,7 @@ struct nouveau_engine { > struct nouveau_fifo_engine fifo; > }; > >+#define NOUVEAU_MAX_CHANNEL_NR 128 > struct drm_nouveau_private { > enum { > NOUVEAU_CARD_INIT_DOWN, >@@ -224,6 +239,8 @@ struct drm_nouveau_private { > NOUVEAU_CARD_INIT_FAILED > } init_state; > >+ int ttm; >+ > /* the card type, takes NV_* as values */ > int card_type; > /* exact chipset, derived from NV_PMC_BOOT_0 */ >@@ -235,7 +252,7 @@ struct drm_nouveau_private { > drm_local_map_t *ramin; /* NV40 onwards */ > > int fifo_alloc_count; >- struct nouveau_channel *fifos[NV_MAX_FIFO_NUMBER]; >+ struct nouveau_channel *fifos[NOUVEAU_MAX_CHANNEL_NR]; > > struct nouveau_engine Engine; > struct nouveau_drm_channel channel; >@@ -343,6 +360,7 @@ extern struct mem_block* nouveau_mem_all > int flags, struct drm_file *); > extern void nouveau_mem_free(struct drm_device *dev, struct mem_block*); > extern int nouveau_mem_init(struct drm_device *); >+extern int nouveau_mem_init_ttm(struct drm_device *); > extern void nouveau_mem_close(struct drm_device *); > > /* nouveau_notifier.c */ >@@ -357,7 +375,6 @@ extern int nouveau_ioctl_notifier_free( > > /* nouveau_fifo.c */ > extern int nouveau_fifo_init(struct drm_device *); >-extern int nouveau_fifo_number(struct drm_device *); > extern int nouveau_fifo_ctx_size(struct drm_device *); > extern void nouveau_fifo_cleanup(struct drm_device *, struct drm_file *); > extern int nouveau_fifo_owner(struct drm_device *, struct drm_file *, >@@ -423,7 +440,7 @@ extern int 
nouveau_sgdma_init(struct drm > extern void nouveau_sgdma_takedown(struct drm_device *); > extern int nouveau_sgdma_get_page(struct drm_device *, uint32_t offset, > uint32_t *page); >-//extern struct ^ *nouveau_sgdma_init_ttm(struct drm_device *); >+extern struct drm_ttm_backend *nouveau_sgdma_init_ttm(struct drm_device *); > extern int nouveau_sgdma_nottm_hack_init(struct drm_device *); > extern void nouveau_sgdma_nottm_hack_takedown(struct drm_device *); > >@@ -445,12 +462,14 @@ extern int nv40_fb_init(struct drm_devi > extern void nv40_fb_takedown(struct drm_device *); > > /* nv04_fifo.c */ >+extern int nv04_fifo_channel_id(struct drm_device *); > extern int nv04_fifo_create_context(struct nouveau_channel *); > extern void nv04_fifo_destroy_context(struct nouveau_channel *); > extern int nv04_fifo_load_context(struct nouveau_channel *); > extern int nv04_fifo_save_context(struct nouveau_channel *); > > /* nv10_fifo.c */ >+extern int nv10_fifo_channel_id(struct drm_device *); > extern int nv10_fifo_create_context(struct nouveau_channel *); > extern void nv10_fifo_destroy_context(struct nouveau_channel *); > extern int nv10_fifo_load_context(struct nouveau_channel *); >@@ -466,6 +485,7 @@ extern int nv40_fifo_save_context(struc > /* nv50_fifo.c */ > extern int nv50_fifo_init(struct drm_device *); > extern void nv50_fifo_takedown(struct drm_device *); >+extern int nv50_fifo_channel_id(struct drm_device *); > extern int nv50_fifo_create_context(struct nouveau_channel *); > extern void nv50_fifo_destroy_context(struct nouveau_channel *); > extern int nv50_fifo_load_context(struct nouveau_channel *); >@@ -490,21 +510,13 @@ extern int nv10_graph_load_context(stru > extern int nv10_graph_save_context(struct nouveau_channel *); > > /* nv20_graph.c */ >-extern void nouveau_nv20_context_switch(struct drm_device *); >-extern int nv20_graph_init(struct drm_device *); >-extern void nv20_graph_takedown(struct drm_device *); > extern int nv20_graph_create_context(struct 
nouveau_channel *); > extern void nv20_graph_destroy_context(struct nouveau_channel *); > extern int nv20_graph_load_context(struct nouveau_channel *); > extern int nv20_graph_save_context(struct nouveau_channel *); >- >-/* nv30_graph.c */ >+extern int nv20_graph_init(struct drm_device *); >+extern void nv20_graph_takedown(struct drm_device *); > extern int nv30_graph_init(struct drm_device *); >-extern void nv30_graph_takedown(struct drm_device *); >-extern int nv30_graph_create_context(struct nouveau_channel *); >-extern void nv30_graph_destroy_context(struct nouveau_channel *); >-extern int nv30_graph_load_context(struct nouveau_channel *); >-extern int nv30_graph_save_context(struct nouveau_channel *); > > /* nv40_graph.c */ > extern int nv40_graph_init(struct drm_device *); >@@ -560,6 +572,13 @@ extern void nv04_timer_takedown(struct d > extern long nouveau_compat_ioctl(struct file *file, unsigned int cmd, > unsigned long arg); > >+/* nouveau_buffer.c */ >+extern struct drm_bo_driver nouveau_bo_driver; >+ >+/* nouveau_fence.c */ >+extern struct drm_fence_driver nouveau_fence_driver; >+extern void nouveau_fence_handler(struct drm_device *dev, int channel); >+ > #if defined(__powerpc__) > #define NV_READ(reg) in_be32((void __iomem *)(dev_priv->mmio)->handle + (reg) ) > #define NV_WRITE(reg,val) out_be32((void __iomem *)(dev_priv->mmio)->handle + (reg) , (val) ) >@@ -581,4 +600,3 @@ extern long nouveau_compat_ioctl(struct > #define INSTANCE_WR(o,i,v) NV_WI32((o)->im_pramin->start + ((i)<<2), (v)) > > #endif /* __NOUVEAU_DRV_H__ */ >- >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/nouveau_fence.c linux-2.6.23.i686/drivers/char/drm/nouveau_fence.c >--- linux-2.6.23.i686.orig/drivers/char/drm/nouveau_fence.c 1970-01-01 01:00:00.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/nouveau_fence.c 2008-01-06 09:24:57.000000000 +0100 >@@ -0,0 +1,134 @@ >+/* >+ * Copyright (C) 2007 Ben Skeggs. >+ * All Rights Reserved. 
>+ * >+ * Permission is hereby granted, free of charge, to any person obtaining >+ * a copy of this software and associated documentation files (the >+ * "Software"), to deal in the Software without restriction, including >+ * without limitation the rights to use, copy, modify, merge, publish, >+ * distribute, sublicense, and/or sell copies of the Software, and to >+ * permit persons to whom the Software is furnished to do so, subject to >+ * the following conditions: >+ * >+ * The above copyright notice and this permission notice (including the >+ * next paragraph) shall be included in all copies or substantial >+ * portions of the Software. >+ * >+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, >+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF >+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. >+ * IN NO EVENT SHALL THE COPYRIGHT OWNER(S) AND/OR ITS SUPPLIERS BE >+ * LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION >+ * OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION >+ * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
>+ * >+ */ >+ >+#include "drmP.h" >+#include "drm.h" >+#include "nouveau_drv.h" >+#include "nouveau_dma.h" >+ >+static int >+nouveau_fence_has_irq(struct drm_device *dev, uint32_t class, uint32_t flags) >+{ >+ struct drm_nouveau_private *dev_priv = dev->dev_private; >+ >+ DRM_DEBUG("class=%d, flags=0x%08x\n", class, flags); >+ >+ /* DRM's channel always uses IRQs to signal fences */ >+ if (class == dev_priv->channel.chan->id) >+ return 1; >+ >+ /* Other channels don't use IRQs at all yet */ >+ return 0; >+} >+ >+static int >+nouveau_fence_emit(struct drm_device *dev, uint32_t class, uint32_t flags, >+ uint32_t *breadcrumb, uint32_t *native_type) >+{ >+ struct drm_nouveau_private *dev_priv = dev->dev_private; >+ struct nouveau_channel *chan = dev_priv->fifos[class]; >+ struct nouveau_drm_channel *dchan = &dev_priv->channel; >+ >+ DRM_DEBUG("class=%d, flags=0x%08x\n", class, flags); >+ >+ /* We can't emit fences on client channels, update sequence number >+ * and userspace will emit the fence >+ */ >+ *breadcrumb = ++chan->next_sequence; >+ *native_type = DRM_FENCE_TYPE_EXE; >+ if (chan != dchan->chan) { >+ DRM_DEBUG("user fence 0x%08x\n", *breadcrumb); >+ return 0; >+ } >+ >+ DRM_DEBUG("emit 0x%08x\n", *breadcrumb); >+ BEGIN_RING(NvSubM2MF, NV_MEMORY_TO_MEMORY_FORMAT_SET_REF, 1); >+ OUT_RING (*breadcrumb); >+ BEGIN_RING(NvSubM2MF, 0x0150, 1); >+ OUT_RING (0); >+ FIRE_RING (); >+ >+ return 0; >+} >+ >+static void >+nouveau_fence_perform_flush(struct drm_device *dev, uint32_t class) >+{ >+ struct drm_nouveau_private *dev_priv = dev->dev_private; >+ struct drm_fence_class_manager *fc = &dev->fm.fence_class[class]; >+ struct nouveau_channel *chan = dev_priv->fifos[class]; >+ uint32_t pending_types = 0; >+ >+ DRM_DEBUG("class=%d\n", class); >+ >+ pending_types = fc->pending_flush | >+ ((fc->pending_exe_flush) ? 
DRM_FENCE_TYPE_EXE : 0); >+ DRM_DEBUG("pending: 0x%08x 0x%08x\n", pending_types, >+ fc->pending_flush); >+ >+ if (pending_types) { >+ uint32_t sequence = NV_READ(chan->ref_cnt); >+ >+ DRM_DEBUG("got 0x%08x\n", sequence); >+ drm_fence_handler(dev, class, sequence, pending_types, 0); >+ } >+} >+ >+static void >+nouveau_fence_poke_flush(struct drm_device *dev, uint32_t class) >+{ >+ struct drm_fence_manager *fm = &dev->fm; >+ unsigned long flags; >+ >+ DRM_DEBUG("class=%d\n", class); >+ >+ write_lock_irqsave(&fm->lock, flags); >+ nouveau_fence_perform_flush(dev, class); >+ write_unlock_irqrestore(&fm->lock, flags); >+} >+ >+void >+nouveau_fence_handler(struct drm_device *dev, int channel) >+{ >+ struct drm_fence_manager *fm = &dev->fm; >+ >+ DRM_DEBUG("class=%d\n", channel); >+ >+ write_lock(&fm->lock); >+ nouveau_fence_perform_flush(dev, channel); >+ write_unlock(&fm->lock); >+} >+ >+struct drm_fence_driver nouveau_fence_driver = { >+ .num_classes = 8, >+ .wrap_diff = (1 << 30), >+ .flush_diff = (1 << 29), >+ .sequence_mask = 0xffffffffU, >+ .lazy_capable = 1, >+ .has_irq = nouveau_fence_has_irq, >+ .emit = nouveau_fence_emit, >+ .poke_flush = nouveau_fence_poke_flush >+}; >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/nouveau_fifo.c linux-2.6.23.i686/drivers/char/drm/nouveau_fifo.c >--- linux-2.6.23.i686.orig/drivers/char/drm/nouveau_fifo.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/nouveau_fifo.c 2008-01-06 09:24:57.000000000 +0100 >@@ -1,4 +1,4 @@ >-/* >+/* > * Copyright 2005-2006 Stephane Marchesin > * All Rights Reserved. 
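
[Reviewer's aside, not part of the patch: `nouveau_fence_driver` above declares a 32-bit breadcrumb with `wrap_diff = 1 << 30`, and `nouveau_fence_perform_flush` compares the channel's hardware reference counter against emitted sequence numbers. A sketch of the wrap-safe "has this fence been passed?" comparison such a setup implies; the helper name is hypothetical and this is an illustration of modulo-2^32 sequence comparison, not code from the DRM core:]

```c
#include <stdint.h>

/* A fence is treated as signalled once the hardware counter has advanced
 * to or past its breadcrumb; subtracting modulo 2^32 and comparing against
 * wrap_diff (1 << 30) keeps the test correct across counter wrap-around. */
static int fence_signalled(uint32_t hw_sequence, uint32_t breadcrumb)
{
	return (uint32_t)(hw_sequence - breadcrumb) < (UINT32_C(1) << 30);
}
```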
> * >@@ -28,24 +28,6 @@ > #include "nouveau_drm.h" > > >-/* returns the number of hw fifos */ >-int nouveau_fifo_number(struct drm_device *dev) >-{ >- struct drm_nouveau_private *dev_priv=dev->dev_private; >- switch(dev_priv->card_type) >- { >- case NV_03: >- return 8; >- case NV_04: >- case NV_05: >- return 16; >- case NV_50: >- return 128; >- default: >- return 32; >- } >-} >- > /* returns the size of fifo context */ > int nouveau_fifo_ctx_size(struct drm_device *dev) > { >@@ -65,7 +47,7 @@ int nouveau_fifo_ctx_size(struct drm_dev > > /* voir nv_xaa.c : NVResetGraphics > * mémoire mappée par nv_driver.c : NVMapMem >- * voir nv_driver.c : NVPreInit >+ * voir nv_driver.c : NVPreInit > */ > > static int nouveau_fifo_instmem_configure(struct drm_device *dev) >@@ -73,7 +55,7 @@ static int nouveau_fifo_instmem_configur > struct drm_nouveau_private *dev_priv = dev->dev_private; > > NV_WRITE(NV03_PFIFO_RAMHT, >- (0x03 << 24) /* search 128 */ | >+ (0x03 << 24) /* search 128 */ | > ((dev_priv->ramht_bits - 9) << 16) | > (dev_priv->ramht_offset >> 8) > ); >@@ -109,7 +91,6 @@ static int nouveau_fifo_instmem_configur > case NV_11: > case NV_10: > case NV_04: >- case NV_03: > NV_WRITE(NV03_PFIFO_RAMFC, dev_priv->ramfc_offset>>8); > break; > } >@@ -169,7 +150,7 @@ int nouveau_fifo_init(struct drm_device > NV_PFIFO_CACHE1_DMA_FETCH_MAX_REQS_4 | > #ifdef __BIG_ENDIAN > NV_PFIFO_CACHE1_BIG_ENDIAN | >-#endif >+#endif > 0x00000000); > > NV_WRITE(NV04_PFIFO_CACHE1_DMA_PUSH, 0x00000001); >@@ -285,18 +266,19 @@ nouveau_fifo_alloc(struct drm_device *de > > /* > * Alright, here is the full story >- * Nvidia cards have multiple hw fifo contexts (praise them for that, >+ * Nvidia cards have multiple hw fifo contexts (praise them for that, > * no complicated crash-prone context switches) >- * We allocate a new context for each app and let it write to it directly >+ * We allocate a new context for each app and let it write to it directly > * (woo, full userspace command submission !) 
> * When there are no more contexts, you lost > */ >- for(channel=0; channel<nouveau_fifo_number(dev); channel++) { >+ for (channel = 0; channel < engine->fifo.channels; channel++) { > if (dev_priv->fifos[channel] == NULL) > break; > } >+ > /* no more fifos. you lost. */ >- if (channel==nouveau_fifo_number(dev)) >+ if (channel == engine->fifo.channels) > return -EINVAL; > > dev_priv->fifos[channel] = drm_calloc(1, sizeof(struct nouveau_channel), >@@ -312,6 +294,28 @@ nouveau_fifo_alloc(struct drm_device *de > > DRM_INFO("Allocating FIFO number %d\n", channel); > >+ /* Locate channel's user control regs */ >+ if (dev_priv->card_type < NV_40) { >+ chan->user = NV03_USER(channel); >+ chan->user_size = NV03_USER_SIZE; >+ chan->put = NV03_USER_DMA_PUT(channel); >+ chan->get = NV03_USER_DMA_GET(channel); >+ chan->ref_cnt = NV03_USER_REF_CNT(channel); >+ } else >+ if (dev_priv->card_type < NV_50) { >+ chan->user = NV40_USER(channel); >+ chan->user_size = NV40_USER_SIZE; >+ chan->put = NV40_USER_DMA_PUT(channel); >+ chan->get = NV40_USER_DMA_GET(channel); >+ chan->ref_cnt = NV40_USER_REF_CNT(channel); >+ } else { >+ chan->user = NV50_USER(channel); >+ chan->user_size = NV50_USER_SIZE; >+ chan->put = NV50_USER_DMA_PUT(channel); >+ chan->get = NV50_USER_DMA_GET(channel); >+ chan->ref_cnt = NV50_USER_REF_CNT(channel); >+ } >+ > /* Allocate space for per-channel fixed notifier memory */ > ret = nouveau_notifier_init_channel(chan); > if (ret) { >@@ -355,14 +359,11 @@ nouveau_fifo_alloc(struct drm_device *de > return ret; > } > >- /* setup channel's default get/put values */ >- if (dev_priv->card_type < NV_50) { >- NV_WRITE(NV03_FIFO_REGS_DMAPUT(channel), chan->pushbuf_base); >- NV_WRITE(NV03_FIFO_REGS_DMAGET(channel), chan->pushbuf_base); >- } else { >- NV_WRITE(NV50_FIFO_REGS_DMAPUT(channel), chan->pushbuf_base); >- NV_WRITE(NV50_FIFO_REGS_DMAGET(channel), chan->pushbuf_base); >- } >+ /* setup channel's default get/put values >+ * XXX: quite possibly extremely pointless.. 
>+ */ >+ NV_WRITE(chan->get, chan->pushbuf_base); >+ NV_WRITE(chan->put, chan->pushbuf_base); > > /* If this is the first channel, setup PFIFO ourselves. For any > * other case, the GPU will handle this when it switches contexts. >@@ -401,11 +402,49 @@ void nouveau_fifo_free(struct nouveau_ch > struct drm_device *dev = chan->dev; > struct drm_nouveau_private *dev_priv = dev->dev_private; > struct nouveau_engine *engine = &dev_priv->Engine; >+ uint64_t t_start; > > DRM_INFO("%s: freeing fifo %d\n", __func__, chan->id); > >+ /* Disable channel switching, if this channel isn't currenly >+ * active re-enable it if there's still pending commands. >+ * We really should do a manual context switch here, but I'm >+ * not sure I trust our ability to do this reliably yet.. >+ */ >+ NV_WRITE(NV03_PFIFO_CACHES, 0); >+ if (engine->fifo.channel_id(dev) != chan->id && >+ NV_READ(chan->get) != NV_READ(chan->put)) { >+ NV_WRITE(NV03_PFIFO_CACHES, 1); >+ } >+ >+ /* Give the channel a chance to idle, wait 2s (hopefully) */ >+ t_start = engine->timer.read(dev); >+ while (NV_READ(chan->get) != NV_READ(chan->put) || >+ NV_READ(NV03_PFIFO_CACHE1_GET) != >+ NV_READ(NV03_PFIFO_CACHE1_PUT)) { >+ if (engine->timer.read(dev) - t_start > 2000000000ULL) { >+ DRM_ERROR("Failed to idle channel %d before destroy." >+ "Prepare for strangeness..\n", chan->id); >+ break; >+ } >+ } >+ >+ /*XXX: Maybe should wait for PGRAPH to finish with the stuff it fetched >+ * from CACHE1 too? >+ */ >+ > /* disable the fifo caches */ > NV_WRITE(NV03_PFIFO_CACHES, 0x00000000); >+ NV_WRITE(NV04_PFIFO_CACHE1_DMA_PUSH, NV_READ(NV04_PFIFO_CACHE1_DMA_PUSH)&(~0x1)); >+ NV_WRITE(NV03_PFIFO_CACHE1_PUSH0, 0x00000000); >+ NV_WRITE(NV04_PFIFO_CACHE1_PULL0, 0x00000000); >+ >+ /* stop the fifo, otherwise it could be running and >+ * it will crash when removing gpu objects >+ *XXX: from real-world evidence, absolutely useless.. 
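
[Reviewer's aside, not part of the patch: the drain loop added to `nouveau_fifo_free` polls the channel's GET/PUT pointers until they match or a 2-second deadline (nanoseconds, via the timer engine) expires, then proceeds either way. The same poll-with-deadline shape, with hardware access abstracted behind callbacks so it can be exercised standalone; every name below is hypothetical:]

```c
#include <stdint.h>

struct fake_chan  { int busy_polls; };   /* reports idle after N polls */
struct fake_clock { uint64_t now_ns; };  /* advances 1 ns per read */

static int fake_chan_idle(void *p)
{
	struct fake_chan *c = p;
	return c->busy_polls-- <= 0;
}

static uint64_t fake_clock_read(void *p)
{
	return ((struct fake_clock *)p)->now_ns++;
}

/* Returns 0 once idle() holds, -1 if timeout_ns elapses first; a caller
 * like the patch logs an error on timeout and carries on regardless. */
static int wait_for_idle(int (*idle)(void *), void *idle_arg,
			 uint64_t (*read_ns)(void *), void *clock_arg,
			 uint64_t timeout_ns)
{
	uint64_t t_start = read_ns(clock_arg);

	while (!idle(idle_arg)) {
		if (read_ns(clock_arg) - t_start > timeout_ns)
			return -1;
	}
	return 0;
}
```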
>+ */ >+ NV_WRITE(chan->get, chan->pushbuf_base); >+ NV_WRITE(chan->put, chan->pushbuf_base); > > // FIXME XXX needs more code > >@@ -415,6 +454,10 @@ void nouveau_fifo_free(struct nouveau_ch > engine->graph.destroy_context(chan); > > /* reenable the fifo caches */ >+ NV_WRITE(NV04_PFIFO_CACHE1_DMA_PUSH, >+ NV_READ(NV04_PFIFO_CACHE1_DMA_PUSH) | 1); >+ NV_WRITE(NV03_PFIFO_CACHE1_PUSH0, 0x00000001); >+ NV_WRITE(NV04_PFIFO_CACHE1_PULL0, 0x00000001); > NV_WRITE(NV03_PFIFO_CACHES, 0x00000001); > > /* Deallocate push buffer */ >@@ -438,10 +481,11 @@ void nouveau_fifo_free(struct nouveau_ch > void nouveau_fifo_cleanup(struct drm_device *dev, struct drm_file *file_priv) > { > struct drm_nouveau_private *dev_priv = dev->dev_private; >+ struct nouveau_engine *engine = &dev_priv->Engine; > int i; > > DRM_DEBUG("clearing FIFO enables from file_priv\n"); >- for(i = 0; i < nouveau_fifo_number(dev); i++) { >+ for(i = 0; i < engine->fifo.channels; i++) { > struct nouveau_channel *chan = dev_priv->fifos[i]; > > if (chan && chan->file_priv == file_priv) >@@ -454,8 +498,9 @@ nouveau_fifo_owner(struct drm_device *de > int channel) > { > struct drm_nouveau_private *dev_priv = dev->dev_private; >+ struct nouveau_engine *engine = &dev_priv->Engine; > >- if (channel >= nouveau_fifo_number(dev)) >+ if (channel >= engine->fifo.channels) > return 0; > if (dev_priv->fifos[channel] == NULL) > return 0; >@@ -495,14 +540,8 @@ static int nouveau_ioctl_fifo_alloc(stru > > /* make the fifo available to user space */ > /* first, the fifo control regs */ >- init->ctrl = dev_priv->mmio->offset; >- if (dev_priv->card_type < NV_50) { >- init->ctrl += NV03_FIFO_REGS(init->channel); >- init->ctrl_size = NV03_FIFO_REGS_SIZE; >- } else { >- init->ctrl += NV50_FIFO_REGS(init->channel); >- init->ctrl_size = NV50_FIFO_REGS_SIZE; >- } >+ init->ctrl = dev_priv->mmio->offset + chan->user; >+ init->ctrl_size = chan->user_size; > res = drm_addmap(dev, init->ctrl, init->ctrl_size, _DRM_REGISTERS, > 0, &chan->regs); 
> if (res != 0) >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/nouveau_irq.c linux-2.6.23.i686/drivers/char/drm/nouveau_irq.c >--- linux-2.6.23.i686.orig/drivers/char/drm/nouveau_irq.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/nouveau_irq.c 2008-01-06 09:24:57.000000000 +0100 >@@ -35,8 +35,10 @@ > #include "nouveau_drm.h" > #include "nouveau_drv.h" > #include "nouveau_reg.h" >+#include "nouveau_swmthd.h" > >-void nouveau_irq_preinstall(struct drm_device *dev) >+void >+nouveau_irq_preinstall(struct drm_device *dev) > { > struct drm_nouveau_private *dev_priv = dev->dev_private; > >@@ -44,7 +46,8 @@ void nouveau_irq_preinstall(struct drm_d > NV_WRITE(NV03_PMC_INTR_EN_0, 0); > } > >-void nouveau_irq_postinstall(struct drm_device *dev) >+void >+nouveau_irq_postinstall(struct drm_device *dev) > { > struct drm_nouveau_private *dev_priv = dev->dev_private; > >@@ -52,7 +55,8 @@ void nouveau_irq_postinstall(struct drm_ > NV_WRITE(NV03_PMC_INTR_EN_0, NV_PMC_INTR_EN_0_MASTER_ENABLE); > } > >-void nouveau_irq_uninstall(struct drm_device *dev) >+void >+nouveau_irq_uninstall(struct drm_device *dev) > { > struct drm_nouveau_private *dev_priv = dev->dev_private; > >@@ -60,125 +64,86 @@ void nouveau_irq_uninstall(struct drm_de > NV_WRITE(NV03_PMC_INTR_EN_0, 0); > } > >-static void nouveau_fifo_irq_handler(struct drm_device *dev) >+static void >+nouveau_fifo_irq_handler(struct drm_device *dev) > { >- uint32_t status, chmode, chstat, channel; > struct drm_nouveau_private *dev_priv = dev->dev_private; >+ struct nouveau_engine *engine = &dev_priv->Engine; >+ uint32_t status; > >- status = NV_READ(NV03_PFIFO_INTR_0); >- if (!status) >- return; >- chmode = NV_READ(NV04_PFIFO_MODE); >- chstat = NV_READ(NV04_PFIFO_DMA); >- channel=NV_READ(NV03_PFIFO_CACHE1_PUSH1)&(nouveau_fifo_number(dev)-1); >- >- if (status & NV_PFIFO_INTR_CACHE_ERROR) { >- uint32_t c1get, c1method, c1data; >- >- DRM_ERROR("PFIFO error interrupt\n"); >+ while ((status = 
NV_READ(NV03_PFIFO_INTR_0))) { >+ uint32_t chid, get; > >- c1get = NV_READ(NV03_PFIFO_CACHE1_GET) >> 2; >- if (dev_priv->card_type < NV_40) { >- /* Untested, so it may not work.. */ >- c1method = NV_READ(NV04_PFIFO_CACHE1_METHOD(c1get)); >- c1data = NV_READ(NV04_PFIFO_CACHE1_DATA(c1get)); >- } else { >- c1method = NV_READ(NV40_PFIFO_CACHE1_METHOD(c1get)); >- c1data = NV_READ(NV40_PFIFO_CACHE1_DATA(c1get)); >- } >+ NV_WRITE(NV03_PFIFO_CACHES, 0); > >- DRM_ERROR("Channel %d/%d - Method 0x%04x, Data 0x%08x\n", >- channel, (c1method >> 13) & 7, c1method & 0x1ffc, >- c1data); >+ chid = engine->fifo.channel_id(dev); >+ get = NV_READ(NV03_PFIFO_CACHE1_GET); > >- status &= ~NV_PFIFO_INTR_CACHE_ERROR; >- NV_WRITE(NV03_PFIFO_INTR_0, NV_PFIFO_INTR_CACHE_ERROR); >- } >+ if (status & NV_PFIFO_INTR_CACHE_ERROR) { >+ uint32_t mthd, data; >+ int ptr; >+ >+ ptr = get >> 2; >+ if (dev_priv->card_type < NV_40) { >+ mthd = NV_READ(NV04_PFIFO_CACHE1_METHOD(ptr)); >+ data = NV_READ(NV04_PFIFO_CACHE1_DATA(ptr)); >+ } else { >+ mthd = NV_READ(NV40_PFIFO_CACHE1_METHOD(ptr)); >+ data = NV_READ(NV40_PFIFO_CACHE1_DATA(ptr)); >+ } > >- if (status & NV_PFIFO_INTR_DMA_PUSHER) { >- DRM_ERROR("PFIFO DMA pusher interrupt: ch%d, 0x%08x\n", >- channel, NV_READ(NV04_PFIFO_CACHE1_DMA_GET)); >+ DRM_INFO("PFIFO_CACHE_ERROR - " >+ "Ch %d/%d Mthd 0x%04x Data 0x%08x\n", >+ chid, (mthd >> 13) & 7, mthd & 0x1ffc, data); > >- status &= ~NV_PFIFO_INTR_DMA_PUSHER; >- NV_WRITE(NV03_PFIFO_INTR_0, NV_PFIFO_INTR_DMA_PUSHER); >+ NV_WRITE(NV03_PFIFO_CACHE1_GET, get + 4); >+ NV_WRITE(NV04_PFIFO_CACHE1_PULL0, 1); > >- NV_WRITE(NV04_PFIFO_CACHE1_DMA_STATE, 0x00000000); >- if (NV_READ(NV04_PFIFO_CACHE1_DMA_PUT)!=NV_READ(NV04_PFIFO_CACHE1_DMA_GET)) >- { >- uint32_t getval=NV_READ(NV04_PFIFO_CACHE1_DMA_GET)+4; >- NV_WRITE(NV04_PFIFO_CACHE1_DMA_GET,getval); >+ status &= ~NV_PFIFO_INTR_CACHE_ERROR; >+ NV_WRITE(NV03_PFIFO_INTR_0, NV_PFIFO_INTR_CACHE_ERROR); > } >- } > >- if (status) { >- DRM_ERROR("Unhandled PFIFO interrupt: 
status=0x%08x\n", status); >+ if (status & NV_PFIFO_INTR_DMA_PUSHER) { >+ DRM_INFO("PFIFO_DMA_PUSHER - Ch %d\n", chid); > >- NV_WRITE(NV03_PFIFO_INTR_0, status); >- } >+ status &= ~NV_PFIFO_INTR_DMA_PUSHER; >+ NV_WRITE(NV03_PFIFO_INTR_0, NV_PFIFO_INTR_DMA_PUSHER); > >- NV_WRITE(NV03_PMC_INTR_0, NV_PMC_INTR_0_PFIFO_PENDING); >-} >+ NV_WRITE(NV04_PFIFO_CACHE1_DMA_STATE, 0x00000000); >+ if (NV_READ(NV04_PFIFO_CACHE1_DMA_PUT) != get) >+ NV_WRITE(NV04_PFIFO_CACHE1_DMA_GET, get + 4); >+ } > >-#if 0 >-static void nouveau_nv04_context_switch(struct drm_device *dev) >-{ >- struct drm_nouveau_private *dev_priv = dev->dev_private; >- uint32_t channel,i; >- uint32_t max=0; >- NV_WRITE(NV04_PGRAPH_FIFO,0x0); >- channel=NV_READ(NV03_PFIFO_CACHE1_PUSH1)&(nouveau_fifo_number(dev)-1); >- //DRM_INFO("raw PFIFO_CACH1_PHS1 reg is %x\n",NV_READ(NV03_PFIFO_CACHE1_PUSH1)); >- //DRM_INFO("currently on channel %d\n",channel); >- for (i=0;i<nouveau_fifo_number(dev);i++) >- if ((dev_priv->fifos[i].used)&&(i!=channel)) { >- uint32_t put,get,pending; >- //put=NV_READ(dev_priv->ramfc_offset+i*32); >- //get=NV_READ(dev_priv->ramfc_offset+4+i*32); >- put=NV_READ(NV03_FIFO_REGS_DMAPUT(i)); >- get=NV_READ(NV03_FIFO_REGS_DMAGET(i)); >- pending=NV_READ(NV04_PFIFO_DMA); >- //DRM_INFO("Channel %d (put/get %x/%x)\n",i,put,get); >- /* mark all pending channels as such */ >- if ((put!=get)&!(pending&(1<<i))) >- { >- pending|=(1<<i); >- NV_WRITE(NV04_PFIFO_DMA,pending); >- } >- max++; >+ if (status) { >+ DRM_INFO("Unhandled PFIFO_INTR - 0x%08x\n", status); >+ NV_WRITE(NV03_PFIFO_INTR_0, status); > } >- nouveau_wait_for_idle(dev); > >-#if 1 >- /* 2-channel commute */ >- // NV_WRITE(NV03_PFIFO_CACHE1_PUSH1,channel|0x100); >- if (channel==0) >- channel=1; >- else >- channel=0; >- // dev_priv->cur_fifo=channel; >- NV_WRITE(NV04_PFIFO_NEXT_CHANNEL,channel|0x100); >-#endif >- //NV_WRITE(NV03_PFIFO_CACHE1_PUSH1,max|0x100); >- //NV_WRITE(0x2050,max|0x100); >+ NV_WRITE(NV03_PFIFO_CACHES, 1); >+ } > >- 
NV_WRITE(NV04_PGRAPH_FIFO,0x1); >- >+ NV_WRITE(NV03_PMC_INTR_0, NV_PMC_INTR_0_PFIFO_PENDING); > } >-#endif >- > >-struct nouveau_bitfield_names >-{ >+struct nouveau_bitfield_names { > uint32_t mask; > const char * name; > }; > > static struct nouveau_bitfield_names nouveau_nstatus_names[] = > { >- { NV03_PGRAPH_NSTATUS_STATE_IN_USE, "STATE_IN_USE" }, >- { NV03_PGRAPH_NSTATUS_INVALID_STATE, "INVALID_STATE" }, >- { NV03_PGRAPH_NSTATUS_BAD_ARGUMENT, "BAD_ARGUMENT" }, >- { NV03_PGRAPH_NSTATUS_PROTECTION_FAULT, "PROTECTION_FAULT" } >+ { NV04_PGRAPH_NSTATUS_STATE_IN_USE, "STATE_IN_USE" }, >+ { NV04_PGRAPH_NSTATUS_INVALID_STATE, "INVALID_STATE" }, >+ { NV04_PGRAPH_NSTATUS_BAD_ARGUMENT, "BAD_ARGUMENT" }, >+ { NV04_PGRAPH_NSTATUS_PROTECTION_FAULT, "PROTECTION_FAULT" } >+}; >+ >+static struct nouveau_bitfield_names nouveau_nstatus_names_nv10[] = >+{ >+ { NV10_PGRAPH_NSTATUS_STATE_IN_USE, "STATE_IN_USE" }, >+ { NV10_PGRAPH_NSTATUS_INVALID_STATE, "INVALID_STATE" }, >+ { NV10_PGRAPH_NSTATUS_BAD_ARGUMENT, "BAD_ARGUMENT" }, >+ { NV10_PGRAPH_NSTATUS_PROTECTION_FAULT, "PROTECTION_FAULT" } > }; > > static struct nouveau_bitfield_names nouveau_nsource_names[] = >@@ -225,6 +190,7 @@ static int > nouveau_graph_trapped_channel(struct drm_device *dev, int *channel_ret) > { > struct drm_nouveau_private *dev_priv = dev->dev_private; >+ struct nouveau_engine *engine = &dev_priv->Engine; > int channel; > > if (dev_priv->card_type < NV_10) { >@@ -269,8 +235,7 @@ nouveau_graph_trapped_channel(struct drm > } > } > >- if (channel > nouveau_fifo_number(dev) || >- dev_priv->fifos[channel] == NULL) { >+ if (channel > engine->fifo.channels || !dev_priv->fifos[channel]) { > DRM_ERROR("AIII, invalid/inactive channel id %d\n", channel); > return -EINVAL; > } >@@ -279,112 +244,194 @@ nouveau_graph_trapped_channel(struct drm > return 0; > } > >+struct nouveau_pgraph_trap { >+ int channel; >+ int class; >+ int subc, mthd, size; >+ uint32_t data, data2; >+}; >+ > static void 
>-nouveau_graph_dump_trap_info(struct drm_device *dev) >+nouveau_graph_trap_info(struct drm_device *dev, >+ struct nouveau_pgraph_trap *trap) > { > struct drm_nouveau_private *dev_priv = dev->dev_private; > uint32_t address; >- uint32_t channel, class; >- uint32_t method, subc, data, data2; >- uint32_t nsource, nstatus; > >- if (nouveau_graph_trapped_channel(dev, &channel)) >- channel = -1; >- >- data = NV_READ(NV04_PGRAPH_TRAPPED_DATA); >+ if (nouveau_graph_trapped_channel(dev, &trap->channel)) >+ trap->channel = -1; > address = NV_READ(NV04_PGRAPH_TRAPPED_ADDR); >- method = address & 0x1FFC; >+ >+ trap->mthd = address & 0x1FFC; >+ trap->data = NV_READ(NV04_PGRAPH_TRAPPED_DATA); > if (dev_priv->card_type < NV_10) { >- subc = (address >> 13) & 0x7; >- data2= 0; >+ trap->subc = (address >> 13) & 0x7; > } else { >- subc = (address >> 16) & 0x7; >- data2= NV_READ(NV10_PGRAPH_TRAPPED_DATA_HIGH); >+ trap->subc = (address >> 16) & 0x7; >+ trap->data2 = NV_READ(NV10_PGRAPH_TRAPPED_DATA_HIGH); > } >- nsource = NV_READ(NV03_PGRAPH_NSOURCE); >- nstatus = NV_READ(NV03_PGRAPH_NSTATUS); >- if (dev_priv->card_type < NV_50) { >- class = NV_READ(0x400160 + subc*4) & 0xFFFF; >+ >+ if (dev_priv->card_type < NV_10) { >+ trap->class = NV_READ(0x400180 + trap->subc*4) & 0xFF; >+ } else if (dev_priv->card_type < NV_40) { >+ trap->class = NV_READ(0x400160 + trap->subc*4) & 0xFFF; >+ } else if (dev_priv->card_type < NV_50) { >+ trap->class = NV_READ(0x400160 + trap->subc*4) & 0xFFFF; > } else { >- class = NV_READ(0x400814); >+ trap->class = NV_READ(0x400814); > } >+} >+ >+static void >+nouveau_graph_dump_trap_info(struct drm_device *dev, const char *id, >+ struct nouveau_pgraph_trap *trap) >+{ >+ struct drm_nouveau_private *dev_priv = dev->dev_private; >+ uint32_t nsource, nstatus; >+ >+ nsource = NV_READ(NV03_PGRAPH_NSOURCE); >+ nstatus = NV_READ(NV03_PGRAPH_NSTATUS); > >- DRM_ERROR("nSource:"); >+ DRM_INFO("%s - nSource:", id); > nouveau_print_bitfield_names(nsource, 
nouveau_nsource_names, > ARRAY_SIZE(nouveau_nsource_names)); > printk(", nStatus:"); >- nouveau_print_bitfield_names(nstatus, nouveau_nstatus_names, >+ if (dev_priv->card_type < NV_10) >+ nouveau_print_bitfield_names(nstatus, nouveau_nstatus_names, > ARRAY_SIZE(nouveau_nstatus_names)); >+ else >+ nouveau_print_bitfield_names(nstatus, nouveau_nstatus_names_nv10, >+ ARRAY_SIZE(nouveau_nstatus_names_nv10)); > printk("\n"); > >- DRM_ERROR("Channel %d/%d (class 0x%04x) - Method 0x%04x, Data 0x%08x:0x%08x\n", >- channel, subc, class, method, data2, data); >+ DRM_INFO("%s - Ch %d/%d Class 0x%04x Mthd 0x%04x Data 0x%08x:0x%08x\n", >+ id, trap->channel, trap->subc, trap->class, trap->mthd, >+ trap->data2, trap->data); > } > >-static void nouveau_pgraph_irq_handler(struct drm_device *dev) >+static inline void >+nouveau_pgraph_intr_notify(struct drm_device *dev, uint32_t nsource) > { >- struct drm_nouveau_private *dev_priv = dev->dev_private; >- uint32_t status, nsource; >+ struct nouveau_pgraph_trap trap; >+ int unhandled = 0; > >- status = NV_READ(NV03_PGRAPH_INTR); >- if (!status) >- return; >- nsource = NV_READ(NV03_PGRAPH_NSOURCE); >+ nouveau_graph_trap_info(dev, &trap); >+ >+ if (nsource & NV03_PGRAPH_NSOURCE_ILLEGAL_MTHD) { >+ /* NV4 (nvidia TNT 1) reports software methods with >+ * PGRAPH NOTIFY ILLEGAL_MTHD >+ */ >+ DRM_DEBUG("Got NV04 software method method %x for class %#x\n", >+ trap.mthd, trap.class); >+ >+ if (nouveau_sw_method_execute(dev, trap.class, trap.mthd)) { >+ DRM_ERROR("Unable to execute NV04 software method %x " >+ "for object class %x. 
Please report.\n", >+ trap.mthd, trap.class); >+ unhandled = 1; >+ } >+ } else { >+ unhandled = 1; >+ } >+ >+ if (unhandled) >+ nouveau_graph_dump_trap_info(dev, "PGRAPH_NOTIFY", &trap); >+} > >- if (status & NV_PGRAPH_INTR_NOTIFY) { >- DRM_DEBUG("PGRAPH notify interrupt\n"); >+static inline void >+nouveau_pgraph_intr_error(struct drm_device *dev, uint32_t nsource) >+{ >+ struct nouveau_pgraph_trap trap; >+ int unhandled = 0; > >- nouveau_graph_dump_trap_info(dev); >+ nouveau_graph_trap_info(dev, &trap); > >- status &= ~NV_PGRAPH_INTR_NOTIFY; >- NV_WRITE(NV03_PGRAPH_INTR, NV_PGRAPH_INTR_NOTIFY); >+ if (nsource & NV03_PGRAPH_NSOURCE_ILLEGAL_MTHD) { >+ if (trap.channel >= 0 && trap.mthd == 0x0150) { >+ nouveau_fence_handler(dev, trap.channel); >+ } else >+ if (nouveau_sw_method_execute(dev, trap.class, trap.mthd)) { >+ unhandled = 1; >+ } >+ } else { >+ unhandled = 1; > } > >- if (status & NV_PGRAPH_INTR_ERROR) { >- DRM_ERROR("PGRAPH error interrupt\n"); >+ if (unhandled) >+ nouveau_graph_dump_trap_info(dev, "PGRAPH_ERROR", &trap); >+} > >- nouveau_graph_dump_trap_info(dev); >+static inline void >+nouveau_pgraph_intr_context_switch(struct drm_device *dev) >+{ >+ struct drm_nouveau_private *dev_priv = dev->dev_private; >+ struct nouveau_engine *engine = &dev_priv->Engine; >+ uint32_t chid; > >- status &= ~NV_PGRAPH_INTR_ERROR; >- NV_WRITE(NV03_PGRAPH_INTR, NV_PGRAPH_INTR_ERROR); >+ chid = engine->fifo.channel_id(dev); >+ DRM_DEBUG("PGRAPH context switch interrupt channel %x\n", chid); >+ >+ switch(dev_priv->card_type) { >+ case NV_04: >+ case NV_05: >+ nouveau_nv04_context_switch(dev); >+ break; >+ case NV_10: >+ case NV_11: >+ case NV_17: >+ nouveau_nv10_context_switch(dev); >+ break; >+ default: >+ DRM_ERROR("Context switch not implemented\n"); >+ break; > } >+} > >- if (status & NV_PGRAPH_INTR_CONTEXT_SWITCH) { >- uint32_t channel=NV_READ(NV03_PFIFO_CACHE1_PUSH1)&(nouveau_fifo_number(dev)-1); >- DRM_DEBUG("PGRAPH context switch interrupt channel %x\n",channel); >- 
switch(dev_priv->card_type) >- { >- case NV_04: >- case NV_05: >- nouveau_nv04_context_switch(dev); >- break; >- case NV_10: >- case NV_11: >- case NV_17: >- nouveau_nv10_context_switch(dev); >- break; >- case NV_20: >- case NV_30: >- nouveau_nv20_context_switch(dev); >- break; >- default: >- DRM_ERROR("Context switch not implemented\n"); >- break; >+static void >+nouveau_pgraph_irq_handler(struct drm_device *dev) >+{ >+ struct drm_nouveau_private *dev_priv = dev->dev_private; >+ uint32_t status; >+ >+ while ((status = NV_READ(NV03_PGRAPH_INTR))) { >+ uint32_t nsource = NV_READ(NV03_PGRAPH_NSOURCE); >+ >+ if (status & NV_PGRAPH_INTR_NOTIFY) { >+ nouveau_pgraph_intr_notify(dev, nsource); >+ >+ status &= ~NV_PGRAPH_INTR_NOTIFY; >+ NV_WRITE(NV03_PGRAPH_INTR, NV_PGRAPH_INTR_NOTIFY); > } > >- status &= ~NV_PGRAPH_INTR_CONTEXT_SWITCH; >- NV_WRITE(NV03_PGRAPH_INTR, NV_PGRAPH_INTR_CONTEXT_SWITCH); >- } >+ if (status & NV_PGRAPH_INTR_ERROR) { >+ nouveau_pgraph_intr_error(dev, nsource); >+ >+ status &= ~NV_PGRAPH_INTR_ERROR; >+ NV_WRITE(NV03_PGRAPH_INTR, NV_PGRAPH_INTR_ERROR); >+ } >+ >+ if (status & NV_PGRAPH_INTR_CONTEXT_SWITCH) { >+ nouveau_pgraph_intr_context_switch(dev); >+ >+ status &= ~NV_PGRAPH_INTR_CONTEXT_SWITCH; >+ NV_WRITE(NV03_PGRAPH_INTR, >+ NV_PGRAPH_INTR_CONTEXT_SWITCH); >+ } >+ >+ if (status) { >+ DRM_INFO("Unhandled PGRAPH_INTR - 0x%08x\n", status); >+ NV_WRITE(NV03_PGRAPH_INTR, status); >+ } > >- if (status) { >- DRM_ERROR("Unhandled PGRAPH interrupt: STAT=0x%08x\n", status); >- NV_WRITE(NV03_PGRAPH_INTR, status); >+ if ((NV_READ(NV04_PGRAPH_FIFO) & (1 << 0)) == 0) >+ NV_WRITE(NV04_PGRAPH_FIFO, 1); > } > > NV_WRITE(NV03_PMC_INTR_0, NV_PMC_INTR_0_PGRAPH_PENDING); > } > >-static void nouveau_crtc_irq_handler(struct drm_device *dev, int crtc) >+static void >+nouveau_crtc_irq_handler(struct drm_device *dev, int crtc) > { > struct drm_nouveau_private *dev_priv = dev->dev_private; > >@@ -397,7 +444,8 @@ static void nouveau_crtc_irq_handler(str > } > } > 
>-irqreturn_t nouveau_irq_handler(DRM_IRQ_ARGS) >+irqreturn_t >+nouveau_irq_handler(DRM_IRQ_ARGS) > { > struct drm_device *dev = (struct drm_device*)arg; > struct drm_nouveau_private *dev_priv = dev->dev_private; >@@ -427,4 +475,3 @@ irqreturn_t nouveau_irq_handler(DRM_IRQ_ > > return IRQ_HANDLED; > } >- >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/nouveau_mem.c linux-2.6.23.i686/drivers/char/drm/nouveau_mem.c >--- linux-2.6.23.i686.orig/drivers/char/drm/nouveau_mem.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/nouveau_mem.c 2008-01-06 09:24:57.000000000 +0100 >@@ -159,7 +159,7 @@ int nouveau_mem_init_heap(struct mem_blo > return 0; > } > >-/* >+/* > * Free all blocks associated with the releasing file_priv > */ > void nouveau_mem_release(struct drm_file *file_priv, struct mem_block *heap) >@@ -189,7 +189,7 @@ void nouveau_mem_release(struct drm_file > } > } > >-/* >+/* > * Cleanup everything > */ > void nouveau_mem_takedown(struct mem_block **heap) >@@ -223,6 +223,7 @@ void nouveau_mem_close(struct drm_device > static uint32_t > nouveau_mem_fb_amount_igp(struct drm_device *dev) > { >+#if defined(__linux__) && (LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,19)) > struct drm_nouveau_private *dev_priv = dev->dev_private; > struct pci_dev *bridge; > uint32_t mem; >@@ -243,6 +244,9 @@ nouveau_mem_fb_amount_igp(struct drm_dev > } > > DRM_ERROR("impossible!\n"); >+#else >+ DRM_ERROR("Linux kernel >= 2.6.19 required to check for igp memory amount\n"); >+#endif > > return 0; > } >@@ -253,18 +257,6 @@ uint64_t nouveau_mem_fb_amount(struct dr > struct drm_nouveau_private *dev_priv=dev->dev_private; > switch(dev_priv->card_type) > { >- case NV_03: >- switch(NV_READ(NV03_BOOT_0)&NV03_BOOT_0_RAM_AMOUNT) >- { >- case NV03_BOOT_0_RAM_AMOUNT_8MB: >- case NV03_BOOT_0_RAM_AMOUNT_8MB_SDRAM: >- return 8*1024*1024; >- case NV03_BOOT_0_RAM_AMOUNT_4MB: >- return 4*1024*1024; >- case NV03_BOOT_0_RAM_AMOUNT_2MB: >- return 2*1024*1024; >- } >- break; > case 
NV_04: > case NV_05: > if (NV_READ(NV03_BOOT_0) & 0x00000100) { >@@ -296,7 +288,7 @@ uint64_t nouveau_mem_fb_amount(struct dr > } else { > uint64_t mem; > >- mem = (NV_READ(NV04_FIFO_DATA) & >+ mem = (NV_READ(NV04_FIFO_DATA) & > NV10_FIFO_DATA_RAM_AMOUNT_MB_MASK) >> > NV10_FIFO_DATA_RAM_AMOUNT_MB_SHIFT; > return mem*1024*1024; >@@ -309,13 +301,11 @@ uint64_t nouveau_mem_fb_amount(struct dr > } > > static int >-nouveau_mem_init_agp(struct drm_device *dev) >+nouveau_mem_init_agp(struct drm_device *dev, int ttm) > { > struct drm_nouveau_private *dev_priv = dev->dev_private; > struct drm_agp_info info; > struct drm_agp_mode mode; >- struct drm_agp_buffer agp_req; >- struct drm_agp_binding bind_req; > int ret; > > ret = drm_agp_acquire(dev); >@@ -338,20 +328,25 @@ nouveau_mem_init_agp(struct drm_device * > return ret; > } > >- agp_req.size = info.aperture_size; >- agp_req.type = 0; >- ret = drm_agp_alloc(dev, &agp_req); >- if (ret) { >- DRM_ERROR("Unable to alloc AGP: %d\n", ret); >- return ret; >- } >+ if (!ttm) { >+ struct drm_agp_buffer agp_req; >+ struct drm_agp_binding bind_req; >+ >+ agp_req.size = info.aperture_size; >+ agp_req.type = 0; >+ ret = drm_agp_alloc(dev, &agp_req); >+ if (ret) { >+ DRM_ERROR("Unable to alloc AGP: %d\n", ret); >+ return ret; >+ } > >- bind_req.handle = agp_req.handle; >- bind_req.offset = 0; >- ret = drm_agp_bind(dev, &bind_req); >- if (ret) { >- DRM_ERROR("Unable to bind AGP: %d\n", ret); >- return ret; >+ bind_req.handle = agp_req.handle; >+ bind_req.offset = 0; >+ ret = drm_agp_bind(dev, &bind_req); >+ if (ret) { >+ DRM_ERROR("Unable to bind AGP: %d\n", ret); >+ return ret; >+ } > } > > dev_priv->gart_info.type = NOUVEAU_GART_AGP; >@@ -360,6 +355,73 @@ nouveau_mem_init_agp(struct drm_device * > return 0; > } > >+#define HACK_OLD_MM >+int >+nouveau_mem_init_ttm(struct drm_device *dev) >+{ >+ struct drm_nouveau_private *dev_priv = dev->dev_private; >+ uint32_t vram_size, bar1_size; >+ int ret; >+ >+ dev_priv->agp_heap = 
dev_priv->pci_heap = dev_priv->fb_heap = NULL; >+ dev_priv->fb_phys = drm_get_resource_start(dev,1); >+ dev_priv->gart_info.type = NOUVEAU_GART_NONE; >+ >+ drm_bo_driver_init(dev); >+ >+ /* non-mappable vram */ >+ dev_priv->fb_available_size = nouveau_mem_fb_amount(dev); >+ dev_priv->fb_available_size -= dev_priv->ramin_rsvd_vram; >+ vram_size = dev_priv->fb_available_size >> PAGE_SHIFT; >+ bar1_size = drm_get_resource_len(dev, 1) >> PAGE_SHIFT; >+ if (bar1_size < vram_size) { >+ if ((ret = drm_bo_init_mm(dev, DRM_BO_MEM_PRIV0, >+ bar1_size, vram_size - bar1_size))) { >+ DRM_ERROR("Failed PRIV0 mm init: %d\n", ret); >+ return ret; >+ } >+ vram_size = bar1_size; >+ } >+ >+ /* mappable vram */ >+#ifdef HACK_OLD_MM >+ vram_size /= 4; >+#endif >+ if ((ret = drm_bo_init_mm(dev, DRM_BO_MEM_VRAM, 0, vram_size))) { >+ DRM_ERROR("Failed VRAM mm init: %d\n", ret); >+ return ret; >+ } >+ >+ /* GART */ >+#ifndef __powerpc__ >+ if (drm_device_is_agp(dev) && dev->agp) { >+ if ((ret = nouveau_mem_init_agp(dev, 1))) >+ DRM_ERROR("Error initialising AGP: %d\n", ret); >+ } >+#endif >+ >+ if (dev_priv->gart_info.type == NOUVEAU_GART_NONE) { >+ if ((ret = nouveau_sgdma_init(dev))) >+ DRM_ERROR("Error initialising PCI SGDMA: %d\n", ret); >+ } >+ >+ if ((ret = drm_bo_init_mm(dev, DRM_BO_MEM_TT, 0, >+ dev_priv->gart_info.aper_size >> >+ PAGE_SHIFT))) { >+ DRM_ERROR("Failed TT mm init: %d\n", ret); >+ return ret; >+ } >+ >+#ifdef HACK_OLD_MM >+ vram_size <<= PAGE_SHIFT; >+ DRM_INFO("Old MM using %dKiB VRAM\n", (vram_size * 3) >> 10); >+ if (nouveau_mem_init_heap(&dev_priv->fb_heap, vram_size, vram_size * 3)) >+ return -ENOMEM; >+#endif >+ >+ return 0; >+} >+ > int nouveau_mem_init(struct drm_device *dev) > { > struct drm_nouveau_private *dev_priv = dev->dev_private; >@@ -386,7 +448,7 @@ int nouveau_mem_init(struct drm_device * > DRM_DEBUG("Available VRAM: %dKiB\n", fb_size>>10); > > if (fb_size>256*1024*1024) { >- /* On cards with > 256Mb, you can't map everything. 
>+ /* On cards with > 256Mb, you can't map everything. > * So we create a second FB heap for that type of memory */ > if (nouveau_mem_init_heap(&dev_priv->fb_heap, > 0, 256*1024*1024)) >@@ -400,11 +462,13 @@ int nouveau_mem_init(struct drm_device * > dev_priv->fb_nomap_heap=NULL; > } > >+#ifndef __powerpc__ > /* Init AGP / NV50 PCIEGART */ > if (drm_device_is_agp(dev) && dev->agp) { >- if ((ret = nouveau_mem_init_agp(dev))) >+ if ((ret = nouveau_mem_init_agp(dev, 0))) > DRM_ERROR("Error initialising AGP: %d\n", ret); > } >+#endif > > /*Note: this is *not* just NV50 code, but only used on NV50 for now */ > if (dev_priv->gart_info.type == NOUVEAU_GART_NONE && >@@ -413,7 +477,7 @@ int nouveau_mem_init(struct drm_device * > if (!ret) { > ret = nouveau_sgdma_nottm_hack_init(dev); > if (ret) >- nouveau_sgdma_takedown(dev); >+ nouveau_sgdma_takedown(dev); > } > > if (ret) >@@ -425,7 +489,7 @@ int nouveau_mem_init(struct drm_device * > 0, dev_priv->gart_info.aper_size)) { > if (dev_priv->gart_info.type == NOUVEAU_GART_SGDMA) { > nouveau_sgdma_nottm_hack_takedown(dev); >- nouveau_sgdma_takedown(dev); >+ nouveau_sgdma_takedown(dev); > } > } > } >@@ -438,12 +502,12 @@ int nouveau_mem_init(struct drm_device * > sgreq.size = 16 << 20; //16MB of PCI scatter-gather zone > > if (drm_sg_alloc(dev, &sgreq)) { >- DRM_ERROR("Unable to allocate %dMB of scatter-gather" >+ DRM_ERROR("Unable to allocate %ldMB of scatter-gather" > " pages for PCI DMA!",sgreq.size>>20); > } else { > if (nouveau_mem_init_heap(&dev_priv->pci_heap, 0, > dev->sg->pages * PAGE_SIZE)) { >- DRM_ERROR("Unable to initialize pci_heap!"); >+ DRM_ERROR("Unable to initialize pci_heap!"); > } > } > } >@@ -459,8 +523,8 @@ struct mem_block* nouveau_mem_alloc(stru > int type; > struct drm_nouveau_private *dev_priv = dev->dev_private; > >- /* >- * Make things easier on ourselves: all allocations are page-aligned. >+ /* >+ * Make things easier on ourselves: all allocations are page-aligned. 
> * We need that to map allocated regions into the user space > */ > if (alignment < PAGE_SHIFT) >@@ -542,7 +606,7 @@ alloc_ok: > ret = drm_addmap(dev, block->start, block->size, > _DRM_SCATTER_GATHER, 0, &block->map); > >- if (ret) { >+ if (ret) { > nouveau_mem_free_block(block); > return NULL; > } >@@ -612,5 +676,3 @@ int nouveau_ioctl_mem_free(struct drm_de > nouveau_mem_free(dev, block); > return 0; > } >- >- >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/nouveau_notifier.c linux-2.6.23.i686/drivers/char/drm/nouveau_notifier.c >--- linux-2.6.23.i686.orig/drivers/char/drm/nouveau_notifier.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/nouveau_notifier.c 2008-01-06 09:24:57.000000000 +0100 >@@ -33,17 +33,10 @@ int > nouveau_notifier_init_channel(struct nouveau_channel *chan) > { > struct drm_device *dev = chan->dev; >- struct drm_nouveau_private *dev_priv = dev->dev_private; > int flags, ret; > >- /*TODO: PCI notifier blocks */ >- if (dev_priv->agp_heap) >- flags = NOUVEAU_MEM_AGP; >- else if (dev_priv->pci_heap) >- flags = NOUVEAU_MEM_PCI; >- else >- flags = NOUVEAU_MEM_FB; >- flags |= (NOUVEAU_MEM_MAPPED | NOUVEAU_MEM_FB_ACCEPTABLE); >+ flags = (NOUVEAU_MEM_PCI | NOUVEAU_MEM_MAPPED | >+ NOUVEAU_MEM_FB_ACCEPTABLE); > > chan->notifier_block = nouveau_mem_alloc(dev, 0, PAGE_SIZE, flags, > (struct drm_file *)-2); >@@ -122,7 +115,7 @@ nouveau_notifier_alloc(struct nouveau_ch > } else { > target = NV_DMA_TARGET_AGP; > } >- } else >+ } else > if (chan->notifier_block->flags & NOUVEAU_MEM_PCI) { > target = NV_DMA_TARGET_PCI_NONLINEAR; > } else { >@@ -170,4 +163,3 @@ nouveau_ioctl_notifier_alloc(struct drm_ > > return 0; > } >- >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/nouveau_object.c linux-2.6.23.i686/drivers/char/drm/nouveau_object.c >--- linux-2.6.23.i686.orig/drivers/char/drm/nouveau_object.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/nouveau_object.c 2008-01-06 09:24:57.000000000 +0100 
>@@ -524,7 +524,7 @@ nouveau_gpuobj_ref_find(struct nouveau_c > struct nouveau_gpuobj_ref *ref; > struct list_head *entry, *tmp; > >- list_for_each_safe(entry, tmp, &chan->ramht_refs) { >+ list_for_each_safe(entry, tmp, &chan->ramht_refs) { > ref = list_entry(entry, struct nouveau_gpuobj_ref, list); > > if (ref->handle == handle) { >@@ -616,7 +616,7 @@ nouveau_gpuobj_class_instmem_size(struct > DMA objects are used to reference a piece of memory in the > framebuffer, PCI or AGP address space. Each object is 16 bytes big > and looks as follows: >- >+ > entry[0] > 11:0 class (seems like I can always use 0 here) > 12 page table present? >@@ -648,7 +648,7 @@ nouveau_gpuobj_dma_new(struct nouveau_ch > struct drm_nouveau_private *dev_priv = dev->dev_private; > int ret; > uint32_t is_scatter_gather = 0; >- >+ > /* Total number of pages covered by the request. > */ > const unsigned int page_count = (size + PAGE_SIZE - 1) / PAGE_SIZE; >@@ -671,7 +671,7 @@ nouveau_gpuobj_dma_new(struct nouveau_ch > default: > break; > } >- >+ > ret = nouveau_gpuobj_new(dev, chan, > is_scatter_gather ? ((page_count << 2) + 12) : nouveau_gpuobj_class_instmem_size(dev, class), > 16, >@@ -687,11 +687,11 @@ nouveau_gpuobj_dma_new(struct nouveau_ch > adjust = offset & 0x00000fff; > if (access != NV_DMA_ACCESS_RO) > pte_flags |= (1<<1); >- >- if ( ! is_scatter_gather ) >+ >+ if ( ! 
is_scatter_gather ) > { > frame = offset & ~0x00000fff; >- >+ > INSTANCE_WR(*gpuobj, 0, ((1<<12) | (1<<13) | > (adjust << 20) | > (access << 14) | >@@ -701,7 +701,7 @@ nouveau_gpuobj_dma_new(struct nouveau_ch > INSTANCE_WR(*gpuobj, 2, frame | pte_flags); > INSTANCE_WR(*gpuobj, 3, frame | pte_flags); > } >- else >+ else > { > /* Intial page entry in the scatter-gather area that > * corresponds to the base offset >@@ -728,7 +728,7 @@ nouveau_gpuobj_dma_new(struct nouveau_ch > > /*write starting at the third dword*/ > instance_offset = 2; >- >+ > /*for each PAGE, get its bus address, fill in the page table entry, and advance*/ > for (i = 0; i < page_count; i++) { > if (dev->sg->busaddr[idx] == 0) { >@@ -745,12 +745,12 @@ nouveau_gpuobj_dma_new(struct nouveau_ch > } > > frame = (uint32_t) dev->sg->busaddr[idx]; >- INSTANCE_WR(*gpuobj, instance_offset, >+ INSTANCE_WR(*gpuobj, instance_offset, > frame | pte_flags); >- >+ > idx++; > instance_offset ++; >- } >+ } > } > } else { > uint32_t flags0, flags5; >@@ -848,7 +848,7 @@ nouveau_gpuobj_gart_dma_new(struct nouve > entry[0]: > 11:0 class (maybe uses more bits here?) > 17 user clip enable >- 21:19 patch config >+ 21:19 patch config > 25 patch status valid ? > entry[1]: > 15:0 DMA notifier (maybe 20:0) >@@ -986,7 +986,7 @@ nouveau_gpuobj_channel_init(struct nouve > /* NV50 VM, point offset 0-512MiB at shared PCIEGART table */ > if (dev_priv->card_type >= NV_50) { > uint32_t vm_offset; >- >+ > vm_offset = (dev_priv->chipset & 0xf0) == 0x50 ? 
0x1400 : 0x200; > vm_offset += chan->ramin->gpuobj->im_pramin->start; > if ((ret = nouveau_gpuobj_new_fake(dev, vm_offset, ~0, 0x4000, >@@ -1074,7 +1074,7 @@ nouveau_gpuobj_channel_takedown(struct n > > DRM_DEBUG("ch%d\n", chan->id); > >- list_for_each_safe(entry, tmp, &chan->ramht_refs) { >+ list_for_each_safe(entry, tmp, &chan->ramht_refs) { > ref = list_entry(entry, struct nouveau_gpuobj_ref, list); > > nouveau_gpuobj_ref_del(dev, &ref); >@@ -1104,7 +1104,7 @@ int nouveau_ioctl_grobj_alloc(struct drm > NOUVEAU_GET_USER_CHANNEL_WITH_RETURN(init->channel, file_priv, chan); > > //FIXME: check args, only allow trusted objects to be created >- >+ > if (init->handle == ~0) > return -EINVAL; > >@@ -1145,4 +1145,3 @@ int nouveau_ioctl_gpuobj_free(struct drm > > return 0; > } >- >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/nouveau_reg.h linux-2.6.23.i686/drivers/char/drm/nouveau_reg.h >--- linux-2.6.23.i686.orig/drivers/char/drm/nouveau_reg.h 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/nouveau_reg.h 2008-01-06 09:24:57.000000000 +0100 >@@ -45,18 +45,43 @@ > #define NV_CLASS_NULL 0x00000030 > #define NV_CLASS_DMA_IN_MEMORY 0x0000003D > >+#define NV03_USER(i) (0x00800000+(i*NV03_USER_SIZE)) >+#define NV03_USER__SIZE 16 >+#define NV10_USER__SIZE 32 >+#define NV03_USER_SIZE 0x00010000 >+#define NV03_USER_DMA_PUT(i) (0x00800040+(i*NV03_USER_SIZE)) >+#define NV03_USER_DMA_PUT__SIZE 16 >+#define NV10_USER_DMA_PUT__SIZE 32 >+#define NV03_USER_DMA_GET(i) (0x00800044+(i*NV03_USER_SIZE)) >+#define NV03_USER_DMA_GET__SIZE 16 >+#define NV10_USER_DMA_GET__SIZE 32 >+#define NV03_USER_REF_CNT(i) (0x00800048+(i*NV03_USER_SIZE)) >+#define NV03_USER_REF_CNT__SIZE 16 >+#define NV10_USER_REF_CNT__SIZE 32 >+ >+#define NV40_USER(i) (0x00c00000+(i*NV40_USER_SIZE)) >+#define NV40_USER_SIZE 0x00001000 >+#define NV40_USER_DMA_PUT(i) (0x00c00040+(i*NV40_USER_SIZE)) >+#define NV40_USER_DMA_PUT__SIZE 32 >+#define NV40_USER_DMA_GET(i) 
(0x00c00044+(i*NV40_USER_SIZE)) >+#define NV40_USER_DMA_GET__SIZE 32 >+#define NV40_USER_REF_CNT(i) (0x00c00048+(i*NV40_USER_SIZE)) >+#define NV40_USER_REF_CNT__SIZE 32 >+ >+#define NV50_USER(i) (0x00c00000+(i*NV50_USER_SIZE)) >+#define NV50_USER_SIZE 0x00002000 >+#define NV50_USER_DMA_PUT(i) (0x00c00040+(i*NV50_USER_SIZE)) >+#define NV50_USER_DMA_PUT__SIZE 128 >+#define NV50_USER_DMA_GET(i) (0x00c00044+(i*NV50_USER_SIZE)) >+#define NV50_USER_DMA_GET__SIZE 128 >+/*XXX: I don't think this actually exists.. */ >+#define NV50_USER_REF_CNT(i) (0x00c00048+(i*NV50_USER_SIZE)) >+#define NV50_USER_REF_CNT__SIZE 128 >+ > #define NV03_FIFO_SIZE 0x8000UL >-#define NV_MAX_FIFO_NUMBER 128 >-#define NV03_FIFO_REGS_SIZE 0x10000 >-#define NV03_FIFO_REGS(i) (0x00800000+i*NV03_FIFO_REGS_SIZE) >-# define NV03_FIFO_REGS_DMAPUT(i) (NV03_FIFO_REGS(i)+0x40) >-# define NV03_FIFO_REGS_DMAGET(i) (NV03_FIFO_REGS(i)+0x44) >-#define NV50_FIFO_REGS_SIZE 0x2000 >-#define NV50_FIFO_REGS(i) (0x00c00000+i*NV50_FIFO_REGS_SIZE) >-# define NV50_FIFO_REGS_DMAPUT(i) (NV50_FIFO_REGS(i)+0x40) >-# define NV50_FIFO_REGS_DMAGET(i) (NV50_FIFO_REGS(i)+0x44) > > #define NV03_PMC_BOOT_0 0x00000000 >+#define NV03_PMC_BOOT_1 0x00000004 > #define NV03_PMC_INTR_0 0x00000100 > # define NV_PMC_INTR_0_PFIFO_PENDING (1<< 8) > # define NV_PMC_INTR_0_PGRAPH_PENDING (1<<12) >@@ -118,10 +143,14 @@ > #define NV10_PGRAPH_DEBUG_4 0x00400090 > #define NV03_PGRAPH_INTR 0x00400100 > #define NV03_PGRAPH_NSTATUS 0x00400104 >-# define NV03_PGRAPH_NSTATUS_STATE_IN_USE (1<<23) >-# define NV03_PGRAPH_NSTATUS_INVALID_STATE (1<<24) >-# define NV03_PGRAPH_NSTATUS_BAD_ARGUMENT (1<<25) >-# define NV03_PGRAPH_NSTATUS_PROTECTION_FAULT (1<<26) >+# define NV04_PGRAPH_NSTATUS_STATE_IN_USE (1<<11) >+# define NV04_PGRAPH_NSTATUS_INVALID_STATE (1<<12) >+# define NV04_PGRAPH_NSTATUS_BAD_ARGUMENT (1<<13) >+# define NV04_PGRAPH_NSTATUS_PROTECTION_FAULT (1<<14) >+# define NV10_PGRAPH_NSTATUS_STATE_IN_USE (1<<23) >+# define 
NV10_PGRAPH_NSTATUS_INVALID_STATE (1<<24) >+# define NV10_PGRAPH_NSTATUS_BAD_ARGUMENT (1<<25) >+# define NV10_PGRAPH_NSTATUS_PROTECTION_FAULT (1<<26) > #define NV03_PGRAPH_NSOURCE 0x00400108 > # define NV03_PGRAPH_NSOURCE_NOTIFICATION (1<< 0) > # define NV03_PGRAPH_NSOURCE_DATA_ERROR (1<< 1) >@@ -286,10 +315,8 @@ > #define NV10_PGRAPH_DMA_PITCH 0x00400770 > #define NV10_PGRAPH_DVD_COLORFMT 0x00400774 > #define NV10_PGRAPH_SCALED_FORMAT 0x00400778 >-#define NV10_PGRAPH_CHANNEL_CTX_TABLE 0x00400780 >-#define NV10_PGRAPH_CHANNEL_CTX_SIZE 0x00400784 >+#define NV20_PGRAPH_CHANNEL_CTX_TABLE 0x00400780 > #define NV20_PGRAPH_CHANNEL_CTX_POINTER 0x00400784 >-#define NV10_PGRAPH_CHANNEL_CTX_POINTER 0x00400788 > #define NV20_PGRAPH_CHANNEL_CTX_XFER 0x00400788 > #define NV20_PGRAPH_CHANNEL_CTX_XFER_LOAD 0x00000001 > #define NV20_PGRAPH_CHANNEL_CTX_XFER_SAVE 0x00000002 >@@ -319,6 +346,18 @@ > #define NV47_PGRAPH_TSTATUS0(i) 0x00400D0C > #define NV04_PGRAPH_V_RAM 0x00400D40 > #define NV04_PGRAPH_W_RAM 0x00400D80 >+#define NV10_PGRAPH_COMBINER0_IN_ALPHA 0x00400E40 >+#define NV10_PGRAPH_COMBINER1_IN_ALPHA 0x00400E44 >+#define NV10_PGRAPH_COMBINER0_IN_RGB 0x00400E48 >+#define NV10_PGRAPH_COMBINER1_IN_RGB 0x00400E4C >+#define NV10_PGRAPH_COMBINER_COLOR0 0x00400E50 >+#define NV10_PGRAPH_COMBINER_COLOR1 0x00400E54 >+#define NV10_PGRAPH_COMBINER0_OUT_ALPHA 0x00400E58 >+#define NV10_PGRAPH_COMBINER1_OUT_ALPHA 0x00400E5C >+#define NV10_PGRAPH_COMBINER0_OUT_RGB 0x00400E60 >+#define NV10_PGRAPH_COMBINER1_OUT_RGB 0x00400E64 >+#define NV10_PGRAPH_COMBINER_FINAL0 0x00400E68 >+#define NV10_PGRAPH_COMBINER_FINAL1 0x00400E6C > #define NV10_PGRAPH_WINDOWCLIP_HORIZONTAL 0x00400F00 > #define NV10_PGRAPH_WINDOWCLIP_VERTICAL 0x00400F20 > #define NV10_PGRAPH_XFMODE0 0x00400F40 >@@ -391,6 +430,12 @@ > #define NV04_PFIFO_CACHE0_PULL1 0x00003054 > #define NV03_PFIFO_CACHE1_PUSH0 0x00003200 > #define NV03_PFIFO_CACHE1_PUSH1 0x00003204 >+#define NV03_PFIFO_CACHE1_PUSH1_DMA (1<<8) >+#define 
NV40_PFIFO_CACHE1_PUSH1_DMA (1<<16) >+#define NV03_PFIFO_CACHE1_PUSH1_CHID_MASK 0x0000000f >+#define NV10_PFIFO_CACHE1_PUSH1_CHID_MASK 0x0000001f >+#define NV50_PFIFO_CACHE1_PUSH1_CHID_MASK 0x0000007f >+#define NV03_PFIFO_CACHE1_PUT 0x00003210 > #define NV04_PFIFO_CACHE1_DMA_PUSH 0x00003220 > #define NV04_PFIFO_CACHE1_DMA_FETCH 0x00003224 > # define NV_PFIFO_CACHE1_DMA_FETCH_TRIG_8_BYTES 0x00000000 >@@ -535,4 +580,3 @@ > #define NV40_RAMFC_UNK_48 0x48 > #define NV40_RAMFC_UNK_4C 0x4C > #define NV40_RAMFC_UNK_50 0x50 >- >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/nouveau_sgdma.c linux-2.6.23.i686/drivers/char/drm/nouveau_sgdma.c >--- linux-2.6.23.i686.orig/drivers/char/drm/nouveau_sgdma.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/nouveau_sgdma.c 2008-01-06 09:24:57.000000000 +0100 >@@ -6,7 +6,7 @@ > #define NV_CTXDMA_PAGE_MASK (NV_CTXDMA_PAGE_SIZE - 1) > > struct nouveau_sgdma_be { >-// struct drm_ttm_backend backend; >+ struct drm_ttm_backend backend; > struct drm_device *dev; > > int pages; >@@ -17,13 +17,17 @@ struct nouveau_sgdma_be { > unsigned int pte_start; > }; > >-static void nouveau_sgdma_clear(struct nouveau_sgdma_be *nvbe); >-static int nouveau_sgdma_unbind(struct nouveau_sgdma_be *nvbe); >+static int >+nouveau_sgdma_needs_ub_cache_adjust(struct drm_ttm_backend *be) >+{ >+ return ((be->flags & DRM_BE_FLAG_BOUND_CACHED) ? 
0 : 1); >+} > > static int >-nouveau_sgdma_populate(struct nouveau_sgdma_be *nvbe, unsigned long num_pages, >- struct page **pages) >+nouveau_sgdma_populate(struct drm_ttm_backend *be, unsigned long num_pages, >+ struct page **pages, struct page *dummy_read_page) > { >+ struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)be; > int p, d, o; > > DRM_DEBUG("num_pages = %ld\n", num_pages); >@@ -37,12 +41,15 @@ nouveau_sgdma_populate(struct nouveau_sg > nvbe->pages_populated = d = 0; > for (p = 0; p < num_pages; p++) { > for (o = 0; o < PAGE_SIZE; o += NV_CTXDMA_PAGE_SIZE) { >+ struct page *page = pages[p]; >+ if (!page) >+ page = dummy_read_page; > nvbe->pagelist[d] = pci_map_page(nvbe->dev->pdev, >- pages[p], o, >+ page, o, > NV_CTXDMA_PAGE_SIZE, > PCI_DMA_BIDIRECTIONAL); > if (pci_dma_mapping_error(nvbe->pagelist[d])) { >- nouveau_sgdma_clear(nvbe); >+ be->func->clear(be); > DRM_ERROR("pci_map_page failed\n"); > return -EINVAL; > } >@@ -54,15 +61,16 @@ nouveau_sgdma_populate(struct nouveau_sg > } > > static void >-nouveau_sgdma_clear(struct nouveau_sgdma_be *nvbe) >+nouveau_sgdma_clear(struct drm_ttm_backend *be) > { >+ struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)be; > int d; > > DRM_DEBUG("\n"); > > if (nvbe && nvbe->pagelist) { > if (nvbe->is_bound) >- nouveau_sgdma_unbind(nvbe); >+ be->func->unbind(be); > > for (d = 0; d < nvbe->pages_populated; d++) { > pci_unmap_page(nvbe->dev->pdev, nvbe->pagelist[d], >@@ -75,15 +83,16 @@ nouveau_sgdma_clear(struct nouveau_sgdma > } > > static int >-nouveau_sgdma_bind(struct nouveau_sgdma_be *nvbe, unsigned long pg_start, >- int cached) >+nouveau_sgdma_bind(struct drm_ttm_backend *be, struct drm_bo_mem_reg *mem) > { >+ struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)be; > struct drm_nouveau_private *dev_priv = nvbe->dev->dev_private; > struct nouveau_gpuobj *gpuobj = dev_priv->gart_info.sg_ctxdma; >- uint64_t offset = (pg_start << PAGE_SHIFT); >+ uint64_t offset = (mem->mm_node->start << 
PAGE_SHIFT); > uint32_t i; > >- DRM_DEBUG("pg=0x%lx (0x%llx), cached=%d\n", pg_start, offset, cached); >+ DRM_DEBUG("pg=0x%lx (0x%llx), cached=%d\n", mem->mm_node->start, >+ offset, (mem->flags & DRM_BO_FLAG_CACHED) == 1); > > if (offset & NV_CTXDMA_PAGE_MASK) > return -EINVAL; >@@ -111,8 +120,10 @@ nouveau_sgdma_bind(struct nouveau_sgdma_ > return 0; > } > >-static int nouveau_sgdma_unbind(struct nouveau_sgdma_be *nvbe) >+static int >+nouveau_sgdma_unbind(struct drm_ttm_backend *be) > { >+ struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)be; > struct drm_nouveau_private *dev_priv = nvbe->dev->dev_private; > > DRM_DEBUG("\n"); >@@ -120,7 +131,7 @@ static int nouveau_sgdma_unbind(struct n > if (nvbe->is_bound) { > struct nouveau_gpuobj *gpuobj = dev_priv->gart_info.sg_ctxdma; > unsigned int pte; >- >+ > pte = nvbe->pte_start; > while (pte < (nvbe->pte_start + nvbe->pages)) { > uint64_t pteval = dev_priv->gart_info.sg_dummy_bus; >@@ -142,17 +153,29 @@ static int nouveau_sgdma_unbind(struct n > } > > static void >-nouveau_sgdma_destroy(struct nouveau_sgdma_be *nvbe) >+nouveau_sgdma_destroy(struct drm_ttm_backend *be) > { > DRM_DEBUG("\n"); >- if (nvbe) { >- if (nvbe->pagelist) >- nouveau_sgdma_clear(nvbe); >- drm_free(nvbe, sizeof(*nvbe), DRM_MEM_DRIVER); >+ if (be) { >+ struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)be; >+ if (nvbe) { >+ if (nvbe->pagelist) >+ be->func->clear(be); >+ drm_ctl_free(nvbe, sizeof(*nvbe), DRM_MEM_TTM); >+ } > } > } > >-struct nouveau_sgdma_be * >+static struct drm_ttm_backend_func nouveau_sgdma_backend = { >+ .needs_ub_cache_adjust = nouveau_sgdma_needs_ub_cache_adjust, >+ .populate = nouveau_sgdma_populate, >+ .clear = nouveau_sgdma_clear, >+ .bind = nouveau_sgdma_bind, >+ .unbind = nouveau_sgdma_unbind, >+ .destroy = nouveau_sgdma_destroy >+}; >+ >+struct drm_ttm_backend * > nouveau_sgdma_init_ttm(struct drm_device *dev) > { > struct drm_nouveau_private *dev_priv = dev->dev_private; >@@ -161,13 +184,15 @@ 
nouveau_sgdma_init_ttm(struct drm_device > if (!dev_priv->gart_info.sg_ctxdma) > return NULL; > >- nvbe = drm_calloc(1, sizeof(*nvbe), DRM_MEM_DRIVER); >+ nvbe = drm_ctl_calloc(1, sizeof(*nvbe), DRM_MEM_TTM); > if (!nvbe) > return NULL; > > nvbe->dev = dev; > >- return nvbe; >+ nvbe->backend.func = &nouveau_sgdma_backend; >+ >+ return &nvbe->backend; > } > > int >@@ -240,7 +265,7 @@ nouveau_sgdma_takedown(struct drm_device > if (dev_priv->gart_info.sg_dummy_page) { > pci_unmap_page(dev->pdev, dev_priv->gart_info.sg_dummy_bus, > NV_CTXDMA_PAGE_SIZE, PCI_DMA_BIDIRECTIONAL); >- //unlock_page(dev_priv->gart_info.sg_dummy_page); >+ unlock_page(dev_priv->gart_info.sg_dummy_page); > __free_page(dev_priv->gart_info.sg_dummy_page); > dev_priv->gart_info.sg_dummy_page = NULL; > dev_priv->gart_info.sg_dummy_bus = 0; >@@ -253,13 +278,16 @@ int > nouveau_sgdma_nottm_hack_init(struct drm_device *dev) > { > struct drm_nouveau_private *dev_priv = dev->dev_private; >- struct nouveau_sgdma_be *nvbe; >+ struct drm_ttm_backend *be; > struct drm_scatter_gather sgreq; >+ struct drm_mm_node mm_node; >+ struct drm_bo_mem_reg mem; > int ret; > >- nvbe = nouveau_sgdma_init_ttm(dev); >- if (!nvbe) >+ dev_priv->gart_info.sg_be = nouveau_sgdma_init_ttm(dev); >+ if (!dev_priv->gart_info.sg_be) > return -ENOMEM; >+ be = dev_priv->gart_info.sg_be; > > /* Hack the aperture size down to the amount of system memory > * we're going to bind into it. 
>@@ -274,12 +302,15 @@ nouveau_sgdma_nottm_hack_init(struct drm > } > dev_priv->gart_info.sg_handle = sgreq.handle; > >- if ((ret = nouveau_sgdma_populate(nvbe, dev->sg->pages, dev->sg->pagelist))) { >+ if ((ret = be->func->populate(be, dev->sg->pages, dev->sg->pagelist, dev->bm.dummy_read_page))) { > DRM_ERROR("failed populate: %d\n", ret); > return ret; > } > >- if ((ret = nouveau_sgdma_bind(nvbe, 0, 0))) { >+ mm_node.start = 0; >+ mem.mm_node = &mm_node; >+ >+ if ((ret = be->func->bind(be, &mem))) { > DRM_ERROR("failed bind: %d\n", ret); > return ret; > } >@@ -308,4 +339,3 @@ nouveau_sgdma_get_page(struct drm_device > DRM_ERROR("Unimplemented on NV50\n"); > return -EINVAL; > } >- >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/nouveau_state.c linux-2.6.23.i686/drivers/char/drm/nouveau_state.c >--- linux-2.6.23.i686.orig/drivers/char/drm/nouveau_state.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/nouveau_state.c 2008-01-06 09:24:57.000000000 +0100 >@@ -1,4 +1,4 @@ >-/* >+/* > * Copyright 2005 Stephane Marchesin > * All Rights Reserved. > * >@@ -40,7 +40,7 @@ static int nouveau_init_card_mappings(st > > /* map the mmio regs */ > ret = drm_addmap(dev, drm_get_resource_start(dev, 0), >- drm_get_resource_len(dev, 0), >+ drm_get_resource_len(dev, 0), > _DRM_REGISTERS, _DRM_READ_ONLY, &dev_priv->mmio); > if (ret) { > DRM_ERROR("Unable to initialize the mmio mapping (%d). 
" >@@ -116,8 +116,10 @@ static int nouveau_init_engine_ptrs(stru > engine->graph.destroy_context = nv04_graph_destroy_context; > engine->graph.load_context = nv04_graph_load_context; > engine->graph.save_context = nv04_graph_save_context; >+ engine->fifo.channels = 16; > engine->fifo.init = nouveau_fifo_init; > engine->fifo.takedown = nouveau_stub_takedown; >+ engine->fifo.channel_id = nv04_fifo_channel_id; > engine->fifo.create_context = nv04_fifo_create_context; > engine->fifo.destroy_context = nv04_fifo_destroy_context; > engine->fifo.load_context = nv04_fifo_load_context; >@@ -143,8 +145,10 @@ static int nouveau_init_engine_ptrs(stru > engine->graph.destroy_context = nv10_graph_destroy_context; > engine->graph.load_context = nv10_graph_load_context; > engine->graph.save_context = nv10_graph_save_context; >+ engine->fifo.channels = 32; > engine->fifo.init = nouveau_fifo_init; > engine->fifo.takedown = nouveau_stub_takedown; >+ engine->fifo.channel_id = nv10_fifo_channel_id; > engine->fifo.create_context = nv10_fifo_create_context; > engine->fifo.destroy_context = nv10_fifo_destroy_context; > engine->fifo.load_context = nv10_fifo_load_context; >@@ -170,8 +174,10 @@ static int nouveau_init_engine_ptrs(stru > engine->graph.destroy_context = nv20_graph_destroy_context; > engine->graph.load_context = nv20_graph_load_context; > engine->graph.save_context = nv20_graph_save_context; >+ engine->fifo.channels = 32; > engine->fifo.init = nouveau_fifo_init; > engine->fifo.takedown = nouveau_stub_takedown; >+ engine->fifo.channel_id = nv10_fifo_channel_id; > engine->fifo.create_context = nv10_fifo_create_context; > engine->fifo.destroy_context = nv10_fifo_destroy_context; > engine->fifo.load_context = nv10_fifo_load_context; >@@ -192,13 +198,15 @@ static int nouveau_init_engine_ptrs(stru > engine->fb.init = nv10_fb_init; > engine->fb.takedown = nv10_fb_takedown; > engine->graph.init = nv30_graph_init; >- engine->graph.takedown = nv30_graph_takedown; >- 
engine->graph.create_context = nv30_graph_create_context; >- engine->graph.destroy_context = nv30_graph_destroy_context; >- engine->graph.load_context = nv30_graph_load_context; >- engine->graph.save_context = nv30_graph_save_context; >+ engine->graph.takedown = nv20_graph_takedown; >+ engine->graph.create_context = nv20_graph_create_context; >+ engine->graph.destroy_context = nv20_graph_destroy_context; >+ engine->graph.load_context = nv20_graph_load_context; >+ engine->graph.save_context = nv20_graph_save_context; >+ engine->fifo.channels = 32; > engine->fifo.init = nouveau_fifo_init; > engine->fifo.takedown = nouveau_stub_takedown; >+ engine->fifo.channel_id = nv10_fifo_channel_id; > engine->fifo.create_context = nv10_fifo_create_context; > engine->fifo.destroy_context = nv10_fifo_destroy_context; > engine->fifo.load_context = nv10_fifo_load_context; >@@ -224,8 +232,10 @@ static int nouveau_init_engine_ptrs(stru > engine->graph.destroy_context = nv40_graph_destroy_context; > engine->graph.load_context = nv40_graph_load_context; > engine->graph.save_context = nv40_graph_save_context; >+ engine->fifo.channels = 32; > engine->fifo.init = nv40_fifo_init; > engine->fifo.takedown = nouveau_stub_takedown; >+ engine->fifo.channel_id = nv10_fifo_channel_id; > engine->fifo.create_context = nv40_fifo_create_context; > engine->fifo.destroy_context = nv40_fifo_destroy_context; > engine->fifo.load_context = nv40_fifo_load_context; >@@ -252,8 +262,10 @@ static int nouveau_init_engine_ptrs(stru > engine->graph.destroy_context = nv50_graph_destroy_context; > engine->graph.load_context = nv50_graph_load_context; > engine->graph.save_context = nv50_graph_save_context; >+ engine->fifo.channels = 128; > engine->fifo.init = nv50_fifo_init; > engine->fifo.takedown = nv50_fifo_takedown; >+ engine->fifo.channel_id = nv50_fifo_channel_id; > engine->fifo.create_context = nv50_fifo_create_context; > engine->fifo.destroy_context = nv50_fifo_destroy_context; > engine->fifo.load_context = 
nv50_fifo_load_context; >@@ -273,16 +285,53 @@ nouveau_card_init(struct drm_device *dev > struct drm_nouveau_private *dev_priv = dev->dev_private; > struct nouveau_engine *engine; > int ret; >+#if defined(__powerpc__) >+ struct device_node *dn; >+#endif > > DRM_DEBUG("prev state = %d\n", dev_priv->init_state); > > if (dev_priv->init_state == NOUVEAU_CARD_INIT_DONE) > return 0; >+ dev_priv->ttm = 0; > > /* Map any PCI resources we need on the card */ > ret = nouveau_init_card_mappings(dev); > if (ret) return ret; > >+#if defined(__powerpc__) >+ /* Put the card in BE mode if it's not */ >+ if (NV_READ(NV03_PMC_BOOT_1)) >+ NV_WRITE(NV03_PMC_BOOT_1,0x00000001); >+ >+ DRM_MEMORYBARRIER(); >+#endif >+ >+#if defined(__linux__) && defined(__powerpc__) >+ /* if we have an OF card, copy vbios to RAMIN */ >+ dn = pci_device_to_OF_node(dev->pdev); >+ if (dn) >+ { >+ int size; >+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,22)) >+ const uint32_t *bios = of_get_property(dn, "NVDA,BMP", &size); >+#else >+ const uint32_t *bios = get_property(dn, "NVDA,BMP", &size); >+#endif >+ if (bios) >+ { >+ int i; >+ for(i=0;i<size;i+=4) >+ NV_WI32(i, bios[i/4]); >+ DRM_INFO("OF bios successfully copied (%d bytes)\n",size); >+ } >+ else >+ DRM_INFO("Unable to get the OF bios\n"); >+ } >+ else >+ DRM_INFO("Unable to get the OF node\n"); >+#endif >+ > /* Determine exact chipset we're running on */ > if (dev_priv->card_type < NV_10) > dev_priv->chipset = dev_priv->card_type; >@@ -307,8 +356,13 @@ nouveau_card_init(struct drm_device *dev > if (ret) return ret; > > /* Setup the memory manager */ >- ret = nouveau_mem_init(dev); >- if (ret) return ret; >+ if (dev_priv->ttm) { >+ ret = nouveau_mem_init_ttm(dev); >+ if (ret) return ret; >+ } else { >+ ret = nouveau_mem_init(dev); >+ if (ret) return ret; >+ } > > ret = nouveau_gpuobj_init(dev); > if (ret) return ret; >@@ -400,22 +454,93 @@ int nouveau_firstopen(struct drm_device > return 0; > } > >+#define NV40_CHIPSET_MASK 0x00000baf >+#define 
NV44_CHIPSET_MASK 0x00005450 >+ > int nouveau_load(struct drm_device *dev, unsigned long flags) > { > struct drm_nouveau_private *dev_priv; >- >- if (flags==NV_UNKNOWN) >- return -EINVAL; >+ void __iomem *regs; >+ uint32_t reg0,reg1; >+ uint8_t architecture = 0; > > dev_priv = drm_calloc(1, sizeof(*dev_priv), DRM_MEM_DRIVER); >- if (!dev_priv) >+ if (!dev_priv) > return -ENOMEM; > >- dev_priv->card_type=flags&NOUVEAU_FAMILY; >- dev_priv->flags=flags&NOUVEAU_FLAGS; >+ dev_priv->flags = flags & NOUVEAU_FLAGS; > dev_priv->init_state = NOUVEAU_CARD_INIT_DOWN; > >+ DRM_DEBUG("vendor: 0x%X device: 0x%X class: 0x%X\n", dev->pci_vendor, dev->pci_device, dev->pdev->class); >+ >+ /* Time to determine the card architecture */ >+ regs = ioremap_nocache(pci_resource_start(dev->pdev, 0), 0x8); >+ if (!regs) { >+ DRM_ERROR("Could not ioremap to determine register\n"); >+ return -ENOMEM; >+ } >+ >+ reg0 = readl(regs+NV03_PMC_BOOT_0); >+ reg1 = readl(regs+NV03_PMC_BOOT_1); >+#if defined(__powerpc__) >+ if (reg1) >+ reg0=___swab32(reg0); >+#endif >+ >+ /* We're dealing with >=NV10 */ >+ if ((reg0 & 0x0f000000) > 0 ) { >+ /* Bit 27-20 contain the architecture in hex */ >+ architecture = (reg0 & 0xff00000) >> 20; >+ /* NV04 or NV05 */ >+ } else if ((reg0 & 0xff00fff0) == 0x20004000) { >+ architecture = 0x04; >+ } >+ >+ iounmap(regs); >+ >+ if (architecture >= 0x50) { >+ dev_priv->card_type = NV_50; >+ } else if (architecture >= 0x40) { >+ uint8_t subarch = architecture & 0xf; >+ /* Selection criteria borrowed from NV40EXA */ >+ if (NV40_CHIPSET_MASK & (1 << subarch)) { >+ dev_priv->card_type = NV_40; >+ } else if (NV44_CHIPSET_MASK & (1 << subarch)) { >+ dev_priv->card_type = NV_44; >+ } else { >+ dev_priv->card_type = NV_UNKNOWN; >+ } >+ } else if (architecture >= 0x30) { >+ dev_priv->card_type = NV_30; >+ } else if (architecture >= 0x20) { >+ dev_priv->card_type = NV_20; >+ } else if (architecture >= 0x17) { >+ dev_priv->card_type = NV_17; >+ } else if (architecture >= 0x11) { >+ 
dev_priv->card_type = NV_11; >+ } else if (architecture >= 0x10) { >+ dev_priv->card_type = NV_10; >+ } else if (architecture >= 0x04) { >+ dev_priv->card_type = NV_04; >+ } else { >+ dev_priv->card_type = NV_UNKNOWN; >+ } >+ >+ DRM_INFO("Detected an NV%d generation card (0x%08x)\n", dev_priv->card_type,reg0); >+ >+ if (dev_priv->card_type == NV_UNKNOWN) { >+ return -EINVAL; >+ } >+ >+ /* Special flags */ >+ if (dev->pci_device == 0x01a0) { >+ dev_priv->flags |= NV_NFORCE; >+ } else if (dev->pci_device == 0x01f0) { >+ dev_priv->flags |= NV_NFORCE2; >+ } >+ > dev->dev_private = (void *)dev_priv; >+ > return 0; > } > >@@ -423,12 +548,15 @@ void nouveau_lastclose(struct drm_device > { > struct drm_nouveau_private *dev_priv = dev->dev_private; > >- nouveau_card_takedown(dev); >- >- if(dev_priv->fb_mtrr>0) >- { >- drm_mtrr_del(dev_priv->fb_mtrr, drm_get_resource_start(dev, 1),nouveau_mem_fb_amount(dev), DRM_MTRR_WC); >- dev_priv->fb_mtrr=0; >+ /* In the case of an error, dev_priv may not be allocated yet */ >+ if (dev_priv && dev_priv->card_type) { >+ nouveau_card_takedown(dev); >+ >+ if(dev_priv->fb_mtrr>0) >+ { >+ drm_mtrr_del(dev_priv->fb_mtrr, drm_get_resource_start(dev, 1),nouveau_mem_fb_amount(dev), DRM_MTRR_WC); >+ dev_priv->fb_mtrr=0; >+ } > } > } > >@@ -479,8 +607,8 @@ int nouveau_ioctl_getparam(struct drm_de > break; > case NOUVEAU_GETPARAM_PCI_PHYSICAL: > if ( dev -> sg ) >- getparam->value=(uint64_t)(void *) dev->sg->virtual; >- else >+ getparam->value=(uint64_t) dev->sg->virtual; >+ else > { > DRM_ERROR("Requested PCIGART address, while no PCIGART was created\n"); > return -EINVAL; > } >@@ -538,9 +666,6 @@ void nouveau_wait_for_idle(struct drm_de > { > struct drm_nouveau_private *dev_priv=dev->dev_private; > switch(dev_priv->card_type) { >- case NV_03: >- while (NV_READ(NV03_PGRAPH_STATUS)); >- break; > case NV_50: > break; > default: { >@@ -565,5 +690,3 @@ void nouveau_wait_for_idle(struct drm_de > } > } > } >- >- >diff -Nurp
linux-2.6.23.i686.orig/drivers/char/drm/nouveau_swmthd.c linux-2.6.23.i686/drivers/char/drm/nouveau_swmthd.c >--- linux-2.6.23.i686.orig/drivers/char/drm/nouveau_swmthd.c 1970-01-01 01:00:00.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/nouveau_swmthd.c 2008-01-06 09:24:57.000000000 +0100 >@@ -0,0 +1,191 @@ >+/* >+ * Copyright (C) 2007 Arthur Huillet. >+ * >+ * All Rights Reserved. >+ * >+ * Permission is hereby granted, free of charge, to any person obtaining >+ * a copy of this software and associated documentation files (the >+ * "Software"), to deal in the Software without restriction, including >+ * without limitation the rights to use, copy, modify, merge, publish, >+ * distribute, sublicense, and/or sell copies of the Software, and to >+ * permit persons to whom the Software is furnished to do so, subject to >+ * the following conditions: >+ * >+ * The above copyright notice and this permission notice (including the >+ * next paragraph) shall be included in all copies or substantial >+ * portions of the Software. >+ * >+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, >+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF >+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. >+ * IN NO EVENT SHALL THE COPYRIGHT OWNER(S) AND/OR ITS SUPPLIERS BE >+ * LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION >+ * OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION >+ * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
>+ * >+ */ >+ >+/* >+ * Authors: >+ * Arthur Huillet <arthur.huillet AT free DOT fr> >+ */ >+ >+#include "drmP.h" >+#include "drm.h" >+#include "nouveau_drm.h" >+#include "nouveau_drv.h" >+#include "nouveau_reg.h" >+ >+/*TODO: add a "card_type" attribute*/ >+typedef struct{ >+ uint32_t oclass; /* object class for this software method */ >+ uint32_t mthd; /* method number */ >+ void (*method_code)(struct drm_device *dev, uint32_t oclass, uint32_t mthd); /* pointer to the function that does the work */ >+ } nouveau_software_method_t; >+ >+ >+ /* This function handles the NV04 setcontext software methods. >+One function for all because they are very similar.*/ >+static void nouveau_NV04_setcontext_sw_method(struct drm_device *dev, uint32_t oclass, uint32_t mthd) { >+ struct drm_nouveau_private *dev_priv = dev->dev_private; >+ uint32_t inst_loc = NV_READ(NV04_PGRAPH_CTX_SWITCH4) & 0xFFFF; >+ uint32_t value_to_set = 0, bit_to_set = 0; >+ >+ switch ( oclass ) { >+ case 0x4a: >+ switch ( mthd ) { >+ case 0x188 : >+ case 0x18c : >+ bit_to_set = 0; >+ break; >+ case 0x198 : >+ bit_to_set = 1 << 24; /*PATCH_STATUS_VALID*/ >+ break; >+ case 0x2fc : >+ bit_to_set = NV_READ(NV04_PGRAPH_TRAPPED_DATA) << 15; /*PATCH_CONFIG = NV04_PGRAPH_TRAPPED_DATA*/ >+ break; >+ default : ; >+ }; >+ break; >+ case 0x5c: >+ switch ( mthd ) { >+ case 0x184: >+ bit_to_set = 1 << 13; /*USER_CLIP_ENABLE*/ >+ break; >+ case 0x188: >+ case 0x18c: >+ bit_to_set = 0; >+ break; >+ case 0x198: >+ bit_to_set = 1 << 24; /*PATCH_STATUS_VALID*/ >+ break; >+ case 0x2fc : >+ bit_to_set = NV_READ(NV04_PGRAPH_TRAPPED_DATA) << 15; /*PATCH_CONFIG = NV04_PGRAPH_TRAPPED_DATA*/ >+ break; >+ }; >+ break; >+ case 0x5f: >+ switch ( mthd ) { >+ case 0x184 : >+ bit_to_set = 1 << 12; /*CHROMA_KEY_ENABLE*/ >+ break; >+ case 0x188 : >+ bit_to_set = 1 << 13; /*USER_CLIP_ENABLE*/ >+ break; >+ case 0x18c : >+ case 0x190 : >+ bit_to_set = 0; >+ break; >+ case 0x19c : >+ bit_to_set = 1 << 24; /*PATCH_STATUS_VALID*/ >+ break; >+ 
case 0x2fc : >+ bit_to_set = NV_READ(NV04_PGRAPH_TRAPPED_DATA) << 15; /*PATCH_CONFIG = NV04_PGRAPH_TRAPPED_DATA*/ >+ break; >+ }; >+ break; >+ case 0x61: >+ switch ( mthd ) { >+ case 0x188 : >+ bit_to_set = 1 << 13; /*USER_CLIP_ENABLE*/ >+ break; >+ case 0x18c : >+ case 0x190 : >+ bit_to_set = 0; >+ break; >+ case 0x19c : >+ bit_to_set = 1 << 24; /*PATCH_STATUS_VALID*/ >+ break; >+ case 0x2fc : >+ bit_to_set = NV_READ(NV04_PGRAPH_TRAPPED_DATA) << 15; /*PATCH_CONFIG = NV04_PGRAPH_TRAPPED_DATA*/ >+ break; >+ }; >+ break; >+ case 0x77: >+ switch ( mthd ) { >+ case 0x198 : >+ bit_to_set = 1 << 24; /*PATCH_STATUS_VALID*/ >+ break; >+ case 0x304 : >+ bit_to_set = NV_READ(NV04_PGRAPH_TRAPPED_DATA) << 15; //PATCH_CONFIG >+ break; >+ }; >+ break; >+ default :; >+ }; >+ >+ value_to_set = (NV_READ(0x00700000 | inst_loc << 4))| bit_to_set; >+ >+ /*RAMIN*/ >+ nouveau_wait_for_idle(dev); >+ NV_WRITE(0x00700000 | inst_loc << 4, value_to_set); >+ >+ /*DRM_DEBUG("CTX_SWITCH1 value is %#x\n", NV_READ(NV04_PGRAPH_CTX_SWITCH1));*/ >+ NV_WRITE(NV04_PGRAPH_CTX_SWITCH1, value_to_set); >+ >+ /*DRM_DEBUG("CTX_CACHE1 + xxx value is %#x\n", NV_READ(NV04_PGRAPH_CTX_CACHE1 + (((NV_READ(NV04_PGRAPH_TRAPPED_ADDR) >> 13) & 0x7) << 2)));*/ >+ NV_WRITE(NV04_PGRAPH_CTX_CACHE1 + (((NV_READ(NV04_PGRAPH_TRAPPED_ADDR) >> 13) & 0x7) << 2), value_to_set); >+} >+ >+ nouveau_software_method_t nouveau_sw_methods[] = { >+ /*NV04 context software methods*/ >+ { 0x4a, 0x188, nouveau_NV04_setcontext_sw_method }, >+ { 0x4a, 0x18c, nouveau_NV04_setcontext_sw_method }, >+ { 0x4a, 0x198, nouveau_NV04_setcontext_sw_method }, >+ { 0x4a, 0x2fc, nouveau_NV04_setcontext_sw_method }, >+ { 0x5c, 0x184, nouveau_NV04_setcontext_sw_method }, >+ { 0x5c, 0x188, nouveau_NV04_setcontext_sw_method }, >+ { 0x5c, 0x18c, nouveau_NV04_setcontext_sw_method }, >+ { 0x5c, 0x198, nouveau_NV04_setcontext_sw_method }, >+ { 0x5c, 0x2fc, nouveau_NV04_setcontext_sw_method }, >+ { 0x5f, 0x184, nouveau_NV04_setcontext_sw_method }, >+ { 0x5f, 
0x188, nouveau_NV04_setcontext_sw_method }, >+ { 0x5f, 0x18c, nouveau_NV04_setcontext_sw_method }, >+ { 0x5f, 0x190, nouveau_NV04_setcontext_sw_method }, >+ { 0x5f, 0x19c, nouveau_NV04_setcontext_sw_method }, >+ { 0x5f, 0x2fc, nouveau_NV04_setcontext_sw_method }, >+ { 0x61, 0x188, nouveau_NV04_setcontext_sw_method }, >+ { 0x61, 0x18c, nouveau_NV04_setcontext_sw_method }, >+ { 0x61, 0x190, nouveau_NV04_setcontext_sw_method }, >+ { 0x61, 0x19c, nouveau_NV04_setcontext_sw_method }, >+ { 0x61, 0x2fc, nouveau_NV04_setcontext_sw_method }, >+ { 0x77, 0x198, nouveau_NV04_setcontext_sw_method }, >+ { 0x77, 0x304, nouveau_NV04_setcontext_sw_method }, >+ /*terminator*/ >+ { 0x0, 0x0, NULL, }, >+ }; >+ >+ int nouveau_sw_method_execute(struct drm_device *dev, uint32_t oclass, uint32_t method) { >+ int i = 0; >+ while ( nouveau_sw_methods[ i ] . method_code != NULL ) >+ { >+ if ( nouveau_sw_methods[ i ] . oclass == oclass && nouveau_sw_methods[ i ] . mthd == method ) >+ { >+ nouveau_sw_methods[ i ] . method_code(dev, oclass, method); >+ return 0; >+ } >+ i ++; >+ } >+ >+ return 1; >+ } >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/nouveau_swmthd.h linux-2.6.23.i686/drivers/char/drm/nouveau_swmthd.h >--- linux-2.6.23.i686.orig/drivers/char/drm/nouveau_swmthd.h 1970-01-01 01:00:00.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/nouveau_swmthd.h 2008-01-06 09:24:57.000000000 +0100 >@@ -0,0 +1,33 @@ >+/* >+ * Copyright (C) 2007 Arthur Huillet. >+ * >+ * All Rights Reserved. 
>+ * >+ * Permission is hereby granted, free of charge, to any person obtaining >+ * a copy of this software and associated documentation files (the >+ * "Software"), to deal in the Software without restriction, including >+ * without limitation the rights to use, copy, modify, merge, publish, >+ * distribute, sublicense, and/or sell copies of the Software, and to >+ * permit persons to whom the Software is furnished to do so, subject to >+ * the following conditions: >+ * >+ * The above copyright notice and this permission notice (including the >+ * next paragraph) shall be included in all copies or substantial >+ * portions of the Software. >+ * >+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, >+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF >+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. >+ * IN NO EVENT SHALL THE COPYRIGHT OWNER(S) AND/OR ITS SUPPLIERS BE >+ * LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION >+ * OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION >+ * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
>+ * >+ */ >+ >+/* >+ * Authors: >+ * Arthur Huillet <arthur.huillet AT free DOT fr> >+ */ >+ >+int nouveau_sw_method_execute(struct drm_device *dev, uint32_t oclass, uint32_t method); /* execute the given software method, returns 0 on success */ >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/nv04_fb.c linux-2.6.23.i686/drivers/char/drm/nv04_fb.c >--- linux-2.6.23.i686.orig/drivers/char/drm/nv04_fb.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/nv04_fb.c 2008-01-06 09:24:57.000000000 +0100 >@@ -21,4 +21,3 @@ void > nv04_fb_takedown(struct drm_device *dev) > { > } >- >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/nv04_fifo.c linux-2.6.23.i686/drivers/char/drm/nv04_fifo.c >--- linux-2.6.23.i686.orig/drivers/char/drm/nv04_fifo.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/nv04_fifo.c 2008-01-06 09:24:57.000000000 +0100 >@@ -36,6 +36,15 @@ > #define NV04_RAMFC__SIZE 32 > > int >+nv04_fifo_channel_id(struct drm_device *dev) >+{ >+ struct drm_nouveau_private *dev_priv = dev->dev_private; >+ >+ return (NV_READ(NV03_PFIFO_CACHE1_PUSH1) & >+ NV03_PFIFO_CACHE1_PUSH1_CHID_MASK); >+} >+ >+int > nv04_fifo_create_context(struct nouveau_channel *chan) > { > struct drm_device *dev = chan->dev; >@@ -71,7 +80,7 @@ nv04_fifo_destroy_context(struct nouveau > { > struct drm_device *dev = chan->dev; > struct drm_nouveau_private *dev_priv = dev->dev_private; >- >+ > NV_WRITE(NV04_PFIFO_MODE, NV_READ(NV04_PFIFO_MODE)&~(1<<chan->id)); > > nouveau_gpuobj_ref_del(dev, &chan->ramfc); >@@ -84,15 +93,16 @@ nv04_fifo_load_context(struct nouveau_ch > struct drm_nouveau_private *dev_priv = dev->dev_private; > uint32_t tmp; > >- NV_WRITE(NV03_PFIFO_CACHE1_PUSH1, (1<<8) | chan->id); >+ NV_WRITE(NV03_PFIFO_CACHE1_PUSH1, >+ NV03_PFIFO_CACHE1_PUSH1_DMA | chan->id); > > NV_WRITE(NV04_PFIFO_CACHE1_DMA_GET, RAMFC_RD(DMA_GET)); > NV_WRITE(NV04_PFIFO_CACHE1_DMA_PUT, RAMFC_RD(DMA_PUT)); >- >+ > tmp = RAMFC_RD(DMA_INSTANCE); > 
NV_WRITE(NV04_PFIFO_CACHE1_DMA_INSTANCE, tmp & 0xFFFF); > NV_WRITE(NV04_PFIFO_CACHE1_DMA_DCOUNT, tmp >> 16); >- >+ > NV_WRITE(NV04_PFIFO_CACHE1_DMA_STATE, RAMFC_RD(DMA_STATE)); > NV_WRITE(NV04_PFIFO_CACHE1_DMA_FETCH, RAMFC_RD(DMA_FETCH)); > NV_WRITE(NV04_PFIFO_CACHE1_ENGINE, RAMFC_RD(ENGINE)); >@@ -123,7 +133,6 @@ nv04_fifo_save_context(struct nouveau_ch > RAMFC_WR(DMA_FETCH, NV_READ(NV04_PFIFO_CACHE1_DMA_FETCH)); > RAMFC_WR(ENGINE, NV_READ(NV04_PFIFO_CACHE1_ENGINE)); > RAMFC_WR(PULL1_ENGINE, NV_READ(NV04_PFIFO_CACHE1_PULL1)); >- >+ > return 0; > } >- >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/nv04_graph.c linux-2.6.23.i686/drivers/char/drm/nv04_graph.c >--- linux-2.6.23.i686.orig/drivers/char/drm/nv04_graph.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/nv04_graph.c 2008-01-06 09:24:57.000000000 +0100 >@@ -1,4 +1,4 @@ >-/* >+/* > * Copyright 2007 Stephane Marchesin > * All Rights Reserved. > * >@@ -27,322 +27,409 @@ > #include "nouveau_drm.h" > #include "nouveau_drv.h" > >-struct reg_interval >-{ >- uint32_t reg; >- int number; >-} nv04_graph_ctx_regs [] = { >- {NV04_PGRAPH_CTX_SWITCH1,1}, >- {NV04_PGRAPH_CTX_SWITCH2,1}, >- {NV04_PGRAPH_CTX_SWITCH3,1}, >- {NV04_PGRAPH_CTX_SWITCH4,1}, >- {NV04_PGRAPH_CTX_CACHE1,1}, >- {NV04_PGRAPH_CTX_CACHE2,1}, >- {NV04_PGRAPH_CTX_CACHE3,1}, >- {NV04_PGRAPH_CTX_CACHE4,1}, >- {0x00400184,1}, >- {0x004001a4,1}, >- {0x004001c4,1}, >- {0x004001e4,1}, >- {0x00400188,1}, >- {0x004001a8,1}, >- {0x004001c8,1}, >- {0x004001e8,1}, >- {0x0040018c,1}, >- {0x004001ac,1}, >- {0x004001cc,1}, >- {0x004001ec,1}, >- {0x00400190,1}, >- {0x004001b0,1}, >- {0x004001d0,1}, >- {0x004001f0,1}, >- {0x00400194,1}, >- {0x004001b4,1}, >- {0x004001d4,1}, >- {0x004001f4,1}, >- {0x00400198,1}, >- {0x004001b8,1}, >- {0x004001d8,1}, >- {0x004001f8,1}, >- {0x0040019c,1}, >- {0x004001bc,1}, >- {0x004001dc,1}, >- {0x004001fc,1}, >- {0x00400174,1}, >- {NV04_PGRAPH_DMA_START_0,1}, >- {NV04_PGRAPH_DMA_START_1,1}, >- 
{NV04_PGRAPH_DMA_LENGTH,1}, >- {NV04_PGRAPH_DMA_MISC,1}, >- {NV04_PGRAPH_DMA_PITCH,1}, >- {NV04_PGRAPH_BOFFSET0,1}, >- {NV04_PGRAPH_BBASE0,1}, >- {NV04_PGRAPH_BLIMIT0,1}, >- {NV04_PGRAPH_BOFFSET1,1}, >- {NV04_PGRAPH_BBASE1,1}, >- {NV04_PGRAPH_BLIMIT1,1}, >- {NV04_PGRAPH_BOFFSET2,1}, >- {NV04_PGRAPH_BBASE2,1}, >- {NV04_PGRAPH_BLIMIT2,1}, >- {NV04_PGRAPH_BOFFSET3,1}, >- {NV04_PGRAPH_BBASE3,1}, >- {NV04_PGRAPH_BLIMIT3,1}, >- {NV04_PGRAPH_BOFFSET4,1}, >- {NV04_PGRAPH_BBASE4,1}, >- {NV04_PGRAPH_BLIMIT4,1}, >- {NV04_PGRAPH_BOFFSET5,1}, >- {NV04_PGRAPH_BBASE5,1}, >- {NV04_PGRAPH_BLIMIT5,1}, >- {NV04_PGRAPH_BPITCH0,1}, >- {NV04_PGRAPH_BPITCH1,1}, >- {NV04_PGRAPH_BPITCH2,1}, >- {NV04_PGRAPH_BPITCH3,1}, >- {NV04_PGRAPH_BPITCH4,1}, >- {NV04_PGRAPH_SURFACE,1}, >- {NV04_PGRAPH_STATE,1}, >- {NV04_PGRAPH_BSWIZZLE2,1}, >- {NV04_PGRAPH_BSWIZZLE5,1}, >- {NV04_PGRAPH_BPIXEL,1}, >- {NV04_PGRAPH_NOTIFY,1}, >- {NV04_PGRAPH_PATT_COLOR0,1}, >- {NV04_PGRAPH_PATT_COLOR1,1}, >- {NV04_PGRAPH_PATT_COLORRAM,64}, >- {NV04_PGRAPH_PATTERN,1}, >- {0x0040080c,1}, >- {NV04_PGRAPH_PATTERN_SHAPE,1}, >- {0x00400600,1}, >- {NV04_PGRAPH_ROP3,1}, >- {NV04_PGRAPH_CHROMA,1}, >- {NV04_PGRAPH_BETA_AND,1}, >- {NV04_PGRAPH_BETA_PREMULT,1}, >- {NV04_PGRAPH_CONTROL0,1}, >- {NV04_PGRAPH_CONTROL1,1}, >- {NV04_PGRAPH_CONTROL2,1}, >- {NV04_PGRAPH_BLEND,1}, >- {NV04_PGRAPH_STORED_FMT,1}, >- {NV04_PGRAPH_SOURCE_COLOR,1}, >- {0x00400560,1}, >- {0x00400568,1}, >- {0x00400564,1}, >- {0x0040056c,1}, >- {0x00400400,1}, >- {0x00400480,1}, >- {0x00400404,1}, >- {0x00400484,1}, >- {0x00400408,1}, >- {0x00400488,1}, >- {0x0040040c,1}, >- {0x0040048c,1}, >- {0x00400410,1}, >- {0x00400490,1}, >- {0x00400414,1}, >- {0x00400494,1}, >- {0x00400418,1}, >- {0x00400498,1}, >- {0x0040041c,1}, >- {0x0040049c,1}, >- {0x00400420,1}, >- {0x004004a0,1}, >- {0x00400424,1}, >- {0x004004a4,1}, >- {0x00400428,1}, >- {0x004004a8,1}, >- {0x0040042c,1}, >- {0x004004ac,1}, >- {0x00400430,1}, >- {0x004004b0,1}, >- {0x00400434,1}, >- {0x004004b4,1}, >- 
{0x00400438,1}, >- {0x004004b8,1}, >- {0x0040043c,1}, >- {0x004004bc,1}, >- {0x00400440,1}, >- {0x004004c0,1}, >- {0x00400444,1}, >- {0x004004c4,1}, >- {0x00400448,1}, >- {0x004004c8,1}, >- {0x0040044c,1}, >- {0x004004cc,1}, >- {0x00400450,1}, >- {0x004004d0,1}, >- {0x00400454,1}, >- {0x004004d4,1}, >- {0x00400458,1}, >- {0x004004d8,1}, >- {0x0040045c,1}, >- {0x004004dc,1}, >- {0x00400460,1}, >- {0x004004e0,1}, >- {0x00400464,1}, >- {0x004004e4,1}, >- {0x00400468,1}, >- {0x004004e8,1}, >- {0x0040046c,1}, >- {0x004004ec,1}, >- {0x00400470,1}, >- {0x004004f0,1}, >- {0x00400474,1}, >- {0x004004f4,1}, >- {0x00400478,1}, >- {0x004004f8,1}, >- {0x0040047c,1}, >- {0x004004fc,1}, >- {0x0040053c,1}, >- {0x00400544,1}, >- {0x00400540,1}, >- {0x00400548,1}, >- {0x00400560,1}, >- {0x00400568,1}, >- {0x00400564,1}, >- {0x0040056c,1}, >- {0x00400534,1}, >- {0x00400538,1}, >- {0x00400514,1}, >- {0x00400518,1}, >- {0x0040051c,1}, >- {0x00400520,1}, >- {0x00400524,1}, >- {0x00400528,1}, >- {0x0040052c,1}, >- {0x00400530,1}, >- {0x00400d00,1}, >- {0x00400d40,1}, >- {0x00400d80,1}, >- {0x00400d04,1}, >- {0x00400d44,1}, >- {0x00400d84,1}, >- {0x00400d08,1}, >- {0x00400d48,1}, >- {0x00400d88,1}, >- {0x00400d0c,1}, >- {0x00400d4c,1}, >- {0x00400d8c,1}, >- {0x00400d10,1}, >- {0x00400d50,1}, >- {0x00400d90,1}, >- {0x00400d14,1}, >- {0x00400d54,1}, >- {0x00400d94,1}, >- {0x00400d18,1}, >- {0x00400d58,1}, >- {0x00400d98,1}, >- {0x00400d1c,1}, >- {0x00400d5c,1}, >- {0x00400d9c,1}, >- {0x00400d20,1}, >- {0x00400d60,1}, >- {0x00400da0,1}, >- {0x00400d24,1}, >- {0x00400d64,1}, >- {0x00400da4,1}, >- {0x00400d28,1}, >- {0x00400d68,1}, >- {0x00400da8,1}, >- {0x00400d2c,1}, >- {0x00400d6c,1}, >- {0x00400dac,1}, >- {0x00400d30,1}, >- {0x00400d70,1}, >- {0x00400db0,1}, >- {0x00400d34,1}, >- {0x00400d74,1}, >- {0x00400db4,1}, >- {0x00400d38,1}, >- {0x00400d78,1}, >- {0x00400db8,1}, >- {0x00400d3c,1}, >- {0x00400d7c,1}, >- {0x00400dbc,1}, >- {0x00400590,1}, >- {0x00400594,1}, >- {0x00400598,1}, >- 
{0x0040059c,1}, >- {0x004005a8,1}, >- {0x004005ac,1}, >- {0x004005b0,1}, >- {0x004005b4,1}, >- {0x004005c0,1}, >- {0x004005c4,1}, >- {0x004005c8,1}, >- {0x004005cc,1}, >- {0x004005d0,1}, >- {0x004005d4,1}, >- {0x004005d8,1}, >- {0x004005dc,1}, >- {0x004005e0,1}, >- {NV04_PGRAPH_PASSTHRU_0,1}, >- {NV04_PGRAPH_PASSTHRU_1,1}, >- {NV04_PGRAPH_PASSTHRU_2,1}, >- {NV04_PGRAPH_DVD_COLORFMT,1}, >- {NV04_PGRAPH_SCALED_FORMAT,1}, >- {NV04_PGRAPH_MISC24_0,1}, >- {NV04_PGRAPH_MISC24_1,1}, >- {NV04_PGRAPH_MISC24_2,1}, >- {0x00400500,1}, >- {0x00400504,1}, >- {NV04_PGRAPH_VALID1,1}, >- {NV04_PGRAPH_VALID2,1} >+static uint32_t nv04_graph_ctx_regs [] = { >+ NV04_PGRAPH_CTX_SWITCH1, >+ NV04_PGRAPH_CTX_SWITCH2, >+ NV04_PGRAPH_CTX_SWITCH3, >+ NV04_PGRAPH_CTX_SWITCH4, >+ NV04_PGRAPH_CTX_CACHE1, >+ NV04_PGRAPH_CTX_CACHE2, >+ NV04_PGRAPH_CTX_CACHE3, >+ NV04_PGRAPH_CTX_CACHE4, >+ 0x00400184, >+ 0x004001a4, >+ 0x004001c4, >+ 0x004001e4, >+ 0x00400188, >+ 0x004001a8, >+ 0x004001c8, >+ 0x004001e8, >+ 0x0040018c, >+ 0x004001ac, >+ 0x004001cc, >+ 0x004001ec, >+ 0x00400190, >+ 0x004001b0, >+ 0x004001d0, >+ 0x004001f0, >+ 0x00400194, >+ 0x004001b4, >+ 0x004001d4, >+ 0x004001f4, >+ 0x00400198, >+ 0x004001b8, >+ 0x004001d8, >+ 0x004001f8, >+ 0x0040019c, >+ 0x004001bc, >+ 0x004001dc, >+ 0x004001fc, >+ 0x00400174, >+ NV04_PGRAPH_DMA_START_0, >+ NV04_PGRAPH_DMA_START_1, >+ NV04_PGRAPH_DMA_LENGTH, >+ NV04_PGRAPH_DMA_MISC, >+ NV04_PGRAPH_DMA_PITCH, >+ NV04_PGRAPH_BOFFSET0, >+ NV04_PGRAPH_BBASE0, >+ NV04_PGRAPH_BLIMIT0, >+ NV04_PGRAPH_BOFFSET1, >+ NV04_PGRAPH_BBASE1, >+ NV04_PGRAPH_BLIMIT1, >+ NV04_PGRAPH_BOFFSET2, >+ NV04_PGRAPH_BBASE2, >+ NV04_PGRAPH_BLIMIT2, >+ NV04_PGRAPH_BOFFSET3, >+ NV04_PGRAPH_BBASE3, >+ NV04_PGRAPH_BLIMIT3, >+ NV04_PGRAPH_BOFFSET4, >+ NV04_PGRAPH_BBASE4, >+ NV04_PGRAPH_BLIMIT4, >+ NV04_PGRAPH_BOFFSET5, >+ NV04_PGRAPH_BBASE5, >+ NV04_PGRAPH_BLIMIT5, >+ NV04_PGRAPH_BPITCH0, >+ NV04_PGRAPH_BPITCH1, >+ NV04_PGRAPH_BPITCH2, >+ NV04_PGRAPH_BPITCH3, >+ NV04_PGRAPH_BPITCH4, >+ 
NV04_PGRAPH_SURFACE, >+ NV04_PGRAPH_STATE, >+ NV04_PGRAPH_BSWIZZLE2, >+ NV04_PGRAPH_BSWIZZLE5, >+ NV04_PGRAPH_BPIXEL, >+ NV04_PGRAPH_NOTIFY, >+ NV04_PGRAPH_PATT_COLOR0, >+ NV04_PGRAPH_PATT_COLOR1, >+ NV04_PGRAPH_PATT_COLORRAM+0x00, >+ NV04_PGRAPH_PATT_COLORRAM+0x01, >+ NV04_PGRAPH_PATT_COLORRAM+0x02, >+ NV04_PGRAPH_PATT_COLORRAM+0x03, >+ NV04_PGRAPH_PATT_COLORRAM+0x04, >+ NV04_PGRAPH_PATT_COLORRAM+0x05, >+ NV04_PGRAPH_PATT_COLORRAM+0x06, >+ NV04_PGRAPH_PATT_COLORRAM+0x07, >+ NV04_PGRAPH_PATT_COLORRAM+0x08, >+ NV04_PGRAPH_PATT_COLORRAM+0x09, >+ NV04_PGRAPH_PATT_COLORRAM+0x0A, >+ NV04_PGRAPH_PATT_COLORRAM+0x0B, >+ NV04_PGRAPH_PATT_COLORRAM+0x0C, >+ NV04_PGRAPH_PATT_COLORRAM+0x0D, >+ NV04_PGRAPH_PATT_COLORRAM+0x0E, >+ NV04_PGRAPH_PATT_COLORRAM+0x0F, >+ NV04_PGRAPH_PATT_COLORRAM+0x10, >+ NV04_PGRAPH_PATT_COLORRAM+0x11, >+ NV04_PGRAPH_PATT_COLORRAM+0x12, >+ NV04_PGRAPH_PATT_COLORRAM+0x13, >+ NV04_PGRAPH_PATT_COLORRAM+0x14, >+ NV04_PGRAPH_PATT_COLORRAM+0x15, >+ NV04_PGRAPH_PATT_COLORRAM+0x16, >+ NV04_PGRAPH_PATT_COLORRAM+0x17, >+ NV04_PGRAPH_PATT_COLORRAM+0x18, >+ NV04_PGRAPH_PATT_COLORRAM+0x19, >+ NV04_PGRAPH_PATT_COLORRAM+0x1A, >+ NV04_PGRAPH_PATT_COLORRAM+0x1B, >+ NV04_PGRAPH_PATT_COLORRAM+0x1C, >+ NV04_PGRAPH_PATT_COLORRAM+0x1D, >+ NV04_PGRAPH_PATT_COLORRAM+0x1E, >+ NV04_PGRAPH_PATT_COLORRAM+0x1F, >+ NV04_PGRAPH_PATT_COLORRAM+0x20, >+ NV04_PGRAPH_PATT_COLORRAM+0x21, >+ NV04_PGRAPH_PATT_COLORRAM+0x22, >+ NV04_PGRAPH_PATT_COLORRAM+0x23, >+ NV04_PGRAPH_PATT_COLORRAM+0x24, >+ NV04_PGRAPH_PATT_COLORRAM+0x25, >+ NV04_PGRAPH_PATT_COLORRAM+0x26, >+ NV04_PGRAPH_PATT_COLORRAM+0x27, >+ NV04_PGRAPH_PATT_COLORRAM+0x28, >+ NV04_PGRAPH_PATT_COLORRAM+0x29, >+ NV04_PGRAPH_PATT_COLORRAM+0x2A, >+ NV04_PGRAPH_PATT_COLORRAM+0x2B, >+ NV04_PGRAPH_PATT_COLORRAM+0x2C, >+ NV04_PGRAPH_PATT_COLORRAM+0x2D, >+ NV04_PGRAPH_PATT_COLORRAM+0x2E, >+ NV04_PGRAPH_PATT_COLORRAM+0x2F, >+ NV04_PGRAPH_PATT_COLORRAM+0x30, >+ NV04_PGRAPH_PATT_COLORRAM+0x31, >+ NV04_PGRAPH_PATT_COLORRAM+0x32, >+ 
NV04_PGRAPH_PATT_COLORRAM+0x33, >+ NV04_PGRAPH_PATT_COLORRAM+0x34, >+ NV04_PGRAPH_PATT_COLORRAM+0x35, >+ NV04_PGRAPH_PATT_COLORRAM+0x36, >+ NV04_PGRAPH_PATT_COLORRAM+0x37, >+ NV04_PGRAPH_PATT_COLORRAM+0x38, >+ NV04_PGRAPH_PATT_COLORRAM+0x39, >+ NV04_PGRAPH_PATT_COLORRAM+0x3A, >+ NV04_PGRAPH_PATT_COLORRAM+0x3B, >+ NV04_PGRAPH_PATT_COLORRAM+0x3C, >+ NV04_PGRAPH_PATT_COLORRAM+0x3D, >+ NV04_PGRAPH_PATT_COLORRAM+0x3E, >+ NV04_PGRAPH_PATT_COLORRAM+0x3F, >+ NV04_PGRAPH_PATTERN, >+ 0x0040080c, >+ NV04_PGRAPH_PATTERN_SHAPE, >+ 0x00400600, >+ NV04_PGRAPH_ROP3, >+ NV04_PGRAPH_CHROMA, >+ NV04_PGRAPH_BETA_AND, >+ NV04_PGRAPH_BETA_PREMULT, >+ NV04_PGRAPH_CONTROL0, >+ NV04_PGRAPH_CONTROL1, >+ NV04_PGRAPH_CONTROL2, >+ NV04_PGRAPH_BLEND, >+ NV04_PGRAPH_STORED_FMT, >+ NV04_PGRAPH_SOURCE_COLOR, >+ 0x00400560, >+ 0x00400568, >+ 0x00400564, >+ 0x0040056c, >+ 0x00400400, >+ 0x00400480, >+ 0x00400404, >+ 0x00400484, >+ 0x00400408, >+ 0x00400488, >+ 0x0040040c, >+ 0x0040048c, >+ 0x00400410, >+ 0x00400490, >+ 0x00400414, >+ 0x00400494, >+ 0x00400418, >+ 0x00400498, >+ 0x0040041c, >+ 0x0040049c, >+ 0x00400420, >+ 0x004004a0, >+ 0x00400424, >+ 0x004004a4, >+ 0x00400428, >+ 0x004004a8, >+ 0x0040042c, >+ 0x004004ac, >+ 0x00400430, >+ 0x004004b0, >+ 0x00400434, >+ 0x004004b4, >+ 0x00400438, >+ 0x004004b8, >+ 0x0040043c, >+ 0x004004bc, >+ 0x00400440, >+ 0x004004c0, >+ 0x00400444, >+ 0x004004c4, >+ 0x00400448, >+ 0x004004c8, >+ 0x0040044c, >+ 0x004004cc, >+ 0x00400450, >+ 0x004004d0, >+ 0x00400454, >+ 0x004004d4, >+ 0x00400458, >+ 0x004004d8, >+ 0x0040045c, >+ 0x004004dc, >+ 0x00400460, >+ 0x004004e0, >+ 0x00400464, >+ 0x004004e4, >+ 0x00400468, >+ 0x004004e8, >+ 0x0040046c, >+ 0x004004ec, >+ 0x00400470, >+ 0x004004f0, >+ 0x00400474, >+ 0x004004f4, >+ 0x00400478, >+ 0x004004f8, >+ 0x0040047c, >+ 0x004004fc, >+ 0x0040053c, >+ 0x00400544, >+ 0x00400540, >+ 0x00400548, >+ 0x00400560, >+ 0x00400568, >+ 0x00400564, >+ 0x0040056c, >+ 0x00400534, >+ 0x00400538, >+ 0x00400514, >+ 0x00400518, >+ 
0x0040051c, >+ 0x00400520, >+ 0x00400524, >+ 0x00400528, >+ 0x0040052c, >+ 0x00400530, >+ 0x00400d00, >+ 0x00400d40, >+ 0x00400d80, >+ 0x00400d04, >+ 0x00400d44, >+ 0x00400d84, >+ 0x00400d08, >+ 0x00400d48, >+ 0x00400d88, >+ 0x00400d0c, >+ 0x00400d4c, >+ 0x00400d8c, >+ 0x00400d10, >+ 0x00400d50, >+ 0x00400d90, >+ 0x00400d14, >+ 0x00400d54, >+ 0x00400d94, >+ 0x00400d18, >+ 0x00400d58, >+ 0x00400d98, >+ 0x00400d1c, >+ 0x00400d5c, >+ 0x00400d9c, >+ 0x00400d20, >+ 0x00400d60, >+ 0x00400da0, >+ 0x00400d24, >+ 0x00400d64, >+ 0x00400da4, >+ 0x00400d28, >+ 0x00400d68, >+ 0x00400da8, >+ 0x00400d2c, >+ 0x00400d6c, >+ 0x00400dac, >+ 0x00400d30, >+ 0x00400d70, >+ 0x00400db0, >+ 0x00400d34, >+ 0x00400d74, >+ 0x00400db4, >+ 0x00400d38, >+ 0x00400d78, >+ 0x00400db8, >+ 0x00400d3c, >+ 0x00400d7c, >+ 0x00400dbc, >+ 0x00400590, >+ 0x00400594, >+ 0x00400598, >+ 0x0040059c, >+ 0x004005a8, >+ 0x004005ac, >+ 0x004005b0, >+ 0x004005b4, >+ 0x004005c0, >+ 0x004005c4, >+ 0x004005c8, >+ 0x004005cc, >+ 0x004005d0, >+ 0x004005d4, >+ 0x004005d8, >+ 0x004005dc, >+ 0x004005e0, >+ NV04_PGRAPH_PASSTHRU_0, >+ NV04_PGRAPH_PASSTHRU_1, >+ NV04_PGRAPH_PASSTHRU_2, >+ NV04_PGRAPH_DVD_COLORFMT, >+ NV04_PGRAPH_SCALED_FORMAT, >+ NV04_PGRAPH_MISC24_0, >+ NV04_PGRAPH_MISC24_1, >+ NV04_PGRAPH_MISC24_2, >+ 0x00400500, >+ 0x00400504, >+ NV04_PGRAPH_VALID1, >+ NV04_PGRAPH_VALID2 >+ > >+}; > >+struct graph_state { >+ int nv04[sizeof(nv04_graph_ctx_regs)/sizeof(nv04_graph_ctx_regs[0])]; > }; > > void nouveau_nv04_context_switch(struct drm_device *dev) > { > struct drm_nouveau_private *dev_priv = dev->dev_private; >- int channel, channel_old, i, j, index; >- >- channel=NV_READ(NV03_PFIFO_CACHE1_PUSH1)&(nouveau_fifo_number(dev)-1); >- channel_old = (NV_READ(NV04_PGRAPH_CTX_USER) >> 24) & (nouveau_fifo_number(dev)-1); >- >- DRM_DEBUG("NV: PGRAPH context switch interrupt channel %x -> %x\n",channel_old, channel); >+ struct nouveau_engine *engine = &dev_priv->Engine; >+ struct nouveau_channel *next, *last; >+ int chid; 
>+ >+ if (!dev) { >+ DRM_DEBUG("Invalid drm_device\n"); >+ return; >+ } >+ dev_priv = dev->dev_private; >+ if (!dev_priv) { >+ DRM_DEBUG("Invalid drm_nouveau_private\n"); >+ return; >+ } >+ if (!dev_priv->fifos) { >+ DRM_DEBUG("Invalid drm_nouveau_private->fifos\n"); >+ return; >+ } >+ >+ chid = engine->fifo.channel_id(dev); >+ next = dev_priv->fifos[chid]; >+ >+ if (!next) { >+ DRM_DEBUG("Invalid next channel\n"); >+ return; >+ } >+ >+ chid = (NV_READ(NV04_PGRAPH_CTX_USER) >> 24) & (engine->fifo.channels - 1); >+ last = dev_priv->fifos[chid]; >+ >+ if (!last) { >+ DRM_DEBUG("WARNING: Invalid last channel, switch to %x\n", >+ next->id); >+ } else { >+ DRM_INFO("NV: PGRAPH context switch interrupt channel %x -> %x\n", >+ last->id, next->id); >+ } > >- NV_WRITE(NV03_PFIFO_CACHES, 0x0); >+/* NV_WRITE(NV03_PFIFO_CACHES, 0x0); > NV_WRITE(NV04_PFIFO_CACHE0_PULL0, 0x0); >- NV_WRITE(NV04_PFIFO_CACHE1_PULL0, 0x0); >+ NV_WRITE(NV04_PFIFO_CACHE1_PULL0, 0x0);*/ > NV_WRITE(NV04_PGRAPH_FIFO,0x0); > >- nouveau_wait_for_idle(dev); >+ if (last) >+ nv04_graph_save_context(last); > >- // save PGRAPH context >- index=0; >- for (i = 0; i<sizeof(nv04_graph_ctx_regs)/sizeof(nv04_graph_ctx_regs[0]); i++) >- for (j = 0; j<nv04_graph_ctx_regs[i].number; j++) >- { >- dev_priv->fifos[channel_old]->pgraph_ctx[index] = NV_READ(nv04_graph_ctx_regs[i].reg+j*4); >- index++; >- } >+ nouveau_wait_for_idle(dev); > > NV_WRITE(NV04_PGRAPH_CTX_CONTROL, 0x10000000); > NV_WRITE(NV04_PGRAPH_CTX_USER, (NV_READ(NV04_PGRAPH_CTX_USER) & 0xffffff) | (0x0f << 24)); > >- // restore PGRAPH context >- index=0; >- for (i = 0; i<sizeof(nv04_graph_ctx_regs)/sizeof(nv04_graph_ctx_regs[0]); i++) >- for (j = 0; j<nv04_graph_ctx_regs[i].number; j++) >- { >- NV_WRITE(nv04_graph_ctx_regs[i].reg+j*4, dev_priv->fifos[channel]->pgraph_ctx[index]); >- index++; >- } >+ nouveau_wait_for_idle(dev); >+ >+ nv04_graph_load_context(next); > > NV_WRITE(NV04_PGRAPH_CTX_CONTROL, 0x10010100); >- NV_WRITE(NV04_PGRAPH_CTX_USER, channel << 
24); >+ NV_WRITE(NV04_PGRAPH_CTX_USER, next->id << 24); > NV_WRITE(NV04_PGRAPH_FFINTFC_ST2, NV_READ(NV04_PGRAPH_FFINTFC_ST2)&0x000FFFFF); > >- NV_WRITE(NV04_PGRAPH_FIFO,0x0); >+/* NV_WRITE(NV04_PGRAPH_FIFO,0x0); > NV_WRITE(NV04_PFIFO_CACHE0_PULL0, 0x0); > NV_WRITE(NV04_PFIFO_CACHE1_PULL0, 0x1); >- NV_WRITE(NV03_PFIFO_CACHES, 0x1); >+ NV_WRITE(NV03_PFIFO_CACHES, 0x1);*/ > NV_WRITE(NV04_PGRAPH_FIFO,0x1); > } > > int nv04_graph_create_context(struct nouveau_channel *chan) { >+ struct graph_state* pgraph_ctx; > DRM_DEBUG("nv04_graph_context_create %d\n", chan->id); > >- memset(chan->pgraph_ctx, 0, sizeof(chan->pgraph_ctx)); >+ chan->pgraph_ctx = pgraph_ctx = drm_calloc(1, sizeof(*pgraph_ctx), >+ DRM_MEM_DRIVER); >+ >+ if (pgraph_ctx == NULL) >+ return -ENOMEM; > > //dev_priv->fifos[channel].pgraph_ctx_user = channel << 24; >- chan->pgraph_ctx[0] = 0x0001ffff; >+ pgraph_ctx->nv04[0] = 0x0001ffff; > /* is it really needed ??? */ > //dev_priv->fifos[channel].pgraph_ctx[1] = NV_READ(NV_PGRAPH_DEBUG_4); > //dev_priv->fifos[channel].pgraph_ctx[2] = NV_READ(0x004006b0); >@@ -352,23 +439,40 @@ int nv04_graph_create_context(struct nou > > void nv04_graph_destroy_context(struct nouveau_channel *chan) > { >+ struct graph_state* pgraph_ctx = chan->pgraph_ctx; >+ >+ drm_free(pgraph_ctx, sizeof(*pgraph_ctx), DRM_MEM_DRIVER); >+ chan->pgraph_ctx = NULL; > } > > int nv04_graph_load_context(struct nouveau_channel *chan) > { >- DRM_ERROR("stub!\n"); >+ struct drm_device *dev = chan->dev; >+ struct drm_nouveau_private *dev_priv = dev->dev_private; >+ struct graph_state* pgraph_ctx = chan->pgraph_ctx; >+ int i; >+ >+ for (i = 0; i < sizeof(nv04_graph_ctx_regs)/sizeof(nv04_graph_ctx_regs[0]); i++) >+ NV_WRITE(nv04_graph_ctx_regs[i], pgraph_ctx->nv04[i]); >+ > return 0; > } > > int nv04_graph_save_context(struct nouveau_channel *chan) > { >- DRM_ERROR("stub!\n"); >+ struct drm_device *dev = chan->dev; >+ struct drm_nouveau_private *dev_priv = dev->dev_private; >+ struct graph_state* 
pgraph_ctx = chan->pgraph_ctx; >+ int i; >+ >+ for (i = 0; i < sizeof(nv04_graph_ctx_regs)/sizeof(nv04_graph_ctx_regs[0]); i++) >+ pgraph_ctx->nv04[i] = NV_READ(nv04_graph_ctx_regs[i]); >+ > return 0; > } > > int nv04_graph_init(struct drm_device *dev) { > struct drm_nouveau_private *dev_priv = dev->dev_private; >- int i,sum=0; > > NV_WRITE(NV03_PMC_ENABLE, NV_READ(NV03_PMC_ENABLE) & > ~NV_PMC_ENABLE_PGRAPH); >@@ -379,24 +483,22 @@ int nv04_graph_init(struct drm_device *d > NV_WRITE(NV03_PGRAPH_INTR, 0xFFFFFFFF); > NV_WRITE(NV03_PGRAPH_INTR_EN, 0xFFFFFFFF); > >- // check the context is big enough >- for ( i = 0 ; i<sizeof(nv04_graph_ctx_regs)/sizeof(nv04_graph_ctx_regs[0]); i++) >- sum+=nv04_graph_ctx_regs[i].number; >- if ( sum*4>sizeof(dev_priv->fifos[0]->pgraph_ctx) ) >- DRM_ERROR("pgraph_ctx too small\n"); >- >- NV_WRITE(NV03_PGRAPH_INTR_EN, 0x00000000); >- NV_WRITE(NV03_PGRAPH_INTR , 0xFFFFFFFF); >- >- NV_WRITE(NV04_PGRAPH_DEBUG_0, 0x000001FF); >- NV_WRITE(NV04_PGRAPH_DEBUG_0, 0x1230C000); >- NV_WRITE(NV04_PGRAPH_DEBUG_1, 0x72111101); >- NV_WRITE(NV04_PGRAPH_DEBUG_2, 0x11D5F071); >- NV_WRITE(NV04_PGRAPH_DEBUG_3, 0x0004FF31); >- NV_WRITE(NV04_PGRAPH_DEBUG_3, 0x4004FF31 | >- (0x00D00000) | >- (1<<29) | >- (1<<31)); >+ NV_WRITE(NV04_PGRAPH_VALID1, 0); >+ NV_WRITE(NV04_PGRAPH_VALID2, 0); >+ /*NV_WRITE(NV04_PGRAPH_DEBUG_0, 0x000001FF); >+ NV_WRITE(NV04_PGRAPH_DEBUG_0, 0x001FFFFF);*/ >+ NV_WRITE(NV04_PGRAPH_DEBUG_0, 0x1231c000); >+ /*1231C000 blob, 001 haiku*/ >+ //*V_WRITE(NV04_PGRAPH_DEBUG_1, 0xf2d91100);*/ >+ NV_WRITE(NV04_PGRAPH_DEBUG_1, 0x72111100); >+ /*0x72111100 blob , 01 haiku*/ >+ /*NV_WRITE(NV04_PGRAPH_DEBUG_2, 0x11d5f870);*/ >+ NV_WRITE(NV04_PGRAPH_DEBUG_2, 0x11d5f071); >+ /*haiku same*/ >+ >+ /*NV_WRITE(NV04_PGRAPH_DEBUG_3, 0xfad4ff31);*/ >+ NV_WRITE(NV04_PGRAPH_DEBUG_3, 0x10d4ff31); >+ /*haiku and blob 10d4*/ > > NV_WRITE(NV04_PGRAPH_STATE , 0xFFFFFFFF); > NV_WRITE(NV04_PGRAPH_CTX_CONTROL , 0x10010100); >@@ -412,4 +514,3 @@ int nv04_graph_init(struct 
drm_device *d > void nv04_graph_takedown(struct drm_device *dev) > { > } >- >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/nv04_instmem.c linux-2.6.23.i686/drivers/char/drm/nv04_instmem.c >--- linux-2.6.23.i686.orig/drivers/char/drm/nv04_instmem.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/nv04_instmem.c 2008-01-06 09:24:57.000000000 +0100 >@@ -33,6 +33,7 @@ static void > nv04_instmem_configure_fixed_tables(struct drm_device *dev) > { > struct drm_nouveau_private *dev_priv = dev->dev_private; >+ struct nouveau_engine *engine = &dev_priv->Engine; > > /* FIFO hash table (RAMHT) > * use 4k hash table at RAMIN+0x10000 >@@ -61,8 +62,8 @@ nv04_instmem_configure_fixed_tables(stru > case NV_40: > case NV_44: > dev_priv->ramfc_offset = 0x20000; >- dev_priv->ramfc_size = nouveau_fifo_number(dev) * >- nouveau_fifo_ctx_size(dev); >+ dev_priv->ramfc_size = engine->fifo.channels * >+ nouveau_fifo_ctx_size(dev); > break; > case NV_30: > case NV_20: >@@ -70,11 +71,10 @@ nv04_instmem_configure_fixed_tables(stru > case NV_11: > case NV_10: > case NV_04: >- case NV_03: > default: > dev_priv->ramfc_offset = 0x11400; >- dev_priv->ramfc_size = nouveau_fifo_number(dev) * >- nouveau_fifo_ctx_size(dev); >+ dev_priv->ramfc_size = engine->fifo.channels * >+ nouveau_fifo_ctx_size(dev); > break; > } > DRM_DEBUG("RAMFC offset=0x%x, size=%d\n", dev_priv->ramfc_offset, >@@ -135,7 +135,7 @@ nv04_instmem_clear(struct drm_device *de > if (gpuobj->im_bound) > dev_priv->Engine.instmem.unbind(dev, gpuobj); > gpuobj->im_backing = NULL; >- } >+ } > } > > int >@@ -157,4 +157,3 @@ nv04_instmem_unbind(struct drm_device *d > gpuobj->im_bound = 0; > return 0; > } >- >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/nv04_mc.c linux-2.6.23.i686/drivers/char/drm/nv04_mc.c >--- linux-2.6.23.i686.orig/drivers/char/drm/nv04_mc.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/nv04_mc.c 2008-01-06 09:24:57.000000000 +0100 >@@ -20,4 +20,3 @@ void > 
nv04_mc_takedown(struct drm_device *dev) > { > } >- >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/nv04_timer.c linux-2.6.23.i686/drivers/char/drm/nv04_timer.c >--- linux-2.6.23.i686.orig/drivers/char/drm/nv04_timer.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/nv04_timer.c 2008-01-06 09:24:57.000000000 +0100 >@@ -42,4 +42,3 @@ void > nv04_timer_takedown(struct drm_device *dev) > { > } >- >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/nv10_fb.c linux-2.6.23.i686/drivers/char/drm/nv10_fb.c >--- linux-2.6.23.i686.orig/drivers/char/drm/nv10_fb.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/nv10_fb.c 2008-01-06 09:24:57.000000000 +0100 >@@ -23,4 +23,3 @@ void > nv10_fb_takedown(struct drm_device *dev) > { > } >- >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/nv10_fifo.c linux-2.6.23.i686/drivers/char/drm/nv10_fifo.c >--- linux-2.6.23.i686.orig/drivers/char/drm/nv10_fifo.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/nv10_fifo.c 2008-01-06 09:24:57.000000000 +0100 >@@ -37,6 +37,15 @@ > #define NV10_RAMFC__SIZE ((dev_priv->chipset) >= 0x17 ? 
64 : 32) > > int >+nv10_fifo_channel_id(struct drm_device *dev) >+{ >+ struct drm_nouveau_private *dev_priv = dev->dev_private; >+ >+ return (NV_READ(NV03_PFIFO_CACHE1_PUSH1) & >+ NV10_PFIFO_CACHE1_PUSH1_CHID_MASK); >+} >+ >+int > nv10_fifo_create_context(struct nouveau_channel *chan) > { > struct drm_device *dev = chan->dev; >@@ -87,7 +96,8 @@ nv10_fifo_load_context(struct nouveau_ch > struct drm_nouveau_private *dev_priv = dev->dev_private; > uint32_t tmp; > >- NV_WRITE(NV03_PFIFO_CACHE1_PUSH1 , 0x00000100 | chan->id); >+ NV_WRITE(NV03_PFIFO_CACHE1_PUSH1, >+ NV03_PFIFO_CACHE1_PUSH1_DMA | chan->id); > > NV_WRITE(NV04_PFIFO_CACHE1_DMA_GET , RAMFC_RD(DMA_GET)); > NV_WRITE(NV04_PFIFO_CACHE1_DMA_PUT , RAMFC_RD(DMA_PUT)); >@@ -157,4 +167,3 @@ nv10_fifo_save_context(struct nouveau_ch > > return 0; > } >- >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/nv10_graph.c linux-2.6.23.i686/drivers/char/drm/nv10_graph.c >--- linux-2.6.23.i686.orig/drivers/char/drm/nv10_graph.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/nv10_graph.c 2008-01-06 09:24:57.000000000 +0100 >@@ -1,4 +1,4 @@ >-/* >+/* > * Copyright 2007 Matthieu CASTET <castet.matthieu@free.fr> > * All Rights Reserved. 
> * >@@ -27,159 +27,20 @@ > #include "nouveau_drm.h" > #include "nouveau_drv.h" > >+#define NV10_FIFO_NUMBER 32 > >-static void nv10_praph_pipe(struct drm_device *dev) { >- struct drm_nouveau_private *dev_priv = dev->dev_private; >- int i; >- >- nouveau_wait_for_idle(dev); >- /* XXX check haiku comments */ >- NV_WRITE(NV10_PGRAPH_XFMODE0, 0x10000000); >- NV_WRITE(NV10_PGRAPH_XFMODE1, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_ADDRESS, 0x000064c0); >- for (i = 0; i < 4; i++) >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x3f800000); >- for (i = 0; i < 4; i++) >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- >- NV_WRITE(NV10_PGRAPH_PIPE_ADDRESS, 0x00006ab0); >- >- for (i = 0; i < 3; i++) >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x3f800000); >- >- NV_WRITE(NV10_PGRAPH_PIPE_ADDRESS, 0x00006a80); >- for (i = 0; i < 3; i++) >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- >- NV_WRITE(NV10_PGRAPH_PIPE_ADDRESS, 0x00000040); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000008); >- >- NV_WRITE(NV10_PGRAPH_PIPE_ADDRESS, 0x00000200); >- for (i = 0; i < 48; i++) >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- >- nouveau_wait_for_idle(dev); >- >- NV_WRITE(NV10_PGRAPH_XFMODE0, 0x00000000); >- NV_WRITE(NV10_PGRAPH_XFMODE1, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_ADDRESS, 0x00006400); >- for (i = 0; i < 211; i++) >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x3f800000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x40000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x40000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x40000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x40000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x3f800000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x3f000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x3f000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- 
NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x3f800000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x3f800000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x3f800000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x3f800000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x3f800000); >- >- NV_WRITE(NV10_PGRAPH_PIPE_ADDRESS, 0x00006800); >- for (i = 0; i < 162; i++) >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x3f800000); >- for (i = 0; i < 25; i++) >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- >- NV_WRITE(NV10_PGRAPH_PIPE_ADDRESS, 0x00006c00); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0xbf800000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_ADDRESS, 0x00007000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- 
NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x7149f2ca); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x7149f2ca); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x7149f2ca); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x7149f2ca); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x7149f2ca); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x7149f2ca); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x7149f2ca); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x7149f2ca); >- for (i = 0; i < 35; i++) >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- >- >- NV_WRITE(NV10_PGRAPH_PIPE_ADDRESS, 0x00007400); >- for (i = 0; i < 48; i++) >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- >- NV_WRITE(NV10_PGRAPH_PIPE_ADDRESS, 0x00007800); >- for (i = 0; i < 48; i++) >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- >- NV_WRITE(NV10_PGRAPH_PIPE_ADDRESS, 0x00004400); >- for (i = 0; i < 32; i++) >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- >- NV_WRITE(NV10_PGRAPH_PIPE_ADDRESS, 
0x00000000); >- for (i = 0; i < 16; i++) >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- >- NV_WRITE(NV10_PGRAPH_PIPE_ADDRESS, 0x00000040); >- for (i = 0; i < 4; i++) >- NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >- >- nouveau_wait_for_idle(dev); >-} >+struct pipe_state { >+ uint32_t pipe_0x0000[0x040/4]; >+ uint32_t pipe_0x0040[0x010/4]; >+ uint32_t pipe_0x0200[0x0c0/4]; >+ uint32_t pipe_0x4400[0x080/4]; >+ uint32_t pipe_0x6400[0x3b0/4]; >+ uint32_t pipe_0x6800[0x2f0/4]; >+ uint32_t pipe_0x6c00[0x030/4]; >+ uint32_t pipe_0x7000[0x130/4]; >+ uint32_t pipe_0x7400[0x0c0/4]; >+ uint32_t pipe_0x7800[0x0c0/4]; >+}; > > static int nv10_graph_ctx_regs [] = { > NV10_PGRAPH_CTX_SWITCH1, >@@ -459,18 +320,18 @@ NV03_PGRAPH_CLIPX_0, > NV03_PGRAPH_CLIPX_1, > NV03_PGRAPH_CLIPY_0, > NV03_PGRAPH_CLIPY_1, >-0x00400e40, >-0x00400e44, >-0x00400e48, >-0x00400e4c, >-0x00400e50, >-0x00400e54, >-0x00400e58, >-0x00400e5c, >-0x00400e60, >-0x00400e64, >-0x00400e68, >-0x00400e6c, >+NV10_PGRAPH_COMBINER0_IN_ALPHA, >+NV10_PGRAPH_COMBINER1_IN_ALPHA, >+NV10_PGRAPH_COMBINER0_IN_RGB, >+NV10_PGRAPH_COMBINER1_IN_RGB, >+NV10_PGRAPH_COMBINER_COLOR0, >+NV10_PGRAPH_COMBINER_COLOR1, >+NV10_PGRAPH_COMBINER0_OUT_ALPHA, >+NV10_PGRAPH_COMBINER1_OUT_ALPHA, >+NV10_PGRAPH_COMBINER0_OUT_RGB, >+NV10_PGRAPH_COMBINER1_OUT_RGB, >+NV10_PGRAPH_COMBINER_FINAL0, >+NV10_PGRAPH_COMBINER_FINAL1, > 0x00400e00, > 0x00400e04, > 0x00400e08, >@@ -524,20 +385,269 @@ NV10_PGRAPH_DEBUG_4, > 0x00400a04, > }; > >+struct graph_state { >+ int nv10[sizeof(nv10_graph_ctx_regs)/sizeof(nv10_graph_ctx_regs[0])]; >+ int nv17[sizeof(nv17_graph_ctx_regs)/sizeof(nv17_graph_ctx_regs[0])]; >+ struct pipe_state pipe_state; >+}; >+ >+static void nv10_graph_save_pipe(struct nouveau_channel *chan) { >+ struct drm_device *dev = chan->dev; >+ struct drm_nouveau_private *dev_priv = dev->dev_private; >+ struct graph_state* pgraph_ctx = chan->pgraph_ctx; >+ struct pipe_state *fifo_pipe_state = &pgraph_ctx->pipe_state; >+ int i; >+#define 
PIPE_SAVE(addr) \ >+ do { \ >+ NV_WRITE(NV10_PGRAPH_PIPE_ADDRESS, addr); \ >+ for (i=0; i < sizeof(fifo_pipe_state->pipe_##addr)/sizeof(fifo_pipe_state->pipe_##addr[0]); i++) \ >+ fifo_pipe_state->pipe_##addr[i] = NV_READ(NV10_PGRAPH_PIPE_DATA); \ >+ } while (0) >+ >+ PIPE_SAVE(0x4400); >+ PIPE_SAVE(0x0200); >+ PIPE_SAVE(0x6400); >+ PIPE_SAVE(0x6800); >+ PIPE_SAVE(0x6c00); >+ PIPE_SAVE(0x7000); >+ PIPE_SAVE(0x7400); >+ PIPE_SAVE(0x7800); >+ PIPE_SAVE(0x0040); >+ PIPE_SAVE(0x0000); >+ >+#undef PIPE_SAVE >+} >+ >+static void nv10_graph_load_pipe(struct nouveau_channel *chan) { >+ struct drm_device *dev = chan->dev; >+ struct drm_nouveau_private *dev_priv = dev->dev_private; >+ struct graph_state* pgraph_ctx = chan->pgraph_ctx; >+ struct pipe_state *fifo_pipe_state = &pgraph_ctx->pipe_state; >+ int i; >+ uint32_t xfmode0, xfmode1; >+#define PIPE_RESTORE(addr) \ >+ do { \ >+ NV_WRITE(NV10_PGRAPH_PIPE_ADDRESS, addr); \ >+ for (i=0; i < sizeof(fifo_pipe_state->pipe_##addr)/sizeof(fifo_pipe_state->pipe_##addr[0]); i++) \ >+ NV_WRITE(NV10_PGRAPH_PIPE_DATA, fifo_pipe_state->pipe_##addr[i]); \ >+ } while (0) >+ >+ >+ nouveau_wait_for_idle(dev); >+ /* XXX check haiku comments */ >+ xfmode0 = NV_READ(NV10_PGRAPH_XFMODE0); >+ xfmode1 = NV_READ(NV10_PGRAPH_XFMODE1); >+ NV_WRITE(NV10_PGRAPH_XFMODE0, 0x10000000); >+ NV_WRITE(NV10_PGRAPH_XFMODE1, 0x00000000); >+ NV_WRITE(NV10_PGRAPH_PIPE_ADDRESS, 0x000064c0); >+ for (i = 0; i < 4; i++) >+ NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x3f800000); >+ for (i = 0; i < 4; i++) >+ NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >+ >+ NV_WRITE(NV10_PGRAPH_PIPE_ADDRESS, 0x00006ab0); >+ for (i = 0; i < 3; i++) >+ NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x3f800000); >+ >+ NV_WRITE(NV10_PGRAPH_PIPE_ADDRESS, 0x00006a80); >+ for (i = 0; i < 3; i++) >+ NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000000); >+ >+ NV_WRITE(NV10_PGRAPH_PIPE_ADDRESS, 0x00000040); >+ NV_WRITE(NV10_PGRAPH_PIPE_DATA, 0x00000008); >+ >+ >+ PIPE_RESTORE(0x0200); >+ nouveau_wait_for_idle(dev); >+ >+ /* 
restore XFMODE */ >+ NV_WRITE(NV10_PGRAPH_XFMODE0, xfmode0); >+ NV_WRITE(NV10_PGRAPH_XFMODE1, xfmode1); >+ PIPE_RESTORE(0x6400); >+ PIPE_RESTORE(0x6800); >+ PIPE_RESTORE(0x6c00); >+ PIPE_RESTORE(0x7000); >+ PIPE_RESTORE(0x7400); >+ PIPE_RESTORE(0x7800); >+ PIPE_RESTORE(0x4400); >+ PIPE_RESTORE(0x0000); >+ PIPE_RESTORE(0x0040); >+ nouveau_wait_for_idle(dev); >+ >+#undef PIPE_RESTORE >+} >+ >+static void nv10_graph_create_pipe(struct nouveau_channel *chan) { >+ struct graph_state* pgraph_ctx = chan->pgraph_ctx; >+ struct pipe_state *fifo_pipe_state = &pgraph_ctx->pipe_state; >+ uint32_t *fifo_pipe_state_addr; >+ int i; >+#define PIPE_INIT(addr) \ >+ do { \ >+ fifo_pipe_state_addr = fifo_pipe_state->pipe_##addr; \ >+ } while (0) >+#define PIPE_INIT_END(addr) \ >+ do { \ >+ if (fifo_pipe_state_addr != \ >+ sizeof(fifo_pipe_state->pipe_##addr)/sizeof(fifo_pipe_state->pipe_##addr[0]) + fifo_pipe_state->pipe_##addr) \ >+ DRM_ERROR("incomplete pipe init for 0x%x : %p/%p\n", addr, fifo_pipe_state_addr, \ >+ sizeof(fifo_pipe_state->pipe_##addr)/sizeof(fifo_pipe_state->pipe_##addr[0]) + fifo_pipe_state->pipe_##addr); \ >+ } while (0) >+#define NV_WRITE_PIPE_INIT(value) *(fifo_pipe_state_addr++) = value >+ >+ PIPE_INIT(0x0200); >+ for (i = 0; i < 48; i++) >+ NV_WRITE_PIPE_INIT(0x00000000); >+ PIPE_INIT_END(0x0200); >+ >+ PIPE_INIT(0x6400); >+ for (i = 0; i < 211; i++) >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x3f800000); >+ NV_WRITE_PIPE_INIT(0x40000000); >+ NV_WRITE_PIPE_INIT(0x40000000); >+ NV_WRITE_PIPE_INIT(0x40000000); >+ NV_WRITE_PIPE_INIT(0x40000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x3f800000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x3f000000); >+ NV_WRITE_PIPE_INIT(0x3f000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x3f800000); >+ 
NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x3f800000); >+ NV_WRITE_PIPE_INIT(0x3f800000); >+ NV_WRITE_PIPE_INIT(0x3f800000); >+ NV_WRITE_PIPE_INIT(0x3f800000); >+ PIPE_INIT_END(0x6400); >+ >+ PIPE_INIT(0x6800); >+ for (i = 0; i < 162; i++) >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x3f800000); >+ for (i = 0; i < 25; i++) >+ NV_WRITE_PIPE_INIT(0x00000000); >+ PIPE_INIT_END(0x6800); >+ >+ PIPE_INIT(0x6c00); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0xbf800000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ PIPE_INIT_END(0x6c00); >+ >+ PIPE_INIT(0x7000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x7149f2ca); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x7149f2ca); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x7149f2ca); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x7149f2ca); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ 
NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x7149f2ca); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x7149f2ca); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x7149f2ca); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x00000000); >+ NV_WRITE_PIPE_INIT(0x7149f2ca); >+ for (i = 0; i < 35; i++) >+ NV_WRITE_PIPE_INIT(0x00000000); >+ PIPE_INIT_END(0x7000); >+ >+ PIPE_INIT(0x7400); >+ for (i = 0; i < 48; i++) >+ NV_WRITE_PIPE_INIT(0x00000000); >+ PIPE_INIT_END(0x7400); >+ >+ PIPE_INIT(0x7800); >+ for (i = 0; i < 48; i++) >+ NV_WRITE_PIPE_INIT(0x00000000); >+ PIPE_INIT_END(0x7800); >+ >+ PIPE_INIT(0x4400); >+ for (i = 0; i < 32; i++) >+ NV_WRITE_PIPE_INIT(0x00000000); >+ PIPE_INIT_END(0x4400); >+ >+ PIPE_INIT(0x0000); >+ for (i = 0; i < 16; i++) >+ NV_WRITE_PIPE_INIT(0x00000000); >+ PIPE_INIT_END(0x0000); >+ >+ PIPE_INIT(0x0040); >+ for (i = 0; i < 4; i++) >+ NV_WRITE_PIPE_INIT(0x00000000); >+ PIPE_INIT_END(0x0040); >+ >+#undef PIPE_INIT >+#undef PIPE_INIT_END >+#undef NV_WRITE_PIPE_INIT >+} >+ > static int nv10_graph_ctx_regs_find_offset(struct drm_device *dev, int reg) > { >- struct drm_nouveau_private *dev_priv = dev->dev_private; >- int i, j; >+ int i; > for (i = 0; i < sizeof(nv10_graph_ctx_regs)/sizeof(nv10_graph_ctx_regs[0]); i++) { > if (nv10_graph_ctx_regs[i] == reg) > return i; > } >- if (dev_priv->chipset>=0x17) { >- for (j = 0; j < sizeof(nv17_graph_ctx_regs)/sizeof(nv17_graph_ctx_regs[0]); i++,j++) { >- if (nv17_graph_ctx_regs[j] == reg) >- return i; >- } >+ DRM_ERROR("unknow offset nv10_ctx_regs %d\n", reg); >+ return -1; >+} >+ >+static int nv17_graph_ctx_regs_find_offset(struct drm_device *dev, int reg) >+{ >+ int i; >+ for (i = 0; i < sizeof(nv17_graph_ctx_regs)/sizeof(nv17_graph_ctx_regs[0]); i++) { >+ if (nv17_graph_ctx_regs[i] == reg) 
>+ return i; > } >+ DRM_ERROR("unknown offset nv17_ctx_regs %d\n", reg); > return -1; > } > >@@ -545,15 +655,17 @@ int nv10_graph_load_context(struct nouve > { > struct drm_device *dev = chan->dev; > struct drm_nouveau_private *dev_priv = dev->dev_private; >- int i, j; >+ struct graph_state* pgraph_ctx = chan->pgraph_ctx; >+ int i; > > for (i = 0; i < sizeof(nv10_graph_ctx_regs)/sizeof(nv10_graph_ctx_regs[0]); i++) >- NV_WRITE(nv10_graph_ctx_regs[i], chan->pgraph_ctx[i]); >+ NV_WRITE(nv10_graph_ctx_regs[i], pgraph_ctx->nv10[i]); > if (dev_priv->chipset>=0x17) { >- for (j = 0; j < sizeof(nv17_graph_ctx_regs)/sizeof(nv17_graph_ctx_regs[0]); i++,j++) >- NV_WRITE(nv17_graph_ctx_regs[j], chan->pgraph_ctx[i]); >+ for (i = 0; i < sizeof(nv17_graph_ctx_regs)/sizeof(nv17_graph_ctx_regs[0]); i++) >+ NV_WRITE(nv17_graph_ctx_regs[i], pgraph_ctx->nv17[i]); > } >- NV_WRITE(NV10_PGRAPH_CTX_USER, chan->id << 24); >+ >+ nv10_graph_load_pipe(chan); > > return 0; > } >@@ -562,21 +674,25 @@ int nv10_graph_save_context(struct nouve > { > struct drm_device *dev = chan->dev; > struct drm_nouveau_private *dev_priv = dev->dev_private; >- int i, j; >+ struct graph_state* pgraph_ctx = chan->pgraph_ctx; >+ int i; > > for (i = 0; i < sizeof(nv10_graph_ctx_regs)/sizeof(nv10_graph_ctx_regs[0]); i++) >- chan->pgraph_ctx[i] = NV_READ(nv10_graph_ctx_regs[i]); >+ pgraph_ctx->nv10[i] = NV_READ(nv10_graph_ctx_regs[i]); > if (dev_priv->chipset>=0x17) { >- for (j = 0; j < sizeof(nv17_graph_ctx_regs)/sizeof(nv17_graph_ctx_regs[0]); i++,j++) >- chan->pgraph_ctx[i] = NV_READ(nv17_graph_ctx_regs[j]); >+ for (i = 0; i < sizeof(nv17_graph_ctx_regs)/sizeof(nv17_graph_ctx_regs[0]); i++) >+ pgraph_ctx->nv17[i] = NV_READ(nv17_graph_ctx_regs[i]); > } > >+ nv10_graph_save_pipe(chan); >+ > return 0; > } > > void nouveau_nv10_context_switch(struct drm_device *dev) > { > struct drm_nouveau_private *dev_priv; >+ struct nouveau_engine *engine; > struct nouveau_channel *next, *last; > int chid; > >@@ -593,42 +709,44 @@
void nouveau_nv10_context_switch(struct > DRM_DEBUG("Invalid drm_nouveau_private->fifos\n"); > return; > } >+ engine = &dev_priv->Engine; > >- chid = (NV_READ(NV04_PGRAPH_TRAPPED_ADDR) >> 20)&(nouveau_fifo_number(dev)-1); >+ chid = (NV_READ(NV04_PGRAPH_TRAPPED_ADDR) >> 20) & >+ (engine->fifo.channels - 1); > next = dev_priv->fifos[chid]; > > if (!next) { >- DRM_DEBUG("Invalid next channel\n"); >+ DRM_ERROR("Invalid next channel\n"); > return; > } > >- chid = (NV_READ(NV10_PGRAPH_CTX_USER) >> 24) & (nouveau_fifo_number(dev)-1); >+ chid = (NV_READ(NV10_PGRAPH_CTX_USER) >> 24) & >+ (engine->fifo.channels - 1); > last = dev_priv->fifos[chid]; > > if (!last) { >- DRM_DEBUG("WARNING: Invalid last channel, switch to %x\n", >+ DRM_INFO("WARNING: Invalid last channel, switch to %x\n", > next->id); > } else { >- DRM_INFO("NV: PGRAPH context switch interrupt channel %x -> %x\n", >+ DRM_DEBUG("NV: PGRAPH context switch interrupt channel %x -> %x\n", > last->id, next->id); > } > > NV_WRITE(NV04_PGRAPH_FIFO,0x0); > if (last) { >+ nouveau_wait_for_idle(dev); > nv10_graph_save_context(last); >- } >+ } > > nouveau_wait_for_idle(dev); > > NV_WRITE(NV10_PGRAPH_CTX_CONTROL, 0x10000000); >- NV_WRITE(NV10_PGRAPH_CTX_USER, (NV_READ(NV10_PGRAPH_CTX_USER) & 0xffffff) | (0x1f << 24)); > > nouveau_wait_for_idle(dev); > > nv10_graph_load_context(next); > > NV_WRITE(NV10_PGRAPH_CTX_CONTROL, 0x10010100); >- //NV_WRITE(NV10_PGRAPH_CTX_USER, next->id << 24); > NV_WRITE(NV10_PGRAPH_FFINTFC_ST2, NV_READ(NV10_PGRAPH_FFINTFC_ST2)&0xCFFFFFFF); > NV_WRITE(NV04_PGRAPH_FIFO,0x1); > } >@@ -636,16 +754,27 @@ void nouveau_nv10_context_switch(struct > #define NV_WRITE_CTX(reg, val) do { \ > int offset = nv10_graph_ctx_regs_find_offset(dev, reg); \ > if (offset > 0) \ >- chan->pgraph_ctx[offset] = val; \ >+ pgraph_ctx->nv10[offset] = val; \ >+ } while (0) >+ >+#define NV17_WRITE_CTX(reg, val) do { \ >+ int offset = nv17_graph_ctx_regs_find_offset(dev, reg); \ >+ if (offset > 0) \ >+ pgraph_ctx->nv17[offset] = 
val; \ > } while (0) > > int nv10_graph_create_context(struct nouveau_channel *chan) { > struct drm_device *dev = chan->dev; > struct drm_nouveau_private *dev_priv = dev->dev_private; >+ struct graph_state* pgraph_ctx; > > DRM_DEBUG("nv10_graph_context_create %d\n", chan->id); > >- memset(chan->pgraph_ctx, 0, sizeof(chan->pgraph_ctx)); >+ chan->pgraph_ctx = pgraph_ctx = drm_calloc(1, sizeof(*pgraph_ctx), >+ DRM_MEM_DRIVER); >+ >+ if (pgraph_ctx == NULL) >+ return -ENOMEM; > > /* mmio trace suggest that should be done in ddx with methods/objects */ > #if 0 >@@ -685,22 +814,16 @@ int nv10_graph_create_context(struct nou > NV_WRITE_CTX(0x00400e34, 0x00080008); > if (dev_priv->chipset>=0x17) { > /* is it really needed ??? */ >- NV_WRITE_CTX(NV10_PGRAPH_DEBUG_4, NV_READ(NV10_PGRAPH_DEBUG_4)); >- NV_WRITE_CTX(0x004006b0, NV_READ(0x004006b0)); >- NV_WRITE_CTX(0x00400eac, 0x0fff0000); >- NV_WRITE_CTX(0x00400eb0, 0x0fff0000); >- NV_WRITE_CTX(0x00400ec0, 0x00000080); >- NV_WRITE_CTX(0x00400ed0, 0x00000080); >+ NV17_WRITE_CTX(NV10_PGRAPH_DEBUG_4, NV_READ(NV10_PGRAPH_DEBUG_4)); >+ NV17_WRITE_CTX(0x004006b0, NV_READ(0x004006b0)); >+ NV17_WRITE_CTX(0x00400eac, 0x0fff0000); >+ NV17_WRITE_CTX(0x00400eb0, 0x0fff0000); >+ NV17_WRITE_CTX(0x00400ec0, 0x00000080); >+ NV17_WRITE_CTX(0x00400ed0, 0x00000080); > } >+ NV_WRITE_CTX(NV10_PGRAPH_CTX_USER, chan->id << 24); > >- /* for the first channel init the regs */ >- if (dev_priv->fifo_alloc_count == 0) >- nv10_graph_load_context(chan); >- >- >- //XXX should be saved/restored for each fifo >- //we supposed here we have X fifo and only one 3D fifo. 
>- nv10_praph_pipe(dev); >+ nv10_graph_create_pipe(chan); > return 0; > } > >@@ -708,9 +831,18 @@ void nv10_graph_destroy_context(struct n > { > struct drm_device *dev = chan->dev; > struct drm_nouveau_private *dev_priv = dev->dev_private; >+ struct nouveau_engine *engine = &dev_priv->Engine; >+ struct graph_state* pgraph_ctx = chan->pgraph_ctx; > int chid; >- chid = (NV_READ(NV10_PGRAPH_CTX_USER) >> 24) & (nouveau_fifo_number(dev)-1); > >+ drm_free(pgraph_ctx, sizeof(*pgraph_ctx), DRM_MEM_DRIVER); >+ chan->pgraph_ctx = NULL; >+ >+ chid = (NV_READ(NV10_PGRAPH_CTX_USER) >> 24) & (engine->fifo.channels - 1); >+ >+ /* This code seems to corrupt the 3D pipe, but the blob seems to do similar things ???? >+ */ >+#if 0 > /* does this avoid a potential context switch while we are written graph > * reg, or we should mask graph interrupt ??? > */ >@@ -719,10 +851,16 @@ void nv10_graph_destroy_context(struct n > DRM_INFO("cleanning a channel with graph in current context\n"); > nouveau_wait_for_idle(dev); > DRM_INFO("reseting current graph context\n"); >- nv10_graph_create_context(chan); >+ /* can't be called here because of dynamic mem alloc */ >+ //nv10_graph_create_context(chan); > nv10_graph_load_context(chan); > } >- NV_WRITE(NV04_PGRAPH_FIFO,0x1); >+ NV_WRITE(NV04_PGRAPH_FIFO, 0x1); >+#else >+ if (chid == chan->id) { >+ DRM_INFO("cleaning a channel with graph in current context\n"); >+ } >+#endif > } > > int nv10_graph_init(struct drm_device *dev) { >@@ -774,4 +912,3 @@ int nv10_graph_init(struct drm_device *d > void nv10_graph_takedown(struct drm_device *dev) > { > } >- >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/nv20_graph.c linux-2.6.23.i686/drivers/char/drm/nv20_graph.c >--- linux-2.6.23.i686.orig/drivers/char/drm/nv20_graph.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/nv20_graph.c 2008-01-06 09:24:57.000000000 +0100 >@@ -1,147 +1,685 @@ >-/* >- * Copyright 2007 Matthieu CASTET <castet.matthieu@free.fr> >- * All Rights Reserved.
>- * >- * Permission is hereby granted, free of charge, to any person obtaining a >- * copy of this software and associated documentation files (the "Software"), >- * to deal in the Software without restriction, including without limitation >- * the rights to use, copy, modify, merge, publish, distribute, sublicense, >- * and/or sell copies of the Software, and to permit persons to whom the >- * Software is furnished to do so, subject to the following conditions: >- * >- * The above copyright notice and this permission notice (including the next >- * paragraph) shall be included in all copies or substantial portions of the >- * Software. >- * >- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR >- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, >- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL >- * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR >- * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, >- * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER >- * DEALINGS IN THE SOFTWARE. 
>- */ >- > #include "drmP.h" > #include "drm.h" > #include "nouveau_drv.h" > #include "nouveau_drm.h" > >-#define NV20_GRCTX_SIZE (3529*4) >+/* >+ * NV20 >+ * ----- >+ * There are 3 families : >+ * NV20 is 0x10de:0x020* >+ * NV25/28 is 0x10de:0x025* / 0x10de:0x028* >+ * NV2A is 0x10de:0x02A0 >+ * >+ * NV30 >+ * ----- >+ * There are 3 families : >+ * NV30/31 is 0x10de:0x030* / 0x10de:0x031* >+ * NV34 is 0x10de:0x032* >+ * NV35/36 is 0x10de:0x033* / 0x10de:0x034* >+ * >+ * Not seen in the wild, no dumps (probably NV35) : >+ * NV37 is 0x10de:0x00fc, 0x10de:0x00fd >+ * NV38 is 0x10de:0x0333, 0x10de:0x00fe >+ * >+ */ > >-int nv20_graph_create_context(struct nouveau_channel *chan) { >- struct drm_device *dev = chan->dev; >+#define NV20_GRCTX_SIZE (3580*4) >+#define NV25_GRCTX_SIZE (3529*4) >+#define NV2A_GRCTX_SIZE (3500*4) >+ >+#define NV30_31_GRCTX_SIZE (24392) >+#define NV34_GRCTX_SIZE (18140) >+#define NV35_36_GRCTX_SIZE (22396) >+ >+static void nv20_graph_context_init(struct drm_device *dev, >+ struct nouveau_gpuobj *ctx) >+{ > struct drm_nouveau_private *dev_priv = dev->dev_private; >- unsigned int ctx_size = NV20_GRCTX_SIZE; >- int ret; >+ int i; >+/* >+write32 #1 block at +0x00740adc NV_PRAMIN+0x40adc of 3369 (0xd29) elements: >++0x00740adc: ffff0000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 >++0x00740afc: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 >++0x00740b1c: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 >++0x00740b3c: 00000000 0fff0000 0fff0000 00000000 00000000 00000000 00000000 00000000 >++0x00740b5c: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 >++0x00740b7c: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 >++0x00740b9c: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 >++0x00740bbc: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 >++0x00740bdc: 00000000 00000000 00000000 00000000 
00000000 00000000 00000000 00000000 >++0x00740bfc: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 >+ >++0x00740c1c: 00000101 00000000 00000000 00000000 00000000 00000111 00000000 00000000 >++0x00740c3c: 00000000 00000000 00000000 44400000 00000000 00000000 00000000 00000000 >++0x00740c5c: 00000000 00000000 00000000 00000000 00000000 00000000 00030303 00030303 >++0x00740c7c: 00030303 00030303 00000000 00000000 00000000 00000000 00080000 00080000 >++0x00740c9c: 00080000 00080000 00000000 00000000 01012000 01012000 01012000 01012000 >++0x00740cbc: 000105b8 000105b8 000105b8 000105b8 00080008 00080008 00080008 00080008 >++0x00740cdc: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 >++0x00740cfc: 07ff0000 07ff0000 07ff0000 07ff0000 07ff0000 07ff0000 07ff0000 07ff0000 >++0x00740d1c: 07ff0000 07ff0000 07ff0000 07ff0000 07ff0000 07ff0000 07ff0000 07ff0000 >++0x00740d3c: 00000000 00000000 4b7fffff 00000000 00000000 00000000 00000000 00000000 >+ >++0x00740d5c: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 >++0x00740d7c: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 >++0x00740d9c: 00000001 00000000 00004000 00000000 00000000 00000001 00000000 00040000 >++0x00740dbc: 00010000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 >++0x00740ddc: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 >+... 
>+*/ >+ INSTANCE_WR(ctx, (0x33c/4)+0, 0xffff0000); >+ INSTANCE_WR(ctx, (0x33c/4)+25, 0x0fff0000); >+ INSTANCE_WR(ctx, (0x33c/4)+26, 0x0fff0000); >+ INSTANCE_WR(ctx, (0x33c/4)+80, 0x00000101); >+ INSTANCE_WR(ctx, (0x33c/4)+85, 0x00000111); >+ INSTANCE_WR(ctx, (0x33c/4)+91, 0x44400000); >+ for (i = 0; i < 4; ++i) >+ INSTANCE_WR(ctx, (0x33c/4)+102+i, 0x00030303); >+ for (i = 0; i < 4; ++i) >+ INSTANCE_WR(ctx, (0x33c/4)+110+i, 0x00080000); >+ for (i = 0; i < 4; ++i) >+ INSTANCE_WR(ctx, (0x33c/4)+116+i, 0x01012000); >+ for (i = 0; i < 4; ++i) >+ INSTANCE_WR(ctx, (0x33c/4)+120+i, 0x000105b8); >+ for (i = 0; i < 4; ++i) >+ INSTANCE_WR(ctx, (0x33c/4)+124+i, 0x00080008); >+ for (i = 0; i < 16; ++i) >+ INSTANCE_WR(ctx, (0x33c/4)+136+i, 0x07ff0000); >+ INSTANCE_WR(ctx, (0x33c/4)+154, 0x4b7fffff); >+ INSTANCE_WR(ctx, (0x33c/4)+176, 0x00000001); >+ INSTANCE_WR(ctx, (0x33c/4)+178, 0x00004000); >+ INSTANCE_WR(ctx, (0x33c/4)+181, 0x00000001); >+ INSTANCE_WR(ctx, (0x33c/4)+183, 0x00040000); >+ INSTANCE_WR(ctx, (0x33c/4)+184, 0x00010000); >+ >+/* >+... >++0x0074239c: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 >++0x007423bc: 10700ff9 0436086c 000c001b 00000000 10700ff9 0436086c 000c001b 00000000 >++0x007423dc: 10700ff9 0436086c 000c001b 00000000 10700ff9 0436086c 000c001b 00000000 >++0x007423fc: 10700ff9 0436086c 000c001b 00000000 10700ff9 0436086c 000c001b 00000000 >+... >++0x00742bdc: 10700ff9 0436086c 000c001b 00000000 10700ff9 0436086c 000c001b 00000000 >++0x00742bfc: 10700ff9 0436086c 000c001b 00000000 10700ff9 0436086c 000c001b 00000000 >++0x00742c1c: 10700ff9 0436086c 000c001b 00000000 10700ff9 0436086c 000c001b 00000000 >++0x00742c3c: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 >+... 
>+*/ >+ for (i = 0; i < 0x880; i += 0x10) { >+ INSTANCE_WR(ctx, ((0x1c1c + i)/4)+0, 0x10700ff9); >+ INSTANCE_WR(ctx, ((0x1c1c + i)/4)+1, 0x0436086c); >+ INSTANCE_WR(ctx, ((0x1c1c + i)/4)+2, 0x000c001b); >+ } > >- if ((ret = nouveau_gpuobj_new_ref(dev, chan, NULL, 0, ctx_size, 16, >- NVOBJ_FLAG_ZERO_ALLOC, >- &chan->ramin_grctx))) >- return ret; >+/* >+write32 #1 block at +0x00742fbc NV_PRAMIN+0x42fbc of 4 (0x4) elements: >++0x00742fbc: 3f800000 00000000 00000000 00000000 >+*/ >+ INSTANCE_WR(ctx, (0x281c/4), 0x3f800000); >+ >+/* >+write32 #1 block at +0x00742ffc NV_PRAMIN+0x42ffc of 12 (0xc) elements: >++0x00742ffc: 40000000 3f800000 3f000000 00000000 40000000 3f800000 00000000 bf800000 >++0x0074301c: 00000000 bf800000 00000000 00000000 >+*/ >+ INSTANCE_WR(ctx, (0x285c/4)+0, 0x40000000); >+ INSTANCE_WR(ctx, (0x285c/4)+1, 0x3f800000); >+ INSTANCE_WR(ctx, (0x285c/4)+2, 0x3f000000); >+ INSTANCE_WR(ctx, (0x285c/4)+4, 0x40000000); >+ INSTANCE_WR(ctx, (0x285c/4)+5, 0x3f800000); >+ INSTANCE_WR(ctx, (0x285c/4)+7, 0xbf800000); >+ INSTANCE_WR(ctx, (0x285c/4)+9, 0xbf800000); >+ >+/* >+write32 #1 block at +0x00742fcc NV_PRAMIN+0x42fcc of 4 (0x4) elements: >++0x00742fcc: 00000000 3f800000 00000000 00000000 >+*/ >+ INSTANCE_WR(ctx, (0x282c/4)+1, 0x3f800000); >+ >+/* >+write32 #1 block at +0x0074302c NV_PRAMIN+0x4302c of 4 (0x4) elements: >++0x0074302c: 00000000 00000000 00000000 00000000 >+write32 #1 block at +0x00743c9c NV_PRAMIN+0x43c9c of 4 (0x4) elements: >++0x00743c9c: 00000000 00000000 00000000 00000000 >+write32 #1 block at +0x00743c3c NV_PRAMIN+0x43c3c of 8 (0x8) elements: >++0x00743c3c: 00000000 00000000 000fe000 00000000 00000000 00000000 00000000 00000000 >+*/ >+ INSTANCE_WR(ctx, (0x349c/4)+2, 0x000fe000); >+ >+/* >+write32 #1 block at +0x00743c6c NV_PRAMIN+0x43c6c of 4 (0x4) elements: >++0x00743c6c: 00000000 00000000 00000000 00000000 >+write32 #1 block at +0x00743ccc NV_PRAMIN+0x43ccc of 4 (0x4) elements: >++0x00743ccc: 00000000 000003f8 00000000 00000000 >+*/ >+ 
INSTANCE_WR(ctx, (0x352c/4)+1, 0x000003f8); >+ >+/* write32 #1 NV_PRAMIN+0x43ce0 <- 0x002fe000 */ >+ INSTANCE_WR(ctx, 0x3540/4, 0x002fe000); >+ >+/* >+write32 #1 block at +0x00743cfc NV_PRAMIN+0x43cfc of 8 (0x8) elements: >++0x00743cfc: 001c527c 001c527c 001c527c 001c527c 001c527c 001c527c 001c527c 001c527c >+*/ >+ for (i = 0; i < 8; ++i) >+ INSTANCE_WR(ctx, (0x355c/4)+i, 0x001c527c); >+} > >- /* Initialise default context values */ >- INSTANCE_WR(chan->ramin_grctx->gpuobj, 10, chan->id<<24); /* CTX_USER */ >+static void nv2a_graph_context_init(struct drm_device *dev, >+ struct nouveau_gpuobj *ctx) >+{ >+ struct drm_nouveau_private *dev_priv = dev->dev_private; >+ int i; > >- INSTANCE_WR(dev_priv->ctx_table->gpuobj, chan->id, >- chan->ramin_grctx->instance >> 4); >- return 0; >+ INSTANCE_WR(ctx, 0x33c/4, 0xffff0000); >+ for(i = 0x3a0; i< 0x3a8; i += 4) >+ INSTANCE_WR(ctx, i/4, 0x0fff0000); >+ INSTANCE_WR(ctx, 0x47c/4, 0x00000101); >+ INSTANCE_WR(ctx, 0x490/4, 0x00000111); >+ INSTANCE_WR(ctx, 0x4a8/4, 0x44400000); >+ for(i = 0x4d4; i< 0x4e4; i += 4) >+ INSTANCE_WR(ctx, i/4, 0x00030303); >+ for(i = 0x4f4; i< 0x504; i += 4) >+ INSTANCE_WR(ctx, i/4, 0x00080000); >+ for(i = 0x50c; i< 0x51c; i += 4) >+ INSTANCE_WR(ctx, i/4, 0x01012000); >+ for(i = 0x51c; i< 0x52c; i += 4) >+ INSTANCE_WR(ctx, i/4, 0x000105b8); >+ for(i = 0x52c; i< 0x53c; i += 4) >+ INSTANCE_WR(ctx, i/4, 0x00080008); >+ for(i = 0x55c; i< 0x59c; i += 4) >+ INSTANCE_WR(ctx, i/4, 0x07ff0000); >+ INSTANCE_WR(ctx, 0x5a4/4, 0x4b7fffff); >+ INSTANCE_WR(ctx, 0x5fc/4, 0x00000001); >+ INSTANCE_WR(ctx, 0x604/4, 0x00004000); >+ INSTANCE_WR(ctx, 0x610/4, 0x00000001); >+ INSTANCE_WR(ctx, 0x618/4, 0x00040000); >+ INSTANCE_WR(ctx, 0x61c/4, 0x00010000); >+ >+ for (i=0x1a9c; i <= 0x22fc/4; i += 32) { >+ INSTANCE_WR(ctx, i/4 , 0x10700ff9); >+ INSTANCE_WR(ctx, i/4 + 1, 0x0436086c); >+ INSTANCE_WR(ctx, i/4 + 2, 0x000c001b); >+ } >+ >+ INSTANCE_WR(ctx, 0x269c/4, 0x3f800000); >+ INSTANCE_WR(ctx, 0x26b0/4, 0x3f800000); >+ 
INSTANCE_WR(ctx, 0x26dc/4, 0x40000000); >+ INSTANCE_WR(ctx, 0x26e0/4, 0x3f800000); >+ INSTANCE_WR(ctx, 0x26e4/4, 0x3f000000); >+ INSTANCE_WR(ctx, 0x26ec/4, 0x40000000); >+ INSTANCE_WR(ctx, 0x26f0/4, 0x3f800000); >+ INSTANCE_WR(ctx, 0x26f8/4, 0xbf800000); >+ INSTANCE_WR(ctx, 0x2700/4, 0xbf800000); >+ INSTANCE_WR(ctx, 0x3024/4, 0x000fe000); >+ INSTANCE_WR(ctx, 0x30a0/4, 0x000003f8); >+ INSTANCE_WR(ctx, 0x33fc/4, 0x002fe000); >+ for(i = 0x341c; i< 0x343c; i += 4) >+ INSTANCE_WR(ctx, i/4, 0x001c527c); > } > >-void nv20_graph_destroy_context(struct nouveau_channel *chan) { >- struct drm_device *dev = chan->dev; >+static void nv25_graph_context_init(struct drm_device *dev, >+ struct nouveau_gpuobj *ctx) >+{ > struct drm_nouveau_private *dev_priv = dev->dev_private; >+ int i; >+/* >+write32 #1 block at +0x00740a7c NV_PRAMIN.GRCTX0+0x35c of 173 (0xad) elements: >++0x00740a7c: ffff0000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 >++0x00740a9c: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 >++0x00740abc: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 >++0x00740adc: 00000000 0fff0000 0fff0000 00000000 00000000 00000000 00000000 00000000 >++0x00740afc: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 >++0x00740b1c: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 >++0x00740b3c: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 >++0x00740b5c: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 >+ >++0x00740b7c: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 >++0x00740b9c: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 >++0x00740bbc: 00000101 00000000 00000000 00000000 00000000 00000111 00000000 00000000 >++0x00740bdc: 00000000 00000000 00000000 00000080 ffff0000 00000001 00000000 00000000 >++0x00740bfc: 00000000 00000000 44400000 00000000 00000000 00000000 00000000 00000000 
>++0x00740c1c: 4b800000 00000000 00000000 00000000 00000000 00030303 00030303 00030303 >++0x00740c3c: 00030303 00000000 00000000 00000000 00000000 00080000 00080000 00080000 >++0x00740c5c: 00080000 00000000 00000000 01012000 01012000 01012000 01012000 000105b8 >+ >++0x00740c7c: 000105b8 000105b8 000105b8 00080008 00080008 00080008 00080008 00000000 >++0x00740c9c: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 07ff0000 >++0x00740cbc: 07ff0000 07ff0000 07ff0000 07ff0000 07ff0000 07ff0000 07ff0000 07ff0000 >++0x00740cdc: 07ff0000 07ff0000 07ff0000 07ff0000 07ff0000 07ff0000 07ff0000 00000000 >++0x00740cfc: 00000000 4b7fffff 00000000 00000000 00000000 00000000 00000000 00000000 >++0x00740d1c: 00000000 00000000 00000000 00000000 00000000 >+*/ >+ INSTANCE_WR(ctx, (0x35c/4)+0, 0xffff0000); >+ INSTANCE_WR(ctx, (0x35c/4)+25, 0x0fff0000); >+ INSTANCE_WR(ctx, (0x35c/4)+26, 0x0fff0000); >+ INSTANCE_WR(ctx, (0x35c/4)+80, 0x00000101); >+ INSTANCE_WR(ctx, (0x35c/4)+85, 0x00000111); >+ INSTANCE_WR(ctx, (0x35c/4)+91, 0x00000080); >+ INSTANCE_WR(ctx, (0x35c/4)+92, 0xffff0000); >+ INSTANCE_WR(ctx, (0x35c/4)+93, 0x00000001); >+ INSTANCE_WR(ctx, (0x35c/4)+98, 0x44400000); >+ INSTANCE_WR(ctx, (0x35c/4)+104, 0x4b800000); >+ INSTANCE_WR(ctx, (0x35c/4)+109, 0x00030303); >+ INSTANCE_WR(ctx, (0x35c/4)+110, 0x00030303); >+ INSTANCE_WR(ctx, (0x35c/4)+111, 0x00030303); >+ INSTANCE_WR(ctx, (0x35c/4)+112, 0x00030303); >+ INSTANCE_WR(ctx, (0x35c/4)+117, 0x00080000); >+ INSTANCE_WR(ctx, (0x35c/4)+118, 0x00080000); >+ INSTANCE_WR(ctx, (0x35c/4)+119, 0x00080000); >+ INSTANCE_WR(ctx, (0x35c/4)+120, 0x00080000); >+ INSTANCE_WR(ctx, (0x35c/4)+123, 0x01012000); >+ INSTANCE_WR(ctx, (0x35c/4)+124, 0x01012000); >+ INSTANCE_WR(ctx, (0x35c/4)+125, 0x01012000); >+ INSTANCE_WR(ctx, (0x35c/4)+126, 0x01012000); >+ INSTANCE_WR(ctx, (0x35c/4)+127, 0x000105b8); >+ INSTANCE_WR(ctx, (0x35c/4)+128, 0x000105b8); >+ INSTANCE_WR(ctx, (0x35c/4)+129, 0x000105b8); >+ INSTANCE_WR(ctx, (0x35c/4)+130, 
0x000105b8); >+ INSTANCE_WR(ctx, (0x35c/4)+131, 0x00080008); >+ INSTANCE_WR(ctx, (0x35c/4)+132, 0x00080008); >+ INSTANCE_WR(ctx, (0x35c/4)+133, 0x00080008); >+ INSTANCE_WR(ctx, (0x35c/4)+134, 0x00080008); >+ for (i=0; i<16; ++i) >+ INSTANCE_WR(ctx, (0x35c/4)+143+i, 0x07ff0000); >+ INSTANCE_WR(ctx, (0x35c/4)+161, 0x4b7fffff); >+ >+/* >+write32 #1 block at +0x00740d34 NV_PRAMIN.GRCTX0+0x614 of 3136 (0xc40) elements: >++0x00740d34: 00000000 00000000 00000000 00000080 30201000 70605040 b0a09080 f0e0d0c0 >++0x00740d54: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 >++0x00740d74: 00000000 00000000 00000000 00000000 00000001 00000000 00004000 00000000 >++0x00740d94: 00000000 00000001 00000000 00040000 00010000 00000000 00000000 00000000 >++0x00740db4: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 >+... >++0x00742214: 00000000 00000000 00000000 00000000 10700ff9 0436086c 000c001b 00000000 >++0x00742234: 10700ff9 0436086c 000c001b 00000000 10700ff9 0436086c 000c001b 00000000 >++0x00742254: 10700ff9 0436086c 000c001b 00000000 10700ff9 0436086c 000c001b 00000000 >++0x00742274: 10700ff9 0436086c 000c001b 00000000 10700ff9 0436086c 000c001b 00000000 >+... 
>++0x00742a34: 10700ff9 0436086c 000c001b 00000000 10700ff9 0436086c 000c001b 00000000 >++0x00742a54: 10700ff9 0436086c 000c001b 00000000 10700ff9 0436086c 000c001b 00000000 >++0x00742a74: 10700ff9 0436086c 000c001b 00000000 10700ff9 0436086c 000c001b 00000000 >++0x00742a94: 10700ff9 0436086c 000c001b 00000000 00000000 00000000 00000000 00000000 >++0x00742ab4: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 >++0x00742ad4: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 >+*/ >+ INSTANCE_WR(ctx, (0x614/4)+3, 0x00000080); >+ INSTANCE_WR(ctx, (0x614/4)+4, 0x30201000); >+ INSTANCE_WR(ctx, (0x614/4)+5, 0x70605040); >+ INSTANCE_WR(ctx, (0x614/4)+6, 0xb0a09080); >+ INSTANCE_WR(ctx, (0x614/4)+7, 0xf0e0d0c0); >+ INSTANCE_WR(ctx, (0x614/4)+20, 0x00000001); >+ INSTANCE_WR(ctx, (0x614/4)+22, 0x00004000); >+ INSTANCE_WR(ctx, (0x614/4)+25, 0x00000001); >+ INSTANCE_WR(ctx, (0x614/4)+27, 0x00040000); >+ INSTANCE_WR(ctx, (0x614/4)+28, 0x00010000); >+ for (i=0; i < 0x880/4; i+=4) { >+ INSTANCE_WR(ctx, (0x1b04/4)+i+0, 0x10700ff9); >+ INSTANCE_WR(ctx, (0x1b04/4)+i+1, 0x0436086c); >+ INSTANCE_WR(ctx, (0x1b04/4)+i+2, 0x000c001b); >+ } > >- nouveau_gpuobj_ref_del(dev, &chan->ramin_grctx); >+/* >+write32 #1 block at +0x00742e24 NV_PRAMIN.GRCTX0+0x2704 of 4 (0x4) elements: >++0x00742e24: 3f800000 00000000 00000000 00000000 >+*/ >+ INSTANCE_WR(ctx, (0x2704/4), 0x3f800000); >+ >+/* >+write32 #1 block at +0x00742e64 NV_PRAMIN.GRCTX0+0x2744 of 12 (0xc) elements: >++0x00742e64: 40000000 3f800000 3f000000 00000000 40000000 3f800000 00000000 bf800000 >++0x00742e84: 00000000 bf800000 00000000 00000000 >+*/ >+ INSTANCE_WR(ctx, (0x2744/4)+0, 0x40000000); >+ INSTANCE_WR(ctx, (0x2744/4)+1, 0x3f800000); >+ INSTANCE_WR(ctx, (0x2744/4)+2, 0x3f000000); >+ INSTANCE_WR(ctx, (0x2744/4)+4, 0x40000000); >+ INSTANCE_WR(ctx, (0x2744/4)+5, 0x3f800000); >+ INSTANCE_WR(ctx, (0x2744/4)+7, 0xbf800000); >+ INSTANCE_WR(ctx, (0x2744/4)+9, 0xbf800000); >+ >+/* >+write32 #1 
block at +0x00742e34 NV_PRAMIN.GRCTX0+0x2714 of 4 (0x4) elements: >++0x00742e34: 00000000 3f800000 00000000 00000000 >+*/ >+ INSTANCE_WR(ctx, (0x2714/4)+1, 0x3f800000); >+ >+/* >+write32 #1 block at +0x00742e94 NV_PRAMIN.GRCTX0+0x2774 of 4 (0x4) elements: >++0x00742e94: 00000000 00000000 00000000 00000000 >+write32 #1 block at +0x00743804 NV_PRAMIN.GRCTX0+0x30e4 of 4 (0x4) elements: >++0x00743804: 00000000 00000000 00000000 00000000 >+write32 #1 block at +0x007437a4 NV_PRAMIN.GRCTX0+0x3084 of 8 (0x8) elements: >++0x007437a4: 00000000 00000000 000fe000 00000000 00000000 00000000 00000000 00000000 >+*/ >+ INSTANCE_WR(ctx, (0x3084/4)+2, 0x000fe000); >+ >+/* >+write32 #1 block at +0x007437d4 NV_PRAMIN.GRCTX0+0x30b4 of 4 (0x4) elements: >++0x007437d4: 00000000 00000000 00000000 00000000 >+write32 #1 block at +0x00743824 NV_PRAMIN.GRCTX0+0x3104 of 4 (0x4) elements: >++0x00743824: 00000000 000003f8 00000000 00000000 >+*/ >+ INSTANCE_WR(ctx, (0x3104/4)+1, 0x000003f8); >+ >+/* write32 #1 NV_PRAMIN.GRCTX0+0x3468 <- 0x002fe000 */ >+ INSTANCE_WR(ctx, 0x3468/4, 0x002fe000); >+ >+/* >+write32 #1 block at +0x00743ba4 NV_PRAMIN.GRCTX0+0x3484 of 8 (0x8) elements: >++0x00743ba4: 001c527c 001c527c 001c527c 001c527c 001c527c 001c527c 001c527c 001c527c >+*/ >+ for (i=0; i<8; ++i) >+ INSTANCE_WR(ctx, (0x3484/4)+i, 0x001c527c); >+} > >- INSTANCE_WR(dev_priv->ctx_table->gpuobj, chan->id, 0); >+static void nv30_31_graph_context_init(struct drm_device *dev, >+ struct nouveau_gpuobj *ctx) >+{ >+ struct drm_nouveau_private *dev_priv = dev->dev_private; >+ int i; >+ >+ INSTANCE_WR(ctx, 0x410/4, 0x00000101); >+ INSTANCE_WR(ctx, 0x424/4, 0x00000111); >+ INSTANCE_WR(ctx, 0x428/4, 0x00000060); >+ INSTANCE_WR(ctx, 0x444/4, 0x00000080); >+ INSTANCE_WR(ctx, 0x448/4, 0xffff0000); >+ INSTANCE_WR(ctx, 0x44c/4, 0x00000001); >+ INSTANCE_WR(ctx, 0x460/4, 0x44400000); >+ INSTANCE_WR(ctx, 0x48c/4, 0xffff0000); >+ for(i = 0x4e0; i< 0x4e8; i += 4) >+ INSTANCE_WR(ctx, i/4, 0x0fff0000); >+ INSTANCE_WR(ctx, 
0x4ec/4, 0x00011100); >+ for(i = 0x508; i< 0x548; i += 4) >+ INSTANCE_WR(ctx, i/4, 0x07ff0000); >+ INSTANCE_WR(ctx, 0x550/4, 0x4b7fffff); >+ INSTANCE_WR(ctx, 0x58c/4, 0x00000080); >+ INSTANCE_WR(ctx, 0x590/4, 0x30201000); >+ INSTANCE_WR(ctx, 0x594/4, 0x70605040); >+ INSTANCE_WR(ctx, 0x598/4, 0xb8a89888); >+ INSTANCE_WR(ctx, 0x59c/4, 0xf8e8d8c8); >+ INSTANCE_WR(ctx, 0x5b0/4, 0xb0000000); >+ for(i = 0x600; i< 0x640; i += 4) >+ INSTANCE_WR(ctx, i/4, 0x00010588); >+ for(i = 0x640; i< 0x680; i += 4) >+ INSTANCE_WR(ctx, i/4, 0x00030303); >+ for(i = 0x6c0; i< 0x700; i += 4) >+ INSTANCE_WR(ctx, i/4, 0x0008aae4); >+ for(i = 0x700; i< 0x740; i += 4) >+ INSTANCE_WR(ctx, i/4, 0x01012000); >+ for(i = 0x740; i< 0x780; i += 4) >+ INSTANCE_WR(ctx, i/4, 0x00080008); >+ INSTANCE_WR(ctx, 0x85c/4, 0x00040000); >+ INSTANCE_WR(ctx, 0x860/4, 0x00010000); >+ for(i = 0x864; i< 0x874; i += 4) >+ INSTANCE_WR(ctx, i/4, 0x00040004); >+ for(i = 0x1f18; i<= 0x3088 ; i+= 16) { >+ INSTANCE_WR(ctx, i/4 + 0, 0x10700ff9); >+ INSTANCE_WR(ctx, i/4 + 1, 0x0436086c); >+ INSTANCE_WR(ctx, i/4 + 2, 0x000c001b); >+ } >+ for(i = 0x30b8; i< 0x30c8; i += 4) >+ INSTANCE_WR(ctx, i/4, 0x0000ffff); >+ INSTANCE_WR(ctx, 0x344c/4, 0x3f800000); >+ INSTANCE_WR(ctx, 0x3808/4, 0x3f800000); >+ INSTANCE_WR(ctx, 0x381c/4, 0x3f800000); >+ INSTANCE_WR(ctx, 0x3848/4, 0x40000000); >+ INSTANCE_WR(ctx, 0x384c/4, 0x3f800000); >+ INSTANCE_WR(ctx, 0x3850/4, 0x3f000000); >+ INSTANCE_WR(ctx, 0x3858/4, 0x40000000); >+ INSTANCE_WR(ctx, 0x385c/4, 0x3f800000); >+ INSTANCE_WR(ctx, 0x3864/4, 0xbf800000); >+ INSTANCE_WR(ctx, 0x386c/4, 0xbf800000); > } > >-static void nv20_graph_rdi(struct drm_device *dev) { >+static void nv34_graph_context_init(struct drm_device *dev, >+ struct nouveau_gpuobj *ctx) >+{ > struct drm_nouveau_private *dev_priv = dev->dev_private; > int i; > >- NV_WRITE(NV10_PGRAPH_RDI_INDEX, 0x2c80000); >- for (i = 0; i < 32; i++) >- NV_WRITE(NV10_PGRAPH_RDI_DATA, 0); >+ INSTANCE_WR(ctx, 0x40c/4, 0x01000101); >+ INSTANCE_WR(ctx, 
0x420/4, 0x00000111); >+ INSTANCE_WR(ctx, 0x424/4, 0x00000060); >+ INSTANCE_WR(ctx, 0x440/4, 0x00000080); >+ INSTANCE_WR(ctx, 0x444/4, 0xffff0000); >+ INSTANCE_WR(ctx, 0x448/4, 0x00000001); >+ INSTANCE_WR(ctx, 0x45c/4, 0x44400000); >+ INSTANCE_WR(ctx, 0x480/4, 0xffff0000); >+ for(i = 0x4d4; i< 0x4dc; i += 4) >+ INSTANCE_WR(ctx, i/4, 0x0fff0000); >+ INSTANCE_WR(ctx, 0x4e0/4, 0x00011100); >+ for(i = 0x4fc; i< 0x53c; i += 4) >+ INSTANCE_WR(ctx, i/4, 0x07ff0000); >+ INSTANCE_WR(ctx, 0x544/4, 0x4b7fffff); >+ INSTANCE_WR(ctx, 0x57c/4, 0x00000080); >+ INSTANCE_WR(ctx, 0x580/4, 0x30201000); >+ INSTANCE_WR(ctx, 0x584/4, 0x70605040); >+ INSTANCE_WR(ctx, 0x588/4, 0xb8a89888); >+ INSTANCE_WR(ctx, 0x58c/4, 0xf8e8d8c8); >+ INSTANCE_WR(ctx, 0x5a0/4, 0xb0000000); >+ for(i = 0x5f0; i< 0x630; i += 4) >+ INSTANCE_WR(ctx, i/4, 0x00010588); >+ for(i = 0x630; i< 0x670; i += 4) >+ INSTANCE_WR(ctx, i/4, 0x00030303); >+ for(i = 0x6b0; i< 0x6f0; i += 4) >+ INSTANCE_WR(ctx, i/4, 0x0008aae4); >+ for(i = 0x6f0; i< 0x730; i += 4) >+ INSTANCE_WR(ctx, i/4, 0x01012000); >+ for(i = 0x730; i< 0x770; i += 4) >+ INSTANCE_WR(ctx, i/4, 0x00080008); >+ INSTANCE_WR(ctx, 0x850/4, 0x00040000); >+ INSTANCE_WR(ctx, 0x854/4, 0x00010000); >+ for(i = 0x858; i< 0x868; i += 4) >+ INSTANCE_WR(ctx, i/4, 0x00040004); >+ for(i = 0x15ac; i<= 0x271c ; i+= 16) { >+ INSTANCE_WR(ctx, i/4 + 0, 0x10700ff9); >+ INSTANCE_WR(ctx, i/4 + 1, 0x0436086c); >+ INSTANCE_WR(ctx, i/4 + 2, 0x000c001b); >+ } >+ for(i = 0x274c; i< 0x275c; i += 4) >+ INSTANCE_WR(ctx, i/4, 0x0000ffff); >+ INSTANCE_WR(ctx, 0x2ae0/4, 0x3f800000); >+ INSTANCE_WR(ctx, 0x2e9c/4, 0x3f800000); >+ INSTANCE_WR(ctx, 0x2eb0/4, 0x3f800000); >+ INSTANCE_WR(ctx, 0x2edc/4, 0x40000000); >+ INSTANCE_WR(ctx, 0x2ee0/4, 0x3f800000); >+ INSTANCE_WR(ctx, 0x2ee4/4, 0x3f000000); >+ INSTANCE_WR(ctx, 0x2eec/4, 0x40000000); >+ INSTANCE_WR(ctx, 0x2ef0/4, 0x3f800000); >+ INSTANCE_WR(ctx, 0x2ef8/4, 0xbf800000); >+ INSTANCE_WR(ctx, 0x2f00/4, 0xbf800000); >+} > >- 
nouveau_wait_for_idle(dev);
>+static void nv35_36_graph_context_init(struct drm_device *dev,
>+				       struct nouveau_gpuobj *ctx)
>+{
>+	struct drm_nouveau_private *dev_priv = dev->dev_private;
>+	int i;
>+
>+	INSTANCE_WR(ctx, 0x40c/4, 0x00000101);
>+	INSTANCE_WR(ctx, 0x420/4, 0x00000111);
>+	INSTANCE_WR(ctx, 0x424/4, 0x00000060);
>+	INSTANCE_WR(ctx, 0x440/4, 0x00000080);
>+	INSTANCE_WR(ctx, 0x444/4, 0xffff0000);
>+	INSTANCE_WR(ctx, 0x448/4, 0x00000001);
>+	INSTANCE_WR(ctx, 0x45c/4, 0x44400000);
>+	INSTANCE_WR(ctx, 0x488/4, 0xffff0000);
>+	for(i = 0x4dc; i< 0x4e4; i += 4)
>+		INSTANCE_WR(ctx, i/4, 0x0fff0000);
>+	INSTANCE_WR(ctx, 0x4e8/4, 0x00011100);
>+	for(i = 0x504; i< 0x544; i += 4)
>+		INSTANCE_WR(ctx, i/4, 0x07ff0000);
>+	INSTANCE_WR(ctx, 0x54c/4, 0x4b7fffff);
>+	INSTANCE_WR(ctx, 0x588/4, 0x00000080);
>+	INSTANCE_WR(ctx, 0x58c/4, 0x30201000);
>+	INSTANCE_WR(ctx, 0x590/4, 0x70605040);
>+	INSTANCE_WR(ctx, 0x594/4, 0xb8a89888);
>+	INSTANCE_WR(ctx, 0x598/4, 0xf8e8d8c8);
>+	INSTANCE_WR(ctx, 0x5ac/4, 0xb0000000);
>+	for(i = 0x604; i< 0x644; i += 4)
>+		INSTANCE_WR(ctx, i/4, 0x00010588);
>+	for(i = 0x644; i< 0x684; i += 4)
>+		INSTANCE_WR(ctx, i/4, 0x00030303);
>+	for(i = 0x6c4; i< 0x704; i += 4)
>+		INSTANCE_WR(ctx, i/4, 0x0008aae4);
>+	for(i = 0x704; i< 0x744; i += 4)
>+		INSTANCE_WR(ctx, i/4, 0x01012000);
>+	for(i = 0x744; i< 0x784; i += 4)
>+		INSTANCE_WR(ctx, i/4, 0x00080008);
>+	INSTANCE_WR(ctx, 0x860/4, 0x00040000);
>+	INSTANCE_WR(ctx, 0x864/4, 0x00010000);
>+	for(i = 0x868; i< 0x878; i += 4)
>+		INSTANCE_WR(ctx, i/4, 0x00040004);
>+	for(i = 0x1f1c; i<= 0x308c ; i+= 16) {
>+		INSTANCE_WR(ctx, i/4 + 0, 0x10700ff9);
>+		INSTANCE_WR(ctx, i/4 + 1, 0x0436086c);
>+		INSTANCE_WR(ctx, i/4 + 2, 0x000c001b);
>+	}
>+	for(i = 0x30bc; i< 0x30cc; i += 4)
>+		INSTANCE_WR(ctx, i/4, 0x0000ffff);
>+	INSTANCE_WR(ctx, 0x3450/4, 0x3f800000);
>+	INSTANCE_WR(ctx, 0x380c/4, 0x3f800000);
>+	INSTANCE_WR(ctx, 0x3820/4, 0x3f800000);
>+	INSTANCE_WR(ctx, 0x384c/4, 0x40000000);
>+	INSTANCE_WR(ctx, 0x3850/4, 0x3f800000);
>+	INSTANCE_WR(ctx, 0x3854/4, 0x3f000000);
>+	INSTANCE_WR(ctx, 0x385c/4, 0x40000000);
>+	INSTANCE_WR(ctx, 0x3860/4, 0x3f800000);
>+	INSTANCE_WR(ctx, 0x3868/4, 0xbf800000);
>+	INSTANCE_WR(ctx, 0x3870/4, 0xbf800000);
> }
> 
>-/* Save current context (from PGRAPH) into the channel's context
>- */
>-int nv20_graph_save_context(struct nouveau_channel *chan) {
>+int nv20_graph_create_context(struct nouveau_channel *chan)
>+{
> 	struct drm_device *dev = chan->dev;
> 	struct drm_nouveau_private *dev_priv = dev->dev_private;
>-	uint32_t instance;
>+	void (*ctx_init)(struct drm_device *, struct nouveau_gpuobj *);
>+	unsigned int ctx_size;
>+	unsigned int idoffs = 0x28/4;
>+	int ret;
> 
>-	instance = INSTANCE_RD(dev_priv->ctx_table->gpuobj, chan->id);
>-	if (!instance) {
>-		return -EINVAL;
>+	switch (dev_priv->chipset) {
>+	case 0x20:
>+		ctx_size = NV20_GRCTX_SIZE;
>+		ctx_init = nv20_graph_context_init;
>+		idoffs = 0;
>+		break;
>+	case 0x25:
>+	case 0x28:
>+		ctx_size = NV25_GRCTX_SIZE;
>+		ctx_init = nv25_graph_context_init;
>+		break;
>+	case 0x2a:
>+		ctx_size = NV2A_GRCTX_SIZE;
>+		ctx_init = nv2a_graph_context_init;
>+		idoffs = 0;
>+		break;
>+	case 0x30:
>+	case 0x31:
>+		ctx_size = NV30_31_GRCTX_SIZE;
>+		ctx_init = nv30_31_graph_context_init;
>+		break;
>+	case 0x34:
>+		ctx_size = NV34_GRCTX_SIZE;
>+		ctx_init = nv34_graph_context_init;
>+		break;
>+	case 0x35:
>+	case 0x36:
>+		ctx_size = NV35_36_GRCTX_SIZE;
>+		ctx_init = nv35_36_graph_context_init;
>+		break;
>+	default:
>+		ctx_size = 0;
>+		ctx_init = nv35_36_graph_context_init;
>+		DRM_ERROR("Please contact the devs if you want your NV%x"
>+			  " card to work\n", dev_priv->chipset);
>+		return -ENOSYS;
>+		break;
> 	}
>-	if (instance != (chan->ramin_grctx->instance >> 4))
>-		DRM_ERROR("nv20_graph_save_context : bad instance\n");
> 
>-	NV_WRITE(NV10_PGRAPH_CHANNEL_CTX_SIZE, instance);
>-	NV_WRITE(NV10_PGRAPH_CHANNEL_CTX_POINTER, 2 /* save ctx */);
>+	if ((ret = nouveau_gpuobj_new_ref(dev, chan, NULL, 0, ctx_size, 16,
>+					  NVOBJ_FLAG_ZERO_ALLOC,
>+					  &chan->ramin_grctx)))
>+		return ret;
>+
>+	/* Initialise default context values */
>+	ctx_init(dev, chan->ramin_grctx->gpuobj);
>+
>+	/* nv20: INSTANCE_WR(chan->ramin_grctx->gpuobj, 10, chan->id<<24); */
>+	INSTANCE_WR(chan->ramin_grctx->gpuobj, idoffs, (chan->id<<24)|0x1);
>+	/* CTX_USER */
>+
>+	INSTANCE_WR(dev_priv->ctx_table->gpuobj, chan->id,
>+		    chan->ramin_grctx->instance >> 4);
>+
> 	return 0;
> }
> 
>+void nv20_graph_destroy_context(struct nouveau_channel *chan)
>+{
>+	struct drm_device *dev = chan->dev;
>+	struct drm_nouveau_private *dev_priv = dev->dev_private;
> 
>-/* Restore the context for a specific channel into PGRAPH
>- */
>-int nv20_graph_load_context(struct nouveau_channel *chan) {
>+	if (chan->ramin_grctx)
>+		nouveau_gpuobj_ref_del(dev, &chan->ramin_grctx);
>+
>+	INSTANCE_WR(dev_priv->ctx_table->gpuobj, chan->id, 0);
>+}
>+
>+int nv20_graph_load_context(struct nouveau_channel *chan)
>+{
> 	struct drm_device *dev = chan->dev;
> 	struct drm_nouveau_private *dev_priv = dev->dev_private;
>-	uint32_t instance;
>+	uint32_t inst;
> 
>-	instance = INSTANCE_RD(dev_priv->ctx_table->gpuobj, chan->id);
>-	if (!instance) {
>+	if (!chan->ramin_grctx)
> 		return -EINVAL;
>-	}
>-	if (instance != (chan->ramin_grctx->instance >> 4))
>-		DRM_ERROR("nv20_graph_load_context_current : bad instance\n");
>+	inst = chan->ramin_grctx->instance >> 4;
> 
>-	NV_WRITE(NV10_PGRAPH_CTX_USER, chan->id << 24);
>-	NV_WRITE(NV10_PGRAPH_CHANNEL_CTX_SIZE, instance);
>-	NV_WRITE(NV10_PGRAPH_CHANNEL_CTX_POINTER, 1 /* restore ctx */);
>+	NV_WRITE(NV20_PGRAPH_CHANNEL_CTX_POINTER, inst);
>+	NV_WRITE(NV20_PGRAPH_CHANNEL_CTX_XFER,
>+		 NV20_PGRAPH_CHANNEL_CTX_XFER_LOAD);
>+
>+	nouveau_wait_for_idle(dev);
> 	return 0;
> }
> 
>-void nouveau_nv20_context_switch(struct drm_device *dev)
>+int nv20_graph_save_context(struct nouveau_channel *chan)
> {
>+	struct drm_device *dev = chan->dev;
> 	struct drm_nouveau_private *dev_priv = dev->dev_private;
>-	struct nouveau_channel *next, *last;
>-	int chid;
>-
>-	chid = NV_READ(NV03_PFIFO_CACHE1_PUSH1)&(nouveau_fifo_number(dev)-1);
>-	next = dev_priv->fifos[chid];
>+	uint32_t inst;
> 
>-	chid = (NV_READ(NV10_PGRAPH_CTX_USER) >> 24) & (nouveau_fifo_number(dev)-1);
>-	last = dev_priv->fifos[chid];
>-
>-	DRM_DEBUG("NV: PGRAPH context switch interrupt channel %x -> %x\n",
>-		  last->id, next->id);
>+	if (!chan->ramin_grctx)
>+		return -EINVAL;
>+	inst = chan->ramin_grctx->instance >> 4;
> 
>-	NV_WRITE(NV04_PGRAPH_FIFO,0x0);
>+	NV_WRITE(NV20_PGRAPH_CHANNEL_CTX_POINTER, inst);
>+	NV_WRITE(NV20_PGRAPH_CHANNEL_CTX_XFER,
>+		 NV20_PGRAPH_CHANNEL_CTX_XFER_SAVE);
> 
>-	nv20_graph_save_context(last);
>-
> 	nouveau_wait_for_idle(dev);
>+	return 0;
>+}
> 
>-	NV_WRITE(NV10_PGRAPH_CTX_CONTROL, 0x10000000);
>-
>-	nv20_graph_load_context(next);
>+static void nv20_graph_rdi(struct drm_device *dev) {
>+	struct drm_nouveau_private *dev_priv = dev->dev_private;
>+	int i, writecount = 32;
>+	uint32_t rdi_index = 0x2c80000;
> 
>-	nouveau_wait_for_idle(dev);
>-
>-	if ((NV_READ(NV10_PGRAPH_CTX_USER) >> 24) != next->id)
>-		DRM_ERROR("nouveau_nv20_context_switch : wrong channel restored %x %x!!!\n", next->id, NV_READ(NV10_PGRAPH_CTX_USER) >> 24);
>+	if (dev_priv->chipset == 0x20) {
>+		rdi_index = 0x3d0000;
>+		writecount = 15;
>+	}
> 
>-	NV_WRITE(NV10_PGRAPH_CTX_CONTROL, 0x10010100);
>-	NV_WRITE(NV10_PGRAPH_FFINTFC_ST2, NV_READ(NV10_PGRAPH_FFINTFC_ST2)&0xCFFFFFFF);
>+	NV_WRITE(NV10_PGRAPH_RDI_INDEX, rdi_index);
>+	for (i = 0; i < writecount; i++)
>+		NV_WRITE(NV10_PGRAPH_RDI_DATA, 0);
> 
>-	NV_WRITE(NV04_PGRAPH_FIFO,0x1);
>+	nouveau_wait_for_idle(dev);
> }
> 
> int nv20_graph_init(struct drm_device *dev) {
>@@ -163,10 +701,9 @@ int nv20_graph_init(struct drm_device *d
> 					  &dev_priv->ctx_table)))
> 		return ret;
> 
>-	NV_WRITE(NV10_PGRAPH_CHANNEL_CTX_TABLE,
>+	NV_WRITE(NV20_PGRAPH_CHANNEL_CTX_TABLE,
> 		 dev_priv->ctx_table->instance >> 4);
> 
>-	//XXX need to be done and save/restore for each fifo ???
> 	nv20_graph_rdi(dev);
> 
> 	NV_WRITE(NV03_PGRAPH_INTR , 0xFFFFFFFF);
>@@ -175,7 +712,7 @@ int nv20_graph_init(struct drm_device *d
> 	NV_WRITE(NV04_PGRAPH_DEBUG_0, 0xFFFFFFFF);
> 	NV_WRITE(NV04_PGRAPH_DEBUG_0, 0x00000000);
> 	NV_WRITE(NV04_PGRAPH_DEBUG_1, 0x00118700);
>-	NV_WRITE(NV04_PGRAPH_DEBUG_3, 0xF20E0431);
>+	NV_WRITE(NV04_PGRAPH_DEBUG_3, 0xF3CE0475); /* 0x4 = auto ctx switch */
> 	NV_WRITE(NV10_PGRAPH_DEBUG_4, 0x00000000);
> 	NV_WRITE(0x40009C , 0x00000040);
> 
>@@ -187,9 +724,9 @@ int nv20_graph_init(struct drm_device *d
> 		NV_WRITE(0x400098, 0x40000080);
> 		NV_WRITE(0x400B88, 0x000000ff);
> 	} else {
>-		NV_WRITE(0x400880, 0x00080000);
>+		NV_WRITE(0x400880, 0x00080000); /* 0x0008c7df */
> 		NV_WRITE(0x400094, 0x00000005);
>-		NV_WRITE(0x400B80, 0x45CAA208);
>+		NV_WRITE(0x400B80, 0x45CAA208); /* 0x45eae20e */
> 		NV_WRITE(0x400B84, 0x24000000);
> 		NV_WRITE(0x400098, 0x00000040);
> 		NV_WRITE(NV10_PGRAPH_RDI_INDEX, 0x00E00038);
>@@ -199,12 +736,28 @@ int nv20_graph_init(struct drm_device *d
> 	}
> 
> 	/* copy tile info from PFB */
>-	for (i=0; i<NV10_PFB_TILE__SIZE; i++) {
>-		NV_WRITE(NV10_PGRAPH_TILE(i), NV_READ(NV10_PFB_TILE(i)));
>-		NV_WRITE(NV10_PGRAPH_TLIMIT(i), NV_READ(NV10_PFB_TLIMIT(i)));
>-		NV_WRITE(NV10_PGRAPH_TSIZE(i), NV_READ(NV10_PFB_TSIZE(i)));
>-		NV_WRITE(NV10_PGRAPH_TSTATUS(i), NV_READ(NV10_PFB_TSTATUS(i)));
>+	for (i = 0; i < NV10_PFB_TILE__SIZE; i++) {
>+		NV_WRITE(0x00400904 + i*0x10, NV_READ(NV10_PFB_TLIMIT(i)));
>+			/* which is NV40_PGRAPH_TLIMIT0(i) ?? */
>+		NV_WRITE(NV10_PGRAPH_RDI_INDEX, 0x00EA0030+i*4);
>+		NV_WRITE(NV10_PGRAPH_RDI_DATA, NV_READ(NV10_PFB_TLIMIT(i)));
>+		NV_WRITE(0x00400908 + i*0x10, NV_READ(NV10_PFB_TSIZE(i)));
>+			/* which is NV40_PGRAPH_TSIZE0(i) ?? */
>+		NV_WRITE(NV10_PGRAPH_RDI_INDEX, 0x00EA0050+i*4);
>+		NV_WRITE(NV10_PGRAPH_RDI_DATA, NV_READ(NV10_PFB_TSIZE(i)));
>+		NV_WRITE(0x00400900 + i*0x10, NV_READ(NV10_PFB_TILE(i)));
>+			/* which is NV40_PGRAPH_TILE0(i) ?? */
>+		NV_WRITE(NV10_PGRAPH_RDI_INDEX, 0x00EA0010+i*4);
>+		NV_WRITE(NV10_PGRAPH_RDI_DATA, NV_READ(NV10_PFB_TILE(i)));
>+	}
>+	for (i = 0; i < 8; i++) {
>+		NV_WRITE(0x400980+i*4, NV_READ(0x100300+i*4));
>+		NV_WRITE(NV10_PGRAPH_RDI_INDEX, 0x00EA0090+i*4);
>+		NV_WRITE(NV10_PGRAPH_RDI_DATA, NV_READ(0x100300+i*4));
> 	}
>+	NV_WRITE(0x4009a0, NV_READ(0x100324));
>+	NV_WRITE(NV10_PGRAPH_RDI_INDEX, 0x00EA000C);
>+	NV_WRITE(NV10_PGRAPH_RDI_DATA, NV_READ(0x100324));
> 
> 	NV_WRITE(NV10_PGRAPH_CTX_CONTROL, 0x10010100);
> 	NV_WRITE(NV10_PGRAPH_STATE , 0xFFFFFFFF);
>@@ -247,3 +800,90 @@ void nv20_graph_takedown(struct drm_devi
> 	nouveau_gpuobj_ref_del(dev, &dev_priv->ctx_table);
> }
> 
>+int nv30_graph_init(struct drm_device *dev)
>+{
>+	struct drm_nouveau_private *dev_priv = dev->dev_private;
>+	uint32_t vramsz, tmp;
>+	int ret, i;
>+
>+	NV_WRITE(NV03_PMC_ENABLE, NV_READ(NV03_PMC_ENABLE) &
>+			~NV_PMC_ENABLE_PGRAPH);
>+	NV_WRITE(NV03_PMC_ENABLE, NV_READ(NV03_PMC_ENABLE) |
>+			 NV_PMC_ENABLE_PGRAPH);
>+
>+	/* Create Context Pointer Table */
>+	dev_priv->ctx_table_size = 32 * 4;
>+	if ((ret = nouveau_gpuobj_new_ref(dev, NULL, NULL, 0,
>+					  dev_priv->ctx_table_size, 16,
>+					  NVOBJ_FLAG_ZERO_ALLOC,
>+					  &dev_priv->ctx_table)))
>+		return ret;
>+
>+	NV_WRITE(NV20_PGRAPH_CHANNEL_CTX_TABLE,
>+		 dev_priv->ctx_table->instance >> 4);
>+
>+	NV_WRITE(NV03_PGRAPH_INTR , 0xFFFFFFFF);
>+	NV_WRITE(NV03_PGRAPH_INTR_EN, 0xFFFFFFFF);
>+
>+	NV_WRITE(NV04_PGRAPH_DEBUG_0, 0xFFFFFFFF);
>+	NV_WRITE(NV04_PGRAPH_DEBUG_0, 0x00000000);
>+	NV_WRITE(NV04_PGRAPH_DEBUG_1, 0x401287c0);
>+	NV_WRITE(0x400890, 0x01b463ff);
>+	NV_WRITE(NV04_PGRAPH_DEBUG_3, 0xf2de0475);
>+	NV_WRITE(NV10_PGRAPH_DEBUG_4, 0x00008000);
>+	NV_WRITE(NV04_PGRAPH_LIMIT_VIOL_PIX, 0xf04bdff6);
>+	NV_WRITE(0x400B80, 0x1003d888);
>+	NV_WRITE(0x400098, 0x00000000);
>+	NV_WRITE(0x40009C, 0x0005ad00);
>+	NV_WRITE(0x400B88, 0x62ff00ff); // suspiciously like PGRAPH_DEBUG_2
>+	NV_WRITE(0x4000a0, 0x00000000);
>+	NV_WRITE(0x4000a4, 0x00000008);
>+	NV_WRITE(0x4008a8, 0xb784a400);
>+	NV_WRITE(0x400ba0, 0x002f8685);
>+	NV_WRITE(0x400ba4, 0x00231f3f);
>+	NV_WRITE(0x4008a4, 0x40000020);
>+	NV_WRITE(0x400B84, 0x0c000000);
>+	NV_WRITE(NV04_PGRAPH_DEBUG_2, 0x62ff0f7f);
>+	NV_WRITE(0x4000c0, 0x00000016);
>+
>+	/* copy tile info from PFB */
>+	for (i=0; i<NV10_PFB_TILE__SIZE; i++) {
>+		NV_WRITE(NV10_PGRAPH_TILE(i), NV_READ(NV10_PFB_TILE(i)));
>+		NV_WRITE(NV10_PGRAPH_TLIMIT(i), NV_READ(NV10_PFB_TLIMIT(i)));
>+		NV_WRITE(NV10_PGRAPH_TSIZE(i), NV_READ(NV10_PFB_TSIZE(i)));
>+		NV_WRITE(NV10_PGRAPH_TSTATUS(i), NV_READ(NV10_PFB_TSTATUS(i)));
>+	}
>+
>+	NV_WRITE(NV10_PGRAPH_CTX_CONTROL, 0x10010100);
>+	NV_WRITE(NV10_PGRAPH_STATE , 0xFFFFFFFF);
>+	NV_WRITE(NV04_PGRAPH_FIFO , 0x00000001);
>+
>+	/* begin RAM config */
>+	vramsz = drm_get_resource_len(dev, 0) - 1;
>+	NV_WRITE(0x4009A4, NV_READ(NV04_PFB_CFG0));
>+	NV_WRITE(0x4009A8, NV_READ(NV04_PFB_CFG1));
>+	NV_WRITE(0x400750, 0x00EA0000);
>+	NV_WRITE(0x400754, NV_READ(NV04_PFB_CFG0));
>+	NV_WRITE(0x400750, 0x00EA0004);
>+	NV_WRITE(0x400754, NV_READ(NV04_PFB_CFG1));
>+	NV_WRITE(0x400820, 0);
>+	NV_WRITE(0x400824, 0);
>+	NV_WRITE(0x400864, vramsz-1);
>+	NV_WRITE(0x400868, vramsz-1);
>+
>+	NV_WRITE(0x400B20, 0x00000000);
>+	NV_WRITE(0x400B04, 0xFFFFFFFF);
>+
>+	/* per-context state, doesn't belong here */
>+	tmp = NV_READ(NV10_PGRAPH_SURFACE) & 0x0007ff00;
>+	NV_WRITE(NV10_PGRAPH_SURFACE, tmp);
>+	tmp = NV_READ(NV10_PGRAPH_SURFACE) | 0x00020100;
>+	NV_WRITE(NV10_PGRAPH_SURFACE, tmp);
>+
>+	NV_WRITE(NV03_PGRAPH_ABS_UCLIP_XMIN, 0);
>+	NV_WRITE(NV03_PGRAPH_ABS_UCLIP_YMIN, 0);
>+	NV_WRITE(NV03_PGRAPH_ABS_UCLIP_XMAX, 0x7fff);
>+	NV_WRITE(NV03_PGRAPH_ABS_UCLIP_YMAX, 0x7fff);
>+
>+	return 0;
>+}
>diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/nv30_graph.c linux-2.6.23.i686/drivers/char/drm/nv30_graph.c
>--- linux-2.6.23.i686.orig/drivers/char/drm/nv30_graph.c	2008-01-06 18:54:41.000000000 +0100
>+++ linux-2.6.23.i686/drivers/char/drm/nv30_graph.c	1970-01-01 01:00:00.000000000 +0100
>@@ -1,2911 +0,0 @@
>-/*
>- * Based on nv40_graph.c
>- * Someday this will all go away...
>- */
>-#include "drmP.h"
>-#include "drm.h"
>-#include "nouveau_drv.h"
>-#include "nouveau_drm.h"
>-
>-/*
>- * There are 4 families :
>- * NV30 is 0x10de:0x030* (not working, no dump for that one)
>- *
>- * NV31 is 0x10de:0x031*
>- *
>- * NV34 is 0x10de:0x032*
>- *
>- * NV35 is 0x10de:0x033* (NV35 and NV36 are the same)
>- * NV36 is 0x10de:0x034*
>- *
>- * Not seen in the wild, no dumps (probably NV35) :
>- * NV37 is 0x10de:0x00fc, 0x10de:0x00fd
>- * NV38 is 0x10de:0x0333, 0x10de:0x00fe
>- *
>- */
>-
>-
>-#define NV31_GRCTX_SIZE (22392)
>-#define NV34_GRCTX_SIZE (18140)
>-#define NV35_GRCTX_SIZE (22396)
>-
>-static void nv31_graph_context_init(struct drm_device *dev, struct nouveau_gpuobj *ctx)
>-{
>-	struct drm_nouveau_private *dev_priv = dev->dev_private;
>-	int i;
>-
>-	INSTANCE_WR(ctx, 0x410/4, 0x00000101);
>-	INSTANCE_WR(ctx, 0x424/4, 0x00000111);
>-	INSTANCE_WR(ctx, 0x428/4, 0x00000060);
>-	INSTANCE_WR(ctx, 0x444/4, 0x00000080);
>-	INSTANCE_WR(ctx, 0x448/4, 0xffff0000);
>-	INSTANCE_WR(ctx, 0x44c/4, 0x00000001);
>-	INSTANCE_WR(ctx, 0x460/4, 0x44400000);
>-	INSTANCE_WR(ctx, 0x48c/4, 0xffff0000);
>-	for(i = 0x4e0; i< 0x4e8; i += 4)
>-		INSTANCE_WR(ctx, i/4, 0x0fff0000);
>-	INSTANCE_WR(ctx, 0x4ec/4, 0x00011100);
>-	for(i = 0x508; i< 0x548; i += 4)
>-		INSTANCE_WR(ctx, i/4, 0x07ff0000);
>-	INSTANCE_WR(ctx, 0x550/4, 0x4b7fffff);
>-	INSTANCE_WR(ctx, 0x58c/4, 0x00000080);
>-	INSTANCE_WR(ctx, 0x590/4, 0x30201000);
>-	INSTANCE_WR(ctx, 0x594/4, 0x70605040);
>-	INSTANCE_WR(ctx, 0x598/4, 0xb8a89888);
>-	INSTANCE_WR(ctx, 0x59c/4, 0xf8e8d8c8);
>-	INSTANCE_WR(ctx, 0x5b0/4, 0xb0000000);
>-	for(i = 0x600; i< 0x640; i += 4)
>-		INSTANCE_WR(ctx, i/4, 0x00010588);
>-	for(i = 0x640; i< 0x680; i += 4)
>-		INSTANCE_WR(ctx, i/4, 0x00030303);
>-	for(i = 0x6c0; i< 0x700; i += 4)
>-		INSTANCE_WR(ctx, i/4, 0x0008aae4);
>-	for(i = 0x700; i< 0x740; i += 4)
>-		INSTANCE_WR(ctx, i/4, 0x01012000);
>-	for(i = 0x740; i< 0x780; i += 4)
>-		INSTANCE_WR(ctx, i/4, 0x00080008);
>-	INSTANCE_WR(ctx, 0x85c/4, 0x00040000);
>-	INSTANCE_WR(ctx, 0x860/4, 0x00010000);
>-	for(i = 0x864; i< 0x874; i += 4)
>-		INSTANCE_WR(ctx, i/4, 0x00040004);
>-	INSTANCE_WR(ctx, 0x1f18/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x1f1c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x1f20/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x1f28/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x1f2c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x1f30/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x1f38/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x1f3c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x1f40/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x1f48/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x1f4c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x1f50/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x1f58/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x1f5c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x1f60/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x1f68/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x1f6c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x1f70/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x1f78/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x1f7c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x1f80/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x1f88/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x1f8c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x1f90/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x1f98/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x1f9c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x1fa0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x1fa8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x1fac/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x1fb0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x1fb8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x1fbc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x1fc0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x1fc8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x1fcc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x1fd0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x1fd8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x1fdc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x1fe0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x1fe8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x1fec/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x1ff0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x1ff8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x1ffc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2000/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2008/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x200c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2010/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2018/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x201c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2020/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2028/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x202c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2030/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2038/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x203c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2040/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2048/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x204c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2050/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2058/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x205c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2060/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2068/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x206c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2070/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2078/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x207c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2080/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2088/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x208c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2090/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2098/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x209c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x20a0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x20a8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x20ac/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x20b0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x20b8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x20bc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x20c0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x20c8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x20cc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x20d0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x20d8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x20dc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x20e0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x20e8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x20ec/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x20f0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x20f8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x20fc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2100/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2108/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x210c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2110/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2118/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x211c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2120/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2128/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x212c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2130/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2138/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x213c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2140/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2148/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x214c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2150/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2158/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x215c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2160/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2168/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x216c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2170/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2178/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x217c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2180/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2188/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x218c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2190/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2198/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x219c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x21a0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x21a8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x21ac/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x21b0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x21b8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x21bc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x21c0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x21c8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x21cc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x21d0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x21d8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x21dc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x21e0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x21e8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x21ec/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x21f0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x21f8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x21fc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2200/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2208/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x220c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2210/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2218/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x221c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2220/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2228/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x222c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2230/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2238/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x223c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2240/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2248/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x224c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2250/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2258/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x225c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2260/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2268/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x226c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2270/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2278/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x227c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2280/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2288/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x228c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2290/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2298/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x229c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x22a0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x22a8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x22ac/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x22b0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x22b8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x22bc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x22c0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x22c8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x22cc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x22d0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x22d8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x22dc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x22e0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x22e8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x22ec/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x22f0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x22f8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x22fc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2300/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2308/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x230c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2310/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2318/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x231c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2320/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2328/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x232c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2330/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2338/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x233c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2340/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2348/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x234c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2350/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2358/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x235c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2360/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2368/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x236c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2370/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2378/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x237c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2380/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2388/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x238c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2390/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2398/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x239c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x23a0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x23a8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x23ac/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x23b0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x23b8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x23bc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x23c0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x23c8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x23cc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x23d0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x23d8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x23dc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x23e0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x23e8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x23ec/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x23f0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x23f8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x23fc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2400/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2408/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x240c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2410/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2418/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x241c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2420/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2428/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x242c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2430/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2438/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x243c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2440/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2448/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x244c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2450/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2458/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x245c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2460/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2468/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x246c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2470/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2478/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x247c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2480/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2488/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x248c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2490/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2498/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x249c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x24a0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x24a8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x24ac/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x24b0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x24b8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x24bc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x24c0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x24c8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x24cc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x24d0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x24d8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x24dc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x24e0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x24e8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x24ec/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x24f0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x24f8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x24fc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2500/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2508/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x250c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2510/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2518/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x251c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2520/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2528/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x252c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2530/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2538/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x253c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2540/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2548/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x254c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2550/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2558/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x255c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2560/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2568/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x256c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2570/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2578/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x257c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2580/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2588/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x258c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2590/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2598/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x259c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x25a0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x25a8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x25ac/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x25b0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x25b8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x25bc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x25c0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x25c8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x25cc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x25d0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x25d8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x25dc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x25e0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x25e8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x25ec/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x25f0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x25f8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x25fc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2600/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2608/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x260c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2610/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2618/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x261c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2620/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2628/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x262c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2630/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2638/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x263c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2640/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2648/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x264c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2650/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2658/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x265c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2660/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2668/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x266c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2670/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2678/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x267c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2680/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2688/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x268c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2690/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2698/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x269c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x26a0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x26a8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x26ac/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x26b0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x26b8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x26bc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x26c0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x26c8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x26cc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x26d0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x26d8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x26dc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x26e0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x26e8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x26ec/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x26f0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x26f8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x26fc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2700/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2708/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x270c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2710/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2718/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x271c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2720/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2728/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x272c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2730/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2738/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x273c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2740/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2748/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x274c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2750/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2758/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x275c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2760/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2768/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x276c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2770/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2778/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x277c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2780/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2788/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x278c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2790/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2798/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x279c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x27a0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x27a8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x27ac/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x27b0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x27b8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x27bc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x27c0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x27c8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x27cc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x27d0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x27d8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x27dc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x27e0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x27e8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x27ec/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x27f0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x27f8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x27fc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2800/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2808/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x280c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2810/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2818/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x281c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2820/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2828/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x282c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2830/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2838/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x283c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2840/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2848/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x284c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2850/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2858/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x285c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2860/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2868/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x286c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2870/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2878/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x287c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2880/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2888/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x288c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2890/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2898/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x289c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x28a0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x28a8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x28ac/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x28b0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x28b8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x28bc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x28c0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x28c8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x28cc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x28d0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x28d8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x28dc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x28e0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x28e8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x28ec/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x28f0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x28f8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x28fc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2900/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2908/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x290c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2910/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2918/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x291c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2920/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2928/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x292c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2930/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2938/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x293c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2940/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2948/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x294c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2950/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2958/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x295c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2960/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2968/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x296c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2970/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2978/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x297c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2980/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2988/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x298c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2990/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2998/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x299c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x29a0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x29a8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x29ac/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x29b0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x29b8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x29bc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x29c0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x29c8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x29cc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x29d0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x29d8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x29dc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x29e0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x29e8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x29ec/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x29f0/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x29f8/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x29fc/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2a00/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2a08/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x2a0c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2a10/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2a18/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x2a1c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2a20/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2a28/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x2a2c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2a30/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2a38/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x2a3c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2a40/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2a48/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x2a4c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2a50/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2a58/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x2a5c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2a60/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2a68/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x2a6c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2a70/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2a78/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x2a7c/4, 0x0436086c);
>-	INSTANCE_WR(ctx, 0x2a80/4, 0x000c001b);
>-	INSTANCE_WR(ctx, 0x2a88/4, 0x10700ff9);
>-	INSTANCE_WR(ctx, 0x2a8c/4, 0x0436086c);
>-	INSTANCE_WR(ctx,
0x2a90/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2a98/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2a9c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2aa0/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2aa8/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2aac/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2ab0/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2ab8/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2abc/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2ac0/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2ac8/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2acc/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2ad0/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2ad8/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2adc/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2ae0/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2ae8/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2aec/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2af0/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2af8/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2afc/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2b00/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2b08/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2b0c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2b10/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2b18/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2b1c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2b20/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2b28/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2b2c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2b30/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2b38/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2b3c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2b40/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2b48/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2b4c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2b50/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2b58/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2b5c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2b60/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2b68/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2b6c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2b70/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2b78/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2b7c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2b80/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2b88/4, 
0x10700ff9); >- INSTANCE_WR(ctx, 0x2b8c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2b90/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2b98/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2b9c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2ba0/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2ba8/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2bac/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2bb0/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2bb8/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2bbc/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2bc0/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2bc8/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2bcc/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2bd0/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2bd8/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2bdc/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2be0/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2be8/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2bec/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2bf0/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2bf8/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2bfc/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2c00/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2c08/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2c0c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2c10/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2c18/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2c1c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2c20/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2c28/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2c2c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2c30/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2c38/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2c3c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2c40/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2c48/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2c4c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2c50/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2c58/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2c5c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2c60/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2c68/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2c6c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2c70/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2c78/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2c7c/4, 0x0436086c); >- 
INSTANCE_WR(ctx, 0x2c80/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2c88/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2c8c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2c90/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2c98/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2c9c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2ca0/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2ca8/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2cac/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2cb0/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2cb8/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2cbc/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2cc0/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2cc8/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2ccc/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2cd0/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2cd8/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2cdc/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2ce0/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2ce8/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2cec/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2cf0/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2cf8/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2cfc/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2d00/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2d08/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2d0c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2d10/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2d18/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2d1c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2d20/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2d28/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2d2c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2d30/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2d38/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2d3c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2d40/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2d48/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2d4c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2d50/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2d58/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2d5c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2d60/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2d68/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2d6c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2d70/4, 0x000c001b); >- INSTANCE_WR(ctx, 
0x2d78/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2d7c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2d80/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2d88/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2d8c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2d90/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2d98/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2d9c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2da0/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2da8/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2dac/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2db0/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2db8/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2dbc/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2dc0/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2dc8/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2dcc/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2dd0/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2dd8/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2ddc/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2de0/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2de8/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2dec/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2df0/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2df8/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2dfc/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2e00/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2e08/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2e0c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2e10/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2e18/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2e1c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2e20/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2e28/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2e2c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2e30/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2e38/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2e3c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2e40/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2e48/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2e4c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2e50/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2e58/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2e5c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2e60/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2e68/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2e6c/4, 
0x0436086c); >- INSTANCE_WR(ctx, 0x2e70/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2e78/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2e7c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2e80/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2e88/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2e8c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2e90/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2e98/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2e9c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2ea0/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2ea8/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2eac/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2eb0/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2eb8/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2ebc/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2ec0/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2ec8/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2ecc/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2ed0/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2ed8/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2edc/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2ee0/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2ee8/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2eec/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2ef0/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2ef8/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2efc/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2f00/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2f08/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2f0c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2f10/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2f18/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2f1c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2f20/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2f28/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2f2c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2f30/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2f38/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2f3c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2f40/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2f48/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2f4c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2f50/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2f58/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2f5c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2f60/4, 0x000c001b); >- 
INSTANCE_WR(ctx, 0x2f68/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2f6c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2f70/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2f78/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2f7c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2f80/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2f88/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2f8c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2f90/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2f98/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2f9c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2fa0/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2fa8/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2fac/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2fb0/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2fb8/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2fbc/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2fc0/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2fc8/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2fcc/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2fd0/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2fd8/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2fdc/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2fe0/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2fe8/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2fec/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2ff0/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2ff8/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2ffc/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x3000/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x3008/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x300c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x3010/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x3018/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x301c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x3020/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x3028/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x302c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x3030/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x3038/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x303c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x3040/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x3048/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x304c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x3050/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x3058/4, 0x10700ff9); >- INSTANCE_WR(ctx, 
0x305c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x3060/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x3068/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x306c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x3070/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x3078/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x307c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x3080/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x3088/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x308c/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x3090/4, 0x000c001b); >- for(i = 0x30b8; i< 0x30c8; i += 4) >- INSTANCE_WR(ctx, i/4, 0x0000ffff); >- INSTANCE_WR(ctx, 0x344c/4, 0x3f800000); >- INSTANCE_WR(ctx, 0x3808/4, 0x3f800000); >- INSTANCE_WR(ctx, 0x381c/4, 0x3f800000); >- INSTANCE_WR(ctx, 0x3848/4, 0x40000000); >- INSTANCE_WR(ctx, 0x384c/4, 0x3f800000); >- INSTANCE_WR(ctx, 0x3850/4, 0x3f000000); >- INSTANCE_WR(ctx, 0x3858/4, 0x40000000); >- INSTANCE_WR(ctx, 0x385c/4, 0x3f800000); >- INSTANCE_WR(ctx, 0x3864/4, 0xbf800000); >- INSTANCE_WR(ctx, 0x386c/4, 0xbf800000);} >- >-static void nv34_graph_context_init(struct drm_device *dev, struct nouveau_gpuobj *ctx) >-{ >- struct drm_nouveau_private *dev_priv = dev->dev_private; >- int i; >- >- INSTANCE_WR(ctx, 0x40c/4, 0x01000101); >- INSTANCE_WR(ctx, 0x420/4, 0x00000111); >- INSTANCE_WR(ctx, 0x424/4, 0x00000060); >- INSTANCE_WR(ctx, 0x440/4, 0x00000080); >- INSTANCE_WR(ctx, 0x444/4, 0xffff0000); >- INSTANCE_WR(ctx, 0x448/4, 0x00000001); >- INSTANCE_WR(ctx, 0x45c/4, 0x44400000); >- INSTANCE_WR(ctx, 0x480/4, 0xffff0000); >- for(i = 0x4d4; i< 0x4dc; i += 4) >- INSTANCE_WR(ctx, i/4, 0x0fff0000); >- INSTANCE_WR(ctx, 0x4e0/4, 0x00011100); >- for(i = 0x4fc; i< 0x53c; i += 4) >- INSTANCE_WR(ctx, i/4, 0x07ff0000); >- INSTANCE_WR(ctx, 0x544/4, 0x4b7fffff); >- INSTANCE_WR(ctx, 0x57c/4, 0x00000080); >- INSTANCE_WR(ctx, 0x580/4, 0x30201000); >- INSTANCE_WR(ctx, 0x584/4, 0x70605040); >- INSTANCE_WR(ctx, 0x588/4, 0xb8a89888); >- INSTANCE_WR(ctx, 0x58c/4, 0xf8e8d8c8); >- INSTANCE_WR(ctx, 0x5a0/4, 0xb0000000); >- for(i = 0x5f0; i< 0x630; i += 4) >- 
INSTANCE_WR(ctx, i/4, 0x00010588); >- for(i = 0x630; i< 0x670; i += 4) >- INSTANCE_WR(ctx, i/4, 0x00030303); >- for(i = 0x6b0; i< 0x6f0; i += 4) >- INSTANCE_WR(ctx, i/4, 0x0008aae4); >- for(i = 0x6f0; i< 0x730; i += 4) >- INSTANCE_WR(ctx, i/4, 0x01012000); >- for(i = 0x730; i< 0x770; i += 4) >- INSTANCE_WR(ctx, i/4, 0x00080008); >- INSTANCE_WR(ctx, 0x850/4, 0x00040000); >- INSTANCE_WR(ctx, 0x854/4, 0x00010000); >- for(i = 0x858; i< 0x868; i += 4) >- INSTANCE_WR(ctx, i/4, 0x00040004); >- INSTANCE_WR(ctx, 0x15ac/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x15b0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x15b4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x15bc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x15c0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x15c4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x15cc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x15d0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x15d4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x15dc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x15e0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x15e4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x15ec/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x15f0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x15f4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x15fc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1600/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1604/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x160c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1610/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1614/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x161c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1620/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1624/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x162c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1630/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1634/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x163c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1640/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1644/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x164c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1650/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1654/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x165c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1660/4, 0x0436086c); >- 
INSTANCE_WR(ctx, 0x1664/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x166c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1670/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1674/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x167c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1680/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1684/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x168c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1690/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1694/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x169c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x16a0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x16a4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x16ac/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x16b0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x16b4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x16bc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x16c0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x16c4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x16cc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x16d0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x16d4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x16dc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x16e0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x16e4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x16ec/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x16f0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x16f4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x16fc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1700/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1704/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x170c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1710/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1714/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x171c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1720/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1724/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x172c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1730/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1734/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x173c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1740/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1744/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x174c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1750/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1754/4, 0x000c001b); >- INSTANCE_WR(ctx, 
0x175c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1760/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1764/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x176c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1770/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1774/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x177c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1780/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1784/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x178c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1790/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1794/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x179c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x17a0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x17a4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x17ac/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x17b0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x17b4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x17bc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x17c0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x17c4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x17cc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x17d0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x17d4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x17dc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x17e0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x17e4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x17ec/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x17f0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x17f4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x17fc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1800/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1804/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x180c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1810/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1814/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x181c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1820/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1824/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x182c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1830/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1834/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x183c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1840/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1844/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x184c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1850/4, 
0x0436086c); >- INSTANCE_WR(ctx, 0x1854/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x185c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1860/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1864/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x186c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1870/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1874/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x187c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1880/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1884/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x188c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1890/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1894/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x189c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x18a0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x18a4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x18ac/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x18b0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x18b4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x18bc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x18c0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x18c4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x18cc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x18d0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x18d4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x18dc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x18e0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x18e4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x18ec/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x18f0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x18f4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x18fc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1900/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1904/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x190c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1910/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1914/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x191c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1920/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1924/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x192c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1930/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1934/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x193c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1940/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1944/4, 0x000c001b); >- 
INSTANCE_WR(ctx, 0x194c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1950/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1954/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x195c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1960/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1964/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x196c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1970/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1974/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x197c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1980/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1984/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x198c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1990/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1994/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x199c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x19a0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x19a4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x19ac/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x19b0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x19b4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x19bc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x19c0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x19c4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x19cc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x19d0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x19d4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x19dc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x19e0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x19e4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x19ec/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x19f0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x19f4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x19fc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1a00/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1a04/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x1a0c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1a10/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1a14/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x1a1c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1a20/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1a24/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x1a2c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1a30/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1a34/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x1a3c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 
0x1a40/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1a44/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x1a4c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1a50/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1a54/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x1a5c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1a60/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1a64/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x1a6c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1a70/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1a74/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x1a7c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1a80/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1a84/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x1a8c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1a90/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1a94/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x1a9c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1aa0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1aa4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x1aac/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1ab0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1ab4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x1abc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1ac0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1ac4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x1acc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1ad0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1ad4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x1adc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1ae0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1ae4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x1aec/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1af0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1af4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x1afc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1b00/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1b04/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x1b0c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1b10/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1b14/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x1b1c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1b20/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1b24/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x1b2c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x1b30/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x1b34/4, 
0x000c001b);
>- INSTANCE_WR(ctx, 0x1b3c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1b40/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1b44/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1b4c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1b50/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1b54/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1b5c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1b60/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1b64/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1b6c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1b70/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1b74/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1b7c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1b80/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1b84/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1b8c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1b90/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1b94/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1b9c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1ba0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1ba4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1bac/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1bb0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1bb4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1bbc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1bc0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1bc4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1bcc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1bd0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1bd4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1bdc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1be0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1be4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1bec/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1bf0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1bf4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1bfc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1c00/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1c04/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1c0c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1c10/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1c14/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1c1c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1c20/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1c24/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1c2c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1c30/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1c34/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1c3c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1c40/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1c44/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1c4c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1c50/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1c54/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1c5c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1c60/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1c64/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1c6c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1c70/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1c74/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1c7c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1c80/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1c84/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1c8c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1c90/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1c94/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1c9c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1ca0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1ca4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1cac/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1cb0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1cb4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1cbc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1cc0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1cc4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1ccc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1cd0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1cd4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1cdc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1ce0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1ce4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1cec/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1cf0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1cf4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1cfc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1d00/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1d04/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1d0c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1d10/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1d14/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1d1c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1d20/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1d24/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1d2c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1d30/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1d34/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1d3c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1d40/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1d44/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1d4c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1d50/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1d54/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1d5c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1d60/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1d64/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1d6c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1d70/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1d74/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1d7c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1d80/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1d84/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1d8c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1d90/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1d94/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1d9c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1da0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1da4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1dac/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1db0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1db4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1dbc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1dc0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1dc4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1dcc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1dd0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1dd4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1ddc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1de0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1de4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1dec/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1df0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1df4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1dfc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1e00/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1e04/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1e0c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1e10/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1e14/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1e1c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1e20/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1e24/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1e2c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1e30/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1e34/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1e3c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1e40/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1e44/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1e4c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1e50/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1e54/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1e5c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1e60/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1e64/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1e6c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1e70/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1e74/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1e7c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1e80/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1e84/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1e8c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1e90/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1e94/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1e9c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1ea0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1ea4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1eac/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1eb0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1eb4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1ebc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1ec0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1ec4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1ecc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1ed0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1ed4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1edc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1ee0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1ee4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1eec/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1ef0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1ef4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1efc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1f00/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1f04/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1f0c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1f10/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1f14/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1f1c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1f20/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1f24/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1f2c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1f30/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1f34/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1f3c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1f40/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1f44/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1f4c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1f50/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1f54/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1f5c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1f60/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1f64/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1f6c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1f70/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1f74/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1f7c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1f80/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1f84/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1f8c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1f90/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1f94/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1f9c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1fa0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1fa4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1fac/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1fb0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1fb4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1fbc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1fc0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1fc4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1fcc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1fd0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1fd4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1fdc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1fe0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1fe4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1fec/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1ff0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1ff4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1ffc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2000/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2004/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x200c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2010/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2014/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x201c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2020/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2024/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x202c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2030/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2034/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x203c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2040/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2044/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x204c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2050/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2054/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x205c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2060/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2064/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x206c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2070/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2074/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x207c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2080/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2084/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x208c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2090/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2094/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x209c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x20a0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x20a4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x20ac/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x20b0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x20b4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x20bc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x20c0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x20c4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x20cc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x20d0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x20d4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x20dc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x20e0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x20e4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x20ec/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x20f0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x20f4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x20fc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2100/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2104/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x210c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2110/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2114/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x211c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2120/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2124/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x212c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2130/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2134/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x213c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2140/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2144/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x214c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2150/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2154/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x215c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2160/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2164/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x216c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2170/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2174/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x217c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2180/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2184/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x218c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2190/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2194/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x219c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x21a0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x21a4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x21ac/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x21b0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x21b4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x21bc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x21c0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x21c4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x21cc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x21d0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x21d4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x21dc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x21e0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x21e4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x21ec/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x21f0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x21f4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x21fc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2200/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2204/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x220c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2210/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2214/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x221c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2220/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2224/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x222c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2230/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2234/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x223c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2240/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2244/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x224c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2250/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2254/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x225c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2260/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2264/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x226c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2270/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2274/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x227c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2280/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2284/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x228c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2290/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2294/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x229c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x22a0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x22a4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x22ac/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x22b0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x22b4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x22bc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x22c0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x22c4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x22cc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x22d0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x22d4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x22dc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x22e0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x22e4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x22ec/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x22f0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x22f4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x22fc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2300/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2304/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x230c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2310/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2314/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x231c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2320/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2324/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x232c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2330/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2334/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x233c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2340/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2344/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x234c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2350/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2354/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x235c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2360/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2364/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x236c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2370/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2374/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x237c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2380/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2384/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x238c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2390/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2394/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x239c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x23a0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x23a4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x23ac/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x23b0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x23b4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x23bc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x23c0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x23c4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x23cc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x23d0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x23d4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x23dc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x23e0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x23e4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x23ec/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x23f0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x23f4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x23fc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2400/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2404/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x240c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2410/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2414/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x241c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2420/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2424/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x242c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2430/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2434/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x243c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2440/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2444/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x244c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2450/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2454/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x245c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2460/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2464/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x246c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2470/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2474/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x247c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2480/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2484/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x248c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2490/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2494/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x249c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x24a0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x24a4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x24ac/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x24b0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x24b4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x24bc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x24c0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x24c4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x24cc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x24d0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x24d4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x24dc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x24e0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x24e4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x24ec/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x24f0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x24f4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x24fc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2500/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2504/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x250c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2510/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2514/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x251c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2520/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2524/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x252c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2530/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2534/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x253c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2540/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2544/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x254c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2550/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2554/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x255c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2560/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2564/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x256c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2570/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2574/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x257c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2580/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2584/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x258c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2590/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2594/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x259c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x25a0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x25a4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x25ac/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x25b0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x25b4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x25bc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x25c0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x25c4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x25cc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x25d0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x25d4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x25dc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x25e0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x25e4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x25ec/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x25f0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x25f4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x25fc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2600/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2604/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x260c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2610/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2614/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x261c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2620/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2624/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x262c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2630/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2634/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x263c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2640/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2644/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x264c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2650/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2654/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x265c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2660/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2664/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x266c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2670/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2674/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x267c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2680/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2684/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x268c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2690/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2694/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x269c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x26a0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x26a4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x26ac/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x26b0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x26b4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x26bc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x26c0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x26c4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x26cc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x26d0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x26d4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x26dc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x26e0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x26e4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x26ec/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x26f0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x26f4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x26fc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2700/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2704/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x270c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2710/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2714/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x271c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2720/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2724/4, 0x000c001b);
>- for(i = 0x274c; i< 0x275c; i += 4)
>- INSTANCE_WR(ctx, i/4, 0x0000ffff);
>- INSTANCE_WR(ctx, 0x2ae0/4, 0x3f800000);
>- INSTANCE_WR(ctx, 0x2e9c/4, 0x3f800000);
>- INSTANCE_WR(ctx, 0x2eb0/4, 0x3f800000);
>- INSTANCE_WR(ctx, 0x2edc/4, 0x40000000);
>- INSTANCE_WR(ctx, 0x2ee0/4, 0x3f800000);
>- INSTANCE_WR(ctx, 0x2ee4/4, 0x3f000000);
>- INSTANCE_WR(ctx, 0x2eec/4, 0x40000000);
>- INSTANCE_WR(ctx, 0x2ef0/4, 0x3f800000);
>- INSTANCE_WR(ctx, 0x2ef8/4, 0xbf800000);
>- INSTANCE_WR(ctx, 0x2f00/4, 0xbf800000);
>-}
>-
>-static void nv35_graph_context_init(struct drm_device *dev, struct nouveau_gpuobj *ctx)
>-{
>- struct drm_nouveau_private *dev_priv = dev->dev_private;
>- int i;
>-
>- INSTANCE_WR(ctx, 0x40c/4, 0x00000101);
>- INSTANCE_WR(ctx, 0x420/4, 0x00000111);
>- INSTANCE_WR(ctx, 0x424/4, 0x00000060);
>- INSTANCE_WR(ctx, 0x440/4, 0x00000080);
>- INSTANCE_WR(ctx, 0x444/4, 0xffff0000);
>- INSTANCE_WR(ctx, 0x448/4, 0x00000001);
>- INSTANCE_WR(ctx, 0x45c/4, 0x44400000);
>- INSTANCE_WR(ctx, 0x488/4, 0xffff0000);
>- for(i = 0x4dc; i< 0x4e4; i += 4)
>- INSTANCE_WR(ctx, i/4, 0x0fff0000);
>- INSTANCE_WR(ctx, 0x4e8/4, 0x00011100);
>- for(i = 0x504; i< 0x544; i += 4)
>- INSTANCE_WR(ctx, i/4, 0x07ff0000);
>- INSTANCE_WR(ctx, 0x54c/4, 0x4b7fffff);
>- INSTANCE_WR(ctx, 0x588/4, 0x00000080);
>- INSTANCE_WR(ctx, 0x58c/4, 0x30201000);
>- INSTANCE_WR(ctx, 0x590/4, 0x70605040);
>- INSTANCE_WR(ctx, 0x594/4, 0xb8a89888);
>- INSTANCE_WR(ctx, 0x598/4, 0xf8e8d8c8);
>- INSTANCE_WR(ctx, 0x5ac/4, 0xb0000000);
>- for(i = 0x604; i< 0x644; i += 4)
>- INSTANCE_WR(ctx, i/4, 0x00010588);
>- for(i = 0x644; i< 0x684; i += 4)
>- INSTANCE_WR(ctx, i/4, 0x00030303);
>- for(i = 0x6c4; i< 0x704; i += 4)
>- INSTANCE_WR(ctx, i/4, 0x0008aae4);
>- for(i = 0x704; i< 0x744; i += 4)
>- INSTANCE_WR(ctx, i/4, 0x01012000);
>- for(i = 0x744; i< 0x784; i += 4)
>- INSTANCE_WR(ctx, i/4, 0x00080008);
>- INSTANCE_WR(ctx, 0x860/4, 0x00040000);
>- INSTANCE_WR(ctx, 0x864/4, 0x00010000);
>- for(i = 0x868; i< 0x878; i += 4)
>- INSTANCE_WR(ctx, i/4, 0x00040004);
>- INSTANCE_WR(ctx, 0x1f1c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1f20/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1f24/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1f2c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1f30/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1f34/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1f3c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1f40/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1f44/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1f4c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1f50/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1f54/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1f5c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1f60/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1f64/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1f6c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1f70/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1f74/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1f7c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1f80/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1f84/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1f8c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1f90/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1f94/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1f9c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1fa0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1fa4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1fac/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1fb0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1fb4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1fbc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1fc0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1fc4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1fcc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1fd0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1fd4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1fdc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1fe0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1fe4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1fec/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x1ff0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x1ff4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x1ffc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2000/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2004/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x200c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2010/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2014/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x201c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2020/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2024/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x202c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2030/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2034/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x203c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2040/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2044/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x204c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2050/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2054/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x205c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2060/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2064/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x206c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2070/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2074/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x207c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2080/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2084/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x208c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2090/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2094/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x209c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x20a0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x20a4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x20ac/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x20b0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x20b4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x20bc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x20c0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x20c4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x20cc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x20d0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x20d4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x20dc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x20e0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x20e4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x20ec/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x20f0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x20f4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x20fc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2100/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2104/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x210c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2110/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2114/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x211c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2120/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2124/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x212c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2130/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2134/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x213c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2140/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2144/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x214c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2150/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2154/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x215c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2160/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2164/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x216c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2170/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2174/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x217c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2180/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2184/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x218c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2190/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2194/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x219c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x21a0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x21a4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x21ac/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x21b0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x21b4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x21bc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x21c0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x21c4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x21cc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x21d0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x21d4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x21dc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x21e0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x21e4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x21ec/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x21f0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x21f4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x21fc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2200/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2204/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x220c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2210/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2214/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x221c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2220/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2224/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x222c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2230/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2234/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x223c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2240/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2244/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x224c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2250/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2254/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x225c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2260/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2264/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x226c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2270/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2274/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x227c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2280/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2284/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x228c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2290/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2294/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x229c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x22a0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x22a4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x22ac/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x22b0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x22b4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x22bc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x22c0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x22c4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x22cc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x22d0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x22d4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x22dc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x22e0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x22e4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x22ec/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x22f0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x22f4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x22fc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2300/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2304/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x230c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2310/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2314/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x231c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2320/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2324/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x232c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2330/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2334/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x233c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2340/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2344/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x234c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2350/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2354/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x235c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2360/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2364/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x236c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2370/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2374/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x237c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2380/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2384/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x238c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2390/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2394/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x239c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x23a0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x23a4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x23ac/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x23b0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x23b4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x23bc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x23c0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x23c4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x23cc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x23d0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x23d4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x23dc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x23e0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x23e4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x23ec/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x23f0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x23f4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x23fc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2400/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2404/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x240c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2410/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2414/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x241c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2420/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2424/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x242c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2430/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2434/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x243c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2440/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2444/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x244c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2450/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2454/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x245c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2460/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2464/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x246c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2470/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2474/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x247c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2480/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2484/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x248c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2490/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2494/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x249c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x24a0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x24a4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x24ac/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x24b0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x24b4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x24bc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x24c0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x24c4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x24cc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x24d0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x24d4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x24dc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x24e0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x24e4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x24ec/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x24f0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x24f4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x24fc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2500/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2504/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x250c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2510/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2514/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x251c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2520/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2524/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x252c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2530/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2534/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x253c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2540/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2544/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x254c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2550/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2554/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x255c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2560/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2564/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x256c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2570/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2574/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x257c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2580/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2584/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x258c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2590/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2594/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x259c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x25a0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x25a4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x25ac/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x25b0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x25b4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x25bc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x25c0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x25c4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x25cc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x25d0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x25d4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x25dc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x25e0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x25e4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x25ec/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x25f0/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x25f4/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x25fc/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2600/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2604/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x260c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2610/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2614/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x261c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2620/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2624/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x262c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2630/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2634/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x263c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2640/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2644/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x264c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2650/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2654/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x265c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 0x2660/4, 0x0436086c);
>- INSTANCE_WR(ctx, 0x2664/4, 0x000c001b);
>- INSTANCE_WR(ctx, 0x266c/4, 0x10700ff9);
>- INSTANCE_WR(ctx, 
0x2670/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2674/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x267c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2680/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2684/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x268c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2690/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2694/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x269c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x26a0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x26a4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x26ac/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x26b0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x26b4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x26bc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x26c0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x26c4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x26cc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x26d0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x26d4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x26dc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x26e0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x26e4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x26ec/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x26f0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x26f4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x26fc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2700/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2704/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x270c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2710/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2714/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x271c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2720/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2724/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x272c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2730/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2734/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x273c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2740/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2744/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x274c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2750/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2754/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x275c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2760/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2764/4, 
0x000c001b); >- INSTANCE_WR(ctx, 0x276c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2770/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2774/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x277c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2780/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2784/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x278c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2790/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2794/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x279c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x27a0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x27a4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x27ac/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x27b0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x27b4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x27bc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x27c0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x27c4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x27cc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x27d0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x27d4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x27dc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x27e0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x27e4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x27ec/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x27f0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x27f4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x27fc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2800/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2804/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x280c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2810/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2814/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x281c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2820/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2824/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x282c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2830/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2834/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x283c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2840/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2844/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x284c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2850/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2854/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x285c/4, 0x10700ff9); >- 
INSTANCE_WR(ctx, 0x2860/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2864/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x286c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2870/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2874/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x287c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2880/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2884/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x288c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2890/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2894/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x289c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x28a0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x28a4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x28ac/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x28b0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x28b4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x28bc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x28c0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x28c4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x28cc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x28d0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x28d4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x28dc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x28e0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x28e4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x28ec/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x28f0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x28f4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x28fc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2900/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2904/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x290c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2910/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2914/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x291c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2920/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2924/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x292c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2930/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2934/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x293c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2940/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2944/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x294c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2950/4, 0x0436086c); >- INSTANCE_WR(ctx, 
0x2954/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x295c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2960/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2964/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x296c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2970/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2974/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x297c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2980/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2984/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x298c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2990/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2994/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x299c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x29a0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x29a4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x29ac/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x29b0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x29b4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x29bc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x29c0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x29c4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x29cc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x29d0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x29d4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x29dc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x29e0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x29e4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x29ec/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x29f0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x29f4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x29fc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2a00/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2a04/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2a0c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2a10/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2a14/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2a1c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2a20/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2a24/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2a2c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2a30/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2a34/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2a3c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2a40/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2a44/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2a4c/4, 
0x10700ff9); >- INSTANCE_WR(ctx, 0x2a50/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2a54/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2a5c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2a60/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2a64/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2a6c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2a70/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2a74/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2a7c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2a80/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2a84/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2a8c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2a90/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2a94/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2a9c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2aa0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2aa4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2aac/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2ab0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2ab4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2abc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2ac0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2ac4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2acc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2ad0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2ad4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2adc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2ae0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2ae4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2aec/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2af0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2af4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2afc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2b00/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2b04/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2b0c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2b10/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2b14/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2b1c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2b20/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2b24/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2b2c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2b30/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2b34/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2b3c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2b40/4, 0x0436086c); >- 
INSTANCE_WR(ctx, 0x2b44/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2b4c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2b50/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2b54/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2b5c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2b60/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2b64/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2b6c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2b70/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2b74/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2b7c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2b80/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2b84/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2b8c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2b90/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2b94/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2b9c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2ba0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2ba4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2bac/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2bb0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2bb4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2bbc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2bc0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2bc4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2bcc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2bd0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2bd4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2bdc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2be0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2be4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2bec/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2bf0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2bf4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2bfc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2c00/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2c04/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2c0c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2c10/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2c14/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2c1c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2c20/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2c24/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2c2c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2c30/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2c34/4, 0x000c001b); >- INSTANCE_WR(ctx, 
0x2c3c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2c40/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2c44/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2c4c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2c50/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2c54/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2c5c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2c60/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2c64/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2c6c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2c70/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2c74/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2c7c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2c80/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2c84/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2c8c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2c90/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2c94/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2c9c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2ca0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2ca4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2cac/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2cb0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2cb4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2cbc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2cc0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2cc4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2ccc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2cd0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2cd4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2cdc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2ce0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2ce4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2cec/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2cf0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2cf4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2cfc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2d00/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2d04/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2d0c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2d10/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2d14/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2d1c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2d20/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2d24/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2d2c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2d30/4, 
0x0436086c); >- INSTANCE_WR(ctx, 0x2d34/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2d3c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2d40/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2d44/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2d4c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2d50/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2d54/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2d5c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2d60/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2d64/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2d6c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2d70/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2d74/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2d7c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2d80/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2d84/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2d8c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2d90/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2d94/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2d9c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2da0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2da4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2dac/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2db0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2db4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2dbc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2dc0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2dc4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2dcc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2dd0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2dd4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2ddc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2de0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2de4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2dec/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2df0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2df4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2dfc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2e00/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2e04/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2e0c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2e10/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2e14/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2e1c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2e20/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2e24/4, 0x000c001b); >- 
INSTANCE_WR(ctx, 0x2e2c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2e30/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2e34/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2e3c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2e40/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2e44/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2e4c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2e50/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2e54/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2e5c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2e60/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2e64/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2e6c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2e70/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2e74/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2e7c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2e80/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2e84/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2e8c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2e90/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2e94/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2e9c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2ea0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2ea4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2eac/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2eb0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2eb4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2ebc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2ec0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2ec4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2ecc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2ed0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2ed4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2edc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2ee0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2ee4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2eec/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2ef0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2ef4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2efc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2f00/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2f04/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2f0c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2f10/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2f14/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2f1c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 
0x2f20/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2f24/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2f2c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2f30/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2f34/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2f3c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2f40/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2f44/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2f4c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2f50/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2f54/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2f5c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2f60/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2f64/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2f6c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2f70/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2f74/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2f7c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2f80/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2f84/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2f8c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2f90/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2f94/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2f9c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2fa0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2fa4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2fac/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2fb0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2fb4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2fbc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2fc0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2fc4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2fcc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2fd0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2fd4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2fdc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2fe0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2fe4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2fec/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x2ff0/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x2ff4/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x2ffc/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x3000/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x3004/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x300c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x3010/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x3014/4, 
0x000c001b); >- INSTANCE_WR(ctx, 0x301c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x3020/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x3024/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x302c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x3030/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x3034/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x303c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x3040/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x3044/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x304c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x3050/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x3054/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x305c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x3060/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x3064/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x306c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x3070/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x3074/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x307c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x3080/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x3084/4, 0x000c001b); >- INSTANCE_WR(ctx, 0x308c/4, 0x10700ff9); >- INSTANCE_WR(ctx, 0x3090/4, 0x0436086c); >- INSTANCE_WR(ctx, 0x3094/4, 0x000c001b); >- for(i = 0x30bc; i< 0x30cc; i += 4) >- INSTANCE_WR(ctx, i/4, 0x0000ffff); >- INSTANCE_WR(ctx, 0x3450/4, 0x3f800000); >- INSTANCE_WR(ctx, 0x380c/4, 0x3f800000); >- INSTANCE_WR(ctx, 0x3820/4, 0x3f800000); >- INSTANCE_WR(ctx, 0x384c/4, 0x40000000); >- INSTANCE_WR(ctx, 0x3850/4, 0x3f800000); >- INSTANCE_WR(ctx, 0x3854/4, 0x3f000000); >- INSTANCE_WR(ctx, 0x385c/4, 0x40000000); >- INSTANCE_WR(ctx, 0x3860/4, 0x3f800000); >- INSTANCE_WR(ctx, 0x3868/4, 0xbf800000); >- INSTANCE_WR(ctx, 0x3870/4, 0xbf800000);} >- >-int nv30_graph_create_context(struct nouveau_channel *chan) >-{ >- struct drm_device *dev = chan->dev; >- struct drm_nouveau_private *dev_priv = dev->dev_private; >- void (*ctx_init)(struct drm_device *, struct nouveau_gpuobj *); >- unsigned int ctx_size; >- int ret; >- >- switch (dev_priv->chipset) { >- case 0x31: >- ctx_size = NV31_GRCTX_SIZE; >- ctx_init = nv31_graph_context_init; >- break; >- case 0x34: >- ctx_size = NV34_GRCTX_SIZE; 
>- ctx_init = nv34_graph_context_init;
>- break;
>- case 0x35:
>- case 0x36:
>- ctx_size = NV35_GRCTX_SIZE;
>- ctx_init = nv35_graph_context_init;
>- break;
>- default:
>- ctx_size = 0;
>- ctx_init = nv35_graph_context_init;
>- DRM_ERROR("Please contact the devs if you want your NV%x card to work\n",dev_priv->chipset);
>- break;
>- }
>-
>- if ((ret = nouveau_gpuobj_new_ref(dev, chan, NULL, 0, ctx_size, 16,
>- NVOBJ_FLAG_ZERO_ALLOC,
>- &chan->ramin_grctx)))
>- return ret;
>-
>- /* Initialise default context values */
>- ctx_init(dev, chan->ramin_grctx->gpuobj);
>-
>- INSTANCE_WR(chan->ramin_grctx->gpuobj, 0x28/4, (chan->id<<24)|0x1); /* CTX_USER */
>- INSTANCE_WR(dev_priv->ctx_table->gpuobj, chan->id,
>- chan->ramin_grctx->instance >> 4);
>-
>- return 0;
>-}
>-
>-void nv30_graph_destroy_context(struct nouveau_channel *chan)
>-{
>- struct drm_device *dev = chan->dev;
>- struct drm_nouveau_private *dev_priv = dev->dev_private;
>-
>- if (chan->ramin_grctx)
>- nouveau_gpuobj_ref_del(dev, &chan->ramin_grctx);
>-
>- INSTANCE_WR(dev_priv->ctx_table->gpuobj, chan->id, 0);
>-}
>-
>-static int
>-nouveau_graph_wait_idle(struct drm_device *dev)
>-{
>- struct drm_nouveau_private *dev_priv = dev->dev_private;
>- int tv = 1000;
>-
>- while (tv--) {
>- if (NV_READ(0x400700) == 0)
>- break;
>- }
>-
>- if (NV_READ(0x400700)) {
>- DRM_ERROR("timeout!\n");
>- return -EBUSY;
>- }
>- return 0;
>-}
>-
>-int nv30_graph_load_context(struct nouveau_channel *chan)
>-{
>- struct drm_device *dev = chan->dev;
>- struct drm_nouveau_private *dev_priv = dev->dev_private;
>- uint32_t inst;
>-
>- if (!chan->ramin_grctx)
>- return -EINVAL;
>- inst = chan->ramin_grctx->instance >> 4;
>-
>- NV_WRITE(NV20_PGRAPH_CHANNEL_CTX_POINTER, inst);
>- NV_WRITE(NV20_PGRAPH_CHANNEL_CTX_XFER,
>- NV20_PGRAPH_CHANNEL_CTX_XFER_LOAD);
>-
>- return nouveau_graph_wait_idle(dev);
>-}
>-
>-int nv30_graph_save_context(struct nouveau_channel *chan)
>-{
>- struct drm_device *dev = chan->dev;
>- struct drm_nouveau_private *dev_priv = dev->dev_private;
>- uint32_t inst;
>-
>- if (!chan->ramin_grctx)
>- return -EINVAL;
>- inst = chan->ramin_grctx->instance >> 4;
>-
>- NV_WRITE(NV20_PGRAPH_CHANNEL_CTX_POINTER, inst);
>- NV_WRITE(NV20_PGRAPH_CHANNEL_CTX_XFER,
>- NV20_PGRAPH_CHANNEL_CTX_XFER_SAVE);
>-
>- return nouveau_graph_wait_idle(dev);
>-}
>-
>-int nv30_graph_init(struct drm_device *dev)
>-{
>- struct drm_nouveau_private *dev_priv = dev->dev_private;
>- uint32_t vramsz, tmp;
>- int ret, i;
>-
>- NV_WRITE(NV03_PMC_ENABLE, NV_READ(NV03_PMC_ENABLE) &
>- ~NV_PMC_ENABLE_PGRAPH);
>- NV_WRITE(NV03_PMC_ENABLE, NV_READ(NV03_PMC_ENABLE) |
>- NV_PMC_ENABLE_PGRAPH);
>-
>- /* Create Context Pointer Table */
>- dev_priv->ctx_table_size = 32 * 4;
>- if ((ret = nouveau_gpuobj_new_ref(dev, NULL, NULL, 0,
>- dev_priv->ctx_table_size, 16,
>- NVOBJ_FLAG_ZERO_ALLOC,
>- &dev_priv->ctx_table)))
>- return ret;
>-
>- NV_WRITE(NV10_PGRAPH_CHANNEL_CTX_TABLE,
>- dev_priv->ctx_table->instance >> 4);
>-
>- NV_WRITE(NV03_PGRAPH_INTR , 0xFFFFFFFF);
>- NV_WRITE(NV03_PGRAPH_INTR_EN, 0xFFFFFFFF);
>-
>- NV_WRITE(NV04_PGRAPH_DEBUG_0, 0xFFFFFFFF);
>- NV_WRITE(NV04_PGRAPH_DEBUG_0, 0x00000000);
>- NV_WRITE(NV04_PGRAPH_DEBUG_1, 0x401287c0);
>- NV_WRITE(0x400890, 0x01b463ff);
>- NV_WRITE(NV04_PGRAPH_DEBUG_3, 0xf3de0471);
>- NV_WRITE(NV10_PGRAPH_DEBUG_4, 0x00008000);
>- NV_WRITE(NV04_PGRAPH_LIMIT_VIOL_PIX, 0xf04bdff6);
>- NV_WRITE(0x400B80, 0x1003d888);
>- NV_WRITE(0x400098, 0x00000000);
>- NV_WRITE(0x40009C, 0x0005ad00);
>- NV_WRITE(0x400B88, 0x62ff00ff); // suspiciously like PGRAPH_DEBUG_2
>- NV_WRITE(0x4000a0, 0x00000000);
>- NV_WRITE(0x4000a4, 0x00000008);
>- NV_WRITE(0x4008a8, 0xb784a400);
>- NV_WRITE(0x400ba0, 0x002f8685);
>- NV_WRITE(0x400ba4, 0x00231f3f);
>- NV_WRITE(0x4008a4, 0x40000020);
>- NV_WRITE(0x400B84, 0x0c000000);
>- NV_WRITE(NV04_PGRAPH_DEBUG_2, 0x62ff0f7f);
>- NV_WRITE(0x4000c0, 0x00000016);
>- NV_WRITE(0x400780, 0x000014e4);
>-
>- /* copy tile info from PFB */
>- for (i=0; i<NV10_PFB_TILE__SIZE; i++) {
>- NV_WRITE(NV10_PGRAPH_TILE(i), NV_READ(NV10_PFB_TILE(i)));
>- NV_WRITE(NV10_PGRAPH_TLIMIT(i), NV_READ(NV10_PFB_TLIMIT(i)));
>- NV_WRITE(NV10_PGRAPH_TSIZE(i), NV_READ(NV10_PFB_TSIZE(i)));
>- NV_WRITE(NV10_PGRAPH_TSTATUS(i), NV_READ(NV10_PFB_TSTATUS(i)));
>- }
>-
>- NV_WRITE(NV10_PGRAPH_CTX_CONTROL, 0x10010100);
>- NV_WRITE(NV10_PGRAPH_STATE , 0xFFFFFFFF);
>- NV_WRITE(NV04_PGRAPH_FIFO , 0x00000001);
>-
>- /* begin RAM config */
>- vramsz = drm_get_resource_len(dev, 0) - 1;
>- NV_WRITE(0x4009A4, NV_READ(NV04_PFB_CFG0));
>- NV_WRITE(0x4009A8, NV_READ(NV04_PFB_CFG1));
>- NV_WRITE(0x400750, 0x00EA0000);
>- NV_WRITE(0x400754, NV_READ(NV04_PFB_CFG0));
>- NV_WRITE(0x400750, 0x00EA0004);
>- NV_WRITE(0x400754, NV_READ(NV04_PFB_CFG1));
>- NV_WRITE(0x400820, 0);
>- NV_WRITE(0x400824, 0);
>- NV_WRITE(0x400864, vramsz-1);
>- NV_WRITE(0x400868, vramsz-1);
>-
>- NV_WRITE(0x400B20, 0x00000000);
>- NV_WRITE(0x400B04, 0xFFFFFFFF);
>-
>- /* per-context state, doesn't belong here */
>- tmp = NV_READ(NV10_PGRAPH_SURFACE) & 0x0007ff00;
>- NV_WRITE(NV10_PGRAPH_SURFACE, tmp);
>- tmp = NV_READ(NV10_PGRAPH_SURFACE) | 0x00020100;
>- NV_WRITE(NV10_PGRAPH_SURFACE, tmp);
>-
>- NV_WRITE(NV03_PGRAPH_ABS_UCLIP_XMIN, 0);
>- NV_WRITE(NV03_PGRAPH_ABS_UCLIP_YMIN, 0);
>- NV_WRITE(NV03_PGRAPH_ABS_UCLIP_XMAX, 0x7fff);
>- NV_WRITE(NV03_PGRAPH_ABS_UCLIP_YMAX, 0x7fff);
>-
>- return 0;
>-}
>-
>-void nv30_graph_takedown(struct drm_device *dev)
>-{
>- struct drm_nouveau_private *dev_priv = dev->dev_private;
>-
>- nouveau_gpuobj_ref_del(dev, &dev_priv->ctx_table);
>-}
>-
>diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/nv40_fb.c linux-2.6.23.i686/drivers/char/drm/nv40_fb.c
>--- linux-2.6.23.i686.orig/drivers/char/drm/nv40_fb.c 2008-01-06 18:54:41.000000000 +0100
>+++ linux-2.6.23.i686/drivers/char/drm/nv40_fb.c 2008-01-06 09:24:57.000000000 +0100
>@@ -53,4 +53,3 @@ void
> nv40_fb_takedown(struct drm_device *dev)
> {
> }
>-
>diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/nv40_fifo.c linux-2.6.23.i686/drivers/char/drm/nv40_fifo.c
>--- linux-2.6.23.i686.orig/drivers/char/drm/nv40_fifo.c 2008-01-06 18:54:41.000000000 +0100
>+++ linux-2.6.23.i686/drivers/char/drm/nv40_fifo.c 2008-01-06 09:24:57.000000000 +0100
>@@ -135,7 +135,9 @@ nv40_fifo_load_context(struct nouveau_ch
> NV_WRITE(NV04_PFIFO_DMA_TIMESLICE, tmp);
>
> /* Set channel active, and in DMA mode */
>- NV_WRITE(NV03_PFIFO_CACHE1_PUSH1 , 0x00010000 | chan->id);
>+ NV_WRITE(NV03_PFIFO_CACHE1_PUSH1,
>+ NV03_PFIFO_CACHE1_PUSH1_DMA | chan->id);
>+
> /* Reset DMA_CTL_AT_INFO to INVALID */
> tmp = NV_READ(NV04_PFIFO_CACHE1_DMA_CTL) & ~(1<<31);
> NV_WRITE(NV04_PFIFO_CACHE1_DMA_CTL, tmp);
>@@ -205,4 +207,3 @@ nv40_fifo_init(struct drm_device *dev)
> NV_WRITE(NV04_PFIFO_DMA_TIMESLICE, 0x2101ffff);
> return 0;
> }
>-
>diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/nv40_graph.c linux-2.6.23.i686/drivers/char/drm/nv40_graph.c
>--- linux-2.6.23.i686.orig/drivers/char/drm/nv40_graph.c 2008-01-06 18:54:41.000000000 +0100
>+++ linux-2.6.23.i686/drivers/char/drm/nv40_graph.c 2008-01-06 09:24:57.000000000 +0100
>@@ -34,8 +34,10 @@
> * between the contexts
> */
> #define NV40_GRCTX_SIZE (175*1024)
>+#define NV41_GRCTX_SIZE (92*1024)
> #define NV43_GRCTX_SIZE (70*1024)
> #define NV46_GRCTX_SIZE (70*1024) /* probably ~64KiB */
>+#define NV47_GRCTX_SIZE (125*1024)
> #define NV49_GRCTX_SIZE (164640)
> #define NV4A_GRCTX_SIZE (64*1024)
> #define NV4B_GRCTX_SIZE (164640)
>@@ -188,11 +190,121 @@ nv40_graph_context_init(struct drm_devic
> }
>
> static void
>+nv41_graph_context_init(struct drm_device *dev, struct nouveau_gpuobj *ctx)
>+{
>+ struct drm_nouveau_private *dev_priv = dev->dev_private;
>+ int i;
>+
>+ INSTANCE_WR(ctx, 0x00000/4, ctx->im_pramin->start);
>+ INSTANCE_WR(ctx, 0x00000024/4, 0x0000ffff);
>+ INSTANCE_WR(ctx, 0x00000028/4, 0x0000ffff);
>+ INSTANCE_WR(ctx, 0x00000030/4, 0x00000001);
>+ INSTANCE_WR(ctx, 0x0000011c/4, 0x20010001);
>+ INSTANCE_WR(ctx, 0x00000120/4, 0x0f73ef00);
>+ INSTANCE_WR(ctx, 0x00000128/4, 0x02008821);
>+ for (i = 0x00000178; i <= 0x00000180; i += 4)
>+ INSTANCE_WR(ctx, i/4, 0x00000040);
>+ INSTANCE_WR(ctx, 0x00000188/4, 0x00000040);
>+ for (i = 0x00000194; i <= 0x000001b0; i += 4)
>+ INSTANCE_WR(ctx, i/4, 0x80000000);
>+ INSTANCE_WR(ctx, 0x000001d0/4, 0x0b0b0b0c);
>+ INSTANCE_WR(ctx, 0x00000340/4, 0x00040000);
>+ for (i = 0x00000350; i <= 0x0000035c; i += 4)
>+ INSTANCE_WR(ctx, i/4, 0x55555555);
>+ INSTANCE_WR(ctx, 0x00000388/4, 0x00000008);
>+ INSTANCE_WR(ctx, 0x0000039c/4, 0x00001010);
>+ INSTANCE_WR(ctx, 0x000003cc/4, 0x00000111);
>+ INSTANCE_WR(ctx, 0x000003d0/4, 0x00080060);
>+ INSTANCE_WR(ctx, 0x000003ec/4, 0x00000080);
>+ INSTANCE_WR(ctx, 0x000003f0/4, 0xffff0000);
>+ INSTANCE_WR(ctx, 0x000003f4/4, 0x00000001);
>+ INSTANCE_WR(ctx, 0x00000408/4, 0x46400000);
>+ INSTANCE_WR(ctx, 0x00000418/4, 0xffff0000);
>+ INSTANCE_WR(ctx, 0x00000424/4, 0x0fff0000);
>+ INSTANCE_WR(ctx, 0x00000428/4, 0x0fff0000);
>+ INSTANCE_WR(ctx, 0x00000430/4, 0x00011100);
>+ for (i = 0x0000044c; i <= 0x00000488; i += 4)
>+ INSTANCE_WR(ctx, i/4, 0x07ff0000);
>+ INSTANCE_WR(ctx, 0x00000494/4, 0x4b7fffff);
>+ INSTANCE_WR(ctx, 0x000004bc/4, 0x30201000);
>+ INSTANCE_WR(ctx, 0x000004c0/4, 0x70605040);
>+ INSTANCE_WR(ctx, 0x000004c4/4, 0xb8a89888);
>+ INSTANCE_WR(ctx, 0x000004c8/4, 0xf8e8d8c8);
>+ INSTANCE_WR(ctx, 0x000004dc/4, 0x40100000);
>+ INSTANCE_WR(ctx, 0x000004f8/4, 0x0000ffff);
>+ INSTANCE_WR(ctx, 0x0000052c/4, 0x435185d6);
>+ INSTANCE_WR(ctx, 0x00000530/4, 0x2155b699);
>+ INSTANCE_WR(ctx, 0x00000534/4, 0xfedcba98);
>+ INSTANCE_WR(ctx, 0x00000538/4, 0x00000098);
>+ INSTANCE_WR(ctx, 0x00000548/4, 0xffffffff);
>+ INSTANCE_WR(ctx, 0x0000054c/4, 0x00ff7000);
>+ INSTANCE_WR(ctx, 0x00000550/4, 0x0000ffff);
>+ INSTANCE_WR(ctx, 0x00000560/4, 0x00ff0000);
>+ INSTANCE_WR(ctx, 0x00000598/4, 0x00ffff00);
>+ for (i = 0x000005dc; i <= 0x00000618; i += 4)
>+ INSTANCE_WR(ctx, i/4, 0x00018488);
>+ for (i = 0x0000061c; i <= 0x00000658; i += 4)
>+ INSTANCE_WR(ctx, i/4, 0x00028202);
>+ for (i = 0x0000069c; i <= 0x000006d8; i += 4)
>+ INSTANCE_WR(ctx, i/4, 0x0000aae4);
>+ for (i = 0x000006dc; i <= 0x00000718; i += 4)
>+ INSTANCE_WR(ctx, i/4, 0x01012000);
>+ for (i = 0x0000071c; i <= 0x00000758; i += 4)
>+ INSTANCE_WR(ctx, i/4, 0x00080008);
>+ for (i = 0x0000079c; i <= 0x000007d8; i += 4)
>+ INSTANCE_WR(ctx, i/4, 0x00100008);
>+ for (i = 0x0000082c; i <= 0x00000838; i += 4)
>+ INSTANCE_WR(ctx, i/4, 0x0001bc80);
>+ for (i = 0x0000083c; i <= 0x00000848; i += 4)
>+ INSTANCE_WR(ctx, i/4, 0x00000202);
>+ for (i = 0x0000085c; i <= 0x00000868; i += 4)
>+ INSTANCE_WR(ctx, i/4, 0x00000008);
>+ for (i = 0x0000087c; i <= 0x00000888; i += 4)
>+ INSTANCE_WR(ctx, i/4, 0x00080008);
>+ INSTANCE_WR(ctx, 0x0000089c/4, 0x00000002);
>+ INSTANCE_WR(ctx, 0x000008d0/4, 0x00000021);
>+ INSTANCE_WR(ctx, 0x000008d4/4, 0x030c30c3);
>+ INSTANCE_WR(ctx, 0x000008e0/4, 0x3e020200);
>+ INSTANCE_WR(ctx, 0x000008e4/4, 0x00ffffff);
>+ INSTANCE_WR(ctx, 0x000008e8/4, 0x20103f00);
>+ INSTANCE_WR(ctx, 0x000008f4/4, 0x00020000);
>+ INSTANCE_WR(ctx, 0x0000092c/4, 0x00008100);
>+ INSTANCE_WR(ctx, 0x000009b8/4, 0x00000001);
>+ INSTANCE_WR(ctx, 0x000009fc/4, 0x00001001);
>+ INSTANCE_WR(ctx, 0x00000a04/4, 0x00000003);
>+ INSTANCE_WR(ctx, 0x00000a08/4, 0x00888001);
>+ INSTANCE_WR(ctx, 0x00000aac/4, 0x00000005);
>+ INSTANCE_WR(ctx, 0x00000ab8/4, 0x0000ffff);
>+ for (i = 0x00000ad4; i <= 0x00000ae4; i += 4)
>+ INSTANCE_WR(ctx, i/4, 0x00005555);
>+ INSTANCE_WR(ctx, 0x00000ae8/4, 0x00000001);
>+ INSTANCE_WR(ctx, 0x00000b20/4, 0x00000001);
>+ for (i = 0x00002ee8; i <= 0x00002f60; i += 8)
>+ INSTANCE_WR(ctx, i/4, 0x3f800000);
>+ for (i = 0x00005168; i <= 0x00007358; i += 24)
>+ INSTANCE_WR(ctx, i/4, 0x00000001);
>+ for (i = 0x00007368; i <= 0x00007758; i += 16)
>+ INSTANCE_WR(ctx, i/4, 0x3f800000);
>+ for (i = 0x0000a068; i <= 0x0000c258; i += 24)
>+ INSTANCE_WR(ctx, i/4, 0x00000001);
>+ for (i = 0x0000c268; i <= 0x0000c658; i += 16)
>+ INSTANCE_WR(ctx, i/4, 0x3f800000);
>+ for (i = 
0x0000ef68; i <= 0x00011158; i += 24) >+ INSTANCE_WR(ctx, i/4, 0x00000001); >+ for (i = 0x00011168; i <= 0x00011558; i += 16) >+ INSTANCE_WR(ctx, i/4, 0x3f800000); >+ for (i = 0x00013e68; i <= 0x00016058; i += 24) >+ INSTANCE_WR(ctx, i/4, 0x00000001); >+ for (i = 0x00016068; i <= 0x00016458; i += 16) >+ INSTANCE_WR(ctx, i/4, 0x3f800000); >+}; >+ >+static void > nv43_graph_context_init(struct drm_device *dev, struct nouveau_gpuobj *ctx) > { > struct drm_nouveau_private *dev_priv = dev->dev_private; > int i; >- >+ > INSTANCE_WR(ctx, 0x00000/4, ctx->im_pramin->start); > INSTANCE_WR(ctx, 0x00024/4, 0x0000ffff); > INSTANCE_WR(ctx, 0x00028/4, 0x0000ffff); >@@ -454,6 +566,136 @@ nv46_graph_context_init(struct drm_devic > INSTANCE_WR(ctx, i/4, 0x3f800000); > } > >+/* This may only work on 7800 AGP cards, will include a warning */ >+static void >+nv47_graph_context_init(struct drm_device *dev, struct nouveau_gpuobj *ctx) >+{ >+ struct drm_nouveau_private *dev_priv = dev->dev_private; >+ int i; >+ >+ INSTANCE_WR(ctx, 0x00000000/4, ctx->im_pramin->start); >+ INSTANCE_WR(ctx, 0x00000024/4, 0x0000ffff); >+ INSTANCE_WR(ctx, 0x00000028/4, 0x0000ffff); >+ INSTANCE_WR(ctx, 0x00000030/4, 0x00000001); >+ INSTANCE_WR(ctx, 0x0000011c/4, 0x20010001); >+ INSTANCE_WR(ctx, 0x00000120/4, 0x0f73ef00); >+ INSTANCE_WR(ctx, 0x00000128/4, 0x02008821); >+ INSTANCE_WR(ctx, 0x00000178/4, 0x00000040); >+ INSTANCE_WR(ctx, 0x0000017c/4, 0x00000040); >+ INSTANCE_WR(ctx, 0x00000180/4, 0x00000040); >+ INSTANCE_WR(ctx, 0x00000188/4, 0x00000040); >+ for (i=0x00000194; i<=0x000001b0; i+=4) >+ INSTANCE_WR(ctx, i/4, 0x80000000); >+ INSTANCE_WR(ctx, 0x000001d0/4, 0x0b0b0b0c); >+ INSTANCE_WR(ctx, 0x00000340/4, 0x00040000); >+ INSTANCE_WR(ctx, 0x00000350/4, 0x55555555); >+ INSTANCE_WR(ctx, 0x00000354/4, 0x55555555); >+ INSTANCE_WR(ctx, 0x00000358/4, 0x55555555); >+ INSTANCE_WR(ctx, 0x0000035c/4, 0x55555555); >+ INSTANCE_WR(ctx, 0x00000388/4, 0x00000008); >+ INSTANCE_WR(ctx, 0x0000039c/4, 0x00001010); >+ for 
(i=0x000003c0; i<=0x000003fc; i+=4) >+ INSTANCE_WR(ctx, i/4, 0x00000111); >+ INSTANCE_WR(ctx, 0x00000454/4, 0x00000111); >+ INSTANCE_WR(ctx, 0x00000458/4, 0x00080060); >+ INSTANCE_WR(ctx, 0x00000474/4, 0x00000080); >+ INSTANCE_WR(ctx, 0x00000478/4, 0xffff0000); >+ INSTANCE_WR(ctx, 0x0000047c/4, 0x00000001); >+ INSTANCE_WR(ctx, 0x00000490/4, 0x46400000); >+ INSTANCE_WR(ctx, 0x000004a0/4, 0xffff0000); >+ for (i=0x000004a4; i<=0x000004e0; i+=4) >+ INSTANCE_WR(ctx, i/4, 0x88888888); >+ INSTANCE_WR(ctx, 0x000004f4/4, 0x0fff0000); >+ INSTANCE_WR(ctx, 0x000004f8/4, 0x0fff0000); >+ INSTANCE_WR(ctx, 0x00000500/4, 0x00011100); >+ for (i=0x0000051c; i<=0x00000558; i+=4) >+ INSTANCE_WR(ctx, i/4, 0x07ff0000); >+ INSTANCE_WR(ctx, 0x00000564/4, 0x4b7fffff); >+ INSTANCE_WR(ctx, 0x0000058c/4, 0x30201000); >+ INSTANCE_WR(ctx, 0x00000590/4, 0x70605040); >+ INSTANCE_WR(ctx, 0x00000594/4, 0xb8a89888); >+ INSTANCE_WR(ctx, 0x00000598/4, 0xf8e8d8c8); >+ INSTANCE_WR(ctx, 0x000005ac/4, 0x40100000); >+ INSTANCE_WR(ctx, 0x000005c8/4, 0x0000ffff); >+ INSTANCE_WR(ctx, 0x000005fc/4, 0x435185d6); >+ INSTANCE_WR(ctx, 0x00000600/4, 0x2155b699); >+ INSTANCE_WR(ctx, 0x00000604/4, 0xfedcba98); >+ INSTANCE_WR(ctx, 0x00000608/4, 0x00000098); >+ INSTANCE_WR(ctx, 0x00000618/4, 0xffffffff); >+ INSTANCE_WR(ctx, 0x0000061c/4, 0x00ff7000); >+ INSTANCE_WR(ctx, 0x00000620/4, 0x0000ffff); >+ INSTANCE_WR(ctx, 0x00000630/4, 0x00ff0000); >+ INSTANCE_WR(ctx, 0x0000066c/4, 0x00ffff00); >+ for (i=0x000006b0; i<=0x000006ec; i+=4) >+ INSTANCE_WR(ctx, i/4, 0x00018488); >+ for (i=0x000006f0; i<=0x0000072c; i+=4) >+ INSTANCE_WR(ctx, i/4, 0x00028202); >+ for (i=0x00000770; i<=0x000007ac; i+=4) >+ INSTANCE_WR(ctx, i/4, 0x0000aae4); >+ for (i=0x000007b0; i<=0x000007ec; i+=4) >+ INSTANCE_WR(ctx, i/4, 0x01012000); >+ for (i=0x000007f0; i<=0x0000082c; i+=4) >+ INSTANCE_WR(ctx, i/4, 0x00080008); >+ for (i=0x00000870; i<=0x000008ac; i+=4) >+ INSTANCE_WR(ctx, i/4, 0x00100008); >+ INSTANCE_WR(ctx, 0x00000900/4, 0x0001bc80); >+ 
INSTANCE_WR(ctx, 0x00000904/4, 0x0001bc80); >+ INSTANCE_WR(ctx, 0x00000908/4, 0x0001bc80); >+ INSTANCE_WR(ctx, 0x0000090c/4, 0x0001bc80); >+ INSTANCE_WR(ctx, 0x00000910/4, 0x00000202); >+ INSTANCE_WR(ctx, 0x00000914/4, 0x00000202); >+ INSTANCE_WR(ctx, 0x00000918/4, 0x00000202); >+ INSTANCE_WR(ctx, 0x0000091c/4, 0x00000202); >+ for (i=0x00000930; i<=0x0000095c; i+=4) >+ INSTANCE_WR(ctx, i/4, 0x00000008); >+ INSTANCE_WR(ctx, 0x00000970/4, 0x00000002); >+ INSTANCE_WR(ctx, 0x000009a4/4, 0x00000021); >+ INSTANCE_WR(ctx, 0x000009a8/4, 0x030c30c3); >+ INSTANCE_WR(ctx, 0x000009b4/4, 0x3e020200); >+ INSTANCE_WR(ctx, 0x000009b8/4, 0x00ffffff); >+ INSTANCE_WR(ctx, 0x000009bc/4, 0x40103f00); >+ INSTANCE_WR(ctx, 0x000009c8/4, 0x00040000); >+ INSTANCE_WR(ctx, 0x00000a00/4, 0x00008100); >+ INSTANCE_WR(ctx, 0x00000a8c/4, 0x00000001); >+ INSTANCE_WR(ctx, 0x00000ad0/4, 0x00001001); >+ INSTANCE_WR(ctx, 0x00000adc/4, 0x00000003); >+ INSTANCE_WR(ctx, 0x00000ae0/4, 0x00888001); >+ for (i=0x00000b10; i<=0x00000b8c; i+=4) >+ INSTANCE_WR(ctx, i/4, 0xffffffff); >+ INSTANCE_WR(ctx, 0x00000bb4/4, 0x00000005); >+ INSTANCE_WR(ctx, 0x00000bc0/4, 0x0000ffff); >+ for (i=0x00000bdc; i<=0x00000bf8; i+=4) >+ INSTANCE_WR(ctx, i/4, 0x00005555); >+ INSTANCE_WR(ctx, 0x00000bfc/4, 0x00000001); >+ INSTANCE_WR(ctx, 0x00000c34/4, 0x00000001); >+ INSTANCE_WR(ctx, 0x00000c38/4, 0x08e00001); >+ INSTANCE_WR(ctx, 0x00000c3c/4, 0x000e3000); >+ for (i=0x00003000; i<=0x00003078; i+=8) >+ INSTANCE_WR(ctx, i/4, 0x3f800000); >+ for (i=0x00004dc0; i<=0x00006fb0; i+=24) >+ INSTANCE_WR(ctx, i/4, 0x00000001); >+ for (i=0x00006fc0; i<=0x000073b0; i+=16) >+ INSTANCE_WR(ctx, i/4, 0x3f800000); >+ for (i=0x00009800; i<=0x0000b9f0; i+=24) >+ INSTANCE_WR(ctx, i/4, 0x00000001); >+ for (i=0x0000ba00; i<=0x00010430; i+=24) >+ INSTANCE_WR(ctx, i/4, 0x3f800000); >+ for (i=0x00010440; i<=0x00010830; i+=16) >+ INSTANCE_WR(ctx, i/4, 0x3f800000); >+ for (i=0x00012c80; i<=0x00014e70; i+=24) >+ INSTANCE_WR(ctx, i/4, 0x00000001); >+ for 
(i=0x00014e80; i<=0x00015270; i+=16) >+ INSTANCE_WR(ctx, i/4, 0x3f800000); >+ for (i=0x000176c0; i<=0x000198b0; i+=24) >+ INSTANCE_WR(ctx, i/4, 0x00000001); >+ for (i=0x000198c0; i<=0x00019cb0; i+=16) >+ INSTANCE_WR(ctx, i/4, 0x3f800000); >+ for (i=0x0001c100; i<=0x0001e2f0; i+=24) >+ INSTANCE_WR(ctx, i/4, 0x00000001); >+ for (i=0x0001e300; i<=0x0001e6f0; i+=16) >+ INSTANCE_WR(ctx, i/4, 0x3f800000); >+} >+ > static void > nv49_graph_context_init(struct drm_device *dev, struct nouveau_gpuobj *ctx) > { >@@ -1237,6 +1479,11 @@ nv40_graph_create_context(struct nouveau > ctx_size = NV40_GRCTX_SIZE; > ctx_init = nv40_graph_context_init; > break; >+ case 0x41: >+ case 0x42: >+ ctx_size = NV41_GRCTX_SIZE; >+ ctx_init = nv41_graph_context_init; >+ break; > case 0x43: > ctx_size = NV43_GRCTX_SIZE; > ctx_init = nv43_graph_context_init; >@@ -1245,6 +1492,11 @@ nv40_graph_create_context(struct nouveau > ctx_size = NV46_GRCTX_SIZE; > ctx_init = nv46_graph_context_init; > break; >+ case 0x47: >+ DRM_INFO("NV47 warning: If your card behaves strangely, please come to the irc channel\n"); >+ ctx_size = NV47_GRCTX_SIZE; >+ ctx_init = nv47_graph_context_init; >+ break; > case 0x49: > ctx_size = NV49_GRCTX_SIZE; > ctx_init = nv49_graph_context_init; >@@ -1303,7 +1555,7 @@ nv40_graph_transfer_context(struct drm_d > tmp |= save ? 
NV40_PGRAPH_CTXCTL_0310_XFER_SAVE : > NV40_PGRAPH_CTXCTL_0310_XFER_LOAD; > NV_WRITE(NV40_PGRAPH_CTXCTL_0310, tmp); >- >+ > tmp = NV_READ(NV40_PGRAPH_CTXCTL_0304); > tmp |= NV40_PGRAPH_CTXCTL_0304_XFER_CTX; > NV_WRITE(NV40_PGRAPH_CTXCTL_0304, tmp); >@@ -1431,6 +1683,37 @@ static uint32_t nv40_ctx_voodoo[] = { > ~0 > }; > >+static uint32_t nv41_ctx_voodoo[] = { >+ 0x00400889, 0x00200000, 0x0060000a, 0x00200000, 0x00300000, 0x00800001, >+ 0x00700009, 0x0060000e, 0x00400d64, 0x00400d05, 0x00408f65, 0x00409306, >+ 0x0040a068, 0x0040198f, 0x00200001, 0x0060000a, 0x00700080, 0x00104042, >+ 0x00200001, 0x0060000a, 0x00700000, 0x001040c5, 0x00401826, 0x00401968, >+ 0x0060000d, 0x00200000, 0x0060000a, 0x00700000, 0x00106000, 0x00700080, >+ 0x004020e6, 0x007000a0, 0x00500060, 0x00200001, 0x0060000a, 0x0011814d, >+ 0x00110158, 0x00105401, 0x0020003a, 0x00100051, 0x001040c5, 0x0010c1c4, >+ 0x001041c9, 0x0010c1dc, 0x00150210, 0x0012c225, 0x00108238, 0x0010823e, >+ 0x001242c0, 0x00200040, 0x00100280, 0x00128100, 0x00128120, 0x00128143, >+ 0x0011415f, 0x0010815c, 0x0010c140, 0x00104029, 0x00110400, 0x00104d10, >+ 0x001046ec, 0x00500060, 0x00404087, 0x0060000d, 0x004079e6, 0x002000f1, >+ 0x0060000a, 0x00148653, 0x00104668, 0x0010c66d, 0x00120682, 0x0011068b, >+ 0x00168691, 0x001046ae, 0x001046b0, 0x001206b4, 0x001046c4, 0x001146c6, >+ 0x00200020, 0x001006cc, 0x001046ed, 0x001246f0, 0x002000c0, 0x00100700, >+ 0x0010c3d7, 0x001043e1, 0x00500060, 0x00200233, 0x0060000a, 0x00104800, >+ 0x00108901, 0x00124920, 0x0020001f, 0x00100940, 0x00140965, 0x00148a00, >+ 0x00108a14, 0x00200020, 0x00100b00, 0x00134b2c, 0x0010cd00, 0x0010cd04, >+ 0x00114d08, 0x00104d80, 0x00104e00, 0x0012d600, 0x00105c00, 0x00104f06, >+ 0x002002d2, 0x0060000a, 0x00300000, 0x00200680, 0x00407200, 0x00200684, >+ 0x00800001, 0x00200b1a, 0x0060000a, 0x00206380, 0x0040788a, 0x00201480, >+ 0x00800041, 0x00408900, 0x00600006, 0x004085e6, 0x00700080, 0x0020007a, >+ 0x0060000a, 0x00104280, 0x002002d2, 0x0060000a, 0x00200004, 
0x00800001, >+ 0x00700000, 0x00200000, 0x0060000a, 0x00106002, 0x0040a068, 0x00700000, >+ 0x00200000, 0x0060000a, 0x00106002, 0x00700080, 0x00400a68, 0x00500060, >+ 0x00600007, 0x00409388, 0x0060000f, 0x00500060, 0x00200000, 0x0060000a, >+ 0x00700000, 0x00106001, 0x00910880, 0x00901ffe, 0x00940400, 0x00200020, >+ 0x0060000b, 0x00500069, 0x0060000c, 0x00402168, 0x0040a206, 0x0040a305, >+ 0x00600009, 0x00700005, 0x00700006, 0x0060000e, ~0 >+}; >+ > static uint32_t nv43_ctx_voodoo[] = { > 0x00400889, 0x00200000, 0x0060000a, 0x00200000, 0x00300000, 0x00800001, > 0x00700009, 0x0060000e, 0x00400d64, 0x00400d05, 0x00409565, 0x00409a06, >@@ -1528,6 +1811,38 @@ static uint32_t nv46_ctx_voodoo[] = { > 0x00600009, 0x00700005, 0x00700006, 0x0060000e, ~0 > }; > >+static uint32_t nv47_ctx_voodoo[] = { >+ 0x00400889, 0x00200000, 0x0060000a, 0x00200000, 0x00300000, 0x00800001, >+ 0x00700009, 0x0060000e, 0x00400d64, 0x00400d05, 0x00409265, 0x00409606, >+ 0x0040a368, 0x0040198f, 0x00200001, 0x0060000a, 0x00700080, 0x00104042, >+ 0x00200001, 0x0060000a, 0x00700000, 0x001040c5, 0x00401826, 0x00401968, >+ 0x0060000d, 0x00200000, 0x0060000a, 0x00700000, 0x00106000, 0x00700080, >+ 0x004020e6, 0x007000a0, 0x00500060, 0x00200001, 0x0060000a, 0x0011814d, >+ 0x00110158, 0x00105401, 0x0020003a, 0x00100051, 0x001040c5, 0x0010c1c4, >+ 0x001041c9, 0x0010c1dc, 0x00150210, 0x0012c225, 0x00108238, 0x0010823e, >+ 0x001242c0, 0x00200040, 0x00100280, 0x00128100, 0x00128120, 0x00128143, >+ 0x0011415f, 0x0010815c, 0x0010c140, 0x00104029, 0x00110400, 0x00104d12, >+ 0x00500060, 0x00403f87, 0x0060000d, 0x00407ce6, 0x002000f0, 0x0060000a, >+ 0x00200020, 0x00100620, 0x00154650, 0x00104668, 0x0017466d, 0x0011068b, >+ 0x00168691, 0x001046ae, 0x001046b0, 0x001206b4, 0x001046c4, 0x001146c6, >+ 0x00200022, 0x001006cc, 0x001246f0, 0x002000c0, 0x00100700, 0x0010c3d7, >+ 0x001043e1, 0x00500060, 0x00200268, 0x0060000a, 0x00104800, 0x00108901, >+ 0x00124920, 0x0020001f, 0x00100940, 0x00140965, 0x00144a00, 0x00104a19, 
>+ 0x0010ca1c, 0x00110b00, 0x00200028, 0x00100b08, 0x00134c2e, 0x0010cd00, >+ 0x0010cd04, 0x00120d08, 0x00104d80, 0x00104e00, 0x0012d600, 0x00105c00, >+ 0x00104f06, 0x00105406, 0x00105709, 0x00200318, 0x0060000a, 0x00300000, >+ 0x00200680, 0x00407500, 0x00200684, 0x00800001, 0x00200b60, 0x0060000a, >+ 0x00209540, 0x00407b8a, 0x00201350, 0x00800041, 0x00408c00, 0x00600006, >+ 0x004088e6, 0x00700080, 0x0020007a, 0x0060000a, 0x00104280, 0x00200318, >+ 0x0060000a, 0x00200004, 0x00800001, 0x00700000, 0x00200000, 0x0060000a, >+ 0x00106002, 0x0040a368, 0x00700000, 0x00200000, 0x0060000a, 0x00106002, >+ 0x00700080, 0x00400a68, 0x00500060, 0x00600007, 0x00409688, 0x0060000f, >+ 0x00500060, 0x00200000, 0x0060000a, 0x00700000, 0x00106001, 0x0091a880, >+ 0x00901ffe, 0x10940000, 0x00200020, 0x0060000b, 0x00500069, 0x0060000c, >+ 0x00402168, 0x0040a506, 0x0040a605, 0x00600009, 0x00700005, 0x00700006, >+ 0x0060000e, ~0 >+}; >+ > //this is used for nv49 and nv4b > static uint32_t nv49_4b_ctx_voodoo[] ={ > 0x00400564, 0x00400505, 0x00408165, 0x00408206, 0x00409e68, 0x00200020, >@@ -1562,35 +1877,35 @@ static uint32_t nv49_4b_ctx_voodoo[] ={ > > > static uint32_t nv4a_ctx_voodoo[] = { >- 0x00400889, 0x00200000, 0x0060000a, 0x00200000, 0x00300000, 0x00800001, >- 0x00700009, 0x0060000e, 0x00400d64, 0x00400d05, 0x00409965, 0x00409e06, >- 0x0040ac68, 0x00200000, 0x0060000a, 0x00700000, 0x00106000, 0x00700080, >- 0x004014e6, 0x007000a0, 0x00401a84, 0x00700082, 0x00600001, 0x00500061, >- 0x00600002, 0x00401b68, 0x00500060, 0x00200001, 0x0060000a, 0x0011814d, >- 0x00110158, 0x00105401, 0x0020003a, 0x00100051, 0x001040c5, 0x0010c1c4, >- 0x001041c9, 0x0010c1dc, 0x00150210, 0x0012c225, 0x00108238, 0x0010823e, >- 0x001242c0, 0x00200040, 0x00100280, 0x00128100, 0x00128120, 0x00128143, >- 0x0011415f, 0x0010815c, 0x0010c140, 0x00104029, 0x00110400, 0x00104d10, >- 0x001046ec, 0x00500060, 0x00403a87, 0x0060000d, 0x00407de6, 0x002000f1, >- 0x0060000a, 0x00148653, 0x00104668, 0x0010c66d, 0x00120682, 
0x0011068b, >- 0x00168691, 0x001046ae, 0x001046b0, 0x001206b4, 0x001046c4, 0x001146c6, >- 0x001646cc, 0x001186e6, 0x001046ed, 0x001246f0, 0x002000c0, 0x00100700, >- 0x0010c3d7, 0x001043e1, 0x00500060, 0x00405800, 0x00405884, 0x00600003, >- 0x00500067, 0x00600008, 0x00500060, 0x00700082, 0x00200232, 0x0060000a, >- 0x00104800, 0x00108901, 0x00104910, 0x00124920, 0x0020001f, 0x00100940, >- 0x00140965, 0x00148a00, 0x00108a14, 0x00160b00, 0x00134b2c, 0x0010cd00, >- 0x0010cd04, 0x0010cd08, 0x00104d80, 0x00104e00, 0x0012d600, 0x00105c00, >- 0x00104f06, 0x002002c8, 0x0060000a, 0x00300000, 0x00200080, 0x00407300, >- 0x00200084, 0x00800001, 0x00200510, 0x0060000a, 0x002037e0, 0x0040798a, >- 0x00201320, 0x00800029, 0x00407d84, 0x00201560, 0x00800002, 0x00409100, >- 0x00600006, 0x00700003, 0x00408ae6, 0x00700080, 0x0020007a, 0x0060000a, >- 0x00104280, 0x002002c8, 0x0060000a, 0x00200004, 0x00800001, 0x00700000, >- 0x00200000, 0x0060000a, 0x00106002, 0x0040ac84, 0x00700002, 0x00600004, >- 0x0040ac68, 0x00700000, 0x00200000, 0x0060000a, 0x00106002, 0x00700080, >- 0x00400a84, 0x00700002, 0x00400a68, 0x00500060, 0x00600007, 0x00409d88, >- 0x0060000f, 0x00000000, 0x00500060, 0x00200000, 0x0060000a, 0x00700000, >- 0x00106001, 0x00700083, 0x00910880, 0x00901ffe, 0x01940000, 0x00200020, >- 0x0060000b, 0x00500069, 0x0060000c, 0x00401b68, 0x0040ae06, 0x0040af05, >+ 0x00400889, 0x00200000, 0x0060000a, 0x00200000, 0x00300000, 0x00800001, >+ 0x00700009, 0x0060000e, 0x00400d64, 0x00400d05, 0x00409965, 0x00409e06, >+ 0x0040ac68, 0x00200000, 0x0060000a, 0x00700000, 0x00106000, 0x00700080, >+ 0x004014e6, 0x007000a0, 0x00401a84, 0x00700082, 0x00600001, 0x00500061, >+ 0x00600002, 0x00401b68, 0x00500060, 0x00200001, 0x0060000a, 0x0011814d, >+ 0x00110158, 0x00105401, 0x0020003a, 0x00100051, 0x001040c5, 0x0010c1c4, >+ 0x001041c9, 0x0010c1dc, 0x00150210, 0x0012c225, 0x00108238, 0x0010823e, >+ 0x001242c0, 0x00200040, 0x00100280, 0x00128100, 0x00128120, 0x00128143, >+ 0x0011415f, 0x0010815c, 
0x0010c140, 0x00104029, 0x00110400, 0x00104d10, >+ 0x001046ec, 0x00500060, 0x00403a87, 0x0060000d, 0x00407de6, 0x002000f1, >+ 0x0060000a, 0x00148653, 0x00104668, 0x0010c66d, 0x00120682, 0x0011068b, >+ 0x00168691, 0x001046ae, 0x001046b0, 0x001206b4, 0x001046c4, 0x001146c6, >+ 0x001646cc, 0x001186e6, 0x001046ed, 0x001246f0, 0x002000c0, 0x00100700, >+ 0x0010c3d7, 0x001043e1, 0x00500060, 0x00405800, 0x00405884, 0x00600003, >+ 0x00500067, 0x00600008, 0x00500060, 0x00700082, 0x00200232, 0x0060000a, >+ 0x00104800, 0x00108901, 0x00104910, 0x00124920, 0x0020001f, 0x00100940, >+ 0x00140965, 0x00148a00, 0x00108a14, 0x00160b00, 0x00134b2c, 0x0010cd00, >+ 0x0010cd04, 0x0010cd08, 0x00104d80, 0x00104e00, 0x0012d600, 0x00105c00, >+ 0x00104f06, 0x002002c8, 0x0060000a, 0x00300000, 0x00200080, 0x00407300, >+ 0x00200084, 0x00800001, 0x00200510, 0x0060000a, 0x002037e0, 0x0040798a, >+ 0x00201320, 0x00800029, 0x00407d84, 0x00201560, 0x00800002, 0x00409100, >+ 0x00600006, 0x00700003, 0x00408ae6, 0x00700080, 0x0020007a, 0x0060000a, >+ 0x00104280, 0x002002c8, 0x0060000a, 0x00200004, 0x00800001, 0x00700000, >+ 0x00200000, 0x0060000a, 0x00106002, 0x0040ac84, 0x00700002, 0x00600004, >+ 0x0040ac68, 0x00700000, 0x00200000, 0x0060000a, 0x00106002, 0x00700080, >+ 0x00400a84, 0x00700002, 0x00400a68, 0x00500060, 0x00600007, 0x00409d88, >+ 0x0060000f, 0x00000000, 0x00500060, 0x00200000, 0x0060000a, 0x00700000, >+ 0x00106001, 0x00700083, 0x00910880, 0x00901ffe, 0x01940000, 0x00200020, >+ 0x0060000b, 0x00500069, 0x0060000c, 0x00401b68, 0x0040ae06, 0x0040af05, > 0x00600009, 0x00700005, 0x00700006, 0x0060000e, ~0 > }; > >@@ -1683,9 +1998,12 @@ nv40_graph_init(struct drm_device *dev) > > switch (dev_priv->chipset) { > case 0x40: ctx_voodoo = nv40_ctx_voodoo; break; >+ case 0x41: >+ case 0x42: ctx_voodoo = nv41_ctx_voodoo; break; > case 0x43: ctx_voodoo = nv43_ctx_voodoo; break; > case 0x44: ctx_voodoo = nv44_ctx_voodoo; break; > case 0x46: ctx_voodoo = nv46_ctx_voodoo; break; >+ case 0x47: ctx_voodoo = 
nv47_ctx_voodoo; break; > case 0x49: ctx_voodoo = nv49_4b_ctx_voodoo; break; > case 0x4a: ctx_voodoo = nv4a_ctx_voodoo; break; > case 0x4b: ctx_voodoo = nv49_4b_ctx_voodoo; break; >@@ -1708,7 +2026,7 @@ nv40_graph_init(struct drm_device *dev) > NV_WRITE(NV40_PGRAPH_CTXCTL_UCODE_DATA, ctx_voodoo[i]); > i++; > } >- } >+ } > > /* No context present currently */ > NV_WRITE(NV40_PGRAPH_CTXCTL_CUR, 0x00000000); >@@ -1903,4 +2221,3 @@ nv40_graph_init(struct drm_device *dev) > void nv40_graph_takedown(struct drm_device *dev) > { > } >- >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/nv40_mc.c linux-2.6.23.i686/drivers/char/drm/nv40_mc.c >--- linux-2.6.23.i686.orig/drivers/char/drm/nv40_mc.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/nv40_mc.c 2008-01-06 09:24:57.000000000 +0100 >@@ -36,4 +36,3 @@ void > nv40_mc_takedown(struct drm_device *dev) > { > } >- >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/nv50_fifo.c linux-2.6.23.i686/drivers/char/drm/nv50_fifo.c >--- linux-2.6.23.i686.orig/drivers/char/drm/nv50_fifo.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/nv50_fifo.c 2008-01-06 09:24:57.000000000 +0100 >@@ -213,6 +213,15 @@ nv50_fifo_takedown(struct drm_device *de > } > > int >+nv50_fifo_channel_id(struct drm_device *dev) >+{ >+ struct drm_nouveau_private *dev_priv = dev->dev_private; >+ >+ return (NV_READ(NV03_PFIFO_CACHE1_PUSH1) & >+ NV50_PFIFO_CACHE1_PUSH1_CHID_MASK); >+} >+ >+int > nv50_fifo_create_context(struct nouveau_channel *chan) > { > struct drm_device *dev = chan->dev; >@@ -324,4 +333,3 @@ nv50_fifo_save_context(struct nouveau_ch > DRM_ERROR("stub!\n"); > return 0; > } >- >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/nv50_graph.c linux-2.6.23.i686/drivers/char/drm/nv50_graph.c >--- linux-2.6.23.i686.orig/drivers/char/drm/nv50_graph.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/nv50_graph.c 2008-01-06 09:24:57.000000000 +0100 >@@ -177,7 +177,7 @@ 
nv50_graph_init_ctxctl(struct drm_device > NV_WRITE(NV20_PGRAPH_CHANNEL_CTX_POINTER, 0); > } > >-int >+int > nv50_graph_init(struct drm_device *dev) > { > DRM_DEBUG("\n"); >@@ -262,7 +262,7 @@ nv50_graph_transfer_context(struct drm_d > NV_WRITE(NV20_PGRAPH_CHANNEL_CTX_POINTER, inst | (1<<31)); > NV_WRITE(0x400824, NV_READ(0x400824) | > (save ? NV40_PGRAPH_CTXCTL_0310_XFER_SAVE : >- NV40_PGRAPH_CTXCTL_0310_XFER_LOAD)); >+ NV40_PGRAPH_CTXCTL_0310_XFER_LOAD)); > NV_WRITE(NV40_PGRAPH_CTXCTL_0304, NV40_PGRAPH_CTXCTL_0304_XFER_CTX); > > for (i = 0; i < tv; i++) { >@@ -313,4 +313,3 @@ nv50_graph_save_context(struct nouveau_c > > return nv50_graph_transfer_context(dev, inst, 1); > } >- >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/nv50_instmem.c linux-2.6.23.i686/drivers/char/drm/nv50_instmem.c >--- linux-2.6.23.i686.orig/drivers/char/drm/nv50_instmem.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/nv50_instmem.c 2008-01-06 09:24:57.000000000 +0100 >@@ -69,7 +69,7 @@ nv50_instmem_init(struct drm_device *dev > return -ENOMEM; > dev_priv->Engine.instmem.priv = priv; > >- /* Reserve the last MiB of VRAM, we should probably try to avoid >+ /* Reserve the last MiB of VRAM, we should probably try to avoid > * setting up the below tables over the top of the VBIOS image at > * some point. 
> */ >@@ -144,7 +144,7 @@ nv50_instmem_init(struct drm_device *dev > BAR0_WI32(priv->pramin_pt->gpuobj, i + 0, v | 1); > else > BAR0_WI32(priv->pramin_pt->gpuobj, i + 0, 0x00000009); >- BAR0_WI32(priv->pramin_pt->gpuobj, i + 4, 0x00000000); >+ BAR0_WI32(priv->pramin_pt->gpuobj, i + 4, 0x00000000); > } > > BAR0_WI32(chan->vm_pd, 0x00, priv->pramin_pt->instance | 0x63); >@@ -259,7 +259,7 @@ nv50_instmem_clear(struct drm_device *de > dev_priv->Engine.instmem.unbind(dev, gpuobj); > nouveau_mem_free(dev, gpuobj->im_backing); > gpuobj->im_backing = NULL; >- } >+ } > } > > int >@@ -317,4 +317,3 @@ nv50_instmem_unbind(struct drm_device *d > gpuobj->im_bound = 0; > return 0; > } >- >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/nv_drv.c linux-2.6.23.i686/drivers/char/drm/nv_drv.c >--- linux-2.6.23.i686.orig/drivers/char/drm/nv_drv.c 1970-01-01 01:00:00.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/nv_drv.c 2008-01-06 09:24:57.000000000 +0100 >@@ -0,0 +1,94 @@ >+/* nv_drv.c -- nv driver -*- linux-c -*- >+ * Created: Thu Oct 7 10:38:32 1999 by faith@precisioninsight.com >+ * >+ * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas. >+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California. >+ * Copyright 2005 Lars Knoll <lars@trolltech.com> >+ * All Rights Reserved. >+ * >+ * Permission is hereby granted, free of charge, to any person obtaining a >+ * copy of this software and associated documentation files (the "Software"), >+ * to deal in the Software without restriction, including without limitation >+ * the rights to use, copy, modify, merge, publish, distribute, sublicense, >+ * and/or sell copies of the Software, and to permit persons to whom the >+ * Software is furnished to do so, subject to the following conditions: >+ * >+ * The above copyright notice and this permission notice (including the next >+ * paragraph) shall be included in all copies or substantial portions of the >+ * Software. 
>+ * >+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR >+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, >+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL >+ * THE AUTHORS AND/OR THEIR SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR >+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, >+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER >+ * DEALINGS IN THE SOFTWARE. >+ * >+ * Authors: >+ * Rickard E. (Rik) Faith <faith@valinux.com> >+ * Daryll Strauss <daryll@valinux.com> >+ * Gareth Hughes <gareth@valinux.com> >+ * Lars Knoll <lars@trolltech.com> >+ */ >+ >+#include "drmP.h" >+#include "nv_drv.h" >+ >+#include "drm_pciids.h" >+ >+static struct pci_device_id pciidlist[] = { >+ nv_PCI_IDS >+}; >+ >+static int probe(struct pci_dev *pdev, const struct pci_device_id *ent); >+static struct drm_driver driver = { >+ .driver_features = DRIVER_USE_MTRR | DRIVER_USE_AGP, >+ .reclaim_buffers = drm_core_reclaim_buffers, >+ .get_map_ofs = drm_core_get_map_ofs, >+ .get_reg_ofs = drm_core_get_reg_ofs, >+ .fops = { >+ .owner = THIS_MODULE, >+ .open = drm_open, >+ .release = drm_release, >+ .ioctl = drm_ioctl, >+ .mmap = drm_mmap, >+ .poll = drm_poll, >+ .fasync = drm_fasync, >+ }, >+ .pci_driver = { >+ .name = DRIVER_NAME, >+ .id_table = pciidlist, >+ .probe = probe, >+ .remove = __devexit_p(drm_cleanup_pci), >+ }, >+ .name = DRIVER_NAME, >+ .desc = DRIVER_DESC, >+ .date = DRIVER_DATE, >+ .major = DRIVER_MAJOR, >+ .minor = DRIVER_MINOR, >+ .patchlevel = DRIVER_PATCHLEVEL, >+}; >+ >+static int probe(struct pci_dev *pdev, const struct pci_device_id *ent) >+{ >+ return drm_get_dev(pdev, ent, &driver); >+} >+ >+ >+static int __init nv_init(void) >+{ >+ return drm_init(&driver, pciidlist); >+} >+ >+static void __exit nv_exit(void) >+{ >+ drm_exit(&driver); >+} >+ >+module_init(nv_init); >+module_exit(nv_exit); >+ >+MODULE_AUTHOR(DRIVER_AUTHOR); 
>+MODULE_DESCRIPTION(DRIVER_DESC); >+MODULE_LICENSE("GPL and additional rights"); >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/nv_drv.h linux-2.6.23.i686/drivers/char/drm/nv_drv.h >--- linux-2.6.23.i686.orig/drivers/char/drm/nv_drv.h 1970-01-01 01:00:00.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/nv_drv.h 2008-01-06 09:24:57.000000000 +0100 >@@ -0,0 +1,52 @@ >+/* nv_drv.h -- NV DRM template customization -*- linux-c -*- >+ * Created: Wed Feb 14 12:32:32 2001 by gareth@valinux.com >+ * >+ * Copyright 2005 Lars Knoll <lars@trolltech.com> >+ * All Rights Reserved. >+ * >+ * Permission is hereby granted, free of charge, to any person obtaining a >+ * copy of this software and associated documentation files (the "Software"), >+ * to deal in the Software without restriction, including without limitation >+ * the rights to use, copy, modify, merge, publish, distribute, sublicense, >+ * and/or sell copies of the Software, and to permit persons to whom the >+ * Software is furnished to do so, subject to the following conditions: >+ * >+ * The above copyright notice and this permission notice (including the next >+ * paragraph) shall be included in all copies or substantial portions of the >+ * Software. >+ * >+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR >+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, >+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL >+ * THE AUTHORS AND/OR THEIR SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR >+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, >+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR >+ * OTHER DEALINGS IN THE SOFTWARE. 
>+ * >+ * Authors: >+ * Lars Knoll <lars@trolltech.com> >+ */ >+ >+#ifndef __NV_H__ >+#define __NV_H__ >+ >+/* General customization: >+ */ >+ >+#define DRIVER_AUTHOR "Lars Knoll" >+ >+#define DRIVER_NAME "nv" >+#define DRIVER_DESC "NV" >+#define DRIVER_DATE "20051006" >+ >+#define DRIVER_MAJOR 0 >+#define DRIVER_MINOR 0 >+#define DRIVER_PATCHLEVEL 1 >+ >+#define NV04 04 >+#define NV10 10 >+#define NV20 20 >+#define NV30 30 >+#define NV40 40 >+ >+#endif >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/r128_cce.c linux-2.6.23.i686/drivers/char/drm/r128_cce.c >--- linux-2.6.23.i686.orig/drivers/char/drm/r128_cce.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/r128_cce.c 2008-01-06 09:24:57.000000000 +0100 >@@ -1,4 +1,4 @@ >-/* r128_cce.c -- ATI Rage 128 driver -*- linux-c -*- >+/* r128_cce.c -- ATI Rage 128 driver -*- linux-c -*- > * Created: Wed Apr 5 19:24:19 2000 by kevin@precisioninsight.com > */ > /* >@@ -325,7 +325,7 @@ static void r128_cce_init_ring_buffer(st > else > #endif > ring_start = dev_priv->cce_ring->offset - >- (unsigned long)dev->sg->virtual; >+ (unsigned long)dev->sg->virtual; > > R128_WRITE(R128_PM4_BUFFER_OFFSET, ring_start | R128_AGP_OFFSET); > >@@ -611,10 +611,8 @@ int r128_do_cleanup_cce(struct drm_devic > #endif > { > if (dev_priv->gart_info.bus_addr) >- if (!drm_ati_pcigart_cleanup(dev, >- &dev_priv->gart_info)) >- DRM_ERROR >- ("failed to cleanup PCI GART!\n"); >+ if (!drm_ati_pcigart_cleanup(dev, &dev_priv->gart_info)) >+ DRM_ERROR("failed to cleanup PCI GART!\n"); > } > > drm_free(dev->dev_private, sizeof(drm_r128_private_t), >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/r128_drv.c linux-2.6.23.i686/drivers/char/drm/r128_drv.c >--- linux-2.6.23.i686.orig/drivers/char/drm/r128_drv.c 2007-10-09 22:31:38.000000000 +0200 >+++ linux-2.6.23.i686/drivers/char/drm/r128_drv.c 2008-01-06 09:24:57.000000000 +0100 >@@ -40,6 +40,7 @@ static struct pci_device_id pciidlist[] > r128_PCI_IDS > }; > >+static int 
probe(struct pci_dev *pdev, const struct pci_device_id *ent); > static struct drm_driver driver = { > .driver_features = > DRIVER_USE_AGP | DRIVER_USE_MTRR | DRIVER_PCI_DMA | DRIVER_SG | >@@ -59,21 +60,22 @@ static struct drm_driver driver = { > .ioctls = r128_ioctls, > .dma_ioctl = r128_cce_buffers, > .fops = { >- .owner = THIS_MODULE, >- .open = drm_open, >- .release = drm_release, >- .ioctl = drm_ioctl, >- .mmap = drm_mmap, >- .poll = drm_poll, >- .fasync = drm_fasync, >-#ifdef CONFIG_COMPAT >- .compat_ioctl = r128_compat_ioctl, >+ .owner = THIS_MODULE, >+ .open = drm_open, >+ .release = drm_release, >+ .ioctl = drm_ioctl, >+ .mmap = drm_mmap, >+ .poll = drm_poll, >+ .fasync = drm_fasync, >+#if defined(CONFIG_COMPAT) && LINUX_VERSION_CODE > KERNEL_VERSION(2,6,9) >+ .compat_ioctl = r128_compat_ioctl, > #endif >- }, >- >+ }, > .pci_driver = { >- .name = DRIVER_NAME, >- .id_table = pciidlist, >+ .name = DRIVER_NAME, >+ .id_table = pciidlist, >+ .probe = probe, >+ .remove = __devexit_p(drm_cleanup_pci), > }, > > .name = DRIVER_NAME, >@@ -84,10 +86,17 @@ static struct drm_driver driver = { > .patchlevel = DRIVER_PATCHLEVEL, > }; > >+static int probe(struct pci_dev *pdev, const struct pci_device_id *ent) >+{ >+ return drm_get_dev(pdev, ent, &driver); >+} >+ >+ > static int __init r128_init(void) > { > driver.num_ioctls = r128_max_ioctl; >- return drm_init(&driver); >+ >+ return drm_init(&driver, pciidlist); > } > > static void __exit r128_exit(void) >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/r128_drv.h linux-2.6.23.i686/drivers/char/drm/r128_drv.h >--- linux-2.6.23.i686.orig/drivers/char/drm/r128_drv.h 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/r128_drv.h 2008-01-06 09:24:57.000000000 +0100 >@@ -493,7 +493,7 @@ do { \ > write * sizeof(u32) ); \ > } \ > if (((dev_priv->ring.tail + _nr) & tail_mask) != write) { \ >- DRM_ERROR( \ >+ DRM_ERROR( \ > "ADVANCE_RING(): mismatch: nr: %x write: %x line: %d\n", \ > ((dev_priv->ring.tail 
+ _nr) & tail_mask), \ > write, __LINE__); \ >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/r128_ioc32.c linux-2.6.23.i686/drivers/char/drm/r128_ioc32.c >--- linux-2.6.23.i686.orig/drivers/char/drm/r128_ioc32.c 2007-10-09 22:31:38.000000000 +0200 >+++ linux-2.6.23.i686/drivers/char/drm/r128_ioc32.c 2008-01-06 09:24:57.000000000 +0100 >@@ -95,10 +95,11 @@ static int compat_r128_init(struct file > &init->agp_textures_offset)) > return -EFAULT; > >- return drm_ioctl(file->f_path.dentry->d_inode, file, >+ return drm_ioctl(file->f_dentry->d_inode, file, > DRM_IOCTL_R128_INIT, (unsigned long)init); > } > >+ > typedef struct drm_r128_depth32 { > int func; > int n; >@@ -129,7 +130,7 @@ static int compat_r128_depth(struct file > &depth->mask)) > return -EFAULT; > >- return drm_ioctl(file->f_path.dentry->d_inode, file, >+ return drm_ioctl(file->f_dentry->d_inode, file, > DRM_IOCTL_R128_DEPTH, (unsigned long)depth); > > } >@@ -153,7 +154,7 @@ static int compat_r128_stipple(struct fi > &stipple->mask)) > return -EFAULT; > >- return drm_ioctl(file->f_path.dentry->d_inode, file, >+ return drm_ioctl(file->f_dentry->d_inode, file, > DRM_IOCTL_R128_STIPPLE, (unsigned long)stipple); > } > >@@ -178,7 +179,7 @@ static int compat_r128_getparam(struct f > &getparam->value)) > return -EFAULT; > >- return drm_ioctl(file->f_path.dentry->d_inode, file, >+ return drm_ioctl(file->f_dentry->d_inode, file, > DRM_IOCTL_R128_GETPARAM, (unsigned long)getparam); > } > >@@ -212,9 +213,9 @@ long r128_compat_ioctl(struct file *filp > > lock_kernel(); /* XXX for now */ > if (fn != NULL) >- ret = (*fn) (filp, cmd, arg); >+ ret = (*fn)(filp, cmd, arg); > else >- ret = drm_ioctl(filp->f_path.dentry->d_inode, filp, cmd, arg); >+ ret = drm_ioctl(filp->f_dentry->d_inode, filp, cmd, arg); > unlock_kernel(); > > return ret; >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/r300_cmdbuf.c linux-2.6.23.i686/drivers/char/drm/r300_cmdbuf.c >--- linux-2.6.23.i686.orig/drivers/char/drm/r300_cmdbuf.c 2008-01-06 
18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/r300_cmdbuf.c 2008-01-06 09:24:57.000000000 +0100 >@@ -486,7 +486,7 @@ static __inline__ int r300_emit_bitblt_m > if (cmd[0] & 0x8000) { > u32 offset; > >- if (cmd[1] & (RADEON_GMC_SRC_PITCH_OFFSET_CNTL >+ if (cmd[1] & (RADEON_GMC_SRC_PITCH_OFFSET_CNTL > | RADEON_GMC_DST_PITCH_OFFSET_CNTL)) { > offset = cmd[2] << 10; > ret = !radeon_check_offset(dev_priv, offset); >@@ -504,7 +504,7 @@ static __inline__ int r300_emit_bitblt_m > DRM_ERROR("Invalid bitblt second offset is %08X\n", offset); > return -EINVAL; > } >- >+ > } > } > >@@ -723,54 +723,53 @@ static int r300_scratch(drm_radeon_priva > u32 *ref_age_base; > u32 i, buf_idx, h_pending; > RING_LOCALS; >- >- if (cmdbuf->bufsz < >- (sizeof(u64) + header.scratch.n_bufs * sizeof(buf_idx))) { >+ >+ if (cmdbuf->bufsz < sizeof(uint64_t) + header.scratch.n_bufs * sizeof(buf_idx) ) { > return -EINVAL; > } >- >+ > if (header.scratch.reg >= 5) { > return -EINVAL; > } >- >- dev_priv->scratch_ages[header.scratch.reg]++; >- >- ref_age_base = (u32 *)(unsigned long)*((uint64_t *)cmdbuf->buf); >- >- cmdbuf->buf += sizeof(u64); >- cmdbuf->bufsz -= sizeof(u64); >- >+ >+ dev_priv->scratch_ages[header.scratch.reg] ++; >+ >+ ref_age_base = (u32 *)(unsigned long)*((uint64_t *)cmdbuf->buf); >+ >+ cmdbuf->buf += sizeof(uint64_t); >+ cmdbuf->bufsz -= sizeof(uint64_t); >+ > for (i=0; i < header.scratch.n_bufs; i++) { > buf_idx = *(u32 *)cmdbuf->buf; > buf_idx *= 2; /* 8 bytes per buf */ >- >+ > if (DRM_COPY_TO_USER(ref_age_base + buf_idx, &dev_priv->scratch_ages[header.scratch.reg], sizeof(u32))) { > return -EINVAL; > } >- >+ > if (DRM_COPY_FROM_USER(&h_pending, ref_age_base + buf_idx + 1, sizeof(u32))) { > return -EINVAL; > } >- >+ > if (h_pending == 0) { > return -EINVAL; > } >- >+ > h_pending--; >- >+ > if (DRM_COPY_TO_USER(ref_age_base + buf_idx + 1, &h_pending, sizeof(u32))) { > return -EINVAL; > } >- >+ > cmdbuf->buf += sizeof(buf_idx); > cmdbuf->bufsz -= sizeof(buf_idx); 
> } >- >+ > BEGIN_RING(2); > OUT_RING( CP_PACKET0( RADEON_SCRATCH_REG0 + header.scratch.reg * 4, 0 ) ); > OUT_RING( dev_priv->scratch_ages[header.scratch.reg] ); > ADVANCE_RING(); >- >+ > return 0; > } > >@@ -919,7 +918,7 @@ int r300_do_cp_cmdbuf(struct drm_device > goto cleanup; > } > break; >- >+ > default: > DRM_ERROR("bad cmd_type %i at %p\n", > header.header.cmd_type, >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/r300_reg.h linux-2.6.23.i686/drivers/char/drm/r300_reg.h >--- linux-2.6.23.i686.orig/drivers/char/drm/r300_reg.h 2007-10-09 22:31:38.000000000 +0200 >+++ linux-2.6.23.i686/drivers/char/drm/r300_reg.h 2008-01-06 09:24:57.000000000 +0100 >@@ -23,6 +23,8 @@ USE OR OTHER DEALINGS IN THE SOFTWARE. > > **************************************************************************/ > >+/* *INDENT-OFF* */ >+ > #ifndef _R300_REG_H > #define _R300_REG_H > >@@ -36,6 +38,7 @@ USE OR OTHER DEALINGS IN THE SOFTWARE. > # define R300_MC_MISC__MC_SAME_PAGE_PRIO_SHIFT 24 > # define R300_MC_MISC__MC_GLOBW_INIT_LAT_SHIFT 28 > >+ > #define R300_MC_INIT_GFX_LAT_TIMER 0x154 > # define R300_MC_MISC__MC_G3D0R_INIT_LAT_SHIFT 0 > # define R300_MC_MISC__MC_G3D1R_INIT_LAT_SHIFT 4 >@@ -853,13 +856,13 @@ USE OR OTHER DEALINGS IN THE SOFTWARE. 
> # define R300_TX_FORMAT_W8Z8Y8X8 0xC > # define R300_TX_FORMAT_W2Z10Y10X10 0xD > # define R300_TX_FORMAT_W16Z16Y16X16 0xE >-# define R300_TX_FORMAT_DXT1 0xF >-# define R300_TX_FORMAT_DXT3 0x10 >-# define R300_TX_FORMAT_DXT5 0x11 >+# define R300_TX_FORMAT_DXT1 0xF >+# define R300_TX_FORMAT_DXT3 0x10 >+# define R300_TX_FORMAT_DXT5 0x11 > # define R300_TX_FORMAT_D3DMFT_CxV8U8 0x12 /* no swizzle */ >-# define R300_TX_FORMAT_A8R8G8B8 0x13 /* no swizzle */ >-# define R300_TX_FORMAT_B8G8_B8G8 0x14 /* no swizzle */ >-# define R300_TX_FORMAT_G8R8_G8B8 0x15 /* no swizzle */ >+# define R300_TX_FORMAT_A8R8G8B8 0x13 /* no swizzle */ >+# define R300_TX_FORMAT_B8G8_B8G8 0x14 /* no swizzle */ >+# define R300_TX_FORMAT_G8R8_G8B8 0x15 /* no swizzle */ > /* 0x16 - some 16 bit green format.. ?? */ > # define R300_TX_FORMAT_UNK25 (1 << 25) /* no swizzle */ > # define R300_TX_FORMAT_CUBIC_MAP (1 << 26) >@@ -867,19 +870,19 @@ USE OR OTHER DEALINGS IN THE SOFTWARE. > /* gap */ > /* Floating point formats */ > /* Note - hardware supports both 16 and 32 bit floating point */ >-# define R300_TX_FORMAT_FL_I16 0x18 >-# define R300_TX_FORMAT_FL_I16A16 0x19 >+# define R300_TX_FORMAT_FL_I16 0x18 >+# define R300_TX_FORMAT_FL_I16A16 0x19 > # define R300_TX_FORMAT_FL_R16G16B16A16 0x1A >-# define R300_TX_FORMAT_FL_I32 0x1B >-# define R300_TX_FORMAT_FL_I32A32 0x1C >+# define R300_TX_FORMAT_FL_I32 0x1B >+# define R300_TX_FORMAT_FL_I32A32 0x1C > # define R300_TX_FORMAT_FL_R32G32B32A32 0x1D > /* alpha modes, convenience mostly */ > /* if you have alpha, pick constant appropriate to the > number of channels (1 for I8, 2 for I8A8, 4 for R8G8B8A8, etc */ >-# define R300_TX_FORMAT_ALPHA_1CH 0x000 >-# define R300_TX_FORMAT_ALPHA_2CH 0x200 >-# define R300_TX_FORMAT_ALPHA_4CH 0x600 >-# define R300_TX_FORMAT_ALPHA_NONE 0xA00 >+# define R300_TX_FORMAT_ALPHA_1CH 0x000 >+# define R300_TX_FORMAT_ALPHA_2CH 0x200 >+# define R300_TX_FORMAT_ALPHA_4CH 0x600 >+# define R300_TX_FORMAT_ALPHA_NONE 0xA00 > /* Swizzling */ > 
/* constants */ > # define R300_TX_FORMAT_X 0 >@@ -1360,11 +1363,11 @@ USE OR OTHER DEALINGS IN THE SOFTWARE. > # define R300_RB3D_Z_DISABLED_2 0x00000014 > # define R300_RB3D_Z_TEST 0x00000012 > # define R300_RB3D_Z_TEST_AND_WRITE 0x00000016 >-# define R300_RB3D_Z_WRITE_ONLY 0x00000006 >+# define R300_RB3D_Z_WRITE_ONLY 0x00000006 > > # define R300_RB3D_Z_TEST 0x00000012 > # define R300_RB3D_Z_TEST_AND_WRITE 0x00000016 >-# define R300_RB3D_Z_WRITE_ONLY 0x00000006 >+# define R300_RB3D_Z_WRITE_ONLY 0x00000006 > # define R300_RB3D_STENCIL_ENABLE 0x00000001 > > #define R300_RB3D_ZSTENCIL_CNTL_1 0x4F04 >@@ -1624,3 +1627,5 @@ USE OR OTHER DEALINGS IN THE SOFTWARE. > #define R300_CP_CMD_BITBLT_MULTI 0xC0009B00 > > #endif /* _R300_REG_H */ >+ >+/* *INDENT-ON* */ >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/radeon_cp.c linux-2.6.23.i686/drivers/char/drm/radeon_cp.c >--- linux-2.6.23.i686.orig/drivers/char/drm/radeon_cp.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/radeon_cp.c 2008-01-06 09:24:57.000000000 +0100 >@@ -558,264 +558,279 @@ static const u32 radeon_cp_microcode[][2 > }; > > static const u32 R300_cp_microcode[][2] = { >- {0x4200e000, 0000000000}, >- {0x4000e000, 0000000000}, >- {0x000000af, 0x00000008}, >- {0x000000b3, 0x00000008}, >- {0x6c5a504f, 0000000000}, >- {0x4f4f497a, 0000000000}, >- {0x5a578288, 0000000000}, >- {0x4f91906a, 0000000000}, >- {0x4f4f4f4f, 0000000000}, >- {0x4fe24f44, 0000000000}, >- {0x4f9c9c9c, 0000000000}, >- {0xdc4f4fde, 0000000000}, >- {0xa1cd4f4f, 0000000000}, >- {0xd29d9d9d, 0000000000}, >- {0x4f0f9fd7, 0000000000}, >- {0x000ca000, 0x00000004}, >- {0x000d0012, 0x00000038}, >- {0x0000e8b4, 0x00000004}, >- {0x000d0014, 0x00000038}, >- {0x0000e8b6, 0x00000004}, >- {0x000d0016, 0x00000038}, >- {0x0000e854, 0x00000004}, >- {0x000d0018, 0x00000038}, >- {0x0000e855, 0x00000004}, >- {0x000d001a, 0x00000038}, >- {0x0000e856, 0x00000004}, >- {0x000d001c, 0x00000038}, >- {0x0000e857, 0x00000004}, >- 
{0x000d001e, 0x00000038}, >- {0x0000e824, 0x00000004}, >- {0x000d0020, 0x00000038}, >- {0x0000e825, 0x00000004}, >- {0x000d0022, 0x00000038}, >- {0x0000e830, 0x00000004}, >- {0x000d0024, 0x00000038}, >- {0x0000f0c0, 0x00000004}, >- {0x000d0026, 0x00000038}, >- {0x0000f0c1, 0x00000004}, >- {0x000d0028, 0x00000038}, >- {0x0000f041, 0x00000004}, >- {0x000d002a, 0x00000038}, >- {0x0000f184, 0x00000004}, >- {0x000d002c, 0x00000038}, >- {0x0000f185, 0x00000004}, >- {0x000d002e, 0x00000038}, >- {0x0000f186, 0x00000004}, >- {0x000d0030, 0x00000038}, >- {0x0000f187, 0x00000004}, >- {0x000d0032, 0x00000038}, >- {0x0000f180, 0x00000004}, >- {0x000d0034, 0x00000038}, >- {0x0000f393, 0x00000004}, >- {0x000d0036, 0x00000038}, >- {0x0000f38a, 0x00000004}, >- {0x000d0038, 0x00000038}, >- {0x0000f38e, 0x00000004}, >- {0x0000e821, 0x00000004}, >- {0x0140a000, 0x00000004}, >- {0x00000043, 0x00000018}, >- {0x00cce800, 0x00000004}, >- {0x001b0001, 0x00000004}, >- {0x08004800, 0x00000004}, >- {0x001b0001, 0x00000004}, >- {0x08004800, 0x00000004}, >- {0x001b0001, 0x00000004}, >- {0x08004800, 0x00000004}, >- {0x0000003a, 0x00000008}, >- {0x0000a000, 0000000000}, >- {0x02c0a000, 0x00000004}, >- {0x000ca000, 0x00000004}, >- {0x00130000, 0x00000004}, >- {0x000c2000, 0x00000004}, >- {0xc980c045, 0x00000008}, >- {0x2000451d, 0x00000004}, >- {0x0000e580, 0x00000004}, >- {0x000ce581, 0x00000004}, >- {0x08004580, 0x00000004}, >- {0x000ce581, 0x00000004}, >- {0x0000004c, 0x00000008}, >- {0x0000a000, 0000000000}, >- {0x000c2000, 0x00000004}, >- {0x0000e50e, 0x00000004}, >- {0x00032000, 0x00000004}, >- {0x00022056, 0x00000028}, >- {0x00000056, 0x00000024}, >- {0x0800450f, 0x00000004}, >- {0x0000a050, 0x00000008}, >- {0x0000e565, 0x00000004}, >- {0x0000e566, 0x00000004}, >- {0x00000057, 0x00000008}, >- {0x03cca5b4, 0x00000004}, >- {0x05432000, 0x00000004}, >- {0x00022000, 0x00000004}, >- {0x4ccce063, 0x00000030}, >- {0x08274565, 0x00000004}, >- {0x00000063, 0x00000030}, >- {0x08004564, 0x00000004}, 
>- {0x0000e566, 0x00000004}, >- {0x0000005a, 0x00000008}, >- {0x00802066, 0x00000010}, >- {0x00202000, 0x00000004}, >- {0x001b00ff, 0x00000004}, >- {0x01000069, 0x00000010}, >- {0x001f2000, 0x00000004}, >- {0x001c00ff, 0x00000004}, >- {0000000000, 0x0000000c}, >- {0x00000085, 0x00000030}, >- {0x0000005a, 0x00000008}, >- {0x0000e576, 0x00000004}, >- {0x000ca000, 0x00000004}, >- {0x00012000, 0x00000004}, >- {0x00082000, 0x00000004}, >- {0x1800650e, 0x00000004}, >- {0x00092000, 0x00000004}, >- {0x000a2000, 0x00000004}, >- {0x000f0000, 0x00000004}, >- {0x00400000, 0x00000004}, >- {0x00000079, 0x00000018}, >- {0x0000e563, 0x00000004}, >- {0x00c0e5f9, 0x000000c2}, >- {0x0000006e, 0x00000008}, >- {0x0000a06e, 0x00000008}, >- {0x0000e576, 0x00000004}, >- {0x0000e577, 0x00000004}, >- {0x0000e50e, 0x00000004}, >- {0x0000e50f, 0x00000004}, >- {0x0140a000, 0x00000004}, >- {0x0000007c, 0x00000018}, >- {0x00c0e5f9, 0x000000c2}, >- {0x0000007c, 0x00000008}, >- {0x0014e50e, 0x00000004}, >- {0x0040e50f, 0x00000004}, >- {0x00c0007f, 0x00000008}, >- {0x0000e570, 0x00000004}, >- {0x0000e571, 0x00000004}, >- {0x0000e572, 0x0000000c}, >- {0x0000a000, 0x00000004}, >- {0x0140a000, 0x00000004}, >- {0x0000e568, 0x00000004}, >- {0x000c2000, 0x00000004}, >- {0x00000089, 0x00000018}, >- {0x000b0000, 0x00000004}, >- {0x18c0e562, 0x00000004}, >- {0x0000008b, 0x00000008}, >- {0x00c0008a, 0x00000008}, >- {0x000700e4, 0x00000004}, >- {0x00000097, 0x00000038}, >- {0x000ca099, 0x00000030}, >- {0x080045bb, 0x00000004}, >- {0x000c209a, 0x00000030}, >- {0x0800e5bc, 0000000000}, >- {0x0000e5bb, 0x00000004}, >- {0x0000e5bc, 0000000000}, >- {0x00120000, 0x0000000c}, >- {0x00120000, 0x00000004}, >- {0x001b0002, 0x0000000c}, >- {0x0000a000, 0x00000004}, >- {0x0000e821, 0x00000004}, >- {0x0000e800, 0000000000}, >- {0x0000e821, 0x00000004}, >- {0x0000e82e, 0000000000}, >- {0x02cca000, 0x00000004}, >- {0x00140000, 0x00000004}, >- {0x000ce1cc, 0x00000004}, >- {0x050de1cd, 0x00000004}, >- {0x000000a7, 
0x00000020}, >- {0x4200e000, 0000000000}, >- {0x000000ae, 0x00000038}, >- {0x000ca000, 0x00000004}, >- {0x00140000, 0x00000004}, >- {0x000c2000, 0x00000004}, >- {0x00160000, 0x00000004}, >- {0x700ce000, 0x00000004}, >- {0x001400aa, 0x00000008}, >- {0x4000e000, 0000000000}, >- {0x02400000, 0x00000004}, >- {0x400ee000, 0x00000004}, >- {0x02400000, 0x00000004}, >- {0x4000e000, 0000000000}, >- {0x000c2000, 0x00000004}, >- {0x0240e51b, 0x00000004}, >- {0x0080e50a, 0x00000005}, >- {0x0080e50b, 0x00000005}, >- {0x00220000, 0x00000004}, >- {0x000700e4, 0x00000004}, >- {0x000000c1, 0x00000038}, >- {0x000c209a, 0x00000030}, >- {0x0880e5bd, 0x00000005}, >- {0x000c2099, 0x00000030}, >- {0x0800e5bb, 0x00000005}, >- {0x000c209a, 0x00000030}, >- {0x0880e5bc, 0x00000005}, >- {0x000000c4, 0x00000008}, >- {0x0080e5bd, 0x00000005}, >- {0x0000e5bb, 0x00000005}, >- {0x0080e5bc, 0x00000005}, >- {0x00210000, 0x00000004}, >- {0x02800000, 0x00000004}, >- {0x00c000c8, 0x00000018}, >- {0x4180e000, 0x00000040}, >- {0x000000ca, 0x00000024}, >- {0x01000000, 0x0000000c}, >- {0x0100e51d, 0x0000000c}, >- {0x000045bb, 0x00000004}, >- {0x000080c4, 0x00000008}, >- {0x0000f3ce, 0x00000004}, >- {0x0140a000, 0x00000004}, >- {0x00cc2000, 0x00000004}, >- {0x08c053cf, 0x00000040}, >- {0x00008000, 0000000000}, >- {0x0000f3d2, 0x00000004}, >- {0x0140a000, 0x00000004}, >- {0x00cc2000, 0x00000004}, >- {0x08c053d3, 0x00000040}, >- {0x00008000, 0000000000}, >- {0x0000f39d, 0x00000004}, >- {0x0140a000, 0x00000004}, >- {0x00cc2000, 0x00000004}, >- {0x08c0539e, 0x00000040}, >- {0x00008000, 0000000000}, >- {0x03c00830, 0x00000004}, >- {0x4200e000, 0000000000}, >- {0x0000a000, 0x00000004}, >- {0x200045e0, 0x00000004}, >- {0x0000e5e1, 0000000000}, >- {0x00000001, 0000000000}, >- {0x000700e1, 0x00000004}, >- {0x0800e394, 0000000000}, >- {0000000000, 0000000000}, >- {0000000000, 0000000000}, >- {0000000000, 0000000000}, >- {0000000000, 0000000000}, >- {0000000000, 0000000000}, >- {0000000000, 0000000000}, >- 
{0000000000, 0000000000}, >- {0000000000, 0000000000}, >- {0000000000, 0000000000}, >- {0000000000, 0000000000}, >- {0000000000, 0000000000}, >- {0000000000, 0000000000}, >- {0000000000, 0000000000}, >- {0000000000, 0000000000}, >- {0000000000, 0000000000}, >- {0000000000, 0000000000}, >- {0000000000, 0000000000}, >- {0000000000, 0000000000}, >- {0000000000, 0000000000}, >- {0000000000, 0000000000}, >- {0000000000, 0000000000}, >- {0000000000, 0000000000}, >- {0000000000, 0000000000}, >- {0000000000, 0000000000}, >- {0000000000, 0000000000}, >- {0000000000, 0000000000}, >- {0000000000, 0000000000}, >- {0000000000, 0000000000}, >+ { 0x4200e000, 0000000000 }, >+ { 0x4000e000, 0000000000 }, >+ { 0x000000af, 0x00000008 }, >+ { 0x000000b3, 0x00000008 }, >+ { 0x6c5a504f, 0000000000 }, >+ { 0x4f4f497a, 0000000000 }, >+ { 0x5a578288, 0000000000 }, >+ { 0x4f91906a, 0000000000 }, >+ { 0x4f4f4f4f, 0000000000 }, >+ { 0x4fe24f44, 0000000000 }, >+ { 0x4f9c9c9c, 0000000000 }, >+ { 0xdc4f4fde, 0000000000 }, >+ { 0xa1cd4f4f, 0000000000 }, >+ { 0xd29d9d9d, 0000000000 }, >+ { 0x4f0f9fd7, 0000000000 }, >+ { 0x000ca000, 0x00000004 }, >+ { 0x000d0012, 0x00000038 }, >+ { 0x0000e8b4, 0x00000004 }, >+ { 0x000d0014, 0x00000038 }, >+ { 0x0000e8b6, 0x00000004 }, >+ { 0x000d0016, 0x00000038 }, >+ { 0x0000e854, 0x00000004 }, >+ { 0x000d0018, 0x00000038 }, >+ { 0x0000e855, 0x00000004 }, >+ { 0x000d001a, 0x00000038 }, >+ { 0x0000e856, 0x00000004 }, >+ { 0x000d001c, 0x00000038 }, >+ { 0x0000e857, 0x00000004 }, >+ { 0x000d001e, 0x00000038 }, >+ { 0x0000e824, 0x00000004 }, >+ { 0x000d0020, 0x00000038 }, >+ { 0x0000e825, 0x00000004 }, >+ { 0x000d0022, 0x00000038 }, >+ { 0x0000e830, 0x00000004 }, >+ { 0x000d0024, 0x00000038 }, >+ { 0x0000f0c0, 0x00000004 }, >+ { 0x000d0026, 0x00000038 }, >+ { 0x0000f0c1, 0x00000004 }, >+ { 0x000d0028, 0x00000038 }, >+ { 0x0000f041, 0x00000004 }, >+ { 0x000d002a, 0x00000038 }, >+ { 0x0000f184, 0x00000004 }, >+ { 0x000d002c, 0x00000038 }, >+ { 0x0000f185, 0x00000004 }, 
>+ { 0x000d002e, 0x00000038 }, >+ { 0x0000f186, 0x00000004 }, >+ { 0x000d0030, 0x00000038 }, >+ { 0x0000f187, 0x00000004 }, >+ { 0x000d0032, 0x00000038 }, >+ { 0x0000f180, 0x00000004 }, >+ { 0x000d0034, 0x00000038 }, >+ { 0x0000f393, 0x00000004 }, >+ { 0x000d0036, 0x00000038 }, >+ { 0x0000f38a, 0x00000004 }, >+ { 0x000d0038, 0x00000038 }, >+ { 0x0000f38e, 0x00000004 }, >+ { 0x0000e821, 0x00000004 }, >+ { 0x0140a000, 0x00000004 }, >+ { 0x00000043, 0x00000018 }, >+ { 0x00cce800, 0x00000004 }, >+ { 0x001b0001, 0x00000004 }, >+ { 0x08004800, 0x00000004 }, >+ { 0x001b0001, 0x00000004 }, >+ { 0x08004800, 0x00000004 }, >+ { 0x001b0001, 0x00000004 }, >+ { 0x08004800, 0x00000004 }, >+ { 0x0000003a, 0x00000008 }, >+ { 0x0000a000, 0000000000 }, >+ { 0x02c0a000, 0x00000004 }, >+ { 0x000ca000, 0x00000004 }, >+ { 0x00130000, 0x00000004 }, >+ { 0x000c2000, 0x00000004 }, >+ { 0xc980c045, 0x00000008 }, >+ { 0x2000451d, 0x00000004 }, >+ { 0x0000e580, 0x00000004 }, >+ { 0x000ce581, 0x00000004 }, >+ { 0x08004580, 0x00000004 }, >+ { 0x000ce581, 0x00000004 }, >+ { 0x0000004c, 0x00000008 }, >+ { 0x0000a000, 0000000000 }, >+ { 0x000c2000, 0x00000004 }, >+ { 0x0000e50e, 0x00000004 }, >+ { 0x00032000, 0x00000004 }, >+ { 0x00022056, 0x00000028 }, >+ { 0x00000056, 0x00000024 }, >+ { 0x0800450f, 0x00000004 }, >+ { 0x0000a050, 0x00000008 }, >+ { 0x0000e565, 0x00000004 }, >+ { 0x0000e566, 0x00000004 }, >+ { 0x00000057, 0x00000008 }, >+ { 0x03cca5b4, 0x00000004 }, >+ { 0x05432000, 0x00000004 }, >+ { 0x00022000, 0x00000004 }, >+ { 0x4ccce063, 0x00000030 }, >+ { 0x08274565, 0x00000004 }, >+ { 0x00000063, 0x00000030 }, >+ { 0x08004564, 0x00000004 }, >+ { 0x0000e566, 0x00000004 }, >+ { 0x0000005a, 0x00000008 }, >+ { 0x00802066, 0x00000010 }, >+ { 0x00202000, 0x00000004 }, >+ { 0x001b00ff, 0x00000004 }, >+ { 0x01000069, 0x00000010 }, >+ { 0x001f2000, 0x00000004 }, >+ { 0x001c00ff, 0x00000004 }, >+ { 0000000000, 0x0000000c }, >+ { 0x00000085, 0x00000030 }, >+ { 0x0000005a, 0x00000008 }, >+ { 
0x0000e576, 0x00000004 }, >+ { 0x000ca000, 0x00000004 }, >+ { 0x00012000, 0x00000004 }, >+ { 0x00082000, 0x00000004 }, >+ { 0x1800650e, 0x00000004 }, >+ { 0x00092000, 0x00000004 }, >+ { 0x000a2000, 0x00000004 }, >+ { 0x000f0000, 0x00000004 }, >+ { 0x00400000, 0x00000004 }, >+ { 0x00000079, 0x00000018 }, >+ { 0x0000e563, 0x00000004 }, >+ { 0x00c0e5f9, 0x000000c2 }, >+ { 0x0000006e, 0x00000008 }, >+ { 0x0000a06e, 0x00000008 }, >+ { 0x0000e576, 0x00000004 }, >+ { 0x0000e577, 0x00000004 }, >+ { 0x0000e50e, 0x00000004 }, >+ { 0x0000e50f, 0x00000004 }, >+ { 0x0140a000, 0x00000004 }, >+ { 0x0000007c, 0x00000018 }, >+ { 0x00c0e5f9, 0x000000c2 }, >+ { 0x0000007c, 0x00000008 }, >+ { 0x0014e50e, 0x00000004 }, >+ { 0x0040e50f, 0x00000004 }, >+ { 0x00c0007f, 0x00000008 }, >+ { 0x0000e570, 0x00000004 }, >+ { 0x0000e571, 0x00000004 }, >+ { 0x0000e572, 0x0000000c }, >+ { 0x0000a000, 0x00000004 }, >+ { 0x0140a000, 0x00000004 }, >+ { 0x0000e568, 0x00000004 }, >+ { 0x000c2000, 0x00000004 }, >+ { 0x00000089, 0x00000018 }, >+ { 0x000b0000, 0x00000004 }, >+ { 0x18c0e562, 0x00000004 }, >+ { 0x0000008b, 0x00000008 }, >+ { 0x00c0008a, 0x00000008 }, >+ { 0x000700e4, 0x00000004 }, >+ { 0x00000097, 0x00000038 }, >+ { 0x000ca099, 0x00000030 }, >+ { 0x080045bb, 0x00000004 }, >+ { 0x000c209a, 0x00000030 }, >+ { 0x0800e5bc, 0000000000 }, >+ { 0x0000e5bb, 0x00000004 }, >+ { 0x0000e5bc, 0000000000 }, >+ { 0x00120000, 0x0000000c }, >+ { 0x00120000, 0x00000004 }, >+ { 0x001b0002, 0x0000000c }, >+ { 0x0000a000, 0x00000004 }, >+ { 0x0000e821, 0x00000004 }, >+ { 0x0000e800, 0000000000 }, >+ { 0x0000e821, 0x00000004 }, >+ { 0x0000e82e, 0000000000 }, >+ { 0x02cca000, 0x00000004 }, >+ { 0x00140000, 0x00000004 }, >+ { 0x000ce1cc, 0x00000004 }, >+ { 0x050de1cd, 0x00000004 }, >+ { 0x000000a7, 0x00000020 }, >+ { 0x4200e000, 0000000000 }, >+ { 0x000000ae, 0x00000038 }, >+ { 0x000ca000, 0x00000004 }, >+ { 0x00140000, 0x00000004 }, >+ { 0x000c2000, 0x00000004 }, >+ { 0x00160000, 0x00000004 }, >+ { 0x700ce000, 
0x00000004 }, >+ { 0x001400aa, 0x00000008 }, >+ { 0x4000e000, 0000000000 }, >+ { 0x02400000, 0x00000004 }, >+ { 0x400ee000, 0x00000004 }, >+ { 0x02400000, 0x00000004 }, >+ { 0x4000e000, 0000000000 }, >+ { 0x000c2000, 0x00000004 }, >+ { 0x0240e51b, 0x00000004 }, >+ { 0x0080e50a, 0x00000005 }, >+ { 0x0080e50b, 0x00000005 }, >+ { 0x00220000, 0x00000004 }, >+ { 0x000700e4, 0x00000004 }, >+ { 0x000000c1, 0x00000038 }, >+ { 0x000c209a, 0x00000030 }, >+ { 0x0880e5bd, 0x00000005 }, >+ { 0x000c2099, 0x00000030 }, >+ { 0x0800e5bb, 0x00000005 }, >+ { 0x000c209a, 0x00000030 }, >+ { 0x0880e5bc, 0x00000005 }, >+ { 0x000000c4, 0x00000008 }, >+ { 0x0080e5bd, 0x00000005 }, >+ { 0x0000e5bb, 0x00000005 }, >+ { 0x0080e5bc, 0x00000005 }, >+ { 0x00210000, 0x00000004 }, >+ { 0x02800000, 0x00000004 }, >+ { 0x00c000c8, 0x00000018 }, >+ { 0x4180e000, 0x00000040 }, >+ { 0x000000ca, 0x00000024 }, >+ { 0x01000000, 0x0000000c }, >+ { 0x0100e51d, 0x0000000c }, >+ { 0x000045bb, 0x00000004 }, >+ { 0x000080c4, 0x00000008 }, >+ { 0x0000f3ce, 0x00000004 }, >+ { 0x0140a000, 0x00000004 }, >+ { 0x00cc2000, 0x00000004 }, >+ { 0x08c053cf, 0x00000040 }, >+ { 0x00008000, 0000000000 }, >+ { 0x0000f3d2, 0x00000004 }, >+ { 0x0140a000, 0x00000004 }, >+ { 0x00cc2000, 0x00000004 }, >+ { 0x08c053d3, 0x00000040 }, >+ { 0x00008000, 0000000000 }, >+ { 0x0000f39d, 0x00000004 }, >+ { 0x0140a000, 0x00000004 }, >+ { 0x00cc2000, 0x00000004 }, >+ { 0x08c0539e, 0x00000040 }, >+ { 0x00008000, 0000000000 }, >+ { 0x03c00830, 0x00000004 }, >+ { 0x4200e000, 0000000000 }, >+ { 0x0000a000, 0x00000004 }, >+ { 0x200045e0, 0x00000004 }, >+ { 0x0000e5e1, 0000000000 }, >+ { 0x00000001, 0000000000 }, >+ { 0x000700e1, 0x00000004 }, >+ { 0x0800e394, 0000000000 }, >+ { 0000000000, 0000000000 }, >+ { 0000000000, 0000000000 }, >+ { 0000000000, 0000000000 }, >+ { 0000000000, 0000000000 }, >+ { 0000000000, 0000000000 }, >+ { 0000000000, 0000000000 }, >+ { 0000000000, 0000000000 }, >+ { 0000000000, 0000000000 }, >+ { 0000000000, 0000000000 }, 
>+ { 0000000000, 0000000000 }, >+ { 0000000000, 0000000000 }, >+ { 0000000000, 0000000000 }, >+ { 0000000000, 0000000000 }, >+ { 0000000000, 0000000000 }, >+ { 0000000000, 0000000000 }, >+ { 0000000000, 0000000000 }, >+ { 0000000000, 0000000000 }, >+ { 0000000000, 0000000000 }, >+ { 0000000000, 0000000000 }, >+ { 0000000000, 0000000000 }, >+ { 0000000000, 0000000000 }, >+ { 0000000000, 0000000000 }, >+ { 0000000000, 0000000000 }, >+ { 0000000000, 0000000000 }, >+ { 0000000000, 0000000000 }, >+ { 0000000000, 0000000000 }, >+ { 0000000000, 0000000000 }, >+ { 0000000000, 0000000000 }, > }; > >+u32 radeon_read_fb_location(drm_radeon_private_t *dev_priv) >+{ >+ return RADEON_READ(RADEON_MC_FB_LOCATION); >+} >+ >+static void radeon_write_fb_location(drm_radeon_private_t *dev_priv, u32 fb_loc) >+{ >+ RADEON_WRITE(RADEON_MC_FB_LOCATION, fb_loc); >+} >+ >+static void radeon_write_agp_location(drm_radeon_private_t *dev_priv, u32 agp_loc) >+{ >+ RADEON_WRITE(RADEON_MC_AGP_LOCATION, agp_loc); >+} >+ > static int RADEON_READ_PLL(struct drm_device * dev, int addr) > { > drm_radeon_private_t *dev_priv = dev->dev_private; >@@ -824,7 +839,7 @@ static int RADEON_READ_PLL(struct drm_de > return RADEON_READ(RADEON_CLOCK_CNTL_DATA); > } > >-static int RADEON_READ_PCIE(drm_radeon_private_t *dev_priv, int addr) >+static u32 RADEON_READ_PCIE(drm_radeon_private_t *dev_priv, int addr) > { > RADEON_WRITE8(RADEON_PCIE_INDEX, addr & 0xff); > return RADEON_READ(RADEON_PCIE_DATA); >@@ -1127,21 +1142,21 @@ static void radeon_cp_init_ring_buffer(s > { > u32 ring_start, cur_read_ptr; > u32 tmp; >- >+ > /* Initialize the memory controller. With new memory map, the fb location > * is not changed, it should have been properly initialized already. 
Part > * of the problem is that the code below is bogus, assuming the GART is > * always appended to the fb which is not necessarily the case > */ > if (!dev_priv->new_memmap) >- RADEON_WRITE(RADEON_MC_FB_LOCATION, >+ radeon_write_fb_location(dev_priv, > ((dev_priv->gart_vm_start - 1) & 0xffff0000) > | (dev_priv->fb_location >> 16)); > > #if __OS_HAS_AGP > if (dev_priv->flags & RADEON_IS_AGP) { > RADEON_WRITE(RADEON_AGP_BASE, (unsigned int)dev->agp->base); >- RADEON_WRITE(RADEON_MC_AGP_LOCATION, >+ radeon_write_agp_location(dev_priv, > (((dev_priv->gart_vm_start - 1 + > dev_priv->gart_size) & 0xffff0000) | > (dev_priv->gart_vm_start >> 16))); >@@ -1190,9 +1205,15 @@ static void radeon_cp_init_ring_buffer(s > /* Set ring buffer size */ > #ifdef __BIG_ENDIAN > RADEON_WRITE(RADEON_CP_RB_CNTL, >- dev_priv->ring.size_l2qw | RADEON_BUF_SWAP_32BIT); >+ RADEON_BUF_SWAP_32BIT | >+ (dev_priv->ring.fetch_size_l2ow << 18) | >+ (dev_priv->ring.rptr_update_l2qw << 8) | >+ dev_priv->ring.size_l2qw); > #else >- RADEON_WRITE(RADEON_CP_RB_CNTL, dev_priv->ring.size_l2qw); >+ RADEON_WRITE(RADEON_CP_RB_CNTL, >+ (dev_priv->ring.fetch_size_l2ow << 18) | >+ (dev_priv->ring.rptr_update_l2qw << 8) | >+ dev_priv->ring.size_l2qw); > #endif > > /* Start with assuming that writeback doesn't work */ >@@ -1269,9 +1290,8 @@ static void radeon_test_writeback(drm_ra > } > > if (!dev_priv->writeback_works) { >- /* Disable writeback to avoid unnecessary bus master transfer */ >- RADEON_WRITE(RADEON_CP_RB_CNTL, RADEON_READ(RADEON_CP_RB_CNTL) | >- RADEON_RB_NO_UPDATE); >+ /* Disable writeback to avoid unnecessary bus master transfers */ >+ RADEON_WRITE(RADEON_CP_RB_CNTL, RADEON_READ(RADEON_CP_RB_CNTL) | RADEON_RB_NO_UPDATE); > RADEON_WRITE(RADEON_SCRATCH_UMSK, 0); > } > } >@@ -1282,6 +1302,7 @@ static void radeon_set_igpgart(drm_radeo > u32 temp, tmp; > > tmp = RADEON_READ(RADEON_AIC_CNTL); >+ DRM_DEBUG("setting igpgart AIC CNTL is %08X\n", tmp); > if (on) { > DRM_DEBUG("programming igpgart %08X %08lX 
%08X\n", > dev_priv->gart_vm_start, >@@ -1299,7 +1320,7 @@ static void radeon_set_igpgart(drm_radeo > > RADEON_WRITE(RADEON_AGP_BASE, (unsigned int)dev_priv->gart_vm_start); > dev_priv->gart_size = 32*1024*1024; >- RADEON_WRITE(RADEON_MC_AGP_LOCATION, >+ radeon_write_agp_location(dev_priv, > (((dev_priv->gart_vm_start - 1 + > dev_priv->gart_size) & 0xffff0000) | > (dev_priv->gart_vm_start >> 16))); >@@ -1333,7 +1354,7 @@ static void radeon_set_pciegart(drm_rade > dev_priv->gart_vm_start + > dev_priv->gart_size - 1); > >- RADEON_WRITE(RADEON_MC_AGP_LOCATION, 0xffffffc0); /* ?? */ >+ radeon_write_agp_location(dev_priv, 0xffffffc0); /* ?? */ > > RADEON_WRITE_PCIE(RADEON_PCIE_TX_GART_CNTL, > RADEON_PCIE_TX_GART_EN); >@@ -1358,7 +1379,7 @@ static void radeon_set_pcigart(drm_radeo > return; > } > >- tmp = RADEON_READ(RADEON_AIC_CNTL); >+ tmp = RADEON_READ(RADEON_AIC_CNTL); > > if (on) { > RADEON_WRITE(RADEON_AIC_CNTL, >@@ -1376,7 +1397,7 @@ static void radeon_set_pcigart(drm_radeo > > /* Turn off AGP aperture -- is this required for PCI GART? > */ >- RADEON_WRITE(RADEON_MC_AGP_LOCATION, 0xffffffc0); /* ?? 
*/
>+ radeon_write_agp_location(dev_priv, 0xffffffc0);
> RADEON_WRITE(RADEON_AGP_COMMAND, 0); /* clear AGP_COMMAND */
> } else {
> RADEON_WRITE(RADEON_AIC_CNTL,
>@@ -1397,11 +1418,14 @@ static int radeon_do_init_cp(struct drm_
> return -EINVAL;
> }
>
>- if (init->is_pci && (dev_priv->flags & RADEON_IS_AGP)) {
>+ if (init->is_pci && (dev_priv->flags & RADEON_IS_AGP))
>+ {
> DRM_DEBUG("Forcing AGP card to PCI mode\n");
> dev_priv->flags &= ~RADEON_IS_AGP;
>- } else if (!(dev_priv->flags & (RADEON_IS_AGP | RADEON_IS_PCI | RADEON_IS_PCIE))
>- && !init->is_pci) {
>+ }
>+ else if (!(dev_priv->flags & (RADEON_IS_AGP | RADEON_IS_PCI | RADEON_IS_PCIE))
>+ && !init->is_pci)
>+ {
> DRM_DEBUG("Restoring AGP flag\n");
> dev_priv->flags |= RADEON_IS_AGP;
> }
>@@ -1581,10 +1605,9 @@ static int radeon_do_init_cp(struct drm_
> dev->agp_buffer_map->handle);
> }
>
>- dev_priv->fb_location = (RADEON_READ(RADEON_MC_FB_LOCATION)
>- & 0xffff) << 16;
>- dev_priv->fb_size =
>- ((RADEON_READ(RADEON_MC_FB_LOCATION) & 0xffff0000u) + 0x10000)
>+ dev_priv->fb_location = (radeon_read_fb_location(dev_priv) & 0xffff) << 16;
>+ dev_priv->fb_size =
>+ ((radeon_read_fb_location(dev_priv) & 0xffff0000u) + 0x10000)
> - dev_priv->fb_location;
>
> dev_priv->front_pitch_offset = (((dev_priv->front_pitch / 64) << 22) |
>@@ -1630,7 +1653,7 @@ static int radeon_do_init_cp(struct drm_
> ((base + dev_priv->gart_size) & 0xfffffffful) < base)
> base = dev_priv->fb_location
> - dev_priv->gart_size;
>- }
>+ }
> dev_priv->gart_vm_start = base & 0xffc00000u;
> if (dev_priv->gart_vm_start != base)
> DRM_INFO("GART aligned down from 0x%08x to 0x%08x\n",
>@@ -1663,6 +1686,12 @@ static int radeon_do_init_cp(struct drm_
> dev_priv->ring.size = init->ring_size;
> dev_priv->ring.size_l2qw = drm_order(init->ring_size / 8);
>
>+ dev_priv->ring.rptr_update = /* init->rptr_update */ 4096;
>+ dev_priv->ring.rptr_update_l2qw = drm_order( /* init->rptr_update */ 4096 / 8);
>+
>+ dev_priv->ring.fetch_size = /* init->fetch_size */ 32;
>+ dev_priv->ring.fetch_size_l2ow = drm_order( /* init->fetch_size */ 32 / 16);
>+
> dev_priv->ring.tail_mask = (dev_priv->ring.size / sizeof(u32)) - 1;
>
> dev_priv->ring.high_mark = RADEON_RING_HIGH_MARK;
>@@ -1922,8 +1951,13 @@ void radeon_do_release(struct drm_device
> #ifdef __linux__
> schedule();
> #else
>+#if defined(__FreeBSD__) && __FreeBSD_version > 500000
>+ mtx_sleep(&ret, &dev->dev_lock, PZERO, "rdnrel",
>+ 1);
>+#else
> tsleep(&ret, PZERO, "rdnrel", 1);
> #endif
>+#endif
> }
> radeon_do_cp_stop(dev_priv);
> radeon_do_engine_reset(dev);
>diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/radeon_drm.h linux-2.6.23.i686/drivers/char/drm/radeon_drm.h
>--- linux-2.6.23.i686.orig/drivers/char/drm/radeon_drm.h 2007-10-09 22:31:38.000000000 +0200
>+++ linux-2.6.23.i686/drivers/char/drm/radeon_drm.h 2008-01-06 09:24:57.000000000 +0100
>@@ -223,10 +223,10 @@ typedef union {
> #define R300_CMD_CP_DELAY 5
> #define R300_CMD_DMA_DISCARD 6
> #define R300_CMD_WAIT 7
>-# define R300_WAIT_2D 0x1
>-# define R300_WAIT_3D 0x2
>-# define R300_WAIT_2D_CLEAN 0x3
>-# define R300_WAIT_3D_CLEAN 0x4
>+# define R300_WAIT_2D 0x1
>+# define R300_WAIT_3D 0x2
>+# define R300_WAIT_2D_CLEAN 0x3
>+# define R300_WAIT_3D_CLEAN 0x4
> #define R300_CMD_SCRATCH 8
>
> typedef union {
>@@ -510,7 +510,7 @@ typedef struct drm_radeon_init {
> RADEON_INIT_R300_CP = 0x04
> } func;
> unsigned long sarea_priv_offset;
>- int is_pci;
>+ int is_pci; /* for overriding only */
> int cp_mode;
> int gart_size;
> int ring_size;
>@@ -522,8 +522,8 @@ typedef struct drm_radeon_init {
> unsigned int depth_bpp;
> unsigned int depth_offset, depth_pitch;
>
>- unsigned long fb_offset;
>- unsigned long mmio_offset;
>+ unsigned long fb_offset DEPRECATED; /* deprecated, driver asks hardware */
>+ unsigned long mmio_offset DEPRECATED; /* deprecated, driver asks hardware */
> unsigned long ring_offset;
> unsigned long ring_rptr_offset;
> unsigned long buffers_offset;
>@@ -656,6 +656,7 @@ typedef struct drm_radeon_indirect {
> #define RADEON_PARAM_SCRATCH_OFFSET 11
> #define RADEON_PARAM_CARD_TYPE 12
> #define RADEON_PARAM_VBLANK_CRTC 13 /* VBLANK CRTC */
>+#define RADEON_PARAM_FB_LOCATION 14 /* FB location */
>
> typedef struct drm_radeon_getparam {
> int param;
>@@ -707,6 +708,7 @@ typedef struct drm_radeon_setparam {
> #define RADEON_SETPARAM_FB_LOCATION 1 /* determined framebuffer location */
> #define RADEON_SETPARAM_SWITCH_TILING 2 /* enable/disable color tiling */
> #define RADEON_SETPARAM_PCIGART_LOCATION 3 /* PCI Gart Location */
>+
> #define RADEON_SETPARAM_NEW_MEMMAP 4 /* Use new memory map */
> #define RADEON_SETPARAM_PCIGART_TABLE_SIZE 5 /* PCI GART Table Size */
> #define RADEON_SETPARAM_VBLANK_CRTC 6 /* VBLANK CRTC */
>@@ -722,7 +724,7 @@ typedef struct drm_radeon_surface_free {
> unsigned int address;
> } drm_radeon_surface_free_t;
>
>-#define DRM_RADEON_VBLANK_CRTC1 1
>-#define DRM_RADEON_VBLANK_CRTC2 2
>+#define DRM_RADEON_VBLANK_CRTC1 1
>+#define DRM_RADEON_VBLANK_CRTC2 2
>
> #endif
>diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/radeon_drv.c linux-2.6.23.i686/drivers/char/drm/radeon_drv.c
>--- linux-2.6.23.i686.orig/drivers/char/drm/radeon_drv.c 2007-10-09 22:31:38.000000000 +0200
>+++ linux-2.6.23.i686/drivers/char/drm/radeon_drv.c 2008-01-06 09:24:57.000000000 +0100
>@@ -41,21 +41,22 @@ int radeon_no_wb;
> MODULE_PARM_DESC(no_wb, "Disable AGP writeback for scratch registers\n");
> module_param_named(no_wb, radeon_no_wb, int, 0444);
>
>-static int dri_library_name(struct drm_device *dev, char *buf)
>+static int dri_library_name(struct drm_device * dev, char * buf)
> {
> drm_radeon_private_t *dev_priv = dev->dev_private;
> int family = dev_priv->flags & RADEON_FAMILY_MASK;
>
> return snprintf(buf, PAGE_SIZE, "%s\n",
>- (family < CHIP_R200) ? "radeon" :
>- ((family < CHIP_R300) ? "r200" :
>- "r300"));
>+ (family < CHIP_R200) ? "radeon" :
>+ ((family < CHIP_R300) ? "r200" :
>+ "r300"));
> }
>
> static struct pci_device_id pciidlist[] = {
> radeon_PCI_IDS
> };
>
>+static int probe(struct pci_dev *pdev, const struct pci_device_id *ent);
> static struct drm_driver driver = {
> .driver_features =
> DRIVER_USE_AGP | DRIVER_USE_MTRR | DRIVER_PCI_DMA | DRIVER_SG |
>@@ -82,21 +83,22 @@ static struct drm_driver driver = {
> .ioctls = radeon_ioctls,
> .dma_ioctl = radeon_cp_buffers,
> .fops = {
>- .owner = THIS_MODULE,
>- .open = drm_open,
>- .release = drm_release,
>- .ioctl = drm_ioctl,
>- .mmap = drm_mmap,
>- .poll = drm_poll,
>- .fasync = drm_fasync,
>-#ifdef CONFIG_COMPAT
>- .compat_ioctl = radeon_compat_ioctl,
>+ .owner = THIS_MODULE,
>+ .open = drm_open,
>+ .release = drm_release,
>+ .ioctl = drm_ioctl,
>+ .mmap = drm_mmap,
>+ .poll = drm_poll,
>+ .fasync = drm_fasync,
>+#if defined(CONFIG_COMPAT) && LINUX_VERSION_CODE > KERNEL_VERSION(2,6,9)
>+ .compat_ioctl = radeon_compat_ioctl,
> #endif
>- },
>-
>+ },
> .pci_driver = {
>- .name = DRIVER_NAME,
>- .id_table = pciidlist,
>+ .name = DRIVER_NAME,
>+ .id_table = pciidlist,
>+ .probe = probe,
>+ .remove = __devexit_p(drm_cleanup_pci),
> },
>
> .name = DRIVER_NAME,
>@@ -107,10 +109,15 @@ static struct drm_driver driver = {
> .patchlevel = DRIVER_PATCHLEVEL,
> };
>
>+static int probe(struct pci_dev *pdev, const struct pci_device_id *ent)
>+{
>+ return drm_get_dev(pdev, ent, &driver);
>+}
>+
> static int __init radeon_init(void)
> {
> driver.num_ioctls = radeon_max_ioctl;
>- return drm_init(&driver);
>+ return drm_init(&driver, pciidlist);
> }
>
> static void __exit radeon_exit(void)
>diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/radeon_drv.h linux-2.6.23.i686/drivers/char/drm/radeon_drv.h
>--- linux-2.6.23.i686.orig/drivers/char/drm/radeon_drv.h 2008-01-06 18:54:41.000000000 +0100
>+++ linux-2.6.23.i686/drivers/char/drm/radeon_drv.h 2008-01-06 09:24:57.000000000 +0100
>@@ -99,6 +99,7 @@
> * 1.27- Add support for IGP GART
> * 1.28- Add support for VBL on CRTC2
> */
>+
> #define DRIVER_MAJOR 1
> #define DRIVER_MINOR 28
> #define DRIVER_PATCHLEVEL 0
>@@ -163,8 +164,14 @@ typedef struct drm_radeon_freelist {
> typedef struct drm_radeon_ring_buffer {
> u32 *start;
> u32 *end;
>- int size;
>- int size_l2qw;
>+ int size; /* Double Words */
>+ int size_l2qw; /* log2 Quad Words */
>+
>+ int rptr_update; /* Double Words */
>+ int rptr_update_l2qw; /* log2 Quad Words */
>+
>+ int fetch_size; /* Double Words */
>+ int fetch_size_l2ow; /* log2 Oct Words */
>
> u32 tail;
> u32 tail_mask;
>@@ -207,6 +214,7 @@ struct radeon_virt_surface {
> };
>
> typedef struct drm_radeon_private {
>+
> drm_radeon_ring_buffer_t ring;
> drm_radeon_sarea_t *sarea_priv;
>
>@@ -294,6 +302,7 @@ typedef struct drm_radeon_private {
> /* starting from here on, data is preserved accross an open */
> uint32_t flags; /* see radeon_chip_flags */
> unsigned long fb_aper_offset;
>+
> } drm_radeon_private_t;
>
> typedef struct drm_radeon_buf_priv {
>@@ -336,6 +345,7 @@ extern int radeon_cp_resume(struct drm_d
> extern int radeon_engine_reset(struct drm_device *dev, void *data, struct drm_file *file_priv);
> extern int radeon_fullscreen(struct drm_device *dev, void *data, struct drm_file *file_priv);
> extern int radeon_cp_buffers(struct drm_device *dev, void *data, struct drm_file *file_priv);
>+extern u32 radeon_read_fb_location(drm_radeon_private_t *dev_priv);
>
> extern void radeon_freelist_reset(struct drm_device * dev);
> extern struct drm_buf *radeon_freelist_get(struct drm_device * dev);
>@@ -344,10 +354,6 @@ extern int radeon_wait_ring(drm_radeon_p
>
> extern int radeon_do_cp_idle(drm_radeon_private_t * dev_priv);
>
>-extern int radeon_driver_preinit(struct drm_device *dev, unsigned long flags);
>-extern int radeon_presetup(struct drm_device *dev);
>-extern int radeon_driver_postcleanup(struct drm_device *dev);
>-
> extern int radeon_mem_alloc(struct drm_device *dev, void *data, struct drm_file *file_priv);
> extern int radeon_mem_free(struct drm_device *dev, void *data, struct drm_file *file_priv);
> extern int radeon_mem_init_heap(struct drm_device *dev, void *data, struct drm_file *file_priv);
>@@ -374,19 +380,22 @@ extern int radeon_vblank_crtc_set(struct
> extern int radeon_driver_load(struct drm_device *dev, unsigned long flags);
> extern int radeon_driver_unload(struct drm_device *dev);
> extern int radeon_driver_firstopen(struct drm_device *dev);
>-extern void radeon_driver_preclose(struct drm_device * dev, struct drm_file *file_priv);
>-extern void radeon_driver_postclose(struct drm_device * dev, struct drm_file * filp);
>+extern void radeon_driver_preclose(struct drm_device * dev,
>+ struct drm_file *file_priv);
>+extern void radeon_driver_postclose(struct drm_device * dev,
>+ struct drm_file *file_priv);
> extern void radeon_driver_lastclose(struct drm_device * dev);
>-extern int radeon_driver_open(struct drm_device * dev, struct drm_file * filp_priv);
>+extern int radeon_driver_open(struct drm_device * dev,
>+ struct drm_file * file_priv);
> extern long radeon_compat_ioctl(struct file *filp, unsigned int cmd,
>- unsigned long arg);
>+ unsigned long arg);
>
> /* r300_cmdbuf.c */
> extern void r300_init_reg_flags(void);
>
>-extern int r300_do_cp_cmdbuf(struct drm_device * dev,
>+extern int r300_do_cp_cmdbuf(struct drm_device *dev,
> struct drm_file *file_priv,
>- drm_radeon_kcmd_buffer_t * cmdbuf);
>+ drm_radeon_kcmd_buffer_t *cmdbuf);
>
> /* Flags for stats.boxes
> */
>@@ -399,10 +408,9 @@ extern int r300_do_cp_cmdbuf(struct drm_
> /* Register definitions, register access macros and drmAddMap constants
> * for Radeon kernel driver.
> */
>-
> #define RADEON_AGP_COMMAND 0x0f60
>-#define RADEON_AGP_COMMAND_PCI_CONFIG 0x0060 /* offset in PCI config */
>-# define RADEON_AGP_ENABLE (1<<8)
>+#define RADEON_AGP_COMMAND_PCI_CONFIG 0x0060 /* offset in PCI config */
>+# define RADEON_AGP_ENABLE (1<<8)
> #define RADEON_AUX_SCISSOR_CNTL 0x26f0
> # define RADEON_EXCLUSIVE_SCISSOR_0 (1 << 24)
> # define RADEON_EXCLUSIVE_SCISSOR_1 (1 << 25)
>@@ -418,7 +426,7 @@ extern int r300_do_cp_cmdbuf(struct drm_
> # define RADEON_PLL_WR_EN (1 << 7)
> #define RADEON_CLOCK_CNTL_INDEX 0x0008
> #define RADEON_CONFIG_APER_SIZE 0x0108
>-#define RADEON_CONFIG_MEMSIZE 0x00f8
>+#define RADEON_CONFIG_MEMSIZE 0x00f8
> #define RADEON_CRTC_OFFSET 0x0224
> #define RADEON_CRTC_OFFSET_CNTL 0x0228
> # define RADEON_CRTC_TILE_EN (1 << 15)
>@@ -429,7 +437,7 @@ extern int r300_do_cp_cmdbuf(struct drm_
> #define RADEON_PCIE_INDEX 0x0030
> #define RADEON_PCIE_DATA 0x0034
> #define RADEON_PCIE_TX_GART_CNTL 0x10
>-# define RADEON_PCIE_TX_GART_EN (1 << 0)
>+# define RADEON_PCIE_TX_GART_EN (1 << 0)
> # define RADEON_PCIE_TX_GART_UNMAPPED_ACCESS_PASS_THRU (0<<1)
> # define RADEON_PCIE_TX_GART_UNMAPPED_ACCESS_CLAMP_LO (1<<1)
> # define RADEON_PCIE_TX_GART_UNMAPPED_ACCESS_DISCARD (3<<1)
>@@ -439,7 +447,7 @@ extern int r300_do_cp_cmdbuf(struct drm_
> # define RADEON_PCIE_TX_GART_INVALIDATE_TLB (1<<8)
> #define RADEON_PCIE_TX_DISCARD_RD_ADDR_LO 0x11
> #define RADEON_PCIE_TX_DISCARD_RD_ADDR_HI 0x12
>-#define RADEON_PCIE_TX_GART_BASE 0x13
>+#define RADEON_PCIE_TX_GART_BASE 0x13
> #define RADEON_PCIE_TX_GART_START_LO 0x14
> #define RADEON_PCIE_TX_GART_START_HI 0x15
> #define RADEON_PCIE_TX_GART_END_LO 0x16
>@@ -454,6 +462,7 @@ extern int r300_do_cp_cmdbuf(struct drm_
> #define RADEON_IGPGART_ENABLE 0x38
> #define RADEON_IGPGART_UNK_39 0x39
>
>+
> #define RADEON_MPP_TB_CONFIG 0x01c0
> #define RADEON_MEM_CNTL 0x0140
> #define RADEON_MEM_SDRAM_MODE_REG 0x0158
>@@ -512,12 +521,12 @@ extern int r300_do_cp_cmdbuf(struct drm_
>
> #define RADEON_GEN_INT_STATUS 0x0044
> # define RADEON_CRTC_VBLANK_STAT (1 << 0)
>-# define RADEON_CRTC_VBLANK_STAT_ACK (1 << 0)
>+# define RADEON_CRTC_VBLANK_STAT_ACK (1 << 0)
> # define RADEON_CRTC2_VBLANK_STAT (1 << 9)
>-# define RADEON_CRTC2_VBLANK_STAT_ACK (1 << 9)
>+# define RADEON_CRTC2_VBLANK_STAT_ACK (1 << 9)
> # define RADEON_GUI_IDLE_INT_TEST_ACK (1 << 19)
> # define RADEON_SW_INT_TEST (1 << 25)
>-# define RADEON_SW_INT_TEST_ACK (1 << 25)
>+# define RADEON_SW_INT_TEST_ACK (1 << 25)
> # define RADEON_SW_INT_FIRE (1 << 26)
>
> #define RADEON_HOST_PATH_CNTL 0x0130
>@@ -589,7 +598,7 @@ extern int r300_do_cp_cmdbuf(struct drm_
> # define RADEON_RB3D_ZC_FREE (1 << 2)
> # define RADEON_RB3D_ZC_FLUSH_ALL 0x5
> # define RADEON_RB3D_ZC_BUSY (1 << 31)
>-#define RADEON_RB3D_DSTCACHE_CTLSTAT 0x325c
>+#define RADEON_RB3D_DSTCACHE_CTLSTAT 0x325c
> # define RADEON_RB3D_DC_FLUSH (3 << 0)
> # define RADEON_RB3D_DC_FREE (3 << 2)
> # define RADEON_RB3D_DC_FLUSH_ALL 0xf
>@@ -597,15 +606,15 @@ extern int r300_do_cp_cmdbuf(struct drm_
> #define RADEON_RB3D_ZSTENCILCNTL 0x1c2c
> # define RADEON_Z_TEST_MASK (7 << 4)
> # define RADEON_Z_TEST_ALWAYS (7 << 4)
>-# define RADEON_Z_HIERARCHY_ENABLE (1 << 8)
>+# define RADEON_Z_HIERARCHY_ENABLE (1 << 8)
> # define RADEON_STENCIL_TEST_ALWAYS (7 << 12)
> # define RADEON_STENCIL_S_FAIL_REPLACE (2 << 16)
> # define RADEON_STENCIL_ZPASS_REPLACE (2 << 20)
> # define RADEON_STENCIL_ZFAIL_REPLACE (2 << 24)
>-# define RADEON_Z_COMPRESSION_ENABLE (1 << 28)
>-# define RADEON_FORCE_Z_DIRTY (1 << 29)
>+# define RADEON_Z_COMPRESSION_ENABLE (1 << 28)
>+# define RADEON_FORCE_Z_DIRTY (1 << 29)
> # define RADEON_Z_WRITE_ENABLE (1 << 30)
>-# define RADEON_Z_DECOMPRESSION_ENABLE (1 << 31)
>+# define RADEON_Z_DECOMPRESSION_ENABLE (1 << 31)
> #define RADEON_RBBM_SOFT_RESET 0x00f0
> # define RADEON_SOFT_RESET_CP (1 << 0)
> # define RADEON_SOFT_RESET_HI (1 << 1)
>@@ -615,9 +624,51 @@ extern int r300_do_cp_cmdbuf(struct drm_
> # define RADEON_SOFT_RESET_E2 (1 << 5)
> # define RADEON_SOFT_RESET_RB (1 << 6)
> # define RADEON_SOFT_RESET_HDP (1 << 7)
>+/*
>+ * 6:0 Available slots in the FIFO
>+ * 8 Host Interface active
>+ * 9 CP request active
>+ * 10 FIFO request active
>+ * 11 Host Interface retry active
>+ * 12 CP retry active
>+ * 13 FIFO retry active
>+ * 14 FIFO pipeline busy
>+ * 15 Event engine busy
>+ * 16 CP command stream busy
>+ * 17 2D engine busy
>+ * 18 2D portion of render backend busy
>+ * 20 3D setup engine busy
>+ * 26 GA engine busy
>+ * 27 CBA 2D engine busy
>+ * 31 2D engine busy or 3D engine busy or FIFO not empty or CP busy or
>+ * command stream queue not empty or Ring Buffer not empty
>+ */
> #define RADEON_RBBM_STATUS 0x0e40
>+/* Same as the previous RADEON_RBBM_STATUS; this is a mirror of that register. */
>+/* #define RADEON_RBBM_STATUS 0x1740 */
>+/* bits 6:0 are dword slots available in the cmd fifo */
> # define RADEON_RBBM_FIFOCNT_MASK 0x007f
>-# define RADEON_RBBM_ACTIVE (1 << 31)
>+# define RADEON_HIRQ_ON_RBB (1 << 8)
>+# define RADEON_CPRQ_ON_RBB (1 << 9)
>+# define RADEON_CFRQ_ON_RBB (1 << 10)
>+# define RADEON_HIRQ_IN_RTBUF (1 << 11)
>+# define RADEON_CPRQ_IN_RTBUF (1 << 12)
>+# define RADEON_CFRQ_IN_RTBUF (1 << 13)
>+# define RADEON_PIPE_BUSY (1 << 14)
>+# define RADEON_ENG_EV_BUSY (1 << 15)
>+# define RADEON_CP_CMDSTRM_BUSY (1 << 16)
>+# define RADEON_E2_BUSY (1 << 17)
>+# define RADEON_RB2D_BUSY (1 << 18)
>+# define RADEON_RB3D_BUSY (1 << 19) /* not used on r300 */
>+# define RADEON_VAP_BUSY (1 << 20)
>+# define RADEON_RE_BUSY (1 << 21) /* not used on r300 */
>+# define RADEON_TAM_BUSY (1 << 22) /* not used on r300 */
>+# define RADEON_TDM_BUSY (1 << 23) /* not used on r300 */
>+# define RADEON_PB_BUSY (1 << 24) /* not used on r300 */
>+# define RADEON_TIM_BUSY (1 << 25) /* not used on r300 */
>+# define RADEON_GA_BUSY (1 << 26)
>+# define RADEON_CBA2D_BUSY (1 << 27)
>+# define RADEON_RBBM_ACTIVE (1 << 31)
> #define RADEON_RE_LINE_PATTERN 0x1cd0
> #define RADEON_RE_MISC 0x26c4
> #define RADEON_RE_TOP_LEFT 0x26c0
>@@ -769,7 +820,7 @@ extern int r300_do_cp_cmdbuf(struct drm_
> # define RADEON_CP_NEXT_CHAR 0x00001900
> # define RADEON_CP_PLY_NEXTSCAN 0x00001D00
> # define RADEON_CP_SET_SCISSORS 0x00001E00
>- /* GEN_INDX_PRIM is unsupported starting with R300 */
>+ /* GEN_INDX_PRIM is unsupported starting with R300 */
> # define RADEON_3D_RNDR_GEN_INDX_PRIM 0x00002300
> # define RADEON_WAIT_FOR_IDLE 0x00002600
> # define RADEON_3D_DRAW_VBUF 0x00002800
>@@ -954,13 +1005,43 @@ extern int r300_do_cp_cmdbuf(struct drm_
>
> #define R200_SE_TCL_POINT_SPRITE_CNTL 0x22c4
>
>-#define R200_PP_TRI_PERF 0x2cf8
>+#define R200_PP_TRI_PERF 0x2cf8
>
> #define R200_PP_AFS_0 0x2f80
>-#define R200_PP_AFS_1 0x2f00 /* same as txcblend_0 */
>+#define R200_PP_AFS_1 0x2f00 /* same as txcblend_0 */
>
> #define R200_VAP_PVS_CNTL_1 0x22D0
>
>+/* MPEG settings from VHA code */
>+#define RADEON_VHA_SETTO16_1 0x2694
>+#define RADEON_VHA_SETTO16_2 0x2680
>+#define RADEON_VHA_SETTO0_1 0x1840
>+#define RADEON_VHA_FB_OFFSET 0x19e4
>+#define RADEON_VHA_SETTO1AND70S 0x19d8
>+#define RADEON_VHA_DST_PITCH 0x1408
>+
>+// set as reference header
>+#define RADEON_VHA_BACKFRAME0_OFF_Y 0x1840
>+#define RADEON_VHA_BACKFRAME1_OFF_PITCH_Y 0x1844
>+#define RADEON_VHA_BACKFRAME0_OFF_U 0x1848
>+#define RADEON_VHA_BACKFRAME1_OFF_PITCH_U 0x184c
>+#define RADOEN_VHA_BACKFRAME0_OFF_V 0x1850
>+#define RADEON_VHA_BACKFRAME1_OFF_PITCH_V 0x1854
>+#define RADEON_VHA_FORWFRAME0_OFF_Y 0x1858
>+#define RADEON_VHA_FORWFRAME1_OFF_PITCH_Y 0x185c
>+#define RADEON_VHA_FORWFRAME0_OFF_U 0x1860
>+#define RADEON_VHA_FORWFRAME1_OFF_PITCH_U 0x1864
>+#define RADEON_VHA_FORWFRAME0_OFF_V 0x1868
>+#define RADEON_VHA_FORWFRAME0_OFF_PITCH_V 0x1880
>+#define RADEON_VHA_BACKFRAME0_OFF_Y_2 0x1884
>+#define RADEON_VHA_BACKFRAME1_OFF_PITCH_Y_2 0x1888
>+#define RADEON_VHA_BACKFRAME0_OFF_U_2 0x188c
>+#define RADEON_VHA_BACKFRAME1_OFF_PITCH_U_2 0x1890
>+#define RADEON_VHA_BACKFRAME0_OFF_V_2 0x1894
>+#define RADEON_VHA_BACKFRAME1_OFF_PITCH_V_2 0x1898
>+
>+
>+
> /* Constants */
> #define RADEON_MAX_USEC_TIMEOUT 100000 /* 100 ms */
>
>@@ -1118,7 +1199,7 @@ do { \
> n, __FUNCTION__ ); \
> } \
> if ( dev_priv->ring.space <= (n) * sizeof(u32) ) { \
>- COMMIT_RING(); \
>+ COMMIT_RING(); \
> radeon_wait_ring( dev_priv, (n) * sizeof(u32) ); \
> } \
> _nr = n; dev_priv->ring.space -= (n) * sizeof(u32); \
>@@ -1133,7 +1214,7 @@ do { \
> write, dev_priv->ring.tail ); \
> } \
> if (((dev_priv->ring.tail + _nr) & mask) != write) { \
>- DRM_ERROR( \
>+ DRM_ERROR( \
> "ADVANCE_RING(): mismatch: nr: %x write: %x line: %d\n", \
> ((dev_priv->ring.tail + _nr) & mask), \
> write, __LINE__); \
>@@ -1164,14 +1245,14 @@ do { \
> OUT_RING( val ); \
> } while (0)
>
>-#define OUT_RING_TABLE( tab, sz ) do { \
>+#define OUT_RING_TABLE( tab, sz ) do { \
> int _size = (sz); \
> int *_tab = (int *)(tab); \
> \
> if (write + _size > mask) { \
> int _i = (mask+1) - write; \
> _size -= _i; \
>- while (_i > 0 ) { \
>+ while (_i > 0) { \
> *(int *)(ring + write) = *_tab++; \
> write++; \
> _i--; \
>diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/radeon_ioc32.c linux-2.6.23.i686/drivers/char/drm/radeon_ioc32.c
>--- linux-2.6.23.i686.orig/drivers/char/drm/radeon_ioc32.c 2007-10-09 22:31:38.000000000 +0200
>+++ linux-2.6.23.i686/drivers/char/drm/radeon_ioc32.c 2008-01-06 09:24:57.000000000 +0100
>@@ -92,8 +92,8 @@ static int compat_radeon_cp_init(struct
> &init->gart_textures_offset))
> return -EFAULT;
>
>- return drm_ioctl(file->f_path.dentry->d_inode, file,
>- DRM_IOCTL_RADEON_CP_INIT, (unsigned long)init);
>+ return drm_ioctl(file->f_dentry->d_inode, file,
>+ DRM_IOCTL_RADEON_CP_INIT, (unsigned long) init);
> }
>
> typedef struct drm_radeon_clear32 {
>@@ -101,8 +101,8 @@ typedef struct drm_radeon_clear32 {
> unsigned int clear_color;
> unsigned int clear_depth;
> unsigned int color_mask;
>- unsigned int depth_mask; /* misnamed field: should be stencil */
>- u32 depth_boxes;
>+ unsigned int depth_mask; /* misnamed field: should be stencil */
>+ u32 depth_boxes;
> } drm_radeon_clear32_t;
>
> static int compat_radeon_cp_clear(struct file *file, unsigned int cmd,
>@@ -125,8 +125,8 @@ static int compat_radeon_cp_clear(struct
> &clr->depth_boxes))
> return -EFAULT;
>
>- return drm_ioctl(file->f_path.dentry->d_inode, file,
>- DRM_IOCTL_RADEON_CLEAR, (unsigned long)clr);
>+ return drm_ioctl(file->f_dentry->d_inode, file,
>+ DRM_IOCTL_RADEON_CLEAR, (unsigned long) clr);
> }
>
> typedef struct drm_radeon_stipple32 {
>@@ -145,16 +145,16 @@ static int compat_radeon_cp_stipple(stru
>
> request = compat_alloc_user_space(sizeof(*request));
> if (!access_ok(VERIFY_WRITE, request, sizeof(*request))
>- || __put_user((unsigned int __user *)(unsigned long)mask,
>+ || __put_user((unsigned int __user *)(unsigned long) mask,
> &request->mask))
> return -EFAULT;
>
>- return drm_ioctl(file->f_path.dentry->d_inode, file,
>- DRM_IOCTL_RADEON_STIPPLE, (unsigned long)request);
>+ return drm_ioctl(file->f_dentry->d_inode, file,
>+ DRM_IOCTL_RADEON_STIPPLE, (unsigned long) request);
> }
>
> typedef struct drm_radeon_tex_image32 {
>- unsigned int x, y; /* Blit coordinates */
>+ unsigned int x, y; /* Blit coordinates */
> unsigned int width, height;
> u32 data;
> } drm_radeon_tex_image32_t;
>@@ -163,7 +163,7 @@ typedef struct drm_radeon_texture32 {
> unsigned int offset;
> int pitch;
> int format;
>- int width; /* Texture image coordinates */
>+ int width; /* Texture image coordinates */
> int height;
> u32 image;
> } drm_radeon_texture32_t;
>@@ -204,13 +204,13 @@ static int compat_radeon_cp_texture(stru
> &image->data))
> return -EFAULT;
>
>- return drm_ioctl(file->f_path.dentry->d_inode, file,
>- DRM_IOCTL_RADEON_TEXTURE, (unsigned long)request);
>+ return drm_ioctl(file->f_dentry->d_inode, file,
>+ DRM_IOCTL_RADEON_TEXTURE, (unsigned long) request);
> }
>
> typedef struct drm_radeon_vertex2_32 {
>- int idx; /* Index of vertex buffer */
>- int discard; /* Client finished with buffer? */
>+ int idx; /* Index of vertex buffer */
>+ int discard; /* Client finished with buffer? */
> int nr_states;
> u32 state;
> int nr_prims;
>@@ -238,8 +238,8 @@ static int compat_radeon_cp_vertex2(stru
> &request->prim))
> return -EFAULT;
>
>- return drm_ioctl(file->f_path.dentry->d_inode, file,
>- DRM_IOCTL_RADEON_VERTEX2, (unsigned long)request);
>+ return drm_ioctl(file->f_dentry->d_inode, file,
>+ DRM_IOCTL_RADEON_VERTEX2, (unsigned long) request);
> }
>
> typedef struct drm_radeon_cmd_buffer32 {
>@@ -268,8 +268,8 @@ static int compat_radeon_cp_cmdbuf(struc
> &request->boxes))
> return -EFAULT;
>
>- return drm_ioctl(file->f_path.dentry->d_inode, file,
>- DRM_IOCTL_RADEON_CMDBUF, (unsigned long)request);
>+ return drm_ioctl(file->f_dentry->d_inode, file,
>+ DRM_IOCTL_RADEON_CMDBUF, (unsigned long) request);
> }
>
> typedef struct drm_radeon_getparam32 {
>@@ -293,8 +293,8 @@ static int compat_radeon_cp_getparam(str
> &request->value))
> return -EFAULT;
>
>- return drm_ioctl(file->f_path.dentry->d_inode, file,
>- DRM_IOCTL_RADEON_GETPARAM, (unsigned long)request);
>+ return drm_ioctl(file->f_dentry->d_inode, file,
>+ DRM_IOCTL_RADEON_GETPARAM, (unsigned long) request);
> }
>
> typedef struct drm_radeon_mem_alloc32 {
>@@ -322,8 +322,8 @@ static int compat_radeon_mem_alloc(struc
> &request->region_offset))
> return -EFAULT;
>
>- return drm_ioctl(file->f_path.dentry->d_inode, file,
>- DRM_IOCTL_RADEON_ALLOC, (unsigned long)request);
>+ return drm_ioctl(file->f_dentry->d_inode, file,
>+ DRM_IOCTL_RADEON_ALLOC, (unsigned long) request);
> }
>
> typedef struct drm_radeon_irq_emit32 {
>@@ -345,8 +345,8 @@ static int compat_radeon_irq_emit(struct
> &request->irq_seq))
> return -EFAULT;
>
>- return drm_ioctl(file->f_path.dentry->d_inode, file,
>- DRM_IOCTL_RADEON_IRQ_EMIT, (unsigned long)request);
>+ return drm_ioctl(file->f_dentry->d_inode, file,
>+ DRM_IOCTL_RADEON_IRQ_EMIT, (unsigned long) request);
> }
>
> /* The two 64-bit arches where alignof(u64)==4 in 32-bit code */
>@@ -362,7 +362,7 @@ static int compat_radeon_cp_setparam(str
> drm_radeon_setparam32_t req32;
> drm_radeon_setparam_t __user *request;
>
>- if (copy_from_user(&req32, (void __user *) arg, sizeof(req32)))
>+ if (copy_from_user(&req32, (void __user *)arg, sizeof(req32)))
> return -EFAULT;
>
> request = compat_alloc_user_space(sizeof(*request));
>@@ -415,9 +415,9 @@ long radeon_compat_ioctl(struct file *fi
>
> lock_kernel(); /* XXX for now */
> if (fn != NULL)
>- ret = (*fn) (filp, cmd, arg);
>+ ret = (*fn)(filp, cmd, arg);
> else
>- ret = drm_ioctl(filp->f_path.dentry->d_inode, filp, cmd, arg);
>+ ret = drm_ioctl(filp->f_dentry->d_inode, filp, cmd, arg);
> unlock_kernel();
>
> return ret;
>diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/radeon_irq.c linux-2.6.23.i686/drivers/char/drm/radeon_irq.c
>--- linux-2.6.23.i686.orig/drivers/char/drm/radeon_irq.c 2008-01-06 18:54:41.000000000 +0100
>+++ linux-2.6.23.i686/drivers/char/drm/radeon_irq.c 2008-01-06 09:24:57.000000000 +0100
>@@ -144,8 +144,8 @@ static int radeon_wait_irq(struct drm_de
> return ret;
> }
>
>-int radeon_driver_vblank_do_wait(struct drm_device * dev, unsigned int *sequence,
>- int crtc)
>+static int radeon_driver_vblank_do_wait(struct drm_device * dev,
>+ unsigned int *sequence, int crtc)
> {
> drm_radeon_private_t *dev_priv =
> (drm_radeon_private_t *) dev->dev_private;
>diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/radeon_mem.c linux-2.6.23.i686/drivers/char/drm/radeon_mem.c
>--- linux-2.6.23.i686.orig/drivers/char/drm/radeon_mem.c 2008-01-06 18:54:41.000000000 +0100
>+++ linux-2.6.23.i686/drivers/char/drm/radeon_mem.c 2008-01-06 09:24:57.000000000 +0100
>@@ -100,8 +100,8 @@ static struct mem_block *find_block(stru
> struct mem_block *p;
>
> list_for_each(p, heap)
>- if (p->start == start)
>- return p;
>+ if (p->start == start)
>+ return p;
>
> return NULL;
> }
>diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/radeon_state.c linux-2.6.23.i686/drivers/char/drm/radeon_state.c
>--- linux-2.6.23.i686.orig/drivers/char/drm/radeon_state.c 2008-01-06 18:54:41.000000000 +0100
>+++ linux-2.6.23.i686/drivers/char/drm/radeon_state.c 2008-01-06 09:24:57.000000000 +0100
>@@ -39,8 +39,8 @@
>
> static __inline__ int radeon_check_and_fixup_offset(drm_radeon_private_t *
> dev_priv,
>- struct drm_file * file_priv,
>- u32 *offset)
>+ struct drm_file *file_priv,
>+ u32 * offset)
> {
> u64 off = *offset;
> u32 fb_end = dev_priv->fb_location + dev_priv->fb_size - 1;
>@@ -169,7 +169,7 @@ static __inline__ int radeon_check_and_f
> }
> break;
>
>- case R200_EMIT_VAP_CTL:{
>+ case R200_EMIT_VAP_CTL: {
> RING_LOCALS;
> BEGIN_RING(2);
> OUT_RING_REG(RADEON_SE_TCL_STATE_FLUSH, 0);
>@@ -1861,6 +1861,7 @@ static int radeon_cp_dispatch_texture(st
> OUT_RING((image->width << 16) | height);
> RADEON_WAIT_UNTIL_2D_IDLE();
> ADVANCE_RING();
>+ COMMIT_RING();
>
> radeon_cp_discard_buffer(dev, buf);
>
>@@ -1878,6 +1879,8 @@ static int radeon_cp_dispatch_texture(st
> RADEON_FLUSH_CACHE();
> RADEON_WAIT_UNTIL_2D_IDLE();
> ADVANCE_RING();
>+ COMMIT_RING();
>+
> return 0;
> }
>
>@@ -2080,6 +2083,11 @@ static int radeon_surface_alloc(struct d
> drm_radeon_private_t *dev_priv = dev->dev_private;
> drm_radeon_surface_alloc_t *alloc = data;
>
>+ if (!dev_priv) {
>+ DRM_ERROR("%s called with no initialization\n", __FUNCTION__);
>+ return -EINVAL;
>+ }
>+
> if (alloc_surface(alloc, dev_priv, file_priv) == -1)
> return -EINVAL;
> else
>@@ -2091,6 +2099,11 @@ static int radeon_surface_free(struct dr
> drm_radeon_private_t *dev_priv = dev->dev_private;
> drm_radeon_surface_free_t *memfree = data;
>
>+ if (!dev_priv) {
>+ DRM_ERROR("%s called with no initialization\n", __FUNCTION__);
>+ return -EINVAL;
>+ }
>+
> if (free_surface(file_priv, dev_priv, memfree->address))
> return -EINVAL;
> else
>@@ -2193,7 +2206,7 @@ static int radeon_cp_swap(struct drm_dev
> static int radeon_cp_vertex(struct drm_device *dev, void *data, struct drm_file *file_priv)
> {
> drm_radeon_private_t *dev_priv = dev->dev_private;
>- drm_radeon_sarea_t *sarea_priv = dev_priv->sarea_priv;
>+ drm_radeon_sarea_t *sarea_priv;
> struct drm_device_dma *dma = dev->dma;
> struct drm_buf *buf;
> drm_radeon_vertex_t *vertex = data;
>@@ -2201,6 +2214,13 @@ static int radeon_cp_vertex(struct drm_d
>
> LOCK_TEST_WITH_RETURN(dev, file_priv);
>
>+ if (!dev_priv) {
>+ DRM_ERROR("%s called with no initialization\n", __FUNCTION__);
>+ return -EINVAL;
>+ }
>+
>+ sarea_priv = dev_priv->sarea_priv;
>+
> DRM_DEBUG("pid=%d index=%d count=%d discard=%d\n",
> DRM_CURRENTPID, vertex->idx, vertex->count, vertex->discard);
>
>@@ -2269,7 +2289,7 @@ static int radeon_cp_vertex(struct drm_d
> static int radeon_cp_indices(struct drm_device *dev, void *data, struct drm_file *file_priv)
> {
> drm_radeon_private_t *dev_priv = dev->dev_private;
>- drm_radeon_sarea_t *sarea_priv = dev_priv->sarea_priv;
>+ drm_radeon_sarea_t *sarea_priv;
> struct drm_device_dma *dma = dev->dma;
> struct drm_buf *buf;
> drm_radeon_indices_t *elts = data;
>@@ -2278,6 +2298,12 @@ static int radeon_cp_indices(struct drm_
>
> LOCK_TEST_WITH_RETURN(dev, file_priv);
>
>+ if (!dev_priv) {
>+ DRM_ERROR("%s called with no initialization\n", __FUNCTION__);
>+ return -EINVAL;
>+ }
>+ sarea_priv = dev_priv->sarea_priv;
>+
> DRM_DEBUG("pid=%d index=%d start=%d end=%d discard=%d\n",
> DRM_CURRENTPID, elts->idx, elts->start, elts->end,
> elts->discard);
>@@ -2378,7 +2404,6 @@ static int radeon_cp_texture(struct drm_
>
> ret = radeon_cp_dispatch_texture(dev, file_priv, tex, &image);
>
>- COMMIT_RING();
> return ret;
> }
>
>@@ -2411,6 +2436,11 @@ static int radeon_cp_indirect(struct drm
>
> LOCK_TEST_WITH_RETURN(dev, file_priv);
>
>+ if (!dev_priv) {
>+ DRM_ERROR("%s called with no initialization\n", __FUNCTION__);
>+ return -EINVAL;
>+ }
>+
> DRM_DEBUG("indirect: idx=%d s=%d e=%d d=%d\n",
> indirect->idx, indirect->start, indirect->end,
> indirect->discard);
>@@ -2469,7 +2499,7 @@ static int radeon_cp_indirect(struct drm
> static int radeon_cp_vertex2(struct drm_device *dev, void *data, struct drm_file *file_priv)
> {
> drm_radeon_private_t *dev_priv = dev->dev_private;
>- drm_radeon_sarea_t *sarea_priv = dev_priv->sarea_priv;
>+ drm_radeon_sarea_t *sarea_priv;
> struct drm_device_dma *dma = dev->dma;
> struct drm_buf *buf;
> drm_radeon_vertex2_t *vertex = data;
>@@ -2478,6 +2508,13 @@ static int radeon_cp_vertex2(struct drm_
>
> LOCK_TEST_WITH_RETURN(dev, file_priv);
>
>+ if (!dev_priv) {
>+ DRM_ERROR("%s called with no initialization\n", __FUNCTION__);
>+ return -EINVAL;
>+ }
>+
>+ sarea_priv = dev_priv->sarea_priv;
>+
> DRM_DEBUG("pid=%d index=%d discard=%d\n",
> DRM_CURRENTPID, vertex->idx, vertex->discard);
>
>@@ -2666,10 +2703,10 @@ static __inline__ int radeon_emit_veclin
> int start = header.veclinear.addr_lo | (header.veclinear.addr_hi << 8);
> RING_LOCALS;
>
>- if (!sz)
>- return 0;
>- if (sz * 4 > cmdbuf->bufsz)
>- return -EINVAL;
>+ if (!sz)
>+ return 0;
>+ if (sz * 4 > cmdbuf->bufsz)
>+ return -EINVAL;
>
> BEGIN_RING(5 + sz);
> OUT_RING_REG(RADEON_SE_TCL_STATE_FLUSH, 0);
>@@ -2814,6 +2851,11 @@ static int radeon_cp_cmdbuf(struct drm_d
>
> LOCK_TEST_WITH_RETURN(dev, file_priv);
>
>+ if (!dev_priv) {
>+ DRM_ERROR("%s called with no initialization\n", __FUNCTION__);
>+ return -EINVAL;
>+ }
>+
> RING_SPACE_TEST_WITH_RETURN(dev_priv);
> VB_AGE_TEST_WITH_RETURN(dev_priv);
>
>@@ -2970,6 +3012,11 @@ static int radeon_cp_getparam(struct drm
> drm_radeon_getparam_t *param = data;
> int value;
>
>+ if (!dev_priv) {
>+ DRM_ERROR("%s called with no initialization\n", __FUNCTION__);
>+ return -EINVAL;
>+ }
>+
> DRM_DEBUG("pid=%d\n", DRM_CURRENTPID);
>
> switch (param->param) {
>@@ -2999,11 +3046,11 @@ static int radeon_cp_getparam(struct drm
> case RADEON_PARAM_STATUS_HANDLE:
> value = dev_priv->ring_rptr_offset;
> break;
>-#if BITS_PER_LONG == 32
>+#ifndef __LP64__
> /*
> * This ioctl() doesn't work on 64-bit platforms because hw_lock is a
> * pointer which can't fit into an int-sized variable. According to
>- * Michel Dänzer, the ioctl() is only used on embedded platforms, so
>+ * Michel Dänzer, the ioctl() is only used on embedded platforms, so
> * not supporting it shouldn't be a problem. If the same functionality
> * is needed on 64-bit platforms, a new ioctl() would have to be added,
> * so backwards-compatibility for the embedded platforms can be
>@@ -3022,6 +3069,7 @@ static int radeon_cp_getparam(struct drm
> return -EINVAL;
> value = RADEON_SCRATCH_REG_OFFSET;
> break;
>+
> case RADEON_PARAM_CARD_TYPE:
> if (dev_priv->flags & RADEON_IS_PCIE)
> value = RADEON_CARD_PCIE;
>@@ -3033,8 +3081,11 @@ static int radeon_cp_getparam(struct drm
> case RADEON_PARAM_VBLANK_CRTC:
> value = radeon_vblank_crtc_get(dev);
> break;
>+ case RADEON_PARAM_FB_LOCATION:
>+ value = radeon_read_fb_location(dev_priv);
>+ break;
> default:
>- DRM_DEBUG("Invalid parameter %d\n", param->param);
>+ DRM_DEBUG( "Invalid parameter %d\n", param->param );
> return -EINVAL;
> }
>
>@@ -3052,6 +3103,11 @@ static int radeon_cp_setparam(struct drm
> drm_radeon_setparam_t *sp = data;
> struct drm_radeon_driver_file_fields *radeon_priv;
>
>+ if (!dev_priv) {
>+ DRM_ERROR("%s called with no initialization\n", __FUNCTION__);
>+ return -EINVAL;
>+ }
>+
> switch (sp->param) {
> case RADEON_SETPARAM_FB_LOCATION:
> radeon_priv = file_priv->driver_priv;
>@@ -3101,7 +3157,8 @@ static int radeon_cp_setparam(struct drm
> *
> * DRM infrastructure takes care of reclaiming dma buffers.
> */
>-void radeon_driver_preclose(struct drm_device *dev, struct drm_file *file_priv)
>+void radeon_driver_preclose(struct drm_device *dev,
>+ struct drm_file *file_priv)
> {
> if (dev->dev_private) {
> drm_radeon_private_t *dev_priv = dev->dev_private;
>diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/README.drm linux-2.6.23.i686/drivers/char/drm/README.drm
>--- linux-2.6.23.i686.orig/drivers/char/drm/README.drm 2007-10-09 22:31:38.000000000 +0200
>+++ linux-2.6.23.i686/drivers/char/drm/README.drm 2008-01-06 09:24:57.000000000 +0100
>@@ -23,22 +23,3 @@ ways:
>
> 4. The DRM is extensible via the use of small device-specific modules
> that rely extensively on the API exported by the DRM module.
>-
>-
>-Documentation on the DRI is available from:
>- http://dri.freedesktop.org/wiki/Documentation
>- http://sourceforge.net/project/showfiles.php?group_id=387
>- http://dri.sourceforge.net/doc/
>-
>-For specific information about kernel-level support, see:
>-
>- The Direct Rendering Manager, Kernel Support for the Direct Rendering
>- Infrastructure
>- http://dri.sourceforge.net/doc/drm_low_level.html
>-
>- Hardware Locking for the Direct Rendering Infrastructure
>- http://dri.sourceforge.net/doc/hardware_locking_low_level.html
>-
>- A Security Analysis of the Direct Rendering Infrastructure
>- http://dri.sourceforge.net/doc/security_low_level.html
>-
>diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/savage_bci.c linux-2.6.23.i686/drivers/char/drm/savage_bci.c
>--- linux-2.6.23.i686.orig/drivers/char/drm/savage_bci.c 2008-01-06 18:54:41.000000000 +0100
>+++ linux-2.6.23.i686/drivers/char/drm/savage_bci.c 2008-01-06 09:24:57.000000000 +0100
>@@ -28,14 +28,14 @@
>
> /* Need a long timeout for shadow status updates can take a while
> * and so can waiting for events when the queue is full.
> */
>-#define SAVAGE_DEFAULT_USEC_TIMEOUT 1000000 /* 1s */
>-#define SAVAGE_EVENT_USEC_TIMEOUT 5000000 /* 5s */
>+#define SAVAGE_DEFAULT_USEC_TIMEOUT 1000000 /* 1s */
>+#define SAVAGE_EVENT_USEC_TIMEOUT 5000000 /* 5s */
> #define SAVAGE_FREELIST_DEBUG 0
>
> static int savage_do_cleanup_bci(struct drm_device *dev);
>
> static int
>-savage_bci_wait_fifo_shadow(drm_savage_private_t * dev_priv, unsigned int n)
>+savage_bci_wait_fifo_shadow(drm_savage_private_t *dev_priv, unsigned int n)
> {
> uint32_t mask = dev_priv->status_used_mask;
> uint32_t threshold = dev_priv->bci_threshold_hi;
>@@ -64,7 +64,7 @@ savage_bci_wait_fifo_shadow(drm_savage_p
> }
>
> static int
>-savage_bci_wait_fifo_s3d(drm_savage_private_t * dev_priv, unsigned int n)
>+savage_bci_wait_fifo_s3d(drm_savage_private_t *dev_priv, unsigned int n)
> {
> uint32_t maxUsed = dev_priv->cob_size + SAVAGE_BCI_FIFO_SIZE - n;
> uint32_t status;
>@@ -85,7 +85,7 @@ savage_bci_wait_fifo_s3d(drm_savage_priv
> }
>
> static int
>-savage_bci_wait_fifo_s4(drm_savage_private_t * dev_priv, unsigned int n)
>+savage_bci_wait_fifo_s4(drm_savage_private_t *dev_priv, unsigned int n)
> {
> uint32_t maxUsed = dev_priv->cob_size + SAVAGE_BCI_FIFO_SIZE - n;
> uint32_t status;
>@@ -117,7 +117,7 @@ savage_bci_wait_fifo_s4(drm_savage_priva
> * rule. Otherwise there may be glitches every 2^16 events.
> */ > static int >-savage_bci_wait_event_shadow(drm_savage_private_t * dev_priv, uint16_t e) >+savage_bci_wait_event_shadow(drm_savage_private_t *dev_priv, uint16_t e) > { > uint32_t status; > int i; >@@ -140,7 +140,7 @@ savage_bci_wait_event_shadow(drm_savage_ > } > > static int >-savage_bci_wait_event_reg(drm_savage_private_t * dev_priv, uint16_t e) >+savage_bci_wait_event_reg(drm_savage_private_t *dev_priv, uint16_t e) > { > uint32_t status; > int i; >@@ -161,7 +161,7 @@ savage_bci_wait_event_reg(drm_savage_pri > return -EBUSY; > } > >-uint16_t savage_bci_emit_event(drm_savage_private_t * dev_priv, >+uint16_t savage_bci_emit_event(drm_savage_private_t *dev_priv, > unsigned int flags) > { > uint16_t count; >@@ -177,12 +177,12 @@ uint16_t savage_bci_emit_event(drm_savag > } > count = (count + 1) & 0xffff; > if (count == 0) { >- count++; /* See the comment above savage_wait_event_*. */ >+ count++; /* See the comment above savage_wait_event_*. */ > dev_priv->event_wrap++; > } > dev_priv->event_counter = count; > if (dev_priv->status_ptr) >- dev_priv->status_ptr[1023] = (uint32_t) count; >+ dev_priv->status_ptr[1023] = (uint32_t)count; > > if ((flags & (SAVAGE_WAIT_2D | SAVAGE_WAIT_3D))) { > unsigned int wait_cmd = BCI_CMD_WAIT; >@@ -195,7 +195,7 @@ uint16_t savage_bci_emit_event(drm_savag > } else { > BEGIN_BCI(1); > } >- BCI_WRITE(BCI_CMD_UPDATE_EVENT_TAG | (uint32_t) count); >+ BCI_WRITE(BCI_CMD_UPDATE_EVENT_TAG | (uint32_t)count); > > return count; > } >@@ -203,7 +203,7 @@ uint16_t savage_bci_emit_event(drm_savag > /* > * Freelist management > */ >-static int savage_freelist_init(struct drm_device * dev) >+static int savage_freelist_init(struct drm_device *dev) > { > drm_savage_private_t *dev_priv = dev->dev_private; > struct drm_device_dma *dma = dev->dma; >@@ -236,7 +236,7 @@ static int savage_freelist_init(struct d > return 0; > } > >-static struct drm_buf *savage_freelist_get(struct drm_device * dev) >+static struct drm_buf *savage_freelist_get(struct 
drm_device *dev) > { > drm_savage_private_t *dev_priv = dev->dev_private; > drm_savage_buf_priv_t *tail = dev_priv->tail.prev; >@@ -251,7 +251,7 @@ static struct drm_buf *savage_freelist_g > event = SAVAGE_READ(SAVAGE_STATUS_WORD1) & 0xffff; > wrap = dev_priv->event_wrap; > if (event > dev_priv->event_counter) >- wrap--; /* hardware hasn't passed the last wrap yet */ >+ wrap--; /* hardware hasn't passed the last wrap yet */ > > DRM_DEBUG(" tail=0x%04x %d\n", tail->age.event, tail->age.wrap); > DRM_DEBUG(" head=0x%04x %d\n", event, wrap); >@@ -269,7 +269,7 @@ static struct drm_buf *savage_freelist_g > return NULL; > } > >-void savage_freelist_put(struct drm_device * dev, struct drm_buf * buf) >+void savage_freelist_put(struct drm_device *dev, struct drm_buf *buf) > { > drm_savage_private_t *dev_priv = dev->dev_private; > drm_savage_buf_priv_t *entry = buf->dev_private, *prev, *next; >@@ -292,12 +292,12 @@ void savage_freelist_put(struct drm_devi > /* > * Command DMA > */ >-static int savage_dma_init(drm_savage_private_t * dev_priv) >+static int savage_dma_init(drm_savage_private_t *dev_priv) > { > unsigned int i; > > dev_priv->nr_dma_pages = dev_priv->cmd_dma->size / >- (SAVAGE_DMA_PAGE_SIZE * 4); >+ (SAVAGE_DMA_PAGE_SIZE*4); > dev_priv->dma_pages = drm_alloc(sizeof(drm_savage_dma_page_t) * > dev_priv->nr_dma_pages, DRM_MEM_DRIVER); > if (dev_priv->dma_pages == NULL) >@@ -316,7 +316,7 @@ static int savage_dma_init(drm_savage_pr > return 0; > } > >-void savage_dma_reset(drm_savage_private_t * dev_priv) >+void savage_dma_reset(drm_savage_private_t *dev_priv) > { > uint16_t event; > unsigned int wrap, i; >@@ -331,7 +331,7 @@ void savage_dma_reset(drm_savage_private > dev_priv->first_dma_page = dev_priv->current_dma_page = 0; > } > >-void savage_dma_wait(drm_savage_private_t * dev_priv, unsigned int page) >+void savage_dma_wait(drm_savage_private_t *dev_priv, unsigned int page) > { > uint16_t event; > unsigned int wrap; >@@ -347,7 +347,7 @@ void 
savage_dma_wait(drm_savage_private_ > event = SAVAGE_READ(SAVAGE_STATUS_WORD1) & 0xffff; > wrap = dev_priv->event_wrap; > if (event > dev_priv->event_counter) >- wrap--; /* hardware hasn't passed the last wrap yet */ >+ wrap--; /* hardware hasn't passed the last wrap yet */ > > if (dev_priv->dma_pages[page].age.wrap > wrap || > (dev_priv->dma_pages[page].age.wrap == wrap && >@@ -359,13 +359,13 @@ void savage_dma_wait(drm_savage_private_ > } > } > >-uint32_t *savage_dma_alloc(drm_savage_private_t * dev_priv, unsigned int n) >+uint32_t *savage_dma_alloc(drm_savage_private_t *dev_priv, unsigned int n) > { > unsigned int cur = dev_priv->current_dma_page; > unsigned int rest = SAVAGE_DMA_PAGE_SIZE - >- dev_priv->dma_pages[cur].used; >+ dev_priv->dma_pages[cur].used; > unsigned int nr_pages = (n - rest + SAVAGE_DMA_PAGE_SIZE - 1) / >- SAVAGE_DMA_PAGE_SIZE; >+ SAVAGE_DMA_PAGE_SIZE; > uint32_t *dma_ptr; > unsigned int i; > >@@ -373,7 +373,7 @@ uint32_t *savage_dma_alloc(drm_savage_pr > cur, dev_priv->dma_pages[cur].used, n, rest, nr_pages); > > if (cur + nr_pages < dev_priv->nr_dma_pages) { >- dma_ptr = (uint32_t *) dev_priv->cmd_dma->handle + >+ dma_ptr = (uint32_t *)dev_priv->cmd_dma->handle + > cur * SAVAGE_DMA_PAGE_SIZE + dev_priv->dma_pages[cur].used; > if (n < rest) > rest = n; >@@ -389,7 +389,7 @@ uint32_t *savage_dma_alloc(drm_savage_pr > dev_priv->dma_pages[i].used = 0; > dev_priv->dma_pages[i].flushed = 0; > } >- dma_ptr = (uint32_t *) dev_priv->cmd_dma->handle; >+ dma_ptr = (uint32_t *)dev_priv->cmd_dma->handle; > dev_priv->first_dma_page = cur = 0; > } > for (i = cur; nr_pages > 0; ++i, --nr_pages) { >@@ -415,7 +415,7 @@ uint32_t *savage_dma_alloc(drm_savage_pr > return dma_ptr; > } > >-static void savage_dma_flush(drm_savage_private_t * dev_priv) >+static void savage_dma_flush(drm_savage_private_t *dev_priv) > { > unsigned int first = dev_priv->first_dma_page; > unsigned int cur = dev_priv->current_dma_page; >@@ -440,7 +440,7 @@ static void 
savage_dma_flush(drm_savage_ > > /* pad with noops */ > if (pad) { >- uint32_t *dma_ptr = (uint32_t *) dev_priv->cmd_dma->handle + >+ uint32_t *dma_ptr = (uint32_t *)dev_priv->cmd_dma->handle + > cur * SAVAGE_DMA_PAGE_SIZE + dev_priv->dma_pages[cur].used; > dev_priv->dma_pages[cur].used += pad; > while (pad != 0) { >@@ -453,8 +453,8 @@ static void savage_dma_flush(drm_savage_ > > /* do flush ... */ > phys_addr = dev_priv->cmd_dma->offset + >- (first * SAVAGE_DMA_PAGE_SIZE + >- dev_priv->dma_pages[first].flushed) * 4; >+ (first * SAVAGE_DMA_PAGE_SIZE + >+ dev_priv->dma_pages[first].flushed) * 4; > len = (cur - first) * SAVAGE_DMA_PAGE_SIZE + > dev_priv->dma_pages[cur].used - dev_priv->dma_pages[first].flushed; > >@@ -498,7 +498,7 @@ static void savage_dma_flush(drm_savage_ > dev_priv->dma_pages[cur].flushed); > } > >-static void savage_fake_dma_flush(drm_savage_private_t * dev_priv) >+static void savage_fake_dma_flush(drm_savage_private_t *dev_priv) > { > unsigned int i, j; > BCI_LOCALS; >@@ -514,8 +514,8 @@ static void savage_fake_dma_flush(drm_sa > for (i = dev_priv->first_dma_page; > i <= dev_priv->current_dma_page && dev_priv->dma_pages[i].used; > ++i) { >- uint32_t *dma_ptr = (uint32_t *) dev_priv->cmd_dma->handle + >- i * SAVAGE_DMA_PAGE_SIZE; >+ uint32_t *dma_ptr = (uint32_t *)dev_priv->cmd_dma->handle + >+ i * SAVAGE_DMA_PAGE_SIZE; > #if SAVAGE_DMA_DEBUG > /* Sanity check: all pages except the last one must be full. */ > if (i < dev_priv->current_dma_page && >@@ -551,7 +551,6 @@ int savage_driver_load(struct drm_device > return 0; > } > >- > /* > * Initalize mappings. 
On Savage4 and SavageIX the alignment > * and size of the aperture is not suitable for automatic MTRR setup >@@ -587,7 +586,7 @@ int savage_driver_firstopen(struct drm_d > dev_priv->mtrr[0].size = 0x01000000; > dev_priv->mtrr[0].handle = > drm_mtrr_add(dev_priv->mtrr[0].base, >- dev_priv->mtrr[0].size, DRM_MTRR_WC); >+ dev_priv->mtrr[0].size, DRM_MTRR_WC); > dev_priv->mtrr[1].base = fb_base + 0x02000000; > dev_priv->mtrr[1].size = 0x02000000; > dev_priv->mtrr[1].handle = >@@ -597,7 +596,7 @@ int savage_driver_firstopen(struct drm_d > dev_priv->mtrr[2].size = 0x04000000; > dev_priv->mtrr[2].handle = > drm_mtrr_add(dev_priv->mtrr[2].base, >- dev_priv->mtrr[2].size, DRM_MTRR_WC); >+ dev_priv->mtrr[2].size, DRM_MTRR_WC); > } else { > DRM_ERROR("strange pci_resource_len %08lx\n", > drm_get_resource_len(dev, 0)); >@@ -663,8 +662,8 @@ void savage_driver_lastclose(struct drm_ > for (i = 0; i < 3; ++i) > if (dev_priv->mtrr[i].handle >= 0) > drm_mtrr_del(dev_priv->mtrr[i].handle, >- dev_priv->mtrr[i].base, >- dev_priv->mtrr[i].size, DRM_MTRR_WC); >+ dev_priv->mtrr[i].base, >+ dev_priv->mtrr[i].size, DRM_MTRR_WC); > } > > int savage_driver_unload(struct drm_device *dev) >@@ -676,7 +675,7 @@ int savage_driver_unload(struct drm_devi > return 0; > } > >-static int savage_do_init_bci(struct drm_device * dev, drm_savage_init_t * init) >+static int savage_do_init_bci(struct drm_device *dev, drm_savage_init_t *init) > { > drm_savage_private_t *dev_priv = dev->dev_private; > >@@ -745,7 +744,7 @@ static int savage_do_init_bci(struct drm > } > if (init->agp_textures_offset) { > dev_priv->agp_textures = >- drm_core_findmap(dev, init->agp_textures_offset); >+ drm_core_findmap(dev, init->agp_textures_offset); > if (!dev_priv->agp_textures) { > DRM_ERROR("could not find agp texture region!\n"); > savage_do_cleanup_bci(dev); >@@ -816,8 +815,8 @@ static int savage_do_init_bci(struct drm > } > > dev_priv->sarea_priv = >- (drm_savage_sarea_t *) ((uint8_t *) dev_priv->sarea->handle + >- 
init->sarea_priv_offset); >+ (drm_savage_sarea_t *)((uint8_t *)dev_priv->sarea->handle + >+ init->sarea_priv_offset); > > /* setup bitmap descriptors */ > { >@@ -826,9 +825,9 @@ static int savage_do_init_bci(struct drm > unsigned int front_stride, back_stride, depth_stride; > if (dev_priv->chipset <= S3_SAVAGE4) { > color_tile_format = dev_priv->fb_bpp == 16 ? >- SAVAGE_BD_TILE_16BPP : SAVAGE_BD_TILE_32BPP; >+ SAVAGE_BD_TILE_16BPP : SAVAGE_BD_TILE_32BPP; > depth_tile_format = dev_priv->depth_bpp == 16 ? >- SAVAGE_BD_TILE_16BPP : SAVAGE_BD_TILE_32BPP; >+ SAVAGE_BD_TILE_16BPP : SAVAGE_BD_TILE_32BPP; > } else { > color_tile_format = SAVAGE_BD_TILE_DEST; > depth_tile_format = SAVAGE_BD_TILE_DEST; > } >@@ -839,23 +838,23 @@ static int savage_do_init_bci(struct drm > dev_priv->depth_pitch / (dev_priv->depth_bpp / 8); > > dev_priv->front_bd = front_stride | SAVAGE_BD_BW_DISABLE | >- (dev_priv->fb_bpp << SAVAGE_BD_BPP_SHIFT) | >- (color_tile_format << SAVAGE_BD_TILE_SHIFT); >+ (dev_priv->fb_bpp << SAVAGE_BD_BPP_SHIFT) | >+ (color_tile_format << SAVAGE_BD_TILE_SHIFT); > >- dev_priv->back_bd = back_stride | SAVAGE_BD_BW_DISABLE | >- (dev_priv->fb_bpp << SAVAGE_BD_BPP_SHIFT) | >- (color_tile_format << SAVAGE_BD_TILE_SHIFT); >+ dev_priv->back_bd = back_stride | SAVAGE_BD_BW_DISABLE | >+ (dev_priv->fb_bpp << SAVAGE_BD_BPP_SHIFT) | >+ (color_tile_format << SAVAGE_BD_TILE_SHIFT); > > dev_priv->depth_bd = depth_stride | SAVAGE_BD_BW_DISABLE | >- (dev_priv->depth_bpp << SAVAGE_BD_BPP_SHIFT) | >- (depth_tile_format << SAVAGE_BD_TILE_SHIFT); >+ (dev_priv->depth_bpp << SAVAGE_BD_BPP_SHIFT) | >+ (depth_tile_format << SAVAGE_BD_TILE_SHIFT); > } > > /* setup status and bci ptr */ > dev_priv->event_counter = 0; > dev_priv->event_wrap = 0; > dev_priv->bci_ptr = (volatile uint32_t *) >- ((uint8_t *) dev_priv->mmio->handle + SAVAGE_BCI_OFFSET); >+ ((uint8_t *)dev_priv->mmio->handle + SAVAGE_BCI_OFFSET); > if (S3_SAVAGE3D_SERIES(dev_priv->chipset)) { > dev_priv->status_used_mask = 
SAVAGE_FIFO_USED_MASK_S3D; > } else { >@@ -863,7 +862,7 @@ static int savage_do_init_bci(struct drm > } > if (dev_priv->status != NULL) { > dev_priv->status_ptr = >- (volatile uint32_t *)dev_priv->status->handle; >+ (volatile uint32_t *)dev_priv->status->handle; > dev_priv->wait_fifo = savage_bci_wait_fifo_shadow; > dev_priv->wait_evnt = savage_bci_wait_event_shadow; > dev_priv->status_ptr[1023] = dev_priv->event_counter; >@@ -898,7 +897,7 @@ static int savage_do_init_bci(struct drm > return 0; > } > >-static int savage_do_cleanup_bci(struct drm_device * dev) >+static int savage_do_cleanup_bci(struct drm_device *dev) > { > drm_savage_private_t *dev_priv = dev->dev_private; > >@@ -922,7 +921,7 @@ static int savage_do_cleanup_bci(struct > > if (dev_priv->dma_pages) > drm_free(dev_priv->dma_pages, >- sizeof(drm_savage_dma_page_t) * dev_priv->nr_dma_pages, >+ sizeof(drm_savage_dma_page_t)*dev_priv->nr_dma_pages, > DRM_MEM_DRIVER); > > return 0; >@@ -968,9 +967,6 @@ static int savage_bci_event_wait(struct > > DRM_DEBUG("\n"); > >- DRM_COPY_FROM_USER_IOCTL(event, (drm_savage_event_wait_t __user *) data, >- sizeof(event)); >- > UPDATE_EVENT_COUNTER(); > if (dev_priv->status_ptr) > hw_e = dev_priv->status_ptr[1] & 0xffff; >@@ -978,7 +974,7 @@ static int savage_bci_event_wait(struct > hw_e = SAVAGE_READ(SAVAGE_STATUS_WORD1) & 0xffff; > hw_w = dev_priv->event_wrap; > if (hw_e > dev_priv->event_counter) >- hw_w--; /* hardware hasn't passed the last wrap yet */ >+ hw_w--; /* hardware hasn't passed the last wrap yet */ > > event_e = event->count & 0xffff; > event_w = event->count >> 16; >@@ -1069,8 +1065,6 @@ void savage_reclaim_buffers(struct drm_d > if (!dma->buflist) > return; > >- /*i830_flush_queue(dev); */ >- > for (i = 0; i < dma->buf_count; i++) { > struct drm_buf *buf = dma->buflist[i]; > drm_savage_buf_priv_t *buf_priv = buf->dev_private; >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/savage_drm.h linux-2.6.23.i686/drivers/char/drm/savage_drm.h >--- 
linux-2.6.23.i686.orig/drivers/char/drm/savage_drm.h 2007-10-09 22:31:38.000000000 +0200 >+++ linux-2.6.23.i686/drivers/char/drm/savage_drm.h 2008-01-06 09:24:57.000000000 +0100 >@@ -42,13 +42,12 @@ > #define SAVAGE_NR_TEX_REGIONS 16 > #define SAVAGE_LOG_MIN_TEX_REGION_SIZE 16 > >-#endif /* __SAVAGE_SAREA_DEFINES__ */ >+#endif /* __SAVAGE_SAREA_DEFINES__ */ > > typedef struct _drm_savage_sarea { > /* LRU lists for texture memory in agp space and on the card. > */ >- struct drm_tex_region texList[SAVAGE_NR_TEX_HEAPS][SAVAGE_NR_TEX_REGIONS + >- 1]; >+ struct drm_tex_region texList[SAVAGE_NR_TEX_HEAPS][SAVAGE_NR_TEX_REGIONS+1]; > unsigned int texAge[SAVAGE_NR_TEX_HEAPS]; > > /* Mechanism to validate card state. >@@ -102,24 +101,24 @@ typedef struct drm_savage_init { > > typedef union drm_savage_cmd_header drm_savage_cmd_header_t; > typedef struct drm_savage_cmdbuf { >- /* command buffer in client's address space */ >+ /* command buffer in client's address space */ > drm_savage_cmd_header_t __user *cmd_addr; > unsigned int size; /* size of the command buffer in 64bit units */ > > unsigned int dma_idx; /* DMA buffer index to use */ > int discard; /* discard DMA buffer when done */ >- /* vertex buffer in client's address space */ >+ /* vertex buffer in client's address space */ > unsigned int __user *vb_addr; > unsigned int vb_size; /* size of client vertex buffer in bytes */ > unsigned int vb_stride; /* stride of vertices in 32bit words */ >- /* boxes in client's address space */ >+ /* boxes in client's address space */ > struct drm_clip_rect __user *box_addr; > unsigned int nbox; /* number of clipping boxes */ > } drm_savage_cmdbuf_t; > >-#define SAVAGE_WAIT_2D 0x1 /* wait for 2D idle before updating event tag */ >-#define SAVAGE_WAIT_3D 0x2 /* wait for 3D idle before updating event tag */ >-#define SAVAGE_WAIT_IRQ 0x4 /* emit or wait for IRQ, not implemented yet */ >+#define SAVAGE_WAIT_2D 0x1 /* wait for 2D idle before updating event tag */ >+#define SAVAGE_WAIT_3D 
0x2 /* wait for 3D idle before updating event tag */ >+#define SAVAGE_WAIT_IRQ 0x4 /* emit or wait for IRQ, not implemented yet */ > typedef struct drm_savage_event { > unsigned int count; > unsigned int flags; >@@ -127,21 +126,21 @@ typedef struct drm_savage_event { > > /* Commands for the cmdbuf ioctl > */ >-#define SAVAGE_CMD_STATE 0 /* a range of state registers */ >-#define SAVAGE_CMD_DMA_PRIM 1 /* vertices from DMA buffer */ >-#define SAVAGE_CMD_VB_PRIM 2 /* vertices from client vertex buffer */ >-#define SAVAGE_CMD_DMA_IDX 3 /* indexed vertices from DMA buffer */ >-#define SAVAGE_CMD_VB_IDX 4 /* indexed vertices client vertex buffer */ >-#define SAVAGE_CMD_CLEAR 5 /* clear buffers */ >-#define SAVAGE_CMD_SWAP 6 /* swap buffers */ >+#define SAVAGE_CMD_STATE 0 /* a range of state registers */ >+#define SAVAGE_CMD_DMA_PRIM 1 /* vertices from DMA buffer */ >+#define SAVAGE_CMD_VB_PRIM 2 /* vertices from client vertex buffer */ >+#define SAVAGE_CMD_DMA_IDX 3 /* indexed vertices from DMA buffer */ >+#define SAVAGE_CMD_VB_IDX 4 /* indexed vertices client vertex buffer */ >+#define SAVAGE_CMD_CLEAR 5 /* clear buffers */ >+#define SAVAGE_CMD_SWAP 6 /* swap buffers */ > > /* Primitive types > */ >-#define SAVAGE_PRIM_TRILIST 0 /* triangle list */ >-#define SAVAGE_PRIM_TRISTRIP 1 /* triangle strip */ >-#define SAVAGE_PRIM_TRIFAN 2 /* triangle fan */ >-#define SAVAGE_PRIM_TRILIST_201 3 /* reorder verts for correct flat >- * shading on s3d */ >+#define SAVAGE_PRIM_TRILIST 0 /* triangle list */ >+#define SAVAGE_PRIM_TRISTRIP 1 /* triangle strip */ >+#define SAVAGE_PRIM_TRIFAN 2 /* triangle fan */ >+#define SAVAGE_PRIM_TRILIST_201 3 /* reorder verts for correct flat >+ * shading on s3d */ > > /* Skip flags (vertex format) > */ >@@ -173,38 +172,38 @@ union drm_savage_cmd_header { > unsigned short pad1; > unsigned short pad2; > unsigned short pad3; >- } cmd; /* generic */ >+ } cmd; /* generic */ > struct { > unsigned char cmd; > unsigned char global; /* need idle engine? 
*/ > unsigned short count; /* number of consecutive registers */ > unsigned short start; /* first register */ > unsigned short pad3; >- } state; /* SAVAGE_CMD_STATE */ >+ } state; /* SAVAGE_CMD_STATE */ > struct { > unsigned char cmd; > unsigned char prim; /* primitive type */ > unsigned short skip; /* vertex format (skip flags) */ > unsigned short count; /* number of vertices */ > unsigned short start; /* first vertex in DMA/vertex buffer */ >- } prim; /* SAVAGE_CMD_DMA_PRIM, SAVAGE_CMD_VB_PRIM */ >+ } prim; /* SAVAGE_CMD_DMA_PRIM, SAVAGE_CMD_VB_PRIM */ > struct { > unsigned char cmd; > unsigned char prim; > unsigned short skip; > unsigned short count; /* number of indices that follow */ > unsigned short pad3; >- } idx; /* SAVAGE_CMD_DMA_IDX, SAVAGE_CMD_VB_IDX */ >+ } idx; /* SAVAGE_CMD_DMA_IDX, SAVAGE_CMD_VB_IDX */ > struct { > unsigned char cmd; > unsigned char pad0; > unsigned short pad1; > unsigned int flags; >- } clear0; /* SAVAGE_CMD_CLEAR */ >+ } clear0; /* SAVAGE_CMD_CLEAR */ > struct { > unsigned int mask; > unsigned int value; >- } clear1; /* SAVAGE_CMD_CLEAR data */ >+ } clear1; /* SAVAGE_CMD_CLEAR data */ > }; > > #endif >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/savage_drv.c linux-2.6.23.i686/drivers/char/drm/savage_drv.c >--- linux-2.6.23.i686.orig/drivers/char/drm/savage_drv.c 2007-10-09 22:31:38.000000000 +0200 >+++ linux-2.6.23.i686/drivers/char/drm/savage_drv.c 2008-01-06 09:24:57.000000000 +0100 >@@ -33,9 +33,11 @@ static struct pci_device_id pciidlist[] > savage_PCI_IDS > }; > >+static int probe(struct pci_dev *pdev, const struct pci_device_id *ent); > static struct drm_driver driver = { > .driver_features = >- DRIVER_USE_AGP | DRIVER_USE_MTRR | DRIVER_HAVE_DMA | DRIVER_PCI_DMA, >+ DRIVER_USE_AGP | DRIVER_USE_MTRR | >+ DRIVER_HAVE_DMA | DRIVER_PCI_DMA, > .dev_priv_size = sizeof(drm_savage_buf_priv_t), > .load = savage_driver_load, > .firstopen = savage_driver_firstopen, >@@ -47,18 +49,19 @@ static struct drm_driver driver = { > .ioctls 
= savage_ioctls, > .dma_ioctl = savage_bci_buffers, > .fops = { >- .owner = THIS_MODULE, >- .open = drm_open, >- .release = drm_release, >- .ioctl = drm_ioctl, >- .mmap = drm_mmap, >- .poll = drm_poll, >- .fasync = drm_fasync, >+ .owner = THIS_MODULE, >+ .open = drm_open, >+ .release = drm_release, >+ .ioctl = drm_ioctl, >+ .mmap = drm_mmap, >+ .poll = drm_poll, >+ .fasync = drm_fasync, > }, >- > .pci_driver = { >- .name = DRIVER_NAME, >- .id_table = pciidlist, >+ .name = DRIVER_NAME, >+ .id_table = pciidlist, >+ .probe = probe, >+ .remove = __devexit_p(drm_cleanup_pci), > }, > > .name = DRIVER_NAME, >@@ -69,10 +72,15 @@ static struct drm_driver driver = { > .patchlevel = DRIVER_PATCHLEVEL, > }; > >+static int probe(struct pci_dev *pdev, const struct pci_device_id *ent) >+{ >+ return drm_get_dev(pdev, ent, &driver); >+} >+ > static int __init savage_init(void) > { > driver.num_ioctls = savage_max_ioctl; >- return drm_init(&driver); >+ return drm_init(&driver, pciidlist); > } > > static void __exit savage_exit(void) >@@ -83,6 +91,6 @@ static void __exit savage_exit(void) > module_init(savage_init); > module_exit(savage_exit); > >-MODULE_AUTHOR(DRIVER_AUTHOR); >-MODULE_DESCRIPTION(DRIVER_DESC); >+MODULE_AUTHOR( DRIVER_AUTHOR ); >+MODULE_DESCRIPTION( DRIVER_DESC ); > MODULE_LICENSE("GPL and additional rights"); >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/savage_drv.h linux-2.6.23.i686/drivers/char/drm/savage_drv.h >--- linux-2.6.23.i686.orig/drivers/char/drm/savage_drv.h 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/savage_drv.h 2008-01-06 09:24:57.000000000 +0100 >@@ -65,7 +65,7 @@ typedef struct drm_savage_dma_page { > drm_savage_age_t age; > unsigned int used, flushed; > } drm_savage_dma_page_t; >-#define SAVAGE_DMA_PAGE_SIZE 1024 /* in dwords */ >+#define SAVAGE_DMA_PAGE_SIZE 1024 /* in dwords */ > /* Fake DMA buffer size in bytes. 4 pages. Allows a maximum command > * size of 16kbytes or 4k entries. 
Minimum requirement would be > * 10kbytes for 255 40-byte vertices in one drawing command. */ >@@ -187,13 +187,13 @@ typedef struct drm_savage_private { > unsigned int waiting; > > /* config/hardware-dependent function pointers */ >- int (*wait_fifo) (struct drm_savage_private * dev_priv, unsigned int n); >- int (*wait_evnt) (struct drm_savage_private * dev_priv, uint16_t e); >+ int (*wait_fifo)(struct drm_savage_private *dev_priv, unsigned int n); >+ int (*wait_evnt)(struct drm_savage_private *dev_priv, uint16_t e); > /* Err, there is a macro wait_event in include/linux/wait.h. > * Avoid unwanted macro expansion. */ >- void (*emit_clip_rect) (struct drm_savage_private * dev_priv, >- const struct drm_clip_rect * pbox); >- void (*dma_flush) (struct drm_savage_private * dev_priv); >+ void (*emit_clip_rect)(struct drm_savage_private *dev_priv, >+ const struct drm_clip_rect *pbox); >+ void (*dma_flush)(struct drm_savage_private *dev_priv); > } drm_savage_private_t; > > /* ioctls */ >@@ -201,12 +201,12 @@ extern int savage_bci_cmdbuf(struct drm_ > extern int savage_bci_buffers(struct drm_device *dev, void *data, struct drm_file *file_priv); > > /* BCI functions */ >-extern uint16_t savage_bci_emit_event(drm_savage_private_t * dev_priv, >+extern uint16_t savage_bci_emit_event(drm_savage_private_t *dev_priv, > unsigned int flags); >-extern void savage_freelist_put(struct drm_device * dev, struct drm_buf * buf); >-extern void savage_dma_reset(drm_savage_private_t * dev_priv); >-extern void savage_dma_wait(drm_savage_private_t * dev_priv, unsigned int page); >-extern uint32_t *savage_dma_alloc(drm_savage_private_t * dev_priv, >+extern void savage_freelist_put(struct drm_device *dev, struct drm_buf *buf); >+extern void savage_dma_reset(drm_savage_private_t *dev_priv); >+extern void savage_dma_wait(drm_savage_private_t *dev_priv, unsigned int page); >+extern uint32_t *savage_dma_alloc(drm_savage_private_t *dev_priv, > unsigned int n); > extern int savage_driver_load(struct 
drm_device *dev, unsigned long chipset); > extern int savage_driver_firstopen(struct drm_device *dev); >@@ -216,10 +216,10 @@ extern void savage_reclaim_buffers(struc > struct drm_file *file_priv); > > /* state functions */ >-extern void savage_emit_clip_rect_s3d(drm_savage_private_t * dev_priv, >- const struct drm_clip_rect * pbox); >-extern void savage_emit_clip_rect_s4(drm_savage_private_t * dev_priv, >- const struct drm_clip_rect * pbox); >+extern void savage_emit_clip_rect_s3d(drm_savage_private_t *dev_priv, >+ const struct drm_clip_rect *pbox); >+extern void savage_emit_clip_rect_s4(drm_savage_private_t *dev_priv, >+ const struct drm_clip_rect *pbox); > > #define SAVAGE_FB_SIZE_S3 0x01000000 /* 16MB */ > #define SAVAGE_FB_SIZE_S4 0x02000000 /* 32MB */ >@@ -227,17 +227,17 @@ extern void savage_emit_clip_rect_s4(drm > #define SAVAGE_APERTURE_OFFSET 0x02000000 /* 32MB */ > #define SAVAGE_APERTURE_SIZE 0x05000000 /* 5 tiled surfaces, 16MB each */ > >-#define SAVAGE_BCI_OFFSET 0x00010000 /* offset of the BCI region >+#define SAVAGE_BCI_OFFSET 0x00010000 /* offset of the BCI region > * inside the MMIO region */ >-#define SAVAGE_BCI_FIFO_SIZE 32 /* number of entries in on-chip >- * BCI FIFO */ >+#define SAVAGE_BCI_FIFO_SIZE 32 /* number of entries in on-chip >+ * BCI FIFO */ > > /* > * MMIO registers > */ > #define SAVAGE_STATUS_WORD0 0x48C00 > #define SAVAGE_STATUS_WORD1 0x48C04 >-#define SAVAGE_ALT_STATUS_WORD0 0x48C60 >+#define SAVAGE_ALT_STATUS_WORD0 0x48C60 > > #define SAVAGE_FIFO_USED_MASK_S3D 0x0001ffff > #define SAVAGE_FIFO_USED_MASK_S4 0x001fffff >@@ -283,7 +283,7 @@ extern void savage_emit_clip_rect_s4(drm > #define SAVAGE_TEXADDR1_S4 0x23 > #define SAVAGE_TEXBLEND0_S4 0x24 > #define SAVAGE_TEXBLEND1_S4 0x25 >-#define SAVAGE_TEXXPRCLR_S4 0x26 /* never used */ >+#define SAVAGE_TEXXPRCLR_S4 0x26 /* never used */ > #define SAVAGE_TEXDESCR_S4 0x27 > #define SAVAGE_FOGTABLE_S4 0x28 > #define SAVAGE_FOGCTRL_S4 0x30 >@@ -298,7 +298,7 @@ extern void 
savage_emit_clip_rect_s4(drm > #define SAVAGE_TEXBLENDCOLOR_S4 0x39 > /* Savage3D/MX/IX 3D registers */ > #define SAVAGE_TEXPALADDR_S3D 0x18 >-#define SAVAGE_TEXXPRCLR_S3D 0x19 /* never used */ >+#define SAVAGE_TEXXPRCLR_S3D 0x19 /* never used */ > #define SAVAGE_TEXADDR_S3D 0x1A > #define SAVAGE_TEXDESCR_S3D 0x1B > #define SAVAGE_TEXCTRL_S3D 0x1C >@@ -318,9 +318,9 @@ extern void savage_emit_clip_rect_s4(drm > #define SAVAGE_DMABUFADDR 0x51 > > /* texture enable bits (needed for tex addr checking) */ >-#define SAVAGE_TEXCTRL_TEXEN_MASK 0x00010000 /* S3D */ >-#define SAVAGE_TEXDESCR_TEX0EN_MASK 0x02000000 /* S4 */ >-#define SAVAGE_TEXDESCR_TEX1EN_MASK 0x04000000 /* S4 */ >+#define SAVAGE_TEXCTRL_TEXEN_MASK 0x00010000 /* S3D */ >+#define SAVAGE_TEXDESCR_TEX0EN_MASK 0x02000000 /* S4 */ >+#define SAVAGE_TEXDESCR_TEX1EN_MASK 0x04000000 /* S4 */ > > /* Global fields in Savage4/Twister/ProSavage 3D registers: > * >@@ -572,4 +572,4 @@ extern void savage_emit_clip_rect_s4(drm > #define TEST_AGE( age, e, w ) \ > ( (age)->wrap < (w) || ( (age)->wrap == (w) && (age)->event <= (e) ) ) > >-#endif /* __SAVAGE_DRV_H__ */ >+#endif /* __SAVAGE_DRV_H__ */ >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/savage_state.c linux-2.6.23.i686/drivers/char/drm/savage_state.c >--- linux-2.6.23.i686.orig/drivers/char/drm/savage_state.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/savage_state.c 2008-01-06 09:24:57.000000000 +0100 >@@ -26,19 +26,19 @@ > #include "savage_drm.h" > #include "savage_drv.h" > >-void savage_emit_clip_rect_s3d(drm_savage_private_t * dev_priv, >- const struct drm_clip_rect * pbox) >+void savage_emit_clip_rect_s3d(drm_savage_private_t *dev_priv, >+ const struct drm_clip_rect *pbox) > { > uint32_t scstart = dev_priv->state.s3d.new_scstart; > uint32_t scend = dev_priv->state.s3d.new_scend; > scstart = (scstart & ~SAVAGE_SCISSOR_MASK_S3D) | >- ((uint32_t) pbox->x1 & 0x000007ff) | >- (((uint32_t) pbox->y1 << 16) & 0x07ff0000); >- scend = 
(scend & ~SAVAGE_SCISSOR_MASK_S3D) | >- (((uint32_t) pbox->x2 - 1) & 0x000007ff) | >- ((((uint32_t) pbox->y2 - 1) << 16) & 0x07ff0000); >+ ((uint32_t)pbox->x1 & 0x000007ff) | >+ (((uint32_t)pbox->y1 << 16) & 0x07ff0000); >+ scend = (scend & ~SAVAGE_SCISSOR_MASK_S3D) | >+ (((uint32_t)pbox->x2 - 1) & 0x000007ff) | >+ ((((uint32_t)pbox->y2 - 1) << 16) & 0x07ff0000); > if (scstart != dev_priv->state.s3d.scstart || >- scend != dev_priv->state.s3d.scend) { >+ scend != dev_priv->state.s3d.scend) { > DMA_LOCALS; > BEGIN_DMA(4); > DMA_WRITE(BCI_CMD_WAIT | BCI_CMD_WAIT_3D); >@@ -52,17 +52,17 @@ void savage_emit_clip_rect_s3d(drm_savag > } > } > >-void savage_emit_clip_rect_s4(drm_savage_private_t * dev_priv, >- const struct drm_clip_rect * pbox) >+void savage_emit_clip_rect_s4(drm_savage_private_t *dev_priv, >+ const struct drm_clip_rect *pbox) > { > uint32_t drawctrl0 = dev_priv->state.s4.new_drawctrl0; > uint32_t drawctrl1 = dev_priv->state.s4.new_drawctrl1; > drawctrl0 = (drawctrl0 & ~SAVAGE_SCISSOR_MASK_S4) | >- ((uint32_t) pbox->x1 & 0x000007ff) | >- (((uint32_t) pbox->y1 << 12) & 0x00fff000); >+ ((uint32_t)pbox->x1 & 0x000007ff) | >+ (((uint32_t)pbox->y1 << 12) & 0x00fff000); > drawctrl1 = (drawctrl1 & ~SAVAGE_SCISSOR_MASK_S4) | >- (((uint32_t) pbox->x2 - 1) & 0x000007ff) | >- ((((uint32_t) pbox->y2 - 1) << 12) & 0x00fff000); >+ (((uint32_t)pbox->x2 - 1) & 0x000007ff) | >+ ((((uint32_t)pbox->y2 - 1) << 12) & 0x00fff000); > if (drawctrl0 != dev_priv->state.s4.drawctrl0 || > drawctrl1 != dev_priv->state.s4.drawctrl1) { > DMA_LOCALS; >@@ -78,14 +78,14 @@ void savage_emit_clip_rect_s4(drm_savage > } > } > >-static int savage_verify_texaddr(drm_savage_private_t * dev_priv, int unit, >+static int savage_verify_texaddr(drm_savage_private_t *dev_priv, int unit, > uint32_t addr) > { >- if ((addr & 6) != 2) { /* reserved bits */ >+ if ((addr & 6) != 2) { /* reserved bits */ > DRM_ERROR("bad texAddr%d %08x (reserved bits)\n", unit, addr); > return -EINVAL; > } >- if (!(addr & 1)) 
{ /* local */ >+ if (!(addr & 1)) { /* local */ > addr &= ~7; > if (addr < dev_priv->texture_offset || > addr >= dev_priv->texture_offset + dev_priv->texture_size) { >@@ -94,7 +94,7 @@ static int savage_verify_texaddr(drm_sav > unit, addr); > return -EINVAL; > } >- } else { /* AGP */ >+ } else { /* AGP */ > if (!dev_priv->agp_textures) { > DRM_ERROR("bad texAddr%d %08x (AGP not available)\n", > unit, addr); >@@ -114,18 +114,17 @@ static int savage_verify_texaddr(drm_sav > } > > #define SAVE_STATE(reg,where) \ >- if(start <= reg && start+count > reg) \ >+ if(start <= reg && start + count > reg) \ > dev_priv->state.where = regs[reg - start] > #define SAVE_STATE_MASK(reg,where,mask) do { \ >- if(start <= reg && start+count > reg) { \ >+ if(start <= reg && start + count > reg) { \ > uint32_t tmp; \ > tmp = regs[reg - start]; \ > dev_priv->state.where = (tmp & (mask)) | \ > (dev_priv->state.where & ~(mask)); \ > } \ > } while (0) >- >-static int savage_verify_state_s3d(drm_savage_private_t * dev_priv, >+static int savage_verify_state_s3d(drm_savage_private_t *dev_priv, > unsigned int start, unsigned int count, > const uint32_t *regs) > { >@@ -155,7 +154,7 @@ static int savage_verify_state_s3d(drm_s > return 0; > } > >-static int savage_verify_state_s4(drm_savage_private_t * dev_priv, >+static int savage_verify_state_s4(drm_savage_private_t *dev_priv, > unsigned int start, unsigned int count, > const uint32_t *regs) > { >@@ -190,12 +189,11 @@ static int savage_verify_state_s4(drm_sa > > return ret; > } >- > #undef SAVE_STATE > #undef SAVE_STATE_MASK > >-static int savage_dispatch_state(drm_savage_private_t * dev_priv, >- const drm_savage_cmd_header_t * cmd_header, >+static int savage_dispatch_state(drm_savage_private_t *dev_priv, >+ const drm_savage_cmd_header_t *cmd_header, > const uint32_t *regs) > { > unsigned int count = cmd_header->state.count; >@@ -275,9 +273,9 @@ static int savage_dispatch_state(drm_sav > return 0; > } > >-static int 
savage_dispatch_dma_prim(drm_savage_private_t * dev_priv, >- const drm_savage_cmd_header_t * cmd_header, >- const struct drm_buf * dmabuf) >+static int savage_dispatch_dma_prim(drm_savage_private_t *dev_priv, >+ const drm_savage_cmd_header_t *cmd_header, >+ const struct drm_buf *dmabuf) > { > unsigned char reorder = 0; > unsigned int prim = cmd_header->prim.prim; >@@ -310,8 +308,8 @@ static int savage_dispatch_dma_prim(drm_ > case SAVAGE_PRIM_TRIFAN: > if (n < 3) { > DRM_ERROR >- ("wrong number of vertices %u in TRIFAN/STRIP\n", >- n); >+ ("wrong number of vertices %u in TRIFAN/STRIP\n", >+ n); > return -EINVAL; > } > break; >@@ -327,8 +325,8 @@ static int savage_dispatch_dma_prim(drm_ > } > } else { > unsigned int size = 10 - (skip & 1) - (skip >> 1 & 1) - >- (skip >> 2 & 1) - (skip >> 3 & 1) - (skip >> 4 & 1) - >- (skip >> 5 & 1) - (skip >> 6 & 1) - (skip >> 7 & 1); >+ (skip >> 2 & 1) - (skip >> 3 & 1) - (skip >> 4 & 1) - >+ (skip >> 5 & 1) - (skip >> 6 & 1) - (skip >> 7 & 1); > if (skip > SAVAGE_SKIP_ALL_S4 || size != 8) { > DRM_ERROR("invalid skip flags 0x%04x for DMA\n", skip); > return -EINVAL; >@@ -415,8 +413,8 @@ static int savage_dispatch_dma_prim(drm_ > return 0; > } > >-static int savage_dispatch_vb_prim(drm_savage_private_t * dev_priv, >- const drm_savage_cmd_header_t * cmd_header, >+static int savage_dispatch_vb_prim(drm_savage_private_t *dev_priv, >+ const drm_savage_cmd_header_t *cmd_header, > const uint32_t *vtxbuf, unsigned int vb_size, > unsigned int vb_stride) > { >@@ -462,18 +460,18 @@ static int savage_dispatch_vb_prim(drm_s > DRM_ERROR("invalid skip flags 0x%04x\n", skip); > return -EINVAL; > } >- vtx_size = 8; /* full vertex */ >+ vtx_size = 8; /* full vertex */ > } else { > if (skip > SAVAGE_SKIP_ALL_S4) { > DRM_ERROR("invalid skip flags 0x%04x\n", skip); > return -EINVAL; > } >- vtx_size = 10; /* full vertex */ >+ vtx_size = 10; /* full vertex */ > } > > vtx_size -= (skip & 1) + (skip >> 1 & 1) + >- (skip >> 2 & 1) + (skip >> 3 & 1) + (skip 
>> 4 & 1) + >- (skip >> 5 & 1) + (skip >> 6 & 1) + (skip >> 7 & 1); >+ (skip >> 2 & 1) + (skip >> 3 & 1) + (skip >> 4 & 1) + >+ (skip >> 5 & 1) + (skip >> 6 & 1) + (skip >> 7 & 1); > > if (vtx_size > vb_stride) { > DRM_ERROR("vertex size greater than vb stride (%u > %u)\n", >@@ -512,11 +510,11 @@ static int savage_dispatch_vb_prim(drm_s > DMA_DRAW_PRIMITIVE(count, prim, skip); > > if (vb_stride == vtx_size) { >- DMA_COPY(&vtxbuf[vb_stride * start], >+ DMA_COPY(&vtxbuf[vb_stride * start], > vtx_size * count); > } else { > for (i = start; i < start + count; ++i) { >- DMA_COPY(&vtxbuf [vb_stride * i], >+ DMA_COPY(&vtxbuf[vb_stride * i], > vtx_size); > } > } >@@ -533,10 +531,10 @@ static int savage_dispatch_vb_prim(drm_s > return 0; > } > >-static int savage_dispatch_dma_idx(drm_savage_private_t * dev_priv, >- const drm_savage_cmd_header_t * cmd_header, >+static int savage_dispatch_dma_idx(drm_savage_private_t *dev_priv, >+ const drm_savage_cmd_header_t *cmd_header, > const uint16_t *idx, >- const struct drm_buf * dmabuf) >+ const struct drm_buf *dmabuf) > { > unsigned char reorder = 0; > unsigned int prim = cmd_header->idx.prim; >@@ -583,8 +581,8 @@ static int savage_dispatch_dma_idx(drm_s > } > } else { > unsigned int size = 10 - (skip & 1) - (skip >> 1 & 1) - >- (skip >> 2 & 1) - (skip >> 3 & 1) - (skip >> 4 & 1) - >- (skip >> 5 & 1) - (skip >> 6 & 1) - (skip >> 7 & 1); >+ (skip >> 2 & 1) - (skip >> 3 & 1) - (skip >> 4 & 1) - >+ (skip >> 5 & 1) - (skip >> 6 & 1) - (skip >> 7 & 1); > if (skip > SAVAGE_SKIP_ALL_S4 || size != 8) { > DRM_ERROR("invalid skip flags 0x%04x for DMA\n", skip); > return -EINVAL; >@@ -674,8 +672,8 @@ static int savage_dispatch_dma_idx(drm_s > return 0; > } > >-static int savage_dispatch_vb_idx(drm_savage_private_t * dev_priv, >- const drm_savage_cmd_header_t * cmd_header, >+static int savage_dispatch_vb_idx(drm_savage_private_t *dev_priv, >+ const drm_savage_cmd_header_t *cmd_header, > const uint16_t *idx, > const uint32_t *vtxbuf, > unsigned 
int vb_size, unsigned int vb_stride) >@@ -719,18 +717,18 @@ static int savage_dispatch_vb_idx(drm_sa > DRM_ERROR("invalid skip flags 0x%04x\n", skip); > return -EINVAL; > } >- vtx_size = 8; /* full vertex */ >+ vtx_size = 8; /* full vertex */ > } else { > if (skip > SAVAGE_SKIP_ALL_S4) { > DRM_ERROR("invalid skip flags 0x%04x\n", skip); > return -EINVAL; > } >- vtx_size = 10; /* full vertex */ >+ vtx_size = 10; /* full vertex */ > } > > vtx_size -= (skip & 1) + (skip >> 1 & 1) + >- (skip >> 2 & 1) + (skip >> 3 & 1) + (skip >> 4 & 1) + >- (skip >> 5 & 1) + (skip >> 6 & 1) + (skip >> 7 & 1); >+ (skip >> 2 & 1) + (skip >> 3 & 1) + (skip >> 4 & 1) + >+ (skip >> 5 & 1) + (skip >> 6 & 1) + (skip >> 7 & 1); > > if (vtx_size > vb_stride) { > DRM_ERROR("vertex size greater than vb stride (%u > %u)\n", >@@ -742,12 +740,12 @@ static int savage_dispatch_vb_idx(drm_sa > while (n != 0) { > /* Can emit up to 255 vertices (85 triangles) at once. */ > unsigned int count = n > 255 ? 255 : n; >- >+ > /* Check indices */ > for (i = 0; i < count; ++i) { > if (idx[i] > vb_size / (vb_stride * 4)) { > DRM_ERROR("idx[%u]=%u out of range (0-%u)\n", >- i, idx[i], vb_size / (vb_stride * 4)); >+ i, idx[i], vb_size / (vb_stride * 4)); > return -EINVAL; > } > } >@@ -788,8 +786,8 @@ static int savage_dispatch_vb_idx(drm_sa > return 0; > } > >-static int savage_dispatch_clear(drm_savage_private_t * dev_priv, >- const drm_savage_cmd_header_t * cmd_header, >+static int savage_dispatch_clear(drm_savage_private_t *dev_priv, >+ const drm_savage_cmd_header_t *cmd_header, > const drm_savage_cmd_header_t *data, > unsigned int nbox, > const struct drm_clip_rect *boxes) >@@ -803,8 +801,8 @@ static int savage_dispatch_clear(drm_sav > return 0; > > clear_cmd = BCI_CMD_RECT | BCI_CMD_RECT_XP | BCI_CMD_RECT_YP | >- BCI_CMD_SEND_COLOR | BCI_CMD_DEST_PBD_NEW; >- BCI_CMD_SET_ROP(clear_cmd, 0xCC); >+ BCI_CMD_SEND_COLOR | BCI_CMD_DEST_PBD_NEW; >+ BCI_CMD_SET_ROP(clear_cmd,0xCC); > > nbufs = ((flags & SAVAGE_FRONT) ? 
1 : 0) + > ((flags & SAVAGE_BACK) ? 1 : 0) + ((flags & SAVAGE_DEPTH) ? 1 : 0); >@@ -821,6 +819,7 @@ static int savage_dispatch_clear(drm_sav > for (i = 0; i < nbox; ++i) { > unsigned int x, y, w, h; > unsigned int buf; >+ > x = boxes[i].x1, y = boxes[i].y1; > w = boxes[i].x2 - boxes[i].x1; > h = boxes[i].y2 - boxes[i].y1; >@@ -860,7 +859,7 @@ static int savage_dispatch_clear(drm_sav > return 0; > } > >-static int savage_dispatch_swap(drm_savage_private_t * dev_priv, >+static int savage_dispatch_swap(drm_savage_private_t *dev_priv, > unsigned int nbox, const struct drm_clip_rect *boxes) > { > unsigned int swap_cmd; >@@ -871,8 +870,8 @@ static int savage_dispatch_swap(drm_sava > return 0; > > swap_cmd = BCI_CMD_RECT | BCI_CMD_RECT_XP | BCI_CMD_RECT_YP | >- BCI_CMD_SRC_PBD_COLOR_NEW | BCI_CMD_DEST_GBD; >- BCI_CMD_SET_ROP(swap_cmd, 0xCC); >+ BCI_CMD_SRC_PBD_COLOR_NEW | BCI_CMD_DEST_GBD; >+ BCI_CMD_SET_ROP(swap_cmd,0xCC); > > for (i = 0; i < nbox; ++i) { > BEGIN_DMA(6); >@@ -889,10 +888,10 @@ static int savage_dispatch_swap(drm_sava > return 0; > } > >-static int savage_dispatch_draw(drm_savage_private_t * dev_priv, >+static int savage_dispatch_draw(drm_savage_private_t *dev_priv, > const drm_savage_cmd_header_t *start, > const drm_savage_cmd_header_t *end, >- const struct drm_buf * dmabuf, >+ const struct drm_buf *dmabuf, > const unsigned int *vtxbuf, > unsigned int vb_size, unsigned int vb_stride, > unsigned int nbox, >@@ -933,7 +932,7 @@ static int savage_dispatch_draw(drm_sava > /* j was check in savage_bci_cmdbuf */ > ret = savage_dispatch_vb_idx(dev_priv, > &cmd_header, (const uint16_t *)cmdbuf, >- (const uint32_t *)vtxbuf, vb_size, >+ (const uint32_t *)vtxbuf, vb_size, > vb_stride); > cmdbuf += j; > break; >@@ -1015,19 +1014,21 @@ int savage_bci_cmdbuf(struct drm_device > cmdbuf->vb_addr = kvb_addr; > } > if (cmdbuf->nbox) { >- kbox_addr = drm_alloc(cmdbuf->nbox * sizeof(struct drm_clip_rect), >- DRM_MEM_DRIVER); >+ kbox_addr = drm_alloc(cmdbuf->nbox * >+ 
sizeof(struct drm_clip_rect), >+ DRM_MEM_DRIVER); > if (kbox_addr == NULL) { > ret = -ENOMEM; > goto done; > } > > if (DRM_COPY_FROM_USER(kbox_addr, cmdbuf->box_addr, >- cmdbuf->nbox * sizeof(struct drm_clip_rect))) { >+ cmdbuf->nbox * >+ sizeof(struct drm_clip_rect))) { > ret = -EFAULT; > goto done; > } >- cmdbuf->box_addr = kbox_addr; >+ cmdbuf->box_addr = kbox_addr; > } > > /* Make sure writes to DMA buffers are finished before sending >@@ -1070,11 +1071,12 @@ int savage_bci_cmdbuf(struct drm_device > default: > if (first_draw_cmd) { > ret = savage_dispatch_draw( >- dev_priv, first_draw_cmd, >- cmdbuf->cmd_addr - 1, >- dmabuf, cmdbuf->vb_addr, cmdbuf->vb_size, >- cmdbuf->vb_stride, >- cmdbuf->nbox, cmdbuf->box_addr); >+ dev_priv, first_draw_cmd, >+ cmdbuf->cmd_addr - 1, >+ dmabuf, cmdbuf->vb_addr, >+ cmdbuf->vb_size, >+ cmdbuf->vb_stride, >+ cmdbuf->nbox, cmdbuf->box_addr); > if (ret != 0) > return ret; > first_draw_cmd = NULL; >@@ -1132,7 +1134,7 @@ int savage_bci_cmdbuf(struct drm_device > } > > if (first_draw_cmd) { >- ret = savage_dispatch_draw ( >+ ret = savage_dispatch_draw( > dev_priv, first_draw_cmd, cmdbuf->cmd_addr, dmabuf, > cmdbuf->vb_addr, cmdbuf->vb_size, cmdbuf->vb_stride, > cmdbuf->nbox, cmdbuf->box_addr); >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/sis_drv.c linux-2.6.23.i686/drivers/char/drm/sis_drv.c >--- linux-2.6.23.i686.orig/drivers/char/drm/sis_drv.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/sis_drv.c 2008-01-06 09:24:57.000000000 +0100 >@@ -32,9 +32,10 @@ > #include "drm_pciids.h" > > static struct pci_device_id pciidlist[] = { >- sisdrv_PCI_IDS >+ sis_PCI_IDS > }; > >+ > static int sis_driver_load(struct drm_device *dev, unsigned long chipset) > { > drm_sis_private_t *dev_priv; >@@ -64,6 +65,8 @@ static int sis_driver_unload(struct drm_ > return 0; > } > >+ >+static int probe(struct pci_dev *pdev, const struct pci_device_id *ent); > static struct drm_driver driver = { > .driver_features = 
DRIVER_USE_AGP | DRIVER_USE_MTRR, > .load = sis_driver_load, >@@ -77,17 +80,19 @@ static struct drm_driver driver = { > .get_reg_ofs = drm_core_get_reg_ofs, > .ioctls = sis_ioctls, > .fops = { >- .owner = THIS_MODULE, >- .open = drm_open, >- .release = drm_release, >- .ioctl = drm_ioctl, >- .mmap = drm_mmap, >- .poll = drm_poll, >- .fasync = drm_fasync, >- }, >+ .owner = THIS_MODULE, >+ .open = drm_open, >+ .release = drm_release, >+ .ioctl = drm_ioctl, >+ .mmap = drm_mmap, >+ .poll = drm_poll, >+ .fasync = drm_fasync, >+ }, > .pci_driver = { >- .name = DRIVER_NAME, >- .id_table = pciidlist, >+ .name = DRIVER_NAME, >+ .id_table = pciidlist, >+ .probe = probe, >+ .remove = __devexit_p(drm_cleanup_pci), > }, > > .name = DRIVER_NAME, >@@ -98,10 +103,15 @@ static struct drm_driver driver = { > .patchlevel = DRIVER_PATCHLEVEL, > }; > >+static int probe(struct pci_dev *pdev, const struct pci_device_id *ent) >+{ >+ return drm_get_dev(pdev, ent, &driver); >+} >+ > static int __init sis_init(void) > { > driver.num_ioctls = sis_max_ioctl; >- return drm_init(&driver); >+ return drm_init(&driver, pciidlist); > } > > static void __exit sis_exit(void) >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/sis_drv.h linux-2.6.23.i686/drivers/char/drm/sis_drv.h >--- linux-2.6.23.i686.orig/drivers/char/drm/sis_drv.h 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/sis_drv.h 2008-01-06 09:24:57.000000000 +0100 >@@ -44,11 +44,15 @@ enum sis_family { > SIS_CHIP_315 = 1, > }; > >-#include "drm_sman.h" >+#if defined(__linux__) >+#define SIS_HAVE_CORE_MM >+#endif > >+#ifdef SIS_HAVE_CORE_MM >+#include "drm_sman.h" > > #define SIS_BASE (dev_priv->mmio) >-#define SIS_READ(reg) DRM_READ32(SIS_BASE, reg); >+#define SIS_READ(reg) DRM_READ32(SIS_BASE, reg); > #define SIS_WRITE(reg, val) DRM_WRITE32(SIS_BASE, reg, val); > > typedef struct drm_sis_private { >@@ -67,6 +71,19 @@ extern void sis_reclaim_buffers_locked(s > struct drm_file *file_priv); > extern void 
sis_lastclose(struct drm_device *dev); > >+#else >+#include "sis_ds.h" >+ >+typedef struct drm_sis_private { >+ memHeap_t *AGPHeap; >+ memHeap_t *FBHeap; >+} drm_sis_private_t; >+ >+extern int sis_init_context(struct drm_device * dev, int context); >+extern int sis_final_context(struct drm_device * dev, int context); >+ >+#endif >+ > extern struct drm_ioctl_desc sis_ioctls[]; > extern int sis_max_ioctl; > >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/sis_mm.c linux-2.6.23.i686/drivers/char/drm/sis_mm.c >--- linux-2.6.23.i686.orig/drivers/char/drm/sis_mm.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/sis_mm.c 2008-01-06 09:24:57.000000000 +0100 >@@ -35,13 +35,17 @@ > #include "sis_drm.h" > #include "sis_drv.h" > >+#if defined(__linux__) > #include <video/sisfb.h> >+#endif > > #define VIDEO_TYPE 0 > #define AGP_TYPE 1 > >+#define SIS_MM_ALIGN_SHIFT 4 >+#define SIS_MM_ALIGN_MASK ( (1 << SIS_MM_ALIGN_SHIFT) - 1) > >-#if defined(CONFIG_FB_SIS) >+#if defined(__linux__) && defined(CONFIG_FB_SIS) > /* fb management via fb device */ > > #define SIS_MM_ALIGN_SHIFT 0 >@@ -75,12 +79,7 @@ static unsigned long sis_sman_mm_offset( > return ~((unsigned long)ref); > } > >-#else /* CONFIG_FB_SIS */ >- >-#define SIS_MM_ALIGN_SHIFT 4 >-#define SIS_MM_ALIGN_MASK ( (1 << SIS_MM_ALIGN_SHIFT) - 1) >- >-#endif /* CONFIG_FB_SIS */ >+#endif > > static int sis_fb_init(struct drm_device *dev, void *data, struct drm_file *file_priv) > { >@@ -89,7 +88,7 @@ static int sis_fb_init(struct drm_device > int ret; > > mutex_lock(&dev->struct_mutex); >-#if defined(CONFIG_FB_SIS) >+#if defined(__linux__) && defined(CONFIG_FB_SIS) > { > struct drm_sman_mm sman_mm; > sman_mm.private = (void *)0xFFFFFFFF; >@@ -134,6 +133,7 @@ static int sis_drm_alloc(struct drm_devi > dev_priv->agp_initialized)) { > DRM_ERROR > ("Attempt to allocate from uninitialized memory manager.\n"); >+ mutex_unlock(&dev->struct_mutex); > return -EINVAL; > } > >@@ -248,7 +248,7 @@ int 
sis_idle(struct drm_device *dev) > return 0; > } > } >- >+ > /* > * Implement a device switch here if needed > */ >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/tdfx_drv.c linux-2.6.23.i686/drivers/char/drm/tdfx_drv.c >--- linux-2.6.23.i686.orig/drivers/char/drm/tdfx_drv.c 2007-10-09 22:31:38.000000000 +0200 >+++ linux-2.6.23.i686/drivers/char/drm/tdfx_drv.c 2008-01-06 09:24:57.000000000 +0100 >@@ -39,23 +39,26 @@ static struct pci_device_id pciidlist[] > tdfx_PCI_IDS > }; > >+static int probe(struct pci_dev *pdev, const struct pci_device_id *ent); > static struct drm_driver driver = { > .driver_features = DRIVER_USE_MTRR, > .reclaim_buffers = drm_core_reclaim_buffers, > .get_map_ofs = drm_core_get_map_ofs, > .get_reg_ofs = drm_core_get_reg_ofs, > .fops = { >- .owner = THIS_MODULE, >- .open = drm_open, >- .release = drm_release, >- .ioctl = drm_ioctl, >- .mmap = drm_mmap, >- .poll = drm_poll, >- .fasync = drm_fasync, >- }, >+ .owner = THIS_MODULE, >+ .open = drm_open, >+ .release = drm_release, >+ .ioctl = drm_ioctl, >+ .mmap = drm_mmap, >+ .poll = drm_poll, >+ .fasync = drm_fasync, >+ }, > .pci_driver = { >- .name = DRIVER_NAME, >- .id_table = pciidlist, >+ .name = DRIVER_NAME, >+ .id_table = pciidlist, >+ .probe = probe, >+ .remove = __devexit_p(drm_cleanup_pci), > }, > > .name = DRIVER_NAME, >@@ -66,9 +69,15 @@ static struct drm_driver driver = { > .patchlevel = DRIVER_PATCHLEVEL, > }; > >+static int probe(struct pci_dev *pdev, const struct pci_device_id *ent) >+{ >+ return drm_get_dev(pdev, ent, &driver); >+} >+ >+ > static int __init tdfx_init(void) > { >- return drm_init(&driver); >+ return drm_init(&driver, pciidlist); > } > > static void __exit tdfx_exit(void) >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/via_buffer.c linux-2.6.23.i686/drivers/char/drm/via_buffer.c >--- linux-2.6.23.i686.orig/drivers/char/drm/via_buffer.c 1970-01-01 01:00:00.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/via_buffer.c 2008-01-06 09:24:57.000000000 +0100 
>@@ -0,0 +1,163 @@ >+/************************************************************************** >+ * >+ * Copyright (c) 2007 Tungsten Graphics, Inc., Cedar Park, TX., USA, >+ * All Rights Reserved. >+ * >+ * Permission is hereby granted, free of charge, to any person obtaining a >+ * copy of this software and associated documentation files (the >+ * "Software"), to deal in the Software without restriction, including >+ * without limitation the rights to use, copy, modify, merge, publish, >+ * distribute, sub license, and/or sell copies of the Software, and to >+ * permit persons to whom the Software is furnished to do so, subject to >+ * the following conditions: >+ * >+ * The above copyright notice and this permission notice (including the >+ * next paragraph) shall be included in all copies or substantial portions >+ * of the Software. >+ * >+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR >+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, >+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL >+ * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, >+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR >+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE >+ * USE OR OTHER DEALINGS IN THE SOFTWARE. >+ * >+ **************************************************************************/ >+/* >+ * Authors: Thomas Hellström <thomas-at-tungstengraphics-dot-com> >+ */ >+ >+#include "drmP.h" >+#include "via_drm.h" >+#include "via_drv.h" >+ >+struct drm_ttm_backend *via_create_ttm_backend_entry(struct drm_device * dev) >+{ >+ return drm_agp_init_ttm(dev); >+} >+ >+int via_fence_types(struct drm_buffer_object *bo, uint32_t * fclass, >+ uint32_t * type) >+{ >+ *type = 3; >+ return 0; >+} >+ >+int via_invalidate_caches(struct drm_device * dev, uint64_t flags) >+{ >+ /* >+ * FIXME: Invalidate texture caches here. 
>+ */ >+ >+ return 0; >+} >+ >+ >+static int via_vram_info(struct drm_device *dev, >+ unsigned long *offset, >+ unsigned long *size) >+{ >+ struct pci_dev *pdev = dev->pdev; >+ unsigned long flags; >+ >+ int ret = -EINVAL; >+ int i; >+ for (i=0; i<6; ++i) { >+ flags = pci_resource_flags(pdev, i); >+ if ((flags & (IORESOURCE_MEM | IORESOURCE_PREFETCH)) == >+ (IORESOURCE_MEM | IORESOURCE_PREFETCH)) { >+ ret = 0; >+ break; >+ } >+ } >+ >+ if (ret) { >+ DRM_ERROR("Could not find VRAM PCI resource\n"); >+ return ret; >+ } >+ >+ *offset = pci_resource_start(pdev, i); >+ *size = pci_resource_end(pdev, i) - *offset + 1; >+ return 0; >+} >+ >+int via_init_mem_type(struct drm_device * dev, uint32_t type, >+ struct drm_mem_type_manager * man) >+{ >+ switch (type) { >+ case DRM_BO_MEM_LOCAL: >+ /* System memory */ >+ >+ man->flags = _DRM_FLAG_MEMTYPE_MAPPABLE | >+ _DRM_FLAG_MEMTYPE_CACHED; >+ man->drm_bus_maptype = 0; >+ break; >+ >+ case DRM_BO_MEM_TT: >+ /* Dynamic agpgart memory */ >+ >+ if (!(drm_core_has_AGP(dev) && dev->agp)) { >+ DRM_ERROR("AGP is not enabled for memory type %u\n", >+ (unsigned)type); >+ return -EINVAL; >+ } >+ man->io_offset = dev->agp->agp_info.aper_base; >+ man->io_size = dev->agp->agp_info.aper_size * 1024 * 1024; >+ man->io_addr = NULL; >+ man->flags = _DRM_FLAG_MEMTYPE_MAPPABLE | _DRM_FLAG_NEEDS_IOREMAP; >+ >+ /* Only to get pte protection right. 
*/ >+ >+ man->drm_bus_maptype = _DRM_AGP; >+ break; >+ >+ case DRM_BO_MEM_VRAM: >+ /* "On-card" video ram */ >+ >+ man->flags = _DRM_FLAG_MEMTYPE_MAPPABLE | _DRM_FLAG_NEEDS_IOREMAP; >+ man->drm_bus_maptype = _DRM_FRAME_BUFFER; >+ man->io_addr = NULL; >+ return via_vram_info(dev, &man->io_offset, &man->io_size); >+ break; >+ >+ case DRM_BO_MEM_PRIV0: >+ /* Pre-bound agpgart memory */ >+ >+ if (!(drm_core_has_AGP(dev) && dev->agp)) { >+ DRM_ERROR("AGP is not enabled for memory type %u\n", >+ (unsigned)type); >+ return -EINVAL; >+ } >+ man->io_offset = dev->agp->agp_info.aper_base; >+ man->io_size = dev->agp->agp_info.aper_size * 1024 * 1024; >+ man->io_addr = NULL; >+ man->flags = _DRM_FLAG_MEMTYPE_MAPPABLE | >+ _DRM_FLAG_MEMTYPE_FIXED | _DRM_FLAG_NEEDS_IOREMAP; >+ man->drm_bus_maptype = _DRM_AGP; >+ break; >+ >+ default: >+ DRM_ERROR("Unsupported memory type %u\n", (unsigned)type); >+ return -EINVAL; >+ } >+ return 0; >+} >+ >+uint64_t via_evict_flags(struct drm_buffer_object *bo) >+{ >+ switch (bo->mem.mem_type) { >+ case DRM_BO_MEM_LOCAL: >+ case DRM_BO_MEM_TT: >+ return DRM_BO_FLAG_MEM_LOCAL; /* Evict TT to local */ >+ case DRM_BO_MEM_PRIV0: /* Evict pre-bound AGP to TT */ >+ return DRM_BO_MEM_TT; >+ case DRM_BO_MEM_VRAM: >+ if (bo->mem.num_pages > 128) >+ return DRM_BO_MEM_TT; >+ else >+ return DRM_BO_MEM_LOCAL; >+ default: >+ return DRM_BO_MEM_LOCAL; >+ } >+} >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/via_dmablit.c linux-2.6.23.i686/drivers/char/drm/via_dmablit.c >--- linux-2.6.23.i686.orig/drivers/char/drm/via_dmablit.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/via_dmablit.c 2008-01-06 09:24:57.000000000 +0100 >@@ -1,5 +1,5 @@ > /* via_dmablit.c -- PCI DMA BitBlt support for the VIA Unichrome/Pro >- * >+ * > * Copyright (C) 2005 Thomas Hellstrom, All Rights Reserved. 
> * > * Permission is hereby granted, free of charge, to any person obtaining a >@@ -16,22 +16,22 @@ > * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR > * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, > * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL >- * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, >- * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR >- * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE >+ * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, >+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR >+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE > * USE OR OTHER DEALINGS IN THE SOFTWARE. > * >- * Authors: >+ * Authors: > * Thomas Hellstrom. > * Partially based on code obtained from Digeo Inc. > */ > > > /* >- * Unmaps the DMA mappings. >- * FIXME: Is this a NoOp on x86? Also >- * FIXME: What happens if this one is called and a pending blit has previously done >- * the same DMA mappings? >+ * Unmaps the DMA mappings. >+ * FIXME: Is this a NoOp on x86? Also >+ * FIXME: What happens if this one is called and a pending blit has previously done >+ * the same DMA mappings? 
> */ > > #include "drmP.h" >@@ -41,9 +41,9 @@ > > #include <linux/pagemap.h> > >-#define VIA_PGDN(x) (((unsigned long)(x)) & PAGE_MASK) >-#define VIA_PGOFF(x) (((unsigned long)(x)) & ~PAGE_MASK) >-#define VIA_PFN(x) ((unsigned long)(x) >> PAGE_SHIFT) >+#define VIA_PGDN(x) (((unsigned long)(x)) & PAGE_MASK) >+#define VIA_PGOFF(x) (((unsigned long)(x)) & ~PAGE_MASK) >+#define VIA_PFN(x) ((unsigned long)(x) >> PAGE_SHIFT) > > typedef struct _drm_via_descriptor { > uint32_t mem_addr; >@@ -65,7 +65,7 @@ via_unmap_blit_from_device(struct pci_de > int num_desc = vsg->num_desc; > unsigned cur_descriptor_page = num_desc / vsg->descriptors_per_page; > unsigned descriptor_this_page = num_desc % vsg->descriptors_per_page; >- drm_via_descriptor_t *desc_ptr = vsg->desc_pages[cur_descriptor_page] + >+ drm_via_descriptor_t *desc_ptr = vsg->desc_pages[cur_descriptor_page] + > descriptor_this_page; > dma_addr_t next = vsg->chain_start; > >@@ -73,7 +73,7 @@ via_unmap_blit_from_device(struct pci_de > if (descriptor_this_page-- == 0) { > cur_descriptor_page--; > descriptor_this_page = vsg->descriptors_per_page - 1; >- desc_ptr = vsg->desc_pages[cur_descriptor_page] + >+ desc_ptr = vsg->desc_pages[cur_descriptor_page] + > descriptor_this_page; > } > dma_unmap_single(&pdev->dev, next, sizeof(*desc_ptr), DMA_TO_DEVICE); >@@ -93,7 +93,7 @@ via_unmap_blit_from_device(struct pci_de > static void > via_map_blit_for_device(struct pci_dev *pdev, > const drm_via_dmablit_t *xfer, >- drm_via_sg_info_t *vsg, >+ drm_via_sg_info_t *vsg, > int mode) > { > unsigned cur_descriptor_page = 0; >@@ -110,7 +110,7 @@ via_map_blit_for_device(struct pci_dev * > dma_addr_t next = 0 | VIA_DMA_DPR_EC; > drm_via_descriptor_t *desc_ptr = NULL; > >- if (mode == 1) >+ if (mode == 1) > desc_ptr = vsg->desc_pages[cur_descriptor_page]; > > for (cur_line = 0; cur_line < xfer->num_lines; ++cur_line) { >@@ -118,24 +118,23 @@ via_map_blit_for_device(struct pci_dev * > line_len = xfer->line_length; > cur_fb = fb_addr; > 
cur_mem = mem_addr; >- >+ > while (line_len > 0) { > > remaining_len = min(PAGE_SIZE-VIA_PGOFF(cur_mem), line_len); > line_len -= remaining_len; > > if (mode == 1) { >- desc_ptr->mem_addr = >- dma_map_page(&pdev->dev, >- vsg->pages[VIA_PFN(cur_mem) - >- VIA_PFN(first_addr)], >- VIA_PGOFF(cur_mem), remaining_len, >- vsg->direction); >+ desc_ptr->mem_addr = dma_map_page(&pdev->dev, >+ vsg->pages[VIA_PFN(cur_mem) - >+ VIA_PFN(first_addr)], >+ VIA_PGOFF(cur_mem), remaining_len, >+ vsg->direction); > desc_ptr->dev_addr = cur_fb; >- >+ > desc_ptr->size = remaining_len; > desc_ptr->next = (uint32_t) next; >- next = dma_map_single(&pdev->dev, desc_ptr, sizeof(*desc_ptr), >+ next = dma_map_single(&pdev->dev, desc_ptr, sizeof(*desc_ptr), > DMA_TO_DEVICE); > desc_ptr++; > if (++num_descriptors_this_page >= vsg->descriptors_per_page) { >@@ -143,12 +142,12 @@ via_map_blit_for_device(struct pci_dev * > desc_ptr = vsg->desc_pages[++cur_descriptor_page]; > } > } >- >+ > num_desc++; > cur_mem += remaining_len; > cur_fb += remaining_len; > } >- >+ > mem_addr += xfer->mem_stride; > fb_addr += xfer->fb_stride; > } >@@ -161,14 +160,14 @@ via_map_blit_for_device(struct pci_dev * > } > > /* >- * Function that frees up all resources for a blit. It is usable even if the >+ * Function that frees up all resources for a blit. It is usable even if the > * blit info has only been partially built as long as the status enum is consistent > * with the actual status of the used resources. > */ > > > static void >-via_free_sg_info(struct pci_dev *pdev, drm_via_sg_info_t *vsg) >+via_free_sg_info(struct pci_dev *pdev, drm_via_sg_info_t *vsg) > { > struct page *page; > int i; >@@ -185,7 +184,7 @@ via_free_sg_info(struct pci_dev *pdev, d > case dr_via_pages_locked: > for (i=0; i<vsg->num_pages; ++i) { > if ( NULL != (page = vsg->pages[i])) { >- if (! PageReserved(page) && (DMA_FROM_DEVICE == vsg->direction)) >+ if (! 
PageReserved(page) && (DMA_FROM_DEVICE == vsg->direction)) > SetPageDirty(page); > page_cache_release(page); > } >@@ -200,7 +199,7 @@ via_free_sg_info(struct pci_dev *pdev, d > vsg->bounce_buffer = NULL; > } > vsg->free_on_sequence = 0; >-} >+} > > /* > * Fire a blit engine. >@@ -213,7 +212,7 @@ via_fire_dmablit(struct drm_device *dev, > > VIA_WRITE(VIA_PCI_DMA_MAR0 + engine*0x10, 0); > VIA_WRITE(VIA_PCI_DMA_DAR0 + engine*0x10, 0); >- VIA_WRITE(VIA_PCI_DMA_CSR0 + engine*0x04, VIA_DMA_CSR_DD | VIA_DMA_CSR_TD | >+ VIA_WRITE(VIA_PCI_DMA_CSR0 + engine*0x04, VIA_DMA_CSR_DD | VIA_DMA_CSR_TD | > VIA_DMA_CSR_DE); > VIA_WRITE(VIA_PCI_DMA_MR0 + engine*0x04, VIA_DMA_MR_CM | VIA_DMA_MR_TDIE); > VIA_WRITE(VIA_PCI_DMA_BCR0 + engine*0x10, 0); >@@ -233,9 +232,9 @@ via_lock_all_dma_pages(drm_via_sg_info_t > { > int ret; > unsigned long first_pfn = VIA_PFN(xfer->mem_addr); >- vsg->num_pages = VIA_PFN(xfer->mem_addr + (xfer->num_lines * xfer->mem_stride -1)) - >+ vsg->num_pages = VIA_PFN(xfer->mem_addr + (xfer->num_lines * xfer->mem_stride -1)) - > first_pfn + 1; >- >+ > if (NULL == (vsg->pages = vmalloc(sizeof(struct page *) * vsg->num_pages))) > return -ENOMEM; > memset(vsg->pages, 0, sizeof(struct page *) * vsg->num_pages); >@@ -248,7 +247,7 @@ via_lock_all_dma_pages(drm_via_sg_info_t > > up_read(¤t->mm->mmap_sem); > if (ret != vsg->num_pages) { >- if (ret < 0) >+ if (ret < 0) > return ret; > vsg->state = dr_via_pages_locked; > return -EINVAL; >@@ -264,21 +263,22 @@ via_lock_all_dma_pages(drm_via_sg_info_t > * quite large for some blits, and pages don't need to be contingous. 
> */ > >-static int >+static int > via_alloc_desc_pages(drm_via_sg_info_t *vsg) > { > int i; >- >+ > vsg->descriptors_per_page = PAGE_SIZE / sizeof( drm_via_descriptor_t); >- vsg->num_desc_pages = (vsg->num_desc + vsg->descriptors_per_page - 1) / >+ vsg->num_desc_pages = (vsg->num_desc + vsg->descriptors_per_page - 1) / > vsg->descriptors_per_page; > >- if (NULL == (vsg->desc_pages = kcalloc(vsg->num_desc_pages, sizeof(void *), GFP_KERNEL))) >+ if (NULL == (vsg->desc_pages = kmalloc(sizeof(void *) * vsg->num_desc_pages, GFP_KERNEL))) > return -ENOMEM; >- >+ >+ memset(vsg->desc_pages, 0, sizeof(void *) * vsg->num_desc_pages); > vsg->state = dr_via_desc_pages_alloc; > for (i=0; i<vsg->num_desc_pages; ++i) { >- if (NULL == (vsg->desc_pages[i] = >+ if (NULL == (vsg->desc_pages[i] = > (drm_via_descriptor_t *) __get_free_page(GFP_KERNEL))) > return -ENOMEM; > } >@@ -286,7 +286,7 @@ via_alloc_desc_pages(drm_via_sg_info_t * > vsg->num_desc); > return 0; > } >- >+ > static void > via_abort_dmablit(struct drm_device *dev, int engine) > { >@@ -300,7 +300,7 @@ via_dmablit_engine_off(struct drm_device > { > drm_via_private_t *dev_priv = (drm_via_private_t *)dev->dev_private; > >- VIA_WRITE(VIA_PCI_DMA_CSR0 + engine*0x04, VIA_DMA_CSR_TD | VIA_DMA_CSR_DD); >+ VIA_WRITE(VIA_PCI_DMA_CSR0 + engine*0x04, VIA_DMA_CSR_TD | VIA_DMA_CSR_DD); > } > > >@@ -311,7 +311,7 @@ via_dmablit_engine_off(struct drm_device > * task. Basically the task of the interrupt handler is to submit a new blit to the engine, while > * the workqueue task takes care of processing associated with the old blit. 
> */ >- >+ > void > via_dmablit_handler(struct drm_device *dev, int engine, int from_irq) > { >@@ -331,19 +331,19 @@ via_dmablit_handler(struct drm_device *d > spin_lock_irqsave(&blitq->blit_lock, irqsave); > } > >- done_transfer = blitq->is_active && >+ done_transfer = blitq->is_active && > (( status = VIA_READ(VIA_PCI_DMA_CSR0 + engine*0x04)) & VIA_DMA_CSR_TD); >- done_transfer = done_transfer || ( blitq->aborting && !(status & VIA_DMA_CSR_DE)); >+ done_transfer = done_transfer || ( blitq->aborting && !(status & VIA_DMA_CSR_DE)); > > cur = blitq->cur; > if (done_transfer) { > > blitq->blits[cur]->aborted = blitq->aborting; > blitq->done_blit_handle++; >- DRM_WAKEUP(blitq->blit_queue + cur); >+ DRM_WAKEUP(blitq->blit_queue + cur); > > cur++; >- if (cur >= VIA_NUM_BLIT_SLOTS) >+ if (cur >= VIA_NUM_BLIT_SLOTS) > cur = 0; > blitq->cur = cur; > >@@ -355,7 +355,7 @@ via_dmablit_handler(struct drm_device *d > > blitq->is_active = 0; > blitq->aborting = 0; >- schedule_work(&blitq->wq); >+ schedule_work(&blitq->wq); > > } else if (blitq->is_active && time_after_eq(jiffies, blitq->end)) { > >@@ -367,7 +367,7 @@ via_dmablit_handler(struct drm_device *d > blitq->aborting = 1; > blitq->end = jiffies + DRM_HZ; > } >- >+ > if (!blitq->is_active) { > if (blitq->num_outstanding) { > via_fire_dmablit(dev, blitq->blits[cur], engine); >@@ -375,22 +375,24 @@ via_dmablit_handler(struct drm_device *d > blitq->cur = cur; > blitq->num_outstanding--; > blitq->end = jiffies + DRM_HZ; >- if (!timer_pending(&blitq->poll_timer)) >- mod_timer(&blitq->poll_timer, jiffies + 1); >+ if (!timer_pending(&blitq->poll_timer)) { >+ blitq->poll_timer.expires = jiffies+1; >+ add_timer(&blitq->poll_timer); >+ } > } else { > if (timer_pending(&blitq->poll_timer)) { > del_timer(&blitq->poll_timer); > } > via_dmablit_engine_off(dev, engine); > } >- } >+ } > > if (from_irq) { > spin_unlock(&blitq->blit_lock); > } else { > spin_unlock_irqrestore(&blitq->blit_lock, irqsave); > } >-} >+} > > > >@@ -426,13 
+428,13 @@ via_dmablit_active(drm_via_blitq_t *blit > > return active; > } >- >+ > /* > * Sync. Wait for at least three seconds for the blit to be performed. > */ > > static int >-via_dmablit_sync(struct drm_device *dev, uint32_t handle, int engine) >+via_dmablit_sync(struct drm_device *dev, uint32_t handle, int engine) > { > > drm_via_private_t *dev_priv = (drm_via_private_t *)dev->dev_private; >@@ -441,12 +443,12 @@ via_dmablit_sync(struct drm_device *dev, > int ret = 0; > > if (via_dmablit_active(blitq, engine, handle, &queue)) { >- DRM_WAIT_ON(ret, *queue, 3 * DRM_HZ, >+ DRM_WAIT_ON(ret, *queue, 3 * DRM_HZ, > !via_dmablit_active(blitq, engine, handle, NULL)); > } > DRM_DEBUG("DMA blit sync handle 0x%x engine %d returned %d\n", > handle, engine, ret); >- >+ > return ret; > } > >@@ -468,22 +470,22 @@ via_dmablit_timer(unsigned long data) > struct drm_device *dev = blitq->dev; > int engine = (int) > (blitq - ((drm_via_private_t *)dev->dev_private)->blit_queues); >- >- DRM_DEBUG("Polling timer called for engine %d, jiffies %lu\n", engine, >+ >+ DRM_DEBUG("Polling timer called for engine %d, jiffies %lu\n", engine, > (unsigned long) jiffies); > > via_dmablit_handler(dev, engine, 0); >- >- if (!timer_pending(&blitq->poll_timer)) { >- mod_timer(&blitq->poll_timer, jiffies + 1); > >- /* >- * Rerun handler to delete timer if engines are off, and >- * to shorten abort latency. This is a little nasty. >- */ >+ if (!timer_pending(&blitq->poll_timer)) { >+ blitq->poll_timer.expires = jiffies+1; >+ add_timer(&blitq->poll_timer); > >- via_dmablit_handler(dev, engine, 0); >+ /* >+ * Rerun handler to delete timer if engines are off, and >+ * to shorten abort latency. This is a little nasty. 
>+ */ > >+ via_dmablit_handler(dev, engine, 0); > } > } > >@@ -497,46 +499,54 @@ via_dmablit_timer(unsigned long data) > */ > > >-static void >+static void >+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,20) >+via_dmablit_workqueue(void *data) >+#else > via_dmablit_workqueue(struct work_struct *work) >+#endif > { >+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,20) >+ drm_via_blitq_t *blitq = (drm_via_blitq_t *) data; >+#else > drm_via_blitq_t *blitq = container_of(work, drm_via_blitq_t, wq); >+#endif > struct drm_device *dev = blitq->dev; > unsigned long irqsave; > drm_via_sg_info_t *cur_sg; > int cur_released; >- >- >- DRM_DEBUG("Workqueue task called for blit engine %ld\n",(unsigned long) >+ >+ >+ DRM_DEBUG("Workqueue task called for blit engine %ld\n",(unsigned long) > (blitq - ((drm_via_private_t *)dev->dev_private)->blit_queues)); > > spin_lock_irqsave(&blitq->blit_lock, irqsave); >- >+ > while(blitq->serviced != blitq->cur) { > > cur_released = blitq->serviced++; > > DRM_DEBUG("Releasing blit slot %d\n", cur_released); > >- if (blitq->serviced >= VIA_NUM_BLIT_SLOTS) >+ if (blitq->serviced >= VIA_NUM_BLIT_SLOTS) > blitq->serviced = 0; >- >+ > cur_sg = blitq->blits[cur_released]; > blitq->num_free++; >- >+ > spin_unlock_irqrestore(&blitq->blit_lock, irqsave); >- >+ > DRM_WAKEUP(&blitq->busy_queue); >- >+ > via_free_sg_info(dev->pdev, cur_sg); > kfree(cur_sg); >- >+ > spin_lock_irqsave(&blitq->blit_lock, irqsave); > } > > spin_unlock_irqrestore(&blitq->blit_lock, irqsave); > } >- >+ > > /* > * Init all blit engines. Currently we use two, but some hardware have 4. 
>@@ -550,8 +560,8 @@ via_init_dmablit(struct drm_device *dev) > drm_via_private_t *dev_priv = (drm_via_private_t *)dev->dev_private; > drm_via_blitq_t *blitq; > >- pci_set_master(dev->pdev); >- >+ pci_set_master(dev->pdev); >+ > for (i=0; i< VIA_NUM_BLIT_ENGINES; ++i) { > blitq = dev_priv->blit_queues + i; > blitq->dev = dev; >@@ -569,23 +579,28 @@ via_init_dmablit(struct drm_device *dev) > DRM_INIT_WAITQUEUE(blitq->blit_queue + j); > } > DRM_INIT_WAITQUEUE(&blitq->busy_queue); >+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,20) >+ INIT_WORK(&blitq->wq, via_dmablit_workqueue, blitq); >+#else > INIT_WORK(&blitq->wq, via_dmablit_workqueue); >- setup_timer(&blitq->poll_timer, via_dmablit_timer, >- (unsigned long)blitq); >- } >+#endif >+ init_timer(&blitq->poll_timer); >+ blitq->poll_timer.function = &via_dmablit_timer; >+ blitq->poll_timer.data = (unsigned long) blitq; >+ } > } > > /* > * Build all info and do all mappings required for a blit. > */ >- >+ > > static int > via_build_sg_info(struct drm_device *dev, drm_via_sg_info_t *vsg, drm_via_dmablit_t *xfer) > { > int draw = xfer->to_fb; > int ret = 0; >- >+ > vsg->direction = (draw) ? DMA_TO_DEVICE : DMA_FROM_DEVICE; > vsg->bounce_buffer = NULL; > >@@ -599,7 +614,7 @@ via_build_sg_info(struct drm_device *dev > /* > * Below check is a driver limitation, not a hardware one. We > * don't want to lock unused pages, and don't want to incoporate the >- * extra logic of avoiding them. Make sure there are no. >+ * extra logic of avoiding them. Make sure there are no. > * (Not a big limitation anyway.) 
> */ > >@@ -610,7 +625,7 @@ via_build_sg_info(struct drm_device *dev > } > > if ((xfer->mem_stride == xfer->line_length) && >- (xfer->fb_stride == xfer->line_length)) { >+ (xfer->fb_stride == xfer->line_length)) { > xfer->mem_stride *= xfer->num_lines; > xfer->line_length = xfer->mem_stride; > xfer->fb_stride = xfer->mem_stride; >@@ -625,15 +640,15 @@ via_build_sg_info(struct drm_device *dev > if (xfer->num_lines > 2048 || (xfer->num_lines*xfer->mem_stride > (2048*2048*4))) { > DRM_ERROR("Too large PCI DMA bitblt.\n"); > return -EINVAL; >- } >+ } > >- /* >+ /* > * we allow a negative fb stride to allow flipping of images in >- * transfer. >+ * transfer. > */ > > if (xfer->mem_stride < xfer->line_length || >- abs(xfer->fb_stride) < xfer->line_length) { >+ abs(xfer->fb_stride) < xfer->line_length) { > DRM_ERROR("Invalid frame-buffer / memory stride.\n"); > return -EINVAL; > } >@@ -651,13 +666,11 @@ via_build_sg_info(struct drm_device *dev > return -EINVAL; > } > #else >- if ((((unsigned long)xfer->mem_addr & 15) || >- ((unsigned long)xfer->fb_addr & 3)) || >- ((xfer->num_lines > 1) && >- ((xfer->mem_stride & 15) || (xfer->fb_stride & 3)))) { >+ if ((((unsigned long)xfer->mem_addr & 15) || ((unsigned long)xfer->fb_addr & 3)) || >+ ((xfer->num_lines > 1) && ((xfer->mem_stride & 15) || (xfer->fb_stride & 3)))) { > DRM_ERROR("Invalid DRM bitblt alignment.\n"); > return -EINVAL; >- } >+ } > #endif > > if (0 != (ret = via_lock_all_dma_pages(vsg, xfer))) { >@@ -673,17 +686,17 @@ via_build_sg_info(struct drm_device *dev > return ret; > } > via_map_blit_for_device(dev->pdev, xfer, vsg, 1); >- >+ > return 0; > } >- >+ > > /* > * Reserve one free slot in the blit queue. Will wait for one second for one > * to become available. Otherwise -EBUSY is returned. > */ > >-static int >+static int > via_dmablit_grab_slot(drm_via_blitq_t *blitq, int engine) > { > int ret=0; >@@ -698,10 +711,10 @@ via_dmablit_grab_slot(drm_via_blitq_t *b > if (ret) { > return (-EINTR == ret) ? 
-EAGAIN : ret; > } >- >+ > spin_lock_irqsave(&blitq->blit_lock, irqsave); > } >- >+ > blitq->num_free--; > spin_unlock_irqrestore(&blitq->blit_lock, irqsave); > >@@ -712,7 +725,7 @@ via_dmablit_grab_slot(drm_via_blitq_t *b > * Hand back a free slot if we changed our mind. > */ > >-static void >+static void > via_dmablit_release_slot(drm_via_blitq_t *blitq) > { > unsigned long irqsave; >@@ -728,8 +741,8 @@ via_dmablit_release_slot(drm_via_blitq_t > */ > > >-static int >-via_dmablit(struct drm_device *dev, drm_via_dmablit_t *xfer) >+static int >+via_dmablit(struct drm_device *dev, drm_via_dmablit_t *xfer) > { > drm_via_private_t *dev_priv = (drm_via_private_t *)dev->dev_private; > drm_via_sg_info_t *vsg; >@@ -760,15 +773,15 @@ via_dmablit(struct drm_device *dev, drm_ > spin_lock_irqsave(&blitq->blit_lock, irqsave); > > blitq->blits[blitq->head++] = vsg; >- if (blitq->head >= VIA_NUM_BLIT_SLOTS) >+ if (blitq->head >= VIA_NUM_BLIT_SLOTS) > blitq->head = 0; > blitq->num_outstanding++; >- xfer->sync.sync_handle = ++blitq->cur_blit_handle; >+ xfer->sync.sync_handle = ++blitq->cur_blit_handle; > > spin_unlock_irqrestore(&blitq->blit_lock, irqsave); > xfer->sync.engine = engine; > >- via_dmablit_handler(dev, engine, 0); >+ via_dmablit_handler(dev, engine, 0); > > return 0; > } >@@ -776,7 +789,7 @@ via_dmablit(struct drm_device *dev, drm_ > /* > * Sync on a previously submitted blit. Note that the X server use signals extensively, and > * that there is a very big probability that this IOCTL will be interrupted by a signal. In that >- * case it returns with -EAGAIN for the signal to be delivered. >+ * case it returns with -EAGAIN for the signal to be delivered. > * The caller should then reissue the IOCTL. This is similar to what is being done for drmGetLock(). 
> */ > >@@ -786,7 +799,7 @@ via_dma_blit_sync( struct drm_device *de > drm_via_blitsync_t *sync = data; > int err; > >- if (sync->engine >= VIA_NUM_BLIT_ENGINES) >+ if (sync->engine >= VIA_NUM_BLIT_ENGINES) > return -EINVAL; > > err = via_dmablit_sync(dev, sync->sync_handle, sync->engine); >@@ -796,15 +809,15 @@ via_dma_blit_sync( struct drm_device *de > > return err; > } >- >+ > > /* > * Queue a blit and hand back a handle to be used for sync. This IOCTL may be interrupted by a signal >- * while waiting for a free slot in the blit queue. In that case it returns with -EAGAIN and should >+ * while waiting for a free slot in the blit queue. In that case it returns with -EAGAIN and should > * be reissued. See the above IOCTL code. > */ > >-int >+int > via_dma_blit( struct drm_device *dev, void *data, struct drm_file *file_priv ) > { > drm_via_dmablit_t *xfer = data; >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/via_dmablit.h linux-2.6.23.i686/drivers/char/drm/via_dmablit.h >--- linux-2.6.23.i686.orig/drivers/char/drm/via_dmablit.h 2007-10-09 22:31:38.000000000 +0200 >+++ linux-2.6.23.i686/drivers/char/drm/via_dmablit.h 2008-01-06 09:24:57.000000000 +0100 >@@ -1,5 +1,5 @@ > /* via_dmablit.h -- PCI DMA BitBlt support for the VIA Unichrome/Pro >- * >+ * > * Copyright 2005 Thomas Hellstrom. > * All Rights Reserved. > * >@@ -17,12 +17,12 @@ > * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR > * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, > * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. 
IN NO EVENT SHALL >- * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, >- * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR >- * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE >+ * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, >+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR >+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE > * USE OR OTHER DEALINGS IN THE SOFTWARE. > * >- * Authors: >+ * Authors: > * Thomas Hellstrom. > * Register info from Digeo Inc. > */ >@@ -45,12 +45,12 @@ typedef struct _drm_via_sg_info { > int num_desc; > enum dma_data_direction direction; > unsigned char *bounce_buffer; >- dma_addr_t chain_start; >+ dma_addr_t chain_start; > uint32_t free_on_sequence; >- unsigned int descriptors_per_page; >+ unsigned int descriptors_per_page; > int aborted; > enum { >- dr_via_device_mapped, >+ dr_via_device_mapped, > dr_via_desc_pages_alloc, > dr_via_pages_locked, > dr_via_pages_alloc, >@@ -67,8 +67,8 @@ typedef struct _drm_via_blitq { > unsigned cur; > unsigned num_free; > unsigned num_outstanding; >- unsigned long end; >- int aborting; >+ unsigned long end; >+ int aborting; > int is_active; > drm_via_sg_info_t *blits[VIA_NUM_BLIT_SLOTS]; > spinlock_t blit_lock; >@@ -77,46 +77,46 @@ typedef struct _drm_via_blitq { > struct work_struct wq; > struct timer_list poll_timer; > } drm_via_blitq_t; >- > >-/* >+ >+/* > * PCI DMA Registers > * Channels 2 & 3 don't seem to be implemented in hardware. 
> */ >- >-#define VIA_PCI_DMA_MAR0 0xE40 /* Memory Address Register of Channel 0 */ >-#define VIA_PCI_DMA_DAR0 0xE44 /* Device Address Register of Channel 0 */ >-#define VIA_PCI_DMA_BCR0 0xE48 /* Byte Count Register of Channel 0 */ >-#define VIA_PCI_DMA_DPR0 0xE4C /* Descriptor Pointer Register of Channel 0 */ >- >-#define VIA_PCI_DMA_MAR1 0xE50 /* Memory Address Register of Channel 1 */ >-#define VIA_PCI_DMA_DAR1 0xE54 /* Device Address Register of Channel 1 */ >-#define VIA_PCI_DMA_BCR1 0xE58 /* Byte Count Register of Channel 1 */ >-#define VIA_PCI_DMA_DPR1 0xE5C /* Descriptor Pointer Register of Channel 1 */ >- >-#define VIA_PCI_DMA_MAR2 0xE60 /* Memory Address Register of Channel 2 */ >-#define VIA_PCI_DMA_DAR2 0xE64 /* Device Address Register of Channel 2 */ >-#define VIA_PCI_DMA_BCR2 0xE68 /* Byte Count Register of Channel 2 */ >-#define VIA_PCI_DMA_DPR2 0xE6C /* Descriptor Pointer Register of Channel 2 */ >- >-#define VIA_PCI_DMA_MAR3 0xE70 /* Memory Address Register of Channel 3 */ >-#define VIA_PCI_DMA_DAR3 0xE74 /* Device Address Register of Channel 3 */ >-#define VIA_PCI_DMA_BCR3 0xE78 /* Byte Count Register of Channel 3 */ >-#define VIA_PCI_DMA_DPR3 0xE7C /* Descriptor Pointer Register of Channel 3 */ >- >-#define VIA_PCI_DMA_MR0 0xE80 /* Mode Register of Channel 0 */ >-#define VIA_PCI_DMA_MR1 0xE84 /* Mode Register of Channel 1 */ >-#define VIA_PCI_DMA_MR2 0xE88 /* Mode Register of Channel 2 */ >-#define VIA_PCI_DMA_MR3 0xE8C /* Mode Register of Channel 3 */ >- >-#define VIA_PCI_DMA_CSR0 0xE90 /* Command/Status Register of Channel 0 */ >-#define VIA_PCI_DMA_CSR1 0xE94 /* Command/Status Register of Channel 1 */ >-#define VIA_PCI_DMA_CSR2 0xE98 /* Command/Status Register of Channel 2 */ >-#define VIA_PCI_DMA_CSR3 0xE9C /* Command/Status Register of Channel 3 */ > >-#define VIA_PCI_DMA_PTR 0xEA0 /* Priority Type Register */ >+#define VIA_PCI_DMA_MAR0 0xE40 /* Memory Address Register of Channel 0 */ >+#define VIA_PCI_DMA_DAR0 0xE44 /* Device Address 
Register of Channel 0 */ >+#define VIA_PCI_DMA_BCR0 0xE48 /* Byte Count Register of Channel 0 */ >+#define VIA_PCI_DMA_DPR0 0xE4C /* Descriptor Pointer Register of Channel 0 */ >+ >+#define VIA_PCI_DMA_MAR1 0xE50 /* Memory Address Register of Channel 1 */ >+#define VIA_PCI_DMA_DAR1 0xE54 /* Device Address Register of Channel 1 */ >+#define VIA_PCI_DMA_BCR1 0xE58 /* Byte Count Register of Channel 1 */ >+#define VIA_PCI_DMA_DPR1 0xE5C /* Descriptor Pointer Register of Channel 1 */ >+ >+#define VIA_PCI_DMA_MAR2 0xE60 /* Memory Address Register of Channel 2 */ >+#define VIA_PCI_DMA_DAR2 0xE64 /* Device Address Register of Channel 2 */ >+#define VIA_PCI_DMA_BCR2 0xE68 /* Byte Count Register of Channel 2 */ >+#define VIA_PCI_DMA_DPR2 0xE6C /* Descriptor Pointer Register of Channel 2 */ >+ >+#define VIA_PCI_DMA_MAR3 0xE70 /* Memory Address Register of Channel 3 */ >+#define VIA_PCI_DMA_DAR3 0xE74 /* Device Address Register of Channel 3 */ >+#define VIA_PCI_DMA_BCR3 0xE78 /* Byte Count Register of Channel 3 */ >+#define VIA_PCI_DMA_DPR3 0xE7C /* Descriptor Pointer Register of Channel 3 */ >+ >+#define VIA_PCI_DMA_MR0 0xE80 /* Mode Register of Channel 0 */ >+#define VIA_PCI_DMA_MR1 0xE84 /* Mode Register of Channel 1 */ >+#define VIA_PCI_DMA_MR2 0xE88 /* Mode Register of Channel 2 */ >+#define VIA_PCI_DMA_MR3 0xE8C /* Mode Register of Channel 3 */ >+ >+#define VIA_PCI_DMA_CSR0 0xE90 /* Command/Status Register of Channel 0 */ >+#define VIA_PCI_DMA_CSR1 0xE94 /* Command/Status Register of Channel 1 */ >+#define VIA_PCI_DMA_CSR2 0xE98 /* Command/Status Register of Channel 2 */ >+#define VIA_PCI_DMA_CSR3 0xE9C /* Command/Status Register of Channel 3 */ >+ >+#define VIA_PCI_DMA_PTR 0xEA0 /* Priority Type Register */ > >-/* Define for DMA engine */ >+/* Define for DMA engine */ > /* DPR */ > #define VIA_DMA_DPR_EC (1<<1) /* end of chain */ > #define VIA_DMA_DPR_DDIE (1<<2) /* descriptor done interrupt enable */ >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/via_dma.c 
linux-2.6.23.i686/drivers/char/drm/via_dma.c >--- linux-2.6.23.i686.orig/drivers/char/drm/via_dma.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/via_dma.c 2008-01-06 09:24:57.000000000 +0100 >@@ -40,20 +40,6 @@ > #include "via_drv.h" > #include "via_3d_reg.h" > >-#define CMDBUF_ALIGNMENT_SIZE (0x100) >-#define CMDBUF_ALIGNMENT_MASK (0x0ff) >- >-/* defines for VIA 3D registers */ >-#define VIA_REG_STATUS 0x400 >-#define VIA_REG_TRANSET 0x43C >-#define VIA_REG_TRANSPACE 0x440 >- >-/* VIA_REG_STATUS(0x400): Engine Status */ >-#define VIA_CMD_RGTR_BUSY 0x00000080 /* Command Regulator is busy */ >-#define VIA_2D_ENG_BUSY 0x00000001 /* 2D Engine is busy */ >-#define VIA_3D_ENG_BUSY 0x00000002 /* 3D Engine is busy */ >-#define VIA_VR_QUEUE_BUSY 0x00020000 /* Virtual Queue is busy */ >- > #define SetReg2DAGP(nReg, nData) { \ > *((uint32_t *)(vb)) = ((nReg) >> 2) | HALCYON_HEADER1; \ > *((uint32_t *)(vb) + 1) = (nData); \ >@@ -68,18 +54,19 @@ > *vb++ = (w2); \ > dev_priv->dma_low += 8; > >-static void via_cmdbuf_start(drm_via_private_t * dev_priv); >-static void via_cmdbuf_pause(drm_via_private_t * dev_priv); >-static void via_cmdbuf_reset(drm_via_private_t * dev_priv); >-static void via_cmdbuf_rewind(drm_via_private_t * dev_priv); >-static int via_wait_idle(drm_via_private_t * dev_priv); >-static void via_pad_cache(drm_via_private_t * dev_priv, int qwords); >+static void via_cmdbuf_start(drm_via_private_t *dev_priv); >+static void via_cmdbuf_pause(drm_via_private_t *dev_priv); >+static void via_cmdbuf_reset(drm_via_private_t *dev_priv); >+static void via_cmdbuf_rewind(drm_via_private_t *dev_priv); >+static int via_wait_idle(drm_via_private_t *dev_priv); >+static void via_pad_cache(drm_via_private_t *dev_priv, int qwords); >+ > > /* > * Free space in command buffer. 
> */ > >-static uint32_t via_cmdbuf_space(drm_via_private_t * dev_priv) >+static uint32_t via_cmdbuf_space(drm_via_private_t *dev_priv) > { > uint32_t agp_base = dev_priv->dma_offset + (uint32_t) dev_priv->agpAddr; > uint32_t hw_addr = *(dev_priv->hw_addr_ptr) - agp_base; >@@ -93,7 +80,7 @@ static uint32_t via_cmdbuf_space(drm_via > * How much does the command regulator lag behind? > */ > >-static uint32_t via_cmdbuf_lag(drm_via_private_t * dev_priv) >+static uint32_t via_cmdbuf_lag(drm_via_private_t *dev_priv) > { > uint32_t agp_base = dev_priv->dma_offset + (uint32_t) dev_priv->agpAddr; > uint32_t hw_addr = *(dev_priv->hw_addr_ptr) - agp_base; >@@ -130,6 +117,7 @@ via_cmdbuf_wait(drm_via_private_t * dev_ > return 0; > } > >+ > /* > * Checks whether buffer head has reach the end. Rewind the ring buffer > * when necessary. >@@ -155,7 +143,7 @@ int via_dma_cleanup(struct drm_device * > { > if (dev->dev_private) { > drm_via_private_t *dev_priv = >- (drm_via_private_t *) dev->dev_private; >+ (drm_via_private_t *) dev->dev_private; > > if (dev_priv->ring.virtual_start) { > via_cmdbuf_reset(dev_priv); >@@ -258,6 +246,8 @@ static int via_dma_init(struct drm_devic > return retcode; > } > >+ >+ > static int via_dispatch_cmdbuffer(struct drm_device * dev, drm_via_cmdbuffer_t * cmd) > { > drm_via_private_t *dev_priv; >@@ -286,7 +276,7 @@ static int via_dispatch_cmdbuffer(struct > */ > > if ((ret = >- via_verify_command_stream((uint32_t *) dev_priv->pci_buf, >+ via_verify_command_stream((uint32_t *)dev_priv->pci_buf, > cmd->size, dev, 1))) { > return ret; > } >@@ -454,10 +444,13 @@ static int via_hook_segment(drm_via_priv > VIA_READ(VIA_REG_TRANSPACE); > } > } >+ > return paused; > } > >-static int via_wait_idle(drm_via_private_t * dev_priv) >+ >+ >+static int via_wait_idle(drm_via_private_t *dev_priv) > { > int count = 10000000; > >@@ -469,9 +462,9 @@ static int via_wait_idle(drm_via_private > return count; > } > >-static uint32_t *via_align_cmd(drm_via_private_t * dev_priv, 
uint32_t cmd_type, >- uint32_t addr, uint32_t * cmd_addr_hi, >- uint32_t * cmd_addr_lo, int skip_wait) >+static uint32_t *via_align_cmd(drm_via_private_t *dev_priv, uint32_t cmd_type, >+ uint32_t addr, uint32_t *cmd_addr_hi, >+ uint32_t *cmd_addr_lo, int skip_wait) > { > uint32_t agp_base; > uint32_t cmd_addr, addr_lo, addr_hi; >@@ -484,12 +477,13 @@ static uint32_t *via_align_cmd(drm_via_p > vb = via_get_dma(dev_priv); > VIA_OUT_RING_QW(HC_HEADER2 | ((VIA_REG_TRANSET >> 2) << 12) | > (VIA_REG_TRANSPACE >> 2), HC_ParaType_PreCR << 16); >+ > agp_base = dev_priv->dma_offset + (uint32_t) dev_priv->agpAddr; > qw_pad_count = (CMDBUF_ALIGNMENT_SIZE >> 3) - >- ((dev_priv->dma_low & CMDBUF_ALIGNMENT_MASK) >> 3); >+ ((dev_priv->dma_low & CMDBUF_ALIGNMENT_MASK) >> 3); > > cmd_addr = (addr) ? addr : >- agp_base + dev_priv->dma_low - 8 + (qw_pad_count << 3); >+ agp_base + dev_priv->dma_low - 8 + (qw_pad_count << 3); > addr_lo = ((HC_SubA_HAGPBpL << 24) | (cmd_type & HC_HAGPBpID_MASK) | > (cmd_addr & HC_HAGPBpL_MASK)); > addr_hi = ((HC_SubA_HAGPBpH << 24) | (cmd_addr >> 24)); >@@ -522,8 +516,8 @@ static void via_cmdbuf_start(drm_via_pri > ((end_addr & 0xff000000) >> 16)); > > dev_priv->last_pause_ptr = >- via_align_cmd(dev_priv, HC_HAGPBpID_PAUSE, 0, >- &pause_addr_hi, &pause_addr_lo, 1) - 1; >+ via_align_cmd(dev_priv, HC_HAGPBpID_PAUSE, 0, >+ &pause_addr_hi, & pause_addr_lo, 1) - 1; > > via_flush_write_combine(); > (void) *(volatile uint32_t *)dev_priv->last_pause_ptr; >@@ -558,7 +552,7 @@ static void via_cmdbuf_start(drm_via_pri > dev_priv->dma_diff = ptr - reader; > } > >-static void via_pad_cache(drm_via_private_t * dev_priv, int qwords) >+static void via_pad_cache(drm_via_private_t *dev_priv, int qwords) > { > uint32_t *vb; > >@@ -589,6 +583,7 @@ static void via_cmdbuf_jump(drm_via_priv > > dev_priv->dma_wrap = dev_priv->dma_low; > >+ > /* > * Wrap command buffer to the beginning. 
> */ >@@ -600,19 +595,15 @@ static void via_cmdbuf_jump(drm_via_priv > > via_dummy_bitblt(dev_priv); > via_dummy_bitblt(dev_priv); >- >- last_pause_ptr = >- via_align_cmd(dev_priv, HC_HAGPBpID_PAUSE, 0, &pause_addr_hi, >- &pause_addr_lo, 0) - 1; >+ last_pause_ptr = via_align_cmd(dev_priv, HC_HAGPBpID_PAUSE, 0, &pause_addr_hi, >+ &pause_addr_lo, 0) -1; > via_align_cmd(dev_priv, HC_HAGPBpID_PAUSE, 0, &pause_addr_hi, > &pause_addr_lo, 0); >- > *last_pause_ptr = pause_addr_lo; > >- via_hook_segment( dev_priv, jump_addr_hi, jump_addr_lo, 0); >+ via_hook_segment(dev_priv, jump_addr_hi, jump_addr_lo, 0); > } > >- > static void via_cmdbuf_rewind(drm_via_private_t * dev_priv) > { > via_cmdbuf_jump(dev_priv); >@@ -626,6 +617,7 @@ static void via_cmdbuf_flush(drm_via_pri > via_hook_segment(dev_priv, pause_addr_hi, pause_addr_lo, 0); > } > >+ > static void via_cmdbuf_pause(drm_via_private_t * dev_priv) > { > via_cmdbuf_flush(dev_priv, HC_HAGPBpID_PAUSE); >@@ -694,6 +686,19 @@ static int via_cmdbuf_size(struct drm_de > return ret; > } > >+#ifndef VIA_HAVE_DMABLIT >+int >+via_dma_blit_sync( struct drm_device *dev, void *data, struct drm_file *file_priv ) { >+ DRM_ERROR("PCI DMA BitBlt is not implemented for this system.\n"); >+ return -EINVAL; >+} >+int >+via_dma_blit( struct drm_device *dev, void *data, struct drm_file *file_priv ) { >+ DRM_ERROR("PCI DMA BitBlt is not implemented for this system.\n"); >+ return -EINVAL; >+} >+#endif >+ > struct drm_ioctl_desc via_ioctls[] = { > DRM_IOCTL_DEF(DRM_VIA_ALLOCMEM, via_mem_alloc, DRM_AUTH), > DRM_IOCTL_DEF(DRM_VIA_FREEMEM, via_mem_free, DRM_AUTH), >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/via_drm.h linux-2.6.23.i686/drivers/char/drm/via_drm.h >--- linux-2.6.23.i686.orig/drivers/char/drm/via_drm.h 2007-10-09 22:31:38.000000000 +0200 >+++ linux-2.6.23.i686/drivers/char/drm/via_drm.h 2008-01-06 09:24:57.000000000 +0100 >@@ -31,51 +31,55 @@ > #ifndef _VIA_DEFINES_ > #define _VIA_DEFINES_ > >-#ifndef __KERNEL__ >+ >+#if 
!defined(__KERNEL__) && !defined(_KERNEL) > #include "via_drmclient.h" > #endif > >-#define VIA_NR_SAREA_CLIPRECTS 8 >-#define VIA_NR_XVMC_PORTS 10 >-#define VIA_NR_XVMC_LOCKS 5 >-#define VIA_MAX_CACHELINE_SIZE 64 >+/* >+ * With the arrival of libdrm there is a need to version this file. >+ * As usual, bump MINOR for new features, MAJOR for changes that create >+ * backwards incompatibilities, (which should be avoided whenever possible). >+ */ >+ >+#define VIA_DRM_DRIVER_DATE "20070202" >+ >+#define VIA_DRM_DRIVER_MAJOR 2 >+#define VIA_DRM_DRIVER_MINOR 11 >+#define VIA_DRM_DRIVER_PATCHLEVEL 1 >+#define VIA_DRM_DRIVER_VERSION (((VIA_DRM_DRIVER_MAJOR) << 16) | (VIA_DRM_DRIVER_MINOR)) >+ >+#define VIA_NR_SAREA_CLIPRECTS 8 >+#define VIA_NR_XVMC_PORTS 10 >+#define VIA_NR_XVMC_LOCKS 5 >+#define VIA_MAX_CACHELINE_SIZE 64 > #define XVMCLOCKPTR(saPriv,lockNo) \ > ((volatile struct drm_hw_lock *)(((((unsigned long) (saPriv)->XvMCLockArea) + \ > (VIA_MAX_CACHELINE_SIZE - 1)) & \ > ~(VIA_MAX_CACHELINE_SIZE - 1)) + \ > VIA_MAX_CACHELINE_SIZE*(lockNo))) >- >-/* Each region is a minimum of 64k, and there are at most 64 of them. 
>- */ > #define VIA_NR_TEX_REGIONS 64 >-#define VIA_LOG_MIN_TEX_REGION_SIZE 16 >+ > #endif > >-#define VIA_UPLOAD_TEX0IMAGE 0x1 /* handled clientside */ >-#define VIA_UPLOAD_TEX1IMAGE 0x2 /* handled clientside */ >-#define VIA_UPLOAD_CTX 0x4 >-#define VIA_UPLOAD_BUFFERS 0x8 >-#define VIA_UPLOAD_TEX0 0x10 >-#define VIA_UPLOAD_TEX1 0x20 >-#define VIA_UPLOAD_CLIPRECTS 0x40 >-#define VIA_UPLOAD_ALL 0xff >+#define DRM_VIA_FENCE_TYPE_ACCEL 0x00000002 > > /* VIA specific ioctls */ > #define DRM_VIA_ALLOCMEM 0x00 >-#define DRM_VIA_FREEMEM 0x01 >+#define DRM_VIA_FREEMEM 0x01 > #define DRM_VIA_AGP_INIT 0x02 >-#define DRM_VIA_FB_INIT 0x03 >+#define DRM_VIA_FB_INIT 0x03 > #define DRM_VIA_MAP_INIT 0x04 > #define DRM_VIA_DEC_FUTEX 0x05 > #define NOT_USED > #define DRM_VIA_DMA_INIT 0x07 > #define DRM_VIA_CMDBUFFER 0x08 >-#define DRM_VIA_FLUSH 0x09 >-#define DRM_VIA_PCICMD 0x0a >+#define DRM_VIA_FLUSH 0x09 >+#define DRM_VIA_PCICMD 0x0a > #define DRM_VIA_CMDBUF_SIZE 0x0b > #define NOT_USED >-#define DRM_VIA_WAIT_IRQ 0x0d >-#define DRM_VIA_DMA_BLIT 0x0e >+#define DRM_VIA_WAIT_IRQ 0x0d >+#define DRM_VIA_DMA_BLIT 0x0e > #define DRM_VIA_BLIT_SYNC 0x0f > > #define DRM_IOCTL_VIA_ALLOCMEM DRM_IOWR(DRM_COMMAND_BASE + DRM_VIA_ALLOCMEM, drm_via_mem_t) >@@ -107,6 +111,7 @@ > #define VIA_BACK 0x2 > #define VIA_DEPTH 0x4 > #define VIA_STENCIL 0x8 >+ > #define VIA_MEM_VIDEO 0 /* matches drm constant */ > #define VIA_MEM_AGP 1 /* matches drm constant */ > #define VIA_MEM_SYSTEM 2 >@@ -250,7 +255,8 @@ typedef struct drm_via_blitsync { > unsigned engine; > } drm_via_blitsync_t; > >-/* - * Below,"flags" is currently unused but will be used for possible future >+/* >+ * Below,"flags" is currently unused but will be used for possible future > * extensions like kernel space bounce buffers for bad alignments and > * blit engine busy-wait polling for better latency in the absence of > * interrupts. 
>@@ -259,7 +265,7 @@ typedef struct drm_via_blitsync { > typedef struct drm_via_dmablit { > uint32_t num_lines; > uint32_t line_length; >- >+ > uint32_t fb_addr; > uint32_t fb_stride; > >@@ -272,4 +278,5 @@ typedef struct drm_via_dmablit { > drm_via_blitsync_t sync; > } drm_via_dmablit_t; > >+ > #endif /* _VIA_DRM_H_ */ >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/via_drv.c linux-2.6.23.i686/drivers/char/drm/via_drv.c >--- linux-2.6.23.i686.orig/drivers/char/drm/via_drv.c 2007-10-09 22:31:38.000000000 +0200 >+++ linux-2.6.23.i686/drivers/char/drm/via_drv.c 2008-01-06 09:24:57.000000000 +0100 >@@ -28,21 +28,67 @@ > > #include "drm_pciids.h" > >-static int dri_library_name(struct drm_device *dev, char *buf) >+ >+static int dri_library_name(struct drm_device * dev, char * buf) > { >- return snprintf(buf, PAGE_SIZE, "unichrome"); >+ return snprintf(buf, PAGE_SIZE, "unichrome\n"); > } > > static struct pci_device_id pciidlist[] = { > viadrv_PCI_IDS > }; > >+ >+#ifdef VIA_HAVE_FENCE >+static struct drm_fence_driver via_fence_driver = { >+ .num_classes = 1, >+ .wrap_diff = (1 << 30), >+ .flush_diff = (1 << 20), >+ .sequence_mask = 0xffffffffU, >+ .lazy_capable = 1, >+ .emit = via_fence_emit_sequence, >+ .poke_flush = via_poke_flush, >+ .has_irq = via_fence_has_irq, >+}; >+#endif >+#ifdef VIA_HAVE_BUFFER >+ >+/** >+ * If there's no thrashing. This is the preferred memory type order. >+ */ >+static uint32_t via_mem_prios[] = {DRM_BO_MEM_PRIV0, DRM_BO_MEM_VRAM, DRM_BO_MEM_TT, DRM_BO_MEM_LOCAL}; >+ >+/** >+ * If we have thrashing, most memory will be evicted to TT anyway, so we might as well >+ * just move the new buffer into TT from the start. 
>+ */ >+static uint32_t via_busy_prios[] = {DRM_BO_MEM_TT, DRM_BO_MEM_PRIV0, DRM_BO_MEM_VRAM, DRM_BO_MEM_LOCAL}; >+ >+ >+static struct drm_bo_driver via_bo_driver = { >+ .mem_type_prio = via_mem_prios, >+ .mem_busy_prio = via_busy_prios, >+ .num_mem_type_prio = ARRAY_SIZE(via_mem_prios), >+ .num_mem_busy_prio = ARRAY_SIZE(via_busy_prios), >+ .create_ttm_backend_entry = via_create_ttm_backend_entry, >+ .fence_type = via_fence_types, >+ .invalidate_caches = via_invalidate_caches, >+ .init_mem_type = via_init_mem_type, >+ .evict_flags = via_evict_flags, >+ .move = NULL, >+}; >+#endif >+ >+static int probe(struct pci_dev *pdev, const struct pci_device_id *ent); > static struct drm_driver driver = { > .driver_features = > DRIVER_USE_AGP | DRIVER_USE_MTRR | DRIVER_HAVE_IRQ | > DRIVER_IRQ_SHARED | DRIVER_IRQ_VBL, > .load = via_driver_load, > .unload = via_driver_unload, >+#ifndef VIA_HAVE_CORE_MM >+ .context_ctor = via_init_context, >+#endif > .context_dtor = via_final_context, > .vblank_wait = via_driver_vblank_wait, > .irq_preinstall = via_driver_irq_preinstall, >@@ -53,38 +99,53 @@ static struct drm_driver driver = { > .dri_library_name = dri_library_name, > .reclaim_buffers = drm_core_reclaim_buffers, > .reclaim_buffers_locked = NULL, >+#ifdef VIA_HAVE_CORE_MM > .reclaim_buffers_idlelocked = via_reclaim_buffers_locked, > .lastclose = via_lastclose, >+#endif > .get_map_ofs = drm_core_get_map_ofs, > .get_reg_ofs = drm_core_get_reg_ofs, > .ioctls = via_ioctls, > .fops = { >- .owner = THIS_MODULE, >- .open = drm_open, >- .release = drm_release, >- .ioctl = drm_ioctl, >- .mmap = drm_mmap, >- .poll = drm_poll, >- .fasync = drm_fasync, >- }, >+ .owner = THIS_MODULE, >+ .open = drm_open, >+ .release = drm_release, >+ .ioctl = drm_ioctl, >+ .mmap = drm_mmap, >+ .poll = drm_poll, >+ .fasync = drm_fasync, >+ }, > .pci_driver = { >- .name = DRIVER_NAME, >- .id_table = pciidlist, >+ .name = DRIVER_NAME, >+ .id_table = pciidlist, >+ .probe = probe, >+ .remove = 
__devexit_p(drm_cleanup_pci), > }, >- >+#ifdef VIA_HAVE_FENCE >+ .fence_driver = &via_fence_driver, >+#endif >+#ifdef VIA_HAVE_BUFFER >+ .bo_driver = &via_bo_driver, >+#endif > .name = DRIVER_NAME, > .desc = DRIVER_DESC, >- .date = DRIVER_DATE, >- .major = DRIVER_MAJOR, >- .minor = DRIVER_MINOR, >- .patchlevel = DRIVER_PATCHLEVEL, >+ .date = VIA_DRM_DRIVER_DATE, >+ .major = VIA_DRM_DRIVER_MAJOR, >+ .minor = VIA_DRM_DRIVER_MINOR, >+ .patchlevel = VIA_DRM_DRIVER_PATCHLEVEL > }; > >+static int probe(struct pci_dev *pdev, const struct pci_device_id *ent) >+{ >+ return drm_get_dev(pdev, ent, &driver); >+} >+ > static int __init via_init(void) > { > driver.num_ioctls = via_max_ioctl; >+ > via_init_command_verifier(); >- return drm_init(&driver); >+ return drm_init(&driver, pciidlist); > } > > static void __exit via_exit(void) >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/via_drv.h linux-2.6.23.i686/drivers/char/drm/via_drv.h >--- linux-2.6.23.i686.orig/drivers/char/drm/via_drv.h 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/via_drv.h 2008-01-06 09:24:57.000000000 +0100 >@@ -29,16 +29,43 @@ > > #define DRIVER_NAME "via" > #define DRIVER_DESC "VIA Unichrome / Pro" >-#define DRIVER_DATE "20070202" >- >-#define DRIVER_MAJOR 2 >-#define DRIVER_MINOR 11 >-#define DRIVER_PATCHLEVEL 1 > > #include "via_verifier.h" > >+/* >+ * Registers go here. 
>+ */ >+ >+ >+#define CMDBUF_ALIGNMENT_SIZE (0x100) >+#define CMDBUF_ALIGNMENT_MASK (0x0ff) >+ >+/* defines for VIA 3D registers */ >+#define VIA_REG_STATUS 0x400 >+#define VIA_REG_TRANSET 0x43C >+#define VIA_REG_TRANSPACE 0x440 >+ >+/* VIA_REG_STATUS(0x400): Engine Status */ >+#define VIA_CMD_RGTR_BUSY 0x00000080 /* Command Regulator is busy */ >+#define VIA_2D_ENG_BUSY 0x00000001 /* 2D Engine is busy */ >+#define VIA_3D_ENG_BUSY 0x00000002 /* 3D Engine is busy */ >+#define VIA_VR_QUEUE_BUSY 0x00020000 /* Virtual Queue is busy */ >+ >+ >+ >+#if defined(__linux__) > #include "via_dmablit.h" > >+/* >+ * This define and all its references can be removed when >+ * the DMA blit code has been implemented for FreeBSD. >+ */ >+#define VIA_HAVE_DMABLIT 1 >+#define VIA_HAVE_CORE_MM 1 >+#define VIA_HAVE_FENCE 1 >+#define VIA_HAVE_BUFFER 1 >+#endif >+ > #define VIA_PCI_BUF_SIZE 60000 > #define VIA_FIRE_BUF_SIZE 1024 > #define VIA_NUM_IRQS 4 >@@ -86,14 +113,25 @@ typedef struct drm_via_private { > uint32_t irq_enable_mask; > uint32_t irq_pending_mask; > int *irq_map; >+ /* Memory manager stuff */ >+#ifdef VIA_HAVE_CORE_MM > unsigned int idle_fault; > struct drm_sman sman; > int vram_initialized; > int agp_initialized; > unsigned long vram_offset; > unsigned long agp_offset; >+#endif >+#ifdef VIA_HAVE_DMABLIT > drm_via_blitq_t blit_queues[VIA_NUM_BLIT_ENGINES]; >- uint32_t dma_diff; >+#endif >+ uint32_t dma_diff; >+#ifdef VIA_HAVE_FENCE >+ spinlock_t fence_lock; >+ uint32_t emit_0_sequence; >+ int have_idlelock; >+ struct timer_list fence_timer; >+#endif > } drm_via_private_t; > > enum via_family { >@@ -125,8 +163,6 @@ extern int via_dma_blit( struct drm_devi > > extern int via_driver_load(struct drm_device *dev, unsigned long chipset); > extern int via_driver_unload(struct drm_device *dev); >- >-extern int via_init_context(struct drm_device * dev, int context); > extern int via_final_context(struct drm_device * dev, int context); > > extern int via_do_cleanup_map(struct 
drm_device * dev); >@@ -140,14 +176,44 @@ extern void via_driver_irq_uninstall(str > extern int via_dma_cleanup(struct drm_device * dev); > extern void via_init_command_verifier(void); > extern int via_driver_dma_quiescent(struct drm_device * dev); >-extern void via_init_futex(drm_via_private_t * dev_priv); >-extern void via_cleanup_futex(drm_via_private_t * dev_priv); >-extern void via_release_futex(drm_via_private_t * dev_priv, int context); >- >-extern void via_reclaim_buffers_locked(struct drm_device *dev, struct drm_file *file_priv); >+extern void via_init_futex(drm_via_private_t *dev_priv); >+extern void via_cleanup_futex(drm_via_private_t *dev_priv); >+extern void via_release_futex(drm_via_private_t *dev_priv, int context); >+ >+#ifdef VIA_HAVE_CORE_MM >+extern void via_reclaim_buffers_locked(struct drm_device *dev, >+ struct drm_file *file_priv); > extern void via_lastclose(struct drm_device *dev); >+#else >+extern int via_init_context(struct drm_device * dev, int context); >+#endif > >+#ifdef VIA_HAVE_DMABLIT > extern void via_dmablit_handler(struct drm_device *dev, int engine, int from_irq); > extern void via_init_dmablit(struct drm_device *dev); >+#endif >+ >+#ifdef VIA_HAVE_FENCE >+extern void via_fence_timer(unsigned long data); >+extern void via_poke_flush(struct drm_device * dev, uint32_t class); >+extern int via_fence_emit_sequence(struct drm_device * dev, uint32_t class, >+ uint32_t flags, >+ uint32_t * sequence, >+ uint32_t * native_type); >+extern int via_fence_has_irq(struct drm_device * dev, uint32_t class, >+ uint32_t flags); >+#endif >+ >+#ifdef VIA_HAVE_BUFFER >+extern struct drm_ttm_backend *via_create_ttm_backend_entry(struct drm_device *dev); >+extern int via_fence_types(struct drm_buffer_object *bo, uint32_t *fclass, >+ uint32_t *type); >+extern int via_invalidate_caches(struct drm_device *dev, uint64_t buffer_flags); >+extern int via_init_mem_type(struct drm_device *dev, uint32_t type, >+ struct drm_mem_type_manager *man); >+extern 
uint64_t via_evict_flags(struct drm_buffer_object *bo); >+extern int via_move(struct drm_buffer_object *bo, int evict, >+ int no_wait, struct drm_bo_mem_reg *new_mem); >+#endif > > #endif >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/via_fence.c linux-2.6.23.i686/drivers/char/drm/via_fence.c >--- linux-2.6.23.i686.orig/drivers/char/drm/via_fence.c 1970-01-01 01:00:00.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/via_fence.c 2008-01-06 09:24:57.000000000 +0100 >@@ -0,0 +1,231 @@ >+/************************************************************************** >+ * >+ * Copyright (c) 2007 Tungsten Graphics, Inc., Cedar Park, TX., USA, >+ * All Rights Reserved. >+ * >+ * Permission is hereby granted, free of charge, to any person obtaining a >+ * copy of this software and associated documentation files (the >+ * "Software"), to deal in the Software without restriction, including >+ * without limitation the rights to use, copy, modify, merge, publish, >+ * distribute, sub license, and/or sell copies of the Software, and to >+ * permit persons to whom the Software is furnished to do so, subject to >+ * the following conditions: >+ * >+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR >+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, >+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL >+ * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, >+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR >+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE >+ * USE OR OTHER DEALINGS IN THE SOFTWARE. >+ * >+ * The above copyright notice and this permission notice (including the >+ * next paragraph) shall be included in all copies or substantial portions >+ * of the Software. 
>+ * >+ * >+ **************************************************************************/ >+/* >+ * Authors: Thomas Hellström <thomas-at-tungstengraphics-dot-com> >+ */ >+ >+#include "drmP.h" >+#include "via_drm.h" >+#include "via_drv.h" >+ >+/* >+ * DRM_FENCE_TYPE_EXE guarantees that all command buffers can be evicted. >+ * DRM_VIA_FENCE_TYPE_ACCEL guarantees that all 2D & 3D rendering is complete. >+ */ >+ >+ >+static uint32_t via_perform_flush(struct drm_device *dev, uint32_t class) >+{ >+ drm_via_private_t *dev_priv = (drm_via_private_t *) dev->dev_private; >+ struct drm_fence_class_manager *fc = &dev->fm.fence_class[class]; >+ uint32_t pending_flush_types = 0; >+ uint32_t signaled_flush_types = 0; >+ uint32_t status; >+ >+ if (class != 0) >+ return 0; >+ >+ if (!dev_priv) >+ return 0; >+ >+ spin_lock(&dev_priv->fence_lock); >+ >+ pending_flush_types = fc->pending_flush | >+ ((fc->pending_exe_flush) ? DRM_FENCE_TYPE_EXE : 0); >+ >+ if (pending_flush_types) { >+ >+ /* >+ * Take the idlelock. This guarantees that the next time a client tries >+ * to grab the lock, it will stall until the idlelock is released. This >+ * guarantees that eventually, the GPU engines will be idle, but nothing >+ * else. It cannot be used to protect the hardware. >+ */ >+ >+ >+ if (!dev_priv->have_idlelock) { >+ drm_idlelock_take(&dev->lock); >+ dev_priv->have_idlelock = 1; >+ } >+ >+ /* >+ * Check if AGP command reader is idle. >+ */ >+ >+ if (pending_flush_types & DRM_FENCE_TYPE_EXE) >+ if (VIA_READ(0x41C) & 0x80000000) >+ signaled_flush_types |= DRM_FENCE_TYPE_EXE; >+ >+ /* >+ * Check VRAM command queue empty and 2D + 3D engines idle. 
>+ */ >+ >+ if (pending_flush_types & DRM_VIA_FENCE_TYPE_ACCEL) { >+ status = VIA_READ(VIA_REG_STATUS); >+ if ((status & VIA_VR_QUEUE_BUSY) && >+ !(status & (VIA_CMD_RGTR_BUSY | VIA_2D_ENG_BUSY | VIA_3D_ENG_BUSY))) >+ signaled_flush_types |= DRM_VIA_FENCE_TYPE_ACCEL; >+ } >+ >+ if (signaled_flush_types) { >+ pending_flush_types &= ~signaled_flush_types; >+ if (!pending_flush_types && dev_priv->have_idlelock) { >+ drm_idlelock_release(&dev->lock); >+ dev_priv->have_idlelock = 0; >+ } >+ drm_fence_handler(dev, 0, dev_priv->emit_0_sequence, >+ signaled_flush_types, 0); >+ } >+ } >+ >+ spin_unlock(&dev_priv->fence_lock); >+ >+ return fc->pending_flush | >+ ((fc->pending_exe_flush) ? DRM_FENCE_TYPE_EXE : 0); >+} >+ >+ >+/** >+ * Emit a fence sequence. >+ */ >+ >+int via_fence_emit_sequence(struct drm_device * dev, uint32_t class, uint32_t flags, >+ uint32_t * sequence, uint32_t * native_type) >+{ >+ drm_via_private_t *dev_priv = (drm_via_private_t *) dev->dev_private; >+ int ret = 0; >+ >+ if (!dev_priv) >+ return -EINVAL; >+ >+ switch(class) { >+ case 0: /* AGP command stream */ >+ >+ /* >+ * The sequence number isn't really used by the hardware yet. >+ */ >+ >+ spin_lock(&dev_priv->fence_lock); >+ *sequence = ++dev_priv->emit_0_sequence; >+ spin_unlock(&dev_priv->fence_lock); >+ >+ /* >+ * When drm_fence_handler() is called with flush type 0x01, and a >+ * sequence number, that means that the EXE flag has expired. >+ * Nothing else. No implicit flushing or other engines idle. >+ */ >+ >+ *native_type = DRM_FENCE_TYPE_EXE; >+ break; >+ default: >+ ret = -EINVAL; >+ break; >+ } >+ return ret; >+} >+ >+/** >+ * Manual poll (from the fence manager).
>+ */ >+ >+void via_poke_flush(struct drm_device * dev, uint32_t class) >+{ >+ drm_via_private_t *dev_priv = (drm_via_private_t *) dev->dev_private; >+ struct drm_fence_manager *fm = &dev->fm; >+ unsigned long flags; >+ uint32_t pending_flush; >+ >+ if (!dev_priv) >+ return ; >+ >+ write_lock_irqsave(&fm->lock, flags); >+ pending_flush = via_perform_flush(dev, class); >+ if (pending_flush) >+ pending_flush = via_perform_flush(dev, class); >+ write_unlock_irqrestore(&fm->lock, flags); >+ >+ /* >+ * Kick the timer if there are more flushes pending. >+ */ >+ >+ if (pending_flush && !timer_pending(&dev_priv->fence_timer)) { >+ dev_priv->fence_timer.expires = jiffies + 1; >+ add_timer(&dev_priv->fence_timer); >+ } >+} >+ >+/** >+ * No irq fence expirations implemented yet. >+ * Although both the HQV engines and PCI dmablit engines signal >+ * idle with an IRQ, we haven't implemented this yet. >+ * This means that the drm fence manager will always poll for engine idle, >+ * unless the caller wanting to wait for a fence object has indicated a lazy wait. >+ */ >+ >+int via_fence_has_irq(struct drm_device * dev, uint32_t class, >+ uint32_t flags) >+{ >+ return 0; >+} >+ >+/** >+ * Regularly call the flush function. This enables lazy waits, so we can >+ * set lazy_capable. Lazy waits don't really care when the fence expires, >+ * so a timer tick delay should be fine. >+ */ >+ >+void via_fence_timer(unsigned long data) >+{ >+ struct drm_device *dev = (struct drm_device *) data; >+ drm_via_private_t *dev_priv = (drm_via_private_t *) dev->dev_private; >+ struct drm_fence_manager *fm = &dev->fm; >+ uint32_t pending_flush; >+ struct drm_fence_class_manager *fc = &dev->fm.fence_class[0]; >+ >+ if (!dev_priv) >+ return; >+ if (!fm->initialized) >+ goto out_unlock; >+ >+ via_poke_flush(dev, 0); >+ pending_flush = fc->pending_flush | >+ ((fc->pending_exe_flush) ? DRM_FENCE_TYPE_EXE : 0); >+ >+ /* >+ * disable timer if there are no more flushes pending. 
>+ */ >+ >+ if (!pending_flush && timer_pending(&dev_priv->fence_timer)) { >+ BUG_ON(dev_priv->have_idlelock); >+ del_timer(&dev_priv->fence_timer); >+ } >+ return; >+out_unlock: >+ return; >+ >+} >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/via_irq.c linux-2.6.23.i686/drivers/char/drm/via_irq.c >--- linux-2.6.23.i686.orig/drivers/char/drm/via_irq.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/via_irq.c 2008-01-06 09:24:57.000000000 +0100 >@@ -43,7 +43,7 @@ > #define VIA_REG_INTERRUPT 0x200 > > /* VIA_REG_INTERRUPT */ >-#define VIA_IRQ_GLOBAL (1 << 31) >+#define VIA_IRQ_GLOBAL (1 << 31) > #define VIA_IRQ_VBLANK_ENABLE (1 << 19) > #define VIA_IRQ_VBLANK_PENDING (1 << 3) > #define VIA_IRQ_HQV0_ENABLE (1 << 11) >@@ -68,16 +68,15 @@ > > static maskarray_t via_pro_group_a_irqs[] = { > {VIA_IRQ_HQV0_ENABLE, VIA_IRQ_HQV0_PENDING, 0x000003D0, 0x00008010, >- 0x00000000}, >+ 0x00000000 }, > {VIA_IRQ_HQV1_ENABLE, VIA_IRQ_HQV1_PENDING, 0x000013D0, 0x00008010, >- 0x00000000}, >+ 0x00000000 }, > {VIA_IRQ_DMA0_TD_ENABLE, VIA_IRQ_DMA0_TD_PENDING, VIA_PCI_DMA_CSR0, > VIA_DMA_CSR_TA | VIA_DMA_CSR_TD, 0x00000008}, > {VIA_IRQ_DMA1_TD_ENABLE, VIA_IRQ_DMA1_TD_PENDING, VIA_PCI_DMA_CSR1, > VIA_DMA_CSR_TA | VIA_DMA_CSR_TD, 0x00000008}, > }; >-static int via_num_pro_group_a = >- sizeof(via_pro_group_a_irqs) / sizeof(maskarray_t); >+static int via_num_pro_group_a = ARRAY_SIZE(via_pro_group_a_irqs); > static int via_irqmap_pro_group_a[] = {0, 1, -1, 2, -1, 3}; > > static maskarray_t via_unichrome_irqs[] = { >@@ -86,14 +85,15 @@ static maskarray_t via_unichrome_irqs[] > {VIA_IRQ_DMA1_TD_ENABLE, VIA_IRQ_DMA1_TD_PENDING, VIA_PCI_DMA_CSR1, > VIA_DMA_CSR_TA | VIA_DMA_CSR_TD, 0x00000008} > }; >-static int via_num_unichrome = sizeof(via_unichrome_irqs) / sizeof(maskarray_t); >+static int via_num_unichrome = ARRAY_SIZE(via_unichrome_irqs); > static int via_irqmap_unichrome[] = {-1, -1, -1, 0, -1, 1}; > >-static unsigned time_diff(struct timeval *now, struct 
timeval *then) >+ >+static unsigned time_diff(struct timeval *now,struct timeval *then) > { > return (now->tv_usec >= then->tv_usec) ? >- now->tv_usec - then->tv_usec : >- 1000000 - (then->tv_usec - now->tv_usec); >+ now->tv_usec - then->tv_usec : >+ 1000000 - (then->tv_usec - now->tv_usec); > } > > irqreturn_t via_driver_irq_handler(DRM_IRQ_ARGS) >@@ -110,11 +110,15 @@ irqreturn_t via_driver_irq_handler(DRM_I > if (status & VIA_IRQ_VBLANK_PENDING) { > atomic_inc(&dev->vbl_received); > if (!(atomic_read(&dev->vbl_received) & 0x0F)) { >+#ifdef __linux__ > do_gettimeofday(&cur_vblank); >+#else >+ microtime(&cur_vblank); >+#endif > if (dev_priv->last_vblank_valid) { > dev_priv->usec_per_vblank = >- time_diff(&cur_vblank, >- &dev_priv->last_vblank) >> 4; >+ time_diff(&cur_vblank, >+ &dev_priv->last_vblank) >> 4; > } > dev_priv->last_vblank = cur_vblank; > dev_priv->last_vblank_valid = 1; >@@ -133,11 +137,13 @@ irqreturn_t via_driver_irq_handler(DRM_I > atomic_inc(&cur_irq->irq_received); > DRM_WAKEUP(&cur_irq->irq_queue); > handled = 1; >+#ifdef VIA_HAVE_DMABLIT > if (dev_priv->irq_map[drm_via_irq_dma0_td] == i) { > via_dmablit_handler(dev, 0, 1); > } else if (dev_priv->irq_map[drm_via_irq_dma1_td] == i) { > via_dmablit_handler(dev, 1, 1); > } >+#endif > } > cur_irq++; > } >@@ -145,6 +151,7 @@ irqreturn_t via_driver_irq_handler(DRM_I > /* Acknowlege interrupts */ > VIA_WRITE(VIA_REG_INTERRUPT, status); > >+ > if (handled) > return IRQ_HANDLED; > else >@@ -240,6 +247,7 @@ via_driver_irq_wait(struct drm_device * > return ret; > } > >+ > /* > * drm_dma.h hooks > */ >@@ -353,7 +361,8 @@ int via_wait_irq(struct drm_device *dev, > > switch (irqwait->request.type & ~VIA_IRQ_FLAGS_MASK) { > case VIA_IRQ_RELATIVE: >- irqwait->request.sequence += atomic_read(&cur_irq->irq_received); >+ irqwait->request.sequence += >+ atomic_read(&cur_irq->irq_received); > irqwait->request.type &= ~_DRM_VBLANK_RELATIVE; > case VIA_IRQ_ABSOLUTE: > break; >@@ -371,7 +380,11 @@ int 
via_wait_irq(struct drm_device *dev, > > ret = via_driver_irq_wait(dev, irqwait->request.irq, force_sequence, > &irqwait->request.sequence); >+#ifdef __linux__ > do_gettimeofday(&now); >+#else >+ microtime(&now); >+#endif > irqwait->reply.tval_sec = now.tv_sec; > irqwait->reply.tval_usec = now.tv_usec; > >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/via_map.c linux-2.6.23.i686/drivers/char/drm/via_map.c >--- linux-2.6.23.i686.orig/drivers/char/drm/via_map.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/via_map.c 2008-01-06 09:24:57.000000000 +0100 >@@ -28,6 +28,7 @@ > static int via_do_init_map(struct drm_device * dev, drm_via_init_t * init) > { > drm_via_private_t *dev_priv = dev->dev_private; >+ int ret = 0; > > DRM_DEBUG("%s\n", __FUNCTION__); > >@@ -60,12 +61,26 @@ static int via_do_init_map(struct drm_de > > dev_priv->agpAddr = init->agpAddr; > >- via_init_futex(dev_priv); >- >- via_init_dmablit(dev); >- >+ via_init_futex( dev_priv ); >+#ifdef VIA_HAVE_DMABLIT >+ via_init_dmablit( dev ); >+#endif >+#ifdef VIA_HAVE_FENCE >+ dev_priv->emit_0_sequence = 0; >+ dev_priv->have_idlelock = 0; >+ spin_lock_init(&dev_priv->fence_lock); >+ init_timer(&dev_priv->fence_timer); >+ dev_priv->fence_timer.function = &via_fence_timer; >+ dev_priv->fence_timer.data = (unsigned long) dev; >+#endif /* VIA_HAVE_FENCE */ > dev->dev_private = (void *)dev_priv; >- return 0; >+#ifdef VIA_HAVE_BUFFER >+ ret = drm_bo_driver_init(dev); >+ if (ret) >+ DRM_ERROR("Could not initialize buffer object driver.\n"); >+#endif >+ return ret; >+ > } > > int via_do_cleanup_map(struct drm_device * dev) >@@ -75,6 +90,7 @@ int via_do_cleanup_map(struct drm_device > return 0; > } > >+ > int via_map_init(struct drm_device *dev, void *data, struct drm_file *file_priv) > { > drm_via_init_t *init = data; >@@ -104,10 +120,12 @@ int via_driver_load(struct drm_device *d > > dev_priv->chipset = chipset; > >+#ifdef VIA_HAVE_CORE_MM > ret = drm_sman_init(&dev_priv->sman, 2, 12, 
8); > if (ret) { > drm_free(dev_priv, sizeof(*dev_priv), DRM_MEM_DRIVER); > } >+#endif > return ret; > } > >@@ -115,10 +133,10 @@ int via_driver_unload(struct drm_device > { > drm_via_private_t *dev_priv = dev->dev_private; > >+#ifdef VIA_HAVE_CORE_MM > drm_sman_takedown(&dev_priv->sman); >- >+#endif > drm_free(dev_priv, sizeof(drm_via_private_t), DRM_MEM_DRIVER); > > return 0; > } >- >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/via_mm.c linux-2.6.23.i686/drivers/char/drm/via_mm.c >--- linux-2.6.23.i686.orig/drivers/char/drm/via_mm.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/via_mm.c 2008-01-06 09:24:57.000000000 +0100 >@@ -89,6 +89,7 @@ int via_final_context(struct drm_device > > via_release_futex(dev_priv, context); > >+#if defined(__linux__) > /* Linux specific until context tracking code gets ported to BSD */ > /* Last context, perform cleanup */ > if (dev->ctx_count == 1 && dev->dev_private) { >@@ -98,6 +99,7 @@ int via_final_context(struct drm_device > via_cleanup_futex(dev_priv); > via_do_cleanup_map(dev); > } >+#endif > return 1; > } > >@@ -113,7 +115,7 @@ void via_lastclose(struct drm_device *de > dev_priv->vram_initialized = 0; > dev_priv->agp_initialized = 0; > mutex_unlock(&dev->struct_mutex); >-} >+} > > int via_mem_alloc(struct drm_device *dev, void *data, > struct drm_file *file_priv) >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/via_verifier.c linux-2.6.23.i686/drivers/char/drm/via_verifier.c >--- linux-2.6.23.i686.orig/drivers/char/drm/via_verifier.c 2008-01-06 18:54:41.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/via_verifier.c 2008-01-06 09:24:57.000000000 +0100 >@@ -250,22 +250,27 @@ eat_words(const uint32_t ** buf, const u > */ > > static __inline__ drm_local_map_t *via_drm_lookup_agp_map(drm_via_state_t *seq, >- unsigned long offset, >- unsigned long size, >- struct drm_device * dev) >+ unsigned long offset, >+ unsigned long size, >+ struct drm_device *dev) > { >+#ifdef __linux__ > 
struct drm_map_list *r_list; >+#endif > drm_local_map_t *map = seq->map_cache; > > if (map && map->offset <= offset > && (offset + size) <= (map->offset + map->size)) { > return map; > } >- >+#ifdef __linux__ > list_for_each_entry(r_list, &dev->maplist, head) { > map = r_list->map; > if (!map) > continue; >+#else >+ TAILQ_FOREACH(map, &dev->maplist, link) { >+#endif > if (map->offset <= offset > && (offset + size) <= (map->offset + map->size) > && !(map->flags & _DRM_RESTRICTED) >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/via_verifier.h linux-2.6.23.i686/drivers/char/drm/via_verifier.h >--- linux-2.6.23.i686.orig/drivers/char/drm/via_verifier.h 2007-10-09 22:31:38.000000000 +0200 >+++ linux-2.6.23.i686/drivers/char/drm/via_verifier.h 2008-01-06 09:24:57.000000000 +0100 >@@ -54,9 +54,9 @@ typedef struct { > const uint32_t *buf_start; > } drm_via_state_t; > >-extern int via_verify_command_stream(const uint32_t * buf, unsigned int size, >- struct drm_device * dev, int agp); >+extern int via_verify_command_stream(const uint32_t *buf, unsigned int size, >+ struct drm_device *dev, int agp); > extern int via_parse_command_stream(struct drm_device *dev, const uint32_t *buf, >- unsigned int size); >+ unsigned int size); > > #endif >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/xgi_cmdlist.c linux-2.6.23.i686/drivers/char/drm/xgi_cmdlist.c >--- linux-2.6.23.i686.orig/drivers/char/drm/xgi_cmdlist.c 1970-01-01 01:00:00.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/xgi_cmdlist.c 2008-01-06 09:24:57.000000000 +0100 >@@ -0,0 +1,326 @@ >+/**************************************************************************** >+ * Copyright (C) 2003-2006 by XGI Technology, Taiwan. >+ * >+ * All Rights Reserved. 
>+ * >+ * Permission is hereby granted, free of charge, to any person obtaining >+ * a copy of this software and associated documentation files (the >+ * "Software"), to deal in the Software without restriction, including >+ * without limitation on the rights to use, copy, modify, merge, >+ * publish, distribute, sublicense, and/or sell copies of the Software, >+ * and to permit persons to whom the Software is furnished to do so, >+ * subject to the following conditions: >+ * >+ * The above copyright notice and this permission notice (including the >+ * next paragraph) shall be included in all copies or substantial >+ * portions of the Software. >+ * >+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS >+ * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, >+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL >+ * XGI AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER >+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING >+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER >+ * DEALINGS IN THE SOFTWARE. 
>+ ***************************************************************************/ >+ >+#include "xgi_drv.h" >+#include "xgi_regs.h" >+#include "xgi_misc.h" >+#include "xgi_cmdlist.h" >+ >+static void xgi_emit_flush(struct xgi_info * info, bool stop); >+static void xgi_emit_nop(struct xgi_info * info); >+static unsigned int get_batch_command(enum xgi_batch_type type); >+static void triggerHWCommandList(struct xgi_info * info); >+static void xgi_cmdlist_reset(struct xgi_info * info); >+ >+ >+/** >+ * Graphic engine register (2d/3d) accessing interface >+ */ >+static inline void dwWriteReg(struct drm_map * map, u32 addr, u32 data) >+{ >+#ifdef XGI_MMIO_DEBUG >+ DRM_INFO("mmio_map->handle = 0x%p, addr = 0x%x, data = 0x%x\n", >+ map->handle, addr, data); >+#endif >+ DRM_WRITE32(map, addr, data); >+} >+ >+ >+int xgi_cmdlist_initialize(struct xgi_info * info, size_t size, >+ struct drm_file * filp) >+{ >+ struct xgi_mem_alloc mem_alloc = { >+ .location = XGI_MEMLOC_NON_LOCAL, >+ .size = size, >+ }; >+ int err; >+ >+ err = xgi_alloc(info, &mem_alloc, filp); >+ if (err) { >+ return err; >+ } >+ >+ info->cmdring.ptr = xgi_find_pcie_virt(info, mem_alloc.hw_addr); >+ info->cmdring.size = mem_alloc.size; >+ info->cmdring.ring_hw_base = mem_alloc.hw_addr; >+ info->cmdring.last_ptr = NULL; >+ info->cmdring.ring_offset = 0; >+ >+ return 0; >+} >+ >+ >+/** >+ * get_batch_command - Get the command ID for the current begin type. >+ * @type: Type of the current batch >+ * >+ * See section 3.2.2 "Begin" (page 15) of the 3D SPG. >+ * >+ * This function assumes that @type is in the range [0,3].
>+ */ >+unsigned int get_batch_command(enum xgi_batch_type type) >+{ >+ static const unsigned int ports[4] = { >+ 0x30 >> 2, 0x40 >> 2, 0x50 >> 2, 0x20 >> 2 >+ }; >+ >+ return ports[type]; >+} >+ >+ >+int xgi_submit_cmdlist(struct drm_device * dev, void * data, >+ struct drm_file * filp) >+{ >+ struct xgi_info *const info = dev->dev_private; >+ const struct xgi_cmd_info *const pCmdInfo = >+ (struct xgi_cmd_info *) data; >+ const unsigned int cmd = get_batch_command(pCmdInfo->type); >+ u32 begin[4]; >+ >+ >+ begin[0] = (cmd << 24) | BEGIN_VALID_MASK >+ | (BEGIN_BEGIN_IDENTIFICATION_MASK & info->next_sequence); >+ begin[1] = BEGIN_LINK_ENABLE_MASK | pCmdInfo->size; >+ begin[2] = pCmdInfo->hw_addr >> 4; >+ begin[3] = 0; >+ >+ if (info->cmdring.last_ptr == NULL) { >+ const unsigned int portOffset = BASE_3D_ENG + (cmd << 2); >+ >+ >+ /* Enable PCI Trigger Mode >+ */ >+ dwWriteReg(info->mmio_map, >+ BASE_3D_ENG + M2REG_AUTO_LINK_SETTING_ADDRESS, >+ (M2REG_AUTO_LINK_SETTING_ADDRESS << 22) | >+ M2REG_CLEAR_COUNTERS_MASK | 0x08 | >+ M2REG_PCI_TRIGGER_MODE_MASK); >+ >+ dwWriteReg(info->mmio_map, >+ BASE_3D_ENG + M2REG_AUTO_LINK_SETTING_ADDRESS, >+ (M2REG_AUTO_LINK_SETTING_ADDRESS << 22) | 0x08 | >+ M2REG_PCI_TRIGGER_MODE_MASK); >+ >+ >+ /* Send PCI begin command >+ */ >+ dwWriteReg(info->mmio_map, portOffset, begin[0]); >+ dwWriteReg(info->mmio_map, portOffset + 4, begin[1]); >+ dwWriteReg(info->mmio_map, portOffset + 8, begin[2]); >+ dwWriteReg(info->mmio_map, portOffset + 12, begin[3]); >+ } else { >+ DRM_DEBUG("info->cmdring.last_ptr != NULL\n"); >+ >+ if (pCmdInfo->type == BTYPE_3D) { >+ xgi_emit_flush(info, FALSE); >+ } >+ >+ info->cmdring.last_ptr[1] = cpu_to_le32(begin[1]); >+ info->cmdring.last_ptr[2] = cpu_to_le32(begin[2]); >+ info->cmdring.last_ptr[3] = cpu_to_le32(begin[3]); >+ DRM_WRITEMEMORYBARRIER(); >+ info->cmdring.last_ptr[0] = cpu_to_le32(begin[0]); >+ >+ triggerHWCommandList(info); >+ } >+ >+ info->cmdring.last_ptr = xgi_find_pcie_virt(info, 
pCmdInfo->hw_addr); >+ drm_fence_flush_old(info->dev, 0, info->next_sequence); >+ return 0; >+} >+ >+ >+/* >+ state: 0 - console >+ 1 - graphic >+ 2 - fb >+ 3 - logout >+*/ >+int xgi_state_change(struct xgi_info * info, unsigned int to, >+ unsigned int from) >+{ >+#define STATE_CONSOLE 0 >+#define STATE_GRAPHIC 1 >+#define STATE_FBTERM 2 >+#define STATE_LOGOUT 3 >+#define STATE_REBOOT 4 >+#define STATE_SHUTDOWN 5 >+ >+ if ((from == STATE_GRAPHIC) && (to == STATE_CONSOLE)) { >+ DRM_INFO("Leaving graphical mode (probably VT switch)\n"); >+ } else if ((from == STATE_CONSOLE) && (to == STATE_GRAPHIC)) { >+ DRM_INFO("Entering graphical mode (probably VT switch)\n"); >+ xgi_cmdlist_reset(info); >+ } else if ((from == STATE_GRAPHIC) >+ && ((to == STATE_LOGOUT) >+ || (to == STATE_REBOOT) >+ || (to == STATE_SHUTDOWN))) { >+ DRM_INFO("Leaving graphical mode (probably X shutting down)\n"); >+ } else { >+ DRM_ERROR("Invalid state change.\n"); >+ return -EINVAL; >+ } >+ >+ return 0; >+} >+ >+ >+int xgi_state_change_ioctl(struct drm_device * dev, void * data, >+ struct drm_file * filp) >+{ >+ struct xgi_state_info *const state = >+ (struct xgi_state_info *) data; >+ struct xgi_info *info = dev->dev_private; >+ >+ >+ return xgi_state_change(info, state->_toState, state->_fromState); >+} >+ >+ >+void xgi_cmdlist_reset(struct xgi_info * info) >+{ >+ info->cmdring.last_ptr = NULL; >+ info->cmdring.ring_offset = 0; >+} >+ >+ >+void xgi_cmdlist_cleanup(struct xgi_info * info) >+{ >+ if (info->cmdring.ring_hw_base != 0) { >+ /* If command lists have been issued, terminate the command >+ * list chain with a flush command. 
>+ */ >+ if (info->cmdring.last_ptr != NULL) { >+ xgi_emit_flush(info, FALSE); >+ xgi_emit_nop(info); >+ } >+ >+ xgi_waitfor_pci_idle(info); >+ >+ (void) memset(&info->cmdring, 0, sizeof(info->cmdring)); >+ } >+} >+ >+static void triggerHWCommandList(struct xgi_info * info) >+{ >+ static unsigned int s_triggerID = 1; >+ >+ dwWriteReg(info->mmio_map, >+ BASE_3D_ENG + M2REG_PCI_TRIGGER_REGISTER_ADDRESS, >+ 0x05000000 + (0x0ffff & s_triggerID++)); >+} >+ >+ >+/** >+ * Emit a flush to the CRTL command stream. >+ * @info XGI info structure >+ * >+ * This function assumes info->cmdring.ptr is non-NULL. >+ */ >+void xgi_emit_flush(struct xgi_info * info, bool stop) >+{ >+ const u32 flush_command[8] = { >+ ((0x10 << 24) >+ | (BEGIN_BEGIN_IDENTIFICATION_MASK & info->next_sequence)), >+ BEGIN_LINK_ENABLE_MASK | (0x00004), >+ 0x00000000, 0x00000000, >+ >+ /* Flush the 2D engine with the default 32 clock delay. >+ */ >+ M2REG_FLUSH_ENGINE_COMMAND | M2REG_FLUSH_2D_ENGINE_MASK, >+ M2REG_FLUSH_ENGINE_COMMAND | M2REG_FLUSH_2D_ENGINE_MASK, >+ M2REG_FLUSH_ENGINE_COMMAND | M2REG_FLUSH_2D_ENGINE_MASK, >+ M2REG_FLUSH_ENGINE_COMMAND | M2REG_FLUSH_2D_ENGINE_MASK, >+ }; >+ const unsigned int flush_size = sizeof(flush_command); >+ u32 *batch_addr; >+ u32 hw_addr; >+ unsigned int i; >+ >+ >+ /* check buf is large enough to contain a new flush batch */ >+ if ((info->cmdring.ring_offset + flush_size) >= info->cmdring.size) { >+ info->cmdring.ring_offset = 0; >+ } >+ >+ hw_addr = info->cmdring.ring_hw_base >+ + info->cmdring.ring_offset; >+ batch_addr = info->cmdring.ptr >+ + (info->cmdring.ring_offset / 4); >+ >+ for (i = 0; i < (flush_size / 4); i++) { >+ batch_addr[i] = cpu_to_le32(flush_command[i]); >+ } >+ >+ if (stop) { >+ *batch_addr |= cpu_to_le32(BEGIN_STOP_STORE_CURRENT_POINTER_MASK); >+ } >+ >+ info->cmdring.last_ptr[1] = cpu_to_le32(BEGIN_LINK_ENABLE_MASK | (flush_size / 4)); >+ info->cmdring.last_ptr[2] = cpu_to_le32(hw_addr >> 4); >+ info->cmdring.last_ptr[3] = 0; >+ 
DRM_WRITEMEMORYBARRIER(); >+ info->cmdring.last_ptr[0] = cpu_to_le32((get_batch_command(BTYPE_CTRL) << 24) >+ | (BEGIN_VALID_MASK)); >+ >+ triggerHWCommandList(info); >+ >+ info->cmdring.ring_offset += flush_size; >+ info->cmdring.last_ptr = batch_addr; >+} >+ >+ >+/** >+ * Emit an empty command to the CRTL command stream. >+ * @info XGI info structure >+ * >+ * This function assumes info->cmdring.ptr is non-NULL. In addition, since >+ * this function emits a command that does not have linkage information, >+ * it sets info->cmdring.ptr to NULL. >+ */ >+void xgi_emit_nop(struct xgi_info * info) >+{ >+ info->cmdring.last_ptr[1] = cpu_to_le32(BEGIN_LINK_ENABLE_MASK >+ | (BEGIN_BEGIN_IDENTIFICATION_MASK & info->next_sequence)); >+ info->cmdring.last_ptr[2] = 0; >+ info->cmdring.last_ptr[3] = 0; >+ DRM_WRITEMEMORYBARRIER(); >+ info->cmdring.last_ptr[0] = cpu_to_le32((get_batch_command(BTYPE_CTRL) << 24) >+ | (BEGIN_VALID_MASK)); >+ >+ triggerHWCommandList(info); >+ >+ info->cmdring.last_ptr = NULL; >+} >+ >+ >+void xgi_emit_irq(struct xgi_info * info) >+{ >+ if (info->cmdring.last_ptr == NULL) >+ return; >+ >+ xgi_emit_flush(info, TRUE); >+} >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/xgi_cmdlist.h linux-2.6.23.i686/drivers/char/drm/xgi_cmdlist.h >--- linux-2.6.23.i686.orig/drivers/char/drm/xgi_cmdlist.h 1970-01-01 01:00:00.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/xgi_cmdlist.h 2008-01-06 09:24:57.000000000 +0100 >@@ -0,0 +1,66 @@ >+/**************************************************************************** >+ * Copyright (C) 2003-2006 by XGI Technology, Taiwan. >+ * >+ * All Rights Reserved. 
>+ * >+ * Permission is hereby granted, free of charge, to any person obtaining >+ * a copy of this software and associated documentation files (the >+ * "Software"), to deal in the Software without restriction, including >+ * without limitation on the rights to use, copy, modify, merge, >+ * publish, distribute, sublicense, and/or sell copies of the Software, >+ * and to permit persons to whom the Software is furnished to do so, >+ * subject to the following conditions: >+ * >+ * The above copyright notice and this permission notice (including the >+ * next paragraph) shall be included in all copies or substantial >+ * portions of the Software. >+ * >+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS >+ * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, >+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL >+ * XGI AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER >+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING >+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER >+ * DEALINGS IN THE SOFTWARE. >+ ***************************************************************************/ >+ >+#ifndef _XGI_CMDLIST_H_ >+#define _XGI_CMDLIST_H_ >+ >+struct xgi_cmdring_info { >+ /** >+ * Kernel space pointer to the base of the command ring. >+ */ >+ u32 * ptr; >+ >+ /** >+ * Size, in bytes, of the command ring. >+ */ >+ unsigned int size; >+ >+ /** >+ * Base address of the command ring from the hardware's PoV. >+ */ >+ unsigned int ring_hw_base; >+ >+ u32 * last_ptr; >+ >+ /** >+ * Offset, in bytes, from the start of the ring to the next available >+ * location to store a command. 
>+ */ >+ unsigned int ring_offset; >+}; >+ >+struct xgi_info; >+extern int xgi_cmdlist_initialize(struct xgi_info * info, size_t size, >+ struct drm_file * filp); >+ >+extern int xgi_state_change(struct xgi_info * info, unsigned int to, >+ unsigned int from); >+ >+extern void xgi_cmdlist_cleanup(struct xgi_info * info); >+ >+extern void xgi_emit_irq(struct xgi_info * info); >+ >+#endif /* _XGI_CMDLIST_H_ */ >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/xgi_drm.h linux-2.6.23.i686/drivers/char/drm/xgi_drm.h >--- linux-2.6.23.i686.orig/drivers/char/drm/xgi_drm.h 1970-01-01 01:00:00.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/xgi_drm.h 2008-01-06 09:24:57.000000000 +0100 >@@ -0,0 +1,133 @@ >+/**************************************************************************** >+ * Copyright (C) 2003-2006 by XGI Technology, Taiwan. >+ * >+ * All Rights Reserved. >+ * >+ * Permission is hereby granted, free of charge, to any person obtaining >+ * a copy of this software and associated documentation files (the >+ * "Software"), to deal in the Software without restriction, including >+ * without limitation on the rights to use, copy, modify, merge, >+ * publish, distribute, sublicense, and/or sell copies of the Software, >+ * and to permit persons to whom the Software is furnished to do so, >+ * subject to the following conditions: >+ * >+ * The above copyright notice and this permission notice (including the >+ * next paragraph) shall be included in all copies or substantial >+ * portions of the Software. >+ * >+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, >+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF >+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND >+ * NON-INFRINGEMENT. 
IN NO EVENT SHALL XGI AND/OR >+ * ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, >+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, >+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER >+ * DEALINGS IN THE SOFTWARE. >+ ***************************************************************************/ >+ >+#ifndef _XGI_DRM_H_ >+#define _XGI_DRM_H_ >+ >+#include <linux/types.h> >+#include <asm/ioctl.h> >+ >+struct drm_xgi_sarea { >+ __u16 device_id; >+ __u16 vendor_id; >+ >+ char device_name[32]; >+ >+ unsigned int scrn_start; >+ unsigned int scrn_xres; >+ unsigned int scrn_yres; >+ unsigned int scrn_bpp; >+ unsigned int scrn_pitch; >+}; >+ >+ >+struct xgi_bootstrap { >+ /** >+ * Size of PCI-e GART range in megabytes. >+ */ >+ struct drm_map gart; >+}; >+ >+ >+enum xgi_mem_location { >+ XGI_MEMLOC_NON_LOCAL = 0, >+ XGI_MEMLOC_LOCAL = 1, >+ XGI_MEMLOC_INVALID = 0x7fffffff >+}; >+ >+struct xgi_mem_alloc { >+ /** >+ * Memory region to be used for allocation. >+ * >+ * Must be one of XGI_MEMLOC_NON_LOCAL or XGI_MEMLOC_LOCAL. >+ */ >+ unsigned int location; >+ >+ /** >+ * Number of bytes request. >+ * >+ * On successful allocation, set to the actual number of bytes >+ * allocated. >+ */ >+ unsigned int size; >+ >+ /** >+ * Address of the memory from the graphics hardware's point of view. >+ */ >+ __u32 hw_addr; >+ >+ /** >+ * Offset of the allocation in the mapping. >+ */ >+ __u32 offset; >+ >+ /** >+ * Magic handle used to release memory. >+ * >+ * See also DRM_XGI_FREE ioctl. 
>+ */ >+ __u32 index; >+}; >+ >+enum xgi_batch_type { >+ BTYPE_2D = 0, >+ BTYPE_3D = 1, >+ BTYPE_FLIP = 2, >+ BTYPE_CTRL = 3, >+ BTYPE_NONE = 0x7fffffff >+}; >+ >+struct xgi_cmd_info { >+ __u32 type; >+ __u32 hw_addr; >+ __u32 size; >+ __u32 id; >+}; >+ >+struct xgi_state_info { >+ unsigned int _fromState; >+ unsigned int _toState; >+}; >+ >+ >+/* >+ * Ioctl definitions >+ */ >+ >+#define DRM_XGI_BOOTSTRAP 0 >+#define DRM_XGI_ALLOC 1 >+#define DRM_XGI_FREE 2 >+#define DRM_XGI_SUBMIT_CMDLIST 3 >+#define DRM_XGI_STATE_CHANGE 4 >+ >+#define XGI_IOCTL_BOOTSTRAP DRM_IOWR(DRM_COMMAND_BASE + DRM_XGI_BOOTSTRAP, struct xgi_bootstrap) >+#define XGI_IOCTL_ALLOC DRM_IOWR(DRM_COMMAND_BASE + DRM_XGI_ALLOC, struct xgi_mem_alloc) >+#define XGI_IOCTL_FREE DRM_IOW(DRM_COMMAND_BASE + DRM_XGI_FREE, __u32) >+#define XGI_IOCTL_SUBMIT_CMDLIST DRM_IOW(DRM_COMMAND_BASE + DRM_XGI_SUBMIT_CMDLIST, struct xgi_cmd_info) >+#define XGI_IOCTL_STATE_CHANGE DRM_IOW(DRM_COMMAND_BASE + DRM_XGI_STATE_CHANGE, struct xgi_state_info) >+ >+#endif /* _XGI_DRM_H_ */ >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/xgi_drv.c linux-2.6.23.i686/drivers/char/drm/xgi_drv.c >--- linux-2.6.23.i686.orig/drivers/char/drm/xgi_drv.c 1970-01-01 01:00:00.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/xgi_drv.c 2008-01-06 09:24:57.000000000 +0100 >@@ -0,0 +1,431 @@ >+/**************************************************************************** >+ * Copyright (C) 2003-2006 by XGI Technology, Taiwan. >+ * >+ * All Rights Reserved. 
>+ * >+ * Permission is hereby granted, free of charge, to any person obtaining >+ * a copy of this software and associated documentation files (the >+ * "Software"), to deal in the Software without restriction, including >+ * without limitation on the rights to use, copy, modify, merge, >+ * publish, distribute, sublicense, and/or sell copies of the Software, >+ * and to permit persons to whom the Software is furnished to do so, >+ * subject to the following conditions: >+ * >+ * The above copyright notice and this permission notice (including the >+ * next paragraph) shall be included in all copies or substantial >+ * portions of the Software. >+ * >+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS >+ * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, >+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL >+ * XGI AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER >+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING >+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER >+ * DEALINGS IN THE SOFTWARE. 
>+ ***************************************************************************/ >+ >+#include "drmP.h" >+#include "drm.h" >+#include "xgi_drv.h" >+#include "xgi_regs.h" >+#include "xgi_misc.h" >+#include "xgi_cmdlist.h" >+ >+#include "drm_pciids.h" >+ >+static struct pci_device_id pciidlist[] = { >+ xgi_PCI_IDS >+}; >+ >+static struct drm_fence_driver xgi_fence_driver = { >+ .num_classes = 1, >+ .wrap_diff = BEGIN_BEGIN_IDENTIFICATION_MASK, >+ .flush_diff = BEGIN_BEGIN_IDENTIFICATION_MASK - 1, >+ .sequence_mask = BEGIN_BEGIN_IDENTIFICATION_MASK, >+ .lazy_capable = 1, >+ .emit = xgi_fence_emit_sequence, >+ .poke_flush = xgi_poke_flush, >+ .has_irq = xgi_fence_has_irq >+}; >+ >+int xgi_bootstrap(struct drm_device *, void *, struct drm_file *); >+ >+static struct drm_ioctl_desc xgi_ioctls[] = { >+ DRM_IOCTL_DEF(DRM_XGI_BOOTSTRAP, xgi_bootstrap, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), >+ DRM_IOCTL_DEF(DRM_XGI_ALLOC, xgi_alloc_ioctl, DRM_AUTH), >+ DRM_IOCTL_DEF(DRM_XGI_FREE, xgi_free_ioctl, DRM_AUTH), >+ DRM_IOCTL_DEF(DRM_XGI_SUBMIT_CMDLIST, xgi_submit_cmdlist, DRM_AUTH), >+ DRM_IOCTL_DEF(DRM_XGI_STATE_CHANGE, xgi_state_change_ioctl, DRM_AUTH|DRM_MASTER), >+}; >+ >+static const int xgi_max_ioctl = DRM_ARRAY_SIZE(xgi_ioctls); >+ >+static int probe(struct pci_dev *pdev, const struct pci_device_id *ent); >+static int xgi_driver_load(struct drm_device *dev, unsigned long flags); >+static int xgi_driver_unload(struct drm_device *dev); >+static void xgi_driver_lastclose(struct drm_device * dev); >+static void xgi_reclaim_buffers_locked(struct drm_device * dev, >+ struct drm_file * filp); >+static irqreturn_t xgi_kern_isr(DRM_IRQ_ARGS); >+ >+ >+static struct drm_driver driver = { >+ .driver_features = >+ DRIVER_PCI_DMA | DRIVER_HAVE_DMA | DRIVER_HAVE_IRQ | >+ DRIVER_IRQ_SHARED | DRIVER_SG, >+ .dev_priv_size = sizeof(struct xgi_info), >+ .load = xgi_driver_load, >+ .unload = xgi_driver_unload, >+ .lastclose = xgi_driver_lastclose, >+ .dma_quiescent = NULL, >+ .irq_preinstall = 
NULL, >+ .irq_postinstall = NULL, >+ .irq_uninstall = NULL, >+ .irq_handler = xgi_kern_isr, >+ .reclaim_buffers = drm_core_reclaim_buffers, >+ .reclaim_buffers_idlelocked = xgi_reclaim_buffers_locked, >+ .get_map_ofs = drm_core_get_map_ofs, >+ .get_reg_ofs = drm_core_get_reg_ofs, >+ .ioctls = xgi_ioctls, >+ .dma_ioctl = NULL, >+ >+ .fops = { >+ .owner = THIS_MODULE, >+ .open = drm_open, >+ .release = drm_release, >+ .ioctl = drm_ioctl, >+ .mmap = drm_mmap, >+ .poll = drm_poll, >+ .fasync = drm_fasync, >+#if defined(CONFIG_COMPAT) && LINUX_VERSION_CODE > KERNEL_VERSION(2,6,9) >+ .compat_ioctl = xgi_compat_ioctl, >+#endif >+ }, >+ >+ .pci_driver = { >+ .name = DRIVER_NAME, >+ .id_table = pciidlist, >+ .probe = probe, >+ .remove = __devexit_p(drm_cleanup_pci), >+ }, >+ >+ .fence_driver = &xgi_fence_driver, >+ >+ .name = DRIVER_NAME, >+ .desc = DRIVER_DESC, >+ .date = DRIVER_DATE, >+ .major = DRIVER_MAJOR, >+ .minor = DRIVER_MINOR, >+ .patchlevel = DRIVER_PATCHLEVEL, >+ >+}; >+ >+static int probe(struct pci_dev *pdev, const struct pci_device_id *ent) >+{ >+ return drm_get_dev(pdev, ent, &driver); >+} >+ >+ >+static int __init xgi_init(void) >+{ >+ driver.num_ioctls = xgi_max_ioctl; >+ return drm_init(&driver, pciidlist); >+} >+ >+static void __exit xgi_exit(void) >+{ >+ drm_exit(&driver); >+} >+ >+module_init(xgi_init); >+module_exit(xgi_exit); >+ >+MODULE_AUTHOR(DRIVER_AUTHOR); >+MODULE_DESCRIPTION(DRIVER_DESC); >+MODULE_LICENSE("GPL and additional rights"); >+ >+ >+void xgi_engine_init(struct xgi_info * info) >+{ >+ u8 temp; >+ >+ >+ OUT3C5B(info->mmio_map, 0x11, 0x92); >+ >+ /* -------> copy from OT2D >+ * PCI Retry Control Register. >+ * disable PCI read retry & enable write retry in mem. (10xx xxxx)b >+ */ >+ temp = IN3X5B(info->mmio_map, 0x55); >+ OUT3X5B(info->mmio_map, 0x55, (temp & 0xbf) | 0x80); >+ >+ xgi_enable_ge(info); >+ >+ /* Enable linear addressing of the card. 
*/ >+ temp = IN3X5B(info->mmio_map, 0x21); >+ OUT3X5B(info->mmio_map, 0x21, temp | 0x20); >+ >+ /* Enable 32-bit internal data path */ >+ temp = IN3X5B(info->mmio_map, 0x2A); >+ OUT3X5B(info->mmio_map, 0x2A, temp | 0x40); >+ >+ /* Enable PCI burst write ,disable burst read and enable MMIO. */ >+ /* >+ * 0x3D4.39 Enable PCI burst write, disable burst read and enable MMIO. >+ * 7 ---- Pixel Data Format 1: big endian 0: little endian >+ * 6 5 4 3---- Memory Data with Big Endian Format, BE[3:0]# with Big Endian Format >+ * 2 ---- PCI Burst Write Enable >+ * 1 ---- PCI Burst Read Enable >+ * 0 ---- MMIO Control >+ */ >+ temp = IN3X5B(info->mmio_map, 0x39); >+ OUT3X5B(info->mmio_map, 0x39, (temp | 0x05) & 0xfd); >+ >+ /* enable GEIO decode */ >+ /* temp = IN3X5B(info->mmio_map, 0x29); >+ * OUT3X5B(info->mmio_map, 0x29, temp | 0x08); >+ */ >+ >+ /* Enable graphic engine I/O PCI retry function*/ >+ /* temp = IN3X5B(info->mmio_map, 0x62); >+ * OUT3X5B(info->mmio_map, 0x62, temp | 0x50); >+ */ >+ >+ /* protect all register except which protected by 3c5.0e.7 */ >+ /* OUT3C5B(info->mmio_map, 0x11, 0x87); */ >+} >+ >+ >+int xgi_bootstrap(struct drm_device * dev, void * data, >+ struct drm_file * filp) >+{ >+ struct xgi_info *info = dev->dev_private; >+ struct xgi_bootstrap * bs = (struct xgi_bootstrap *) data; >+ struct drm_map_list *maplist; >+ int err; >+ >+ >+ DRM_SPININIT(&info->fence_lock, "fence lock"); >+ info->next_sequence = 0; >+ info->complete_sequence = 0; >+ >+ if (info->mmio_map == NULL) { >+ err = drm_addmap(dev, info->mmio.base, info->mmio.size, >+ _DRM_REGISTERS, _DRM_KERNEL, >+ &info->mmio_map); >+ if (err) { >+ DRM_ERROR("Unable to map MMIO region: %d\n", err); >+ return err; >+ } >+ >+ xgi_enable_mmio(info); >+ xgi_engine_init(info); >+ } >+ >+ >+ info->fb.size = IN3CFB(info->mmio_map, 0x54) * 8 * 1024 * 1024; >+ >+ DRM_INFO("fb base: 0x%lx, size: 0x%x (probed)\n", >+ (unsigned long) info->fb.base, info->fb.size); >+ >+ >+ if ((info->fb.base == 0) || 
(info->fb.size == 0)) { >+ DRM_ERROR("framebuffer appears to be wrong: 0x%lx 0x%x\n", >+ (unsigned long) info->fb.base, info->fb.size); >+ return -EINVAL; >+ } >+ >+ >+ /* Init the resource manager */ >+ if (!info->fb_heap_initialized) { >+ err = xgi_fb_heap_init(info); >+ if (err) { >+ DRM_ERROR("Unable to initialize FB heap.\n"); >+ return err; >+ } >+ } >+ >+ >+ info->pcie.size = bs->gart.size; >+ >+ /* Init the resource manager */ >+ if (!info->pcie_heap_initialized) { >+ err = xgi_pcie_heap_init(info); >+ if (err) { >+ DRM_ERROR("Unable to initialize GART heap.\n"); >+ return err; >+ } >+ >+ /* Alloc 1M bytes for cmdbuffer which is flush2D batch array */ >+ err = xgi_cmdlist_initialize(info, 0x100000, filp); >+ if (err) { >+ DRM_ERROR("xgi_cmdlist_initialize() failed\n"); >+ return err; >+ } >+ } >+ >+ >+ if (info->pcie_map == NULL) { >+ err = drm_addmap(info->dev, 0, info->pcie.size, >+ _DRM_SCATTER_GATHER, _DRM_LOCKED, >+ & info->pcie_map); >+ if (err) { >+ DRM_ERROR("Could not add map for GART backing " >+ "store.\n"); >+ return err; >+ } >+ } >+ >+ >+ maplist = drm_find_matching_map(dev, info->pcie_map); >+ if (maplist == NULL) { >+ DRM_ERROR("Could not find GART backing store map.\n"); >+ return -EINVAL; >+ } >+ >+ bs->gart = *info->pcie_map; >+ bs->gart.handle = (void *)(unsigned long) maplist->user_token; >+ return 0; >+} >+ >+ >+void xgi_driver_lastclose(struct drm_device * dev) >+{ >+ struct xgi_info * info = dev->dev_private; >+ >+ if (info != NULL) { >+ if (info->mmio_map != NULL) { >+ xgi_cmdlist_cleanup(info); >+ xgi_disable_ge(info); >+ xgi_disable_mmio(info); >+ } >+ >+ /* The core DRM lastclose routine will destroy all of our >+ * mappings for us. NULL out the pointers here so that >+ * xgi_bootstrap can do the right thing. 
>+ */ >+ info->pcie_map = NULL; >+ info->mmio_map = NULL; >+ info->fb_map = NULL; >+ >+ if (info->pcie_heap_initialized) { >+ drm_ati_pcigart_cleanup(dev, &info->gart_info); >+ } >+ >+ if (info->fb_heap_initialized >+ || info->pcie_heap_initialized) { >+ drm_sman_cleanup(&info->sman); >+ >+ info->fb_heap_initialized = FALSE; >+ info->pcie_heap_initialized = FALSE; >+ } >+ } >+} >+ >+ >+void xgi_reclaim_buffers_locked(struct drm_device * dev, >+ struct drm_file * filp) >+{ >+ struct xgi_info * info = dev->dev_private; >+ >+ mutex_lock(&info->dev->struct_mutex); >+ if (drm_sman_owner_clean(&info->sman, (unsigned long) filp)) { >+ mutex_unlock(&info->dev->struct_mutex); >+ return; >+ } >+ >+ if (dev->driver->dma_quiescent) { >+ dev->driver->dma_quiescent(dev); >+ } >+ >+ drm_sman_owner_cleanup(&info->sman, (unsigned long) filp); >+ mutex_unlock(&info->dev->struct_mutex); >+ return; >+} >+ >+ >+/* >+ * driver receives an interrupt if someone waiting, then hand it off. >+ */ >+irqreturn_t xgi_kern_isr(DRM_IRQ_ARGS) >+{ >+ struct drm_device *dev = (struct drm_device *) arg; >+ struct xgi_info *info = dev->dev_private; >+ const u32 irq_bits = le32_to_cpu(DRM_READ32(info->mmio_map, >+ (0x2800 >+ + M2REG_AUTO_LINK_STATUS_ADDRESS))) >+ & (M2REG_ACTIVE_TIMER_INTERRUPT_MASK >+ | M2REG_ACTIVE_INTERRUPT_0_MASK >+ | M2REG_ACTIVE_INTERRUPT_2_MASK >+ | M2REG_ACTIVE_INTERRUPT_3_MASK); >+ >+ >+ if (irq_bits != 0) { >+ DRM_WRITE32(info->mmio_map, >+ 0x2800 + M2REG_AUTO_LINK_SETTING_ADDRESS, >+ cpu_to_le32(M2REG_AUTO_LINK_SETTING_COMMAND | irq_bits)); >+ xgi_fence_handler(dev); >+ return IRQ_HANDLED; >+ } else { >+ return IRQ_NONE; >+ } >+} >+ >+ >+int xgi_driver_load(struct drm_device *dev, unsigned long flags) >+{ >+ struct xgi_info *info = drm_alloc(sizeof(*info), DRM_MEM_DRIVER); >+ int err; >+ >+ if (!info) >+ return -ENOMEM; >+ >+ (void) memset(info, 0, sizeof(*info)); >+ dev->dev_private = info; >+ info->dev = dev; >+ >+ info->mmio.base = drm_get_resource_start(dev, 1); >+ 
info->mmio.size = drm_get_resource_len(dev, 1); >+ >+ DRM_INFO("mmio base: 0x%lx, size: 0x%x\n", >+ (unsigned long) info->mmio.base, info->mmio.size); >+ >+ >+ if ((info->mmio.base == 0) || (info->mmio.size == 0)) { >+ DRM_ERROR("mmio appears to be wrong: 0x%lx 0x%x\n", >+ (unsigned long) info->mmio.base, info->mmio.size); >+ err = -EINVAL; >+ goto fail; >+ } >+ >+ >+ info->fb.base = drm_get_resource_start(dev, 0); >+ info->fb.size = drm_get_resource_len(dev, 0); >+ >+ DRM_INFO("fb base: 0x%lx, size: 0x%x\n", >+ (unsigned long) info->fb.base, info->fb.size); >+ >+ >+ err = drm_sman_init(&info->sman, 2, 12, 8); >+ if (err) { >+ goto fail; >+ } >+ >+ >+ return 0; >+ >+fail: >+ drm_free(info, sizeof(*info), DRM_MEM_DRIVER); >+ return err; >+} >+ >+int xgi_driver_unload(struct drm_device *dev) >+{ >+ struct xgi_info * info = dev->dev_private; >+ >+ drm_sman_takedown(&info->sman); >+ drm_free(info, sizeof(*info), DRM_MEM_DRIVER); >+ dev->dev_private = NULL; >+ >+ return 0; >+} >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/xgi_drv.h linux-2.6.23.i686/drivers/char/drm/xgi_drv.h >--- linux-2.6.23.i686.orig/drivers/char/drm/xgi_drv.h 1970-01-01 01:00:00.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/xgi_drv.h 2008-01-06 09:24:57.000000000 +0100 >@@ -0,0 +1,117 @@ >+/**************************************************************************** >+ * Copyright (C) 2003-2006 by XGI Technology, Taiwan. >+ * >+ * All Rights Reserved. 
>+ * >+ * Permission is hereby granted, free of charge, to any person obtaining >+ * a copy of this software and associated documentation files (the >+ * "Software"), to deal in the Software without restriction, including >+ * without limitation on the rights to use, copy, modify, merge, >+ * publish, distribute, sublicense, and/or sell copies of the Software, >+ * and to permit persons to whom the Software is furnished to do so, >+ * subject to the following conditions: >+ * >+ * The above copyright notice and this permission notice (including the >+ * next paragraph) shall be included in all copies or substantial >+ * portions of the Software. >+ * >+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS >+ * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, >+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL >+ * XGI AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER >+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING >+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER >+ * DEALINGS IN THE SOFTWARE. 
>+ ***************************************************************************/ >+ >+#ifndef _XGI_DRV_H_ >+#define _XGI_DRV_H_ >+ >+#include "drmP.h" >+#include "drm.h" >+#include "drm_sman.h" >+ >+#define DRIVER_AUTHOR "Andrea Zhang <andrea_zhang@macrosynergy.com>" >+ >+#define DRIVER_NAME "xgi" >+#define DRIVER_DESC "XGI XP5 / XP10 / XG47" >+#define DRIVER_DATE "20071003" >+ >+#define DRIVER_MAJOR 1 >+#define DRIVER_MINOR 1 >+#define DRIVER_PATCHLEVEL 3 >+ >+#include "xgi_cmdlist.h" >+#include "xgi_drm.h" >+ >+struct xgi_aperture { >+ dma_addr_t base; >+ unsigned int size; >+}; >+ >+struct xgi_info { >+ struct drm_device *dev; >+ >+ bool bootstrap_done; >+ >+ /* physical characteristics */ >+ struct xgi_aperture mmio; >+ struct xgi_aperture fb; >+ struct xgi_aperture pcie; >+ >+ struct drm_map *mmio_map; >+ struct drm_map *pcie_map; >+ struct drm_map *fb_map; >+ >+ /* look up table parameters */ >+ struct drm_ati_pcigart_info gart_info; >+ unsigned int lutPageSize; >+ >+ struct drm_sman sman; >+ bool fb_heap_initialized; >+ bool pcie_heap_initialized; >+ >+ struct xgi_cmdring_info cmdring; >+ >+ DRM_SPINTYPE fence_lock; >+ unsigned complete_sequence; >+ unsigned next_sequence; >+}; >+ >+extern long xgi_compat_ioctl(struct file *filp, unsigned int cmd, >+ unsigned long arg); >+ >+extern int xgi_fb_heap_init(struct xgi_info * info); >+ >+extern int xgi_alloc(struct xgi_info * info, struct xgi_mem_alloc * alloc, >+ struct drm_file * filp); >+ >+extern int xgi_free(struct xgi_info * info, unsigned long index, >+ struct drm_file * filp); >+ >+extern int xgi_pcie_heap_init(struct xgi_info * info); >+ >+extern void *xgi_find_pcie_virt(struct xgi_info * info, u32 address); >+ >+extern void xgi_enable_mmio(struct xgi_info * info); >+extern void xgi_disable_mmio(struct xgi_info * info); >+extern void xgi_enable_ge(struct xgi_info * info); >+extern void xgi_disable_ge(struct xgi_info * info); >+ >+extern void xgi_poke_flush(struct drm_device * dev, uint32_t class); >+extern 
int xgi_fence_emit_sequence(struct drm_device * dev, uint32_t class, >+ uint32_t flags, uint32_t * sequence, uint32_t * native_type); >+extern void xgi_fence_handler(struct drm_device * dev); >+extern int xgi_fence_has_irq(struct drm_device *dev, uint32_t class, >+ uint32_t flags); >+ >+extern int xgi_alloc_ioctl(struct drm_device * dev, void * data, >+ struct drm_file * filp); >+extern int xgi_free_ioctl(struct drm_device * dev, void * data, >+ struct drm_file * filp); >+extern int xgi_submit_cmdlist(struct drm_device * dev, void * data, >+ struct drm_file * filp); >+extern int xgi_state_change_ioctl(struct drm_device * dev, void * data, >+ struct drm_file * filp); >+ >+#endif >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/xgi_fb.c linux-2.6.23.i686/drivers/char/drm/xgi_fb.c >--- linux-2.6.23.i686.orig/drivers/char/drm/xgi_fb.c 1970-01-01 01:00:00.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/xgi_fb.c 2008-01-06 09:24:57.000000000 +0100 >@@ -0,0 +1,130 @@ >+/**************************************************************************** >+ * Copyright (C) 2003-2006 by XGI Technology, Taiwan. >+ * >+ * All Rights Reserved. >+ * >+ * Permission is hereby granted, free of charge, to any person obtaining >+ * a copy of this software and associated documentation files (the >+ * "Software"), to deal in the Software without restriction, including >+ * without limitation on the rights to use, copy, modify, merge, >+ * publish, distribute, sublicense, and/or sell copies of the Software, >+ * and to permit persons to whom the Software is furnished to do so, >+ * subject to the following conditions: >+ * >+ * The above copyright notice and this permission notice (including the >+ * next paragraph) shall be included in all copies or substantial >+ * portions of the Software. 
>+ * >+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS >+ * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, >+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL >+ * XGI AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER >+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING >+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER >+ * DEALINGS IN THE SOFTWARE. >+ ***************************************************************************/ >+ >+#include "xgi_drv.h" >+ >+#define XGI_FB_HEAP_START 0x1000000 >+ >+int xgi_alloc(struct xgi_info * info, struct xgi_mem_alloc * alloc, >+ struct drm_file * filp) >+{ >+ struct drm_memblock_item *block; >+ const char *const mem_name = (alloc->location == XGI_MEMLOC_LOCAL) >+ ? "on-card" : "GART"; >+ >+ >+ if ((alloc->location != XGI_MEMLOC_LOCAL) >+ && (alloc->location != XGI_MEMLOC_NON_LOCAL)) { >+ DRM_ERROR("Invalid memory pool (0x%08x) specified.\n", >+ alloc->location); >+ return -EINVAL; >+ } >+ >+ if ((alloc->location == XGI_MEMLOC_LOCAL) >+ ? 
!info->fb_heap_initialized : !info->pcie_heap_initialized) { >+ DRM_ERROR("Attempt to allocate from uninitialized memory " >+ "pool (0x%08x).\n", alloc->location); >+ return -EINVAL; >+ } >+ >+ mutex_lock(&info->dev->struct_mutex); >+ block = drm_sman_alloc(&info->sman, alloc->location, alloc->size, >+ 0, (unsigned long) filp); >+ mutex_unlock(&info->dev->struct_mutex); >+ >+ if (block == NULL) { >+ alloc->size = 0; >+ DRM_ERROR("%s memory allocation failed\n", mem_name); >+ return -ENOMEM; >+ } else { >+ alloc->offset = (*block->mm->offset)(block->mm, >+ block->mm_info); >+ alloc->hw_addr = alloc->offset; >+ alloc->index = block->user_hash.key; >+ >+ if (block->user_hash.key != (unsigned long) alloc->index) { >+ DRM_ERROR("%s truncated handle %lx for pool %d " >+ "offset %x\n", >+ __func__, block->user_hash.key, >+ alloc->location, alloc->offset); >+ } >+ >+ if (alloc->location == XGI_MEMLOC_NON_LOCAL) { >+ alloc->hw_addr += info->pcie.base; >+ } >+ >+ DRM_DEBUG("%s memory allocation succeeded: 0x%x\n", >+ mem_name, alloc->offset); >+ } >+ >+ return 0; >+} >+ >+ >+int xgi_alloc_ioctl(struct drm_device * dev, void * data, >+ struct drm_file * filp) >+{ >+ struct xgi_info *info = dev->dev_private; >+ >+ return xgi_alloc(info, (struct xgi_mem_alloc *) data, filp); >+} >+ >+ >+int xgi_free(struct xgi_info * info, unsigned long index, >+ struct drm_file * filp) >+{ >+ int err; >+ >+ mutex_lock(&info->dev->struct_mutex); >+ err = drm_sman_free_key(&info->sman, index); >+ mutex_unlock(&info->dev->struct_mutex); >+ >+ return err; >+} >+ >+ >+int xgi_free_ioctl(struct drm_device * dev, void * data, >+ struct drm_file * filp) >+{ >+ struct xgi_info *info = dev->dev_private; >+ >+ return xgi_free(info, *(unsigned long *) data, filp); >+} >+ >+ >+int xgi_fb_heap_init(struct xgi_info * info) >+{ >+ int err; >+ >+ mutex_lock(&info->dev->struct_mutex); >+ err = drm_sman_set_range(&info->sman, XGI_MEMLOC_LOCAL, >+ XGI_FB_HEAP_START, >+ info->fb.size - XGI_FB_HEAP_START); >+ 
mutex_unlock(&info->dev->struct_mutex); >+ >+ info->fb_heap_initialized = (err == 0); >+ return err; >+} >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/xgi_fence.c linux-2.6.23.i686/drivers/char/drm/xgi_fence.c >--- linux-2.6.23.i686.orig/drivers/char/drm/xgi_fence.c 1970-01-01 01:00:00.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/xgi_fence.c 2008-01-06 09:24:57.000000000 +0100 >@@ -0,0 +1,127 @@ >+/* >+ * (C) Copyright IBM Corporation 2007 >+ * All Rights Reserved. >+ * >+ * Permission is hereby granted, free of charge, to any person obtaining a >+ * copy of this software and associated documentation files (the "Software"), >+ * to deal in the Software without restriction, including without limitation >+ * on the rights to use, copy, modify, merge, publish, distribute, sub >+ * license, and/or sell copies of the Software, and to permit persons to whom >+ * the Software is furnished to do so, subject to the following conditions: >+ * >+ * The above copyright notice and this permission notice (including the next >+ * paragraph) shall be included in all copies or substantial portions of the >+ * Software. >+ * >+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR >+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, >+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL >+ * THE AUTHORS AND/OR THEIR SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR >+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, >+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR >+ * OTHER DEALINGS IN THE SOFTWARE. 
>+ * >+ * Authors: >+ * Ian Romanick <idr@us.ibm.com> >+ */ >+ >+#include "xgi_drv.h" >+#include "xgi_regs.h" >+#include "xgi_misc.h" >+#include "xgi_cmdlist.h" >+ >+static uint32_t xgi_do_flush(struct drm_device * dev, uint32_t class) >+{ >+ struct xgi_info * info = dev->dev_private; >+ struct drm_fence_class_manager * fc = &dev->fm.fence_class[class]; >+ uint32_t pending_flush_types = 0; >+ uint32_t signaled_flush_types = 0; >+ >+ >+ if ((info == NULL) || (class != 0)) >+ return 0; >+ >+ DRM_SPINLOCK(&info->fence_lock); >+ >+ pending_flush_types = fc->pending_flush | >+ ((fc->pending_exe_flush) ? DRM_FENCE_TYPE_EXE : 0); >+ >+ if (pending_flush_types) { >+ if (pending_flush_types & DRM_FENCE_TYPE_EXE) { >+ const u32 begin_id = le32_to_cpu(DRM_READ32(info->mmio_map, >+ 0x2820)) >+ & BEGIN_BEGIN_IDENTIFICATION_MASK; >+ >+ if (begin_id != info->complete_sequence) { >+ info->complete_sequence = begin_id; >+ signaled_flush_types |= DRM_FENCE_TYPE_EXE; >+ } >+ } >+ >+ if (signaled_flush_types) { >+ drm_fence_handler(dev, 0, info->complete_sequence, >+ signaled_flush_types, 0); >+ } >+ } >+ >+ DRM_SPINUNLOCK(&info->fence_lock); >+ >+ return fc->pending_flush | >+ ((fc->pending_exe_flush) ? 
DRM_FENCE_TYPE_EXE : 0); >+} >+ >+ >+int xgi_fence_emit_sequence(struct drm_device * dev, uint32_t class, >+ uint32_t flags, uint32_t * sequence, >+ uint32_t * native_type) >+{ >+ struct xgi_info * info = dev->dev_private; >+ >+ if ((info == NULL) || (class != 0)) >+ return -EINVAL; >+ >+ >+ DRM_SPINLOCK(&info->fence_lock); >+ info->next_sequence++; >+ if (info->next_sequence > BEGIN_BEGIN_IDENTIFICATION_MASK) { >+ info->next_sequence = 1; >+ } >+ DRM_SPINUNLOCK(&info->fence_lock); >+ >+ >+ xgi_emit_irq(info); >+ >+ *sequence = (uint32_t) info->next_sequence; >+ *native_type = DRM_FENCE_TYPE_EXE; >+ >+ return 0; >+} >+ >+ >+void xgi_poke_flush(struct drm_device * dev, uint32_t class) >+{ >+ struct drm_fence_manager * fm = &dev->fm; >+ unsigned long flags; >+ >+ >+ write_lock_irqsave(&fm->lock, flags); >+ xgi_do_flush(dev, class); >+ write_unlock_irqrestore(&fm->lock, flags); >+} >+ >+ >+void xgi_fence_handler(struct drm_device * dev) >+{ >+ struct drm_fence_manager * fm = &dev->fm; >+ >+ >+ write_lock(&fm->lock); >+ xgi_do_flush(dev, 0); >+ write_unlock(&fm->lock); >+} >+ >+ >+int xgi_fence_has_irq(struct drm_device *dev, uint32_t class, uint32_t flags) >+{ >+ return ((class == 0) && (flags == DRM_FENCE_TYPE_EXE)) ? 1 : 0; >+} >diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/xgi_ioc32.c linux-2.6.23.i686/drivers/char/drm/xgi_ioc32.c >--- linux-2.6.23.i686.orig/drivers/char/drm/xgi_ioc32.c 1970-01-01 01:00:00.000000000 +0100 >+++ linux-2.6.23.i686/drivers/char/drm/xgi_ioc32.c 2008-01-06 09:24:57.000000000 +0100 >@@ -0,0 +1,140 @@ >+/* >+ * (C) Copyright IBM Corporation 2007 >+ * Copyright (C) Paul Mackerras 2005. >+ * All Rights Reserved. 
>+ * >+ * Permission is hereby granted, free of charge, to any person obtaining a >+ * copy of this software and associated documentation files (the "Software"), >+ * to deal in the Software without restriction, including without limitation >+ * on the rights to use, copy, modify, merge, publish, distribute, sub >+ * license, and/or sell copies of the Software, and to permit persons to whom >+ * the Software is furnished to do so, subject to the following conditions: >+ * >+ * The above copyright notice and this permission notice (including the next >+ * paragraph) shall be included in all copies or substantial portions of the >+ * Software. >+ * >+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR >+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, >+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL >+ * THE AUTHORS AND/OR THEIR SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR >+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, >+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR >+ * OTHER DEALINGS IN THE SOFTWARE. >+ * >+ * Authors: >+ * Ian Romanick <idr@us.ibm.com> >+ */ >+ >+#include <linux/compat.h> >+ >+#include "drmP.h" >+#include "drm.h" >+ >+#include "xgi_drm.h" >+ >+/* This is copied from drm_ioc32.c. 
>+ */
>+struct drm_map32 {
>+	u32 offset;		/**< Requested physical address (0 for SAREA)*/
>+	u32 size;		/**< Requested physical size (bytes) */
>+	enum drm_map_type type;	/**< Type of memory to map */
>+	enum drm_map_flags flags;	/**< Flags */
>+	u32 handle;		/**< User-space: "Handle" to pass to mmap() */
>+	int mtrr;		/**< MTRR slot used */
>+};
>+
>+struct drm32_xgi_bootstrap {
>+	struct drm_map32 gart;
>+};
>+
>+
>+extern int xgi_bootstrap(struct drm_device *, void *, struct drm_file *);
>+
>+static int compat_xgi_bootstrap(struct file *filp, unsigned int cmd,
>+				unsigned long arg)
>+{
>+	struct drm32_xgi_bootstrap __user *const argp = (void __user *)arg;
>+	struct drm32_xgi_bootstrap bs32;
>+	struct xgi_bootstrap __user *bs;
>+	int err;
>+	void *handle;
>+
>+
>+	if (copy_from_user(&bs32, argp, sizeof(bs32))) {
>+		return -EFAULT;
>+	}
>+
>+	bs = compat_alloc_user_space(sizeof(*bs));
>+	if (!access_ok(VERIFY_WRITE, bs, sizeof(*bs))) {
>+		return -EFAULT;
>+	}
>+
>+	if (__put_user(bs32.gart.offset, &bs->gart.offset)
>+	    || __put_user(bs32.gart.size, &bs->gart.size)
>+	    || __put_user(bs32.gart.type, &bs->gart.type)
>+	    || __put_user(bs32.gart.flags, &bs->gart.flags)) {
>+		return -EFAULT;
>+	}
>+
>+	err = drm_ioctl(filp->f_dentry->d_inode, filp, XGI_IOCTL_BOOTSTRAP,
>+			(unsigned long)bs);
>+	if (err) {
>+		return err;
>+	}
>+
>+	if (__get_user(bs32.gart.offset, &bs->gart.offset)
>+	    || __get_user(bs32.gart.mtrr, &bs->gart.mtrr)
>+	    || __get_user(handle, &bs->gart.handle)) {
>+		return -EFAULT;
>+	}
>+
>+	bs32.gart.handle = (unsigned long)handle;
>+	if (bs32.gart.handle != (unsigned long)handle && printk_ratelimit()) {
>+		printk(KERN_ERR "%s truncated handle %p for type %d "
>+		       "offset %x\n",
>+		       __func__, handle, bs32.gart.type, bs32.gart.offset);
>+	}
>+
>+	if (copy_to_user(argp, &bs32, sizeof(bs32))) {
>+		return -EFAULT;
>+	}
>+
>+	return 0;
>+}
>+
>+
>+drm_ioctl_compat_t *xgi_compat_ioctls[] = {
>+	[DRM_XGI_BOOTSTRAP] = compat_xgi_bootstrap,
>+};
>+
>+/**
>+ * Called whenever a 32-bit process running under a 64-bit kernel
>+ * performs an ioctl on /dev/dri/card<n>.
>+ *
>+ * \param filp file pointer.
>+ * \param cmd command.
>+ * \param arg user argument.
>+ * \return zero on success or negative number on failure.
>+ */
>+long xgi_compat_ioctl(struct file *filp, unsigned int cmd,
>+		      unsigned long arg)
>+{
>+	const unsigned int nr = DRM_IOCTL_NR(cmd);
>+	drm_ioctl_compat_t *fn = NULL;
>+	int ret;
>+
>+	if (nr < DRM_COMMAND_BASE)
>+		return drm_compat_ioctl(filp, cmd, arg);
>+
>+	if (nr < DRM_COMMAND_BASE + DRM_ARRAY_SIZE(xgi_compat_ioctls))
>+		fn = xgi_compat_ioctls[nr - DRM_COMMAND_BASE];
>+
>+	lock_kernel();
>+	ret = (fn != NULL)
>+		? (*fn)(filp, cmd, arg)
>+		: drm_ioctl(filp->f_dentry->d_inode, filp, cmd, arg);
>+	unlock_kernel();
>+
>+	return ret;
>+}
>diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/xgi_misc.c linux-2.6.23.i686/drivers/char/drm/xgi_misc.c
>--- linux-2.6.23.i686.orig/drivers/char/drm/xgi_misc.c	1970-01-01 01:00:00.000000000 +0100
>+++ linux-2.6.23.i686/drivers/char/drm/xgi_misc.c	2008-01-06 09:24:57.000000000 +0100
>@@ -0,0 +1,477 @@
>+/****************************************************************************
>+ * Copyright (C) 2003-2006 by XGI Technology, Taiwan.
>+ *
>+ * All Rights Reserved.
>+ *
>+ * Permission is hereby granted, free of charge, to any person obtaining
>+ * a copy of this software and associated documentation files (the
>+ * "Software"), to deal in the Software without restriction, including
>+ * without limitation on the rights to use, copy, modify, merge,
>+ * publish, distribute, sublicense, and/or sell copies of the Software,
>+ * and to permit persons to whom the Software is furnished to do so,
>+ * subject to the following conditions:
>+ *
>+ * The above copyright notice and this permission notice (including the
>+ * next paragraph) shall be included in all copies or substantial
>+ * portions of the Software.
>+ *
>+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
>+ * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
>+ * XGI AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
>+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
>+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
>+ * DEALINGS IN THE SOFTWARE.
>+ ***************************************************************************/
>+
>+#include "xgi_drv.h"
>+#include "xgi_regs.h"
>+
>+#include <linux/delay.h>
>+
>+/*
>+ * irq functions
>+ */
>+#define STALL_INTERRUPT_RESET_THRESHOLD 0xffff
>+
>+static unsigned int s_invalid_begin = 0;
>+
>+static bool xgi_validate_signal(struct drm_map * map)
>+{
>+	if (le32_to_cpu(DRM_READ32(map, 0x2800) & 0x001c0000)) {
>+		u16 check;
>+
>+		/* Check Read back status */
>+		DRM_WRITE8(map, 0x235c, 0x80);
>+		check = le16_to_cpu(DRM_READ16(map, 0x2360));
>+
>+		if ((check & 0x3f) != ((check & 0x3f00) >> 8)) {
>+			return FALSE;
>+		}
>+
>+		/* Check RO channel */
>+		DRM_WRITE8(map, 0x235c, 0x83);
>+		check = le16_to_cpu(DRM_READ16(map, 0x2360));
>+		if ((check & 0x0f) != ((check & 0xf0) >> 4)) {
>+			return FALSE;
>+		}
>+
>+		/* Check RW channel */
>+		DRM_WRITE8(map, 0x235c, 0x88);
>+		check = le16_to_cpu(DRM_READ16(map, 0x2360));
>+		if ((check & 0x0f) != ((check & 0xf0) >> 4)) {
>+			return FALSE;
>+		}
>+
>+		/* Check RO channel outstanding */
>+		DRM_WRITE8(map, 0x235c, 0x8f);
>+		check = le16_to_cpu(DRM_READ16(map, 0x2360));
>+		if (0 != (check & 0x3ff)) {
>+			return FALSE;
>+		}
>+
>+		/* Check RW channel outstanding */
>+		DRM_WRITE8(map, 0x235c, 0x90);
>+		check = le16_to_cpu(DRM_READ16(map, 0x2360));
>+		if (0 != (check & 0x3ff)) {
>+			return FALSE;
>+		}
>+
>+		/* No pending PCIE request. GE stall.
>+		 */
>+	}
>+
>+	return TRUE;
>+}
>+
>+
>+static void xgi_ge_hang_reset(struct drm_map * map)
>+{
>+	int time_out = 0xffff;
>+
>+	DRM_WRITE8(map, 0xb057, 8);
>+	while (0 != le32_to_cpu(DRM_READ32(map, 0x2800) & 0xf0000000)) {
>+		while (0 != ((--time_out) & 0xfff))
>+			/* empty */ ;
>+
>+		if (0 == time_out) {
>+			u8 old_3ce;
>+			u8 old_3cf;
>+			u8 old_index;
>+			u8 old_36;
>+
>+			DRM_INFO("Can not reset back 0x%x!\n",
>+				 le32_to_cpu(DRM_READ32(map, 0x2800)));
>+
>+			DRM_WRITE8(map, 0xb057, 0);
>+
>+			/* Have to use 3x5.36 to reset. */
>+			/* Save and close dynamic gating */
>+
>+			old_3ce = DRM_READ8(map, 0x3ce);
>+			DRM_WRITE8(map, 0x3ce, 0x2a);
>+			old_3cf = DRM_READ8(map, 0x3cf);
>+			DRM_WRITE8(map, 0x3cf, old_3cf & 0xfe);
>+
>+			/* Reset GE */
>+			old_index = DRM_READ8(map, 0x3d4);
>+			DRM_WRITE8(map, 0x3d4, 0x36);
>+			old_36 = DRM_READ8(map, 0x3d5);
>+			DRM_WRITE8(map, 0x3d5, old_36 | 0x10);
>+
>+			while (0 != ((--time_out) & 0xfff))
>+				/* empty */ ;
>+
>+			DRM_WRITE8(map, 0x3d5, old_36);
>+			DRM_WRITE8(map, 0x3d4, old_index);
>+
>+			/* Restore dynamic gating */
>+			DRM_WRITE8(map, 0x3cf, old_3cf);
>+			DRM_WRITE8(map, 0x3ce, old_3ce);
>+			break;
>+		}
>+	}
>+
>+	DRM_WRITE8(map, 0xb057, 0);
>+}
>+
>+
>+bool xgi_ge_irq_handler(struct xgi_info * info)
>+{
>+	const u32 int_status = le32_to_cpu(DRM_READ32(info->mmio_map, 0x2810));
>+	bool is_support_auto_reset = FALSE;
>+
>+	/* Check GE on/off */
>+	if (0 == (0xffffc0f0 & int_status)) {
>+		if (0 != (0x1000 & int_status)) {
>+			/* We got GE stall interrupt.
>+			 */
>+			DRM_WRITE32(info->mmio_map, 0x2810,
>+				    cpu_to_le32(int_status | 0x04000000));
>+
>+			if (is_support_auto_reset) {
>+				static cycles_t last_tick;
>+				static unsigned continue_int_count = 0;
>+
>+				/* OE II is busy. */
>+
>+				if (!xgi_validate_signal(info->mmio_map)) {
>+					/* Nothing but skip.
>+					 */
>+				} else if (0 == continue_int_count++) {
>+					last_tick = get_cycles();
>+				} else {
>+					const cycles_t new_tick = get_cycles();
>+					if ((new_tick - last_tick) >
>+					    STALL_INTERRUPT_RESET_THRESHOLD) {
>+						continue_int_count = 0;
>+					} else if (continue_int_count >= 3) {
>+						continue_int_count = 0;
>+
>+						/* GE Hung up, need reset. */
>+						DRM_INFO("Reset GE!\n");
>+
>+						xgi_ge_hang_reset(info->mmio_map);
>+					}
>+				}
>+			}
>+		} else if (0 != (0x1 & int_status)) {
>+			s_invalid_begin++;
>+			DRM_WRITE32(info->mmio_map, 0x2810,
>+				    cpu_to_le32((int_status & ~0x01) | 0x04000000));
>+		}
>+
>+		return TRUE;
>+	}
>+
>+	return FALSE;
>+}
>+
>+bool xgi_crt_irq_handler(struct xgi_info * info)
>+{
>+	bool ret = FALSE;
>+	u8 save_3ce = DRM_READ8(info->mmio_map, 0x3ce);
>+
>+	/* CRT1 interrupt just happened
>+	 */
>+	if (IN3CFB(info->mmio_map, 0x37) & 0x01) {
>+		u8 op3cf_3d;
>+		u8 op3cf_37;
>+
>+		/* What happened?
>+		 */
>+		op3cf_37 = IN3CFB(info->mmio_map, 0x37);
>+
>+		/* Clear CRT interrupt
>+		 */
>+		op3cf_3d = IN3CFB(info->mmio_map, 0x3d);
>+		OUT3CFB(info->mmio_map, 0x3d, (op3cf_3d | 0x04));
>+		OUT3CFB(info->mmio_map, 0x3d, (op3cf_3d & ~0x04));
>+		ret = TRUE;
>+	}
>+	DRM_WRITE8(info->mmio_map, 0x3ce, save_3ce);
>+
>+	return (ret);
>+}
>+
>+bool xgi_dvi_irq_handler(struct xgi_info * info)
>+{
>+	bool ret = FALSE;
>+	const u8 save_3ce = DRM_READ8(info->mmio_map, 0x3ce);
>+
>+	/* DVI interrupt just happened
>+	 */
>+	if (IN3CFB(info->mmio_map, 0x38) & 0x20) {
>+		const u8 save_3x4 = DRM_READ8(info->mmio_map, 0x3d4);
>+		u8 op3cf_39;
>+		u8 op3cf_37;
>+		u8 op3x5_5a;
>+
>+		/* What happened?
>+		 */
>+		op3cf_37 = IN3CFB(info->mmio_map, 0x37);
>+
>+		/* Notify BIOS that DVI plug/unplug happened
>+		 */
>+		op3x5_5a = IN3X5B(info->mmio_map, 0x5a);
>+		OUT3X5B(info->mmio_map, 0x5a, op3x5_5a & 0xf7);
>+
>+		DRM_WRITE8(info->mmio_map, 0x3d4, save_3x4);
>+
>+		/* Clear DVI interrupt
>+		 */
>+		op3cf_39 = IN3CFB(info->mmio_map, 0x39);
>+		OUT3C5B(info->mmio_map, 0x39, (op3cf_39 & ~0x01));
>+		OUT3C5B(info->mmio_map, 0x39, (op3cf_39 | 0x01));
>+
>+		ret = TRUE;
>+	}
>+	DRM_WRITE8(info->mmio_map, 0x3ce, save_3ce);
>+
>+	return (ret);
>+}
>+
>+
>+static void dump_reg_header(unsigned regbase)
>+{
>+	printk("\n=====xgi_dump_register========0x%x===============\n",
>+	       regbase);
>+	printk("    0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f\n");
>+}
>+
>+
>+static void dump_indexed_reg(struct xgi_info * info, unsigned regbase)
>+{
>+	unsigned i, j;
>+	u8 temp;
>+
>+
>+	dump_reg_header(regbase);
>+	for (i = 0; i < 0x10; i++) {
>+		printk("%1x ", i);
>+
>+		for (j = 0; j < 0x10; j++) {
>+			DRM_WRITE8(info->mmio_map, regbase - 1,
>+				   (i * 0x10) + j);
>+			temp = DRM_READ8(info->mmio_map, regbase);
>+			printk("%3x", temp);
>+		}
>+		printk("\n");
>+	}
>+}
>+
>+
>+static void dump_reg(struct xgi_info * info, unsigned regbase, unsigned range)
>+{
>+	unsigned i, j;
>+
>+
>+	dump_reg_header(regbase);
>+	for (i = 0; i < range; i++) {
>+		printk("%1x ", i);
>+
>+		for (j = 0; j < 0x10; j++) {
>+			u8 temp = DRM_READ8(info->mmio_map,
>+					    regbase + (i * 0x10) + j);
>+			printk("%3x", temp);
>+		}
>+		printk("\n");
>+	}
>+}
>+
>+
>+void xgi_dump_register(struct xgi_info * info)
>+{
>+	dump_indexed_reg(info, 0x3c5);
>+	dump_indexed_reg(info, 0x3d5);
>+	dump_indexed_reg(info, 0x3cf);
>+
>+	dump_reg(info, 0xB000, 0x05);
>+	dump_reg(info, 0x2200, 0x0B);
>+	dump_reg(info, 0x2300, 0x07);
>+	dump_reg(info, 0x2400, 0x10);
>+	dump_reg(info, 0x2800, 0x10);
>+}
>+
>+
>+#define WHOLD_GE_STATUS 0x2800
>+
>+/* Test everything except the "whole GE busy" bit, the "master engine busy"
>+ * bit, and the reserved bits [26:21].
>+ */
>+#define IDLE_MASK ~((1U<<31) | (1U<<28) | (0x3f<<21))
>+
>+void xgi_waitfor_pci_idle(struct xgi_info * info)
>+{
>+	unsigned int idleCount = 0;
>+	u32 old_status = 0;
>+	unsigned int same_count = 0;
>+
>+	while (idleCount < 5) {
>+		const u32 status = DRM_READ32(info->mmio_map, WHOLD_GE_STATUS)
>+			& IDLE_MASK;
>+
>+		if (status == old_status) {
>+			same_count++;
>+
>+			if ((same_count % 100) == 0) {
>+				DRM_ERROR("GE status stuck at 0x%08x for %u iterations!\n",
>+					  old_status, same_count);
>+			}
>+		} else {
>+			old_status = status;
>+			same_count = 0;
>+		}
>+
>+		if (status != 0) {
>+			msleep(1);
>+			idleCount = 0;
>+		} else {
>+			idleCount++;
>+		}
>+	}
>+}
>+
>+
>+void xgi_enable_mmio(struct xgi_info * info)
>+{
>+	u8 protect = 0;
>+	u8 temp;
>+
>+	/* Unprotect registers */
>+	DRM_WRITE8(info->mmio_map, 0x3C4, 0x11);
>+	protect = DRM_READ8(info->mmio_map, 0x3C5);
>+	DRM_WRITE8(info->mmio_map, 0x3C5, 0x92);
>+
>+	DRM_WRITE8(info->mmio_map, 0x3D4, 0x3A);
>+	temp = DRM_READ8(info->mmio_map, 0x3D5);
>+	DRM_WRITE8(info->mmio_map, 0x3D5, temp | 0x20);
>+
>+	/* Enable MMIO */
>+	DRM_WRITE8(info->mmio_map, 0x3D4, 0x39);
>+	temp = DRM_READ8(info->mmio_map, 0x3D5);
>+	DRM_WRITE8(info->mmio_map, 0x3D5, temp | 0x01);
>+
>+	/* Protect registers */
>+	OUT3C5B(info->mmio_map, 0x11, protect);
>+}
>+
>+
>+void xgi_disable_mmio(struct xgi_info * info)
>+{
>+	u8 protect = 0;
>+	u8 temp;
>+
>+	/* Unprotect registers */
>+	DRM_WRITE8(info->mmio_map, 0x3C4, 0x11);
>+	protect = DRM_READ8(info->mmio_map, 0x3C5);
>+	DRM_WRITE8(info->mmio_map, 0x3C5, 0x92);
>+
>+	/* Disable MMIO access */
>+	DRM_WRITE8(info->mmio_map, 0x3D4, 0x39);
>+	temp = DRM_READ8(info->mmio_map, 0x3D5);
>+	DRM_WRITE8(info->mmio_map, 0x3D5, temp & 0xFE);
>+
>+	/* Protect registers */
>+	OUT3C5B(info->mmio_map, 0x11, protect);
>+}
>+
>+
>+void xgi_enable_ge(struct xgi_info * info)
>+{
>+	u8 bOld3cf2a;
>+	int wait = 0;
>+
>+	OUT3C5B(info->mmio_map, 0x11, 0x92);
>+
>+	/* Save and close dynamic gating
>+	 */
>+	bOld3cf2a = IN3CFB(info->mmio_map, XGI_MISC_CTRL);
>+	OUT3CFB(info->mmio_map, XGI_MISC_CTRL, bOld3cf2a & ~EN_GEPWM);
>+
>+	/* Enable 2D and 3D GE
>+	 */
>+	OUT3X5B(info->mmio_map, XGI_GE_CNTL, (GE_ENABLE | GE_ENABLE_3D));
>+	wait = 10;
>+	while (wait--) {
>+		DRM_READ8(info->mmio_map, 0x36);
>+	}
>+
>+	/* Reset both 3D and 2D engine
>+	 */
>+	OUT3X5B(info->mmio_map, XGI_GE_CNTL,
>+		(GE_ENABLE | GE_RESET | GE_ENABLE_3D));
>+	wait = 10;
>+	while (wait--) {
>+		DRM_READ8(info->mmio_map, 0x36);
>+	}
>+
>+	OUT3X5B(info->mmio_map, XGI_GE_CNTL, (GE_ENABLE | GE_ENABLE_3D));
>+	wait = 10;
>+	while (wait--) {
>+		DRM_READ8(info->mmio_map, 0x36);
>+	}
>+
>+	/* Enable 2D engine only
>+	 */
>+	OUT3X5B(info->mmio_map, XGI_GE_CNTL, GE_ENABLE);
>+
>+	/* Enable 2D+3D engine
>+	 */
>+	OUT3X5B(info->mmio_map, XGI_GE_CNTL, (GE_ENABLE | GE_ENABLE_3D));
>+
>+	/* Restore dynamic gating
>+	 */
>+	OUT3CFB(info->mmio_map, XGI_MISC_CTRL, bOld3cf2a);
>+}
>+
>+
>+void xgi_disable_ge(struct xgi_info * info)
>+{
>+	int wait = 0;
>+
>+	OUT3X5B(info->mmio_map, XGI_GE_CNTL, (GE_ENABLE | GE_ENABLE_3D));
>+
>+	wait = 10;
>+	while (wait--) {
>+		DRM_READ8(info->mmio_map, 0x36);
>+	}
>+
>+	/* Reset both 3D and 2D engine
>+	 */
>+	OUT3X5B(info->mmio_map, XGI_GE_CNTL,
>+		(GE_ENABLE | GE_RESET | GE_ENABLE_3D));
>+
>+	wait = 10;
>+	while (wait--) {
>+		DRM_READ8(info->mmio_map, 0x36);
>+	}
>+	OUT3X5B(info->mmio_map, XGI_GE_CNTL, (GE_ENABLE | GE_ENABLE_3D));
>+
>+	wait = 10;
>+	while (wait--) {
>+		DRM_READ8(info->mmio_map, 0x36);
>+	}
>+
>+	/* Disable 2D engine and 3D engine.
>+	 */
>+	OUT3X5B(info->mmio_map, XGI_GE_CNTL, 0);
>+}
>diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/xgi_misc.h linux-2.6.23.i686/drivers/char/drm/xgi_misc.h
>--- linux-2.6.23.i686.orig/drivers/char/drm/xgi_misc.h	1970-01-01 01:00:00.000000000 +0100
>+++ linux-2.6.23.i686/drivers/char/drm/xgi_misc.h	2008-01-06 09:24:57.000000000 +0100
>@@ -0,0 +1,37 @@
>+/****************************************************************************
>+ * Copyright (C) 2003-2006 by XGI Technology, Taiwan.
>+ *
>+ * All Rights Reserved.
>+ *
>+ * Permission is hereby granted, free of charge, to any person obtaining
>+ * a copy of this software and associated documentation files (the
>+ * "Software"), to deal in the Software without restriction, including
>+ * without limitation on the rights to use, copy, modify, merge,
>+ * publish, distribute, sublicense, and/or sell copies of the Software,
>+ * and to permit persons to whom the Software is furnished to do so,
>+ * subject to the following conditions:
>+ *
>+ * The above copyright notice and this permission notice (including the
>+ * next paragraph) shall be included in all copies or substantial
>+ * portions of the Software.
>+ *
>+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
>+ * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
>+ * XGI AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
>+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
>+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
>+ * DEALINGS IN THE SOFTWARE.
>+ ***************************************************************************/
>+
>+#ifndef _XGI_MISC_H_
>+#define _XGI_MISC_H_
>+
>+extern void xgi_dump_register(struct xgi_info * info);
>+
>+extern bool xgi_ge_irq_handler(struct xgi_info * info);
>+extern bool xgi_crt_irq_handler(struct xgi_info * info);
>+extern bool xgi_dvi_irq_handler(struct xgi_info * info);
>+extern void xgi_waitfor_pci_idle(struct xgi_info * info);
>+
>+#endif
>diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/xgi_pcie.c linux-2.6.23.i686/drivers/char/drm/xgi_pcie.c
>--- linux-2.6.23.i686.orig/drivers/char/drm/xgi_pcie.c	1970-01-01 01:00:00.000000000 +0100
>+++ linux-2.6.23.i686/drivers/char/drm/xgi_pcie.c	2008-01-06 09:24:57.000000000 +0100
>@@ -0,0 +1,126 @@
>+/****************************************************************************
>+ * Copyright (C) 2003-2006 by XGI Technology, Taiwan.
>+ *
>+ * All Rights Reserved.
>+ *
>+ * Permission is hereby granted, free of charge, to any person obtaining
>+ * a copy of this software and associated documentation files (the
>+ * "Software"), to deal in the Software without restriction, including
>+ * without limitation on the rights to use, copy, modify, merge,
>+ * publish, distribute, sublicense, and/or sell copies of the Software,
>+ * and to permit persons to whom the Software is furnished to do so,
>+ * subject to the following conditions:
>+ *
>+ * The above copyright notice and this permission notice (including the
>+ * next paragraph) shall be included in all copies or substantial
>+ * portions of the Software.
>+ *
>+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
>+ * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
>+ * XGI AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
>+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
>+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
>+ * DEALINGS IN THE SOFTWARE.
>+ ***************************************************************************/
>+
>+#include "xgi_drv.h"
>+#include "xgi_regs.h"
>+#include "xgi_misc.h"
>+
>+void xgi_gart_flush(struct drm_device *dev)
>+{
>+	struct xgi_info *const info = dev->dev_private;
>+	u8 temp;
>+
>+	DRM_MEMORYBARRIER();
>+
>+	/* Set GART in SFB */
>+	temp = DRM_READ8(info->mmio_map, 0xB00C);
>+	DRM_WRITE8(info->mmio_map, 0xB00C, temp & ~0x02);
>+
>+	/* Set GART base address to HW */
>+	DRM_WRITE32(info->mmio_map, 0xB034, info->gart_info.bus_addr);
>+
>+	/* Flush GART table. */
>+	DRM_WRITE8(info->mmio_map, 0xB03F, 0x40);
>+	DRM_WRITE8(info->mmio_map, 0xB03F, 0x00);
>+}
>+
>+
>+int xgi_pcie_heap_init(struct xgi_info * info)
>+{
>+	u8 temp = 0;
>+	int err;
>+	struct drm_scatter_gather request;
>+
>+	/* Get current FB aperture size */
>+	temp = IN3X5B(info->mmio_map, 0x27);
>+	DRM_INFO("In3x5(0x27): 0x%x \n", temp);
>+
>+	if (temp & 0x01) {	/* 256MB; Jong 06/05/2006; 0x10000000 */
>+		info->pcie.base = 256 * 1024 * 1024;
>+	} else {		/* 128MB; Jong 06/05/2006; 0x08000000 */
>+		info->pcie.base = 128 * 1024 * 1024;
>+	}
>+
>+
>+	DRM_INFO("info->pcie.base: 0x%lx\n", (unsigned long) info->pcie.base);
>+
>+	/* Get current lookup table page size */
>+	temp = DRM_READ8(info->mmio_map, 0xB00C);
>+	if (temp & 0x04) {	/* 8KB */
>+		info->lutPageSize = 8 * 1024;
>+	} else {		/* 4KB */
>+		info->lutPageSize = 4 * 1024;
>+	}
>+
>+	DRM_INFO("info->lutPageSize: 0x%x \n", info->lutPageSize);
>+
>+
>+	request.size = info->pcie.size;
>+	err = drm_sg_alloc(info->dev, & request);
>+	if (err) {
>+		DRM_ERROR("cannot allocate PCIE GART backing store! "
>+			  "size = %d\n", info->pcie.size);
>+		return err;
>+	}
>+
>+	info->gart_info.gart_table_location = DRM_ATI_GART_MAIN;
>+	info->gart_info.gart_reg_if = DRM_ATI_GART_PCI;
>+	info->gart_info.table_size = info->dev->sg->pages * sizeof(u32);
>+
>+	if (!drm_ati_pcigart_init(info->dev, &info->gart_info)) {
>+		DRM_ERROR("failed to init PCI GART!\n");
>+		return -ENOMEM;
>+	}
>+
>+
>+	xgi_gart_flush(info->dev);
>+
>+	mutex_lock(&info->dev->struct_mutex);
>+	err = drm_sman_set_range(&info->sman, XGI_MEMLOC_NON_LOCAL,
>+				 0, info->pcie.size);
>+	mutex_unlock(&info->dev->struct_mutex);
>+	if (err) {
>+		drm_ati_pcigart_cleanup(info->dev, &info->gart_info);
>+	}
>+
>+	info->pcie_heap_initialized = (err == 0);
>+	return err;
>+}
>+
>+
>+/**
>+ * xgi_find_pcie_virt
>+ * @address: GE HW address
>+ *
>+ * Returns CPU virtual address. Assumes the CPU VAddr is continuous in not
>+ * the same block
>+ */
>+void *xgi_find_pcie_virt(struct xgi_info * info, u32 address)
>+{
>+	const unsigned long offset = address - info->pcie.base;
>+
>+	return ((u8 *) info->dev->sg->virtual) + offset;
>+}
>diff -Nurp linux-2.6.23.i686.orig/drivers/char/drm/xgi_regs.h linux-2.6.23.i686/drivers/char/drm/xgi_regs.h
>--- linux-2.6.23.i686.orig/drivers/char/drm/xgi_regs.h	1970-01-01 01:00:00.000000000 +0100
>+++ linux-2.6.23.i686/drivers/char/drm/xgi_regs.h	2008-01-06 09:24:57.000000000 +0100
>@@ -0,0 +1,169 @@
>+/****************************************************************************
>+ * Copyright (C) 2003-2006 by XGI Technology, Taiwan.
>+ *
>+ * All Rights Reserved.
>+ *
>+ * Permission is hereby granted, free of charge, to any person obtaining
>+ * a copy of this software and associated documentation files (the
>+ * "Software"), to deal in the Software without restriction, including
>+ * without limitation on the rights to use, copy, modify, merge,
>+ * publish, distribute, sublicense, and/or sell copies of the Software,
>+ * and to permit persons to whom the Software is furnished to do so,
>+ * subject to the following conditions:
>+ *
>+ * The above copyright notice and this permission notice (including the
>+ * next paragraph) shall be included in all copies or substantial
>+ * portions of the Software.
>+ *
>+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
>+ * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
>+ * XGI AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
>+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
>+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
>+ * DEALINGS IN THE SOFTWARE.
>+ ***************************************************************************/
>+
>+#ifndef _XGI_REGS_H_
>+#define _XGI_REGS_H_
>+
>+#include "drmP.h"
>+#include "drm.h"
>+
>+#define MAKE_MASK(bits) ((1U << (bits)) - 1)
>+
>+#define ONE_BIT_MASK        MAKE_MASK(1)
>+#define TWENTY_BIT_MASK     MAKE_MASK(20)
>+#define TWENTYONE_BIT_MASK  MAKE_MASK(21)
>+#define TWENTYTWO_BIT_MASK  MAKE_MASK(22)
>+
>+
>+/* Port 0x3d4/0x3d5, index 0x2a */
>+#define XGI_INTERFACE_SEL 0x2a
>+#define DUAL_64BIT         (1U<<7)
>+#define INTERNAL_32BIT     (1U<<6)
>+#define EN_SEP_WR          (1U<<5)
>+#define POWER_DOWN_SEL     (1U<<4)
>+/*#define RESERVED_3       (1U<<3) */
>+#define SUBS_MCLK_PCICLK   (1U<<2)
>+#define MEM_SIZE_MASK      (3<<0)
>+#define MEM_SIZE_32MB      (0<<0)
>+#define MEM_SIZE_64MB      (1<<0)
>+#define MEM_SIZE_128MB     (2<<0)
>+#define MEM_SIZE_256MB     (3<<0)
>+
>+/* Port 0x3d4/0x3d5, index 0x36 */
>+#define XGI_GE_CNTL 0x36
>+#define GE_ENABLE          (1U<<7)
>+/*#define RESERVED_6       (1U<<6) */
>+/*#define RESERVED_5       (1U<<5) */
>+#define GE_RESET           (1U<<4)
>+/*#define RESERVED_3       (1U<<3) */
>+#define GE_ENABLE_3D       (1U<<2)
>+/*#define RESERVED_1       (1U<<1) */
>+/*#define RESERVED_0       (1U<<0) */
>+
>+/* Port 0x3ce/0x3cf, index 0x2a */
>+#define XGI_MISC_CTRL 0x2a
>+#define MOTION_VID_SUSPEND   (1U<<7)
>+#define DVI_CRTC_TIMING_SEL  (1U<<6)
>+#define LCD_SEL_CTL_NEW      (1U<<5)
>+#define LCD_SEL_EXT_DELYCTRL (1U<<4)
>+#define REG_LCDDPARST        (1U<<3)
>+#define LCD2DPAOFF           (1U<<2)
>+/*#define RESERVED_1         (1U<<1) */
>+#define EN_GEPWM             (1U<<0)	/* Enable GE power management */
>+
>+
>+#define BASE_3D_ENG 0x2800
>+
>+#define M2REG_FLUSH_ENGINE_ADDRESS 0x000
>+#define M2REG_FLUSH_ENGINE_COMMAND 0x00
>+#define M2REG_FLUSH_FLIP_ENGINE_MASK  (ONE_BIT_MASK<<21)
>+#define M2REG_FLUSH_2D_ENGINE_MASK    (ONE_BIT_MASK<<20)
>+#define M2REG_FLUSH_3D_ENGINE_MASK    TWENTY_BIT_MASK
>+
>+#define M2REG_RESET_ADDRESS 0x004
>+#define M2REG_RESET_COMMAND 0x01
>+#define M2REG_RESET_STATUS2_MASK  (ONE_BIT_MASK<<10)
>+#define M2REG_RESET_STATUS1_MASK  (ONE_BIT_MASK<<9)
>+#define M2REG_RESET_STATUS0_MASK  (ONE_BIT_MASK<<8)
>+#define M2REG_RESET_3DENG_MASK    (ONE_BIT_MASK<<4)
>+#define M2REG_RESET_2DENG_MASK    (ONE_BIT_MASK<<2)
>+
>+/* Write register */
>+#define M2REG_AUTO_LINK_SETTING_ADDRESS 0x010
>+#define M2REG_AUTO_LINK_SETTING_COMMAND 0x04
>+#define M2REG_CLEAR_TIMER_INTERRUPT_MASK        (ONE_BIT_MASK<<11)
>+#define M2REG_CLEAR_INTERRUPT_3_MASK            (ONE_BIT_MASK<<10)
>+#define M2REG_CLEAR_INTERRUPT_2_MASK            (ONE_BIT_MASK<<9)
>+#define M2REG_CLEAR_INTERRUPT_0_MASK            (ONE_BIT_MASK<<8)
>+#define M2REG_CLEAR_COUNTERS_MASK               (ONE_BIT_MASK<<4)
>+#define M2REG_PCI_TRIGGER_MODE_MASK             (ONE_BIT_MASK<<1)
>+#define M2REG_INVALID_LIST_AUTO_INTERRUPT_MASK  (ONE_BIT_MASK<<0)
>+
>+/* Read register */
>+#define M2REG_AUTO_LINK_STATUS_ADDRESS 0x010
>+#define M2REG_AUTO_LINK_STATUS_COMMAND 0x04
>+#define M2REG_ACTIVE_TIMER_INTERRUPT_MASK       (ONE_BIT_MASK<<11)
>+#define M2REG_ACTIVE_INTERRUPT_3_MASK           (ONE_BIT_MASK<<10)
>+#define M2REG_ACTIVE_INTERRUPT_2_MASK           (ONE_BIT_MASK<<9)
>+#define M2REG_ACTIVE_INTERRUPT_0_MASK           (ONE_BIT_MASK<<8)
>+#define M2REG_INVALID_LIST_AUTO_INTERRUPTED_MODE_MASK  (ONE_BIT_MASK<<0)
>+
>+#define M2REG_PCI_TRIGGER_REGISTER_ADDRESS 0x014
>+#define M2REG_PCI_TRIGGER_REGISTER_COMMAND 0x05
>+
>+
>+/**
>+ * Begin instruction, double-word 0
>+ */
>+#define BEGIN_STOP_STORE_CURRENT_POINTER_MASK  (ONE_BIT_MASK<<22)
>+#define BEGIN_VALID_MASK                       (ONE_BIT_MASK<<20)
>+#define BEGIN_BEGIN_IDENTIFICATION_MASK        TWENTY_BIT_MASK
>+
>+/**
>+ * Begin instruction, double-word 1
>+ */
>+#define BEGIN_LINK_ENABLE_MASK                 (ONE_BIT_MASK<<31)
>+#define BEGIN_COMMAND_LIST_LENGTH_MASK         TWENTYTWO_BIT_MASK
>+
>+
>+/* Hardware access functions */
>+static inline void OUT3C5B(struct drm_map * map, u8 index, u8 data)
>+{
>+	DRM_WRITE8(map, 0x3C4, index);
>+	DRM_WRITE8(map, 0x3C5, data);
>+}
>+
>+static inline void OUT3X5B(struct drm_map * map, u8 index, u8 data)
>+{
>+	DRM_WRITE8(map, 0x3D4, index);
>+	DRM_WRITE8(map, 0x3D5, data);
>+}
>+
>+static inline void OUT3CFB(struct drm_map * map, u8 index, u8 data)
>+{
>+	DRM_WRITE8(map, 0x3CE, index);
>+	DRM_WRITE8(map, 0x3CF, data);
>+}
>+
>+static inline u8 IN3C5B(struct drm_map * map, u8 index)
>+{
>+	DRM_WRITE8(map, 0x3C4, index);
>+	return DRM_READ8(map, 0x3C5);
>+}
>+
>+static inline u8 IN3X5B(struct drm_map * map, u8 index)
>+{
>+	DRM_WRITE8(map, 0x3D4, index);
>+	return DRM_READ8(map, 0x3D5);
>+}
>+
>+static inline u8 IN3CFB(struct drm_map * map, u8 index)
>+{
>+	DRM_WRITE8(map, 0x3CE, index);
>+	return DRM_READ8(map, 0x3CF);
>+}
>+
>+#endif