<?xml version="1.0" encoding="UTF-8"?>
<!-- $Header: /var/cvsroot/gentoo/xml/htdocs/doc/en/nvidia-guide.xml,v 1.32 2006/09/02 10:19:23 nightmorph Exp $ -->
<!DOCTYPE guide SYSTEM "/dtd/guide.dtd">

<guide link="/doc/en/nvidia-guide.xml">
<title>Gentoo Linux nVidia Guide</title>

<author title="Author">
  <mail link="swift@gentoo.org">Sven Vermeulen</mail>
</author>
<author title="Editor">
  <mail link="curtis119@gentoo.org">M Curtis Napier</mail>
</author>
<author title="Editor">
  <mail link="nightmorph@gentoo.org">Joshua Saddler</mail>
</author>
<author title="Editor">
  <mail link="wolf31o2@gentoo.org">Chris Gianelloni</mail>
</author>

<abstract>
Many Gentooists have an nVidia chipset in their system. nVidia provides
specific Linux drivers to boost the performance of your card. This guide
explains how to install and configure these drivers.
</abstract>

<!-- The content of this document is licensed under the CC-BY-SA license -->
<!-- See http://creativecommons.org/licenses/by-sa/2.5 -->
<license/>

<version>1.28</version>
<date>2006-10-23</date>

<chapter>
<title>Introduction</title>
<section>
<body>

<p>
nVidia release their own Linux drivers which provide good performance and full
3D acceleration. There are two drivers in Portage. <c>nvidia-drivers</c> is for
newer nVidia graphics cards, while <c>nvidia-legacy-drivers</c> supports older
cards.
</p>

<note>
Previously, Gentoo provided separate ebuilds for the nVidia kernel module
(<c>nvidia-kernel</c>) and the X11 GLX libraries (<c>nvidia-glx</c>). These
ebuilds have since been removed from the Portage tree in favor of
<c>nvidia-drivers</c> and <c>nvidia-legacy-drivers</c>. If you use
<c>nvidia-kernel</c> and <c>nvidia-glx</c>, then you should migrate to the
newer packages.
</note>
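
<p>
If you are migrating from the old packages, the switch could look like the
following. This is only a sketch; adjust the package names to whatever is
actually installed on your system:
</p>

<pre caption="Possible migration from the old ebuilds">
<comment>(Remove the old packages)</comment>
# <i>emerge --unmerge nvidia-kernel nvidia-glx</i>
<comment>(Install the new driver package)</comment>
# <i>emerge nvidia-drivers</i>
</pre>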

</body>
</section>
</chapter>

<chapter>
<title>Configuring your Card</title>
<section>
<title>Kernel Configuration</title>
<body>

<p>
The nVidia kernel driver builds and runs against your current kernel. Since it
is built as a module, your kernel must support the loading of kernel modules.
If you used <c>genkernel</c> to configure the kernel for you, then you're all
set. If not, double-check your kernel configuration so that this support is
enabled:
</p>

<pre caption="Enabling the Loading of Kernel Modules">
Loadable module support  --->
  [*] Enable loadable module support
</pre>

<p>
You also need to enable <e>Memory Type Range Register</e> (MTRR) support in
your kernel:
</p>

<pre caption="Enabling MTRR">
Processor type and features  --->
  [*] MTRR (Memory Type Range Register) support
</pre>

<p>
Also, if you have an AGP graphics card, you can optionally enable
<c>agpgart</c> support in your kernel, either compiled in or as a module. If
you do not use the in-kernel agpgart, then the drivers will use their own
<c>agpgart</c> implementation, called <c>NvAGP</c>. On certain systems this
performs better than the in-kernel agpgart, and on others it performs worse.
You will need to evaluate this on your own system to get the best performance.
If you are unsure what to do, use the in-kernel agpgart:
</p>

<pre caption="Enabling agpgart">
Device Drivers --->
  Character devices --->
    <*> /dev/agpgart (AGP Support)
</pre>

</body>
</section>
<section>
<title>Arch-specific notes</title>
<body>

<impo>
For x86 and AMD64 processors, the in-kernel driver conflicts with the binary
driver provided by nVidia. If you will be compiling your kernel for these CPUs,
you must completely remove support for the in-kernel driver as shown:
</impo>

<pre caption="Remove the in-kernel driver">
Device Drivers --->
  Graphics Support --->
    < > nVidia Framebuffer Support
    < > nVidia Riva support
</pre>

<p>
A good framebuffer alternative is <c>VESA</c>:
</p>

<pre caption="Enable VESA support">
Device Drivers --->
  Graphics Support --->
    <*> VESA VGA graphics support
</pre>

<p>
Then, under "VESA driver type", select either <c>vesafb</c> or
<c>vesafb-tng</c>. If you are using an AMD64 processor, you should select
<c>vesafb</c> rather than <c>vesafb-tng</c>:
</p>

<pre caption="Select framebuffer type">
(X) vesafb
( ) vesafb-tng
</pre>

<p>
For more information, read
<path>/usr/src/linux/Documentation/fb/vesafb.txt</path> if you are using
<c>vesafb</c>, or look for your framebuffer's documentation under
<path>/usr/src/linux/Documentation/fb/</path>.
</p>

</body>
</section>
<section>
<title>Continuing with Kernel Configuration</title>
<body>

<p>
The <c>nvidia-drivers</c> and <c>nvidia-legacy-drivers</c> ebuilds
automatically discover your kernel version based on the
<path>/usr/src/linux</path> symlink. Please ensure that you have this symlink
pointing to the correct sources and that your kernel is correctly configured.
Please refer to the Configuring the Kernel section of the <uri
link="/doc/en/handbook/">Installation Handbook</uri> for details on configuring
your kernel.
</p>

<p>
If you are using gentoo-sources-2.6.11-r6, your <path>/usr/src</path> directory
might look something like this:
</p>

<pre caption="Check your /usr/src/linux symlink">
# <i>cd /usr/src</i>
# <i>ls -l</i>
<comment>(Check that linux points to the right directory)</comment>
lrwxrwxrwx   1 root root   22 Apr 23 18:33 linux -> linux-2.6.11-gentoo-r6
drwxr-xr-x   4 root root  120 Apr  8 18:56 linux-2.4.26-gentoo-r4
drwxr-xr-x  18 root root  664 Dec 31 16:09 linux-2.6.10
drwxr-xr-x  18 root root  632 Mar  3 12:27 linux-2.6.11
drwxr-xr-x  19 root root 4096 Mar 16 22:00 linux-2.6.11-gentoo-r6
</pre>

<p>
In the above output, you'll notice that the <c>linux</c> symlink is pointing
to the <c>linux-2.6.11-gentoo-r6</c> kernel.
</p>

<p>
If the symlink is not pointing to the correct sources, you must update the
link like this:
</p>

<pre caption="Create/Update /usr/src/linux symlink">
# <i>cd /usr/src</i>
# <i>ln -snf linux-2.6.11-gentoo-r6 linux</i>
</pre>

</body>
</section>
<section>
<title>Optional: Check for Legacy Card Support</title>
<body>

<note>
Unfortunately, certain legacy video cards are not supported by the newer
versions of <c>nvidia-drivers</c>. nVidia provides a <uri
link="http://www.nvidia.com/object/IO_18897.html">list of supported
cards</uri>. Please check the list before installing the drivers.
</note>

<p>
The following is a list of <b>unsupported</b> legacy video cards:
</p>

<pre caption="Unsupported cards">
TNT2
TNT2 Pro
TNT2 Ultra
TNT2 Model 64 (M64)
TNT2 Model 64 (M64) Pro
Vanta
Vanta LT
GeForce 256
GeForce DDR
GeForce2 GTS
GeForce2 Pro
GeForce2 Ti
GeForce2 Ultra
GeForce2 MX Integrated graphics
Quadro
Quadro2 Pro
Quadro2 EX
</pre>

<p>
If your card is listed in the legacy list, you will need to install the
<c>nvidia-legacy-drivers</c> package to get 3D support.
</p>

</body>
</section>
<section>
<title>Installing the Appropriate Drivers</title>
<body>

<p>
Now it's time to install the drivers.
</p>

<pre caption="Installing the nVidia drivers">
<comment>(If you have a card not listed in the legacy list above)</comment>
# <i>emerge nvidia-drivers</i>
<comment>(If your card is listed in the legacy list)</comment>
# <i>emerge nvidia-legacy-drivers</i>
</pre>

<impo>
Every time you <uri link="/doc/en/kernel-upgrade.xml">compile a new
kernel</uri> or recompile the current one, you will need to run <c>emerge
nvidia-drivers</c> or <c>emerge nvidia-legacy-drivers</c> to reinstall the
nVidia modules.
</impo>

<p>
Once the installation has finished, run <c>modprobe nvidia</c> to load the
kernel module into memory. If this is an upgrade, you should remove the
previous module first.
</p>

<pre caption="Loading the kernel module">
# <i>lsmod | grep nvidia && rmmod nvidia</i>
# <i>modprobe nvidia</i>
</pre>

<p>
To avoid having to load the module manually at every boot, edit
<path>/etc/modules.autoload.d/kernel-2.6</path> (or <path>kernel-2.4</path>,
depending on which kernel version you use) and add <c>nvidia</c> to it so that
the module is loaded automatically at startup. Don't forget to run
<c>modules-update</c> afterwards.
</p>
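
<p>
For example, assuming a 2.6 kernel, appending the module name from the command
line could look like this (you can of course also add the line with any text
editor):
</p>

<pre caption="Adding nvidia to the module autoload list">
# <i>echo "nvidia" >> /etc/modules.autoload.d/kernel-2.6</i>
# <i>modules-update</i>
</pre>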

<impo>
If you compiled <c>agpgart</c> as a module, you will need to add it to
<path>/etc/modules.autoload.d/kernel-2.6</path> (or <path>kernel-2.4</path>
depending on your kernel version).
</impo>

<pre caption="Running modules-update">
# <i>modules-update</i>
</pre>

</body>
</section>
<section>
<title>Configuring the X Server</title>
<body>

<p>
Once the appropriate drivers are installed you need to configure your X Server
to use the <c>nvidia</c> driver instead of the default <c>nv</c> driver.
</p>

<p>
Open <path>/etc/X11/xorg.conf</path> with your favorite editor (such as
<c>nano</c> or <c>vim</c>) and go to the <c>Device</c> section. In that
section, change the <c>Driver</c> line:
</p>

<pre caption="Changing nv to nvidia in the X Server configuration">
Section "Device"
  Identifier "nVidia Inc. GeForce2"
  <i>Driver "nvidia"</i>
  VideoRam 65536
EndSection
</pre>

<p>
Then go to the <c>Module</c> section and make sure the <c>glx</c> module gets
loaded while the <c>dri</c> module doesn't:
</p>

<pre caption="Updating the Module section">
Section "Module"
  <comment>(...)</comment>
  <i># Load "dri"
  Load "glx"</i>
  <comment>(...)</comment>
EndSection
</pre>

<p>
Next, in section <c>Screen</c>, make sure that either the <c>DefaultDepth</c>
directive is set to 16 or 24, or that you only have <c>Display</c> subsections
with <c>Depth</c> settings of 16 or 24. Without it, the nVidia GLX extensions
will not start.
</p>

<pre caption="Updating the Screen section">
Section "Screen"
  <comment>(...)</comment>
  <i>DefaultDepth 16</i>
  Subsection "Display"
    <comment>(...)</comment>
EndSection
</pre>

<p>
Run <c>eselect</c> so that the X Server uses the nVidia GLX libraries:
</p>

<pre caption="Running eselect">
# <i>eselect opengl set nvidia</i>
</pre>

</body>
</section>
<section>
<title>Adding your Users to the video Group</title>
<body>

<p>
You have to add your user to the <c>video</c> group so that they have access
to the nVidia device files:
</p>

<pre caption="Adding your user to the video group">
# <i>gpasswd -a youruser video</i>
</pre>

<p>
This might not be strictly necessary if you aren't using <c>udev</c>, but it
doesn't hurt and makes your system future-proof.
</p>

</body>
</section>
<section>
<title>Testing your Card</title>
<body>

<p>
To test your nVidia card, fire up X and run the <c>glxinfo | grep direct</c>
command. It should say that direct rendering is activated:
</p>

<pre caption="Checking the direct rendering status">
$ <i>glxinfo | grep direct</i>
direct rendering: Yes
</pre>

<p>
To monitor your FPS, run <c>glxgears</c>.
</p>
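
<p>
For instance, you can leave <c>glxgears</c> running in a terminal for a little
while and watch the frame rate figures it prints every few seconds:
</p>

<pre caption="Monitoring FPS with glxgears">
$ <i>glxgears</i>
</pre>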

</body>
</section>
<section>
<title>Enabling nvidia Support</title>
<body>

<p>
Some tools, such as <c>mplayer</c> and <c>xine-lib</c>, use a local USE flag
called <c>nvidia</c> which enables XvMCNVIDIA support, useful when watching
high resolution movies. Add <c>nvidia</c> to your USE variable in
<path>/etc/make.conf</path>, or add it as a USE flag for
<c>media-video/mplayer</c> and/or <c>media-libs/xine-lib</c> in
<path>/etc/portage/package.use</path>.
</p>
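
<p>
For example, the per-package approach could look like this (the global
alternative is simply adding <c>nvidia</c> to the USE line in
<path>/etc/make.conf</path>):
</p>

<pre caption="Example /etc/portage/package.use entries">
media-video/mplayer nvidia
media-libs/xine-lib nvidia
</pre>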

<p>
Then, run <c>emerge -uD --newuse world</c> to rebuild the applications that
benefit from the USE flag change.
</p>
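
<p>
This may take a while, since the affected packages are recompiled:
</p>

<pre caption="Rebuilding applications that benefit from the nvidia USE flag">
# <i>emerge -uD --newuse world</i>
</pre>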

</body>
</section>
<section>
<title>Using the NVidia Settings Tool</title>
<body>

<p>
Since version 1.0.6106, nVidia also provides a settings tool. This tool allows
you to change graphical settings without restarting the X server and is
available through Portage as <c>media-video/nvidia-settings</c>.
</p>
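
<p>
If you want to try it, install the package and start it from within an X
session:
</p>

<pre caption="Installing and running nvidia-settings">
# <i>emerge nvidia-settings</i>
$ <i>nvidia-settings</i>
</pre>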

</body>
</section>
</chapter>

<chapter>
<title>Troubleshooting</title>
<section>
<title>Getting 2D to work on machines with 4 GB or more memory</title>
<body>

<p>
If you are having trouble with nVidia 2D acceleration, it is likely that you
are unable to set up a write-combining range with MTRR. To verify, check the
contents of <path>/proc/mtrr</path>:
</p>

<pre caption="Checking if you have write-combining enabled">
# <i>cat /proc/mtrr</i>
</pre>

<p>
Every line should contain "write-back" or "write-combining". If you see a line
with "uncachable" in it, you will need to change a BIOS setting to fix this.
</p>

<p>
Reboot and enter the BIOS, then find the MTRR settings (probably under "CPU
Settings"). Change the setting from "continuous" to "discrete" and boot back
into Linux. The "uncachable" entry should now be gone, and 2D acceleration
should work without any glitches.
</p>

</body>
</section>
<section>
<title>
When I attempt to load the kernel module, I receive a "no such device" error
</title>
<body>

<p>
This usually occurs when you don't have a matching video card. Make sure that
you actually have an nVidia-based graphics card (you can double-check this
with <c>lspci</c>).
</p>
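
<p>
For example, a quick check that only relies on the standard <c>lspci</c> tool:
</p>

<pre caption="Checking for an nVidia card with lspci">
# <i>lspci | grep -i nvidia</i>
</pre>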

<p>
If you are confident that you have an nVidia card, check your BIOS and see if
the directive <e>Assign IRQ to VGA</e> is set.
</p>

</body>
</section>
</chapter>

<chapter>
<title>Expert Configuration</title>
<section>
<title>Documentation</title>
<body>

<p>
The nVidia driver package also comes with comprehensive documentation. This is
installed into <path>/usr/share/doc</path> and can be viewed with the
following command:
</p>

<pre caption="Viewing the NVIDIA documentation">
<comment>(for nvidia-drivers)</comment>
$ <i>less /usr/share/doc/nvidia-drivers-*/README.gz</i>
<comment>(for nvidia-legacy-drivers)</comment>
$ <i>less /usr/share/doc/nvidia-legacy-drivers-*/README.gz</i>
</pre>

</body>
</section>
<section>
<title>Kernel module parameters</title>
<body>

<p>
The <c>nvidia</c> kernel module accepts a number of parameters (options) which
you can use to tweak the behaviour of the driver. Most of these are mentioned
in the documentation. To add or change the values of these parameters, edit
<path>/etc/modules.d/nvidia</path>. Remember to run <c>modules-update</c>
after modifying this file, and bear in mind that you will need to reload the
<c>nvidia</c> module before the new settings take effect.
</p>

<pre caption="Adjusting nvidia options">
<comment>(Edit /etc/modules.d/nvidia in your favourite editor)</comment>
# <i>nano -w /etc/modules.d/nvidia</i>
<comment>(Update module information)</comment>
# <i>modules-update</i>
<comment>(Unload the nvidia module...)</comment>
# <i>modprobe -r nvidia</i>
<comment>(...and load it once again)</comment>
# <i>modprobe nvidia</i>
</pre>
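
<p>
An entry in <path>/etc/modules.d/nvidia</path> uses the usual <c>modprobe</c>
option syntax. The parameter below is only an illustration; consult the README
for the parameters that actually apply to your card and driver version:
</p>

<pre caption="Example option line in /etc/modules.d/nvidia">
<comment>(Illustrative parameter only; see the driver documentation)</comment>
options nvidia NVreg_EnableAGPSBA=1
</pre>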

</body>
</section>
<section>
<title>Advanced X configuration</title>
<body>

<p>
The GLX layer also has a plethora of options which can be configured. These
control the configuration of TV out, dual displays, monitor frequency
detection, etc. Again, all of the available options are detailed in the
documentation.
</p>

<p>
If you wish to use any of these options, you need to list them in the relevant
Device section of your X configuration file (usually
<path>/etc/X11/xorg.conf</path>). For example, suppose you want to disable the
splash logo:
</p>

<pre caption="Advanced nvidia configuration in the X configuration">
Section "Device"
  Identifier "nVidia Inc. GeForce2"
  Driver "nvidia"
  <i>Option "NoLogo" "true"</i>
  VideoRam 65536
EndSection
</pre>

</body>
</section>
</chapter>

</guide>