1 <?xml version='1.0' encoding="UTF-8"?>
2 <!-- $Header: /var/cvsroot/gentoo/xml/htdocs/doc/en/openafs.xml,v 1.18 2005/07/02 09:40:23 swift Exp $ -->
3
4 <!DOCTYPE guide SYSTEM "/dtd/guide.dtd">
5
6 <guide link = "/doc/en/openafs.xml">
7 <title>Gentoo Linux OpenAFS Guide</title>
8
9 <author title="Editor">
10 <mail link="darks@gentoo.org">Holger Brueckner</mail>
11 </author>
12 <author title="Editor">
13 <mail link="bennyc@gentoo.org">Benny Chuang</mail>
14 </author>
15 <author title="Editor">
16 <mail link="blubber@gentoo.org">Tiemo Kieft</mail>
17 </author>
18 <author title="Editor">
19 <mail link="fnjordy@gmail.com">Steven McCoy</mail>
20 </author>
21
22 <abstract>
23 This guide shows you how to install an OpenAFS server and client on Gentoo Linux.
24 </abstract>
25
26 <license/>
27
28 <version>0.8</version>
29 <date>2005-07-02</date>
30
31 <chapter>
32 <title>Overview</title>
33 <section>
34 <title>About this Document</title>
35 <body>
36
37 <p>
38 This document provides you with all the necessary steps to install an OpenAFS
39 server on Gentoo Linux. Parts of this document are taken from the AFS FAQ and
40 IBM's Quick Beginnings guide on AFS. Well, never reinvent the wheel :)
41 </p>
42
43 </body>
44 </section>
45 <section>
46 <title>What is AFS?</title>
47 <body>
48
49 <p>
50 AFS is a distributed filesystem that enables co-operating hosts
51 (clients and servers) to efficiently share filesystem resources
52 across both local area and wide area networks. Clients keep a
53 cache of frequently used objects (files) to get quicker
54 access to them.
55 </p>
56
57 <p>
58 AFS is based on a distributed file system originally developed
59 at the Information Technology Center at Carnegie-Mellon University
60 that was called the "Andrew File System". "Andrew" was the name of the
61 research project at CMU - honouring the founders of the University. Once
62 Transarc was formed and AFS became a product, the "Andrew" was dropped to
63 indicate that AFS had gone beyond the Andrew research project and had become
64 a supported, product quality filesystem. However, there were a number of
65 existing cells that rooted their filesystem as /afs. At the time, changing
66 the root of the filesystem was a non-trivial undertaking. So, to save the
67 early AFS sites from having to rename their filesystem, AFS remained as the
68 name and filesystem root.
69 </p>
70
71 </body>
72 </section>
73 <section>
74 <title>What is an AFS cell?</title>
75 <body>
76
77 <p>
78 An AFS cell is a collection of servers grouped together administratively
79 and presenting a single, cohesive filesystem. Typically, an AFS cell is a set
80 of hosts that use the same Internet domain name (for example gentoo.org).
81 Users log into AFS client workstations, which request information and files
82 from the cell's servers on behalf of the users. Users don't need to know on which
83 server a file they are accessing is located. They won't even notice if a server
84 is relocated to another room, since every volume can be replicated and moved
85 to another server without any user noticing. The files are always accessible.
86 Well it's like NFS on steroids :)
87 </p>
88
89 </body>
90 </section>
91 <section>
92 <title>What are the benefits of using AFS?</title>
93 <body>
94
95 <p>
96 The main strengths of AFS are its
97 caching facility (on the client side, typically 100MB to 1GB),
98 security features (Kerberos 4 based, access control lists),
99 simplicity of addressing (you just have one filesystem),
100 scalability (add further servers to your cell as needed)
101 and its communications protocol.
102 </p>
103
104 </body>
105 </section>
106 <section>
107 <title>Where can I get more information?</title>
108 <body>
109
110 <p>
111 Read the <uri link="http://www.angelfire.com/hi/plutonic/afs-faq.html">AFS
112 FAQ</uri>.
113 </p>
114
115 <p>
116 The OpenAFS home page is at <uri
117 link="http://www.openafs.org">www.openafs.org</uri>.
118 </p>
119
120 <p>
121 AFS was originally developed by Transarc, which is now owned by IBM.
122 You can find some information about AFS on
123 <uri link="http://www.transarc.ibm.com/Product/EFS/AFS/index.html">Transarc's
124 webpage</uri>.
125 </p>
126
127 </body>
128 </section>
129 </chapter>
130
131 <chapter>
132 <title>Documentation</title>
133 <section>
134 <title>Getting AFS Documentation</title>
135 <body>
136
137 <p>
138 You can get the original IBM AFS documentation. It is very well written and you
139 really want to read it if it is up to you to administer an AFS server.
140 </p>
141
142 <pre caption="Installing afsdoc">
143 # <i>emerge app-doc/afsdoc</i>
144 </pre>
145
146 </body>
147 </section>
148 </chapter>
149
150 <chapter>
151 <title>Client Installation</title>
152 <section>
153 <title>Preliminary Work</title>
154 <body>
155
156 <note>
157 All commands should be entered on one line! In this document they are
158 sometimes wrapped to two lines to make them easier to read.
159 </note>
160
161 <note>
162 Unfortunately the AFS client needs an ext2 partition for its cache to run
163 correctly, because there are some locking issues with reiserfs. You need to
164 create an ext2 partition of approx. 200MB (more won't hurt) and mount it at
165 <path>/usr/vice/cache</path>.
166 </note>
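
<p>
If you still need to create that cache partition, the commands below are a
minimal sketch; <c>/dev/hda7</c> is only a placeholder, so substitute whatever
spare partition you want to dedicate to the cache.
</p>

<pre caption="Example: preparing the cache partition (device name is a placeholder)">
# <i>mkfs.ext2 /dev/hda7</i>
# <i>mkdir -p /usr/vice/cache</i>
<comment>(Add the partition to /etc/fstab so it is mounted on every boot)</comment>
# <i>echo "/dev/hda7   /usr/vice/cache   ext2   defaults   0 0" &gt;&gt; /etc/fstab</i>
# <i>mount /usr/vice/cache</i>
</pre>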
167
168 <p>
169 You should adjust the two files <path>CellServDB</path> and <path>ThisCell</path> before you
170 build the AFS client. (These files are in <path>/usr/portage/net-fs/openafs/files</path>.)
171 </p>
172
173 <pre caption="Adjusting CellServDB and ThisCell">
174 CellServDB:
175 >netlabs #Cell name
176 10.0.0.1 #storage
177
178 ThisCell:
179 netlabs
180 </pre>
181
182 <warn>
183 Only use spaces inside the <path>CellServDB</path> file. The client will most
184 likely fail if you use TABs.
185 </warn>
186
187 <p>
188 <path>CellServDB</path> tells your client which server(s) it needs to contact for a
189 specific cell. <path>ThisCell</path> should be quite obvious. Normally you use a name
190 which is unique for your organisation. Your (official) domain might be a
191 good choice.
192 </p>
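
<p>
A cell that is served by more than one database server simply lists one server
per line below the cell name. The example below is only an illustration; the
host names and IP addresses are placeholders.
</p>

<pre caption="Example CellServDB entry with two database servers">
&gt;netlabs        #Cell name
10.0.0.1        #storage
10.0.0.2        #backup
</pre>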
193
194 </body>
195 </section>
196 <section>
197 <title>Building the Client</title>
198 <body>
199
200 <pre caption="Installing openafs">
201 # <i>emerge net-fs/openafs</i>
202 </pre>
203
204 <p>
205 After successful compilation you're ready to go.
206 </p>
207
208 </body>
209 </section>
210 <section>
211 <title>Starting AFS on startup</title>
212 <body>
213
214 <p>
215 The following command will create the appropriate links to start your AFS client
216 on system startup.
217 </p>
218
219 <warn>
220 You should always have a running AFS server in your domain when trying to
221 start the AFS client. Your system won't finish booting until it hits a timeout
222 if your AFS server is down (and this timeout is quite long).
223 </warn>
224
225 <pre caption="Adding afs to the default runlevel">
226 # <i>rc-update add afs default</i>
227 </pre>
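
<p>
If you want to start the client right away instead of rebooting, you can also
run the init script by hand (this assumes the script is installed under the
same name used by <c>rc-update</c> above):
</p>

<pre caption="Starting the AFS client manually">
# <i>/etc/init.d/afs start</i>
</pre>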
228
229 </body>
230 </section>
231 </chapter>
232
233 <chapter>
234 <title>Server Installation</title>
235 <section>
236 <title>Building the Server</title>
237 <body>
238
239 <p>
240 The following command will install all necessary binaries for setting up an AFS
241 Server <e>and</e> Client.
242 </p>
243
244 <pre caption="Installing openafs">
245 # <i>emerge net-fs/openafs</i>
246 </pre>
247
248 </body>
249 </section>
250 <section>
251 <title>Starting AFS Server</title>
252 <body>
253
254 <p>
255 You need to remove the sample <path>CellServDB</path> and <path>ThisCell</path> files first.
256 </p>
257
258 <pre caption="Remove sample files">
259 # <i>rm /usr/vice/etc/ThisCell</i>
260 # <i>rm /usr/vice/etc/CellServDB</i>
261 </pre>
262
263 <p>
264 Next you will run the <b>bosserver</b> command to initialize the Basic OverSeer
265 (BOS) Server, which monitors and controls other AFS server processes on its
266 server machine. Think of it as init for the system. Include the <b>-noauth</b>
267 flag to disable authorization checking, since you haven't added the admin user
268 yet.
269 </p>
270
271 <warn>
272 Disabling authorization checking gravely compromises cell security.
273 You must complete all subsequent steps in one uninterrupted pass
274 and must not leave the machine unattended until you restart the BOS Server with
275 authorization checking enabled. Well this is what the AFS documentation says :)
276 </warn>
277
278 <pre caption="Initialize the Basic OverSeer Server">
279 # <i>/usr/afs/bin/bosserver -noauth &amp;</i>
280 </pre>
281
282 <p>
283 Verify that the BOS Server created <path>/usr/vice/etc/CellServDB</path>
284 and <path>/usr/vice/etc/ThisCell</path>:
285 </p>
286
287 <pre caption="Check if CellServDB and ThisCell are created">
288 # <i>ls -al /usr/vice/etc/</i>
289 -rw-r--r-- 1 root root 41 Jun 4 22:21 CellServDB
290 -rw-r--r-- 1 root root 7 Jun 4 22:21 ThisCell
291 </pre>
292
293 </body>
294 </section>
295 <section>
296 <title>Defining Cell Name and Membership for Server Process</title>
297 <body>
298
299 <p>
300 Now assign your cell's name.
301 </p>
302
303 <impo>
304 There are some restrictions on the name format.
305 Two of the most important restrictions are that the name
306 cannot include uppercase letters or more than 64 characters. Remember that
307 your cell name will show up under <path>/afs</path>, so you might want to choose
308 a short one.
309 </impo>
310
311 <note>
312 In the following and in every other instruction in this guide, for the &lt;server
313 name&gt; argument substitute the fully-qualified hostname (such as
314 <b>afs.gentoo.org</b>) of the machine you are installing. For the &lt;cell
315 name&gt; argument substitute your cell's complete name (such as
316 <b>gentoo</b>).
317 </note>
318
319 <p>
320 Run the <b>bos setcellname</b> command to set the cell name:
321 </p>
322
323 <pre caption="Set the cell name">
324 # <i>/usr/afs/bin/bos setcellname &lt;server name&gt; &lt;cell name&gt; -noauth</i>
325 </pre>
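
<p>
You can verify the cell name and the server entry that was just created with
the <b>bos listhosts</b> command. The output below is only an example; it will
show your own cell and host names.
</p>

<pre caption="Verify the cell name">
# <i>/usr/afs/bin/bos listhosts &lt;server name&gt; -noauth</i>
Cell name is netlabs
    Host 1 is afs.netlabs
</pre>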
326
327 </body>
328 </section>
329 <section>
330 <title>Starting the Database Server Process</title>
331 <body>
332
333 <p>
334 Next use the <b>bos create</b> command to create entries for the four database
335 server processes in the <path>/usr/afs/local/BosConfig</path> file. The four
336 processes run on database server machines only.
337 </p>
338
339 <table>
340 <tr>
341 <ti>kaserver</ti>
342 <ti>
343 The Authentication Server maintains the Authentication Database.
344 This can be replaced by a Kerberos 5 daemon. If anybody wants to try that,
345 feel free to update this document :)
346 </ti>
347 </tr>
348 <tr>
349 <ti>buserver</ti>
350 <ti>The Backup Server maintains the Backup Database</ti>
351 </tr>
352 <tr>
353 <ti>ptserver</ti>
354 <ti>The Protection Server maintains the Protection Database</ti>
355 </tr>
356 <tr>
357 <ti>vlserver</ti>
358 <ti>
359 The Volume Location Server maintains the Volume Location Database (VLDB).
360 Very important :)
361 </ti>
362 </tr>
363 </table>
364
365 <pre caption="Create entries for the database processes">
366 # <i>/usr/afs/bin/bos create &lt;server name&gt; kaserver simple /usr/afs/bin/kaserver -cell &lt;cell name&gt; -noauth</i>
367 # <i>/usr/afs/bin/bos create &lt;server name&gt; buserver simple /usr/afs/bin/buserver -cell &lt;cell name&gt; -noauth</i>
368 # <i>/usr/afs/bin/bos create &lt;server name&gt; ptserver simple /usr/afs/bin/ptserver -cell &lt;cell name&gt; -noauth</i>
369 # <i>/usr/afs/bin/bos create &lt;server name&gt; vlserver simple /usr/afs/bin/vlserver -cell &lt;cell name&gt; -noauth</i>
370 </pre>
371
372 <p>
373 You can verify that all servers are running with the <b>bos status</b> command:
374 </p>
375
376 <pre caption="Check if all the servers are running">
377 # <i>/usr/afs/bin/bos status &lt;server name&gt; -noauth</i>
378 Instance kaserver, currently running normally.
379 Instance buserver, currently running normally.
380 Instance ptserver, currently running normally.
381 Instance vlserver, currently running normally.
382 </pre>
383
384 </body>
385 </section>
386 <section>
387 <title>Initializing Cell Security</title>
388 <body>
389
390 <p>
391 Now we'll initialize the cell's security mechanisms. We'll begin by creating
392 the following two initial entries in the Authentication Database: The main
393 administrative account, called <b>admin</b> by convention, and an entry for
394 the AFS server processes, called <b>afs</b>. No user logs in under the
395 identity <b>afs</b>, but the Authentication Server's Ticket Granting
396 Service (TGS) module uses the account to encrypt the server tickets that
397 it grants to AFS clients. This sounds pretty much like Kerberos :)
398 </p>
399
400 <p>
401 Enter <b>kas</b> interactive mode:
402 </p>
403
404 <pre caption="Entering the interactive mode">
405 # <i>/usr/afs/bin/kas -cell &lt;cell name&gt; -noauth</i>
406 ka&gt; <i>create afs</i>
407 initial_password:
408 Verifying, please re-enter initial_password:
409 ka&gt; <i>create admin</i>
410 initial_password:
411 Verifying, please re-enter initial_password:
412 ka&gt; <i>examine afs</i>
413
414 User data for afs
415 key (0) cksum is 2651715259, last cpw: Mon Jun 4 20:49:30 2001
416 password will never expire.
417 An unlimited number of unsuccessful authentications is permitted.
418 entry never expires. Max ticket lifetime 100.00 hours.
419 last mod on Mon Jun 4 20:49:30 2001 by &lt;none&gt;
420 permit password reuse
421 ka&gt; <i>setfields admin -flags admin</i>
422 ka&gt; <i>examine admin</i>
423
424 User data for admin (ADMIN)
425 key (0) cksum is 2651715259, last cpw: Mon Jun 4 20:49:59 2001
426 password will never expire.
427 An unlimited number of unsuccessful authentications is permitted.
428 entry never expires. Max ticket lifetime 25.00 hours.
429 last mod on Mon Jun 4 20:51:10 2001 by &lt;none&gt;
430 permit password reuse
431 ka&gt;
432 </pre>
433
434 <p>
435 Run the <b>bos adduser</b> command to add the <b>admin</b> user to
436 the <path>/usr/afs/etc/UserList</path>.
437 </p>
438
439 <pre caption="Add the admin user to the UserList">
440 # <i>/usr/afs/bin/bos adduser &lt;server name&gt; admin -cell &lt;cell name&gt; -noauth</i>
441 </pre>
442
443 <p>
444 Issue the <b>bos addkey</b> command to define the AFS Server
445 encryption key in <path>/usr/afs/etc/KeyFile</path>.
446 </p>
447
448 <note>
449 When asked for the input key, give the password you entered when creating
450 the <b>afs</b> entry with <b>kas</b>.
451 </note>
452
453 <pre caption="Entering the password">
454 # <i>/usr/afs/bin/bos addkey &lt;server name&gt; -kvno 0 -cell &lt;cell name&gt; -noauth</i>
455 input key:
456 Retype input key:
457 </pre>
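
<p>
If you want to double-check that the key really ended up in the
<path>KeyFile</path>, you can list it with <b>bos listkeys</b>. The checksum
shown below is just an example and will differ on your system.
</p>

<pre caption="List the server encryption keys">
# <i>/usr/afs/bin/bos listkeys &lt;server name&gt; -cell &lt;cell name&gt; -noauth</i>
key 0 has cksum 2651715259
</pre>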
458
459 <p>
460 Issue the <b>pts createuser</b> command to create a Protection Database
461 entry for the admin user
462 </p>
463
464 <note>
465 By default, the Protection Server assigns AFS UID 1 to the <b>admin</b> user,
466 because it is the first user entry you are creating. If the local password file
467 (<path>/etc/passwd</path> or equivalent) already has an entry for <b>admin</b> that assigns
468 a different UID, use the <b>-id</b> argument to create matching UIDs.
469 </note>
470
471 <pre caption="Create a Protection Database entry for the database user">
472 # <i>/usr/afs/bin/pts createuser -name admin -cell &lt;cell name&gt; [-id &lt;AFS UID&gt;] -noauth</i>
473 </pre>
474
475 <p>
476 Issue the <b>pts adduser</b> command to make the <b>admin</b> user a member
477 of the system:administrators group, and the <b>pts membership</b> command to
478 verify the new membership:
479 </p>
480
481 <pre caption="Make admin a member of the administrators group and verify">
482 # <i>/usr/afs/bin/pts adduser admin system:administrators -cell &lt;cell name&gt; -noauth</i>
483 # <i>/usr/afs/bin/pts membership admin -cell &lt;cell name&gt; -noauth</i>
484 Groups admin (id: 1) is a member of:
485 system:administrators
486 </pre>
487
488 <p>
489 Restart all AFS server processes:
490 </p>
491
492 <pre caption="Restart all AFS server processes">
493 # <i>/usr/afs/bin/bos restart &lt;server name&gt; -all -cell &lt;cell name&gt; -noauth</i>
494 </pre>
495
496 </body>
497 </section>
498 <section>
499 <title>Starting the File Server, Volume Server and Salvager</title>
500 <body>
501
502 <p>
503 Start the <b>fs</b> process, which consists of the File Server, Volume Server
504 and Salvager (fileserver, volserver and salvager processes).
505 </p>
506
507 <pre caption="Start the fs process">
508 # <i>/usr/afs/bin/bos create &lt;server name&gt; fs fs /usr/afs/bin/fileserver /usr/afs/bin/volserver /usr/afs/bin/salvager -cell &lt;cell name&gt; -noauth</i>
509 </pre>
510
511 <p>
512 Verify that all processes are running:
513 </p>
514
515 <pre caption="Check if all processes are running">
516 # <i>/usr/afs/bin/bos status &lt;server name&gt; -long -noauth</i>
517 Instance kaserver, (type is simple) currently running normally.
518 Process last started at Mon Jun 4 21:07:17 2001 (2 proc starts)
519 Last exit at Mon Jun 4 21:07:17 2001
520 Command 1 is '/usr/afs/bin/kaserver'
521
522 Instance buserver, (type is simple) currently running normally.
523 Process last started at Mon Jun 4 21:07:17 2001 (2 proc starts)
524 Last exit at Mon Jun 4 21:07:17 2001
525 Command 1 is '/usr/afs/bin/buserver'
526
527 Instance ptserver, (type is simple) currently running normally.
528 Process last started at Mon Jun 4 21:07:17 2001 (2 proc starts)
529 Last exit at Mon Jun 4 21:07:17 2001
530 Command 1 is '/usr/afs/bin/ptserver'
531
532 Instance vlserver, (type is simple) currently running normally.
533 Process last started at Mon Jun 4 21:07:17 2001 (2 proc starts)
534 Last exit at Mon Jun 4 21:07:17 2001
535 Command 1 is '/usr/afs/bin/vlserver'
536
537 Instance fs, (type is fs) currently running normally.
538 Auxiliary status is: file server running.
539 Process last started at Mon Jun 4 21:09:30 2001 (2 proc starts)
540 Command 1 is '/usr/afs/bin/fileserver'
541 Command 2 is '/usr/afs/bin/volserver'
542 Command 3 is '/usr/afs/bin/salvager'
543 </pre>
544
545 <p>
546 Your next action depends on whether you have ever run AFS file server machines
547 in the cell:
548 </p>
549
550 <p>
551 If you are installing the first AFS server ever in the cell, create the
552 first AFS volume, <b>root.afs</b>.
553 </p>
554
555 <note>
556 For the partition name argument, substitute the name of one of the machine's
557 AFS Server partitions. By convention
558 these partitions are named <path>/vicepx</path>, where x is in the range of a-z.
559 </note>
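
<p>
If you have not prepared such a partition yet, the following sketch shows one
way to do it; <c>/dev/hdb1</c> is only a placeholder for a spare partition on
your server.
</p>

<pre caption="Example: preparing /vicepa (device name is a placeholder)">
# <i>mkdir /vicepa</i>
# <i>mkfs.ext2 /dev/hdb1</i>
# <i>mount /dev/hdb1 /vicepa</i>
<comment>(Don't forget to add the partition to /etc/fstab as well)</comment>
</pre>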
560
561 <pre caption="Create the root.afs volume">
562 # <i>/usr/afs/bin/vos create &lt;server name&gt; &lt;partition name&gt; root.afs -cell &lt;cell name&gt; -noauth</i>
563 </pre>
564
565 <p>
566 If there are existing AFS file server machines and volumes in the cell,
567 issue the <b>vos syncvldb</b> and <b>vos syncserv</b> commands to synchronize
568 the VLDB (Volume Location Database) with the actual state of volumes on the
569 local machine. This will copy all necessary data to your new server.
570 </p>
571
572 <p>
573 If the command fails with the message "partition /vicepa does not exist on
574 the server", ensure that the partition is mounted before running OpenAFS
575 servers, or mount the directory and restart the processes using
576 <c>/usr/afs/bin/bos restart &lt;server name&gt; -all -cell &lt;cell
577 name&gt; -noauth</c>.
578 </p>
579
580 <pre caption="Synchronise the VLDB">
581 # <i>/usr/afs/bin/vos syncvldb &lt;server name&gt; -cell &lt;cell name&gt; -verbose -noauth</i>
582 # <i>/usr/afs/bin/vos syncserv &lt;server name&gt; -cell &lt;cell name&gt; -verbose -noauth</i>
583 </pre>
584
585 </body>
586 </section>
587 <section>
588 <title>Starting the Server Portion of the Update Server</title>
589 <body>
590
591 <pre caption="Start the update server">
592 # <i>/usr/afs/bin/bos create &lt;server name&gt;
593 upserver simple "/usr/afs/bin/upserver
594 -crypt /usr/afs/etc -clear /usr/afs/bin"
595 -cell &lt;cell name&gt; -noauth</i>
596 </pre>
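
<p>
As with the database server processes, you can check that the new instance
came up properly. The output is an example and may show more detail on your
system.
</p>

<pre caption="Check the upserver instance">
# <i>/usr/afs/bin/bos status &lt;server name&gt; upserver -noauth</i>
Instance upserver, currently running normally.
</pre>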
597
598 </body>
599 </section>
600 <section>
601 <title>Configuring the Top Level of the AFS filespace</title>
602 <body>
603
604 <p>
605 First you need to set some ACLs, so that any user can look up
606 <path>/afs</path>.
607 </p>
608
609 <pre caption="Set access control lists">
610 # <i>/usr/afs/bin/fs setacl /afs system:anyuser rl</i>
611 </pre>
612
613 <p>
614 Then you need to create the root volume, mount it read-only at
615 <path>/afs/&lt;cell name&gt;</path> and read/write at <path>/afs/.&lt;cell
616 name&gt;</path>:
617 </p>
618
619 <pre caption="Prepare the root volume">
620 # <i>/usr/afs/bin/vos create &lt;server name&gt; &lt;partition name&gt; root.cell</i>
621 # <i>/usr/afs/bin/fs mkmount /afs/&lt;cell name&gt; root.cell </i>
622 # <i>/usr/afs/bin/fs setacl /afs/&lt;cell name&gt; system:anyuser rl</i>
623 # <i>/usr/afs/bin/fs mkmount /afs/.&lt;cell name&gt; root.cell -rw</i>
624 </pre>
625
626 <p>
627 Finally you're done!!! You should now have a working AFS file server
628 on your local network. Time to get a big
629 cup of coffee and print out the AFS documentation!!!
630 </p>
631
632 <note>
633 It is very important for the AFS server to function properly that all system
634 clocks are synchronized. This is best accomplished by installing an NTP server
635 on one machine (e.g. the AFS server) and synchronizing all client clocks
636 with an NTP client. This can also be done by the AFS client.
637 </note>
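
<p>
One simple way to set this up on Gentoo is sketched below; the package and
init script names may differ depending on your portage tree.
</p>

<pre caption="Installing an NTP server (sketch)">
# <i>emerge net-misc/ntp</i>
# <i>rc-update add ntpd default</i>
# <i>/etc/init.d/ntpd start</i>
</pre>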
638
639 </body>
640 </section>
641 </chapter>
642
643 <chapter>
644 <title>Basic Administration</title>
645 <section>
646 <title>Disclaimer</title>
647 <body>
648
649 <p>
650 OpenAFS is an extensive technology. Please read the AFS documentation for more
651 information. We only list a few administrative tasks in this chapter.
652 </p>
653
654 </body>
655 </section>
656 <section>
657 <title>Configuring PAM to Acquire an AFS Token on Login</title>
658 <body>
659
660 <p>
661 To use AFS you need to authenticate against the KA Server if using
662 an implementation of AFS Kerberos 4, or against a Kerberos 5 KDC if using
663 MIT, Heimdal, or ShiShi Kerberos 5. However, in order to log in to a
664 machine you will also need a user account; this can be local in
665 <path>/etc/passwd</path>, NIS, LDAP (OpenLDAP), or a Hesiod database. PAM allows
666 Gentoo to tie the authentication against AFS to the login on the user
667 account.
668 </p>
669
670 <p>
671 You will need to update <path>/etc/pam.d/system-auth</path>, which is used by the
672 other configurations. "use_first_pass" indicates that the password already
673 entered at login is used, and "ignore_root" stops the local super
674 user from being checked, in order to still allow login if AFS or the network
675 fails.
676 </p>
677
678 <pre caption="/etc/pam.d/system-auth">
679 auth required /lib/security/pam_env.so
680 auth sufficient /lib/security/pam_unix.so likeauth nullok
681 auth sufficient /usr/afsws/lib/pam_afs.so.1 use_first_pass ignore_root
682 auth required /lib/security/pam_deny.so
683
684 account required /lib/security/pam_unix.so
685
686 password required /lib/security/pam_cracklib.so retry=3
687 password sufficient /lib/security/pam_unix.so nullok md5 shadow use_authtok
688 password required /lib/security/pam_deny.so
689
690 session required /lib/security/pam_limits.so
691 session required /lib/security/pam_unix.so
692 </pre>
693
694 <p>
695 In order for <c>su</c> to keep the real user's token and to prevent local
696 users from gaining AFS access, change <path>/etc/pam.d/su</path> as follows:
697 </p>
698
699 <pre caption="/etc/pam.d/su">
700 <comment># Here, users with uid &gt; 100 are considered to belong to AFS and users with
701 # uid &lt;= 100 are ignored by pam_afs.</comment>
702 auth sufficient /usr/afsws/lib/pam_afs.so.1 ignore_uid 100
703
704 auth sufficient /lib/security/pam_rootok.so
705
706 <comment># If you want to restrict users being allowed to su even more,
707 # create /etc/security/suauth.allow (or to that matter) that is only
708 # writable by root, and add users that are allowed to su to that
709 # file, one per line.
710 #auth required /lib/security/pam_listfile.so item=ruser \
711 # sense=allow onerr=fail file=/etc/security/suauth.allow
712
713 # Uncomment this to allow users in the wheel group to su without
714 # entering a passwd.
715 #auth sufficient /lib/security/pam_wheel.so use_uid trust
716
717 # Alternatively to above, you can implement a list of users that do
718 # not need to supply a passwd with a list.
719 #auth sufficient /lib/security/pam_listfile.so item=ruser \
720 # sense=allow onerr=fail file=/etc/security/suauth.nopass
721
722 # Comment this to allow any user, even those not in the 'wheel'
723 # group to su</comment>
724 auth required /lib/security/pam_wheel.so use_uid
725
726 auth required /lib/security/pam_stack.so service=system-auth
727
728 account required /lib/security/pam_stack.so service=system-auth
729
730 password required /lib/security/pam_stack.so service=system-auth
731
732 session required /lib/security/pam_stack.so service=system-auth
733 session optional /lib/security/pam_xauth.so
734
735 <comment># Here we prevent the real user id's token from being dropped</comment>
736 session optional /usr/afsws/lib/pam_afs.so.1 no_unlog
737 </pre>
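
<p>
After updating the PAM configuration you can check that token acquisition
works at all by authenticating by hand with the AFS tools. This is only a
sanity check; the exact paths and output depend on your installation.
</p>

<pre caption="Manually acquiring and listing an AFS token">
# <i>klog admin</i>
Password:
# <i>tokens</i>
Tokens held by the Cache Manager:

User's (AFS ID 1) tokens for afs@netlabs [Expires Jun  5 22:21]
   --End of list--
</pre>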
738
739 </body>
740 </section>
741 </chapter>
742
743 </guide>
