Revision 1.28 - Tue Dec 13 20:22:23 2011 UTC by swift
Branch: MAIN
Changes since 1.27: +5 -7 lines
Fix #394271 - Drop dead link; AFS has been discontinued by IBM since 2005. Thanks to Miguel de Val-Borro for reporting.

1 <?xml version='1.0' encoding="UTF-8"?>
2 <!-- $Header: /var/cvsroot/gentoo/xml/htdocs/doc/en/openafs.xml,v 1.27 2011/09/04 17:53:40 swift Exp $ -->
3
4 <!DOCTYPE guide SYSTEM "/dtd/guide.dtd">
5
6 <guide>
7 <title>Gentoo Linux OpenAFS Guide</title>
8
9 <author title="Editor">
10 <mail link="stefaan@gentoo.org">Stefaan De Roeck</mail>
11 </author>
12 <author title="Editor">
13 <mail link="darks@gentoo.org">Holger Brueckner</mail>
14 </author>
15 <author title="Editor">
16 <mail link="bennyc@gentoo.org">Benny Chuang</mail>
17 </author>
18 <author title="Editor">
19 <mail link="blubber@gentoo.org">Tiemo Kieft</mail>
20 </author>
21 <author title="Editor">
22 <mail link="fnjordy@gmail.com">Steven McCoy</mail>
23 </author>
24 <author title="Editor">
25 <mail link="fox2mike@gentoo.org">Shyam Mani</mail>
26 </author>
27
28 <abstract>
29 This guide shows you how to install an OpenAFS server and client on Gentoo
30 Linux.
31 </abstract>
32
33 <!-- The content of this document is licensed under the CC-BY-SA license -->
34 <!-- See http://creativecommons.org/licenses/by-sa/2.5 -->
35 <license/>
36
37 <version>2</version>
38 <date>2011-12-13</date>
39
40 <chapter>
41 <title>Overview</title>
42 <section>
43 <title>About this Document</title>
44 <body>
45
46 <p>
47 This document provides you with all necessary steps to install an OpenAFS
48 server on Gentoo Linux. Parts of this document are taken from the AFS FAQ and
49 IBM's Quick Beginnings guide on AFS. Well, never reinvent the wheel. :)
50 </p>
51
52 </body>
53 </section>
54 <section>
55 <title>What is AFS?</title>
56 <body>
57
58 <p>
AFS is a distributed filesystem that enables co-operating hosts
(clients and servers) to share filesystem resources efficiently
across both local area and wide area networks. Clients keep a local
cache of frequently used objects (files) for quicker access to them.
64 </p>
65
66 <p>
67 AFS is based on a distributed file system originally developed
68 at the Information Technology Center at Carnegie-Mellon University
69 that was called the "Andrew File System". "Andrew" was the name of the
70 research project at CMU - honouring the founders of the University. Once
71 Transarc was formed and AFS became a product, the "Andrew" was dropped to
72 indicate that AFS had gone beyond the Andrew research project and had become
73 a supported, product quality filesystem. However, there were a number of
74 existing cells that rooted their filesystem as /afs. At the time, changing
75 the root of the filesystem was a non-trivial undertaking. So, to save the
76 early AFS sites from having to rename their filesystem, AFS remained as the
77 name and filesystem root.
78 </p>
79
80 </body>
81 </section>
82 <section>
83 <title>What is an AFS cell?</title>
84 <body>
85
86 <p>
An AFS cell is a collection of servers grouped together administratively,
presenting a single, cohesive filesystem. Typically, an AFS cell is a set of
hosts that use the same Internet domain name (for example, gentoo.org). Users
log into AFS client workstations, which request information and files from the
cell's servers on behalf of the users. Users will not know which server holds
the file they are accessing; they will not even notice if a server is moved to
another room, since every volume can be replicated and moved to another server
without any user noticing. The files are always
accessible. Well, it's like NFS on steroids :)
96 </p>
97
98 </body>
99 </section>
100 <section>
101 <title>What are the benefits of using AFS?</title>
102 <body>
103
<p>
The main strengths of AFS are its:
</p>

<ul>
<li>caching facility (on the client side, typically 100 MB to 1 GB)</li>
<li>security features (Kerberos 4 based, access control lists)</li>
<li>simplicity of addressing (you just have one filesystem)</li>
<li>scalability (add further servers to your cell as needed)</li>
<li>communications protocol</li>
</ul>
112
113 </body>
114 </section>
115 <section>
116 <title>Where can I get more information?</title>
117 <body>
118
119 <p>
120 Read the <uri link="http://www.angelfire.com/hi/plutonic/afs-faq.html">AFS
121 FAQ</uri>.
122 </p>
123
124 <p>
125 OpenAFS main page is at <uri
126 link="http://www.openafs.org">www.openafs.org</uri>.
127 </p>
128
129 <p>
130 AFS was originally developed by Transarc which is now owned by IBM. Since April
131 2005, it has been withdrawn from IBM's product catalogue.
132 </p>
133
134 </body>
135 </section>
136 <section>
137 <title>How Can I Debug Problems?</title>
138 <body>
139
140 <p>
141 OpenAFS has great logging facilities. However, by default it logs straight into
142 its own logs instead of through the system logging facilities you have on your
143 system. To have the servers log through your system logger, use the
144 <c>-syslog</c> option for all <c>bos</c> commands.
145 </p>
146
147 </body>
148 </section>
149 </chapter>
150
151 <chapter>
152 <title>Upgrading from previous versions</title>
153 <section>
154 <title>Introduction</title>
155 <body>
156
<p>
This section aims to help you through the process of upgrading an existing
OpenAFS installation to OpenAFS version 1.4.0 or higher (or to a 1.2.x version
starting from 1.2.13; the latter is not handled specifically, as most people
will want 1.4 for, among other things, Linux 2.6 support, large file support
and bug fixes).
</p>
163
<p>
If you're dealing with a clean install of a 1.4 version of OpenAFS, then you
can safely skip this chapter. However, if you're upgrading from a previous
version, we strongly urge you to follow the guidelines in the next sections.
The transition script in the ebuild is designed to assist you in upgrading and
restarting quickly. Please note that, for safety reasons, it will not delete
configuration files and startup scripts in the old locations, nor will it
automatically change your boot configuration to use the new scripts. If you
need further convincing: using an old OpenAFS kernel module together with the
updated system binaries may very well cause your kernel to freak out. So,
let's read on for a clean and easy transition, shall we?
</p>
176
<note>
This chapter has been written with many different system configurations in
mind. Still, due to peculiar tweaks a user may have made, their specific
situation may not be described here. A user self-confident enough to tweak
their system should be experienced enough to apply the remarks given here
where appropriate. Vice versa, a user who has done little to their system
beyond installing the previous ebuild can skip most of the warnings further
on.
</note>
186
187 </body>
188 </section>
189 <section>
190 <title>Differences to previous versions</title>
191 <body>
192
<p>
Traditionally, OpenAFS has used the same path conventions that IBM Transarc
Labs used before the code was forked. Understandably, old AFS setups continue
using these legacy path conventions. More recent setups conform to the FHS by
using standard locations (as seen in many Linux distributions). The following
table is a compilation of the configure script and the README accompanying the
OpenAFS distribution tarballs:
</p>
201
202 <table>
203 <tr>
204 <th>Directory</th>
205 <th>Purpose</th>
206 <th>Transarc Mode</th>
207 <th>Default Mode</th>
<th>Translation to Gentoo</th>
209 </tr>
210 <tr>
211 <ti>viceetcdir</ti>
212 <ti>Client configuration</ti>
213 <ti>/usr/vice/etc</ti>
214 <ti>$(sysconfdir)/openafs</ti>
215 <ti>/etc/openafs</ti>
216 </tr>
217 <tr>
218 <ti>unnamed</ti>
219 <ti>Client binaries</ti>
220 <ti>unspecified</ti>
221 <ti>$(bindir)</ti>
222 <ti>/usr/bin</ti>
223 </tr>
224 <tr>
225 <ti>afsconfdir</ti>
226 <ti>Server configuration</ti>
227 <ti>/usr/afs/etc</ti>
228 <ti>$(sysconfdir)/openafs/server</ti>
229 <ti>/etc/openafs/server</ti>
230 </tr>
231 <tr>
232 <ti>afssrvdir</ti>
233 <ti>Internal server binaries</ti>
234 <ti>/usr/afs/bin (servers)</ti>
235 <ti>$(libexecdir)/openafs</ti>
236 <ti>/usr/libexec/openafs</ti>
237 </tr>
238 <tr>
239 <ti>afslocaldir</ti>
240 <ti>Server state</ti>
241 <ti>/usr/afs/local</ti>
242 <ti>$(localstatedir)/openafs</ti>
243 <ti>/var/lib/openafs</ti>
244 </tr>
245 <tr>
246 <ti>afsdbdir</ti>
247 <ti>Auth/serverlist/... databases</ti>
248 <ti>/usr/afs/db</ti>
249 <ti>$(localstatedir)/openafs/db</ti>
250 <ti>/var/lib/openafs/db</ti>
251 </tr>
252 <tr>
253 <ti>afslogdir</ti>
254 <ti>Log files</ti>
255 <ti>/usr/afs/logs</ti>
256 <ti>$(localstatedir)/openafs/logs</ti>
257 <ti>/var/lib/openafs/logs</ti>
258 </tr>
259 <tr>
260 <ti>afsbosconfig</ti>
261 <ti>Overseer config</ti>
262 <ti>$(afslocaldir)/BosConfig</ti>
263 <ti>$(afsconfdir)/BosConfig</ti>
264 <ti>/etc/openafs/BosConfig</ti>
265 </tr>
266 </table>
267
268 <p>
269 There are some other oddities, like binaries being put in
270 <path>/usr/vice/etc</path> in Transarc mode, but this list is not intended
to be comprehensive. It is rather meant to serve as a reference for those
troubleshooting the config file transition.
273 </p>
274
275 <p>
276 Also as a result of the path changes, the default disk cache location has
277 been changed from <path>/usr/vice/cache</path> to
278 <path>/var/cache/openafs</path>.
279 </p>
280
281 <p>
282 Furthermore, the init-script has been split into a client and a server part.
283 You used to have <path>/etc/init.d/afs</path>, but now you'll end up with both
284 <path>/etc/init.d/openafs-client</path> and
285 <path>/etc/init.d/openafs-server</path>.
286 Consequently, the configuration file <path>/etc/conf.d/afs</path> has been split
287 into <path>/etc/conf.d/openafs-client</path> and
<path>/etc/conf.d/openafs-server</path>. The options in
<path>/etc/conf.d/afs</path> that turned either the client or the server on
or off are now obsolete.
291 </p>
292
293 <p>
294 Another change to the init script is that it doesn't check your disk cache
295 setup anymore. The old code required that a separate ext2 partition be
296 mounted at <path>/usr/vice/cache</path>. There were some problems with that:
297 </p>
298
299 <ul>
300 <li>
301 Though it's a very logical setup, your cache doesn't need to be on a
302 separate partition. As long as you make sure that the amount of space
303 specified in <path>/etc/openafs/cacheinfo</path> really is available
304 for disk cache usage, you're safe. So there is no real problem with
305 having the cache on your root partition.
306 </li>
307 <li>
308 Some people use soft-links to point to the real disk cache location.
309 The init script didn't like this, because then this cache location
310 didn't turn up in <path>/proc/mounts</path>.
311 </li>
312 <li>
313 Many prefer ext3 over ext2 nowadays. Both filesystems are valid for
usage as a disk cache. Any other filesystem is unsupported (for example,
don't try reiserfs: you'll get a huge warning, and you can expect failure
afterwards).
317 </li>
318 </ul>
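The first point above can be checked mechanically. The sketch below is an illustration with made-up function names: it extracts the cache size from a cacheinfo line (format mountpoint:cachedir:blocks) and compares it against a number of free 1K blocks on the cache partition:

```shell
# Hypothetical helpers for sanity-checking a cacheinfo line such as
# /afs:/var/cache/openafs:200000

cache_blocks() {
  # Print the cache size (third colon-separated field) of a cacheinfo line.
  printf '%s\n' "$1" | cut -d: -f3
}

cache_fits() {
  # Usage: cache_fits [cacheinfo-line] [free-1K-blocks-on-that-partition]
  # Succeeds when the configured cache size fits in the available space.
  [ "$(cache_blocks "$1")" -le "$2" ]
}
```

The second argument would typically come from the available-blocks column of a `df -k` run on the cache directory.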
319
320 </body>
321 </section>
322 <section>
323 <title>Transition to the new paths</title>
324 <body>
325
326 <p>
327 First of all, emerging a newer OpenAFS version should not overwrite any old
328 configuration files. The script is designed to not change any files
329 already present on the system. So even if you have a totally messed up
330 configuration with a mix of old and new locations, the script should not
331 cause further problems. Also, if a running OpenAFS server is detected, the
332 installation will abort, preventing possible database corruption.
333 </p>
334
335 <p>
336 One caveat though -- there have been ebuilds floating around the internet that
337 partially disable the protection that Gentoo puts on <path>/etc</path>. These
338 ebuilds have never been distributed by Gentoo. You might want to check the
339 <c>CONFIG_PROTECT_MASK</c> variable in the output of the following command:
340 </p>
341
342 <pre caption="Checking your CONFIG_PROTECT_MASK">
343 # <i>emerge info | grep "CONFIG_PROTECT_MASK"</i>
344 CONFIG_PROTECT_MASK="/etc/gconf /etc/terminfo /etc/texmf/web2c /etc/env.d"
345 </pre>
346
<p>
Though nothing in this ebuild would touch the files in <path>/etc/afs</path>,
upgrading will cause the removal of your older OpenAFS installation. Files in
directories listed in <c>CONFIG_PROTECT_MASK</c> that belong to the older
installation will be removed as well.
</p>
353
<p>
It should be clear to experienced users that if they have tweaked their
system by manually adding soft links (e.g. <path>/usr/afs/etc</path> to
<path>/etc/openafs</path>), the new installation may run fine while still
using the old configuration files. In that case, no real transition has taken
place, and cleaning up the old installation will result in a broken OpenAFS
config.
</p>
361
362 <p>
363 Now that you know what doesn't happen, you may want to know what does:
364 </p>
365
366 <ul>
367 <li>
368 <path>/usr/afs/etc</path> is copied to <path>/etc/openafs/server</path>
369 </li>
370 <li>
371 <path>/usr/vice/etc</path> is copied to <path>/etc/openafs</path>
372 </li>
373 <li>
374 <path>/usr/afs/local</path> is copied to <path>/var/lib/openafs</path>
375 </li>
376 <li>
<path>/usr/afs/local/BosConfig</path> is copied to
<path>/etc/openafs/BosConfig</path>, while replacing occurrences of
<path>/usr/afs/bin/</path> with <path>/usr/libexec/openafs/</path>,
<path>/usr/afs/etc</path> with <path>/etc/openafs/server</path>
and <path>/usr/afs/bin</path> (without the trailing slash this time) with
<path>/usr/bin</path>
383 </li>
384 <li>
385 <path>/usr/afs/db</path> is copied to <path>/var/lib/openafs/db</path>
386 </li>
387 <li>
388 The configuration file <path>/etc/conf.d/afs</path> is copied to
389 <path>/etc/conf.d/openafs-client</path>, as all known old options were
390 destined for client usage only.
391 </li>
392 </ul>
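The BosConfig path rewrite described above can be sketched as a sed filter. This is an illustration only, not the actual transition script from the ebuild, and the function name is made up; note that the trailing-slash pattern must be substituted before the bare one:

```shell
# Rewrite legacy Transarc paths in a BosConfig stream to Gentoo locations.
# Order matters: /usr/afs/bin/ must be handled before /usr/afs/bin.
rewrite_bosconfig() {
  sed -e 's|/usr/afs/bin/|/usr/libexec/openafs/|g' \
      -e 's|/usr/afs/etc|/etc/openafs/server|g' \
      -e 's|/usr/afs/bin|/usr/bin|g'
}
```

Run with the old BosConfig on stdin to see what the rewritten file would look like.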
393
394 </body>
395 </section>
396 <section>
397 <title>The upgrade itself</title>
398 <body>
399
<p>
So you haven't got an OpenAFS server setup? Or maybe you do, but the previous
sections have informed you about what is going to happen, and you're still
ready for it?
</p>
405
406 <p>
407 Let's go ahead with it then!
408 </p>
409
410 <p>
411 If you do have a server running, you want to shut it down now.
412 </p>
413
414 <pre caption="Stopping OpenAFS (in case you have a server)">
415 # <i>/etc/init.d/afs stop</i>
416 </pre>
417
418 <p>
419 And then the upgrade itself.
420 </p>
421
422 <pre caption="Now upgrade!">
423 # <i>emerge -u openafs</i>
424 </pre>
425
426 </body>
427 </section>
428 <section>
429 <title>Restarting OpenAFS</title>
430 <body>
431
<p>
If you had an OpenAFS client running, you have not been forced to shut it
down yet. Now is the time to do that.
</p>
436
437 <pre caption="Stopping OpenAFS client after upgrade">
438 # <i>/etc/init.d/afs stop</i>
439 </pre>
440
<p>
You will probably want to keep the downtime to a minimum, so restart your
OpenAFS server right away.
</p>
445
446 <pre caption="Restarting OpenAFS server after upgrade">
447 # <i>/etc/init.d/openafs-server start</i>
448 </pre>
449
450 <p>
451 You can check whether it's running properly with the following command:
452 </p>
453
454 <pre caption="Checking OpenAFS server status">
455 # <i>/usr/bin/bos status localhost -localauth</i>
456 </pre>
457
458 <p>
459 Before starting the OpenAFS client again, please take time to check your
460 cache settings. They are determined by <path>/etc/openafs/cacheinfo</path>.
461 To restart your OpenAFS client installation, please type the following:
462 </p>
463
464 <pre caption="Restarting OpenAFS client after upgrade">
465 # <i>/etc/init.d/openafs-client start</i>
466 </pre>
467
468 </body>
469 </section>
470 <section>
471 <title>Cleaning up afterwards</title>
472 <body>
473
474 <p>
475 Before cleaning up, please make really sure that everything runs smoothly and
476 that you have restarted after the upgrade (otherwise, you may still be running
477 your old installation).
478 </p>
479
<impo>
Make sure that you're not using <path>/usr/vice/cache</path> for your disk
cache if you are deleting <path>/usr/vice</path>!
</impo>
484
485 <p>
486 The following directories may be safely removed from the system:
487 </p>
488
489 <ul>
490 <li><path>/etc/afs</path></li>
491 <li><path>/usr/vice</path></li>
492 <li><path>/usr/afs</path></li>
493 <li><path>/usr/afsws</path></li>
494 </ul>
495
496 <p>
497 The following files are also unnecessary:
498 </p>
499
500 <ul>
501 <li><path>/etc/init.d/afs</path></li>
502 <li><path>/etc/conf.d/afs</path></li>
503 </ul>
504
505 <pre caption="Removing the old files">
506 # <i>tar czf /root/oldafs-backup.tgz /etc/afs /usr/vice /usr/afs /usr/afsws</i>
507 # <i>rm -R /etc/afs /usr/vice /usr/afs /usr/afsws</i>
508 # <i>rm /etc/init.d/afs /etc/conf.d/afs</i>
509 </pre>
510
<p>
In case you've previously used the ebuilds <c>=openafs-1.2.13</c> or
<c>=openafs-1.3.85</c>, you may also have some other unnecessary files:
</p>
515
516 <ul>
517 <li><path>/etc/init.d/afs-client</path></li>
518 <li><path>/etc/init.d/afs-server</path></li>
519 <li><path>/etc/conf.d/afs-client</path></li>
520 <li><path>/etc/conf.d/afs-server</path></li>
521 </ul>
522
523 </body>
524 </section>
525 <section>
526 <title>Init Script changes</title>
527 <body>
528
<p>
Most people will have their systems configured to start the OpenAFS client
and server automatically on startup. Those who don't can safely skip this
section. If you had your system configured to start them automatically, you
will need to re-enable this, because the names of the init scripts have
changed.
</p>
536
537 <pre caption="Re-enabling OpenAFS startup at boot time">
538 # <i>rc-update del afs default</i>
539 # <i>rc-update add openafs-client default</i>
540 # <i>rc-update add openafs-server default</i>
541 </pre>
542
<p>
If you had <c>=openafs-1.2.13</c> or <c>=openafs-1.3.85</c>, you should remove
<path>afs-client</path> and <path>afs-server</path> from the default runlevel
instead of <path>afs</path>.
</p>
548
549 </body>
550 </section>
551 <section>
552 <title>Troubleshooting: what if the automatic upgrade fails</title>
553 <body>
554
555 <p>
556 Don't panic. You shouldn't have lost any data or configuration files. So let's
557 analyze the situation. Please file a bug at <uri
558 link="http://bugs.gentoo.org">bugs.gentoo.org</uri> in any case, preferably
559 with as much information as possible.
560 </p>
561
<p>
If you're having problems starting the client, this should help you diagnose
the problem:
</p>
566
567 <ul>
568 <li>
569 Run <c>dmesg</c>. The client normally sends error messages there.
570 </li>
571 <li>
572 Check <path>/etc/openafs/cacheinfo</path>. It should be of the form:
573 /afs:{path to disk cache}:{number of blocks for disk cache}.
574 Normally, your disk cache will be located at
575 <path>/var/cache/openafs</path>.
576 </li>
577 <li>
578 Check the output of <c>lsmod</c>. You will want to see a line beginning
579 with the word openafs.
580 </li>
581 <li><c>pgrep afsd</c> will tell you whether afsd is running or not</li>
582 <li>
583 <c>cat /proc/mounts</c> should reveal whether <path>/afs</path> has been
584 mounted.
585 </li>
586 </ul>
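The cacheinfo check in the list above can be automated. A minimal sketch, with a made-up function name, that validates the expected /afs:{path}:{blocks} shape:

```shell
# Succeeds when the given line matches /afs:{path}:{number of blocks}.
cacheinfo_ok() {
  printf '%s\n' "$1" | grep -Eq '^/afs:[^:]+:[0-9]+$'
}
```

You would call it on the contents of /etc/openafs/cacheinfo and print a warning when it fails.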
587
588 <p>
589 If you're having problems starting the server, then these hints may be useful:
590 </p>
591
592 <ul>
593 <li>
594 <c>pgrep bosserver</c> tells you whether the overseer is running or not. If
595 you have more than one overseer running, then something has gone wrong. In
596 that case, you should try a graceful OpenAFS server shutdown with <c>bos
597 shutdown localhost -localauth -wait</c>, check the result with <c>bos
598 status localhost -localauth</c>, kill all remaining overseer processes and
599 then finally check whether any server processes are still running (<c>ls
600 /usr/libexec/openafs</c> to get a list of them). Afterwards, do
601 <c>/etc/init.d/openafs-server zap</c> to reset the status of the server and
602 <c>/etc/init.d/openafs-server start</c> to try launching it again.
603 </li>
604 <li>
605 If you're using OpenAFS' own logging system (which is the default setting),
606 check out <path>/var/lib/openafs/logs/*</path>. If you're using the syslog
607 service, go check out its logs for any useful information.
608 </li>
609 </ul>
610
611 </body>
612 </section>
613 </chapter>
614
615 <chapter>
616 <title>Documentation</title>
617 <section>
618 <title>Getting AFS Documentation</title>
619 <body>
620
<p>
You can get the original IBM AFS documentation. It is very well written, and
you really want to read it if you are responsible for administering an AFS
server.
</p>
625
626 <pre caption="Installing afsdoc">
627 # <i>emerge app-doc/afsdoc</i>
628 </pre>
629
630 <p>
631 You also have the option of using the documentation delivered with OpenAFS. It
632 is installed when you have the USE flag <c>doc</c> enabled while emerging
633 OpenAFS. It can be found in <path>/usr/share/doc/openafs-*/</path>. At the time
of writing, this documentation was a work in progress. It may, however,
document newer features in OpenAFS that aren't described in the original IBM
AFS documentation.
637 </p>
638
639 </body>
640 </section>
641 </chapter>
642
643 <chapter>
644 <title>Client Installation</title>
645 <section>
646 <title>Building the Client</title>
647 <body>
648
649 <pre caption="Installing openafs">
650 # <i>emerge net-fs/openafs</i>
651 </pre>
652
653 <p>
654 After successful compilation you're ready to go.
655 </p>
656
657 </body>
658 </section>
659 <section>
660 <title>A simple global-browsing client installation</title>
661 <body>
662
<p>
If you're not part of a specific OpenAFS cell that you want to access, and you
just want to try browsing globally available OpenAFS shares, then you can
install OpenAFS, leave the configuration untouched, and start
<path>/etc/init.d/openafs-client</path>.
</p>
669
670 </body>
671 </section>
672 <section>
673 <title>Accessing a specific OpenAFS cell</title>
674 <body>
675
676 <p>
677 If you need to access a specific cell, say your university's or company's own
678 cell, then some adjustments to your configuration have to be made.
679 </p>
680
681 <p>
682 Firstly, you need to update <path>/etc/openafs/CellServDB</path> with the
683 database servers for your cell. This information is normally provided by your
684 administrator.
685 </p>
686
687 <p>
688 Secondly, in order to be able to log onto the OpenAFS cell, you need to specify
689 its name in <path>/etc/openafs/ThisCell</path>.
690 </p>
691
692 <pre caption="Adjusting CellServDB and ThisCell">
693 CellServDB:
694 >netlabs #Cell name
695 10.0.0.1 #storage
696
697 ThisCell:
698 netlabs
699 </pre>
700
701 <warn>
702 Only use spaces inside the <path>CellServDB</path> file. The client will most
703 likely fail if you use TABs.
704 </warn>
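Since a stray TAB is hard to spot by eye, you could scan for one before starting the client. A small sketch (the function name is made up):

```shell
# Succeeds (exit 0) when the stream on stdin contains a TAB character.
has_tabs() {
  grep -q "$(printf '\t')"
}
```

Feed it the contents of /etc/openafs/CellServDB on stdin and print a warning when it succeeds.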
705
706 <p>
707 CellServDB tells your client which server(s) it needs to contact for a
708 specific cell. ThisCell should be quite obvious. Normally you use a name
709 which is unique for your organisation. Your (official) domain might be a
710 good choice.
711 </p>
712
<p>
For a quick start, you can now start <path>/etc/init.d/openafs-client</path>
and use <c>klog</c> to authenticate yourself and start using your access to
the cell. For automatic logons to your cell, consult the appropriate section
below.
</p>
719
720 </body>
721 </section>
722 <section>
723 <title>Adjusting the cache</title>
724 <body>
725
<note>
Unfortunately, the AFS client needs an ext2/3 filesystem for its cache to run
correctly. There are some issues when using other filesystems (using e.g.
reiserfs is not a good idea).
</note>
731
732 <p>
733 You can house your cache on an existing filesystem (if it's ext2/3), or you
734 may want to have a separate partition for that. The default location of the
735 cache is <path>/var/cache/openafs</path>, but you can change that by editing
<path>/etc/openafs/cacheinfo</path>. A standard size for your cache is
200 MB, but more won't hurt.
738 </p>
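As an example, a cacheinfo entry for a 200 MB cache at the default location looks like this; the third field is the cache size in 1K blocks, so 200000 blocks is roughly 200 MB:

```shell
# Example /etc/openafs/cacheinfo contents:
# AFS mountpoint : cache directory : cache size in 1K blocks
cacheinfo_line='/afs:/var/cache/openafs:200000'
printf '%s\n' "$cacheinfo_line"
```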
739
740 </body>
741 </section>
742 <section>
743 <title>Starting AFS on startup</title>
744 <body>
745
<p>
The following command will create the appropriate links to start your AFS
client on system startup.
</p>
750
<warn>
You should always have a running AFS server in your domain when trying to
start the AFS client. If your AFS server is down, your system won't finish
booting until a timeout is reached (and this timeout is quite long).
</warn>
756
757 <pre caption="Adding AFS client to the default runlevel">
758 # <i>rc-update add openafs-client default</i>
759 </pre>
760
761 </body>
762 </section>
763 </chapter>
764
765 <chapter>
766 <title>Server Installation</title>
767 <section>
768 <title>Building the Server</title>
769 <body>
770
<note>
All commands should be entered on one line. In this document they are
sometimes wrapped to two lines to make them easier to read.
</note>
775
776 <p>
777 If you haven't already done so, the following command will install all
778 necessary binaries for setting up an AFS Server <e>and</e> Client.
779 </p>
780
781 <pre caption="Installing openafs">
782 # <i>emerge net-fs/openafs</i>
783 </pre>
784
785 </body>
786 </section>
787 <section>
788 <title>Starting AFS Server</title>
789 <body>
790
791 <p>
792 You need to run the <c>bosserver</c> command to initialize the Basic OverSeer
793 (BOS) Server, which monitors and controls other AFS server processes on its
794 server machine. Think of it as init for the system. Include the <c>-noauth</c>
795 flag to disable authorization checking, since you haven't added the admin user
796 yet.
797 </p>
798
799 <warn>
800 Disabling authorization checking gravely compromises cell security. You must
801 complete all subsequent steps in one uninterrupted pass and must not leave
802 the machine unattended until you restart the BOS Server with authorization
803 checking enabled. Well, this is what the AFS documentation says. :)
804 </warn>
805
806 <pre caption="Initialize the Basic OverSeer Server">
807 # <i>bosserver -noauth &amp;</i>
808 </pre>
809
<p>
Verify that the BOS Server created <path>/etc/openafs/server/CellServDB</path>
and <path>/etc/openafs/server/ThisCell</path>:
</p>
814
815 <pre caption="Check if CellServDB and ThisCell are created">
816 # <i>ls -al /etc/openafs/server/</i>
817 -rw-r--r-- 1 root root 41 Jun 4 22:21 CellServDB
818 -rw-r--r-- 1 root root 7 Jun 4 22:21 ThisCell
819 </pre>
820
821 </body>
822 </section>
823 <section>
824 <title>Defining Cell Name and Membership for Server Process</title>
825 <body>
826
827 <p>
828 Now assign your cell's name.
829 </p>
830
831 <impo>
832 There are some restrictions on the name format. Two of the most important
833 restrictions are that the name cannot include uppercase letters or more than
834 64 characters. Remember that your cell name will show up under
835 <path>/afs</path>, so you might want to choose a short one.
836 </impo>
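A quick sanity check for a candidate cell name can be sketched from the two documented restrictions (no uppercase letters, at most 64 characters). The function name and the exact set of additionally allowed characters here are assumptions for illustration:

```shell
# Reject names with uppercase letters or more than 64 characters.
# Restricting to lowercase letters, digits, dots and dashes is an assumption.
valid_cellname() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9.-]{1,64}$'
}
```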
837
<note>
In this and every following instruction in this guide, substitute the
fully-qualified hostname (such as <b>afs.gentoo.org</b>) of the machine you
are installing for the &lt;server name&gt; argument. For the &lt;cell
name&gt; argument, substitute your cell's complete name (such as
<b>gentoo</b>).
</note>
845
846 <p>
847 Run the <c>bos setcellname</c> command to set the cell name:
848 </p>
849
850 <pre caption="Set the cell name">
851 # <i>bos setcellname &lt;server name&gt; &lt;cell name&gt; -noauth</i>
852 </pre>
853
854 </body>
855 </section>
856 <section>
857 <title>Starting the Database Server Process</title>
858 <body>
859
860 <p>
861 Next use the <c>bos create</c> command to create entries for the four database
862 server processes in the <path>/etc/openafs/BosConfig</path> file. The four
863 processes run on database server machines only.
864 </p>
865
866 <table>
867 <tr>
868 <ti>kaserver</ti>
869 <ti>
870 The Authentication Server maintains the Authentication Database.
871 This can be replaced by a Kerberos 5 daemon. If anybody wants to try that
872 feel free to update this document :)
873 </ti>
874 </tr>
875 <tr>
876 <ti>buserver</ti>
877 <ti>The Backup Server maintains the Backup Database</ti>
878 </tr>
879 <tr>
880 <ti>ptserver</ti>
881 <ti>The Protection Server maintains the Protection Database</ti>
882 </tr>
883 <tr>
884 <ti>vlserver</ti>
885 <ti>
886 The Volume Location Server maintains the Volume Location Database (VLDB).
887 Very important :)
888 </ti>
889 </tr>
890 </table>
891
892 <pre caption="Create entries for the database processes">
893 # <i>bos create &lt;server name&gt; kaserver \
894 simple /usr/libexec/openafs/kaserver \
895 -cell &lt;cell name&gt; -noauth</i>
896 # <i>bos create &lt;server name&gt; buserver \
897 simple /usr/libexec/openafs/buserver \
898 -cell &lt;cell name&gt; -noauth</i>
899 # <i>bos create &lt;server name&gt; ptserver \
900 simple /usr/libexec/openafs/ptserver \
901 -cell &lt;cell name&gt; -noauth</i>
902 # <i>bos create &lt;server name&gt; \
903 vlserver simple /usr/libexec/openafs/vlserver \
904 -cell &lt;cell name&gt; -noauth</i>
905 </pre>
906
907 <p>
908 You can verify that all servers are running with the <c>bos status</c> command:
909 </p>
910
911 <pre caption="Check if all the servers are running">
912 # <i>bos status &lt;server name&gt; -noauth</i>
913 Instance kaserver, currently running normally.
914 Instance buserver, currently running normally.
915 Instance ptserver, currently running normally.
916 Instance vlserver, currently running normally.
917 </pre>
918
919 </body>
920 </section>
921 <section>
922 <title>Initializing Cell Security</title>
923 <body>
924
<p>
Now we'll initialize the cell's security mechanisms. We'll begin by creating
the following two initial entries in the Authentication Database: the main
administrative account, called <b>admin</b> by convention, and an entry for
the AFS server processes, called <b>afs</b>. No user logs in under the
identity <b>afs</b>, but the Authentication Server's Ticket Granting
Service (TGS) module uses the account to encrypt the server tickets that
it grants to AFS clients. This sounds pretty much like Kerberos :)
</p>
934
<p>
Enter <c>kas</c> interactive mode:
</p>

<pre caption="Entering the interactive mode">
# <i>kas -cell &lt;cell name&gt; -noauth</i>
ka&gt; <i>create afs</i>
initial_password:
Verifying, please re-enter initial_password:
ka&gt; <i>create admin</i>
initial_password:
Verifying, please re-enter initial_password:
ka&gt; <i>examine afs</i>

User data for afs
 key (0) cksum is 2651715259, last cpw: Mon Jun 4 20:49:30 2001
 password will never expire.
 An unlimited number of unsuccessful authentications is permitted.
 entry never expires. Max ticket lifetime 100.00 hours.
 last mod on Mon Jun 4 20:49:30 2001 by &lt;none&gt;
 permit password reuse
ka&gt; <i>setfields admin -flags admin</i>
ka&gt; <i>examine admin</i>

User data for admin (ADMIN)
 key (0) cksum is 2651715259, last cpw: Mon Jun 4 20:49:59 2001
 password will never expire.
 An unlimited number of unsuccessful authentications is permitted.
 entry never expires. Max ticket lifetime 25.00 hours.
 last mod on Mon Jun 4 20:51:10 2001 by &lt;none&gt;
 permit password reuse
ka&gt;
</pre>

<p>
Run the <c>bos adduser</c> command to add the <b>admin</b> user to
<path>/etc/openafs/server/UserList</path>.
</p>

<pre caption="Add the admin user to the UserList">
# <i>bos adduser &lt;server name&gt; admin -cell &lt;cell name&gt; -noauth</i>
</pre>

<p>
Issue the <c>bos addkey</c> command to define the AFS server
encryption key in <path>/etc/openafs/server/KeyFile</path>.
</p>

<note>
When asked for the input key, give the password you entered when creating
the <b>afs</b> entry with <c>kas</c>.
</note>

<pre caption="Entering the password">
# <i>bos addkey &lt;server name&gt; -kvno 0 -cell &lt;cell name&gt; -noauth</i>
input key:
Retype input key:
</pre>
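
<p>
To double-check that the key is now in place, you can list the server's key
checksums with <c>bos listkeys</c>. This verification step is an addition to
the original walkthrough; it assumes the <c>bosserver</c> started earlier is
still running.
</p>

```shell
# List the AFS server encryption keys; key version number (kvno) 0
# should appear, and its checksum should be identical on every
# database server machine in the cell.
bos listkeys <server name> -cell <cell name> -noauth
```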

<p>
Issue the <c>pts createuser</c> command to create a Protection Database entry
for the <b>admin</b> user.
</p>

<note>
By default, the Protection Server assigns AFS UID 1 to the <b>admin</b> user,
because it is the first user entry you are creating. If the local password file
(<path>/etc/passwd</path> or equivalent) already has an entry for <b>admin</b>
that assigns a different UID, use the <c>-id</c> argument to create matching
UIDs.
</note>

<pre caption="Create a Protection Database entry for the admin user">
# <i>pts createuser -name admin -cell &lt;cell name&gt; [-id &lt;AFS UID&gt;] -noauth</i>
</pre>

<p>
Issue the <c>pts adduser</c> command to make the <b>admin</b> user a member
of the system:administrators group, and the <c>pts membership</c> command to
verify the new membership:
</p>

<pre caption="Make admin a member of the administrators group and verify">
# <i>pts adduser admin system:administrators -cell &lt;cell name&gt; -noauth</i>
# <i>pts membership admin -cell &lt;cell name&gt; -noauth</i>
Groups admin (id: 1) is a member of:
  system:administrators
</pre>

</body>
</section>
<section>
<title>Properly (Re-)Starting the AFS Server</title>
<body>

<p>
At this point, proper authentication is possible, and the OpenAFS server can
be started in a normal fashion. Note that authentication also requires a
running OpenAFS client (setting it up is described in the previous chapter).
<!-- Left out because deemed confusing>
Continuing without this step is possible, but in that case a quick restart of
the server is required, as demonstrated at the end of this section.
<-->
</p>

<pre caption="Shut down the bosserver">
# <i>bos shutdown &lt;server name&gt; -wait -noauth</i>
# <i>killall bosserver</i>
</pre>

<pre caption="Normal OpenAFS server (and client) startup">
# <i>/etc/init.d/openafs-server start</i>
# <i>/etc/init.d/openafs-client start</i>
</pre>

<pre caption="Adding the AFS server to the default runlevel">
# <i>rc-update add openafs-server default</i>
</pre>

<pre caption="Getting a token as the admin user">
# <i>klog admin</i>
</pre>
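
<p>
To confirm that <c>klog</c> actually obtained a token, you can run the
<c>tokens</c> command shipped with OpenAFS. The exact output varies by
version; an entry along these lines indicates success:
</p>

```shell
# Show the AFS tokens currently held by the Cache Manager
tokens
# A successful klog should yield an entry resembling:
#   User's (AFS ID 1) tokens for afs@<cell name> [Expires ...]
```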

<!-- Left out because deemed confusing>
<p>
If you chose not to restart OpenAFS without the -noauth flag, you can simply
perform the following procedure instead:
</p>

<pre caption="Restart all AFS server processes">
# <i>bos restart &lt;server name&gt; -all -cell &lt;cell name&gt; -noauth</i>
</pre>
<-->

</body>
</section>
<section>
<title>Starting the File Server, Volume Server and Salvager</title>
<body>

<p>
Start the <c>fs</c> process, which consists of the File Server, Volume Server
and Salvager (fileserver, volserver and salvager processes).
</p>

<pre caption="Start the fs process">
# <i>bos create &lt;server name&gt; fs \
fs /usr/libexec/openafs/fileserver /usr/libexec/openafs/volserver /usr/libexec/openafs/salvager \
-cell &lt;cell name&gt; -noauth</i>
</pre>

<p>
Verify that all processes are running:
</p>

<pre caption="Check if all processes are running">
# <i>bos status &lt;server name&gt; -long -noauth</i>
Instance kaserver, (type is simple) currently running normally.
    Process last started at Mon Jun 4 21:07:17 2001 (2 proc starts)
    Last exit at Mon Jun 4 21:07:17 2001
    Command 1 is '/usr/libexec/openafs/kaserver'

Instance buserver, (type is simple) currently running normally.
    Process last started at Mon Jun 4 21:07:17 2001 (2 proc starts)
    Last exit at Mon Jun 4 21:07:17 2001
    Command 1 is '/usr/libexec/openafs/buserver'

Instance ptserver, (type is simple) currently running normally.
    Process last started at Mon Jun 4 21:07:17 2001 (2 proc starts)
    Last exit at Mon Jun 4 21:07:17 2001
    Command 1 is '/usr/libexec/openafs/ptserver'

Instance vlserver, (type is simple) currently running normally.
    Process last started at Mon Jun 4 21:07:17 2001 (2 proc starts)
    Last exit at Mon Jun 4 21:07:17 2001
    Command 1 is '/usr/libexec/openafs/vlserver'

Instance fs, (type is fs) currently running normally.
    Auxiliary status is: file server running.
    Process last started at Mon Jun 4 21:09:30 2001 (2 proc starts)
    Command 1 is '/usr/libexec/openafs/fileserver'
    Command 2 is '/usr/libexec/openafs/volserver'
    Command 3 is '/usr/libexec/openafs/salvager'
</pre>

<p>
Your next action depends on whether you have ever run AFS file server machines
in the cell before.
</p>

<p>
If you are installing the first AFS server ever in the cell, create the first
AFS volume, <b>root.afs</b>.
</p>

<note>
For the partition name argument, substitute the name of one of the machine's
AFS server partitions. Any filesystem mounted under a directory called
<path>/vicepx</path>, where x is in the range a-z, will be considered and
used as an AFS server partition. Any Unix filesystem will do (as opposed to the
client's cache, which can only be ext2/3). Tip: for each <path>/vicepx</path>
mount point, the server checks whether a filesystem is actually mounted there.
If not, the server will not attempt to use it. This behaviour can be overridden
by putting a file named <path>AlwaysAttach</path> in the directory.
</note>
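
<p>
The attach rule described in the note can be sketched as a small shell loop.
This is only an illustration of the rule, not the server's actual code; it
assumes the <c>mountpoint</c> utility from util-linux is available.
</p>

```shell
#!/bin/sh
# Illustrate which /vicepX directories an AFS server would attach:
# a directory is used if a filesystem is mounted on it, or if it
# contains a file named AlwaysAttach.
for d in /vicep[a-z]; do
    [ -d "$d" ] || continue
    if mountpoint -q "$d" 2>/dev/null || [ -e "$d/AlwaysAttach" ]; then
        echo "would attach: $d"
    else
        echo "would skip: $d (nothing mounted, no AlwaysAttach)"
    fi
done
```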

<pre caption="Create the root.afs volume">
# <i>vos create &lt;server name&gt; &lt;partition name&gt; root.afs -cell &lt;cell name&gt; -noauth</i>
</pre>

<p>
If there are existing AFS file server machines and volumes in the cell,
issue the <c>vos syncvldb</c> and <c>vos syncserv</c> commands to synchronize
the VLDB (Volume Location Database) with the actual state of volumes on the
local machine. This will copy all necessary data to your new server.
</p>

<p>
If the command fails with the message "partition /vicepa does not exist on
the server", ensure that the partition is mounted before running the OpenAFS
servers, or mount the directory and restart the processes using
<c>bos restart &lt;server name&gt; -all -cell &lt;cell name&gt; -noauth</c>.
</p>

<pre caption="Synchronise the VLDB">
# <i>vos syncvldb &lt;server name&gt; -cell &lt;cell name&gt; -verbose -noauth</i>
# <i>vos syncserv &lt;server name&gt; -cell &lt;cell name&gt; -verbose -noauth</i>
</pre>

</body>
</section>
<section>
<title>Starting the Server Portion of the Update Server</title>
<body>

<pre caption="Start the update server">
# <i>bos create &lt;server name&gt; \
upserver simple "/usr/libexec/openafs/upserver \
-crypt /etc/openafs/server -clear /usr/libexec/openafs" \
-cell &lt;cell name&gt; -noauth</i>
</pre>

</body>
</section>
<section>
<title>Configuring the Top Level of the AFS Filespace</title>
<body>

<p>
First you need to set some ACLs, so that any user can look up
<path>/afs</path>.
</p>

<note>
The default OpenAFS client configuration has <b>dynroot</b> enabled.
This option turns <path>/afs</path> into a virtual directory composed of the
contents of your <path>/etc/openafs/CellServDB</path> file. As such, the
following command will not work, because it requires a real AFS directory.
You can temporarily switch dynroot off by setting <b>ENABLE_DYNROOT</b> to
<b>no</b> in <path>/etc/conf.d/openafs-client</path>. Don't forget to
restart the client after changing parameters.
</note>

<pre caption="Set access control lists">
# <i>fs setacl /afs system:anyuser rl</i>
</pre>

<p>
Then you need to create the root volume, and mount it read-only on
<path>/afs/&lt;cell name&gt;</path> and read-write on <path>/afs/.&lt;cell
name&gt;</path>.
</p>

<pre caption="Prepare the root volume">
# <i>vos create &lt;server name&gt; &lt;partition name&gt; root.cell</i>
# <i>fs mkmount /afs/&lt;cell name&gt; root.cell</i>
# <i>fs setacl /afs/&lt;cell name&gt; system:anyuser rl</i>
# <i>fs mkmount /afs/.&lt;cell name&gt; root.cell -rw</i>
</pre>
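
<p>
If you want the regular (read-only) mount points to be served from genuine
read-only replicas, as larger cells usually do, you can optionally replicate
and release the root volumes. These are standard <c>vos</c> subcommands, but
this step is an optional extra relative to the walkthrough above.
</p>

```shell
# Define a read-only replication site on the same server/partition,
# then push the current contents of each volume to its replicas.
vos addsite <server name> <partition name> root.afs
vos addsite <server name> <partition name> root.cell
vos release root.afs
vos release root.cell
# Make the local Cache Manager notice the newly released volumes
fs checkvolumes
```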

<pre caption="Adding volumes underneath">
# <i>vos create &lt;server name&gt; &lt;partition name&gt; &lt;myvolume&gt;</i>
# <i>fs mkmount /afs/&lt;cell name&gt;/&lt;mymountpoint&gt; &lt;myvolume&gt;</i>
# <i>fs mkmount /afs/&lt;cell name&gt;/.&lt;mymountpoint&gt; &lt;myvolume&gt; -rw</i>
# <i>fs setquota /afs/&lt;cell name&gt;/.&lt;mymountpoint&gt; -max &lt;quotum&gt;</i>
</pre>
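
<p>
To check your work, you can inspect a mount point and the volume's quota
afterwards. These are standard <c>fs</c> subcommands, shown here as an extra
verification step:
</p>

```shell
# Show which volume a mount point refers to
fs lsmount /afs/<cell name>/<mymountpoint>
# Show the quota, usage and partition fill level of the volume
fs listquota /afs/<cell name>/.<mymountpoint>
```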

<p>
Finally you're done! You should now have a working AFS file server on your
local network. Time to get a big cup of coffee and print out the AFS
documentation!
</p>

<note>
For the AFS server to function properly, it is very important that all system
clocks are synchronized. This is best accomplished by installing an NTP server
on one machine (e.g. the AFS server) and synchronizing all client clocks with
an NTP client. This can also be done by the AFS client.
</note>
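
<p>
On Gentoo, one way to do this is with the <c>ntp</c> package. The package and
service names below are the customary ones; check your portage tree if they
differ on your system.
</p>

```shell
# Install NTP and start the daemon at boot (run this on the time
# server and, with a suitable /etc/ntp.conf, on the clients as well)
emerge net-misc/ntp
rc-update add ntpd default
/etc/init.d/ntpd start
```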

</body>
</section>
</chapter>

<chapter>
<title>Basic Administration</title>
<section>
<title>Disclaimer</title>
<body>

<p>
OpenAFS is an extensive technology. Please read the AFS documentation for more
information. We only list a few administrative tasks in this chapter.
</p>

</body>
</section>
<section>
<title>Configuring PAM to Acquire an AFS Token on Login</title>
<body>

<p>
To use AFS you need to authenticate against the KA server if you are using an
AFS Kerberos 4 implementation, or against a Kerberos 5 KDC if you are using
MIT, Heimdal, or Shishi Kerberos 5. However, in order to log in to a machine
you will also need a user account; this can be local in
<path>/etc/passwd</path>, or in NIS, LDAP (OpenLDAP), or a Hesiod database.
PAM allows Gentoo to tie the AFS authentication to the login to the user
account.
</p>

<p>
You will need to update <path>/etc/pam.d/system-auth</path>, which is
used by the other configurations. "use_first_pass" indicates that the password
already entered at login will be tried first, and "ignore_root" stops the
local superuser from being checked against AFS, so that root can still log in
if AFS or the network fails.
</p>

<pre caption="/etc/pam.d/system-auth">
auth       required     pam_env.so
auth       sufficient   pam_unix.so likeauth nullok
auth       sufficient   pam_afs.so.1 use_first_pass ignore_root
auth       required     pam_deny.so

account    required     pam_unix.so

password   required     pam_cracklib.so retry=3
password   sufficient   pam_unix.so nullok md5 shadow use_authtok
password   required     pam_deny.so

session    required     pam_limits.so
session    required     pam_unix.so
</pre>

<p>
In order for <c>su</c> to keep the real user's token and to prevent local
users from gaining AFS access, change <path>/etc/pam.d/su</path> as follows:
</p>

<pre caption="/etc/pam.d/su">
<comment># Here, users with uid &gt; 100 are considered to belong to AFS and users with
# uid &lt;= 100 are ignored by pam_afs.</comment>
auth       sufficient   pam_afs.so.1 ignore_uid 100

auth       sufficient   pam_rootok.so

<comment># If you want to restrict the users allowed to su even more,
# create /etc/security/suauth.allow (or similar) so that it is only
# writable by root, and add the users that are allowed to su to that
# file, one per line.
#auth       required     pam_listfile.so item=ruser \
#       sense=allow onerr=fail file=/etc/security/suauth.allow

# Uncomment this to allow users in the wheel group to su without
# entering a password.
#auth       sufficient   pam_wheel.so use_uid trust

# Alternatively to the above, you can implement a list of users that
# do not need to supply a password.
#auth       sufficient   pam_listfile.so item=ruser \
#       sense=allow onerr=fail file=/etc/security/suauth.nopass

# Comment this to allow any user, even those not in the 'wheel'
# group, to su</comment>
auth       required     pam_wheel.so use_uid

auth       required     pam_stack.so service=system-auth

account    required     pam_stack.so service=system-auth

password   required     pam_stack.so service=system-auth

session    required     pam_stack.so service=system-auth
session    optional     pam_xauth.so

<comment># Here we prevent the real user's token from being dropped</comment>
session    optional     pam_afs.so.1 no_unlog
</pre>

</body>
</section>
</chapter>
</guide>
