/[gentoo]/xml/htdocs/doc/en/openafs.xml
Revision 1.27 - (show annotations) (download) (as text)
Sun Sep 4 17:53:40 2011 UTC (2 years, 11 months ago) by swift
Branch: MAIN
Changes since 1.26: +2 -2 lines
File MIME type: application/xml
#379883 - Removing link attribute from guide element as it is not used anymore. Next step will be to remove it from DTD

1 <?xml version='1.0' encoding="UTF-8"?>
2 <!-- $Header: /var/cvsroot/gentoo/xml/htdocs/doc/en/openafs.xml,v 1.26 2007/07/01 18:58:48 nightmorph Exp $ -->
3
4 <!DOCTYPE guide SYSTEM "/dtd/guide.dtd">
5
6 <guide>
7 <title>Gentoo Linux OpenAFS Guide</title>
8
9 <author title="Editor">
10 <mail link="stefaan@gentoo.org">Stefaan De Roeck</mail>
11 </author>
12 <author title="Editor">
13 <mail link="darks@gentoo.org">Holger Brueckner</mail>
14 </author>
15 <author title="Editor">
16 <mail link="bennyc@gentoo.org">Benny Chuang</mail>
17 </author>
18 <author title="Editor">
19 <mail link="blubber@gentoo.org">Tiemo Kieft</mail>
20 </author>
21 <author title="Editor">
22 <mail link="fnjordy@gmail.com">Steven McCoy</mail>
23 </author>
24 <author title="Editor">
25 <mail link="fox2mike@gentoo.org">Shyam Mani</mail>
26 </author>
27
28 <abstract>
29 This guide shows you how to install an OpenAFS server and client on Gentoo
30 Linux.
31 </abstract>
32
33 <!-- The content of this document is licensed under the CC-BY-SA license -->
34 <!-- See http://creativecommons.org/licenses/by-sa/2.5 -->
35 <license/>
36
37 <version>1.2</version>
38 <date>2007-06-29</date>
39
40 <chapter>
41 <title>Overview</title>
42 <section>
43 <title>About this Document</title>
44 <body>
45
46 <p>
47 This document provides you with all necessary steps to install an OpenAFS
48 server on Gentoo Linux. Parts of this document are taken from the AFS FAQ and
49 IBM's Quick Beginnings guide on AFS. Well, never reinvent the wheel. :)
50 </p>
51
52 </body>
53 </section>
54 <section>
55 <title>What is AFS?</title>
56 <body>
57
<p>
AFS is a distributed filesystem that enables co-operating hosts
(clients and servers) to efficiently share filesystem resources
across both local area and wide area networks. Clients keep a local
cache of frequently used objects (files) to speed up access to them.
</p>
65
66 <p>
67 AFS is based on a distributed file system originally developed
68 at the Information Technology Center at Carnegie-Mellon University
69 that was called the "Andrew File System". "Andrew" was the name of the
70 research project at CMU - honouring the founders of the University. Once
71 Transarc was formed and AFS became a product, the "Andrew" was dropped to
72 indicate that AFS had gone beyond the Andrew research project and had become
73 a supported, product quality filesystem. However, there were a number of
74 existing cells that rooted their filesystem as /afs. At the time, changing
75 the root of the filesystem was a non-trivial undertaking. So, to save the
76 early AFS sites from having to rename their filesystem, AFS remained as the
77 name and filesystem root.
78 </p>
79
80 </body>
81 </section>
82 <section>
83 <title>What is an AFS cell?</title>
84 <body>
85
<p>
An AFS cell is a collection of servers grouped together administratively,
presenting a single, cohesive filesystem. Typically, an AFS cell is a set of
hosts that use the same Internet domain name (for example, gentoo.org). Users
log into AFS client workstations, which request information and files from the
cell's servers on behalf of the users. Users do not need to know which server
holds the file they are accessing. They won't even notice if a server is moved
to another room, since every volume can be replicated and moved to another
server without any user noticing. The files are always accessible. Well, it's
like NFS on steroids :)
</p>
97
98 </body>
99 </section>
100 <section>
101 <title>What are the benefits of using AFS?</title>
102 <body>
103
<p>
The main strengths of AFS are its:
</p>

<ul>
<li>caching facility (on the client side, typically 100 MB to 1 GB)</li>
<li>security features (Kerberos 4 based, access control lists)</li>
<li>simplicity of addressing (you just have one filesystem)</li>
<li>scalability (add further servers to your cell as needed)</li>
<li>communications protocol</li>
</ul>
112
113 </body>
114 </section>
115 <section>
116 <title>Where can I get more information?</title>
117 <body>
118
119 <p>
120 Read the <uri link="http://www.angelfire.com/hi/plutonic/afs-faq.html">AFS
121 FAQ</uri>.
122 </p>
123
<p>
The OpenAFS home page is at <uri
link="http://www.openafs.org">www.openafs.org</uri>.
</p>
128
<p>
AFS was originally developed by Transarc, which is now owned by IBM. You can
find some information about AFS on <uri
link="http://www.transarc.ibm.com/Product/EFS/AFS/index.html">Transarc's
webpage</uri>.
</p>
135
136 </body>
137 </section>
138 <section>
139 <title>How Can I Debug Problems?</title>
140 <body>
141
<p>
OpenAFS has great logging facilities. By default, however, it logs straight
into its own log files instead of through the system logging facilities on
your system. To have the servers log through your system logger, use the
<c>-syslog</c> option for all <c>bos</c> commands.
</p>
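<p>
For example, the overseer process itself can be told to log through syslog
when you start it (a sketch; check <c>man bosserver</c> for the exact options
supported by your version):
</p>

<pre caption="Starting the overseer with syslog logging">
# <i>bosserver -syslog &amp;</i>
</pre>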
148
149 </body>
150 </section>
151 </chapter>
152
153 <chapter>
154 <title>Upgrading from previous versions</title>
155 <section>
156 <title>Introduction</title>
157 <body>
158
<p>
This section aims to help you through the process of upgrading an existing
OpenAFS installation to OpenAFS version 1.4.0 or higher (or to 1.2.x starting
from 1.2.13; the latter will not be handled specifically, as most people will
want 1.4 for, among other things, Linux 2.6 support, large file support and
bug fixes).
</p>
165
<p>
If you're dealing with a clean install of a 1.4 version of OpenAFS, then you can
safely skip this chapter. However, if you're upgrading from a previous version,
we strongly urge you to follow the guidelines in the next sections. The
transition script in the ebuild is designed to assist you in quickly upgrading
and restarting. Please note that, for safety reasons, it will not delete
configuration files and startup scripts in their old locations, nor will it
automatically change your boot configuration to use the new scripts. If you
need further convincing: using an old OpenAFS kernel module together with the
updated system binaries may very well cause your kernel to freak out. So, let's
read on for a clean and easy transition, shall we?
</p>
178
<note>
This chapter has been written with many different system configurations in
mind. Still, due to peculiar tweaks a user may have made, his or her specific
situation may not be described here. A user confident enough to tweak his or
her system should be experienced enough to apply the given remarks where
appropriate. Vice versa, a user who has done little more to the system than
install the previous ebuild can skip most of the warnings further on.
</note>
188
189 </body>
190 </section>
191 <section>
192 <title>Differences to previous versions</title>
193 <body>
194
<p>
Traditionally, OpenAFS has used the same path conventions that IBM Transarc
Labs used before the code was forked. Understandably, old AFS setups continue
using these legacy path conventions. More recent setups conform to the FHS by
using standard locations (as seen in many Linux distributions). The following
table is a compilation of the configure script and the README accompanying the
OpenAFS distribution tarballs:
</p>
203
204 <table>
205 <tr>
206 <th>Directory</th>
207 <th>Purpose</th>
208 <th>Transarc Mode</th>
209 <th>Default Mode</th>
210 <th>translation to Gentoo</th>
211 </tr>
212 <tr>
213 <ti>viceetcdir</ti>
214 <ti>Client configuration</ti>
215 <ti>/usr/vice/etc</ti>
216 <ti>$(sysconfdir)/openafs</ti>
217 <ti>/etc/openafs</ti>
218 </tr>
219 <tr>
220 <ti>unnamed</ti>
221 <ti>Client binaries</ti>
222 <ti>unspecified</ti>
223 <ti>$(bindir)</ti>
224 <ti>/usr/bin</ti>
225 </tr>
226 <tr>
227 <ti>afsconfdir</ti>
228 <ti>Server configuration</ti>
229 <ti>/usr/afs/etc</ti>
230 <ti>$(sysconfdir)/openafs/server</ti>
231 <ti>/etc/openafs/server</ti>
232 </tr>
233 <tr>
234 <ti>afssrvdir</ti>
235 <ti>Internal server binaries</ti>
236 <ti>/usr/afs/bin (servers)</ti>
237 <ti>$(libexecdir)/openafs</ti>
238 <ti>/usr/libexec/openafs</ti>
239 </tr>
240 <tr>
241 <ti>afslocaldir</ti>
242 <ti>Server state</ti>
243 <ti>/usr/afs/local</ti>
244 <ti>$(localstatedir)/openafs</ti>
245 <ti>/var/lib/openafs</ti>
246 </tr>
247 <tr>
248 <ti>afsdbdir</ti>
249 <ti>Auth/serverlist/... databases</ti>
250 <ti>/usr/afs/db</ti>
251 <ti>$(localstatedir)/openafs/db</ti>
252 <ti>/var/lib/openafs/db</ti>
253 </tr>
254 <tr>
255 <ti>afslogdir</ti>
256 <ti>Log files</ti>
257 <ti>/usr/afs/logs</ti>
258 <ti>$(localstatedir)/openafs/logs</ti>
259 <ti>/var/lib/openafs/logs</ti>
260 </tr>
261 <tr>
262 <ti>afsbosconfig</ti>
263 <ti>Overseer config</ti>
264 <ti>$(afslocaldir)/BosConfig</ti>
265 <ti>$(afsconfdir)/BosConfig</ti>
266 <ti>/etc/openafs/BosConfig</ti>
267 </tr>
268 </table>
269
<p>
There are some other oddities, like binaries being put in
<path>/usr/vice/etc</path> in Transarc mode, but this list is not intended
to be comprehensive. It is rather meant as a reference for those
troubleshooting the configuration file transition.
</p>
276
277 <p>
278 Also as a result of the path changes, the default disk cache location has
279 been changed from <path>/usr/vice/cache</path> to
280 <path>/var/cache/openafs</path>.
281 </p>
282
283 <p>
284 Furthermore, the init-script has been split into a client and a server part.
285 You used to have <path>/etc/init.d/afs</path>, but now you'll end up with both
286 <path>/etc/init.d/openafs-client</path> and
287 <path>/etc/init.d/openafs-server</path>.
288 Consequently, the configuration file <path>/etc/conf.d/afs</path> has been split
289 into <path>/etc/conf.d/openafs-client</path> and
290 <path>/etc/conf.d/openafs-server</path>. Also, options in
291 <path>/etc/conf.d/afs</path> to turn either client or server on or off have
292 been obsoleted.
293 </p>
294
295 <p>
296 Another change to the init script is that it doesn't check your disk cache
297 setup anymore. The old code required that a separate ext2 partition be
298 mounted at <path>/usr/vice/cache</path>. There were some problems with that:
299 </p>
300
301 <ul>
302 <li>
303 Though it's a very logical setup, your cache doesn't need to be on a
304 separate partition. As long as you make sure that the amount of space
305 specified in <path>/etc/openafs/cacheinfo</path> really is available
306 for disk cache usage, you're safe. So there is no real problem with
307 having the cache on your root partition.
308 </li>
309 <li>
310 Some people use soft-links to point to the real disk cache location.
311 The init script didn't like this, because then this cache location
312 didn't turn up in <path>/proc/mounts</path>.
313 </li>
314 <li>
315 Many prefer ext3 over ext2 nowadays. Both filesystems are valid for
316 usage as a disk cache. Any other filesystem is unsupported
317 (like: don't try reiserfs, you'll get a huge warning, expect failure
318 afterwards).
319 </li>
320 </ul>
321
322 </body>
323 </section>
324 <section>
325 <title>Transition to the new paths</title>
326 <body>
327
328 <p>
329 First of all, emerging a newer OpenAFS version should not overwrite any old
330 configuration files. The script is designed to not change any files
331 already present on the system. So even if you have a totally messed up
332 configuration with a mix of old and new locations, the script should not
333 cause further problems. Also, if a running OpenAFS server is detected, the
334 installation will abort, preventing possible database corruption.
335 </p>
336
337 <p>
338 One caveat though -- there have been ebuilds floating around the internet that
339 partially disable the protection that Gentoo puts on <path>/etc</path>. These
340 ebuilds have never been distributed by Gentoo. You might want to check the
341 <c>CONFIG_PROTECT_MASK</c> variable in the output of the following command:
342 </p>
343
344 <pre caption="Checking your CONFIG_PROTECT_MASK">
345 # <i>emerge info | grep "CONFIG_PROTECT_MASK"</i>
346 CONFIG_PROTECT_MASK="/etc/gconf /etc/terminfo /etc/texmf/web2c /etc/env.d"
347 </pre>
348
349 <p>
350 Though nothing in this ebuild would touch the files in <path>/etc/afs</path>,
351 upgrading will cause the removal of your older OpenAFS installation. Files in
352 <c>CONFIG_PROTECT_MASK</c> that belong to the older installation will be removed
353 as well.
354 </p>
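<p>
To be on the safe side, you may want to make a backup of your old
configuration before upgrading (the archive name below is just a suggestion;
only include the directories that actually exist on your system):
</p>

<pre caption="Backing up the old configuration (optional)">
# <i>tar czf /root/oldafs-config-backup.tgz /usr/afs/etc /usr/vice/etc /etc/afs</i>
</pre>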
355
<p>
It should be clear to the experienced user that if he or she has tweaked the
system by manually adding soft links (e.g. <path>/usr/afs/etc</path> to
<path>/etc/openafs</path>), the new installation may run fine while still using
the old configuration files. In this case, no real transition has taken place,
and cleaning up the old installation will result in a broken OpenAFS
configuration.
</p>
363
364 <p>
365 Now that you know what doesn't happen, you may want to know what does:
366 </p>
367
368 <ul>
369 <li>
370 <path>/usr/afs/etc</path> is copied to <path>/etc/openafs/server</path>
371 </li>
372 <li>
373 <path>/usr/vice/etc</path> is copied to <path>/etc/openafs</path>
374 </li>
375 <li>
376 <path>/usr/afs/local</path> is copied to <path>/var/lib/openafs</path>
377 </li>
378 <li>
379 <path>/usr/afs/local/BosConfig</path> is copied to
380 <path>/etc/openafs/BosConfig</path>, while replacing occurrences of
381 <path>/usr/afs/bin/</path> with <path>/usr/libexec/openafs</path>,
382 <path>/usr/afs/etc</path> with <path>/etc/openafs/server</path>
383 and <path>/usr/afs/bin</path> (without the / as previously) with
384 <path>/usr/bin</path>
385 </li>
386 <li>
387 <path>/usr/afs/db</path> is copied to <path>/var/lib/openafs/db</path>
388 </li>
389 <li>
390 The configuration file <path>/etc/conf.d/afs</path> is copied to
391 <path>/etc/conf.d/openafs-client</path>, as all known old options were
392 destined for client usage only.
393 </li>
394 </ul>
395
396 </body>
397 </section>
398 <section>
399 <title>The upgrade itself</title>
400 <body>
401
<p>
So you haven't got an OpenAFS server setup? Or maybe you do, but the previous
sections have informed you about what is going to happen, and you're still
ready for it?
</p>
407
408 <p>
409 Let's go ahead with it then!
410 </p>
411
412 <p>
413 If you do have a server running, you want to shut it down now.
414 </p>
415
416 <pre caption="Stopping OpenAFS (in case you have a server)">
417 # <i>/etc/init.d/afs stop</i>
418 </pre>
419
420 <p>
421 And then the upgrade itself.
422 </p>
423
424 <pre caption="Now upgrade!">
425 # <i>emerge -u openafs</i>
426 </pre>
427
428 </body>
429 </section>
430 <section>
431 <title>Restarting OpenAFS</title>
432 <body>
433
<p>
If you had an OpenAFS client running, you have not been forced to shut it
down yet. Now is the time to do that.
</p>
438
439 <pre caption="Stopping OpenAFS client after upgrade">
440 # <i>/etc/init.d/afs stop</i>
441 </pre>
442
<p>
As you will probably want to keep the downtime to a minimum, you can restart
your OpenAFS server right away.
</p>
447
448 <pre caption="Restarting OpenAFS server after upgrade">
449 # <i>/etc/init.d/openafs-server start</i>
450 </pre>
451
452 <p>
453 You can check whether it's running properly with the following command:
454 </p>
455
456 <pre caption="Checking OpenAFS server status">
457 # <i>/usr/bin/bos status localhost -localauth</i>
458 </pre>
459
460 <p>
461 Before starting the OpenAFS client again, please take time to check your
462 cache settings. They are determined by <path>/etc/openafs/cacheinfo</path>.
463 To restart your OpenAFS client installation, please type the following:
464 </p>
465
466 <pre caption="Restarting OpenAFS client after upgrade">
467 # <i>/etc/init.d/openafs-client start</i>
468 </pre>
469
470 </body>
471 </section>
472 <section>
473 <title>Cleaning up afterwards</title>
474 <body>
475
476 <p>
477 Before cleaning up, please make really sure that everything runs smoothly and
478 that you have restarted after the upgrade (otherwise, you may still be running
479 your old installation).
480 </p>
481
<impo>
Please make sure you're not using <path>/usr/vice/cache</path> for disk cache
if you are deleting <path>/usr/vice</path>!
</impo>
486
487 <p>
488 The following directories may be safely removed from the system:
489 </p>
490
491 <ul>
492 <li><path>/etc/afs</path></li>
493 <li><path>/usr/vice</path></li>
494 <li><path>/usr/afs</path></li>
495 <li><path>/usr/afsws</path></li>
496 </ul>
497
498 <p>
499 The following files are also unnecessary:
500 </p>
501
502 <ul>
503 <li><path>/etc/init.d/afs</path></li>
504 <li><path>/etc/conf.d/afs</path></li>
505 </ul>
506
507 <pre caption="Removing the old files">
508 # <i>tar czf /root/oldafs-backup.tgz /etc/afs /usr/vice /usr/afs /usr/afsws</i>
509 # <i>rm -R /etc/afs /usr/vice /usr/afs /usr/afsws</i>
510 # <i>rm /etc/init.d/afs /etc/conf.d/afs</i>
511 </pre>
512
<p>
In case you've previously used the ebuilds <c>=openafs-1.2.13</c> or
<c>=openafs-1.3.85</c>, you may also have some other unnecessary files:
</p>
517
518 <ul>
519 <li><path>/etc/init.d/afs-client</path></li>
520 <li><path>/etc/init.d/afs-server</path></li>
521 <li><path>/etc/conf.d/afs-client</path></li>
522 <li><path>/etc/conf.d/afs-server</path></li>
523 </ul>
524
525 </body>
526 </section>
527 <section>
528 <title>Init Script changes</title>
529 <body>
530
<p>
Most people will have their systems configured to automatically start the
OpenAFS client and server on startup. Those who don't can safely skip this
section. If your system was configured to start them automatically, you will
need to re-enable this, because the names of the init scripts have changed.
</p>
538
539 <pre caption="Re-enabling OpenAFS startup at boot time">
540 # <i>rc-update del afs default</i>
541 # <i>rc-update add openafs-client default</i>
542 # <i>rc-update add openafs-server default</i>
543 </pre>
544
545 <p>
546 If you had <c>=openafs-1.2.13</c> or <c>=openafs-1.3.85</c>, you should remove
547 <path>afs-client</path> and <path>afs-server</path> from the default runlevel,
548 instead of <path>afs</path>.
549 </p>
550
551 </body>
552 </section>
553 <section>
554 <title>Troubleshooting: what if the automatic upgrade fails</title>
555 <body>
556
557 <p>
558 Don't panic. You shouldn't have lost any data or configuration files. So let's
559 analyze the situation. Please file a bug at <uri
560 link="http://bugs.gentoo.org">bugs.gentoo.org</uri> in any case, preferably
561 with as much information as possible.
562 </p>
563
<p>
If you're having problems starting the client, the following should help you
diagnose the problem:
</p>
568
569 <ul>
570 <li>
571 Run <c>dmesg</c>. The client normally sends error messages there.
572 </li>
573 <li>
574 Check <path>/etc/openafs/cacheinfo</path>. It should be of the form:
575 /afs:{path to disk cache}:{number of blocks for disk cache}.
576 Normally, your disk cache will be located at
577 <path>/var/cache/openafs</path>.
578 </li>
579 <li>
580 Check the output of <c>lsmod</c>. You will want to see a line beginning
581 with the word openafs.
582 </li>
583 <li><c>pgrep afsd</c> will tell you whether afsd is running or not</li>
584 <li>
585 <c>cat /proc/mounts</c> should reveal whether <path>/afs</path> has been
586 mounted.
587 </li>
588 </ul>
589
590 <p>
591 If you're having problems starting the server, then these hints may be useful:
592 </p>
593
594 <ul>
595 <li>
596 <c>pgrep bosserver</c> tells you whether the overseer is running or not. If
597 you have more than one overseer running, then something has gone wrong. In
598 that case, you should try a graceful OpenAFS server shutdown with <c>bos
599 shutdown localhost -localauth -wait</c>, check the result with <c>bos
600 status localhost -localauth</c>, kill all remaining overseer processes and
601 then finally check whether any server processes are still running (<c>ls
602 /usr/libexec/openafs</c> to get a list of them). Afterwards, do
603 <c>/etc/init.d/openafs-server zap</c> to reset the status of the server and
604 <c>/etc/init.d/openafs-server start</c> to try launching it again.
605 </li>
606 <li>
607 If you're using OpenAFS' own logging system (which is the default setting),
608 check out <path>/var/lib/openafs/logs/*</path>. If you're using the syslog
609 service, go check out its logs for any useful information.
610 </li>
611 </ul>
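<p>
Put together, the graceful shutdown and restart sequence for a confused
server described above looks like this:
</p>

<pre caption="Graceful shutdown and restart of the server">
# <i>bos shutdown localhost -localauth -wait</i>
# <i>bos status localhost -localauth</i>
# <i>/etc/init.d/openafs-server zap</i>
# <i>/etc/init.d/openafs-server start</i>
</pre>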
612
613 </body>
614 </section>
615 </chapter>
616
617 <chapter>
618 <title>Documentation</title>
619 <section>
620 <title>Getting AFS Documentation</title>
621 <body>
622
<p>
You can get the original IBM AFS documentation. It is very well written, and
you will really want to read it if it is up to you to administer an AFS
server.
</p>
627
628 <pre caption="Installing afsdoc">
629 # <i>emerge app-doc/afsdoc</i>
630 </pre>
631
632 <p>
633 You also have the option of using the documentation delivered with OpenAFS. It
634 is installed when you have the USE flag <c>doc</c> enabled while emerging
635 OpenAFS. It can be found in <path>/usr/share/doc/openafs-*/</path>. At the time
636 of writing, this documentation was a work in progress. It may however document
637 newer features in OpenAFS that aren't described in the original IBM AFS
638 Documentation.
639 </p>
640
641 </body>
642 </section>
643 </chapter>
644
645 <chapter>
646 <title>Client Installation</title>
647 <section>
648 <title>Building the Client</title>
649 <body>
650
651 <pre caption="Installing openafs">
652 # <i>emerge net-fs/openafs</i>
653 </pre>
654
655 <p>
656 After successful compilation you're ready to go.
657 </p>
658
659 </body>
660 </section>
661 <section>
662 <title>A simple global-browsing client installation</title>
663 <body>
664
<p>
If you're not part of a specific OpenAFS cell you want to access, and you just
want to try browsing globally available OpenAFS shares, then you can simply
install OpenAFS, leave the configuration untouched, and start
<path>/etc/init.d/openafs-client</path>.
</p>
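<p>
For example (assuming the install succeeded and you have network access to
the publicly listed cells):
</p>

<pre caption="Browsing globally available cells">
# <i>/etc/init.d/openafs-client start</i>
# <i>ls /afs</i>
</pre>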
671
672 </body>
673 </section>
674 <section>
675 <title>Accessing a specific OpenAFS cell</title>
676 <body>
677
678 <p>
679 If you need to access a specific cell, say your university's or company's own
680 cell, then some adjustments to your configuration have to be made.
681 </p>
682
683 <p>
684 Firstly, you need to update <path>/etc/openafs/CellServDB</path> with the
685 database servers for your cell. This information is normally provided by your
686 administrator.
687 </p>
688
689 <p>
690 Secondly, in order to be able to log onto the OpenAFS cell, you need to specify
691 its name in <path>/etc/openafs/ThisCell</path>.
692 </p>
693
694 <pre caption="Adjusting CellServDB and ThisCell">
695 CellServDB:
696 >netlabs #Cell name
697 10.0.0.1 #storage
698
699 ThisCell:
700 netlabs
701 </pre>
702
703 <warn>
704 Only use spaces inside the <path>CellServDB</path> file. The client will most
705 likely fail if you use TABs.
706 </warn>
707
708 <p>
709 CellServDB tells your client which server(s) it needs to contact for a
710 specific cell. ThisCell should be quite obvious. Normally you use a name
711 which is unique for your organisation. Your (official) domain might be a
712 good choice.
713 </p>
714
<p>
For a quick start, you can now start <path>/etc/init.d/openafs-client</path> and
use <c>klog</c> to authenticate yourself and start using your access to the
cell. For automatic logons to your cell, consult the appropriate section
below.
</p>
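<p>
A quick session could look like this (<b>jrandom</b> is a hypothetical user
in the <b>netlabs</b> example cell; <c>tokens</c> shows the credentials you
obtained):
</p>

<pre caption="Authenticating against your cell">
# <i>/etc/init.d/openafs-client start</i>
# <i>klog jrandom</i>
Password:
# <i>tokens</i>
</pre>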
721
722 </body>
723 </section>
724 <section>
725 <title>Adjusting the cache</title>
726 <body>
727
<note>
Unfortunately, the AFS client needs an ext2/3 filesystem for its cache to run
correctly. There are known issues when using other filesystems (e.g. using
reiserfs is not a good idea).
</note>
733
734 <p>
735 You can house your cache on an existing filesystem (if it's ext2/3), or you
736 may want to have a separate partition for that. The default location of the
737 cache is <path>/var/cache/openafs</path>, but you can change that by editing
738 <path>/etc/openafs/cacheinfo</path>. A standard size for your cache is
739 200MB, but more won't hurt.
740 </p>
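<p>
For instance, a <path>cacheinfo</path> file specifying a 200 MB cache (the
size is given in 1 KB blocks) in the default location would read:
</p>

<pre caption="Example /etc/openafs/cacheinfo">
/afs:/var/cache/openafs:200000
</pre>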
741
742 </body>
743 </section>
744 <section>
745 <title>Starting AFS on startup</title>
746 <body>
747
748 <p>
749 The following command will create the appropriate links to start your afs
750 client on system startup.
751 </p>
752
<warn>
You should always have a running AFS server in your domain when trying to
start the AFS client. If your AFS server is down, your system won't finish
booting until the client gives up with a timeout (and this timeout is quite
long).
</warn>
758
759 <pre caption="Adding AFS client to the default runlevel">
760 # <i>rc-update add openafs-client default</i>
761 </pre>
762
763 </body>
764 </section>
765 </chapter>
766
767 <chapter>
768 <title>Server Installation</title>
769 <section>
770 <title>Building the Server</title>
771 <body>
772
<note>
All commands should be entered on one line. In this document they are
sometimes wrapped onto two lines to make them easier to read.
</note>
777
778 <p>
779 If you haven't already done so, the following command will install all
780 necessary binaries for setting up an AFS Server <e>and</e> Client.
781 </p>
782
783 <pre caption="Installing openafs">
784 # <i>emerge net-fs/openafs</i>
785 </pre>
786
787 </body>
788 </section>
789 <section>
790 <title>Starting AFS Server</title>
791 <body>
792
793 <p>
794 You need to run the <c>bosserver</c> command to initialize the Basic OverSeer
795 (BOS) Server, which monitors and controls other AFS server processes on its
796 server machine. Think of it as init for the system. Include the <c>-noauth</c>
797 flag to disable authorization checking, since you haven't added the admin user
798 yet.
799 </p>
800
801 <warn>
802 Disabling authorization checking gravely compromises cell security. You must
803 complete all subsequent steps in one uninterrupted pass and must not leave
804 the machine unattended until you restart the BOS Server with authorization
805 checking enabled. Well, this is what the AFS documentation says. :)
806 </warn>
807
808 <pre caption="Initialize the Basic OverSeer Server">
809 # <i>bosserver -noauth &amp;</i>
810 </pre>
811
812 <p>
813 Verify that the BOS Server created <path>/etc/openafs/server/CellServDB</path>
814 and <path>/etc/openafs/server/ThisCell</path>
815 </p>
816
817 <pre caption="Check if CellServDB and ThisCell are created">
818 # <i>ls -al /etc/openafs/server/</i>
819 -rw-r--r-- 1 root root 41 Jun 4 22:21 CellServDB
820 -rw-r--r-- 1 root root 7 Jun 4 22:21 ThisCell
821 </pre>
822
823 </body>
824 </section>
825 <section>
826 <title>Defining Cell Name and Membership for Server Process</title>
827 <body>
828
829 <p>
830 Now assign your cell's name.
831 </p>
832
833 <impo>
834 There are some restrictions on the name format. Two of the most important
835 restrictions are that the name cannot include uppercase letters or more than
836 64 characters. Remember that your cell name will show up under
837 <path>/afs</path>, so you might want to choose a short one.
838 </impo>
839
<note>
In this and every following instruction in this guide, substitute the
fully-qualified hostname (such as <b>afs.gentoo.org</b>) of the machine you
are installing for the &lt;server name&gt; argument. For the &lt;cell
name&gt; argument, substitute your cell's complete name (such as
<b>gentoo</b>).
</note>
847
848 <p>
849 Run the <c>bos setcellname</c> command to set the cell name:
850 </p>
851
852 <pre caption="Set the cell name">
853 # <i>bos setcellname &lt;server name&gt; &lt;cell name&gt; -noauth</i>
854 </pre>
855
856 </body>
857 </section>
858 <section>
859 <title>Starting the Database Server Process</title>
860 <body>
861
862 <p>
863 Next use the <c>bos create</c> command to create entries for the four database
864 server processes in the <path>/etc/openafs/BosConfig</path> file. The four
865 processes run on database server machines only.
866 </p>
867
868 <table>
<tr>
<ti>kaserver</ti>
<ti>
The Authentication Server maintains the Authentication Database.
This can be replaced by a Kerberos 5 daemon. If anybody wants to try that,
feel free to update this document :)
</ti>
</tr>
877 <tr>
878 <ti>buserver</ti>
879 <ti>The Backup Server maintains the Backup Database</ti>
880 </tr>
881 <tr>
882 <ti>ptserver</ti>
883 <ti>The Protection Server maintains the Protection Database</ti>
884 </tr>
885 <tr>
886 <ti>vlserver</ti>
887 <ti>
888 The Volume Location Server maintains the Volume Location Database (VLDB).
889 Very important :)
890 </ti>
891 </tr>
892 </table>
893
894 <pre caption="Create entries for the database processes">
895 # <i>bos create &lt;server name&gt; kaserver \
896 simple /usr/libexec/openafs/kaserver \
897 -cell &lt;cell name&gt; -noauth</i>
898 # <i>bos create &lt;server name&gt; buserver \
899 simple /usr/libexec/openafs/buserver \
900 -cell &lt;cell name&gt; -noauth</i>
901 # <i>bos create &lt;server name&gt; ptserver \
902 simple /usr/libexec/openafs/ptserver \
903 -cell &lt;cell name&gt; -noauth</i>
904 # <i>bos create &lt;server name&gt; \
905 vlserver simple /usr/libexec/openafs/vlserver \
906 -cell &lt;cell name&gt; -noauth</i>
907 </pre>
908
909 <p>
910 You can verify that all servers are running with the <c>bos status</c> command:
911 </p>
912
913 <pre caption="Check if all the servers are running">
914 # <i>bos status &lt;server name&gt; -noauth</i>
915 Instance kaserver, currently running normally.
916 Instance buserver, currently running normally.
917 Instance ptserver, currently running normally.
918 Instance vlserver, currently running normally.
919 </pre>
920
921 </body>
922 </section>
923 <section>
924 <title>Initializing Cell Security</title>
925 <body>
926
<p>
Now we'll initialize the cell's security mechanisms. We'll begin by creating
the following two initial entries in the Authentication Database: the main
administrative account, called <b>admin</b> by convention, and an entry for
the AFS server processes, called <b>afs</b>. No user logs in under the
identity <b>afs</b>; instead, the Authentication Server's Ticket Granting
Service (TGS) module uses the account to encrypt the server tickets that
it grants to AFS clients. This sounds pretty much like Kerberos :)
</p>
936
<p>
Enter the <c>kas</c> interactive mode:
</p>
940
<pre caption="Entering the interactive mode">
# <i>kas -cell &lt;cell name&gt; -noauth</i>
ka&gt; <i>create afs</i>
initial_password:
Verifying, please re-enter initial_password:
ka&gt; <i>create admin</i>
initial_password:
Verifying, please re-enter initial_password:
ka&gt; <i>examine afs</i>

User data for afs
  key (0) cksum is 2651715259, last cpw: Mon Jun 4 20:49:30 2001
  password will never expire.
  An unlimited number of unsuccessful authentications is permitted.
  entry never expires.  Max ticket lifetime 100.00 hours.
  last mod on Mon Jun 4 20:49:30 2001 by &lt;none&gt;
  permit password reuse
ka&gt; <i>setfields admin -flags admin</i>
ka&gt; <i>examine admin</i>

User data for admin (ADMIN)
  key (0) cksum is 2651715259, last cpw: Mon Jun 4 20:49:59 2001
  password will never expire.
  An unlimited number of unsuccessful authentications is permitted.
  entry never expires.  Max ticket lifetime 25.00 hours.
  last mod on Mon Jun 4 20:51:10 2001 by &lt;none&gt;
  permit password reuse
ka&gt;
</pre>

<p>
Run the <c>bos adduser</c> command to add the <b>admin</b> user to
<path>/etc/openafs/server/UserList</path>.
</p>

<pre caption="Add the admin user to the UserList">
# <i>bos adduser &lt;server name&gt; admin -cell &lt;cell name&gt; -noauth</i>
</pre>

<p>
Issue the <c>bos addkey</c> command to define the AFS Server
encryption key in <path>/etc/openafs/server/KeyFile</path>.
</p>

<note>
When asked for the input key, enter the password you chose when creating
the <b>afs</b> entry with <c>kas</c>.
</note>

<pre caption="Entering the password">
# <i>bos addkey &lt;server name&gt; -kvno 0 -cell &lt;cell name&gt; -noauth</i>
input key:
Retype input key:
</pre>

<p>
Issue the <c>pts createuser</c> command to create a Protection Database entry
for the <b>admin</b> user.
</p>

<note>
By default, the Protection Server assigns AFS UID 1 to the <b>admin</b> user,
because it is the first user entry you are creating. If the local password file
(<path>/etc/passwd</path> or equivalent) already has an entry for <b>admin</b>
that assigns a different UID, use the <c>-id</c> argument to create matching
UIDs.
</note>

<pre caption="Create a Protection Database entry for the admin user">
# <i>pts createuser -name admin -cell &lt;cell name&gt; [-id &lt;AFS UID&gt;] -noauth</i>
</pre>

<p>
Issue the <c>pts adduser</c> command to make the <b>admin</b> user a member
of the system:administrators group, and the <c>pts membership</c> command to
verify the new membership.
</p>

<pre caption="Make admin a member of the administrators group and verify">
# <i>pts adduser admin system:administrators -cell &lt;cell name&gt; -noauth</i>
# <i>pts membership admin -cell &lt;cell name&gt; -noauth</i>
Groups admin (id: 1) is a member of:
  system:administrators
</pre>

</body>
</section>
<section>
<title>Properly (re-)starting the AFS server</title>
<body>

<p>
At this point, proper authentication is possible, and the OpenAFS server can
be started in the normal fashion. Note that authentication also requires a
running OpenAFS client (setting it up is described in the previous chapter).
<!-- Left out because deemed confusing>
Continuing without this step is possible, but in that case a quick restart of
the server is required, as demonstrated at the end of this section.
<-->
</p>

<pre caption="Shutdown bosserver">
# <i>bos shutdown &lt;server name&gt; -wait -noauth</i>
# <i>killall bosserver</i>
</pre>

<pre caption="Normal OpenAFS server (and client) startup">
# <i>/etc/init.d/openafs-server start</i>
# <i>/etc/init.d/openafs-client start</i>
</pre>

<pre caption="Adding the AFS server to the default runlevel">
# <i>rc-update add openafs-server default</i>
</pre>

<pre caption="Getting a token as the admin user">
# <i>klog admin</i>
</pre>

<!-- Left out because deemed confusing>
<p>
If you chose not to restart OpenAFS without the -noauth flag, you can simply
perform the following procedure instead:
</p>

<pre caption="Restart all AFS server processes">
# <i>bos restart &lt;server name&gt; -all -cell &lt;cell name&gt; -noauth</i>
</pre>
<-->

</body>
</section>
<section>
<title>Starting the File Server, Volume Server and Salvager</title>
<body>

<p>
Start the <c>fs</c> process, which consists of the File Server, Volume Server
and Salvager (fileserver, volserver and salvager processes).
</p>

<pre caption="Start the fs process">
# <i>bos create &lt;server name&gt; fs \
fs /usr/libexec/openafs/fileserver /usr/libexec/openafs/volserver /usr/libexec/openafs/salvager \
-cell &lt;cell name&gt; -noauth</i>
</pre>

<p>
Verify that all processes are running:
</p>

<pre caption="Check if all processes are running">
# <i>bos status &lt;server name&gt; -long -noauth</i>
Instance kaserver, (type is simple) currently running normally.
    Process last started at Mon Jun 4 21:07:17 2001 (2 proc starts)
    Last exit at Mon Jun 4 21:07:17 2001
    Command 1 is '/usr/libexec/openafs/kaserver'

Instance buserver, (type is simple) currently running normally.
    Process last started at Mon Jun 4 21:07:17 2001 (2 proc starts)
    Last exit at Mon Jun 4 21:07:17 2001
    Command 1 is '/usr/libexec/openafs/buserver'

Instance ptserver, (type is simple) currently running normally.
    Process last started at Mon Jun 4 21:07:17 2001 (2 proc starts)
    Last exit at Mon Jun 4 21:07:17 2001
    Command 1 is '/usr/libexec/openafs/ptserver'

Instance vlserver, (type is simple) currently running normally.
    Process last started at Mon Jun 4 21:07:17 2001 (2 proc starts)
    Last exit at Mon Jun 4 21:07:17 2001
    Command 1 is '/usr/libexec/openafs/vlserver'

Instance fs, (type is fs) currently running normally.
    Auxiliary status is: file server running.
    Process last started at Mon Jun 4 21:09:30 2001 (2 proc starts)
    Command 1 is '/usr/libexec/openafs/fileserver'
    Command 2 is '/usr/libexec/openafs/volserver'
    Command 3 is '/usr/libexec/openafs/salvager'
</pre>

<p>
Your next action depends on whether you have ever run AFS file server machines
in the cell before.
</p>

<p>
If you are installing the first AFS server ever in the cell, create the first
AFS volume, <b>root.afs</b>.
</p>

<note>
For the partition name argument, substitute the name of one of the machine's
AFS Server partitions. Any filesystem mounted at a directory named
<path>/vicepx</path>, where x is a letter from a to z, will be considered and
used as an AFS Server partition. Any Unix filesystem will do (as opposed to the
client's cache, which can only be ext2/3). Tip: for each <path>/vicepx</path>
mount point, the server checks whether a filesystem is actually mounted there;
if not, the server will not attempt to use it. This behaviour can be overridden
by putting a file named <path>AlwaysAttach</path> in the directory.
</note>
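<p>
As a hypothetical example (the device name <path>/dev/sdb1</path> is only an
illustration; substitute one of your own spare partitions), preparing an AFS
Server partition could look like this:
</p>

<pre caption="Example: preparing /vicepa (device name is an assumption)">
# <i>mkdir /vicepa</i>
# <i>mkfs.ext3 /dev/sdb1</i>
# <i>mount /dev/sdb1 /vicepa</i>
<comment>(Add a matching line to /etc/fstab so the partition is mounted at boot)</comment>
</pre>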

<pre caption="Create the root.afs volume">
# <i>vos create &lt;server name&gt; &lt;partition name&gt; root.afs -cell &lt;cell name&gt; -noauth</i>
</pre>

<p>
If there are existing AFS file server machines and volumes in the cell,
issue the <c>vos syncvldb</c> and <c>vos syncserv</c> commands to synchronize
the VLDB (Volume Location Database) with the actual state of volumes on the
local machine. This will copy all necessary data to your new server.
</p>

<p>
If the command fails with the message "partition /vicepa does not exist on
the server", ensure that the partition is mounted before running the OpenAFS
servers, or mount the directory and restart the processes using
<c>bos restart &lt;server name&gt; -all -cell &lt;cell
name&gt; -noauth</c>.
</p>

<pre caption="Synchronise the VLDB">
# <i>vos syncvldb &lt;server name&gt; -cell &lt;cell name&gt; -verbose -noauth</i>
# <i>vos syncserv &lt;server name&gt; -cell &lt;cell name&gt; -verbose -noauth</i>
</pre>

</body>
</section>
<section>
<title>Starting the Server Portion of the Update Server</title>
<body>

<pre caption="Start the update server">
# <i>bos create &lt;server name&gt; \
upserver simple "/usr/libexec/openafs/upserver \
-crypt /etc/openafs/server -clear /usr/libexec/openafs" \
-cell &lt;cell name&gt; -noauth</i>
</pre>

</body>
</section>
<section>
<title>Configuring the Top Level of the AFS filespace</title>
<body>

<p>
First you need to set some ACLs, so that any user can look up
<path>/afs</path>.
</p>

<note>
The default OpenAFS client configuration has <b>dynroot</b> enabled.
This option turns <path>/afs</path> into a virtual directory composed of the
contents of your <path>/etc/openafs/CellServDB</path> file. As such, the
following command will not work, because it requires a real AFS directory.
You can temporarily switch dynroot off by setting <b>ENABLE_DYNROOT</b> to
<b>no</b> in <path>/etc/conf.d/openafs-client</path>. Don't forget to
restart the client after changing this parameter.
</note>
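<p>
For reference, the temporary dynroot change described above could be performed
as follows (variable and service names as used in this guide):
</p>

<pre caption="Temporarily disabling dynroot">
<comment>(In /etc/conf.d/openafs-client, set:)</comment>
ENABLE_DYNROOT="no"
<comment>(Then restart the client:)</comment>
# <i>/etc/init.d/openafs-client restart</i>
</pre>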

<pre caption="Set access control lists">
# <i>fs setacl /afs system:anyuser rl</i>
</pre>

<p>
Then you need to create the root volume, mount it readonly on
<path>/afs/&lt;cell name&gt;</path> and read/write on <path>/afs/.&lt;cell
name&gt;</path>.
</p>

<pre caption="Prepare the root volume">
# <i>vos create &lt;server name&gt; &lt;partition name&gt; root.cell</i>
# <i>fs mkmount /afs/&lt;cell name&gt; root.cell</i>
# <i>fs setacl /afs/&lt;cell name&gt; system:anyuser rl</i>
# <i>fs mkmount /afs/.&lt;cell name&gt; root.cell -rw</i>
</pre>

<pre caption="Adding volumes underneath">
# <i>vos create &lt;server name&gt; &lt;partition name&gt; &lt;myvolume&gt;</i>
# <i>fs mkmount /afs/&lt;cell name&gt;/&lt;mymountpoint&gt; &lt;myvolume&gt;</i>
# <i>fs mkmount /afs/&lt;cell name&gt;/.&lt;mymountpoint&gt; &lt;myvolume&gt; -rw</i>
# <i>fs setquota /afs/&lt;cell name&gt;/.&lt;mymountpoint&gt; -max &lt;quotum&gt;</i>
</pre>

<p>
Finally, you're done! You should now have a working AFS file server on your
local network. Time to get a big cup of coffee and print out the AFS
documentation!
</p>

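<p>
A few quick sanity checks can confirm that the new cell is healthy. These are
standard OpenAFS client commands; the output will vary per site:
</p>

<pre caption="Quick sanity checks">
# <i>tokens</i>
<comment>(Shows your current AFS tokens)</comment>
# <i>fs checkservers</i>
<comment>(Reports unreachable file servers, if any)</comment>
# <i>vos listvol &lt;server name&gt; &lt;partition name&gt;</i>
<comment>(Lists the volumes on the new server partition)</comment>
</pre>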
<note>
For the AFS server to function properly, it is very important that all system
clocks are synchronized. This is best accomplished by installing an NTP server
on one machine (e.g. the AFS server) and synchronizing all client clocks
with an NTP client. This can also be done by the AFS client.
</note>
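<p>
As a hypothetical example, a basic NTP setup on Gentoo could look like this
(package and service names may differ on your system):
</p>

<pre caption="Example: basic NTP setup">
# <i>emerge net-misc/ntp</i>
# <i>rc-update add ntpd default</i>
# <i>/etc/init.d/ntpd start</i>
</pre>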

</body>
</section>
</chapter>

<chapter>
<title>Basic Administration</title>
<section>
<title>Disclaimer</title>
<body>

<p>
OpenAFS is an extensive technology. Please read the AFS documentation for more
information. We only list a few administrative tasks in this chapter.
</p>

</body>
</section>
<section>
<title>Configuring PAM to Acquire an AFS Token on Login</title>
<body>

<p>
To use AFS you need to authenticate against the KA Server if using an
implementation of AFS Kerberos 4, or against a Kerberos 5 KDC if using
MIT, Heimdal, or Shishi Kerberos 5. However, in order to log in to a
machine you will also need a user account; this can be local in
<path>/etc/passwd</path>, or in NIS, LDAP (OpenLDAP), or a Hesiod database.
PAM allows Gentoo to tie the authentication against AFS to the login to the
user account.
</p>

<p>
You will need to update <path>/etc/pam.d/system-auth</path>, which is
used by the other configurations. <c>use_first_pass</c> indicates that the
password entered at login will be tried first, and <c>ignore_root</c> stops
the local superuser from being checked against AFS, so that root can still
log in if AFS or the network fails.
</p>

<pre caption="/etc/pam.d/system-auth">
auth       required     pam_env.so
auth       sufficient   pam_unix.so likeauth nullok
auth       sufficient   pam_afs.so.1 use_first_pass ignore_root
auth       required     pam_deny.so

account    required     pam_unix.so

password   required     pam_cracklib.so retry=3
password   sufficient   pam_unix.so nullok md5 shadow use_authtok
password   required     pam_deny.so

session    required     pam_limits.so
session    required     pam_unix.so
</pre>

<p>
In order for <c>su</c> to keep the real user's token and to prevent local
users from gaining AFS access, change <path>/etc/pam.d/su</path> as follows:
</p>

<pre caption="/etc/pam.d/su">
<comment># Here, users with uid &gt; 100 are considered to belong to AFS and users with
# uid &lt;= 100 are ignored by pam_afs.</comment>
auth       sufficient   pam_afs.so.1 ignore_uid 100

auth       sufficient   pam_rootok.so

<comment># If you want to restrict users being allowed to su even more,
# create /etc/security/suauth.allow (or similar) that is only
# writable by root, and add the users that are allowed to su to that
# file, one per line.
#auth       required     pam_listfile.so item=ruser \
#    sense=allow onerr=fail file=/etc/security/suauth.allow

# Uncomment this to allow users in the wheel group to su without
# entering a passwd.
#auth       sufficient   pam_wheel.so use_uid trust

# Alternatively to the above, you can implement a list of users that
# do not need to supply a passwd.
#auth       sufficient   pam_listfile.so item=ruser \
#    sense=allow onerr=fail file=/etc/security/suauth.nopass

# Comment this to allow any user, even those not in the 'wheel'
# group, to su.</comment>
auth       required     pam_wheel.so use_uid

auth       required     pam_stack.so service=system-auth

account    required     pam_stack.so service=system-auth

password   required     pam_stack.so service=system-auth

session    required     pam_stack.so service=system-auth
session    optional     pam_xauth.so

<comment># Here we prevent the real user's token from being dropped</comment>
session    optional     pam_afs.so.1 no_unlog
</pre>

</body>
</section>
</chapter>
</guide>
