<?xml version='1.0' encoding="UTF-8"?>
<!-- $Header: /var/cvsroot/gentoo/xml/htdocs/doc/en/openafs.xml,v 1.22 2005/10/29 21:10:15 so Exp $ -->

<!DOCTYPE guide SYSTEM "/dtd/guide.dtd">

<guide link="/doc/en/openafs.xml">
<title>Gentoo Linux OpenAFS Guide</title>

<author title="Editor">
<mail link="darks@gentoo.org">Holger Brueckner</mail>
</author>
<author title="Editor">
<mail link="bennyc@gentoo.org">Benny Chuang</mail>
</author>
<author title="Editor">
<mail link="blubber@gentoo.org">Tiemo Kieft</mail>
</author>
<author title="Editor">
<mail link="fnjordy@gmail.com">Steven McCoy</mail>
</author>
<author title="Editor">
<mail link="stefaan@gentoo.org">Stefaan De Roeck</mail>
</author>
<author title="Editor">
<mail link="fox2mike@gentoo.org">Shyam Mani</mail>
</author>

<abstract>
This guide shows you how to install an OpenAFS server and client on Gentoo
Linux.
</abstract>

<!-- The content of this document is licensed under the CC-BY-SA license -->
<!-- See http://creativecommons.org/licenses/by-sa/2.5 -->
<license/>

<version>1.1</version>
<date>2005-11-10</date>

<chapter>
<title>Overview</title>
<section>
<title>About this Document</title>
<body>

<p>
This document provides you with all necessary steps to install an OpenAFS
server on Gentoo Linux. Parts of this document are taken from the AFS FAQ and
IBM's Quick Beginnings guide on AFS. Well, never reinvent the wheel. :)
</p>

</body>
</section>
<section>
<title>What is AFS?</title>
<body>

<p>
AFS is a distributed filesystem that enables co-operating hosts
(clients and servers) to efficiently share filesystem resources
across both local area and wide area networks. Clients keep a
cache of frequently used objects (files) for quicker access to them.
</p>

<p>
AFS is based on a distributed file system originally developed
at the Information Technology Center at Carnegie-Mellon University
that was called the "Andrew File System". "Andrew" was the name of the
research project at CMU, honouring the founders of the university. Once
Transarc was formed and AFS became a product, the "Andrew" was dropped to
indicate that AFS had gone beyond the Andrew research project and had become
a supported, product-quality filesystem. However, there were a number of
existing cells that rooted their filesystem as /afs. At the time, changing
the root of the filesystem was a non-trivial undertaking. So, to save the
early AFS sites from having to rename their filesystem, AFS remained as the
name and filesystem root.
</p>

</body>
</section>
<section>
<title>What is an AFS cell?</title>
<body>

<p>
An AFS cell is a collection of servers grouped together administratively and
presenting a single, cohesive filesystem. Typically, an AFS cell is a set of
hosts that use the same Internet domain name (for example, gentoo.org). Users
log into AFS client workstations, which request information and files from the
cell's servers on behalf of the users. Users don't know which server holds the
file they are accessing. They won't even notice if a server is moved to another
room, since every volume can be replicated and moved to another server without
any user noticing. The files are always accessible. Well, it's like NFS on
steroids :)
</p>

</body>
</section>
<section>
<title>What are the benefits of using AFS?</title>
<body>

<p>
The main strengths of AFS are its:
</p>

<ul>
<li>caching facility (on the client side, typically 100MB to 1GB)</li>
<li>security features (Kerberos 4 based, access control lists)</li>
<li>simplicity of addressing (you just have one filesystem)</li>
<li>scalability (add further servers to your cell as needed)</li>
<li>communications protocol</li>
</ul>

</body>
</section>
<section>
<title>Where can I get more information?</title>
<body>

<p>
Read the <uri link="http://www.angelfire.com/hi/plutonic/afs-faq.html">AFS
FAQ</uri>.
</p>

<p>
The OpenAFS main page is at <uri
link="http://www.openafs.org">www.openafs.org</uri>.
</p>

<p>
AFS was originally developed by Transarc, which is now owned by IBM.
You can find some information about AFS on
<uri link="http://www.transarc.ibm.com/Product/EFS/AFS/index.html">Transarc's
webpage</uri>.
</p>

</body>
</section>
<section>
<title>How Can I Debug Problems?</title>
<body>

<p>
OpenAFS has great logging facilities. However, by default it logs straight into
its own log files instead of through the system logging facilities on your
system. To have the servers log through your system logger, use the
<c>-syslog</c> option for all <c>bos</c> commands.
</p>
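
<p>
As a hypothetical example, starting the overseer with syslog logging might look
like this (the exact flags accepted can vary between OpenAFS versions, so check
the <c>bosserver</c> man page first):
</p>

<pre caption="Example: sending server logs to syslog (sketch)">
# <i>bosserver -syslog &amp;</i>
</pre>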

</body>
</section>
</chapter>

<chapter>
<title>Upgrading from previous versions</title>
<section>
<title>Introduction</title>
<body>

<p>
This section aims to help you through the process of upgrading an existing
OpenAFS installation to OpenAFS version 1.4.0 or higher (or 1.2.x starting from
1.2.13; the latter will not be handled specifically, as most people will want
1.4 for, among other things, linux-2.6 support, large file support and bug
fixes).
</p>

<p>
If you're dealing with a clean install of a 1.4 version of OpenAFS, then you can
safely skip this chapter. However, if you're upgrading from a previous version,
we strongly urge you to follow the guidelines in the next sections. The
transition script in the ebuild is designed to assist you in quickly upgrading
and restarting. Please note that it will (for safety reasons) not delete
configuration files and startup scripts in old places, nor automatically change
your boot configuration to use the new scripts, etc. If you need further
convincing: using an old OpenAFS kernel module together with the updated system
binaries may very well cause your kernel to freak out. So, let's read on for a
clean and easy transition, shall we?
</p>

<note>
This chapter has been written bearing many different system configurations in
mind. Still, it is possible that due to peculiar tweaks a user has made, his or
her specific situation may not be described here. A user with enough
self-confidence to tweak his system should be experienced enough to apply the
given remarks where appropriate. Vice versa, a user who has done little to his
system but install the previous ebuild can skip most of the warnings further
on.
</note>

</body>
</section>
<section>
<title>Differences to previous versions</title>
<body>

<p>
Traditionally, OpenAFS has used the same path conventions that IBM Transarc
labs used before the code was forked. Understandably, old AFS setups continue
to use these legacy path conventions. More recent setups conform to the FHS by
using standard locations (as seen in many Linux distributions). The following
table is compiled from the configure script and the README accompanying the
OpenAFS distribution tarballs:
</p>

<table>
<tr>
<th>Directory</th>
<th>Purpose</th>
<th>Transarc Mode</th>
<th>Default Mode</th>
<th>Translation to Gentoo</th>
</tr>
<tr>
<ti>viceetcdir</ti>
<ti>Client configuration</ti>
<ti>/usr/vice/etc</ti>
<ti>$(sysconfdir)/openafs</ti>
<ti>/etc/openafs</ti>
</tr>
<tr>
<ti>unnamed</ti>
<ti>Client binaries</ti>
<ti>unspecified</ti>
<ti>$(bindir)</ti>
<ti>/usr/bin</ti>
</tr>
<tr>
<ti>afsconfdir</ti>
<ti>Server configuration</ti>
<ti>/usr/afs/etc</ti>
<ti>$(sysconfdir)/openafs/server</ti>
<ti>/etc/openafs/server</ti>
</tr>
<tr>
<ti>afssrvdir</ti>
<ti>Internal server binaries</ti>
<ti>/usr/afs/bin (servers)</ti>
<ti>$(libexecdir)/openafs</ti>
<ti>/usr/libexec/openafs</ti>
</tr>
<tr>
<ti>afslocaldir</ti>
<ti>Server state</ti>
<ti>/usr/afs/local</ti>
<ti>$(localstatedir)/openafs</ti>
<ti>/var/lib/openafs</ti>
</tr>
<tr>
<ti>afsdbdir</ti>
<ti>Auth/serverlist/... databases</ti>
<ti>/usr/afs/db</ti>
<ti>$(localstatedir)/openafs/db</ti>
<ti>/var/lib/openafs/db</ti>
</tr>
<tr>
<ti>afslogdir</ti>
<ti>Log files</ti>
<ti>/usr/afs/logs</ti>
<ti>$(localstatedir)/openafs/logs</ti>
<ti>/var/lib/openafs/logs</ti>
</tr>
<tr>
<ti>afsbosconfig</ti>
<ti>Overseer config</ti>
<ti>$(afslocaldir)/BosConfig</ti>
<ti>$(afsconfdir)/BosConfig</ti>
<ti>/etc/openafs/BosConfig</ti>
</tr>
</table>

<p>
There are some other oddities, like binaries being put in
<path>/usr/vice/etc</path> in Transarc mode, but this list is not intended
to be comprehensive. It is rather meant to serve as a reference for those
troubleshooting the config file transition.
</p>

<p>
Also as a result of the path changes, the default disk cache location has
changed from <path>/usr/vice/cache</path> to
<path>/var/cache/openafs</path>.
</p>

<p>
Furthermore, the init script has been split into a client and a server part.
You used to have <path>/etc/init.d/afs</path>, but now you'll end up with both
<path>/etc/init.d/openafs-client</path> and
<path>/etc/init.d/openafs-server</path>.
Consequently, the configuration file <path>/etc/conf.d/afs</path> has been split
into <path>/etc/conf.d/openafs-client</path> and
<path>/etc/conf.d/openafs-server</path>. Also, the options in
<path>/etc/conf.d/afs</path> that turned either the client or the server on or
off have been obsoleted.
</p>

<p>
Another change to the init script is that it no longer checks your disk cache
setup. The old code required that a separate ext2 partition be
mounted at <path>/usr/vice/cache</path>. There were some problems with that:
</p>

<ul>
<li>
Though it's a very logical setup, your cache doesn't need to be on a
separate partition. As long as you make sure that the amount of space
specified in <path>/etc/openafs/cacheinfo</path> really is available
for disk cache usage, you're safe. So there is no real problem with
having the cache on your root partition.
</li>
<li>
Some people use soft links to point to the real disk cache location.
The init script didn't like this, because the cache location then
didn't turn up in <path>/proc/mounts</path>.
</li>
<li>
Many prefer ext3 over ext2 nowadays. Both filesystems are valid for
usage as a disk cache. Any other filesystem is unsupported (e.g. don't
try reiserfs; you'll get a huge warning, and you can expect failure
afterwards).
</li>
</ul>
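
<p>
To illustrate the first point: a quick sanity check (assuming the default
paths) is to compare the block count in <path>/etc/openafs/cacheinfo</path>
against the free space on the partition holding the cache:
</p>

<pre caption="Sanity-checking the disk cache size (default paths assumed)">
# <i>cat /etc/openafs/cacheinfo</i>
/afs:/var/cache/openafs:200000
# <i>df -k /var/cache/openafs</i>
</pre>

<p>
The third field of <path>cacheinfo</path> is the cache size in 1K blocks
(200000 here, roughly 200MB); it must fit within the available space reported
by <c>df</c>.
</p>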

</body>
</section>
<section>
<title>Transition to the new paths</title>
<body>

<p>
First of all, emerging a newer OpenAFS version should not overwrite any old
configuration files. The script is designed not to change any files
already present on the system. So even if you have a totally messed up
configuration with a mix of old and new locations, the script should not
cause further problems. Also, if a running OpenAFS server is detected, the
installation will abort, preventing possible database corruption.
</p>

<p>
One caveat though -- there have been ebuilds floating around the internet that
partially disable the protection that Gentoo puts on <path>/etc</path>. These
ebuilds have never been distributed by Gentoo. You might want to check the
<c>CONFIG_PROTECT_MASK</c> variable in the output of the following command:
</p>

<pre caption="Checking your CONFIG_PROTECT_MASK">
# <i>emerge info | grep CONFIG_PROTECT_MASK</i>
CONFIG_PROTECT_MASK="/etc/gconf /etc/terminfo /etc/texmf/web2c /etc/env.d"
</pre>

<p>
Though nothing in this ebuild would touch the files in <path>/etc/afs</path>,
upgrading will cause the removal of your older OpenAFS installation. Files in
<c>CONFIG_PROTECT_MASK</c> that belong to the older installation will be removed
as well.
</p>

<p>
It should be clear to the experienced user that if he has tweaked his system
by manually adding soft links (e.g. <path>/usr/afs/etc</path> to
<path>/etc/openafs</path>), the new installation may run fine while still using
the old configuration files. In this case, there has been no real transition,
and cleaning up the old installation will result in a broken OpenAFS config.
</p>

<p>
Now that you know what doesn't happen, you may want to know what does:
</p>

<ul>
<li>
<path>/usr/afs/etc</path> is copied to <path>/etc/openafs/server</path>
</li>
<li>
<path>/usr/vice/etc</path> is copied to <path>/etc/openafs</path>
</li>
<li>
<path>/usr/afs/local</path> is copied to <path>/var/lib/openafs</path>
</li>
<li>
<path>/usr/afs/local/BosConfig</path> is copied to
<path>/etc/openafs/BosConfig</path>, while replacing occurrences of
<path>/usr/afs/bin/</path> with <path>/usr/libexec/openafs</path>,
<path>/usr/afs/etc</path> with <path>/etc/openafs/server</path>
and <path>/usr/afs/bin</path> (without the trailing slash this time) with
<path>/usr/bin</path>
</li>
<li>
<path>/usr/afs/db</path> is copied to <path>/var/lib/openafs/db</path>
</li>
<li>
The configuration file <path>/etc/conf.d/afs</path> is copied to
<path>/etc/conf.d/openafs-client</path>, as all known old options were
destined for client usage only.
</li>
</ul>

</body>
</section>
<section>
<title>The upgrade itself</title>
<body>

<p>
So you haven't got an OpenAFS server setup? Or maybe you do, but the previous
sections have informed you about what is going to happen, and you're still
ready for it?
</p>

<p>
Let's go ahead with it then!
</p>

<p>
If you do have a server running, you want to shut it down now.
</p>

<pre caption="Stopping OpenAFS (in case you have a server)">
# <i>/etc/init.d/afs stop</i>
</pre>

<p>
And then the upgrade itself.
</p>

<pre caption="Now upgrade!">
# <i>emerge -u openafs</i>
</pre>

</body>
</section>
<section>
<title>Restarting OpenAFS</title>
<body>

<p>
If you had an OpenAFS client running, you were not forced to shut it down
before the upgrade. Now is the time to do that.
</p>

<pre caption="Stopping OpenAFS client after upgrade">
# <i>/etc/init.d/afs stop</i>
</pre>

<p>
As you may want to keep the downtime to a minimum, you can restart your
OpenAFS server right away.
</p>

<pre caption="Restarting OpenAFS server after upgrade">
# <i>/etc/init.d/openafs-server start</i>
</pre>

<p>
You can check whether it's running properly with the following command:
</p>

<pre caption="Checking OpenAFS server status">
# <i>/usr/bin/bos status localhost -localauth</i>
</pre>

<p>
Before starting the OpenAFS client again, please take the time to check your
cache settings. They are determined by <path>/etc/openafs/cacheinfo</path>.
To restart your OpenAFS client installation, please type the following:
</p>

<pre caption="Restarting OpenAFS client after upgrade">
# <i>/etc/init.d/openafs-client start</i>
</pre>

</body>
</section>
<section>
<title>Cleaning up afterwards</title>
<body>

<p>
Before cleaning up, please make really sure that everything runs smoothly and
that you have restarted after the upgrade (otherwise, you may still be running
your old installation).
</p>

<impo>
Please make sure you're not using <path>/usr/vice/cache</path> for the disk
cache if you are deleting <path>/usr/vice</path>!
</impo>

<p>
The following directories may be safely removed from the system:
</p>

<ul>
<li><path>/etc/afs</path></li>
<li><path>/usr/vice</path></li>
<li><path>/usr/afs</path></li>
<li><path>/usr/afsws</path></li>
</ul>

<p>
The following files are also unnecessary:
</p>

<ul>
<li><path>/etc/init.d/afs</path></li>
<li><path>/etc/conf.d/afs</path></li>
</ul>

<pre caption="Removing the old files">
# <i>tar czf /root/oldafs-backup.tgz /etc/afs /usr/vice /usr/afs /usr/afsws</i>
# <i>rm -R /etc/afs /usr/vice /usr/afs /usr/afsws</i>
# <i>rm /etc/init.d/afs /etc/conf.d/afs</i>
</pre>

<p>
In case you've previously used the ebuilds <c>=openafs-1.2.13</c> or
<c>=openafs-1.3.85</c>, you may also have some other unnecessary files:
</p>

<ul>
<li><path>/etc/init.d/afs-client</path></li>
<li><path>/etc/init.d/afs-server</path></li>
<li><path>/etc/conf.d/afs-client</path></li>
<li><path>/etc/conf.d/afs-server</path></li>
</ul>

</body>
</section>
<section>
<title>Init Script changes</title>
<body>

<p>
Now most people would have their systems configured to automatically start
the OpenAFS client and server on startup. Those who don't can safely skip
this section. If you had your system configured to start them automatically,
you will need to re-enable this, because the names of the init scripts have
changed.
</p>

<pre caption="Re-enabling OpenAFS startup at boot time">
# <i>rc-update del afs default</i>
# <i>rc-update add openafs-client default</i>
# <i>rc-update add openafs-server default</i>
</pre>

<p>
If you had <c>=openafs-1.2.13</c> or <c>=openafs-1.3.85</c>, you should remove
<path>afs-client</path> and <path>afs-server</path> from the default runlevel,
instead of <path>afs</path>.
</p>

</body>
</section>
<section>
<title>Troubleshooting: what if the automatic upgrade fails</title>
<body>

<p>
Don't panic. You shouldn't have lost any data or configuration files. So
let's analyze the situation. Please file a bug at
<uri link="http://bugs.gentoo.org">bugs.gentoo.org</uri> in any case,
preferably with as much information as possible.
</p>

<p>
If you're having problems starting the client, this should help you diagnose
the problem:
</p>

<ul>
<li>
Run <c>dmesg</c>. The client normally sends error messages there.
</li>
<li>
Check <path>/etc/openafs/cacheinfo</path>. It should be of the form:
<c>/afs:{path to disk cache}:{number of blocks for disk cache}</c>.
Normally, your disk cache will be located at
<path>/var/cache/openafs</path>.
</li>
<li>
Check the output of <c>lsmod</c>. You will want to see a line beginning
with the word openafs.
</li>
<li><c>pgrep afsd</c> will tell you whether afsd is running or not</li>
<li>
<c>cat /proc/mounts</c> should reveal whether <path>/afs</path> has been
mounted.
</li>
</ul>
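
<p>
The checks above can be run in one quick sweep, for example:
</p>

<pre caption="Quick client diagnosis (example commands)">
# <i>dmesg | tail</i>
# <i>cat /etc/openafs/cacheinfo</i>
# <i>lsmod | grep openafs</i>
# <i>pgrep afsd</i>
# <i>grep afs /proc/mounts</i>
</pre>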

<p>
If you're having problems starting the server, then these hints may be useful:
</p>

<ul>
<li>
<c>pgrep bosserver</c> tells you whether the overseer is running or not. If
you have more than one overseer running, then something has gone wrong. In
that case, you should try a graceful OpenAFS server shutdown with <c>bos
shutdown localhost -localauth -wait</c>, check the result with <c>bos
status localhost -localauth</c>, kill all remaining overseer processes and
then finally check whether any server processes are still running (<c>ls
/usr/libexec/openafs</c> gives you a list of them). Afterwards, run
<c>/etc/init.d/openafs-server zap</c> to reset the status of the server and
<c>/etc/init.d/openafs-server start</c> to try launching it again.
</li>
<li>
If you're using OpenAFS' own logging system (which is the default setting),
check out <path>/var/lib/openafs/logs/*</path>. If you're using the syslog
service, check its logs for any useful information.
</li>
</ul>
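
<p>
Put together, a recovery attempt for a misbehaving server might look like
this (a sketch; double-check the output of each step before continuing):
</p>

<pre caption="Example recovery sequence for a misbehaving server (sketch)">
# <i>bos shutdown localhost -localauth -wait</i>
# <i>bos status localhost -localauth</i>
# <i>pkill bosserver</i>
# <i>/etc/init.d/openafs-server zap</i>
# <i>/etc/init.d/openafs-server start</i>
</pre>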

</body>
</section>
</chapter>

<chapter>
<title>Documentation</title>
<section>
<title>Getting AFS Documentation</title>
<body>

<p>
You can get the original IBM AFS documentation. It is very well written and
you really want to read it if it is up to you to administer an AFS server.
</p>

<pre caption="Installing afsdoc">
# <i>emerge app-doc/afsdoc</i>
</pre>

<p>
You also have the option of using the documentation delivered with OpenAFS. It
is installed when you have the USE flag <c>doc</c> enabled while emerging
OpenAFS. It can be found in <path>/usr/share/doc/openafs-*/</path>. At the time
of writing, this documentation was a work in progress. It may however document
newer features in OpenAFS that aren't described in the original IBM AFS
documentation.
</p>

</body>
</section>
</chapter>

<chapter>
<title>Client Installation</title>
<section>
<title>Building the Client</title>
<body>

<note>
All commands should be entered on one line! In this document they are
sometimes wrapped to two lines to make them easier to read.
</note>

<pre caption="Installing openafs">
# <i>emerge net-fs/openafs</i>
</pre>

<p>
After successful compilation you're ready to go.
</p>

</body>
</section>
<section>
<title>A simple global-browsing client installation</title>
<body>

<p>
If you're not part of a specific OpenAFS cell you want to access, and you just
want to try browsing globally available OpenAFS shares, then you can just
install OpenAFS, leave the configuration untouched, and start
<path>/etc/init.d/openafs-client</path>.
</p>

</body>
</section>
<section>
<title>Accessing a specific OpenAFS cell</title>
<body>

<p>
If you need to access a specific cell, say your university's or company's own
cell, then some adjustments to your configuration have to be made.
</p>

<p>
Firstly, you need to update <path>/etc/openafs/CellServDB</path> with the
database servers for your cell. This information is normally provided by your
administrator.
</p>

<p>
Secondly, in order to be able to log onto the OpenAFS cell, you need to specify
its name in <path>/etc/openafs/ThisCell</path>.
</p>

<pre caption="Adjusting CellServDB and ThisCell">
CellServDB:
>netlabs        #Cell name
10.0.0.1        #storage

ThisCell:
netlabs
</pre>

<warn>
Only use spaces inside the <path>CellServDB</path> file. The client will most
likely fail if you use TABs.
</warn>
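
<p>
Since a stray TAB is easy to overlook, you can check for one with a standard
shell one-liner (the path assumes the default client configuration location):
</p>

<pre caption="Checking CellServDB for TABs">
# <i>grep -n "$(printf '\t')" /etc/openafs/CellServDB</i>
</pre>

<p>
Any line printed by this command contains a TAB and should be fixed.
</p>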

<p>
For a quick start, you can now start <path>/etc/init.d/openafs-client</path> and
use <c>klog</c> to authenticate yourself and start using your access to the
cell. For automatic logons to your cell, you want to consult the appropriate
section below.
</p>

</body>
</section>
<section>
<title>Adjusting the cache</title>
<body>

<note>
Unfortunately, the AFS client needs an ext2/3 filesystem for its cache to run
correctly; there are some known issues with reiserfs.
</note>

<p>
You can house your cache on an existing filesystem (if it's ext2/3), but some
may want to create a separate partition for that. The default location of the
cache is <path>/var/cache/openafs</path>, but you can change that by editing
<path>/etc/openafs/cacheinfo</path>. A standard size for your cache is
200MB, but more won't hurt.
</p>
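
<p>
For reference, a <path>cacheinfo</path> file using the defaults mentioned above
would look like this (the third field is the cache size in 1K blocks; 200000
blocks is roughly 200MB):
</p>

<pre caption="Example /etc/openafs/cacheinfo">
/afs:/var/cache/openafs:200000
</pre>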

</body>
</section>
<section>
<title>Adjusting the cell access configuration</title>
<body>

<p>
In case you want to do more than just read-only browsing of globally available
AFS cells, you need to adjust the two files <path>CellServDB</path> and
<path>ThisCell</path>. These are located in <path>/etc/openafs</path>.
</p>

<pre caption="Adjusting CellServDB and ThisCell">
CellServDB:
>netlabs        #Cell name
10.0.0.1        #storage

ThisCell:
netlabs
</pre>

<warn>
Only use spaces inside the <path>CellServDB</path> file. The client will most
likely fail if you use TABs.
</warn>

<p>
<path>CellServDB</path> tells your client which server(s) it needs to contact
for a specific cell. <path>ThisCell</path> should be quite obvious. Normally
you use a name which is unique for your organisation. Your (official) domain
might be a good choice.
</p>

</body>
</section>
<section>
<title>Starting AFS on startup</title>
<body>

<p>
The following command will create the appropriate links to start your AFS
client on system startup.
</p>

<warn>
You should always have a running AFS server in your domain when trying to
start the AFS client. Your system won't boot until a timeout is reached if
your AFS server is down (and this timeout is quite long).
</warn>

<pre caption="Adding the AFS client to the default runlevel">
# <i>rc-update add openafs-client default</i>
</pre>

</body>
</section>
</chapter>

<chapter>
<title>Server Installation</title>
<section>
<title>Building the Server</title>
<body>

<p>
The following command will install all necessary binaries for setting up an AFS
server <e>and</e> client.
</p>

<pre caption="Installing openafs">
# <i>emerge net-fs/openafs</i>
</pre>

</body>
</section>
<section>
<title>Starting AFS Server</title>
<body>

<p>
You need to remove the sample <path>CellServDB</path> and <path>ThisCell</path>
files first.
</p>

<pre caption="Remove sample files">
# <i>rm /usr/vice/etc/ThisCell</i>
# <i>rm /usr/vice/etc/CellServDB</i>
</pre>

<p>
Next you will run the <c>bosserver</c> command to initialize the Basic OverSeer
(BOS) Server, which monitors and controls other AFS server processes on its
server machine. Think of it as init for the system. Include the <c>-noauth</c>
flag to disable authorization checking, since you haven't added the admin user
yet.
</p>

<warn>
Disabling authorization checking gravely compromises cell security. You must
complete all subsequent steps in one uninterrupted pass and must not leave
the machine unattended until you restart the BOS Server with authorization
checking enabled. Well, this is what the AFS documentation says. :)
</warn>

<pre caption="Initialize the Basic OverSeer Server">
# <i>bosserver -noauth &amp;</i>
</pre>

<p>
Verify that the BOS Server created <path>/usr/vice/etc/CellServDB</path>
and <path>/usr/vice/etc/ThisCell</path>:
</p>

<pre caption="Check if CellServDB and ThisCell are created">
# <i>ls -al /usr/vice/etc/</i>
-rw-r--r--  1 root  root  41 Jun  4 22:21 CellServDB
-rw-r--r--  1 root  root   7 Jun  4 22:21 ThisCell
</pre>

</body>
</section>
<section>
<title>Defining Cell Name and Membership for Server Processes</title>
<body>

<p>
Now assign your cell's name.
</p>

<impo>
There are some restrictions on the name format. Two of the most important
restrictions are that the name cannot include uppercase letters or more than
64 characters. Remember that your cell name will show up under
<path>/afs</path>, so you might want to choose a short one.
</impo>

<note>
In the following and every other instruction in this guide, for the &lt;server
name&gt; argument substitute the fully-qualified hostname (such as
<b>afs.gentoo.org</b>) of the machine you are installing. For the &lt;cell
name&gt; argument substitute your cell's complete name (such as
<b>gentoo</b>).
</note>

<p>
Run the <c>bos setcellname</c> command to set the cell name:
</p>

<pre caption="Set the cell name">
# <i>bos setcellname &lt;server name&gt; &lt;cell name&gt; -noauth</i>
</pre>

</body>
</section>
<section>
<title>Starting the Database Server Processes</title>
<body>

<p>
Next use the <c>bos create</c> command to create entries for the four database
server processes in the <path>/etc/openafs/BosConfig</path> file. The four
processes run on database server machines only.
</p>

<table>
<tr>
<ti>kaserver</ti>
<ti>
The Authentication Server maintains the Authentication Database.
It can be replaced by a Kerberos 5 daemon. If anybody wants to try that,
feel free to update this document :)
</ti>
</tr>
<tr>
<ti>buserver</ti>
<ti>The Backup Server maintains the Backup Database</ti>
</tr>
<tr>
<ti>ptserver</ti>
<ti>The Protection Server maintains the Protection Database</ti>
</tr>
<tr>
<ti>vlserver</ti>
<ti>
The Volume Location Server maintains the Volume Location Database (VLDB).
Very important :)
</ti>
</tr>
</table>

<pre caption="Create entries for the database processes">
# <i>bos create &lt;server name&gt; kaserver simple /usr/libexec/openafs/kaserver -cell &lt;cell name&gt; -noauth</i>
# <i>bos create &lt;server name&gt; buserver simple /usr/libexec/openafs/buserver -cell &lt;cell name&gt; -noauth</i>
# <i>bos create &lt;server name&gt; ptserver simple /usr/libexec/openafs/ptserver -cell &lt;cell name&gt; -noauth</i>
# <i>bos create &lt;server name&gt; vlserver simple /usr/libexec/openafs/vlserver -cell &lt;cell name&gt; -noauth</i>
</pre>

<p>
You can verify that all servers are running with the <c>bos status</c> command:
</p>

<pre caption="Check if all the servers are running">
# <i>bos status &lt;server name&gt; -noauth</i>
Instance kaserver, currently running normally.
Instance buserver, currently running normally.
Instance ptserver, currently running normally.
Instance vlserver, currently running normally.
</pre>

</body>
</section>
<section>
<title>Initializing Cell Security</title>
<body>

<p>
Now we'll initialize the cell's security mechanisms. We'll begin by creating
the following two initial entries in the Authentication Database: the main
administrative account, called <b>admin</b> by convention, and an entry for
the AFS server processes, called <b>afs</b>. No user logs in under the
identity <b>afs</b>, but the Authentication Server's Ticket Granting
Service (TGS) module uses the account to encrypt the server tickets that
it grants to AFS clients. This sounds pretty much like Kerberos :)
</p>

<p>
Enter <c>kas</c> interactive mode:
</p>

<pre caption="Entering the interactive mode">
# <i>kas -cell &lt;cell name&gt; -noauth</i>
ka&gt; <i>create afs</i>
initial_password:
Verifying, please re-enter initial_password:
ka&gt; <i>create admin</i>
initial_password:
Verifying, please re-enter initial_password:
ka&gt; <i>examine afs</i>

User data for afs
 key (0) cksum is 2651715259, last cpw: Mon Jun  4 20:49:30 2001
 password will never expire.
 An unlimited number of unsuccessful authentications is permitted.
 entry never expires.  Max ticket lifetime 100.00 hours.
 last mod on Mon Jun  4 20:49:30 2001 by &lt;none&gt;
 permit password reuse
ka&gt; <i>setfields admin -flags admin</i>
ka&gt; <i>examine admin</i>

User data for admin (ADMIN)
 key (0) cksum is 2651715259, last cpw: Mon Jun  4 20:49:59 2001
 password will never expire.
 An unlimited number of unsuccessful authentications is permitted.
 entry never expires.  Max ticket lifetime 25.00 hours.
 last mod on Mon Jun  4 20:51:10 2001 by &lt;none&gt;
 permit password reuse
ka&gt;
</pre>

<p>
Run the <c>bos adduser</c> command to add the <b>admin</b> user to
<path>/etc/openafs/server/UserList</path>.
</p>

<pre caption="Add the admin user to the UserList">
# <i>bos adduser &lt;server name&gt; admin -cell &lt;cell name&gt; -noauth</i>
</pre>

<p>
Issue the <c>bos addkey</c> command to define the AFS Server
encryption key in <path>/etc/openafs/server/KeyFile</path>.
</p>

<note>
If asked for the input key, give the password you entered when creating
the <b>afs</b> entry with <c>kas</c>.
</note>

<pre caption="Entering the password">
# <i>bos addkey &lt;server name&gt; -kvno 0 -cell &lt;cell name&gt; -noauth</i>
input key:
Retype input key:
</pre>

<p>
Issue the <c>pts createuser</c> command to create a Protection Database entry
for the <b>admin</b> user.
</p>

<note>
By default, the Protection Server assigns AFS UID 1 to the <b>admin</b> user,
because it is the first user entry you are creating. If the local password file
(<path>/etc/passwd</path> or equivalent) already has an entry for <b>admin</b>
that assigns it a different UID, use the <c>-id</c> argument to create matching
UIDs.
</note>

<pre caption="Create a Protection Database entry for the admin user">
# <i>pts createuser -name admin -cell &lt;cell name&gt; [-id &lt;AFS UID&gt;] -noauth</i>
</pre>

<p>
Issue the <c>pts adduser</c> command to make the <b>admin</b> user a member
of the system:administrators group, and the <c>pts membership</c> command to
verify the new membership.
</p>

<pre caption="Make admin a member of the administrators group and verify">
# <i>pts adduser admin system:administrators -cell &lt;cell name&gt; -noauth</i>
# <i>pts membership admin -cell &lt;cell name&gt; -noauth</i>
Groups admin (id: 1) is a member of:
system:administrators
</pre>

<p>
Restart all AFS server processes:
</p>

<pre caption="Restart all AFS server processes">
# <i>bos restart &lt;server name&gt; -all -cell &lt;cell name&gt; -noauth</i>
</pre>

</body>
</section>
<section>
<title>Starting the File Server, Volume Server and Salvager</title>
<body>

<p>
Start the <c>fs</c> process, which consists of the File Server, Volume Server
and Salvager (the fileserver, volserver and salvager processes).
</p>

<pre caption="Start the fs process">
# <i>bos create &lt;server name&gt; fs fs /usr/libexec/openafs/fileserver /usr/libexec/openafs/volserver /usr/libexec/openafs/salvager -cell &lt;cell name&gt; -noauth</i>
</pre>

<p>
Verify that all processes are running:
</p>

<pre caption="Check if all processes are running">
# <i>bos status &lt;server name&gt; -long -noauth</i>
Instance kaserver, (type is simple) currently running normally.
Process last started at Mon Jun 4 21:07:17 2001 (2 proc starts)
Last exit at Mon Jun 4 21:07:17 2001
Command 1 is '/usr/libexec/openafs/kaserver'

Instance buserver, (type is simple) currently running normally.
Process last started at Mon Jun 4 21:07:17 2001 (2 proc starts)
Last exit at Mon Jun 4 21:07:17 2001
Command 1 is '/usr/libexec/openafs/buserver'

Instance ptserver, (type is simple) currently running normally.
Process last started at Mon Jun 4 21:07:17 2001 (2 proc starts)
Last exit at Mon Jun 4 21:07:17 2001
Command 1 is '/usr/libexec/openafs/ptserver'

Instance vlserver, (type is simple) currently running normally.
Process last started at Mon Jun 4 21:07:17 2001 (2 proc starts)
Last exit at Mon Jun 4 21:07:17 2001
Command 1 is '/usr/libexec/openafs/vlserver'

Instance fs, (type is fs) currently running normally.
Auxiliary status is: file server running.
Process last started at Mon Jun 4 21:09:30 2001 (2 proc starts)
Command 1 is '/usr/libexec/openafs/fileserver'
Command 2 is '/usr/libexec/openafs/volserver'
Command 3 is '/usr/libexec/openafs/salvager'
</pre>

<p>
Your next action depends on whether you have ever run AFS file server machines
in the cell.
</p>

<p>
If you are installing the first AFS server ever in the cell, create the
first AFS volume, <b>root.afs</b>.
</p>

<note>
For the partition name argument, substitute the name of one of the machine's
AFS server partitions. By convention, these partitions are named
<path>/vicepx</path>, where x is in the range a-z.
</note>

<pre caption="Create the root.afs volume">
# <i>vos create &lt;server name&gt; &lt;partition name&gt; root.afs -cell &lt;cell name&gt; -noauth</i>
</pre>

<p>
If there are existing AFS file server machines and volumes in the cell,
issue the <c>vos syncvldb</c> and <c>vos syncserv</c> commands to synchronize
the VLDB (Volume Location Database) with the actual state of volumes on the
local machine. This will copy all necessary data to your new server.
</p>

<p>
If the command fails with the message "partition /vicepa does not exist on
the server", ensure that the partition is mounted before running the OpenAFS
servers, or mount the directory and restart the processes using
<c>bos restart &lt;server name&gt; -all -cell &lt;cell name&gt; -noauth</c>.
</p>

<pre caption="Synchronise the VLDB">
# <i>vos syncvldb &lt;server name&gt; -cell &lt;cell name&gt; -verbose -noauth</i>
# <i>vos syncserv &lt;server name&gt; -cell &lt;cell name&gt; -verbose -noauth</i>
</pre>

</body>
</section>
<section>
<title>Starting the Server Portion of the Update Server</title>
<body>

<pre caption="Start the update server">
# <i>bos create &lt;server name&gt; upserver simple "/usr/libexec/openafs/upserver -crypt /etc/openafs/server -clear /usr/libexec/openafs" -cell &lt;cell name&gt; -noauth</i>
</pre>

</body>
</section>
<section>
<title>Configuring the Top Level of the AFS filespace</title>
<body>

<p>
First you need to set some ACLs, so that any user can look up
<path>/afs</path>.
</p>

<pre caption="Set access control lists">
# <i>fs setacl /afs system:anyuser rl</i>
</pre>

<p>
Then you need to create the root volume, mount it read-only on
<path>/afs/&lt;cell name&gt;</path> and read/write on <path>/afs/.&lt;cell
name&gt;</path>.
</p>

<pre caption="Prepare the root volume">
# <i>vos create &lt;server name&gt; &lt;partition name&gt; root.cell</i>
# <i>fs mkmount /afs/&lt;cell name&gt; root.cell</i>
# <i>fs setacl /afs/&lt;cell name&gt; system:anyuser rl</i>
# <i>fs mkmount /afs/.&lt;cell name&gt; root.cell -rw</i>
</pre>
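
<p>
For illustration, with a hypothetical server <c>afs1</c> holding partition
<path>/vicepa</path> in the cell <c>example.com</c>, these commands would
read:
</p>

<pre caption="Prepare the root volume (hypothetical values)">
# <i>vos create afs1 /vicepa root.cell</i>
# <i>fs mkmount /afs/example.com root.cell</i>
# <i>fs setacl /afs/example.com system:anyuser rl</i>
# <i>fs mkmount /afs/.example.com root.cell -rw</i>
</pre>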

<p>
Finally you're done!!! You should now have a working AFS file server on your
local network. Time to get a big cup of coffee and print out the AFS
documentation!!!
</p>

<note>
For the AFS server to function properly, it is very important that all system
clocks are synchronized. This is best accomplished by installing an NTP server
on one machine (e.g. the AFS server) and synchronizing all client clocks with
an NTP client. This can also be done by the AFS client.
</note>
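
<p>
On Gentoo, one way to accomplish this is sketched below (package and init
script names may differ depending on your setup):
</p>

<pre caption="Installing and starting an NTP daemon (sketch)">
# <i>emerge net-misc/ntp</i>
# <i>rc-update add ntpd default</i>
# <i>/etc/init.d/ntpd start</i>
</pre>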

</body>
</section>
</chapter>

<chapter>
<title>Basic Administration</title>
<section>
<title>Disclaimer</title>
<body>

<p>
OpenAFS is an extensive technology. Please read the AFS documentation for more
information. We only list a few administrative tasks in this chapter.
</p>

</body>
</section>
<section>
<title>Configuring PAM to Acquire an AFS Token on Login</title>
<body>

<p>
To use AFS you need to authenticate against the KA server if using an
implementation of AFS Kerberos 4, or against a Kerberos 5 KDC if using MIT,
Heimdal, or Shishi Kerberos 5. However, in order to log in to a machine you
will also need a user account; this can be local in <path>/etc/passwd</path>,
NIS, LDAP (OpenLDAP), or a Hesiod database. PAM allows Gentoo to tie the AFS
authentication to the login to the user account.
</p>

<p>
You will need to update <path>/etc/pam.d/system-auth</path>, which is used by
the other configurations. <c>use_first_pass</c> indicates that the password
already entered at login will be tried first, and <c>ignore_root</c> stops the
local superuser from being checked, so as to allow login if AFS or the network
fails.
</p>

<pre caption="/etc/pam.d/system-auth">
auth       required     pam_env.so
auth       sufficient   pam_unix.so likeauth nullok
auth       sufficient   pam_afs.so.1 use_first_pass ignore_root
auth       required     pam_deny.so

account    required     pam_unix.so

password   required     pam_cracklib.so retry=3
password   sufficient   pam_unix.so nullok md5 shadow use_authtok
password   required     pam_deny.so

session    required     pam_limits.so
session    required     pam_unix.so
</pre>
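
<p>
After logging in, you can check whether an AFS token was acquired with the
OpenAFS <c>tokens</c> command; <c>klog</c> lets you obtain one manually at
any time:
</p>

<pre caption="Checking for an AFS token">
$ <i>klog admin</i>
Password:
$ <i>tokens</i>
</pre>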

<p>
In order for <c>su</c> to keep the real user's token and to prevent local
users from gaining AFS access, change <path>/etc/pam.d/su</path> as follows:
</p>

<pre caption="/etc/pam.d/su">
<comment># Here, users with uid &gt; 100 are considered to belong to AFS and users with
# uid &lt;= 100 are ignored by pam_afs.</comment>
auth       sufficient   pam_afs.so.1 ignore_uid 100

auth       sufficient   pam_rootok.so

<comment># If you want to restrict the users allowed to su even more,
# create /etc/security/suauth.allow (or similar) that is only
# writable by root, and add the users that are allowed to su to that
# file, one per line.
#auth       required     pam_listfile.so item=ruser \
#       sense=allow onerr=fail file=/etc/security/suauth.allow

# Uncomment this to allow users in the wheel group to su without
# entering a password.
#auth       sufficient   pam_wheel.so use_uid trust

# Alternatively to the above, you can implement a list of users that do
# not need to supply a password.
#auth       sufficient   pam_listfile.so item=ruser \
#       sense=allow onerr=fail file=/etc/security/suauth.nopass

# Comment this out to allow any user, even those not in the 'wheel'
# group, to su</comment>
auth       required     pam_wheel.so use_uid

auth       required     pam_stack.so service=system-auth

account    required     pam_stack.so service=system-auth

password   required     pam_stack.so service=system-auth

session    required     pam_stack.so service=system-auth
session    optional     pam_xauth.so

<comment># Here we prevent the real user's token from being dropped</comment>
session    optional     pam_afs.so.1 no_unlog
</pre>

</body>
</section>
</chapter>
</guide>
