<?xml version='1.0' encoding="UTF-8"?>
<?xml-stylesheet href="/xsl/guide.xsl" type="text/xsl"?>

<!DOCTYPE guide SYSTEM "/dtd/guide.dtd">

<guide link = "/doc/en/openafs.xml">
<title>Gentoo Linux OpenAFS Guide</title>
<author title="Editor">
<mail link="darks@gentoo.org">Holger Brueckner</mail>
</author>

<abstract>
This guide shows you how to install an OpenAFS server and client on Gentoo Linux.
</abstract>

<version>0.1</version>
<date>12 April 2003</date>

<chapter>
<title>Overview</title>
<section>
<title>About this Document</title>
<body>
<p>
This document provides you with all the necessary steps to install an OpenAFS
server on Gentoo Linux. Parts of this document are taken from the AFS FAQ and
IBM's Quick Beginnings guide on AFS. Well, never reinvent the wheel :)
</p>
</body>
</section>
<section>
<title>What is AFS?</title>
<body>

<p>
AFS is a distributed filesystem that enables co-operating hosts
(clients and servers) to efficiently share filesystem resources
across both local area and wide area networks. Clients hold a
cache of frequently used objects (files) to get quicker
access to them.
</p>
<p>
AFS is based on a distributed file system originally developed
at the Information Technology Center at Carnegie-Mellon University
that was called the "Andrew File System". "Andrew" was the name of the research
project at CMU, honouring the founders of the university. Once Transarc was
formed and AFS became a product, the "Andrew" was dropped to indicate that AFS
had gone beyond the Andrew research project and had become a supported,
product quality filesystem. However, there were a number of existing cells
that rooted their filesystem as /afs. At the time, changing the root of the
filesystem was a non-trivial undertaking. So, to save the early AFS sites from
having to rename their filesystem, AFS remained as the name and filesystem root.
</p>
</body>
</section>
<section>
<title>What is an AFS cell?</title>
<body>
<p>
An AFS cell is a collection of servers grouped together administratively
and presenting a single, cohesive filesystem. Typically, an AFS cell is a set of
hosts that use the same Internet domain name (for example gentoo.org).
Users log into AFS client workstations, which request information and files
from the cell's servers on behalf of the users. Users won't know on which server
a file they are accessing is located. They won't even notice if a server is moved
to another room, since every volume can be replicated and moved to another server
without the users noticing. The files are always accessible.
Well, it's like NFS on steroids :)
</p>
</body>
</section>
<section>
<title>What are the benefits of using AFS?</title>
<body>
<p>
The main strengths of AFS are its:
</p>
<ul>
<li>caching facility (on the client side, typically 100 MB to 1 GB)</li>
<li>security features (Kerberos 4 based, access control lists)</li>
<li>simplicity of addressing (you just have one filesystem)</li>
<li>scalability (add further servers to your cell as needed)</li>
<li>communications protocol</li>
</ul>
</body>
</section>
<section>
<title>Where can I get more information?</title>
<body>
<p>
Read the <uri link="http://www.angelfire.com/hi/plutonic/afs-faq.html">AFS FAQ</uri>.
</p>
<p>
The OpenAFS main page is at <uri link="http://www.openafs.org">www.openafs.org</uri>.
</p>
<p>
AFS was originally developed by Transarc, which is now owned by IBM.
You can find some information about AFS on
<uri link="http://www.transarc.ibm.com/Product/EFS/AFS/index.html">Transarc's webpage</uri>.
</p>
</body>
</section>

</chapter>

<chapter>
<title>Documentation</title>
<section>
<title>Getting AFS Documentation</title>
<body>
<p>
You can get the original IBM AFS documentation. It is very well written and you
really want to read it if it is up to you to administer an AFS server.
</p>
<pre>
# <i>emerge app-doc/afsdoc</i>
</pre>
</body>
</section>
</chapter>

<chapter>
<title>Client Installation</title>
<section>
<title>Preliminary Work</title>
<body>
<note>
All commands should be written on one line! In this document they are
sometimes wrapped to two lines to make them easier to read.
</note>
<note>
Unfortunately the AFS client needs an ext2 partition for its cache to run
correctly, because there are some locking issues with reiserfs. You need to
create an ext2 partition of approx. 200 MB (more won't hurt) and mount it to
<path>/usr/vice/cache</path>.
</note>
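<p>
For example, assuming you have set aside a spare partition for the cache
(<path>/dev/hda7</path> is only an illustration, substitute your own device),
you could format and mount it like this:
</p>
<pre>
# <i>mke2fs /dev/hda7</i>
# <i>mkdir -p /usr/vice/cache</i>
# <i>mount /dev/hda7 /usr/vice/cache</i>
</pre>
<p>
Add a matching line to <path>/etc/fstab</path> so the cache partition is mounted
automatically at boot.
</p>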
<p>
You should adjust the two files CellServDB and ThisCell before you build the
AFS client. (These files are in <path>/usr/portage/net-fs/openafs/files</path>.)
</p>
<pre>
CellServDB:
>netlabs #Cell name
10.0.0.1 #storage

ThisCell:
netlabs
</pre>
<p>
CellServDB tells your client which server(s) it needs to contact for a
specific cell. ThisCell should be quite obvious: normally you use a name
which is unique for your organisation. Your (official) domain might be a
good choice.
</p>
</body>
</section>
<section>
<title>Building the Client</title>
<body>
<pre>
# <i>emerge net-fs/openafs</i>
</pre>
<p>
After successful compilation you're ready to go.
</p>
</body>
</section>
<section>
<title>Starting AFS on Startup</title>
<body>
<p>
The following command will create the appropriate links to start your AFS client
on system startup.
</p>
<warn>
You should always have a running AFS server in your domain when trying to start
the AFS client. Your system won't finish booting until it hits a timeout if your
AFS server is down (and this timeout is quite long).
</warn>
<pre>
# <i>rc-update add afs default</i>
</pre>
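<p>
If you do not want to reboot right away, you can start the client manually
through its init script (assuming the script installed by the ebuild is named
<path>afs</path>, matching the rc-update command above):
</p>
<pre>
# <i>/etc/init.d/afs start</i>
</pre>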
</body>
</section>
</chapter>

<chapter>
<title>Server Installation</title>
<section>
<title>Building the Server</title>
<body>
<p>
The following command will install all necessary binaries for setting up an AFS
server <i>and</i> client.
</p>
<pre>
# <i>emerge net-fs/openafs</i>
</pre>
</body>
</section>
<section>
<title>Starting the AFS Server</title>
<body>
<p>
You need to remove the sample CellServDB and ThisCell files first.
</p>
<pre>
# <i>rm /usr/vice/etc/ThisCell</i>
# <i>rm /usr/vice/etc/CellServDB</i>
</pre>
<p>
Next you will run the <b>bosserver</b> command to initialize the Basic OverSeer (BOS)
Server, which monitors and controls other AFS server processes on its server
machine. Think of it as init for the system. Include the <b>-noauth</b>
flag to disable authorization checking, since you haven't added the admin user yet.
</p>
<warn>
Disabling authorization checking gravely compromises cell security.
You must complete all subsequent steps in one uninterrupted pass
and must not leave the machine unattended until you restart the BOS Server with
authorization checking enabled. Well, this is what the AFS documentation says :)
</warn>
<pre>
# <i>/usr/afs/bin/bosserver -noauth &amp;</i>
</pre>
<p>
Verify that the BOS Server created <path>/usr/vice/etc/CellServDB</path>
and <path>/usr/vice/etc/ThisCell</path>.
</p>
<pre>
# <i>ls -al /usr/vice/etc/</i>
-rw-r--r-- 1 root root 41 Jun 4 22:21 CellServDB
-rw-r--r-- 1 root root 7 Jun 4 22:21 ThisCell
</pre>
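<p>
If you are curious you can also have a look at the contents of both files;
the proper cell name will be assigned in one of the next steps.
</p>
<pre>
# <i>cat /usr/vice/etc/ThisCell /usr/vice/etc/CellServDB</i>
</pre>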

</body>
</section>
<section>
<title>Defining Cell Name and Membership for Server Processes</title>
<body>
<p>
Now assign your cell's name.
</p>
<impo>
There are some restrictions on the name format.
Two of the most important restrictions are that the name
cannot include uppercase letters or more than 64 characters. Remember that
your cell name will show up under <path>/afs</path>, so you might want to choose
a short one.
</impo>
<note>
In the following and every other instruction in this guide, for the <i>&lt;server name&gt;</i>
argument substitute the fully-qualified hostname
(such as <b>afs.gentoo.org</b>) of the machine you are installing.
For the <i>&lt;cell name&gt;</i>
argument substitute your cell's complete name (such as <b>gentoo</b>).
</note>
<p>
Run the <b>bos setcellname</b> command to set the cell name:
</p>
<pre>
# <i>/usr/afs/bin/bos setcellname &lt;server name&gt; &lt;cell name&gt; -noauth</i>
</pre>
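<p>
For example, with the illustrative names from the note above
(<b>afs.gentoo.org</b> as the server and <b>gentoo</b> as the cell name), the
command would be:
</p>
<pre>
# <i>/usr/afs/bin/bos setcellname afs.gentoo.org gentoo -noauth</i>
</pre>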
</body>
</section>
<section>
<title>Starting the Database Server Processes</title>
<body>
<p>
Next use the <b>bos create</b> command to create entries for the four database
server processes in the <path>/usr/afs/local/BosConfig</path> file. The four
processes run on database server machines only.
</p>
<table>
<tr>
<ti>kaserver</ti>
<ti>The Authentication Server maintains the Authentication Database.
This can be replaced by a Kerberos 5 daemon. If anybody wants to try that,
feel free to update this document :)</ti>
</tr>
<tr>
<ti>buserver</ti>
<ti>The Backup Server maintains the Backup Database</ti>
</tr>
<tr>
<ti>ptserver</ti>
<ti>The Protection Server maintains the Protection Database</ti>
</tr>
<tr>
<ti>vlserver</ti>
<ti>The Volume Location Server maintains the Volume Location Database (VLDB).
Very important :)</ti>
</tr>
</table>
<pre>
# <i>/usr/afs/bin/bos create &lt;server name&gt; kaserver simple
/usr/afs/bin/kaserver -cell &lt;cell name&gt; -noauth</i>
# <i>/usr/afs/bin/bos create &lt;server name&gt; buserver simple
/usr/afs/bin/buserver -cell &lt;cell name&gt; -noauth</i>
# <i>/usr/afs/bin/bos create &lt;server name&gt; ptserver simple
/usr/afs/bin/ptserver -cell &lt;cell name&gt; -noauth</i>
# <i>/usr/afs/bin/bos create &lt;server name&gt; vlserver simple
/usr/afs/bin/vlserver -cell &lt;cell name&gt; -noauth</i>
</pre>
<p>
You can verify that all servers are running with the <b>bos status</b> command:
</p>
<pre>
# <i>/usr/afs/bin/bos status &lt;server name&gt; -noauth</i>
Instance kaserver, currently running normally.
Instance buserver, currently running normally.
Instance ptserver, currently running normally.
Instance vlserver, currently running normally.
</pre>
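<p>
The BOS Server keeps these process definitions in
<path>/usr/afs/local/BosConfig</path>; if something does not look right, it can
help to inspect that file directly:
</p>
<pre>
# <i>cat /usr/afs/local/BosConfig</i>
</pre>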

</body>
</section>
<section>
<title>Initializing Cell Security</title>
<body>
<p>
Now we'll initialize the cell's security mechanisms. We'll begin by creating the
following two initial entries in the Authentication Database: the main
administrative account, called <b>admin</b> by convention, and an entry for
the AFS server processes, called <b>afs</b>. No user logs in under the
identity <b>afs</b>, but the Authentication Server's Ticket Granting Service
(TGS) module uses the account to encrypt the server tickets that it grants to
AFS clients. This sounds pretty much like Kerberos :)
</p>
<p>
Enter <b>kas</b> interactive mode:
</p>
<pre>
# <i>/usr/afs/bin/kas -cell &lt;cell name&gt; -noauth</i>
ka&gt; <i>create afs</i>
initial_password:
Verifying, please re-enter initial_password:
ka&gt; <i>create admin</i>
initial_password:
Verifying, please re-enter initial_password:
ka&gt; <i>examine afs</i>

User data for afs
key (0) cksum is 2651715259, last cpw: Mon Jun 4 20:49:30 2001
password will never expire.
An unlimited number of unsuccessful authentications is permitted.
entry never expires. Max ticket lifetime 100.00 hours.
last mod on Mon Jun 4 20:49:30 2001 by &lt;none&gt;
permit password reuse
ka&gt; <i>setfields admin -flags admin</i>
ka&gt; <i>examine admin</i>

User data for admin (ADMIN)
key (0) cksum is 2651715259, last cpw: Mon Jun 4 20:49:59 2001
password will never expire.
An unlimited number of unsuccessful authentications is permitted.
entry never expires. Max ticket lifetime 25.00 hours.
last mod on Mon Jun 4 20:51:10 2001 by &lt;none&gt;
permit password reuse
ka&gt;
</pre>
<p>
Run the <b>bos adduser</b> command to add the <b>admin</b> user to
<path>/usr/afs/etc/UserList</path>.
</p>
<pre>
# <i>/usr/afs/bin/bos adduser &lt;server name&gt; admin -cell &lt;cell name&gt; -noauth</i>
</pre>
<p>
Issue the <b>bos addkey</b> command to define the AFS server
encryption key in <path>/usr/afs/etc/KeyFile</path>.
</p>
<note>
If asked for the input key, give the password you entered when creating
the afs entry with <b>kas</b>.
</note>
<pre>
# <i>/usr/afs/bin/bos addkey &lt;server name&gt; -kvno 0 -cell &lt;cell name&gt; -noauth</i>
input key:
Retype input key:
</pre>
<p>
Issue the <b>pts createuser</b> command to create a Protection Database
entry for the admin user.
</p>
<note>
By default, the Protection Server assigns AFS UID 1 to the <b>admin</b> user,
because it is the first user entry you are creating. If the local password file
(/etc/passwd or equivalent) already has an entry for <b>admin</b> that assigns a
different UID, use the <b>-id</b> argument to create matching UIDs.
</note>
<pre>
# <i>/usr/afs/bin/pts createuser -name admin -cell &lt;cell name&gt; [-id &lt;AFS UID&gt;] -noauth</i>
</pre>
<p>
Issue the <b>pts adduser</b> command to make the <b>admin</b> user a member
of the system:administrators group, and the <b>pts membership</b> command to
verify the new membership.
</p>
<pre>
# <i>/usr/afs/bin/pts adduser admin system:administrators -cell &lt;cell name&gt; -noauth</i>
# <i>/usr/afs/bin/pts membership admin -cell &lt;cell name&gt; -noauth</i>
Groups admin (id: 1) is a member of:
system:administrators
</pre>
<p>
Restart all AFS server processes.
</p>
<pre>
# <i>/usr/afs/bin/bos restart &lt;server name&gt; -all -cell &lt;cell name&gt; -noauth</i>
</pre>
</body>
</section>
<section>
<title>Starting the File Server, Volume Server and Salvager</title>
<body>
<p>
Start the <b>fs</b> process, which consists of the File Server, Volume Server
and Salvager (fileserver, volserver and salvager processes).
</p>
<pre>
# <i>/usr/afs/bin/bos create &lt;server name&gt; fs fs /usr/afs/bin/fileserver
/usr/afs/bin/volserver
/usr/afs/bin/salvager
-cell &lt;cell name&gt; -noauth</i>
</pre>
<p>
Verify that all processes are running:
</p>
<pre>
# <i>/usr/afs/bin/bos status &lt;server name&gt; -long -noauth</i>
Instance kaserver, (type is simple) currently running normally.
Process last started at Mon Jun 4 21:07:17 2001 (2 proc starts)
Last exit at Mon Jun 4 21:07:17 2001
Command 1 is '/usr/afs/bin/kaserver'

Instance buserver, (type is simple) currently running normally.
Process last started at Mon Jun 4 21:07:17 2001 (2 proc starts)
Last exit at Mon Jun 4 21:07:17 2001
Command 1 is '/usr/afs/bin/buserver'

Instance ptserver, (type is simple) currently running normally.
Process last started at Mon Jun 4 21:07:17 2001 (2 proc starts)
Last exit at Mon Jun 4 21:07:17 2001
Command 1 is '/usr/afs/bin/ptserver'

Instance vlserver, (type is simple) currently running normally.
Process last started at Mon Jun 4 21:07:17 2001 (2 proc starts)
Last exit at Mon Jun 4 21:07:17 2001
Command 1 is '/usr/afs/bin/vlserver'

Instance fs, (type is fs) currently running normally.
Auxiliary status is: file server running.
Process last started at Mon Jun 4 21:09:30 2001 (2 proc starts)
Command 1 is '/usr/afs/bin/fileserver'
Command 2 is '/usr/afs/bin/volserver'
Command 3 is '/usr/afs/bin/salvager'
</pre>
<p>
Your next action depends on whether you have ever run AFS file server machines
in the cell.
</p>
<p>
If you are installing the first AFS server ever in the cell, create the
first AFS volume, <b>root.afs</b>.
</p>
<note>
For the partition name argument, substitute the name of one of the machine's
AFS server partitions. By convention these partitions are named
<path>/vicepx</path>, where x is in the range a-z.
</note>
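<p>
If you have not prepared such a partition yet, a minimal sketch (assuming
<path>/dev/hda8</path> is a spare partition you want to dedicate to AFS;
substitute your own device) would be, before running the command below:
</p>
<pre>
# <i>mkdir /vicepa</i>
# <i>mke2fs /dev/hda8</i>
# <i>mount /dev/hda8 /vicepa</i>
</pre>
<p>
As with the client cache, add a corresponding entry to <path>/etc/fstab</path>
so the partition is mounted at every boot.
</p>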
<pre>
# <i>/usr/afs/bin/vos create &lt;server name&gt;
&lt;partition name&gt; root.afs
-cell &lt;cell name&gt; -noauth</i>
</pre>
<p>
If there are existing AFS file server machines and volumes in the cell,
issue the <b>vos syncvldb</b> and <b>vos syncserv</b> commands to synchronize
the VLDB (Volume Location Database) with the actual state of volumes on the
local machine. This will copy all necessary data to your new server.
</p>
<pre>
# <i>/usr/afs/bin/vos syncvldb &lt;server name&gt; -cell &lt;cell name&gt; -verbose -noauth</i>
# <i>/usr/afs/bin/vos syncserv &lt;server name&gt; -cell &lt;cell name&gt; -verbose -noauth</i>
</pre>
</body>
</section>
<section>
<title>Starting the Server Portion of the Update Server</title>
<body>
<pre>
# <i>/usr/afs/bin/bos create &lt;server name&gt;
upserver simple "/usr/afs/bin/upserver
-crypt /usr/afs/etc -clear /usr/afs/bin"
-cell &lt;cell name&gt; -noauth</i>
</pre>
</body>
</section>
<section>
<title>Configuring the Top Level of the AFS Filespace</title>
<body>
<p>
First you need to set some ACLs, so that any user can look up <path>/afs</path>.
</p>
<pre>
# <i>/usr/afs/bin/fs setacl /afs system:anyuser rl</i>
</pre>
<p>
Then you need to create the root volume, mount it read-only on
<path>/afs/&lt;cell name&gt;</path> and read/write on
<path>/afs/.&lt;cell name&gt;</path>.
</p>
<pre>
# <i>/usr/afs/bin/vos create &lt;server name&gt; &lt;partition name&gt; root.cell</i>
# <i>/usr/afs/bin/fs mkmount /afs/&lt;cell name&gt; root.cell</i>
# <i>/usr/afs/bin/fs setacl /afs/&lt;cell name&gt; system:anyuser rl</i>
# <i>/usr/afs/bin/fs mkmount /afs/.&lt;cell name&gt; root.cell -rw</i>
</pre>
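<p>
Since <b>root.afs</b> and <b>root.cell</b> are read by every client, it is common
practice to replicate them. The following sketch (using the same
<i>&lt;server name&gt;</i> and <i>&lt;partition name&gt;</i> placeholders as above)
adds a read-only site for both volumes and releases them; treat it as an optional
extra step:
</p>
<pre>
# <i>/usr/afs/bin/vos addsite &lt;server name&gt; &lt;partition name&gt; root.afs</i>
# <i>/usr/afs/bin/vos addsite &lt;server name&gt; &lt;partition name&gt; root.cell</i>
# <i>/usr/afs/bin/vos release root.afs</i>
# <i>/usr/afs/bin/vos release root.cell</i>
</pre>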
<p>
Finally you're done! You should now have a working AFS file server on your local
network. Time to get a big cup of coffee and print out the AFS documentation!
</p>
<note>
It is very important for the AFS server to function properly that all system
clocks are synchronized. This is best accomplished by installing an NTP server
on one machine (e.g. the AFS server) and synchronizing all client clocks with
the NTP client. This can also be done by the AFS client.
</note>
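<p>
On Gentoo a minimal sketch for this, assuming you use the
<path>net-misc/ntp</path> package and its default init script name, could be:
</p>
<pre>
# <i>emerge net-misc/ntp</i>
# <i>rc-update add ntpd default</i>
</pre>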
</body>
</section>

</chapter>

<chapter>
<title>Basic Administration</title>
<section>
<title></title>
<body>
<p>To be done ... For now read the AFS Documentation :)</p>
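<p>
As a starting point, one small everyday task is authenticating as the
<b>admin</b> user and checking your tokens. This is only a minimal sketch; both
commands are part of the standard OpenAFS client tools.
</p>
<pre>
# <i>klog admin</i>
Password:
# <i>tokens</i>
</pre>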
</body>
</section>
</chapter>
</guide>
