<?xml version='1.0' encoding="UTF-8"?>

<!-- $Header: /var/cvsroot/gentoo/xml/htdocs/doc/en/hpc-howto.xml,v 1.15 2010/06/07 09:08:37 nightmorph Exp $ -->

<!DOCTYPE guide SYSTEM "/dtd/guide.dtd">

<guide>
<title>High Performance Computing on Gentoo Linux</title>

<author title="Author">
  <mail link="marc@adelielinux.com">Marc St-Pierre</mail>
</author>
</author>
<author title="Assistant/Research">
  <mail link="olivier@adelielinux.com">Olivier Crete</mail>
</author>
<author title="Reviewer">
  <mail link="dberkholz@gentoo.org">Donnie Berkholz</mail>
</author>
<author title="Editor">
  <mail link="nightmorph"/>
</author>

<!-- No licensing information; this document has been written by a third-party
     organisation without additional licensing information.

     In other words, this is copyright adelielinux R&D; Gentoo only has
     permission to distribute this document as-is and update it when appropriate
     as long as the adelie linux R&D notice stays
-->

<abstract>
This document was written by people at the Adelie Linux R&amp;D Center
&lt;http://www.adelielinux.com&gt; as a step-by-step guide to turn a Gentoo
system into a High Performance Computing (HPC) system.
</abstract>

<version>1.7</version>
<date>2010-06-07</date>

<chapter>
<title>Introduction</title>
<section>
<body>

<p>
Gentoo Linux is a special flavor of Linux that can be automatically optimized
and customized for just about any application or need. Extreme performance,
configurability and a top-notch user and developer community are all hallmarks
of the Gentoo experience.
</p>

<p>
Thanks to a technology called Portage, Gentoo Linux can become an ideal secure
server, development workstation, professional desktop, gaming system, embedded
solution or... a High Performance Computing system. Because of its
near-unlimited adaptability, we call Gentoo Linux a metadistribution.
</p>

<p>
This document explains how to turn a Gentoo system into a High Performance
Computing system. Step by step, it explains what packages one may want to
install and helps configure them.
</p>

<p>
Obtain Gentoo Linux from the website <uri>http://www.gentoo.org</uri>, and
refer to the <uri link="/doc/en/">documentation</uri> at the same location to
install it.
</p>

</body>
</section>
</chapter>

<section>
<title>Recommended Optimizations</title>
<body>

<note>
We refer to the <uri link="/doc/en/handbook/">Gentoo Linux Handbooks</uri> in
this section.
</note>

<p>
During the installation process, you will have to set your USE variables in
<path>/etc/make.conf</path>. We recommend that you deactivate all the
defaults (see <path>/etc/make.profile/make.defaults</path>) by negating them in
make.conf. However, you may want to keep such use variables as 3dnow, gpm,
mmx, nptl, nptlonly, sse, ncurses, pam and tcpd. Refer to the USE documentation
for more information.
</p>

<pre caption="USE Flags">
USE="-oss 3dnow -apm -avi -berkdb -crypt -cups -encode -gdbm -gif gpm -gtk
-imlib -java -jpeg -kde -gnome -libg++ -libwww -mikmod mmx -motif -mpeg ncurses
-nls nptl nptlonly -ogg -opengl pam -pdflib -png -python -qt4 -qtmt
-quicktime -readline -sdl -slang -spell -ssl -svga tcpd -truetype -vorbis -X
-xml2 -xv -zlib"
</pre>

<p>
Or simply:
</p>

<pre caption="USE Flags - simplified version">
USE="-* 3dnow gpm mmx ncurses pam sse tcpd"
</pre>

<note>
The <e>tcpd</e> USE flag increases security for packages such as xinetd.
</note>

<p>
In step 15 ("Installing the kernel and a System Logger"), for stability
reasons, we recommend the vanilla-sources, the official kernel sources
released on <uri>http://www.kernel.org/</uri>, unless you require special
support such as xfs.
</p>

<pre caption="Installing vanilla-sources">
# <i>emerge -a syslog-ng vanilla-sources</i>
</pre>

<p>
When you install miscellaneous packages, we recommend installing the
following:
</p>

<pre caption="Installing necessary packages">
# <i>emerge -a nfs-utils portmap tcpdump ssmtp iptables xinetd</i>
</pre>

</body>
</section>
<section>
<title>Communication Layer (TCP/IP Network)</title>
<body>

<p>
A cluster requires a communication layer to interconnect the slave nodes to
the master node. Typically, a Fast Ethernet or Gigabit Ethernet LAN can be
used since they have a good price/performance ratio. Other possibilities
include use of products like <uri link="http://www.myricom.com/">Myrinet</uri>,
<uri link="http://quadrics.com/">QsNet</uri> or others.
</p>

<p>
A cluster is composed of two node types: master and slave. Typically, your
cluster will have one master node and several slave nodes.
</p>

<p>
The master node is the cluster's server. It is responsible for telling the
slave nodes what to do. This server will typically run such daemons as dhcpd,
nfs, pbs-server, and pbs-sched. Your master node will allow interactive
sessions for users, and accept job executions.
</p>

<p>
The slave nodes listen for instructions (via ssh/rsh perhaps) from the master
node. They should be dedicated to crunching results and therefore should not
run any unnecessary services.
</p>

<p>
The rest of this documentation will assume a cluster configuration as per the
hosts file below. You should maintain on every node such a hosts file
(<path>/etc/hosts</path>) with an entry for each node participating in the
cluster.
</p>

<pre caption="/etc/hosts">
# Adelie Linux Research &amp; Development Center
# /etc/hosts

127.0.0.1       localhost

192.168.1.100   master.adelie master

192.168.1.1     node01.adelie node01
192.168.1.2     node02.adelie node02
</pre>
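
<p>
Every node needs the same <path>/etc/hosts</path> file, so it is convenient to
push the master's copy out with <c>scp</c>. A quick sketch, assuming the node
names above and working ssh access to the slaves:
</p>

<pre caption="Copying /etc/hosts to the slave nodes (example)">
# <i>for node in node01 node02; do scp /etc/hosts ${node}:/etc/hosts; done</i>
</pre>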

<p>
To set up your cluster dedicated LAN, edit your <path>/etc/conf.d/net</path>
file on the master node.
</p>

<pre caption="/etc/conf.d/net">
# Global config file for net.* rc-scripts

# This is basically the ifconfig argument without the ifconfig $iface
#

iface_eth1="dhcp"
</pre>
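
<p>
For the new settings to take effect, restart the interface and make sure it
comes up at boot. This sketch assumes <c>eth0</c> is the cluster-facing
interface; adjust it to match your hardware:
</p>

<pre caption="Restarting the cluster interface (example)">
# <i>/etc/init.d/net.eth0 restart</i>
# <i>rc-update add net.eth0 default</i>
</pre>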

<p>
Finally, set up a DHCP daemon on the master node to avoid having to maintain a
network configuration on each slave node.
</p>

<pre caption="/etc/dhcp/dhcpd.conf">
# Adelie Linux Research &amp; Development Center
    option domain-name "adelie";
    range 192.168.1.10 192.168.1.99;
    option routers 192.168.1.100;

    host node01.adelie {
        # MAC address of network card on node 01
        hardware ethernet 00:07:e9:0f:e2:d4;
        fixed-address 192.168.1.1;
    }
    host node02.adelie {
        # MAC address of network card on node 02
        hardware ethernet 00:07:e9:0f:e2:6b;
        fixed-address 192.168.1.2;
    }
}
</pre>
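
<p>
Once <path>dhcpd.conf</path> is in place, start the DHCP daemon and add it to
the master node's default runlevel. This assumes your DHCP package installs
the usual <c>dhcpd</c> init script:
</p>

<pre caption="Starting dhcpd (example)">
# <i>/etc/init.d/dhcpd start</i>
# <i>rc-update add dhcpd default</i>
</pre>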

</body>
</section>
<section>
<title>NFS/NIS</title>
<body>

<p>
The Network File System (NFS) was developed to allow machines to mount a disk
partition on a remote machine as if it were on a local hard drive. This allows
for fast, seamless sharing of files across a network.
</p>

<p>
There are other systems that provide similar functionality to NFS which could
be used in a cluster environment. The <uri
link="http://www.openafs.org">Andrew File System from IBM</uri>, recently
open-sourced, provides a file sharing mechanism with some additional security
and performance features. The <uri link="http://www.coda.cs.cmu.edu/">Coda
File System</uri> is still in development, but is designed to work well with
disconnected clients. Many of the features of the Andrew and Coda file systems
are slated for inclusion in the next version of <uri
link="http://www.nfsv4.org">NFS (Version 4)</uri>. The advantage of NFS today
is that it is mature, standard, well understood, and supported robustly across
a variety of platforms.
</p>

<pre caption="Ebuilds for NFS-support">
# <i>emerge -a nfs-utils portmap</i>
</pre>

<p>
Configure and install a kernel to support NFS v3 on all nodes:
</p>

<pre caption="Kernel configuration for NFS v3">
CONFIG_NFSD_V3=y
CONFIG_LOCKD_V4=y
</pre>
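
<p>
After booting the new kernel, a quick sanity check confirms that NFS support
is actually present before you go any further:
</p>

<pre caption="Verifying NFS support (example)">
# <i>grep nfs /proc/filesystems</i>
</pre>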

<p>
On the master node, edit your <path>/etc/hosts.allow</path> file to allow
connections from slave nodes. If your cluster LAN is on 192.168.1.0/24,
your <path>hosts.allow</path> will look like:
</p>

<pre caption="hosts.allow">
portmap:192.168.1.0/255.255.255.0
</pre>

<p>
Edit the <path>/etc/exports</path> file of the master node to export a work
directory structure (/home is good for this).
</p>

<pre caption="/etc/exports">
/home/       *(rw)
</pre>
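
<p>
If you change <path>/etc/exports</path> while the NFS server is running, you
can tell it to pick up the new export list without a restart:
</p>

<pre caption="Re-exporting the filesystems (example)">
# <i>exportfs -ra</i>
</pre>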

<p>
Add nfs to your master node's default runlevel:
</p>

<pre caption="Adding NFS to the default runlevel">
# <i>rc-update add nfs default</i>
</pre>

<p>
To mount the nfs exported filesystem from the master, you also have to
configure your slave nodes' <path>/etc/fstab</path>. Add a line like this
one:
</p>

<pre caption="/etc/fstab">
master:/home/   /home   nfs     rw,exec,noauto,nouser,async     0 0
</pre>
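
<p>
Because the entry uses <c>noauto</c>, the filesystem is not mounted
automatically at boot by the normal fstab processing; you can test the entry
by hand first:
</p>

<pre caption="Testing the NFS mount on a slave node (example)">
# <i>mount /home</i>
# <i>df -h /home</i>
</pre>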

<p>
You'll also need to set up your nodes so that they mount the nfs filesystem by
issuing this command:
</p>

<pre caption="Adding nfsmount to the default runlevel">
# <i>rc-update add nfsmount default</i>
</pre>

</body>
</section>
<section>
<title>RSH/SSH</title>
<body>

<p>
SSH is a protocol for secure remote login and other secure network services
over an insecure network. OpenSSH uses public key cryptography to provide
secure authorization. The first step in configuring OpenSSH on the cluster is
to generate the public key, which is shared with remote systems, and the
private key, which is kept on the local system.
</p>

<p>
For transparent cluster usage, private/public keys may be used. This process
has two steps:
</p>

<ul>
  <li>Generate public and private keys</li>
  <li>Copy public key to slave nodes</li>
</ul>

<p>
For user based authentication, generate and copy as follows:
</p>

<pre caption="SSH key authentication">
# <i>ssh-keygen -t dsa</i>
Generating public/private dsa key pair.
root@master's password:
id_dsa.pub              100%  234     2.0MB/s   00:00
</pre>
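
<p>
For the key to be accepted, the copied public key must be appended to
<path>~/.ssh/authorized_keys</path> on the target machine. A minimal sketch,
run on the master after the copy above:
</p>

<pre caption="Authorizing the key (example)">
# <i>cat id_dsa.pub >> ~/.ssh/authorized_keys</i>
# <i>chmod 600 ~/.ssh/authorized_keys</i>
</pre>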

<note>
Host keys must have an empty passphrase. RSA is required for host-based
authentication.
</note>

<p>
For host based authentication, you will also need to edit your
<path>/etc/ssh/shosts.equiv</path>.
</p>

<pre caption="/etc/ssh/shosts.equiv">
node01.adelie
node02.adelie
</pre>

<pre caption="sshd configurations">
# $OpenBSD: sshd_config,v 1.42 2001/09/20 20:57:51 mouring Exp $
# This sshd was compiled with PATH=/usr/bin:/bin:/usr/sbin:/sbin

# This is the sshd server system-wide configuration file.  See sshd(8)
# for more information.

# HostKeys for protocol version 2
HostKey /etc/ssh/ssh_host_rsa_key
</pre>

<p>
If your applications require RSH communications, you will need to emerge
<c>net-misc/netkit-rsh</c> and <c>sys-apps/xinetd</c>.
</p>

<pre caption="Installing necessary applications">
# <i>emerge -a xinetd</i>
# <i>emerge -a netkit-rsh</i>
</pre>

<p>
Then configure the rsh daemon. Edit your <path>/etc/xinetd.d/rsh</path> file.
</p>

<pre caption="rsh">
# Adelie Linux Research &amp; Development Center
# /etc/xinetd.d/rsh
</pre>

<p>
Or you can simply trust your cluster LAN:
</p>

<pre caption="hosts.allow">
# Adelie Linux Research &amp; Development Center
# /etc/hosts.allow

ALL:192.168.1.0/255.255.255.0
</pre>

<p>
Finally, configure host authentication from <path>/etc/hosts.equiv</path>.
</p>

<pre caption="hosts.equiv">
# Adelie Linux Research &amp; Development Center
# /etc/hosts.equiv
</pre>

</body>
</section>
<section>
<title>NTP</title>
<body>

<p>
The Network Time Protocol (NTP) is used to synchronize the time of a computer
client or server to another server or reference time source, such as a radio
or satellite receiver or modem. It provides accuracies typically within a
millisecond on LANs and up to a few tens of milliseconds on WANs relative to
Coordinated Universal Time (UTC) via a Global Positioning Service (GPS)
receiver, for example. Typical NTP configurations utilize multiple redundant
servers and diverse network paths in order to achieve high accuracy and
reliability.
</p>

<p>
Select an NTP server geographically close to you from <uri
link="http://www.eecis.udel.edu/~mills/ntp/servers.html">Public NTP Time
Servers</uri>, and configure your <path>/etc/conf.d/ntp</path> and
<path>/etc/ntp.conf</path> files on the master node.
</p>

<pre caption="Master /etc/conf.d/ntp">
# /etc/conf.d/ntpd

# NOTES:
# - NTPDATE variables below are used if you wish to set your
#   clock when you start the ntp init.d script

NTPDATE_CMD="ntpdate"

# Options to pass to the above command
# Most people should just uncomment this variable and
# change 'someserver' to a valid hostname which you
# can acquire from the URL's below
NTPDATE_OPTS="-b ntp1.cmc.ec.gc.ca"

##
# A list of available servers is available here:
# http://www.eecis.udel.edu/~mills/ntp/servers.html

#NTPD_OPTS=""
</pre>

<p>
Edit your <path>/etc/ntp.conf</path> file on the master to set up an external
synchronization source:
</p>

<pre caption="Master ntp.conf">
# Adelie Linux Research &amp; Development Center

# Synchronization source #2
server ntp2.cmc.ec.gc.ca
restrict ntp2.cmc.ec.gc.ca
stratum 10
driftfile /etc/ntp.drift.server
logfile /var/log/ntp
broadcast 192.168.1.255
restrict default kod
restrict 127.0.0.1
restrict 192.168.1.0 mask 255.255.255.0
</pre>

<p>
And on all your slave nodes, set up your synchronization source as your master
node.
</p>

<pre caption="Node /etc/conf.d/ntp">
# /etc/conf.d/ntpd

NTPDATE_WARN="n"
NTPDATE_CMD="ntpdate"
NTPDATE_OPTS="-b master"
</pre>

<pre caption="Node ntp.conf">
# Synchronization source #1
server master
restrict master
stratum 11
driftfile /etc/ntp.drift.server
logfile /var/log/ntp
restrict default kod
restrict 127.0.0.1
</pre>

<p>
Then add ntpd to the default runlevel:
</p>

<pre caption="Adding ntpd to the default runlevel">
# <i>rc-update add ntpd default</i>
</pre>

<note>
NTP will not update the local clock if the time difference between your
synchronization source and the local clock is too great.
</note>
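
<p>
You can verify that a node is actually synchronizing by querying the running
daemon; after a few minutes the master (or your external source) should be
listed with a non-zero reach value:
</p>

<pre caption="Checking NTP synchronization (example)">
# <i>ntpq -p</i>
</pre>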

</body>
</section>
<section>
<title>Iptables</title>
<body>

<p>
To set up a firewall on your cluster, you will need iptables.
</p>

<pre caption="Installing iptables">
# <i>emerge -a iptables</i>
</pre>

<p>
Required kernel configuration:
</p>

<p>
And the rules required for this firewall:
</p>

<pre caption="rule-save">
# Adelie Linux Research &amp; Development Center
# /var/lib/iptables/rule-save

*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
</pre>

</body>
</section>
<section>
<title>OpenPBS</title>
<body>

<p>
The Portable Batch System (PBS) is a flexible batch queueing and workload
management system originally developed for NASA. It operates on networked,
multi-platform UNIX environments, including heterogeneous clusters of
workstations, supercomputers, and massively parallel systems. Development of
PBS is provided by Altair Grid Technologies.
</p>

<pre caption="Installing openpbs">
# <i>emerge -a openpbs</i>
</pre>

<note>
The OpenPBS ebuild does not currently set proper permissions on the var
directories used by OpenPBS.
</note>

<p>
Before you start using OpenPBS, some configuration is required. The files
you will need to personalize for your system are:
</p>

<ul>
  <li>/etc/pbs_environment</li>
  <li>/var/spool/PBS/server_name</li>
  <li>/var/spool/PBS/server_priv/nodes</li>
  <li>/var/spool/PBS/mom_priv/config</li>
  <li>/var/spool/PBS/sched_priv/sched_config</li>
</ul>

<p>
Here is a sample sched_config:
</p>
set server resources_default.nodes = 1
set server scheduler_iteration = 60
</pre>
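
<p>
Server attributes like the ones above are set through <c>qmgr</c>. To review
what the server is currently configured with, you can print the active
settings:
</p>

<pre caption="Reviewing the server configuration (example)">
# <i>qmgr -c "print server"</i>
</pre>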

<p>
To submit a task to OpenPBS, the command <c>qsub</c> is used with some
optional parameters. In the example below, "-l" allows you to specify
the resources required, "-j" provides for redirection of standard out and
standard error, and the "-m" will e-mail the user at beginning (b), end (e)
and on abort (a) of the job.
</p>

<pre caption="Submitting a task">
<comment>(submit and request from OpenPBS that myscript be executed on 2 nodes)</comment>
# <i>qsub -l nodes=2 -j oe -m abe myscript</i>
</pre>
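
<p>
The job file itself is an ordinary shell script; resource requests can also be
embedded as <c>#PBS</c> directives instead of being passed on the command
line. A minimal, hypothetical <path>myscript</path> might look like this
(<c>my_program</c> stands in for your own executable):
</p>

<pre caption="A minimal job script (example)">
#!/bin/sh
#PBS -l nodes=2
#PBS -j oe
#PBS -m abe

# PBS starts the job in the user's home directory; change back to
# the directory the job was submitted from.
cd $PBS_O_WORKDIR
./my_program
</pre>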

<p>
Normally jobs submitted to OpenPBS are in the form of scripts. Sometimes, you
may want to try a task manually. To request an interactive shell from OpenPBS,
use the "-I" parameter.
</p>

<pre caption="Requesting an interactive shell">
# <i>qsub -I</i>
</pre>

</body>
</section>
<section>
<title>MPICH</title>
<body>

<p>
Message passing is a paradigm used widely on certain classes of parallel
machines, especially those with distributed memory. MPICH is a freely
available, portable implementation of MPI, the Standard for message-passing
libraries.
</p>

<p>
The mpich ebuild provided by Adelie Linux allows for two USE flags:
<e>doc</e> and <e>crypt</e>. <e>doc</e> will cause documentation to be
installed, while <e>crypt</e> will configure MPICH to use <c>ssh</c> instead
of <c>rsh</c>.
</p>

<pre caption="Installing the mpich application">
# <i>emerge -a mpich</i>
</pre>

<p>
You may need to export an mpich work directory to all your slave nodes in
<path>/etc/exports</path>:
</p>

<pre caption="/etc/exports">
/home   *(rw)
</pre>

<p>
Most massively parallel processors (MPPs) provide a way to start a program on
a requested number of processors; <c>mpirun</c> makes use of the appropriate
command whenever possible. In contrast, workstation clusters require that each
process in a parallel job be started individually, though programs to help
start these processes exist. Because workstation clusters are not already
organized as an MPP, additional information is required to make use of them.
Mpich should be installed with a list of participating workstations in the
file <path>machines.LINUX</path> in the directory
<path>/usr/share/mpich/</path>. This file is used by <c>mpirun</c> to choose
processors to run on.
</p>

<p>
Edit this file to reflect your cluster-lan configuration:
</p>

<pre caption="/usr/share/mpich/machines.LINUX">
# Change this file to contain the machines that you want to use
# to run MPI jobs on.  The format is one host name per line, with either
#    hostname
# or
#    hostname:n
# where n is the number of processors in an SMP.  The hostname should
# be the same as the result from the command "hostname"
master
node01
node02
# node03
# node04
# ...
</pre>

<p>
Use the script <c>tstmachines</c> in <path>/usr/sbin/</path> to ensure that
you can use all of the machines that you have listed. This script performs
an <c>rsh</c> and a short directory listing; this tests that you both have
access to the node and that a program in the current directory is visible on
the remote node. If there are any problems, they will be listed. These
problems must be fixed before proceeding.
</p>

<p>
The only argument to <c>tstmachines</c> is the name of the architecture; this
is the same name as the extension on the machines file. For example, the
following tests that a program in the current directory can be executed by
all of the machines in the LINUX machines list.
</p>

<pre caption="Running a test">
# <i>/usr/local/mpich/sbin/tstmachines LINUX</i>
</pre>

<note>
This program is silent if all is well; if you want to see what it is doing,
use the -v (for verbose) argument:
</note>

<pre caption="Running a test verbosively">
# <i>/usr/local/mpich/sbin/tstmachines -v LINUX</i>
Trying user program on host1.uoffoo.edu ...
Trying user program on host2.uoffoo.edu ...
</pre>

<p>
If <c>tstmachines</c> finds a problem, it will suggest possible reasons and
solutions. In brief, there are three tests:
</p>

<ul>
  <li>
    <e>Can processes be started on remote machines?</e> tstmachines attempts
    to run the shell command true on each machine in the machines files by
    using the remote shell command.
  </li>
  <li>
    <e>Is current working directory available to all machines?</e> This
    attempts to ls a file that tstmachines creates by running ls using the
    remote shell command.
  </li>
  <li>
    <e>Can user programs be run on remote systems?</e> This checks that shared
    libraries and other components have been properly installed on all
    machines.
  </li>
</ul>

<p>
Once everything checks out, build and run one of the example programs shipped
with MPICH:
</p>

<pre caption="Running an example program">
# <i>make hello++</i>
# <i>mpirun -machinefile /usr/share/mpich/machines.LINUX -np 1 hello++</i>
</pre>
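
<p>
To build your own MPI programs rather than the bundled examples, MPICH ships
compiler wrappers such as <c>mpicc</c>. A short sketch, assuming a C source
file <path>hello.c</path> of your own:
</p>

<pre caption="Compiling and running your own MPI program (example)">
# <i>mpicc -o hello hello.c</i>
# <i>mpirun -machinefile /usr/share/mpich/machines.LINUX -np 2 hello</i>
</pre>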

<p>
For further information on MPICH, consult the documentation at <uri
link="http://www-unix.mcs.anl.gov/mpi/mpich/docs/mpichman-chp4/mpichman-chp4.htm">http://www-unix.mcs.anl.gov/mpi/mpich/docs/mpichman-chp4/mpichman-chp4.htm</uri>.
</p>

</body>
</section>
</chapter>

<chapter>
<title>Bibliography</title>
<section>
<body>

<p>
The original document is published at the <uri
link="http://www.adelielinux.com">Adelie Linux R&amp;D Centre</uri> web site,
and is reproduced here with the permission of the authors and <uri
link="http://www.cyberlogic.ca">Cyberlogic</uri>'s Adelie Linux R&amp;D
Centre.
</p>

<ul>
  <li><uri>http://www.gentoo.org</uri>, Gentoo Foundation, Inc.</li>
  <li>
    <uri link="http://www.adelielinux.com">http://www.adelielinux.com</uri>,
    Adelie Linux Research and Development Centre
  </li>
  <li>
    <uri link="http://nfs.sourceforge.net/">http://nfs.sourceforge.net</uri>,
    Linux NFS Project
  </li>
  <li>
    <uri link="http://www-unix.mcs.anl.gov/mpi/mpich/">http://www-unix.mcs.anl.gov/mpi/mpich/</uri>,
    Mathematics and Computer Science Division, Argonne National Laboratory
  </li>
  <li>
    <uri link="http://www.ntp.org/">http://ntp.org</uri>
  </li>
  <li>
    <uri link="http://www.eecis.udel.edu/~mills/">http://www.eecis.udel.edu/~mills/</uri>,
    David L. Mills, University of Delaware
  </li>
  <li>
    <uri link="http://www.ietf.org/html.charters/secsh-charter.html">http://www.ietf.org/html.charters/secsh-charter.html</uri>,
    Secure Shell Working Group, IETF, Internet Society
  </li>
  <li>
    <uri link="http://www.linuxsecurity.com/">http://www.linuxsecurity.com/</uri>,
    Guardian Digital
  </li>
  <li>
    <uri link="http://www.openpbs.org/">http://www.openpbs.org/</uri>,
    Altair Grid Technologies, LLC.
  </li>
</ul>

</body>
</section>
</chapter>
</guide>
