% docs/user.tex @ 2665:5604871d7e94
% BitKeeper revision 1.1159.1.223 (416c1158SsW4313-aAMVy2dT2UKoJg)
% Merge font.cl.cam.ac.uk:/auto/groups/xeno/BK/xeno.bk
%   into font.cl.cam.ac.uk:/auto/homes/sd386/xeno.bk
% author: sd386@font.cl.cam.ac.uk
% date: Tue Oct 12 17:16:08 2004 +0000 (2004-10-12)
\documentclass[11pt,twoside,final,openright]{xenstyle}
\usepackage{a4,graphicx,setspace}
\setstretch{1.15}
\input{style.tex}

\begin{document}

% TITLE PAGE
\pagestyle{empty}
\begin{center}
\vspace*{\fill}
\includegraphics{eps/xenlogo.eps}
\vfill
\vfill
\vfill
\begin{tabular}{l}
{\Huge \bf Users' manual} \\[4mm]
{\huge Xen v2.0 for x86} \\[80mm]

{\Large Xen is Copyright (c) 2004, The Xen Team} \\[3mm]
{\Large University of Cambridge, UK} \\[20mm]
{\large Last updated on 12th October, 2004}
\end{tabular}
\vfill
\end{center}
\cleardoublepage

% TABLE OF CONTENTS
\pagestyle{plain}
\pagenumbering{roman}
{ \parskip 0pt plus 1pt
\tableofcontents }
\cleardoublepage

% PREPARE FOR MAIN TEXT
\pagenumbering{arabic}
\raggedbottom
\widowpenalty=10000
\clubpenalty=10000
\parindent=0pt
\renewcommand{\topfraction}{.8}
\renewcommand{\bottomfraction}{.8}
\renewcommand{\textfraction}{.2}
\renewcommand{\floatpagefraction}{.8}
\setstretch{1.15}

\newcommand{\path}[1]{{\tt #1}}
\part{Introduction and Tutorial}
\chapter{Introduction}

{\bf
DISCLAIMER: This documentation is currently under active development
and as such there may be mistakes and omissions --- watch out for
these and please report any you find to the developers' mailing list.
Contributions of material, suggestions and corrections are welcome.
}

Xen is a {\em paravirtualising} virtual machine monitor (VMM), or
``hypervisor'', for the x86 processor architecture.  Xen can securely
multiplex heterogeneous virtual machines on a single physical machine
with near-native performance.  The virtual machine technology
facilitates enterprise-grade functionality, including:

\begin{itemize}
\item Virtual machines with close to native performance.
\item Live migration of running virtual machines.
\item Excellent hardware support (use unmodified Linux device drivers).
\item Suspend to disk / resume from disk of running virtual machines.
\item Transparent copy-on-write disks.
\item Sandboxed, restartable device drivers.
\item Pervasive debugging --- debug whole OSes, from kernel to applications.
\end{itemize}
Xen support is available for a growing number of operating systems.
The following OSs have either been ported already or a port is in
progress:
\begin{itemize}
\item Dragonfly BSD
\item FreeBSD 5.3
\item Linux 2.4
\item Linux 2.6
\item NetBSD 2.0
\item Plan 9
\item Windows XP
\end{itemize}

Right now, Linux 2.4 and 2.6 are available for Xen 2.0.  The NetBSD
port will be updated to run on Xen 2.0, hopefully in time for the
NetBSD 2.0 release.  It is intended that Xen support be integrated
into the official releases of Linux 2.6, NetBSD 2.0, FreeBSD and
Dragonfly BSD.

Even running multiple copies of Linux can be very useful, providing a
means of containing faults to one OS image, offering performance
isolation between the various OS instances, and making it easy to try
out multiple distros.

The Windows XP port is only available to those who have signed the
Microsoft Academic Source License.
Possible usage scenarios for Xen include:
\begin{description}
\item [Kernel development] Test and debug kernel modifications in a
      sandboxed virtual machine --- no need for a separate test
      machine.
\item [Multiple OS Configurations] Run multiple operating systems
      simultaneously, for instance for compatibility or QA purposes.
\item [Server consolidation] Move multiple servers onto one box,
      providing performance and fault isolation at virtual machine
      boundaries.
\item [Cluster computing] Improve manageability and efficiency by
      running services in virtual machines, isolated from
      machine-specifics, and load-balance using live migration.
\item [High availability computing] Run device drivers in sandboxed
      domains for increased robustness.
\item [Hardware support for custom OSes] Export drivers from a
      mainstream OS (e.g. Linux) with good hardware support
      to your custom OS, avoiding the need for you to port existing
      drivers to achieve good hardware support.
\end{description}
\section{Structure}

\subsection{High level}

A Xen system has multiple layers.  The lowest layer is Xen itself ---
the most privileged piece of code in the system.  On top of Xen run
guest operating system kernels.  These are scheduled pre-emptively by
Xen.  On top of these run the applications of the guest OSs.  Guest
OSs are responsible for scheduling their own applications within the
time allotted to them by Xen.

One of the domains --- {\em Domain 0} --- is privileged.  It is
started by Xen at system boot and is responsible for initialising and
managing the whole machine.  Domain 0 builds other domains and manages
their virtual devices.  It also performs suspend, resume and
migration of other virtual machines.  Where it is used, the X server
is also run in domain 0.

Within Domain 0, a process called ``Xend'' runs to manage the system.
Xend is responsible for managing virtual machines and providing access
to their consoles.  Commands are issued to Xend over an HTTP
interface, either from a command-line tool or from a web browser.

XXX need diagram(s) here to make this make sense
\subsection{Paravirtualisation}

Paravirtualisation allows very high performance virtual machine
technology, even on architectures (like x86) which are traditionally
hard to virtualise.

Paravirtualisation requires guest operating systems to be {\em
ported} to run on the VMM.  This process is similar to a port of an
operating system to a new hardware platform.  Although operating
system kernels must explicitly support Xen in order to run in a
virtual machine, {\em user space applications and libraries
do not require modification}.
\section{Hardware Support}

Xen currently runs on the x86 architecture, but could in principle be
ported to others.  In fact, it would have been rather easier to write
Xen for pretty much any other architecture, as x86 is particularly
tricky to handle.  A good description of Xen's design, implementation
and performance is contained in the October 2003 SOSP paper, available
at:\\
{\tt http://www.cl.cam.ac.uk/netos/papers/2003-xensosp.pdf}\\
Work to port Xen to x86\_64 and IA64 is currently underway.

Xen is targeted at server-class machines, and the current list of
supported hardware very much reflects this, avoiding the need for us
to write drivers for ``legacy'' hardware.  It is likely that some
desktop chipsets will fail to work properly with the default Xen
configuration: specifying {\tt noacpi} or {\tt ignorebiostables} when
booting Xen may help in these cases.

Xen requires a ``P6'' or newer processor (e.g. Pentium Pro, Celeron,
Pentium II, Pentium III, Pentium IV, Xeon, AMD Athlon, AMD Duron).
Multiprocessor machines are supported, and we also have basic support
for HyperThreading (SMT), although this remains a topic for ongoing
research.  We're also working on an x86\_64 port (though Xen should
already run on these systems just fine in 32-bit mode).

Xen can currently use up to 4GB of memory.  It is possible for x86
machines to address up to 64GB of physical memory but (unless an
external developer volunteers) there are no plans to support these
systems.  The x86\_64 port is the planned route to supporting more
than 4GB of memory.
In contrast to previous Xen versions, in Xen 2.0 device drivers run
within a privileged guest OS rather than within Xen itself.  This means
that we should be compatible with the majority of device hardware
supported by Linux.  The default XenLinux build contains support for
relatively modern server-class network and disk hardware, but you can
add support for other hardware by configuring your XenLinux kernel in
the normal way (e.g. \verb_# make ARCH=xen xconfig_).
\section{History}

``Xen'' is a Virtual Machine Monitor (VMM) originally developed by the
Systems Research Group of the University of Cambridge Computer
Laboratory, as part of the UK-EPSRC funded XenoServers project.

The XenoServers project aims to provide a ``public infrastructure for
global distributed computing'', and Xen plays a key part in that,
allowing us to efficiently partition a single machine to enable
multiple independent clients to run their operating systems and
applications in an environment providing protection, resource
isolation and accounting.  The project web page contains further
information along with pointers to papers and technical reports:
{\tt http://www.cl.cam.ac.uk/xeno}

Xen has since grown into a project in its own right, enabling us to
investigate interesting research issues regarding the best techniques
for virtualizing resources such as the CPU, memory, disk and network.
The project has been bolstered by support from Intel Research
Cambridge, and HP Labs, who are now working closely with us.  We're
also in receipt of support from Microsoft Research Cambridge to port
Windows XP to run on Xen.

Xen was first described in the 2003 paper at SOSP\\
({\tt http://www.cl.cam.ac.uk/netos/papers/2003-xensosp.pdf}).
The first public release of Xen (1.0) was made in October 2003.  Xen
was developed as a research project by the University of Cambridge
Computer Laboratory (UK).  Xen was the first Virtual Machine Monitor
to make use of {\em paravirtualisation} to achieve near-native
performance virtualisation of commodity operating systems.  Since
then, Xen has been extensively developed and is now used in production
scenarios on multiple sites.

Xen 2.0 is the latest release, featuring greatly enhanced hardware
support, configuration flexibility, usability and a larger complement
of supported operating systems.  We think that Xen has the potential
to become {\em the} definitive open source virtualisation solution and
will work to conclusively achieve that position.
\chapter{Installation}

The Xen distribution includes three main components: Xen itself,
utilities to convert a standard Linux tree to run on Xen, and the
userspace tools required to operate a Xen-based system.

This manual describes how to install the Xen 2.0 distribution from
source.  Alternatively, there may be packages available for your
operating system distribution.

\section{Prerequisites}
\label{sec:prerequisites}
\begin{itemize}
\item A working installation of your favourite Linux distribution.
\item A working installation of the GRUB bootloader.
\item An installation of Twisted v1.3 or above (see {\tt
http://www.twistedmatrix.com}).  There may be a package available for
your distribution; alternatively it can be installed by running {\tt \#
make install-twisted} in the root of the Xen source tree.
\item The Linux bridge control tools (see {\tt
http://bridge.sourceforge.net}).  There may be packages of these
tools available for your distribution.
\item The Linux IP routing tools.
\item make
\item python-dev
\item gcc
\item zlib-dev
\item libcurl
\item python2.3-pycurl
\item python2.3-twisted
\end{itemize}
\section{Optional}
\begin{itemize}
\item The Python logging package (see {\tt http://www.red-dove.com/})
for additional Xend logging functionality.
\end{itemize}

\section{Install BitKeeper (Optional)}

To fetch a local copy of the source repository, first download the
BitKeeper tools.  Download instructions are obtained by filling out
the form at:\\
{\tt http://www.bitmover.com/cgi-bin/download.cgi}

The BitKeeper install program is designed to be run with X.  If X is
not available, you can specify the install directory on the command
line.
\section{Download the Xen source code}

\subsection{Using BitKeeper}

The public master BK repository for the 2.0 release lives at:\\
{\tt bk://xen.bkbits.net/xen-2.0.bk}.  You can use BitKeeper to
download it and keep it updated with the latest features and fixes.

Change to the directory in which you want to put the source code, then
run:
\begin{verbatim}
# bk clone bk://xen.bkbits.net/xen-2.0.bk
\end{verbatim}

Under your current directory, a new directory named `xen-2.0.bk'
will have been created, which contains all the source code for the Xen
hypervisor and the Xen tools.  The directory also contains `sparse'
Linux source trees, containing only the files that differ between
XenLinux and standard Linux.

Once you have cloned the repository, you can update it with the newest
changes by running:
\begin{verbatim}
# cd xen-2.0.bk   # to change into the local repository
# bk pull         # to update the repository
\end{verbatim}

\subsection{Without BitKeeper}

The Xen source tree is also available in gzipped tarball form from the
Xen downloads page:\\
{\tt http://www.cl.cam.ac.uk/Research/SRG/netos/xen/downloads.html}.
Prebuilt tarballs are also available but are very large.
\section{The distribution}

The Xen source code repository is structured as follows:

\begin{description}
\item[\path{tools/}] Xen node controller daemon (Xend), command line tools,
      control libraries.
\item[\path{xen/}] The Xen hypervisor itself.
\item[\path{linux-2.4.27-xen/}] Linux 2.4 support for Xen.
\item[\path{linux-2.6.8.1-xen/}] Linux 2.6 support for Xen.
\item[\path{docs/}] Various documentation files for users and developers.
\item[\path{extras/}] Currently contains the Mini OS, aimed at developers.
\end{description}
\section{Build and install}

The Xen makefile includes a target ``world'' that will do the
following:

\begin{itemize}
\item Build Xen.
\item Build the control tools, including Xend.
\item Download (if necessary) and unpack the Linux 2.6 source code,
      and patch it for use with Xen.
\item Build a Linux kernel to use in domain 0 and a smaller
      unprivileged kernel, which can optionally be used for
      unprivileged virtual machines.
\end{itemize}

Inspect the Makefile if you want to see what goes on during a
build.  Building Xen and the tools is straightforward, but XenLinux is
more complicated.  The makefile needs a `pristine' Linux kernel tree
to which it will add the Xen architecture files.  You can tell the
makefile the location of the appropriate Linux compressed tar file by
setting the LINUX\_SRC environment variable, e.g.\\
\verb!# LINUX_SRC=/tmp/linux-2.6.8.1.tar.bz2 make world!\\
or by placing the tar file somewhere in the search path of
{\tt LINUX\_SRC\_PATH}, which defaults to ``{\tt .:..}''.  If the
makefile can't find a suitable kernel tar file it attempts to download
it from kernel.org (this won't work if you're behind a firewall).
After untarring the pristine kernel tree, the makefile uses the {\tt
mkbuildtree} script to add the Xen patches to the kernel.  It then
builds two different XenLinux images: one with a ``-xen0'' extension,
which contains hardware device drivers and drivers for Xen's virtual
devices, and one with a ``-xenU'' extension that contains just the
virtual ones.  The former is intended to be used in the first virtual
machine (``domain 0''); the latter just has a smaller memory footprint.

The procedure for building the Linux 2.4 port is similar:\\
\verb!# LINUX_SRC=/path/to/linux2.4/source make linux24!

In both cases, if you have an SMP machine you may wish to give the
{\tt '-j4'} argument to make to get a parallel build.
XXX Insert details on customising the kernel to be built.
i.e. merging config files

The files produced by the build process are stored under the
\path{install/} directory.  To install them in their default
locations, do:\\
\verb_# make install_

Alternatively, users with special installation requirements may wish
to install them manually by copying the files to their appropriate
destinations.

Take a look at the files in \path{install/boot/}:
\begin{itemize}
\item \path{install/boot/xen.gz} The Xen ``kernel''.
\item \path{install/boot/vmlinuz-2.6.8.1-xen0} The domain 0 XenLinux kernel.
\item \path{install/boot/vmlinuz-2.6.8.1-xenU} The unprivileged XenLinux kernel.
\end{itemize}

The difference between the two Linux kernels that are built is due to
the configuration file used for each.  The ``U''-suffixed unprivileged
version doesn't contain any of the physical hardware device drivers
--- it is 30\% smaller and hence may be preferred for your
non-privileged domains.  The ``0''-suffixed privileged version can be
used to boot the system, as well as in driver domains and unprivileged
domains.

The \path{install/boot} directory will also contain the config files
used for building the XenLinux kernels, and also versions of the Xen
and XenLinux kernels that contain debug symbols (\path{xen-syms} and
\path{vmlinux-syms-2.4.27-xen0}), which are essential for interpreting
crash dumps.  Retain these files, as the developers may wish to see
them if you post on the mailing list.
\section{Configuration}

\subsection{GRUB Configuration}

An entry should be added to \path{grub.conf} (often found under
\path{/boot/} or \path{/boot/grub/}) to allow Xen / XenLinux to boot.
This file is sometimes called \path{menu.lst}, depending on your
distribution.  The entry should look something like the following:

\begin{verbatim}
title Xen 2.0 / XenLinux 2.6.8.1
kernel /boot/xen.gz dom0_mem=131072 com1=115200,8n1
module /boot/vmlinuz-2.6.8.1-xen0 root=/dev/sda4 ro console=tty0 console=ttyS0
\end{verbatim}

The {\tt kernel} line of the entry tells GRUB where to find Xen itself
and what boot parameters should be passed to it.  The {\tt module}
line describes the location of the XenLinux kernel that Xen should
start and the parameters that should be passed to it.

As always when installing a new kernel, it is recommended that you do
not remove the original contents of \path{menu.lst} --- you may want
to boot up with your old Linux kernel in future, particularly if you
have problems.

XXX insert distro specific stuff in here (maybe)
Suse 9.1: no 'ro' option
\subsection{Serial Console}

In order to configure serial console output, it is necessary to add a
line to \path{/etc/inittab}.  The XenLinux console driver is
designed to make this procedure the same as configuring a normal
serial console.  Add the line:

{\tt c:2345:respawn:/sbin/mingetty ttyS0}

XXX insert distro specific stuff in here (maybe)
Suse 9.1: different boot scheme (/etc/init.d/)
\section{Test the new install}

It should now be possible to restart the system and use Xen.  Reboot
as usual but choose the new Xen option when the GRUB screen appears.

What follows should look much like a conventional Linux boot.  The
first portion of the output comes from Xen itself, supplying low-level
information about itself and the machine it is running on.  The
remainder of the output comes from XenLinux.

You may see some errors during the XenLinux boot.  These are not
necessarily anything to worry about --- they may result from kernel
configuration differences between your XenLinux kernel and the one you
usually use.

When the boot completes, you should be able to log into your system as
usual.  If you are unable to log in to your system running Xen, you
should still be able to reboot with your normal Linux kernel.
\chapter{Starting a domain}

The first step in creating a new domain is to prepare a root
filesystem for it to boot from.  Typically, this might be stored in a
normal partition, a disk file, an LVM volume, or on an NFS server.

A simple way to do this is to boot from your standard OS install CD
and install the distribution into another partition on your hard
drive.

{\em N.B.} you can boot Xen and XenLinux without installing any
special userspace tools, but you will need to have the prerequisites
described in Section~\ref{sec:prerequisites} and the Xen control tools
installed before you proceed.
\section{From the web interface}

Boot the Xen machine and start Xensv (see Chapter~\ref{cha:xensv} for
more details) using the command:\\
\verb_# xensv start_\\
This will also start Xend (see Chapter~\ref{cha:xend} for more information).

The domain management interface will then be available at {\tt
http://your\_machine:8080/}.  This provides a user-friendly wizard for
starting domains and functions for managing running domains.
\section{From the command line}

Full details of the {\tt xm} tool are found in Chapter~\ref{cha:xm}.

This example explains how to use the \path{xmdefconfig} files.  If you
require a more complex setup, you will want to write a custom
configuration file --- details of the configuration file formats are
included in Chapter~\ref{cha:config}.

The \path{xmdefconfig1} file is a simple template configuration file
for describing a single VM.

The \path{xmdefconfig2} file is a template description that is intended
to be reused for multiple virtual machines.  Setting the value of the
{\tt vmid} variable on the {\tt xm} command line
fills in parts of this template.
\subsection{Editing \path{xmdefconfig}}

At minimum, you should edit the following variables in \path{xmdefconfig}:

\begin{description}
\item[kernel] Set this to the path of the kernel you compiled for use
      with Xen. [e.g. {\tt kernel =
      '/root/xen-2.0.bk/install/boot/vmlinuz-2.4.27-xenU'}]
\item[memory] Set this to the size of the domain's memory in
      megabytes. [e.g. {\tt memory = 64}]
\item[disk] Set the first entry in this list to calculate the offset
      of the domain's root partition, based on the domain ID.  Set the
      second to the location of \path{/usr} (if you are sharing it between
      domains). [i.e. {\tt disk = ['phy:your\_hard\_drive\%d,sda1,w' \%
      (base\_partition\_number + vmid), 'phy:your\_usr\_partition,sda6,r' ]}]
\item[dhcp] Uncomment the dhcp variable, so that the domain will
      receive its IP address from a DHCP server. [i.e. {\tt dhcp="dhcp"}]
\end{description}
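To make the interplay of these variables concrete, here is a hedged
sketch, in Python (Xen configuration files are Python scripts), of how
a template such as \path{xmdefconfig2} can derive per-domain values
from the {\tt vmid} variable.  The partition names and the base
partition offset below are invented for illustration, not taken from
the real file:

```python
# Sketch of a template configuration file; the partition names and
# base offset are hypothetical examples, not the shipped defaults.
vmid = 1  # normally supplied on the xm command line, e.g. `xm create vmid=1'

kernel = '/root/xen-2.0.bk/install/boot/vmlinuz-2.4.27-xenU'
memory = 64
base_partition_number = 4  # made-up example offset

# First entry: the domain's root partition, computed from vmid.
# Second entry: a shared /usr partition, exported read-only.
disk = ['phy:hda%d,sda1,w' % (base_partition_number + vmid),
        'phy:hda3,sda6,r']
```

With {\tt vmid = 1} this yields {\tt phy:hda5,sda1,w} for the root
device, so each virtual machine gets its own partition simply by
varying {\tt vmid} on the command line.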
You may also want to edit the {\bf vif} variable in order to choose
the MAC address of the virtual ethernet interface yourself.  For
example:\\ \verb_vif = ['mac=00:06:AA:F6:BB:B3']_\\ If you do not set
this variable, Xend will automatically generate a random MAC address
from an unused range.
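As an illustration only (this is not necessarily Xend's actual
algorithm), a random unicast MAC address in a locally administered
range can be generated as follows; the {\tt AA:00:00} prefix is an
assumption chosen for the example:

```python
import random

def random_mac():
    """Generate a random MAC address in a locally administered,
    unicast range (illustrative only; Xend's own scheme may differ)."""
    octets = [0xAA, 0x00, 0x00,            # example prefix only
              random.randint(0x00, 0x7F),
              random.randint(0x00, 0xFF),
              random.randint(0x00, 0xFF)]
    return ':'.join('%02X' % octet for octet in octets)

mac = random_mac()
```

The first octet {\tt 0xAA} has the locally-administered bit set and
the multicast bit clear, so such addresses cannot collide with
globally assigned hardware MACs.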
\subsection{Starting the domain}

The {\tt xm} tool provides a variety of commands for managing domains.
Use the {\tt create} command to start new domains.  To start the
virtual machine with virtual machine ID 1:

\begin{verbatim}
# xm create -c vmid=1
\end{verbatim}

The {\tt -c} switch causes {\tt xm} to turn into the domain's console
after creation.  The {\tt vmid=1} sets the {\tt vmid} variable used in
the {\tt xmdefconfig} file.  The tool uses the
\path{/etc/xen/xmdefconfig} file, since no custom configuration file
was specified on the command line.
\chapter{Domain management tasks}

The previous chapter described a simple example of how to configure
and start a domain.  This chapter summarises the tools available to
manage running domains.

\section{Command line management}

Command line management tasks are also performed using the {\tt xm}
tool.  For online help for the commands available, type:\\
\verb_# xm help_

\subsection{Basic management commands}

The most important {\tt xm} commands are:\\
\verb_# xm list_ : Lists all running domains.\\
\verb_# xm consoles_ : Gives information about the domain consoles.\\
\verb_# xm console_ : Opens a console to a domain.
e.g. \verb_# xm console 1_ (opens a console to domain 1)
\subsection{\tt xm list}

The output of {\tt xm list} is in rows of the following format:\\
\verb_domid name memory cpu state cputime_

\begin{description}
\item[domid] The domain ID this virtual machine is running in.
\item[name] The descriptive name of the virtual machine.
\item[memory] Memory size in megabytes.
\item[cpu] The CPU this domain is running on.
\item[state] Domain state, consisting of 5 fields:
  \begin{description}
  \item[r] running
  \item[b] blocked
  \item[p] paused
  \item[s] shutdown
  \item[c] crashed
  \end{description}
\item[cputime] How much CPU time (in seconds) the domain has used so far.
\end{description}
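Because the columns are whitespace-separated, the row format above can
be consumed mechanically, e.g. by monitoring scripts.  A small sketch
(the sample row is invented, not real {\tt xm} output):

```python
# Field names as described in the list above.
FIELDS = ('domid', 'name', 'memory', 'cpu', 'state', 'cputime')

def parse_xm_list_row(row):
    """Split one row of `xm list' output into a dict keyed by the
    field names documented above (whitespace-separated columns)."""
    return dict(zip(FIELDS, row.split()))

# A hypothetical row for illustration.
row = parse_xm_list_row('0  Domain-0  123  0  r----  52.5')
```

All values are kept as strings here; a real script would convert
{\tt memory} and {\tt cputime} to numbers as needed.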
The {\tt xm list} command also supports a long output format when the
{\tt -l} switch is used.  This outputs the full details of the
running domains in Xend's SXP configuration format.
\chapter{Other kinds of storage}

It is possible to use any Linux block device to store virtual machine
disk images.  This chapter covers some of the possibilities; note that
it is also possible to use network-based block devices and other
unconventional block devices.

\section{File-backed virtual block devices}

It is possible to use a file in Domain 0 as the primary storage for a
virtual machine.  As well as being convenient, this also has the
advantage that the virtual block device will be {\em sparse} --- space
will only really be allocated as parts of the file are used.  So if a
virtual machine uses only half its disk space then the file really
takes up half of the size allocated.

For example, to create a 2GB sparse file-backed virtual block device
(which actually only consumes 1KB of disk):

\verb_# dd if=/dev/zero of=vm1disk bs=1k seek=2048k count=1_

Choose a free loopback device, and attach the file:\\
\verb_# losetup /dev/loop0 vm1disk_\\
Make a file system on the loopback device:\\
\verb_# mkfs -t ext3 /dev/loop0_

Populate the file system, e.g. by copying from the current root:
\begin{verbatim}
# mount /dev/loop0 /mnt
# cp -ax / /mnt
\end{verbatim}
Tailor the file system by editing \path{/etc/fstab},
\path{/etc/hostname}, etc. (don't forget to edit the files in the
mounted file system, instead of your domain 0 filesystem, e.g. you
would edit \path{/mnt/etc/fstab} instead of \path{/etc/fstab}).  For
this example, set the root device to \path{/dev/sda1} in fstab.

Now unmount (this is important!):\\
\verb_# umount /dev/loop0_

In the configuration file set:\\
\verb_disk = ['phy:loop0,sda1,w']_

As the virtual machine writes to its `disk', the sparse file will be
filled in and consume more space, up to the original 2GB.

{\em NB.} You will need to use {\tt losetup} to bind the file to
\path{/dev/loop0} (or whatever loopback device you chose) each time
you reboot domain 0.  In the near future, Xend will track which loop
devices are currently free and do the binding itself, making this
manual effort unnecessary.
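You can check that a backing file really is sparse by comparing its
apparent size with the space actually allocated.  A small Python
sketch (the filename matches the dd example above; the allocated size
reported depends on the underlying filesystem):

```python
import os

def sparse_report(path):
    """Return (apparent_size, allocated_bytes) for a file.
    On Linux, st_blocks counts 512-byte units actually allocated."""
    st = os.stat(path)
    return st.st_size, st.st_blocks * 512

# Create a 2 GiB sparse file, much as the dd invocation above does.
with open('vm1disk', 'wb') as f:
    f.truncate(2 * 1024 ** 3)

apparent, allocated = sparse_report('vm1disk')
# apparent is 2 GiB; allocated stays tiny until data is written.
os.remove('vm1disk')  # tidy up after the demonstration
```

As the guest writes to its disk, {\tt allocated} grows towards
{\tt apparent}, which is exactly the behaviour described above.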
\section{LVM-backed virtual block devices}

XXX Put some simple examples here - would be nice if an LVM user could
contribute some, although obviously users would have to read the LVM
docs to do advanced stuff.
\part{Quick Reference}

\chapter{Domain Configuration Files}
\label{cha:config}

XXX Could use a little explanation about possible values

Xen configuration files contain the following standard variables:

\begin{description}
\item[kernel] Path to the kernel image (on the server).
\item[ramdisk] Path to a ramdisk image (optional).
\item[builder] The name of the domain build function (e.g. {\tt 'linux'} or {\tt 'netbsd'}).
\item[memory] Memory size in megabytes.
\item[cpu] CPU to assign this domain to.
\item[nics] Number of virtual network interfaces.
\item[vif] List of MAC addresses (random addresses are assigned if not given).
\item[disk] Regions of disk to export to the domain.
\item[dhcp] Set to {\tt 'dhcp'} if you want DHCP to allocate the IP address.
\item[netmask] IP netmask.
\item[gateway] IP address for the gateway (if any).
\item[hostname] Set the hostname for the virtual machine.
\item[root] Set the root device.
\item[nfs\_server] IP address for the NFS server.
\item[nfs\_root] Path of the root filesystem on the NFS server.
\item[extra] Extra string to append to the kernel command line.
\item[restart] Three possible options:
  \begin{description}
  \item[always] Always restart the domain, no matter what
                its exit code is.
  \item[never] Never restart the domain.
  \item[onreboot] Restart the domain only if it requests a reboot.
  \end{description}
\end{description}
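The three restart policies amount to a small decision rule.  A sketch
follows; Xend's internal logic and the exit-reason names used here are
assumptions for illustration:

```python
def should_restart(policy, exit_reason):
    """Decide whether a domain should be restarted when it exits.
    policy: 'always', 'never' or 'onreboot' (as documented above).
    exit_reason: e.g. 'reboot', 'poweroff', 'crash' (example names)."""
    if policy == 'always':
        return True
    if policy == 'never':
        return False
    if policy == 'onreboot':
        return exit_reason == 'reboot'
    raise ValueError('unknown restart policy: %r' % policy)
```

So under {\tt onreboot}, a crashed or halted domain stays down, while
a domain that asked for a reboot comes straight back up.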
It is also possible to include Python scripting commands in
configuration files.  This is done in the \path{xmdefconfig} file in
order to handle the {\tt vmid} variable.
\chapter{Xend (Node control daemon)}
\label{cha:xend}

The Xen Daemon (Xend) performs system management functions related to
virtual machines.  It forms a central point of control for a machine
and can be controlled using an HTTP-based protocol.  Xend must be
running in order to start and manage virtual machines.

Xend must be run as root because it needs access to privileged system
management functions.  A small set of commands may be issued on the
Xend command line:

\begin{tabular}{ll}
\verb_# xend start_ & start Xend, if not already running \\
\verb_# xend stop_ & stop Xend if already running \\
\verb_# xend restart_ & restart Xend if running, otherwise start it \\
\end{tabular}

A SysV init script called {\tt xend} is provided to start Xend at
boot time.  {\tt make install} installs this script in
\path{/etc/init.d} automatically.  To enable it, you can make
symbolic links in the appropriate runlevel directories or use the {\tt
chkconfig} tool, where available.

Once Xend is running, more sophisticated administration can be done
using the Xensv web interface (see Chapter~\ref{cha:xensv}).
\chapter{Xensv (Web interface server)}
\label{cha:xensv}

Xensv is the server for the web control interface.  It can be started
using:\\
\verb_# xensv start_\\
and stopped using:\\
\verb_# xensv stop_\\
It will automatically start Xend if it is not already running.

By default, Xensv will serve the web interface on port 8080.  This
can be changed by editing {\tt
/usr/lib/python2.2/site-packages/xen/sv/params.py}.

Once Xensv is running, the web interface can be used to manage running
domains and provides a user-friendly domain creation wizard.
744 \chapter{The xm tool}
745 \label{cha:xm}
747 XXX Add description of arguments and switches for all the options
749 The xm tool is the primary tool for managing Xen from the console.
750 The general format of an xm command line is:
752 \begin{verbatim}
753 # xm command [switches] [arguments] [variables]
754 \end{verbatim}
The available {\em switches} and {\em arguments} are dependent on the
{\em command} chosen. The {\em variables} may be set using
declarations of the form {\tt variable=value} and may be used to set
or override any of the values in the configuration file being used,
760 including the standard variables described above and any custom
761 variables (for instance, the \path{xmdefconfig} file uses a {\tt vmid}
762 variable).
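For instance, using the \path{xmdefconfig} example just mentioned, a
hypothetical invocation might override the {\tt vmid} variable on the
command line (the configuration file path is illustrative; substitute
the location of your own file):

\begin{verbatim}
# xm create -f /etc/xen/xmdefconfig vmid=3
\end{verbatim}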
764 The available commands are as follows:
766 \begin{description}
767 \item[create] Create a new domain.
768 \item[destroy] Kill a domain immediately.
769 \item[list] List running domains.
\item[shutdown] Ask a domain to shut down.
771 \item[dmesg] Fetch the Xen (not Linux!) boot output.
\item[consoles] List the available consoles.
773 \item[console] Connect to the console for a domain.
774 \item[help] Get help on xm commands.
775 \item[save] Suspend a domain to disk.
776 \item[restore] Restore a domain from disk.
777 \item[pause] Pause a domain's execution.
778 \item[unpause] Unpause a domain.
779 \item[pincpu] Pin a domain to a CPU.
780 \item[bvt] Set BVT scheduler parameters for a domain.
781 \item[bvt\_ctxallow] Set the BVT context switching allowance for the system.
782 \item[fbvt] Set the FBVT scheduler parameters for a domain.
783 \item[fbvt\_ctxallow] Set the FBVT context switching allowance for the system.
784 \item[atropos] Set the atropos parameters for a domain.
785 \item[rrobin] Set the round robin time slice for the system.
786 \item[info] Get information about the Xen host.
787 \item[call] Call a Xend HTTP API function directly.
788 \end{description}
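As a brief illustrative session (domain IDs and output will differ on
your system):

\begin{verbatim}
# xm list
# xm console 1
\end{verbatim}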
790 \chapter{Glossary}
792 \begin{description}
793 \item[Atropos] One of the CPU schedulers provided by Xen.
794 Atropos provides domains with absolute shares
795 of the CPU, with timeliness guarantees and a
796 mechanism for sharing out ``slack time''.
798 \item[BVT] The BVT scheduler is used to give proportional
799 fair shares of the CPU to domains.
801 \item[Exokernel] A minimal piece of privileged code, similar to
802 a {\bf microkernel} but providing a more
803 `hardware-like' interface to the tasks it
804 manages. This is similar to a paravirtualising
805 VMM like {\bf Xen} but was designed as a new
806 operating system structure, rather than
807 specifically to run multiple conventional OSs.
\item[FBVT] A derivative of the {\bf BVT} scheduler that
810 aims to give better fairness performance to IO
811 intensive domains in competition with CPU
812 intensive domains.
814 \item[Domain] A domain is the execution context that
815 contains a running { \bf virtual machine }.
816 The relationship between virtual machines
817 and domains on Xen is similar to that between
818 programs and processes in an operating
819 system: a virtual machine is a persistent
820 entity that resides on disk (somewhat like
821 a program). When it is loaded for execution,
822 it runs in a domain. Each domain has a
823 { \bf domain ID }.
825 \item[Domain 0] The first domain to be started on a Xen
826 machine. Domain 0 is responsible for managing
827 the system.
\item[Domain ID] A unique identifier for a {\bf domain},
analogous to a process ID in an operating
system.
833 \item[Full virtualisation] An approach to virtualisation which
834 requires no modifications to the hosted
835 operating system, providing the illusion of
836 a complete system of real hardware devices.
838 \item[Hypervisor] An alternative term for { \bf VMM }, used
839 because it means ``beyond supervisor'',
840 since it is responsible for managing multiple
841 ``supervisor'' kernels.
843 \item[Microkernel] A small base of code running at the highest
844 hardware privilege level. A microkernel is
845 responsible for sharing CPU and memory (and
846 sometimes other devices) between less
847 privileged tasks running on the system.
848 This is similar to a VMM, particularly a
849 {\bf paravirtualising} VMM but typically
850 addressing a different problem space and
providing a different kind of interface.
853 \item[NetBSD/Xen] A port of NetBSD to the Xen architecture.
855 \item[Paravirtualisation] An approach to virtualisation which requires
856 modifications to the operating system in
857 order to run in a virtual machine. Xen
858 uses paravirtualisation but preserves
859 binary compatibility for user space
860 applications.
862 \item[Virtual Machine] The environment in which a hosted operating
863 system runs, providing the abstraction of a
864 dedicated machine. A virtual machine may
865 be identical to the underlying hardware (as
in {\bf full virtualisation}), or it may
differ, as in {\bf paravirtualisation}.
869 \item[VMM] Virtual Machine Monitor - the software that
870 allows multiple virtual machines to be
871 multiplexed on a single physical machine.
873 \item[Xen] Xen is a paravirtualising virtual machine
874 monitor, developed primarily by the
875 Systems Research Group at the University
876 of Cambridge Computer Laboratory.
878 \item[XenLinux] Official name for the port of the Linux kernel
879 that runs on Xen.
881 \end{description}
883 \part{Advanced Topics}
885 XXX More to add here, including config file format
887 \chapter{Advanced Network Configuration}
889 For simple systems with a single ethernet interface with a simple
890 configuration, the default installation should work ``out of the
891 box''. More complicated network setups, for instance with multiple
ethernet interfaces and/or existing bridging setups will require
893 some special configuration.
895 The purpose of this chapter is to describe the mechanisms provided by
896 xend to allow a flexible configuration for Xen's virtual networking.
898 \section{Xen networking scripts}
Xen's virtual networking is configured by three shell scripts. These are
901 called automatically by Xend when certain events occur, with arguments
902 to the scripts providing further contextual information. These
903 scripts are found by default in \path{/etc/xen}. The names and
904 locations of the scripts can be configured in \path{xend-config.sxp}.
906 \subsection{\path{network}}
908 This script is called once when Xend is started and once when Xend is
909 stopped. Its job is to do any advance preparation required for the
910 Xen virtual network when Xend starts and to do any corresponding
911 cleanup when Xend exits.
913 In the default configuration, this script creates the bridge
914 ``xen-br0'' and moves eth0 onto that bridge, modifying the routing
915 accordingly.
917 In configurations where the bridge already exists, this script could
918 be replaced with a link to \path{/bin/true} (for instance).
920 When Xend exits, this script is called with the {\tt stop} argument,
921 which causes it to delete the Xen bridge and remove {\tt eth0} from
922 it, restoring the normal IP and routing configuration.
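If your bridge is already configured by other means, one approach is to
point the script at a no-op, as suggested above. Assuming the default
script name and location, this might look like:

\begin{verbatim}
# ln -sf /bin/true /etc/xen/network
\end{verbatim}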
924 \subsection{\path{vif-bridge}}
926 This script is called for every domain virtual interface. This should
927 do things like configuring firewalling rules for that interface and
928 adding it to the appropriate bridge.
930 By default, this adds and removes VIFs on the default Xen bridge.
931 This script can be customized to properly deal with more complicated
932 bridging setups.
934 \chapter{Advanced Scheduling Configuration}
936 \section{Scheduler selection}
Xen offers a boot time choice between multiple schedulers. To select
a scheduler, pass the boot parameter {\tt sched=sched\_name} to Xen,
substituting the appropriate scheduler name. Details of the schedulers
and their parameters are included below; future versions of the tools
will provide a higher-level interface to these parameters.
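For example, to boot Xen with the Atropos scheduler, the parameter can
be appended to the Xen kernel line in \path{grub.conf} (the kernel path
is illustrative):

\begin{verbatim}
kernel /boot/xen.gz sched=atropos
\end{verbatim}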
944 \section{Borrowed Virtual Time}
946 BVT provides proportional fair shares of the CPU time. It has been
947 observed to penalise domains that block frequently (e.g. IO intensive
948 domains), so the FBVT derivative has been included as an alternative.
950 \subsection{Global Parameters}
952 \begin{description}
\item[ctx\_allow]
the context switch allowance is similar to the ``quantum''
in traditional schedulers. It is the minimum time that
a scheduled domain will be allowed to run before being
pre-empted. This prevents thrashing of the CPU.
958 \end{description}
960 \subsection{Per-domain parameters}
962 \begin{description}
963 \item[mcuadv]
964 the MCU (Minimum Charging Unit) advance determines the
965 proportional share of the CPU that a domain receives. It
is set in inverse proportion to a domain's sharing weight.
967 \item[warp]
the amount of ``virtual time'' the domain is allowed to warp
969 backwards
970 \item[warpl]
971 the warp limit is the maximum time a domain can run warped for
972 \item[warpu]
973 the unwarp requirement is the minimum time a domain must
974 run unwarped for before it can warp again
975 \end{description}
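These parameters can be adjusted for a running domain with the {\tt xm
bvt} command described earlier. The argument order shown here is an
assumption; check {\tt xm help} on your installation:

\begin{verbatim}
# xm bvt <dom_id> <mcuadv> <warp> <warpl> <warpu>
\end{verbatim}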
977 \section{Fair Borrowed Virtual Time}
This is a derivative of BVT that aims to provide better fairness for
980 IO intensive domains as well as for CPU intensive domains.
982 \subsection{Global Parameters}
984 Same as for BVT.
986 \subsection{Per-domain parameters}
988 Same as for BVT.
990 \section{Atropos}
992 Atropos is a Soft Real Time scheduler. It provides guarantees about
993 absolute shares of the CPU (with a method for optionally sharing out
994 slack CPU time on a best-effort basis) and can provide timeliness
995 guarantees for latency-sensitive domains.
997 \subsection{Per-domain parameters}
999 \begin{description}
1000 \item[slice]
1001 The length of time per period that a domain is guaranteed.
1002 \item[period]
1003 The period over which a domain is guaranteed to receive
1004 its slice of CPU time.
1005 \item[latency]
1006 The latency hint is used to control how soon after
1007 waking up a domain should be scheduled.
1008 \item[xtratime]
1009 This is a true (1) / false (0) flag that specifies whether
1010 a domain should be allowed a share of the system slack time.
1011 \end{description}
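Similarly, the {\tt xm atropos} command sets these values for a running
domain. Again, the exact argument order given here is an assumption to
be checked against {\tt xm help}:

\begin{verbatim}
# xm atropos <dom_id> <period> <slice> <latency> <xtratime>
\end{verbatim}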
1013 \section{Round Robin}
1015 The Round Robin scheduler is included as a simple demonstration of
1016 Xen's internal scheduler API. It is not intended for production use
1017 --- the other schedulers included are all more general and should give
1018 higher throughput.
1020 \subsection{Global parameters}
1022 \begin{description}
1023 \item[rr\_slice]
1024 The maximum time each domain runs before the next
1025 scheduling decision is made.
1026 \end{description}
1028 \chapter{Privileged domains}
1030 There are two possible types of privileges: IO privileges and
1031 administration privileges.
1033 \section{Driver domains (IO Privileges)}
1035 IO privileges can be assigned to allow a domain to drive PCI devices
itself. This is used to support driver domains.
1038 Setting backend privileges is currently only supported in SXP format
1039 config files (??? is this true - there's nothing in xmdefconfig,
1040 anyhow). To allow a domain to function as a backend for others,
1041 somewhere within the {\tt vm} element of its configuration file must
1042 be a {\tt backend} element of the form {\tt (backend ({\em type}))}
1043 where {\tt \em type} may be either {\tt netif} or {\tt blkif},
1044 according to the type of virtual device this domain will service.
1045 After this domain has been built, Xend will connect all new and
1046 existing {\em virtual} devices (of the appropriate type) to that
1047 backend.
1049 Note that:
1050 \begin{itemize}
1051 \item a block backend cannot import virtual block devices from other
1052 domains
1053 \item a network backend cannot import virtual network devices from
1054 other domains
1055 \end{itemize}
1057 Thus (particularly in the case of block backends, which cannot import
1058 a virtual block device as their root filesystem), you may need to boot
1059 a backend domain from a ramdisk or a network device.
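As a sketch, the SXP configuration for a network backend domain might
contain an element like the following (the surrounding fields are
elided and the domain name is purely illustrative):

\begin{verbatim}
(vm
  (name netdriver)
  ...
  (backend (netif))
)
\end{verbatim}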
1061 The privilege to drive PCI devices may also be specified on a
per-device basis. Xen will assign to a domain the minimal set of
hardware privileges required to control its devices. This
1064 can be configured in either format of configuration file:
1066 \begin{itemize}
1067 \item SXP Format:
1068 Include {\tt device} elements
1069 {\tt (device (pci (bus {\em x}) (dev {\em y}) (func {\em z}))) } \\
1070 inside the top-level {\tt vm} element. Each one specifies the address
1071 of a device this domain is allowed to drive ---
1072 the numbers {\em x},{\em y} and {\em z} may be in either decimal or
1073 hexadecimal format.
1074 \item Flat Format: Include a list of PCI device addresses of the
1075 format: \\ {\tt pci = ['x,y,z', ...] } \\ where each element in the
1076 list is a string specifying the components of the PCI device
1077 address, separated by commas. The components ({\tt \em x}, {\tt \em
1078 y} and {\tt \em z}) of the list may be formatted as either decimal
1079 or hexadecimal.
1080 \end{itemize}
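For instance, to grant a domain access to the device at bus 0, device
16, function 0, a flat format configuration file might contain (the
address is purely illustrative):

\begin{verbatim}
pci = [ '0x0,0x10,0x0' ]
\end{verbatim}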
1082 \section{Administration Domains}
1084 Administration privileges allow a domain to use the ``dom0
1085 operations'' (so called because they are usually available only to
1086 domain 0). A privileged domain can build other domains, set scheduling
1087 parameters, etc.
1089 % Support for other administrative domains is not yet available...
1091 \chapter{Xen build options}
1093 For most users, the default build of Xen will be adequate. For some
advanced uses, Xen provides a number of build-time options.
1096 At build time, these options should be set as environment variables or
1097 passed on make's command-line. For example:
1099 \begin{verbatim}
1100 export option=y; make
1101 option=y make
1102 make option1=y option2=y
1103 \end{verbatim}
1105 \section{List of options}
1107 {\bf debug=y }\\
1108 Enable debug assertions and console output.
1109 (Primarily useful for tracing bugs in Xen). \\
1110 {\bf debugger=y }\\
1111 Enable the in-Xen pervasive debugger (PDB).
1112 This can be used to debug Xen, guest OSes, and
1113 applications. For more information see the
1114 XenDebugger-HOWTO. \\
1115 {\bf perfc=y }\\
1116 Enable performance-counters for significant events
1117 within Xen. The counts can be reset or displayed
1118 on Xen's console via console control keys. \\
1119 {\bf trace=y }\\
1120 Enable per-cpu trace buffers which log a range of
1121 events within Xen for collection by control
1122 software. For more information see the chapter on debugging,
1123 in the Xen Interface Manual.
1125 \chapter{Xen boot options}
1127 These options are used to configure Xen's behaviour at runtime. They
1128 should be appended to Xen's command line, either manually or by
1129 editing \path{grub.conf}.
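A typical \path{grub.conf} entry might look like the following; the
kernel and module paths, memory size and root device are assumptions to
be adapted to your installation:

\begin{verbatim}
title Xen 2.0
  kernel /boot/xen.gz dom0_mem=131072 console=vga
  module /boot/vmlinuz-2.6-xen0 root=/dev/hda1 ro
\end{verbatim}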
1131 \section{List of options}
1133 {\bf ignorebiostables }\\
1134 Disable parsing of BIOS-supplied tables. This may help with some
1135 chipsets that aren't fully supported by Xen. If you specify this
1136 option then ACPI tables are also ignored, and SMP support is
1137 disabled. \\
1139 {\bf noreboot } \\
1140 Don't reboot the machine automatically on errors. This is
1141 useful to catch debug output if you aren't catching console messages
1142 via the serial line. \\
1144 {\bf nosmp } \\
1145 Disable SMP support.
This option is implied by `ignorebiostables'. \\
1148 {\bf noacpi } \\
1149 Disable ACPI tables, which confuse Xen on some chipsets.
This option is implied by `ignorebiostables'. \\
1152 {\bf watchdog } \\
1153 Enable NMI watchdog which can report certain failures. \\
1155 {\bf noht } \\
1156 Disable Hyperthreading. \\
1158 {\bf badpage=$<$page number$>$[,$<$page number$>$] } \\
1159 Specify a list of pages not to be allocated for use
1160 because they contain bad bytes. For example, if your
1161 memory tester says that byte 0x12345678 is bad, you would
place `badpage=0x12345' on Xen's command line (i.e., the
1163 last three digits of the byte address are not
1164 included!). \\
1166 {\bf com1=$<$baud$>$,DPS[,$<$io\_base$>$,$<$irq$>$] \\
1167 com2=$<$baud$>$,DPS[,$<$io\_base$>$,$<$irq$>$] } \\
1168 Xen supports up to two 16550-compatible serial ports.
For example: `com1=9600,8n1,0x408,5' maps COM1 to a
1170 9600-baud port, 8 data bits, no parity, 1 stop bit,
1171 I/O port base 0x408, IRQ 5.
1172 If the I/O base and IRQ are standard (com1:0x3f8,4;
1173 com2:0x2f8,3) then they need not be specified. \\
1175 {\bf console=$<$specifier list$>$ } \\
1176 Specify the destination for Xen console I/O.
1177 This is a comma-separated list of, for example:
1178 \begin{description}
1179 \item[vga] use VGA console and allow keyboard input
1180 \item[com1] use serial port com1
1181 \item[com2H] use serial port com2. Transmitted chars will
1182 have the MSB set. Received chars must have
1183 MSB set.
1184 \item[com2L] use serial port com2. Transmitted chars will
1185 have the MSB cleared. Received chars must
1186 have MSB cleared.
1187 \end{description}
1188 The latter two examples allow a single port to be
1189 shared by two subsystems (e.g. console and
1190 debugger). Sharing is controlled by MSB of each
1191 transmitted/received character.
[NB. Default for this option is `com1,tty'] \\
1194 {\bf conswitch=$<$switch-char$><$auto-switch-char$>$ } \\
Specify how to switch serial-console input between
Xen and DOM0. The required sequence is CTRL-$<$switch-char$>$
pressed three times. Specifying the backtick character
disables switching.
The $<$auto-switch-char$>$ specifies whether Xen should
auto-switch input to DOM0 when it boots --- if it is `x'
then auto-switching is disabled. Any other value, or
omitting the character, enables auto-switching.
[NB. Default for this option is `a'] \\
1204 {\bf nmi=xxx } \\
1205 Specify what to do with an NMI parity or I/O error. \\
`nmi=fatal': Xen prints a diagnostic and then hangs. \\
`nmi=dom0': Inform DOM0 of the NMI. \\
`nmi=ignore': Ignore the NMI. \\
1210 {\bf dom0\_mem=xxx } \\
1211 Set the maximum amount of memory for domain0. \\
1213 {\bf tbuf\_size=xxx } \\
1214 Set the size of the per-cpu trace buffers, in pages
1215 (default 1). Note that the trace buffers are only
1216 enabled in debug builds. Most users can ignore
1217 this feature completely. \\
1219 {\bf sched=xxx } \\
1220 Select the CPU scheduler Xen should use. The current
possibilities are `bvt', `atropos' and `rrobin'. The
default is `bvt'. For more information see
1223 Sched-HOWTO.txt. \\
1225 {\bf pci\_dom0\_hide=(xx.xx.x)(yy.yy.y)... } \\
1226 Hide selected PCI devices from domain 0 (for instance, to stop it
1227 taking ownership of them so that they can be driven by another
1228 domain). Device IDs should be given in hex format. Bridge devices do
1229 not need to be hidden --- they are hidden implicitly, since guest OSes
1230 do not need to configure them.
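For example, to hide two (purely illustrative) devices from domain 0:

\begin{verbatim}
pci_dom0_hide=(01.03.0)(02.01.0)
\end{verbatim}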
1232 \chapter{Further Support}
1234 If you have questions that are not answered by this manual, the
1235 sources of information listed below may be of interest to you. Note
1236 that bug reports, suggestions and contributions related to the
1237 software (or the documentation) should be sent to the Xen developers'
1238 mailing list (address below).
1240 \section{Other documentation}
1242 For developers interested in porting operating systems to Xen, the
1243 {\em Xen Interface Manual} is distributed in the \path{docs/}
1244 directory of the Xen source distribution. Various HOWTOs are
1245 available in \path{docs/HOWTOS} but this content is being integrated
1246 into this manual.
1248 \section{Online references}
1250 The official Xen web site is found at: \\
{\tt
http://www.cl.cam.ac.uk/Research/SRG/netos/xen/ }.
1254 Links to other
1255 documentation sources are listed at: \\ {\tt
1256 http://www.cl.cam.ac.uk/Research/SRG/netos/xen/documentation.html}.
1258 \section{Mailing lists}
1260 There are currently two official Xen mailing lists:
1262 \begin{description}
1263 \item[xen-devel@lists.sourceforge.net] Used for development
1264 discussions and requests for help. Subscribe at: \\
1265 {\tt http://lists.sourceforge.net/mailman/listinfo/xen-devel}
1266 \item[xen-announce@lists.sourceforge.net] Used for announcements only.
1267 Subscribe at: \\
1268 {\tt http://lists.sourceforge.net/mailman/listinfo/xen-announce}
1269 \end{description}
1271 Although there is no specific user support list, the developers try to
1272 assist users who post on xen-devel. As the bulk of traffic on this
1273 list increases, a dedicated user support list may be introduced.
1275 \end{document}