Frequently Asked Questions
From Linux-VServer
We are currently migrating to MediaWiki from our old installation, but not all content has been migrated yet. Take a look at the Wiki Team page for instructions on how to help, or look at the old wiki to find the information that has not been migrated yet.
To ease migration we created a List of old Documentation pages.
CURRENTLY THE CONTENT OF THE OLD WIKI FAQ (AND MORE) IS BEING MIGRATED TO THIS PAGE (TASK: DERJOHN)
General
What is a 'Guest'?
What kind of Operating System (OS) can I run as guest?
Is this a new project? When was it started?
Which distributions did you test?
Is VServer comparable to XEN/UML/QEMU?
With which version should I begin?
Is VServer secure?
Performance?
What is the "great flower page"?
Resource usage
Resource sharing?
- memory: Dynamically.
- CPU usage: Dynamically (token bucket)
Resource limiting?
- using ulimits and rlimits per guest (rlimits are a new feature of kernel 2.6/vs2.0) to limit memory consumption, the number of processes, file handles, ...: see Resource Limits
- CPU usage : see CPU Scheduler
- disk space usage : see Disk Limits and Quota
How do I limit a guest's RAM? I want to prevent OOM situations on the host!
If you want a recipe, do this:
- Check the size of memory pages. On x86 and x86_64 it is usually 4 KB per page.
- Create /etc/vservers/<guest>/rlimits/
- Check your physical memory size on the host, e.g. with "free -k". maxram = memory size in kilobytes / page size in kilobytes.
- Limit the guest's physical RAM to a value smaller than maxram:
echo %%insertYourPagesHereSmallerThanMaxram%% > /etc/vservers/<guest>/rlimits/rss
- Check your swap space, e.g. with 'swapon -s'. maxswap = swap size in kilobytes / page size in kilobytes.
- Limit the guest's maximum number of address-space (as) pages to a value smaller than (maxram + maxswap):
echo %%desiredvalue%% > /etc/vservers/<guest>/rlimits/as
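Putting the recipe together, here is a minimal sketch (the guest name "myguest" and the 512 MB RAM / 512 MB swap budget are assumptions; adjust to your setup):

#!/bin/sh
# Sketch: cap a guest at 512 MB of RAM and 1 GB of total address space (RAM + swap).
GUEST=myguest                                 # assumption: name of the guest
PAGE_KB=$(($(getconf PAGESIZE) / 1024))       # page size in KB, usually 4
RSS_PAGES=$((512 * 1024 / PAGE_KB))           # 512 MB expressed in pages
AS_PAGES=$((1024 * 1024 / PAGE_KB))           # 1 GB (RAM + swap) expressed in pages
mkdir -p /etc/vservers/$GUEST/rlimits
echo $RSS_PAGES > /etc/vservers/$GUEST/rlimits/rss
echo $AS_PAGES > /etc/vservers/$GUEST/rlimits/as

The new limits are picked up the next time the guest is (re)started.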
Disk I/O limiting? Is that possible?
# cat /sys/block/hdc/queue/scheduler
noop [anticipatory] deadline cfq
The default is anticipatory a.k.a. "AS". When running several guests on a host you probably want the I/O performance shared in a fair way among the different guests. The kernel comes with a "completely fair queueing" scheduler, CFQ, which can do that. (More on schedulers can be found at http://lwn.net/Articles/114770/) This is how to set the scheduler to "cfq" manually:
root# echo "cfq" > /sys/block/hdc/queue/scheduler root# cat /sys/block/hdc/queue/scheduler noop anticipatory deadline [cfq]
Keep in mind that you have to do this for all physical disks. So if you run an md software RAID, do it for all the physical /dev/hdXYZ disks (a small loop doing this is sketched below)! If you run Debian, there is a predefined way to set the /sys values at boot time:
# apt-get install sysfsutils
[...]
# grep cfq /etc/sysfs.conf
block/sda/queue/scheduler = cfq
block/sdc/queue/scheduler = cfq
# /etc/init.d/sysfsutils restart
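If you are not on Debian (or just want to do it by hand), a minimal sketch that switches every block device exposing a scheduler file to cfq; the glob over /sys/block is an assumption about your device layout:

for f in /sys/block/*/queue/scheduler; do
    echo cfq > "$f"
done
grep . /sys/block/*/queue/scheduler   # verify: cfq should be shown in brackets everywhere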
For non-vserver processes under CFQ you can choose by which key the kernel decides fairness:
cat /sys/block/hdc/queue/iosched/key_type
pgid [tgid] uid gid
Hint: The 'key_type'-feature has been removed in the mainline kernel recently. Don't look for it any longer :(
The default is tgid, which means sharing fairly among process groups. Think of every guest as being treated like its own process group. It's not possible to set a scheduler strategy within a guest; all processes belonging to the same guest are treated like "noop" within the guest. So: if you run apache and some ftp server within the _same_ guest, there is no fair scheduling between them, but there is fair scheduling between the whole guest and all other guests.
And: It's possible to tune the scheduler parameters in several ways. Have a look at /sys/block/hdc/queue/....
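For example, to see which tunables your current scheduler exposes (the exact file names depend on the kernel version and the chosen scheduler, so treat this as a sketch):

# ls /sys/block/hdc/queue/iosched/
# grep . /sys/block/hdc/queue/iosched/*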
Nice disk I/O scheduling, is that possible?
I/O scheduling is split into three classes, called real-time, best-effort and idle. The default is best-effort, and within best-effort you can have a niceness from 0 up to and including 7. You can set this niceness with the tool ionice, which on Debian is in either the util-linux or the schedutils package. To change the I/O niceness you need CAP_SYS_NICE, and you need to have the same uid as the process you want to ionice.
- Note: If you want to use any scheduling class other than best-effort you will also need the CAP_SYS_ADMIN flag. Be warned that this gives quite some capabilities to the vserver, not just for I/O scheduling!
If you want to increase the niceness of an I/O hogging process within a vserver you need to do:
chcontext --xid sponlp1 sudo -u '#2089' ionice -c2 -n5 -p 24409
with sudo and ionice installed on the host, to increase the niceness of pid 24409, which runs as uid 2089.
Unification
What is unification (vunify)?
What is vhashify?
It creates hardlinks to files named after a hash of the content of the file. If you have a recent version of the vserver patch (2.2+), with CONFIG_VSERVER_COWBL enabled, you can even modify the hardlinked files inside the vservers and the links will be broken automatically.
There seems to be a catch when a hashified file has multiple hardlinks inside a guest, or when another internal hardlink is added after hashification. Link breaking will remove all the internal hardlinks too, so the guest will end up with different copies of the original file. The correct solution would be to not hashify files that have multiple links prior to hashification, and to break the link to the hashified version when a new internal hardlink is created. Apparently, this is not implemented yet (?).
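To spot such files before hashifying, a quick check like the following should do (a sketch; /vservers/<guest> is assumed to be the guest's root directory):

find /vservers/<guest> -xdev -type f -links +1 -printf '%n %p\n' | sort -rn

Files listed here already have more than one link inside the guest and are the candidates for the problem described above.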
How do I manage a multi-guest setup with vhashify?
mkdir /etc/vservers/.defaults/apps/vunify/hash /vservers/.hash
ln -s /vservers/.hash /etc/vservers/.defaults/apps/vunify/hash/root
Then, run this once per vserver:
mkdir /etc/vservers/<vservername>/apps/vunify # vhashify reuses vunify configuration
To hashify a running vserver, do (possibly from a cronjob):
vserver name-of-guest hashify
The guest needs to be running because vhashify tries to figure out what files not to hashify by calling the package manager of the guest via vserver enter.
In order for the OS cache to benefit from the hardlinking, you'll have to restart the vservers.
To clean up hashified files that are no longer referenced by any vserver, do (possibly from a cronjob):
find /vservers/.hash -type f -links 1 -print0 | xargs -0 rm
Until you do this, the files still take up space even though no vservers need them.
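Putting it all together, a nightly cron job could look roughly like this (a sketch; it assumes all guests are configured under /etc/vservers/ and that 'vserver <name> status' returns success for running guests):

#!/bin/sh
# hashify every running guest, then remove hash files no guest references any more
for guestdir in /etc/vservers/*/; do
    name=$(basename "$guestdir")
    vserver "$name" status >/dev/null 2>&1 && vserver "$name" hashify
done
find /vservers/.hash -type f -links 1 -print0 | xargs -0 -r rm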
Filesystem usage
Is there a way to implement "user/group quota" per VServer?
What about "Quota" for a context? Howto limit disk usage?
How do I tag a guest's directory with xid?
Filesystem XID tagging only works on supported filesystems. Those are currently: ext2/3, reiserfs/reiser3, xfs and jfs. To activate XID tagging you have to mount the filesystem with "-o tag"; an example mount is shown below (the former "tagxid" option is outdated since VS2.2). Attention: it is _not_ possible to "-o remount,tag", you have to mount the filesystem freshly. The guests will tag their files automatically. If you copy files in from the host, you have to tag them manually like this:
chxid -c xid -R /var/lib/vservers/<guest>
Note: Contexts 0 and 1 will see all files; guests will only be able to access untagged files and files with their own XID. They can see files with other XIDs, but no information about them, e.g. no owner, no group, no permissions.
Note: It is not advised to tag the root filesystem, as explained by Herbert: trying to do so will expose you to some trouble!
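For reference, a hedged example of mounting a dedicated guest filesystem with tagging enabled (the device /dev/sdb1, the mount point /vservers and the filesystem type ext3 are assumptions):

# mount -o tag /dev/sdb1 /vservers
# grep vservers /etc/fstab
/dev/sdb1  /vservers  ext3  defaults,tag  0  2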
Network
Does it support IPv6?
I can't do all I want with the network interfaces inside the guest?
How do I add several IPs to a vserver?
Here is a little helper script that adds a list of IPs defined in a text file (myiplist), one per line.
#!/bin/bash
j=1
for i in `cat myiplist`; do
    j=$(($j+1))
    mkdir $j
    echo $i > $j/ip
    echo "24" > $j/prefix
done
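Note that the script creates the numbered interface directories in the current working directory, so you would typically run it from /etc/vservers/<guest>/interfaces/ (an assumption; also adapt the start value of j if some numbered directories already exist there).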
How do I assign a new IP address to a running guest?
- add the IP on the host, for example:
ip addr add 194.169.123.23/24 dev eth0
- add the IP to the guest's network context (a guest's NID is the same as its XID {context ID})
naddress --add --nid <nid> --ip 194.169.123.23/24
- enter the guest (best via ssh)
- restart the services that need to make use of the new address if required
- update the config in /etc/vservers/<servername>/interfaces to reflect the changes for the next guest restart (if desired)
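The same steps as a short sketch (guest name, NID, interface directory number and address are assumptions; remember that the NID usually equals the XID):

# on the host:
ip addr add 194.169.123.23/24 dev eth0
naddress --add --nid <nid> --ip 194.169.123.23/24
# persist it for the next guest restart (directory number 1 is just an example):
mkdir -p /etc/vservers/<guest>/interfaces/1
echo 194.169.123.23 > /etc/vservers/<guest>/interfaces/1/ip
echo 24 > /etc/vservers/<guest>/interfaces/1/prefix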
If my host has only a single public IP, can I use RFC1918 IPs (e.g. 192.168.foo.bar) for the guests?
I get an unexplainable error "ncontext: vc_net_migrate(): No such process" when trying to start the vserver.
How do I assign a static context to an existing vserver?
Since upgrading to a newer VS version my guest complains about "vsched: non-numeric value specified for '--priority_bias'" at start time. What's wrong?
Newer util-vserver versions expect a sched/ directory with one value per file instead of the old single schedule file. A small conversion script:
# cat /usr/local/sbin/vserver-convert-schedule-to-scheddir
#!/bin/sh
mkdir /etc/vservers/$1/sched
sed -e 1p -n /etc/vservers/$1/schedule > /etc/vservers/$1/sched/fill-rate
sed -e 2p -n /etc/vservers/$1/schedule > /etc/vservers/$1/sched/interval
sed -e 3p -n /etc/vservers/$1/schedule > /etc/vservers/$1/sched/tokens
sed -e 4p -n /etc/vservers/$1/schedule > /etc/vservers/$1/sched/tokens-min
sed -e 5p -n /etc/vservers/$1/schedule > /etc/vservers/$1/sched/tokens-max
mv /etc/vservers/$1/schedule /etc/vservers/$1/schedule.converted.see.scheddir
# see: http://oldwiki.linux-vserver.org/Scheduler+Parameters
# see: http://www.nongnu.org/util-vserver/doc/conf/configuration.html#sched
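A hedged usage example, assuming the script was saved under the path shown above and made executable:

# chmod +x /usr/local/sbin/vserver-convert-schedule-to-scheddir
# vserver-convert-schedule-to-scheddir <vserver>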
How do I set kernel.shmall / kernel.shmmax (or other sysctl values) for a guest?
Here is an example of how to do so:
# mkdir -p /etc/vservers/<vserver>/sysctl/0
# echo kernel.shmall > /etc/vservers/<vserver>/sysctl/0/setting
# echo 134217728 > /etc/vservers/<vserver>/sysctl/0/value
# mkdir -p /etc/vservers/<vserver>/sysctl/1
# echo kernel.shmmax > /etc/vservers/<vserver>/sysctl/1/setting
# echo 134217728 > /etc/vservers/<vserver>/sysctl/1/value
It's also explained on the great flower page:
- see: http://www.nongnu.org/util-vserver/doc/conf/configuration.html -> Look for "sysctl".
After changing those values, restart your guest, enter it and check if the values are set:
# sysctl -a | grep shm
...
kernel.shmall = 134217728
kernel.shmmax = 134217728