Frequently Asked Questions
From Linux-VServer
We are currently migrating to MediaWiki from our old installation, but not all content has been migrated yet. Take a look at the Wiki Team page for instructions on how to help, or look at the old wiki to find the information that has not been migrated yet.
To ease migration we created a List of old Documentation pages.
Currently the content of the old wiki FAQ (and more) is being migrated to this page (task: derjohn).
What is a 'Guest'?
What kind of Operating System (OS) can I run as guest?
Which distributions did you test?
Is VServer comparable to XEN/UML/QEMU?
Is VServer secure?
Performance?
Is SMP Supported?
Resource sharing?
- memory: Dynamically.
- CPU usage: Dynamically (token bucket; see the sketch below).
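The CPU token bucket can be tuned per guest through the sched directory of the new-style config. A minimal sketch, assuming a guest named <vservername> using the new-style configuration (all values are examples):
cd /etc/vservers/<vservername>/sched
echo 15 > fill-rate    # tokens added to the bucket per interval
echo 30 > interval     # interval length in jiffies (15/30 = roughly 50% CPU)
echo 100 > tokens      # initial amount of tokens
echo 25 > tokens-min   # held processes wait until the bucket refills to this level
echo 200 > tokens-max  # bucket capacity, surplus tokens are discarded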
Resource limiting?
Disk I/O limiting? Is that possible?
The kernel's I/O scheduler decides how disk bandwidth is shared among processes. You can check which scheduler is currently active for a disk:
# cat /sys/block/hdc/queue/scheduler
noop [anticipatory] deadline cfq
The default is "anticipatory", a.k.a. "AS". When running several guests on a host, you probably want the I/O performance shared in a fair way among the different guests. The kernel comes with a "completely fair queueing" scheduler, CFQ, which can do that. (More on schedulers can be found at http://lwn.net/Articles/114770/)
This is how to set the scheduler to "cfq" manually:
root# echo "cfq" > /sys/block/hdc/queue/scheduler root# cat /sys/block/hdc/queue/scheduler noop anticipatory deadline [cfq]
Keep in mind that you have to do this on all physical disks. So if you run an md softraid, apply it to all the physical /dev/hdXYZ disks, not just the md device! A loop like the one below can help.
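A minimal sketch that applies cfq to every IDE/SCSI whole-disk queue (the device glob is an assumption; narrow it to the disks you actually use):
for q in /sys/block/[hs]d*/queue/scheduler; do
    echo "cfq" > "$q"
done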
If you run Debian there is a predefined way to set the /sys values at boot-time:
# apt-get install sysfsutils
[...]
# grep cfq /etc/sysfs.conf
block/sda/queue/scheduler = cfq
block/sdc/queue/scheduler = cfq
# /etc/init.d/sysfsutils restart
For non-vserver processes and CFQ you can choose the key by which the kernel decides fairness:
cat /sys/block/hdc/queue/iosched/key_type
pgid [tgid] uid gid
Hint: the 'key_type' feature has recently been removed from the mainline kernel. Don't look for it any longer :(
The default is tgid, which means fair sharing among process groups; every guest is treated as its own process group. It is not possible to set a scheduler strategy within a guest: all processes belonging to the same guest are treated like "noop" among themselves. So if you run Apache and an FTP server within the _same_ guest, there is no fair scheduling between the two, but there is fair scheduling between the whole guest and all other guests.
It is also possible to tune the scheduler parameters in several ways. Have a look at the files under /sys/block/hdc/queue/, for example:
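(a sketch; which tunables appear under iosched/ depends on the kernel version and the active scheduler, while nr_requests is a generic queue setting and 256 is just an example value)
# list the tunables of the active scheduler
ls /sys/block/hdc/queue/iosched/
# enlarge the request queue
echo 256 > /sys/block/hdc/queue/nr_requests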
Why isn't there a device /dev/xyz within a guest?
What is unification (vunify)?
What is vhashify?
It creates hardlinks to files, named after a hash of the file's content. If you have a recent version of the vserver patch (2.2+) with CONFIG_VSERVER_COWBL enabled, you can even modify the hardlinked files inside the vservers and the links will be broken automatically.
There seems to be a catch when a hashified file has multiple hardlinks inside a guest, or when another internal hardlink is added after hashification. Link breaking will remove all the internal hardlinks too, so the guest will end up with different copies of the original file. The correct solution would be not to hashify files that have multiple links prior to hashification, and to break the link to the hashified version when a new internal hardlink is created. Apparently, this is not implemented yet (?). You can at least check a guest for such internal hardlinks yourself, as sketched below.
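A quick check that lists regular files with more than one link inside a guest; run it before hashifying, since afterwards hashified files also have multiple links (the guest path is a placeholder, and -xdev keeps find on one filesystem):
find /vservers/<vservername> -xdev -type f -links +1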
How do I manage a multi-guest setup with vhashify?
First create a common hash directory and make it the default for all guests:
mkdir /etc/vservers/.defaults/apps/vunify/hash /vservers/.hash
ln -s /vservers/.hash /etc/vservers/.defaults/apps/vunify/hash/root
Then enable it once per vserver:
mkdir /etc/vservers/<vservername>/apps/vunify # vhashify reuses vunify configuration
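To enable it for all existing guests in one go, a minimal sketch (assuming every non-hidden directory under /etc/vservers is a guest):
for g in /etc/vservers/*/; do
    mkdir -p "${g}apps/vunify"
done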
To hashify a running vserver, do (possibly from a cronjob):
vserver name-of-guest hashify
The guest needs to be running because vhashify figures out which files not to hashify by calling the guest's package manager via vserver enter.
In order for the OS cache to benefit from the hardlinking, you'll have to restart the vservers.
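To hashify all running guests from one cronjob, a sketch (again assuming every non-hidden directory under /etc/vservers is a guest; "vserver <name> running" exits successfully only for running guests):
for g in /etc/vservers/*/; do
    name=$(basename "$g")
    vserver "$name" running && vserver "$name" hashify
done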
To clean up hashified files that are no longer referenced by any vserver, do (possibly from a cronjob):
find /vservers/.hash -type f -links 1 -print0 | xargs -0 rm
Until you do this, the files still take up space even though no vserver needs them.
With which version should I begin?
Is there a way to implement "user/group quota" per VServer?
What about "Quota" for a context?
Does it support IPv6?
I can't do all I want with the network interfaces inside the guest?
Is there a web-based interface for vserver that will allow creation/deletion/configuration etc. of vserver guests?
What is old-style and new-style config?
What is the "great flower page"?
How do I add several IPs to a vserver?
Here is a little helper script that adds a list of IPs defined in a text file (myiplist), one per line.
#!/bin/bash
# create one numbered interface directory per IP listed in ./myiplist;
# numbering starts at 2, assuming the guest's first interface already exists
j=1
for i in `cat myiplist`; do
    j=$(($j+1))
    mkdir $j
    echo $i > $j/ip
    echo "24" > $j/prefix   # /24 network prefix, adjust to your setup
done
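Assuming you saved it as addips.sh, you would run it from the guest's interfaces directory of the new-style config:
cd /etc/vservers/<vservername>/interfaces
sh addips.sh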
If my host has only a single public IP, can I use RFC1918 addresses (e.g. 192.168.foo.bar) for the guest vservers?
Yes. Give the guests private addresses and SNAT their traffic to the host's public IP:
iptables -t nat -I POSTROUTING -s $VSERVER_NETZ ! -d $VSERVER_NETZ -j SNAT --to $EXT_IP
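The two variables are placeholders. A self-contained sketch with example values (substitute your own network and public IP):
VSERVER_NETZ=192.168.100.0/24  # private network used by the guests
EXT_IP=203.0.113.10            # the host's public IP
iptables -t nat -I POSTROUTING -s $VSERVER_NETZ ! -d $VSERVER_NETZ -j SNAT --to $EXT_IP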
See: HowtoPrivateNetworking and http://www.tgunkel.de/it/software/doc/linux_server.en#h3-VServer_Masquerading_SNAT (THX, [MUPPETS]Gonzo)
If I shut down my vserver guest, the whole Internet interface ethX on the host is shut down. What happened?
kernel.shmmax = 134217728