Nobody cares if you do backups. All that counts is your ability to restore the data.
This page contains material about backups - from simple hints to complete scripts and links to further references. Please note that a few of these scripts date back many years; they may no longer be current.
The script backup2net.sh allows the backup of large amounts of data to another backup location. It is very easy to use - the only requirement is that the "other backup location" is mounted on the system where you want to perform the backup. Thus, you can use any USB disk drive, or a NAS server somewhere on your network!
Download the backup2net.sh script (updated 2011-01).
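The script itself is the download above; its core idea - archive a directory tree onto an already-mounted backup location - can be sketched roughly as follows (the function name and all paths are illustrative, not taken from the script):

```shell
# Sketch of a backup2net-style backup: write a dated tarball of a
# directory tree to a backup location that is already mounted.
# backup_tree and the example paths are hypothetical.
backup_tree() {
    src=$1                          # directory tree to back up
    dest=$2                         # mounted backup location
    stamp=$(date +%Y-%m-%d)
    tar czf "$dest/$(basename "$src")-$stamp.tar.gz" \
        -C "$(dirname "$src")" "$(basename "$src")"
}

# Typical use: mount a USB disk or NAS share first, then e.g.
#   mount /dev/sdb1 /mnt/backup && backup_tree /home /mnt/backup
```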
A while ago the amount of data that should be included in a daily backup outgrew the volume of a CD-RW. At that time I did not have a DVD writer, but I did have access to a tape drive, so I wrote the script backup2tape.sh, which later became the framework for backup2net.sh (see above).
The script relies on Jörg Schilling's star. I have been using this script with success for almost a decade to perform regular backups on several Linux machines, either manually or via cron, with both DDS2 and DLT tapes.
As a side note, I have stopped using tape backups and switched to network-based storage. While magnetic tapes are incredibly safe long-term archival media, one inconvenience (for me) lies in their slow operation: it can take several hours to retrieve a file.
Download the backup2tape.sh script (updated 2014-06).
One of the wonderful things about Linux is its built-in, all-round networking. This makes it very easy to use almost any device attached to a remote computer, such as a display, scanner, or tape drive. If you want to use a remote tape drive (say, located on machine "tapehost") to back up your local computer, use something along the following lines:
tar cvf - /path/to/file | ssh tapehost 'buffer -s 32k -p 75 -m 10m > /dev/nst0'
Tape access usually requires root privileges. If you use rsh instead of ssh, you may also need to enable superuser access by adding the option -h to the entry for rshd in /etc/inetd.conf:
shell stream tcp nowait root /usr/sbin/tcpd in.rshd -Lh
You can use a similar procedure to clone an existing system to another one (the remote system, here called targetPC, should be booted from floppy or CD, so that its hard disk is not needed for system operation during the cloning process):
(cd / && tar cpf - .) | ssh targetPC '(cd / && tar xpf -)'
... or the other way round, i.e. you fetch the system from sourcePC:
ssh sourcePC '(cd / && tar cpf - .)' | (cd / && tar xpf -)
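The tar pipe itself can be tried without a second machine. This local sketch (directory names are examples) copies a tree with permissions preserved, exactly the same pattern as the ssh variants above:

```shell
# Copy a directory tree through a tar pipe, preserving permissions (-p),
# between two local directories - same pattern as the ssh examples.
copy_tree() {
    (cd "$1" && tar cpf - .) | (cd "$2" && tar xpf -)
}

# e.g.: copy_tree /mnt/oldroot /mnt/newroot
```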
Computers with pre-installed MS Windows often come without a recovery medium, and you have to create one yourself. Sometimes additional "one-key recovery" software is available, but I found out (the hard way ;-) that most of these tools work only if you do not modify the hard disk partition layout.
If you want to preserve the original data, I recommend making a backup not only of the data, but of the complete partition. This is easily achieved using Linux; you may use any floppy-based Linux to do this:
dd if=/dev/sda1 | gzip -c > /path/to/backup/sda1.gz
Here, sda1 is your Windows partition and /path/to/backup is some free drive (local, USB, or remote) where you can write your data.
Alternatively, do the same thing over ssh, preferably on a fast network:
dd if=/dev/sda1 | ssh login@remote 'gzip -c > /path/to/backup/sda1.gz'
The process may take a while, and the resulting file may be quite large (more than half of the partition size) even if the actually installed data amount to only 300 MB - this is due to the archival of the "raw" partition data.
Axel Buergers pointed out that this problem can be largely reduced by defragmenting and
then "filling up" the Windows partition with a huge NULL file, something like
dd if=/dev/zero of=/mnt/dos/eraseme bs=1M, which will abort
once the partition is full. The file /mnt/dos/eraseme can then be deleted and
the "clean" partition archived as above. This "cleaning" process can reduce the size
of the final partition image by 50%.
Repeat this for all relevant partitions.
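The compression effect of the zero-fill trick can be seen without touching a real partition. In this sketch a plain temporary file (sizes are arbitrary) stands in for /dev/sda1: a little incompressible "data" plus zeroed "free space":

```shell
# Why zeroing free space helps: gzip compresses the zeroed part of a
# raw image almost to nothing. A plain file stands in for the partition.
img=$(mktemp)
dd if=/dev/urandom of="$img" bs=1024 count=64 2>/dev/null   # 64 kB of "data"
dd if=/dev/zero bs=1024 count=960 2>/dev/null >> "$img"     # zeroed "free space"
dd if="$img" 2>/dev/null | gzip -c > "$img.gz"
ls -l "$img" "$img.gz"   # the image is 1 MB, the archive barely above 64 kB
```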
In addition, do not forget to make a copy of the original MBR ...
dd if=/dev/sda of=/path/to/backup/mbr.512-bytes.original bs=512 count=1
... and print the current partition table to a file:
/sbin/fdisk -l /dev/sda > /path/to/backup/partitions.txt
To restore such an archived partition file to the original location:
gunzip -c /path/to/backup/sda1.gz | dd of=/dev/sda1
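The archive/restore pair can be verified on a plain file before trusting it with a real partition. The paths here are temporary files, not devices:

```shell
# Round trip of the dd | gzip archive and gunzip | dd restore,
# demonstrated on an ordinary file instead of /dev/sda1.
part=$(mktemp)
printf 'pretend this is raw partition data' > "$part"
dd if="$part" 2>/dev/null | gzip -c > "$part.gz"             # archive
gunzip -c "$part.gz" | dd of="$part.restored" 2>/dev/null    # restore
cmp "$part" "$part.restored" && echo "restore OK"
```

The saved MBR can be written back analogously with dd if=/path/to/backup/mbr.512-bytes.original of=/dev/sda bs=512 count=1 - double-check the target device first.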
Among others, I maintain a Linux system that runs some MySQL databases. To save the database files at regular intervals, the script below is called every night via crontab. It archives the files to a backup drive, in our case an NFS-mounted drive on a physically different computer. - Please adjust the mount points to your installation before launching. Instructions for setting up the cron job are in the file.
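An /etc/crontab-style entry for such a nightly job might look like this (the script path, schedule, and log file are examples only; the downloadable script documents its own setup):

```crontab
# Run the backup script every night at 02:30 as root,
# appending its output to a log file.
30 2 * * *  root  /usr/local/sbin/backup-mysql.sh >> /var/log/backup-mysql.log 2>&1
```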
Similar to the above, this is a simple script to archive the /home directory tree to a remote computer. It is called as a cron job. - Please adjust the mount points to your installation before launching. Instructions for setting up the cron job are in the file.
Firefox is a great web browser, but it has one downside: Mozilla products use a random folder name for user profiles. This causes a problem if you want to use the same bookmarks on multiple computers, e.g. by synchronising with unison.
The workaround is easy: Simply rename your profile folder in ~/.mozilla/firefox to whatever fits your needs - e.g. default.JHa. Then, edit ~/.mozilla/firefox/profiles.ini to reflect this change and you're done.
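After the rename, the relevant part of profiles.ini might look like the following (the profile name and the default.JHa folder are just the examples from above):

```ini
[Profile0]
Name=default
IsRelative=1
Path=default.JHa
```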
Source: Micah Carrick's blog (link is dead?).
If you need to copy data over the network in a very simple way - without ftp or similar tools - there is still netcat, the Swiss army knife of networking (on some systems the binary is called nc). I sometimes use a floppy-based Linux, such as tomsrtbt, to copy data between two computers, e.g. to back up the whole hard disk of a laptop.
An example of how to use it: on the "sender" side, launch tar and pipe its output through netcat in "listen" mode (-l) with an arbitrary port number:
tar cvf - . | netcat -l -p 5555
On the "receiving" computer, just give the sender's IP address (or hostname)
and the same port number and pipe the output through
tar to unpack:
netcat 192.168.xxx.yyy 5555 | tar xvf -
... and the data will be transferred. The same works with almost any other tool that writes to standard output, e.g. dd instead of tar.
A comment on the importance of backups was also published in the Heise Newsticker in June 2002 (translated from the German):
A thoughtful note was struck by Douglas O'Shaugnessy of the support service of Legato, who worked on site with 18 specialists after the collapse of the World Trade Center towers. Shaugnessy [...] reported on the frequently futile search for recovery plans and usable inventory lists, which turned the purely technical work of the data-recovery staff into a mixture of jigsaw puzzle and detective game. For lack of meaningful labelling, his specialists had to comb through 20,000 tapes in search of the most recent backups, and more than once had to restore a much older version of the backups so that the companies could resume work.
Shaugnessy presented an overview according to which the companies that suffered the greatest losses were those whose World Trade Center employees worked mainly with laptops. About 30 percent of the company data was lost because employees only irregularly saved their mobile computers to the central company backup. Shaugnessy's talk ended with an urgent-sounding plea: "Document your backup process. Document your recovery procedures. Document your tape system. Print this information out several times and keep the copies in other, safe locations."