[SGVLUG] Sending backup file to remote/offsite storage

Matthew Gallizzi matthew.gallizzi at gmail.com
Thu Aug 10 10:38:42 PDT 2006


Ah, I see. Although I have not tried these solutions, I thought they would
work. Anyway, comments below.

On 8/10/06, David Lawyer <dave at lafn.org> wrote:
>
> On Tue, Aug 08, 2006 at 10:43:41AM -0700, Matthew Gallizzi wrote:
> > Tom,
> >
> > Hmm, what I'd probably do is one of the following:
> > 1) If he wants version history, write a bash script to tar the files
> > that you feel are crucial (/home, /etc..) on his system, put them in a
> > cronjob, have the .tgz be saved to the same location, then have rsync
> > copy over the files in /home/backups or something to your server.
>
> One problem with a big tar archive (file) is that if it gets
> corrupted, everything could be lost.  Better to have a lot of small
> files, as tbackup did.  It used afio instead of tar (although tar was
> an option), which creates one compressed file per file.  But it's
> defunct since it wasn't maintained.  It was programmed modularly with
> about 50 different C programs and scripts all linked into one program.
> Nice job!  I looked some of these over but was too busy to volunteer
> to maintain it.


Alright, if there is a possibility of the archive getting corrupted when it
is transferred between machines, I'd say the best thing to do is have a
script take an md5sum of the original file, take it again after the file has
been transferred across the net, and compare the two (to ensure everything
transferred OK). Some copying utilities may already do this -- rsync, for
one, verifies each file it transfers with a checksum.
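
As a rough sketch -- the host name "backuphost" and the paths below are just
placeholder examples, and it assumes ssh keys are set up so scp/ssh don't
prompt for a password:

   #!/bin/bash
   # Sketch: copy one archive to the backup box and verify its checksum.
   FILE=/home/backups/20060807030456-user.tgz
   REMOTE=backuphost
   REMOTE_DIR=/srv/backups

   LOCAL_SUM=$(md5sum "$FILE" | awk '{print $1}')
   scp "$FILE" "$REMOTE:$REMOTE_DIR/"
   REMOTE_SUM=$(ssh "$REMOTE" md5sum "$REMOTE_DIR/$(basename "$FILE")" | awk '{print $1}')

   if [ "$LOCAL_SUM" = "$REMOTE_SUM" ]; then
       echo "transfer verified"
   else
       echo "checksum mismatch -- copy it again" >&2
       exit 1
   fi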

> > 2) If he just wants backups and version history isn't important, then
> > rsync his /etc, /home, and whatever else you want to a location on your
> > server. Set rsync to only copy over the files that are either updated
> > or new.
>
> A problem here is suppose you mess up a file on day one.  Then on day
> 2 you look at it and realize you messed it up and want the original
> back. But unfortunately, at night after day one the messed-up file was
> backed up and there's no copy of the original around.  cpbk (what I
> use) avoids that by putting a "trash" directory on the backup drive.
> More than once I've recovered what I needed from the trash.
> Unfortunately, cpbk became unmaintained and Debian dropped it.


To solve this, I would have a script on the backup server keep a rolling
7-day archive: each day's copy goes into its own slot, and after the 7th day
the script starts overwriting the oldest slot, one day at a time.
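
Something along these lines would do it -- just a sketch, where "friendbox"
and the paths are made-up names, and it assumes rsync runs over ssh with keys
already set up:

   #!/bin/bash
   # Sketch of a rolling 7-day archive: each weekday gets its own slot,
   # so after a week the oldest copy is overwritten automatically.
   DAY=$(date +%a)                          # Mon, Tue, ... Sun
   DEST=/srv/backups/friendbox/$DAY
   mkdir -p "$DEST"
   rsync -a --delete friendbox:/home/backups/ "$DEST/"

For the "messed up a file on day one" problem, rsync's --backup and
--backup-dir options can also keep the overwritten versions around, which is
roughly what cpbk's trash directory did.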

> So is it true that the best backup packages of moderate simplicity
> have been abandoned.
>
>                         David Lawyer
> >
> > This is the way I would do it. Good luck.
> >
> > On 8/8/06, Emerson, Tom <Tom.Emerson at wbconsultant.com> wrote:
> > >
> > >I've set up a linux system for a friend, and even thought far enough
> > >ahead to set up a couple of cron-automated backup jobs so that if he
> > >hoses something, I'll at least have something I can recover from
> > >(though I'm finding it's painfully slow...)
> > >
> > >He recently had some (minor?) corruption on his hard drive, and it made
> > >me realize that the backups are all on the same physical device --
> > >while this is OK for cases where he munges a config file or some such,
> > >it doesn't do diddly if he loses the drive itself, so I'm formulating
> > >"plan B"
> > >
> > >It turns out that SuSE's administrative user interface (Yast) has a
> > >module for creating fairly rudimentary backups and automating them,
> > >which is what I've done (one for the "user" backup of /home, and another
> > >"system" backup of things like /etc, the actual packages that are
> > >installed, and so on).  You have the option of a "plain" tar file,
> > >gzipped tar file, gzipped tar file of tar sub-files, and so on.  About
> > >the only other thing you control is the location of the resulting file
> > >and "how many generations" to keep on disk.
> > >
> > >I'm not sure, but I think that the way this works is that the program
> > >first renames any prior instance of the named backup file (based on
> > >cdate?), then creates the new backup -- OR -- it renames the backup at
> > >the completion -- either way, what I typically "see" in the directory
> > >are files named with the date & time (14 digit number) followed by the
> > >name I gave it, so for instance you might see this in the directory:
> > >
> > >   20060807030456-user.tgz
> > >   20060807235214-system.tgz
> > >
> > >What I'd like to do is create a script to run [some time...] after the
> > >backup to copy the file to my server (via scp, most likely) at a time
> > >when I'm not likely to be using the system (4:45 am, for instance...).
> > >Any suggestions on how to go about it?
> > >
> >
> >
> >
> > --
> > Matthew Gallizzi
>
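
To answer the original question about getting the Yast archives off the
machine: a small script run from cron at 4:45 (say, "45 4 * * *  root
/usr/local/bin/push-backups.sh" in /etc/crontab) should do it. This is only a
sketch -- the backup directory, server name, and destination path are
placeholders, and it assumes passwordless ssh keys between the two boxes.
Since the file names start with a 14-digit timestamp, a plain sort picks out
the newest one.

   #!/bin/bash
   # push-backups.sh: copy the newest "user" and "system" archives offsite.
   BACKUP_DIR=/home/backups             # wherever Yast writes the .tgz files
   DEST=myserver:/srv/friend-backups/

   for kind in user system; do
       newest=$(ls "$BACKUP_DIR"/*-$kind.tgz 2>/dev/null | sort | tail -n 1)
       [ -n "$newest" ] && scp "$newest" "$DEST"
   done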



-- 
Matthew Gallizzi