Ah, I see. Although I have not tried these solutions, I thought they would
work. Anyway, comments below.

On 8/10/06, David Lawyer <dave@lafn.org> wrote:
> On Tue, Aug 08, 2006 at 10:43:41AM -0700, Matthew Gallizzi wrote:
> > Tom,
> >
> > Hmm, what I'd probably do is one of the following:
> > 1) If he wants version history, write a bash script to tar the files
> > that you feel are crucial (/home, /etc, ...) on his system, put it in
> > a cron job, have the .tgz saved to the same location, then have rsync
> > copy the files in /home/backups (or wherever) over to your server.
>
> One problem with a big tar archive (file) is that if it gets corrupted,
> everything could be lost. Better to have a lot of small files, as
> tbackup had. It used afio instead of tar (although tar was an option),
> which creates one compressed file per file. But it's defunct, since it
> wasn't maintained. It was programmed modularly, with about 50 different
> C programs and scripts all linked into one program. Nice job! I looked
> some of these over but was too busy to volunteer to maintain it.

Alright, if there is a possibility of file corruption when files are
transferred between networks, I'd say the best thing to do is have a
script do an md5sum on the original file contents, and then do it again
after the file has been transferred across the net (to ensure everything
transferred OK). Some copying utilities (rsync?) might already have this
capability; I don't know.
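
Something like this is what I had in mind for (1), with a checksum check
added -- just a rough, untested sketch, and the /var/backups directory and
backupuser@myserver are placeholders, not real settings:

    #!/bin/bash
    # Sketch only: archive crucial directories, checksum the archive,
    # then push archive + checksum to the backup server.
    DATE=$(date +%Y%m%d%H%M%S)
    cd /var/backups || exit 1

    # Archive the directories considered crucial.
    tar czf "home-etc-$DATE.tgz" /home /etc

    # Checksum before the transfer...
    md5sum "home-etc-$DATE.tgz" > "home-etc-$DATE.tgz.md5"

    # ...then copy the archive and its checksum file to the server.
    rsync -av "home-etc-$DATE.tgz" "home-etc-$DATE.tgz.md5" \
        backupuser@myserver:/home/backups/

    # Running "md5sum -c" against the .md5 file in /home/backups on the
    # server confirms the copy arrived intact.

I believe rsync already verifies each file it transfers with its own
whole-file checksum, so the explicit md5sum step may be redundant, but it
doesn't hurt.
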
> > 2) If he just wants backups and version history isn't important, then
> > rsync his /etc, /home, and whatever else you want to a location on
> > your server. Set rsync to only copy over the files that are either
> > updated or new.
>
> A problem here: suppose you mess up a file on day one. Then on day two
> you look at it and realize you messed it up and want the original back.
> But unfortunately, on the night after day one the messed-up file was
> backed up, and there's no copy of the original around. cpbk (what I
> use) avoids that by putting a "trash" directory on the backup drive.
> More than once I've recovered what I needed from the trash.
> Unfortunately, cpbk became unmaintained and Debian dropped it.

To solve this, I would have a script on the backup server that copies the
files into a seven-day archive; after the seventh day, it starts
overwriting the oldest day's copy, one day at a time.
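
Roughly like this -- an untested sketch, where the friend-box host name
and the /home/backups/day-N directories are just placeholders:

    #!/bin/bash
    # Seven-slot rotation, run nightly from cron on the backup server.
    SLOT=$(date +%u)                 # day of week, 1..7
    DEST=/home/backups/day-$SLOT     # each weekday reuses its own slot
    mkdir -p "$DEST"

    # Mirror the client's /etc and /home into today's slot; a week from
    # now the same slot gets overwritten with the then-current state,
    # so there are always up to seven daily copies to fall back on.
    rsync -a --delete friend-box:/etc friend-box:/home "$DEST/"

Alternatively, rsync's --backup and --backup-dir options can move
replaced or deleted files into a separate directory, which sounds a lot
like cpbk's "trash" idea.
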
> So is it true that the best backup packages of moderate simplicity
> have been abandoned?
>
>                         David Lawyer
> >
> > This is the way I would do it. Good luck.
> >
> > On 8/8/06, Emerson, Tom <Tom.Emerson@wbconsultant.com> wrote:
> > >
> > > I've set up a Linux system for a friend, and even thought far
> > > enough ahead to set up a couple of cron-automated backup jobs so
> > > that if he hoses something, I'll at least have something I can
> > > recover from (though I'm finding it's painfully slow...).
> > >
> > > He recently had some (minor?) corruption on his hard drive, and it
> > > made me realize that the backups are all on the same physical
> > > device -- while this is OK for cases where he munges a config file
> > > or some such, it doesn't do diddly if he loses the drive itself, so
> > > I'm formulating "plan B".
> > >
> > > It turns out that SuSE's administrative user interface (YaST) has a
> > > module for creating fairly rudimentary backups and automating them,
> > > which is what I've done (one for the "user" backup of /home, and
> > > another "system" backup of things like /etc, the actual packages
> > > that are installed, and so on). You have the option of a "plain"
> > > tar file, a gzipped tar file, a gzipped tar file of tar sub-files,
> > > and so on. About the only other things you control are the location
> > > of the resulting file and "how many generations" to keep on disk.
> > >
> > > I'm not sure, but I think the way this works is that the program
> > > first renames any prior instance of the named backup file (based on
> > > cdate?), then creates the new backup -- OR -- it renames the backup
> > > at completion. Either way, what I typically "see" in the directory
> > > are files named with the date & time (a 14-digit number) followed
> > > by the name I gave it, so for instance you might see this in the
> > > directory:
> > >
> > >   20060807030456-user.tgz
> > >   20060807235214-system.tgz
> > >
> > > What I'd like to do is create a script to run [some time...] after
> > > the backup to copy the file to my server (via scp, most likely) at
> > > a time when I'm not likely to be using the system (4:45 am, for
> > > instance...). Any suggestions on how to go about it?
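
To Tom's original question, something along these lines is probably what
I'd try -- an untested sketch, where the /var/lib/backups directory, the
script path, the server name, and passwordless ssh-key login are all
assumptions rather than anything SuSE sets up for you:

    #!/bin/bash
    # push-backups.sh -- copy the newest YaST-generated archives to the
    # remote server. Run it from cron, e.g. at 4:45 am:
    #   45 4 * * *  /usr/local/bin/push-backups.sh
    cd /var/lib/backups || exit 1

    # "ls -t" lists the most recently written file first, so this picks
    # up the latest user and system archives.
    for pattern in '*-user.tgz' '*-system.tgz'; do
        newest=$(ls -t $pattern 2>/dev/null | head -n 1)
        [ -n "$newest" ] && scp "$newest" backupuser@myserver:/home/backups/
    done
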
> > --
> > Matthew Gallizzi

--
Matthew Gallizzi