Thanks for the detailed feedback, Rae. I'm not so much worried about uptime for this particular server. The only way (on CentOS/Red Hat) I can tell that one of the 3ware hardware RAID controllers has noticed a drive failure or RAID problem is the loud alarm the card sounds. Not the best way, but it actually suffices for our little data center.<br>
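For what it's worth, here's a rough sketch of how one might poll the controller from a cron script instead of (or in addition to) listening for the alarm, assuming 3ware's tw_cli utility is installed. The /c0 controller name and the exact output columns are assumptions; canned sample output is embedded so the parsing logic is runnable as-is:

```shell
#!/bin/sh
# Hypothetical health check for a 3ware controller.  In real use you would run:
#   STATUS=$(tw_cli /c0 show)
# (tw_cli is 3ware's CLI; the /c0 controller name is an assumption.)
# Canned sample output stands in here so the script runs without the hardware.
STATUS='Unit  UnitType  Status  %RCmpl  %V/I/M  Stripe  Size(GB)
u0    RAID-5    OK      -       -       64K     931.303'

# Any unit line (u0, u1, ...) whose Status column is not OK means trouble.
if echo "$STATUS" | awk '$1 ~ /^u[0-9]/ && $3 != "OK" {bad=1} END {exit !bad}'; then
    RESULT="RAID DEGRADED"   # hook this up to mail(1) or a pager from cron
else
    RESULT="RAID OK"
fi
echo "$RESULT"
```

Run from cron every few minutes, it would turn the audible alarm into an email, without relying on someone being in the room.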
<br><div class="gmail_quote">On Tue, Apr 7, 2009 at 4:13 PM, Rae Yip <span dir="ltr"><<a href="mailto:rae.yip@gmail.com">rae.yip@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
There have been plenty of replies already, but I'll throw in my $0.02.<br>
<br>
I tend to agree with Tom; LVM is a wonderful thing. The main idea is<br>
to allow you to size your filesystems according to your actual needs,<br>
and worry about the physical layout separately.<br>
<br>
That said, if you're running this as a production server, I agree with<br>
others who have said that RAID 1 is a better way to go for the root<br>
disk, especially if you want to get decent performance out of your<br>
swap.<br>
<br>
You may think that getting the most capacity out of your drives is<br>
your priority, but with 4 disks you're losing 1 disk to parity<br>
anyway, and paying a likely uptime penalty for not having a hot spare.<br>
So why not go with RAID 1 and worry less? You'll only get 2N capacity<br>
instead of 3N, but 66% of the capacity at five nines of uptime beats<br>
100% at three nines, if you care about that at all.<br>
<br>
As for sizing your filesystems, it's good practice to keep a separate<br>
/boot, because you generally don't want to be dealing with fs corruption<br>
at the bootloader prompt. I tend not to give all the remaining space to /,<br>
because you want a separate /var and want to keep fsck times low, but<br>
a 2GB / is too small for modern distros.<br>
<br>
It's okay to keep /home as part of / on a prod server, as long as your<br>
apps and data are stored on a separate fs (e.g. /app or /srv); you may<br>
want to keep your app logs on yet another fs to keep them from<br>
crowding your data, but this depends on the app. So this looks<br>
something like:<br>
<br>
/boot 20-100MB<br>
/ 20-40GB<br>
swap 1.5 - 2x RAM<br>
/var 6-15GB (more if you keep app logs here too)<br>
/app rest of space - /app/log size<br>
/app/log depends on app and desired retention<br>
<br>
That's only six logical partitions, not too much to remember. I would<br>
do things differently for a desktop (bigger separate /home, smaller<br>
/app).<br>
<br>
Finally, I would ask: do you know how to tell from inside Linux that<br>
you've had a hardware RAID failure? I don't have much experience with<br>
PC RAID controllers, but if you can't tell when you need to replace a<br>
disk, then you have no business using hardware RAID. Factor in the<br>
recent discussion about RAID 5 not scaling to larger disks, too.<br>
<font color="#888888"><br>
-Rae.<br>
</font></blockquote></div><br>