[SGVLUG] Linux Partitioning for Server

Emerson, Tom (*IC) Tom.Emerson at wbconsultant.com
Mon Apr 6 16:32:58 PDT 2009


Something isn't complete here...

-----Original Message----- (from Edgar Garrobo)

... building a CentOS 5 server and I'm at the partitioning phase.
... wondering if anyone can suggest a better (optimal) partitioning setup.
... It has a RAID 5 array of SATA drives with 1TB of space.
... The rest of the space is being assigned to the root directory "/"

The breakdown it shows is as follows (sizes in MB):
LVM Groups
- VolGroup00   992368
  --LogVol01    swap 16213
  --LogVol00  /  ext3   976155
Hard Drives
- /dev/hda
  --/dev/hda1  /boot  ext3  101
  --/dev/hda2  VolGroup00  LVM PV  992368
=============

You've stated it has a "RAID 5 array", yet the partitioning only shows a single drive (/dev/hda).
Is this a hardware array (in other words, the BIOS presents only a single drive to the OS), or are you planning on doing this within Linux as a software array (which would generally appear as /dev/md#)?
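
One quick way to tell from a running system (a hedged sketch -- the grep is just a convenience, but both commands are standard):

   lspci | grep -i raid     # a hardware RAID controller will usually show up here
   cat /proc/mdstat         # any Linux software (md) arrays show up here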

When working with LVM and RAID, you can get into some really interesting scenarios.  "Logical" volumes, as you see here, are made up from "physical" volumes.  If you remember years back, when everyone still understood that "Windows" was just a shell over the OS, there was a program called "One Big Disk" that would combine physical volumes and make them all appear as "Drive C:" -- this is essentially what LVM does in Linux.
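
In command form, the modern LVM2 version of that "one big disk" idea looks something like this (just a sketch -- the device names and sizes here are made up):

   pvcreate /dev/sdb1 /dev/sdc1           # mark each partition as an LVM "physical volume"
   vgcreate bigdisk /dev/sdb1 /dev/sdc1   # pool them into a single volume group
   lvcreate -L 500G -n data bigdisk       # carve one logical volume out of the pool
   mkfs.ext3 /dev/bigdisk/data            # format it and use it like any other disk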

What confuses the issue is that Linux's "RAID" driver does essentially the same thing: combines multiple devices into "one big device".  The difference, however, is in purpose: LVM makes multiple devices appear as a single device for the sake of a really large storage space, while the RAID driver combines multiple devices with the goal of data safety/integrity.
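
For comparison, the software-RAID equivalent (again a sketch with made-up device names, assuming a three-drive array) is a single mdadm command:

   mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
   mkfs.ext3 /dev/md0     # the array is then used exactly like one large disk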

You can even combine the two, and that's where Tylenol helps :)  Here's what that combination looks like on my own (somewhat older) server:

osnut:/etc/lvmconf # vgscan
vgscan -- reading all physical volumes (this may take a while...)
vgscan -- found active volume group "system"

osnut:/etc/lvmconf # lvscan
lvscan -- ACTIVE            "/dev/system/root" [2 GB]
lvscan -- ACTIVE            "/dev/system/swap" [1 GB]
lvscan -- ACTIVE            "/dev/system/logs" [4 GB]
lvscan -- ACTIVE            "/dev/system/data" [100 GB]
lvscan -- ACTIVE            "/dev/system/home" [10 GB]
lvscan -- 5 logical volumes with 117 GB total in 1 volume group
lvscan -- 5 active logical volumes

osnut:/etc/lvmconf # pvscan
pvscan -- reading all physical volumes (this may take a while...)
pvscan -- ACTIVE   PV "/dev/md0" of VG "system" [8.47 GB / 0 free]
pvscan -- ACTIVE   PV "/dev/md1" of VG "system" [114.49 GB / 5.96 GB free]
pvscan -- total: 2 [122.97 GB] / in use: 2 [122.97 GB] / in no VG: 0 [0]

osnut:/etc/lvmconf #
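
As an aside, that output is from an older LVM1-era system; on an LVM2 distribution such as CentOS 5, the compact equivalents would be:

   pvs     # report on physical volumes
   vgs     # report on volume groups
   lvs     # report on logical volumes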

(What isn't shown here is that "md0" is a RAID 1 mirror of two SCSI disks, while "md1" is a mirror of two PATA drives.)
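
If you want to see how an md device is put together on your own system, the standard query (only the device name is an assumption here) is:

   mdadm --detail /dev/md0     # shows the RAID level, member disks, and their state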

osnut:/etc/lvmconf # mount  [edited to remove non-physical mount points]
/dev/system/root on / type reiserfs (rw)
/dev/sdb3 on /boot type ext2 (rw)
/dev/sdb1 on /bootBackup type ext2 (rw)
/dev/sda1 on /bootx type ext2 (rw)
/dev/system/data on /srv type reiserfs (rw)
/dev/system/logs on /var type reiserfs (rw)
/dev/system/home on /home type reiserfs (rw)
osnut:/etc/lvmconf #

Yes, I know this is rather messy - I doubt I'll do it this way again (especially since reiserfs has fallen out of favor...)  This system is not overly stressed: it runs a handful of server programs (web, mail, NFS, and MySQL), of which the NFS and SQL servers remain "inside" the firewall.  The "web" server is passed through, and e-mail only exposes IMAP for reading while I'm "on the road".  So despite being far from "optimal", it does "work".

Working backwards from mtab (mount), I have:

   a 2GB partition for "/"
   a 1GB partition for "swap"
   a 4GB partition for "/var" [i.e., /var/log/...]
   a 10GB partition for "/home" (where my e-mail collects, as well as network-shared "home" directory)
   a 100GB partition for "/srv"  [the web server - other distros call this /www or similar...]

What I *should* have done is make /dev/md0 its own volume group (/dev/system) and have that house "/" and "/var" (and I would probably break out "/tmp" as well); then /dev/md1 should have been made into a separate volume group (/dev/data), and THAT would be broken down into "/home" and "/srv" (and possibly even "/backup") -- see the sketch below.
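
In command form, that layout would look something like this (a sketch based on the sizes listed above -- treat it as illustrative, not a recipe):

   pvcreate /dev/md0
   vgcreate system /dev/md0          # the small, fast mirror holds the OS volumes
   lvcreate -L 2G -n root system
   lvcreate -L 4G -n var  system
   lvcreate -L 1G -n tmp  system     # /tmp broken out; 1GB is just a guess

   pvcreate /dev/md1
   vgcreate data /dev/md1            # the large mirror holds user/service data
   lvcreate -L 10G  -n home data
   lvcreate -L 100G -n srv  data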

You asked "what are the caveats" of a single large partition -- what hasn't been mentioned in this thread so far is that when you upgrade or reinstall the OS, you are at risk of losing your website and related data: the installation procedure will often "recommend" reformatting the partition holding "/", and if your /home or /srv/www/html files are on that same partition, well, they'll go up in a puff of re-formatted smoke...

(Putting things like "/var" on its own partition not only helps avoid the problem of filling up the root partition, but also allows for forensic continuity of the server's records -- i.e., logs of attempts to access the server itself -- should the need arise [from your point of view, showing repeated break-in attempts over time and across multiple OS installs].)
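
A corresponding /etc/fstab sketch for that kind of split (device names and filesystem types here are assumptions, not what's actually on my box):

   /dev/system/root   /       ext3   defaults   1 1
   /dev/system/var    /var    ext3   defaults   1 2
   /dev/data/home     /home   ext3   defaults   1 2
   /dev/data/srv      /srv    ext3   defaults   1 2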


