Partitioning scheme ideas
Richard A Barrow
arb at aaa-comms.com
Thu Apr 12 21:54:33 BST 2007
Just gmirror it up and walk away. I have done something similar and have
never been back to the box since I built it 4 years ago for a
company; the difference was that it had Apache, MySQL, email and a
Samba share on 140GB SATA drives, mirrored.
It's never gone down, and I see no reason for it to. FreeBSD 5.4 was
current then, and it is still running it.
Be aware that with hot-swap bays you may have issues replacing the
drives under software mirroring if something does happen.
Otherwise .. have fun with it bro :)
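For reference, a minimal sketch of the "just gmirror it" setup suggested above. Device names (ad4, ad6) are hypothetical placeholders for that era's SATA naming; adjust to your hardware before running anything.

```shell
# Load the mirror class now, and make sure it loads at boot
kldload geom_mirror
echo 'geom_mirror_load="YES"' >> /boot/loader.conf

# Create a two-disk RAID1 mirror named gm0 (device names are assumptions)
gmirror label -v -b round-robin gm0 /dev/ad4 /dev/ad6

# The mirror appears as /dev/mirror/gm0; file system goes on top as usual
newfs /dev/mirror/gm0
mount /dev/mirror/gm0 /mnt
```

If a drive fails, `gmirror forget gm0` followed by `gmirror insert gm0 /dev/adX` on the replacement disk rebuilds the mirror.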
From: freebsd-users-admin at uk.freebsd.org
[mailto:freebsd-users-admin at uk.freebsd.org] On Behalf Of Dominic Marks
Sent: 12 April 2007 10:21
To: freebsd-users at uk.freebsd.org
Subject: Partitioning scheme ideas
I'm setting up a new remote system in a few weeks and thought
I'd discuss it here. Perhaps I can improve on my initial plan.
The life-time for the system will be three years and in that
time I'd like to not have to make any emergency visits to it.
With that in mind I have purchased a box with four hot-swap
drives and a 3 year parts+labour warranty from HP. I also have
an iLO unit installed. So far so good.
As I've said I have four 80GB SATA drives (the business's data
capacity requirements are not that great).
My current plan is to put the system (OS & Applications) on to
a small mirror which is spread over all four drives. This
should give me the absolute maximum level of protection and not
waste much space either. Looking at the existing system (which
is being replaced) 4GB would be more than sufficient for /
and /usr/local. The number of installed applications will be
constant for the lifetime of the system, so I am not risking
running out of space there.
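The small system mirror spread over all four drives maps directly onto gmirror, which accepts more than two components. A sketch, assuming each drive carries a small slice/partition for the OS (the ad4s1a-style names are hypothetical):

```shell
# One mirror with four components: the system slice survives
# the loss of any three drives
gmirror label -v gmsys /dev/ad4s1a /dev/ad6s1a /dev/ad8s1a /dev/ad10s1a
newfs /dev/mirror/gmsys
```

The write cost of keeping four copies is irrelevant for a mostly-read system slice, and it leaves the bulk of each 80GB drive free for the data volumes.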
The remaining disc space is used by only two other things.
Samba file shares (~30GB) and E-Mail (~15GB). To get a good
balance for these I could go with two, two drive RAID1 mirrors,
or one, four drive RAID10. I don't have hardware RAID5
available, I'm not using geom_raid3 again (that is in use at
the moment and it's really, really slow) and I haven't played
with the experimental graid5 module.
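One way to get the four-drive RAID10 option with stock GEOM classes is to stripe across two gmirror pairs. A sketch under the same assumptions as above (the ad4s1d-style partition names and the 128k stripe size are placeholders):

```shell
kldload geom_stripe
echo 'geom_stripe_load="YES"' >> /boot/loader.conf

# Two RAID1 pairs for the data partitions (device names are assumptions)
gmirror label -v gmd0 /dev/ad4s1d /dev/ad6s1d
gmirror label -v gmd1 /dev/ad8s1d /dev/ad10s1d

# ...striped together to form RAID10 (stripe size 131072 bytes = 128k)
gstripe label -v -s 131072 st0 /dev/mirror/gmd0 /dev/mirror/gmd1
newfs /dev/stripe/st0
```

The two-separate-mirrors alternative is simpler (no gstripe layer) and lets one pair fail completely without touching the other data set; RAID10 pools the space and spreads the I/O.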
I've briefly flirted with the idea of running CURRENT on it to
get ZFS and making use of it for this, but since it is a remote
system (and will not be redundant) I shied away from it,
especially after being on the zfs-discuss mailing list, where
there were quite a few people posting about corruption and panics.
So, anything obviously wrong, anything I haven't considered?
The entire data set is backed up off-site on a nightly basis
and the system will be protected with a UPS. I have roughly
two weeks before I start the build.
------ FreeBSD UK Users' Group - Mailing List ------