Partitioning scheme ideas

Dominic Marks dom at helenmarks.co.uk
Thu Apr 19 10:30:10 BST 2007


On Thu, 19 Apr 2007 10:04:41 +0100 (BST)
Robert Watson <rwatson at FreeBSD.org> wrote:

Robert & Richard, many thanks for taking the time to reply.

> 
> On Thu, 12 Apr 2007, Dominic Marks wrote:
> 
> > With that in mind I have purchased a box with four hot-swap drives and a 3 
> > year parts+labour warranty from HP.  I also have an iLO unit installed.  So 
> > far so good.
> 
> I have two server boxes in the US and live in the UK, and had similar 
> long-term concerns.  Other than the normal redundant drives and power supplies 
> (i.e., moving parts), the main concerns are proper software configuration and 
> remote console/power access if needed.  I've been very impressed with iLO on 
> the HP hardware I've used in the past, and recommend it highly.  I'm much less 
> impressed with the lower-end generic remote management parts in cheaper HP 
> hardware, where the BIOSes tend to contain incorrect information, you get 
> stuck with serial-over-LAN, etc.

Well, I'm optimistic that I've avoided that.  It is certainly
branded as iLO, so if I have been sold some cheaper rubbish then
I will not be a happy man!

> > As I've said I have four 80GB SATA drives (the business's data capacity 
> > requirements are not that great).
> >
> > My current plan is to put the system (OS & Applications) on to a small 
> > mirror which is spread over all four drives.  This should give me the 
> > absolute maximum level of protection and not waste much space either. 
> > Looking at the existing system (which is being replaced) 4GB would be more 
> > than sufficient for / and /usr/local.  The number of installed applications 
> > will be constant for the lifetime of the system, so I am not risking much 
> > here.
> 
> Sometimes people make the mistake of not putting swap on the RAID.  Don't make 
> that mistake. :-)

Indeed. Just after I sent the initial message it occurred to me
that I should have mentioned I would be doing mirrored swap too.
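
For the archive, this is roughly what I have in mind - an
untested sketch, and the device names (ad4-ad7) and the gm0
label are only illustrative:

  # Load the mirror class and label a four-way mirror across all drives
  kldload geom_mirror
  gmirror label -v -b round-robin gm0 ad4 ad5 ad6 ad7

  # Load the module at every boot
  echo 'geom_mirror_load="YES"' >> /boot/loader.conf

  # Partition /dev/mirror/gm0 as usual; swap then lives on the
  # mirror too, via /etc/fstab:
  #   /dev/mirror/gm0s1b  none  swap  sw  0  0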

> > The remaining disc space is used by only two other things: Samba file shares 
> > (~30GB) and e-mail (~15GB).  To get a good balance for these I could go with 
> > two two-drive RAID1 mirrors, or one four-drive RAID10.  I don't have 
> > hardware RAID5 available, I'm not using geom_raid3 again (it is in use at 
> > the moment and it's really, really slow) and I haven't played with the 
> > experimental graid5 module.
> 
> I think it's all about the number of failures you want to tolerate.  For 
> long-haul survival of the system where over-committing resources isn't a 
> problem, a mirror with many replicas seems like a pretty good model.  There 
> have been varying opinions expressed over time regarding hot spares and I'm 
> not sure whether the current wisdom is to leave the drive idle and use it as a 
> rebuild target after a failure of one of the online drives, or simply to have 
> all drives online all the time.

From what I have read recently, specifically Google's disc
audit report, it seems that disc activity has little or no
bearing on drive lifetime, which makes hot spares a luxury I
can live without.  Certainly this has been the case in my
experience.
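
If a drive does fail, the gmirror rebuild is simple enough that
a cold spare on the shelf should do.  Something like this,
re-using the gm0 label from the sketch above, with ad6 standing
in for the replacement drive:

  # Drop the dead component's stale metadata from the mirror
  gmirror forget gm0

  # Add the replacement; gmirror synchronises it automatically
  gmirror insert gm0 ad6

  # Watch the rebuild
  gmirror status gm0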

> > I've briefly flirted with the idea of running CURRENT on it to get ZFS and 
> > making use of it for this, but as it is a remote system (which will not be 
> > redundant) I shied away from it.  Especially so after being on the 
> > zfs-discuss mailing list, where there were quite a few people posting about 
> > corruption and panics in Solaris ZFS.
> 
> While ZFS looks like it will be excellent technology to use in this scenario 
> in the future (with nice properties like being able to set the level of 
> replication per volume, and error detection/healing), it's definitely 
> experimental on both Solaris and FreeBSD.  I would not deploy it in production 
> at this point on systems where I cannot tolerate partial or complete failure. 
> Maybe in six to twelve months with 7.1 out the door, it will be a less 
> risky proposition.

I'm sure all FreeBSD users are chomping at the bit for
7.0-RELEASE now!  I know I am.

> > So, anything obviously wrong, anything I haven't considered? The entire 
> > data set is backed up off-site on a nightly basis and the system will be 
> > protected with a UPS.  I have roughly two weeks before I start the build.
> 
> iLO should address both the remote power and remote console concerns.  Get 
> your partitioning right up front, and do set up enough swap so you can get a 
> crash dump if you need to.  Make sure you always keep a /boot/kernel.good 
> around so you can back out remote kernel upgrades, and you might consider 
> keeping a spare /rescue.good around in case you need to recover.  If you 
> configure a firewall to "default to deny", consider also keeping a 
> kernel.good.GENERIC in your / so you can boot a kernel without the firewall in 
> the event you need to pull down replacement files over the network after a bad 
> upgrade.  While that's a pretty unlikely scenario, in the event it happens 
> it's a lot easier to do that than try to figure out how to get the files onto 
> the disk without network access :-).

I have experienced this particular level of hell before,
several times.  It is even less fun when you don't have a
remote console and the system is not in co-location.

Luckily for this project a firewall will not be required.
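
Noted on the recovery kernels.  My pre-upgrade checklist will be
something along these lines (paths as you suggest; the
kernel.new name is just for the example):

  # Preserve the known-good kernel and rescue tools first
  cp -Rp /boot/kernel /boot/kernel.good
  cp -Rp /rescue /rescue.good

  # Try the new kernel exactly once: if it wedges, a power cycle
  # via iLO brings the box back up on the default kernel
  # (assumes the new kernel was installed to /boot/kernel.new)
  nextboot -k kernel.new
  shutdown -r now

  # And in /etc/rc.conf, so a panic leaves a crash dump behind:
  #   dumpdev="AUTO"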

> Finally, consider how you're going to handle backups -- remote backups are a 
> pain to deal with if your data size substantially exceeds available bandwidth. 
> Some colocation centers provide backup facilities, but usually at significant 
> cost.  With modern broadband and a relatively small data size, perhaps you're 
> fine with backing up over the network; in my case I store 1/3 TB of e-mail on 
> one of the remote servers, and that makes things a bit more tricky :-).

I am fortunate to have high-bandwidth connections at both ends
- and rsync is a miracle worker!
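
The nightly job is nothing exotic - roughly this, with the host
and paths invented for the example:

  # Push the shares and mail spool off-site over ssh; --delete
  # keeps the remote copy an exact mirror of the source
  rsync -az --delete -e ssh /data/shares /var/mail \
      backup@offsite.example.net:/backups/server1/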

Thanks again,
Dominic



