MegaCLI RAID6 Array Creation

I am using Ubuntu Karmic on a Dell R610 to access MD1200 storage enclosures, and since (until recently) OpenManage was not an option for the H800 SAS RAID adaptors, I had to explore the wonderful megacli utility!

# Find unused disks

root@srv-103-27:/opt/MegaRAID/MegaCli# ./MegaCli64 -PDList -a0 | grep -B14 Unconfigured | grep -e '^Enclosure Device ID:' -e '^Slot Number:'
Enclosure Device ID: 41
Slot Number: 11
Enclosure Device ID: 80
Slot Number: 0
Enclosure Device ID: 80
Slot Number: 1
Enclosure Device ID: 80
Slot Number: 2
Enclosure Device ID: 80
Slot Number: 3
Enclosure Device ID: 80
Slot Number: 4
Enclosure Device ID: 80
Slot Number: 5
Enclosure Device ID: 80
Slot Number: 6
Enclosure Device ID: 80
Slot Number: 7
Enclosure Device ID: 80
Slot Number: 8
Enclosure Device ID: 80
Slot Number: 9
Enclosure Device ID: 80
Slot Number: 10
Enclosure Device ID: 80
Slot Number: 11
Enclosure Device ID: 106
Slot Number: 0
Enclosure Device ID: 106
Slot Number: 1
Enclosure Device ID: 106
Slot Number: 2
Enclosure Device ID: 106
Slot Number: 3
Enclosure Device ID: 106
Slot Number: 4
Enclosure Device ID: 106
Slot Number: 5
Enclosure Device ID: 106
Slot Number: 6
Enclosure Device ID: 106
Slot Number: 7
Enclosure Device ID: 106
Slot Number: 8
Enclosure Device ID: 106
Slot Number: 9
Enclosure Device ID: 106
Slot Number: 10
Enclosure Device ID: 106
Slot Number: 11
root@srv-103-27:/opt/MegaRAID/MegaCli#
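If you just want a quick head count of spare drives rather than the full enclosure/slot listing, grepping the firmware state works too. A minimal variation (the exact state string can vary slightly between controller firmware revisions):

# Count drives reported as Unconfigured(good)
./MegaCli64 -PDList -a0 | grep -c 'Firmware state: Unconfigured(good)'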

# Create Raid 6 Volume

root@srv-103-27:/opt/MegaRAID/MegaCli# ./MegaCli64 -CfgLdAdd -r6 [80:0,80:1,80:2,80:3,80:4,80:5,80:6,80:7,80:8,80:9,80:10] -a0

Adapter 0: Created VD 5

Adapter 0: Configured the Adapter!!

Exit Code: 0x00
root@srv-103-27:/opt/MegaRAID/MegaCli#
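Before putting data on it, it is worth double-checking the new virtual drive. -LDInfo dumps its RAID level, size, stripe size and state (here VD 5, matching the output above):

./MegaCli64 -LDInfo -L5 -a0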

# Add a dedicated hot spare; we use dedicated spares as they stay with the array/shelf
root@srv-103-27:/opt/MegaRAID/MegaCli# ./MegaCli64 -PDHSP -Set -Dedicated -Array5 -PhysDrv [80:11] -a0

Adapter: 0: Set Physical Drive at EnclId-80 SlotId-11 as Hot Spare Success.

Exit Code: 0x00
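For comparison, leaving off -Dedicated -Array5 would make the drive a global hot spare, free to cover any array on the adapter; we want it pinned to this shelf, hence dedicated:

# Global hot spare variant (covers every array on adapter 0)
./MegaCli64 -PDHSP -Set -PhysDrv [80:11] -a0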

18TB Volume and ext4 … you wish

I have recently installed a few Dell MD1200s attached to an R710 for long-term storage, and since I am using Ubuntu Karmic (9.10) I decided to go with ext4. I had skimmed the spec a few times and read claims like “Ext4 adds 48-bit block addressing, so it will have 1 EB of maximum file system size”. I had not gotten this info from the wiki page but rather from the tech articles I had read. I get the RAID arrays configured, try to create an ext4 fs, and up pops the error “too big to be expressed in 32 bits”. I know I am running a 64-bit version of Ubuntu, so what gives? I double-check just to confirm and sure enough: “x86_64 GNU/Linux”.

As I start to dig around, the ugly truth pops up when I read the wiki page: “The code to create file systems bigger than 16 TB is, at the time of writing this article, not in any stable release of e2fsprogs. It will be in future releases.” … future releases … ext4 has been in use for over a year now and is the default on Karmic.
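The 16 TiB ceiling falls straight out of the arithmetic: the mkfs code in e2fsprogs still tracks block numbers in 32 bits, and with ext4's default 4 KiB block size that tops out at 2^32 x 4096 bytes:

# 32-bit block numbers * 4 KiB blocks = 16 TiB
echo "$(( 2**32 * 4096 / 1024**4 )) TiB"    # prints 16 TiB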

Well, I was not really tied to ext4 and it was not a big deal, but come on, let's stop the lies: ext4 only supports 16 binary terabytes (TiB), and that's not likely to change any time soon.

If you're looking for alternatives, I suggest a good look at the tried-and-true XFS, and keep your eye on the btrfs filesystem, as it looks like it will be the first to bring the promises of ZFS to Linux.
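If you do go the XFS route, here is a minimal sketch of formatting the array (the /dev/sdb device name is hypothetical; check yours with fdisk -l; the su/sw values assume the controller's default 64 KiB stripe size and 9 data disks, i.e. 11 drives minus 2 for parity):

# Align XFS to the RAID6 stripe: su = stripe unit, sw = number of data disks
mkfs.xfs -d su=64k,sw=9 /dev/sdb
mkdir -p /mnt/storage
mount /dev/sdb /mnt/storage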