I had a RAID5 array of 4 disks (10 GB each), which I resized to 5 disks after adding another disk to the machine.
The process was lossless: all the data from the original array looks OK. The original array size was about 28 GB (4x10 GB minus 1x10 GB for parity); the new size is 37 GB (5x10 GB minus 1x10 GB for parity). Those are the sizes df reports for the filesystem, so yes, I can still count :)
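The arithmetic checks out against the raidreconf output below: each member disk contributes 9775424 1 KB blocks, and RAID5 spends one disk's worth on parity, so usable space is (number of disks - 1) x per-disk size. In MB:

sh-2.05b# echo $((3 * 9775424 / 1024))   # old array: 4 disks, 3 carry data
28638
sh-2.05b# echo $((4 * 9775424 / 1024))   # new array: 5 disks, 4 carry data
38185

28638 MB is the ~28 GB and 38185 MB the ~37 GB that df reports.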
On a P2 400 with 768 MB RAM the whole thing took under two hours; hd[f-h] sit on a SIL CMD640 controller [UDMA5] and hda on the on-board controller [UDMA2].
raidtab.new is the file with the additional disk already added.
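For reference, a raidtab in the old raidtools format looks roughly like this. I didn't include the actual file, so this is a sketch: the devices, chunk size and parity algorithm are taken from the raidreconf and mdstat output below, the rest are typical defaults:

raiddev /dev/md1
    raid-level              5
    nr-raid-disks           5
    nr-spare-disks          0
    persistent-superblock   1
    parity-algorithm        left-symmetric
    chunk-size              128
    device                  /dev/hda1
    raid-disk               0
    device                  /dev/hdf1
    raid-disk               1
    device                  /dev/hdg1
    raid-disk               2
    device                  /dev/hdh1
    raid-disk               3
    device                  /dev/hda6
    raid-disk               4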
After all of this had finished, a classic resync kicked in once I ran raidstart /dev/md1.
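While that runs, the progress can be watched in /proc/mdstat (the md1 line gets a progress percentage and ETA during the resync):

sh-2.05b# cat /proc/mdstat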
sh-2.05b# raidstop /dev/md1
sh-2.05b# raidreconf -o /etc/raidtab -n /etc/raidtab.new -m /dev/md1
Parsing /etc/raidtab
Parsing /etc/raidtab.new
Size of old array: 78204168 blocks, Size of new array: 97755210 blocks
Old raid-disk 0 has 76370 chunks, 9775424 blocks
Old raid-disk 1 has 76370 chunks, 9775424 blocks
Old raid-disk 2 has 76370 chunks, 9775424 blocks
Old raid-disk 3 has 76370 chunks, 9775424 blocks
New raid-disk 0 has 76370 chunks, 9775424 blocks
New raid-disk 1 has 76370 chunks, 9775424 blocks
New raid-disk 2 has 76370 chunks, 9775424 blocks
New raid-disk 3 has 76370 chunks, 9775424 blocks
New raid-disk 4 has 76370 chunks, 9775424 blocks
Using 128 Kbyte blocks to move from 128 Kbyte chunks to 128 Kbyte chunks.
Detected 774336 KB of physical memory in system
A maximum of 1573 outstanding requests is allowed
---------------------------------------------------
I will grow your old device /dev/md1 of 229110 blocks
to a new device /dev/md1 of 305480 blocks
using a block-size of 128 KB
Is this what you want? (yes/no): yes
Converting 229110 block device to 305480 block device
Allocated free block map for 4 disks
5 unique disks detected.
Working () [00229110/00229110] []
Source drained, flushing sink.
Reconfiguration succeeded, will update superblocks...
Updating superblocks...
handling MD device /dev/md1
analyzing super-block
disk 0: /dev/hda1, 9775521kB, raid superblock at 9775424kB
disk 1: /dev/hdf1, 9775521kB, raid superblock at 9775424kB
disk 2: /dev/hdg1, 9775521kB, raid superblock at 9775424kB
disk 3: /dev/hdh1, 9775521kB, raid superblock at 9775424kB
disk 4: /dev/hda6, 9775521kB, raid superblock at 9775424kB
Array is updated with kernel.
Disks re-inserted in array... Hold on while starting the array...
Maximum friend-freeing depth: 8
Total wishes hooked: 229110
Maximum wishes hooked: 1573
Total gifts hooked: 229110
Maximum gifts hooked: 990
Congratulations, your array has been reconfigured,
and no errors seem to have occured.
sh-2.05b# e2fsck -f /dev/md1
e2fsck 1.34 (25-Jul-2003)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
/lost+found not found. Create<y>? yes
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/md1: ***** FILE SYSTEM WAS MODIFIED *****
/dev/md1: 8874/3670016 files (1.8% non-contiguous), 7329064/7331520 blocks
sh-2.05b#
sh-2.05b# resize2fs /dev/md1
resize2fs 1.34 (25-Jul-2003)
Resizing the filesystem on /dev/md1 to 9775360 (4k) blocks.
The filesystem on /dev/md1 is now 9775360 blocks long.
sh-2.05b#
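Note the order, by the way: resize2fs refuses to resize a filesystem that has not just been checked, which is why the forced e2fsck -f comes first. Condensed, the whole run above was as follows; I'm adding the cp as an assumed step, since raidstart reads /etc/raidtab and by that point it needs to describe the 5-disk layout:

sh-2.05b# raidstop /dev/md1
sh-2.05b# raidreconf -o /etc/raidtab -n /etc/raidtab.new -m /dev/md1
sh-2.05b# e2fsck -f /dev/md1
sh-2.05b# resize2fs /dev/md1
sh-2.05b# cp /etc/raidtab.new /etc/raidtab
sh-2.05b# raidstart /dev/md1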
============================================================
So I've now also tested it on a production array, 4x150GB -> 5x150GB:
30 min fsck
13 hrs raidreconf
30 min fsck
30 min resize2fs
------------------
15 hrs altogether, including the configuration etc.
everything seems to be OK:
root@junk:~# df -h /dev/md0
Filesystem Size Used Avail Use% Mounted on
/dev/md0 565G 423G 142G 75% /storage
root@junk:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5]
md0 : active raid5 hda5[4] hdh5[3] hdg5[2] hdf5[1] hde5[0]
601216000 blocks level 5, 128k chunk, algorithm 2 [5/5] [UUUUU]