During a reformat of my server I found that two of my hard drives had stale RAID metadata on them. This made the new operating system (Fedora 12) think they belonged to a RAID set, which prevented them from being loaded: dmraid grouped them into a ".ddf1_disks" set but could not find a virtual drive record for them, since the drives had never actually been in a RAID. On the old OS I could confirm the fault by running
# /sbin/dmraid -ay -vvv -d
WARN: locking /var/lock/dmraid/.lock
NOTICE: /dev/sdf: asr discovering
NOTICE: /dev/sdf: ddf1 discovering
NOTICE: /dev/sdf: hpt37x discovering
NOTICE: /dev/sdf: hpt45x discovering
NOTICE: /dev/sdf: isw discovering
NOTICE: /dev/sdf: jmicron discovering
NOTICE: /dev/sdf: lsi discovering
NOTICE: /dev/sdf: nvidia discovering
NOTICE: /dev/sdf: pdc discovering
NOTICE: /dev/sdf: sil discovering
NOTICE: /dev/sdf: via discovering
NOTICE: /dev/sde: asr discovering
NOTICE: /dev/sde: ddf1 discovering
NOTICE: /dev/sde: hpt37x discovering
NOTICE: /dev/sde: hpt45x discovering
NOTICE: /dev/sde: isw discovering
NOTICE: /dev/sde: jmicron discovering
NOTICE: /dev/sde: lsi discovering
NOTICE: /dev/sde: nvidia discovering
NOTICE: /dev/sde: pdc discovering
NOTICE: /dev/sde: sil discovering
NOTICE: /dev/sde: via discovering
NOTICE: /dev/sdd: asr discovering
NOTICE: /dev/sdd: ddf1 discovering
NOTICE: /dev/sdd: hpt37x discovering
NOTICE: /dev/sdd: hpt45x discovering
NOTICE: /dev/sdd: isw discovering
NOTICE: /dev/sdd: jmicron discovering
NOTICE: /dev/sdd: lsi discovering
NOTICE: /dev/sdd: nvidia discovering
NOTICE: /dev/sdd: pdc discovering
NOTICE: /dev/sdd: sil discovering
NOTICE: /dev/sdd: via discovering
NOTICE: /dev/sdc: asr discovering
NOTICE: /dev/sdc: ddf1 discovering
NOTICE: /dev/sdc: ddf1 metadata discovered
NOTICE: /dev/sdc: hpt37x discovering
NOTICE: /dev/sdc: hpt45x discovering
NOTICE: /dev/sdc: isw discovering
NOTICE: /dev/sdc: jmicron discovering
NOTICE: /dev/sdc: lsi discovering
NOTICE: /dev/sdc: nvidia discovering
NOTICE: /dev/sdc: pdc discovering
NOTICE: /dev/sdc: sil discovering
NOTICE: /dev/sdc: via discovering
NOTICE: /dev/sdb: asr discovering
NOTICE: /dev/sdb: ddf1 discovering
NOTICE: /dev/sdb: ddf1 metadata discovered
NOTICE: /dev/sdb: hpt37x discovering
NOTICE: /dev/sdb: hpt45x discovering
NOTICE: /dev/sdb: isw discovering
NOTICE: /dev/sdb: jmicron discovering
NOTICE: /dev/sdb: lsi discovering
NOTICE: /dev/sdb: nvidia discovering
NOTICE: /dev/sdb: pdc discovering
NOTICE: /dev/sdb: sil discovering
NOTICE: /dev/sdb: via discovering
NOTICE: /dev/sda: asr discovering
NOTICE: /dev/sda: ddf1 discovering
NOTICE: /dev/sda: hpt37x discovering
NOTICE: /dev/sda: hpt45x discovering
NOTICE: /dev/sda: isw discovering
NOTICE: /dev/sda: jmicron discovering
NOTICE: /dev/sda: lsi discovering
NOTICE: /dev/sda: nvidia discovering
NOTICE: /dev/sda: pdc discovering
NOTICE: /dev/sda: sil discovering
NOTICE: /dev/sda: via discovering
DEBUG: _find_set: searching .ddf1_disks
DEBUG: _find_set: not found .ddf1_disks
ERROR: ddf1: cannot find virtual drive record on /dev/sdc
NOTICE: added /dev/sdc to RAID set ".ddf1_disks"
DEBUG: _find_set: searching .ddf1_disks
DEBUG: _find_set: found .ddf1_disks
ERROR: ddf1: cannot find virtual drive record on /dev/sdb
NOTICE: added /dev/sdb to RAID set ".ddf1_disks"
DEBUG: set status of set ".ddf1_disks" to 16
INFO: Activating GROUP RAID set ".ddf1_disks"
WARN: unlocking /var/lock/dmraid/.lock
DEBUG: freeing devices of RAID set ".ddf1_disks"
DEBUG: freeing device ".ddf1_disks", path "/dev/sdc"
DEBUG: freeing device ".ddf1_disks", path "/dev/sdb"
I did find a command that is supposed to erase this RAID metadata, but it too failed to run:
# /sbin/dmraid -E -r /dev/sdb
Do you really want to erase "ddf1" ondisk metadata on /dev/sdb ? [y/n] :y
ERROR: ddf1: seeking device "/dev/sdb" to 256055225090048
ERROR: writing metadata to /dev/sdb, offset 500107861504 sectors, size 0 bytes returned 0
ERROR: erasing ondisk metadata on /dev/sdb
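The numbers in that error hint at what is going wrong (this is my reading of the output, not something dmraid states directly): the metadata offset it reports is already in bytes, yet it seems to get scaled by the 512-byte sector size a second time, producing a seek target far beyond the end of a 500 GB disk.

```shell
# Reproduce the bogus seek target from the error message above.
offset_bytes=500107861504          # the "offset ... sectors" value dmraid printed
echo $(( offset_bytes * 512 ))     # prints 256055225090048, matching the seek error
```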
Which brought me to zeroing out the drive with dd. WARNING: this erases everything on the drive. You will lose data, so please back up or move your data first. The other thing to note is the /dev/sd? device name: if you have multiple hard drives like me, the wrong letter here can kill all data on the wrong drive. Check yours by using df -h
# umount /dev/sdb1
# dd if=/dev/zero of=/dev/sdb bs=1M
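If you want to rehearse this before touching a real device, the same invocation can be tried against a scratch file (the path and sizes here are arbitrary for the demo; conv=notrunc makes dd leave the file at its original size, the way a fixed-size block device would behave):

```shell
# Rehearse on a 100 MB scratch file: zero only the first 50 MB of it.
truncate -s 100M /tmp/fakedisk.img
dd if=/dev/zero of=/tmp/fakedisk.img bs=1M count=50 conv=notrunc status=none
stat -c %s /tmp/fakedisk.img    # still 104857600 bytes -- notrunc kept the size
```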
It’s really only the beginning and end of the drive that need to be blanked, but after half a day of googling and trying I couldn’t find the right offsets, and in that time dd could have blanked the whole thing anyway. After that you need to repartition it with fdisk
# /sbin/fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xa99048f4.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
The number of cylinders for this disk is set to 60801.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): p
Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xa99048f4
Device Boot Start End Blocks Id System
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-60801, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-60801, default 60801):
Using default value 60801
Command (m for help): p
Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xa99048f4
Device Boot Start End Blocks Id System
/dev/sdb1 1 60801 488384001 83 Linux
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
And create the filesystem
# /sbin/mkfs.ext3 /dev/sdb1
mke2fs 1.40.4 (31-Dec-2007)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
61063168 inodes, 122096000 blocks
6104800 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
3727 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
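As a sanity check (my arithmetic, not part of the original output): the 488384001 blocks fdisk reported for /dev/sdb1 are 1 KiB units, and they line up with the 122096000 4 KiB filesystem blocks mke2fs prints above.

```shell
# fdisk counts 1 KiB blocks, mke2fs counts 4 KiB blocks, so divide by 4
# (integer division drops the fractional trailing block):
echo $(( 488384001 / 4 ))    # prints 122096000, matching the mke2fs output
```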
Don’t forget to set a label for easier mounting, then remount
# /sbin/e2label /dev/sdb1 switch
# mount LABEL=switch /media/switch/
And with that, dmraid stopped listing /dev/sdb as part of a RAID. Yay!
# /sbin/dmraid -ay -vvv -d
WARN: locking /var/lock/dmraid/.lock
NOTICE: /dev/sdf: asr discovering
NOTICE: /dev/sdf: ddf1 discovering
NOTICE: /dev/sdf: hpt37x discovering
NOTICE: /dev/sdf: hpt45x discovering
NOTICE: /dev/sdf: isw discovering
NOTICE: /dev/sdf: jmicron discovering
NOTICE: /dev/sdf: lsi discovering
NOTICE: /dev/sdf: nvidia discovering
NOTICE: /dev/sdf: pdc discovering
NOTICE: /dev/sdf: sil discovering
NOTICE: /dev/sdf: via discovering
NOTICE: /dev/sde: asr discovering
NOTICE: /dev/sde: ddf1 discovering
NOTICE: /dev/sde: hpt37x discovering
NOTICE: /dev/sde: hpt45x discovering
NOTICE: /dev/sde: isw discovering
NOTICE: /dev/sde: jmicron discovering
NOTICE: /dev/sde: lsi discovering
NOTICE: /dev/sde: nvidia discovering
NOTICE: /dev/sde: pdc discovering
NOTICE: /dev/sde: sil discovering
NOTICE: /dev/sde: via discovering
NOTICE: /dev/sdd: asr discovering
NOTICE: /dev/sdd: ddf1 discovering
NOTICE: /dev/sdd: hpt37x discovering
NOTICE: /dev/sdd: hpt45x discovering
NOTICE: /dev/sdd: isw discovering
NOTICE: /dev/sdd: jmicron discovering
NOTICE: /dev/sdd: lsi discovering
NOTICE: /dev/sdd: nvidia discovering
NOTICE: /dev/sdd: pdc discovering
NOTICE: /dev/sdd: sil discovering
NOTICE: /dev/sdd: via discovering
NOTICE: /dev/sdc: asr discovering
NOTICE: /dev/sdc: ddf1 discovering
NOTICE: /dev/sdc: ddf1 metadata discovered
NOTICE: /dev/sdc: hpt37x discovering
NOTICE: /dev/sdc: hpt45x discovering
NOTICE: /dev/sdc: isw discovering
NOTICE: /dev/sdc: jmicron discovering
NOTICE: /dev/sdc: lsi discovering
NOTICE: /dev/sdc: nvidia discovering
NOTICE: /dev/sdc: pdc discovering
NOTICE: /dev/sdc: sil discovering
NOTICE: /dev/sdc: via discovering
NOTICE: /dev/sdb: asr discovering
NOTICE: /dev/sdb: ddf1 discovering
NOTICE: /dev/sdb: hpt37x discovering
NOTICE: /dev/sdb: hpt45x discovering
NOTICE: /dev/sdb: isw discovering
NOTICE: /dev/sdb: jmicron discovering
NOTICE: /dev/sdb: lsi discovering
NOTICE: /dev/sdb: nvidia discovering
NOTICE: /dev/sdb: pdc discovering
NOTICE: /dev/sdb: sil discovering
NOTICE: /dev/sdb: via discovering
NOTICE: /dev/sda: asr discovering
NOTICE: /dev/sda: ddf1 discovering
NOTICE: /dev/sda: hpt37x discovering
NOTICE: /dev/sda: hpt45x discovering
NOTICE: /dev/sda: isw discovering
NOTICE: /dev/sda: jmicron discovering
NOTICE: /dev/sda: lsi discovering
NOTICE: /dev/sda: nvidia discovering
NOTICE: /dev/sda: pdc discovering
NOTICE: /dev/sda: sil discovering
NOTICE: /dev/sda: via discovering
DEBUG: _find_set: searching .ddf1_disks
DEBUG: _find_set: not found .ddf1_disks
ERROR: ddf1: cannot find virtual drive record on /dev/sdc
NOTICE: added /dev/sdc to RAID set ".ddf1_disks"
DEBUG: set status of set ".ddf1_disks" to 16
INFO: Activating GROUP RAID set ".ddf1_disks"
WARN: unlocking /var/lock/dmraid/.lock
DEBUG: freeing devices of RAID set ".ddf1_disks"
DEBUG: freeing device ".ddf1_disks", path "/dev/sdc"
On to the second drive. I hope this has been a help to you too, as it cost me two days of googling to piece all of this together.
6 comments
I had a similar problem with a disk that I used for my /backup partition. I was able to create the PV, VG and LVM, format and use the file system.
After a reboot one day, I noticed /backup was gone and I had a dmraid disk. After several hours of sifting through info on Google I attempted the following.
dd if=/dev/zero of=/dev/sdb bs=1M count=500
which overwrites the first 500 MB.
I used fdisk to list the existing partitions and remove them, ran the dd command above, then recreated the partitions with exactly the same parameters as before.
After this I was able to activate my volume groups and mount the logical volumes on the disk.
Unfortunately this does not survive a reboot. It might be that I have not overwritten enough of the start of the disk, or that the metadata exists elsewhere.
Moral of the story: if you do not have data on the disk, dd the whole thing, unless you know of a method to eradicate only the metadata and not your data.
gr8, recreating partition worked like a charm, thanks
It’s a bug in the dmraid ddf1 handler – the erase function takes the offset to the metadata on the device (in bytes) and tries to use it as a sector count. Since the metadata is located at the end of the disk this naturally fails.
So “offset 500107861504 sectors” needs to be divided by the sector size (512 bytes) to give 976773167, which you can then use with dd:
dd if=/dev/zero of=/dev/sda bs=512 seek=976773067 count=200
I took 100 off the number and erased 200 sectors just to be sure – but at this point dmraid no longer thought that there was a raid partition on the drive.
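Peter’s arithmetic checks out; spelling out the calculation with the numbers from the error message earlier in the post:

```shell
offset_bytes=500107861504          # the "offset ... sectors" value from dmraid's error
sector=$(( offset_bytes / 512 ))   # 976773167, where the metadata really starts
seek=$(( sector - 100 ))           # back off 100 sectors as a safety margin
echo "$sector $seek"               # prints: 976773167 976773067
```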
Author
Thanks for sharing your answer, Peter – so I guess that’s the fix for it. One question: I imagine this process keeps the existing data intact? (Not that I’d recommend anyone skip backing up their data before running the dd command.)
I’ve got no idea how you figured it out; mind you, it’s been a long time since I looked into this. Hopefully I’ll not need it again, but I know where the answer is if I do.
Thanks.
Peter’s solution confirmed.
Had the same problem with disks out of a RAID 10.
– dmraid -ay -vvv -d gives a (wrong) offset.
– divide the resulting offset by 512.
– dd if=/dev/zero of=/dev/sda bs=512 seek= count=400
– run dmraid -ay -vvv -d again; it deletes some kind of config and confirms that the raid no longer exists…
Saves hours compared to clearing the whole disk.
Thanks for the info! It got me on the right path after my ‘update-grub’ command kept complaining about the RAID metadata from an old RAID1 hdd. Another posting revealed how to zero out the end of the drive. Posting my summary here as a quick reference for others.
# Zero out the start and end of the drive to delete meta data left-over on drive from Linux software RAID:
sudo -s
umount /dev/sdXn
# where X = drive letter and n = partition number mounted
dd if=/dev/zero of=/dev/sdX bs=1M
# where X = drive letter; note that the partition number is not included, as we are zeroing out the partition table
dd bs=512 if=/dev/zero of=/dev/sda count=2048 seek=$((`blockdev --getsz /dev/sda` - 2048))
gparted
# (or non-gui ‘parted’). You’ll be prompted to create a new partition table.
# Then proceed as necessary to create new partition(s), click the check-mark to write, and move on with your life.
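For anyone wondering what that seek expression evaluates to: blockdev --getsz reports the disk size in 512-byte sectors, so subtracting 2048 positions dd to zero exactly the last 1 MiB. Using the 500 GB disk from this post as a worked example (the byte size is taken from the fdisk output above):

```shell
disk_bytes=500107862016                # total size from the fdisk output
total_sectors=$(( disk_bytes / 512 ))  # what blockdev --getsz would report
seek=$(( total_sectors - 2048 ))       # start of the final 1 MiB
echo "$total_sectors $seek"            # prints: 976773168 976771120
```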
SOURCES:
+ http://www.lanchbury.id.au/?p=53
+ http://unix.stackexchange.com/questions/13848/wipe-last-1mb-of-a-hard-drive