Rescue system with RAID 0

Darth9590
Posts: 20
Joined: 2022-02-21 19:12
Has thanked: 3 times
Been thanked: 1 time

Rescue system with RAID 0

#1 Post by Darth9590 »

Hello,

My previously working system now only boots into GRUB. I have tried to restore or fix it, but I may have no idea what I am doing, or maybe it's not fixable. One data point: my 400W PSU failed two weeks ago and I upgraded it to an 800W Thermaltake, after which I could still boot into Debian fine. I am unsure whether I also had an HDD failure at that point.

My system is two 512GB SSDs that I made into one logical drive with RAID 0. There is one other 125GB SSD that is just there for extra space. ls in GRUB shows

Code: Select all

(proc) (memdisk) (md/0) (hd0) (hd0,gpt1) (hd1) (hd1,gpt2) (hd1,gpt1) (hd2) (hd2,gpt2) (hd2,gpt1)
ls of (md/0) shows: Filesystem type ext* - Last modification time 2024-11-15 21:55:39 Friday - Sector size 512B - Total size 997996544KiB.
All other drives say "no known filesystem detected" with ls.

I have tried the following, which resulted in the error "attempt to read or write outside of disk 'hd0'".

Code: Select all

set root=(md/0)
linux /boot/vmlinuz-6.9.10+bpo-amd64 root=/dev/sda1
initrd /boot/initrd.img-6.9.10+bpo-amd64
error: attempt to read or write outside of disk 'hd0'.
I can see all my personal files by navigating (md/0). I am not sure how to proceed, aside from a fresh install. Any advice? Could this be a failing SSD that I should test? Thank you

lindi
Debian Developer
Posts: 571
Joined: 2022-07-12 14:10
Has thanked: 2 times
Been thanked: 113 times

Re: Rescue system with RAID 0

#2 Post by lindi »

Boot from a live USB and run

Code: Select all

cat /proc/partitions
Then run "file -sL" on each of the devices to give us a better understanding of your setup.
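
For example, something like this loop covers them all in one go (a sketch; adjust the globs if your device names differ):

Code: Select all

# print the filesystem signature of every disk, partition and md device
for dev in /dev/sd* /dev/md*; do
    sudo file -sL "$dev"
done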

Darth9590
Posts: 20
Joined: 2022-02-21 19:12
Has thanked: 3 times
Been thanked: 1 time

Re: Rescue system with RAID 0

#3 Post by Darth9590 »

Here are those results

Code: Select all

major minor  #blocks  name

   8        0  500107608 sda
   8        1     975872 sda1
   8        2  499130368 sda2
   8       16  117220824 sdb
   8       17  107421875 sdb1
   8       18    9796608 sdb2
   8       32  468851544 sdc
   8       33  468849664 sdc1
   9      127  967715840 md127
   8       48   30230528 sdd
   8       49    3427168 sdd1
   8       50       4768 sdd2
   7        0    2904948 loop0

/dev/sda: DOS/MBR boot sector; partition 1 : ID=0xee, start-CHS (0x0,0,2), end-CHS (0x3ff,255,63), startsector 1, 1000215215 sectors, extended partition table (last)

/dev/sda1: DOS/MBR boot sector, code offset 0x58+2, OEM-ID "mkfs.fat", sectors/cluster 8, Media descriptor 0xf8, sectors/track 63, heads 255, hidden sectors 2048, sectors 1951740 (volumes > 32 MB), FAT (32 bit), sectors/FAT 1904, reserved 0x1, serial number 0xd0ad144c, unlabeled

/dev/sda2: Linux Software RAID version 1.2 (1) UUID=84ae5824:cb048978:1a287a24:b6a550fb name=deathstar:0 level=0 disks=2

/dev/sdb: DOS/MBR boot sector; partition 1 : ID=0xee, start-CHS (0x0,0,2), end-CHS (0x3ff,255,63), startsector 1, 234441647 sectors, extended partition table (last)

/dev/sdb1: Linux rev 1.0 ext4 filesystem data, UUID=9c604016-3942-42a6-8a58-5066f83bdb3c (extents) (64bit) (large files) (huge files)

/dev/sdb2: Linux swap file, 4k page size, little endian, version 1, size 2449151 pages, 0 bad pages, no label, UUID=e28028dc-03c3-437b-b88c-213b2ec25ed5

/dev/sdc: DOS/MBR boot sector; partition 1 : ID=0xee, start-CHS (0x0,0,2), end-CHS (0x3ff,255,63), startsector 1, 937703087 sectors, extended partition table (last)

/dev/sdc1: Linux Software RAID version 1.2 (1) UUID=84ae5824:cb048978:1a287a24:b6a550fb name=deathstar:0 level=0 disks=2

/dev/md127: Linux rev 1.0 ext4 filesystem data, UUID=c09ee2f9-8f0b-4bd2-a7cc-19904382206a (needs journal recovery) (extents) (64bit) (large files) (huge files)

/dev/sdd: ISO 9660 CD-ROM filesystem data (DOS/MBR boot sector) 'd-live 12.8.0 kd amd64' (bootable)

lindi
Debian Developer
Posts: 571
Joined: 2022-07-12 14:10
Has thanked: 2 times
Been thanked: 113 times

Re: Rescue system with RAID 0

#4 Post by lindi »

OK, so you have two RAID member devices. It seems they have been automatically detected and the array is up and running as /dev/md127 with a valid filesystem. Maybe you want to

Code: Select all

fsck /dev/md127
next to check the filesystem?
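
If the array had not been assembled automatically, it could usually be brought up by hand with mdadm; a sketch, with the member partitions taken from your file -sL output:

Code: Select all

# assemble the RAID 0 array from its two member partitions
mdadm --assemble /dev/md127 /dev/sda2 /dev/sdc1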

Darth9590
Posts: 20
Joined: 2022-02-21 19:12
Has thanked: 3 times
Been thanked: 1 time

Re: Rescue system with RAID 0

#5 Post by Darth9590 »

Hello, I ran fsck, and it looks like some errors were found. I am not sure if this is due to an improper shutdown; possibly I lost power while the computer was asleep at work.

Code: Select all

fsck from util-linux 2.38.1   
e2fsck 1.47.0 (5-Feb-2023)   
/dev/md127: recovering journal   
Clearing orphaned inode 57024857 (uid=1000, gid=1000, mode=0100644, size=16100)   
Clearing orphaned inode 50071561 (uid=0, gid=0, mode=0100644, size=53)   
Clearing orphaned inode 59244576 (uid=1000, gid=1000, mode=0100600, size=9082801)   
Clearing orphaned inode 59244573 (uid=1000, gid=1000, mode=0100600, size=8690436)   
Clearing orphaned inode 13647154 (uid=0, gid=0, mode=0100644, size=3048928)   
Clearing orphaned inode 13640002 (uid=0, gid=0, mode=0100644, size=51095)   
Clearing orphaned inode 13667023 (uid=0, gid=0, mode=0100644, size=372920)   
Clearing orphaned inode 13667347 (uid=0, gid=0, mode=0100644, size=691640)   
Clearing orphaned inode 13672962 (uid=0, gid=0, mode=0100644, size=817080)   
Clearing orphaned inode 13667340 (uid=0, gid=0, mode=0100644, size=190928)   
Clearing orphaned inode 13664903 (uid=0, gid=0, mode=0100644, size=8497304)   
Clearing orphaned inode 13664902 (uid=0, gid=0, mode=0100644, size=1083728)   
Clearing orphaned inode 13664723 (uid=0, gid=0, mode=0100644, size=3222)   
Clearing orphaned inode 13664722 (uid=0, gid=0, mode=0100644, size=326)   
Clearing orphaned inode 13667535 (uid=0, gid=0, mode=0100644, size=191072)   
Clearing orphaned inode 13667534 (uid=0, gid=0, mode=0100644, size=203056)   
Clearing orphaned inode 13667530 (uid=0, gid=0, mode=0100644, size=1406352)   
Clearing orphaned inode 13656234 (uid=0, gid=0, mode=0100644, size=14328)   
Clearing orphaned inode 13656233 (uid=0, gid=0, mode=0100644, size=387288)   
Clearing orphaned inode 13656232 (uid=0, gid=0, mode=0100644, size=18480)   
Clearing orphaned inode 13656231 (uid=0, gid=0, mode=0100644, size=1273360)   
Clearing orphaned inode 13656230 (uid=0, gid=0, mode=0100644, size=1953008)   
Clearing orphaned inode 57024799 (uid=1000, gid=1000, mode=0100644, size=849579)   
Clearing orphaned inode 13636074 (uid=0, gid=0, mode=0100644, size=182544)   
Clearing orphaned inode 13661692 (uid=0, gid=0, mode=0100644, size=1437848)   
Clearing orphaned inode 13635921 (uid=0, gid=0, mode=0100644, size=844736)   
Clearing orphaned inode 13635929 (uid=0, gid=0, mode=0100644, size=3344008)   
Clearing orphaned inode 13635928 (uid=0, gid=0, mode=0100644, size=2066856)   
Clearing orphaned inode 13634397 (uid=0, gid=0, mode=0100755, size=281096)   
Clearing orphaned inode 13634373 (uid=0, gid=0, mode=0100755, size=92544)   
Clearing orphaned inode 25165919 (uid=0, gid=0, mode=0100644, size=476048)   
Clearing orphaned inode 13633665 (uid=0, gid=0, mode=0100644, size=404096)   
Clearing orphaned inode 13634824 (uid=0, gid=0, mode=0100644, size=34872)   
Clearing orphaned inode 13633330 (uid=0, gid=0, mode=0100644, size=355328)   
Clearing orphaned inode 13668682 (uid=0, gid=0, mode=0100644, size=325904)   
Clearing orphaned inode 13646761 (uid=0, gid=0, mode=0100644, size=125000)   
Clearing orphaned inode 13646759 (uid=0, gid=0, mode=0100644, size=688160)   
Clearing orphaned inode 13646757 (uid=0, gid=0, mode=0100644, size=4730136)   
Clearing orphaned inode 13648496 (uid=0, gid=0, mode=0100644, size=27028)   
Clearing orphaned inode 13648491 (uid=0, gid=0, mode=0100644, size=18696)   
Clearing orphaned inode 13648386 (uid=0, gid=0, mode=0100644, size=18680)   
Clearing orphaned inode 13632835 (uid=0, gid=0, mode=0100644, size=14640)   
Clearing orphaned inode 13632834 (uid=0, gid=0, mode=0100644, size=60328)   
Clearing orphaned inode 13632833 (uid=0, gid=0, mode=0100644, size=14480)   
Clearing orphaned inode 13632824 (uid=0, gid=0, mode=0100644, size=907784)   
Clearing orphaned inode 13632823 (uid=0, gid=0, mode=0100644, size=14480)   
Clearing orphaned inode 13632821 (uid=0, gid=0, mode=0100755, size=1922136)   
Clearing orphaned inode 13632818 (uid=0, gid=0, mode=0100755, size=210904)   
Clearing orphaned inode 57024260 (uid=1000, gid=1000, mode=0100644, size=16100)   
Clearing orphaned inode 57541745 (uid=1000, gid=1000, mode=0100644, size=23612)   
Clearing orphaned inode 57541720 (uid=1000, gid=1000, mode=0100644, size=16100)   
Clearing orphaned inode 57541400 (uid=1000, gid=1000, mode=0100644, size=68668)   
Clearing orphaned inode 57022577 (uid=1000, gid=1000, mode=0100644, size=38348)   
Setting free inodes count to 59637904 (was 59637993)   
Setting free blocks count to 218633384 (was 218629909)   
/dev/md127: clean, 851824/60489728 files, 23295576/241928960 blocks

lindi
Debian Developer
Posts: 571
Joined: 2022-07-12 14:10
Has thanked: 2 times
Been thanked: 113 times

Re: Rescue system with RAID 0

#6 Post by lindi »

I guess sda1 could be where your /boot is stored? Can you

Code: Select all

mount /dev/sda1 /mnt && ls /mnt && umount /mnt
to verify this?

If that is the case, then in GRUB this would be called (hd0,gpt1) and surely not (md/0)?
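
If /boot instead turns out to live on the RAID itself, a manual boot from the GRUB prompt would look roughly like this (a sketch, untested; the root= UUID is the ext4 UUID that file reported for /dev/md127):

Code: Select all

insmod mdraid1x
set root=(md/0)
linux /boot/vmlinuz-6.9.10+bpo-amd64 root=UUID=c09ee2f9-8f0b-4bd2-a7cc-19904382206a ro
initrd /boot/initrd.img-6.9.10+bpo-amd64
boot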

Darth9590
Posts: 20
Joined: 2022-02-21 19:12
Has thanked: 3 times
Been thanked: 1 time

Re: Rescue system with RAID 0

#7 Post by Darth9590 »

sda1 returned with a lost+found folder, no boot. It was 102GB, which shouldn't be my boot partition. sdc1 is 1GB and contains /EFI/debian/ (grub.conf, .efi files). Should there be a boot folder in there?

I found the boot folder on the md127 array, which contains the vmlinuz and initrd files and the efi and grub folders. Did I mess up my installation six months ago?
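
In case it helps, I think I can get a shell inside the installed system from the live USB with the usual chroot recipe (I have not run this yet; md127 as the root filesystem is taken from the earlier output):

Code: Select all

# mount the RAID root and enter it from the live system
mount /dev/md127 /mnt
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt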
