QNAP Disk Drive Recovery on Linux – Part I

After years of good service, my 2-bay QNAP TS-251 NAS finally gave up. Without much hesitation, I threw it out, overly confident that I would be able to recover the data from the drives at a later time. I missed that a resistor fix (What to do with TS-251+) might have brought it back to life, at least temporarily, for backing up the data. I know Linux – so I thought – and connected the drive to a Linux system. I soon realized that QNAP added features to the logical volume manager (LVM) that wouldn’t allow me to mount thin partitions. This was the start of a longer journey.

QNAP, to their credit, provides all their changes and (OSS) sources on QNAP NAS GPL Sources. However, it still took quite a few steps and fixes to create a scaled-down environment to mount the LVM volume.

In this blog post, I’ll provide some more information and instructions for backing up a QNAP disk drive connected to a regular Linux (Ubuntu) system using a VM. In a second part, I’ll explain the steps necessary to build such an environment.

Problem Description

After connecting a QNAP drive to a Linux system, you might see the following error when running lvs.

WARNING: Unrecognised segment type tier-thin-pool
...
Internal error: LV segments corrupted in tp1.

This doesn’t actually indicate a “corruption” error but a missing feature for thin-provisioned volumes in the default kernel and LVM tools.

Recovery Options

The options I found for recovering a failed QNAP NAS ranged from buying a new one, to using commercially available recovery software, to running a patched Linux version – well, there would also have been the option of fixing the NAS, but I blew that one.

The commercial solutions I found were R-Linux from r-tools technology and Recovery Explorer. Both provide a free version, which I tried, and while they seemed able to recover the files, file names and directory structure were missing. The full versions might have been able to fully recover the drive, though.

I also came across a project that already provides a patched kernel and ramdisk to mount the QNAP drive: QNAP Kernel and LVM. It worked well; however, it didn’t seem to support networking or mounting a local directory.

Without any alternative but armed with the knowledge that at least one person was able to create a working VM for this task, I went ahead and built my own version of the Linux kernel and ramdisk.

Booting the Kernel and Ramdisk in a VM

If you don’t want to patch and build the kernel, tools, and a root filesystem yourself, you can find prebuilt versions in my https://github.com/czankel/qnap-kernel-ramdisk repository. I have only tested it with a single drive out of a RAID-1 (mirrored) array from a QNAP TS-251, so your mileage may vary.

It supports the following options as the destination targets:

  • Mount a local directory into the VM,
  • Mount a full drive into the VM,
  • Mount a remote (nfs) drive into the VM,
  • Use rsync to copy directly to a target system.

Setting up the Host OS

Starting with a fresh Ubuntu 24.04 installation, you’ll need the following additional packages:

  • qemu-system — for running the patched kernel as a VM
  • passt — if you want to use networking in the VM

Install these packages with apt:

sudo apt update
sudo apt install -y qemu-system passt

Note that passt is a newer networking option for Qemu and might not be supported on older Linux distributions. For more information, refer to: https://wiki.qemu.org/Documentation/Networking

Start RAID

Next, we have to ensure that the RAID array has been detected by the OS. In the case below, it is a RAID-1 (mirrored) array in degraded mode with only one disk. For RAID-0, you would need both drives, but I haven’t tried that.

Use the following command to scan for arrays on the connected drives:

sudo mdadm --examine --scan

This should list the detected arrays:

ARRAY /dev/md/9 metadata=1.0 UUID=57119544:81797d9a:1bf58736:a8021b31
ARRAY /dev/md/256 metadata=1.0 UUID=6e94ac6e:8cd7abaf:82262a9e:b21cdf4d
ARRAY /dev/md/1 metadata=1.0 UUID=9cd65040:09c91492:31a1e16c:722f9ca4
ARRAY /dev/md/13 metadata=1.0 UUID=734fd1db:44738e25:a129cd2d:d7fd6770
ARRAY /dev/md/322 metadata=1.0 UUID=30521e2e:4f88dc3f:3af8bbd1:7bc59141

If these are missing, run sudo mdadm --assemble --scan to start the arrays and check again.

Identify the QNAP Partition

We can now identify the mdX device of the QNAP partition to back up using lsblk, which lists all block devices:

sdb                         8:16   0 931.5G  0 disk
├─sdb1                      8:17   0 517.7M  0 part
│ └─md127                   9:127  0     0B  0 md
├─sdb2                      8:18   0 517.7M  0 part
│ └─md125                   9:125  0     0B  0 md
├─sdb3                      8:19   0   922G  0 part
│ └─md126                   9:126  0   922G  0 raid1
├─sdb4                      8:20   0 517.7M  0 part
│ └─md124                   9:124  0     0B  0 md
└─sdb5                      8:21   0     8G  0 part
  └─md123                   9:123  0     0B  0 md

The data partition is typically the third and largest partition. In this case, it’s md126 on sdb3. Note that the mdX enumeration can change between reboots, so be sure to pick the correct device.
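If you want to script this step, the data partition can be picked out of the lsblk output automatically. The snippet below is a hypothetical helper, not part of the original instructions: it selects the first raid1 member from `lsblk -rno NAME,SIZE,TYPE`-style output, using sample data that mirrors the listing above.

```shell
# Hypothetical helper: pick the md device of the first raid1 array.
# On a live system, replace the sample data with: lsblk -rno NAME,SIZE,TYPE
lsblk_output='md127 0B md
md126 922G raid1
md125 0B md'

# The third column is the device type; keep only raid1 members.
data_md=$(printf '%s\n' "$lsblk_output" | awk '$3 == "raid1" {print $1}' | head -n1)
echo "/dev/$data_md"   # /dev/md126 for the sample data above
```

Double-check the result against the full lsblk tree before using it, since a system with multiple arrays may list more than one raid1 member.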

Preparing the VM

Starting with the qemu template below, we need to add arguments to optionally enable networking and select the target destination. Choose one of these options:

  1. Use a local drive partition or file (as a partition)
  2. Use a local directory
  3. Use a remote NFS share
  4. Use rsync directly to a remote host

Replace the md device with the actual number from above, and use a trailing ‘\’ to continue the command line when adding additional arguments:

qemu-system-x86_64 \
-kernel "vmlinux-5.10-qnap" \
-initrd "initrd-5.10-qnap.img" \
-nographic -m 2048M -smp 4 \
-serial mon:stdio \
-drive file=/dev/md126,format=raw \
-append "init=/sbin/init console=ttyS0"

Setting up Networking

If you want to use networking inside the VM, for example, to mount a remote nfs share or use rsync directly to a remote host, start passt now and add the following lines to the qemu command above, using the socket path from passt’s output (typically /tmp/passt_1.socket):

-device virtio-net-pci,netdev=s \
-netdev stream,id=s,server=off,addr.type=unix,addr.path=/tmp/passt_1.socket \

Note that passt will not route ICMP packets when running in user mode, so tools such as ping or traceroute won’t work. To enable ICMP, run the following command with a range of group ids (gid). Use id to find a suitable group id and replace both 1000 values (the range) with it. Ubuntu assigns the first user the group id 1000, which is used here:

sudo sh -c "echo 1000 1000 > /proc/sys/net/ipv4/ping_group_range"
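If your user’s primary group id is not 1000, a small sketch like this derives the value instead of hard-coding it (it assumes the current user is the one that should be allowed to ping):

```shell
# Use the current user's primary group id for the ping group range,
# instead of hard-coding 1000.
gid=$(id -g)
sudo sh -c "echo $gid $gid > /proc/sys/net/ipv4/ping_group_range"
```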

Adding a Local Drive or File

The following argument adds a local drive to the VM. It is exposed as a /dev/sdX drive inside the VM.

-drive file=/dev/DRIVE,format=raw \

Adding a Local Directory

Qemu also supports mounting a host directory directly through the virtfs driver. Replace LOCALPATH with the path to the exported directory. Here we define the mount tag share, which is used later inside the VM to refer to this directory:

-virtfs local,path=LOCALPATH,mount_tag=share,security_model=mapped-xattr \
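Putting the pieces together, a full invocation with both networking and a local directory target might look like the sketch below. The kernel and ramdisk file names are taken from above; /dev/md126, the passt socket path, and the /home/user/qnap-backup directory are assumptions you should replace with your own values.

```shell
qemu-system-x86_64 \
  -kernel "vmlinux-5.10-qnap" \
  -initrd "initrd-5.10-qnap.img" \
  -nographic -m 2048M -smp 4 \
  -serial mon:stdio \
  -drive file=/dev/md126,format=raw \
  -device virtio-net-pci,netdev=s \
  -netdev stream,id=s,server=off,addr.type=unix,addr.path=/tmp/passt_1.socket \
  -virtfs local,path=/home/user/qnap-backup,mount_tag=share,security_model=mapped-xattr \
  -append "init=/sbin/init console=ttyS0"
```

Drop the -device/-netdev pair if you don’t need networking, or the -virtfs line if you are writing to a drive or NFS share instead.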

Booting the VM

You can now start the VM. Once booted, hit enter to start a shell.

First, let’s create two mount points, /mnt/src and /mnt/dst, for the source and destination directories:

mkdir /mnt/src
mkdir /mnt/dst

Enabling Networking (optional)

If you have booted the VM with networking, use the following instructions to configure the network interface and routing table. Passt uses the 10.0.2.0/24 network and provides a gateway at 10.0.2.2. It also provides DNS and DHCP servers, which we are not using here. Host addresses typically start at 10.0.2.15.

ip link set eth0 up
ip addr add 10.0.2.15/24 dev eth0
ip route add default via 10.0.2.2
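As a quick sanity check, you can verify the route and try to reach the passt gateway (the ping requires the ICMP workaround on the host described earlier):

```shell
# Verify the default route and reach the passt gateway at 10.0.2.2.
ip route show default
ping -c 1 10.0.2.2
```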

Mounting the Source LVM Partition

We can now enable the LVM volumes:

vgchange -ay

Use lvdisplay to display all logical volumes:

lvdisplay|grep -e LV\ Name -e Path -e Status -e Size

This will print all logical volumes with the path, status, and size. They should all have the status available, including thin-provisioned partitions:

LV Path /dev/vg1/lv1
LV Name lv1
LV Status available
LV Size 400.00 GiB
LV Path /dev/vg1/lv288
LV Name lv288
LV Status available
LV Size 300.00 GiB
LV Path /dev/vg1/lv1312
LV Name lv1312
LV Status available
LV Size 92.00 MiB
LV Path /dev/vg1/lv544
LV Name lv544
LV Status available
LV Size 9.13 GiB

Depending on the storage configuration you set up on the QNAP device, this list will look different. In this case, there are two large data volumes. To mount the lv1 volume on /mnt/src, use:

mount /dev/vg1/lv1 /mnt/src
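Since the source volume only needs to be read, mounting it read-only is a safe alternative that prevents any accidental writes to the data you are recovering:

```shell
# Mount the source volume read-only to protect the data being recovered.
mount -o ro /dev/vg1/lv1 /mnt/src
```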

Mounting a Local Drive or Partition File as the Destination (Option 1)

If you have added a partition from a local drive or a partition file, the guest OS should have created a device such as /dev/sda or /dev/sdb (be sure to use the correct device). You can mount it to /mnt/dst:

mount /dev/sdb /mnt/dst

Mounting a Local Directory as the Destination (Option 2)

Mounting a local directory uses the Plan 9 filesystem (9p) over the virtio transport. Use the tag you specified in the qemu command; in this case, we used share:

mount -t 9p -o trans=virtio,version=9p2000.L,posixacl,msize=10485760 share /mnt/dst

Mounting a Remote NFS Share as the Destination (Option 3)

Ensure networking is enabled (see above) and that you can reach the remote host. Use the following command to mount the share SHARE from the remote host HOST:

mount -t nfs4 HOST:SHARE /mnt/dst

Note that in my tests, this option was much slower than using rsync directly to the remote host, though additional NFS mount options might improve performance.

Backing up the QNAP Partition

You can now use rsync to copy the QNAP partition to the destination target:

rsync -av /mnt/src /mnt/dst

Note that rsync tries to preserve file ownership by looking up users and groups by name (not by numeric id). This might cause problems for files owned by root or other users. You can, for example, use the --chown=user:group option to set the user and group, or --numeric-ids to keep the numeric ids. Please refer to the rsync man page for more options.

If you want to copy the QNAP partition to a remote system directly, use the following command:

rsync -av /mnt/src <user>@<host>:<path>

Final Notes

I hope these instructions and tools are useful for anyone trying to restore and back up the data from a QNAP drive. In the second part, I will describe the steps for patching and building the kernel and ramdisk.

References