Replacing a Btrfs drive - best practices?

Replacing a Btrfs drive - best practices?

Matthew Crews
I'm looking to replace a near-failing hard drive, currently mounted as /home with Btrfs, with a pair of replacements in a Raid1 configuration. I was wondering about the pros and cons of using these two methods for doing so.

(For brevity, I will be referring to my drives as sda, sdb, etc.)

Current setup:

/dev/sda mounted as /
/dev/sdb mounted as a Btrfs /home

Desired setup:

/dev/sda mounted as / (no change)
/dev/sdb removed from system
/dev/sdc(d) in a Raid1 configuration mounted as Btrfs /home

Method 1:

1. Use the built-in Btrfs tools to convert sdb into a raid1 array
2. Add sdc and sdd to the array
3. Sync data between all 3 drives
4. Remove sdb from the array once the sync is complete.

This seems like the simple solution, and can be done on a live system. But are the btrfs tools smart enough to mirror my data directly? Will I need to go and edit fstab afterwards? Will it just work(tm)?

Method 2:

1. Boot to a rescue disk.
2. Format sdc and sdd as a Btrfs raid1 array
3. rsync the data from sdb to the sdc/sdd array (a sketch follows below)
4. Manually edit fstab to point to my new drives
5. Reboot to live system

In theory this seems like it should just work, but obviously involves system downtime (I'm sure a similar method could work on a running system, but I can take the downtime hit). Will this also copy relevant snapshots? Do I need to do anything special to fstab besides change the UUID?
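
For step 3, I assume the rsync invocation would look something like this, with flags to preserve hard links, ACLs, and extended attributes (the mount points are placeholders):

# rsync -aHAX --info=progress2 /mnt/oldhome/ /mnt/newhome/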

Finally, which is the preferred method, or is there a better method than the ones I've listed?

-Matt

Sent from ProtonMail, Swiss-based encrypted email.






Re: Replacing a Btrfs drive - best practices?

Liam Proven
On 6 November 2017 at 19:12, Matthew Crews <[hidden email]> wrote:

>
> Finally, which is the preferred method, or is there a better method than the
> ones I've listed?

Disclaimer: I know virtually nothing about Btrfs.

If it were me, I'd take method 2.

No, *AFAIK* rsync _won't_ copy snapshots. I think you'd have to
duplicate the partition at block level.

--
Liam Proven • Profile: https://about.me/liamproven
Email: [hidden email] • Google Mail/Talk/Plus: [hidden email]
Twitter/Facebook/Flickr: lproven • Skype/LinkedIn/AIM/Yahoo: liamproven
UK: +44 7939-087884 • ČR/WhatsApp/Telegram/Signal: +420 702 829 053


Re: Replacing a Btrfs drive - best practices?

Ken D'Ambrosio
What you want to do is:

1) Set up your new RAID with the new disks.
2) Use "btrfs send" to send over your old volumes/snapshots.  It can be
a bit tricky, so give it a try, and if you still have problems, ping me
either here or directly, and I'll see if I can't step you through it.

Good luck!

-Ken

P.S.  Just for reference, here's the btrfs send/receive that I used the
other day to do an initial backup:

btrfs send /bigdisk-01/snapshots/newlxc-2017-10-24_13-28-RO | pv -L 25m -b -a | btrfs receive /tmp/dest/backups/

Notes:
1) You have to first create a read-only snapshot of the volume/snapshot
you want to send (example below).
2) The "pv -L 25m -b -a" is just so I can watch progress; otherwise, you
don't have any good way to tell how long it's going to take.
3) On the destination, you'll receive a read-only copy, as well.  You
can then make a snapshot of *that* that's read-write.
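
For note 1, the read-only snapshot is created with the -r flag, something like this (the source path here is hypothetical; the destination matches the send command above):

# btrfs subvolume snapshot -r /bigdisk-01/newlxc /bigdisk-01/snapshots/newlxc-2017-10-24_13-28-RO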


On 2017-11-06 16:03, Liam Proven wrote:

> On 6 November 2017 at 19:12, Matthew Crews <[hidden email]>
> wrote:
>
>>
>> Finally, which is the preferred method, or is there a better method
>> than the
>> ones I've listed?
>
> Disclaimer: I know virtually nothing about Btrfs.
>
> If it were me, I'd take method 2.
>
> No, *AFAIK* rsync _won't_ copy snapshots. I think you'd have to
> duplicate the partition at block level.


Re: Replacing a Btrfs drive - best practices?

Matthew Crews
>-------- Original Message --------
>From: Ken D'Ambrosio
>
> P.S.  Just for reference, here's the btrfs send/receive that I used the
> other day to do an initial backup:
>
> btrfs send /bigdisk-01/snapshots/newlxc-2017-10-24_13-28-RO | pv -L 25m -b -a | btrfs receive /tmp/dest/backups/
>
> [...]

Thank you, this seems to be the best method. I can then edit my fstab afterwards, and it should be golden.

Curious: why rate-limit to 25 MiB/sec? That seems arbitrarily slow unless you are sending across a network. ("pv -L 25m" rate-limits the pipe to 25 MiB/sec.)

Re: Replacing a Btrfs drive - best practices?

Ken D'Ambrosio

> Curious, why rate limit to 25 MiB/sec? Seems arbitrarily slow unless
> you are sending across a network. ("pv -L 25m" rate limits to 25
> MiB/sec)

D'oh.  My bad.  That's 'cause I was pulling from a disk being hammered
in production, and didn't want to unduly slow it down.  Ignore that bit
for a non-production system. :-)


Re: Replacing a Btrfs drive - best practices?

Matthew Crews
As a follow-up, here is the method I used that worked fairly well. This allowed me to move my Btrfs /home partition from a stand-alone disk to a new Btrfs raid1 array.


1. Make a Raid1 Btrfs filesystem

# mkfs.btrfs -m raid1 -d raid1 /dev/sdc /dev/sdd
where sdc and sdd are my new drives; sda is my existing / partition and sdb is my existing /home


2. Mount the Raid1 array

# mkdir /media/homebackup
# mount /dev/sdc /media/homebackup


3. Make a snapshot of my existing /home

# mount /dev/sdb1 /mnt
# btrfs subvolume snapshot /mnt/@home /mnt/@home_backup

Ubuntu Btrfs documentation recommends mounting Btrfs volumes to /mnt for ease of working with them. See:
https://help.ubuntu.com/community/btrfs

Might as well make a backup of / (root) too, just in case something gets mucked up in fstab later.
# mount /dev/sda1 /mnt
# btrfs subvolume snapshot /mnt/@ /mnt/@_backup

4. Send the snapshot to the new drive

# btrfs send /mnt/@home_backup | pv -b -a | btrfs receive /media/homebackup

This part can take several hours, depending on the amount of data that needs to be sent.


5. Make a new snapshot, which resets it to writable

# btrfs subvolume snapshot /media/homebackup/@home_backup /media/homebackup/@home


6. Edit /etc/fstab

The simple way is to change the UUID to point to the new array, and have the old drive point to a new location. Something like:

UUID=XXX /media/homebackup btrfs defaults,subvol=@home,noauto 0 0
UUID=XXX /home btrfs noatime,nodiratime,autodefrag,subvol=@home 0 0

(The last field is the fsck pass number; fsck.btrfs is a no-op, so 0 is appropriate for Btrfs filesystems.)
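
To find the UUIDs to put here, blkid works; every member of the array reports the same filesystem UUID:

# blkid /dev/sdc /dev/sdd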


7. Reboot and verify

If all goes according to plan, the new raid array will mount as /home upon boot, and the old drive will not be mounted. In my case:

matthew@matt-linux-desktop:~$ df -k
Filesystem              1K-blocks       Used Available Use% Mounted on
/dev/sda1               470807552    8777432 460347848   2% /
/dev/sdc               1953514584 1302434304 650683152  67% /home
/dev/sdc               1953514584 1302434304 650683152  67% /mnt

matthew@matt-linux-desktop:~$ sudo lsblk -f
NAME           FSTYPE   LABEL UUID                                 MOUNTPOINT
sda                                                                
├─sda1         btrfs          dde09d67-fe28-4a79-ac1d-3dec0ef486ca /
├─sda2                                                            
└─sda5         swap           21c78577-95f6-49fb-be38-862013ec9252
  └─cryptswap1 swap           e3c2d3de-e2d0-4fba-9fda-d929aea3167b [SWAP]
sdb                                                                
└─sdb1         btrfs          56b012cf-a870-4105-88e7-6f5806738e8d
sdc            btrfs          d3c8cb5d-5394-477b-bb1f-5cc78316800e /mnt
sdd            btrfs          d3c8cb5d-5394-477b-bb1f-5cc78316800e

sdb1 is currently not mounted, while sdc is mounted at /home and /mnt. Due to a quirk in how multi-device Btrfs filesystems are reported, sdd will appear to be unmounted. However, you can verify that the raid array is working:

matthew@matt-linux-desktop:~$ sudo btrfs filesystem show
Label: none  uuid: d3c8cb5d-5394-477b-bb1f-5cc78316800e
        Total devices 2 FS bytes used 1.21TiB
        devid    1 size 1.82TiB used 1.21TiB path /dev/sdc
        devid    2 size 1.82TiB used 1.21TiB path /dev/sdd
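
You can also confirm the raid1 profile with btrfs filesystem df; if the conversion worked, each allocation line (Data, System, Metadata) should show RAID1:

matthew@matt-linux-desktop:~$ sudo btrfs filesystem df /home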




Hope this helps anyone else out there!

Re: Replacing a Btrfs drive - best practices?

Rashkae-2
On 17-11-06 01:12 PM, Matthew Crews wrote:

>
> Method 1:
>
> 1. Use the built-in Btrfs tools to convert sdb into a raid1 array
> 2. Add sdc and sdd to the array
> 3. Sync data between all 3 drives
> 4. Remove sdb from the array once the sync is complete.
>
>

This would be the proper way to do it. However, since you are concerned
that sdb might already be failing, and bad things can happen if sdb
coughs up a hairball before it is cleanly mirrored (Btrfs recovery
tools are still a work in progress), I would advise having a backup
first, before proceeding.

There would be no changes required to fstab; the Btrfs filesystem will
find its members automagically.

Step 3 needs a bit of explaining.  You would not sync the data between
the 3 drives yourself.  When you add the extra drives and convert the
filesystem to raid1, Btrfs will make exactly one redundant copy of all
the data.  When you then issue the command to remove sdb, the copy of
the data residing on that drive will first be moved to the other
available drives.
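
In command form, the add/convert/remove sequence would look something like this (a sketch; it assumes /home is the mount point and uses the device names from the original post):

# btrfs device add /dev/sdc /dev/sdd /home
# btrfs balance start -dconvert=raid1 -mconvert=raid1 /home
# btrfs device remove /dev/sdb /home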

Documentation is here:
https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices#Conversion





Re: Replacing a Btrfs drive - best practices?

Ken D'Ambrosio
On 2017-11-07 00:11, Matthew Crews wrote:

> As a follow-up, here is the method I used that worked fairly well.
> This allowed me to move my Btrfs /home partition from a stand-alone
> disk to a new Btrfs raid1 array.
>
> [...]
>
> Hope this helps anyone else out there!

An excellent write-up and outcome!  Thanks for taking the time to reply
in detail -- that's truly a Google-worthy e-mail there.

I should, alas, mention one thing for said Googlers, though: btrfs-based
RAID-0 and RAID-1 (and RAID-10) are superfine for deployment;
btrfs-based RAID-5/RAID-6, however... not so much.  When I need to use
those, I use mdadm to create the RAID, then install standalone btrfs on
the mdadm virtual disk.  This doesn't allow the RAID to leverage btrfs's
features, which is a bummer, but it *does* make sure your data's safe.
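
For reference, that layered setup looks roughly like this (device names and the array geometry are hypothetical):

# mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdc /dev/sdd /dev/sde /dev/sdf
# mkfs.btrfs /dev/md0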

Glad everything went swimmingly!

-Ken
