
Installing a software RAID 10 Debian system with LUKS disk encryption

In this guide we will be installing Debian 9 (aka stretch) on a physical server with 4 disks. This machine will serve as a storage/NAS system. We will create a software RAID 10 setup, with LVM and LUKS full-disk encryption. Our goals:

  • Install Debian 9 with RAID 10/LVM/LUKS.
  • Secure SSH.
  • Enable the firewall (UFW).
  • Set up bonding with the two network cards.
  • Set up remote system unlock with dropbear and initramfs.
  • Set up disk monitoring with smartmontools and mdadm.
  • Set up kexec for faster reboots.

Computer specs

  • Type: HP ProLiant MicroServer
  • CPU: AMD Turion(tm) II Neo N40L Dual-Core Processor
  • RAM: 2GB
  • Disks: 4x3TB SATA
  • Network:
    • Broadcom Limited NetXtreme BCM5723 Gigabit Ethernet PCIe (on board)
    • Intel Corporation 82574L Gigabit Network Connection (extra)

Assumptions

  • Server IP: 192.168.1.10
  • Netmask: 255.255.255.0
  • Gateway IP: 192.168.1.1
  • DNS IP: 192.168.1.1
  • Hostname: storage.example.com

Install Debian stretch

Basic Settings

It would probably be clearer with screenshots for each step, but this was an installation on a physical server and taking photos of every step conflicts with my laziness :). Just follow the instructions and you will be fine.

  • Choose: Install
  • Language: English
  • Country: other
  • Europe: Cyprus
  • Country to base default locale settings: United States
  • Keymap to use: American English
  • Primary network interface: enp3s0: Broadcom Limited NetXtreme BCM5723 Gigabit Ethernet PCIe
  • Let it get an IP from DHCP
  • Hostname: storage
  • Domain name: example.com
  • Root password: SomethingBigAndUnpredictable
  • Re-enter password to verify: SomethingBigAndUnpredictable
  • Full name: Sysadmin
  • Username: admin
  • Choose a password for the new user: AlsoSomethingBigAndUnpredictable
  • Re-enter password to verify: AlsoSomethingBigAndUnpredictable
  • Select your time zone: Asia/Nicosia
  • Partitioning method: Manual

Feel free to adjust the above according to your own preferences.

Partitioning

There are 4 disks of 3TB each (3.0 TB SATA):

  • SCSI1 (0,0,0) (sda)
  • SCSI2 (0,0,0) (sdb)
  • SCSI3 (0,0,0) (sdc)
  • SCSI4 (0,0,0) (sdd)

They are in fact SATA disks, not SCSI; that is just how the installer labels them.

Partition the devices

Then create a RAID partition for /boot:

  • Select the free space of sda and ‘Enter’
  • Create a new partition
  • New partition size: 512 MB
  • Location of new partition: Beginning
  • Use as: physical volume for RAID
  • Done setting up the partition

Lastly create the raid partition to be used by the encrypted volume:

  • Select the free space of sda and ‘Enter’
  • Create a new partition
  • New partition size: 3.0 TB
  • Location of new partition: Beginning
  • Use as: physical volume for RAID
  • Done setting up the partition

Repeat the above steps for sdb, sdc and sdd.

Setup Software RAID 10

First select ‘Configure software RAID’ and follow these steps:

  • Write the changes to the storage devices and configure RAID? Yes

Then we create the software RAID (MD) devices (a rough mdadm command-line equivalent is sketched after these steps). First we create device md0 for /boot:

  • Create MD Device
  • RAID10
  • Number of active devices in the RAID10 array: 4
  • Number of spare devices in the RAID10 array: 0
  • Active devices for the RAID10 array (use ‘Space bar’ to select)
    • /dev/sda2
    • /dev/sdb2
    • /dev/sdc2
    • /dev/sdd2
  • Press ‘Continue’ when done.

Then we create the software RAID device to be used for the encrypted volume (md1):

  • Create MD Device
  • RAID10
  • Number of active devices in the RAID10 array: 4
  • Number of spare devices in the RAID10 array: 0
  • Active devices for the RAID10 array (use the ‘Space bar’ to select)
    • /dev/sda3
    • /dev/sdb3
    • /dev/sdc3
    • /dev/sdd3
  • Press ‘Continue’ when done.
  • Press ‘Finish’ when done.
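
For reference, the RAID step above is roughly equivalent to the following mdadm commands (a sketch only, using the partition names created earlier; the installer performs all of this for you):

    # mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
    # mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
    # cat /proc/mdstat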

Create the /boot volume

When done press ‘Finish partitioning and write changes to disk’.

When finished you will see a ‘RAID10 device #0 1GB Software RAID device’:

  • Select: #1 1.0GB
  • Use as: Ext4 journaling file system
  • Mount point: /boot
  • Done setting up the partition

Setup the encrypted volume

We will be using the software RAID /dev/md1 device for the encrypted volume.

Now select ‘Configure encrypted volumes’ and follow these steps:

  • Write the changes to disk and configure encrypted volumes? Yes
  • Create encrypted volumes

  • Select: /dev/md1
  • Erase data: yes (this will take a long time)
  • Done setting up the partition

  • Write the changes to disk and configure encrypted volumes? Yes

  • Finish
  • Encryption passphrase: MyVeryLongEncryptionPassphrase
  • Re-enter the passphrase to verify: MyVeryLongEncryptionPassphrase
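
Under the hood this is plain LUKS on top of the RAID device. A rough command-line equivalent (illustration only; the installer does all of this, including the optional wipe with random data) would be:

    # cryptsetup luksFormat /dev/md1
    # cryptsetup luksOpen /dev/md1 md1_crypt
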
Setup LVM

Next we select the ‘Configure the Logical Volume Manager’ option and follow these steps:

  • Write the changes to disks and configure LVM? Yes
  • Create volume group
  • Volume group name: VG00
  • Devices for the new volume group:
    • /dev/mapper/md1_crypt

Then we create the Logical Volumes (LV). First let’s create a SWAP volume:

  • Create logical volume
  • Volume group: VG00
  • Logical volume name: SWAP
  • Logical volume size: 2048MB (2GB is more than enough for this system)

Lastly we create the system (ROOT) volume. On an enterprise installation we might use separate volumes for /usr, /home, /var, etc., but for a home installation a single volume is fine.

  • Create logical volume
  • Volume group: VG00
  • Logical volume name: ROOT
  • Logical volume size: 5996818MB (all available space)

Press ‘Finish’ when done.
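
For orientation, this LVM step corresponds roughly to the following commands run against the opened LUKS device (a sketch only; names as chosen above):

    # pvcreate /dev/mapper/md1_crypt
    # vgcreate VG00 /dev/mapper/md1_crypt
    # lvcreate -L 2G -n SWAP VG00
    # lvcreate -l 100%FREE -n ROOT VG00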

    Start the installation

    After all the steps are completed these Logical Volumes will be present on the system:

    • LVM VG VG00, LV ROOT 0 6.0 TB
    • LVM VG VG00, LV SWAP 0 2.0 GB

    Create the ROOT filesystem

    Under the ‘LVM VG VG00, LV ROOT 0 6.0 TB’ line select the ‘#1 6.0TB’ option:

    • Use as: Ext4 journaling file system
    • Mount point: /
    • Done setting up this partition

    Create the SWAP space

    Under the ‘LVM VG VG00, LV SWAP 0 2.0 GB’ line select the ‘#1 2.0GB’ option:

    • Use as: swap area
    • Done setting up this partition

    Now we are ready to write the changes and start the installation. Press the ‘Finish partitioning and write changes to disk’ option to continue:

    • Write the changes to disks? Yes

    Wait for the base install to finish. Then select a mirror country close to you. There are no Debian mirrors in Cyprus, so I use the UK:

    • Debian archive mirror country: United Kingdom
    • Debian archive mirror: ftp.uk.debian.org
    • HTTP proxy: (none)

    Wait for the APT configuration to finish.

    • Participate in the package usage survey: no
    • Choose software to install:
      • SSH server
      • standard system utilities
    • Wait while the software is installing.
    • Install the GRUB boot loader to the master boot record.
    • Device for boot loader installation:
      • /dev/sda

    Wait for the installation to finish and reboot. Remember to remove the USB installation media during the reboot.

    Post install steps

    During start-up you will see the ‘Please unlock md1_crypt’ prompt. Type your LUKS passphrase to unlock the disk and continue.

    Update and Upgrade

    Log in as root:

    # apt update && apt -y dist-upgrade
    

    Install essential packages

    # apt -y install vim htop multitail ntp byobu ufw unattended-upgrades downtimed
    

    Secure ssh

    You need to generate an SSH key pair on your PC, if you don’t have one already (you should!):

    $ ssh-keygen -b 4096
    

    Copy the public key:

    $ cat ~/.ssh/id_rsa.pub
    

    Paste the public key at the end of the /root/.ssh/authorized_keys file on your server and try to log in from your PC:

    $ ssh root@192.168.1.10
    

    Make some final adjustments to the SSH server configuration (/etc/ssh/sshd_config) by changing these values:

    Port 2233
    PasswordAuthentication no
    

    Restart SSH:

    # systemctl restart ssh.service
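
    Before closing your current session, verify from your PC that key-based login works on the new port:

    $ ssh -p 2233 root@192.168.1.10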
    

    Enable the UFW firewall

    We are using port 2233 for SSH so we need to allow that and enable the firewall:

    # ufw allow 2233/tcp
    # ufw enable
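
    You can review the resulting rule set with:

    # ufw status verbose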
    

    Setup bonding

    Since we have two Ethernet cards, we can take advantage of the Linux bonding feature and join them into a single logical interface. We will be using the adaptive load balancing mode (balance-alb), which load-balances both transmitted and received IPv4 traffic and requires no configuration on the switch side.

    First we need to install ifenslave:

    # apt -y install ifenslave
    

    Set this up in /etc/network/interfaces:

    # This file describes the network interfaces available on your system
    # and how to activate them. For more information, see interfaces(5).
    
    source /etc/network/interfaces.d/*
    
    # The loopback network interface
    auto lo
    iface lo inet loopback
    
    # enp2s0 is manually configured, and slave to the "bond0" bonded NIC
    auto enp2s0
    iface enp2s0 inet manual
        bond-master bond0

    # enp3s0 is also manually configured, thus creating a 2-link bond.
    auto enp3s0
    iface enp3s0 inet manual
        bond-master bond0
    
    # bond0 is the bonded NIC and can be used like any other normal NIC.
    # bond0 is configured using static network information.
    auto bond0
    iface bond0 inet static
        address 192.168.1.10
        gateway 192.168.1.1
        netmask 255.255.255.0
    
        # bond0 uses adaptive load balancing
        bond-mode 6
        bond-miimon 100
        bond-slaves enp2s0 enp3s0
    

    An ifup bond0 should bring the bonded interface up. Or you can just reboot.
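
    To verify that both slaves have joined the bond and that the adaptive load balancing mode (balance-alb) is active, check the bonding status in /proc:

    # cat /proc/net/bonding/bond0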

    Setup remote-unlock with dropbear

    The server will be a headless system, located in a difficult-to-access location, so we need a way to unlock it when a power failure occurs. The most convenient way to do this is to use a Mandos server, but convenience comes at a cost. A safer and still easy way is to run dropbear during boot (from the initrd). The weak point of this solution is that after a reboot the server basically stays offline until the sysadmin manually unlocks it.

    First we install dropbear for initrd:

    # apt -y install dropbear-initramfs
    

    Then we set a custom SSH port for dropbear. This should be different from the custom SSH port we used earlier. Change the dropbear port to 2244 in /etc/dropbear-initramfs/config:

    DROPBEAR_OPTIONS="-p 2244"
    

    Add the static IP configuration in the initramfs-tools configuration file (/etc/initramfs-tools/initramfs.conf):

    IP=192.168.1.10::192.168.1.1:255.255.255.0:storage:enp3s0:off
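
    For reference, the fields in this line follow the kernel's ip= syntax; the empty second field is the (unused) boot server IP and 'off' disables autoconfiguration:

    IP=<client-ip>:<server-ip>:<gateway-ip>:<netmask>:<hostname>:<device>:<autoconf>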
    

    Copy the authorized_keys file in /etc/dropbear-initramfs:

    # cp /root/.ssh/authorized_keys /etc/dropbear-initramfs/
    

    Regenerate the initrd file:

    # update-initramfs -u
    

    Now reboot and ssh to it to test it:

    $ ssh -p 2244 root@192.168.1.10
    

    If your public keys are in place you will get a BusyBox shell. Run the cryptroot-unlock command, supply your LUKS passphrase, and the system will continue booting into the encrypted system.

    Setup a local MTA for notifications

    We will be using our main mailserver as a smarthost for mail to go through.

    Install the postfix MTA and the mail utility:

    # apt -y install postfix mailutils
    

    Answer these questions:

    • General type of mail configuration: Internet with smarthost
    • System mail name: storage.example.com
    • SMTP relay host (blank for none): smtp.example.com
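
    These answers should end up as settings roughly like the following in /etc/postfix/main.cf (shown for orientation only; the exact generated file may differ):

    myhostname = storage.example.com
    relayhost = smtp.example.com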

    Test it:

    # echo 'Testing #1' | mail -s 'Test #1' user@example.com
    

    If you get a mail in your mailbox then everything is set. If not, extra configuration may be needed on the smarthost. Contact the sysadmin of the smarthost, or check the logs if you have access to it.

    Setup pro-active disk monitoring

    Install smartmontools:

    # apt -y install smartmontools
    

    Enable S.M.A.R.T., offline testing, attribute autosave, and scheduled short and long self-tests on all 4 devices. Add these lines to /etc/smartd.conf:

    /dev/sda -a -d sat -o on -S on -s (S/../.././02|L/../../6/03) -m user@example.com -M exec /usr/share/smartmontools/smartd-runner
    /dev/sdb -a -d sat -o on -S on -s (S/../.././02|L/../../6/03) -m user@example.com -M exec /usr/share/smartmontools/smartd-runner
    /dev/sdc -a -d sat -o on -S on -s (S/../.././02|L/../../6/03) -m user@example.com -M exec /usr/share/smartmontools/smartd-runner
    /dev/sdd -a -d sat -o on -S on -s (S/../.././02|L/../../6/03) -m user@example.com -M exec /usr/share/smartmontools/smartd-runner
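
    Before relying on smartd, it may be worth confirming that each disk actually answers SMART queries, for example:

    # smartctl -H -d sat /dev/sda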
    

    Restart smartmontools:

    # systemctl restart smartmontools.service
    

    Setup Software RAID10 monitoring

    We also need to set up monitoring for the software RAID. Add your email address to the /etc/mdadm/mdadm.conf file:

    MAILADDR user@example.com
    

    Restart the mdmonitor service:

    # systemctl restart mdmonitor.service
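
    You can ask mdadm to send a one-off test message for every array, to verify that notifications actually reach your mailbox:

    # mdadm --monitor --scan --oneshot --test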
    

    Setup kexec for faster reboots

    Kexec is a Linux kernel mechanism that can load a fresh kernel from a running system. This results in a “reboot” without actually rebooting the computer: the system loads a new kernel and appears “rebooted”, but skips the BIOS/UEFI initialization, resulting in faster reboots.

    Install kexec-tools:

    # apt -y install kexec-tools
    

    The ‘Should kexec-tools handle reboots (sysvinit only)?’ question is related only to sysvinit systems. Since we are using systemd, it has no effect in our case.

    Now, if you want to reboot, instead of running reboot you can run systemctl kexec. The latter will restart the system without going through the BIOS/UEFI, POST, etc., so system downtime is minimized.
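
    So a fast “reboot” becomes:

    # systemctl kexec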

    And we are done! Store your server in a protected location, add a UPS for power backup and you are ready.

    References

    • https://wiki.debian.org/Bonding
    • https://help.ubuntu.com/community/UbuntuBonding
    • https://www.theo-andreou.org/?p=1579
    • https://wiki.recompile.se/wiki/Mandos
    • http://forums.ayksolutions.com/forum/documentation/knowledgebase/general-server-questions/641-proactively-monitoring-hard-drive-health-using-smartd

    Increasing the disk size of a Linux VM

    In this guide we examine how to increase the disk size of a Linux VM, when the need arises.

    Note
    Make sure you back up everything you have on your system before trying this guide. This is an advanced HOWTO and it can break your system irrecoverably if you make a critical mistake!

    This guide assumes that you are using the Linux Logical Volume Manager (LVM) to manage your storage. If you are new to the concept of LVM you can study the excellent LVM HOWTO from The Linux Documentation Project website.

    Even though it may be possible to resize a Linux system without using LVM, an LVM setup is highly recommended. No matter if you are working on a physical or virtual machine, LVM is the preferred method of storage management in Linux, since it simplifies tasks related to storage, including volume resizing.

    Another assumption is that the disk is using the legacy MBR partition table format. But the guide can easily be adapted to disks using a GPT format.

    Increasing the size of the virtual disk

    In this guide we are using VMware but this section can be easily adapted to different virtualization systems.

    1. Before increasing the disk size, it is a good idea to consolidate the snapshots of your VM. Right click on the VM and go to Snapshots -> Consolidate:

      Consolidate Snapshots

      Press ‘OK’ when asked to do so. When the confirmation dialog appears, press ‘Yes’:

      Confirm Consolidate

      When the operation is completed (check the ‘Recent Tasks’ pane) move to the next step.

    2. Right click on the VM again and go to Edit Settings. From here, choose the disk you wish to enlarge:

      Enlarge Disk

      Change the size to the desired value; in my case I will grow a 10G hard disk to 65G. Press ‘OK’ when done.

    Now we should move to our Linux system.

    Force Linux to detect the changes in the disk size

    1. Check the detected disk size:
      # cat /proc/partitions
      major minor  #blocks  name
      
        8        0   10485760 sda
        8        1     248832 sda1
        8        2          1 sda2
        8        5   10233856 sda5
       11        0    1048575 sr0
      254        0    9760768 dm-0
      254        1     471040 dm-1
      

      As you can see the primary disk (sda) has a size of 10485760KB, which translates to 10GB:

      # echo '10485760/1024/1024' | bc -l
      10.00000000000000000000
      
    2. Find the SCSI subsystem buses:
      # ls /sys/class/scsi_device/
      0:0:0:0  2:0:0:0
      

      0:0:0:0 is the primary bus.

    3. Rescan for disk changes:

    # echo 1 > /sys/class/scsi_device/0\:0\:0\:0/device/rescan
    

    4. Check the new size:
    # cat /proc/partitions
    major minor  #blocks  name
    
      8        0   68157440 sda
      8        1     248832 sda1
      8        2          1 sda2
      8        5   10233856 sda5
     11        0    1048575 sr0
    254        0    9760768 dm-0
    254        1     471040 dm-1
    

    The size is now 65G:

    # echo '68157440/1024/1024' | bc -l
    65.00000000000000000000
    
    Resize the partition used by the LVM Physical Volume (PV)

    1. Check which partition is used by the PV:
      # pvs
       PV         VG         Fmt  Attr PSize PFree
       /dev/sda5  ubuntu-vg  lvm2 a--  9,76g    0
      

      So only the /dev/sda5 partition is used by LVM.

    2. Backup the partition table:

      # sfdisk -d /dev/sda > sda-part.mbr
      

      Now you need to save that file elsewhere, because if the partition table goes down the drain you will have no way to access the backup file. You could use scp to transfer the file to another system:

      # scp sda-part.mbr user@another-server:
      

      If you need to restore the partition table, boot a recovery/live CD or USB and do:

      # scp user@another-server:sda-part.mbr .
      # sfdisk /dev/sda < sda-part.mbr
      

      Note
      You can use sgdisk for disks with GPT tables.
      Backup: sgdisk -b sda-part.gpt /dev/sda
      Restore: sgdisk -l sda-part.gpt /dev/sda

    3. Resize the partition used by the PV.
    • Check the size of the partition:

      # sfdisk -d /dev/sda
      Warning: extended partition does not start at a cylinder boundary.
      DOS and Linux will interpret the contents differently.
      # partition table of /dev/sda
      unit: sectors
      
      /dev/sda1 : start=     2048, size=   497664, Id=83, bootable
      /dev/sda2 : start=   501758, size= 20467714, Id= 5
      /dev/sda3 : start=        0, size=        0, Id= 0
      /dev/sda4 : start=        0, size=        0, Id= 0
      /dev/sda5 : start=   501760, size= 20467712, Id=8e
      

    • Mark down the details of the sda2 and sda5 partitions in the following table:

      Partition   Start Sector   Size in KB   Size in Sectors
      sda2        501758         10233857     20467714
      sda5        501760         10233856     20467712

      Note
      Each sector is 512 bytes, so the number of sectors is double the number of KBytes. The logical sda5 partition is 1KB (or 2 sectors) smaller than the extended sda2 partition.

    • Calculate the sizes of the new partitions:

      The total size of the sda disk is 68157440KB which translates to 136314880 Sectors. So the new size (in Sectors) of sda2 would be:

      # echo 136314880-501758 | bc -l
      135813122
      

      The size, in sectors, of sda5 would be:

      # echo 136314880-501760 | bc -l
      135813120
      

      According to the calculations above, the new table with the partition details would be:

      Partition   Start Sector   Size in KB   Size in Sectors
      sda2        501758         67906561     135813122
      sda5        501760         67906560     135813120
    • Resize the sda2 (extended) and sda5 partitions.

      Copy the sda-part.mbr file to sda-part-new.mbr and make the following changes to sda-part-new.mbr:

      # partition table of /dev/sda
      unit: sectors
      /dev/sda1 : start=     2048, size=    497664, Id=83, bootable
      /dev/sda2 : start=   501758, size= 135813122, Id= 5
      /dev/sda3 : start=        0, size=         0, Id= 0
      /dev/sda4 : start=        0, size=         0, Id= 0
      /dev/sda5 : start=   501760, size= 135813120, Id=8e
      

      Now apply these changes to the MBR using sfdisk:

      # sfdisk --no-reread /dev/sda < sda-part-new.mbr
      

      Ignore any warnings for now.

    • Verify the new partition table:

      # sfdisk -d /dev/sda
      Warning: extended partition does not start at a cylinder boundary.
      DOS and Linux will interpret the contents differently.
      # partition table of /dev/sda
      unit: sectors
      
      /dev/sda1 : start=     2048, size=   497664, Id=83, bootable
      /dev/sda2 : start=   501758, size=135813122, Id= 5
      /dev/sda3 : start=        0, size=        0, Id= 0
      /dev/sda4 : start=        0, size=        0, Id= 0
      /dev/sda5 : start=   501760, size=135813120, Id=8e
      

      It looks correct.

    • Verify that the linux kernel has been notified of the changes:

      # cat /proc/partitions 
      major minor  #blocks  name
      
         8        0   68157440 sda
         8        1     248832 sda1
         8        2          1 sda2
         8        5   10233856 sda5
        11        0    1048575 sr0
       254        0    9760768 dm-0
       254        1     471040 dm-1
      

      It looks like the system still sees the old partition size. You could use a utility like partprobe, kpartx or even sfdisk to force the kernel to re-read the new partition table:

      # sfdisk -R /dev/sda
      BLKRRPART: Device or resource busy
      This disk is currently in use.
      

      Alas, if the disk is in use the kernel will refuse to re-read the partition table. In that case just schedule a reboot and try again.
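
      If the partprobe utility (from the parted package) is available, it is worth trying it before scheduling a reboot:

      # partprobe /dev/sda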

      After the system reboot:

      # cat /proc/partitions
      major minor  #blocks  name
      
         8        0   68157440 sda
         8        1     248832 sda1
         8        2          1 sda2
         8        5   67906560 sda5
        11        0    1048575 sr0
       254        0    9760768 dm-0
       254        1     471040 dm-1
      

      So the new size of the sda5 partition is 64,76GB:

      # echo '67906560/1024/1024' | bc -l
      64.76074218750000000000
      

      If the partition size has increased, we can move on to the next step.

    Resize the Physical Volume (PV).

    1. Check the size of the Physical Volume:

      # pvs
       PV         VG        Fmt  Attr PSize PFree
       /dev/sda5  ubuntu-vg lvm2 a--  9,76g    0 
      

      So the size of the PV is still 9,76GB.

    2. Resize the PV:

      # pvresize /dev/sda5
       Physical volume "/dev/sda5" changed
       1 physical volume(s) resized / 0 physical volume(s) not resized
      

    3. Verify that the PV has been resized:
      # pvs
       PV         VG        Fmt  Attr PSize  PFree 
       /dev/sda5  ubuntu-vg lvm2 a--  64,76g 55,00g
      

      So the new size of the PV is 64,8GB.

    Resize the logical volume.

    1. Check the current size of the logical volume (used for the root filesystem):

      # lvs
       LV     VG        Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert
       root   ubuntu-vg -wi-ao--   9,31g
       swap_1 ubuntu-vg -wi-ao-- 460,00m
      

      The root volume is still at 9,3GB.

    2. Check the free space:

      # vgs
       VG        #PV #LV #SN Attr   VSize  VFree 
       ubuntu-vg   1   2   0 wz--n- 64,76g 55,00g
      

    3. Resize the root logical volume:
      # lvresize -L +55,00g /dev/mapper/ubuntu-vg-root
       Extending logical volume root to 64,31 GiB
       Logical volume root successfully resized
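
      If you prefer not to compute the exact amount of free space, an equivalent approach (assuming the VG/LV names shown by lvs above) is to extend the LV over all remaining free extents:

      # lvresize -l +100%FREE /dev/ubuntu-vg/root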
      
    4. Verify LV resize:
      # lvs
      LV     VG        Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert
      root   ubuntu-vg -wi-ao--  64,31g
      swap_1 ubuntu-vg -wi-ao-- 460,00m
      

      The root logical volume size is now at 64,3GB.

    Resize the root filesystem.

    1. Check the current size of the root filesystem:

      # df -hT
      Filesystem                  Type      Size  Used Avail Use% Mounted on
      rootfs                      rootfs    9,2G  2,2G  6,6G  25% /
      udev                        devtmpfs   10M     0   10M   0% /dev
      tmpfs                       tmpfs     101M  204K  101M   1% /run
      /dev/mapper/ubuntu-vg-root  ext4      9,2G  2,2G  6,6G  25% /
      tmpfs                       tmpfs     5,0M     0  5,0M   0% /run/lock
      tmpfs                       tmpfs     201M     0  201M   0% /run/shm
      /dev/sda1                   ext2      228M   18M  199M   9% /boot
      

      So the root filesystem is still at 9,2GB.

    2. Resize the file system:

      # resize2fs /dev/mapper/ubuntu-vg-root
      resize2fs 1.42.5 (29-Jul-2012)
      Filesystem at /dev/mapper/ubuntu-vg-root is mounted on /; on-line resizing required
      old_desc_blocks = 1, new_desc_blocks = 5
      Performing an on-line resize of /dev/mapper/ubuntu-vg-root to 16858112 (4k) blocks.
      The filesystem on /dev/mapper/ubuntu-vg-root is now 16858112 blocks long.
      

    3. Verify that the filesystem has been resized:
      # df -hT
      Filesystem                  Type      Size  Used Avail Use% Mounted on
      rootfs                      rootfs     64G  2,2G   58G   4% /
      udev                        devtmpfs   10M     0   10M   0% /dev
      tmpfs                       tmpfs     101M  204K  101M   1% /run
      /dev/mapper/ubuntu-vg-root  ext4       64G  2,2G   58G   4% /
      tmpfs                       tmpfs     5,0M     0  5,0M   0% /run/lock
      tmpfs                       tmpfs     201M     0  201M   0% /run/shm
      /dev/sda1                   ext2      228M   18M  199M   9% /boot
      

    So now you have 55GB of additional storage on your root partition, to satisfy your increasing storage needs.

    References

    • https://ma.ttias.be/increase-a-vmware-disk-size-vmdk-formatted-as-linux-lvm-without-rebooting/
    • http://gumptravels.blogspot.com/2009/05/using-sfdisk-to-backup-and-restore.html
    • http://askubuntu.com/questions/57908/how-can-i-quickly-copy-a-gpt-partition-scheme-from-one-hard-drive-to-another