Docu review done: Tue 17 Oct 2023 10:53:31 AM CEST

Misc

Command / Description

pydf
    improved version of df
udisksctl power-off -b /dev/sda
    useful before unplugging an external or hot-pluggable disk

Rescan extended volume

$ echo '1' > /sys/class/<type>_disk/<deviceAddr>/device/rescan # will force the system to rescan the physical disk live
$ echo '1' > /sys/class/scsi_disk/0\:0\:0\:0/device/rescan
$ echo '1' > /sys/block/<disk>/device/rescan                   # works also for LUNs

Physical volume

Command / Description

pvs
    shows information about physical volumes
pvresize /dev/[device]
    resizes the PV to the maximum size assigned on the physical side, e.g. pvresize /dev/sdb
pvcreate /dev/[new disk/partition]
    creates a physical volume on a disk or partition
pvremove /dev/[new disk/partition]
    removes the physical volume from a disk
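
A minimal end-to-end sketch of the commands above, assuming /dev/sdb is a spare disk that was grown on the storage side (names are placeholders):

$ pvcreate /dev/sdb        # initialize the disk as a physical volume
$ pvs                      # verify it shows up
$ pvresize /dev/sdb        # grow the PV after the disk itself was grown
$ pvremove /dev/sdb        # remove the PV label again (only if unused)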

Volume group

Command / Description

vgcreate
    creates a VG, e.g. vgcreate [VGname] /dev/[disk|partition]
vgscan
    shows all volume groups
vgs
    shows information about volume groups
vgdisplay
    shows all needed information about a volume group
vgchange -ay
    activates all VGs
vgchange -a y [vgname]
    activates a dedicated VG
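
A short sketch combining the commands above, assuming /dev/sdb already carries a PV (datavg is a placeholder name):

$ vgcreate datavg /dev/sdb    # create the VG on the PV
$ vgs                         # summary of the new VG
$ vgchange -a y datavg        # activate just this VG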

Logical volume

Command / Description

lvs
    shows information about logical volumes
lvs --segments
    shows information about logical volumes and the segment type (linear/striped)
lvs -a -o +devices
    shows information about logical volumes and the assigned disks/partitions
lvs -a -o +devices --segments
    shows information about logical volumes, the assigned disks/partitions and the segment type (linear/striped)
lvdisplay [lvname or empty]
    shows all needed information about a logical volume
lvcreate -n [lvname] --size [size] [vgname]
    creates an LV, e.g. lvcreate -n temp --size 10G rootvg
lvcreate -n [lvname] --extents 100%FREE [vgname]
    creates an LV using all remaining free space, e.g. lvcreate -n temp --extents 100%FREE rootvg
lvcreate -i[Nr] -I[Nr] -n [lvname] --size [size] [vgname]
    creates an LV in stripe mode with -i[Nr] disks from the VG and -I[Nr] stripe size (kB), e.g. lvcreate -i3 -I4 -n temp --size 10G rootvg
lvremove [lv]
    removes an LV, e.g. lvremove /dev/mapper/rootvg-temp
lvextend -L +[size] [lv]
    extends an LV, e.g. lvextend -L +4G /dev/mapper/rootvg-temp
lvextend -l +100%FREE [lv]
    extends an LV over all remaining free space in the VG
lvextend -l +100%FREE [lv] -r
    -r grows the filesystem right after the LV extension
lvreduce -L -[size] [lv]
    reduces an LV, e.g. lvreduce -L -8G /dev/rootvg/temp
lvrename [vgname] [oldlvname] [newlvname]
    renames an LV, e.g. lvrename datavg datalvnew datalv
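
A quick sketch of a typical LV lifecycle with the commands above, assuming a VG called rootvg with enough free space (all names are placeholders):

$ lvcreate -n temp --size 10G rootvg    # create a 10G LV
$ lvextend -L +4G /dev/rootvg/temp      # grow it by 4G (grow the filesystem afterwards)
$ lvrename rootvg temp scratch          # rename temp to scratch
$ lvremove /dev/rootvg/scratch          # remove it again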

Filesystem for ext-fs

Command / Description

resize2fs [lv]
    applies the size change after an LV extend or reduce, e.g. resize2fs /dev/mapper/rootvg-temp
resize2fs [lv] [size]
    resizes the filesystem to a given size, e.g. resize2fs /dev/rootvg/temp 1G
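
Extending the LV and the ext filesystem on it usually go together; a sketch, assuming /dev/rootvg/temp carries an ext4 filesystem:

$ lvextend -L +4G /dev/rootvg/temp   # grow the LV
$ resize2fs /dev/rootvg/temp         # grow the ext4 filesystem to the new LV size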

Filesystem for xfs-fs

Command / Description

xfs_info [lv]
    shows information about the filesystem on the LV, e.g. block size
xfs_growfs [lv]
    applies the size change after an LV extend, e.g. xfs_growfs /dev/mapper/rootvg-temp
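
The same flow for XFS, which can only grow, not shrink; a sketch, assuming /dev/rootvg/temp carries a mounted XFS filesystem:

$ lvextend -L +4G /dev/rootvg/temp   # grow the LV
$ xfs_growfs /dev/rootvg/temp        # grow the mounted XFS filesystem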

Filesystem

Command / Description

mkfs.[fstype] /dev/[path/to/device]
    creates a filesystem on the LV
wipefs /dev/[path/to/device]
    shows the filesystem signatures on the device; no output means no filesystem present
wipefs -a /dev/[path/to/device]
    erases all filesystem signatures from the device
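
A small check-then-create sketch, assuming /dev/rootvg/temp is a freshly created LV:

$ wipefs /dev/rootvg/temp     # no output means no filesystem signature present
$ mkfs.xfs /dev/rootvg/temp   # create an XFS filesystem on the LV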

Renaming VG

# This will rename the vg on the system
$ vgrename -v <oldVGname> <newVGname>

Now replace the old VG name in /etc/fstab and /boot/grub/grub.cfg, then update GRUB and the initramfs:

$ update-grub
$ update-initramfs -u
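
The replacement in /etc/fstab and /boot/grub/grub.cfg can also be scripted with sed; a sketch, assuming the old VG name does not collide with other strings in those files (verify with grep first):

$ grep oldVGname /etc/fstab /boot/grub/grub.cfg     # check what will change
$ sed -i 's/oldVGname/newVGname/g' /etc/fstab /boot/grub/grub.cfg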

Generate VG with striped LV

Preparation

If the disks are not already part of a VG, quickly add them or create a new VG. In this case we use sdb, sdc and sdd and create a new VG.

$ pvcreate /dev/sd[bcd]
$ vgcreate <vgname> /dev/sd[bcd]

Now that the VG is ready, we create the LV.

$ lvcreate --size 10G -i3 -n <lvname> <vgname>

And that is it: you now have a striped LV in your newly created VG.

To verify that everything is correct, use lvs to display information about the existing LVs:

$ lvs --segments -a -o +devices
  LV     VG     Attr       #Str Type    SSize   Devices
  testlv testvg -wi-ao----    3 striped   10g   /dev/sdb(0),/dev/sdc(0),/dev/sdd(0)

The last step is to create a filesystem on the LV; then we are ready to mount it and write data to it.

$ mkfs.xfs /dev/<vgname>/<lvname>

Enforcing unmount of unreachable mount points

Using umount with force and lazy detach:

$ umount -f -l /path/to/mount

Using fuser to kill the blocking processes:

$ fuser -k -9 /path/to/mount
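
To first see which processes block the mount point (assuming fuser from psmisc; -v is verbose, -m treats the path as a mounted filesystem):

$ fuser -vm /path/to/mount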

Other solution:

Be careful, this triggers an instant reboot of the system.

$ echo 1 > /proc/sys/kernel/sysrq
$ echo b > /proc/sysrq-trigger

For more details about sysrq have a look here > linux magic system request

lvconvert mirror issues

Issue 1: the source LV has a size (in extents) which cannot be divided by the number of stripes

$ lvconvert -m1 --type mirror --stripes 3 --mirrorlog core /dev/logvg/loglv /dev/sd[def]
  Using default stripesize 64.00 KiB
  Number of extents requested (71680) needs to be divisible by 3.
  Unable to allocate extents for mirror(s).

Solution: increase the LV size to make it divisible.
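
For the example above: 71680 extents leave a remainder of 1 when divided by 3, so adding 2 extents makes the count divisible (a sketch using the names from the error output):

$ lvextend -l +2 /dev/logvg/loglv   # 71682 extents are divisible by 3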

Issue 2: the destination disks (sum of all stripes) do not contain enough space to cover the old LV:

$ lvconvert -m1 --type mirror --stripes 3 --mirrorlog core /dev/logvg/loglv /dev/sd[def]
  Using default stripesize 64.00 KiB
  Insufficient free space: 76800 extents needed, but only 76797 available
  Unable to allocate extents for mirror(s).

Solution: increase the size of the stripe disks.

Migrate linear LV to striped LV

$ pvcreate /dev/sd[def]
# add the new disks to the vg
$ vgextend datavg /dev/sd[def]

Create a mirror from the single disk to the striped disks (-m 1 and --mirrors 1 are the same).

Run lvconvert in a screen session (+ multiuser mode) because it can take a while:

$ lvconvert --mirrors 1 --type mirror --stripes 3 --mirrorlog core /dev/datavg/datalv /dev/sd[def]
  Using default stripesize 64.00 KiB
  datavg/datalv: Converted: 0.0%
  datavg/datalv: Converted: 30.4%
  datavg/datalv: Converted: 59.4%
  datavg/datalv: Converted: 85.3%
  datavg/datalv: Converted: 100.0%

IMPORTANT: Cpy%Sync must be at 100; only then can you continue.

$ lvs -a -o +devices
  LV                  VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
  datalv              datavg rwi-aor--- 1.49t                                      100.00           datalv_mimage_0(0),datalv_mimage_1(0)
  [datalv_mimage_0]   datavg Iwi-aor--- 1.49t                                                       /dev/sdc(0)
  [datalv_mimage_1]   datavg Iwi-aor--- 1.49t                                                       /dev/sdd(1),/dev/sde(0),/dev/sdf(0)

After the sync is done, you can remove the linear/initial disk from the mirror (-m 0 and --mirrors 0 are the same).

$ lvconvert --mirrors 0 /dev/datavg/datalv /dev/sdc

Remove the disk from the VG and remove the PV label from the disk:

$ vgreduce datavg /dev/sdc
$ pvremove /dev/sdc
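
As a final check, the LV should now show a single striped segment on the new disks only (names as above):

$ lvs --segments -a -o +devices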

EMERGENCY Extend existing (striped) LV with additional Disk (linear)

Let's assume we have an LV called datavg/datalv which is striped over 3 disks:

$ lvs --segments -a -o +devices
  LV     VG        Attr       #Str Type    SSize    Devices
  datalv datavg    -wi-ao----    3 striped    2.99g /dev/sdc(0),/dev/sdd(0),/dev/sde(0)

Now prepare the disk/partition

$ pvcreate /dev/sdf

Add the new disk to the datavg volume group and extend the datalv volume using the new disk:

$ vgextend datavg /dev/sdf
$ lvextend -i 1 -l +100%FREE -r /dev/datavg/datalv /dev/sdf   # -i 1 adds the new space as a single linear segment

Now you can see in lvs that a linear segment was added to the LV:

$ lvs --segments -a -o +devices
  LV     VG        Attr       #Str Type    SSize    Devices
  datalv datavg    -wi-ao----    3 striped    2.99g /dev/sdc(0),/dev/sdd(0),/dev/sde(0)
  datalv datavg    -wi-ao----    1 linear  1020.00m /dev/sdf(0)

After that you have gained some time to prepare the preferred way of extending striped LVs.

Preferred options to extend an existing (striped) LV

Option 1

In the first option you add new disks to the currently used striped LV. You need to add a minimum of 2 disks to create the extension of the stripe (here the LV is striped over 2 disks). The new disks can be smaller than, equal to, or bigger than the existing ones.

$ lvs --segments -ao +devices
  LV     VG     Attr       #Str Type    SSize   Devices
  datalv datavg -wi-a-----    2 striped   3.99g /dev/sdc(0),/dev/sdd(0)

Prepare the disks/partitions and add the new disks to the datavg volume group:

$ pvcreate /dev/sd[gh]
$ vgextend datavg /dev/sd[gh]

Now you should see free space in the VG:

$ vgs
  VG       #PV #LV #SN Attr   VSize  VFree
  datavg     4   1   0 wz--n-  5.98g  1.99g

Perform the extension of the striped LV:

$ lvextend datavg/datalv -L <size>   # in our case -L 5.98g, as this is the full size of the VG

Now you can see that there are two striped segments for the LV datalv:

$ lvs --segments -ao +devices
  LV     VG     Attr       #Str Type    SSize   Devices
  datalv datavg -wi-a-----    2 striped   3.99g /dev/sdc(0),/dev/sdd(0)
  datalv datavg -wi-a-----    2 striped   1.99g /dev/sdg(0),/dev/sdh(0)

In lvdisplay <vgname>/<lvname> you will find the full size:

$ lvdisplay datavg/datalv | grep size -i
  LV Size                5.98 GiB

Option 2

In the second option you will at first use double the disk space on the backend storage, since we build up a second stripe set and mirror all the data from the old one to the new one. If you have plenty of time, use this option, as you will end up with a clean disk setup afterwards.

Prepare the disks/partitions and add the new disks to the datavg volume group:

$ pvcreate /dev/sd[ghij]
$ vgextend datavg /dev/sd[ghij]

Create a mirror of the existing datalv volume (-m 1 and --mirrors 1 are the same):

$ lvconvert -m 1 --type mirror --stripes 4 --mirrorlog core /dev/datavg/datalv /dev/sd[ghij]
  Using default stripesize 64.00 KiB
  datavg/datalv: Converted: 0.1%
  datavg/datalv: Converted: 27.8%
  datavg/datalv: Converted: 55.4%
  datavg/datalv: Converted: 84.8%
  datavg/datalv: Converted: 100.0%

Now check lvs to see if the mirror was applied correctly; you should see that the data is being mirrored to the new striped disks.

IMPORTANT: Cpy%Sync must be at 100; only then can you continue.

$ lvs -a -o +devices
  LV                VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
  testlv            datavg    mwi-aom---   3.98g                                    100.00           testlv_mimage_0(0),testlv_mimage_1(0)
  [testlv_mimage_0] datavg    iwi-aom---   3.98g                                                     /dev/sdc(0),/dev/sdd(0),/dev/sde(0)
  [testlv_mimage_0] datavg    iwi-aom---   3.98g                                                     /dev/sdf(0)
  [testlv_mimage_1] datavg    iwi-aom---   3.98g                                                     /dev/sdg(0),/dev/sdh(0),/dev/sdi(0),/dev/sdj(0)

Now you can remove the old and unused disks (-m 0 and --mirrors 0 are the same), remove them from the VG and remove the PV labels:

$ lvconvert -m 0 /dev/datavg/datalv /dev/sd[cdef]
$ vgreduce datavg /dev/sd[cdef]
$ pvremove /dev/sd[cdef]

Extend existing (linear) LV

After you have increased the volume in VMware or elsewhere, check whether the change has been applied on the server:

$ lsblk | grep <device>                           # e.g. lsblk | grep sdb

If it did not extend, just rescan the disk

$ echo '1' > /sys/class/<type>_disk/<deviceAddr>/device/rescan # will force the system to rescan the physical disk live

If lsblk now shows the new size, you are fine; if not, check dmesg. Sometimes it takes a minute or two to become visible, especially on huge devices. Next, extend the physical volume; afterwards pvs should show more free space and you can extend the LV:

$ pvresize /dev/<device>
$ lvextend -L +<size> <lvm>          # or: lvextend -l +100%FREE <lvm>

lvextend reports whether it worked. If it did, you still have to grow the filesystem, for example with xfs_growfs:

$ xfs_growfs <lvm>
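
Putting the whole flow together, a sketch assuming /dev/sdb backs the datavg VG and datalv carries an XFS filesystem (all names are placeholders):

$ echo '1' > /sys/block/sdb/device/rescan    # make the kernel see the new disk size
$ pvresize /dev/sdb                          # grow the PV
$ lvextend -l +100%FREE /dev/datavg/datalv   # grow the LV over the new free space
$ xfs_growfs /dev/datavg/datalv              # grow the filesystem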

Extend existing VG with new partition/disk

Only needed if you are working with a partition:

# create partition
$ fdisk /dev/<disk>
    n ,  p , <number/default is fine> , enter , enter , w

# update the kernel partition information from the disk
$ partx -u /dev/<disk>

Continue here if you just use a full disk:

Create a physical volume on the disk or partition and add the new disk/partition to the VG:

$ pvcreate /dev/<new disk/partition>

$ vgextend <vg> /dev/<new disk/partition>

Remove LV

$ umount /<mountpath>
$ lvremove /dev/<vg>/<lv>
$ update-grub
$ update-initramfs -u

Remove VG

If you still have LVs on it, remove them first (see above):

$ vgremove <vgname>

Remove disk from VG

Display all disks with the needed information using pvs -o+pv_used.

Deactivate the VG and validate that the LVs stored on it are inactive:

$ vgchange -an <VG-name>
$ lvscan

Option 1

Remove the LV (if you don't need it any more) and afterwards remove the device itself:

$ lvremove /dev/<VG-name>/<LV-name>
$ pvremove /dev/sd<a-z> --force --force

If you get the message "Couldn't find device with uuid <uuid>", do the following:

$ vgreduce --removemissing --verbose <VG-name>        # removes non-detectable disks from the VG
$ update-grub
$ update-initramfs -u

Option 2

Move the filesystem and data to another disk first and afterwards remove the disk from the VG:

$ pvmove /dev/sd<a-z> /dev/sd<a-z>
$ vgreduce <VG-name> /dev/sd<a-z>
$ update-grub
$ update-initramfs -u

VG dropped without removing LVs and drives

The error you see could look like this: /dev/<vgname>/<lvname>: read failed after 0 of <value>

To remove the LVs use dmsetup like this:

$ dmsetup remove /dev/<vgname>/<lvname>
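
To find the stale mappings first, list the device-mapper entries (the grep pattern is a placeholder):

$ dmsetup ls | grep <vgname>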

Extend physical volume partitions

$ sudo fdisk /dev/sda

Command (m for help): p

Disk /dev/sda: 268.4 GB, 268435456000 bytes
255 heads, 63 sectors/track, 32635 cylinders, total 524288000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000e49fa

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048   192940031    96468992   83  Linux
/dev/sda2       192942078   209713151     8385537    5  Extended

Command (m for help): d
Partition number (1-2): 1

Command (m for help): d
Partition number (1-2): 2

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1):
Using default value 1
First sector (2048-524287999, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-524287999, default 524287999): 507516925

Command (m for help): p

Disk /dev/sda: 268.4 GB, 268435456000 bytes
255 heads, 63 sectors/track, 32635 cylinders, total 524288000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000e49fa

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048   507516925   253757439   83  Linux

Command (m for help): n
Partition type:
   p   primary (1 primary, 0 extended, 3 free)
   e   extended
Select (default p): e
Partition number (1-4, default 2): 2
First sector (507516926-524287999, default 507516926):
Using default value 507516926
Last sector, +sectors or +size{K,M,G} (507516926-524287999, default 524287999):
Using default value 524287999

Command (m for help): p

Disk /dev/sda: 268.4 GB, 268435456000 bytes
255 heads, 63 sectors/track, 32635 cylinders, total 524288000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000e49fa

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048   507516925   253757439   83  Linux
/dev/sda2       507516926   524287999     8385537    5  Extended

Command (m for help): t
Partition number (1-2): 2

Hex code (type L to list codes): 8e
Changed system type of partition 2 to 8e (Linux LVM)

Command (m for help): p

Disk /dev/sda: 268.4 GB, 268435456000 bytes
255 heads, 63 sectors/track, 32635 cylinders, total 524288000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000e49fa

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048   507516925   253757439   83  Linux
/dev/sda2       507516926   524287999     8385537   8e  Linux LVM

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.

$ sudo reboot

# extend the VG and you're fine

tmpfs/ramfs

ramfs

This memory is generally used by Linux to cache recently accessed files, so that the next time they are requested they can be fetched from RAM very quickly. ramfs uses the same memory and exactly the same mechanism that causes Linux to cache files, with the exception that it is not removed when the memory used exceeds the threshold set by the system.

ramfs file systems cannot be limited in size like a disk-based file system, which is limited by its capacity. ramfs will continue using memory until the system runs out of RAM and likely crashes or becomes unresponsive. This is a problem if the application writing to the file system cannot be limited in total size. Another issue is that you cannot see the size of the file system in df; it can only be estimated by looking at the cache entry in free.

tmpfs

tmpfs is a more recent RAM file system which overcomes many of the drawbacks of ramfs. You can specify a size limit in tmpfs, which will give a 'disk full' error when the limit is reached. This behaviour is exactly the same as a partition of a physical disk.

The size and used amount of space on a tmpfs partition is also displayed in df. The example below creates a 512 MB RAM disk.

Generate tmpfs/ramfs

Assuming that you have a directory somewhere to use as a mount point:

Usage and sample for ramfs:

# Usage
$ mount -t ramfs myramdisk /dest/ination/path

# Sample
$ mount -t ramfs myramdisk /mnt/dir1

Usage and sample for tmpfs:

# Usage
$ mount -t tmpfs -o size=[SIZE] tmpfs /dest/ination/path

# Sample
$ mount -t tmpfs tmpfs /mnt/dir2
$ mount -t tmpfs -o size=512m tmpfs /mnt/dir3
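
To make such a tmpfs mount survive reboots, an /etc/fstab entry can be used (a sketch; mount point and size are placeholders):

# /etc/fstab
tmpfs  /mnt/dir3  tmpfs  size=512m  0  0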