Frequently Asked Question
DON'T PANIC
When the system drops to the emergency mode prompt, the normal boot process cannot continue. The messages you see usually mean that the root filesystem could not be mounted or a required device could not be started. The node is down, but this is not usually a tragic end for your compute. The next steps do matter, though, so take it slowly and make good choices.
What to Do First
- Read the prompt carefully and take a copy (photo, screenshot, etc.). A typical emergency prompt looks like this:
```
  Found volume group "pve" using metadata type lvm2
  2 logical volume(s) in volume group "pve" now active
/dev/mapper/pve-root: clean, 68300/6291456 files, 1951226/25165824 blocks
[ TIME ] Timed out waiting for device dev-disk-by\x2duuid-11536013\x2d4121\x2d4d34\x2dba97\x2defc8b9d48cb8.device - /dev/disk/by-uuid/11536013-4121-4d34-ba97-efc8b9d48cb8.
[DEPEND] Dependency failed for systemd-fsck@dev-disk-by\x2duuid-11536013\x2d4121\x2d4d34\x2dba97\x2defc8b9d48cb8.service - File System Check on /dev/disk/by-uuid/11536013-4121-4d34-ba97-efc8b9d48cb8.
[DEPEND] Dependency failed for mnt-data.mount - /mnt/data.
[DEPEND] Dependency failed for local-fs.target - Local File Systems.

You are in emergency mode. After logging in, type "journalctl -xb" to view
system logs, "systemctl reboot" to reboot, or "exit"
to continue bootup.

Enter root password for system maintenance
(or press Control-D to continue):
```
- It asks for the root password to enter maintenance mode.
- It also offers to view the system logs (`journalctl -xb`), to reboot (`systemctl reboot`), or to continue booting (Control-D).
- Enter the root password
  - Type the password you use for the `root` account and press Enter.
  - If you do not know it, you will need to reset the root password from a rescue environment (see step 5).
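If you do need the rescue-environment reset, the outline below is a minimal sketch. It prints the commands as a plan rather than running them, so you can review before executing as root from a live/rescue image. `print_reset_plan` is a hypothetical helper name, and `pve` is assumed as the Proxmox default volume group name - substitute yours.

```shell
#!/bin/sh
# Sketch: print the rescue-environment password-reset steps as a reviewable
# plan. Assumes the Proxmox default VG name "pve" unless one is passed in.
print_reset_plan() {
    vg=${1:-pve}
    cat <<EOF
vgchange -ay $vg
mount /dev/mapper/${vg}-root /mnt
chroot /mnt passwd root
umount /mnt
reboot
EOF
}
print_reset_plan pve
```

Running the printed commands from a rescue shell activates the volume group, mounts the root LV, and changes the password inside a chroot of the real system.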
- Check the system logs (optional but useful) and capture them in a console log for later if needed:

```
journalctl -b -p err
```

This shows the error-level (and more severe) messages from the current boot, which usually include whatever caused the boot failure.
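For triage, it can help to reduce those messages to just the failed unit names. The helper below is a sketch of my own (not a standard tool); it works on the console copy you took in the first step, or on `journalctl -b` output redirected to a file.

```shell
#!/bin/sh
# Sketch: pull the failed unit names out of saved boot messages so you know
# exactly which mounts/devices to chase.
failed_units() {
    # "[DEPEND] Dependency failed for mnt-data.mount - /mnt/data." -> "mnt-data.mount"
    sed -n 's/.*Dependency failed for \([^ ]*\) - .*/\1/p' "$1" | sort -u
}

# Example using the messages shown earlier in this FAQ:
cat > /tmp/boot.log <<'EOF'
[DEPEND] Dependency failed for mnt-data.mount - /mnt/data.
[DEPEND] Dependency failed for local-fs.target - Local File Systems.
EOF
failed_units /tmp/boot.log   # lists local-fs.target and mnt-data.mount
```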
Common Causes & Fixes
| Symptom | Likely Cause | Action |
|---|---|---|
| “Found volume group … logical volume(s) … now active” followed by a timeout on a device-by-UUID path | The logical volume that contains the root filesystem failed to activate. | 1. Activate the volume group manually: `vgchange -ay <vg-name>` 2. Verify the root LV is present: `ls /dev/mapper/` 3. Try to mount it: `mount /dev/mapper/<vg-name>-root /mnt` |
| “Dependency failed for systemd-fsck@…” | Filesystem check failed or the device is missing. | 1. Run a manual check: `fsck -y /dev/mapper/<vg-name>-root` 2. If errors are reported, let fsck fix them and reboot. |
| “Dependency failed for local-fs.target” | A mount point could not be reached (often a network or external drive). | 1. Identify the missing mount point in `/etc/fstab`. 2. Temporarily comment it out or fix the underlying device. |
| “Timed out waiting for device …” | The kernel could not find the block device (e.g., a SAN LUN, iSCSI target, or USB drive). | 1. Verify the storage is presented to the host. 2. Check multipath or iSCSI service status (`systemctl status multipathd iscsid`). 3. If using RAID, ensure the array is healthy (`mdadm --detail /dev/md*`). |
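For the `/etc/fstab` cases above, a quick way to find the entry that hung the boot is to compare each `UUID=` line against the device nodes that actually exist. The helper below is a hypothetical sketch; point it at the real files (`/etc/fstab` and `/dev/disk/by-uuid`) on the broken system - the arguments exist so it can be tried safely on copies first.

```shell
#!/bin/sh
# Sketch: list fstab entries whose UUID has no matching device node,
# i.e. the mounts most likely to hang the boot.
#   $1 = fstab file, $2 = by-uuid directory (normally /dev/disk/by-uuid)
missing_uuid_mounts() {
    fstab=$1; byuuid=$2
    grep -v '^[[:space:]]*#' "$fstab" |
    awk '$1 ~ /^UUID=/ {print $1, $2}' |
    while read -r src mnt; do
        uuid=${src#UUID=}
        [ -e "$byuuid/$uuid" ] || echo "MISSING $uuid -> $mnt"
    done
}
# On a live system: missing_uuid_mounts /etc/fstab /dev/disk/by-uuid
```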
Step‑by‑Step Recovery Procedure
Activate all volume groups, and check what came up (if anything):

```
vgchange -ay
ls /dev/mapper/
```
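To make the "what came up" check mechanical, you can grep the `ls /dev/mapper/` output for the expected names. This is a sketch; the helper name is mine, and `pve-root`/`pve-swap` are the Proxmox defaults - adjust if you renamed your volume group.

```shell
#!/bin/sh
# Sketch: after `vgchange -ay`, report which of the standard Proxmox
# volumes are absent from the given device listing.
check_pve_volumes() {
    for lv in pve-root pve-swap; do
        echo "$1" | grep -qw "$lv" || echo "MISSING: $lv"
    done
}
# On the broken node:
# check_pve_volumes "$(ls /dev/mapper/)"
```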
Check which volumes are active, and which are notably absent. In a standard Proxmox installation you should have

```
pve-root  pve-swap
```

and optionally, if you have yet to tear out LVM-Thin,

```
pve-data_tdata  pve-data_tmeta  pve-data-tpool
```

plus a bunch of thin volumes for guests.

In emergency mode you have most likely lost `pve-root`, since boot can usually proceed past a loss of `pve-swap`, so we'll focus on that - but the process is the same for any LVM2 volume.
Identify the root logical volume
- It is usually named something like `pve-root` (but you can change it).
- If you are unsure, look for the LV that matches the size reported in the boot messages (e.g., `pve-root` is 96 GiB in the example: 25165824 blocks of 4 KiB).
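The size comparison above is just arithmetic: the fsck line reports the filesystem size in blocks, and ext4 blocks are 4 KiB by default (an assumption - check your filesystem). A minimal sketch, with a helper name of my own:

```shell
#!/bin/sh
# Sketch: convert an fsck-reported block count (assumed 4 KiB blocks)
# to GiB, so it can be matched against `lvs` output.
blocks_to_gib() {
    echo $(( $1 * 4 / 1024 / 1024 ))
}
blocks_to_gib 25165824   # block count from the example log -> 96
```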
Mount the root filesystem read-write:

```
mount -o rw /dev/mapper/<vg-name>-root /mnt
```

If it fails to mount, run a filesystem check:

```
fsck -y /dev/mapper/<vg-name>-root
```
After fixing the underlying issue, rebuild the initramfs to avoid the same problem on the next boot:

```
update-initramfs -u -k all
```

and reboot:

```
reboot
```
- If the system still drops to emergency mode, repeat the steps and verify that the device referenced by the UUID (`11536013-4121-4d34-ba97-efc8b9d48cb8`) is correctly defined in `/etc/fstab` or in the storage configuration (e.g., LVM, RAID, iSCSI).
When to Escalate
These FAQs are a free resource for anyone in this situation needing the knowledge to fix a no-boot scenario, but professional support is recommended if:
- The root LV cannot be activated even after `vgchange -ay`.
- The filesystem check reports unrecoverable errors.
- The missing device is a critical SAN/iSCSI target that is offline on the entire storage network.
- You are unable to reset the root password or access a rescue environment.
In these cases, open a support ticket with the following details:
- Full boot log (copy the messages from the emergency prompt).
- Output of `vgdisplay`, `lvdisplay`, and `fsck` (if run).
- Relevant entries from `/etc/fstab` and any custom storage configuration files.
- Confirmation of whether the storage device is visible in `lspci`/`lsblk`.
