Frequently Asked Question

Repairing oc_filecache corruptions - This folder is unavailable, please try again later or contact the administrator

The error message "This folder is unavailable, please try again later or contact the administrator" in Nextcloud or ownCloud almost always points to a corruption in the oc_filecache table — the internal database index that maps every file and folder to its location on storage. This article explains why that corruption occurs, how to identify it, and how to repair it safely.


Why oc_filecache Corruptions Occur

The oc_filecache table is Nextcloud's internal file index. Every file and folder known to the system has a row in this table recording its storage ID, parent folder ID, path, name, and various metadata (size, MIME type, modification time, ETag, etc.). The actual bytes live on the underlying storage (local disk, object store, Ceph, NFS, etc.), but Nextcloud trusts oc_filecache as the authoritative map of what exists and where.

Because writes to the database and writes to storage are two separate operations, anything that interrupts the sequence mid-way can leave the index in an inconsistent state. Common causes include:

  • Lost network connections during a file upload or sync — the file may be partially written to storage whilst the database row is never committed, or vice versa.
  • Failed copy or move operations — a server-side move updates the database path but the underlying storage operation fails, leaving the recorded path pointing nowhere, or pointing to the wrong location.
  • Failed uploads — a chunked upload that is never assembled leaves orphaned temporary entries in the cache.
  • Server crashes or hard reboots — if the database or PHP process is killed mid-transaction, partial writes can be committed without their counterpart storage operations completing.
  • Storage backend failures — NFS timeouts, Ceph I/O errors, or object store throttling can cause Nextcloud to record a successful write that never actually landed.
  • Group folder operations — moves, renames, and bulk operations on group folders are particularly susceptible because they touch large numbers of rows atomically; any failure partway through leaves the index partially updated.
  • Manual filesystem manipulation — moving or deleting files directly on the server without going through Nextcloud leaves the index describing paths that no longer exist.

The result is a mismatch: the path recorded in oc_filecache for a given file or folder does not match what the path should be based on the parent folder's recorded path and the entry's own name. Nextcloud cannot resolve the location, and the folder becomes unavailable.

IMPORTANT: If you don't know how to connect to your Nextcloud database, or are not familiar with command-line tools or SQL, then we strongly recommend you seek professional assistance before proceeding. If you do proceed, you do so at your own risk.

Where we've used www-data, replace it with the user your web server runs as; this may be apache, nginx, or www-data, depending on your distribution and platform choices.


The Most Common Corruption Pattern

Each row in oc_filecache has three relevant fields for this problem:

Field    Meaning
------   -------
path     The full path recorded for this entry
parent   The fileid of the parent folder
name     The filename or folder name of this entry

The path of any entry must equal the path of its parent, followed by a /, followed by its own name. In other words:

child.path = parent.path + '/' + child.name

When a move or rename fails partway through, this invariant is broken. The child's path column still holds the old location, but its parent foreign key has been updated to point to the new parent — or vice versa. Nextcloud cannot reconcile the two, and the folder tree becomes unresolvable at that point.
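The invariant above can be sketched in a few lines. This is a purely illustrative Python model (the rows and fileids are made up, not a real oc_filecache schema) showing how a consistent entry resolves while a stale-path entry does not:

```python
# Hypothetical in-memory model of oc_filecache rows:
# fileid -> (parent_fileid, path, name)
rows = {
    1: (None, "files", "files"),
    2: (1, "files/projects", "projects"),
    3: (2, "files/projects/report.txt", "report.txt"),    # consistent
    4: (2, "files/old-location/notes.txt", "notes.txt"),  # stale path after a failed move
}

def is_consistent(fileid):
    """Check the invariant: child.path = parent.path + '/' + child.name."""
    parent_id, path, name = rows[fileid]
    if parent_id is None:  # root entries have no parent to check against
        return True
    parent_path = rows[parent_id][1]
    return path == parent_path + "/" + name

print(is_consistent(3))  # True  -> Nextcloud can resolve this entry
print(is_consistent(4))  # False -> the folder tree breaks at this entry
```

Entry 4 is exactly the pattern described above: its parent points at the new location, but its recorded path still holds the old one.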


Step 1 — Clean Up Group Folder Trash and Expired Versions First

Before touching the database directly, use the built-in OCC commands to remove as much stale data as possible. This reduces the number of broken rows you will need to deal with and avoids deleting entries that the cleanup commands would have removed anyway.

Run the following as your web server user (typically www-data):

sudo -u www-data php /var/www/nextcloud/occ groupfolders:expire

This removes expired file versions from group folders, clearing out version history entries that may themselves be corrupt or orphaned.

sudo -u www-data php /var/www/nextcloud/occ groupfolders:trash:cleanup

This purges the group folder trash, removing deleted-file entries that are past their retention period. Corrupt entries in the trash are a frequent source of oc_filecache inconsistencies.

Note: Running these two commands first means the subsequent database query and deletion will operate on a smaller, cleaner dataset. It also means you are not manually deleting rows that Nextcloud itself would have cleaned up through normal housekeeping.

Step 2 — Build a Manifest of Broken Entries

The following SQL query identifies every row in oc_filecache where the recorded path does not match what the path should be, given the parent's path and the entry's own name. It writes the results into a new table called nc_filecache_broken_manifest so you can inspect them before taking any destructive action.

Connect to your Nextcloud database and run:

CREATE TABLE nc_filecache_broken_manifest AS
SELECT
  c.fileid                      AS child_fileid,
  c.parent                      AS parent_fileid,
  p.path                        AS parent_path,
  c.name                        AS child_name,
  c.path                        AS child_path_recorded,
  CONCAT(p.path, '/', c.name)   AS child_path_expected,
  c.storage
FROM oc_filecache c
JOIN oc_filecache p ON p.fileid = c.parent
WHERE c.storage = p.storage
  AND c.path != CONCAT(p.path, '/', c.name);

What this query does

  • FROM oc_filecache c — iterates over every entry in the file cache (the child).
  • JOIN oc_filecache p ON p.fileid = c.parent — joins each child to its parent row using the parent foreign key.
  • WHERE c.storage = p.storage — ensures the comparison is only made between entries on the same storage backend (avoids false positives across mount points).
  • AND c.path != CONCAT(p.path, '/', c.name) — the core filter: selects only rows where the recorded path does not equal what the path should be.
  • The CONCAT(p.path, '/', c.name) column (child_path_expected) shows you what the path ought to be, whilst child_path_recorded shows what is actually stored, making the discrepancy immediately visible.

The result is a manifest table containing every broken entry, with enough context to understand what has gone wrong for each one.
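To see the manifest query in action without touching a live database, here is a self-contained sketch using SQLite (string concatenation is written as || rather than MySQL's CONCAT, and the toy rows are invented for illustration; this is not the production schema):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE oc_filecache (
  fileid INTEGER PRIMARY KEY, storage INTEGER,
  parent INTEGER, path TEXT, name TEXT
);
INSERT INTO oc_filecache VALUES
  (1, 1, -1, 'files', 'files'),
  (2, 1, 1, 'files/projects', 'projects'),
  (3, 1, 2, 'files/projects/report.txt', 'report.txt'),
  (4, 1, 2, 'files/old/notes.txt', 'notes.txt');  -- broken: stale path

-- Same shape as the manifest query in the article, with || for CONCAT
CREATE TABLE nc_filecache_broken_manifest AS
SELECT c.fileid AS child_fileid, c.parent AS parent_fileid,
       p.path   AS parent_path,  c.name   AS child_name,
       c.path   AS child_path_recorded,
       p.path || '/' || c.name AS child_path_expected,
       c.storage
FROM oc_filecache c
JOIN oc_filecache p ON p.fileid = c.parent
WHERE c.storage = p.storage
  AND c.path != p.path || '/' || c.name;
""")

for row in db.execute(
    "SELECT child_fileid, child_path_recorded, child_path_expected "
    "FROM nc_filecache_broken_manifest"
):
    print(row)
# Only fileid 4 is flagged: its recorded path no longer matches
# parent path + '/' + name.
```

Rows 2 and 3 satisfy the invariant and are left alone; only the row with a stale path lands in the manifest.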


Step 3 — Inspect the Manifest Table

Do not skip this step.

Before deleting anything, examine the manifest carefully:

SELECT * FROM nc_filecache_broken_manifest LIMIT 100;
SELECT COUNT(*) FROM nc_filecache_broken_manifest;
-- Check which storage IDs are affected
SELECT storage, COUNT(*) AS broken_count
FROM nc_filecache_broken_manifest
GROUP BY storage
ORDER BY broken_count DESC;
-- Look at the actual path mismatches
SELECT
  child_path_recorded,
  child_path_expected,
  child_name
FROM nc_filecache_broken_manifest
LIMIT 50;

You are looking for:

  • How many rows are affected — a handful is normal after a single failed operation; thousands may indicate a more serious underlying problem that needs investigating before repair.
  • Which storage IDs are involved — if the corruption spans multiple storage backends, the root cause may be systemic.
  • Whether the mismatches make sense — you should be able to see the old path versus the expected new path and understand what operation failed. If the data looks completely unexpected, stop and investigate further before proceeding.
  • Whether any entries look like live, actively-used data — if so, consider whether a restore from backup is more appropriate than a surgical delete.
⚠️ You must understand what you are deleting before you proceed. The manifest table exists precisely so that this decision is made deliberately, not automatically.

Step 4 — Delete the Broken Entries

Once you are satisfied that the manifest contains only genuinely broken entries and you understand the scope of the deletion, remove the corrupt rows from oc_filecache:

DELETE c
FROM oc_filecache c
JOIN nc_filecache_broken_manifest m
  ON m.child_fileid = c.fileid;

This deletes only the rows identified in the manifest — it does not touch any other entries in oc_filecache.
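The DELETE above uses MySQL/MariaDB's multi-table syntax. On databases without that form (SQLite, for instance), the equivalent is an IN (...) subquery against the manifest. A minimal sketch with invented toy rows:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE oc_filecache (fileid INTEGER PRIMARY KEY, path TEXT);
CREATE TABLE nc_filecache_broken_manifest (child_fileid INTEGER);
INSERT INTO oc_filecache VALUES
  (3, 'files/projects/report.txt'),
  (4, 'files/old/notes.txt');
INSERT INTO nc_filecache_broken_manifest VALUES (4);
""")

# Equivalent of the article's DELETE ... JOIN, expressed as a subquery:
# remove only rows whose fileid appears in the manifest.
db.execute("""
DELETE FROM oc_filecache
WHERE fileid IN (SELECT child_fileid FROM nc_filecache_broken_manifest)
""")

print(db.execute("SELECT fileid FROM oc_filecache").fetchall())  # [(3,)]
```

Either form removes exactly the manifest rows and nothing else.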


Step 5 — Rescan Group Folders

With the broken entries removed, instruct Nextcloud to rescan all group folders and rebuild the file cache from the actual storage:

sudo -u www-data php /var/www/nextcloud/occ groupfolders:scan --all

This walks the physical storage for every group folder, re-creates missing oc_filecache entries for files that genuinely exist on disk, and updates metadata. Files that exist on storage but had no valid cache entry will reappear. Files that existed only in the cache (orphaned index entries with no corresponding storage object) will not be re-added.
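Conceptually, the rescan derives every path from the real directory tree, so the rebuilt entries satisfy the invariant by construction. A simplified sketch of that idea (illustrative only; occ groupfolders:scan also restores storage IDs, sizes, ETags, and other metadata):

```python
import os
import tempfile

def rebuild_index(root):
    """Walk real storage and derive each entry's path from its location on disk."""
    index = {}  # path -> name, taken purely from the filesystem
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            rel = os.path.relpath(os.path.join(dirpath, name), root)
            index[rel.replace(os.sep, "/")] = name
    return index

# Build a tiny fake storage tree and index it
with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "projects"))
    open(os.path.join(root, "projects", "report.txt"), "w").close()
    print(rebuild_index(root))
# {'projects': 'projects', 'projects/report.txt': 'report.txt'}
```

Because every path comes straight from the filesystem, an orphaned index entry with no file behind it simply never reappears.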


Why This Works

Nextcloud does not read the filesystem directly when serving files to users — it reads oc_filecache. A broken path entry means Nextcloud cannot resolve the file's location, so it returns the "folder unavailable" error even if the underlying data is physically present on storage.

By removing the corrupt index entries and then running a fresh scan, you allow Nextcloud to rebuild the index from the ground truth — the actual files on storage. The scan re-creates correct entries with accurate paths, restoring the invariant that child.path = parent.path + '/' + child.name throughout the tree.

The delete-then-rescan approach is safer than attempting to UPDATE the broken paths in place, because an update requires you to know with certainty what the correct path should be for every affected row. The rescan derives the correct paths directly from the filesystem, eliminating the risk of writing a plausible-but-wrong path.


Important Caveats and Trade-offs

This repair process has consequences that cannot be avoided:

  • File metadata will be lost. ETag values, custom tags, comments, and any other metadata stored against the deleted oc_filecache rows will not be restored by the rescan. The rescan creates new rows with freshly generated metadata.
  • Version history will be lost. Nextcloud's file versioning is linked to oc_filecache entries by file ID. When the broken entries are deleted, their associated version records become orphaned and will not be accessible. Depending on the extent of the corruption, this may mean losing some or all previous versions of affected files.
  • Trash contents may be lost. Items in the Nextcloud trash that were linked to broken cache entries will not survive the repair. The groupfolders:trash:cleanup command in Step 1 removes most of these proactively, but any remaining trash entries tied to deleted file IDs will become inaccessible.
  • The files themselves are not deleted. This process only modifies the database index. The actual file data on storage is untouched. If a file existed on disk before the repair, it will reappear after the rescan. The loss is of metadata and history, not of the primary file content.

The trade-off is straightforward: accept the loss of version history and metadata in exchange for a working, consistent file index. In most operational scenarios this is the correct decision, but it should be made consciously and with the affected users informed where appropriate.


Cleanup

Once the repair is confirmed to be working correctly, drop the manifest table:

DROP TABLE nc_filecache_broken_manifest;

This FAQ was generated and/or edited by GAIN, GENs Artificial Intelligence Network and should not be considered 100% accurate. Always check facts and do your research, things change all the time. If you are unsure about any information provided, please raise a support ticket for clarification.
