302
votes

I have a disk drive where the inode usage is 100% (according to the df -i command). However, even after deleting a substantial number of files, the usage remains at 100%.

What's the correct way to do it then?

How is it possible that a disk drive with lower disk space usage can have higher inode usage than a disk drive with higher disk space usage?

Is it possible that zipping up lots of files would reduce the used inode count?
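For reference, a quick way to compare block usage and inode usage side by side (a minimal sketch; both commands report per mounted filesystem):

df -h   # disk space (blocks) used per filesystem
df -i   # inodes used per filesystem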

Empty directories also consume inodes, and deleting them can free some up. The number can be significant in some use-cases. You can delete empty directories with: find . -type d -empty -delete (Ruchit Patel)

17 Answers

181
votes

It's quite easy for a disk to have a large number of inodes used even if the disk is not very full.

An inode is allocated to a file, so if you have gazillions of files, all 1 byte each, you'll run out of inodes long before you run out of disk.
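If you want to see how many inodes your filesystem was created with, tune2fs can report it for ext2/3/4 (a sketch; /dev/sda1 is just an example device):

sudo tune2fs -l /dev/sda1 | grep -i inode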

It's also possible that deleting files will not reduce the inode count if the files have multiple hard links. As I said, inodes belong to the file, not the directory entry. If a file has two directory entries linked to it, deleting one will not free the inode.
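A quick way to see this in action (a minimal sketch; data.txt and copy.txt are hypothetical names):

touch data.txt
ln data.txt copy.txt       # second directory entry for the same inode
ls -li data.txt copy.txt   # same inode number, link count 2
rm data.txt                # the inode stays allocated via copy.txt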

Additionally, you can delete a directory entry but, if a running process still has the file open, the inode won't be freed.
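If you suspect this, lsof can list files that are open but already unlinked (a minimal sketch; you may need root to see other users' processes):

sudo lsof +L1

Entries with a link count (NLINK) of 0 are deleted but still held open; restarting the owning process releases their inodes.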

My initial advice would be to delete all the files you can, then reboot the box to ensure no processes are left holding the files open.

If you do that and you still have a problem, let us know.

By the way, if you're looking for the directories that contain lots of files, this script may help:

#!/bin/bash
# count_em - count files in all subdirectories under current directory.
# Write a tiny helper script that prints "<entry count> <directory>".
echo 'echo $(ls -a "$1" | wc -l) $1' >/tmp/count_em_$$
chmod 700 /tmp/count_em_$$
# Run it for every directory on this filesystem and sort by count.
find . -mount -type d -print0 | xargs -0 -n1 /tmp/count_em_$$ | sort -n
rm -f /tmp/count_em_$$
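If you would rather avoid the temporary helper script, roughly the same result can be had with a single pipeline (a sketch; the behaviour should match the script above):

find . -mount -type d -exec sh -c 'echo "$(ls -a "$1" | wc -l) $1"' _ {} \; | sort -n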
220
votes

If you are very unlucky, you have used about 100% of all inodes and can't create the script. You can check this with df -ih.

Then this bash command may help you:

sudo find . -xdev -type f | cut -d "/" -f 2 | sort | uniq -c | sort -n

And yes, this will take time, but you can locate the directory with the most files.
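Once you know which top-level directory is the offender, you can drill one level deeper by moving the cut field along (a sketch; /var is just an example starting point):

sudo find /var -xdev -type f | cut -d "/" -f 3 | sort | uniq -c | sort -n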

77
votes

My situation was that I was out of inodes and I had already deleted just about everything I could.

$ df -i
Filesystem     Inodes  IUsed  IFree IUse% Mounted on
/dev/sda1      942080 507361     11  100% /

I am on Ubuntu 12.04 LTS and could not remove the old Linux kernels, which took up about 400,000 inodes, because apt was broken due to a missing package. And I couldn't install the new package because I was out of inodes, so I was stuck.
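Before deleting anything by hand, it may help to list the installed header packages and confirm which kernel you are actually running (a sketch for Debian/Ubuntu; package names vary by release):

uname -r                      # the running kernel, keep its headers
dpkg -l | grep linux-headers  # anything older is a removal candidate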

I ended up deleting a few old Linux kernels by hand to free up about 10,000 inodes:

$ sudo rm -rf /usr/src/linux-headers-3.2.0-2*

This was enough to then let me install the missing package and fix my apt:

$ sudo apt-get install linux-headers-3.2.0-76-generic-pae

and then remove the rest of the old Linux kernels with apt:

$ sudo apt-get autoremove

Things are much better now:

$ df -i
Filesystem     Inodes  IUsed  IFree IUse% Mounted on
/dev/sda1      942080 507361 434719   54% /
52
votes

My solution:

Check whether this is an inode problem with:

df -ih

Try to find the root folders with large inode counts:

for i in /*; do echo "$i"; find "$i" | wc -l; done

Try to find specific folders:

for i in /src/*; do echo "$i"; find "$i" | wc -l; done
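A small variation that sorts the output, so the heaviest folders end up at the bottom (a sketch; /src/* is the same example path as above):

for i in /src/*; do echo "$(find "$i" | wc -l) $i"; done | sort -n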

If the culprit is old Linux headers, try to remove the oldest with:

sudo apt-get autoremove linux-headers-3.13.0-24

Personally, I moved them to a mounted folder (because for me the last command failed) and installed the latest with:

sudo apt-get autoremove -f

This solved my problem.

12
votes

I had the same problem; I fixed it by removing PHP's session directory:

rm -rf /var/lib/php/sessions/

It may be under /var/lib/php5 if you are using an older PHP version.

Recreate it with the following permissions:

mkdir /var/lib/php/sessions/ && chmod 1733 /var/lib/php/sessions/

The default permissions for this directory on Debian are drwx-wx-wt (1733).
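If you would rather not wipe active sessions, a gentler option is to count the files first and delete only stale ones (a sketch; the 24-hour cutoff is an assumption, adjust it to your session.gc_maxlifetime):

sudo find /var/lib/php/sessions -type f | wc -l
sudo find /var/lib/php/sessions -type f -mmin +1440 -delete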

2
votes

We experienced this on a HostGator account (which places inode limits on all its hosting) following a spam attack. It left vast numbers of queue records in /root/.cpanel/comet. If this happens and you find you have no free inodes, you can run this cPanel utility through a shell:

/usr/local/cpanel/bin/purge_dead_comet_files
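To gauge how bad it is before purging, you can count the queue records first (a sketch, assuming the same /root/.cpanel/comet path):

find /root/.cpanel/comet -type f | wc -l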
2
votes

You can use rsync to delete a large number of files quickly:

rsync -a --delete blanktest/ test/

Create a blanktest folder with 0 files in it, and the command will sync your test folder (the one with the large number of files) against it, effectively deleting everything inside. (I have deleted nearly 5 million files using this method.)
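Putting it together, the full sequence might look like this (a sketch; blanktest and test/ are the example directory names from above):

mkdir -p blanktest
rsync -a --delete blanktest/ test/   # empties test/ by syncing it against an empty dir
rmdir blanktest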

Thanks to http://www.slashroot.in/which-is-the-fastest-method-to-delete-files-in-linux

2
votes

Late answer: In my case, it was my session files under

/var/lib/php/sessions

that were using inodes.
I was even unable to open my crontab or make a new directory, let alone trigger the deletion operation. Since I use PHP, we have this guide, where I copied the code from example 1 and set up a cronjob to execute that part of the code.

<?php
// Note: This script should be executed by the same user as the web server process.

// Need active session to initialize session data storage access.
session_start();

// Executes GC immediately
session_gc();

// Clean up session ID created by session_gc()
session_destroy();
?>

If you're wondering how I managed to open my crontab, well, I deleted some sessions manually through the CLI.
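For reference, a crontab entry to run such a script every half hour might look like this (a sketch; the script path is hypothetical):

*/30 * * * * php /path/to/session_gc.php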

Hope this helps!

1
votes

eaccelerator could be causing the problem, since it compiles PHP into blocks... I've had this problem with an Amazon AWS server on a site with heavy load. Free up inodes by deleting the eaccelerator cache in /var/cache/eaccelerator if you continue to have issues.

rm -rf /var/cache/eaccelerator/*

(or whatever your cache directory is)

1
votes

We faced a similar issue recently. If a process still refers to a deleted file, the inode is not released, so you need to check lsof /; killing or restarting the process will release the inodes.

Correct me if I am wrong here.
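A sketch of that check (you may need root to see other users' processes; deleted-but-open files are marked "(deleted)" in the output):

sudo lsof / | grep -i '(deleted)'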

1
votes

As mentioned before, a filesystem may run out of inodes if there are a lot of small files. I have provided some means to find the directories that contain the most files here.

1
votes

In one of the answers above it was suggested that sessions were the cause of running out of inodes, and in our case that is exactly what it was. To add to that answer, though, I would suggest checking the php.ini file and ensuring session.gc_probability = 1, session.gc_divisor = 1000 and session.gc_maxlifetime = 1440. In our case session.gc_probability was equal to 0, which caused this issue.
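You can verify the values your PHP is actually using with the CLI (a sketch; note the CLI may read a different php.ini than your web server):

php -i | grep -E 'session\.gc_(probability|divisor|maxlifetime)'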

1
votes

On a Raspberry Pi I had a problem with the /var/cache/fontconfig directory containing a large number of files. Removing it took more than an hour. And of course rm -rf *.cache* raised an "Argument list too long" error. I used the one below:

find . -name '*.cache*' | xargs rm -f
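A variant that copes with odd file names and avoids the argument-list limit entirely (a sketch, assuming the same /var/cache/fontconfig directory):

sudo find /var/cache/fontconfig -type f -name '*.cache*' -delete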
0
votes

You could see this info with:

for i in /var/run/*;do echo -n "$i "; find $i| wc -l;done | column -t
-1
votes

Many answers to this one so far, and all of the above seem concrete. I think you'll be safe by using stat as you go along, but depending on the OS, you may get some inode errors creeping up on you. So implementing your own stat call functionality using 64-bit values to avoid any overflow issues seems fairly compatible.

-1
votes

This article saved my day: https://bewilderedoctothorpe.net/2018/12/21/out-of-inodes/

find . -maxdepth 1 -type d | grep -v '^\.$' | xargs -n 1 -i{} find {} -xdev -type f | cut -d "/" -f 2 | uniq -c | sort -n
-3
votes

If you use Docker, remove all images. They use a lot of space.

Stop all containers

docker stop $(docker ps -a -q)

Delete all containers

docker rm $(docker ps -a -q)

Delete all images

docker rmi $(docker images -q)

Works for me.
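On newer Docker versions, a single command reclaims most of this in one go (a sketch; the -a flag also removes all unused images, so use it with care):

docker system prune -a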