Question : Ideal max files/directories per directory
Hi guys,
I'm using Red Hat Enterprise and would like to know what the ideal maximum number of files/directories per directory is. Currently we have a directory with over 7000 directories in it. For various reasons we now have to restructure this directory, and an important factor in deciding how to do this will be the ideal number of files/directories per directory.
A few years ago I was doing a similar project but on a Sun Solaris system, and the sys admin there told me the ideal max inodes per directory was about 200.
If anyone knows what the figure would be for Red Hat Enterprise, your advice would be very much appreciated.
Rangi
Answer : Ideal max files/directories per directory
It probably doesn't matter much that the access is mostly read, and the total data size shouldn't matter much either as far as directory size is concerned. What matters most is the number of file opens per unit of time. A good hash scheme reduces the time required to locate a file for reading and, as a bonus, reduces the directory overhead associated with file creation.
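As a rough sketch of what such a hash scheme might look like (the two-level layout, the fan-out of 256 per level, and the helper names here are my own assumptions for illustration, not anything specific to your setup), filenames can be mapped into nested subdirectories keyed on a hash of the name:

import hashlib
from pathlib import Path

def hashed_path(base: Path, filename: str, levels: int = 2) -> Path:
    """Map a filename to a nested subdirectory based on its MD5 hash.

    With levels=2 this yields paths like base/ab/cd/filename, so even a few
    million files average only a few thousand entries per directory.
    """
    digest = hashlib.md5(filename.encode()).hexdigest()
    # Use one hex byte (256 possible values) per directory level.
    parts = [digest[i * 2:i * 2 + 2] for i in range(levels)]
    return base.joinpath(*parts, filename)

def store(base: Path, filename: str, data: bytes) -> Path:
    """Write data under the hashed layout, creating parent directories as needed."""
    target = hashed_path(base, filename)
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_bytes(data)
    return target

if __name__ == "__main__":
    p = store(Path("/tmp/archive"), "report-2004-03.txt", b"example contents")
    print(p)  # e.g. /tmp/archive/3f/9a/report-2004-03.txt

The point is simply that any lookup or creation only ever touches a directory with a bounded number of entries; the exact hash function and fan-out you'd pick depends on how many files you expect to end up with.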
> - our sys admin has said the 7000 dirs in one dir may be responsible for crashing our backup software
I don't know what you are using for backups, but I've never had any problems with Solaris ufsdump or Legato Networker with far larger directories than that. I discourage people from creating really big directories, but sometimes they still do ('til I find out about it).