# Virtual File System

## Basic Concepts

The Virtual File System (VFS) is not a real file system. It is an abstraction layer on top of heterogeneous file systems that provides a unified, Unix-like file operation interface. Different types of file systems expose different interfaces. If a system contains multiple types of file systems, a different, non-standard interface is needed to access each of them. Introducing the VFS as an abstraction layer harmonizes the differences between these heterogeneous file systems, so that the system does not need to care about the underlying storage medium or file system type when accessing files.

In the OpenHarmony kernel, the VFS framework is implemented as a tree structure in memory. Each node of the tree is a **Vnode** structure, and the relationship between parent and child nodes is stored in the **PathCache** structure.

The VFS provides the following functions:

- Node query
- Unified file system invoking (standard)

## Working Principles

The VFS layer uses function pointers to call the interfaces of different file system types and thereby implement standard APIs. It uses the Vnode and PathCache mechanisms to improve the performance of path searches and file accesses, manages partitions through mount point management, and isolates the file descriptors (FDs) of different processes through FD management. These mechanisms are briefly described below.

1. File system function pointers: The VFS uses function pointers to distribute calls to the underlying operations of the matching file system based on the file system type. Each file system implements a set of Vnode operation, mount point operation, and file operation APIs and stores them, as function pointer structures, in the corresponding Vnode, mount point, and file structures, so that the VFS layer can invoke them (a simplified sketch of this dispatch mechanism follows this list).

2. Vnode: A Vnode is the abstract encapsulation of a specific file or directory at the VFS layer. It shields the differences between file systems and implements unified resource management. Vnodes include the following types:

   - Mount point: mounts a specific file system, for example, **/** and **/storage**.
   - Device node: maps to a device in the **/dev** directory, for example, **/dev/mmcblk0**.
   - File/Directory node: corresponds to a file or directory in a file system, for example, **/bin/init**.

   Vnodes are managed using hash and least recently used (LRU) mechanisms. When a file or directory is accessed after system startup, the hash linked list is searched first for a cached Vnode. If the cache is not hit, the target file or directory is searched for in the corresponding file system, and the matching Vnode is created and cached. When the number of cached Vnodes reaches the upper limit, the Vnodes that have not been accessed for a long time are deleted; mount point Vnodes and device node Vnodes are never deleted. By default, the system caches up to 512 Vnodes. You can change this limit through **LOSCFG\_MAX\_VNODE\_SIZE**. Increasing the value improves search performance but increases memory usage.

   The following figure shows the process of creating a Vnode.

   **Figure 1** Process of creating a Vnode

   ![](figures/process-of-creating-a-vnode.png "process-of-creating-a-vnode")

3. PathCache: The PathCache is a path cache stored in a hash table. Based on the address of a parent Vnode and the file name of a child node, the PathCache allows the Vnode of the child node to be found quickly. The following figure shows how a file or directory is located.

   **Figure 2** Process of locating a file

   ![](figures/process-of-locating-a-file.png "process-of-locating-a-file")

4. PageCache: The PageCache is a cache of files in the kernel. Currently, the PageCache can cache only binary files. When a file is accessed for the first time, it is mapped into memory using **mmap**. When the file is accessed again, it can be read directly from the PageCache, which accelerates file reads and writes. The PageCache also helps implement file-based inter-process communication (IPC).

5. FD management: An FD uniquely identifies an open file or directory in an OS. OpenHarmony provides 896 FDs in the following categories:

   - 512 file descriptors
   - 128 socket descriptors
   - 256 message queue descriptors

   In the OpenHarmony kernel, the FDs of different processes are isolated: a process can access only its own FDs (see the usage sketch after this list). The FDs of all processes are mapped to a global FD table for unified allocation and management. A single process can hold at most 256 file descriptors.

6. Mount point management: The OpenHarmony kernel manages all mount points in a linked list. The mount point structure records all Vnodes in the mounted partition. When a partition is unmounted, all Vnodes in the partition are released.
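The dispatch and caching behavior described in items 1 and 2 can be illustrated with a minimal C sketch. All names below (**SketchVnode**, **SketchVnodeOps**, **SketchLookup**, and so on) are illustrative assumptions rather than actual OpenHarmony kernel definitions, and the fixed-size array stands in for the real hash-plus-LRU Vnode management.

```c
/*
 * Minimal sketch of VFS function-pointer dispatch and Vnode caching.
 * All names are illustrative; they do not match the actual OpenHarmony
 * kernel definitions, and the fixed array below stands in for the real
 * hash-plus-LRU Vnode management.
 */
#include <stddef.h>
#include <stdio.h>
#include <string.h>

struct SketchVnode;

/* Each concrete file system supplies its own operation table. */
struct SketchVnodeOps {
    int (*lookup)(struct SketchVnode *parent, const char *name,
                  struct SketchVnode **out);
};

struct SketchVnode {
    const struct SketchVnodeOps *ops; /* set by the concrete file system */
    const char *name;
    struct SketchVnode *parent;
};

/* Toy Vnode cache: linear scan instead of a hash list, no LRU eviction. */
#define SKETCH_CACHE_SIZE 8
static struct SketchVnode *g_cache[SKETCH_CACHE_SIZE];
static size_t g_cached;

static struct SketchVnode *CacheFind(struct SketchVnode *parent,
                                     const char *name)
{
    for (size_t i = 0; i < g_cached; i++) {
        if (g_cache[i]->parent == parent &&
            strcmp(g_cache[i]->name, name) == 0) {
            return g_cache[i];
        }
    }
    return NULL;
}

/* VFS-level lookup: try the cache first, then dispatch to the file system. */
int SketchLookup(struct SketchVnode *parent, const char *name,
                 struct SketchVnode **out)
{
    struct SketchVnode *vnode = CacheFind(parent, name);
    if (vnode == NULL) {
        int ret = parent->ops->lookup(parent, name, &vnode); /* dispatch */
        if (ret != 0) {
            return ret;
        }
        if (g_cached < SKETCH_CACHE_SIZE) { /* real code evicts via LRU */
            g_cache[g_cached++] = vnode;
        }
    }
    *out = vnode;
    return 0;
}

/* Toy "file system": two fixed children under a single root Vnode. */
static struct SketchVnode g_root;
static struct SketchVnode g_children[2];

static int ToyLookup(struct SketchVnode *parent, const char *name,
                     struct SketchVnode **out)
{
    for (size_t i = 0; i < 2; i++) {
        if (g_children[i].parent == parent &&
            strcmp(g_children[i].name, name) == 0) {
            *out = &g_children[i];
            return 0;
        }
    }
    return -1; /* not found in this file system */
}

static const struct SketchVnodeOps g_toyOps = { ToyLookup };

int main(void)
{
    g_root.ops = &g_toyOps;
    g_children[0] = (struct SketchVnode){ &g_toyOps, "bin", &g_root };
    g_children[1] = (struct SketchVnode){ &g_toyOps, "storage", &g_root };

    struct SketchVnode *v = NULL;
    SketchLookup(&g_root, "bin", &v); /* miss: dispatched to ToyLookup */
    SketchLookup(&g_root, "bin", &v); /* hit: served from the Vnode cache */
    printf("found: %s\n", (v != NULL) ? v->name : "(none)");
    return 0;
}
```

Because the VFS always calls through the operation table, adding a new file system only requires supplying a new table; the VFS-level lookup code stays unchanged.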
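From user space, these mechanisms stay hidden behind the standard interface. The following sketch assumes a writable partition is mounted at **/storage** (the path **/storage/vfs\_demo.txt** is hypothetical; adjust it for your device) and exercises the same **open**, **write**, **read**, and **close** calls regardless of the underlying file system; the returned FD is the process-local descriptor described in item 5.

```c
/*
 * User-space sketch: the same POSIX-style calls work no matter which
 * file system backs the partition, because the VFS dispatches them
 * through the Vnode, mount point, and file operation tables. The path
 * /storage/vfs_demo.txt is an assumption; use any writable mounted
 * partition on your device.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char buf[32] = {0};
    const char *msg = "hello vfs";

    /* open() returns a process-local FD allocated from the global FD table. */
    int fd = open("/storage/vfs_demo.txt", O_CREAT | O_RDWR, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    (void)write(fd, msg, strlen(msg));
    (void)lseek(fd, 0, SEEK_SET);         /* rewind before reading back */
    (void)read(fd, buf, sizeof(buf) - 1);
    printf("read back: %s\n", buf);
    close(fd);                            /* releases the FD for reuse */
    return 0;
}
```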
## Development Guidelines

### Available APIs

In the following table, "√" indicates that the file system supports the API, and "×" indicates that it does not.

**Table 1** File system APIs