This kernel coding specification is developed based on general industry programming specifications. It defines the programming styles for kernel developers to follow.
## Principle
Overall principle:
...
...
Comply with this specification in most cases. When third-party open-source code needs to be modified or a large number of open-source APIs are used, follow the specifications applied to that third-party open-source code. Apply this specification flexibly, guided by its general principles.
You are advised to divide directories by function module and then define the header file directory and source file directory for each module.
Unless otherwise specified, use lowercase letters separated by underscores \(\_\) for directory names and file names.
## **Naming**
The CamelCase style is recommended. The rules are as follows:
...
...
OsTaskScan
OsMuxInit
```
## Comments
Generally, clear software architecture and appropriate symbol naming improve code readability.
...
...
#define CONST_B 2000 /* Const B */
```
## **Format**
Indent each level of code with four spaces rather than tabs \('\\t'\) for better readability.
...
...
sz = sizeof(int*); // OK: There is no variable on the right, and * follows the data type.
```
## Macros
If a function-like macro can be replaced by a function, use a function instead. Use inline functions for performance-critical scenarios.
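For instance, a minimal sketch of the preferred pattern \(names are illustrative\):
```
/* Function-like macro: no type checking, and the argument is evaluated twice. */
#define SQUARE(x) ((x) * (x))

/* Preferred: a static inline function keeps type checking and single evaluation,
   and the compiler can still inline it in performance-critical paths. */
static inline int Square(int x)
{
    return x * x;
}
```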
...
...
It is recommended that header files be included by stability in the following sequence: header file corresponding to the source code, C standard library, operating system library, platform library, project public library, and other dependencies.
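A sketch of this ordering \(the file names other than the LiteOS header are hypothetical\):
```
#include "foo.h"          /* 1. Header corresponding to this source file (foo.c) */
#include <stdio.h>        /* 2. C standard library */
#include <string.h>
#include "los_task.h"     /* 3. Operating system library */
#include "plat_uart.h"    /* 4. Platform library */
#include "proj_utils.h"   /* 5. Project public library and other dependencies */
```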
## Data Types
You are advised to use the basic data types defined in **los\_compiler.h**. For example, define the 32-bit unsigned integer as **UINT32**.
## Variables
Avoid large stack allocations, such as large local arrays.
...
...
Do not return the address of a local variable outside its scope.
A variable that points to a resource handle or descriptor is assigned a new value immediately after the resource is released. If the scope of the variable ends immediately, no new value needs to be assigned. Variables that point to resource handles or descriptors include pointers, file descriptors, socket descriptors, and other variables that point to resources.
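A minimal illustration of this rule with a heap pointer \(the same applies to file or socket descriptors\):
```
#include <stdlib.h>

void Demo(void)
{
    char *buf = (char *)malloc(64);
    if (buf == NULL) {
        return;
    }
    /* ... use buf ... */
    free(buf);
    buf = NULL; /* Assign a new value immediately after the resource is released. */
}
```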
## Assertions
Assertions must be defined using macros and take effect only in the debugging version.
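A sketch of such a macro; the switch name **DEBUG\_VERSION** is hypothetical, and the kernel's own assertion macro may differ:
```
#include <stdio.h>

#ifdef DEBUG_VERSION
#define KERNEL_ASSERT(cond) \
    do { \
        if (!(cond)) { \
            printf("Assert failed: %s %s:%d\n", #cond, __FILE__, __LINE__); \
            while (1) { } \
        } \
    } while (0)
#else
#define KERNEL_ASSERT(cond) ((void)0) /* Compiled out in the release version. */
#endif
```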
...
...
Do not change the running environment in an assertion.
An assertion is used to check only one error.
## Functions
The validity of data sent from a process to another process and data sent from an application to the kernel must be verified. The verification includes but is not limited to the following:
A doubly linked list is a linked data structure that consists of a set of sequentially linked records called nodes. Each node contains a pointer to the previous node and a pointer to the next node in the sequence. The head pointer of the list is unique.
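A node of such a list can be sketched as follows; the kernel's own definition in **los\_list.h** \(**LOS\_DL\_LIST**\) follows the same idea, though the field names here are illustrative:
```
typedef struct DoublyLinkedNode {
    struct DoublyLinkedNode *prev; /* Pointer to the previous node */
    struct DoublyLinkedNode *next; /* Pointer to the next node */
} DoublyLinkedNode;
```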
...
...
</tbody>
</table>
## How to Develop
The typical development process of the doubly linked list is as follows:
...
...
3. Delete a node.
4. Check whether the operation is performed successfully.
The Cortex Microcontroller Software Interface Standard \([CMSIS](https://developer.arm.com/tools-and-software/embedded/cmsis)\) is a vendor-independent hardware abstraction layer for microcontrollers based on Arm Cortex processors. Of the CMSIS components, the Real Time Operating System \(RTOS\) defines a set of universal and standardized APIs to reduce the dependency of application developers on specific RTOS and facilitate software porting and reuse. The CMSIS provides CMSIS-RTOS v1 and CMSIS-RTOS v2. The OpenHarmony LiteOS-M supports only the implementation of CMSIS-RTOS v2.
## Development Guidelines
### Available APIs
The following table describes CMSIS-RTOS v2 APIs. For more details about the APIs, see the API reference.
...
...
| | osMessageQueuePut | Puts the message into the queue or times out if the queue is full.|
| | osMessageQueueReset | Initializes the message queue to the empty state (not implemented yet).|
### How to Develop
The CMSIS-RTOS v2 component can be provided as a library \(shown in the figure\) or source code. By adding the CMSIS-RTOS v2 component \(typically configuration files\), you can implement RTOS capabilities on CMSIS-based applications. You only need to include the **cmsis\_os2.h** header file. RTOS APIs can then be called to process RTOS kernel-related events. You do not need to recompile the source code when the kernel is replaced.
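A minimal sketch of this usage, assuming default thread attributes and the standard CMSIS-RTOS v2 calls:
```
#include "cmsis_os2.h"

/* Hypothetical application thread. */
static void AppThread(void *argument)
{
    (void)argument;
    for (;;) {
        /* ... application work ... */
        osDelay(100U); /* Block this thread for 100 kernel ticks. */
    }
}

int main(void)
{
    osKernelInitialize();               /* Initialize the RTOS kernel. */
    osThreadNew(AppThread, NULL, NULL); /* Create the thread with default attributes. */
    osKernelStart();                    /* Start scheduling; does not return on success. */
    for (;;) { }
}
```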
...
...
![](figures/how-to-develop.png)
### Development Example
The OpenHarmony kernel uses the **musl libc** library and self-developed APIs and supports the Portable Operating System Interface \(POSIX\). You can develop components and applications running on the kernel based on the POSIX standard.
## Development Guidelines
### Available APIs
**Table 1** Available APIs
...
...
| | #include <libc.h> | int libc_get_version(void); | Obtains the libc version.|
### Important Notes
Error codes
...
...
| EOVERFLOW | 75 | Value too large for defined data type |
| EMSGSIZE | 90 | Message too long |
### Development Example
An interrupt is a signal to the processor emitted by hardware or software indicating an event that needs immediate attention. An interrupt alerts the processor to a high-priority condition requiring the interruption of the current code being executed by the processor. When a hardware interrupt is triggered, the interrupt handler is located based on the interrupt ID and then executed to handle the interrupt.
...
...
An area for storing interrupt vectors. It stores the mapping between interrupt vectors and interrupt IDs.
## Available APIs
The following table describes APIs available for the OpenHarmony LiteOS-M interrupt module. For more details about the APIs, see the API reference.
...
...
| Triggering an interrupt| LOS_HwiTrigger | Triggers an interrupt (simulates an external interrupt by writing to the related register of the interrupt controller).|
| Clearing interrupt register status| LOS_HwiClear | Clears the status bit of the interrupt register corresponding to the interrupt ID. The implementation of this API depends on the interrupt controller version. It is optional.|
## How to Develop
1. Call **LOS_HwiCreate** to create an interrupt.
2. Call **LOS_HwiTrigger** to trigger the interrupt.
3. Call **LOS_HwiDelete** to delete the specified interrupt. Use this API based on actual requirements.
>- Configure the maximum number of supported interrupts and the number of configurable interrupt priorities based on the specific hardware.
>- If the interrupt handler takes a long time, the CPU cannot respond to other interrupt requests in a timely manner.
>- Functions that trigger **LOS\_Schedule** cannot be directly or indirectly executed during the interrupt response process.
>- The input parameter of **LOS\_IntRestore\(\)** must be the return value of **LOS\_IntLock\(\)**, that is, the current program status register \(CPSR\) value before the interrupt is disabled.
>- Interrupts 0 to 15 in the Cortex-M series processors are for internal use. You are advised not to apply for or create them.
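A sketch of the development steps above; the interrupt number and priority are hypothetical, and the exact signatures should be checked against **los\_interrupt.h**:
```
#include "los_interrupt.h"

#define DEMO_IRQ_NUM  16 /* Hypothetical interrupt ID; 0 to 15 are reserved on Cortex-M. */
#define DEMO_IRQ_PRIO 3  /* Hypothetical priority. */

STATIC VOID DemoIrqHandler(VOID)
{
    /* Keep the handler short; defer long processing to a task. */
}

VOID DemoHwi(VOID)
{
    if (LOS_HwiCreate(DEMO_IRQ_NUM, DEMO_IRQ_PRIO, 0, (HWI_PROC_FUNC)DemoIrqHandler, NULL) != LOS_OK) {
        return;
    }
    (VOID)LOS_HwiTrigger(DEMO_IRQ_NUM);      /* Simulate an external interrupt. */
    (VOID)LOS_HwiDelete(DEMO_IRQ_NUM, NULL); /* Delete the interrupt when no longer needed. */
}
```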
## Development Example
</tbody>
</table>
## How to Develop
The typical event development process is as follows:
...
...
>- When an event is read or written, the 25th bit of the event is reserved and cannot be set.
>- Repeated writes of the same event are treated as one write.
## Development Example
### Example Description
In this example, run the **Example\_TaskEntry** task to create the **Example\_Event** task. Run the **Example\_Event** task to read an event to trigger task switching. Run the **Example\_TaskEntry** task to write an event. You can understand the task switching during event operations based on the sequence in which logs are recorded.
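The full sample is not reproduced here; a condensed sketch of the read/write pattern it describes, using the standard event APIs \(task creation omitted, event flag value is illustrative\):
```
#include "los_event.h"

#define EVENT_WAIT 0x00000001 /* Hypothetical event flag; bit 25 is reserved. */

STATIC EVENT_CB_S g_exampleEvent; /* Initialize once with LOS_EventInit(&g_exampleEvent). */

/* Entry of the task that reads the event; it blocks here, triggering a task switch. */
STATIC VOID ExampleEventEntry(VOID)
{
    (VOID)LOS_EventRead(&g_exampleEvent, EVENT_WAIT,
                        LOS_WAITMODE_OR | LOS_WAITMODE_CLR, LOS_WAIT_FOREVER);
}

/* Entry of the task that writes the event and wakes the reader. */
STATIC VOID ExampleTaskEntry(VOID)
{
    (VOID)LOS_EventWrite(&g_exampleEvent, EVENT_WAIT);
}
```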
A mutual exclusion \(mutex\) is a special binary semaphore used for exclusive access to shared resources.
...
...
When a task accesses a resource protected by a mutex, the mutex is locked. Other tasks are blocked until the task releases the mutex. The mutex allows only one task to access the shared resource at a time, ensuring the integrity of operations on the shared resource.
**Figure 1** Mutex working mechanism for mini systems
>- Two tasks cannot lock the same mutex. If a task attempts to lock a mutex held by another task, the task will be blocked until the mutex is unlocked.
>- Mutexes cannot be used in the interrupt service program.
>- When using the LiteOS-M kernel, OpenHarmony must ensure real-time task scheduling and avoid long-time task blocking. Therefore, a mutex must be released as soon as possible after use.
>- When a mutex is held by a task, the task priority cannot be changed by using APIs such as **LOS\_TaskPriSet**.
## Development Example
### Example Description
This example implements the following:
...
...
3. **Example\_MutexTask1** requests a mutex in scheduled block mode, and waits for 10 ticks. Because the mutex is still held by **Example\_MutexTask2**, **Example\_MutexTask1** is suspended. After 10 ticks, **Example\_MutexTask1** is woken up and attempts to request a mutex in permanent block mode. **Example\_MutexTask1** is suspended because the mutex is still held by **Example\_MutexTask2**.
4. After 100 ticks, **Example\_MutexTask2** is woken up and releases the mutex, and then **Example\_MutexTask1** is woken up. **Example\_MutexTask1** acquires the mutex and then releases the mutex. At last, the mutex is deleted.
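A condensed sketch of the lock/unlock pattern used by these tasks, assuming the standard mutex APIs \(timeouts and task creation omitted\):
```
#include "los_mux.h"

STATIC UINT32 g_exampleMux; /* Mutex handle created with LOS_MuxCreate(&g_exampleMux). */

STATIC VOID ExampleMutexAccess(VOID)
{
    (VOID)LOS_MuxPend(g_exampleMux, LOS_WAIT_FOREVER); /* Lock before using the shared resource. */
    /* ... access the shared resource ... */
    (VOID)LOS_MuxPost(g_exampleMux);                   /* Release the mutex as soon as possible. */
}
```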
>- The maximum number of queues supported by the system is the total number of queue resources of the system, not the number of queue resources available to users. For example, if the system software timer occupies one more queue resource, the number of queue resources available to users decreases by one.
>- The input parameters queue name and flags passed when a queue is created are reserved for future use.
>- The input parameter **timeOut** in the queue interface function is relative time.
...
...
>- If the input parameter **bufferSize** in **LOS\_QueueReadCopy** is less than the length of the message, the message will be truncated.
>- **LOS\_QueueWrite**, **LOS\_QueueWriteHead**, and **LOS\_QueueRead** are called to manage data addresses, which means that the actual data read or written is pointer data. Therefore, before using these APIs, ensure that the message node size is the pointer length during queue creation, to avoid waste and read failures.
## Development Example
### Example Description
Create a queue and two tasks. Enable task 1 to call the queue write API to send messages, and enable task 2 to receive messages by calling the queue read API.
...
...
4. Enable messages to be received in task 2 by calling **RecvEntry**.
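A condensed sketch of these steps, assuming the standard queue API signatures; as noted above, the message size is the pointer length because **LOS\_QueueWrite** and **LOS\_QueueRead** pass addresses:
```
#include "los_queue.h"

STATIC UINT32 g_demoQueueId;
STATIC CHAR g_msg[] = "hello";

VOID CreateQueue(VOID)
{
    /* 5 nodes, each sized to hold one pointer; name and flags are reserved parameters. */
    (VOID)LOS_QueueCreate("queue", 5, &g_demoQueueId, 0, sizeof(CHAR *));
}

VOID SendEntry(VOID)
{
    (VOID)LOS_QueueWrite(g_demoQueueId, g_msg, sizeof(CHAR *), 0); /* Write the address of the message. */
}

VOID RecvEntry(VOID)
{
    CHAR *recv = NULL;
    (VOID)LOS_QueueRead(g_demoQueueId, &recv, sizeof(CHAR *), LOS_WAIT_FOREVER);
}
```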
A semaphore is a mechanism for implementing communication between tasks. It provides synchronization between tasks or exclusive access to shared resources.
...
...
The usage of the counter value varies with the function of the semaphore.
- If the semaphore is used as a mutex, the counter value indicates the number of units of the shared resources available and its initial value cannot be **0**. The semaphore must be acquired before the shared resource is used, and released after the resource is used. When all shared resources are used, the semaphore counter is reduced to **0** and the tasks that need to obtain the semaphores will be blocked. This ensures exclusive access to shared resources. In addition, when the number of shared resources is **1**, a binary semaphore \(similar to the mutex mechanism\) is recommended.
- If the semaphore is used for synchronization, the initial semaphore counter value is **0**. When a task fails to acquire the semaphore, it will be blocked and enters Ready or Running state only when the semaphore is released. In this way, task synchronization is implemented.
## Working Principles
### Semaphore Control Block
```
/**
...
...
} LosSemCB;
```
### Working Principles
Initializing semaphores: Request memory for the semaphores configured \(the number of semaphores can be configured in the **LOSCFG\_BASE\_IPC\_SEM\_LIMIT** macro by users\), set all semaphores to the unused state, and add them to the linked list for unused semaphores.
...
...
**Figure 1** Semaphore working mechanism for mini systems
>As interrupts cannot be blocked, semaphores cannot be requested in block mode for interrupts.
## Development Example
### Example Description
This example implements the following:
...
...
4. After 20 ticks, **ExampleSemTask2** is woken up and releases the semaphore. **ExampleSemTask1** acquires the semaphore and is scheduled to run. When **ExampleSemTask1** is complete, it releases the semaphore.
5. Task **ExampleSem** is woken up after 400 ticks and deletes the semaphore.
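A condensed sketch of the synchronization pattern in this example, assuming the standard semaphore APIs \(task creation and timing omitted\):
```
#include "los_sem.h"

STATIC UINT32 g_demoSemId;

/* Creation and deletion, typically done by a control task such as ExampleSem. */
VOID ExampleSemInit(VOID)
{
    (VOID)LOS_SemCreate(0, &g_demoSemId); /* Initial count 0: used for task synchronization. */
}

/* Waiting task: blocks until another task posts the semaphore. */
STATIC VOID ExampleSemTask1(VOID)
{
    (VOID)LOS_SemPend(g_demoSemId, LOS_WAIT_FOREVER);
    /* ... access the shared resource ... */
    (VOID)LOS_SemPost(g_demoSemId);
}

/* Posting task: releases the semaphore and wakes the waiting task. */
STATIC VOID ExampleSemTask2(VOID)
{
    (VOID)LOS_SemPost(g_demoSemId);
}
```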
Memory management, one of the core modules of the OS, manages the memory resources of the system. Memory management primarily involves initializing, allocating, and releasing memory.
## Working Principles
Dynamic memory management allows memory blocks of any size to be allocated from a large contiguous memory \(memory pool or heap memory\) configured in the system based on user demands when memory resources are sufficient. The memory block can be released for further use when not required. Compared with static memory management, dynamic memory management allows memory allocation on demand but causes fragmentation of memory.
The dynamic memory of the OpenHarmony LiteOS-M has optimized the memory space partitioning based on the Two-Level Segregate Fit \(TLSF\) algorithm to achieve higher performance and minimize fragmentation. [Figure 1](#fig1179964042818) shows the core algorithm of the dynamic memory.
**Figure 1** Dynamic memory algorithm for mini systems<a name="fig1179964042818"></a>
Multiple free lists are used for management based on the size of the free memory block. The free memory blocks are divided into two parts: \[4, 127\] and \[2<sup>7</sup>, 2<sup>31</sup>\], as indicated by the size class in [Figure 1](#fig1179964042818).
1. The memory in the range of \[4, 127\]\(lower part in [Figure 1](#fig1179964042818)\) is divided into 31 parts. The size of the memory block corresponding to each part is a multiple of 4 bytes. Each part corresponds to a free list and a bit that indicates whether the free list is empty. The value **1** indicates that the free list is not empty. There are 31 bits corresponding to the 31 memory parts in the range of \[4, 127\].
2. The memory greater than 127 bytes is managed in power of two increments. The size of each range is \[2^n, 2^\(n+1\)-1\], where n is an integer in \[7, 30\]. This range is divided into 24 parts, each of which is further divided into 8 second-level \(L2\) ranges, as shown in Size Class and Size SubClass in the upper part of [Figure 1](#fig1179964042818). Each L2 range corresponds to a free list and a bit that indicates whether the free list is empty. There are a total of 192 \(24 x 8\) L2 ranges, corresponding to 192 free lists and 192 bits.
For example, insert 40-byte free memory to a free list. The 40-byte free memory corresponds to the 10th free list in the range of \[40, 43\], and the 10th bit indicates the use of the free list. The system inserts the 40-byte free memory to the 10th free list and determines whether to update the bitmap flag. When 40-byte memory is requested, the system obtains the free list corresponding to the memory block of the requested size based on the bitmap flag, and then obtains a free memory node from the free list. If the size of the allocated node is greater than the memory requested, the system splits the node and inserts the remaining node to the free list. If 580-byte free memory needs to be inserted to a free list, the 580-byte free memory corresponds to the 47th \(31 + 2 x 8\) free list in L2 range \[2^9, 2^9+2^6\], and the 47th bit indicates the use of the free list. The system inserts the 580-byte free memory to the 47th free list and determines whether to update the bitmap flag. When 580-byte memory is requested, the system obtains the free list corresponding to the memory block of the requested size based on the bitmap flag, and then obtains a free memory node from the free list. If the size of the allocated node is greater than the memory requested, the system splits the node and inserts the remaining node to the free list. If the corresponding free list is empty, the system checks for a free list meeting the requirements in a larger memory range. In actual application, the system can locate the free list that meets the requirements at a time.
[Figure 2](#fig10997102213017) shows the memory management structure.
**Figure 2** Dynamic memory management structure for mini systems<a name="fig10997102213017"></a>
The memory pool header contains the memory pool information, bitmap flag array, and free list array. The memory pool information includes the start address of the memory pool, total size of the heap memory, and attributes of the memory pool. The bitmap flag array consists of seven 32-bit unsigned integers. Each bit indicates whether the free list is inserted with free memory block nodes. The free list contains information about 223 free memory head nodes. The free memory head node information contains a memory node header and information about the previous and next nodes in the free list.
- Memory pool nodes
There are three types of nodes: free node, used node, and end node. Each memory node maintains the size and use flag of the memory node and a pointer to the previous memory node in the memory pool. The free nodes and used nodes have a data area, but the end node has no data area.
The off-chip physical memory needs to be used because the on-chip RAMs of some chips cannot meet requirements. The OpenHarmony LiteOS-M kernel can logically combine multiple discontiguous memory regions so that users are unaware of the discontiguous memory regions in the underlying layer. The OpenHarmony LiteOS-M kernel memory module inserts discontiguous memory regions into a free list as free memory nodes and marks the discontiguous parts as virtual memory nodes that have been used. In this way, the discontinuous memory regions are logically combined as a unified memory pool. [Figure 3](#fig18471556115917) shows how the discontiguous memory regions are logically integrated.
The discontiguous memory regions are integrated into a unified memory pool as follows:
1. Call **LOS\_MemInit** to initialize the first memory region of multiple discontiguous memory regions.
2. <a name="li26042441209"></a>Obtain the start address and length of the next memory region, and calculate the **gapSize** between the current memory region and its previous memory region. The **gapSize** is considered as a used virtual node.
3. Set the size of the end node of the previous memory region to the sum of **gapSize** and **OS\_MEM\_NODE\_HEAD\_SIZE**.
4. <a name="li10604194419014"></a>Divide the current memory region into a free memory node and an end node, insert the free memory node into the free list, and set the link relationship between the nodes.
5. Repeat [2](#li26042441209) to [4](#li10604194419014) to integrate more discontiguous memory regions.
## Development Guidelines
### When to Use
Dynamic memory management allocates and manages memory resources requested by users dynamically. It is a good choice when users need memory blocks of different sizes. You can call the dynamic memory allocation function of the OS to request a memory block of the specified size. You can call the dynamic memory release function to release the memory at any time.
### Available APIs
The following table describes APIs available for OpenHarmony LiteOS-M dynamic memory management. For more details about the APIs, see the API reference.
>- The dynamic memory module manages memory through control block structures, which consume extra memory. Therefore, the actual memory space available to users is less than the value of **OS\_SYS\_MEM\_SIZE**.
>- The **LOS\_MemAllocAlign** and **LOS\_MemMallocAlign** APIs consume extra memory for memory alignment, which may cause memory loss. When the memory used for alignment is freed up, the lost memory will be reclaimed.
>- The discontiguous memory regions passed by the **LosMemRegion** array to the **LOS\_MemRegionsAdd** API must be sorted in ascending order by memory start address in memory regions, and the memory regions cannot overlap.
### How to Develop
The typical development process of dynamic memory is as follows:
1. Call the **LOS\_MemInit** API to initialize a memory pool.
    After a memory pool is initialized, a memory pool control header and an end node are generated, and the remaining memory is marked as free nodes. The end node is the last node in the memory pool, and its size is **0**.
2. Call the **LOS\_MemAlloc** API to allocate dynamic memory of any size.
    The system checks whether the dynamic memory pool has free memory blocks greater than the requested size. If yes, the system allocates a memory block and returns the pointer to it. If no, the system returns NULL. If the allocated memory block is greater than the requested size, the system splits the block and inserts the remaining part into the free list.
3. Call the **LOS\_MemFree** API to release dynamic memory.
    The released memory block can be reused. When **LOS\_MemFree** is called, the memory block is reclaimed and marked as a free node. When memory blocks are reclaimed, adjacent free nodes are automatically merged.
### Development Example
This example implements the following:
1. Initialize a dynamic memory pool.
2. Allocate a memory block from the dynamic memory pool.
3. Store a piece of data in the memory block.
4. Print the data in the memory block.
5. Release the memory block.
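A minimal sketch of these steps, assuming a statically defined pool and the standard **LOS\_MemInit**, **LOS\_MemAlloc**, and **LOS\_MemFree** signatures \(pool size is hypothetical\):
```
#include <stdio.h>
#include <string.h>
#include "los_memory.h"

#define DEMO_POOL_SIZE 2048 /* Hypothetical pool size. */
STATIC UINT8 g_demoPool[DEMO_POOL_SIZE];

VOID DemoDynMem(VOID)
{
    if (LOS_MemInit(g_demoPool, DEMO_POOL_SIZE) != LOS_OK) { /* 1. Initialize the pool. */
        return;
    }
    CHAR *buf = (CHAR *)LOS_MemAlloc(g_demoPool, 32);        /* 2. Allocate a 32-byte block. */
    if (buf == NULL) {
        return;
    }
    (VOID)memcpy(buf, "hello", 6);                           /* 3. Store data in the block. */
    printf("%s\n", buf);                                     /* 4. Print the data. */
    (VOID)LOS_MemFree(g_demoPool, buf);                      /* 5. Release the block. */
}
```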
## Working Principles
The static memory is a static array. The block size in the static memory pool is set during initialization and cannot be changed after initialization.
The static memory pool consists of a control block **LOS\_MEMBOX\_INFO** and several memory blocks **LOS\_MEMBOX\_NODE** of the same size. The control block is located at the head of the memory pool and used for memory block management. It contains the memory block size \(**uwBlkSize**\), number of memory blocks \(**uwBlkNum**\), number of allocated memory blocks \(**uwBlkCnt**\), and free list \(**stFreeList**\). Memory is allocated and released by block size. Each memory block contains the pointer **pstNext** that points to the next memory block.
## Development Guidelines
### When to Use
Use static memory allocation to obtain memory blocks of the fixed size. When the memory is no longer required, release the static memory.
### Available APIs
The following table describes APIs available for OpenHarmony LiteOS-M static memory management. For more details about the APIs, see the API reference.
>The number of memory blocks in the memory pool after initialization is not equal to the total memory size divided by the memory block size. The reason is the control block of the memory pool and the control header of each memory block have memory overheads. When setting the total memory size, you need to consider these factors.
### How to Develop
The typical development process of static memory is as follows:
1. Plan a memory space as the static memory pool.
2. Call the **LOS\_MemboxInit** API to initialize the static memory pool.
    During initialization, the memory space specified by the input parameter is divided into multiple blocks \(the number of blocks depends on the total static memory size and the block size\). All memory blocks are inserted into the free list, and the control header is placed at the beginning of the memory.
3. Call the **LOS\_MemboxAlloc** API to allocate static memory.
    The system allocates the first free memory block from the free list and returns the start address of this memory block.
4. Call the **LOS\_MemboxClr** API.
    Clear the memory block corresponding to the address contained in the input parameter.
5. Call the **LOS\_MemboxFree** API.
    Add the memory block back to the free list.
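A minimal sketch of these steps with a statically planned pool; sizes are hypothetical, and the standard membox signatures are assumed:
```
#include "los_membox.h"

#define BOX_POOL_SIZE 1024 /* Hypothetical pool size. */
#define BOX_BLK_SIZE  32   /* Hypothetical block size. */
STATIC UINT32 g_boxPool[BOX_POOL_SIZE / sizeof(UINT32)]; /* 1. Plan the static memory pool. */

VOID DemoMembox(VOID)
{
    if (LOS_MemboxInit(g_boxPool, BOX_POOL_SIZE, BOX_BLK_SIZE) != LOS_OK) { /* 2. Initialize. */
        return;
    }
    VOID *blk = LOS_MemboxAlloc(g_boxPool); /* 3. Allocate one fixed-size block. */
    if (blk == NULL) {
        return;
    }
    LOS_MemboxClr(g_boxPool, blk);          /* 4. Clear the block. */
    (VOID)LOS_MemboxFree(g_boxPool, blk);   /* 5. Return the block to the free list. */
}
```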
### Development Example
The software timer is a software-simulated timer based on system tick interrupts. When the preset tick counter value has elapsed, the user-defined callback will be invoked. The timing precision is related to the cycle of the system tick clock.
...
...
The software timer supports the following functions:
- Deleting a software timer
- Obtaining the number of remaining ticks of a software timer
## Working Principles
The software timer is a system resource. When modules are initialized, a contiguous section of memory is allocated for software timers. The maximum number of timers supported by the system is configured by the **LOSCFG\_BASE\_CORE\_SWTMR\_LIMIT** macro in **los\_config.h**.
Software timers use a queue and a task resource of the system. The software timers are triggered based on the First In First Out \(FIFO\) rule. A timer with a shorter value is always closer to the queue head than a timer with a longer value, and is preferentially triggered.
...
...
When the tick interrupt handling function is complete, the software timer task \(with the highest priority\) is woken up. In this task, the timeout callback function for the recorded timer is called.
- OS\_SWTMR\_STATUS\_CREATED
    The timer is created but not started, or the timer is stopped. When **LOS\_SwtmrCreate** is called for a timer that is not in use or **LOS\_SwtmrStop** is called for a newly started timer, the timer changes to this state.
- OS\_SWTMR\_STATUS\_TICKING
    The timer is running \(counting\). When **LOS\_SwtmrStart** is called for a newly created timer, the timer enters this state.
- Periodic timer: This type of timer periodically triggers timer events until it is manually stopped.
- One-shot timer deleted by calling an API
## Available APIs
The following table describes APIs available for the OpenHarmony LiteOS-M software timer module. For more details about the APIs, see the API reference.
>- Avoid too many operations in the callback function of the software timer. Do not use APIs or perform operations that may cause task suspension or blocking.
>- The software timers use a queue and a task resource of the system. The priority of the software timer tasks is set to **0** and cannot be changed.
>- The number of software timer resources that can be configured in the system is the total number of software timer resources available to the entire system, not the number of software timer resources available to users. For example, if the system software timer occupies one more resource, the number of software timer resources available to users decreases by one.
>- If a one-shot software timer is created, the system automatically deletes the timer and reclaims resources after the timer times out and the callback function is executed.
>- For a one-shot software timer that will not be automatically deleted after expiration, you need to call **LOS\_SwtmrDelete** to delete it and reclaim the timer resource to prevent resource leakage.
## Development Example
### Example Description
The following programming example demonstrates how to:
1. Create, start, delete, pause, and restart a software timer.
2. Use a one-shot software timer and a periodic software timer.
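The full sample is not reproduced here; a condensed sketch of the periodic-timer part \(interval in ticks; the exact signatures should be checked against **los\_swtmr.h**\):
```
#include "los_swtmr.h"

STATIC VOID TimerCallback(UINT32 arg)
{
    (VOID)arg; /* Keep this short; do not call blocking APIs inside a timer callback. */
}

VOID DemoSwtmr(VOID)
{
    UINT32 timerId = 0;
    /* Periodic timer that fires every 100 ticks. */
    if (LOS_SwtmrCreate(100, LOS_SWTMR_MODE_PERIOD, TimerCallback, &timerId, 0) != LOS_OK) {
        return;
    }
    (VOID)LOS_SwtmrStart(timerId);
    /* ... let the timer run ... */
    (VOID)LOS_SwtmrStop(timerId);
    (VOID)LOS_SwtmrDelete(timerId);
}
```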
From the perspective of the operating system, tasks are the minimum running units that compete for system resources. They can use or wait for CPUs, use system resources such as memory, and run independently.
...
...
- A task represents a thread.
- The preemptive scheduling mechanism is used for tasks. High-priority tasks can interrupt low-priority tasks. Low-priority tasks can be scheduled only after high-priority tasks are blocked or complete.
- Time slice round-robin is used to schedule tasks with the same priority.
- A total of 32 \(**0** to **31**\) priorities are defined. **0** is the highest priority, and **31** is the lowest.
When a task is created, the system initializes the task stack and presets the context. The system places the task entry function in the corresponding position so that the function is executed when the task enters the running state for the first time.
## Available APIs
The following table describes APIs available for the OpenHarmony LiteOS-M task module. For more details about the APIs, see the API reference.
**Table 1** APIs of the task management module
| Category| API| Description|
| -------- | -------- | -------- |
...
...
>- Running idle tasks reclaims the TCBs and stacks in the to-be-recycled linked list.
>- The task name is a pointer without memory space allocated. When setting the task name, do not assign the local variable address to the task name pointer.
>- The task stack size is 8-byte aligned. Follow the "nothing more and nothing less" principle while determining the task stack size.
>- A running task cannot be suspended if task scheduling is locked.
>- Idle tasks and software timer tasks cannot be suspended or deleted.
>- In an interrupt handler or when a task is locked, the operation of calling **LOS\_TaskDelay** fails.
>- Locking task scheduling does not disable interrupts. Tasks can still be interrupted while task scheduling is locked.
>- Locking task scheduling must be used together with unlocking task scheduling.
>- Task scheduling may occur while a task priority is being set.
>- The maximum number of tasks that can be set for the operating system is the total number of tasks of the operating system, not the number of tasks available to users. For example, if the system software timer occupies one more task resource, the number of task resources available to users decreases by one.
>- **LOS\_CurTaskPriSet** and **LOS\_TaskPriSet** cannot be used in interrupts or used to modify the priorities of software timer tasks.
>- If the task corresponding to the task ID sent to **LOS\_TaskPriGet** has not been created or the task ID exceeds the maximum number of tasks, **-1** will be returned.
>- Resources such as a mutex or a semaphore allocated to a task must have been released before the task is deleted.
## Development Example
This example describes the priority-based task scheduling and use of task-related APIs, including creating, delaying, suspending, and resuming two tasks with different priorities, and locking/unlocking task scheduling. The sample code is as follows:
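The full sample is not reproduced here; a condensed sketch of creating one of the tasks, assuming the standard **TSK\_INIT\_PARAM\_S** fields and **LOS\_TaskCreate** signature \(priority and stack size are hypothetical\):
```
#include "los_task.h"

STATIC VOID TaskSampleEntry(VOID)
{
    /* ... task body ... */
    (VOID)LOS_TaskDelay(100); /* Delay 100 ticks; a lower-priority task can run meanwhile. */
}

VOID TaskSample(VOID)
{
    UINT32 taskId = 0;
    TSK_INIT_PARAM_S param = { 0 };
    param.pfnTaskEntry = (TSK_ENTRY_FUNC)TaskSampleEntry;
    param.pcName       = "TaskSample";   /* Must not point to a local variable. */
    param.usTaskPrio   = 5;              /* 0 is the highest priority, 31 the lowest. */
    param.uwStackSize  = 0x400;          /* 8-byte-aligned stack size. */
    (VOID)LOS_TaskCreate(&taskId, &param);
}
```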
The central processing unit percentage \(CPUP\) includes the system CPUP and task CPUP.
The system CPUP is the CPU usage of the system within a period of time. It reflects the CPU load and the system running status \(idle or busy\) in the given period of time. The valid range of the system CPUP is 0 to 100 in percentage. The precision can be adjusted through configuration. The value **100** indicates that the system runs with full load.
Task CPUP refers to the CPU usage of a single task. It reflects the task status, busy or idle, in a period of time. The valid range of task CPUP is 0 to 100 in percentage. The precision can be adjusted through configuration. The value **100** indicates that the task is being executed for the given period of time.
With the system CPUP, you can determine whether the current system load exceeds the designed specifications.
With the CPUP of each task, you can determine whether the CPU usage of each task meets expectations of the design.
## Working Principles
The OpenHarmony LiteOS-M CPUP records the system CPU usage on a task basis. When task switching occurs, the task start time and task switch-out or exit time are recorded. Each time when a task exits, the system accumulates the CPU time used by the task.
You can configure this function in **target\_config.h**.
The OpenHarmony LiteOS-M provides the following types of CPUP information:
...
...
## Available APIs
## Basic Concepts
In small devices with limited hardware resources, dynamic algorithm deployment capability is required to solve the problem that multiple algorithms cannot be deployed at the same time. The LiteOS-M kernel uses the Executable and Linkable Format \(ELF\) loading because it is easy to use and compatible with a wide variety of platforms. The LiteOS-M provides APIs similar to **dlopen** and **dlsym**. Apps can load and unload required algorithm libraries by using the APIs provided by the dynamic loading module. As shown in the following figure, the app obtains the corresponding information output through the API required by the third-party algorithm library. The third-party algorithm library depends on the basic APIs provided by the kernel, such as **malloc**. After the app loads the API and relocates undefined symbols, it can call the API to complete the function. The dynamic loading component supports only the Arm architecture. In addition, the signature and source of the shared library to be loaded must be verified to ensure system security.
## Working Principles
### Exporting the Symbol Table
The kernel needs to proactively expose the API required by the dynamic library when the shared library calls a kernel API, as shown in the following figure. This mechanism compiles the symbol information to the specified section and calls the **SYM\_EXPORT** macro to export information of the specified symbol. The symbol information is described in the structure **SymInfo**. Its members include the symbol name and symbol address information. The macro **SYM\_EXPORT** imports the symbol information to the **.sym.\*** section by using the **\_\_attribute\_\_** compilation attribute.
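An illustrative sketch of this mechanism; the structure fields, macro body, and section naming follow the description above and may differ from the kernel's actual definitions:
```
#include <stdlib.h>
#include "los_compiler.h"

typedef struct {
    const CHAR *name; /* Symbol name */
    UINTPTR addr;     /* Symbol address */
} SymInfo;

/* Place the symbol information into a dedicated .sym.* section so that the loader can find it. */
#define SYM_EXPORT(sym)                                    \
    const SymInfo symInfo_##sym                            \
        __attribute__((section(".sym." #sym))) = {         \
        .name = #sym,                                      \
        .addr = (UINTPTR)&(sym),                           \
    }

/* Expose the kernel's malloc to dynamically loaded shared libraries. */
SYM_EXPORT(malloc);
```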
### Loading an ELF File
During the loading process, the LOAD section to be loaded to the memory is obtained based on the ELF file handle and the section offset of the program header table. Generally, there are two sections: read-only section and read-write section. You can run the **readelf -l** command to view the LOAD section information of the ELF file. The physical memory is requested according to the related alignment attributes. Then, a code section or a data segment is written into the memory based on the loading base address and an offset of each section.
```
$ readelf -l lib.so
...
...
03 .dynamic
```
**Figure 3** Process of loading an ELF file
A relocation table is obtained by using a **.dynamic** section of the ELF file. Each entry that needs to be relocated in the table is traversed. Then, the symbol is searched, based on the symbol name that needs to be relocated, in the shared library and the exported symbol table provided by the kernel. The relocation information is updated based on the symbol found.
When compiling a shared library, you can add **-fPIC**\(a compilation option\) to compile location-independent code. The shared library file type is **ET\_DYN**, which can be loaded to any valid address range.
4. **-z max-page-size=4**: sets the number of alignment bytes of the loadable sections in the binary file to **4**. This setting saves memory and can be used for a dynamic library.
5. **-mcpu=** specifies the CPU architecture.
File Allocation Table \(FAT\) is a file system developed for personal computers. It consists of the DOS Boot Record \(DBR\) region, FAT region, and Data region. Each entry in the FAT region records information about the corresponding cluster in the storage device. The cluster information includes whether the cluster is used, the number of the next cluster of the file, and whether the file ends with the cluster. The FAT file system supports multiple formats, such as FAT12, FAT16, and FAT32. The numbers 12, 16, and 32 indicate the number of bits per cluster within the FAT, respectively. The FAT file system supports multiple media, especially removable media \(such as USB flash drives, SD cards, and removable hard drives\). The FAT file system ensures good compatibility between embedded devices and desktop systems \(such as Windows and Linux\) and facilitates file management.
The OpenHarmony kernel supports FAT12, FAT16, and FAT32 file systems. These file systems require a tiny amount of code to implement, use less resources, support a variety of physical media, and are tailorable and compatible with Windows and Linux systems. They also support identification of multiple devices and partitions. The kernel supports multiple partitions on hard drives and allows creation of the FAT file system on the primary partition and logical partition.
## Development Guidelines
### Adaptation of Drivers
The use of the FAT file system requires support from the underlying MultiMediaCard \(MMC\) drivers. To run FatFS on a board with an MMC storage device, you must:
1. Implement the **disk\_status**, **disk\_initialize**, **disk\_read**, **disk\_write**, and **disk\_ioctl** APIs to adapt to the embedded MMC \(eMMC\) drivers on the board.
2. Add the **fs\_config.h** file with information such as **FS\_MAX\_SS** \(maximum sector size of the storage device\) and **FF\_VOLUME\_STRS** \(partition names\) configured. The following is an example:
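A configuration along these lines \(the values are illustrative; match them to the board's actual sector size and partition layout\):
```
#define FF_VOLUME_STRS     "system", "inner", "update", "user" /* Partition names */
#define FS_MAX_SS          512                                 /* Maximum sector size of the storage device */
#define FAT_MAX_OPEN_FILES 50                                  /* Maximum number of open files */
```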
>- Note the following when managing FatFS files and directories:
> - A file cannot exceed 4 GB.
> - **FAT\_MAX\_OPEN\_FILES** specifies the maximum number files you can open at a time, and **FAT\_MAX\_OPEN\_DIRS** specifies the maximum number of folders you can open at a time.
> - Root directory management is not supported. File and directory names start with the partition name. For example, **user/testfile** indicates the file or directory **testfile** in the **user** partition.
> - To open a file multiple times, use **O\_RDONLY** \(read-only mode\). **O\_RDWR** or **O\_WRONLY** \(writable mode\) can open a file only once.
> - The read and write pointers are not separated. If a file is open in **O\_APPEND** mode, the read pointer is also at the end of the file. If you want to read the file from the beginning, you must manually set the position of the read pointer.
> - **FAT\_MAX\_OPEN\_FILES** specifies the maximum number files you can open at a time, and **FAT\_MAX\_OPEN\_DIRS** specifies the maximum number of folders you can open at a time.
> - Root directory management is not supported. File and directory names start with the partition name. For example, **user/testfile** indicates the file or directory **testfile** in the **user** partition.
> - To open a file multiple times, use **O\_RDONLY** \(read-only mode\). **O\_RDWR** or **O\_WRONLY** \(writable mode\) can open a file only once.
> - The read and write pointers are not separated. If a file is open in **O\_APPEND** mode, the read pointer is also at the end of the file. If you want to read the file from the beginning, you must manually set the position of the read pointer.
> - File and directory permission management is not supported.
> - The **stat** and **fstat** APIs do not support querying the modification time, creation time, or last access time. The Microsoft FAT protocol does not support dates earlier than A.D. 1980.
>- Note the following when mounting and unmounting FatFS partitions:
> - Partitions can be mounted with the read-only attribute. When **MS\_RDONLY** is passed to the **mount** function, all APIs that write data, such as **write**, **mkdir**, **unlink**, and **open** with a non-**O\_RDONLY** flag, will be rejected.
> - You can use the **MS\_REMOUNT** flag with **mount** to modify the permission for a mounted partition.
> - Before unmounting a partition, ensure that all directories and files in the partition are closed.
> - You can use **umount2** with the **MNT\_FORCE** parameter to forcibly close all files and folders and unmount the partition. However, this may cause data loss. Therefore, exercise caution when running **umount2**.
>- The FAT file system supports re-partitioning and formatting of storage devices using **fatfs\_fdisk** and **fatfs\_format**.
> - If a partition is mounted before being formatted using **fatfs\_format**, you must close all directories and files in the partition and unmount the partition first.
> - Before calling **fatfs\_fdisk**, ensure that all partitions in the device are unmounted.
> - Using **fatfs\_fdisk** and **fatfs\_format** may cause data loss. Exercise caution when using them.
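The following is a minimal sketch of the mount, remount, and unmount sequence described above. The device node \(**/dev/mmcblk0p0**\), mount path \(**user**\), and file system type string \(**vfat**\) are assumptions that depend on your board.

```
#include <stdio.h>
#include <sys/mount.h>

void FatfsMountDemo(void)
{
    /* Mount the partition read-only; write APIs on it will be rejected. */
    int ret = mount("/dev/mmcblk0p0", "user", "vfat", MS_RDONLY, NULL);
    if (ret != 0) {
        printf("mount failed\n");
        return;
    }

    /* Remount the partition to make it writable again. */
    (void)mount("/dev/mmcblk0p0", "user", "vfat", MS_REMOUNT, NULL);

    /* Close all files and directories in the partition before unmounting.
     * MNT_FORCE unmounts even if files are still open, at the risk of data loss. */
    (void)umount2("user", MNT_FORCE);
}
```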
## Development Example
### Example Description
This example implements the following:
1. Create the **user/test** directory.
2. Create the **file.txt** file in the **user/test** directory.
3. Write "Hello OpenHarmony!" at the beginning of the file.
4. Save the update of the file to the device.
5. Set the offset to the beginning of the file.
...
...
@@ -70,11 +59,11 @@ This example implements the following:
8. Delete the file.
9. Delete the directory.
### Sample Code
Prerequisites
- The MMC device partition is mounted to the **user** directory.
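The sketch below only illustrates the listed steps with standard POSIX file APIs; it is not the full sample code, and most error handling is omitted.

```
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

#define TEST_DIR  "user/test"
#define TEST_FILE "user/test/file.txt"

int FatfsFileDemo(void)
{
    char buf[32] = {0};

    if (mkdir(TEST_DIR, 0777) != 0) {                 /* 1. create the directory         */
        return -1;
    }
    int fd = open(TEST_FILE, O_RDWR | O_CREAT, 0777); /* 2. create the file              */
    if (fd < 0) {
        return -1;
    }
    (void)write(fd, "Hello OpenHarmony!", strlen("Hello OpenHarmony!")); /* 3. write     */
    (void)fsync(fd);                                  /* 4. flush the update to device   */
    (void)lseek(fd, 0, SEEK_SET);                     /* 5. move the offset to the start */
    (void)read(fd, buf, sizeof(buf) - 1);
    printf("%s\n", buf);
    (void)close(fd);
    (void)unlink(TEST_FILE);                          /* 8. delete the file              */
    (void)rmdir(TEST_DIR);                            /* 9. delete the directory         */
    return 0;
}
```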
LittleFS is a small file system designed for flash. By combining the log-structured file system and the copy-on-write \(COW\) file system, LittleFS stores metadata in the log structure and data in the COW structure. This hybrid design gives LittleFS high power-loss resilience. LittleFS uses a statistical wear leveling algorithm when allocating COW data blocks, effectively prolonging the service life of flash devices. LittleFS is designed for small devices with limited resources, such as ROM and RAM. All RAM resources are allocated from a fixed-size \(configurable\) buffer, so the RAM usage does not grow with the size of the file system.
LittleFS is a good choice when you need a flash file system that is power-cut resilient and supports wear leveling on a small device with limited resources.
## Development Guidelines
When porting LittleFS to a new hardware device, you need to declare **lfs\_config**:
...
...
@@ -45,7 +39,7 @@ const struct lfs_config cfg = {
**block\_count** indicates the number of blocks that can be erased, which depends on the capacity of the block device and the size of the block to be erased \(**block\_size**\).
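As a quick arithmetic illustration \(values assumed\), a 512 KiB flash region with a 4 KiB erase block yields the following geometry values for **lfs\_config**:

```
#define FLASH_REGION_SIZE (512 * 1024)   /* capacity reserved for LittleFS, assumed */
#define FLASH_BLOCK_SIZE  4096           /* erase block size of the flash, assumed  */

/* block_count = capacity / block_size = 512 KiB / 4 KiB = 128 */
#define FLASH_BLOCK_COUNT (FLASH_REGION_SIZE / FLASH_BLOCK_SIZE)
```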
The OpenHarmony LiteOS-M kernel supports the File Allocation Table file system \(FATFS\) and LittleFS file systems. Like the OpenHarmony LiteOS-A kernel, the OpenHarmony LiteOS-M kernel provides POSIX over the virtual file system \(VFS\) to ensure interface consistency. However, the VFS of the LiteOS-M kernel is lightweight because of resource constraints and does not provide advanced functions \(such as page cache\). Therefore, the VFS of the LiteOS-M kernel implements only API standardization and adaptation, and the file systems handle the specific operations. The following table lists the functions supported by the file systems.
C++ is one of the most widely used programming languages. It is an object-oriented programming language developed from the C language and supports features such as classes, encapsulation, and overloading.
## Working Principles
The compiler supports C++ code identification. The system calls the constructors of global objects to perform initialization operations.
## Development Guidelines
### Available APIs
**Table 1** APIs supported by C++
...
...
@@ -41,7 +33,7 @@ The compiler supports C++ code identification. The system calls the constructors
</tbody>
</table>
### How to Develop
Before using C++ features, you need to call **LOS\_CppSystemInit** to initialize C++ constructors. The constructors to be initialized are stored in the **init\_array** section, whose range is passed in through the variables **\_\_init\_array\_start\_\_** and **\_\_init\_array\_end\_\_**.
...
...
@@ -70,7 +62,7 @@ Before using C++ features, you need to call **LOS\_CppSystemInit** to initiali
>The **LOS\_CppSystemInit** function must be called before any C++ service. When the C library used by the third-party compiler is not musl libc, some classes or APIs \(such as **std::thread** and **std::mutex**\) that are closely related to system resources have compatibility issues and are not recommended.
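A minimal sketch of this initialization call is shown below. The exact **LOS\_CppSystemInit** prototype and the linker symbol declarations may differ between ports, so treat this as an assumption to be checked against your kernel headers.

```
#include "los_compiler.h"

extern char __init_array_start__;
extern char __init_array_end__;

void BoardCppInit(void)
{
    /* Run the global C++ constructors collected in the init_array section.
     * Signature assumed; check the prototype shipped with your kernel. */
    LOS_CppSystemInit((UINTPTR)&__init_array_start__, (UINTPTR)&__init_array_end__);
}
```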
### Development Example
As an optional function of the kernel, memory corruption check is used to check the integrity of a dynamic memory pool. This mechanism can detect memory corruption errors in the memory pool in a timely manner and provide alerts. It helps reduce problem locating costs and increase troubleshooting efficiency.
## Function Configuration
**LOSCFG\_BASE\_MEM\_NODE\_INTEGRITY\_CHECK**: specifies the setting of the memory corruption check. This function is disabled by default. To enable the function, set this macro to **1** in **target\_config.h**.
...
...
@@ -25,13 +19,13 @@ This check only detects the corrupted memory node and provides information about
>If memory corruption check is enabled, a magic number is added to the node header, which increases the size of the node header. The real-time integrity check has a great impact on the performance. In performance-sensitive scenarios, you are advised to disable this function and use **LOS\_MemIntegrityCheck** to check the memory pool integrity.
## Development Guidelines
### How to Develop
Check for memory corruption by calling **LOS\_MemIntegrityCheck**. If no memory corruption occurs, **0** is returned and no log is output. If memory corruption occurs, the related logs are output. For details, see the output of the following example.
### Development Example
This example implements the following:
...
...
@@ -39,7 +33,7 @@ This example implements the following:
2. Call **memset** to construct an out-of-bounds access that overwrites the first four bytes of the next node.
3. Call **LOS\_MemIntegrityCheck** to check whether memory corruption occurs.
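A hedged sketch of these steps is shown below; **m\_aucSysMem0** is assumed to be the default system memory pool on your port.

```
#include <string.h>
#include "los_memory.h"

void MemIntegrityDemo(void)
{
    /* 1. Request a small block from the system memory pool. */
    UINT32 *block = (UINT32 *)LOS_MemAlloc(m_aucSysMem0, 8);
    if (block == NULL) {
        return;
    }
    /* 2. Out-of-bounds write: the extra 4 bytes corrupt the next node's header. */
    memset(block, 0, 8 + 4);
    /* 3. The corruption is detected and reported here. */
    (void)LOS_MemIntegrityCheck(m_aucSysMem0);
}
```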
As an optional function of the kernel, memory leak check is used to locate dynamic memory leak problems. After this function is enabled, the dynamic memory module automatically records the link registers \(LRs\) used when memory is allocated. If a memory leak occurs, the recorded information helps locate where the leaked memory was allocated for further analysis.
## Function Configuration
1. **LOSCFG\_MEM\_LEAKCHECK**: specifies the setting of the memory leak check. This function is disabled by default. To enable the function, set this macro to **1** in **target\_config.h**.
2. **LOSCFG\_MEM\_RECORD\_LR\_CNT**: specifies the number of LRs recorded. The default value is **3**. Each LR consumes **sizeof\(void \*\)** bytes of memory.
...
...
@@ -25,9 +16,9 @@ As an optional function of the kernel, memory leak check is used to locate dynam
Setting this macro correctly allows invalid LRs to be ignored and reduces memory consumption.
## Development Guidelines
### How to Develop
Memory leak check provides a method to check for memory leaks in key code logic. If this function is enabled, LR information is recorded each time memory is allocated. Call **LOS\_MemUsedNodeShow** before and after the code snippet to be checked; it prints information about all nodes in use in the specified memory pool. By comparing the two outputs, you can identify the newly added nodes, which are where a memory leak may have occurred. You can then locate the code based on the recorded LRs and check whether a memory leak actually occurs.
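A minimal sketch of this comparison workflow, assuming the default system memory pool **m\_aucSysMem0**:

```
#include "los_memory.h"

void MemLeakCheckDemo(void)
{
    LOS_MemUsedNodeShow(m_aucSysMem0);      /* snapshot of used nodes before the code */

    void *block = LOS_MemAlloc(m_aucSysMem0, 0x100); /* suspected leak: never freed   */
    (void)block;

    LOS_MemUsedNodeShow(m_aucSysMem0);      /* snapshot after; the extra node's LRs
                                               point at the code that allocated it    */
}
```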
Memory information includes the memory pool size, memory usage, remaining memory size, maximum free memory, memory waterline, number of memory nodes, and fragmentation rate.
...
...
@@ -22,13 +13,13 @@ Memory information includes the memory pool size, memory usage, remaining memory
- Other parameters: You can call APIs \(described in [Memory Management](kernel-mini-basic-memory-basic.md)\) to scan node information in the memory pool and collect statistics.
## Function Configuration
**LOSCFG\_MEM\_WATERLINE**: specifies the setting of the memory information statistics function. This function is enabled by default. To disable the function, set this macro to **0** in **target\_config.h**. If you want to obtain the memory waterline, you must enable this macro.
## Development Guidelines
### How to Develop
Key structure:
...
...
@@ -52,7 +43,7 @@ typedef struct {
Fragmentation rate = 100 – 100 x Maximum free memory block size/Remaining memory size
### Development Example
This example implements the following:
...
...
@@ -62,7 +53,7 @@ This example implements the following:
3. Calculate the memory usage and fragmentation rate.
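A hedged sketch of these steps follows; the **LOS\_MEM\_POOL\_STATUS** field names are taken from **los\_memory.h** and may differ slightly between kernel versions.

```
#include <stdio.h>
#include "los_memory.h"

void MemStatDemo(void)
{
    LOS_MEM_POOL_STATUS status = {0};

    if (LOS_MemInfoGet(m_aucSysMem0, &status) != LOS_OK) {
        return;
    }
    /* Memory usage and fragmentation rate, per the formula above. */
    UINT32 total = status.totalUsedSize + status.totalFreeSize;
    UINT32 usage = 100 * status.totalUsedSize / total;
    UINT32 frag  = 100 - 100 * status.maxFreeNodeSize / status.totalFreeSize;
    printf("usage: %u%%, fragmentation: %u%%, waterline: %u bytes\n",
           usage, frag, status.usageWaterLine);
}
```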
The purpose of memory debugging is to locate problems related to dynamic memory. The kernel provides a variety of memory debugging methods. Dynamic memory pool statistics help you learn the memory pool waterline and fragmentation rate. Memory leak check helps you accurately locate the code where a memory leak occurs and analyze the memory usage of each module. Memory corruption check helps you locate memory corruption.
- **[Memory Information Statistics](kernel-mini-memory-debug-mes.md)**
The OpenHarmony LiteOS-M provides exception handling and debugging measures to help locate and analyze problems. Exception handling involves a series of actions taken by the OS to respond to exceptions that occur during system running, for example, printing the exception type, system status, call stack information of the current function, CPU information, and call stack information of tasks.
## Working Principles
A stack frame contains information such as function parameters, variables, and the return value in a function call process. When a function is called, a stack frame is created for the subfunction, and the input parameters, local variables, and registers of the function are pushed onto the stack. Stack frames grow towards lower addresses. Take the ARM32 CPU architecture as an example. Each stack frame stores the historical values of the program counter \(PC\), link register \(LR\), stack pointer \(SP\), and frame pointer \(FP\) registers. The LR points to the return address of a function, and the FP points to the start address of the stack frame of the function's parent function. The FP therefore locates the parent function's stack frame, which in turn holds the parent function's FP; that FP locates the grandparent function's stack frame, and so on. In this way, the call stack of the program can be traced to obtain the relationships between the functions called.
...
...
@@ -20,12 +12,12 @@ When an exception occurs in the system, the system prints the register informati
The following figure illustrates the stack analysis mechanism for your reference. The actual stack information varies depending on the CPU architecture.
In the figure, the registers in different colors indicate different functions. The registers save related data when functions are called. The FP register helps track the stack to the parent function of the abnormal function and further presents the relationships between the functions called.
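The following conceptual sketch illustrates this FP-based walk. The frame layout shown \(each frame saving an {FP, LR} pair\) is an assumption for illustration only; real layouts depend on the compiler and build options.

```
#include <stdio.h>

typedef struct {
    unsigned int *fp;   /* saved frame pointer of the caller's frame */
    unsigned int lr;    /* return address into the caller            */
} StackFrame;

void DumpCallChain(unsigned int *fp, int maxDepth)
{
    for (int i = 0; (i < maxDepth) && (fp != NULL); i++) {
        StackFrame *frame = (StackFrame *)fp;
        printf("#%d return address: 0x%08x\n", i, frame->lr);
        fp = frame->fp;   /* step to the parent function's stack frame */
    }
}
```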
## Available APIs
The following table describes APIs available for the OpenHarmony LiteOS-M stack trace module. For more details about the APIs, see the API reference.
...
...
@@ -55,9 +47,9 @@ The following table describes APIs available for the OpenHarmony LiteOS-M stack
Lite Memory Sanitizer \(LMS\) is a tool used to detect memory errors in real time. LMS can detect buffer overflow, Use-After-Free \(UAF\), and double free errors in real time and notify the operating system immediately. Together with locating methods such as Backtrace, LMS can locate the code line that causes the memory error, greatly improving the efficiency of locating memory errors.
...
...
@@ -21,7 +11,7 @@ The LMS module of the OpenHarmony LiteOS-M kernel provides the following functio
- Checks the memory when bounds-checking functions are called \(enabled by default\).
- Checks the memory when libc frequently accessed functions, including **memset**, **memcpy**, **memmove**, **strcat**, **strcpy**, **strncat** and **strncpy**, are called.
## Working Principles
LMS uses shadow memory mapping to mark the system memory state. There are three states: **Accessible**, **RedZone**, and **Freed**. The shadow memory is located in the tail of the memory pool.
...
...
@@ -30,7 +20,7 @@ LMS uses shadow memory mapping to mark the system memory state. There are three
- During code compilation, a function is inserted before the read/write instructions in the code to check the address validity. The tool checks the state value of the shadow memory corresponding to the accessed memory. If the shadow memory is in the **RedZone** state, an overflow error will be reported. If the shadow memory is in the **Freed** state, a UAF error will be reported.
- When memory is released, the tool checks the state value of the shadow memory at the released address. If the shadow memory is in the **RedZone** state, a double free error will be reported.
## Available APIs
The LMS module of the OpenHarmony LiteOS-M kernel provides the following APIs. For more details about the APIs, see the [API](https://gitee.com/openharmony/kernel_liteos_m/blob/master/components/lms/los_lms.h) reference.
...
...
@@ -76,9 +66,9 @@ The LMS module of the OpenHarmony LiteOS-M kernel provides the following APIs. F
</tbody>
</table>
## Development Guidelines
### How to Develop
The typical process for enabling LMS is as follows:
...
...
@@ -173,7 +163,7 @@ The typical process for enabling LMS is as follows:
3. Recompile the code and check the serial port output. The memory problem detected will be displayed.
### Development Example
This example implements the following:
...
...
@@ -181,7 +171,7 @@ This example implements the following:
2. Construct a buffer overflow error and a UAF error.
3. Add "-fsanitize=kernel-address", execute the compilation, and check the output.
perf is a performance analysis tool. It uses the performance monitoring unit \(PMU\) to count sampled events and collect context information, and it reports hotspot distribution and hot paths.
## Working Principles
When a performance event occurs, the corresponding event counter overflows and triggers an interrupt. The interrupt handler records the event information, including the current PC, task ID, and call stack.
...
...
@@ -30,9 +13,9 @@ perf provides two working modes: counting mode and sampling mode.
In counting mode, perf collects only the number of event occurrences and duration. In sampling mode, perf also collects context data and stores the data in a circular buffer. The IDE then analyzes the data and provides information about hotspot functions and paths.
## Available APIs
### Kernel Mode
The perf module of the OpenHarmony LiteOS-A kernel provides the following APIs. For more details about the APIs, see the [API](https://gitee.com/openharmony/kernel_liteos_a/blob/master/kernel/include/los_perf.h) reference.
...
...
@@ -117,7 +100,7 @@ The perf module of the OpenHarmony LiteOS-A kernel provides the following APIs.
The API for flushing the cache is configured based on the platform.
### User Mode
The perf character device is located in **/dev/perf**. You can read, write, and control the user-mode perf by running the following commands on the device node:
...
...
@@ -134,11 +117,11 @@ The perf character device is located in **/dev/perf**. You can read, write, and
The operations correspond to **LOS\_PerfStart** and **LOS\_PerfStop**.
For more details, see [User-mode Development Example](#user-mode-development-example).
## Development Guidelines
### Kernel-mode Development Process
The typical process of enabling perf is as follows:
...
...
@@ -235,7 +218,7 @@ The typical process of enabling perf is as follows:
4. Call **LOS\_PerfStop** at the end of the code to be sampled.
5. Call **LOS\_PerfDataRead** to read the sampling data and use IDE to analyze the collected data.
## Kernel-mode Development Example
This example implements the following:
...
...
@@ -246,7 +229,7 @@ This example implements the following:
@@ -362,7 +345,7 @@ hex: 00 ef ef ef 00 00 00 00 14 00 00 00 60 00 00 00 00 00 00 00 70 88 36 40 08
You can also call **LOS\_PerfDataRead** to read data to a specified address for further analysis. In the example, **OsPrintBuff** is a test API, which prints the sampled data by byte. **num** indicates the sequence number of the byte, and **hex** indicates the value in the byte.
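A minimal sketch of reading the sampled data into a local buffer for offline analysis; the buffer size and the exact **LOS\_PerfDataRead** return value are assumptions to be checked against **los\_perf.h**.

```
#include <stdio.h>
#include "los_perf.h"

#define PERF_READ_BUF_SIZE 0x2000
static char g_perfReadBuf[PERF_READ_BUF_SIZE];

void PerfDataDump(void)
{
    /* Copy the sampled records out of the internal ring buffer. */
    UINT32 len = LOS_PerfDataRead(g_perfReadBuf, PERF_READ_BUF_SIZE);
    printf("perf sample data length: %u\n", len);
    /* Hand g_perfReadBuf to the IDE or a host-side parser for hotspot analysis. */
}
```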
### User-mode Development Process
Choose **Driver** \> **Enable PERF DRIVER** in **menuconfig** to enable the perf driver. This option is available in **Driver** only after **Enable Perf Feature** is selected in the kernel.
>After running the **./perf stat/record** command, you can run the **./perf start** and **./perf stop** commands multiple times. The sampling event configuration follows the parameters set in the latest **./perf stat/record** command.
### User-mode Development Example
This example implements the following:
...
...
@@ -453,7 +436,7 @@ This example implements the following:
Trace helps you learn about the kernel running process and the execution sequence of modules and tasks. With the information, you can better understand the code running process of the kernel and locate time sequence problems.
## Working Principles
The kernel provides a hook framework to embed hooks in the main process of each module. In the initial startup phase of the kernel, the trace function is initialized and the trace handlers are registered with the hooks.
...
...
@@ -28,7 +18,7 @@ In offline mode, trace frames are stored in a circular buffer. If too many frame
The online mode must be used with the integrated development environment \(IDE\). Trace frames are sent to the IDE in real time. The IDE parses the records and displays them in a visualized manner.
## Available APIs
The trace module of the OpenHarmony LiteOS-M kernel provides the following functions. For more details about the APIs, see the API reference.
...
...
@@ -178,9 +168,9 @@ The trace module of the OpenHarmony LiteOS-M kernel provides the following funct
The interrupt events with interrupt ID of **TIMER\_INT** or **DMA\_INT** are not traced.
## Development Guidelines
### How to Develop
The typical trace process is as follows:
...
...
@@ -271,7 +261,7 @@ The methods in steps 3 to 7 are encapsulated with shell commands. After the shel
- LOS\_TraceStop: trace\_stop
- LOS\_TraceRecordDump: trace\_dump
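A minimal sketch of the API sequence these shell commands map to, in offline mode; the code to be traced is represented here by a simple task delay.

```
#include "los_task.h"
#include "los_trace.h"

void TraceDemo(void)
{
    if (LOS_TraceStart() != LOS_OK) {   /* start recording trace frames          */
        return;
    }
    LOS_TaskDelay(100);                 /* ... the code segment to be traced ... */
    LOS_TraceStop();                    /* stop recording                        */
    LOS_TraceRecordDump(FALSE);         /* dump the circular buffer locally      */
}
```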
### Development Example
This example implements the following:
...
...
@@ -281,7 +271,7 @@ This example implements the following:
The OpenHarmony LiteOS-M kernel is a lightweight operating system \(OS\) kernel designed for the IoT field. It features small size, low power consumption, and high performance. The LiteOS-M kernel has a simple code structure, consisting of the minimum function set, kernel abstraction layer \(KAL\), optional components, and project directory. It supports the Hardware Driver Foundation \(HDF\), which provides unified driver standards and a unified access mode for device vendors to simplify driver porting and allow one-time development for multi-device deployment.
The OpenHarmony LiteOS-M kernel architecture consists of the hardware layer and hardware-irrelevant layers, as shown in [Figure 1](#fig1287712172318). The hardware layer is classified based on the compiler toolchain and chip architecture, and provides a unified Hardware Abstraction Layer \(HAL\) interface to improve hardware adaptation and facilitate the expansion of various types of AIoT hardware and compilation toolchains. The other modules are irrelevant to the hardware. The basic kernel module provides basic kernel capabilities. The extended modules provide capabilities of components, such as the network and file systems, as well as exception handling and debug tools. The KAL provides unified standard APIs.
### CPU Architecture Support
The CPU architecture includes two layers: general architecture definition layer and specific architecture definition layer. The former provides interfaces supported and implemented by all architectures. The latter is specific to an architecture. For a new architecture to be added, the general architecture definition layer must be implemented first and the architecture-specific functions can be implemented at the specific architecture definition layer.
...
...
@@ -55,10 +50,10 @@ The CPU architecture includes two layers: general architecture definition layer
LiteOS-M supports mainstream architectures, such as ARM Cortex-M3, ARM Cortex-M4, ARM Cortex-M7, ARM Cortex-M33, and RISC-V. If you need to expand the CPU architecture, see [Chip Architecture Adaptation](../porting/porting-chip-kernel-overview.md#section137431650339).
### Working Principles
Configure the system clock and number of ticks per second in the **target\_config.h** file of the development board. Configure the task, memory, inter-process communication \(IPC\), and exception handling modules based on service requirements. When the system boots, the modules are initialized based on the configuration. The kernel startup process includes peripheral initialization, system clock configuration, kernel initialization, and OS boot. For details, see [Figure 2](#fig74259220441).
A bitwise operation operates on a binary number at the level of its individual bits. For example, a variable can be set as a program status word \(PSW\), and each bit \(flag bit\) in the PSW can have a self-defined meaning.
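As a small illustration, the snippet below uses one word as a program status word whose flag bits are self-defined; the flag names are made up for the example.

```
#define FLAG_READY (1U << 0)   /* bit 0: task ready  */
#define FLAG_BUSY  (1U << 1)   /* bit 1: device busy */

unsigned int psw = 0;

void PswDemo(void)
{
    psw |= FLAG_READY;                /* set the READY flag   */
    psw &= ~FLAG_BUSY;                /* clear the BUSY flag  */
    if ((psw & FLAG_READY) != 0) {    /* test the READY flag  */
        /* ... handle the ready state ... */
    }
}
```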
...
...
@@ -110,7 +105,7 @@ static UINT32 BitSample(VOID)
}
```
### Verification
The development is successful if the return result is as follows:
A doubly linked list is a linked data structure that consists of a set of sequentially linked records called nodes. Each node contains a pointer to the previous node and a pointer to the next node in the sequence of nodes. The pointer head is unique. A doubly linked list allows access from a list node to its next node and also the previous node on the list. This data structure facilitates data search, especially traversal of a large amount of data. The symmetry of the doubly linked list also makes operations, such as insertion and deletion, easy. However, pay attention to the pointer direction when performing operations.
## Available APIs
The following table describes APIs available for the doubly linked list. For more details about the APIs, see the API reference.
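As a quick illustration of the typical call sequence before consulting the table \(a minimal sketch; the node structure is made up for the example\):

```
#include <stdio.h>
#include "los_list.h"

typedef struct {
    LOS_DL_LIST list;   /* list hook embedded in the user structure */
    UINT32 value;
} DemoNode;

void ListDemo(void)
{
    LOS_DL_LIST head;
    DemoNode node = { .value = 1 };
    DemoNode *item = NULL;

    LOS_ListInit(&head);                      /* initialize the list head      */
    LOS_ListTailInsert(&head, &node.list);    /* insert the node at the tail   */

    LOS_DL_LIST_FOR_EACH_ENTRY(item, &head, DemoNode, list) {
        printf("value: %u\n", item->value);   /* traverse the list             */
    }
    LOS_ListDelete(&node.list);               /* remove the node from the list */
}
```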
# Standard Library
The OpenHarmony kernel uses the musl libc library that supports the Portable Operating System Interface \(POSIX\). You can develop components and applications working on the kernel based on the POSIX interfaces.
## Standard Library API Framework
@@ -21,7 +12,7 @@ The musl libc library supports POSIX standards. The OpenHarmony kernel adapts th
For details about the APIs supported by the standard library, see the API document of the C library, which also covers the differences between the standard library and the POSIX standard library.
## Development Example
In this example, the main thread creates **THREAD\_NUM** child threads. Once a child thread is started, it enters the standby state. After the main thread successfully wakes up all child threads, they continue to execute until the lifecycle ends. The main thread uses the **pthread\_join** method to wait until all child threads are executed.
...
...
@@ -197,17 +188,17 @@ int main(int argc, char *argv[])
#endif /* __cplusplus */
```
## Differences from the Linux Standard Library
This section describes the key differences between the standard library carried by the OpenHarmony kernel and the Linux standard library. For more differences, see the API document of the C library.
### Process
1. The OpenHarmony user-mode processes support only static priorities, which range from 10 \(highest\) to 31 \(lowest\).
2. The OpenHarmony user-mode threads support only static priorities, which range from 0 \(highest\) to 31 \(lowest\).
3. The OpenHarmony process scheduling supports **SCHED\_RR** only, and thread scheduling supports **SCHED\_RR** or **SCHED\_FIFO**.
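A minimal sketch of creating a thread with an explicit static priority under **SCHED\_RR**, within the thread priority range noted above:

```
#include <pthread.h>
#include <sched.h>

static void *Worker(void *arg)
{
    (void)arg;
    return NULL;
}

int CreateRrThread(void)
{
    pthread_t tid;
    pthread_attr_t attr;
    struct sched_param param = { .sched_priority = 25 };  /* 0 = highest, 31 = lowest */

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_RR);
    pthread_attr_setschedparam(&attr, &param);

    int ret = pthread_create(&tid, &attr, Worker, NULL);
    if (ret == 0) {
        pthread_join(tid, NULL);
    }
    pthread_attr_destroy(&attr);
    return ret;
}
```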
### Memory
**Differences from Linux mmap**
...
...
@@ -266,7 +257,7 @@ int main(int argc, char *argv[])
**System directories**: You cannot modify system directories and device mount directories, which include **/dev**, **/proc**, **/app**, **/bin**, **/data**, **/etc**, **/lib**, **/system** and **/usr**.
...
...
@@ -274,14 +265,14 @@ int main(int argc, char *argv[])
Except in the system and user directories, you can create directories and mount devices. Note that nested mount is not allowed, that is, a mounted folder and its subfolders cannot be mounted repeatedly. A non-empty folder cannot be mounted.
### Signal
- The default behavior for signals does not include **STOP**, **CONTINUE**, or **COREDUMP**.
- A sleeping process \(for example, a process that enters the sleeping state by calling the **sleep** function\) cannot be woken up by a signal, because the signal mechanism does not provide a wakeup function. The behavior for a signal can be processed only when the process is scheduled by the CPU.
- After a process exits, **SIGCHLD** is sent to the parent process. The sending action cannot be canceled.
- Only signals 1 to 30 are supported. The callback is executed only once even if the same signal is received multiple times.
### Time
The OpenHarmony time precision is based on tick. The default value is 10 ms/tick. The time error of the **sleep** and **timeout** functions is less than or equal to 20 ms.
In an OS that supports multiple tasks, modifying data in a memory area requires three steps: read data, modify data, and write data. However, data in a memory area may be simultaneously accessed by multiple tasks. If the data modification is interrupted by another task, the execution result of the operation is unpredictable.
...
...
@@ -16,7 +9,7 @@ Although you can enable or disable interrupts to ensure that the multi-task exec
The ARMv6 architecture has introduced the **LDREX** and **STREX** instructions to support more discreet non-blocking synchronization of the shared memory. The atomic operations implemented thereby can ensure that the "read-modify-write" operations on the same data will not be interrupted, that is, the operation atomicity is ensured.
## Working Principles
The OpenHarmony system has encapsulated the **LDREX** and **STREX** in the ARMv6 architecture to provide a set of atomic operation APIs.
...
...
@@ -45,9 +38,9 @@ The OpenHarmony system has encapsulated the **LDREX** and **STREX** in the A
- If the flag register is **1**, the system continues the loop and performs the atomic operation again.
## Development Guidelines
### Available APIs
The following table describes the APIs available for the OpenHarmony LiteOS-A kernel atomic operation module. For more details about the APIs, see the API reference.
...
...
@@ -197,11 +190,11 @@ The following table describes the APIs available for the OpenHarmony LiteOS-A ke
</tbody>
</table>
### How to Develop
When multiple tasks perform addition, subtraction, and swap operations on the same memory data, use atomic operations to ensure predictability of results.
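A minimal sketch of such a shared counter; each update is a single atomic read-modify-write, so concurrent tasks cannot interleave in the middle of an operation.

```
#include "los_atomic.h"

static Atomic g_count = 0;   /* shared by multiple tasks */

void CountOnce(void)
{
    LOS_AtomicInc(&g_count);                 /* atomic increment    */
    LOS_AtomicAdd(&g_count, 5);              /* atomic addition     */
    INT32 now = LOS_AtomicRead(&g_count);    /* consistent snapshot */
    (void)now;
}
```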
The Memory Management Unit \(MMU\) is used to map the virtual addresses in the process space and the actual physical addresses and specify corresponding access permissions and cache attributes. When a program is executed, the CPU accesses the virtual memory, locates the corresponding physical memory based on the MMU page table entry, and executes the code or performs data read/write operations. The page tables of the MMU store the mappings between virtual and physical addresses and the access permission. A page table is created when each process is created. The page table contains page table entries \(PTEs\), and each PTE describes a mapping between a virtual address region and a physical address region. The MMU has a Translation Lookaside Buffer \(TLB\) for address translation. During address translation, the MMU first searches the TLB for the corresponding PTE. If a match is found, the address can be returned directly. The following figure illustrates how the CPU accesses the memory or peripherals.
**Figure 1** CPU accessing the memory or peripheral
## Working Principles
Virtual-to-physical address mapping is a process of establishing page tables. The MMU supports multi-level page tables. The LiteOS-A kernel uses the level-2 page tables to describe the process space. Each level-1 PTE descriptor occupies 4 bytes, which indicate a mapping record of 1 MiB memory space. The 1 GiB user space of the LiteOS-A kernel has 1024 level-1 PTEs. When a user process is created, a 4 KiB memory block is requested from the memory as the storage area of the level-1 page table. Memory is dynamically allocated for the level-2 page table based on requirements of the process.
...
...
@@ -22,12 +16,12 @@ Virtual-to-physical address mapping is a process of establishing page tables. Th
- When the program is executed, as shown by the bold arrow in the following figure, the CPU accesses the virtual address and checks for the corresponding physical memory in the MMU. If the virtual address does not have the corresponding physical address, a page missing fault is triggered. The kernel requests the physical memory, writes the virtual-physical address mapping and the related attributes to the page table, and caches the PTE in the TLB. Then, the CPU can directly access the actual physical memory.
- If the PTE already exists in the TLB, the CPU can access the physical memory without accessing the page table stored in the memory.
**Figure 2** CPU accessing the memory
>The preceding APIs can be used after the MMU initialization is complete and the page tables of the related process are created. The MMU initialization is complete during system startup, and page tables are created when the processes are created. You do not need to perform any operation.
>1. The **open** and **close** APIs are not necessarily implemented because they are used to operate files and are imperceptible to the underlying file system. You need to implement them only when special operations need to be performed during the open and close operations on the file system.
>2. Basic file system knowledge is required for file system adaptation. You need to have a deep understanding of the principles and implementation of the target file system. This section does not include the file system basics in detail. If you have any questions during the adaptation process, refer to the code in the **kernel/liteos\_a/fs** directory.
...
...
@@ -215,7 +211,7 @@ The general adaptation procedure is as follows:
The core logic is how to use the private data to implement API functions. These APIs implement common functions of the file systems and are generally implemented before the file systems are ported. Therefore, the key is to determine the private data required by the file system and store the data in the Vnode for later use. Generally, the private data is information that can uniquely locate a file on a storage medium. Most file systems have similar data structures, for example, the inode data structure in JFFS2.
>1. When a file is accessed, the **Lookup** API of the file system is not necessarily called. The **Lookup** API is called only when the PathCache is invalid.
>2. Do not directly return the Vnode located by using **VfsHashGet** as the result. The information stored in the Vnode may be invalid. Update the fields and return it.
>3. Vnodes are automatically released in the background based on memory usage. If data needs to be stored persistently, do not save it only in Vnodes.
>- The size of a single FAT file cannot be greater than 4 GiB.
>- When there are two SD card slots, the first card inserted is card 0, and that inserted later is card 1.
>- When multi-partition is enabled and there are multiple partitions, the device node **/dev/mmcblk0** \(primary device\) registered by card 0 and **/dev/mmcblk0p0** \(secondary device\) are the same device. In this case, you cannot perform operations on the primary device.
A file system \(often abbreviated to FS\) provides an input and output manner for an OS. It implements the interaction with internal and external storage devices.
The file system provides standard POSIX operation APIs for the upper-layer system through the C library. For details, see the API reference of the C library. The Virtual File System \(VFS\) layer in kernel mode shields the differences between file systems. The basic architecture is as follows:
**Figure 1** Overall file system architecture
The central processing unit percent \(CPUP\) includes the system CPUP, process CPUP, task CPUP, and interrupt CPUP. With the system CPUP, you can determine whether the current system load exceeds the designed specifications. With the CPUP of each task/process/interrupt, you can determine whether their CPU usage meets the design expectations.