diff --git a/en/device-dev/kernel/kernel-basic-mini-time.md b/en/device-dev/kernel/kernel-basic-mini-time.md
index 586349d0c4683c5b782b61ea2cd0b714e7b1b4b1..375e481f4c04354b83d1e043cac3d3f7f0a83c0c 100644
--- a/en/device-dev/kernel/kernel-basic-mini-time.md
+++ b/en/device-dev/kernel/kernel-basic-mini-time.md
@@ -1,8 +1,7 @@
-# Time Management
+# Time Management
-
-## Basic Concepts
+## Basic Concepts
Time management provides all time-related services for applications based on the system clock.
@@ -12,67 +11,79 @@ People use second or millisecond as the time unit, while the operating system us
The time management module of the OpenHarmony LiteOS-M kernel provides time conversion and statistics functions.
-## Time Unit
-- Cycle
+## Time Unit
+
+- Cycle
+
+ Cycle is the minimum time unit in the system. The cycle duration is determined by the system clock frequency, that is, the number of cycles per second.
+- Tick
+
+ Tick is the basic time unit of the operating system and is determined by the number of ticks per second configured by the user.
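+
+For example, assuming the system clock frequency **OS_SYS_CLOCK** is 49500000 (49.5 MHz) and **LOSCFG_BASE_CORE_TICK_PER_SECOND** is 100, each tick lasts 10 ms and corresponds to 495000 cycles. This matches the **LOS_CyclePerTickGet** output shown in the verification section below.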
- Cycle is the minimum time unit in the system. The cycle duration is determined by the system clock frequency, that is, the number of cycles per second.
-- Tick
+## Available APIs
- Tick is the basic time unit of the operating system and is determined by the number of ticks per second configured by the user.
+The following table describes APIs available for OpenHarmony LiteOS-M time management. For more details about the APIs, see the API reference.
+
+**Table 1** APIs of the time management module
+
-## Available APIs
+| API| Description|
+| -------- | -------- |
+| LOS_MS2Tick | Converts milliseconds into ticks.|
+| LOS_Tick2MS | Converts ticks into milliseconds.|
+| OsCpuTick2MS | Converts cycles into milliseconds. Two UINT32 values indicate the high-order and low-order 32 bits of the result value, respectively.|
+| OsCpuTick2US | Converts cycles into microseconds. Two UINT32 values indicate the high-order and low-order 32 bits of the result value, respectively.|
-The following table describes APIs available for the OpenHarmony LiteOS-M time management. For more details about the APIs, see the API reference.
+
+**Table 2** APIs for time statistics
+
-**Table 1** APIs of the time management module
+| API| Description|
+| -------- | -------- |
+| LOS_SysClockGet | Obtains the system clock.|
+| LOS_TickCountGet | Obtains the number of ticks since the system starts.|
+| LOS_CyclePerTickGet | Obtains the number of cycles for each tick.|
-| Category| API| Description|
-| -------- | -------- | -------- |
-| Time conversion| LOS_MS2Tick | Converts milliseconds into ticks.|
-| | LOS_Tick2MS | Converts ticks into milliseconds.|
-| | OsCpuTick2MS | Converts cycles into milliseconds. Two UINT32 values indicate the high-order and low-order 32 bits of the result value, respectively.|
-| | OsCpuTick2US | Converts cycles into microseconds. Two UINT32 values indicate the high-order and low-order 32 bits of the result value, respectively.|
-| Time statistics| LOS_SysClockGet | Obtains the system clock.|
-| | LOS_TickCountGet | Obtains the number of ticks since the system starts.|
-| | LOS_CyclePerTickGet | Obtains the number of cycles for each tick.|
-| | LOS_CurrNanosec |Obtains the number of nanoseconds since the system starts.|
-| Delay management| LOS_UDelay |Performs busy waiting in μs, which can be preempted by a task with a higher priority.|
-| | LOS_MDelay |Performs busy waiting in ms, which can be preempted by a task with a higher priority.|
-## How to Develop
+## How to Develop
The typical development process of time management is as follows:
-1. Complete board configuration and adaptation as required, and configure the system clock frequency \(**OS\_SYS\_CLOCK** in Hz and **LOSCFG\_BASE\_CORE\_TICK\_PER\_SECOND**\). The default value of **OS\_SYS\_CLOCK** varies with the hardware platform.
-2. Call the clock conversion and statistics APIs.
+1. Complete board configuration and adaptation as required, and configure the system clock frequency (**OS_SYS_CLOCK** in Hz and **LOSCFG_BASE_CORE_TICK_PER_SECOND**). The default value of **OS_SYS_CLOCK** varies with the hardware platform.
-> **NOTE**
+2. Call the clock conversion and statistics APIs.
+
+> **NOTE**
>
->- The time management module depends on **OS\_SYS\_CLOCK** and **LOSCFG\_BASE\_CORE\_TICK\_PER\_SECOND**.
->- The number of system ticks is not counted when the interrupt feature is disabled. Therefore, the number of ticks cannot be used as the accurate time.
->- The configuration options are maintained in the **target\_config.h** file of the development board project.
+> - The time management module depends on **OS_SYS_CLOCK** and **LOSCFG_BASE_CORE_TICK_PER_SECOND**.
+>
+> - The number of system ticks is not counted when the interrupt feature is disabled. Therefore, the number of ticks cannot be used as the accurate time.
+>
+> - The configuration options are maintained in the **target_config.h** file of the development board project.
+
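+For example, the relevant options in **target_config.h** might look as follows (the values are illustrative; the actual clock frequency depends on the hardware platform):
+
+```
+#define OS_SYS_CLOCK                     49500000UL  /* system clock frequency, in Hz */
+#define LOSCFG_BASE_CORE_TICK_PER_SECOND 100         /* number of ticks per second */
+```
+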
-## Development Example
+## Development Example
-### Example Description
+
+### Example Description
The following example describes basic time management methods, including:
- Time conversion: convert milliseconds to ticks or convert ticks to milliseconds.
+
- Time statistics: obtain the number of cycles per tick, number of ticks since system startup, and number of delayed ticks.
-### Sample Code
+
+### Sample Code
Prerequisites
-- The default value of **LOSCFG\_BASE\_CORE\_TICK\_PER\_SECOND** is **100**.
-- The system clock frequency **OS\_SYS\_CLOCK** is configured.
+- The default value of **LOSCFG_BASE_CORE_TICK_PER_SECOND** is **100**.
+
+- The system clock frequency **OS_SYS_CLOCK** is configured.
Time conversion:
+
```
VOID Example_TransformTime(VOID)
{
@@ -88,6 +99,7 @@ VOID Example_TransformTime(VOID)
Time statistics and delay:
+
```
VOID Example_GetTime(VOID)
{
@@ -112,12 +124,14 @@ VOID Example_GetTime(VOID)
}
```
-### Verification
+
+### Verification
The development is successful if the return result is as follows:
Time conversion:
+
```
tick = 1000
ms = 1000
@@ -125,6 +139,7 @@ ms = 1000
Time statistics and delay:
+
```
LOS_CyclePerTickGet = 495000
LOS_TickCountGet = 1
diff --git a/en/device-dev/kernel/kernel-mini-basic-soft.md b/en/device-dev/kernel/kernel-mini-basic-soft.md
index ec8e10ec130d2feb0d5c931b1ffbcccca56a7818..e6bb601669fcc6fd7403c25ccea520323eeea70d 100644
--- a/en/device-dev/kernel/kernel-mini-basic-soft.md
+++ b/en/device-dev/kernel/kernel-mini-basic-soft.md
@@ -1,5 +1,6 @@
# Software Timer
+
## Basic Concepts
The software timer is a software-simulated timer based on system tick interrupts. When the preset tick counter value has elapsed, the user-defined callback will be invoked. The timing precision is related to the cycle of the system tick clock.
@@ -8,144 +9,132 @@ Due to the limitation in hardware, the number of hardware timers cannot meet use
The software timer supports the following functions:
-- Disabling the software timer using a macro
-- Creating a software timer
-- Starting a software timer
-- Stopping a software timer
-- Deleting a software timer
-- Obtaining the number of remaining ticks of a software timer
+- Disabling the software timer using a macro
-## Working Principles
+- Creating a software timer
-The software timer is a system resource. When modules are initialized, a contiguous section of memory is allocated for software timers. The maximum number of timers supported by the system is configured by the **LOSCFG\_BASE\_CORE\_SWTMR\_LIMIT** macro in **los\_config.h**.
+- Starting a software timer
-Software timers use a queue and a task resource of the system. The software timers are triggered based on the First In First Out \(FIFO\) rule. A timer with a shorter value is always closer to the queue head than a timer with a longer value, and is preferentially triggered.
+- Stopping a software timer
-The software timer counts time in ticks. When a software timer is created and started, the OpenHarmony LiteOS-M kernel determines the timer expiry time based on the current system time \(in ticks\) and the timing interval set by the user, and adds the timer control structure to the global timing list.
+- Deleting a software timer
-When a tick interrupt occurs, the tick interrupt handler scans the global timing list for expired timers. If such timers are found, the timers are recorded.
+- Obtaining the number of remaining ticks of a software timer
-When the tick interrupt handling function is complete, the software timer task \(with the highest priority\) is woken up. In this task, the timeout callback function for the recorded timer is called.
-### Timer States
+## Working Principles
-- OS\_SWTMR\_STATUS\_UNUSED
+The software timer is a system resource. When modules are initialized, a contiguous section of memory is allocated for software timers. The maximum number of timers supported by the system is configured by the **LOSCFG_BASE_CORE_SWTMR_LIMIT** macro in **los_config.h**.
- The timer is not in use. When the timer module is initialized, all timer resources in the system are set to this state.
+Software timers use a queue and a task resource of the system. The software timers are triggered based on the First In First Out (FIFO) rule. A timer with a shorter value is always closer to the queue head than a timer with a longer value, and is preferentially triggered.
+The software timer counts time in ticks. When a software timer is created and started, the OpenHarmony LiteOS-M kernel determines the timer expiry time based on the current system time (in ticks) and the timing interval set by the user, and adds the timer control structure to the global timing list.
-- OS\_SWTMR\_STATUS\_CREATED
+When a tick interrupt occurs, the tick interrupt handler scans the global timing list for expired timers. If such timers are found, the timers are recorded.
- The timer is created but not started or the timer is stopped. When **LOS\_SwtmrCreate** is called for a timer that is not in use or **LOS\_SwtmrStop** is called for a newly started timer, the timer changes to this state.
+When the tick interrupt handling function is complete, the software timer task (with the highest priority) is woken up. In this task, the timeout callback function for the recorded timer is called.
-- OS\_SWTMR\_STATUS\_TICKING
+### Timer States
- The timer is running \(counting\). When **LOS\_SwtmrStart** is called for a newly created timer, the timer enters this state.
+- OS_SWTMR_STATUS_UNUSED
+
+ The timer is not in use. When the timer module is initialized, all timer resources in the system are set to this state.
+- OS_SWTMR_STATUS_CREATED
+
+ The timer is created but not started or the timer is stopped. When **LOS_SwtmrCreate** is called for a timer that is not in use or **LOS_SwtmrStop** is called for a newly started timer, the timer changes to this state.
+
+- OS_SWTMR_STATUS_TICKING
+
+ The timer is running (counting). When **LOS_SwtmrStart** is called for a newly created timer, the timer enters this state.
-### Timer Modes
+### Timer Modes
-The OpenHarmony LiteOS-M kernel provides three types of software timers:
+The OpenHarmony LiteOS-M kernel provides the following types of software timers:
+
+- One-shot timer: Once started, the timer is automatically deleted after triggering only one timer event.
+
+- Periodic timer: This type of timer periodically triggers timer events until it is manually stopped.
+
+- One-shot timer deleted by calling an API
-- One-shot timer: Once started, the timer is automatically deleted after triggering only one timer event.
-- Periodic timer: This type of timer periodically triggers timer events until it is manually stopped.
-- One-shot timer deleted by calling an API
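+
+A minimal sketch of how the mode is chosen when a timer is created, assuming the **LOS_SWTMR_MODE_ONCE** and **LOS_SWTMR_MODE_PERIOD** mode constants and the five-parameter form of **LOS_SwtmrCreate** (interval, mode, callback, timer ID pointer, callback argument) that applies when timer alignment is disabled; check the API reference for the exact prototype. The full development example below shows the complete usage in a task context.
+
+```
+#include "los_swtmr.h"
+
+VOID TimerCallback(UINT32 arg)
+{
+    printf("timer expired, arg = %u\n", arg);
+}
+
+VOID TimerModeSketch(VOID)
+{
+    UINT32 onceId = 0;
+    UINT32 periodId = 0;
+
+    /* One-shot timer: triggers once after 100 ticks and is then deleted automatically. */
+    (VOID)LOS_SwtmrCreate(100, LOS_SWTMR_MODE_ONCE, TimerCallback, &onceId, 1);
+
+    /* Periodic timer: triggers every 200 ticks until LOS_SwtmrStop is called. */
+    (VOID)LOS_SwtmrCreate(200, LOS_SWTMR_MODE_PERIOD, TimerCallback, &periodId, 2);
+
+    (VOID)LOS_SwtmrStart(onceId);
+    (VOID)LOS_SwtmrStart(periodId);
+}
+```
+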
## Available APIs
The following table describes APIs available for the OpenHarmony LiteOS-M software timer module. For more details about the APIs, see the API reference.
-**Table 1** Software timer APIs
-
-
-
-Function
- |
-API
- |
-Description
- |
-
-
-Creating or deleting timers
- |
-LOS_SwtmrCreate
- |
-Creates a software timer.
- |
-
-LOS_SwtmrDelete
- |
-Deletes a software timer.
- |
-
-Starting or stopping timers
- |
-LOS_SwtmrStart
- |
-Starts a software timer.
- |
-
-LOS_SwtmrStop
- |
-Stop a software timer.
- |
-
-Obtaining remaining ticks of a software timer
- |
-LOS_SwtmrTimeGet
- |
-Obtaining remaining ticks of a software timer
- |
-
-
-
+
+**Table 1** Software timer APIs
+
+| API| Description|
+| -------- | -------- |
+| LOS_SwtmrCreate| Creates a timer.|
+| LOS_SwtmrDelete| Deletes a timer.|
+| LOS_SwtmrStart| Starts a timer.|
+| LOS_SwtmrStop| Stops a timer.|
+| LOS_SwtmrTimeGet| Obtains the remaining ticks of a software timer.|
+
## How to Develop
The typical development process of software timers is as follows:
-1. Configure the software timer.
- - Check that **LOSCFG\_BASE\_CORE\_SWTMR** and **LOSCFG\_BASE\_IPC\_QUEUE** are set to **1**.
- - Configure **LOSCFG\_BASE\_CORE\_SWTMR\_LIMIT** \(maximum number of software timers supported by the system\).
- - Configure **OS\_SWTMR\_HANDLE\_QUEUE\_SIZE** \(maximum length of the software timer queue\).
+1. Configure the software timer.
+ - Check that **LOSCFG_BASE_CORE_SWTMR** and **LOSCFG_BASE_IPC_QUEUE** are set to **1**.
+ - Configure **LOSCFG_BASE_CORE_SWTMR_LIMIT** (maximum number of software timers supported by the system).
+ - Configure **OS_SWTMR_HANDLE_QUEUE_SIZE** (maximum length of the software timer queue).
+
+2. Call **LOS_SwtmrCreate** to create a software timer.
+ - Create a software timer with the specified timing duration, timeout handling function, and triggering mode.
+ - Return the function execution result (success or failure).
-2. Call **LOS\_SwtmrCreate** to create a software timer.
- - Create a software timer with the specified timing duration, timeout handling function, and triggering mode.
- - Return the function execution result \(success or failure\).
+3. Call **LOS_SwtmrStart** to start the software timer.
-3. Call **LOS\_SwtmrStart** to start the software timer.
-4. Call **LOS\_SwtmrTimeGet** to obtain the remaining number of ticks of the software timer.
-5. Call **LOS\_SwtmrStop** to stop the software timer.
-6. Call **LOS\_SwtmrDelete** to delete the software timer.
+4. Call **LOS_SwtmrTimeGet** to obtain the remaining number of ticks of the software timer.
+
+5. Call **LOS_SwtmrStop** to stop the software timer.
+
+6. Call **LOS_SwtmrDelete** to delete the software timer.
+
+> **NOTE**
+> - Avoid too many operations in the callback function of the software timer. Do not use APIs or perform operations that may cause task suspension or blocking.
+>
+> - The software timers use a queue and a task resource of the system. The priority of the software timer tasks is set to **0** and cannot be changed.
+>
+> - The number of software timer resources that can be configured in the system is the total number of software timer resources available to the entire system, not the number of software timer resources available to users. For example, if the system software timer occupies one more resource, the number of software timer resources available to users decreases by one.
+>
+> - If a one-shot software timer is created, the system automatically deletes the timer and reclaims resources after the timer times out and the callback function is executed.
+>
+> - For a one-shot software timer that will not be automatically deleted after expiration, you need to call **LOS_SwtmrDelete** to delete it and reclaim the timer resource to prevent resource leakage.
-> **NOTE**
->- Avoid too many operations in the callback function of the software timer. Do not use APIs or perform operations that may cause task suspension or blocking.
->- The software timers use a queue and a task resource of the system. The priority of the software timer tasks is set to **0** and cannot be changed.
->- The number of software timer resources that can be configured in the system is the total number of software timer resources available to the entire system, not the number of software timer resources available to users. For example, if the system software timer occupies one more resource, the number of software timer resources available to users decreases by one.
->- If a one-shot software timer is created, the system automatically deletes the timer and reclaims resources after the timer times out and the callback function is executed.
->- For a one-shot software timer that will not be automatically deleted after expiration, you need to call **LOS\_SwtmrDelete** to delete it and reclaim the timer resource to prevent resource leakage.
## Development Example
+
### Example Description
The following programming example demonstrates how to:
-1. Create, start, delete, pause, and restart a software timer.
-2. Use a one-shot software timer and a periodic software timer
+1. Create, start, delete, pause, and restart a software timer.
+
+2. Use a one-shot software timer and a periodic software timer.
+
### Sample Code
Prerequisites
-- In **los\_config.h**, **LOSCFG\_BASE\_CORE\_SWTMR** is enabled.
-- In **los\_config.h**, **LOSCFG\_BASE\_CORE\_SWTMR\_ALIGN** is disabled. The sample code does not involve timer alignment.
-- The maximum number of software timers supported by the system \(**LOSCFG\_BASE\_CORE\_SWTMR\_LIMIT**\) is configured.
-- The maximum length of the software timer queue \(OS\_SWTMR\_HANDLE\_QUEUE\_SIZE\) is configured.
+- In **los_config.h**, **LOSCFG_BASE_CORE_SWTMR** is enabled.
+
+- In **los_config.h**, **LOSCFG_BASE_CORE_SWTMR_ALIGN** is disabled. The sample code does not involve timer alignment.
+
+- The maximum number of software timers supported by the system (**LOSCFG_BASE_CORE_SWTMR_LIMIT**) is configured.
+
+- The maximum length of the software timer queue (**OS_SWTMR_HANDLE_QUEUE_SIZE**) is configured.
The sample code is as follows:
+
```
#include "los_swtmr.h"
@@ -156,7 +145,7 @@ UINT32 g_timerCount2 = 0;
/* Task ID*/
UINT32 g_testTaskId01;
-void Timer1_Callback(UINT32 arg) //Callback function 1
+void Timer1_Callback(UINT32 arg) //Callback 1
{
UINT32 tick_last1;
g_timerCount1++;
@@ -164,7 +153,7 @@ void Timer1_Callback(UINT32 arg) //Callback function 1
printf("g_timerCount1=%d, tick_last1=%d\n", g_timerCount1, tick_last1);
}
-void Timer2_Callback(UINT32 arg) //Callback function 2
+void Timer2_Callback(UINT32 arg) //Callback 2
{
UINT32 tick_last2;
tick_last2 = (UINT32)LOS_TickCountGet();
@@ -237,10 +226,12 @@ UINT32 Example_TaskEntry(VOID)
}
```
+
### Verification
The output is as follows:
+
```
create Timer1 success
start Timer1 success
@@ -261,4 +252,3 @@ g_timerCount2=9 tick_last2=2113
g_timerCount2=10 tick_last2=2213
delete Timer2 success
```
-
diff --git a/en/device-dev/kernel/kernel-mini-basic-task.md b/en/device-dev/kernel/kernel-mini-basic-task.md
index b431d06f674446fe4d0ac7c990eea128d641f7af..e2b71e961f6335ce9da5485e8b025b0df88e7308 100644
--- a/en/device-dev/kernel/kernel-mini-basic-task.md
+++ b/en/device-dev/kernel/kernel-mini-basic-task.md
@@ -1,65 +1,68 @@
# Task Management
+
## Basic Concepts
From the perspective of the operating system, tasks are the minimum running units that compete for system resources. They can use or wait for CPUs, use system resources such as memory, and run independently.
The task module of the OpenHarmony LiteOS-M provides multiple tasks and supports switching between tasks, helping users manage business process procedures. The task module has the following features:
-- Multiple tasks are supported.
-- A task represents a thread.
-- The preemptive scheduling mechanism is used for tasks. High-priority tasks can interrupt low-priority tasks. Low-priority tasks can be scheduled only after high-priority tasks are blocked or complete.
-- Time slice round-robin is used to schedule tasks with the same priority.
-- A total of 32 \(**0** to **31**\) priorities are defined. **0** is the highest priority, and **31** is the lowest.
-
-### Task-related Concepts
-
-**Task States**
-
-A task has multiple states. After the system initialization is complete, the created tasks can compete for certain resources in the system according to the scheduling procedure regulated by the kernel.
-
-A task can be in any of the following states:
-
-- Ready: The task is in the ready queue, waiting for execution by a CPU.
-- Running: The task is being executed.
-- Blocked: The task is not in the ready queue. The task may be suspended, delayed, waiting for a semaphore, waiting to read from or write into a queue, or reading from or writing into an event.
-- Dead: The task execution is complete and waiting for the system to reclaim resources.
-
-**Task State Transitions**
-
-**Figure 1** Task state transitions
-
-
-The task transition process is as follows:
+- Multiple tasks are supported.
-- Ready → Running
+- A task represents a thread.
- A task enters Ready state once created. When task switching occurs, the task with the highest priority in the Ready queue will be executed. The task being executed enters the Running state and is removed from the Ready queue.
+- The preemptive scheduling mechanism is used for tasks. High-priority tasks can interrupt low-priority tasks. Low-priority tasks can be scheduled only after high-priority tasks are blocked or complete.
-- Running → Blocked
+- Time slice round-robin is used to schedule tasks with the same priority.
- When a running task is blocked \(suspended, delayed, or reading semaphores\), it will be inserted to the blocked task queue and changes from the Running state to the Blocked state. Then, task switching is triggered to run the task with the highest priority in the Ready queue.
+- A total of 32 (**0** to **31**) priorities are defined. **0** is the highest priority, and **31** is the lowest.
-- Blocked → Ready \(Blocked → Running\)
- When a blocked task is recovered \(for example, the task is resumed, the delay period or semaphore read period times out, or the task successfully reads a semaphore\), the task will be added to the Ready queue and change from the Blocked state to the Ready state. If the priority of the recovered task is higher than that of the running task, task switching will be triggered to run the recovered task. Then, the task changes from the Ready state to the Running state.
+### Task-related Concepts
-- Ready → Blocked
+**Task States**
+
- When a task in the Ready state is blocked \(suspended\), the task changes to the Blocked state and is deleted from the Ready queue. The blocked task will not be scheduled until it is recovered.
+A task has multiple states. After the system initialization is complete, the created tasks can compete for certain resources in the system according to the scheduling procedure regulated by the kernel.
+
-- Running → Ready
+A task can be in any of the following states:
- When a task with a higher priority is created or recovered, tasks will be scheduled. The task with the highest priority in the Ready queue changes to the Running state. The originally running task changes to the Ready state and remains in the Ready queue.
+- Ready: The task is in the ready queue, waiting for execution by a CPU.
-- Running → Dead
+- Running: The task is being executed.
- When a running task is complete, it changes to the Dead state. The Dead state includes normal exit state as the task is complete and the Invalid state. For example, if a task is complete but is not automatically deleted, the task is in the Invalid state.
+- Blocked: The task is not in the ready queue. The task may be suspended, delayed, waiting for a semaphore, waiting to read from or write into a queue, or reading from or writing into an event.
-- Blocked → Dead
+- Dead: The task execution is complete and waiting for the system to reclaim resources.
- If an API is called to delete a blocked task, the task state change from Blocked to Dead.
+
+**Task State Transitions**
+
+**Figure 1** Task state transitions
+
+ 
+
+The task state transition process is as follows:
+
+- Ready → Running
+
+ A task enters the Ready state once created. When task switching occurs, the task with the highest priority in the Ready queue will be executed. The task being executed enters the Running state and is removed from the Ready queue.
+- Running → Blocked
+
+ When a running task is blocked (suspended, delayed, or reading semaphores), it will be inserted into the blocked task queue and change from the Running state to the Blocked state. Then, task switching is triggered to run the task with the highest priority in the Ready queue.
+- Blocked → Ready (Blocked → Running)
+
+ When a blocked task is recovered (for example, the task is resumed, the delay period or semaphore read period times out, or the task successfully reads a semaphore), the task will be added to the Ready queue and change from the Blocked state to the Ready state. If the priority of the recovered task is higher than that of the running task, task switching will be triggered to run the recovered task. Then, the task changes from the Ready state to the Running state.
+- Ready → Blocked
+
+ When a task in the Ready state is blocked (suspended), the task changes to the Blocked state and is deleted from the Ready queue. The blocked task will not be scheduled until it is recovered.
+- Running → Ready
+
+ When a task with a higher priority is created or recovered, tasks will be scheduled. The task with the highest priority in the Ready queue changes to the Running state. The originally running task changes to the Ready state and remains in the Ready queue.
+- Running → Dead
+
+ When a running task is complete, it changes to the Dead state. The Dead state includes the normal exit state (the task is complete) and the Invalid state. For example, if a task is complete but is not automatically deleted, the task is in the Invalid state.
+- Blocked → Dead
+
+ If an API is called to delete a blocked task, the task state changes from Blocked to Dead.
**Task ID**
@@ -83,81 +86,84 @@ Resources, such as registers, used during the running of a task. When a task is
**Task Control Block**
-Each task has a task control block \(TCB\). A TCB contains task information, such as context stack pointer, state, priority, ID, name, and stack size. The TCB reflects the running status of a task.
+Each task has a task control block (TCB). A TCB contains task information, such as context stack pointer, state, priority, ID, name, and stack size. The TCB reflects the running status of a task.
**Task Switching**
Task switching involves actions, such as obtaining the task with the highest priority in the Ready queue, saving the context of the switched-out task, and restoring the context of the switched-in task.
-### Task Running Mechanism
+
+### Task Running Mechanism
When a task is created, the system initializes the task stack and presets the context. The system places the task entry function in the corresponding position so that the function is executed when the task enters the running state for the first time.
+
## Available APIs
The following table describes APIs available for the OpenHarmony LiteOS-M task module. For more details about the APIs, see the API reference.
-**Table 1** APIs of the task management module
-
-| Category| API| Description|
-| -------- | -------- | -------- |
-| Creating or deleting a task| LOS_TaskCreateOnly | Creates a task and suspends the task to disable scheduling of the task. To enable scheduling of the task, call **LOS_TaskResume** to make the task enter the Ready state.|
-| | LOS_TaskCreate | Creates a task and places the task in the Ready state. If there is no task with a higher priority in the Ready queue, the task will be executed.|
-| | LOS_TaskDelete | Deletes a task.|
-| Controlling task status| LOS_TaskResume | Resumes a suspended task to place it in the Ready state.|
-| | LOS_TaskSuspend | Suspends the specified task and performs task switching.|
-| | LOS_TaskJoin | Suspends this task till the specified task is complete and the task control block resources are reclaimed.|
-| | LOS_TaskDetach | Changes the task attribute from **joinable** to **detach**. After the task of the **detach** attribute is complete, the task control block resources will be automatically reclaimed.|
-| | LOS_TaskDelay | Makes a task wait for a period of time (in ticks) and releases CPU resources. When the delay time expires, the task enters the Ready state again. The input parameter is the number of ticks.|
-| | LOS_Msleep | Converts the input number of milliseconds into number of ticks, and use the result to call **LOS_TaskDelay**.|
-| | LOS_TaskYield | Sets the time slice of the current task to **0** to release CPU resources and schedule the task with the highest priority in the Ready queue to run.|
-| Controlling task scheduling| LOS_TaskLock | Locks task scheduling. However, tasks can still be interrupted.|
-| | LOS_TaskUnlock | Unlocks task scheduling.|
-| | LOS_Schedule | Triggers task scheduling|
-| Controlling task priority| LOS_CurTaskPriSet | Sets the priority for the current task.|
-| | LOS_TaskPriSet | Sets the priority for a specified task.|
-| | LOS_TaskPriGet | Obtains the priority of a specified task.|
-| Obtaining Job information| LOS_CurTaskIDGet | Obtains the ID of the current task.|
-| | LOS_NextTaskIDGet | Obtains the ID of the task with the highest priority in the Ready queue.|
-| | LOS_NewTaskIDGet | Same as **LOS_NextTaskIDGet**.|
-| | LOS_CurTaskNameGet | Obtains the name of the current task.|
-| | LOS_TaskNameGet | Obtains the name of a specified task.|
-| | LOS_TaskStatusGet | Obtains the state of a specified task.|
-| | LOS_TaskInfoGet | Obtains information about a specified task, including the task state, priority, stack size, stack pointer (SP), task entry function, and used stack space.|
-| | LOS_TaskIsRunning | Checks whether the task module has started scheduling.|
-| Updating task information| LOS_TaskSwitchInfoGet | Obtains task switching information. The macro **LOSCFG_BASE_CORE_EXC_TSK_SWITCH** must be enabled.|
-| Reclaiming task stack resources| LOS_TaskResRecycle | Reclaims all task stack resources.|
+
+**Table 1** APIs of the task management module
+
+| Category| Description|
+| -------- | -------- |
+| Creating or deleting a task| **LOS_TaskCreateOnly**: creates a task and suspends it to disable scheduling of the task. To enable scheduling of the task, call **LOS_TaskResume** to make the task enter the Ready state.<br>**LOS_TaskCreate**: creates a task and places the task in the Ready state. If there is no task with a higher priority in the Ready queue, the task will be executed.<br>**LOS_TaskDelete**: deletes a task.|
+| Controlling task status| **LOS_TaskResume**: resumes a suspended task to place the task in the Ready state.<br>**LOS_TaskSuspend**: suspends the specified task and performs task switching.<br>**LOS_TaskJoin**: suspends this task till the specified task is complete and the task control block resources are reclaimed.<br>**LOS_TaskDelay**: makes a task wait for a period of time (in ticks) and releases CPU resources. When the delay time expires, the task enters the Ready state again. The input parameter is the number of ticks.<br>**LOS_Msleep**: converts the input number of milliseconds into the number of ticks, and uses the result to call **LOS_TaskDelay**.<br>**LOS_TaskYield**: sets the time slice of the current task to **0** to release CPU resources and schedule the task with the highest priority in the Ready queue to run.|
+| Controlling task scheduling| **LOS_TaskLock**: locks task scheduling. However, tasks can still be interrupted.<br>**LOS_TaskUnlock**: unlocks task scheduling.<br>**LOS_Schedule**: triggers task scheduling.|
+| Controlling task priority| **LOS_CurTaskPriSet**: sets the priority for the current task.<br>**LOS_TaskPriSet**: sets the priority for a specified task.<br>**LOS_TaskPriGet**: obtains the priority of a specified task.|
+| Obtaining task information| **LOS_CurTaskIDGet**: obtains the ID of the current task.<br>**LOS_NextTaskIDGet**: obtains the ID of the task with the highest priority in the Ready queue.<br>**LOS_NewTaskIDGet**: equivalent to **LOS_NextTaskIDGet**.<br>**LOS_CurTaskNameGet**: obtains the name of the current task.<br>**LOS_TaskNameGet**: obtains the name of a task.<br>**LOS_TaskStatusGet**: obtains the state of a task.<br>**LOS_TaskInfoGet**: obtains information about a specified task, including the task state, priority, stack size, stack pointer (SP), task entry function, and used stack space.<br>**LOS_TaskIsRunning**: checks whether the task module has started scheduling.|
+| Updating task information| **LOS_TaskSwitchInfoGet**: obtains task switching information. The macro **LOSCFG_BASE_CORE_EXC_TSK_SWITCH** must be enabled.|
## How to Develop
The typical development process of the task module is as follows:
-1. Use **LOS\_TaskLock** to lock task scheduling and prevent high-priority tasks from being scheduled.
-2. Use **LOS\_TaskCreate** to create a task.
-3. Use **LOS\_TaskUnlock** to unlock task scheduling so that tasks can be scheduled by priority.
-4. Use **LOS\_TaskDelay** to delay a task.
-5. Use **LOS\_TaskSuspend** to suspend a task.
-6. Use **LOS\_TaskResume** to resume the suspended task.
-
-> **NOTE**
->- Running idle tasks reclaims the TCBs and stacks in the to-be-recycled linked list.
->- The task name is a pointer without memory space allocated. When setting the task name, do not assign the local variable address to the task name pointer.
->- The task stack size is 8-byte aligned. Follow the "nothing more and nothing less" principle while determining the task stack size.
->- A running task cannot be suspended if task scheduling is locked.
->- Idle tasks and software timer tasks cannot be suspended or deleted.
->- In an interrupt handler or when a task is locked, the operation of calling **LOS\_TaskDelay** fails.
->- Locking task scheduling does not disable interrupts. Tasks can still be interrupted while task scheduling is locked.
->- Locking task scheduling must be used together with unlocking task scheduling.
->- Task scheduling may occur while a task priority is being set.
->- The maximum number of tasks that can be set for the operating system is the total number of tasks of the operating system, not the number of tasks available to users. For example, if the system software timer occupies one more task resource, the number of task resources available to users decreases by one.
->- **LOS\_CurTaskPriSet** and **LOS\_TaskPriSet** cannot be used in interrupts or used to modify the priorities of software timer tasks.
->- If the task corresponding to the task ID sent to **LOS\_TaskPriGet** has not been created or the task ID exceeds the maximum number of tasks, **-1** will be returned.
->- Resources such as a mutex or a semaphore allocated to a task must have been released before the task is deleted.
+1. Use **LOS_TaskLock** to lock task scheduling and prevent high-priority tasks from being scheduled.
+
+2. Use **LOS_TaskCreate** to create a task.
+
+3. Use **LOS_TaskUnlock** to unlock task scheduling so that tasks can be scheduled by priority.
+
+4. Use **LOS_TaskDelay** to delay a task.
+
+5. Use **LOS_TaskSuspend** to suspend a task.
+
+6. Use **LOS_TaskResume** to resume the suspended task.
+
+> **NOTE**
+> - Running idle tasks reclaims the TCBs and stacks in the to-be-recycled linked list.
+>
+> - The task name is a pointer without memory space allocated. When setting the task name, do not assign the local variable address to the task name pointer.
+>
+> - The task stack size is 8-byte aligned. Follow the "nothing more and nothing less" principle while determining the task stack size.
+>
+> - A running task cannot be suspended if task scheduling is locked.
+>
+> - Idle tasks and software timer tasks cannot be suspended or deleted.
+>
+> - In an interrupt handler or when a task is locked, the operation of calling **LOS_TaskDelay** fails.
+>
+> - Locking task scheduling does not disable interrupts. Tasks can still be interrupted while task scheduling is locked.
+>
+> - Locking task scheduling must be used together with unlocking task scheduling.
+>
+> - Task scheduling may occur while a task priority is being set.
+>
+> - The maximum number of tasks that can be set for the operating system is the total number of tasks of the operating system, not the number of tasks available to users. For example, if the system software timer occupies one more task resource, the number of task resources available to users decreases by one.
+>
+> - **LOS_CurTaskPriSet** and **LOS_TaskPriSet** cannot be used in interrupts or used to modify the priorities of software timer tasks.
+>
+> - If the task corresponding to the task ID sent to **LOS_TaskPriGet** has not been created or the task ID exceeds the maximum number of tasks, **-1** will be returned.
+>
+> - Resources such as a mutex or a semaphore allocated to a task must have been released before the task is deleted.
+
## Development Example
-This example describes the priority-based task scheduling and use of task-related APIs, including creating, delaying, suspending, and resuming two tasks with different priorities, and locking/unlocking task scheduling. The sample code is as follows:
+This example describes the priority-based task scheduling and use of task-related APIs, including creating, delaying, suspending, and resuming two tasks with different priorities, and locking/unlocking task scheduling.
+
+The sample code is as follows:
+
```
UINT32 g_taskHiId;
@@ -249,7 +255,7 @@ UINT32 Example_TskCaseEntry(VOID)
initParam.pcName = "TaskLo";
initParam.uwStackSize = LOSCFG_BASE_CORE_TSK_DEFAULT_STACK_SIZE;
- /*Create a low-priority task. The task will not be executed immediately after being created, because task scheduling is locked. */
+ /* Create a low-priority task. The task will not be executed immediately after being created, because task scheduling is locked. */
ret = LOS_TaskCreate(&g_taskLoId, &initParam);
if (ret != LOS_OK) {
LOS_TaskUnlock();
@@ -271,10 +277,12 @@ UINT32 Example_TskCaseEntry(VOID)
}
```
+
### Verification
The development is successful if the return result is as follows:
+
```
LOS_TaskLock() Success!
Example_TaskHi create Success!
diff --git a/en/device-dev/kernel/kernel-mini-extend-dynamic-loading.md b/en/device-dev/kernel/kernel-mini-extend-dynamic-loading.md
index 1bf198e1fa337518d06a4062c74a2cf290630b4c..3e6ab8c31e41128a1f15f10a2d396cb74dd7bc7f 100644
--- a/en/device-dev/kernel/kernel-mini-extend-dynamic-loading.md
+++ b/en/device-dev/kernel/kernel-mini-extend-dynamic-loading.md
@@ -1,17 +1,26 @@
# Dynamic Loading
+
## Basic Concepts
-In small devices with limited hardware resources, dynamic algorithm deployment capability is required to solve the problem that multiple algorithms cannot be deployed at the same time. The LiteOS-M kernel uses the Executable and Linkable Format \(ELF\) loading because it is easy to use and compatible with a wide variety of platforms. The LiteOS-M provides APIs similar to **dlopen** and **dlsym**. Apps can load and unload required algorithm libraries by using the APIs provided by the dynamic loading module. As shown in the following figure, the app obtains the corresponding information output through the API required by the third-party algorithm library. The third-party algorithm library depends on the basic APIs provided by the kernel, such as **malloc**. After the app loads the API and relocates undefined symbols, it can call the API to complete the function. The dynamic loading component supports only the Arm architecture. In addition, the signature and source of the shared library to be loaded must be verified to ensure system security.
+In small devices with limited hardware resources, dynamic algorithm deployment capability is required to allow multiple algorithms to be deployed at the same time. The LiteOS-M kernel uses the Executable and Linkable Format (ELF) loading because it is easy to use and compatible with a wide variety of platforms.
+
+The LiteOS-M provides APIs similar to **dlopen** and **dlsym**. Apps can load and unload required algorithm libraries by using the APIs provided by the dynamic loading module. As shown in the following figure, the app obtains the corresponding information output through the API required by the third-party algorithm library. The third-party algorithm library depends on the basic APIs provided by the kernel, such as **malloc**. After the app loads the API and relocates undefined symbols, it can call the API to complete the function.
+
+The dynamic loading component supports only the Arm architecture. In addition, the signature and source of the shared library to be loaded must be verified to ensure system security.
+
+ **Figure 1** LiteOS-M kernel dynamic loading architecture
+
+ 
-**Figure 1** LiteOS-M kernel dynamic loading architecture
-
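+
+A minimal sketch of the load-resolve-call-unload flow described above. The names **LOS_SoLoad**, **LOS_FindSym**, and **LOS_SoUnload** are used as illustrative names for the dlopen-/dlsym-style APIs of the dynamic loading module, and the library path and symbol name are placeholders; check the API reference for the exact prototypes.
+
+```
+/* Illustrative only: loader API names, the library path, and the symbol name are placeholders. */
+VOID LoadAlgorithmSketch(VOID)
+{
+    VOID *handle = LOS_SoLoad("/lib/libalgorithm.so");
+    INT32 (*algorithmRun)(VOID) = NULL;
+
+    if (handle == NULL) {
+        return;
+    }
+    algorithmRun = (INT32 (*)(VOID))LOS_FindSym(handle, "AlgorithmRun");
+    if (algorithmRun != NULL) {
+        (VOID)algorithmRun();
+    }
+    (VOID)LOS_SoUnload(handle);
+}
+```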
## Working Principles
+
### Exporting the Symbol Table
-The kernel needs to proactively expose the API required by the dynamic library when the shared library calls a kernel API, as shown in the following figure. This mechanism compiles the symbol information to the specified section and calls the **SYM\_EXPORT** macro to export information of the specified symbol. The symbol information is described in the structure **SymInfo**. Its members include the symbol name and symbol address information. The macro **SYM\_EXPORT** imports the symbol information to the **.sym.\*** section by using the **\_\_attribute\_\_** compilation attribute.
+The kernel needs to proactively expose the API required by the dynamic library when the shared library calls a kernel API, as shown in the following figure. This mechanism compiles the symbol information to the specified section and calls the **SYM_EXPORT** macro to export information of the specified symbol. The symbol information is described in the structure **SymInfo**, which includes the symbol name and address information. The macro **SYM_EXPORT** imports the symbol information to the **.sym.*** section by using **__attribute__**.
+
```
typedef struct {
@@ -26,12 +35,15 @@ const SymInfo sym_##func __attribute__((section(".sym."#func))) = { \
};
```
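+
+For example, a kernel function that shared libraries depend on (such as **malloc**, mentioned above) could be exported with a single line. This is a sketch based on the **SYM_EXPORT** macro shown above:
+
+```
+/* Place the symbol information of malloc into the .sym.malloc section so that the
+   dynamic loader can resolve it when relocating a shared library. */
+SYM_EXPORT(malloc)
+```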
-**Figure 2** Exported symbol table information
-
+ **Figure 2** Exported symbol table
+
+ 
+
### Loading an ELF File
-During the loading process, the LOAD section to be loaded to the memory is obtained based on the ELF file handle and the section offset of the program header table. Generally, there are two sections: read-only section and read-write section. You can run the **readelf -l** command to view the LOAD section information of the ELF file. The physical memory is requested according to the related alignment attributes. Then, a code section or a data segment is written into the memory based on the loading base address and an offset of each section.
+The **LOAD** section to be loaded to the memory can be obtained based on the ELF file handle and the section offset of the program header table. Generally, there are two sections: read-only and read-write. You can run the **readelf -l** command to view the LOAD section information of the ELF file. The physical memory is requested according to the related alignment attributes. Then, a code section or a data segment is written into the memory based on the loading base address and an offset of each section.
+
```
$ readelf -l lib.so
@@ -43,8 +55,7 @@ There are 4 program headers, starting at offset 52
Program Headers:
Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align
EXIDX 0x000760 0x00000760 0x00000760 0x00008 0x00008 R 0x4
- LOAD 0x000000 0x00000000 0x00000000 0x0076c 0x0076c R E 0x10000
- LOAD 0x00076c 0x0001076c 0x0001076c 0x0010c 0x00128 RW 0x10000
+ LOAD 0x000000 0x00000000 0x00000000 0x0076c 0x0076c R E 0x10000
+ LOAD 0x00076c 0x0001076c 0x0001076c 0x0010c 0x00128 RW 0x10000
DYNAMIC 0x000774 0x00010774 0x00010774 0x000c8 0x000c8 RW 0x4
Section to Segment mapping:
@@ -55,29 +66,45 @@ Program Headers:
03 .dynamic
```
-**Figure 3** Process of loading an ELF file
-
+ **Figure 3** Process of loading an ELF file
+ 
+
-### ELF File Link
+### ELF File Linking
A relocation table is obtained by using a **.dynamic** section of the ELF file. Each entry that needs to be relocated in the table is traversed. Then, the symbol is searched, based on the symbol name that needs to be relocated, in the shared library and the exported symbol table provided by the kernel. The relocation information is updated based on the symbol found.
-**Figure 4** ELF file linking process
-
+ **Figure 4** ELF file linking process
+
+ 
+
## ELF Specifications
+
### ELF Type
-When compiling a shared library, you can add **-fPIC** \(a compilation option\) to compile location-independent code. The shared library file type is **ET\_DYN**, which can be loaded to any valid address range.
+When compiling a shared library, you can add **-fPIC** (a compilation option) to compile position-independent code. The shared library file type is **ET_DYN**, which can be loaded to any valid address range.
Example: **arm-none-eabi-gcc -fPIC -shared -o lib.so lib.c**
+
### Options for Linking
-1. **-nostdlib**: Do not use the lib library in the compiler when linking.
-2. **-nostartfiles**: Do not use the startup files in the compiler when linking.
-3. **-fPIC**: compiles location-independent shared libraries.
-4. **-z max-page-size=4**: sets the number of alignment bytes of the loadable sections in the binary file to **4**. This setting saves memory and can be used for a dynamic library.
-5. **-mcpu=** specifies the CPU architecture.
+- **-nostdlib**: does not use the standard libraries provided by the compiler when linking.
+
+- **-nostartfiles**: does not use the startup files provided by the compiler when linking.
+
+- **-fPIC**: compiles position-independent code for shared libraries.
+
+- **-z max-page-size=4**: sets the alignment of the loadable sections in the binary file to 4 bytes. This setting saves memory and can be used for a dynamic library.
+
+- **-mcpu=**: specifies the CPU architecture.
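+
+For example, the options might be combined into a single command such as **arm-none-eabi-gcc -mcpu=cortex-m4 -fPIC -shared -nostdlib -nostartfiles -Wl,-z,max-page-size=4 -o lib.so lib.c** (a sketch; adjust the CPU type to the target board, and note that **-z max-page-size=4** is passed to the linker through **-Wl** when the gcc driver is used).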
+
+
+## Constraints
+
+- Applications cannot be loaded. Only shared libraries can be loaded.
+- The shared library to be loaded cannot depend on the libc library or other shared libraries in the compiler. It can depend only on the external APIs provided by the kernel (provided by the exported symbol table).
+- This feature depends on the cross compiler and file system.
diff --git a/en/device-dev/kernel/kernel-mini-extend-file-fat.md b/en/device-dev/kernel/kernel-mini-extend-file-fat.md
index 5f191076a6e75743b254e0b8611698de701bb8e0..b7a9bffdd5e106d1afedff3f960b4f29d9b8dca2 100644
--- a/en/device-dev/kernel/kernel-mini-extend-file-fat.md
+++ b/en/device-dev/kernel/kernel-mini-extend-file-fat.md
@@ -1,20 +1,27 @@
# FAT
+
## Basic Concepts
-File Allocation Table \(FAT\) is a file system developed for personal computers. It consists of the DOS Boot Record \(DBR\) region, FAT region, and Data region. Each entry in the FAT region records information about the corresponding cluster in the storage device. The cluster information includes whether the cluster is used, number of the next cluster of the file, whether the file ends with the cluster. The FAT file system supports multiple formats, such as FAT12, FAT16, and FAT32. The numbers 12, 16, and 32 indicate the number of bits per cluster within the FAT, respectively. The FAT file system supports multiple media, especially removable media \(such as USB flash drives, SD cards, and removable hard drives\). The FAT file system ensures good compatibility between embedded devices and desktop systems \(such as Windows and Linux\) and facilitates file management.
+File Allocation Table (FAT) is a file system developed for personal computers. It consists of the DOS Boot Record (DBR) region, FAT region, and Data region. Each entry in the FAT region records information about the corresponding cluster in the storage device. The cluster information includes whether the cluster is used, the number of the next cluster of the file, and whether the file ends with the cluster.
+
+The FAT file system supports multiple formats, such as FAT12, FAT16, and FAT32. The numbers 12, 16, and 32 indicate the number of bits per cluster within the FAT, respectively. The FAT file system supports multiple media, especially removable media (such as USB flash drives, SD cards, and removable hard drives). The FAT file system ensures good compatibility between embedded devices and desktop systems (such as Windows and Linux) and facilitates file management.
The OpenHarmony kernel supports FAT12, FAT16, and FAT32 file systems. These file systems require a tiny amount of code to implement, use less resources, support a variety of physical media, and are tailorable and compatible with Windows and Linux systems. They also support identification of multiple devices and partitions. The kernel supports multiple partitions on hard drives and allows creation of the FAT file system on the primary partition and logical partition.
+
## Development Guidelines
-### Adaptation of Drivers
-The use of the FAT file system requires support from the underlying MultiMediaCard \(MMC\) drivers. To run FatFS on a board with an MMC storage device, you must:
+### Driver Adaptation
+
+The use of the FAT file system requires support from the underlying MultiMediaCard (MMC) drivers. To run FatFS on a board with an MMC storage device, you must:
-1. Implement the **disk\_status**, **disk\_initialize**, **disk\_read**, **disk\_write**, and **disk\_ioctl** APIs to adapt to the embedded MMC \(eMMC\) drivers on the board.
+1. Implement the **disk_status**, **disk_initialize**, **disk_read**, **disk_write**, and **disk_ioctl** APIs to adapt to the embedded MMC (eMMC) drivers on the board.
+2. Add the **fs_config.h** file with information such as **FS_MAX_SS** (maximum sector size of the storage device) and **FF_VOLUME_STRS** (partition names) configured.
+
+The following is an example:
-2. Add the **fs\_config.h** file with information such as **FS\_MAX\_SS** \(maximum sector size of the storage device\) and **FF\_VOLUME\_STRS** \(partition names\) configured. The following is an example:
```
#define FF_VOLUME_STRS "system", "inner", "update", "user"
@@ -22,63 +29,70 @@ The use of the FAT file system requires support from the underlying MultiMediaCa
#define FAT_MAX_OPEN_FILES 50
```
+
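+The following is a skeleton of the disk APIs listed in step 1, assuming the standard FatFS **diskio** interface; the **EmmcXxx** functions and **EMMC_SECTOR_XXX** macros are placeholders for the board's eMMC driver, and the exact parameter types can differ slightly between FatFS versions:
+
+```
+#include "diskio.h"   /* standard FatFS disk I/O interface */
+
+/* EmmcInit/EmmcRead/EmmcWrite/EmmcSync and EMMC_SECTOR_* are placeholders for the board driver. */
+
+DSTATUS disk_status(BYTE pdrv)
+{
+    return 0;   /* 0 indicates that the drive is ready */
+}
+
+DSTATUS disk_initialize(BYTE pdrv)
+{
+    return (EmmcInit(pdrv) == 0) ? 0 : STA_NOINIT;
+}
+
+DRESULT disk_read(BYTE pdrv, BYTE *buff, DWORD sector, UINT count)
+{
+    return (EmmcRead(pdrv, buff, sector, count) == 0) ? RES_OK : RES_ERROR;
+}
+
+DRESULT disk_write(BYTE pdrv, const BYTE *buff, DWORD sector, UINT count)
+{
+    return (EmmcWrite(pdrv, buff, sector, count) == 0) ? RES_OK : RES_ERROR;
+}
+
+DRESULT disk_ioctl(BYTE pdrv, BYTE cmd, void *buff)
+{
+    switch (cmd) {
+        case CTRL_SYNC:
+            return (EmmcSync(pdrv) == 0) ? RES_OK : RES_ERROR;
+        case GET_SECTOR_COUNT:
+            *(DWORD *)buff = EMMC_SECTOR_COUNT;
+            return RES_OK;
+        case GET_SECTOR_SIZE:
+            *(WORD *)buff = EMMC_SECTOR_SIZE;
+            return RES_OK;
+        default:
+            return RES_PARERR;
+    }
+}
+```
+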
### How to Develop
-> **NOTE**
->- Note the following when managing FatFS files and directories:
-> - A file cannot exceed 4 GB.
-> - **FAT\_MAX\_OPEN\_FILES** specifies the maximum number files you can open at a time, and **FAT\_MAX\_OPEN\_DIRS** specifies the maximum number of folders you can open at a time.
-> - Root directory management is not supported. File and directory names start with the partition name. For example, **user/testfile** indicates the file or directory **testfile** in the **user** partition.
-> - To open a file multiple times, use **O\_RDONLY** \(read-only mode\). **O\_RDWR** or **O\_WRONLY** \(writable mode\) can open a file only once.
-> - The read and write pointers are not separated. If a file is open in **O\_APPEND** mode, the read pointer is also at the end of the file. If you want to read the file from the beginning, you must manually set the position of the read pointer.
-> - File and directory permission management is not supported.
-> - The **stat** and **fstat** APIs do not support query of the modification time, creation time, and last access time. The Microsoft FAT protocol does not support time before A.D. 1980.
->- Note the following when mounting and unmounting FatFS partitions:
-> - Partitions can be mounted with the read-only attribute. When the input parameter of the **mount** function is **MS\_RDONLY**, all APIs with the write attribute, such as **write**, **mkdir**, **unlink**, and **open** with **non-O\_RDONLY** attributes, will be rejected.
-> - You can use the **MS\_REMOUNT** flag with **mount** to modify the permission for a mounted partition.
-> - Before unmounting a partition, ensure that all directories and files in the partition are closed.
-> - You can use **umount2** with the **MNT\_FORCE** parameter to forcibly close all files and folders and unmount the partition. However, this may cause data loss. Therefore, exercise caution when running **umount2**.
->- The FAT file system supports re-partitioning and formatting of storage devices using **fatfs\_fdisk** and **fatfs\_format**.
-> - If a partition is mounted before being formatted using **fatfs\_format**, you must close all directories and files in the partition and unmount the partition first.
-> - Before calling **fatfs\_fdisk**, ensure that all partitions in the device are unmounted.
-> - Using **fatfs\_fdisk** and **fatfs\_format** may cause data loss. Exercise caution when using them.
+> **NOTE**
+>
+> Note the following when managing FatFS files and directories:
+> - A file cannot exceed 4 GB.
+> - **FAT_MAX_OPEN_FILES** specifies the maximum number of files you can open at a time, and **FAT_MAX_OPEN_DIRS** specifies the maximum number of folders you can open at a time.
+> - Root directory management is not supported. File and directory names start with the partition name. For example, **user/testfile** indicates the file or directory **testfile** in the **user** partition.
+> - To open a file multiple times, use **O_RDONLY** (read-only mode). **O_RDWR** or **O_WRONLY** (writable mode) can open a file only once.
+> - The read and write pointers are not separated. If a file is open in **O_APPEND** mode, the read pointer is also at the end of the file. If you want to read the file from the beginning, you must manually set the position of the read pointer.
+> - File and directory permission management is not supported.
+> - The **stat** and **fstat** APIs do not support query of the modification time, creation time, and last access time. The Microsoft FAT protocol does not support time before A.D. 1980.
+>
+> Note the following when mounting and unmounting FatFS partitions:
+> - Partitions can be mounted with the read-only attribute. When the input parameter of the **mount** function is **MS_RDONLY**, all APIs with the write attribute, such as **write**, **mkdir**, **unlink**, and **open** with **non-O_RDONLY** attributes, will be rejected.
+> - You can use the **MS_REMOUNT** flag with **mount** to modify the permission for a mounted partition.
+> - Before unmounting a partition, ensure that all directories and files in the partition are closed.
+> - You can use **umount2** with the **MNT_FORCE** parameter to forcibly close all files and folders and unmount the partition. However, this may cause data loss. Therefore, exercise caution when running **umount2**.
+>
+> The FAT file system supports re-partitioning and formatting of storage devices using **fatfs_fdisk** and **fatfs_format**.
+> - If a partition is mounted before being formatted using **fatfs_format**, you must close all directories and files in the partition and unmount the partition first.
+> - Before calling **fatfs_fdisk**, ensure that all partitions in the device are unmounted.
+> - Using **fatfs_fdisk** and **fatfs_format** may cause data loss. Exercise caution when using them.
+
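+A sketch of the mount flags described in the note above; the device node name and the **vfat** type string are placeholders that must match the board's MMC driver and file system adaptation:
+
+```
+#include <sys/mount.h>
+
+int MountExample(void)
+{
+    /* Mount the MMC partition read-only; write operations such as write() and mkdir() will be rejected. */
+    int ret = mount("/dev/mmcblk0p0", "/user", "vfat", MS_RDONLY, NULL);
+    if (ret != 0) {
+        return ret;
+    }
+
+    /* Use MS_REMOUNT to make the mounted partition writable again. */
+    ret = mount(NULL, "/user", "vfat", MS_REMOUNT, NULL);
+    if (ret != 0) {
+        return ret;
+    }
+
+    /* Forcibly close all open files and directories in the partition and unmount it.
+       This may cause data loss, so use it with caution. */
+    return umount2("/user", MNT_FORCE);
+}
+```
+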
## Development Example
+
### Example Description
This example implements the following:
-1. Create the **user/test** directory.
-2. Create the **file.txt** file in the **user/test** directory.
-3. Write "Hello OpenHarmony!" at the beginning of the file.
-4. Save the update of the file to the device.
-5. Set the offset to the beginning of the file.
-6. Read the file.
-7. Close the file.
-8. Delete the file.
-9. Delete the directory.
+1. Create the **user/test** directory.
+2. Create the **file.txt** file in the **user/test** directory.
+3. Write **Hello OpenHarmony!** at the beginning of the file.
+4. Save the file to a device.
+5. Set the offset to the start position of the file.
+6. Read the file.
+7. Close the file.
+8. Delete the file.
+9. Delete the directory.
+
### Sample Code
-Prerequisites
+**Prerequisites**
+
-- The MMC device partition is mounted to the **user** directory.
+The MMC device partition is mounted to the **user** directory.
-The sample code is as follows:
+
+The sample code is as follows:
-```
-#include
-#include
-#include "sys/stat.h"
-#include "fcntl.h"
-#include "unistd.h"
+ ```
+ #include <stdio.h>
+ #include <string.h>
+ #include "sys/stat.h"
+ #include "fcntl.h"
+ #include "unistd.h"
-#define LOS_OK 0
-#define LOS_NOK -1
+ #define LOS_OK 0
+ #define LOS_NOK -1
-int FatfsTest(void)
-{
+ int FatfsTest(void)
+ {
int ret;
int fd = -1;
ssize_t len;
@@ -88,14 +102,14 @@ int FatfsTest(void)
char writeBuf[20] = "Hello OpenHarmony!";
char readBuf[20] = {0};
- /* Create the user/test directory.*/
+ /* Create the user/test directory. */
ret = mkdir(dirName, 0777);
if (ret != LOS_OK) {
printf("mkdir failed.\n");
return LOS_NOK;
}
- /* Create the file user/test/file.txt and make it readable and writable.*/
+ /* Create a readable and writable file named file.txt in the user/test/ directory. */
fd = open(fileName, O_RDWR | O_CREAT, 0777);
if (fd < 0) {
printf("open file failed.\n");
@@ -109,21 +123,21 @@ int FatfsTest(void)
return LOS_NOK;
}
- /* Save the update of the file to the device.*/
+ /* Save the file to a storage device. */
ret = fsync(fd);
if (ret != LOS_OK) {
printf("fsync failed.\n");
return LOS_NOK;
}
- /* Move the read/write pointer to the file header. */
+ /* Move the read/write pointer to the beginning of the file. */
off = lseek(fd, 0, SEEK_SET);
if (off != 0) {
printf("lseek failed.\n");
return LOS_NOK;
}
- /* Read the file content, with the same size as readBuf, to readBuf.*/
+    /* Read the file content (up to the size of readBuf) into readBuf. */
len = read(fd, readBuf, sizeof(readBuf));
if (len != strlen(writeBuf)) {
printf("read file failed.\n");
@@ -138,14 +152,14 @@ int FatfsTest(void)
return LOS_NOK;
}
- /*Delete the file user/test/file.txt.*/
+ /* Delete the file file.txt from the user/test directory. */
ret = unlink(fileName);
if (ret != LOS_OK) {
printf("unlink failed.\n");
return LOS_NOK;
}
- /*Delete the user/test directory.*/
+ /* Delete the user/test directory. */
ret = rmdir(dirName);
if (ret != LOS_OK) {
printf("rmdir failed.\n");
@@ -153,14 +167,15 @@ int FatfsTest(void)
}
return LOS_OK;
-}
-```
+ }
+ ```
+
### Verification
The development is successful if the return result is as follows:
+
```
Hello OpenHarmony!
```
-
diff --git a/en/device-dev/kernel/kernel-mini-extend-file-lit.md b/en/device-dev/kernel/kernel-mini-extend-file-lit.md
index 7d51b76ab3167951eeaa72d839481a4630ec2390..599c94b5a12ea632374a03fcf0ca3e03afea7d8b 100644
--- a/en/device-dev/kernel/kernel-mini-extend-file-lit.md
+++ b/en/device-dev/kernel/kernel-mini-extend-file-lit.md
@@ -1,15 +1,17 @@
# LittleFS
+
## Basic Concepts
-LittleFS is a small file system designed for flash. By combining the log-structured file system and the copy-on-write \(COW\) file system, LittleFS stores metadata in log structure and data in the COW structure. This special storage empowers LittleFS high power-loss resilience. LittleFS uses the statistical wear leveling algorithm when allocating COW data blocks, effectively prolonging the service life of flash devices. LittleFS is designed for small-sized devices with limited resources, such as ROM and RAM. All RAM resources are allocated through a buffer with the fixed size \(configurable\). That is, the RAM usage does not grow with the file system.
+LittleFS is a small file system designed for flash memory. By combining a log-structured file system with a copy-on-write (COW) file system, LittleFS stores metadata in log structures and data in COW structures. This storage scheme gives LittleFS high resilience to power loss. LittleFS uses a statistical wear-leveling algorithm when allocating COW data blocks, effectively prolonging the service life of flash devices. LittleFS is designed for small devices with limited resources, such as ROM and RAM. All RAM resources are allocated from a buffer of fixed (configurable) size, so the RAM usage does not grow with the file system size.
LittleFS is a good choice when you look for a flash file system that is power-cut resilient and has wear leveling support on a small device with limited resources.
## Development Guidelines
-When porting LittleFS to a new hardware device, you need to declare **lfs\_config**:
+When porting LittleFS to a new hardware device, you need to declare **lfs_config**:
+
```
const struct lfs_config cfg = {
// block device operations
@@ -29,20 +31,21 @@ const struct lfs_config cfg = {
};
```
-**.read**, **.prog**, **.erase**, and **.sync** correspond to the read, write, erase, and synchronization APIs at the bottom layer of the hardware platform, respectively.
+**.read**, **.prog**, **.erase**, and **.sync** correspond to the read, write, erase, and synchronization APIs at the bottom layer of the hardware platform, respectively.
-**read\_size** indicates the number of bytes read each time. You can set it to a value greater than the physical read unit to improve performance. This value determines the size of the read cache. However, if the value is too large, more memory is consumed.
+**read_size** indicates the number of bytes read each time. You can set it to a value greater than the physical read unit to improve performance. This value determines the size of the read cache. However, if the value is too large, more memory is consumed.
-**prog\_size** indicates the number of bytes written each time. You can set it to a value greater than the physical write unit to improve performance. This value determines the size of the write cache and must be an integral multiple of **read\_size**. However, if the value is too large, more memory is consumed.
+**prog_size** indicates the number of bytes written each time. You can set it to a value greater than the physical write unit to improve performance. This value determines the size of the write cache and must be an integral multiple of **read_size**. However, if the value is too large, more memory is consumed.
-**block\_size**: indicates the number of bytes in each erase block. The value can be greater than that of the physical erase unit. However, a smaller value is recommended because each file occupies at least one block. The value must be an integral multiple of **prog\_size**.
+**block_size** indicates the number of bytes in each erase block. The value can be greater than that of the physical erase unit. However, a smaller value is recommended because each file occupies at least one block. The value must be an integral multiple of **prog_size**.
-**block\_count** indicates the number of blocks that can be erased, which depends on the capacity of the block device and the size of the block to be erased \(**block\_size**\).
+**block_count** indicates the number of blocks that can be erased, which depends on the capacity of the block device and the size of the block to be erased (**block_size**).
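+
+For illustration, a configuration that satisfies these constraints might look as follows. The values and the block device callback names are placeholders for a hypothetical small flash chip, not recommendations: **prog_size** is an integral multiple of **read_size**, and **block_size** is an integral multiple of **prog_size**.
+
+```
+#include "lfs.h"
+
+// Platform-specific block device callbacks (placeholder names, assumed to be implemented elsewhere).
+extern int user_provided_block_device_read(const struct lfs_config *c, lfs_block_t block,
+                                           lfs_off_t off, void *buffer, lfs_size_t size);
+extern int user_provided_block_device_prog(const struct lfs_config *c, lfs_block_t block,
+                                           lfs_off_t off, const void *buffer, lfs_size_t size);
+extern int user_provided_block_device_erase(const struct lfs_config *c, lfs_block_t block);
+extern int user_provided_block_device_sync(const struct lfs_config *c);
+
+// Illustrative configuration; the values are examples only.
+const struct lfs_config cfg = {
+    // block device operations
+    .read  = user_provided_block_device_read,
+    .prog  = user_provided_block_device_prog,
+    .erase = user_provided_block_device_erase,
+    .sync  = user_provided_block_device_sync,
+
+    // block device configuration
+    .read_size = 16,      // bytes per read; determines the read cache size
+    .prog_size = 16,      // bytes per write; an integral multiple of read_size
+    .block_size = 4096,   // bytes per erase block; an integral multiple of prog_size
+    .block_count = 1024,  // 1024 blocks x 4096 bytes = 4 MiB of managed storage
+    .cache_size = 16,
+    .lookahead_size = 16,
+    .block_cycles = 500,  // erase cycles before wear leveling relocates a block
+};
+```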
-## Sample Code
-The sample code is as follows:
+## Sample Code
+ The sample code is as follows:
+
```
#include "lfs.h"
#include "stdio.h"
@@ -89,11 +92,12 @@ int main(void) {
}
```
-### Verification
+
+ **Verification**
The development is successful if the return result is as follows:
+
```
Say hello 1 times.
```
-
diff --git a/en/device-dev/kernel/kernel-mini-memory-perf.md b/en/device-dev/kernel/kernel-mini-memory-perf.md
index 3da1dd5dc276a844ed393af9b7d73bed23a4b767..95d097792307376342a55eefcfde96f7f921952b 100644
--- a/en/device-dev/kernel/kernel-mini-memory-perf.md
+++ b/en/device-dev/kernel/kernel-mini-memory-perf.md
@@ -3,7 +3,8 @@
## Basic Concepts
-perf is a performance analysis tool. It uses the performance monitoring unit \(PMU\) to count sampling events and collect context information and provides hot spot distribution and hot paths.
+perf is a performance analysis tool. It uses the performance monitoring unit (PMU) to count sampling events and collect context information and provides hot spot distribution and hot paths.
+
## Working Principles
@@ -13,227 +14,142 @@ perf provides two working modes: counting mode and sampling mode.
In counting mode, perf collects only the number of event occurrences and duration. In sampling mode, perf also collects context data and stores the data in a circular buffer. The IDE then analyzes the data and provides information about hotspot functions and paths.
+
## Available APIs
+
### Kernel Mode
-The perf module of the OpenHarmony LiteOS-A kernel provides the following APIs. For more details about the APIs, see the [API](https://gitee.com/openharmony/kernel_liteos_a/blob/master/kernel/include/los_perf.h) reference.
-
-**Table 1** perf module APIs
-
-
-Function
- |
-API
- |
-Description
- |
-
-
-Starting or stopping perf sampling
- |
-LOS_PerfStart
- |
-Starts sampling.
- |
-
-LOS_PerfStop
- |
-Stops sampling.
- |
-
-Configuring perf sampling events
- |
-LOS_PerfConfig
- |
-Sets the type and period of a sampling event.
- |
-
-Reading sampling data
- |
-LOS_PerfDataRead
- |
-Reads the sampling data to a specified address.
- |
-
-Registering a hook for the sampling data buffer
- |
-LOS_PerfNotifyHookReg
- |
-Registers the hook to be called when the buffer waterline is reached.
- |
-
-LOS_PerfFlushHookReg
- |
-Registers the hook for flushing the cache in the buffer.
- |
-
-
-
-
-1. The structure of the perf sampling event is **PerfConfigAttr**. For details, see **kernel\\include\\los\_perf.h**.
-2. The sampling data buffer is a circular buffer, and only the region that has been read in the buffer can be overwritten.
-3. The buffer has limited space. You can register a hook to provide a buffer overflow notification or perform buffer read operation when the buffer waterline is reached. The default buffer waterline is 1/2 of the buffer size. The sample code is as follows:
-
- ```
- VOID Example_PerfNotifyHook(VOID)
- {
- CHAR buf[LOSCFG_PERF_BUFFER_SIZE] = {0};
- UINT32 len;
- PRINT_DEBUG("perf buffer reach the waterline!\n");
- len = LOS_PerfDataRead(buf, LOSCFG_PERF_BUFFER_SIZE);
- OsPrintBuff(buf, len); /* print data */
- }
- LOS_PerfNotifyHookReg(Example_PerfNotifyHook);
- ```
+The perf module of the OpenHarmony LiteOS-A kernel provides the following APIs. For more details about the APIs, see the [API reference](https://gitee.com/openharmony/kernel_liteos_a/blob/master/kernel/include/los_perf.h).
-4. If the buffer sampled by perf involves caches across CPUs, you can register a hook for flushing the cache to ensure cache consistency. The sample code is as follows:
+ **Table 1** APIs of the perf module
- ```
- VOID Example_PerfFlushHook(VOID *addr, UINT32 size)
- {
- OsCacheFlush(addr, size); /* platform interface */
- }
- LOS_PerfNotifyHookReg(Example_PerfFlushHook);
- ```
+| API| Description|
+| -------- | -------- |
+| LOS_PerfStart| Starts sampling.|
+| LOS_PerfStop| Stops sampling.|
+| LOS_PerfConfig| Sets the event type and sampling interval.|
+| LOS_PerfDataRead| Reads the sampling data.|
+| LOS_PerfNotifyHookReg| Registers the hook to be called when the buffer waterline is reached.|
+| LOS_PerfFlushHookReg| Registers the hook for flushing the cache in the buffer.|
+
+- The structure of the perf sampling event is **PerfConfigAttr**. For details, see **kernel\include\los_perf.h**.
+
+- The sampling data buffer is a circular buffer, and only the region that has been read in the buffer can be overwritten.
- The API for flushing the cache is configured based on the platform.
+- The buffer has limited space. You can register a hook to receive a buffer overflow notification or to perform a buffer read operation when the buffer waterline is reached. The default buffer waterline is 1/2 of the buffer size.
+
+ Example:
+
+ ```
+ VOID Example_PerfNotifyHook(VOID)
+ {
+ CHAR buf[LOSCFG_PERF_BUFFER_SIZE] = {0};
+ UINT32 len;
+ PRINT_DEBUG("perf buffer reach the waterline!\n");
+ len = LOS_PerfDataRead(buf, LOSCFG_PERF_BUFFER_SIZE);
+ OsPrintBuff(buf, len); /* print data */
+ }
+ LOS_PerfNotifyHookReg(Example_PerfNotifyHook);
+ ```
+
+- If the buffer sampled by perf involves caches across CPUs, you can register a hook for flushing the cache to ensure cache consistency.
+
+ Example:
+
+ ```
+ VOID Example_PerfFlushHook(VOID *addr, UINT32 size)
+ {
+ OsCacheFlush(addr, size); /* platform interface */
+ }
+ LOS_PerfNotifyHookReg(Example_PerfFlushHook);
+ ```
+
+ The API for flushing the cache is configured based on the platform.
### User Mode
-The perf character device is located in **/dev/perf**. You can read, write, and control the user-mode perf by running the following commands on the device node:
-- **read**: reads perf data in user mode.
-- **write**: writes user-mode sampling events.
-- **ioctl**: controls the user-mode perf, which includes the following:
+The perf character device is located in **/dev/perf**. You can read, write, and control the user-mode perf by performing the following operations on the device node:
+
- ```
- #define PERF_IOC_MAGIC 'T'
- #define PERF_START _IO(PERF_IOC_MAGIC, 1)
- #define PERF_STOP _IO(PERF_IOC_MAGIC, 2)
- ```
+- **read**: reads perf data in user mode.
- The operations correspond to **LOS\_PerfStart** and **LOS\_PerfStop**.
+- **write**: writes user-mode sampling events.
+- **ioctl**: controls the user-mode perf, which includes the following:
+
+ ```
+ #define PERF_IOC_MAGIC 'T'
+ #define PERF_START _IO(PERF_IOC_MAGIC, 1)
+ #define PERF_STOP _IO(PERF_IOC_MAGIC, 2)
+ ```
-For more details, see [User-mode Development Example](#user-mode-development-example).
+ The operations correspond to **LOS_PerfStart** and **LOS_PerfStop**.
-## Development Guidelines
-### Kernel-mode Development Process
+For details, see [User-Mode Development Example](#user-mode-development-example).
+
+
+## How to Develop
+
+
+### Kernel-Mode Development Process
The typical process of enabling perf is as follows:
-1. Configure the macros related to the perf module.
-
- Configure the perf control macro **LOSCFG\_KERNEL\_PERF**, which is disabled by default. In the **kernel/liteos\_a** directory, run the **make update\_config** command, choose **Kernel**, and select **Enable Perf Feature**.
-
-
- Macro
- |
- menuconfig Option
- |
- Description
- |
- Value
- |
-
-
- LOSCFG_KERNEL_PERF
- |
- Enable Perf Feature
- |
- Whether to enable perf.
- |
- YES/NO
- |
-
- LOSCFG_PERF_CALC_TIME_BY_TICK
- |
- Time-consuming Calc Methods->By Tick
- |
- Whether to use tick as the perf timing unit.
- |
- YES/NO
- |
-
- LOSCFG_PERF_CALC_TIME_BY_CYCLE
- |
- Time-consuming Calc Methods->By Cpu Cycle
- |
- Whether to use cycle as the perf timing unit.
- |
- YES/NO
- |
-
- LOSCFG_PERF_BUFFER_SIZE
- |
- Perf Sampling Buffer Size
- |
- Size of the buffer used for perf sampling.
- |
- INT
- |
-
- LOSCFG_PERF_HW_PMU
- |
- Enable Hardware Pmu Events for Sampling
- |
- Whether to enable hardware PMU events. The target platform must support the hardware PMU.
- |
- YES/NO
- |
-
- LOSCFG_PERF_TIMED_PMU
- |
- Enable Hrtimer Period Events for Sampling
- |
- Whether to enable high-precision periodical events. The target platform must support the high precision event timer (HPET).
- |
- YES/NO
- |
-
- LOSCFG_PERF_SW_PMU
- |
- Enable Software Events for Sampling
- |
- Whether to enable software events. LOSCFG_KERNEL_HOOK must also be enabled.
- |
- YES/NO
- |
-
-
-
-
-2. Call **LOS\_PerfConfig** to configure the events to be sampled.
-
- perf provides two working modes and three types of events.
-
- Two modes: counting mode \(counts only the number of event occurrences\) and sampling mode \(collects context information such as task IDs, PC, and backtrace\)
-
- Three types of events: CPU hardware events \(such as cycle, branch, icache, and dcache\), high-precision periodical events \(such as CPU clock\), and OS software events \(such as task switch, mux pend, and IRQ\)
-
-3. Call **LOS\_PerfStart\(UINT32 sectionId\)** at the start of the code to be sampled. The input parameter **sectionId** specifies different sampling session IDs.
-4. Call **LOS\_PerfStop** at the end of the code to be sampled.
-5. Call **LOS\_PerfDataRead** to read the sampling data and use IDE to analyze the collected data.
-
-## Kernel-mode Development Example
+1. Configure the macros related to the perf module.
+
+ Configure the perf control macro **LOSCFG_KERNEL_PERF**, which is disabled by default. In the **kernel/liteos_a** directory, run the **make update_config** command, choose **Kernel**, and select **Enable Perf Feature**.
+
+ | Item| menuconfig Option| Description| Value|
+ | -------- | -------- | -------- | -------- |
+ | LOSCFG_KERNEL_PERF | Enable Perf Feature | Whether to enable perf.| YES/NO |
+ | LOSCFG_PERF_CALC_TIME_BY_TICK | Time-consuming Calc Methods->By Tick | Whether to use tick as the perf timing unit.| YES/NO |
+ | LOSCFG_PERF_CALC_TIME_BY_CYCLE | Time-consuming Calc Methods->By Cpu Cycle | Whether to use cycle as the perf timing unit.| YES/NO |
+ | LOSCFG_PERF_BUFFER_SIZE | Perf Sampling Buffer Size | Size of the buffer used for perf sampling.| INT |
+ | LOSCFG_PERF_HW_PMU | Enable Hardware Pmu Events for Sampling | Whether to enable hardware PMU events. The target platform must support the hardware PMU.| YES/NO |
+ | LOSCFG_PERF_TIMED_PMU | Enable Hrtimer Period Events for Sampling | Whether to enable high-precision periodical events. The target platform must support the high precision event timer (HPET).| YES/NO |
+ | LOSCFG_PERF_SW_PMU | Enable Software Events for Sampling | Whether to enable software events. **LOSCFG_KERNEL_HOOK** must also be enabled.| YES/NO |
+
+2. Call **LOS_PerfConfig** to configure the events to be sampled.
+
+ perf provides two working modes and three types of events.
+
+ Working modes: counting mode (counts only the number of event occurrences) and sampling mode (collects context information such as task IDs, PC, and backtrace)
+
+ Events: CPU hardware events (such as cycle, branch, icache, and dcache), high-precision periodical events (such as CPU clock), and OS software events (such as task switch, mux pend, and IRQ)
+
+3. Call **LOS_PerfStart(UINT32 sectionId)** at the start of the code to be sampled. The input parameter **sectionId** specifies different sampling session IDs.
+
+4. Call **LOS_PerfStop** at the end of the code to be sampled.
+
+5. Call **LOS_PerfDataRead** to read the sampling data and use IDE to analyze the collected data.
+
+
+#### Kernel-Mode Development Example
This example implements the following:
-1. Create a perf task.
-2. Configure sampling events.
-3. Start perf.
-4. Execute algorithms for statistics.
-5. Stop perf.
-6. Export the result.
+1. Create a perf task.
-## Kernel-mode Sample Code
+2. Configure sampling events.
-Prerequisites: The perf module configuration is complete in **menuconfig**.
+3. Start perf.
-The code is as follows:
+4. Execute algorithms for statistics.
+
+5. Stop perf.
+
+6. Export the result.
+
+
+#### Kernel-Mode Sample Code
+
+Prerequisites: The perf module configuration is complete in **menuconfig**.
+
+The sample code is as follows:
```
#include "los_perf.h"
@@ -299,10 +215,10 @@ STATIC VOID perfTestHwEvent(VOID)
UINT32 Example_Perf_test(VOID){
UINT32 ret;
TSK_INIT_PARAM_S perfTestTask;
- /* Create a perf task.*/
+ /* Create a perf task. */
memset(&perfTestTask, 0, sizeof(TSK_INIT_PARAM_S));
perfTestTask.pfnTaskEntry = (TSK_ENTRY_FUNC)perfTestHwEvent;
- perfTestTask.pcName = "TestPerfTsk"; /* Task name.*/
+ perfTestTask.pcName = "TestPerfTsk"; /* Test task name. */
perfTestTask.uwStackSize = 0x800;
perfTestTask.usTaskPrio = 5;
perfTestTask.uwResved = LOS_TASK_STATUS_DETACHED;
@@ -316,9 +232,10 @@ UINT32 Example_Perf_test(VOID){
LOS_MODULE_INIT(perfTestHwEvent, LOS_INIT_LEVEL_KMOD_EXTENDED);
```
-### Kernel-mode Verification
-The output is as follows:
+#### Kernel-Mode Verification
+
+ The output is as follows:
```
--------count mode----------
@@ -330,48 +247,50 @@ num: 00 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
hex: 00 ef ef ef 00 00 00 00 14 00 00 00 60 00 00 00 00 00 00 00 70 88 36 40 08 00 00 00 6b 65 72 6e 65 6c 00 00 01 00 00 00 cc 55 30 40 08 00 00 00 6b 65 72 6e 65 6c 00 00
```
-- For the counting mode, the following information is displayed after perf is stopped:
-
- Event name \(cycles\), event type \(0xff\), and number of event occurrences \(5466989440\)
+- For the counting mode, the following information is displayed after perf is stopped:
+ Event name (cycles), event type (0xff), and number of event occurrences (5466989440)
- For hardware PMU events, the displayed event type is the hardware event ID, not the abstract type defined in **enum PmuHWId**.
+ For hardware PMU events, the displayed event type is the hardware event ID, not the abstract type defined in **enum PmuHWId**.
-- For the sampling mode, the address and length of the sampled data will be displayed after perf is stopped:
+- For the sampling mode, the address and length of the sampled data will be displayed after perf is stopped:
+ dump section data, addr: (0x8000000) length: (0x5000)
- dump section data, addr: \(0x8000000\) length: \(0x5000\)
+ You can export the data using the JTAG interface and then use the IDE offline tool to analyze the data.
- You can export the data using the JTAG interface and then use the IDE offline tool to analyze the data.
+ You can also call **LOS_PerfDataRead** to read data to a specified address for further analysis. In the example, **OsPrintBuff** is a test API, which prints the sampled data by byte. **num** indicates the sequence number of the byte, and **hex** indicates the value in the byte.
- You can also call **LOS\_PerfDataRead** to read data to a specified address for further analysis. In the example, **OsPrintBuff** is a test API, which prints the sampled data by byte. **num** indicates the sequence number of the byte, and **hex** indicates the value in the byte.
+### User-Mode Development Process
-### User-mode Development Process
+Choose **Driver** > **Enable PERF DRIVER** in **menuconfig** to enable the perf driver. This option is available in **Driver** only after **Enable Perf Feature** is selected in the kernel.
-Choose **Driver** \> **Enable PERF DRIVER** in **menuconfig** to enable the perf driver. This option is available in **Driver** only after **Enable Perf Feature** is selected in the kernel.
+1. Open the **/dev/perf** file and perform read, write, and ioctl operations.
-1. Open the **/dev/perf** file and perform read, write, and ioctl operations.
-2. Run the **perf** commands in user mode in the **/bin** directory. After running **cd bin**, you can use the following commands:
- - **./perf start \[_id_\]**: starts perf sampling. _id_ is optional and is **0** by default.
- - **./perf stop**: stops perf sampling.
- - **./perf read <_nBytes_\>**: reads n-byte data from the sampling buffer and displays the data.
- - **./perf list**: lists the events supported by **-e**.
- - **./perf stat/record \[_option_\] <_command_\>**: sets counting or sampling parameters.
- - The \[_option_\] can be any of the following:
- - **-e**: sets sampling events. Events of the same type listed in **./perf list** can be used.
- - **-p**: sets the event sampling interval.
- - **-o**: specifies the path of the file for saving the perf sampling data.
- - **-t**: specifies the task IDs for data collection. Only the contexts of the specified tasks are collected. If this parameter is not specified, all tasks are collected by default.
- - **-s**: specifies the context type for sampling. For details, see **PerfSampleType** defined in **los\_perf.h**.
- - **-P**: specifies the process IDs for data collection. Only the contexts of the specified processes are collected. If this parameter is not specified, all processes are collected by default.
- - **-d**: specifies whether to divide the frequency \(the value is incremented by 1 each time an event occurs 64 times\). This option is valid only for hardware cycle events.
-
- - _command_ specifies the program to be checked by perf.
+2. Run the **perf** commands in user mode in the **/bin** directory.
+
+ After running **cd bin**, you can use the following commands:
+
+ - **./perf start [*id*]**: starts perf sampling. *id* is optional and is **0** by default.
+ - **./perf stop**: stops perf sampling.
+ - **./perf read <*nBytes*>**: reads n-byte data from the sampling buffer and displays the data.
+ - **./perf list**: lists the events supported by **-e**.
+ - **./perf stat/record [*option*] <*command*>**: sets counting or sampling parameters.
+ - The [*option*] can be any of the following:
+      - **-e**: sets sampling events. Events of the same type listed in **./perf list** can be used.
+      - **-p**: sets the event sampling interval.
+      - **-o**: specifies the path of the file for saving the perf sampling data.
+      - **-t**: specifies the task IDs for data collection. Only the contexts of the specified tasks are collected. If this parameter is not specified, all tasks are collected by default.
+      - **-s**: specifies the context type for sampling. For details, see **PerfSampleType** defined in **los_perf.h**.
+      - **-P**: specifies the process IDs for data collection. Only the contexts of the specified processes are collected. If this parameter is not specified, all processes are collected by default.
+      - **-d**: specifies whether to divide the frequency (the value is incremented by 1 each time an event occurs 64 times). This option is valid only for hardware cycle events.
+ - *command* specifies the program to be checked by perf.
+Examples:
+Run the **./perf list** command to display available events.
-Examples:
+The output is as follows:
-Run the **./perf list** command to display available events. The output is as follows:
```
cycles [Hardware event]
@@ -389,7 +308,10 @@ mem-alloc [Software event]
mux-pend [Software event]
```
-Run **./perf stat -e cycles os\_dump**. The output is as follows:
+Run **./perf stat -e cycles os_dump**.
+
+The output is as follows:
+
```
type: 0
@@ -406,7 +328,10 @@ time used: 0.058000(s)
[cycles] eventType: 0xff [core 1]: 13583830
```
-Run **./perf record -e cycles os\_dump**. The output is as follows:
+Run **./perf record -e cycles os_dump**.
+
+The output is as follows:
+
```
type: 0
@@ -423,22 +348,28 @@ time used: 0.059000(s)
save perf data success at /storage/data/perf.data
```
-> **NOTE**
->After running the **./perf stat/record** command, you can run the **./perf start** and **./perf stop** commands multiple times. The sampling event configuration is as per the parameters set in the latest **./perfstat/record** command.
+>  **NOTE**
+> After running the **./perf stat/record** command, you can run the **./perf start** and **./perf stop** commands multiple times. The sampling event configuration is based on the parameters set in the latest **./perf stat/record** command.
+
-### User-mode Development Example
+#### User-Mode Development Example
This example implements the following:
-1. Open the perf character device.
-2. Write the perf events.
-3. Start perf.
-4. Stop perf.
-5. Read the perf sampling data.
+1. Open the perf character device.
+
+2. Write the perf events.
-### User-Mode Sample Code
+3. Start perf.
-The code is as follows:
+4. Stop perf.
+
+5. Read the perf sampling data.
+
+
+#### User-Mode Sample Code
+
+ The code is as follows:
```
#include "fcntl.h"
@@ -506,13 +437,13 @@ int main(int argc, char **argv)
}
```
-### User-mode Verification
-The output is as follows:
+#### User-Mode Verification
+
+ The output is as follows:
```
[EMG] dump section data, addr: 0x8000000 length: 0x800000
num: 00 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 ...
hex: 00 ef ef ef 00 00 00 00 14 00 00 00 60 00 00 00 00 00 00 00 70 88 36 40 08 00 00 00 6b 65 72 6e 65 6c 00 00 01 00 00 00 cc 55 30 40 08 00 00 00 6b 65 72 6e 65 6c 00 00
```
-
diff --git a/en/device-dev/kernel/kernel-mini-overview.md b/en/device-dev/kernel/kernel-mini-overview.md
index 7b86f1a82028e03366eb3441d6257eb7791764c3..a1b7902c802ad44bac6477e73757955d4611aecf 100644
--- a/en/device-dev/kernel/kernel-mini-overview.md
+++ b/en/device-dev/kernel/kernel-mini-overview.md
@@ -1,59 +1,35 @@
# Kernel Overview
+
## Overview
-The OpenHarmony LiteOS-M kernel is a lightweight operating system \(OS\) kernel designed for the IoT field. It features small size, low power consumption, and high performance. The LiteOS-M kernel has simple code structure, including the minimum function set, kernel abstraction layer \(KAL\), optional components, and project directory. It supports the Hardware Driver Foundation \(HDF\), which provides unified driver standards and access mode for device vendors to simplify porting of drivers and allow one-time development for multi-device deployment.
+The OpenHarmony LiteOS-M kernel is a lightweight operating system (OS) kernel designed for the IoT field. It features small size, low power consumption, and high performance. The LiteOS-M kernel has simple code structure, including the minimum function set, kernel abstraction layer (KAL), optional components, and project directory. It supports the Hardware Driver Foundation (HDF), which provides unified driver standards and access mode for device vendors to simplify porting of drivers and allow one-time development for multi-device deployment.
+
+The OpenHarmony LiteOS-M kernel architecture consists of the hardware layer and hardware-irrelevant layers, as shown in the figure below. The hardware layer is classified based on the compiler toolchain and chip architecture, and provides a unified Hardware Abstraction Layer (HAL) interface to improve hardware adaptation and facilitate the expansion of various types of AIoT hardware and compilation toolchains. The other modules are irrelevant to the hardware. The basic kernel module provides basic kernel capabilities. The extended modules provide capabilities of components, such as the network and file systems, as well as exception handling and debug tools. The KAL provides unified standard APIs.
-The OpenHarmony LiteOS-M kernel architecture consists of the hardware layer and hardware-irrelevant layers, as shown in the figure below. The hardware layer is classified based on the compiler toolchain and chip architecture, and provides a unified Hardware Abstraction Layer \(HAL\) interface to improve hardware adaptation and facilitate the expansion of various types of AIoT hardware and compilation toolchains. The other modules are irrelevant to the hardware. The basic kernel module provides basic kernel capabilities. The extended modules provide capabilities of components, such as the network and file systems, as well as exception handling and debug tools. The KAL provides unified standard APIs.
+ **Figure 1** Kernel architecture
-**Figure 1** Kernel architecture
-
+ 
-### CPU Architecture Support
+
+## CPU Architecture Support
The CPU architecture includes two layers: general architecture definition layer and specific architecture definition layer. The former provides interfaces supported and implemented by all architectures. The latter is specific to an architecture. For a new architecture to be added, the general architecture definition layer must be implemented first and the architecture-specific functions can be implemented at the specific architecture definition layer.
-**Table 1** CPU architecture rules
-
-
-Rule
- |
-General Architecture Definition Layer
- |
-Specific Architecture Definition Layer
- |
-
-
-Header file location
- |
-arch/include
- |
-arch/<arch>/<arch>/<toolchain>/
- |
-
-Header file name
- |
-los_<function>.h
- |
-los_arch_<function>.h
- |
-
-Function name
- |
-Halxxxx
- |
-Halxxxx
- |
-
-
-
-
-LiteOS-M supports mainstream architectures, such as ARM Cortex-M3, ARM Cortex-M4, ARM Cortex-M7, ARM Cortex-M33, and RISC-V. If you need to expand the CPU architecture, see [Chip Architecture Adaptation](../porting/porting-chip-kernel-overview.md#section137431650339).
-
-### Working Principles
-
-Configure the system clock and number of ticks per second in the **target\_config.h** file of the development board. Configure the task, memory, inter-process communication \(IPC\), and exception handling modules based on service requirements. When the system boots, the modules are initialized based on the configuration. The kernel startup process includes peripheral initialization, system clock configuration, kernel initialization, and OS boot, as shown in the figure below.
-
-**Figure 2** Kernel startup process
-
+ **Table 1** CPU architecture rules
+
+| Rule| General Architecture Layer| Specific Architecture Layer|
+| -------- | -------- | -------- |
+| Header file location| arch/include | arch/<arch>/<arch>/<toolchain>/ |
+| Header file name| los_<function>.h | los_arch_<function>.h |
+| Function name| Halxxxx | Halxxxx |
+
+LiteOS-M supports mainstream architectures, such as ARM Cortex-M3, ARM Cortex-M4, ARM Cortex-M7, ARM Cortex-M33, and RISC-V. If you need to expand the CPU architecture, see [Chip Architecture Adaptation](../porting/porting-chip-kernel-overview.md).
+
+
+## Working Principles
+
+In the **target_config.h** file of the development board, configure the system clock and number of ticks per second, and configure the task, memory, inter-process communication (IPC), and exception handling modules based on service requirements. When the system boots, the modules are initialized based on the configuration. The kernel startup process includes peripheral initialization, system clock configuration, kernel initialization, and OS boot, as shown in the figure below.
+ **Figure 2** Kernel startup process
+ 
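+
+For reference, the system clock and tick rate are typically configured in **target_config.h** with macros similar to the following. The values shown are illustrative only and depend on the target board.
+
+```
+/* target_config.h (illustrative values; the actual values depend on the board) */
+#define OS_SYS_CLOCK                      80000000UL /* Main system clock frequency, in Hz */
+#define LOSCFG_BASE_CORE_TICK_PER_SECOND  1000       /* Number of ticks per second (1 ms per tick) */
+```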
diff --git a/en/device-dev/kernel/kernel-small-apx-dll.md b/en/device-dev/kernel/kernel-small-apx-dll.md
index 2e7bb0d0fadcf92e14ea957b89728e240542c18a..e33e8e55d65e6a5e39fbb33e154557e2751148e9 100644
--- a/en/device-dev/kernel/kernel-small-apx-dll.md
+++ b/en/device-dev/kernel/kernel-small-apx-dll.md
@@ -1,178 +1,69 @@
# Doubly Linked List
-
## Basic Concepts
-A doubly linked list is a linked data structure that consists of a set of sequentially linked records called nodes. Each node contains a pointer to the previous node and a pointer to the next node in the sequence of nodes. The pointer head is unique. A doubly linked list allows access from a list node to its next node and also the previous node on the list. This data structure facilitates data search, especially traversal of a large amount of data. The symmetry of the doubly linked list also makes operations, such as insertion and deletion, easy. However, pay attention to the pointer direction when performing operations.
+A doubly linked list (DLL) is a linked data structure that consists of a set of sequentially linked records called nodes. Each node contains a pointer to the previous node and a pointer to the next node in the sequence of nodes. The pointer head is unique. A DLL allows access from a list node to its next node and also the previous node on the list. This data structure facilitates data search, especially traversal of a large amount of data. The symmetry of the DLL also makes operations, such as insertion and deletion, easy. However, pay attention to the pointer direction when performing operations.
+
## Available APIs
-The following table describes APIs available for the doubly linked list. For more details about the APIs, see the API reference.
-
-
-Function
- |
-API
- |
-Description
- |
-
-Initializing a linked list
- |
-LOS_ListInit
- |
-Initializes a specified node as a doubly linked list node.
- |
-
-LOS_DL_LIST_HEAD
- |
-Defines a node and initializes it as a doubly linked list node.
- |
-
-Adding a node
- |
-LOS_ListAdd
- |
-Inserts the specified node to the head of a doubly linked list.
- |
-
-LOS_ListHeadInsert
- |
-Inserts the specified node to the head of a doubly linked list. It is the same as LOS_ListAdd.
- |
-
-LOS_ListTailInsert
- |
-Inserts the specified node to the end of a doubly linked list.
- |
-
-Adding a linked list
- |
-LOS_ListAddList
- |
-Inserts the head of a specified linked list into the head of a doubly linked list.
- |
-
-LOS_ListHeadInsertList
- |
-Inserts the head of a specified linked list into the head of a doubly linked list. It is the same as LOS_ListAddList.
- |
-
-LOS_ListTailInsertList
- |
-Inserts the end of a specified linked list into the head of a doubly linked list.
- |
-
-Deleting a node
- |
-LOS_ListDelete
- |
-Deletes the specified node from a doubly linked list.
- |
-
-LOS_ListDelInit
- |
-Deletes the specified node from the linked list and uses the node to initialize the linked list.
- |
-
-Determining a doubly linked list
- |
-LOS_ListEmpty
- |
-Checks whether a linked list is empty.
- |
-
-LOS_DL_LIST_IS_END
- |
-Checks whether the specified linked list node is at the end of the linked list.
- |
-
-LOS_DL_LIST_IS_ON_QUEUE
- |
-Checks whether the linked list node is in a doubly linked list.
- |
-
-Obtaining structure information
- |
-LOS_OFF_SET_OF
- |
-Obtains the offset of a member in a specified structure relative to the start address of the structure.
- |
-
-LOS_DL_LIST_ENTRY
- |
-Obtains the address of the structure that contains the first node in the linked list. The first input parameter of the API indicates the head node in the list, the second input parameter indicates the name of the structure to be obtained, and the third input parameter indicates the name of the linked list in the structure.
- |
-
-LOS_ListPeekHeadType
- |
-Obtains the address of the structure that contains the first node in the linked list. The first input parameter of the API indicates the head node in the list, the second input parameter indicates the name of the structure to be obtained, and the third input parameter indicates the name of the linked list in the structure. Return Null if the linked list is empty.
- |
-
-LOS_ListRemoveHeadType
- |
-Obtains the address of the structure that contains the first node in the linked list, and deletes the first node from the list. The first input parameter of the API indicates the head node in the list, the second input parameter indicates the name of the structure to be obtained, and the third input parameter indicates the name of the linked list in the structure. Return Null if the linked list is empty.
- |
-
-LOS_ListNextType
- |
-Obtains the address of the structure that contains the next node of the specified node in the linked list. The first input parameter of the API indicates the head node in the list, the second input parameter indicates the specified node, the third parameter indicates the name of the structure to be obtained, and the fourth input parameter indicates the name of the linked list in the structure. If the next node of the linked list node is the head node and is empty, NULL is returned.
- |
-
-Traversing a doubly linked list
- |
-LOS_DL_LIST_FOR_EACH
- |
-Traverses a doubly linked list.
- |
-
-LOS_DL_LIST_FOR_EACH_SAFE
- |
-Traverses a doubly linked list, and stores the next node of the current node for security verification.
- |
-
-Traversing the structure that contains the doubly linked list
- |
-LOS_DL_LIST_FOR_EACH_ENTRY
- |
-Traverses the specified doubly linked list and obtains the address of the structure that contains the linked list node.
- |
-
-LOS_DL_LIST_FOR_EACH_ENTRY_SAFE
- |
-Traverses the specified doubly linked list, obtains the structure address of the node that contains the linked list, and stores the structure address that contains the next node of the current node.
- |
-
-
-
-
-## How to Develop
-
-The typical development process of the doubly linked list is as follows:
-
-1. Call **LOS\_ListInit/LOS\_DL\_LIST\_HEAD** to initialize a doubly linked list.
-2. Call **LOS\_ListAdd** to insert a node to the list.
-3. Call **LOS\_ListTailInsert** to insert a node to the end of the list.
-4. Call **LOS\_ListDelete** to delete the specified node.
-5. Call **LOS\_ListEmpty** to check whether a linked list is empty.
-6. Call **LOS\_ListDelInit** to delete a specified node, and initialize the linked list based on this node.
-
-> **NOTE:**
->- Pay attention to the operations of the front and back pointer of the node.
->- The linked list operation APIs are underlying APIs and do not check whether the input parameters are empty. You must ensure that the input parameters are valid.
->- If the memory of a linked list node is dynamically requested, release the memory after deleting the node.
-
-### Development Example
+The table below describes the DLL APIs. For more details about the APIs, see the API reference.
+
+| **Category**| **API**|
+| -------- | -------- |
+| Initializing a DLL| - **LOS_ListInit**: initializes a node as a DLL node.<br>- **LOS_DL_LIST_HEAD**: defines a node and initializes it as a DLL node.|
+| Adding a node| - **LOS_ListAdd**: adds a node to the head of a DLL.<br>- **LOS_ListHeadInsert**: same as **LOS_ListAdd**.<br>- **LOS_ListTailInsert**: inserts a node to the tail of a DLL.|
+| Adding a DLL| - **LOS_ListAddList**: adds the head of a DLL to the head of this DLL.<br>- **LOS_ListHeadInsertList**: inserts the head of a DLL to the head of this DLL.<br>- **LOS_ListTailInsertList**: inserts the end of a DLL to the head of this DLL.|
+| Deleting a node| - **LOS_ListDelete**: deletes a node from this DLL.<br>- **LOS_ListDelInit**: deletes a node from this DLL and uses this node to initialize the DLL.|
+| Checking a DLL| - **LOS_ListEmpty**: checks whether a DLL is empty.<br>- **LOS_DL_LIST_IS_END**: checks whether a node is the tail of the DLL.<br>- **LOS_DL_LIST_IS_ON_QUEUE**: checks whether a node is in the DLL.|
+| Obtaining structure information| - **LOS_OFF_SET_OF**: obtains the offset of a member in the specified structure relative to the start address of the structure.<br>- **LOS_DL_LIST_ENTRY**: obtains the address of the structure that contains the first node in the DLL. The first input parameter of the API indicates the head node in the list, the second input parameter indicates the name of the structure to be obtained, and the third input parameter indicates the name of the linked list in the structure.<br>- **LOS_ListPeekHeadType**: obtains the address of the structure that contains the first node in the linked list. The first input parameter of the API indicates the head node in the list, the second input parameter indicates the name of the structure to be obtained, and the third input parameter indicates the name of the linked list in the structure. Null will be returned if the DLL is empty.<br>- **LOS_ListRemoveHeadType**: obtains the address of the structure that contains the first node in the linked list, and deletes the first node from the list. The first input parameter of the API indicates the head node in the list, the second input parameter indicates the name of the structure to be obtained, and the third input parameter indicates the name of the linked list in the structure. Null will be returned if the DLL is empty.<br>- **LOS_ListNextType**: obtains the address of the structure that contains the next node of the specified node in the linked list. The first input parameter of the API indicates the head node in the list, the second input parameter indicates the specified node, the third parameter indicates the name of the structure to be obtained, and the fourth input parameter indicates the name of the linked list in the structure. If the next node of the linked list node is the head node and is empty, NULL will be returned.|
+| Traversing a DLL| - **LOS_DL_LIST_FOR_EACH**: traverses a DLL.<br>- **LOS_DL_LIST_FOR_EACH_SAFE**: traverses the DLL and stores the subsequent nodes of the current node for security verification.|
+| Traversing the structure that contains the DLL| - **LOS_DL_LIST_FOR_EACH_ENTRY**: traverses a DLL and obtains the address of the structure that contains the linked list node.<br>- **LOS_DL_LIST_FOR_EACH_ENTRY_SAFE**: traverses a DLL, obtains the address of the structure that contains the linked list node, and stores the address of the structure that contains the subsequent node of the current node. (See the traversal sketch after this table.)|
+
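+The following minimal sketch shows how the traversal macros described in the table can be used to recover the structure that embeds a list node. The structure **DemoNode** and the function **DemoTraverse** are illustrative names, assuming the macro behavior listed above.
+
+```
+#include "stdio.h"
+#include "los_list.h"
+
+typedef struct {
+    UINT32 id;           /* Payload carried by each node */
+    LOS_DL_LIST list;    /* DLL node embedded in the structure */
+} DemoNode;
+
+VOID DemoTraverse(VOID)
+{
+    LOS_DL_LIST head;
+    DemoNode node1 = { 1, { NULL, NULL } };
+    DemoNode node2 = { 2, { NULL, NULL } };
+    DemoNode *item = NULL;
+
+    LOS_ListInit(&head);
+    LOS_ListTailInsert(&head, &node1.list);
+    LOS_ListTailInsert(&head, &node2.list);
+
+    /* Traverse the DLL and obtain the structure that contains each node. */
+    LOS_DL_LIST_FOR_EACH_ENTRY(item, &head, DemoNode, list) {
+        printf("id = %u\n", item->id);
+    }
+}
+```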
+
+## How to Develop
+
+The typical development process of the DLL is as follows:
+
+1. Call **LOS_ListInit** or **LOS_DL_LIST_HEAD** to initialize a DLL.
+
+2. Call **LOS_ListAdd** to add a node into the DLL.
+
+3. Call **LOS_ListTailInsert** to insert a node to the tail of the DLL.
+
+4. Call **LOS_ListDelete** to delete the specified node.
+
+5. Call **LOS_ListEmpty** to check whether the DLL is empty.
+
+6. Call **LOS_ListDelInit** to delete the specified node and initialize the DLL based on the node.
+
+
+>  **NOTE**
+> - Pay attention to the operations on the front and back pointers of the node.
+>
+> - The DLL APIs are underlying interfaces and do not check whether the input parameters are empty. You must ensure that the input parameters are valid.
+>
+> - If the memory of a linked list node is dynamically allocated, release the memory when deleting the node.
+
+
+ **Development Example**
**Example Description**
+
This example implements the following:
-1. Initialize a doubly linked list.
-2. Add nodes.
-3. Delete nodes.
-4. Check the operation result.
+
+1. Initialize a DLL.
+
+2. Add nodes.
+
+3. Delete nodes.
+
+4. Check the operation result.
+
+
```
#include "stdio.h"
@@ -184,7 +75,7 @@ static UINT32 ListSample(VOID)
LOS_DL_LIST listNode1 = {NULL,NULL};
LOS_DL_LIST listNode2 = {NULL,NULL};
- // Initialize the linked list.
+ // Initialize the DLL.
PRINTK("Initial head\n");
LOS_ListInit(&listHead);
@@ -203,7 +94,7 @@ static UINT32 ListSample(VOID)
LOS_ListDelete(&listNode1);
LOS_ListDelete(&listNode2);
- // Check that the linked list is empty.
+ // Check whether the DLL is empty.
if (LOS_ListEmpty(&listHead)) {
PRINTK("Delete success\n");
}
@@ -212,8 +103,10 @@ static UINT32 ListSample(VOID)
}
```
+
**Verification**
+
The development is successful if the return result is as follows:
```
@@ -222,4 +115,3 @@ Add listNode1 success
Tail insert listNode2 success
Delete success
```
-
diff --git a/en/device-dev/kernel/kernel-small-basic-memory-virtual.md b/en/device-dev/kernel/kernel-small-basic-memory-virtual.md
index 1c3b440fd5ecc5741b2cad1a49ef59b90d9082aa..bcef2eb5c93bf3ba1b15552d055a5cfa7dc9405b 100644
--- a/en/device-dev/kernel/kernel-small-basic-memory-virtual.md
+++ b/en/device-dev/kernel/kernel-small-basic-memory-virtual.md
@@ -1,322 +1,129 @@
# Virtual Memory Management
-## Basic Concepts
+## Basic Concepts
Virtual memory management is a technology used by computer systems to manage memory. Each process has a continuous virtual address space. The size of the virtual address space is determined by the number of CPU bits. The maximum addressing space for a 32-bit hardware platform ranges from 0 GiB to 4 GiB. The 4 GiB space is divided into two parts: 3 GiB higher-address space for the LiteOS-A kernel and 1 GiB lower-address space for user-mode processes. The virtual address space of each process space is independent, and the code and data do not affect each other.
-The system divides the virtual memory into memory blocks called virtual pages. The size of a virtual page is generally 4 KiB or 64 KiB. The virtual page of the LiteOS-A kernel is 4 KiB by default. You can configure memory management units \(MMUs\) as required. The minimum unit of the virtual memory management is a page. A virtual address region in the LiteOS-A kernel can contain one virtual page or multiple virtual pages with contiguous addresses. Similarly, the physical memory is also divided by page, and each memory block is called page frame. The virtual address space is divided as follows: 3 GiB \(**0x40000000** to **0xFFFFFFFF**\) for the kernel space and 1 GiB \(**0x01000000** to **0x3F000000**\) for the user space. The following tables describe the virtual address plan. You can view or configure virtual addresses in **los\_vm\_zone.h**.
-
-**Table 1** Kernel-mode addresses
-
-
-Zone
- |
-Description
- |
-Property
- |
-
-
-DMA zone
- |
-Addresses for direct memory access (DMA) of I/O devices.
- |
-Uncache
- |
-
-Normal zone
- |
-Addresses for loading the kernel code segment, data segment, heap, and stack.
- |
-Cache
- |
-
-high mem zone
- |
-Addresses for allocating contiguous virtual memory. The mapped physical memory blocks may not be contiguous.
- |
-Cache
- |
-
-
-
-
-**Table 2** User-mode virtual addresses
-
-
-Zone
- |
-Description
- |
-Property
- |
-
-
-Code segment
- |
-User-mode code segment address range
- |
-Cache
- |
-
-Heap
- |
-User-mode heap address range
- |
-Cache
- |
-
-Stack
- |
-User-mode stack address range
- |
-Cache
- |
-
-Shared library
- |
-Address range for loading the user-mode shared library, including the address range mapped by mmap.
- |
-Cache
- |
-
-
-
-
-## Working Principles
+The system divides the virtual memory into memory blocks called virtual pages. The size of a virtual page is generally 4 KiB or 64 KiB. The virtual page of the LiteOS-A kernel is 4 KiB by default. You can configure memory management units (MMUs) as required. The minimum unit of the virtual memory management is a page. A virtual address region in the LiteOS-A kernel can contain one virtual page or multiple virtual pages with contiguous addresses. Similarly, the physical memory is also divided by page, and each memory block is called page frame. The virtual address space is divided as follows: 3 GiB (**0x40000000** to **0xFFFFFFFF**) for the kernel space and 1 GiB (**0x01000000** to **0x3F000000**) for the user space. The following tables describe the virtual address plan. You can view or configure virtual addresses in **los_vm_zone.h**.
+
+**Table 1** Kernel-mode addresses
+
+| Zone| Description| Property|
+| -------- | -------- | -------- |
+| DMA zone | Addresses for direct memory access (DMA) of I/O devices.| Uncache |
+| Normal zone | Addresses for loading the kernel code segment, data segment, heap, and stack.| Cache |
+| high mem zone | Addresses for allocating contiguous virtual memory. The mapped physical memory blocks may not be contiguous.| Cache |
+
+**Table 2** User-mode virtual addresses
+
+| Zone| Description| Property|
+| -------- | -------- | -------- |
+| Code segment| User-mode code segment address range.| Cache |
+| Heap| User-mode heap address range.| Cache |
+| Stack| User-mode stack address range.| Cache |
+| Shared libraries| Address range for loading the user-mode shared library, including the address range mapped by mmap.| Cache |
+
+
+## Working Principles
In virtual memory management, the virtual address space is contiguous, but the mapped physical memory is not necessarily contiguous, as depicted in the following figure. When an executable program is loaded and runs, the CPU accesses the code or data in the virtual address space in the following two cases:
-- If the page \(for example, V0\) containing the virtual address accessed by the CPU is mapped to a physical page \(for example, P0\), the CPU locates the page table entry corresponding to the process \(for details, see [Virtual-to-Physical Mapping](kernel-small-basic-inner-reflect.md)"\), accesses the physical memory based on the physical address information in the page table entry, and returns the content.
-- If the page \(for example, V2\) containing the virtual address accessed by the CPU is not mapped to a physical page, the system triggers a page missing fault, requests a physical page, copies the corresponding information to the physical page, and updates the start address of the physical page to the page table entry. Then, the CPU can access specific code or data by executing the instruction for accessing the virtual memory again.
-
-**Figure 1** Mapping between the virtual and physical memory addresses
-
-
-## Development Guidelines
-
-### Available APIs
-
-**Table 3** APIs of the virtual memory management module
-
-
-Function
- |
-API
- |
-Description
- |
-
-
-Obtaining process memory space
- |
-LOS_CurrSpaceGet
- |
-Obtains the pointer to the current process space structure.
- |
-
-LOS_SpaceGet
- |
-Obtains the pointer to the process space structure corresponding to the virtual address.
- |
-
-LOS_GetKVmSpace
- |
-Obtains the pointer to the kernel process space structure.
- |
-
-LOS_GetVmallocSpace
- |
-Obtains the pointer to the vmalloc space structure.
- |
-
-LOS_GetVmSpaceList
- |
-Obtains the pointer to the process space linked list.
- |
-
-Operations related to the virtual address region
- |
-LOS_RegionFind
- |
-Searches for and returns the virtual address region corresponding to the specified address in the process space.
- |
-
-LOS_RegionRangeFind
- |
-Searches for and returns the virtual address region corresponding to the specified address range in the process space.
- |
-
-LOS_IsRegionFileValid
- |
-Checks whether the virtual address region is mapped to a file.
- |
-
-LOS_RegionAlloc
- |
-Requests a free virtual address region.
- |
-
-LOS_RegionFree
- |
-Releases a specific region in the process space.
- |
-
-LOS_RegionEndAddr
- |
-Obtains the end address of the specified address region.
- |
-
-LOS_RegionSize
- |
-Obtains the size of a region.
- |
-
-LOS_IsRegionTypeFile
- |
-Checks whether the address region is a file memory mapping.
- |
-
-LOS_IsRegionPermUserReadOnly
- |
-Checks whether the address region is read-only in the user space.
- |
-
-LOS_IsRegionFlagPrivateOnly
- |
-Checks whether the address region has private attributes.
- |
-
-LOS_SetRegionTypeFile
- |
-Sets the file memory mapping attribute.
- |
-
-LOS_IsRegionTypeDev
- |
-Checks whether the address region is device memory mapping.
- |
-
-LOS_SetRegionTypeDev
- |
-Sets the device memory mapping attribute.
- |
-
-LOS_IsRegionTypeAnon
- |
-Checks whether the address region is an anonymous mapping.
- |
-
-LOS_SetRegionTypeAnon
- |
-Sets the anonymous mapping attribute.
- |
-
-Verifying address
- |
-LOS_IsUserAddress
- |
-Checks whether the address is in the user space.
- |
-
-LOS_IsUserAddressRange
- |
-Checks whether the address region is in the user space.
- |
-
-LOS_IsKernelAddress
- |
-Checks whether the address is in the kernel space.
- |
-
-LOS_IsKernelAddressRange
- |
-Checks whether the address region is in the kernel space.
- |
-
-LOS_IsRangeInSpace
- |
-Checks whether the address region is in the process space.
- |
-
-vmalloc operations
- |
-LOS_VMalloc
- |
-Requests memory using vmalloc.
- |
-
-LOS_VFree
- |
-Releases memory using vmalloc.
- |
-
-LOS_IsVmallocAddress
- |
-Checks whether the address is requested by using vmalloc.
- |
-
-Requesting memory
- |
-LOS_KernelMalloc
- |
-Allocates memory from the heap memory pool if the requested memory is less than 16 KiB; allocates multiple contiguous physical pages for memory allocation if the requested memory is greater than 16 KiB.
- |
-
-LOS_KernelMallocAlign
- |
-Allocates memory with alignment attributes. The allocation rule is the same as that of the LOS_KernelMalloc API.
- |
-
-LOS_KernelFree
- |
-Releases the memory requested by LOS_KernelMalloc and LOS_KernelMallocAlign.
- |
-
-LOS_KernelRealloc
- |
-Reallocates the memory requested by LOS_KernelMalloc and LOS_KernelMallocAlign.
- |
-
-Others
- |
-LOS_PaddrQuery
- |
-Obtains the physical address based on the virtual address.
- |
-
-LOS_VmSpaceFree
- |
-Releases the process space, including the virtual memory region and page table.
- |
-
-LOS_VmSpaceReserve
- |
-Reserves a memory space in the process space.
- |
-
-LOS_VaddrToPaddrMmap
- |
-Maps the physical address region with the specified length to a virtual address region. You need to request the physical address region before the operation.
- |
-
-
-
-
-### How to Develop
+- If the page (for example, V0) containing the virtual address accessed by the CPU is mapped to a physical page (for example, P0), the CPU locates the page table entry corresponding to the process (for details, see [Virtual-to-Physical Mapping](kernel-small-basic-inner-reflect.md)), accesses the physical memory based on the physical address information in the page table entry, and returns the content.
-To use APIs related to virtual memory:
+- If the page (for example, V2) containing the virtual address accessed by the CPU is not mapped to a physical page, the system triggers a page missing fault, requests a physical page, copies the corresponding information to the physical page, and updates the start address of the physical page to the page table entry. Then, the CPU can access specific code or data by executing the instruction for accessing the virtual memory again.
+
+ **Figure 1** Mapping between the virtual and physical memory addresses
+
+ 
+
+
+## Development Guidelines
+
+
+### Available APIs
+
+**Table 3** APIs for obtaining the process space
-1. Obtain the process space structure using the APIs for obtaining the process space, and access the structure information.
-2. Perform the following operations on the virtual address region:
- - Call **LOS\_RegionAlloc** to request a virtual address region.
+| API| Description|
+| -------- | -------- |
+| LOS_CurrSpaceGet | Obtains the pointer to the current process space structure.|
+| LOS_SpaceGet | Obtains the pointer to the process space structure corresponding to the virtual address.|
+| LOS_GetKVmSpace | Obtains the pointer to the kernel process space structure.|
+| LOS_GetVmallocSpace | Obtains the pointer to the vmalloc space structure.|
+| LOS_GetVmSpaceList | Obtains the pointer to the process space linked list.|
- - Call **LOS\_RegionFind** and **LOS\_RegionRangeFind** to check whether the corresponding address region exists.
- - Call **LOS\_RegionFree** to release a virtual address region.
+**Table 4** Operations related to the virtual address region
+
+| API| Description|
+| -------- | -------- |
+| LOS_RegionFind | Searches for and returns the virtual address region corresponding to the specified address in the process space.|
+| LOS_RegionRangeFind | Searches for and returns the virtual address region corresponding to the specified address range in the process space.|
+| LOS_IsRegionFileValid | Checks whether the virtual address region is mapped to a file.|
+| LOS_RegionAlloc | Requests a free virtual address region.|
+| LOS_RegionFree | Releases a specific region in the process space.|
+| LOS_RegionEndAddr | Obtains the end address of the specified address region.|
+| LOS_RegionSize | Obtains the size of a region.|
+| LOS_IsRegionTypeFile | Checks whether the address region is a file memory mapping.|
+| LOS_IsRegionPermUserReadOnly | Checks whether the address region is read-only in the user space.|
+| LOS_IsRegionFlagPrivateOnly | Checks whether the address region has private attributes.|
+| LOS_SetRegionTypeFile | Sets the file memory mapping attributes. |
+| LOS_IsRegionTypeDev | Checks whether the address region is device memory mapping.|
+| LOS_SetRegionTypeDev | Sets the device memory mapping attributes. |
+| LOS_IsRegionTypeAnon | Checks whether the address region is an anonymous mapping.|
+| LOS_SetRegionTypeAnon | Sets the anonymous mapping attributes. |
+
+**Table 5** APIs for address verification
+
+| API| Description|
+| -------- | -------- |
+| LOS_IsUserAddress | Checks whether the address is in the user space.|
+| LOS_IsUserAddressRange | Checks whether the address region is in the user space.|
+| LOS_IsKernelAddress | Checks whether the address is in the kernel space.|
+| LOS_IsKernelAddressRange | Checks whether the address region is in the kernel space.|
+| LOS_IsRangeInSpace | Checks whether the address region is in the process space.|
+
+**Table 6** APIs for vmalloc operations
+
+| API| Description|
+| -------- | -------- |
+| LOS_VMalloc | Requests memory using **vmalloc**.|
+| LOS_VFree | Releases memory using **vmalloc**.|
+| LOS_IsVmallocAddress | Checks whether the address is requested using **vmalloc**. |
+
+**Table 7** APIs for memory allocation
+
+| API| Description|
+| -------- | -------- |
+| LOS_KernelMalloc | Allocates memory from the heap memory pool if the requested memory is less than 16 KiB; allocates multiple contiguous physical pages if the requested memory is greater than 16 KiB. |
+| LOS_KernelMallocAlign | Allocates memory with alignment attributes. The allocation rule is the same as that of the **LOS_KernelMalloc** API.|
+| LOS_KernelFree | Releases the memory requested by **LOS_KernelMalloc** and **LOS_KernelMallocAlign**.|
+| LOS_KernelRealloc | Reallocates the memory requested by **LOS_KernelMalloc** and **LOS_KernelMallocAlign**.|
+
+**Table 8** Other APIs
+
+| API | Description |
+| -------- | -------- |
+| LOS_PaddrQuery | Obtains the physical address based on the virtual address. |
+| LOS_VmSpaceFree | Releases the process space, including the virtual memory region and page table. |
+| LOS_VmSpaceReserve | Reserves a memory space in the process space. |
+| LOS_VaddrToPaddrMmap | Maps the physical address region with the specified length to a virtual address region. You need to request the physical address region before the operation. |
+
+
+### How to Develop
+
+To use APIs related to virtual memory:
-3. Call **vmalloc** and memory requesting APIs to apply for memory in the kernel as demanded.
+1. Obtain the process space structure using the APIs for obtaining the process space, and access the structure information.
+2. Perform the following operations on the virtual address region:
+ - Call **LOS_RegionAlloc** to request a virtual address region.
+ - Call **LOS_RegionFind** and **LOS_RegionRangeFind** to check whether the corresponding address region exists.
+ - Call **LOS_RegionFree** to release a virtual address region.
-> **NOTE:**
->The physical memory requested by using the memory requesting APIs must be contiguous. If the system cannot provide a large number of contiguous memory blocks, the request fails. Therefore, the memory requesting APIs are recommended for requesting small memory blocks. **vmalloc** is recommended for requesting non-contiguous physical memory. However, the memory is allocated in the unit of pages \(4096 bytes/page in the current system\). If you want memory that is an integer multiple of a page, you can use **vmalloc**. For example, you can use **vmalloc** to request memory for file reading in a file system, which demands a large cache.
+3. Call **vmalloc** APIs (see Table 6) and memory allocation APIs (see Table 7) to apply for memory in the kernel as required. A minimal usage sketch is provided after the following note.
+>  **NOTE**
+>
+> The physical memory requested by using the memory allocation APIs must be contiguous. If the system cannot provide a large number of contiguous memory blocks, the request fails. Therefore, the memory allocation APIs are recommended for requesting small memory blocks.
+>
+> **vmalloc** APIs are recommended for requesting non-contiguous physical memory. However, the memory is allocated in the unit of pages (4096 bytes/page in the current system). If you want memory that is an integer multiple of a page, you can use **vmalloc** APIs. For example, you can use **vmalloc** to request memory for file reading in a file system, which demands a large cache.
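+
+The following is a minimal usage sketch of the **vmalloc** and memory allocation APIs listed in Tables 6 and 7. The header names and the requested sizes are assumptions for illustration; check the kernel source (for example, **los_vm_map.h**) for the exact declarations.
+
+```
+#include "los_vm_map.h"
+#include "los_printf.h"
+
+VOID KernelMemSample(VOID)
+{
+    /* Small block: served from the kernel heap and physically contiguous. */
+    VOID *buf = LOS_KernelMalloc(0x100);
+    if (buf == NULL) {
+        PRINT_ERR("LOS_KernelMalloc failed!\n");
+        return;
+    }
+    LOS_KernelFree(buf);
+
+    /* Larger block that does not need to be physically contiguous. */
+    VOID *vAddr = LOS_VMalloc(0x3000); /* three 4096-byte pages */
+    if (vAddr == NULL) {
+        PRINT_ERR("LOS_VMalloc failed!\n");
+        return;
+    }
+    LOS_VFree(vAddr);
+}
+```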
diff --git a/en/device-dev/kernel/kernel-small-basic-trans-rwlock.md b/en/device-dev/kernel/kernel-small-basic-trans-rwlock.md
index 97f8b77622ee03c6701df1e0b426778c8a956ddd..18558986daf9a5cf45e72a771aa8dc6e98454876 100644
--- a/en/device-dev/kernel/kernel-small-basic-trans-rwlock.md
+++ b/en/device-dev/kernel/kernel-small-basic-trans-rwlock.md
@@ -1,128 +1,85 @@
# RW Lock
-## Basic Concepts
+## Basic Concepts
-Similar to a mutex, a read-write lock \(RW lock\) can be used to synchronize tasks in the same process. Different from a mutex, an RW lock allows concurrent access for read operations and exclusive access for write operations.
+Similar to a mutex, a read-write lock (RW lock) can be used to synchronize tasks in the same process. Different from a mutex, an RW lock allows concurrent access for read operations and exclusive access for write operations.
An RW lock has three states: locked in read mode, locked in write mode, and unlocked.
Observe the following rules when using RW locks:
-- If there is no lock in write mode in the protected area, any task can add a lock in read mode.
-- A lock in write mode can be added only when the protected area is in the unlocked state.
+- If there is no lock in write mode in the protected area, any task can add a lock in read mode.
+
+- A lock in write mode can be added only when the protected area is in the unlocked state.
In a multi-task environment, multiple tasks may access the same shared resource. A lock in read mode allows access to a protected area in shared mode, and a lock in a write mode allows exclusive access to the shared resource.
This sharing-exclusive manner is suitable for a multi-task environment where the data read operations are far more than data write operations. It can improve multi-task concurrency of the application.
-## Working Principles
+
+## Working Principles
How does an RW lock implement lock in read mode and lock in write mode to control read/write access of multiple tasks?
-- If task A acquires the lock in write mode for the first time, other tasks cannot acquire or attempt to acquire the lock in read mode.
-
-- If task A acquires the lock in read mode, the RW lock count increments by 1 when a task acquires or attempts to acquire the lock in read mode.
-
-## Development Guidelines
-
-### Available APIs
-
-**Table 1** Read/write lock module APIs
-
-
-Function
- |
-API
- |
-Description
- |
-
-
-Creating and deleting an RW lock
- |
-LOS_RwlockInit
- |
-Creates an RW lock.
- |
-
-LOS_RwlockDestroy
- |
-Deletes the specified RW lock.
- |
-
-Requesting a lock in read mode
- |
-LOS_RwlockRdLock
- |
-Requests the specified lock in read mode.
- |
-
-LOS_RwlockTryRdLock
- |
-Attempts to request the specified lock in read mode.
- |
-
-Requesting a lock in write mode
- |
-LOS_RwlockWrLock
- |
-Requests the specified lock in write mode.
- |
-
-LOS_RwlockTryWrLock
- |
-Attempts to request the specified lock in write mode.
- |
-
-Releasing an RW lock
- |
-LOS_RwlockUnLock
- |
-Releases the specified RW lock.
- |
-
-Verifying RW lock validity
- |
-LOS_RwlockIsValid
- |
-Checks the validity of an RW lock.
- |
-
-
-
-
-### How to Develop
+- If task A acquires the lock in write mode for the first time, other tasks cannot acquire or attempt to acquire the lock in read mode.
+
+- If task A acquires the lock in read mode, the RW lock count increments by 1 when a task acquires or attempts to acquire the lock in read mode.
+
+
+## Development Guidelines
+
+
+### Available APIs
+
+**Table 1** APIs of the RW lock module
+
+| API| Description|
+| -------- | -------- |
+| LOS_RwlockInit | Creates an RW lock.|
+| LOS_RwlockDestroy | Deletes an RW lock.|
+| LOS_RwlockRdLock | Requests the specified lock in read mode.|
+| LOS_RwlockTryRdLock | Attempts to request a lock in read mode.|
+| LOS_RwlockWrLock | Requests the specified lock in write mode.|
+| LOS_RwlockTryWrLock | Attempts to request a lock in write mode.|
+| LOS_RwlockUnLock | Releases the specified RW lock.|
+| LOS_RwlockIsValid | Checks the validity of an RW lock.|
+
+
+### How to Develop
The typical development process is as follows:
-1. Call **LOS\_RwlockInit** to create an RW lock.
+1. Call **LOS_RwlockInit** to create an RW lock.
-2. Call **LOS\_RwlockRdLock** to request a lock in read mode or call **LOS\_RwlockWrLock** to request a lock in write mode.
+2. Call **LOS_RwlockRdLock** to request a lock in read mode or call **LOS_RwlockWrLock** to request a lock in write mode.
-If a lock in read mode is requested:
+ If a lock in read mode is requested:
-- If the lock is not held, the read task can acquire the lock.
-- If the lock is held, the read task acquires the lock and is executed based on the task priority.
-- If the lock in write mode is held by another task, the task cannot acquire the lock until the lock in write mode is released.
+ - If the lock is not held, the read task can acquire the lock.
-If a lock in write mode is requested:
+   - If the lock is held in read mode, the requesting task acquires the lock and runs based on its priority.
-- If the lock is not held or if the task that holds the lock in read mode is the one that requests the lock in write mode, the task acquires the lock in write mode immediately.
-- If the lock already has a lock in read mode and the read task has a higher priority, the current task is suspended until the lock in read mode is released.
+ - If the lock in write mode is held by another task, the task cannot acquire the lock until the lock in write mode is released.
-3. There are three types of locks in read mode and write mode: non-block mode, permanent block mode, and scheduled block mode. The difference lies in the task suspension time.
+ If a lock in write mode is requested:
+
+ - If the lock is not held or if the task that holds the lock in read mode is the one that requests the lock in write mode, the task acquires the lock in write mode immediately.
-4. Call **LOS\_RwlockUnLock** to release an RW lock.
+   - If the lock is already held in read mode and the task holding it has a higher priority, the current task is suspended until the lock in read mode is released.
-- If tasks are blocked by the specified RW lock, the task with the highest priority is woken up, enters the Ready state, and is scheduled.
+3. There are three types of locks in read mode and write mode: non-block mode, permanent block mode, and scheduled block mode. The difference lies in the task suspension time.
-- If no task is blocked by the specified RW lock, the RW lock is released.
+4. Call **LOS_RwlockUnLock** to release an RW lock.
-5. Call **LOS\_RwlockDestroy** to delete an RW lock.
+ - If tasks are blocked by the specified RW lock, the task with the highest priority is woken up, enters the Ready state, and is scheduled.
+ - If no task is blocked by the specified RW lock, the RW lock is released.
-> **NOTE:**
->- The RW lock cannot be used in the interrupt service program.
->- When using the LiteOS-A kernel, the OpenHarmony must ensure real-time task scheduling and avoid long-time task blocking. Therefore, an RW lock must be released as soon as possible after use.
->- When an RW lock is held by a task, the task priority cannot be changed by using APIs such as **LOS\_TaskPriSet**.
+5. Call **LOS_RwlockDestroy** to delete an RW lock. A minimal sketch combining these steps is provided after the note below.
+ >  **NOTE**
+ > - The RW lock cannot be used in the interrupt service program.
+ >
+   > - The OpenHarmony LiteOS-A kernel must ensure real-time task scheduling and avoid prolonged task blocking. Therefore, release an RW lock as soon as possible after use.
+ >
+ > - When an RW lock is held by a task, the task priority cannot be changed by using APIs, such as **LOS_TaskPriSet**.
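+
+The following minimal sketch strings these steps together. It assumes the **los_rwlock.h** header, a **LosRwlock** control block, and that the lock requests take a tick-based timeout (**LOS_WAIT_FOREVER** waits indefinitely); check the API reference for the exact prototypes.
+
+```
+#include "los_rwlock.h"
+
+LosRwlock g_demoRwlock;
+UINT32 g_sharedData = 0;
+
+VOID RwlockSample(VOID)
+{
+    UINT32 value;
+
+    if (LOS_RwlockInit(&g_demoRwlock) != LOS_OK) {
+        return;
+    }
+
+    /* Reader: several tasks may hold the lock in read mode at the same time. */
+    (VOID)LOS_RwlockRdLock(&g_demoRwlock, LOS_WAIT_FOREVER);
+    value = g_sharedData;
+    (VOID)LOS_RwlockUnLock(&g_demoRwlock);
+    (VOID)value;
+
+    /* Writer: exclusive access to the shared data. */
+    (VOID)LOS_RwlockWrLock(&g_demoRwlock, LOS_WAIT_FOREVER);
+    g_sharedData++;
+    (VOID)LOS_RwlockUnLock(&g_demoRwlock);
+
+    (VOID)LOS_RwlockDestroy(&g_demoRwlock);
+}
+```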
diff --git a/en/device-dev/kernel/kernel-small-bundles-fs-support-fat.md b/en/device-dev/kernel/kernel-small-bundles-fs-support-fat.md
index d82858014b528de1c82b4313cfc3a0e0312a9540..e2bf8109175e2498239134fd959f71326d263bf9 100644
--- a/en/device-dev/kernel/kernel-small-bundles-fs-support-fat.md
+++ b/en/device-dev/kernel/kernel-small-bundles-fs-support-fat.md
@@ -1,40 +1,49 @@
# FAT
-## Basic Concepts
+## Basic Concepts
-File Allocation Table \(FAT\) is a file system developed for personal computers. It consists of the DOS Boot Record \(DBR\) region, FAT region, and Data region. Each entry in the FAT region records information about the corresponding cluster in the storage device. The cluster information includes whether the cluster is used, the number of the next cluster of the file, and whether the file ends with the cluster. The FAT file system supports multiple formats, such as FAT12, FAT16, and FAT32. The numbers 12, 16, and 32 indicate the number of bits per cluster within the FAT, and also restrict the maximum file size in the system. The FAT file system supports multiple media, especially removable media \(such as USB flash drives, SD cards, and removable hard drives\). The FAT file system ensures good compatibility between embedded devices and desktop systems \(such as Windows and Linux\) and facilitates file management.
+File Allocation Table (FAT) is a file system developed for personal computers. It consists of the DOS Boot Record (DBR) region, FAT region, and Data region. Each entry in the FAT region records information about the corresponding cluster in the storage device. The cluster information includes whether the cluster is used, the number of the next cluster of the file, and whether the file ends with the cluster. The FAT file system supports multiple formats, such as FAT12, FAT16, and FAT32. The numbers 12, 16, and 32 indicate the number of bits per cluster within the FAT, and also restrict the maximum file size in the system. The FAT file system supports multiple media, especially removable media (such as USB flash drives, SD cards, and removable hard drives). The FAT file system ensures good compatibility between embedded devices and desktop systems (such as Windows and Linux) and facilitates file management.
The OpenHarmony kernel supports FAT12, FAT16, and FAT32 file systems. These file systems require a tiny amount of code to implement, use less resources, support a variety of physical media, and are tailorable and compatible with Windows and Linux systems. They also support identification of multiple devices and partitions. The kernel supports multiple partitions on hard drives and allows creation of the FAT file system on the primary partition and logical partition.
-## Working Principles
+
+## Working Principles
This document does not include the FAT design and physical layout. You can find a lot of reference on the Internet.
-The OpenHarmony LiteOS-A kernel uses block cache \(Bcache\) to improve FAT performance. When read and write operations are performed, Bcache caches the sectors close to the read and write sectors to reduce the number of I/Os and improve performance. The basic cache unit of Bcache is block. The size of each block is the same. By default, there are 28 blocks, and each block caches data of 64 sectors. When the Bcache dirty block rate \(number of dirty sectors/total number of sectors\) reaches the threshold, writeback is triggered and cached data is written back to disks. You can manually call **sync** and **fsync** to write data to disks if you want. Some FAT APIs \(such as **close** and **umount**\) may also trigger writeback operations. However, you are advised not to use them to trigger writeback.
+The OpenHarmony LiteOS-A kernel uses block cache (Bcache) to improve FAT performance. When read and write operations are performed, Bcache caches the sectors close to the read and write sectors to reduce the number of I/Os and improve performance. The basic cache unit of Bcache is block. The size of each block is the same. By default, there are 28 blocks, and each block caches data of 64 sectors. When the Bcache dirty block rate (number of dirty sectors/total number of sectors) reaches the threshold, writeback is triggered and cached data is written back to disks. You can manually call **sync** and **fsync** to write data to disks if you want. Some FAT APIs (such as **close** and **umount**) may also trigger writeback operations. However, you are advised not to use them to trigger writeback.
+
-## Development Guidelines
+## Development Guidelines
-### How to Develop
+
+ **How to Develop**
The development process involves mounting partitions, managing files and directories, and unmounting partitions.
-The device name of the SD card or MMC is **mmcblk\[x\]p\[y\]**, and the file system type is **vfat**.
+The device name of the SD card or MMC is **mmcblk[x]p[y]**, and the file system type is **vfat**.
Example:
+
```
mount("/dev/mmcblk0p0", "/mnt", "vfat", 0, NULL);
```
-> **NOTE**
->- The size of a single FAT file cannot be greater than 4 GiB.
->- When there are two SD card slots, the first card inserted is card 0, and that inserted later is card 1.
->- When multi-partition is enabled and there are multiple partitions, the device node **/dev/mmcblk0** \(primary device\) registered by card 0 and **/dev/mmcblk0p0** \(secondary device\) are the same device. In this case, you cannot perform operations on the primary device.
->- Before removing an SD card, close the open files and directories and unmount the related nodes. Otherwise, SD card exceptions or memory leaks may occur.
->- Before performing the **format** operation, unmount the mount point.
->- After the Bcache feature takes effect, note the following:
-> - When **MS\_NOSYNC** is carried in the **mount** function, FAT does not proactively write the content in the cache back to the storage device. The FAT-related APIs **open**, **close**, **unlink**, **rename**, **mkdir**, **rmdir**, and **truncate** do not automatically perform the **sync** operation, which improves the operation speed. However, the upper layer must actively invoke the **sync** operation to synchronize data. Otherwise, data loss may occur.
-> - Bcache provides scheduled writeback. After **LOSCFG\_FS\_FAT\_CACHE\_SYNC\_THREAD** is enabled in **menuconfig**, the OpenHarmony kernel creates a scheduled task to write the Bcache data back to disks. By default, the kernel checks the dirty block rate in the Bcache every 5 seconds. If the dirty block rate exceeds 80%, the **sync** operation will be performed to write all dirty data in the Bcache to disks. You can call **LOS\_SetSyncThreadPrio**, **LOS\_SetSyncThreadInterval**, and **LOS\_SetDirtyRatioThreshold** to set the task priority, flush interval, and dirty block rate threshold, respectively.
-> - The cache has 28 blocks by default, and each block has 64 sectors.
-
+>  **NOTE**
+> - The size of a single FAT file cannot be greater than 4 GiB.
+>
+> - When there are two SD card slots, the first card inserted is card 0, and that inserted later is card 1.
+>
+> - When multi-partition is enabled and there are multiple partitions, the device node **/dev/mmcblk0** (primary device) registered by card 0 and **/dev/mmcblk0p0** (secondary device) are the same device. In this case, you cannot perform operations on the primary device.
+>
+> - Before removing an SD card, close the open files and directories and unmount the related nodes. Otherwise, SD card exceptions or memory leaks may occur.
+>
+> - Before performing the **format** operation, unmount the mount point.
+>
+> - After the Bcache feature takes effect, note the following:
+> - When **MS_NOSYNC** is carried in the **mount** function, FAT does not proactively write the content in the cache back to the storage device. The FAT-related APIs **open**, **close**, **unlink**, **rename**, **mkdir**, **rmdir**, and **truncate** do not automatically perform the **sync** operation, which improves the operation speed. However, the upper layer must actively invoke the **sync** operation to synchronize data. Otherwise, data loss may occur.
+>
+> - Bcache provides scheduled writeback. After **LOSCFG_FS_FAT_CACHE_SYNC_THREAD** is enabled in **menuconfig**, the OpenHarmony kernel creates a scheduled task to write the Bcache data back to disks. By default, the kernel checks the dirty block rate in the Bcache every 5 seconds. If the dirty block rate exceeds 80%, the **sync** operation will be performed to write all dirty data in the Bcache to disks. You can call **LOS_SetSyncThreadPrio**, **LOS_SetSyncThreadInterval**, and **LOS_SetDirtyRatioThreshold** to set the task priority, flush interval, and dirty block rate threshold, respectively.
+> - The cache has 28 blocks by default, and each block has 64 sectors.
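+
+Putting the procedure together, the following minimal sketch mounts a partition, creates and writes a file, and unmounts the partition. It assumes an SD card registered as **/dev/mmcblk0p0**; the file name is illustrative.
+
+```
+#include <stdio.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <sys/mount.h>
+
+int FatSample(void)
+{
+    int fd;
+
+    /* Mount the first partition of SD card 0 to /mnt as a vfat file system. */
+    if (mount("/dev/mmcblk0p0", "/mnt", "vfat", 0, NULL) != 0) {
+        printf("mount failed\n");
+        return -1;
+    }
+
+    /* Manage files and directories under the mount point. */
+    fd = open("/mnt/test.txt", O_CREAT | O_RDWR, 0644);
+    if (fd >= 0) {
+        (void)write(fd, "hello fat\n", 10);
+        (void)close(fd);
+    }
+
+    /* Unmount the partition before removing the SD card. */
+    if (umount("/mnt") != 0) {
+        printf("umount failed\n");
+        return -1;
+    }
+    return 0;
+}
+```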
diff --git a/en/device-dev/kernel/kernel-small-bundles-fs-support-nfs.md b/en/device-dev/kernel/kernel-small-bundles-fs-support-nfs.md
index d205531eb3d7ac83645ce872669a645d0d4d5aa5..57e01b4a58d9430f167b89c0cd03f6b4ded73dbf 100644
--- a/en/device-dev/kernel/kernel-small-bundles-fs-support-nfs.md
+++ b/en/device-dev/kernel/kernel-small-bundles-fs-support-nfs.md
@@ -1,135 +1,140 @@
# NFS
-## Basic Concepts
+## Basic Concepts
-NFS allows you to share files across hosts and OSs over a network. You can treat NFS as a file system service, which is equivalent to folder sharing in the Windows OS to some extent.
+Network File System (NFS) allows you to share files across hosts and OSs over a network. You can treat NFS as a file system service, which is equivalent to folder sharing in the Windows OS to some extent.
-## Working Principles
-The NFS of the OpenHarmony LiteOS-A kernel acts as an NFS client. The NFS client can mount the directory shared by a remote NFS server to the local machine and run the programs and shared files without occupying the storage space of the current system. To the local machine, the directory on the remote server is like its disk.
-
-## Development Guidelines
-
-1. Create an NFS server.
-
-The following uses the Ubuntu OS as an example to describe how to configure an NFS server.
-
-- Install the NFS server software.
-
-Set the download source of the Ubuntu OS when the network connection is normal.
-
-```
-sudo apt-get install nfs-kernel-server
-```
+## Working Principles
-- Create a directory for mounting and assign full permissions for the directory.
-
-```
-mkdir -p /home/sqbin/nfs
-sudo chmod 777 /home/sqbin/nfs
-```
-
-- Configure and start the NFS server.
-
-Append the following line in the **/etc/exports** file:
-
-```
-/home/sqbin/nfs *(rw,no_root_squash,async)
-```
-
-**/home/sqbin/nfs** is the root directory shared by the NFS server.
-
-Start the NFS server.
+The NFS of the OpenHarmony LiteOS-A kernel acts as an NFS client. The NFS client can mount the directory shared by a remote NFS server to the local machine and run the programs and shared files without occupying the storage space of the current system. To the local machine, the directory on the remote server is like its disk.
-```
-sudo /etc/init.d/nfs-kernel-server start
-```
-Restart the NFS server.
+## Development Guidelines
-```
-sudo /etc/init.d/nfs-kernel-server restart
-```
+1. Create an NFS server.
-1. Configure the board as an NFS client.
+ The following uses the Ubuntu OS as an example to describe how to configure an NFS server.
-In this section, the NFS client is a device running the OpenHarmony kernel.
+ - Install the NFS server software.
-- Set the hardware connection.
+ Set the download source of the Ubuntu OS when the network connection is normal.
-Connect the OpenHarmony kernel device to the NFS server. Set their IP addresses in the same network segment. For example, set the IP address of the NFS server to **10.67.212.178/24** and set the IP address of the OpenHarmony kernel device to **10.67.212.3/24**. Note that the preceding IP addresses are private IP addresses used as examples. You need to use your actual IP addresses.
+ ```
+ sudo apt-get install nfs-kernel-server
+ ```
-You can run the **ifconfig** command to check the OpenHarmony kernel device's IP address.
+ - Create a directory for mounting and assign full permissions for the directory.
-- Start the network and ensure that the network between the board and NFS server is normal.
+ ```
+ mkdir -p /home/sqbin/nfs
+ sudo chmod 777 /home/sqbin/nfs
+ ```
-Start the Ethernet or another type of network, and then run **ping** to check whether the network connection to the server is normal.
+ - Configure and start the NFS server.
-```
-OHOS # ping 10.67.212.178
-[0]Reply from 10.67.212.178: time=1ms TTL=63
-[1]Reply from 10.67.212.178: time=0ms TTL=63
-[2]Reply from 10.67.212.178: time=1ms TTL=63
-[3]Reply from 10.67.212.178: time=1ms TTL=63
---- 10.67.212.178 ping statistics ---
-4 packets transmitted, 4 received, 0 loss
-```
+ Append the following line in the **/etc/exports** file:
-Initialize the NFS client.
+ ```
+ /home/sqbin/nfs *(rw,no_root_squash,async)
+ ```
+
+ **/home/sqbin/nfs** is the root directory shared by the NFS server.
+
+ Start the NFS server.
-```
-OHOS # mkdir /nfs
-OHOS # mount 10.67.212.178:/home/sqbin/nfs /nfs nfs 1011 1000
-```
+ ```
+ sudo /etc/init.d/nfs-kernel-server start
+ ```
+
+ Restart the NFS server.
-If the following information is displayed, the NFS client is initialized.
+ ```
+ sudo /etc/init.d/nfs-kernel-server restart
+ ```
-```
-OHOS # mount 10.67.212.178:/home/sqbin/nfs /nfs nfs 1011 1000
-Mount nfs on 10.67.212.178:/home/sqbin/nfs, uid:1011, gid:1000
-Mount nfs finished.
-```
+2. Configure the board as an NFS client.
-This command mounts the **/home/sqbin/nfs** directory on the NFS server whose IP address is 10.67.212.178 to the **/nfs** directory on the OpenHarmony kernel device.
+ In this section, the NFS client is a device running the OpenHarmony kernel.
-> **NOTE:**
->This section assumes that the NFS server is available, that is, the **/home/sqbin/nfs** directory on the NFS server 10.67.212.178 is accessible.
->The **mount** command format is as follows:
->```
->mount nfs
->```
->- **SERVER\_IP** indicates the IP address of the server.
->- **SERVER\_PATH** indicates the path of the shared directory on the NFS server.
->- **CLIENT\_PATH** indicates the NFS path on the local device.
->- **nfs** indicates the path to which the remote shared directory is mounted on the local device.
->Replace the parameters as required.
->If you do not want to restrict the NFS access permission, set the permission of the NFS root directory to **777** on the Linux CLI.
->```
->chmod -R 777 /home/sqbin/nfs
->```
->The NFS client setting is complete, and the NFS file system has been mounted.
+ - Set the hardware connection.
-1. Share files using NFS.
+      Connect the OpenHarmony kernel device to the NFS server. Set their IP addresses in the same network segment. For example, set the IP address of the NFS server to **10.67.212.178/24** and the IP address of the OpenHarmony kernel device to **10.67.212.3/24**. Note that these IP addresses are private addresses used as examples; use your actual IP addresses.
-Create the **dir** directory on the NFS server and save the directory. Run the **ls** command in the OpenHarmony kernel.
+ You can run the **ifconfig** command to check the OpenHarmony kernel device's IP address.
-```
-OHOS # ls /nfs
-```
+ - Start the network and ensure that the network between the board and NFS server is normal.
-The following information is returned from the serial port:
+ Start the Ethernet or another type of network, and then run **ping** to check whether the network connection to the server is normal.
-```
-OHOS # ls /nfs
-Directory /nfs:
-drwxr-xr-x 0 u:0 g:0 dir
-```
-The **dir** directory created on the NFS server has been synchronized to the **/nfs** directory on the client \(OpenHarmony kernel system\).
+ ```
+ OHOS # ping 10.67.212.178
+ [0]Reply from 10.67.212.178: time=1ms TTL=63
+ [1]Reply from 10.67.212.178: time=0ms TTL=63
+ [2]Reply from 10.67.212.178: time=1ms TTL=63
+ [3]Reply from 10.67.212.178: time=1ms TTL=63
+ --- 10.67.212.178 ping statistics ---
+ 4 packets transmitted, 4 received, 0 loss
+ ```
+
+ Initialize the NFS client.
-Similarly, you can create files and directories on the client \(OpenHarmony kernel system\) and access them from the NFS server.
+ ```
+ OHOS # mkdir /nfs
+ OHOS # mount 10.67.212.178:/home/sqbin/nfs /nfs nfs 1011 1000
+ ```
+
+ If the following information is displayed, the NFS client is initialized.
-> **NOTE:**
->Currently, the NFS client supports some NFS v3 specifications. Therefore, the NFS client is not fully compatible with all types of NFS servers. You are advised to use the Linux NFS server to perform the development.
+ ```
+ OHOS # mount 10.67.212.178:/home/sqbin/nfs /nfs nfs 1011 1000
+ Mount nfs on 10.67.212.178:/home/sqbin/nfs, uid:1011, gid:1000
+ Mount nfs finished.
+ ```
+
+ This command mounts the **/home/sqbin/nfs** directory on the NFS server (IP address: 10.67.212.178) to the **/nfs** directory on the OpenHarmony kernel device.
+ >  **NOTE**
+ >
+ > This example assumes that the NFS server is available, that is, the **/home/sqbin/nfs** directory on the NFS server 10.67.212.178 is accessible.
+ >
+ > The **mount** command format is as follows:
+ >
+ > ```
+      > mount <SERVER_IP>:<SERVER_PATH> <CLIENT_PATH> nfs
+ > ```
+ >
+      > **SERVER_IP** indicates the IP address of the server, **SERVER_PATH** indicates the path of the shared directory on the NFS server, **CLIENT_PATH** indicates the local path to which the remote shared directory is mounted, and **nfs** indicates the file system type. Replace the parameters as required.
+ >
+ > If you do not want to restrict the NFS access permission, set the permission of the NFS root directory to **777** on the Linux CLI.
+ >
+ > ```
+ > chmod -R 777 /home/sqbin/nfs
+ > ```
+ >
+ > The NFS client setting is complete, and the NFS file system is mounted.
+
+3. Share files using NFS.
+
+ Create the **dir** directory on the NFS server. Run the **ls** command in the OpenHarmony kernel.
+
+ ```
+ OHOS # ls /nfs
+ ```
+
+ The following information is returned from the serial port:
+
+ ```
+ OHOS # ls /nfs
+ Directory /nfs:
+ drwxr-xr-x 0 u:0 g:0 dir
+ ```
+
+ The **dir** directory created on the NFS server has been synchronized to the **/nfs** directory on the client (OpenHarmony kernel system). Similarly, you can create files and directories on the client (OpenHarmony kernel system) and access them from the NFS server.
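+
+   For example, a client-side application can operate on the mounted directory with standard file APIs. A minimal sketch (the file name is illustrative):
+
+   ```
+   #include <fcntl.h>
+   #include <unistd.h>
+
+   void NfsWriteSample(void)
+   {
+       /* /nfs is the local mount point of the remote shared directory. */
+       int fd = open("/nfs/client_file.txt", O_CREAT | O_RDWR, 0644);
+       if (fd < 0) {
+           return;
+       }
+       (void)write(fd, "written from the NFS client\n", 28);
+       (void)close(fd);
+   }
+   ```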
+
+ >  **NOTE**
+ >
+ > Currently, the NFS client supports some NFS v3 specifications. Therefore, the NFS client is not fully compatible with all types of NFS servers. You are advised to use the Linux NFS server to perform the development.
diff --git a/en/device-dev/kernel/kernel-small-bundles-fs-support-procfs.md b/en/device-dev/kernel/kernel-small-bundles-fs-support-procfs.md
index a9b031e9f72b535f22f3083491c5b76c08459e3f..261eae927bd78029daead0a19c56aca175a8623e 100644
--- a/en/device-dev/kernel/kernel-small-bundles-fs-support-procfs.md
+++ b/en/device-dev/kernel/kernel-small-bundles-fs-support-procfs.md
@@ -1,28 +1,32 @@
# procfs
-## Basic Concepts
+## Basic Concepts
-The proc filesystem \(procfs\) is a virtual file system that displays process or other system information in a file-like structure. It is more convenient to obtain system information in file operation mode than API calling mode.
+The proc filesystem (procfs) is a virtual file system that displays process or other system information in a file-like structure. It is more convenient to obtain system information through file operations than through API calls.
-## Working Principles
-In the OpenHarmony kernel, procfs is automatically mounted to the **/proc** directory during startup. Only the kernel module can create file nodes to provide the query service.
+## Working Principles
-## Development Guidelines
+In the OpenHarmony kernel, procfs is automatically mounted to the **/proc** directory during startup. Only the kernel module can create file nodes to provide the query service.
-To create a procfs file, you need to use **ProcMkdir** to create a directory and use **CreateProcEntry** to create a file. The development of the file node function is to hook the read and write functions to the file created by **CreateProcEntry**. When the procfs file is read or written, the hooked functions will be called to implement custom functions.
-### Development Example
+## Development Guidelines
-The following describes how to create the **/proc/hello/world** file to implement the following functions:
+To create a procfs file, use **ProcMkdir** to create a directory and **CreateProcEntry** to create a file. Then, implement the file node functions by hooking custom read and write functions to the file created by **CreateProcEntry**. When the procfs file is read or written, the hooked functions are called to implement the custom behavior.
-1. Create a file in **/proc/hello/world**.
+
+### Development Example
+
+The following describes how to create the **/proc/hello/world** file to implement the following functions:
+
+1. Create a file in **/proc/hello/world**.
2. Read the file. When the file is read, "HelloWorld!" is returned.
3. Write the file and print the data written in the file.
+
```
#include "proc_fs.h"
@@ -48,7 +52,7 @@ static const struct ProcFileOperations HELLO_WORLD_OPS = {
void HelloWorldInit(void)
{
- /* Create the hello directory.*/
+ /* Create the hello directory. */
struct ProcDirEntry *dir = ProcMkdir("hello", NULL);
if (dir == NULL) {
PRINT_ERR("create dir failed!\n");
@@ -69,7 +73,8 @@ void HelloWorldInit(void)
**Verification**
-After the OS startup, run the following command in the shell:
+After the OS startup, run the following commands in the shell:
+
```
OHOS # cat /proc/hello/world
@@ -77,4 +82,3 @@ OHOS # Hello World!
OHOS # echo "yo" > /proc/hello/world
OHOS # your input is: yo
```
-
diff --git a/en/device-dev/kernel/kernel-small-bundles-fs-support-ramfs.md b/en/device-dev/kernel/kernel-small-bundles-fs-support-ramfs.md
index 975baff8c25166e4e9afa703c4208aa03af5d066..ee785aeffd5fa016fe4a605183d68324aaff73dc 100644
--- a/en/device-dev/kernel/kernel-small-bundles-fs-support-ramfs.md
+++ b/en/device-dev/kernel/kernel-small-bundles-fs-support-ramfs.md
@@ -1,60 +1,43 @@
# Ramfs
-## Basic Concepts
-Ramfs is a RAM-based file system whose size can be dynamically adjusted. Ramfs does not have a backing store. Directory entries and page caches are allocated when files are written into ramfs. However, data is not written back to any other storage medium. This means that data will be lost after a power outage.
-
-## Working Principles
-
-Ramfs stores all files in RAM, and read/write operations are performed in RAM. Ramfs is generally used to store temporary data or data that needs to be frequently modified, such as the **/tmp** and **/var** directories. Using ramfs reduces the read/write loss of the memory and improves the data read/write speed.
-
-## Development Guidelines
+## Basic Concepts
+
+Ramfs is a RAM-based file system whose size can be dynamically adjusted. Ramfs does not have a backing store. Directory entries and page caches are allocated when files are written into ramfs. However, data is not written back to any other storage medium. This means that data will be lost after a power outage.
+
+## Working Principles
+
+Ramfs stores all files in RAM, and read/write operations are performed in RAM. Ramfs is generally used to store temporary data or data that needs to be frequently modified, such as the **/tmp** and **/var** directories. Using ramfs reduces read/write wear on storage and improves the data read/write speed.
+
+## Development Guidelines
+
Mount:
-
```
mount(NULL, "/dev/shm", "ramfs", 0, NULL)
```
-
-Create a directory:
-
+Create a directory:
```
mkdir(pathname, mode)
```
-
Create a file:
-
```
open(pathname, O_NONBLOCK | O_CREAT | O_RDWR, mode)
```
-
-Read a directory:
-
+Read a directory:
```
dir = opendir(pathname)
ptr = readdir(dir)
closedir(dir)
```
-
-Delete a file:
-
+Delete a file:
```
unlink(pathname)
```
-
Delete a directory:
-
```
rmdir(pathname)
```
-
-Unmount:
-
+Unmount:
```
umount("/dev/shm")
```
-
-> **CAUTION:**
->- A ramfs file system can be mounted only once. Once mounted to a directory, it cannot be mounted to other directories.
->- Ramfs is under debugging and disabled by default. Do not use it in formal products.
-
+>  **CAUTION**
+> - A ramfs file system can be mounted only once. Once mounted to a directory, it cannot be mounted to other directories.
+>
+> - Ramfs is under debugging and disabled by default. Do not use it in formal products.
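+
+The following minimal sketch strings the preceding operations together; the directory and file names are illustrative.
+
+```
+#include <stdio.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <sys/mount.h>
+#include <sys/stat.h>
+
+void RamfsSample(void)
+{
+    int fd;
+
+    /* Mount ramfs to /dev/shm. */
+    if (mount(NULL, "/dev/shm", "ramfs", 0, NULL) != 0) {
+        printf("mount ramfs failed\n");
+        return;
+    }
+
+    /* Create a directory and a temporary file, then write to the file. */
+    (void)mkdir("/dev/shm/tmpdir", 0755);
+    fd = open("/dev/shm/tmpdir/test", O_NONBLOCK | O_CREAT | O_RDWR, 0644);
+    if (fd >= 0) {
+        (void)write(fd, "ramfs data\n", 11);
+        (void)close(fd);
+    }
+
+    /* Clean up; all data is lost once the file system is unmounted or powered off. */
+    (void)unlink("/dev/shm/tmpdir/test");
+    (void)rmdir("/dev/shm/tmpdir");
+    (void)umount("/dev/shm");
+}
+```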
diff --git a/en/device-dev/kernel/kernel-small-debug-shell-guide.md b/en/device-dev/kernel/kernel-small-debug-shell-guide.md
index d20dd2abf4420eb5f115171aafaa697108f38ed7..3bb07ddf59843ff76a2b7f4472ecc9a089f99421 100644
--- a/en/device-dev/kernel/kernel-small-debug-shell-guide.md
+++ b/en/device-dev/kernel/kernel-small-debug-shell-guide.md
@@ -1,164 +1,87 @@
-# Shell Command Development Guidelines
-
-
-## Development Guidelines
+# Shell Command Development
You can perform the following operations to add shell commands:
-1. Include the following header files:
-
- ```
- #include "shell.h"
- #include "shcmd.h"
- ```
-
-2. Register commands. You can register commands either statically or dynamically when the system is running. In most cases, static registration is widely used by common system commands, and dynamic registration is widely used by user commands.
-
- 1. Static registration:
-
- 1. Register a command using a macro.
-
- The prototype of the macro is as follows:
-
- ```
- SHELLCMD_ENTRY(l, cmdType, cmdKey, paraNum, cmdHook)
- ```
-
- **Table 1** Parameters of the SHELLCMD\_ENTRY macro
-
-
- Parameter
- |
- Description
- |
-
-
- l
- |
- Specifies the global variable name passed in static registration. Note that the name cannot be the same as other symbol names in the system.
- |
-
- cmdType
- |
- Specifies the command type, which can be any of the following:
- CMD_TYPE_EX: does not support standard command parameters and will mask the command keywords you entered. For example, if you enter ls /ramfs, only /ramfs will be passed to the registration function, and ls will be masked.
- CMD_TYPE_STD: supports standard command parameters. All the characters you entered will be passed to the registration function after being parsed.
-
- |
-
- cmdKey
- |
- Specifies the command keyword, which is the name used to access a shell function.
- |
-
- paraNum
- |
- Specifies the maximum number of input parameters in the execution function to be called. This parameter is not supported currently.
- |
-
- cmdHook
- |
- Specifies the address of the execution function, that is, the function executed by the command.
- |
-
-
-
-
- Example:
-
- ```
- SHELLCMD_ENTRY(ls_shellcmd, CMD_TYPE_EX, "ls", XARGS, (CMD_CBK_FUNC)osShellCmdLs)
- ```
-
- 2. Add options to the **build/mk/liteos\_tables\_ldflags.mk** file.
-
- For example, when registering the **ls** command, add **-uls\_shellcmd** to the **build/mk/liteos\_tables\_ldflags.mk** file. **-u** is followed by the first parameter of **SHELLCMD\_ENTRY**.
-
- 2. Dynamic registration:
-
- The prototype of the function to register is as follows:
-
- ```
- UINT32 osCmdReg(CmdT ype cmdType, CHAR *cmdKey, UINT32 paraNum, CmdCallBackFunc cmdProc)
- ```
-
- **Table 2** Parameters of UINT32 osCmdReg
-
-
- Parameter
- |
- Description
- |
-
-
- cmdType
- |
- Specifies the command type, which can be any of the following:
- CMD_TYPE_EX: does not support standard command parameters and will mask the command keywords you entered. For example, if you enter ls /ramfs, only /ramfs will be passed to the registration function, and ls will be masked.
- CMD_TYPE_STD: supports standard command parameters. All the characters you entered will be passed to the registration function after being parsed.
-
- |
-
- cmdKey
- |
- Specifies the command keyword, which is the name used to access a shell function.
- |
-
- paraNum
- |
- Specifies the maximum number of input parameters in the execution function to be called. This parameter is not supported currently. The default value is XARGS(0xFFFFFFFF).
- |
-
- cmdHook
- |
- Specifies the address of the execution function, that is, the function executed by the command.
- |
-
-
-
-
- Example:
-
- ```
- osCmdReg(CMD_TYPE_EX, "ls", XARGS, (CMD_CBK_FUNC)osShellCmdLs)
- ```
+1. Include header files.
- > **NOTE:**
- >The command keyword must be unique. That is, two different commands cannot share the same command keyword. Otherwise, only one command is executed.
- >When executing user commands sharing the same keyword, the shell executes only the first command in the **help** commands.
+
+ ```
+ #include "shell.h"
+ #include "shcmd.h"
+ ```
-3. Use the following function prototype to add built-in commands:
+2. Register commands.
- ```
- UINT32 osShellCmdLs(UINT32 argc, CHAR **argv)
- ```
+ You can register commands either statically or dynamically (with the system running). Generally, common system commands are registered statically, and user commands are registered dynamically.
- **Table 3** Parameters of osShellCmdLs
+ - Static registration
-
- Parameter
- |
- Description
- |
-
-
- argc
- |
- Specifies the number of parameters in the shell command.
- |
-
- argv
- |
- Specifies a pointer array, where each element points to a string. You can determine whether to pass the command keyword to the registration function by specifying the command type.
- |
-
-
-
+ 1. Register a command using a macro.
-4. Enter the shell command in either of the following methods:
- - Enter the shell command in a serial port tool.
+ The prototype of the macro is as follows:
- - Enter the shell command in the Telnet tool. For details, see [telnet](kernel-small-debug-shell-net-telnet.md).
+ ```
+ SHELLCMD_ENTRY(l, cmdType, cmdKey, paraNum, cmdHook)
+ ```
+   **Table 1** SHELLCMD_ENTRY parameters
+
+   | Parameter| Description|
+   | -------- | -------- |
+   | l | Specifies the global variable name passed in static registration. Note that the name cannot be the same as other symbol names in the system.|
+   | cmdType | Specifies the command type, which can be any of the following:<br>**CMD_TYPE_EX**: does not support standard command parameters and will mask the command keywords you entered. For example, if you enter **ls /ramfs**, only **/ramfs** will be passed to the registration function and **ls** will be masked.<br>**CMD_TYPE_STD**: supports standard command parameters. All the characters you entered will be passed to the registration function after being parsed.|
+   | cmdKey | Specifies the command keyword, which is the name used to access a shell function.|
+   | paraNum | Specifies the maximum number of input parameters in the execution function to be called. This parameter is not supported currently.|
+   | cmdHook | Specifies the address of the execution function, that is, the function executed by the command.|
+
+   Example:
+
+ ```
+ SHELLCMD_ENTRY(ls_shellcmd, CMD_TYPE_EX, "ls", XARGS, (CMD_CBK_FUNC)osShellCmdLs)
+ ```
+
+
+ 2. Add options to the **build/mk/liteos_tables_ldflags.mk** file.
+
+ For example, when registering the **ls** command, add **-uls_shellcmd** to the **build/mk/liteos_tables_ldflags.mk** file. **-u** is followed by the first parameter of **SHELLCMD_ENTRY**.
+
+ - Dynamic registration
+
+ The prototype of the function to register is as follows:
+
+ ```
+      UINT32 osCmdReg(CmdType cmdType, CHAR *cmdKey, UINT32 paraNum, CmdCallBackFunc cmdProc)
+ ```
+      **Table 2** UINT32 osCmdReg parameters
+
+      | Parameter| Description|
+      | -------- | -------- |
+      | cmdType | Specifies the command type, which can be any of the following:<br>**CMD_TYPE_EX**: does not support standard command parameters and will mask the command keywords you entered. For example, if you enter **ls /ramfs**, only **/ramfs** will be passed to the registration function, and **ls** will be masked.<br>**CMD_TYPE_STD**: supports standard command parameters. All the characters you entered will be passed to the registration function after being parsed.|
+      | cmdKey | Specifies the command keyword, which is the name used to access a shell function.|
+      | paraNum | Specifies the maximum number of input parameters in the execution function to be called. This parameter is not supported currently. The default value is **XARGS(0xFFFFFFFF)**.|
+      | cmdHook | Specifies the address of the execution function, that is, the function executed by the command.|
+
+ Example:
+ ```
+ osCmdReg(CMD_TYPE_EX, "ls", XARGS, (CMD_CBK_FUNC)osShellCmdLs)
+ ```
+   >  **NOTE**
+ > The command keyword must be unique. That is, two different commands cannot share the same command keyword. Otherwise, only one command is executed. When executing user commands sharing the same keyword, the shell executes only the first command in the **help** commands.
+
+
+3. Use the following function prototype to add built-in commands (a combined sketch of registration and a handler follows this procedure):
+
+ ```
+ UINT32 osShellCmdLs(UINT32 argc, CHAR **argv)
+ ```
+
+ **Table 3** osShellCmdLs parameters
+
+ | Parameter| Description|
+ | -------- | -------- |
+ | argc | Specifies the number of parameters in the shell command.|
+ | argv | Specifies a pointer array, where each element points to a string. You can determine whether to pass the command keyword to the registration function by specifying the command type.|
+
+4. Enter the shell command in either of the following methods:
+
+ - Enter the shell command using a serial port tool.
+ - Enter the shell command using the Telnet tool. For details about how to use Telnet, see [telnet](../kernel/kernel-small-debug-shell-net-telnet.md).
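+
+Putting the steps together, the following sketch dynamically registers a command and implements its handler. The command name **hello**, the function names, and the use of **PRINTK** are illustrative assumptions; registration uses the **osCmdReg** prototype described above.
+
+```
+#include "shell.h"
+#include "shcmd.h"
+#include "los_printf.h"
+
+/* Handler executed when the "hello" command is entered in the shell. */
+UINT32 OsShellCmdHello(UINT32 argc, CHAR **argv)
+{
+    UINT32 i;
+
+    PRINTK("hello shell!\n");
+    /* With CMD_TYPE_EX, argv holds only the arguments that follow the keyword. */
+    for (i = 0; i < argc; i++) {
+        PRINTK("argv[%u]: %s\n", i, argv[i]);
+    }
+    return 0;
+}
+
+/* Register the command dynamically, for example during module initialization. */
+VOID HelloCmdInit(VOID)
+{
+    (VOID)osCmdReg(CMD_TYPE_EX, "hello", XARGS, (CMD_CBK_FUNC)OsShellCmdHello);
+}
+```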
diff --git a/en/device-dev/kernel/kernel-small-debug-shell-magickey.md b/en/device-dev/kernel/kernel-small-debug-shell-magickey.md
index 95dd7f69f0160ae301c51a210ddb51c3c357728b..80248ba4e5fbc59fbef823ca8b34e8584709c243 100644
--- a/en/device-dev/kernel/kernel-small-debug-shell-magickey.md
+++ b/en/device-dev/kernel/kernel-small-debug-shell-magickey.md
@@ -1,42 +1,36 @@
# Magic Key
-## When to Use
+## When to Use
-When the system does not respond, you can use the magic key to check whether the system is locked and interrupted \(the magic key also does not respond\) or view the system task running status.
+When the system does not respond, you can use the magic key function to check whether the system is locked up with interrupts disabled (in that case, the magic key does not respond either) or to view the system task running status.
-If an interrupt is responded, you can use the magic key to check the task CPU usage \(**cpup**\) and find out the task with the highest CPU usage. Generally, the task with a higher priority preempts the CPU resources.
+If interrupts are still being serviced, you can use the magic key to check the task CPU usage (**cpup**) and find the task with the highest CPU usage. Generally, a task with a higher priority preempts the CPU resources.
-## How to Use
-1. Configure the macro **LOSCFG\_ENABLE\_MAGICKEY**.
+## How to Use
-The magic key depends on the **LOSCFG\_ENABLE\_MAGICKEY** macro. Before using the magic key, select **Enable MAGIC KEY** on **menuconfig**.
+1. Configure the macro **LOSCFG_ENABLE_MAGICKEY**.
-**Enable MAGIC KEY**: **Debug** ---\> **Enable MAGIC KEY**
+ The magic key depends on the **LOSCFG_ENABLE_MAGICKEY** macro. Before using the magic key, select **Enable MAGIC KEY** (**Debug** ---> **Enable MAGIC KEY**) on **menuconfig**. The magic key cannot be used if this option is disabled.
-The magic key cannot be used if this macro is disabled.
+ >  **NOTE**
+ >
+   > On **menuconfig**, you can move the cursor to **LOSCFG_ENABLE_MAGICKEY** and enter a question mark (?) to view help information.
+
+2. Press **Ctrl+R** to enable the magic key.
-> **NOTE:**
->On **menuconfig**, you can move the cursor to **LOSCFG\_ENABLE\_MAGICKEY** and type a question mark \(?\) to view help information.
+ When the UART or USB-to-virtual serial port is connected, press **Ctrl+R**. If "Magic key on" is displayed, the magic key is enabled. To disable it, press **Ctrl+R** again. If "Magic key off" is displayed, the magic key is disabled.
-2. Press **Ctrl+R** to enable the magic key.
+ The functions of the magic key are as follows:
-When the UART or USB-to-virtual serial port is connected, press **Ctrl+R**. If "Magic key on" is displayed, the magic key is enabled.
+ - **Ctrl+Z**: displays help information about the related magic keys.
-To disable the magic key, press **Ctrl+R** again. If "Magic key off" is displayed, the magic key is disabled.
+ - **Ctrl+T**: displays task information.
-You can use the magic key combinations as follows:
-
-- **Ctrl+Z**: displays help information about the related magic keys.
-
-- **Ctrl+T**: displays task information.
-
-- **Ctrl+P**: allows the system to proactively enter the panic state. After the panic-related information is printed, the system is suspended.
-
-- **Ctrl+E**: Checks the integrity of the memory pool. If an error is detected, the system displays an error message. If no error is detected, the system displays "system memcheck over, all passed!".
-
-
-> **NOTICE:**
->If magic key is enabled, when special characters need to be entered through the UART or USB-to-virtual serial port, avoid using characters the same as the magic keys. Otherwise, the magic key may be triggered by mistake, causing errors in the original design.
+ - **Ctrl+P**: allows the system to proactively enter the panic state. After the panic-related information is printed, the system is suspended.
+   - **Ctrl+E**: checks the integrity of the memory pool. If an error is detected, the system displays an error message. If no error is detected, the system displays "system memcheck over, all passed!".
+
+ >  **NOTICE**
+ > If magic key is enabled, when special characters need to be entered through the UART or USB-to-virtual serial port, avoid using characters the same as the magic keys. Otherwise, the magic key may be triggered by mistake, causing errors in the original design.
diff --git a/en/device-dev/kernel/kernel-small-debug-shell-overview.md b/en/device-dev/kernel/kernel-small-debug-shell-overview.md
index 5ac89f7eb900197534dca3a3d73846a9bdde0b6f..7c1ce9d0e6e8d0caf21fc5bd38d7c4f976de65eb 100644
--- a/en/device-dev/kernel/kernel-small-debug-shell-overview.md
+++ b/en/device-dev/kernel/kernel-small-debug-shell-overview.md
@@ -1,35 +1,34 @@
-# Introduction to the Shell
+# Shell
The shell provided by the OpenHarmony kernel supports commonly used debugging commands. You can also add and customize commands to the shell of the OpenHarmony kernel to address your service needs. The common debugging commands include the following:
-- System commands: commands used to query information, such as system tasks, semaphores, system software timers, CPU usage, and interrupts.
-- File commands: commands used to manage files and directories, such as **ls** and **cd**.
+- System commands: commands used to query information, such as system tasks, semaphores, system software timers, CPU usage, and interrupts.
-- Network commands: commands used to query the IP addresses of other devices connected to the development board, querying the IP address of the local device, testing network connectivity, and setting the access point \(AP\) and station \(STA\) modes of the development board.
+- File commands: commands used to manage files and directories, such as **ls** and **cd**.
- For details about how to add a command, see [Shell Command Development Guidelines](kernel-small-debug-shell-guide.md) and [Shell Command Programming Example](kernel-small-debug-shell-build.md).
+- Network commands: commands used to query the IP addresses of other devices connected to the development board, querying the IP address of the local device, testing network connectivity, and setting the access point (AP) and station (STA) modes of the development board.
+ For details about the process of adding commands, see [Shell Command Development](../kernel/kernel-small-debug-shell-guide.md) and [Shell Command Programming Example](../kernel/kernel-small-debug-shell-build.md).
-## Important Notes
-Note the following when using the shell:
+ **Precautions**
-- You can use the **exec** command to run executable files.
-- The shell supports English input by default. To delete the Chinese characters entered in UTF-8 format, press the backspace key for three times.
+Note the following when using shell:
-- When entering shell commands, file names, and directory names, you can press **Tab** to enable automatic completion. If there are multiple completions, multiple items are printed based on the same characters they have. If more than 24 lines of completions are available, the system displays the message "Display all num possibilities?\(y/n\)", asking you to determine whether to print all items. You can enter **y** to print all items or enter **n** to exit the printing. If more than 24 lines are printed after your selection, the system displays "--More--". In this case, you can press **Enter** to continue the printing or press **q** \(or **Ctrl+c**\) to exit.
+- You can use the **exec** command to run executable files.
-- The shell working directory is separated from the system working directory. You can run commands such as **cd** and **pwd** on the shell to perform operations on the shell working directory, and run commands such as **chdir** and **getcwd** to perform operations on the system working directory. Pay special attention when an input parameter in a file system operation command is a relative path.
+- The shell supports English input by default. To delete the Chinese characters entered in UTF-8 format, press the backspace key for three times.
-- Before using network shell commands, you need to call the **tcpip\_init** function to initialize the network and set up the Telnet connection. By default, the kernel does not call **tcpip\_init**.
+- When entering shell commands, file names, and directory names, you can press **Tab** to enable automatic completion. If there are multiple completions, multiple items are printed based on the same characters they have. If more than 24 lines of completions are available, the system displays the message "Display all num possibilities?(y/n)", asking you to determine whether to print all items. You can enter **y** to print all items or enter **n** to exit the printing. If more than 24 lines are printed after your selection, the system displays "--More--". In this case, you can press **Enter** to continue the printing or press **q** (or **Ctrl+c**) to exit.
-- You are not advised to run shell commands to perform operations on device files in the **/dev** directory, which may cause unexpected results.
+- The shell working directory is separated from the system working directory. You can run commands such as **cd** and **pwd** on the shell to perform operations on the shell working directory, and run commands such as **chdir** and **getcwd** to perform operations on the system working directory. Pay special attention when an input parameter in a file system operation command is a relative path.
-- The shell functions do not comply with the POSIX standards and are used only for debugging.
-
- > **NOTICE**
- >The shell functions are used for debugging only and can be enabled only in the Debug version \(by enabling the **LOSCFG\_DEBUG\_VERSION** configuration item using **menuconfig**\).
+- Before using network shell commands, you need to call the **tcpip_init** function to initialize the network and set up the Telnet connection. By default, the kernel does not call **tcpip_init**. A minimal initialization sketch is provided after this list.
+- You are not advised to run shell commands to perform operations on device files in the **/dev** directory, which may cause unexpected results.
+- The shell functions do not comply with the POSIX standards and are used only for debugging.
+ >  **NOTICE**
+ > The shell functions are used for debugging only and can be enabled only in the Debug version (by enabling **LOSCFG_DEBUG_VERSION** using **menuconfig**).
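+
+A minimal sketch of the network initialization mentioned in the precautions above, assuming the standard lwIP **tcpip_init** prototype (a completion callback plus a user argument); the function and callback names other than **tcpip_init** are illustrative.
+
+```
+#include "lwip/tcpip.h"
+
+/* Invoked by lwIP once the TCP/IP thread has started. */
+static void TcpipInitDone(void *arg)
+{
+    (void)arg;
+}
+
+void NetShellPrepare(void)
+{
+    /* Initialize the network stack before using network shell commands. */
+    tcpip_init(TcpipInitDone, NULL);
+}
+```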
diff --git a/en/device-dev/kernel/kernel-small-debug-trace.md b/en/device-dev/kernel/kernel-small-debug-trace.md
index b808e35d2515e9ede4def18ec35fd7a06a638d59..df41fd67f3d7d2ddc196e1ea4ddc3c10e701aa95 100644
--- a/en/device-dev/kernel/kernel-small-debug-trace.md
+++ b/en/device-dev/kernel/kernel-small-debug-trace.md
@@ -1,10 +1,12 @@
# Trace
-## Basic Concepts
+
+## Basic Concepts
Trace helps you learn about the kernel running process and the execution sequence of modules and tasks. With the information, you can better understand the code running process of the kernel and locate time sequence problems.
-## Working Principles
+
+## Working Principles
The kernel provides a hook framework to embed hooks in the main process of each module. In the initial startup phase of the kernel, the trace function is initialized and the trace handlers are registered with the hooks.
@@ -16,170 +18,117 @@ In offline mode, trace frames are stored in a circular buffer. If too many frame

-The online mode must be used with the integrated development environment \(IDE\). Trace frames are sent to the IDE in real time. The IDE parses the records and displays them in a visualized manner.
-
-## **Available APIs**
-
-### Kernel Mode
-
-The trace module of the OpenHarmony LiteOS-A kernel provides the following functions. For details about the APIs, see the [API](https://gitee.com/openharmony/kernel_liteos_a/blob/master/kernel/include/los_trace.h) reference.
-
-**Table 1** Trace module APIs
-
-
-Function
- |
-API
- |
-Description
- |
-
-
-Starting and stopping trace
- |
-LOS_TraceStart
- |
-Starts trace.
- |
-
-LOS_TraceStop
- |
-Stops trace.
- |
-
-Managing trace records
- |
-LOS_TraceRecordDump
- |
-Exports data in the trace buffer.
- |
-
-LOS_TraceRecordGet
- |
-Obtains the start address of the trace buffer.
- |
-
-LOS_TraceReset
- |
-Clears events in the trace buffer.
- |
-
-Filtering trace records
- |
-LOS_TraceEventMaskSet
- |
-Sets the event mask to trace only events of the specified modules.
- |
-
-Masking events of specified interrupt IDs
- |
-LOS_TraceHwiFilterHookReg
- |
-Registers a hook to filter out events of specified interrupt IDs.
- |
-
-Performing function instrumentation
- |
-LOS_TRACE_EASY
- |
-Performs simple instrumentation.
- |
-
-LOS_TRACE
- |
-Performs standard instrumentation.
- |
-
-
-
-
-- You can perform function instrumentation in the source code to trace specific events. The system provides the following APIs for instrumentation:
- - **LOS\_TRACE\_EASY\(TYPE, IDENTITY, params...\)** for simple instrumentation
- - You only need to insert this API into the source code.
- - **TYPE** specifies the event type. The value range is 0 to 0xF. The meaning of each value is user-defined.
- - **IDENTITY** specifies the object of the event operation. The value is of the **UIntPtr** type.
- - **Params** specifies the event parameters. The value is of the **UIntPtr** type.
-
- Example:
-
- ```
- Perform simple instrumentation for reading and writing files fd1 and fd2.
- Set TYPE to 1 for read operations and 2 for write operations.
- Insert the following to the position where the fd1 file is read:
- LOS_TRACE_EASY(1, fd1, flag, size);
- Insert the following to the position where the fd2 file is read:
- LOS_TRACE_EASY(1, fd2, flag, size);
- Insert the following to the position where the fd1 file is written:
- LOS_TRACE_EASY(2, fd1, flag, size);
- Insert the following in the position where the fd2 file is written:
- LOS_TRACE_EASY(2, fd2, flag, size);
- ```
-
- - **LOS\_TRACE\(TYPE, IDENTITY, params...\)** for standard instrumentation.
- - Compared with simple instrumentation, standard instrumentation supports dynamic event filtering and parameter tailoring. However, you need to extend the functions based on rules.
- - **TYPE** specifies the event type. You can define the event type in **enum LOS\_TRACE\_TYPE** in the header file **los\_trace.h**. For details about methods and rules for defining events, see other event types.
- - The **IDENTITY** and **Params** are the same as those of simple instrumentation.
-
- Example:
-
- ```
- 1. Set the event mask (module-level event type) in enum LOS_TRACE_MASK.
- Format: TRACE_#MOD#_FLAG (MOD indicates the module name)
- Example:
- TRACE_FS_FLAG = 0x4000
- 2. Define the event type in enum LOS_TRACE_TYPE.
- Format: #TYPE# = TRACE_#MOD#_FLAG | NUMBER
- Example:
- FS_READ = TRACE_FS_FLAG | 0; // Read files
- FS_WRITE = TRACE_FS_FLAG | 1; // Write files
- 3. Set event parameters in the #TYPE#_PARAMS(IDENTITY, parma1...) IDENTITY, ... format.
- #TYPE# is the #TYPE# defined in step 2.
- Example:
- #define FS_READ_PARAMS(fp, fd, flag, size) fp, fd, flag, size
- The parameters defined by the macro correspond to the event parameters recorded in the trace buffer. You can modify the parameters as required.
- If no parameter is specified, events of this type are not traced.
- #define FS_READ_PARAMS(fp, fd, flag, size) // File reading events are not traced.
- 4. Insert a code stub in a proper position.
- Format: LOS_TRACE(#TYPE#, #TYPE#_PARAMS(IDENTITY, parma1...))
- LOS_TRACE(FS_READ, fp, fd, flag, size); // Code stub for reading files
- The parameters following #TYPE# are the input parameter of the FS_READ_PARAMS function in step 3.
- ```
-
- > **NOTE:**
- >The trace event types and parameters can be modified as required. For details about the parameters, see **kernel\\include\\los\_trace.h**.
-
-
-
-- For **LOS\_TraceEventMaskSet\(UINT32 mask\)**, only the most significant 28 bits \(corresponding to the enable bit of the module in **LOS\_TRACE\_MASK**\) of the mask take effect and are used only for module-based tracing. Currently, fine-grained event-based tracing is not supported. For example, in **LOS\_TraceEventMaskSet\(0x202\)**, the effective mask is **0x200 \(TRACE\_QUE\_FLAG\)** and all events of the QUE module are collected. The recommended method is **LOS\_TraceEventMaskSet\(TRACE\_EVENT\_FLAG | TRACE\_MUX\_FLAG | TRACE\_SEM\_FLAG | TRACE\_QUE\_FLAG\);**.
-- To enable trace of only simple instrumentation events, set **Trace Mask** to **TRACE\_MAX\_FLAG**.
-- The trace buffer has limited capacity. When the trace buffer is full, events will be overwritten. You can use **LOS\_TraceRecordDump** to export data from the trace buffer and locate the latest records by **CurEvtIndex**.
-- The typical trace operation process includes **LOS\_TraceStart**, **LOS\_TraceStop**, and **LOS\_TraceRecordDump**.
-- You can filter out interrupt events by interrupt ID to prevent other events from being overwritten due to frequent triggering of a specific interrupt in some scenarios. You can customize interrupt filtering rules.
-
- The sample code is as follows:
-
- ```
- BOOL Example_HwiNumFilter(UINT32 hwiNum)
- {
- if ((hwiNum == TIMER_INT) || (hwiNum == DMA_INT)) {
- return TRUE;
- }
- return FALSE;
+The online mode must be used with the integrated development environment (IDE). Trace frames are sent to the IDE in real time. The IDE parses the records and displays them in a visualized manner.
+
+
+## Available APIs
+
+
+### Kernel Mode
+
+The trace module of the OpenHarmony LiteOS-A kernel provides the following APIs. For more details, see [API reference](https://gitee.com/openharmony/kernel_liteos_a/blob/master/kernel/include/los_trace.h).
+
+ **Table 1** APIs of the trace module
+
+| Category| Description|
+| -------- | -------- |
+| Starting/Stopping trace| **LOS_TraceStart**: starts trace.
**LOS_TraceStop**: stops trace. |
+| Managing trace records| **LOS_TraceRecordDump**: dumps data from the trace buffer.
**LOS_TraceRecordGet**: obtains the start address of the trace buffer.
**LOS_TraceReset**: clears events in the trace buffer. |
+| Filtering trace records| **LOS_TraceEventMaskSet**: sets the event mask to trace only events of the specified modules.|
+| Masking events of specified interrupt IDs| **LOS_TraceHwiFilterHookReg**: registers a hook to filter out events of specified interrupt IDs.|
+| Performing function instrumentation| **LOS_TRACE_EASY**: performs simple instrumentation.
**LOS_TRACE**: performs standard instrumentation. |
+
+You can perform function instrumentation in the source code to trace specific events. The system provides the following APIs for instrumentation:
+
+- **LOS_TRACE_EASY(TYPE, IDENTITY, params...)** for simple instrumentation
+
+ - You only need to insert this API into the source code.
+ - **TYPE** specifies the event type. The value range is 0 to 0xF. The meaning of each value is user-defined.
+ - **IDENTITY** specifies the object of the event operation. The value is of the **UIntPtr** type.
+ - **Params** specifies the event parameters. The value is of the **UIntPtr** type.
+ Example:
+
+ ```
+ Perform simple instrumentation for reading and writing files fd1 and fd2.
+ Set TYPE to 1 for read operations and 2 for write operations.
+ Insert the following to the position where the fd1 file is read:
+ LOS_TRACE_EASY(1, fd1, flag, size);
+ Insert the following to the position where the fd2 file is read:
+ LOS_TRACE_EASY(1, fd2, flag, size);
+ Insert the following to the position where the fd1 file is written:
+ LOS_TRACE_EASY(2, fd1, flag, size);
+ Insert the following in the position where the fd2 file is written:
+ LOS_TRACE_EASY(2, fd2, flag, size);
+ ```
+- **LOS_TRACE(TYPE, IDENTITY, params...)** for standard instrumentation.
+  - Compared with simple instrumentation, standard instrumentation supports dynamic event filtering and parameter tailoring. However, you need to extend the event definitions according to the rules described below.
+ - **TYPE** specifies the event type. You can define the event type in **enum LOS_TRACE_TYPE** in the header file **los_trace.h**. For details about methods and rules for defining events, see other event types.
+ - The **IDENTITY** and **Params** are the same as those of simple instrumentation.
+ Example:
+
+ ```
+ 1. Set the event mask (module-level event type) in enum LOS_TRACE_MASK.
+ Format: TRACE_#MOD#_FLAG (MOD indicates the module name)
+ Example:
+ TRACE_FS_FLAG = 0x4000
+ 2. Define the event type in enum LOS_TRACE_TYPE.
+ Format: #TYPE# = TRACE_#MOD#_FLAG | NUMBER
+ Example:
+ FS_READ = TRACE_FS_FLAG | 0; // Read files.
+ FS_WRITE = TRACE_FS_FLAG | 1; // Write files.
+ 3. Set event parameters in the #TYPE#_PARAMS(IDENTITY, parma1...) IDENTITY, ... format.
+ #TYPE# is the #TYPE# defined in step 2.
+ Example:
+ #define FS_READ_PARAMS(fp, fd, flag, size) fp, fd, flag, size
+ The parameters defined by the macro correspond to the event parameters recorded in the trace buffer. You can modify the parameters as required.
+ If no parameter is specified, events of this type are not traced.
+ #define FS_READ_PARAMS(fp, fd, flag, size) // File reading events are not traced.
+ 4. Insert a code stub in a proper position.
+ Format: LOS_TRACE(#TYPE#, #TYPE#_PARAMS(IDENTITY, parma1...))
+ LOS_TRACE(FS_READ, fp, fd, flag, size); // Code stub for reading files.
+ The parameters following #TYPE# are the input parameters of the FS_READ_PARAMS macro in step 3.
+ ```
+
+ >  **NOTE**
+ > The trace event types and parameters can be modified as required. For details about the parameters, see **kernel/include/los_trace.h**.
+
+For **LOS_TraceEventMaskSet(UINT32 mask)**, only the most significant 28 bits (corresponding to the enable bit of the module in **LOS_TRACE_MASK**) of the mask take effect and are used only for module-based tracing. Currently, fine-grained event-based tracing is not supported. For example, in **LOS_TraceEventMaskSet(0x202)**, the effective mask is **0x200 (TRACE_QUE_FLAG)** and all events of the QUE module are collected. The recommended method is **LOS_TraceEventMaskSet(TRACE_EVENT_FLAG | TRACE_MUX_FLAG | TRACE_SEM_FLAG | TRACE_QUE_FLAG);**.
+
+To enable trace of only simple instrumentation events, set **Trace Mask** to **TRACE_MAX_FLAG**.
+
+The trace buffer has limited capacity. When the trace buffer is full, events will be overwritten. You can use **LOS_TraceRecordDump** to export data from the trace buffer and locate the latest records by **CurEvtIndex**.
+
+The typical trace operation process includes **LOS_TraceStart**, **LOS_TraceStop**, and **LOS_TraceRecordDump**.
+
+You can filter out interrupt events by interrupt ID to prevent other events from being overwritten due to frequent triggering of a specific interrupt in some scenarios. You can customize interrupt filtering rules.
+
+Example:
+
+```
+BOOL Example_HwiNumFilter(UINT32 hwiNum)
+{
+ if ((hwiNum == TIMER_INT) || (hwiNum == DMA_INT)) {
+ return TRUE;
}
- LOS_TraceHwiFilterHookReg(Example_HwiNumFilter);
- ```
+ return FALSE;
+}
+LOS_TraceHwiFilterHookReg(Example_HwiNumFilter);
+```
+
+Interrupt events with the interrupt ID **TIMER_INT** or **DMA_INT** are not traced.
+
+### User Mode
-The interrupt events with interrupt ID of **TIMER\_INT** or **DMA\_INT** are not traced.
+The trace character device is added in **/dev/trace**. You can use **read()**, **write()**, and **ioctl()** on the device node to read, write, and control trace in user mode.
-### User Mode
+- **read()**: reads the trace data in user mode.
-The trace character device is added in **/dev/trace**. You can use **read\(\)**, **write\(\)**, and **ioctl\(\)** on the device node to read, write, and control trace in user mode.
+- **write()**: writes an event in user mode.
-- **read\(\)**: reads the trace data in user mode.
-- **write\(\)**: writes an event in user mode.
-- **ioctl\(\)**: performs user-mode trace operations, including:
+- **ioctl()**: performs user-mode trace operations, including:
+
```
#define TRACE_IOC_MAGIC 'T'
#define TRACE_START _IO(TRACE_IOC_MAGIC, 1)
@@ -189,134 +138,77 @@ The trace character device is added in **/dev/trace**. You can use **read\(\)*
#define TRACE_SET_MASK _IO(TRACE_IOC_MAGIC, 5)
```
-The operations specified by the input parameter of **ioctl\(\)** correspond to **LOS\_TraceStart**, **LOS\_TraceStop**, **LOS\_TraceReset**, **LOS\_TraceRecordDump**, and **LOS\_TraceEventMaskSet**, respectively.
+The operations specified by the input parameter of **ioctl()** correspond to **LOS_TraceStart**, **LOS_TraceStop**, **LOS_TraceReset**, **LOS_TraceRecordDump**, and **LOS_TraceEventMaskSet**, respectively.
+
+For details, see [User-Mode Development Example](kernel-small-debug-trace.md#user-mode).
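+
+A minimal user-mode sketch of this flow is shown below. The **TRACE_STOP** numbering is assumed from the ordering described above; check the trace header for the exact definitions.
+
+```
+#include <fcntl.h>
+#include <stdio.h>
+#include <sys/ioctl.h>
+#include <unistd.h>
+
+#define TRACE_IOC_MAGIC  'T'
+#define TRACE_START      _IO(TRACE_IOC_MAGIC, 1)
+#define TRACE_STOP       _IO(TRACE_IOC_MAGIC, 2)   /* numbering assumed from the order above */
+
+int TraceUserDemo(void)
+{
+    char buf[512] = {0};
+    int fd = open("/dev/trace", O_RDWR);
+    if (fd < 0) {
+        return -1;
+    }
+    ioctl(fd, TRACE_START);                   /* corresponds to LOS_TraceStart */
+    usleep(1000);                             /* run the code to be traced here */
+    ioctl(fd, TRACE_STOP);                    /* corresponds to LOS_TraceStop */
+    ssize_t len = read(fd, buf, sizeof(buf)); /* read trace data in user mode */
+    printf("read %d bytes of trace data\n", (int)len);
+    close(fd);
+    return 0;
+}
+```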
+
-For more details, see [User-mode Programming Example](https://gitee.com/openharmony/docs/blob/70744e1e0e34d66e11108a00c8db494eea49dd02/en/device-dev/kernel/kernel-small-debug-trace.md#section4.2.2).
+## Development Guidelines
-## Development Guidelines
-### Kernel-mode Development Process
+### Kernel-Mode Development Process
The typical trace process is as follows:
-1. Configure the macro related to the trace module.
-
- Configure the trace macro **LOSCFG\_KERNEL\_TRACE**, which is disabled by default. Run the **make update\_config** command in the **kernel/liteos\_a** directory, choose **Kernel** \> **Enable Hook Feature**, and set **Enable Trace Feature** to **YES**.
-
-
- Configuration
- |
- menuconfig Option
- |
- Description
- |
- Value
- |
-
-
- LOSCFG_KERNEL_TRACE
- |
- Enable Trace Feature
- |
- Specifies whether to enable the trace feature.
- |
- YES/NO
- |
-
- LOSCFG_RECORDER_MODE_OFFLINE
- |
- Trace work mode ->Offline mode
- |
- Specifies whether to enable the online trace mode.
- |
- YES/NO
- |
-
- LOSCFG_RECORDER_MODE_ONLINE
- |
- Trace work mode ->Online mode
- |
- Specifies whether to enable the offline trace mode.
- |
- YES/NO
- |
-
- LOSCFG_TRACE_CLIENT_INTERACT
- |
- Enable Trace Client Visualization and Control
- |
- Specifies whether to enable interaction with Trace IDE (dev tools), including data visualization and process control.
- |
- YES/NO
- |
-
- LOSCFG_TRACE_FRAME_CORE_MSG
- |
- Enable Record more extended content ->Record cpuid, hardware interrupt status, task lock status
- |
- Specifies whether to enable recording of the CPU ID, interruption state, and lock task state.
- |
- YES/NO
- |
-
- LOSCFG_TRACE_FRAME_EVENT_COUNT
- |
- Enable Record more extended content ->Record event count, which indicate the sequence of happend events
- |
- Specifies whether to enables recording of the event sequence number.
- |
- YES/NO
- |
-
- LOSCFG_TRACE_FRAME_MAX_PARAMS
- |
- Record max params
- |
- Specifies the maximum number of parameters for event recording.
- |
- INT
- |
-
- LOSCFG_TRACE_BUFFER_SIZE
- |
- Trace record buffer size
- |
- Specifies the trace buffer size.
- |
- INT
- |
-
-
-
-
-2. \(Optional\) Preset event parameters and stubs \(or use the default event parameter settings and event stubs\).
-3. \(Optional\) Call **LOS\_TraceStop** to stop trace and call **LOS\_TraceReset** to clear the trace buffer. \(Trace is started by default.\)
-4. \(Optional\) Call **LOS\_TraceEventMaskSet** to set the event mask for trace \(only the interrupts and task events are enabled by default\). For details about the event mask, see **LOS\_TRACE\_MASK** in **los\_trace.h**.
-5. Call **LOS\_TraceStart** at the start of the code where the event needs to be traced.
-6. Call **LOS\_TraceStop** at the end of the code where the event needs to be traced.
-7. Call **LOS\_TraceRecordDump** to output the data in the buffer. \(The input parameter of the function is of the Boolean type. The value **FALSE** means to output data in the specified format, and the value **TRUE** means to output data to Trace IDE.\)
-
-The methods in steps 3 to 7 are encapsulated with shell commands. After the shell is enabled, the corresponding commands can be executed. The mapping is as follows:
-
-- LOS\_TraceReset —— trace\_reset
-- LOS\_TraceEventMaskSet —— trace\_mask
-- LOS\_TraceStart —— trace\_start
-- LOS\_TraceStop —— trace\_stop
-- LOS\_TraceRecordDump —— trace\_dump
-
-## Kernel-mode Programming Example
+1. Configure the macro related to the trace module.
+
+ Configure the macro **LOSCFG_KERNEL_TRACE**, which is disabled by default. Run the **make update_config** command in the **kernel/liteos_a** directory, choose **Kernel** > **Enable Hook Feature**, and set **Enable Trace Feature** to **YES**.
+
+| Configuration Item | menuconfig Option| Description| Value|
+| -------- | -------- | -------- | -------- |
+| LOSCFG_KERNEL_TRACE | Enable Trace Feature | Specifies whether to enable the trace feature.| YES/NO |
+| LOSCFG_RECORDER_MODE_OFFLINE | Trace work mode ->Offline mode | Specifies whether to enable the offline trace mode.| YES/NO |
+| LOSCFG_RECORDER_MODE_ONLINE | Trace work mode ->Online mode | Specifies whether to enable the online trace mode.| YES/NO |
+| LOSCFG_TRACE_CLIENT_INTERACT | Enable Trace Client Visualization and Control | Specifies whether to enable interaction with Trace IDE (dev tools), including data visualization and process control.| YES/NO |
+| LOSCFG_TRACE_FRAME_CORE_MSG | Enable Record more extended content
->Record cpuid, hardware interrupt status, task lock status | Specifies whether to enable recording of the CPU ID, interrupt status, and task lock status.| YES/NO |
+| LOSCFG_TRACE_FRAME_EVENT_COUNT | Enable Record more extended content
->Record event count, which indicate the sequence of happend events | Specifies whether to enable recording of the event sequence number.| YES/NO |
+| LOSCFG_TRACE_FRAME_MAX_PARAMS | Record max params | Specifies the maximum number of parameters for event recording.| INT |
+| LOSCFG_TRACE_BUFFER_SIZE | Trace record buffer size | Specifies the trace buffer size.| INT |
+
+2. (Optional) Preset event parameters and stubs (or use the default event parameter settings and event stubs).
+
+3. (Optional) Call **LOS_TraceStop** to stop trace and call **LOS_TraceReset** to clear the trace buffer. (Trace is started by default.)
+
+4. (Optional) Call **LOS_TraceEventMaskSet** to set the event mask for trace (only the interrupts and task events are enabled by default). For details about the event mask, see **LOS_TRACE_MASK** in **los_trace.h**.
+
+5. Call **LOS_TraceStart** at the start of the code where the event needs to be traced.
+
+6. Call **LOS_TraceStop** at the end of the code where the event needs to be traced.
+
+7. Call **LOS_TraceRecordDump** to output the data in the buffer. (The input parameter of the function is of the Boolean type. The value **FALSE** means to output data in the specified format, and the value **TRUE** means to output data to Trace IDE.)
+
+The methods in steps 3 to 7 are encapsulated with shell commands. You can run these commands on shell. The mappings between the functions and commands are as follows:
+
+- LOS_TraceReset —— trace_reset
+
+- LOS_TraceEventMaskSet —— trace_mask
+
+- LOS_TraceStart —— trace_start
+
+- LOS_TraceStop —— trace_stop
+
+- LOS_TraceRecordDump —— trace_dump
+
+
+### Kernel-Mode Development Example
This example implements the following:
-1. Create a trace task.
-2. Set the event mask.
-3. Start trace.
-4. Stop trace.
-5. Output trace data in the specified format.
+1. Create a trace task.
+
+2. Set the event mask.
+
+3. Start trace.
+
+4. Stop trace.
+
+5. Output trace data in the specified format.
+
+
+### Kernel-Mode Sample Code
-## Kernel-mode Sample Code
+The sample code is as follows:
-The code is as follows:
```
#include "los_trace.h"
@@ -331,21 +223,21 @@ VOID Example_Trace(VOID)
dprintf("trace start error\n");
return;
}
- /* Trigger a task switching event.*/
+ /* Trigger a task switching event. */
LOS_TaskDelay(1);
LOS_TaskDelay(1);
LOS_TaskDelay(1);
- /* Stop trace.*/
+ /* Stop trace. */
LOS_TraceStop();
LOS_TraceRecordDump(FALSE);
}
UINT32 Example_Trace_test(VOID){
UINT32 ret;
TSK_INIT_PARAM_S traceTestTask;
- /* Create a trace task. */
+ /* Create a trace task. */
memset(&traceTestTask, 0, sizeof(TSK_INIT_PARAM_S));
traceTestTask.pfnTaskEntry = (TSK_ENTRY_FUNC)Example_Trace;
- traceTestTask.pcName = "TestTraceTsk"; /* Trace task name*/
+ traceTestTask.pcName = "TestTraceTsk"; /* Test task name. */
traceTestTask.uwStackSize = 0x800;
traceTestTask.usTaskPrio = 5;
traceTestTask.uwResved = LOS_TASK_STATUS_DETACHED;
@@ -354,22 +246,24 @@ UINT32 Example_Trace_test(VOID){
dprintf("TraceTestTask create failed .\n");
return LOS_NOK;
}
- /* Trace is started by default. Therefore, you can stop trace, clear the buffer, and then restart trace. */
+ /* Trace is started by default. Therefore, you can stop trace, clear the buffer, and then start trace. */
LOS_TraceStop();
LOS_TraceReset();
- /* Enable trace of the Task module events. */
+ /* Enable trace of the Task module events. */
LOS_TraceEventMaskSet(TRACE_TASK_FLAG);
return LOS_OK;
}
LOS_MODULE_INIT(Example_Trace_test, LOS_INIT_LEVEL_KMOD_EXTENDED);
```
-## Verification
+
+### Verification
The output is as follows:
+
```
-*******TraceInfo begin*******
+***TraceInfo begin***
clockFreq = 50000000
CurEvtIndex = 7
Index Time(cycles) EventType CurTask Identity params
@@ -381,36 +275,41 @@ Index Time(cycles) EventType CurTask Identity params
5 0x36eec810 0x45 0xc 0x1 0x9 0x8 0x1f
6 0x3706f804 0x45 0x1 0x0 0x1f 0x4 0x0
7 0x37070e59 0x45 0x0 0x1 0x0 0x8 0x1f
-*******TraceInfo end*******
+***TraceInfo end***
```
The output event information includes the occurrence time, event type, task in which the event occurs, object of the event operation, and other parameters of the event.
-- **EventType**: event type. For details, see **enum LOS\_TRACE\_TYPE** in the header file **los\_trace.h**.
-- **CurrentTask**: ID of the running task.
-- **Identity**: object of the event operation. For details, see **\#TYPE\#\_PARAMS** in the header file **los\_trace.h**.
-- **params**: event parameters. For details, see **\#TYPE\#\_PARAMS** in the header file **los\_trace.h**.
+- **EventType**: event type. For details, see **enum LOS_TRACE_TYPE** in the header file **los_trace.h**.
+
+- **CurrentTask**: ID of the running task.
+
+- **Identity**: object of the event operation. For details, see **#TYPE#_PARAMS** in the header file **los_trace.h**.
+
+- **params**: event parameters. For details, see **#TYPE#_PARAMS** in the header file **los_trace.h**.
The following uses output No. 0 as an example.
+
```
Index Time(cycles) EventType CurTask Identity params
0 0x366d5e88 0x45 0x1 0x0 0x1f 0x4
```
-- **Time \(cycles\)** can be converted into time \(in seconds\) by dividing the cycles by clockFreq.
-- **0x45** indicates the task switching event. **0x1** is the ID of the task in running.
-- For details about the meanings of **Identity** and **params**, see the **TASK\_SWITCH\_PARAMS** macro.
+- **Time (cycles)** can be converted into time (in seconds) by dividing the cycles by clockFreq. For example, the **0x366d5e88** cycles in record No. 0 divided by the clockFreq of 50000000 give about 18.3 seconds.
+
+- **0x45** indicates the task switching event. **0x1** is the ID of the task in running.
+
+- For details about the meanings of **Identity** and **params**, see the **TASK_SWITCH_PARAMS** macro.
```
#define TASK_SWITCH_PARAMS(taskId, oldPriority, oldTaskStatus, newPriority, newTaskStatus) \
taskId, oldPriority, oldTaskStatus, newPriority, newTaskStatus
```
-Because of **\#TYPE\#\_PARAMS\(IDENTITY, parma1...\) IDENTITY, ...**, **Identity** is **taskId \(0x0\)** and the first parameter is **oldPriority \(0x1f\)**.
-
-> **NOTE:**
->The number of **param**s is specified by the **LOSCFG\_TRACE\_FRAME\_MAX\_PARAMS** parameter. The default value is **3**. Excess parameters are not recorded. You need to set **LOSCFG\_TRACE\_FRAME\_MAX\_PARAMS** based on service requirements.
+Because of **#TYPE#_PARAMS(IDENTITY, parma1...) IDENTITY, ...**, **Identity** is **taskId (0x0)** and the first parameter is **oldPriority (0x1f)**.
-Task 0x1 is switched to Task 0x0. The priority of task 0x1 is **0x1f**, and the state is **0x4**. The priority of the task 0x0 is **0x0**.
+>  **NOTE**
+> The number of parameters in **params** is specified by **LOSCFG_TRACE_FRAME_MAX_PARAMS**. The default value is **3**. Excess parameters are not recorded. You need to set **LOSCFG_TRACE_FRAME_MAX_PARAMS** based on service requirements.
+
+Task 0x1 is switched to Task 0x0. The priority of task 0x1 is **0x1f**, and the state is **0x4**. The priority of task 0x0 is **0x0**.
diff --git a/en/device-dev/kernel/kernel-small-memory-lms.md b/en/device-dev/kernel/kernel-small-memory-lms.md
index 74595cb951918bad29df9240c52395866cc5e853..277cbea8a28268a9c4597be980a22bb69f8f85f8 100644
--- a/en/device-dev/kernel/kernel-small-memory-lms.md
+++ b/en/device-dev/kernel/kernel-small-memory-lms.md
@@ -1,186 +1,119 @@
# LMS
-## Basic Concepts
+## Basic Concepts
-Lite Memory Sanitizer \(LMS\) is a tool used to detect memory errors on a real-time basis. LMS can detect buffer overflow, Use-After-Free \(UAF\), and double free errors in real time, and notify the operating system immediately. Together with locating methods such as Backtrace, LMS can locate the code line that causes the memory error. It greatly improves the efficiency of locating memory errors.
+Lite Memory Sanitizer (LMS) is a tool used to detect memory errors on a real-time basis. LMS can detect buffer overflow, Use-After-Free (UAF), and double free errors in real time, and notify the operating system immediately. Together with locating methods such as Backtrace, LMS can locate the code line that causes the memory error. It greatly improves the efficiency of locating memory errors.
The LMS module of the OpenHarmony LiteOS-A kernel provides the following functions:
-- Supports check of multiple memory pools.
-- Checks the memory allocated by **LOS\_MemAlloc**, **LOS\_MemAllocAlign**, and **LOS\_MemRealloc**.
-- Checks the memory when bounds-checking functions are called \(enabled by default\).
-- Checks the memory when libc frequently accessed functions, including **memset**, **memcpy**, **memmove**, **strcat**, **strcpy**, **strncat** and **strncpy**, are called.
-
-## Working Principles
-
-LMS uses shadow memory mapping to mark the system memory state. There are three states: **Accessible**, **RedZone**, and **Freed**. The shadow memory is located in the tail of the memory pool.
-
-- After memory is allocated from the heap, the shadow memory in the data area is set to the **Accessible** state, and the shadow memory in the head node area is set to the **RedZone** state.
-- When memory is released from the heap, the shadow memory of the released memory is set to the **Freed** state.
-- During code compilation, a function is inserted before the read/write instructions in the code to check the address validity. The tool checks the state value of the shadow memory that accesses the memory. If the shadow memory is in the **RedZone** statue, an overflow error will be reported. If the shadow memory is in the **Freed** state, a UAF error will be reported.
-- When memory is released, the tool checks the state value of the shadow memory at the released address. If the shadow memory is in the **RedZone** state, a double free error will be reported.
-
-## Available APIs
-
-### Kernel Mode
-
-The LMS module of the OpenHarmony LiteOS-A kernel provides the following APIs. For more details about the APIs, see the [API](https://gitee.com/openharmony/kernel_liteos_a/blob/master/kernel/include/los_lms.h) reference.
-
-**Table 1** LMS module APIs
-
-
-Function
- |
-API
- |
-Description
- |
-
-
-Adding a memory pool to be checked
- |
-LOS_LmsCheckPoolAdd
- |
-Adds the address range of a memory pool to the LMS check linked list. LMS performs a validity check when the accessed address is within the linked list. In addition, LOS_MemInit calls this API to add the initialized memory pool to the LMS check linked list by default.
- |
-
-Deleting a memory pool from the LMS check linked list
- |
-LOS_LmsCheckPoolDel
- |
-Cancels the validity check on the specified memory pool.
- |
-
-Protecting a specified memory chunk
- |
-LOS_LmsAddrProtect
- |
-Locks a memory chunk to prevent it from being read or written. Once the locked memory chunk is accessed, an error will be reported.
- |
-
-Disabling protection of a specified memory chunk
- |
-LOS_LmsAddrDisableProtect
- |
-Unlocks a memory chunk to make it readable and writable.
- |
-
-
-
-
-### User Mode
+- Supports check of multiple memory pools.
+
+- Checks the memory allocated by **LOS_MemAlloc**, **LOS_MemAllocAlign**, and **LOS_MemRealloc**.
+
+- Checks the memory when bounds-checking functions are called (enabled by default).
+
+- Checks the memory when frequently used libc functions, including **memset**, **memcpy**, **memmove**, **strcat**, **strcpy**, **strncat**, and **strncpy**, are called.
+
+
+## Working Principles
+
+LMS uses shadow memory mapping to mark the system memory state. There are three states: **Accessible**, **RedZone**, and **Freed**. The shadow memory is located in the tail of the memory pool.
+
+- After memory is allocated from the heap, the shadow memory in the data area is set to the **Accessible** state, and the shadow memory in the head node area is set to the **RedZone** state.
+
+- When memory is released from the heap, the shadow memory of the released memory is set to the **Freed** state.
+
+- During code compilation, a function is inserted before the read/write instructions in the code to check the address validity. The tool checks the state value of the shadow memory corresponding to the accessed address. If the shadow memory is in the **RedZone** state, an overflow error will be reported. If the shadow memory is in the **Freed** state, a UAF error will be reported.
+
+- When memory is released, the tool checks the state value of the shadow memory at the released address. If the shadow memory is in the **RedZone** state, a double free error will be reported.
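+
+The following minimal sketch shows code patterns that trigger each of these reports. It assumes the default system memory pool **m_aucSysMem0** and a module compiled with **-fsanitize=kernel-address**:
+
+```
+#include "los_memory.h"
+
+VOID LmsErrorPatternDemo(VOID)
+{
+    CHAR *buf = (CHAR *)LOS_MemAlloc(m_aucSysMem0, 16); /* data area marked Accessible */
+    if (buf == NULL) {
+        return;
+    }
+    buf[16] = 0;                           /* touches the RedZone: heap buffer overflow */
+    (VOID)LOS_MemFree(m_aucSysMem0, buf);  /* chunk marked Freed */
+    buf[0] = 0;                            /* access to freed memory: use-after-free */
+    (VOID)LOS_MemFree(m_aucSysMem0, buf);  /* second free: double free */
+}
+```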
+
+
+## Available APIs
+
+
+### Kernel Mode
+
+The LMS module of the OpenHarmony LiteOS-A kernel provides the following APIs. For more details, see [API reference](https://gitee.com/openharmony/kernel_liteos_a/blob/master/kernel/include/los_lms.h).
+
+ **Table 1** APIs of the LMS module
+
+| Category| API| Description|
+| -------- | -------- | -------- |
+| Adding a memory pool to be checked| LOS_LmsCheckPoolAdd | Adds the address range of a memory pool to the LMS check linked list. LMS performs a validity check when the accessed address is within the linked list. In addition, **LOS_MemInit** calls this API to add the initialized memory pool to the LMS check linked list by default.|
+| Deleting a memory pool from the LMS check linked list| LOS_LmsCheckPoolDel | Cancels the validity check on the specified memory pool.|
+| Protecting a specified memory chunk| LOS_LmsAddrProtect | Locks a memory chunk to prevent it from being read or written. Once the locked memory chunk is accessed, an error will be reported.|
+| Disabling protection of a specified memory chunk| LOS_LmsAddrDisableProtect | Unlocks a memory chunk to make it readable and writable.|
+
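+The following is a minimal kernel-mode sketch of the protection APIs. The signatures (start and end addresses) are assumed from **los_lms.h**; check the header for the exact prototypes:
+
+```
+#include "los_lms.h"
+#include "los_memory.h"
+
+VOID LmsProtectDemo(VOID)
+{
+    CHAR *buf = (CHAR *)LOS_MemAlloc(m_aucSysMem0, 32);
+    if (buf == NULL) {
+        return;
+    }
+    /* Lock the first 16 bytes: any read or write to them is reported. */
+    LOS_LmsAddrProtect((UINTPTR)buf, (UINTPTR)(buf + 16));
+    /* ... code that must not touch the locked range ... */
+    /* Unlock the range so that it can be accessed again. */
+    LOS_LmsAddrDisableProtect((UINTPTR)buf, (UINTPTR)(buf + 16));
+    (VOID)LOS_MemFree(m_aucSysMem0, buf);
+}
+```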
+
+### User Mode
The user mode provides only the LMS check library. It does not provide external APIs.
-## Development Guidelines
-### Kernel-mode Development Process
+## Development Guidelines
-The typical process for enabling LMS is as follows:
-1. Configure the macros related to the LMS module.
-
- Configure the LMS macro **LOSCFG\_KERNEL\_LMS**, which is disabled by default. Run the **make update\_config** command in the **kernel/liteos\_a** directory, choose **Kernel**, and select **Enable Lite Memory Sanitizer**.
-
-
- Macro
- |
- menuconfig Option
- |
- Description
- |
- Value
- |
-
-
- LOSCFG_KERNEL_LMS
- |
- Enable Lms Feature
- |
- Whether to enable LMS.
- |
- YES/NO
- |
-
- LOSCFG_LMS_MAX_RECORD_POOL_NUM
- |
- Lms check pool max num
- |
- Maximum number of memory pools that can be checked by LMS.
- |
- INT
- |
-
- LOSCFG_LMS_LOAD_CHECK
- |
- Enable lms read check
- |
- Whether to enable LMS read check.
- |
- YES/NO
- |
-
- LOSCFG_LMS_STORE_CHECK
- |
- Enable lms write check
- |
- Whether to enable LMS write check.
- |
- YES/NO
- |
-
- LOSCFG_LMS_CHECK_STRICT
- |
- Enable lms strict check, byte-by-byte
- |
- Whether to enable LMS byte-by-byte check.
- |
- YES/NO
- |
-
-
-
-
-2. Modify the compile script of the target module.
-
- Add "-fsanitize=kernel-address" to insert memory access checks, and add the **-O0** option to disable optimization performed by the compiler.
-
- The modifications vary depending on the compiler \(GCC or Clang\) used. The following is an example:
-
- ```
- if ("$ohos_build_compiler_specified" == "gcc") {
- cflags_c = [
- "-O0",
- "-fsanitize=kernel-address",
- ]
- } else {
- cflags_c = [
- "-O0",
- "-fsanitize=kernel-address",
- "-mllvm",
- "-asan-instrumentation-with-call-threshold=0",
- "-mllvm",
- "-asan-stack=0",
- "-mllvm",
- "-asan-globals=0",
- ]
- }
- ```
+### Kernel-Mode Development Process
-3. Recompile the code and check the serial port output. The memory problem detected will be displayed.
+The typical process for enabling LMS is as follows:
-## Kernel-mode Development Example
+1. Configure the macros related to the LMS module.
+
+ Configure the LMS macro **LOSCFG_KERNEL_LMS**, which is disabled by default. Run the **make update_config** command in the **kernel/liteos_a** directory, choose **Kernel**, and select **Enable Lite Memory Sanitizer**.
+
+ | Macro| menuconfig Option| Description| Value|
+ | -------- | -------- | -------- | -------- |
+ | LOSCFG_KERNEL_LMS | Enable Lms Feature | Whether to enable LMS.| YES/NO |
+ | LOSCFG_LMS_MAX_RECORD_POOL_NUM | Lms check pool max num | Maximum number of memory pools that can be checked by LMS.| INT |
+ | LOSCFG_LMS_LOAD_CHECK | Enable lms read check | Whether to enable LMS read check.| YES/NO |
+ | LOSCFG_LMS_STORE_CHECK | Enable lms write check | Whether to enable LMS write check.| YES/NO |
+ | LOSCFG_LMS_CHECK_STRICT | Enable lms strict check, byte-by-byte | Whether to enable LMS byte-by-byte check.| YES/NO |
+
+
+2. Modify the build script of the target module.
+
+ Add **-fsanitize=kernel-address** to insert memory access checks, and add the **-O0** option to disable optimization performed by the compiler.
+
+ The modifications vary depending on the compiler (GCC or Clang) used. The following is an example:
+
+ ```
+ if ("$ohos_build_compiler_specified" == "gcc") {
+ cflags_c = [
+ "-O0",
+ "-fsanitize=kernel-address",
+ ]
+ } else {
+ cflags_c = [
+ "-O0",
+ "-fsanitize=kernel-address",
+ "-mllvm",
+ "-asan-instrumentation-with-call-threshold=0",
+ "-mllvm",
+ "-asan-stack=0",
+ "-mllvm",
+ "-asan-globals=0",
+ ]
+ }
+ ```
+
+3. Recompile the code and check the serial port output. The memory problem detected will be displayed.
+
+
+#### Kernel-Mode Development Example
This example implements the following:
-1. Create a task for LMS.
-2. Construct a buffer overflow error and a UAF error.
-3. Add "-fsanitize=kernel-address", execute the compilation, and check the output.
+1. Create a task for LMS.
-## Kernel-mode Sample Code
+2. Construct a buffer overflow error and a UAF error.
-The code is as follows:
+3. Add "-fsanitize=kernel-address", execute the compilation, and check the output.
+
+
+#### Kernel-Mode Sample Code
+
+ The sample code is as follows:
```
#define PAGE_SIZE (0x1000U)
@@ -221,10 +154,10 @@ VOID LmsTestCaseTask(VOID)
UINT32 Example_Lms_test(VOID){
UINT32 ret;
TSK_INIT_PARAM_S lmsTestTask;
- /* Create a task for LMS. */
+ /* Create a task for LMS. */
memset(&lmsTestTask, 0, sizeof(TSK_INIT_PARAM_S));
lmsTestTask.pfnTaskEntry = (TSK_ENTRY_FUNC)LmsTestCaseTask;
- lmsTestTask.pcName = "TestLmsTsk"; /* Task name. */
+ lmsTestTask.pcName = "TestLmsTsk"; /* Test task name. */
lmsTestTask.uwStackSize = 0x800;
lmsTestTask.usTaskPrio = 5;
lmsTestTask.uwResved = LOS_TASK_STATUS_DETACHED;
@@ -238,20 +171,21 @@ UINT32 Example_Lms_test(VOID){
LOS_MODULE_INIT(Example_Lms_test, LOS_INIT_LEVEL_KMOD_EXTENDED);
```
-### Kernel-mode Verification
-The output is as follows:
+#### Kernel-Mode Verification
+
+ The output is as follows:
```
######LmsTestOsmallocOverflow start ######
-[ERR][KProcess:LmsTestCaseTask]***** Kernel Address Sanitizer Error Detected Start *****
+[ERR][KProcess:LmsTestCaseTask]* Kernel Address Sanitizer Error Detected Start *
[ERR][KProcess:LmsTestCaseTask]Heap buffer overflow error detected
[ERR][KProcess:LmsTestCaseTask]Illegal READ address at: [0x4157a3c8]
[ERR][KProcess:LmsTestCaseTask]Shadow memory address: [0x4157be3c : 4] Shadow memory value: [2]
OsBackTrace fp = 0x402c0f88
runTask->taskName = LmsTestCaseTask
runTask->taskID = 2
-*******backtrace begin*******
+***backtrace begin***
traceback fp fixed, trace using fp = 0x402c0fd0
traceback 0 -- lr = 0x400655a4 fp = 0x402c0ff8
traceback 1 -- lr = 0x40065754 fp = 0x402c1010
@@ -269,18 +203,18 @@ traceback 3 -- lr = 0x40004e14 fp = 0xcacacaca
[0x4157a3e0]: 00 00 00 00 00 00 00 00 | [0x4157be3e | 0]: 3 3
[0x4157a3e8]: 00 00 00 00 00 00 00 00 | [0x4157be3e | 4]: 3 3
[0x4157a3f0]: 00 00 00 00 00 00 00 00 | [0x4157be3f | 0]: 3 3
-[ERR][KProcess:LmsTestCaseTask]***** Kernel Address Sanitizer Error Detected End *****
+[ERR][KProcess:LmsTestCaseTask]* Kernel Address Sanitizer Error Detected End *
str[20]=0xffffffba
######LmsTestOsmallocOverflow stop ######
###### LmsTestUseAfterFree start ######
-[ERR][KProcess:LmsTestCaseTask]***** Kernel Address Sanitizer Error Detected Start *****
+[ERR][KProcess:LmsTestCaseTask]* Kernel Address Sanitizer Error Detected Start *
[ERR][KProcess:LmsTestCaseTask]Use after free error detected
[ERR][KProcess:LmsTestCaseTask]Illegal READ address at: [0x4157a3d4]
[ERR][KProcess:LmsTestCaseTask]Shadow memory address: [0x4157be3d : 2] Shadow memory value: [3]
OsBackTrace fp = 0x402c0f90
runTask->taskName = LmsTestCaseTask
runTask->taskID = 2
-*******backtrace begin*******
+***backtrace begin***
traceback fp fixed, trace using fp = 0x402c0fd8
traceback 0 -- lr = 0x40065680 fp = 0x402c0ff8
traceback 1 -- lr = 0x40065758 fp = 0x402c1010
@@ -298,35 +232,36 @@ traceback 3 -- lr = 0x40004e14 fp = 0xcacacaca
[0x4157a3e8]: ba dc cd ab c8 a3 57 41 | [0x4157be3e | 4]: 2 2
[0x4157a3f0]: 0c 1a 00 00 00 00 00 00 | [0x4157be3f | 0]: 2 3
[0x4157a3f8]: 00 00 00 00 00 00 00 00 | [0x4157be3f | 4]: 3 3
-[ERR][KProcess:LmsTestCaseTask]***** Kernel Address Sanitizer Error Detected End *****
+[ERR][KProcess:LmsTestCaseTask]* Kernel Address Sanitizer Error Detected End *
str[ 0]=0x 0
######LmsTestUseAfterFree stop ######
```
The key output information is as follows:
-- Error type:
- - Heap buffer overflow
- - UAF
+- Error type:
+ - Heap buffer overflow
+ - UAF
+
+- Incorrect operations:
+ - Illegal read
+ - Illegal write
+ - Illegal double free
-- Incorrect operations:
- - Illegal read
- - Illegal write
- - Illegal double free
+- Context:
+ - Task information (**taskName** and **taskId**)
+ - Backtrace
-- Context:
- - Task information \(**taskName** and **taskId**\)
- - Backtrace
+- Memory information of the error addresses:
+ - Memory value and the value of the corresponding shadow memory
+ - Memory address: memory value|[shadow memory address|shadow memory byte offset]: shadow memory value
+ - Shadow memory value. **0** (Accessible), **3** (Freed), **2** (RedZone), and **1** (filled value)
-- Memory information of the error addresses:
- - Memory value and the value of the corresponding shadow memory
- - Memory address: memory value|\[shadow memory address|shadow memory byte offset\]: shadow memory value
- - Shadow memory value. **0** \(Accessible\), **3** \(Freed\), **2** \(RedZone\), and **1** \(filled value\)
+### User-Mode Development Process
-### User-mode Development Process
+Add the following to the build script of the app to be checked. For details about the complete code, see **/kernel/liteos_a/apps/lms/BUILD.gn**.
-Add the following to the build script of the app to be checked. For details about the complete code, see **/kernel/liteos\_a/apps/lms/BUILD.gn**.
```
if ("$ohos_build_compiler_specified" == "gcc") {
@@ -369,16 +304,19 @@ if ("$ohos_build_compiler_specified" == "gcc") {
deps = [ "//kernel/liteos_a/kernel/extended/lms/usr:usrlmslib" ]
```
-### User-mode Development Example
+
+#### User-Mode Development Example
This example implements the following:
-1. Construct a buffer overflow error and a UAF error.
-2. Modify the build script and perform the build again.
+1. Construct a buffer overflow error and a UAF error.
+
+2. Modify the build script and perform the build again.
+
-### User-Mode Sample Code
+#### User-Mode Sample Code
-The code is as follows:
+ The sample code is as follows:
```
static void BufWriteTest(void *buf, int start, int end)
@@ -421,16 +359,17 @@ int main(int argc, char * const * argv)
}
```
-### User-mode Verification
-The output is as follows:
+#### User-Mode Verification
+
+ The output is as follows:
```
-***** Lite Memory Sanitizer Error Detected *****
+* Lite Memory Sanitizer Error Detected *
Heap buffer overflow error detected!
Illegal READ address at: [0x1f8b3edf]
Shadow memory address: [0x3d34d3ed : 6] Shadow memory value: [2]
-Accessable heap addr 0
+Accessible heap addr 0
Heap red zone 2
Heap freed buffer 3
Dump info around address [0x1f8b3edf]:
@@ -443,7 +382,7 @@ Dump info around address [0x1f8b3edf]:
[0x1f8b3ee8]: 09 00 00 00 00 00 00 00 | [0x3d34d3ee | 4]: 0 0
[0x1f8b3ef0]: 00 00 00 00 08 03 09 00 | [0x3d34d3ef | 0]: 2 2
[0x1f8b3ef8]: 00 00 00 00 00 00 00 00 | [0x3d34d3ef | 4]: 2 2
-***** Lite Memory Sanitizer Error Detected End *****
+* Lite Memory Sanitizer Error Detected End *
Backtrace() returned 5 addresses
#01: [0x4d6c] -> ./sample_usr_lms
#02: <(null)+0x2004074>[0x4074] -> ./sample_usr_lms
@@ -451,11 +390,11 @@ Backtrace() returned 5 addresses
#04: [0x363c] -> ./sample_usr_lms
#05: <(null)+0x1f856f30>[0x56f30] -> /lib/libc.so
-------- LMS_malloc_test End --------
-***** Lite Memory Sanitizer Error Detected *****
+* Lite Memory Sanitizer Error Detected *
Use after free error detected!
Illegal Double free address at: [0x1f8b3ee0]
Shadow memory address: [0x3d34d3ee : 0] Shadow memory value: [3]
-Accessable heap addr 0
+Accessible heap addr 0
Heap red zone 2
Heap freed buffer 3
Dump info around address [0x1f8b3ee0]:
@@ -468,7 +407,7 @@ Dump info around address [0x1f8b3ee0]:
[0x1f8b3ef0]: 20 40 8b 1f 20 20 8b 1f | [0x3d34d3ef | 0]: 3 3
[0x1f8b3ef8]: 00 00 00 00 00 00 00 00 | [0x3d34d3ef | 4]: 3 3
[0x1f8b3f00]: 00 00 00 00 00 00 00 00 | [0x3d34d3f0 | 0]: 3 3
-***** Lite Memory Sanitizer Error Detected End *****
+* Lite Memory Sanitizer Error Detected End *
Backtrace() returned 5 addresses
#01: [0x4d6c] -> ./sample_usr_lms
#02: [0x5548] -> ./sample_usr_lms
@@ -479,4 +418,3 @@ Backtrace() returned 5 addresses
```
The Backtrace output contains the names of the files where the addresses are located. You can locate the code line corresponding to the address in the related file.
-
diff --git a/en/device-dev/kernel/kernel-small-start-kernel.md b/en/device-dev/kernel/kernel-small-start-kernel.md
index c92af04ba02216c05708e280bd427b7b8cb128d8..01c4373ac8b51dc17a9ea91985c98688f4965311 100644
--- a/en/device-dev/kernel/kernel-small-start-kernel.md
+++ b/en/device-dev/kernel/kernel-small-start-kernel.md
@@ -1,99 +1,46 @@
# Startup in Kernel Mode
-## Kernel Startup Process
-
-The kernel startup process consists of the assembly startup and C language startup, as shown in the following figure. The assembly startup involves the following operations: initializing CPU settings, disabling dCache/iCache, enabling the FPU and NEON, setting the MMU to establish the virtual-physical address mapping, setting the system stack, clearing the BSS segment, and calling the main function of the C language. The C language startup involves the following operations: starting the OsMain function and starting scheduling. As shown in the following figure, the OsMain function is used for basic kernel initialization and architecture- and board-level initialization. The kernel startup framework leads the initialization process. The right part of the figure shows the phase in which external modules can register with the kernel startup framework and starts. [Table 1](#table38544719428) describes each phase.
-
-**Figure 1** Kernel startup process
-
-
-**Table 1** Startup framework levels
-
-
-Level
- |
-Description
- |
-
-
-LOS_INIT_LEVEL_EARLIEST
- |
-Earliest initialization.
-The initialization is architecture-independent. The board and subsequent modules initialize the pure software modules on which they depend.
-Example: trace module
- |
-
-LOS_INIT_LEVEL_ARCH_EARLY
- |
-Early initialization of the architecture.
-The initialization is architecture-dependent. Subsequent modules initialize the modules on which they depend. It is recommended that functions not required for startup be placed at LOS_INIT_LEVEL_ARCH.
- |
-
-LOS_INIT_LEVEL_PLATFORM_EARLY
- |
-Early initialization of the platform.
-The initialization depends on the board platform and drivers. Subsequent modules initialize the modules on which they depend. It is recommended that functions required for startup be placed at LOS_INIT_LEVEL_PLATFORM.
- |
-
-LOS_INIT_LEVEL_KMOD_PREVM
- |
-Kernel module initialization before memory initialization.
-Initialize the modules that need to be enabled before memory initialization.
- |
-
-LOS_INIT_LEVEL_VM_COMPLETE
- |
-Initialization after the basic memory is ready.
-After memory initialization, initialize the modules that need to be enabled and do not depend on inter-process communication (IPC) and system processes.
-Example: shared memory function
- |
-
-LOS_INIT_LEVEL_ARCH
- |
-Late initialization of the architecture.
-The initialization is related to the architecture extension functions. Subsequent modules initialize the modules on which they depend.
- |
-
-LOS_INIT_LEVEL_PLATFORM
- |
-Late initialization of the platform.
-The initialization depends on the board platform and drivers. Subsequent modules initialize the modules on which they depend.
-Example: initialization of the driver kernel abstraction layer (MMC and MTD)
- |
-
-LOS_INIT_LEVEL_KMOD_BASIC
- |
-Initialization of the kernel basic modules.
-Initialize the basic modules that can be detached from the kernel.
-Example: VFS initialization
- |
-
-LOS_INIT_LEVEL_KMOD_EXTENDED
- |
-Initialization of the kernel extended modules.
-Initialize the extended modules that can be detached from the kernel.
-Example: initialization of system call, ProcFS, Futex, HiLog, HiEvent, and LiteIPC
- |
-
-LOS_INIT_LEVEL_KMOD_TASK
- |
-Kernel task creation
-Create kernel tasks (kernel tasks and software timer tasks).
-Example: creation of the resident resource reclaiming task, SystemInit task, and CPU usage statistics task.
- |
-
-
-
-
-## Programming Example
-
-### Example Description
+## Kernel Startup Process
+
+The kernel startup process consists of the assembly startup and C language startup, as shown in the following figure.
+
+The assembly startup involves the following operations: initializing CPU settings, disabling dCache/iCache, enabling the FPU and NEON, setting the MMU to establish the virtual-physical address mapping, setting the system stack, clearing the BSS segment, and calling the main function of the C language.
+
+The C language startup involves the following operations: starting the **OsMain** function and starting scheduling. As shown in the following figure, the **OsMain** function is used for basic kernel initialization and architecture- and board-level initialization. The kernel startup framework leads the initialization process. The right part of the figure shows the phases in which external modules can register with the kernel startup framework and start. The table below describes each phase.
+
+
+ **Figure 1** Kernel startup process
+ 
+
+
+ **Table 1** Startup framework levels
+
+| Level | Startup Description |
+| -------- | -------- |
+| LOS_INIT_LEVEL_EARLIEST | Earliest initialization.
The initialization is architecture-independent. The board and subsequent modules initialize the pure software modules on which they depend.
Example: trace module|
+| LOS_INIT_LEVEL_ARCH_EARLY | Early initialization of the architecture.
The initialization is architecture-dependent. Subsequent modules initialize the modules on which they depend. It is recommended that functions not required for startup be placed at **LOS_INIT_LEVEL_ARCH**.|
+| LOS_INIT_LEVEL_PLATFORM_EARLY | Early initialization of the platform.
The initialization depends on the board platform and drivers. Subsequent modules initialize the modules on which they depend. It is recommended that functions required for startup be placed at **LOS_INIT_LEVEL_PLATFORM**.
Example: UART module|
+| LOS_INIT_LEVEL_KMOD_PREVM | Kernel module initialization before memory initialization.
Initialize the modules that need to be enabled before memory initialization.|
+| LOS_INIT_LEVEL_VM_COMPLETE | Initialization after the basic memory is ready.
After memory initialization, initialize the modules that need to be enabled and do not depend on inter-process communication (IPC) and system processes.
Example: shared memory function|
+| LOS_INIT_LEVEL_ARCH | Late initialization of the architecture.
The initialization is related to the architecture extension functions. Subsequent modules initialize the modules on which they depend.|
+| LOS_INIT_LEVEL_PLATFORM | Late initialization of the platform.
The initialization depends on the board platform and drivers. Subsequent modules initialize the modules on which they depend.
Example: initialization of the driver kernel abstraction layer (MMC and MTD)|
+| LOS_INIT_LEVEL_KMOD_BASIC | Initialization of the kernel basic modules.
Initialize the basic modules that can be detached from the kernel.
Example: VFS initialization|
+| LOS_INIT_LEVEL_KMOD_EXTENDED | Initialization of the kernel extended modules.
Initialize the extended modules that can be detached from the kernel.
Example: initialization of system call, ProcFS, Futex, HiLog, HiEvent, and LiteIPC|
+| LOS_INIT_LEVEL_KMOD_TASK | Kernel task creation.
Create kernel tasks (kernel tasks and software timer tasks).
Example: creation of the resident resource reclaiming task, SystemInit task, and CPU usage statistics task|
+
+
+## Programming Example
+
+**Example Description**
Add a kernel module and register the initialization function of the module to the kernel startup process through the kernel startup framework, so as to complete the module initialization during the kernel initialization process.
+
**Sample Code**
+
+
```
/* Header file of the kernel startup framework */
#include "los_init.h"
@@ -110,8 +57,11 @@ unsigned int OsSampleModInit(void)
LOS_MODULE_INIT(OsSampleModInit, LOS_INIT_LEVEL_KMOD_EXTENDED);
```
+
**Verification**
+
+
```
main core booting up...
OsSampleModInit SUCCESS!
@@ -120,9 +70,12 @@ cpu 1 entering scheduler
cpu 0 entering scheduler
```
+
According to the information displayed during the system startup, the kernel has called the initialization function of the registered module during the startup to initialize the module.
-> **NOTE:**
->Modules at the same level cannot depend on each other. It is recommended that a new module be split based on the preceding startup phase and be registered and started as required.
->You can view the symbol table in the **.rodata.init.kernel.\*** segment of the **OHOS\_Image.map** file generated after the build is complete, so as to learn about the initialization entry of each module that has been registered with the kernel startup framework and check whether the newly registered initialization entry has taken effect.
+>  **NOTE**
+>
+> Modules at the same level cannot depend on each other. It is recommended that a new module be split based on the preceding startup phase and be registered and started as required.
+>
+> You can view the symbol table in the **.rodata.init.kernel.*** segment of the **OHOS_Image.map** file generated after the build is complete, so as to learn about the initialization entry of each module that has been registered with the kernel startup framework and check whether the newly registered initialization entry has taken effect.
diff --git a/en/device-dev/kernel/kernel-standard-build.md b/en/device-dev/kernel/kernel-standard-build.md
index 747c9133458aec67156d3a1200d705b1a45df4a5..3c950570cf2ae2638fd00a68756c3cefaaf3ddce 100644
--- a/en/device-dev/kernel/kernel-standard-build.md
+++ b/en/device-dev/kernel/kernel-standard-build.md
@@ -1,14 +1,16 @@
# Compiling and Building the Linux Kernel
-## Example 1
+
+ **Example**
The following uses the Hi3516D V300 board and Ubuntu x86 server as an example.
-Perform a full build for the project to generate the **uImage** kernel image.
+
+Perform a full build for the project to generate the **uImage** kernel image.
+
```
-./build.sh --product-name hispark_taurus_standard # Build the hispark_taurus_standard image.
- --build-target build_kernel # Build the uImage kernel image of the hispark_taurus_standard.
- --gn-args linux_kernel_version=\"linux-5.10\" # Build the specified kernel version.
+./build.sh --product-name hispark_taurus_standard # Build the hispark_taurus_standard image.
+ --build-target build_kernel # Build the uImage kernel image of hispark_taurus_standard.
+ --gn-args linux_kernel_version=\"linux-5.10\" # Specify the kernel version.
```
-
diff --git a/en/device-dev/kernel/kernel-standard-sched-rtg.md b/en/device-dev/kernel/kernel-standard-sched-rtg.md
index 534cdcdab06c04c6f3abce5e29766411e4c819cc..61a36ab5bb7aad13789b0ce053e3dbcd73be6b25 100644
--- a/en/device-dev/kernel/kernel-standard-sched-rtg.md
+++ b/en/device-dev/kernel/kernel-standard-sched-rtg.md
@@ -51,11 +51,11 @@ STATE COMM PID PRIO CPU // Thread information, including th
## Available APIs
-The RTG provides the device node and ioctl APIs for querying and configuring group information. The device node is in `/dev/sched_rtg_ctrl`.
-
-| Device Node | request | Description |
-| ------------------- | ------------------- | ------------------- |
-| /dev/sched_rtg_ctrl | CMD_ID_SET_RTG | Creates an RTG, and adds, updates, or deletes threads in the group. |
-| | CMD_ID_SET_CONFIG | Configures global group attributes, for example, the maximum number of real-time RTGs.|
-| | CMD_ID_SET_RTG_ATTR | Configures specified group attributes, for example, the thread priority. |
-| | CMD_ID_SET_MIN_UTIL | Sets the minimum utilization of an RTG. |
+The RTG provides the device node and ioctl APIs for querying and configuring group information. The device node is in **/dev/sched_rtg_ctrl**.
+
+| Request | Description |
+| ------------------- | ------------------- |
+| CMD_ID_SET_RTG | Creates an RTG, and adds, updates, or deletes threads in the group. |
+| CMD_ID_SET_CONFIG | Sets global group attributes, for example, the maximum number of real-time RTGs. |
+| CMD_ID_SET_RTG_ATTR | Sets specified group attributes, for example, the thread priority. |
+| CMD_ID_SET_MIN_UTIL | Sets the minimum utilization of an RTG. |
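+
+A minimal user-space sketch of accessing the device node. The request macros come from the RTG interface header; the argument structure shown in the comments is a hypothetical placeholder, and its real layout must be taken from that header:
+
+```
+#include <fcntl.h>
+#include <unistd.h>
+
+int RtgCtrlDemo(void)
+{
+    int fd = open("/dev/sched_rtg_ctrl", O_RDWR);
+    if (fd < 0) {
+        return -1;
+    }
+    /* struct rtg_grp_data data = { ... };   // hypothetical argument layout */
+    /* ioctl(fd, CMD_ID_SET_RTG, &data);     // create an RTG or update its threads */
+    close(fd);
+    return 0;
+}
+```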
diff --git a/en/device-dev/subsystems/subsys-build-mini-lite.md b/en/device-dev/subsystems/subsys-build-mini-lite.md
index 2f82037899cd8d74fe2388b3c34b1f0137a032e9..fa47c3a94b36fdc69c207ae602c218574c9074c7 100644
--- a/en/device-dev/subsystems/subsys-build-mini-lite.md
+++ b/en/device-dev/subsystems/subsys-build-mini-lite.md
@@ -2,7 +2,7 @@
## Overview
- The Compilation and Building subsystem provides a build framework based on Generate Ninja (GN) and Ninja. This subsystem allows you to:
+The Compilation and Building subsystem provides a build framework based on Generate Ninja (GN) and Ninja. This subsystem allows you to:
- Assemble components into a product and build the product.
@@ -75,7 +75,7 @@ You can build a component, a chipset solution, and a product solution. To ensure
### Component
- The component source code directory is named in the *{Domain}/{Subsystem}/{Component}* format. The component directory structure is as follows:
+The component source code directory is named in the *{Domain}/{Subsystem}/{Component}* format. The component directory structure is as follows:
>  **CAUTION**
> The .json file of the subsystem in the **build/lite/components** directory contains component attributes, including the name, source code directory, function description, mandatory or not, build targets, RAM, ROM, build outputs, adapted kernels, configurable features, and dependencies of the component. When adding a component, add the component information in the .json file of the corresponding subsystem. The component configured for a product must have been defined in a subsystem. Otherwise, the verification will fail.
@@ -94,42 +94,42 @@ component
```
{
- "name": "@ohos/sensor_lite", # OpenHarmony Package Manager (HPM) component name, in the @Organization/Component name format.
- "description": "Sensor services", # Description of the component functions.
- "version": "3.1", # Version, which must be the same as the version of OpenHarmony.
- "license": "MIT", # Component license.
- "publishAs": "code-segment", # Mode for publishing the HPM package. The default value is code-segment.
+ "name": "@ohos/sensor_lite", # OpenHarmony Package Manager (HPM) component name, in the "@Organization/Component name" format.
+ "description": "Sensor services", # Description of the component functions.
+ "version": "3.1", # Version, which must be the same as the version of OpenHarmony.
+ "license": "MIT", # Component license.
+ "publishAs": "code-segment", # Mode for publishing the HPM package. The default value is code-segment.
"segment": {
"destPath": ""
- }, # Code restoration path (source code path) set when "publishAs is code-segment.
- "dirs": {"base/sensors/sensor_lite"} # Directory structure of the HPM package. This field is mandatory and can be left empty.
- "scripts": {}, # Scripts to be executed. This field is mandatory and can be left empty.
+ }, # Code restoration path (source code path) set when "publishAs" is code-segment.
+ "dirs": {"base/sensors/sensor_lite"}, # Directory structure of the HPM package. This field is mandatory and can be left empty.
+ "scripts": {}, # Scripts to be executed. This field is mandatory and can be left empty.
"licensePath": "COPYING",
"readmePath": {
"en": "README.rst"
},
- "component": { # Component attributes.
- "name": "sensor_lite", # Component name.
- "subsystem": "", # Subsystem to which the component belongs.
- "syscap": [], # System capabilities provided by the component for applications.
- "features": [], # List of the component's configurable features. Generally, this parameter corresponds to sub_component in build and can be configured.
- "adapted_system_type": [], # Adapted system types, which can be mini, small, and standard. Multiple values are allowed.
- "rom": "92KB", # Size of the component's ROM.
- "ram": "~200KB", # Size of the component's RAM.
+ "component": { # Component attributes.
+ "name": "sensor_lite", # Component name.
+ "subsystem": "", # Subsystem to which the component belongs.
+ "syscap": [], # System capabilities provided by the component for applications.
+    "features": [],                  # List of externally configurable features of the component. Generally, this parameter corresponds to sub_component in build and can be configured by the product.
+ "adapted_system_type": [], # Types of adapted systems. The value can be mini, small, and standard.
+ "rom": "92KB", # Component ROM size.
+    "ram": "~200KB",                 # Component RAM size.
"deps": {
- "components": [ # Other components on which this component depends.
+ "components": [ # Other components on which this component depends.
"samgr_lite"
],
- "third_party": [ # Third-party open-source software on which this component depends.
+ "third_party": [ # Third-party open-source software on which this component depends.
"bounds_checking_function"
]
}
- "build": { # Build-related configurations.
+ "build": { # Build-related configuration.
"sub_component": [
""//base/sensors/sensor_lite/services:sensor_service"", # Component build entry
- ], # Component build entry. Configure the module here.
- "inner_kits": [], # APIs between components.
- "test": [] # Entry for building the component's test cases.
+ ], # Component build entry. Configure modules here.
+ "inner_kits": [], # APIs between components.
+ "test": [] # Entry for building the component's test cases.
}
}
}
@@ -195,28 +195,28 @@ component
### Chipset Solution
-The chipset solution is a special component. It is built based on a development board, including the drivers, device API adaptation, and SDK.
+- The chipset solution is a special component. It is built based on a development board, including the drivers, device API adaptation, and SDK.
-The source code path is named in the **device/{Development board}/{Chipset solution vendor}** format.
+- The source code path is named in the **device/{Development board}/{Chipset solution vendor}** format.
-The chipset solution component is built by default based on the development board selected.
-
-The chipset solution directory structure is as follows:
+- The chipset solution component is built by default based on the development board selected.
+
+- The chipset solution directory structure is as follows:
-```
-device
-└── board # Chipset solution vendor
- └── company # Development board name
- ├── BUILD.gn # Build script
- ├── hals # OS device API adaptation
- ├── linux # (Optional) Linux kernel version
- │ └── config.gni # Linux build configuration
- └── liteos_a # (Optional) LiteOS kernel version
- └── config.gni # LiteOS_A build configuration
-```
+ ```
+ device
+ └── board # Chipset solution vendor
+ └── company # Development board name
+ ├── BUILD.gn # Build script
+ ├── hals # OS device API adaptation
+ ├── linux # (Optional) Linux kernel version
+ │ └── config.gni # Linux build configuration
+ └── liteos_a # (Optional) LiteOS kernel version
+ └── config.gni # LiteOS_A build configuration
+ ```
->  **NOTE**
-> The **config.gni** file contains build-related configuration of the development board. The parameters in the file are used to build all OS components, and are globally visible to the system during the build process.
+ >  **NOTE**
+ > The **config.gni** file contains build-related configuration of the development board. The parameters in the file are used to build all OS components, and are globally visible to the system during the build process.
- The **config.gni** file contains the following key parameters:
@@ -253,7 +253,7 @@ vendor
└── ...
```
->  **CAUTION**
+>  **CAUTION**
> Follow the preceding rules to create directories and files for new products. The Compilation and Building subsystem scans the configured products based on the rules.
The key directories and files are described as follows:
@@ -267,18 +267,21 @@ The key directories and files are described as follows:
This file is the configuration file for the **init** process to start services. Currently, the following commands are supported:
- **start**: starts a service.
-- **mkdir**: creates a folder.
-
+
+ - **mkdir**: creates a folder.
+
- **chmod**: changes the permission on a specified directory or file.
-- **chown**: changes the owner group of a specified directory or file.
- - **mount**: mounts a device.
- The fields in the file are described as follows:
+ - **chown**: changes the owner group of a specified directory or file.
+
+ - **mount**: mounts a device.
+
+ The fields in the file are described as follows:
```
{
"jobs" : [{ # Job array. A job corresponds to a command set. Jobs are executed in the following sequence: pre-init > init > post-init.
- "name" : "pre-init",
+ "name" : "pre-init",
"cmds" : [
"mkdir /storage/data", # Create a directory.
"chmod 0755 /storage/data", #Modify the permissions. The format of the permission value is 0xxx, for example, 0755.
@@ -314,7 +317,7 @@ The key directories and files are described as follows:
]
}
```
-
+
3. **vendor/company/product/init_configs/hals**
This file contains the OS adaptation of the product. For details about APIs for implementing OS adaptation, see the readme file of each component.
@@ -361,9 +364,9 @@ The key directories and files are described as follows:
source_dir: (Optional) specifies target file directory in the out directory. If this field is not specified, an empty directory will be created in the file system based on target_dir.
target_dir: (Mandatory) specifies the file directory in the file system.
ignore_files: (Optional) declares ignored files during the copy operation.
- dir_mode: (Optional) specifies the file directory permissions. The default value is 755.
- file_mode: (Optional) specifies the permissions of all files in the directory. The default value is 555.
- fs_filemode: (Optional) specifies the files that require special permissions. Each file corresponds to a list.
+ dir_mode: (Optional) specifies the file directory permissions. The default value is 755.
+ file_mode: (Optional) specifies the permissions of all files in the directory. The default value is 555.
+ fs_filemode: (Optional) specifies the files that require special permissions. Each file corresponds to a list.
file_dir: (Mandatory) specifies the detailed file path in the file system.
file_mode: (Mandatory) declares file permissions.
fs_symlink: (Optional) specifies the soft link of the file system.
@@ -373,11 +376,12 @@ The key directories and files are described as follows:
The **fs_symlink** and **fs_make_cmd** fields support the following variables:
- - ${root_path}: code root directory, which corresponds to **${ohos_root_path}** of GN.
- - ${out_path}: **out** directory of the product, which corresponds to **${root_out_dir}** of GN.
- - ${fs_dir}: file system directory, which consists of variables ${root_path} and ${fs_dir_name}.
->  **NOTE**
-> **fs.yml** is optional and not required for devices without a file system.
+ - **${root_path}**: Code root directory, which corresponds to **${ohos_root_path}** of GN.
+ - **${out_path}**: The **out** directory of the product, which corresponds to **${root_out_dir}** of GN.
+ - **${fs_dir}**: File system directory, which consists of variables **${root_path}** and **${fs_dir_name}**.
+
+ >  **NOTE**
+ > **fs.yml** is optional and not required for devices without a file system.
6. **vendor/company/product/BUILD.gn**
@@ -405,7 +409,7 @@ The development environment has GN, Ninja, Python 3.9.2 or later, and hb availab
**hb** is an OpenHarmony command line tool for executing build commands. Common hb commands are described as follows:
- **hb set**
+**hb set**
```
hb set -h
@@ -438,7 +442,7 @@ hb env
[OHOS INFO] device path: xxx/device/hisilicon/hispark_taurus/sdk_linux_4.19
```
- **hb build**
+**hb build**
```
hb build -h
@@ -660,7 +664,6 @@ The following uses the RTL8720 development board provided by Realtek as an examp
```
4. Build the chipset solution.
-
Run the **hb build** command in the development board directory to start the build.
### Adding a Product Solution
@@ -668,7 +671,6 @@ The following uses the RTL8720 development board provided by Realtek as an examp
You can customize a product solution by flexibly assembling a chipset solution and components. The procedure is as follows:
1. Create a product directory based on the [configuration rules](#product-solution).
-
The following uses the Wi-Fi IoT module on the RTL8720 development board as an example. Run the following command in the root directory to create a product directory:
```
@@ -676,9 +678,8 @@ You can customize a product solution by flexibly assembling a chipset solution a
```
2. Assemble the product.
-
- Create a **config.json** file, for example for wifiiot, in the product directory. The **vendor/my_company/wifiiot/config.json** file is as follows:
-
+   Create a **config.json** file in the product directory, for example, for **wifiiot**. The **vendor/my_company/wifiiot/config.json** file is as follows:
+
```
{
"product_name": "wifiiot", # Product name
@@ -704,25 +705,22 @@ You can customize a product solution by flexibly assembling a chipset solution a
}
```
->  **CAUTION**
-> Before the build, the Compilation and Building subsystem checks the validity of fields in **config.json**. The **device_company**, **board**, **kernel_type**, and **kernel_version** fields must match the fields of the chipset solution, and **subsystem** and **component** must match the component description in the **build/lite/components** file.
+ >  **CAUTION**
+ > Before the build, the Compilation and Building subsystem checks the validity of fields in **config.json**. The **device_company**, **board**, **kernel_type**, and **kernel_version** fields must match the fields of the chipset solution, and **subsystem** and **component** must match the component description in the **build/lite/components** file.
3. Implement adaptation to OS APIs.
-
Create the **hals** directory in the product directory and save the source code as well as the build script for OS adaptation in this directory.
4. Configure system services.
-
Create the **init_configs** directory in the product directory and then the **init.cfg** file in the **init_configs** directory, and configure the system services to be started.
5. (Optional) Configure the init process for the Linux kernel.
-
Create the **etc** directory in the **init_configs** directory, and then the **init.d** folder and the **fstab** file in the **etc** directory. Then, create the **rcS** and **S***xxx* files in the **init.d** file and edit them based on product requirements.
6. (Optional) Configure the file system image for the development board that supports the file system.
-
+
Create a **fs.yml** file in the product directory and configure it as required. A typical **fs.yml** file is as follows:
-
+
```
-
fs_dir_name: rootfs # Image name
@@ -823,7 +821,7 @@ You can customize a product solution by flexibly assembling a chipset solution a
- ${root_path}/build/lite/make_rootfs/rootfsimg_linux.sh ${fs_dir} ext4
```
-
+
7. (Optional) Configure patches if the product and components need to be patched.
Create a **patch.yml** file in the product directory and configure it as required. A typical **patch.yml** file is as follows:
@@ -841,14 +839,14 @@ You can customize a product solution by flexibly assembling a chipset solution a
...
```
+
Add **--patch** when running the **hb build** command. Then, the patch files can be added to the specified directory before the build.
- ```
- hb build -f --patch
- ```
+ ```
+ hb build -f --patch
+ ```
8. Write the build script.
-
Create a **BUILD.gn** file in the product directory and write the script. The following **BUILD.gn** file uses the Wi-Fi IoT module in step 1 as an example:
```
@@ -864,9 +862,9 @@ You can customize a product solution by flexibly assembling a chipset solution a
```
9. Build the product.
-
Run the **hb set** command in the code root directory, select the new product as prompted, and run the **hb build** command.
+
## Troubleshooting
### "usr/sbin/ninja: invalid option -- w" Displayed During the Build Process
@@ -892,10 +890,10 @@ You can customize a product solution by flexibly assembling a chipset solution a
- **Possible Causes**
The ncurses library is not installed.
-
+
- **Solution**
- ```
+ ```
sudo apt-get install lib32ncurses5-dev
```
@@ -929,9 +927,9 @@ You can customize a product solution by flexibly assembling a chipset solution a
1. Run the following command to locate **gcc_riscv32**:
- ```
+ ```
which riscv32-unknown-elf-gcc
- ```
+ ```
2. Run the **chmod** command to change the directory permission to **755**.
diff --git a/en/device-dev/subsystems/subsys-xts-guide.md b/en/device-dev/subsystems/subsys-xts-guide.md
index d9ee3223e3aa282f5e7e847a07f7a20b00c628c5..90992e3e0449cd9a883ba5ddba171f5ae8fefb9d 100644
--- a/en/device-dev/subsystems/subsys-xts-guide.md
+++ b/en/device-dev/subsystems/subsys-xts-guide.md
@@ -2,7 +2,7 @@
## Introduction
-The X test suite \(XTS\) subsystem contains a set of OpenHarmony compatibility test suites, including the currently supported application compatibility test suite \(ACTS\) and the device compatibility test suite \(DCTS\) that will be supported in the future.
+The X test suite (XTS) subsystem contains a set of OpenHarmony compatibility test suites, including the currently supported application compatibility test suite (ACTS) and the device compatibility test suite (DCTS) that will be supported in the future.
This subsystem contains the ACTS and **tools** software package.
@@ -19,7 +19,7 @@ OpenHarmony supports the following systems:
- Small system
- A small system runs on a device that comes with memory greater than or equal to 1 MiB and application processors such as ARM Cortex-A. It provides higher security capabilities, standard graphics frameworks, and video encoding and decoding capabilities. Typical products include smart home IP cameras, electronic cat eyes, and routers, and event data recorders \(EDRs\) for smart travel.
+ A small system runs on a device that comes with memory greater than or equal to 1 MiB and application processors such as ARM Cortex-A. It provides higher security capabilities, standard graphics frameworks, and video encoding and decoding capabilities. Typical products include smart home IP cameras, electronic cat eyes, and routers, and event data recorders (EDRs) for smart travel.
- Standard system
@@ -34,7 +34,7 @@ OpenHarmony supports the following systems:
│ └── subsystem # Source code of subsystem test cases for the standard system
│ └── subsystem_lite # Source code of subsystems test cases for mini and small systems
│ └── BUILD.gn # Build configuration of test cases for the standard system
-│ └── build_lite
+│ └── build_lite # Build configuration of test cases for the mini and small systems.
│ └── BUILD.gn # Build configuration of test cases for mini and small systems
└── tools # Test tool code
```
@@ -72,9 +72,9 @@ Test cases for the mini system must be developed in C, and those for the small s
| Performance | Tests the processing capability of the tested object under specific preset conditions and load models. The processing capability is measured by the service volume that can be processed in a unit time, for example, call per second, frame per second, or event processing volume per second. |
| Power | Tests the power consumption of the tested object in a certain period of time under specific preset conditions and load models. |
| Reliability | Tests the service performance of the tested object under common and uncommon input conditions, or specified service volume pressure and long-term continuous running pressure. The test covers stability, pressure handling, fault injection, and Monkey test times. |
-| Security      | - Tests the capability of defending against security threats, including but not limited to unauthorized access, use, disclosure, damage, modification, and destruction, to ensure information confidentiality, integrity, and availability.<br>- Tests the privacy protection capability to ensure that the collection, use, retention, disclosure, and disposal of users' private data comply with laws and regulations.<br>- Tests the compliance with various security specifications, such as security design, security requirements, and security certification of the Ministry of Industry and Information Technology (MIIT).<br>|
+| Security  | Tests the capability of defending against security threats, including but not limited to unauthorized access, use, disclosure, damage, modification, and destruction, to ensure information confidentiality, integrity, and availability.<br>Tests the privacy protection capability to ensure that the collection, use, retention, disclosure, and disposal of users' private data comply with laws and regulations.<br>Tests the compliance with various security specifications, such as security design, security requirements, and security certification of the Ministry of Industry and Information Technology (MIIT). |
| Global | Tests the internationalized data and localization capabilities of the tested object, including multi-language display, various input/output habits, time formats, and regional features, such as currency, time, and culture taboos. |
-| Compatibility | - Tests backward compatibility of an application with its own data, the forward and backward compatibility with the system, and the compatibility with different user data, such as audio file content of the player and smart SMS messages.<br>- Tests system backward compatibility with its own data and the compatibility of common applications in the ecosystem.<br>- Tests software compatibility with related hardware.<br>|
+| Compatibility | Tests backward compatibility of an application with its own data, the forward and backward compatibility with the system, and the compatibility with different user data, such as audio file content of the player and smart SMS messages.<br>Tests system backward compatibility with its own data and the compatibility of common applications in the ecosystem.<br>Tests software compatibility with related hardware. |
| User | Tests user experience of the object in real user scenarios. All conclusions and comments should come from the users, which are all subjective evaluation in this case. |
| Standard | Tests the compliance with industry and company-specific standards, protocols, and specifications. The standards here do not include any security standards that should be classified into the security test. |
| Safety | Tests the safety property of the tested object to avoid possible hazards to personal safety, health, and the object itself. |
@@ -92,107 +92,109 @@ The test framework and programming language vary with the system type.
| Small | HCPPTest | C++ |
| Standard | HJSUnit and HCPPTest | JavaScript and C++ |
-### Developing Test Cases in C (for the Mini System\)
+### Developing Test Cases in C (for the Mini System)
**Developing Test Cases for the Mini System**
HCTest and the C language are used to develop test cases. HCTest is enhanced and adapted based on the open-source test framework Unity.
-1. Define the test case directory. The test cases are stored to **test/xts/acts**.
+1. Define the test case directory. The test cases are stored to **test/xts/acts**.
+
+ ```
+ ├── acts
+ │ └──subsystem_lite
+ │ │ └── module_hal
+ │ │ │ └── BUILD.gn
+ │ │ │ └── src
+ │ └──build_lite
+ │ │ └── BUILD.gn
+ ```
- ```
- ├── acts
- │ └──subsystem_lite
- │ │ └── module_hal
- │ │ │ └── BUILD.gn
- │ │ │ └── src
- │ └──build_lite
- │ │ └── BUILD.gn
- ```
2. Write the test case in the **src** directory.
- a) Include the test framework header file.
+ (1) Include the test framework header file.
- ```
- #include "hctest.h"
- ```
+ ```
+ #include "hctest.h"
+ ```
- b) Use the **LITE\_TEST\_SUIT** macro to define names of the subsystem, module, and test suite.
+ (2) Use the **LITE_TEST_SUIT** macro to define names of the subsystem, module, and test suite.
- ```
- /**
- * @brief Registers a test suite named IntTestSuite.
- * @param test Subsystem name
- * @param example Module name
- * @param IntTestSuite Test suite name
- */
- LITE_TEST_SUIT(test, example, IntTestSuite);
- ```
+ ```
+ /**
+ * @brief register a test suite named "IntTestSuite"
+ * @param test subsystem name
+ * @param example module name
+ * @param IntTestSuite test suite name
+ */
+ LITE_TEST_SUIT(test, example, IntTestSuite);
+ ```
- c) Define Setup and TearDown.
+ (3) Define Setup and TearDown.
- Format: Test suite name+Setup, Test suite name+TearDown.
+ Format: Test suite name+Setup, Test suite name+TearDown.
+ The Setup and TearDown functions must exist, but function bodies can be empty.
+
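+   For the **IntTestSuite** suite defined above, a minimal sketch of the two functions might look as follows (the **BOOL** type and **TRUE** constant are assumed to be available through **hctest.h**; adjust the names to your own suite):
+
+   ```
+   static BOOL IntTestSuiteSetUp(void)
+   {
+       // Prepare resources needed by the test cases; the body may be left empty.
+       return TRUE;
+   }
+
+   static BOOL IntTestSuiteTearDown(void)
+   {
+       // Release resources created in SetUp; the body may be left empty.
+       return TRUE;
+   }
+   ```
+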
+ (4) Use the **LITE_TEST_CASE** macro to write the test case.
- The Setup and TearDown functions must exist, but function bodies can be empty.
+ Three parameters are involved: test suite name, test case name, and test case properties (including type, granularity, and level).
+
+ ```
+ LITE_TEST_CASE(IntTestSuite, TestCase001, Function | MediumTest | Level1)
+ {
+ // Do something.
+ };
+ ```
+
+ (5) Use the **RUN_TEST_SUITE** macro to register the test suite.
- d) Use the **LITE\_TEST\_CASE** macro to write the test case.
+ ```
+ RUN_TEST_SUITE(IntTestSuite);
+ ```
- Three parameters are involved: test suite name, test case name, and test case properties \(including type, granularity, and level\).
+3. Create the configuration file (**BUILD.gn**) of the test module.
- ```
- LITE_TEST_CASE(IntTestSuite, TestCase001, Function | MediumTest | Level1)
- {
- // Do something
- };
- ```
+ Create a **BUILD.gn** (example) file in each test module directory, and specify the name of the built static library and its dependent header files and libraries.
- e) Use the **RUN\_TEST\_SUITE** macro to register the test suite.
+ The format is as follows:
```
- RUN_TEST_SUITE(IntTestSuite);
+ import("//test/xts/tools/lite/build/suite_lite.gni")
+ hctest_suite("ActsDemoTest") {
+ suite_name = "acts"
+ sources = [
+ "src/test_demo.c",
+ ]
+ include_dirs = [ ]
+ cflags = [ "-Wno-error" ]
+ }
```
-3. Create the configuration file \(**BUILD.gn**\) of the test module.
-
- Create a **BUILD.gn** \(example\) file in each test module directory, and specify the name of the built static library and its dependent header files and libraries. The format is as follows:
-
- ```
- import("//test/xts/tools/lite/build/suite_lite.gni")
- hctest_suite("ActsDemoTest") {
- suite_name = "acts"
- sources = [
- "src/test_demo.c",
- ]
- include_dirs = [ ]
- cflags = [ "-Wno-error" ]
- }
- ```
+4. Add build options to the **BUILD.gn** file in the **acts** directory.
-4. Add build options to the **BUILD.gn** file in the **acts** directory.
+   You need to add the test module to the **test/xts/acts/build_lite/BUILD.gn** script in the **acts** directory.
- You need to add the test module to the **test/xts/acts/build\_lite/BUILD.gn** script in the **acts** directory.
-
- ```
- lite_component("acts") {
- ...
- if(board_name == "liteos_m") {
- features += [
- ...
- "//xts/acts/subsystem_lite/module_hal:ActsDemoTest"
- ]
- }
- }
- ```
+ ```
+ lite_component("acts") {
+ ...
+ if(board_name == "liteos_m") {
+ features += [
+ ...
+ "//xts/acts/subsystem_lite/module_hal:ActsDemoTest"
+ ]
+ }
+ }
+ ```
-5. Run build commands.
+5. Run build commands.
- Test suites are built along with the OS version. The ACTS is built together with the debug version.
+ Test suites are built along with the OS version. The ACTS is built together with the debug version.
- > **NOTE**
The ACTS build middleware is a static library, which will be linked to the image.
+ > **NOTE**
The ACTS build middleware is a static library, which will be linked to the image.
-### Executing Test Cases in C (for the Mini System\)
+### Executing Test Cases in C (for the Mini System)
**Executing Test Cases for the Mini System**
@@ -211,120 +213,122 @@ The log for each test suite starts with "Start to run test suite:" and ends wit
### Developing Test Cases in C++ (for Standard and Small Systems)
-**Developing Test Cases for Small-System Devices** \(for the standard system, see the **global/i18n\_standard directory**.\)
+**Developing Test Cases for Small-System Devices** (for the standard system, see the **global/i18n_standard directory**.)
The HCPPTest framework, an enhanced version based on the open-source framework Googletest, is used.
-1. Define the test case directory. The test cases are stored to **test/xts/acts**.
-
- ```
- ├── acts
- │ └──subsystem_lite
- │ │ └── module_posix
- │ │ │ └── BUILD.gn
- │ │ │ └── src
- │ └──build_lite
- │ │ └── BUILD.gn
- ```
-
-2. Write the test case in the **src** directory.
-
- a) Include the test framework header file.
-
- The following statement includes **gtest.h**.
+1. Define the test case directory. The test cases are stored to **test/xts/acts**.
```
- #include "gtest/gtest.h"
+ ├── acts
+ │ └──subsystem_lite
+ │ │ └── module_posix
+ │ │ │ └── BUILD.gn
+ │ │ │ └── src
+ │ └──build_lite
+ │ │ └── BUILD.gn
```
- b) Define Setup and TearDown.
-
- ```
- using namespace std;
- using namespace testing::ext;
- class TestSuite: public testing::Test {
- protected:
- // Preset action of the test suite, which is executed before the first test case
- static void SetUpTestCase(void){
- }
- // Test suite cleanup action, which is executed after the last test case
- static void TearDownTestCase(void){
- }
- // Preset action of the test case
- virtual void SetUp()
- {
- }
- // Cleanup action of the test case
- virtual void TearDown()
- {
- }
- };
- ```
-
- c) Use the **HWTEST** or **HWTEST\_F** macro to write the test case.
-
- **HWTEST**: definition of common test cases, including the test suite name, test case name, and case annotation.
-
- **HWTEST\_F**: definition of SetUp and TearDown test cases, including the test suite name, test case name, and case annotation.
-
- Three parameters are involved: test suite name, test case name, and test case properties \(including type, granularity, and level\).
+2. Write the test case in the **src** directory.
- ```
- HWTEST_F(TestSuite, TestCase_0001, Function | MediumTest | Level1) {
- // Do something
- }
- ```
+ (1) Include the test framework.
+
+ Include **gtest.h**.
+ ```
+ #include "gtest/gtest.h"
+ ```
+
+
+ (2) Define Setup and TearDown.
+
+ ```
+ using namespace std;
+ using namespace testing::ext;
+ class TestSuite: public testing::Test {
+ protected:
+ // Preset action of the test suite, which is executed before the first test case
+ static void SetUpTestCase(void){
+ }
+ // Test suite cleanup action, which is executed after the last test case
+ static void TearDownTestCase(void){
+ }
+ // Preset action of the test case
+ virtual void SetUp()
+ {
+ }
+ // Cleanup action of the test case
+ virtual void TearDown()
+ {
+ }
+ };
+ ```
+
+
+ (3) Use the **HWTEST** or **HWTEST_F** macro to write the test case.
+
+ **HWTEST**: definition of common test cases, including the test suite name, test case name, and case annotation.
+
+ **HWTEST_F**: definition of SetUp and TearDown test cases, including the test suite name, test case name, and case annotation.
+
+ Three parameters are involved: test suite name, test case name, and test case properties (including type, granularity, and level).
+
+ ```
+ HWTEST_F(TestSuite, TestCase_0001, Function | MediumTest | Level1) {
+ // Do something
+    }
+    ```
-3. Create a configuration file \(**BUILD.gn**\) of the test module.
+3. Create a configuration file (**BUILD.gn**) of the test module.
Create a **BUILD.gn** file in each test module directory, and specify the name of the built static library and its dependent header files and libraries. Each test module is independently built into a **.bin** executable file, which can be directly pushed to the development board for testing.
Example:
-
- ```
- import("//test/xts/tools/lite/build/suite_lite.gni")
- hcpptest_suite("ActsDemoTest") {
- suite_name = "acts"
- sources = [
- "src/TestDemo.cpp"
- ]
-
- include_dirs = [
- "src",
- ...
- ]
- deps = [
- ...
- ]
- cflags = [ "-Wno-error" ]
- }
+
+ ```
+ import("//test/xts/tools/lite/build/suite_lite.gni")
+ hcpptest_suite("ActsDemoTest") {
+ suite_name = "acts"
+ sources = [
+ "src/TestDemo.cpp"
+ ]
+
+ include_dirs = [
+ "src",
+ ...
+ ]
+ deps = [
+ ...
+ ]
+ cflags = [ "-Wno-error" ]
+ }
+ ```
- ```
-
4. Add build options to the **BUILD.gn** file in the **acts** directory.
- Add the test module to the **test/xts/acts/build\_lite/BUILD.gn** script in the **acts** directory.
+ Add the test module to the **test/xts/acts/build_lite/BUILD.gn** script in the **acts** directory.
+
+ ```
+ lite_component("acts") {
+ ...
+ else if(board_name == "liteos_a") {
+ features += [
+ ...
+ "//xts/acts/subsystem_lite/module_posix:ActsDemoTest"
+ ]
+ }
+ }
+ ```
- ```
- lite_component("acts") {
- ...
- else if(board_name == "liteos_a") {
- features += [
- ...
- "//xts/acts/subsystem_lite/module_posix:ActsDemoTest"
- ]
- }
- }
- ```
5. Run build commands.
Test suites are built along with the OS version. The ACTS is built together with the debug version.
- > **NOTE**
The ACTS for the small system is independently built to an executable file \(.bin\) and archived in the **suites\\acts** directory of the build result.
+ > **NOTE**
+   > The ACTS for the small system is independently built to an executable file (.bin) and archived in the **suites\acts** directory of the build result.
+ >The ACTS for the small system is independently built to an executable file (.bin) and archived in the **suites\acts** directory of the build result.
-### Executing Test Cases in C++ (for Standard and Small Systems\)
+### Executing Test Cases in C++ (for Standard and Small Systems)
**Executing Test Cases for the Small System**
@@ -332,24 +336,29 @@ Currently, test cases are shared by the NFS and mounted to the development board
**Setting Up the Environment**
-1. Use a network cable or wireless network to connect the development board to your PC.
-2. Configure the IP address, subnet mask, and gateway for the development board. Ensure that the development board and the PC are in the same network segment.
-3. Install and register the NFS server on the PC and start the NFS service.
+1. Use a network cable or wireless network to connect the development board to your PC.
+
+2. Configure the IP address, subnet mask, and gateway for the development board. Ensure that the development board and the PC are in the same network segment.
+
+3. Install and register the NFS server on the PC and start the NFS service.
+
4. Run the **mount** command for the development board to ensure that the development board can access NFS shared files on the PC.
Format: **mount** _NFS server IP address_**:/**_NFS shared directory_ **/**_development board directory_ **nfs**
- Example:
+ Example:
```
mount 192.168.1.10:/nfs /nfs nfs
```
+
+
**Executing Test Cases**
Execute **ActsDemoTest.bin** to trigger test case execution, and analyze serial port logs generated after the execution is complete.
-### Developing Test Cases in JavaScript (for the Standard System\)
+### Developing Test Cases in JavaScript (for the Standard System)
The HJSUnit framework is used to support automated test of OpenHarmony apps that are developed using the JavaScript language based on the JS application framework.
@@ -366,73 +375,82 @@ The test cases are developed with the JavaScript language and must meet the prog
| beforeEach | Presets a test-case-level action executed before each test case is executed. The number of execution times is the same as the number of test cases defined by it. You can pass the action function as the only parameter. | No |
| afterEach | Presets a test-case-level clear action executed after each test case is executed. The number of execution times is the same as the number of test cases defined by it. You can pass the clear function as the only parameter. | No |
| describe | Defines a test suite. You can pass two parameters: test suite name and test suite function. The describe statement supports nesting. You can use beforeall, beforeEach, afterEach, and afterAll in each describe statement. | Yes |
-| it | Defines a test case. You can pass three parameters: test case name, filter parameter, and test case function.<br>**Usage of the filter parameter:**<br>The value of the filter parameter is a 32-bit integer. Setting different bits to 1 means different configurations:<br>- Bit 0: whether the filter parameter takes effect. 1 means that the test case is used for the function test and other settings of the parameter do not take effect.<br>- Bits 0-10: test case categories<br>- Bits 16-18: test case scales<br>- Bits 24-28: test levels<br>**Test case categories**: Bits 0-10 indicate FUNCTION (function test), PERFORMANCE (performance test), POWER (power consumption test), RELIABILITY (reliability test), SECURITY (security compliance test), GLOBAL (integrity test), COMPATIBILITY (compatibility test), USER (user test), STANDARD (standard test), SAFETY (security feature test), and RESILIENCE (resilience test), respectively.<br>**Test case scales**: Bits 16-18 indicate SMALL (small-scale test), MEDIUM (medium-scale test), and LARGE (large-scale test), respectively.<br>**Test levels**: Bits 24-28 indicate LEVEL0 (level-0 test), LEVEL1 (level-1 test), LEVEL2 (level-2 test), LEVEL3 (level-3 test), and LEVEL4 (level-4 test), respectively. | Yes |
+| it | Defines a test case. You can pass three parameters: test case name, filter parameter, and test case function.<br>**Filter parameter:**<br>The value is a 32-bit integer. Setting different bits to 1 means different configurations.<br>- Setting bit 0 to **1** means bypassing the filter.<br>- Setting bits 0-10 to **1** specifies the test case type, which can be FUNCTION (function test), PERFORMANCE (performance test), POWER (power consumption test), RELIABILITY (reliability test), SECURITY (security compliance test), GLOBAL (integrity test), COMPATIBILITY (compatibility test), USER (user test), STANDARD (standard test), SAFETY (security feature test), and RESILIENCE (resilience test), respectively.<br>- Setting bits 16-18 to **1** specifies the test case scale, which can be SMALL (small-scale test), MEDIUM (medium-scale test), and LARGE (large-scale test), respectively.<br>- Setting bits 24-28 to **1** specifies the test level, which can be LEVEL0 (level-0 test), LEVEL1 (level-1 test), LEVEL2 (level-2 test), LEVEL3 (level-3 test), and LEVEL4 (level-4 test), respectively. | Yes |
Use the standard syntax of Jasmine to write test cases. The ES6 specification is supported.
1. Define the test case directory. The test cases are stored in the **entry/src/main/js/test** directory.
```
- ├── BUILD.gn
- │ └──entry
- │ │ └──src
- │ │ │ └──main
- │ │ │ │ └──js
- │ │ │ │ │ └──default
- │ │ │ │ │ │ └──pages
- │ │ │ │ │ │ │ └──index
- │ │ │ │ │ │ │ │ └──index.js # Entry file
- │ │ │ │ │ └──test # Test code
- │ │ │ └── resources # HAP resources
- │ │ │ └── config.json # HAP configuration file
- ```
+ ├── BUILD.gn
+ │ └──entry
+ │ │ └──src
+ │ │ │ └──main
+ │ │ │ │ └──js
+ │ │ │ │ │ └──default
+ │ │ │ │ │ │ └──pages
+ │ │ │ │ │ │ │ └──index
+ │ │ │ │ │ │ │ │ └──index.js # Entry file
+ │ │ │ │ │ └──test # Test code directory
+ │ │ │ └── resources # HAP resources
+ │ │ │ └── config.json # HAP configuration file
+ ```
-2. Start the JS test framework and load test cases. The following is an example for **index.js**.
- ```
- // Start the JS test framework and load test cases.
+2. Start the JS test framework and load test cases.
+
+ The following is an example for **index.js**.
+
+ ```
+ // Start the JS test framework and load test cases.
import {Core, ExpectExtend} from 'deccjsunit/index'
export default {
- data: {
- title: ""
- },
- onInit() {
- this.title = this.$t('strings.world');
- },
- onShow() {
- console.info('onShow finish')
- const core = Core.getInstance()
- const expectExtend = new ExpectExtend({
- 'id': 'extend'
- })
- core.addService('expect', expectExtend)
- core.init()
- const configService = core.getDefaultService('config')
- configService.setConfig(this)
- require('../../../test/List.test')
- core.execute()
- },
- onReady() {
- },
- }
- ```
+ data: {
+ title: ""
+ },
+ onInit() {
+ this.title = this.$t('strings.world');
+ },
+ onShow() {
+ console.info('onShow finish')
+ const core = Core.getInstance()
+ const expectExtend = new ExpectExtend({
+ 'id': 'extend'
+ })
+ core.addService('expect', expectExtend)
+ core.init()
+ const configService = core.getDefaultService('config')
+ configService.setConfig(this)
+ require('../../../test/List.test')
+ core.execute()
+ },
+ onReady() {
+ },
+ }
+ ```
-3. Write a unit test case by referring to the following example:
+
- ```
- // Use HJSUnit to perform the unit test.
- describe('appInfoTest', function () {
- it('app_info_test_001', 0, function () {
- var info = app.getInfo()
- expect(info.versionName).assertEqual('1.0')
- expect(info.versionCode).assertEqual('3')
- })
- })
- ```
+3. Write a unit test case.
+
+ The following is an example:
+
+ ```
+ // Example 1: Use HJSUnit to perform a unit test.
+ describe('appInfoTest', function () {
+ it('app_info_test_001', 0, function () {
+ var info = app.getInfo()
+ expect(info.versionName).assertEqual('1.0')
+ expect(info.versionCode).assertEqual('3')
+ })
+ })
+ ```
+
-### Packaging Test Cases in JavaScript (for the Standard System\)
+
+### Packaging Test Cases in JavaScript (for the Standard System)
For details about how to build a HAP, see the JS application development guide of the standard system [Building and Creating HAPs](https://developer.harmonyos.com/en/docs/documentation/doc-guides/build_overview-0000001055075201).
@@ -444,12 +462,15 @@ Run the following command:
./build.sh suite=acts system_size=standard
```
+
+
+
Test case directory: **out/release/suites/acts/testcases**
Test framework and test case directory: **out/release/suites/acts** \(the test suite execution framework is compiled during the build process)
-## Executing Test Cases in a Full Build (for Small and Standard Systems\)
+## Executing Test Cases in a Full Build (for Small and Standard Systems)
**Setting Up a Test Environment**
@@ -468,29 +489,30 @@ Install Python 3.7 or a later version on a Windows environment and ensure that t
**Executing Test Cases**
-1. On the Windows environment, locate the directory in which the test cases are stored \(**out/release/suites/acts**, copied from the Linux server\), go to the directory in the Windows command window, and run **acts\\run.bat**.
+1. On the Windows environment, locate the directory in which the test cases are stored (**out/release/suites/acts**, copied from the Linux server), go to the directory in the Windows command window, and run **acts\run.bat**.
-1. Enter the command for executing the test case.
+2. Enter the command for executing the test case.
- Execute all test cases.
- ```
- run acts
- ```
-
- 
-
- - Execute the test cases of a module \(view specific module information in **\\acts\\testcases\\**\).
-
- ```
- run –l ActsSamgrTest
- ```
-
- 
-
- Wait until the test cases are complete.
-
+ ```
+ run acts
+ ```
+
+ 
+
+    - Execute the test cases of a module (view specific module information in **\acts\testcases\**).
+
+ ```
+ run –l ActsSamgrTest
+ ```
+
+ 
+
+ You can view specific module information in **\acts\testcases\**.
+
+ Wait until the test cases are complete.
3. View the test report.
- Go to **acts\\reports\\**, obtain the current execution record, and open **summary\_report.html** to view the test report.
+ Go to **acts\reports**, obtain the current execution record, and open **summary_report.html** to view the test report.