diff --git a/en/device-dev/kernel/kernel-mini-basic-ipc-event.md b/en/device-dev/kernel/kernel-mini-basic-ipc-event.md index a39a68606f1a780ea75b05fe788f4d577ac554a5..3339899d1e1b04c94513c64692c93aeb1bea8b0a 100644 --- a/en/device-dev/kernel/kernel-mini-basic-ipc-event.md +++ b/en/device-dev/kernel/kernel-mini-basic-ipc-event.md @@ -1,15 +1,15 @@ -# Events +# Event ## Basic Concepts -An event is a mechanism for communication between tasks. It can be used to synchronize tasks. The events have the following features: +An event is a communication mechanism used to synchronize tasks. Events have the following features: - Events can be synchronized in one-to-many or many-to-many mode. In one-to-many mode, a task can wait for multiple events. In many-to-many mode, multiple tasks can wait for multiple events. However, a write event wakes up only one task from the block. - Event read timeout mechanism is used. -- Events are used only for task synchronization, but not for data transmission. +- Events are used for task synchronization, but not for data transmission. APIs are provided to initialize, read/write, clear, and destroy events. @@ -18,7 +18,7 @@ APIs are provided to initialize, read/write, clear, and destroy events. ### Event Control Block -The event control block is a struct configured in the event initialization function. It is passed in as an input parameter to identify the event for operations such as event read and write. The data structure of the event control block is as follows: +The event control block is a structure in the event initialization function. It passes in event identifies for operations such as event read and write. The data structure of the event control block is as follows: ``` @@ -31,23 +31,33 @@ typedef struct tagEvent { ### Working Principles -**Initializing an event**: An event control block is created to maintain a collection of processed events and a linked list of tasks waiting for specific events. +**Initializing an Event** -**Writing an event**: When a specified event is written to the event control block, the event control block updates the event set, traverses the task linked list, and determines whether to wake up related task based on the task conditions. +An event control block is created to maintain a set of processed events and a linked list of tasks waiting for specific events. -**Reading an event**: If the read event already exists, it is returned synchronously. In other cases, the return time is determined based on the timeout period and event triggering status. If the wait event condition is met before the timeout period expires, the blocked task will be directly woken up. Otherwise, the blocked task will be woken up only after the timeout period has expired. +**Writing an Event** -The input parameters **eventMask** and **mode** determine whether the condition for reading an event is met. **eventMask** indicates the mask of the event. **mode** indicates the handling mode, which can be any of the following: +When an event is written to the event control block, the event control block updates the event set, traverses the task linked list, and determines whether to wake up related tasks based on the task conditions. -- **LOS_WAITMODE_AND**: Event reading is successful only when all the events corresponding to **eventMask** occur. Otherwise, the task will be blocked, or an error code will be returned. +**Reading an Event** -- **LOS_WAITMODE_OR**: Event reading is successful when any of the events corresponding to **eventMask** occur. 
Otherwise, the task will be blocked, or an error code will be returned. +If the event to read already exists, it is returned synchronously. In other cases, the event is returned based on the timeout period and event triggering conditions. If the wait condition is met before the timeout period expires, the blocked task will be directly woken up. Otherwise, the blocked task will be woken up only after the timeout period has expired. + +The parameters **eventMask** and **mode** determine whether the condition for reading an event is met. **eventMask** specifies the event mask. **mode** specifies the handling mode, which can be any of the following: + +- **LOS_WAITMODE_AND**: Read the event only when all the events corresponding to **eventMask** occur. Otherwise, the task will be blocked, or an error code will be returned. + +- **LOS_WAITMODE_OR**: Read the event only when any of the events corresponding to **eventMask** occur. Otherwise, the task will be blocked, or an error code will be returned. - **LOS_WAITMODE_CLR**: This mode must be used with one or all of the event modes (LOS_WAITMODE_AND | LOS_WAITMODE_CLR or LOS_WAITMODE_OR | LOS_WAITMODE_CLR). In this mode, if all event modes or any event mode is successful, the corresponding event type bit in the event control block will be automatically cleared. -**Clearing events**: Clear the event set of the event control block based on the specified mask. If the mask is **0**, the event set will be cleared. If the mask is **0xffff**, no event will be cleared, and the event set remains unchanged. +**Clearing Events** + +The events in the event set of the event control block can be cleared based on the specified mask. The mask **0** means to clear the event set; the mask **0xffff** means the opposite. + +**Destroying Events** -**Destroying an event**: Destroy the specified event control block. +The event control block can be destroyed to release resources. **Figure 1** Event working mechanism for a mini system @@ -58,12 +68,12 @@ The input parameters **eventMask** and **mode** determine whether the condition | Category| API| Description| | -------- | -------- | -------- | -| Event check| LOS_EventPoll | Checks whether the expected event occurs based on **eventID**, **eventMask**, and **mode**.
**NOTICE**

If **mode** contains **LOS_WAITMODE_CLR** and the expected event occurs, the event that meets the requirements in **eventID** will be cleared. In this case, **eventID** is an input parameter and an output parameter. In other cases, **eventID** is used only as an input parameter.| -| Initialization| LOS_EventInit | Initializes an event control block.| -| Event read| LOS_EventRead | Reads an event (wait event). The task will be blocked to wait based on the timeout period (in ticks).
If no event is read, **0** is returned.
If an event is successfully read, a positive value (event set) is returned.
In other cases, an error code is returned.| -| Event write| LOS_EventWrite | Writes an event to the event control block.| -| Event clearance| LOS_EventClear | Clears an event in the event control block based on the event mask.| -| Event destruction| LOS_EventDestroy | Destroys an event control block.| +| Checking an event | LOS_EventPoll | Checks whether the expected event occurs based on **eventID**, **eventMask**, and **mode**.
**NOTE**
If **mode** contains **LOS_WAITMODE_CLR** and the expected event occurs, the event that meets the requirements in **eventID** will be cleared. In this case, **eventID** is an input parameter and an output parameter. In other cases, **eventID** is used only as an input parameter. | +| Initializing an event control block | LOS_EventInit | Initializes an event control block.| +| Reading an event | LOS_EventRead | Reads an event (wait event). The task will be blocked to wait based on the timeout period (in ticks).
If no event is read, **0** is returned.
If an event is successfully read, a positive value (event set) is returned.
In other cases, an error code is returned.| +| Writing an event | LOS_EventWrite | Writes an event to the event control block.| +| Clearing events | LOS_EventClear | Clears events in the event control block based on the event mask. | +| Destroying events | LOS_EventDestroy | Destroys an event control block.| ## How to Develop @@ -72,11 +82,11 @@ The typical event development process is as follows: 1. Initialize an event control block. -2. Block a read event control block. +2. Block a read event. -3. Write related events. +3. Write events. -4. Wake up a blocked task, read the event, and check whether the event meets conditions. +4. Wake up the blocked task, read the event, and check whether the event meets conditions. 5. Handle the event control block. @@ -84,7 +94,7 @@ The typical event development process is as follows: > **NOTE** -> - When an event is read or written, the 25th bit of the event is reserved and cannot be set. +> - For event read and write operations, the 25th bit (`0x02U << 24`) of the event is reserved and cannot be set. > > - Repeated writes of the same event are treated as one write. @@ -111,7 +121,7 @@ In the **ExampleEvent** task, create an **EventReadTask** task with a timout per The sample code is as follows: -The sample code is compiled and verified in **./kernel/liteos_m/testsuites/src/osTest.c**. Call **ExampleEvent()** in **TestTaskEntry**. +The sample code can be compiled and verified in **./kernel/liteos_m/testsuites/src/osTest.c**. The **ExampleEvent()** function is called in **TestTaskEntry**. ``` diff --git a/en/device-dev/kernel/kernel-mini-basic-ipc-queue.md b/en/device-dev/kernel/kernel-mini-basic-ipc-queue.md index 3f874e55624965233b940bf1a33d378120a47762..b0677e6d8074ee0d0fbed29d74074cbc582fe543 100644 --- a/en/device-dev/kernel/kernel-mini-basic-ipc-queue.md +++ b/en/device-dev/kernel/kernel-mini-basic-ipc-queue.md @@ -77,7 +77,7 @@ The preceding figure illustrates how to write data to the tail node only. Writin ## Available APIs -| Category| Description| +| Category| API Description | | -------- | -------- | | Creating or deleting a message queue| **LOS_QueueCreate**: creates a message queue. The system dynamically allocates the queue space.
**LOS_QueueCreateStatic**: creates a static message queue. You need to pass in the queue space.
**LOS_QueueDelete**: deletes a message queue. After a static message queue is deleted, you need to release the queue space.| | Reading or writing data (address without the content) in a queue| **LOS_QueueRead**: reads data in the head node of the specified queue. The data in the queue node is an address.
**LOS_QueueWrite**: writes the **bufferAddr** (buffer address) to the tail node of the specified queue.
**LOS_QueueWriteHead**: writes the **bufferAddr** (buffer address) to the head node of the specified queue.| @@ -136,7 +136,7 @@ Create a queue and two tasks. Enable task 1 to write data to the queue, and task The sample code is as follows: -The sample code is compiled and verified in **./kernel/liteos_m/testsuites/src/osTest.c**. Call **ExampleQueue** in **TestTaskEntry**. +The sample code can be compiled and verified in **./kernel/liteos_m/testsuites/src/osTest.c**. The **ExampleQueue** function is called in **TestTaskEntry**. ``` diff --git a/en/device-dev/kernel/kernel-small-apx-bitwise.md b/en/device-dev/kernel/kernel-small-apx-bitwise.md index a3760fc0c586a410de798654e2d4c3f75c2c39ce..7d2021ff322d40f8bccd7ad3cccb8742b0de1503 100644 --- a/en/device-dev/kernel/kernel-small-apx-bitwise.md +++ b/en/device-dev/kernel/kernel-small-apx-bitwise.md @@ -1,80 +1,42 @@ # Bitwise Operation - ## Basic Concepts -A bitwise operation operates on a binary number at the level of its individual bits. For example, a variable can be set as a program status word \(PSW\), and each bit \(flag bit\) in the PSW can have a self-defined meaning. - -## Available APIs - -The system provides operations for setting the flag bit to **1** or **0**, changing the flag bit content, and obtaining the most significant bit and least significant bit of the flag bit 1 in a PSW. You can also perform bitwise operations on system registers. The following table describes the APIs available for the bitwise operation module. For more details about the APIs, see the API reference. - -**Table 1** Bitwise operation module APIs - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

- [Legacy HTML table removed. It listed: Function | API | Description — Setting the flag bit to 1 or 0: LOS_BitmapSet (sets a flag bit of a PSW to 1), LOS_BitmapClr (sets a flag bit of a PSW to 0); Obtaining the bit whose flag bit is 1: LOS_HighBitGet (obtains the most significant bit of 1 in the PSW), LOS_LowBitGet (obtains the least significant bit of 1 in the PSW); Operating continuous bits: LOS_BitmapSetNBits (sets the continuous flag bits of a PSW to 1), LOS_BitmapClrNBits (sets the continuous flag bits of a PSW to 0), LOS_BitmapFfz (obtains the first 0 bit starting from the least significant bit (LSB)).]
- -## Development Example - -### Example Description +A bitwise operation operates on the bits of a binary number. A variable can be set as a program status word (PSW), and each bit (flag bit) in the PSW can have a self-defined meaning. + + +## **Available APIs** + +The system provides operations for setting the flag bit to **1** or **0**, changing the flag bit content, and obtaining the most significant bit (MSB) and least significant bit (LSB) of the flag bit 1 in a PSW. You can also perform bitwise operations on system registers. The following table describes the APIs available for the bitwise operation module. For more details about the APIs, see the API reference. + + **Table 1** APIs of the bitwise operation module + +| Category | API Description | +| -------- | -------- | +| Setting a flag bit| - **LOS_BitmapSet**: sets a flag bit of a PSW to **1**.
- **LOS_BitmapClr**: sets a flag bit of a PSW to **0**. | +| Obtaining the bit whose flag bit is **1**| - **LOS_HighBitGet**: obtains the most significant bit of 1 in a PSW.<br>
- **LOS_LowBitGet**: obtains the least significant bit of 1 in a PSW. | +| Operating continuous bits| - **LOS_BitmapSetNBits**: sets the consecutive flag bits of a PSW to **1**.
- **LOS_BitmapClrNBits**: sets the consecutive flag bits of a PSW to **0**.
- **LOS_BitmapFfz**: obtains the first 0 bit starting from the LSB. | + + +## Development Example + + +### Example Description This example implements the following: -1. Set a flag bit to **1**. -2. Obtain the most significant bit of flag bit 1. -3. Set a flag bit to **0**. -4. Obtain the least significant bit of the flag bit 1. +1. Set a flag bit to **1**. + +2. Obtain the MSB of flag bit 1. + +3. Set a flag bit to **0**. + +4. Obtain the LSB of flag bit 1. + +### Sample Code + +The sample code can be compiled and verified in **./kernel/liteos_a/testsuites/kernel/src/osTest.c**. The **BitSample** function is called in **TestTaskEntry**. ``` #include "los_bitmap.h" @@ -105,10 +67,12 @@ static UINT32 BitSample(VOID) } ``` + ### Verification The development is successful if the return result is as follows: + ``` Bitmap Sample! The flag is 0x10101010 @@ -117,4 +81,3 @@ LOS_HighBitGet:The highest one bit is 28, the flag is 0x10101110 LOS_BitmapClr: pos : 28, the flag is 0x00101110 LOS_LowBitGet: The lowest one bit is 4, the flag is 0x00101110 ``` - diff --git a/en/device-dev/kernel/kernel-small-apx-dll.md b/en/device-dev/kernel/kernel-small-apx-dll.md index e33e8e55d65e6a5e39fbb33e154557e2751148e9..1baa754b958dfbc5613eb7058e8ed4e24edfa376 100644 --- a/en/device-dev/kernel/kernel-small-apx-dll.md +++ b/en/device-dev/kernel/kernel-small-apx-dll.md @@ -8,19 +8,18 @@ A doubly linked list (DLL) is a linked data structure that consists of a set of ## Available APIs -The table below describes the DLL APIs. For more details about the APIs, see the API reference. - -| **Category**| **API**| -| -------- | -------- | -| Initializing a DLL| - **LOS_ListInit**: initializes a node as a DLL node.
- **LOS_DL_LIST_HEAD**: defines a node and initializes it as a DLL node.| -| Adding a node| - **LOS_ListAdd**: adds a node to the head of a DLL.
- **LOS_ListHeadInsert**: same as **LOS_ListAdd**.
- **LOS_ListTailInsert**: inserts a node to the tail of a DLL.| -| Adding a DLL| - **LOS_ListAddList**: adds the head of a DLL to the head of this DLL.
- **LOS_ListHeadInsertList**: inserts the head of a DLL to the head of this DLL.
- **LOS_ListTailInsertList**: Inserts the end of a DLL to the head of this DLL.| -| Deleting a node| - **LOS_ListDelete**: deletes a node from this DLL.
- **LOS_ListDelInit**: deletes a node from this DLL and uses this node to initialize the DLL.| -| Checking a DLL| - **LOS_ListEmpty**: checks whether a DLL is empty.
- **LOS_DL_LIST_IS_END**: checks whether a node is the tail of the DLL.
- **LOS_DL_LIST_IS_ON_QUEUE**: checks whether a node is in the DLL.| -| Obtains structure information.| - **LOS_OFF_SET_OF**: obtains the offset of a member in the specified structure relative to the start address of the structure.
- **LOS_DL_LIST_ENTRY**: obtains the address of the structure that contains the first node in the DLL. The first input parameter of the API indicates the head node in the list, the second input parameter indicates the name of the structure to be obtained, and the third input parameter indicates the name of the linked list in the structure.
- **LOS_ListPeekHeadType**: obtains the address of the structure that contains the first node in the linked list. The first input parameter of the API indicates the head node in the list, the second input parameter indicates the name of the structure to be obtained, and the third input parameter indicates the name of the linked list in the structure. Null will be returned if the DLL is empty.
- **LOS_ListRemoveHeadType**: obtains the address of the structure that contains the first node in the linked list, and deletes the first node from the list. The first input parameter of the API indicates the head node in the list, the second input parameter indicates the name of the structure to be obtained, and the third input parameter indicates the name of the linked list in the structure. Null will be returned if the DLL is empty.
- **LOS_ListNextType**: obtains the address of the structure that contains the next node of the specified node in the linked list. The first input parameter of the API indicates the head node in the list, the second input parameter indicates the specified node, the third parameter indicates the name of the structure to be obtained, and the fourth input parameter indicates the name of the linked list in the structure. If the next node of the linked list node is the head node and is empty, NULL will be returned.| -| Traversing a DLL| - **LOS_DL_LIST_FOR_EACH**: traverses a DLL.
- **LOS_DL_LIST_FOR_EACH_SAFE**: traverses the DLL and stores the subsequent nodes of the current node for security verification.| -| Traversing the structure that contains the DLL| - **LOS_DL_LIST_FOR_EACH_ENTRY**: traverses a DLL and obtains the address of the structure that contains the linked list node.
- **LOS_DL_LIST_FOR_EACH_ENTRY_SAFE**: traverses a DLL, obtains the address of the structure that contains the linked list node, and stores the address of the structure that contains the subsequent node of the current node.| - +The table below describes APIs available for the DLL. For more details about the APIs, see the API reference. + +| Category | API Description | +| ------------------------ | ------------------------------------------------------------ | +| Initializing a DLL | - **LOS_ListInit**: initializes a node as a DLL node.
- **LOS_DL_LIST_HEAD**: defines a node and initializes it as a DLL node.| +| Adding a node | - **LOS_ListAdd**: adds a node to the head of a DLL.
- **LOS_ListHeadInsert**: same as **LOS_ListAdd**.
- **LOS_ListTailInsert**: inserts a node to the tail of a DLL.| +| Adding a DLL | - **LOS_ListAddList**: adds the head of a DLL to the head of this DLL.
- **LOS_ListHeadInsertList**: inserts the head of a DLL to the head of this DLL.
- **LOS_ListTailInsertList**: inserts the end of a DLL to the head of this DLL.| +| Deleting a node | - **LOS_ListDelete**: deletes a node from this DLL.
- **LOS_ListDelInit**: deletes a node from this DLL and uses this node to initialize the DLL.| +| Checking a DLL | - **LOS_ListEmpty**: checks whether a DLL is empty.
- **LOS_DL_LIST_IS_END**: checks whether a node is the tail of the DLL.
- **LOS_DL_LIST_IS_ON_QUEUE**: checks whether a node is in the DLL.| +| Obtaining structure information | - **LOS_OFF_SET_OF**: obtains the offset of a member in the specified structure relative to the start address of the structure.
- **LOS_DL_LIST_ENTRY**: obtains the address of the structure that contains the first node in the DLL. The first input parameter of the API indicates the head node in the list, the second input parameter indicates the name of the structure to be obtained, and the third input parameter indicates the name of the linked list in the structure.
- **LOS_ListPeekHeadType**: obtains the address of the structure that contains the first node in the linked list. The first input parameter of the API indicates the head node in the list, the second input parameter indicates the name of the structure to be obtained, and the third input parameter indicates the name of the linked list in the structure. Null will be returned if the DLL is empty.
- **LOS_ListRemoveHeadType**: obtains the address of the structure that contains the first node in the linked list, and deletes the first node from the list. The first input parameter of the API indicates the head node in the list, the second input parameter indicates the name of the structure to be obtained, and the third input parameter indicates the name of the linked list in the structure. Null will be returned if the DLL is empty.
- **LOS_ListNextType**: obtains the address of the structure that contains the next node of the specified node in the linked list. The first input parameter of the API indicates the head node in the list, the second input parameter indicates the specified node, the third parameter indicates the name of the structure to be obtained, and the fourth input parameter indicates the name of the linked list in the structure. If the next node of the linked list node is the head node and is empty, NULL will be returned.| +| Traversing a DLL | - **LOS_DL_LIST_FOR_EACH**: traverses a DLL.
- **LOS_DL_LIST_FOR_EACH_SAFE**: traverses the DLL and stores the subsequent nodes of the current node for security verification.| +| Traversing the structure that contains a DLL| - **LOS_DL_LIST_FOR_EACH_ENTRY**: traverses a DLL and obtains the address of the structure that contains the linked list node.
- **LOS_DL_LIST_FOR_EACH_ENTRY_SAFE**: traverses a DLL, obtains the address of the structure that contains the linked list node, and stores the address of the structure that contains the subsequent node of the current node.| ## How to Develop @@ -30,7 +29,7 @@ The typical development process of the DLL is as follows: 2. Call **LOS_ListAdd** to add a node into the DLL. -3. Call **LOS_ListTailInsert** to insert a node to the tail of the DLL. +3. Call **LOS_ListTailInsert** to insert a node into the tail of the DLL. 4. Call **LOS_ListDelete** to delete the specified node. @@ -39,18 +38,19 @@ The typical development process of the DLL is as follows: 6. Call **LOS_ListDelInit** to delete the specified node and initialize the DLL based on the node. -> ![icon-note.gif](public_sys-resources/icon-note.gif) **NOTE**
-> - Pay attention to the operations operations of the front and back pointer of the node. -> +> **NOTE**
+> +> - Pay attention to the operations before and after the node pointer. +> > - The DLL APIs are underlying interfaces and do not check whether the input parameters are empty. You must ensure that the input parameters are valid. -> +> > - If the memory of a linked list node is dynamically allocated, release the memory when deleting the node. - **Development Example** +## Development Example -**Example Description** +### Example Description This example implements the following: @@ -63,7 +63,11 @@ This example implements the following: 4. Check the operation result. +### Sample Code + +The sample code can be compiled and verified in **./kernel/liteos_a/testsuites/kernel/src/osTest.c**. The **ListSample** function is called in **TestTaskEntry**. +The sample code is as follows: ``` #include "stdio.h" @@ -109,6 +113,8 @@ static UINT32 ListSample(VOID) The development is successful if the return result is as follows: + + ``` Initial head Add listNode1 success diff --git a/en/device-dev/kernel/kernel-small-apx-library.md b/en/device-dev/kernel/kernel-small-apx-library.md index c99d339880983a28403409a2caf157a20875c47b..dbbc0a9d9b872a5a0fe0a2c703c1d8783d05b59f 100644 --- a/en/device-dev/kernel/kernel-small-apx-library.md +++ b/en/device-dev/kernel/kernel-small-apx-library.md @@ -1,45 +1,49 @@ # Standard Library -The OpenHarmony kernel uses the musl libc library that supports the Portable Operating System Interface \(POSIX\). You can develop components and applications working on the kernel based on the POSIX. +The OpenHarmony kernel uses the musl libc library that supports the Portable Operating System Interface (POSIX). You can develop components and applications working on the kernel based on the POSIX. + ## Standard Library API Framework -**Figure 1** POSIX framework +**Figure 1** POSIX framework + ![](figures/posix-framework.png "posix-framework") The musl libc library supports POSIX standards. The OpenHarmony kernel adapts the related system call APIs to implement external functions. For details about the APIs supported by the standard library, see the API document of the C library, which also covers the differences between the standard library and the POSIX standard library. -## Development Example -In this example, the main thread creates **THREAD\_NUM** child threads. Once a child thread is started, it enters the standby state. After the main thread successfully wakes up all child threads, they continue to execute until the lifecycle ends. The main thread uses the **pthread\_join** method to wait until all child threads are executed. +### Development Example + + +#### Example Description + +In this example, the main thread creates THREAD_NUM child threads. Once a child thread is started, it enters the standby state. After the main thread successfully wakes up all child threads, they continue to execute until the lifecycle ends. The main thread uses the **pthread_join** method to wait until all child threads are executed. + +#### Sample Code + +The sample code can be compiled and verified in **./kernel/liteos_a/testsuites/kernel/src/osTest.c**. The **ExamplePosix** function is called in **TestTaskEntry**. 
+ +The sample code is as follows: ``` #include #include #include -#ifdef __cplusplus -#if __cplusplus -extern "C" { -#endif /* __cplusplus */ -#endif /* __cplusplus */ - #define THREAD_NUM 3 -int g_startNum = 0; /* Number of started threads */ -int g_wakenNum = 0; /* Number of wakeup threads */ +int g_startNum = 0; /* Number of threads to start */ +int g_wakenNum = 0; /* Number of threads to wake up */ struct testdata { pthread_mutex_t mutex; pthread_cond_t cond; } g_td; -/* - * Entry function of child threads. - */ -static void *ChildThreadFunc(void *arg) +/* Entry function of the child thread */ +static VOID *ChildThreadFunc(VOID *arg) { int rc; pthread_t self = pthread_self(); @@ -47,17 +51,17 @@ static void *ChildThreadFunc(void *arg) /* Acquire a mutex. */ rc = pthread_mutex_lock(&g_td.mutex); if (rc != 0) { - printf("ERROR:take mutex lock failed, error code is %d!\n", rc); + dprintf("ERROR:take mutex lock failed, error code is %d!\n", rc); goto EXIT; } /* The value of g_startNum is increased by 1. The value indicates the number of child threads that have acquired a mutex. */ g_startNum++; - /* Wait for the cond variable. */ + /* Wait for the cond variable. */ rc = pthread_cond_wait(&g_td.cond, &g_td.mutex); if (rc != 0) { - printf("ERROR: pthread condition wait failed, error code is %d!\n", rc); + dprintf("ERROR: pthread condition wait failed, error code is %d!\n", rc); (void)pthread_mutex_unlock(&g_td.mutex); goto EXIT; } @@ -65,52 +69,53 @@ static void *ChildThreadFunc(void *arg) /* Attempt to acquire a mutex, which is failed in normal cases. */ rc = pthread_mutex_trylock(&g_td.mutex); if (rc == 0) { - printf("ERROR: mutex gets an abnormal lock!\n"); + dprintf("ERROR: mutex gets an abnormal lock!\n"); goto EXIT; } /* The value of g_wakenNum is increased by 1. The value indicates the number of child threads that have been woken up by the cond variable. */ g_wakenNum++; - /* Unlock a mutex. */ + /* Release a mutex. */ rc = pthread_mutex_unlock(&g_td.mutex); if (rc != 0) { - printf("ERROR: mutex release failed, error code is %d!\n", rc); + dprintf("ERROR: mutex release failed, error code is %d!\n", rc); goto EXIT; } EXIT: return NULL; } -static int testcase(void) +static int ExamplePosix(VOID) { int i, rc; pthread_t thread[THREAD_NUM]; - /* Initialize a mutex. */ + /* Initialize the mutex. */ rc = pthread_mutex_init(&g_td.mutex, NULL); if (rc != 0) { - printf("ERROR: mutex init failed, error code is %d!\n", rc); + dprintf("ERROR: mutex init failed, error code is %d!\n", rc); goto ERROROUT; } /* Initialize the cond variable. */ rc = pthread_cond_init(&g_td.cond, NULL); if (rc != 0) { - printf("ERROR: pthread condition init failed, error code is %d!\n", rc); + dprintf("ERROR: pthread condition init failed, error code is %d!\n", rc); goto ERROROUT; } - /* Create child threads in batches. The number is specified by THREAD_NUM. */ + /* Create child threads in batches. */ for (i = 0; i < THREAD_NUM; i++) { rc = pthread_create(&thread[i], NULL, ChildThreadFunc, NULL); if (rc != 0) { - printf("ERROR: pthread create failed, error code is %d!\n", rc); + dprintf("ERROR: pthread create failed, error code is %d!\n", rc); goto ERROROUT; } } + dprintf("pthread_create ok\n"); - /* Wait until all child threads lock a mutex. */ + /* Wait until all child threads obtain a mutex. */ while (g_startNum < THREAD_NUM) { usleep(100); } @@ -118,14 +123,14 @@ static int testcase(void) /* Acquire a mutex and block all threads using pthread_cond_wait. 
*/ rc = pthread_mutex_lock(&g_td.mutex); if (rc != 0) { - printf("ERROR: mutex lock failed, error code is %d\n", rc); + dprintf("ERROR: mutex lock failed, error code is %d\n", rc); goto ERROROUT; } - /* Release a mutex. */ + /* Release the mutex. */ rc = pthread_mutex_unlock(&g_td.mutex); if (rc != 0) { - printf("ERROR: mutex unlock failed, error code is %d!\n", rc); + dprintf("ERROR: mutex unlock failed, error code is %d!\n", rc); goto ERROROUT; } @@ -133,7 +138,7 @@ static int testcase(void) /* Broadcast signals on the cond variable. */ rc = pthread_cond_signal(&g_td.cond); if (rc != 0) { - printf("ERROR: pthread condition failed, error code is %d!\n", rc); + dprintf("ERROR: pthread condition failed, error code is %d!\n", rc); goto ERROROUT; } } @@ -142,73 +147,69 @@ static int testcase(void) /* Check whether all child threads are woken up. */ if (g_wakenNum != THREAD_NUM) { - printf("ERROR: not all threads awaken, only %d thread(s) awaken!\n", g_wakenNum); + dprintf("ERROR: not all threads awaken, only %d thread(s) awaken!\n", g_wakenNum); goto ERROROUT; } + dprintf("all threads awaked\n"); - /* Wait for all threads to terminate. */ + /* Join all child threads, that is, wait for the end of all child threads. */ for (i = 0; i < THREAD_NUM; i++) { rc = pthread_join(thread[i], NULL); if (rc != 0) { - printf("ERROR: pthread join failed, error code is %d!\n", rc); + dprintf("ERROR: pthread join failed, error code is %d!\n", rc); goto ERROROUT; } } + dprintf("all threads join ok\n"); /* Destroy the cond variable. */ rc = pthread_cond_destroy(&g_td.cond); if (rc != 0) { - printf("ERROR: pthread condition destroy failed, error code is %d!\n", rc); + dprintf("ERROR: pthread condition destroy failed, error code is %d!\n", rc); goto ERROROUT; } return 0; ERROROUT: return -1; } +``` -/* - * Main function - */ -int main(int argc, char *argv[]) -{ - int rc; +#### Verification - /* Start the test function. */ - rc = testcase(); - if (rc != 0) { - printf("ERROR: testcase failed!\n"); - } + The output is as follows: - return 0; -} -#ifdef __cplusplus -#if __cplusplus -} -#endif /* __cplusplus */ -#endif /* __cplusplus */ +``` +pthread_create ok +all threads awaked +all threads join ok ``` ## Differences from the Linux Standard Library -This section describes the key differences between the standard library carried by the OpenHarmony kernel and the Linux standard library. For more differences, see the API document of the C library. +The following describes the key differences between the standard library supported by the OpenHarmony kernel and the Linux standard library. For more differences, see the API document of the C library. + ### Process -1. The OpenHarmony user-mode processes support only static priorities, which range from 10 \(highest\) to 31 \(lowest\). -2. The OpenHarmony user-mode threads support only static priorities, which range from 0 \(highest\) to 31 \(lowest\). -3. The OpenHarmony process scheduling supports **SCHED\_RR** only, and thread scheduling supports **SCHED\_RR** or **SCHED\_FIFO**. +- The OpenHarmony user-mode processes support only static priorities, which range from 10 (highest) to 31 (lowest). + +- The OpenHarmony user-mode threads support only static priorities, which range from 0 (highest) to 31 (lowest). + +- The OpenHarmony process scheduling supports **SCHED_RR** only, and thread scheduling supports **SCHED_RR** or **SCHED_FIFO**. 
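A minimal sketch of how these scheduling constraints surface through the standard pthread interface is shown below. The chosen policy (SCHED_RR) and the numeric priority are illustrative assumptions only; how the value maps onto the documented 0 (highest) to 31 (lowest) thread priority range should be confirmed with **sched_get_priority_min()**/**sched_get_priority_max()** on the target build.

```
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *Worker(void *arg)
{
    (void)arg;
    printf("worker thread running\n");
    return NULL;
}

int main(void)
{
    pthread_attr_t attr;
    struct sched_param param = { 0 };
    pthread_t tid;

    pthread_attr_init(&attr);
    /* Do not inherit the creator's policy; take it from the attribute object instead. */
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    /* Threads may use SCHED_RR or SCHED_FIFO; SCHED_RR is chosen here. */
    pthread_attr_setschedpolicy(&attr, SCHED_RR);
    /* Illustrative value; confirm the valid range with sched_get_priority_min()/max(). */
    param.sched_priority = 25;
    pthread_attr_setschedparam(&attr, &param);

    if (pthread_create(&tid, &attr, Worker, NULL) != 0) {
        printf("pthread_create failed\n");
        return -1;
    }
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}
```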
+ ### Memory -**h2****Difference with Linux mmap** +**Differences from Linux mmap** + +mmap prototype: **void \*mmap (void \*addr, size_t length, int prot, int flags, int fd, off_t offset)** -mmap prototype: **void \*mmap \(void \*addr, size\_t length, int prot, int flags, int fd, off\_t offset\)** +The lifecycle implementation of **fd** is different from that of Linux glibc. glibc releases the **fd** handle immediately after successfully invoking **mmap** for mapping. In the OpenHarmony kernel, you are not allowed to close the **fd** immediately after the mapping is successful. You can close the **fd** only after **munmap** is called. If you do not close **fd**, the OS reclaims the **fd** when the process exits. -The lifecycle implementation of **fd** is different from that of Linux glibc. glibc releases the **fd** handle immediately after successfully invoking **mmap** for mapping. In the OpenHarmony kernel, you are not allowed to close the **fd** immediately after the mapping is successful. You can close the **fd** only after **munmap** is called. If you do not close **fd**, the OS reclaims the **fd** when the process exits. +**Example** -**h2****Sample Code** -Linux OS: +Linux: ``` int main(int argc, char *argv[]) @@ -226,13 +227,14 @@ int main(int argc, char *argv[]) perror("mmap"); exit(EXIT_FAILURE); } - close(fd); /* OpenHarmony does not support close fd immediately after the mapping is successful. */ + close(fd); /* OpenHarmony does not support closing fd immediately after the mapping is successful. */ ... exit(EXIT_SUCCESS); } ``` -OpenHarmony: + + OpenHarmony: ``` int main(int argc, char *argv[]) @@ -252,27 +254,32 @@ int main(int argc, char *argv[]) } ... munmap(addr, length); - close(fd); /* Close fd after the munmap is canceled. */ + close(fd); /* Close fd after the munmap is canceled. */ exit(EXIT_SUCCESS); } ``` + ### File System -**System directories**: You cannot modify system directories and device mount directories, which include **/dev**, **/proc**, **/app**, **/bin**, **/data**, **/etc**, **/lib**, **/system** and **/usr**. +System directories: You cannot modify system directories and device mount directories, which include **/dev**, **/proc**, **/app**, **/bin**, **/data**, **/etc**, **/lib**, **/system**, and **/usr**. -**User directory**: The user directory refers to the **/storage** directory. You can create, read, and write files in this directory, but cannot mount devices. +User directory: The user directory refers to the **/storage** directory. You can create, read, and write files in this directory, but cannot mount it to a device. + +Except in the system and user directories, you can create directories and mount them to devices. Note that nested mount is not allowed, that is, a mounted folder and its subfolders cannot be mounted repeatedly. A non-empty folder cannot be mounted. -Except in the system and user directories, you can create directories and mount devices. Note that nested mount is not allowed, that is, a mounted folder and its subfolders cannot be mounted repeatedly. A non-empty folder cannot be mounted. ### Signal -- The default behavior for signals does not include **STOP**, **CONTINUE**, or **COREDUMP**. -- A sleeping process \(for example, a process enters the sleeping status by calling the sleep function\) cannot be woken up by a signal. The signal mechanism does not support the wakeup function. The behavior for a signal can be processed only when the process is scheduled by the CPU. 
-- After a process exits, **SIGCHLD** is sent to the parent process. The sending action cannot be canceled. -- Only signals 1 to 30 are supported. The callback is executed only once even if the same signal is received multiple times. +- The default behavior for signals does not include **STOP**, **CONTINUE**, or **COREDUMP**. -### Time +- A sleeping process (for example, a process enters the sleeping status by calling the sleep function) cannot be woken up by a signal. The signal mechanism does not support the wakeup function. The behavior for a signal can be processed only when the process is scheduled by the CPU. -The OpenHarmony time precision is based on tick. The default value is 10 ms/tick. The time error of the **sleep** and **timeout** functions is less than or equal to 20 ms. +- After a process exits, **SIGCHLD** is sent to the parent process. The sending action cannot be canceled. + +- Only signals 1 to 30 are supported. The callback is invoked only once even if the same signal is received multiple times. + + +### Time +The default time precision of OpenHarmony is 10 ms/tick. The time error of the **sleep** and **timeout** functions is less than or equal to 20 ms. diff --git a/en/device-dev/kernel/kernel-small-basic-trans-event.md b/en/device-dev/kernel/kernel-small-basic-trans-event.md index 2aba10352fbf9691cb4ab825f00ec28564d14c44..7d478d71a13aebbfa152bebd7832496887ebdfb7 100644 --- a/en/device-dev/kernel/kernel-small-basic-trans-event.md +++ b/en/device-dev/kernel/kernel-small-basic-trans-event.md @@ -1,146 +1,145 @@ # Event -## Basic Concepts -An event is a mechanism for communication between tasks. It can be used to synchronize tasks. +## Basic Concepts + +An event is a communication mechanism used to synchronize tasks. In multi-task environment, synchronization is required between tasks. Events can be used for synchronization in the following cases: -- One-to-many synchronization: A task waits for the triggering of multiple events. A task is woken up by one or multiple events. -- Many-to-many synchronization: Multiple tasks wait for the triggering of multiple events. +- One-to-many synchronization: A task waits for the triggering of multiple events. A task can be woken up by one or multiple events. + +- Many-to-many synchronization: Multiple tasks wait for the triggering of multiple events. The event mechanism provided by the OpenHarmony LiteOS-A event module has the following features: -- A task triggers or waits for an event by creating an event control block. -- Events are independent of each other. The internal implementation is a 32-bit unsigned integer, and each bit indicates an event type. The 25th bit is unavailable. Therefore, a maximum of 31 event types are supported. -- Events are used only for synchronization between tasks, but not for data transmission. -- Writing the same event type to the event control block for multiple times is equivalent to writing the event type only once before the event control block is cleared. -- Multiple tasks can read and write the same event. -- The event read/write timeout mechanism is supported. +- A task triggers or waits for an event by creating an event control block. + +- Events are independent of each other. The internal implementation is a 32-bit unsigned integer, and each bit indicates an event type. The value **0** indicates that the event type does not occur, and the value **1** indicates that the event type has occurred. There are 31 event types in total. The 25th bit (`0x02U << 24`) is reserved. 
+ +- Events are used for task synchronization, but not for data transmission. + +- Writing the same event type to an event control block multiple times is equivalent to writing the event type only once before the event control block is cleared. + +- Multiple tasks can read and write the same event. -## Working Principles +- The event read/write timeout mechanism is supported. + + +## Working Principles + + +### Event Control Block -### Event Control Block ``` /** -* Event control block data structure + * Event control block data structure */ typedef struct tagEvent { UINT32 uwEventID; /* Event set, which is a collection of events processed (written and cleared). */ - LOS_DL_LIST stEventList; /* List of tasks waiting for specific events */ + LOS_DL_LIST stEventList; /* List of tasks waiting for specific events. */ } EVENT_CB_S, *PEVENT_CB_S; ``` -### Working Principles -**Initializing an event**: An event control block is created to maintain a collection of processed events and a linked list of tasks waiting for specific events. +### Working Principles + +**Initializing an Event** + +An event control block is created to maintain a set of processed events and a linked list of tasks waiting for specific events. -**Writing an event**: When a specified event is written to the event control block, the event control block updates the event set, traverses the task linked list, and determines whether to wake up related task based on the task conditions. +**Writing an Event** -**Reading an event**: If the read event already exists, it is returned synchronously. In other cases, the return time is determined based on the timeout period and event triggering status. If the wait event condition is met before the timeout period expires, the blocked task will be directly woken up. Otherwise, the blocked task will be woken up only after the timeout period has expired. +When an event is written to the event control block, the event control block updates the event set, traverses the task linked list, and determines whether to wake up related task based on the specified conditions. -The input parameters **eventMask** and **mode** determine whether the condition for reading an event is met. **eventMask** indicates the mask of the event. **mode** indicates the handling mode, which can be any of the following: +**Reading an Event** -- **LOS\_WAITMODE\_AND**: Event reading is successful only when all the events corresponding to **eventMask** occur. Otherwise, the task will be blocked, or an error code will be returned. -- **LOS\_WAITMODE\_OR**: Event reading is successful when any of the events corresponding to **eventMask** occurs. Otherwise, the task will be blocked, or an error code will be returned. -- **LOS\_WAITMODE\_CLR**: This mode must be used with **LOS\_WAITMODE\_AND** or **LOS\_WAITMODE\_OR** \(LOS\_WAITMODE\_AND | LOS\_WAITMODE\_CLR or LOS\_WAITMODE\_OR | LOS\_WAITMODE\_CLR\). In this mode, if **LOS\_WAITMODE\_AND** or **LOS\_WAITMODE\_OR** is successful, the corresponding event type bit in the event control block will be automatically cleared. +If the event to read already exists, it is returned synchronously. In other cases, the event is returned based on the timeout period and event triggering conditions. If the wait condition is met before the timeout period expires, the blocked task will be directly woken up. Otherwise, the blocked task will be woken up only after the timeout period has expired. -**Clearing events**: Clear the event set of the event control block based on the specified mask. 
If the mask is **0**, the event set will be cleared. If the mask is **0xffff**, no event will be cleared, and the event set remains unchanged. +The parameters **eventMask** and **mode** determine whether the condition for reading an event is met. **eventMask** specifies the event mask. **mode** specifies the handling mode, which can be any of the following: -**Destroying an event**: Destroy the specified event control block. +- **LOS_WAITMODE_AND**: Read the event only when all the events corresponding to **eventMask** occur. Otherwise, the task will be blocked, or an error code will be returned. -**Figure 1** Event working mechanism for small systems -![](figures/event-working-mechanism-for-small-systems.png "event-working-mechanism-for-small-systems") +- **LOS_WAITMODE_OR**: Read the event only when any of the events corresponding to **eventMask** occur. Otherwise, the task will be blocked, or an error code will be returned. -## Development Guidelines +- **LOS_WAITMODE_CLR**: This mode must be used with one or all of the event modes (LOS_WAITMODE_AND | LOS_WAITMODE_CLR or LOS_WAITMODE_OR | LOS_WAITMODE_CLR). In this mode, if all event modes or any event mode is successful, the corresponding event type bit in the event control block will be automatically cleared. -### Available APIs +**Clearing Events** + +The events in the event set of the event control block can be cleared based on the specified mask. The mask **0** means to clear the event set; the mask **0xffff** means the opposite. + +**Destroying Events** + +The event control block can be destroyed to release resources. + +**Figure 1** Event working mechanism for small systems + + ![](figures/event-working-mechanism-for-small-systems.png "event-working-mechanism-for-small-systems") + + +## Development Guidelines + + +### Available APIs The following table describes APIs available for the OpenHarmony LiteOS-A event module. -**Table 1** Event module APIs - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

- [Legacy HTML table removed. It listed: Function | API | Description — Initializing events: LOS_EventInit (initializes an event control block); Reading/Writing events: LOS_EventRead (reads a specified type of event, with the timeout period of a relative time period in ticks), LOS_EventWrite (writes a specified type of event); Clearing events: LOS_EventClear (clears a specified type of event); Checking the event mask: LOS_EventPoll (returns whether the event input by the user meets the expectation based on the event ID, event mask, and read mode passed by the user); Destroying events: LOS_EventDestroy (destroys a specified event control block).]
- -### How to Develop +**Table 1** APIs of the event module + +| Category| API Description | +| -------- | -------- | +| Initializing an event| **LOS_EventInit**: initializes an event control block.| +| Reading/Writing an event| - **LOS_EventRead**: reads an event, with a relative timeout period in ticks.
- **LOS_EventWrite**: writes an event. | +| Clearing events| **LOS_EventClear**: clears a specified type of events.| +| Checking the event mask| **LOS_EventPoll**: checks whether the specified event occurs.| +| Destroying events | **LOS_EventDestroy**: destroys an event control block.| + + +### How to Develop The typical event development process is as follows: -1. Initialize an event control block. -2. Block a read event control block. -3. Write related events. -4. Wake up a blocked task, read the event, and check whether the event meets conditions. -5. Handle the event control block. -6. Destroy an event control block. +1. Initialize an event control block. + +2. Block a read event. + +3. Write related events. + +4. Wake up a blocked task, read the event, and check whether the event meets conditions. + +5. Handle the event control block. + +6. Destroy an event control block. + +> **NOTE** +> +> - For event read and write operations, the 25th bit (`0x02U << 24`) of the event is reserved and cannot be set. +> +> - Repeated writes of the same event are treated as one write. + + +## Development Example + + +### Example Description ->![](../public_sys-resources/icon-note.gif) **NOTE:** ->- When an event is read or written, the 25th bit of the event is reserved and cannot be set. ->- Repeated writes of the same event are treated as one write. +In this example, run the **Example_TaskEntry** task to create the **Example_Event** task. Run the **Example_Event** task to read an event to trigger task switching. Run the **Example_TaskEntry** task to write an event. You can understand the task switching during event operations based on the sequence in which logs are recorded. -## Development Example +1. Create the **Example_Event** task in the **Example_TaskEntry** task with a higher priority than the **Example_TaskEntry** task. -### Example Description +2. Run the **Example_Event** task to read event **0x00000001**. Task switching is triggered to execute the **Example_TaskEntry** task. -In this example, run the **Example\_TaskEntry** task to create the **Example\_Event** task, run the **Example\_Event** task to read an event to trigger task switching, and run the **Example\_TaskEntry** task to write an event. You can understand the task switching during event operations based on the sequence in which logs are recorded. +3. Run the **Example_TaskEntry** task to write event **0x00000001**. Task switching is triggered to execute the **Example_Event** task. -1. Create the **Example\_Event** task in the **Example\_TaskEntry** task with a higher priority than the **Example\_TaskEntry** task. -2. Run the **Example\_Event** task to read event **0x00000001**. Task switching is triggered to execute the **Example\_TaskEntry** task. -3. Run the **Example\_TaskEntry** task to write event **0x00000001**. Task switching is triggered to execute the **Example\_Event** task. -4. The **Example\_Event** task is executed. -5. The **Example\_TaskEntry** task is executed. +4. The **Example_Event** task is executed. -### Sample Code +5. The **Example_TaskEntry** task is executed. + + +### Sample Code + +The sample code can be compiled and verified in **./kernel/liteos_a/testsuites/kernel/src/osTest.c**. The **Example_EventEntry** function is called in **TestTaskEntry**. 
The sample code is as follows: @@ -149,28 +148,28 @@ The sample code is as follows: #include "los_task.h" #include "securec.h" -/* Task ID*/ +/* Task ID */ UINT32 g_testTaskId; -/* Event control structure*/ +/* Event control structure */ EVENT_CB_S g_exampleEvent; -/* Type of the wait event*/ -#define EVENT_WAIT 0x00000001 - -/* Example task entry function*/ +/* Type of the wait event */ +#define EVENT_WAIT 0x00000001 +#define EVENT_TIMEOUT 500 +/* Example task entry function */ VOID Example_Event(VOID) { UINT32 event; - /* Set a timeout period for event reading to 100 ticks. If the specified event is not read within 100 ticks, the read operation times out and the task is woken up.*/ - printf("Example_Event wait event 0x%x \n", EVENT_WAIT); + /* Set a timeout period for event reading to 100 ticks. If the specified event is not read within 100 ticks, the read operation times out and the task is woken up. */ + dprintf("Example_Event wait event 0x%x \n", EVENT_WAIT); - event = LOS_EventRead(&g_exampleEvent, EVENT_WAIT, LOS_WAITMODE_AND, 100); + event = LOS_EventRead(&g_exampleEvent, EVENT_WAIT, LOS_WAITMODE_AND, EVENT_TIMEOUT); if (event == EVENT_WAIT) { - printf("Example_Event,read event :0x%x\n", event); + dprintf("Example_Event,read event :0x%x\n", event); } else { - printf("Example_Event,read event timeout\n"); + dprintf("Example_Event,read event timeout\n"); } } @@ -179,14 +178,14 @@ UINT32 Example_EventEntry(VOID) UINT32 ret; TSK_INIT_PARAM_S task1; - /* Initialize the event.*/ + /* Initialize the event. */ ret = LOS_EventInit(&g_exampleEvent); if (ret != LOS_OK) { - printf("init event failed .\n"); + dprintf("init event failed .\n"); return -1; } - /* Create a task.*/ + /* Create a task. */ (VOID)memset_s(&task1, sizeof(TSK_INIT_PARAM_S), 0, sizeof(TSK_INIT_PARAM_S)); task1.pfnTaskEntry = (TSK_ENTRY_FUNC)Example_Event; task1.pcName = "EventTsk1"; @@ -194,39 +193,34 @@ UINT32 Example_EventEntry(VOID) task1.usTaskPrio = 5; ret = LOS_TaskCreate(&g_testTaskId, &task1); if (ret != LOS_OK) { - printf("task create failed.\n"); + dprintf("task create failed.\n"); return LOS_NOK; } /* Write the task wait event (g_testTaskId). */ - printf("Example_TaskEntry write event.\n"); + dprintf("Example_TaskEntry write event.\n"); ret = LOS_EventWrite(&g_exampleEvent, EVENT_WAIT); if (ret != LOS_OK) { - printf("event write failed.\n"); + dprintf("event write failed.\n"); return LOS_NOK; } - /* Clear the flag.*/ - printf("EventMask:%d\n", g_exampleEvent.uwEventID); + /* Clear the flag. */ + dprintf("EventMask:%d\n", g_exampleEvent.uwEventID); LOS_EventClear(&g_exampleEvent, ~g_exampleEvent.uwEventID); - printf("EventMask:%d\n", g_exampleEvent.uwEventID); - - /* Delete the task.*/ - ret = LOS_TaskDelete(g_testTaskId); - if (ret != LOS_OK) { - printf("task delete failed.\n"); - return LOS_NOK; - } + dprintf("EventMask:%d\n", g_exampleEvent.uwEventID); return LOS_OK; } ``` -### Verification + +### Verification The development is successful if the return result is as follows: + ``` Example_Event wait event 0x1 Example_TaskEntry write event. 
@@ -234,4 +228,3 @@ Example_Event,read event :0x1 EventMask:1 EventMask:0 ``` - diff --git a/en/device-dev/kernel/kernel-small-basic-trans-mutex.md b/en/device-dev/kernel/kernel-small-basic-trans-mutex.md index a911f97e1f894004b5cf48fea296982fe1d4d9b5..4d16065f285430f1a4b4007d65d6e704f4695f3a 100644 --- a/en/device-dev/kernel/kernel-small-basic-trans-mutex.md +++ b/en/device-dev/kernel/kernel-small-basic-trans-mutex.md @@ -1,196 +1,118 @@ # Mutex +## Basic Concepts -## Basic Concepts - -A mutual exclusion \(mutex\) is a special binary semaphore used for exclusive access to shared resources. When a task holds the mutex, the task obtains the ownership of the mutex. When the task releases the mutex, the task will lose the ownership of the mutex. When a task holds a mutex, other tasks cannot hold the mutex. In an environment where multiple tasks compete for shared resources, the mutex ensures exclusive access to the shared resources. +A mutual exclusion (mutex) is a special binary semaphore used for exclusive access to shared resources. When a task holds the mutex, the task obtains the ownership of the mutex. When the task releases the mutex, the task will lose the ownership of the mutex. When a task holds a mutex, other tasks cannot hold the mutex. In an environment where multiple tasks compete for shared resources, the mutex ensures exclusive access to the shared resources. A mutex has three attributes: protocol attribute, priority upper limit attribute, and type attribute. The protocol attribute is used to handle a mutex requested by tasks of different priorities. The protocol attribute can be any of the following: -- LOS\_MUX\_PRIO\_NONE - - Do not inherit or protect the priority of the task requesting the mutex. +- LOS_MUX_PRIO_NONE + -- LOS\_MUX\_PRIO\_INHERIT +Do not inherit or protect the priority of the task requesting the mutex. - Inherits the priority of the task that requests the mutex. This is the default protocol attribute. When the mutex protocol attribute is set to this value: If a task with a higher priority is blocked because the mutex is already held by a task, the priority of the task holding the mutex will be backed up to the priority bitmap of the task control block, and then set to be the same as that of the task of a higher priority. When the task holding the mutex releases the mutex, its task priority is restored to its original value. +- LOS_MUX_PRIO_INHERIT + -- LOS\_MUX\_PRIO\_PROTECT - - Protects the priority of the task that requests the mutex. When the mutex protocol attribute is set to this value: If the priority of the task that requests the mutex is lower than the upper limit of the mutex priority, the task priority will be backed up to the priority bitmap of the task control block, and then set to the upper limit value of the mutex priority. When the mutex is released, the task priority is restored to its original value. +Inherits the priority of the task that requests the mutex. This is the default protocol attribute. When the mutex protocol attribute is set to this value: If a task with a higher priority is blocked because the mutex is already held by a task, the priority of the task holding the mutex will be backed up to the priority bitmap of the task control block, and then set to be the same as that of the task of a higher priority. When the task holding the mutex releases the mutex, its task priority is restored to its original value. +- LOS_MUX_PRIO_PROTECT + + Protects the priority of the task that requests the mutex. 
When the mutex protocol attribute is set to this value: If the priority of the task that requests the mutex is lower than the upper limit of the mutex priority, the task priority will be backed up to the priority bitmap of the task control block, and then set to the upper limit value of the mutex priority. When the mutex is released, the task priority is restored to its original value. The type attribute of a mutex specifies whether to check for deadlocks and whether to support recursive holding of the mutex. The type attribute can be any of the following: -- LOS\_MUX\_NORMAL - - Common mutex, which does not check for deadlocks. If a task repeatedly attempts to hold a mutex, the thread will be deadlocked. If the mutex type attribute is set to this value, a task cannot release a mutex held by another task or repeatedly release a mutex. Otherwise, unexpected results will be caused. +- LOS_MUX_NORMAL + -- LOS\_MUX\_RECURSIVE +Common mutex, which does not check for deadlocks. If a task repeatedly attempts to hold a mutex, the thread will be deadlocked. If the mutex type attribute is set to this value, a task cannot release a mutex held by another task or repeatedly release a mutex. Otherwise, unexpected results will be caused. - Recursive mutex, which is the default attribute. If the type attribute of a mutex is set to this value, a task can hold the mutex for multiple times. Another task can hold this mutex only when the number of lock holding times is the same as the number of lock release times. However, any attempt to hold a mutex held by another task or attempt to release a mutex that has been released will return an error code. +- LOS_MUX_RECURSIVE + -- LOS\_MUX\_ERRORCHECK +Recursive mutex, which is the default attribute. If the type attribute of a mutex is set to this value, a task can hold the mutex for multiple times. Another task can hold this mutex only when the number of lock holding times is the same as the number of lock release times. However, any attempt to hold a mutex held by another task or attempt to release a mutex that has been released will return an error code. - Allows automatic check for deadlocks. When a mutex is set to this type, an error code will be returned if a task attempts to repeatedly hold the mutex, attempts to release the mutex held by another task, or attempts to release the mutex that has been released. +- LOS_MUX_ERRORCHECK + + Mutex for error checks. When a mutex is set to this type, an error code will be returned if a task attempts to repeatedly hold the mutex, attempts to release the mutex held by another task, or attempts to release the mutex that has been released. -## Working Principles +## Working Principles -In a multi-task environment, multiple tasks may access the same shared resource. However, certain shared resources are not shared, and can only be accessed exclusively by tasks. A mutex can be used to address this issue. +In a multi-task environment, multiple tasks may access the same shared resources. However, certain shared resources are not shared, and can only be accessed exclusively by tasks. A mutex can be used to address this issue. When non-shared resources are accessed by a task, the mutex is locked. Other tasks will be blocked until the mutex is released by the task. The mutex allows only one task to access the shared resources at a time, ensuring integrity of operations on the shared resources. 
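To make the locking pattern concrete, the sketch below protects a shared counter with a mutex. It assumes the LiteOS-A kernel-mode type and prototypes (**LosMux** in **los_mux.h**, **LOS_MuxInit**/**LOS_MuxLock**/**LOS_MuxUnlock**, and the **LOS_WAIT_FOREVER** timeout constant); check the exact signatures against the API reference before use.

```
#include "los_mux.h"

static LosMux g_resMux;          /* Protects g_sharedCount. */
static UINT32 g_sharedCount = 0; /* Shared resource accessed by multiple tasks. */

UINT32 SharedCountInit(VOID)
{
    /* NULL selects the default attributes (LOS_MUX_RECURSIVE, LOS_MUX_PRIO_INHERIT). */
    return LOS_MuxInit(&g_resMux, NULL);
}

UINT32 SharedCountInc(VOID)
{
    UINT32 ret;

    /* Block until the mutex is available; LOS_WAIT_FOREVER means no timeout. */
    ret = LOS_MuxLock(&g_resMux, LOS_WAIT_FOREVER);
    if (ret != LOS_OK) {
        return ret;
    }

    g_sharedCount++;             /* Critical section: only the lock holder runs this. */

    return LOS_MuxUnlock(&g_resMux);
}
```

If blocking is undesirable, **LOS_MuxTrylock** can be used to attempt the lock and return immediately, and **LOS_MuxDestroy** releases the control block once the mutex is no longer needed.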
-**Figure 1** Mutex working mechanism for small systems
+ **Figure 1** Mutex working mechanism for the small system
+
![](figures/mutex-working-mechanism-for-small-systems.png "mutex-working-mechanism-for-small-systems")
-## Development Guidelines
-
-### Available APIs
-
-**Table 1** Mutex module APIs
-
-(former HTML table with Function/API/Description rows for LOS_MuxInit, LOS_MuxDestroy, LOS_MuxLock, LOS_MuxTrylock, LOS_MuxUnlock, LOS_MuxIsValid, LOS_MuxAttrInit, LOS_MuxAttrDestroy, LOS_MuxAttrGetType, LOS_MuxAttrSetType, LOS_MuxAttrGetProtocol, LOS_MuxAttrSetProtocol, LOS_MuxAttrGetPrioceiling, LOS_MuxAttrSetPrioceiling, LOS_MuxGetPrioceiling, and LOS_MuxSetPrioceiling)
- -### How to Develop + +## Development Guidelines + + +### Available APIs + + **Table 1** APIs of the mutex module + +| Category| API Description | +| -------- | -------- | +| Initializing or destroying a mutex| - **LOS_MuxInit**: initializes a mutex.
- **LOS_MuxDestroy**: destroys a mutex.|
+| Requesting or releasing a mutex| - **LOS_MuxLock**: requests a mutex.<br>- **LOS_MuxTrylock**: requests a mutex without blocking.<br>- **LOS_MuxUnlock**: releases a mutex.|
+| Verifying a mutex| **LOS_MuxIsValid**: checks whether a mutex is valid.|
+| Initializing or destroying mutex attributes| - **LOS_MuxAttrInit**: initializes mutex attributes.<br>- **LOS_MuxAttrDestroy**: destroys the specified mutex attributes.|
+| Setting and obtaining mutex attributes| - **LOS_MuxAttrGetType**: obtains the type attribute of a mutex.<br>- **LOS_MuxAttrSetType**: sets the type attribute for a mutex.<br>- **LOS_MuxAttrGetProtocol**: obtains the protocol attribute of a mutex.<br>- **LOS_MuxAttrSetProtocol**: sets the protocol attribute for a mutex.<br>- **LOS_MuxAttrGetPrioceiling**: obtains the priority upper limit attribute of a mutex.<br>- **LOS_MuxAttrSetPrioceiling**: sets the priority upper limit attribute for a mutex.<br>- **LOS_MuxGetPrioceiling**: obtains the priority upper limit of this mutex.<br>
- **LOS_MuxSetPrioceiling**: sets the priority upper limit for this mutex. | + + +### How to Develop The typical mutex development process is as follows: -1. Call **LOS\_MuxInit** to initialize a mutex. +1. Call **LOS_MuxInit** to initialize a mutex. -2. Call **LOS\_MuxLock** to request a mutex. +2. Call **LOS_MuxLock** to request a mutex. The following modes are available: -- Non-block mode: A task acquires the mutex if the requested mutex is not held by any task or the task holding the mutex is the same as the task requesting the mutex. -- Permanent block mode: A task acquires the mutex if the requested mutex is not occupied. If the mutex is occupied, the task will be blocked and the task with the highest priority in the ready queue will be executed. The blocked task can be unlocked and executed only when the mutex is released. -- Scheduled block mode: A task acquires the mutex if the requested mutex is not occupied. If the mutex is occupied, the task will be blocked and the task with the highest priority in the ready queue will be executed. The blocked task can be executed only when the mutex is released within the specified timeout period or when the specified timeout period expires. +- Non-block mode: A task acquires the mutex if the requested mutex is not held by any task or the task holding the mutex is the same as the task requesting the mutex. + +- Permanent block mode: A task acquires the mutex if the requested mutex is not occupied. If the mutex is occupied, the task will be blocked and the task with a highest priority in the ready queue will be executed. The blocked task can be unlocked and executed only when the mutex is released. -3. Call **LOS\_MuxUnlock** to release a mutex. +- Scheduled block mode: A task acquires the mutex if the requested mutex is not occupied. If the mutex is occupied, the task will be blocked and the task with the highest priority in the ready queue will be executed. The blocked task can be executed only when the mutex is released within the specified timeout period or when the specified timeout period expires. -- If tasks are blocked by the specified mutex, the task with a higher priority will be unblocked when the mutex is released. The unblocked task changes to the Ready state and is scheduled. -- If no task is blocked by the specified mutex, the mutex is released successfully. +3. Call **LOS_MuxUnlock** to release a mutex. -4. Call **LOS\_MuxDestroy** to destroy a mutex. +- If tasks are blocked by the specified mutex, the task with a higher priority will be unblocked when the mutex is released. The unblocked task changes to the Ready state and is scheduled. ->![](../public_sys-resources/icon-note.gif) **NOTE:** ->- Two tasks cannot lock the same mutex. If a task attempts to lock a mutex held by another task, the task will be blocked until the mutex is unlocked. ->- Mutexes cannot be used in the interrupt service program. ->- When using the LiteOS-A kernel, the OpenHarmony must ensure real-time task scheduling and avoid long-time task blocking. Therefore, a mutex must be released as soon as possible after use. +- If no task is blocked by the specified mutex, the mutex is released successfully. -### Development Example +4. Call **LOS_MuxDestroy** to destroy a mutex. -Example Description +> **NOTE**
+> - Two tasks cannot lock the same mutex. If a task attempts to lock a mutex held by another task, the task will be blocked until the mutex is unclocked. +> +> - Mutexes cannot be used in the interrupt service program. +> +> - The system using the LiteOS-A kernel must ensure real-time task scheduling and avoid long-time task blocking. Therefore, a mutex must be released as soon as possible after use. + + +### Development Example + +#### Example Description This example implements the following: -1. Create a mutex in the **Example\_TaskEntry** task, and lock task scheduling. Create two tasks **Example\_MutexTask1** and **Example\_MutexTask2**. and unlock task scheduling. -2. When being scheduled, **Example\_MutexTask2** requests a mutex in permanent block mode. After acquiring the mutex, **Example\_MutexTask2** enters the sleep mode for 100 ticks. **Example\_MutexTask2** is suspended, and **Example\_MutexTask1** is woken up. -3. **Example\_MutexTask1** requests a mutex in scheduled block mode, and waits for 10 ticks. Because the mutex is still held by **Example\_MutexTask2**, **Example\_MutexTask1** is suspended. After 10 ticks, **Example\_MutexTask1** is woken up and attempts to request a mutex in permanent block mode. **Example\_MutexTask1** is suspended because the mutex is still held by **Example\_MutexTask2**. -4. After 100 ticks, **Example\_MutexTask2** is woken up and releases the mutex, and then **Example\_MutexTask1** is woken up. **Example\_MutexTask1** acquires the mutex and then releases the mutex. At last, the mutex is deleted. +1. Create the **Example_TaskEntry** task. In this task, create a mutex to lock task scheduling, and create two tasks **Example_MutexTask1** (with a lower priority) and **Example_MutexTask2** (with a higher priority) to unlock task scheduling. + +2. When being scheduled, **Example_MutexTask2** requests a mutex in permanent block mode. After acquiring the mutex, **Example_MutexTask2** enters the sleep mode for 100 ticks. **Example_MutexTask2** is suspended, and **Example_MutexTask1** is woken up. + +3. **Example_MutexTask1** requests a mutex in scheduled block mode, and waits for 10 ticks. Because the mutex is still held by **Example_MutexTask2**, **Example_MutexTask1** is suspended. After 10 ticks, **Example_MutexTask1** is woken up and attempts to request a mutex in permanent block mode. **Example_MutexTask1** is suspended because the mutex is still held by **Example_MutexTask2**. + +4. After 100 ticks, **Example_MutexTask2** is woken up and releases the mutex, and then **Example_MutexTask1** is woken up. **Example_MutexTask1** acquires the mutex and then releases the mutex. At last, the mutex is deleted. -**Sample Code** +#### Sample Code + +The sample code can be compiled and verified in **./kernel/liteos_a/testsuites/kernel/src/osTest.c**. The **Example_MutexEntry** function is called in **TestTaskEntry**. The sample code is as follows: @@ -199,7 +121,7 @@ The sample code is as follows: #include "los_mux.h" /* Mutex */ -LosMux g_testMux; +LosMux g_testMutex; /* Task ID*/ UINT32 g_testTaskId01; UINT32 g_testTaskId02; @@ -207,48 +129,49 @@ UINT32 g_testTaskId02; VOID Example_MutexTask1(VOID) { UINT32 ret; + LOS_TaskDelay(50); - printf("task1 try to get mutex, wait 10 ticks.\n"); - /* Request a mutex.*/ - ret = LOS_MuxLock(&g_testMux, 10); + dprintf("task1 try to get mutex, wait 10 ticks.\n"); + /* Request a mutex. 
*/ + ret = LOS_MuxLock(&g_testMutex, 10); if (ret == LOS_OK) { - printf("task1 get mutex g_testMux.\n"); - /* Release the mutex.*/ - LOS_MuxUnlock(&g_testMux); + dprintf("task1 get mutex g_testMux.\n"); + /* Release the mutex. */ + LOS_MuxUnlock(&g_testMutex); return; - } - if (ret == LOS_ETIMEDOUT ) { - printf("task1 timeout and try to get mutex, wait forever.\n"); - /* Request a mutex.*/ - ret = LOS_MuxLock(&g_testMux, LOS_WAIT_FOREVER); - if (ret == LOS_OK) { - printf("task1 wait forever, get mutex g_testMux.\n"); - /*Release the mutex.*/ - LOS_MuxUnlock(&g_testMux); - /* Delete the mutex. */ - LOS_MuxDestroy(&g_testMux); - printf("task1 post and delete mutex g_testMux.\n"); - return; - } + } + if (ret == LOS_ETIMEDOUT) { + dprintf("task1 timeout and try to get mutex, wait forever.\n"); + /* Request a mutex. */ + ret = LOS_MuxLock(&g_testMutex, LOS_WAIT_FOREVER); + if (ret == LOS_OK) { + dprintf("task1 wait forever, get mutex g_testMux.\n"); + /* Release the mutex. */ + LOS_MuxUnlock(&g_testMutex); + /* Delete the mutex. */ + LOS_MuxDestroy(&g_testMutex); + dprintf("task1 post and delete mutex g_testMux.\n"); + return; + } } return; } VOID Example_MutexTask2(VOID) { - printf("task2 try to get mutex, wait forever.\n"); - /* Request a mutex.*/ - (VOID)LOS_MuxLock(&g_testMux, LOS_WAIT_FOREVER); + dprintf("task2 try to get mutex, wait forever.\n"); + /* Request a mutex. */ + (VOID)LOS_MuxLock(&g_testMutex, LOS_WAIT_FOREVER); - printf("task2 get mutex g_testMux and suspend 100 ticks.\n"); + dprintf("task2 get mutex g_testMux and suspend 100 ticks.\n"); - /* Enable the task to enter sleep mode for 100 ticks.*/ + /* Enable the task to enter sleep mode for 100 ticks. */ LOS_TaskDelay(100); - printf("task2 resumed and post the g_testMux\n"); - /* Release the mutex.*/ - LOS_MuxUnlock(&g_testMux); + dprintf("task2 resumed and post the g_testMux\n"); + /* Release the mutex. */ + LOS_MuxUnlock(&g_testMutex); return; } @@ -258,13 +181,13 @@ UINT32 Example_MutexEntry(VOID) TSK_INIT_PARAM_S task1; TSK_INIT_PARAM_S task2; - /* Initializes the mutex./ - LOS_MuxInit(&g_testMux, NULL); + /* Initialize the mutex. */ + LOS_MuxInit(&g_testMutex, NULL); - /* Lock task scheduling.*/ + /* Lock task scheduling. */ LOS_TaskLock(); - /* Create task 1.*/ + /* Create task 1. */ memset(&task1, 0, sizeof(TSK_INIT_PARAM_S)); task1.pfnTaskEntry = (TSK_ENTRY_FUNC)Example_MutexTask1; task1.pcName = "MutexTsk1"; @@ -272,11 +195,11 @@ UINT32 Example_MutexEntry(VOID) task1.usTaskPrio = 5; ret = LOS_TaskCreate(&g_testTaskId01, &task1); if (ret != LOS_OK) { - printf("task1 create failed.\n"); + dprintf("task1 create failed.\n"); return LOS_NOK; } - /* Create task 2.*/ + /* Create task 2. */ memset(&task2, 0, sizeof(TSK_INIT_PARAM_S)); task2.pfnTaskEntry = (TSK_ENTRY_FUNC)Example_MutexTask2; task2.pcName = "MutexTsk2"; @@ -284,11 +207,11 @@ UINT32 Example_MutexEntry(VOID) task2.usTaskPrio = 4; ret = LOS_TaskCreate(&g_testTaskId02, &task2); if (ret != LOS_OK) { - printf("task2 create failed.\n"); + dprintf("task2 create failed.\n"); return LOS_NOK; } - /* Unlock task scheduling.*/ + /* Unlock task scheduling. */ LOS_TaskUnlock(); return LOS_OK; @@ -299,13 +222,13 @@ UINT32 Example_MutexEntry(VOID) The development is successful if the return result is as follows: + ``` -task1 try to get mutex, wait 10 ticks. task2 try to get mutex, wait forever. task2 get mutex g_testMux and suspend 100 ticks. +task1 try to get mutex, wait 10 ticks. task1 timeout and try to get mutex, wait forever. 
task2 resumed and post the g_testMux task1 wait forever, get mutex g_testMux. task1 post and delete mutex g_testMux. ``` - diff --git a/en/device-dev/kernel/kernel-small-basic-trans-queue.md b/en/device-dev/kernel/kernel-small-basic-trans-queue.md index 5e2cbc062c4b9c3ba2066e37594d01d0fd871a7e..578fcac6d76ed84eb3129374bff9cf834e6a02f8 100644 --- a/en/device-dev/kernel/kernel-small-basic-trans-queue.md +++ b/en/device-dev/kernel/kernel-small-basic-trans-queue.md @@ -1,7 +1,7 @@ # Queue -## Basic Concepts +## Basic Concepts A queue, also called a message queue, is a data structure used for communication between tasks. The queue receives messages of unfixed length from tasks or interrupts, and determines whether to store the transferred messages in the queue based on different APIs. @@ -11,21 +11,30 @@ You can adjust the timeout period of the read queue and write queue to adjust th An asynchronous processing mechanism is provided to allow messages in a queue not to be processed immediately. In addition, queues can be used to buffer messages and implement asynchronous task communication. Queues have the following features: -- Messages are queued in FIFO mode and can be read and written asynchronously. -- Both the read queue and write queue support the timeout mechanism. -- Each time a message is read, the message node becomes available. -- The types of messages to be sent are determined by the parties involved in communication. Messages of different lengths \(not exceeding the message node size of the queue\) are allowed. -- A task can receive messages from and send messages to any message queue. -- Multiple tasks can receive messages from and send messages to the same queue. -- When a queue is created, the required dynamic memory space is automatically allocated in the queue API. +- Messages are queued in first-in-first-out (FIFO) mode and can be read and written asynchronously. -## Working Principles +- Both the read queue and write queue support the timeout mechanism. + +- Each time a message is read, the message node becomes available. + +- The types of messages to be sent are determined by the parties involved in communication. Messages of different lengths (not exceeding the message node size of the queue) are allowed. + +- A task can receive messages from and send messages to any message queue. + +- Multiple tasks can receive messages from and send messages to the same queue. + +- When a queue is created, the required dynamic memory space is automatically allocated in the queue API. + + +## Working Principles + + +### Queue Control Block -### Queue Control Block ``` /** - * Data structure of the queue control block + * Data structure of the queue control block */ typedef struct { UINT8 *queueHandle; /**< Pointer to a queue handle */ @@ -43,121 +52,94 @@ typedef struct { Each queue control block contains information about the queue status. -- **OS\_QUEUE\_UNUSED**: The queue is not in use. -- **OS\_QUEUE\_INUSED**: The queue is in use. +- **OS_QUEUE_UNUSED**: The queue is not in use. + +- **OS_QUEUE_INUSED**: The queue is in use. + + +### Working Principles + +- The queue ID is returned when a queue is created successfully. + +- The queue control block contains **Head** and **Tail**, which indicate the storage status of messages in a queue. **Head** indicates the start position of occupied message nodes in the queue. **Tail** indicates the end position of the occupied message nodes and the start position of idle message nodes. 
When a queue is created, **Head** and **Tail** point to the start position of the queue. -### Working Principles +- When data is to be written to a queue, **readWriteableCnt[1]** is used to determine whether data can be written to the queue. If **readWriteableCnt[1]** is **0**, the queue is full and data cannot be written to it. Data can be written to the head node or tail node of a queue. To write data to the tail node, locate the start idle message node based on **Tail** and write data to it. If **Tail** is pointing to the tail of the queue, the rewind mode is used. To write data to the head node, locate previous node based on **Head** and write data to it. If **Head** is pointing to the start position of the queue, the rewind mode is used. -- The queue ID is returned if a queue is created successfully. -- The queue control block contains **Head** and **Tail**, which indicate the storage status of messages in a queue. **Head** indicates the start position of occupied message nodes in the queue. **Tail** indicates the end position of the occupied message nodes and the start position of idle message nodes. When a queue is created, **Head** and **Tail** point to the start position of the queue. -- When data is to be written to a queue, **readWriteableCnt\[1\]** is used to determine whether data can be written to the queue. If **readWriteableCnt\[1\]** is **0**, the queue is full and data cannot be written to it. Data can be written to the head node or tail node of a queue. To write data to the tail node, locate the start idle message node based on **Tail** and write data to it. If **Tail** is pointing to the tail of the queue, the rewind mode is used. To write data to the head node, locate previous node based on **Head** and write data to it. If **Head** is pointing to the start position of the queue, the rewind mode is used. -- When a queue is to be read, **readWriteableCnt\[0\]** is used to determine whether the queue has messages to read. Reading an idle queue \(**readWriteableCnt\[0\]** is** 0**\) will cause task suspension. If the queue has messages to read, the system locates the first node to which data is written based on **Head** and read the message from the node. If **Head** is pointing to the tail of the queue, the rewind mode is used. -- When a queue is to be deleted, the system locates the queue based on the queue ID, sets the queue status to **OS\_QUEUE\_UNUSED**, sets the queue control block to the initial state, and releases the memory occupied by the queue. +- When a queue is to be read, **readWriteableCnt[0]** is used to determine whether the queue has messages to read. Reading an idle queue (**readWriteableCnt[0]** is** 0**) will cause task suspension. If the queue has messages to read, the system locates the first node to which data is written based on **Head** and read the message from the node. If **Head** is pointing to the tail of the queue, the rewind mode is used. -**Figure 1** Reading and writing data in a queue -![](figures/reading-and-writing-data-in-a-queue-3.png "reading-and-writing-data-in-a-queue-3") +- When a queue is to be deleted, the system locates the queue based on the queue ID, sets the queue status to **OS_QUEUE_UNUSED**, sets the queue control block to the initial state, and releases the memory occupied by the queue. + + **Figure 1** Reading and writing data in a queue + + ![](figures/reading-and-writing-data-in-a-queue-3.png "reading-and-writing-data-in-a-queue-3") The preceding figure illustrates how to write data to the tail node only. 
Writing data to the head node is similar.
-## Development Guidelines
-
-### Available APIs
-
-(former HTML table with Function/API/Description rows for LOS_QueueCreate, LOS_QueueDelete, LOS_QueueRead, LOS_QueueWrite, LOS_QueueWriteHead, LOS_QueueReadCopy, LOS_QueueWriteCopy, LOS_QueueWriteHeadCopy, and LOS_QueueInfoGet)
- -### How to Develop - -1. Call **LOS\_QueueCreate** to create a queue. The queue ID is returned when the queue is created. -2. Call **LOS\_QueueWrite** or **LOS\_QueueWriteCopy** to write messages to the queue. -3. Call **LOS\_QueueRead** or **LOS\_QueueReadCopy** to read messages from the queue. -4. Call **LOS\_QueueInfoGet** to obtain queue information. -5. Call **LOS\_QueueDelete** to delete a queue. - ->![](../public_sys-resources/icon-note.gif) **NOTE:** ->- The maximum number of queues supported by the system is the total number of queue resources of the system, not the number of queue resources available to users. For example, if the system software timer occupies one more queue resource, the number of queue resources available to users decreases by one. ->- The input parameters queue name and flags passed when a queue is created are reserved for future use. ->- The input parameter **timeOut** in the queue interface function is relative time. ->- **LOS\_QueueReadCopy**, **LOS\_QueueWriteCopy**, and **LOS\_QueueWriteHeadCopy** are a group of APIs that must be used together. **LOS\_QueueRead**, **LOS\_QueueWrite**, and **LOS\_QueueWriteHead** are a group of APIs that must be used together. ->- As **LOS\_QueueWrite**, **LOS\_QueueWriteHead**, and **LOS\_QueueRead** are used to manage data addresses, you must ensure that the memory directed by the pointer obtained by calling **LOS\_QueueRead** is not modified or released abnormally when the queue is being read. Otherwise, unpredictable results may occur. ->- If the input parameter **bufferSize** in **LOS\_QueueRead** and **LOS\_QueueReadCopy** is less than the length of the message, the message will be truncated. ->- **LOS\_QueueWrite**, **LOS\_QueueWriteHead**, and **LOS\_QueueRead** are called to manage data addresses, which means that the actual data read or written is pointer data. Therefore, before using these APIs, ensure that the message node size is the pointer length during queue creation, to avoid waste and read failures. - -## Development Example - -### Example Description - -Create a queue and two tasks. Enable task 1 to call the queue write API to send messages, and enable task 2 to receive messages by calling the queue read API. - -1. Create task 1 and task 2 by calling **LOS\_TaskCreate**. -2. Create a message queue by calling **LOS\_QueueCreate**. -3. Enable messages to be sent in task 1 by calling **SendEntry**. -4. Enable messages to be received in task 2 by calling **RecvEntry**. -5. Call **LOS\_QueueDelete** to delete a queue. - -### Sample Code + +## Development Guidelines + + +### Available APIs + +| Category| API Description | +| -------- | -------- | +| Creating or deleting a message queue| - **LOS_QueueCreate**: creates a message queue. The system dynamically allocates the queue space.
- **LOS_QueueDelete**: deletes a queue.|
+| Reading or writing data (address without the content) in a queue| - **LOS_QueueRead**: reads data in the head node of the specified queue. The data in the queue node is an address.<br>- **LOS_QueueWrite**: writes the value of **bufferAddr** (buffer address) to the tail node of a queue.<br>- **LOS_QueueWriteHead**: writes the value of **bufferAddr** (buffer address) to the head node of a queue.|
+| Reading or writing data (data and address) in a queue| - **LOS_QueueReadCopy**: reads data from the head node of a queue.<br>- **LOS_QueueWriteCopy**: writes the data saved in **bufferAddr** to the tail node of a queue.<br>
- **LOS_QueueWriteHeadCopy**: writes the data saved in **bufferAddr** to the head node of a queue.| +| Obtaining queue information| **LOS_QueueInfoGet**: obtains queue information, including the queue ID, queue length, message node size, head node, tail node, number of readable/writable nodes, and tasks waiting for read/write operations.| + + +### How to Develop + +1. Call **LOS_QueueCreate** to create a queue. The queue ID is returned when the queue is created. + +2. Call **LOS_QueueWrite** or **LOS_QueueWriteCopy** to write data to the queue. + +3. Call **LOS_QueueRead** or **LOS_QueueReadCopy** to read data from the queue. + +4. Call **LOS_QueueInfoGet** to obtain queue information. + +5. Call **LOS_QueueDelete** to delete a queue. + +> **NOTE**
+> - The maximum number of queues supported by the system is the total number of queue resources of the system, not the number of queue resources available to users. For example, if the system software timer occupies one more queue resource, the number of queue resources available to users decreases by one. +> +> - The queue name and flags passed in when a queue is created are reserved for future use. +> +> - The parameter **timeOut** in the queue function is relative time. +> +> - **LOS_QueueReadCopy**, **LOS_QueueWriteCopy**, and **LOS_QueueWriteHeadCopy** are a group of APIs that must be used together. **LOS_QueueRead**, **LOS_QueueWrite**, and **LOS_QueueWriteHead** are a group of APIs that must be used together. +> +> - As **LOS_QueueWrite**, **LOS_QueueWriteHead**, and **LOS_QueueRead** are used to manage data addresses, you must ensure that the memory directed by the pointer obtained by calling **LOS_QueueRead** is not modified or released abnormally when the queue is being read. Otherwise, unpredictable results may occur. +> +> - If the length of the data to read in **LOS_QueueRead** or **LOS_QueueReadCopy** is less than the actual message length, the message will be truncated. +> +> - **LOS_QueueWrite**, **LOS_QueueWriteHead**, and **LOS_QueueRead** are called to manage data addresses, which means that the actual data read or written is pointer data. Therefore, before using these APIs, ensure that the message node size is the pointer length during queue creation, to avoid waste and read failures. + + +## Development Example + + +### Example Description + +Create a queue and two tasks. Enable task 1 to write data to the queue, and task 2 to read data from the queue. + +1. Call **LOS_TaskCreate** to create task 1 and task 2. + +2. Call **LOS_QueueCreate** to create a message queue. + +3. Task 1 sends a message in **SendEntry**. + +4. Task 2 receives message in **RecvEntry**. + +5. Call **LOS_QueueDelete** to delete the queue. + + +### Sample Code + +The sample code can be compiled and verified in **./kernel/liteos_a/testsuites/kernel/src/osTest.c**. The **ExampleQueue** function is called in **TestTaskEntry**. + +To avoid excessive printing, call **LOS_Msleep(5000)** to cause a short delay before calling **ExampleQueue**. The sample code is as follows: @@ -175,7 +157,7 @@ VOID SendEntry(VOID) ret = LOS_QueueWriteCopy(g_queue, abuf, len, 0); if(ret != LOS_OK) { - printf("send message failure, error: %x\n", ret); + dprintf("send message failure, error: %x\n", ret); } } @@ -185,30 +167,36 @@ VOID RecvEntry(VOID) CHAR readBuf[BUFFER_LEN] = {0}; UINT32 readLen = BUFFER_LEN; - // Sleep for 1s. 
- usleep(1000000); + LOS_Msleep(1000); ret = LOS_QueueReadCopy(g_queue, readBuf, &readLen, 0); if(ret != LOS_OK) { - printf("recv message failure, error: %x\n", ret); + dprintf("recv message failure, error: %x\n", ret); } - printf("recv message: %s\n", readBuf); + dprintf("recv message: %s\n", readBuf); ret = LOS_QueueDelete(g_queue); if(ret != LOS_OK) { - printf("delete the queue failure, error: %x\n", ret); + dprintf("delete the queue failure, error: %x\n", ret); } - printf("delete the queue success!\n"); + dprintf("delete the queue success!\n"); } UINT32 ExampleQueue(VOID) { - printf("start queue example\n"); + dprintf("start queue example\n"); UINT32 ret = 0; UINT32 task1, task2; TSK_INIT_PARAM_S initParam = {0}; + ret = LOS_QueueCreate("queue", 5, &g_queue, 0, 50); + if(ret != LOS_OK) { + dprintf("create queue failure, error: %x\n", ret); + } + + dprintf("create the queue success!\n"); + initParam.pfnTaskEntry = (TSK_ENTRY_FUNC)SendEntry; initParam.usTaskPrio = 9; initParam.uwStackSize = LOSCFG_BASE_CORE_TSK_DEFAULT_STACK_SIZE; @@ -217,7 +205,8 @@ UINT32 ExampleQueue(VOID) LOS_TaskLock(); ret = LOS_TaskCreate(&task1, &initParam); if(ret != LOS_OK) { - printf("create task1 failed, error: %x\n", ret); + dprintf("create task1 failed, error: %x\n", ret); + LOS_QueueDelete(g_queue); return ret; } @@ -225,29 +214,26 @@ UINT32 ExampleQueue(VOID) initParam.pfnTaskEntry = (TSK_ENTRY_FUNC)RecvEntry; ret = LOS_TaskCreate(&task2, &initParam); if(ret != LOS_OK) { - printf("create task2 failed, error: %x\n", ret); + dprintf("create task2 failed, error: %x\n", ret); + LOS_QueueDelete(g_queue); return ret; } - ret = LOS_QueueCreate("queue", 5, &g_queue, 0, 50); - if(ret != LOS_OK) { - printf("create queue failure, error: %x\n", ret); - } - - printf("create the queue success!\n"); LOS_TaskUnlock(); + LOS_Msleep(5000); return ret; } ``` -### Verification + +### Verification The development is successful if the return result is as follows: + ``` -start test example +start queue example create the queue success! recv message: test message delete the queue success! ``` - diff --git a/en/device-dev/kernel/kernel-small-basic-trans-semaphore.md b/en/device-dev/kernel/kernel-small-basic-trans-semaphore.md index 31cf7a943e55c174be94905d70ad7c5a6d102dcb..22411251d4982ec959d2e1ebb9984c99fd1860f4 100644 --- a/en/device-dev/kernel/kernel-small-basic-trans-semaphore.md +++ b/en/device-dev/kernel/kernel-small-basic-trans-semaphore.md @@ -1,34 +1,38 @@ # Semaphore -## Basic Concepts +## Basic Concepts -Semaphore is a mechanism for implementing inter-task communication. It implements synchronization between tasks or exclusive access to shared resources. +Semaphore is a mechanism used to implement synchronization between tasks or exclusive access to shared resources. -In the data structure of a semaphore, there is a value indicating the number of shared resources available. The value can be: +In the semaphore data structure, there is a value indicating the number of shared resources available. The value can be: -- **0**: The semaphore is unavailable. Tasks waiting for the semaphore may exist. -- Positive number: The semaphore is available. +- **0**: The semaphore is unavailable. In this case, tasks waiting for the semaphore may exist. -The semaphore for exclusive access is different from the semaphore for synchronization: +- Positive number: The semaphore is available. -- Semaphore used for exclusive access: The initial semaphore counter value \(non-zero\) indicates the number of shared resources available. 
The semaphore counter value must be acquired before a shared resource is used, and released when the resource is no longer required. When all shared resources are used, the semaphore counter is reduced to 0 and the tasks that need to obtain the semaphores will be blocked. This ensures exclusive access to shared resources. In addition, when the number of shared resources is 1, a binary semaphore \(similar to the mutex mechanism\) is recommended. -- Semaphore used for synchronization: The initial semaphore counter value is **0**. Task 1 cannot acquire the semaphore and is blocked. Task 1 enters Ready or Running state only when the semaphore is released by task 2 or an interrupt. In this way, task synchronization is implemented. +The semaphore used for exclusive access to resources is different from the semaphore used for synchronization: -## Working Principles +- Semaphore used for exclusive access: The initial semaphore counter value \(non-zero\) indicates the number of shared resources available. A semaphore must be acquired before a shared resource is used, and released when the resource is no longer required. When all shared resources are used, the semaphore counter is reduced to 0 and all tasks requiring the semaphore will be blocked. This ensures exclusive access to shared resources. In addition, if the number of shared resources is 1, a binary semaphore \(similar to the mutex mechanism\) is recommended. + +- Semaphore used for synchronization: The initial semaphore counter value is **0**. A task without the semaphore will be blocked, and enters the Ready or Running state only when the semaphore is released by another task or an interrupt. + + +## Working Principles **Semaphore Control Block** + ``` /** - * Data structure of the semaphore control block + * Data structure of the semaphore control block */ typedef struct { UINT16 semStat; /* Semaphore status */ - UINT16 semType; /* Semaphore type*/ - UINT16 semCount; /* Semaphore count*/ - UINT16 semId; /* Semaphore index*/ - LOS_DL_LIST semList; /* Mount the task blocked by the semaphore.*/ + UINT16 semType; /* Semaphore type */ + UINT16 semCount; /* Semaphore count */ + UINT16 semId; /* Semaphore ID */ + LOS_DL_LIST semList; /* List of blocked tasks */ } LosSemCB; ``` @@ -36,102 +40,89 @@ typedef struct { Semaphore allows only a specified number of tasks to access a shared resource at a time. When the number of tasks accessing the resource reaches the limit, other tasks will be blocked until the semaphore is released. -- Semaphore initialization +- Semaphore initialization + + Allocate memory for the semaphores (the number of semaphores is specified by the **LOSCFG_BASE_IPC_SEM_LIMIT** macro), set all semaphores to the unused state, and add them to a linked list. + +- Semaphore creation + + Obtain a semaphore from the linked list of unused semaphores and assign an initial value to the semaphore. - The system allocates memory for the semaphores configured \(you can configure the number of semaphores using the **LOSCFG\_BASE\_IPC\_SEM\_LIMIT** macro\), initializes all semaphores to be unused semaphores, and adds them to a linked list for the system to use. +- Semaphore request -- Semaphore creation + If the counter value is greater than 0 when a semaphore is requsted, the counter is decreased by 1 and a success message is returned. Otherwise, the task is blocked and added to the end of a task queue waiting for semaphores. The wait timeout period can be set. 
- The system obtains a semaphore from the linked list of unused semaphores and assigns an initial value to the semaphore. +- Semaphore release -- Semaphore request + If no task is waiting for the semaphore, the counter is incremented by 1. Otherwise, wake up the first task in the wait queue. - If the counter value is greater than 0, the system allocates a semaphore, decreases the value by 1, and returns a success message. Otherwise, the system blocks the task and moves the task to the end of a task queue waiting for semaphores. The wait timeout period can be set. +- Semaphore deletion -- Semaphore release + Set a semaphore in use to the unused state and add it to the linked list of unused semaphores. - When a semaphore is released, if there is no task waiting for it, the counter value is increased by 1. Otherwise, the first task in the wait queue is woken up. +The following figure illustrates the semaphore working mechanism. -- Semaphore deletion +**Figure 1** Semaphore working mechanism for the small system - The system sets a semaphore in use to unused state and inserts it to the linked list of unused semaphores. +![](figures/semaphore-working-mechanism-for-small-systems.png "semaphore-working-mechanism-for-small-systems") -The following figure illustrates the semaphore working mechanism. +## Development Guidelines -**Figure 1** Semaphore working mechanism for small systems -![](figures/semaphore-working-mechanism-for-small-systems.png "semaphore-working-mechanism-for-small-systems") -## Development Guidelines - -### Available APIs - -**Table 1** Semaphore module APIs - - - - - - - - - - - - - - - - - - - - - - - - - -

-(former HTML table with Function/API/Description rows for LOS_SemCreate, LOS_BinarySemCreate, LOS_SemDelete, LOS_SemPend, and LOS_SemPost)
- -### How to Develop - -1. Call **LOS\_SemCreate** to create a semaphore. To create a binary semaphore, call **LOS\_BinarySemCreate**. -2. Call **LOS\_SemPend** to request a semaphore. -3. Call **LOS\_SemPost** to release a semaphore. -4. Call **LOS\_SemDelete** to delete a semaphore. - ->![](../public_sys-resources/icon-note.gif) **NOTE:** ->As interrupts cannot be blocked, semaphores cannot be requested in block mode for interrupts. - -### Development Example - -### Example Description +### Available APIs + +**Table 1** APIs for creating and deleting a semaphore + +| API| Description| +| -------- | -------- | +| LOS_SemCreate | Creates a semaphore and returns the semaphore ID.| +| LOS_BinarySemCreate | Creates a binary semaphore. The maximum counter value is **1**.| +| LOS_SemDelete | Deletes a semaphore.| + +**Table 2** APIs for requesting and releasing a semaphore + +| API| Description| +| -------- | -------- | +| LOS_SemPend | Requests a semaphore and sets a timeout period.| +| LOS_SemPost | Releases a semaphore.| + + +### How to Develop + +1. Call **LOS_SemCreate** to create a semaphore. To create a binary semaphore, call **LOS_BinarySemCreate**. + +2. Call **LOS_SemPend** to request a semaphore. + +3. Call **LOS_SemPost** to release a semaphore. + +4. Call **LOS_SemDelete** to delete a semaphore. + +> **NOTE**
+> As interrupts cannot be blocked, semaphores cannot be requested in block mode for interrupts. + + +### Development Example + + +### Example Description This example implements the following: -1. Create a semaphore in task **ExampleSem** and lock task scheduling. Create two tasks **ExampleSemTask1** and **ExampleSemTask2** \(with higher priority\). Enable the two tasks to request the same semaphore. Unlock task scheduling. Enable task **ExampleSem** to enter sleep mode for 400 ticks. Release the semaphore in task **ExampleSem**. -2. Enable** ExampleSemTask2** to enter sleep mode for 20 ticks after acquiring the semaphore. \(When **ExampleSemTask2** is delayed, **ExampleSemTask1** is woken up.\) -3. Enable **ExampleSemTask1** to request the semaphore in scheduled block mode, with a wait timeout period of 10 ticks. \(Because the semaphore is still held by **ExampleSemTask2**, **ExampleSemTask1** is suspended. **ExampleSemTask1** is woken up after 10 ticks.\) Enable **ExampleSemTask1** to request the semaphore in permanent block mode after it is woken up 10 ticks later. \(Because the semaphore is still held by **ExampleSemTask2**, **ExampleSemTask1** is suspended.\) -4. After 20 ticks, **ExampleSemTask2** is woken up and releases the semaphore. **ExampleSemTask1** acquires the semaphore and is scheduled to run. When **ExampleSemTask1** is complete, it releases the semaphore. -5. Task **ExampleSem** is woken up after 400 ticks and deletes the semaphore. +1. Create a semaphore in task **ExampleSem** and lock task scheduling. Create two tasks **ExampleSemTask1** and **ExampleSemTask2** (with higher priority). Enable the two tasks to request the same semaphore. Unlock task scheduling. Enable task **ExampleSem** to enter sleep mode for 400 ticks. Release the semaphore in task **ExampleSem**. + +2. Enable **ExampleSemTask2** to enter sleep mode for 20 ticks after acquiring the semaphore. (When **ExampleSemTask2** is delayed, **ExampleSemTask1** is woken up.) + +3. Enable **ExampleSemTask1** to request the semaphore in scheduled block mode, with a wait timeout period of 10 ticks. (Because the semaphore is still held by **ExampleSemTask2**, **ExampleSemTask1** is suspended. **ExampleSemTask1** is woken up after 10 ticks.) Enable **ExampleSemTask1** to request the semaphore in permanent block mode after it is woken up 10 ticks later. (Because the semaphore is still held by **ExampleSemTask2**, **ExampleSemTask1** is suspended.) + +4. After 20 ticks, **ExampleSemTask2** is woken up and releases the semaphore. **ExampleSemTask1** acquires the semaphore and is scheduled to run. When **ExampleSemTask1** is complete, it releases the semaphore. + +5. Task **ExampleSem** is woken up after 400 ticks. After that, delete the semaphore. + + +### Sample Code -### Sample Code +The sample code can be compiled and verified in **./kernel/liteos_a/testsuites/kernel/src/osTest.c**. The **ExampleSem** function is called in **TestTaskEntry**. 
The sample code is as follows: @@ -144,33 +135,34 @@ static UINT32 g_testTaskId01; static UINT32 g_testTaskId02; /* Task priority */ -#define TASK_PRIO_TEST 5 +#define TASK_PRIO_LOW 5 +#define TASK_PRIO_HI 4 -/* Semaphore structure ID*/ +/* Semaphore structure ID */ static UINT32 g_semId; VOID ExampleSemTask1(VOID) { UINT32 ret; - printf("ExampleSemTask1 try get sem g_semId, timeout 10 ticks.\n"); + dprintf("ExampleSemTask1 try get sem g_semId, timeout 10 ticks.\n"); - /* Request the semaphore in scheduled block mode, with a wait timeout period of 10 ticks.*/ + /* Request the semaphore in scheduled block mode, with a wait timeout period of 10 ticks. */ ret = LOS_SemPend(g_semId, 10); - - /* The semaphore is acquired.*/ + /* The semaphore is acquired. */ if (ret == LOS_OK) { LOS_SemPost(g_semId); return; } - /* The semaphore is not acquired when the timeout period has expired.*/ + /* The semaphore is not acquired when the timeout period has expired. */ if (ret == LOS_ERRNO_SEM_TIMEOUT) { - printf("ExampleSemTask1 timeout and try get sem g_semId wait forever.\n"); + dprintf("ExampleSemTask1 timeout and try get sem g_semId wait forever.\n"); - /* Request the semaphore in permanent block mode.*/ + /* Request the semaphore in permanent block mode. */ ret = LOS_SemPend(g_semId, LOS_WAIT_FOREVER); - printf("ExampleSemTask1 wait_forever and get sem g_semId.\n"); + dprintf("ExampleSemTask1 wait_forever and get sem g_semId.\n"); if (ret == LOS_OK) { + dprintf("ExampleSemTask1 post sem g_semId.\n"); LOS_SemPost(g_semId); return; } @@ -180,20 +172,19 @@ VOID ExampleSemTask1(VOID) VOID ExampleSemTask2(VOID) { UINT32 ret; - printf("ExampleSemTask2 try get sem g_semId wait forever.\n"); + dprintf("ExampleSemTask2 try get sem g_semId wait forever.\n"); - /* Request the semaphore in permanent block mode.*/ + /* Request the semaphore in permanent block mode. */ ret = LOS_SemPend(g_semId, LOS_WAIT_FOREVER); - if (ret == LOS_OK) { - printf("ExampleSemTask2 get sem g_semId and then delay 20 ticks.\n"); + dprintf("ExampleSemTask2 get sem g_semId and then delay 20 ticks.\n"); } - /* Enable the task to enter sleep mode for 20 ticks.*/ + /* Enable the task to enter sleep mode for 20 ticks. */ LOS_TaskDelay(20); - printf("ExampleSemTask2 post sem g_semId.\n"); - /* Release the semaphore.*/ + dprintf("ExampleSemTask2 post sem g_semId.\n"); + /* Release the semaphore. */ LOS_SemPost(g_semId); return; } @@ -204,60 +195,65 @@ UINT32 ExampleSem(VOID) TSK_INIT_PARAM_S task1; TSK_INIT_PARAM_S task2; - /* Create a semaphore.*/ + /* Create a semaphore. */ LOS_SemCreate(0, &g_semId); - /* Lock task scheduling.*/ + /* Lock task scheduling. */ LOS_TaskLock(); - /* Create task 1.*/ + /* Create task 1. */ (VOID)memset_s(&task1, sizeof(TSK_INIT_PARAM_S), 0, sizeof(TSK_INIT_PARAM_S)); task1.pfnTaskEntry = (TSK_ENTRY_FUNC)ExampleSemTask1; task1.pcName = "TestTask1"; task1.uwStackSize = LOSCFG_BASE_CORE_TSK_DEFAULT_STACK_SIZE; - task1.usTaskPrio = TASK_PRIO_TEST; + task1.usTaskPrio = TASK_PRIO_LOW; ret = LOS_TaskCreate(&g_testTaskId01, &task1); if (ret != LOS_OK) { - printf("task1 create failed .\n"); + dprintf("task1 create failed .\n"); return LOS_NOK; } - /* Create task 2.*/ + /* Create task 2. 
*/ (VOID)memset_s(&task2, sizeof(TSK_INIT_PARAM_S), 0, sizeof(TSK_INIT_PARAM_S)); task2.pfnTaskEntry = (TSK_ENTRY_FUNC)ExampleSemTask2; task2.pcName = "TestTask2"; task2.uwStackSize = LOSCFG_BASE_CORE_TSK_DEFAULT_STACK_SIZE; - task2.usTaskPrio = (TASK_PRIO_TEST - 1); + task2.usTaskPrio = TASK_PRIO_HI; ret = LOS_TaskCreate(&g_testTaskId02, &task2); if (ret != LOS_OK) { - printf("task2 create failed.\n"); + dprintf("task2 create failed.\n"); return LOS_NOK; } - /* Unlock task scheduling.*/ + /* Unlock task scheduling. */ LOS_TaskUnlock(); + /* Enable the task to enter sleep mode for 400 ticks. */ + LOS_TaskDelay(400); + ret = LOS_SemPost(g_semId); - /* Enable the task to enter sleep mode for 400 ticks.*/ + /* Enable the task to enter sleep mode for 400 ticks. */ LOS_TaskDelay(400); - /* Delete the semaphore. */ + /* Delete the semaphore. */ LOS_SemDelete(g_semId); return LOS_OK; } ``` -### Verification + +### Verification The development is successful if the return result is as follows: + ``` ExampleSemTask2 try get sem g_semId wait forever. -ExampleSemTask2 get sem g_semId and then delay 20 ticks. ExampleSemTask1 try get sem g_semId, timeout 10 ticks. ExampleSemTask1 timeout and try get sem g_semId wait forever. +ExampleSemTask2 get sem g_semId and then delay 20 ticks. ExampleSemTask2 post sem g_semId. ExampleSemTask1 wait_forever and get sem g_semId. +ExampleSemTask1 post sem g_semId. ``` -
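As a counterpart to the synchronization example above, the following is a minimal sketch of the exclusive-access usage described in Basic Concepts: a counting semaphore whose initial value equals the number of available resources. It assumes the **LOS_SemCreate**, **LOS_SemPend**, and **LOS_SemPost** APIs from the tables above; the header name, buffer count, and task entry function are illustrative only.

```
#include "los_sem.h"

#define BUF_NUM 3               /* Hypothetical number of shared buffers */

static UINT32 g_bufSemId;       /* Counting semaphore: number of free buffers */

UINT32 BufPoolInit(VOID)
{
    /* The initial count equals the number of available shared resources. */
    return LOS_SemCreate(BUF_NUM, &g_bufSemId);
}

VOID BufUserEntry(VOID)
{
    /* Wait up to 100 ticks for a free buffer. */
    if (LOS_SemPend(g_bufSemId, 100) != LOS_OK) {
        return;                     /* Timeout: no buffer is available */
    }
    /* ... use one buffer exclusively ... */
    (VOID)LOS_SemPost(g_bufSemId);  /* Return the buffer to the pool */
}
```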