> **NOTE**<br>
> Test cases are classified into common test cases and device-specific test cases. You are advised to place common test cases in the **common** directory and device-specific test cases in the directories of the related devices.
### Writing Test Cases
This test framework supports test cases written in multiple programming languages and provides different templates for different languages.
...
...
// Set a teardown function, which will be called after each test case.
}
```
> **NOTE**<br>
> When defining a test suite, ensure that the test suite name is the same as the target to build and uses the upper camel case style.
5. Add implementation of the test cases, including test case comments and logic.
```
...
...
}
```
The following test case templates are provided for your reference.
| Type | Description |
| ------------ | ------------ |
| HWTEST(A,B,C) | Use this template if the test case execution does not depend on setup or teardown. |
| HWTEST_F(A,B,C) | Use this template if the test case execution (excluding parameters) depends on setup and teardown. |
| HWTEST_P(A,B,C) | Use this template if the test case execution (including parameters) depends on setup and teardown. |
In the template names:
- *A* indicates the test suite name.
- *B* indicates the test case name, which is in the *Function*\_*No.* format. The *No.* is a three-digit number starting from **001**.
- *C* indicates the test case level. There are five test case levels: guard-control level 0 and non-guard-control level 1 to level 4. Of levels 1 to 4, a smaller value indicates a more important function verified by the test case.
Note the following:
- The expected result of each test case must have an assertion.
- The test case level must be specified.
- It is recommended that the test be implemented step by step according to the template.
- The comment must contain the test case name, description, type, and requirement number, which are in the @tc.*xxx*: *value* format. The test case type @**tc.type** can be any of the following:
| Test Case Type | Code |
| ------------ | ---------- |
| Function test | FUNC |
| Performance test | PERF |
| Reliability test | RELI |
| Security test | SECU |
| Fuzz test | FUZZ |
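Putting the template and comment conventions together, the following is a minimal sketch of a C++ test case. The suite name **GetAppInfoTest**, case name **GetAppInfo_001**, and the variable **ret** are illustrative placeholders, not names prescribed by this guide; the snippet assumes the GoogleTest-based HWTEST macros and is not standalone-compilable.
```
/**
 * @tc.name: GetAppInfo_001
 * @tc.desc: Verify that application information can be obtained.
 * @tc.type: FUNC
 * @tc.require: (requirement number)
 */
HWTEST_F(GetAppInfoTest, GetAppInfo_001, TestSize.Level1)
{
    // Step 1: Call the interface under test (placeholder logic).
    int ret = 0;
    // Step 2: Check the expected result with an assertion.
    EXPECT_EQ(ret, 0);
}
```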
**JavaScript Test Case Example**
...
...
> **NOTE**<br>
> The output path is ***Part name*/*Module name***.
4. Set the output build file for the test cases.
...
...
ohos_js_unittest("GetAppInfoJsTest") {
}
```
> **NOTE**
>- Use the **ohos\_js\_unittest** template to define the JavaScript test suite. Pay attention to the difference between JavaScript and C++.
>- The file generated for the JavaScript test suite must be in .hap format and named after the test suite name defined here. The test suite name must end with **JsTest**.
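For reference, a hedged sketch of a complete **ohos\_js\_unittest** definition follows; the **hap_profile** and **certificate_profile** paths are assumptions for illustration, not values prescribed by this guide.
```
ohos_js_unittest("GetAppInfoJsTest") {
  module_out_path = module_output_path
  hap_profile = "./config.json"
  certificate_profile = "//test/developertest/signature/openharmony_sx.p7b"
}
```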
...
...
deps = [ ":GetAppInfoJsTest" ]
}
```
> **NOTE**<br>
> Grouping test cases by test type allows you to execute a specific type of test cases when required.
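For example, a separate group can be declared for each test type; in this sketch the **GetAppInfoModuleTest** target is hypothetical.
```
group("unittest") {
  testonly = true
  deps = [ ":GetAppInfoJsTest" ]
}

group("moduletest") {
  testonly = true
  deps = [ ":GetAppInfoModuleTest" ]
}
```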
#### Configuring ohos.build
...
...
]
}
```
> **NOTE**<br>
> **test_list** contains the test cases of the corresponding module.
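As a hedged sketch, the **test_list** entry in **ohos.build** points to the **test** directories of the module; the part name and path below are hypothetical.
```
"appexecfwk_standard": {
  "test_list": [
    "//foundation/appexecfwk/test:unittest"
  ]
}
```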
### Configuring Test Case Resources
Test case resources include external file resources, such as image files, video files, and third-party libraries, required for test case execution.
>- **target_name** indicates the test suite name defined in the **BUILD.gn** file in the **test** directory.
>- **preparer** indicates the action to perform before the test suite is executed.
>- **src="res"** indicates that the test resources are in the **resource** directory under the **test** directory. **src="out"** indicates that the test resources are in the **out/release/$(*part*)** directory.
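Putting these attributes together, a sketch of a resource configuration entry might look like this; the file name and push destination are assumptions for illustration.
```
<target name="GetAppInfoJsTest">
  <preparer>
    <option name="push" value="test.png -> /data/test/resource" src="res"/>
  </preparer>
</target>
```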
...
...
</NFS>
</user_config>
```
> **NOTE**<br>
> If HDC is connected to the device before the test cases are executed, you only need to configure the device IP address and port number, and retain the default settings for other parameters.
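A minimal sketch of the device connection settings in **user_config.xml** follows; the IP address is a placeholder, and 8710 is the default HDC port.
```
<device type="usb-hdc">
  <ip>192.168.1.10</ip>
  <port>8710</port>
  <sn></sn>
</device>
```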
### Executing Test Cases on Windows
#### Building Test Cases
...
...
>- **product-name**: specifies the name of the product to build, for example, **hispark_taurus_standard**.
>- **build-target**: specifies the test case to build. **make_test** indicates all test cases. You can specify the test cases based on requirements.
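Combining these options, a typical build invocation from the source root might look like this, assuming the standard **build.sh** entry point.
```
./build.sh --product-name hispark_taurus_standard --build-target make_test
```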
...
...
2. Copy **developertest** and **xdevice** from the Linux environment to the **Test** directory on Windows, and copy the test cases to the **testcase** directory.
> **NOTE**<br>
> Port the test framework and test cases from the Linux environment to the Windows environment for subsequent execution.
3. Modify the **user_config.xml** file.
```
...
...
<dir>D:\Test\testcase\tests</dir>
</test_cases>
```
> **NOTE**<br>
> `<testcase>` indicates whether to build test cases. `<dir>` indicates the path for searching for test cases.
#### Executing Test Cases
1. Start the test framework.
...
...
hdc_std kill
hdc_std -m -s 0.0.0.0:8710
```
> **NOTE**<br>
> The IP address and port number are default values.
2. On the HDC client, run the following command:
```
hdc_std -s xx.xx.xx.xx:8710 list targets
```
> **NOTE**<br>
> Enter the IP address of the device to test.
#### Executing Test Cases
1. Start the test framework.
...
...
```
test/developertest/reports/xxxx_xx_xx_xx_xx_xx
```
> **NOTE**<br>
> The folder for test reports is automatically generated.
The folder contains the following files:
| Type | Description |
| ------------ | ------------ |
| result/ | Test cases in standard format. |
| log/plan_log_xxxx_xx_xx_xx_xx_xx.log | Test case logs. |
| summary_report.html | Test report summary. |
| details_report.html | Detailed test report. |
| Option | Description |
| -------- | -------- |
| -h, --help | Views the help text for bytrace. |
| -b _n_, --buffer_size _n_ | Sets the size of the buffer (KB) for storing and reading traces. The default buffer size is 2048 KB. |
| -t _n_, --time _n_ | Sets the bytrace uptime in seconds, which depends on the time required for analysis. |
| --trace_clock _clock_ | Sets the type of the clock for adding a timestamp to a trace, which can be **boot** (default), **global**, **mono**, **uptime**, or **perf**. |
| --trace_begin | Starts capturing traces. |
| --trace_dump | Dumps traces to a specified position (console where you run this command by default). |
| --trace_finish | Stops capturing traces and dumps traces to a specified position (console where you run this command by default). |
| -l, --list_categories | Lists the bytrace categories supported by the device. |
| --overwrite | Sets the action to take when the buffer is full. If this option is used, the latest traces are discarded; if this option is not used, the earliest traces are discarded (default). |
| -o _filename_, --output _filename_ | Outputs traces to a specified file. |
| -z | Compresses a captured trace. |
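For example, a hedged capture command might combine these options as follows; the **sched** category and the output path are assumptions, so run `bytrace -l` first to list the categories your device actually supports.
```
bytrace -t 10 -b 4096 --overwrite -o /data/mytrace.ftrace sched
```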