Commit 4423f0ee by Gloria

Update docs against 19455

Signed-off-by: wusongqing <wusongqing@huawei.com>
Parent: bdd68601
- [Container Component Development](subsys-graphics-container-guide.md)
- [Development of Layout Container Components](subsys-graphics-layout-guide.md)
- [Animator Development](subsys-graphics-animation-guide.md)
- [Using Qt Creator on Windows](subsys-graphics-simulator-guide.md)
- [Small-System Graphics Framework Integration](subsys-graphics-porting-guide.md)
- Multimedia
  - Camera
    - [Camera Overview](subsys-multimedia-camera-overview.md)
# Overview of Small-System Graphics

## Overview

The small-system graphics framework, a lightweight graphics framework, includes lightweight UI components, animations, events, 2D graphics libraries, font layout engines, multi-backend rendering, and window manager modules. It is mainly used for UI display on small-system devices with screens, such as sports watches and smart home devices.

### Relationship Between UIs in OpenHarmony

You may have learned about the different development paradigms of OpenHarmony. What is the relationship between them and the small-system graphics framework?

Currently, the [ace_engine repository](https://gitee.com/openharmony/arkui_ace_engine) implements two development frameworks: the ArkUI declarative development paradigm and the ArkUI web-like development paradigm. For details about their differences, see [ArkUI Overview](../../application-dev/ui/arkui-overview.md). Based on the characteristics of the small system, the [ace_engine_lite repository](https://gitee.com/openharmony/arkui_ace_engine_lite) implements a lightweight version of the ArkUI web-like development paradigm, named the ArkUI web-like development paradigm lite. Its capabilities are a subset of the ArkUI web-like development paradigm.

OpenHarmony supports the following development modes by system type:

- Standard system:
  - ArkUI declarative development paradigm (recommended)
  - ArkUI web-like development paradigm
- Small system:
  - ArkUI web-like development paradigm lite
  - C++ development (for system applications)

The figure below shows the code implementation relationship between [ui_lite](https://gitee.com/openharmony/arkui_ui_lite), [ace_engine_lite](https://gitee.com/openharmony/arkui_ace_engine_lite), and [ace_engine](https://gitee.com/openharmony/arkui_ace_engine) in the small-system graphics framework.

![ui relationship](figures/openharmony_ui.png)

When determining the API suite for your application development, preferentially select the ArkUI declarative development paradigm for the standard system and the ArkUI web-like development paradigm lite for the small system. When developing a system application on devices with low configurations, you can use C++ APIs for higher performance and better flexibility.

### UI Components

The small-system graphics framework implements basic components, such as button, text, and progress bar. It also provides complex components, such as list, swiper, and image sequence frame.

### Layouts

The framework implements grid layout and flexible layout (such as centered, left-aligned, and right-aligned).

Each layout is a one-off: the positions of components in a layout are calculated each time the related layout functions are called. If the position of a component later changes with an operation (dragging, for example), the positions of the other associated components do not automatically change; to update them, call the layout functions again.

### Animations

The framework supports custom animations. All animations are managed by AnimatorManager. Based on the screen refresh event, AnimatorManager periodically calls the callback functions to process attribute changes and then triggers component re-rendering to achieve the animation effect. You can call related functions to start, stop, pause, resume, create, and destroy an animation.

### Events

Input events include touchscreen and physical key input events. Each time the GUI engine runs, InputManager reads the input of all registered hardware devices and converts the input into various events for UI components to use.

### Rendering

- 2D graphics rendering: draws lines, rectangles, triangles, and arcs.
- Image rendering: draws images of various types, such as RGB565, RGB888, ARGB8888, PNG, and JPG.
- Font rendering: draws vector fonts in real time.

## Implementation Principles

In the small-system graphics framework, a task queue driven by the screen refresh signal stores every task. Each periodic screen refresh signal triggers a callback that cyclically drives the execution of the tasks in the queue. Operations such as input event handling, animations, and rendering are executed as independent tasks.
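The tick-driven task queue can be sketched as follows. This is a minimal illustration only: `Task`, `TaskManager`, and `CountingTask` here are simplified stand-ins for the framework's classes in the arkui_ui_lite repository, not their actual interfaces.

```cpp
#include <cassert>
#include <vector>

// Simplified stand-in for the framework's task base class: each subsystem
// (input, animation, rendering) registers a task whose Callback runs once
// per screen refresh signal.
class Task {
public:
    virtual ~Task() = default;
    virtual void Callback() = 0;
};

// Simplified stand-in for the framework's task manager.
class TaskManager {
public:
    void Add(Task* task) { list_.push_back(task); }
    // Invoked by the screen refresh (Vsync-like) callback; cyclically
    // executes every registered task.
    void TaskHandler()
    {
        for (Task* task : list_) {
            task->Callback();
        }
    }
private:
    std::vector<Task*> list_;
};

// Example task that just counts how many refresh ticks it has seen.
class CountingTask : public Task {
public:
    void Callback() override { ticks_++; }
    int ticks_ = 0;
};
```

In this model, calling `TaskHandler()` once per refresh signal advances every registered subsystem by one frame.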
### Event Interaction
The small-system graphics framework supports the touch event (PointerInputDevice), key event (KeyInputDevice), and crown rotation event (RotateInputDevice).
```mermaid
classDiagram
class InputDevice {
+Read(DeviceData& data): bool
}
class PointerInputDevice {
+Read(DeviceData& data): bool
}
class RotateInputDevice {
+Read(DeviceData& data): bool
}
class KeyInputDevice {
+Read(DeviceData& data): bool
}
class InputManager {
-deviceList_: List<InputDevice*>
}
InputDevice <|-- PointerInputDevice
InputDevice <|-- RotateInputDevice
InputDevice <|-- KeyInputDevice
Task <|-- InputManager
InputManager *-- InputDevice
```
The figure above shows the input event classes. Each type of input event overrides the **Read** function of the **InputDevice** base class based on its own features, reads input data, generates an event based on the input data, and distributes the event to the corresponding UI component. For example, **PointerInputDevice** reads the touch coordinates, searches for the component corresponding to the coordinates from the component tree, generates a touch, touch and hold, or drag event, and distributes the event to that component.
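The dispatch step described above (finding the component that contains the touch coordinates) can be sketched as follows. All names here (`Point`, `Rect`, `UIView`, `DispatchPress`) are simplified stand-ins for illustration, assuming a flattened top-to-bottom component list rather than the real component tree.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Simplified geometry types standing in for the framework's.
struct Point { int16_t x, y; };
struct Rect {
    int16_t left, top, right, bottom;
    bool Contains(const Point& p) const
    {
        return p.x >= left && p.x <= right && p.y >= top && p.y <= bottom;
    }
};

// Simplified stand-in for a UI component.
struct UIView {
    Rect rect;
    bool pressed = false;
};

// Walk the components from topmost to bottom and deliver the press event
// to the first component whose area contains the touch point.
inline UIView* DispatchPress(std::vector<UIView*>& topToBottom, const Point& p)
{
    for (UIView* view : topToBottom) {
        if (view->rect.Contains(p)) {
            view->pressed = true;
            return view;
        }
    }
    return nullptr;
}
```

The real framework walks the component tree and also distinguishes tap, long-press, and drag, but the hit-testing idea is the same.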
### Animation Framework
```mermaid
classDiagram
class AnimatorCallback {
+Callback(UIView* view): void
}
class Animator {
+Start(): void
+Stop(): void
-callback_: AnimatorCallback*
}
class AnimatorManager {
-list_: List<Animator*>
}
Task <|-- AnimatorManager
AnimatorManager *-- Animator
Animator *-- AnimatorCallback
```
To implement a custom animation, you need to inherit from the **Animator** class and implement the callback function of **AnimatorCallback**. All animations are managed by **AnimatorManager** in a unified manner. The input parameter of the callback function is **view** (component) of the current animation. You can modify the component attributes to generate the animation effect, such as the coordinate change, color change, and zoom effect.
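The callback pattern above can be sketched as follows. This is a hedged, self-contained sketch: `UIView`, `Animator`, and the `TickAll` driver are simplified stand-ins for the real ui_lite classes, and `SlideRightCallback` is a hypothetical example callback.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Simplified stand-in for a component with modifiable attributes.
struct UIView {
    int16_t x = 0;
    int16_t y = 0;
};

// Callback interface: modify the view's attributes once per refresh.
class AnimatorCallback {
public:
    virtual ~AnimatorCallback() = default;
    virtual void Callback(UIView* view) = 0;
};

// Simplified animator: runs its callback only while started.
class Animator {
public:
    Animator(AnimatorCallback* callback, UIView* view) : callback_(callback), view_(view) {}
    void Start() { running_ = true; }
    void Stop() { running_ = false; }
    void Run()
    {
        if (running_ && callback_ != nullptr) {
            callback_->Callback(view_);
        }
    }
private:
    AnimatorCallback* callback_;
    UIView* view_;
    bool running_ = false;
};

// Example callback: slide the view 2 pixels to the right per frame.
class SlideRightCallback : public AnimatorCallback {
public:
    void Callback(UIView* view) override { view->x += 2; }
};

// Plays the role of AnimatorManager: drive every animator once per tick.
inline void TickAll(std::vector<Animator*>& animators)
{
    for (Animator* a : animators) {
        a->Run();
    }
}
```

After `Start()`, each simulated refresh tick moves the view; after `Stop()`, further ticks leave it unchanged, which mirrors how AnimatorManager drives callbacks off the screen refresh event.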
### Rendering Framework
```mermaid
classDiagram
class WindowImpl {
Render: void
}
class UIView {
OnDraw: void
Invalidate : void
}
class RootView {
Measure: void
Render : void
}
class RenderManager {
+Callback: void
-winList_: List<Window*>
}
Task <|-- RenderManager
Window <|-- WindowImpl
UIView <|-- RootView
WindowImpl *-- RootView
RenderManager *-- Window
```
- Each window has a **RootView**.
- **RootView** is the root node of a window. All components in a window can be displayed only after being mounted to **RootView**.
- **UIView** is the base class of all components. Each component implements its own **OnDraw** function.
- When the display of a component changes, the **Invalidate** function is called to mark the current area as a dirty area.
- **RootView** manages information about all dirty areas in a window.
- Each time a screen refresh signal is triggered, all windows are traversed and rendered. For each window, the **Measure** function is called for layout from **RootView**, and then the **Render** function is called to render the components in all the dirty areas.
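The dirty-area bookkeeping in the steps above can be sketched as follows. The types here (`Rect`, `RootView`, and the integer return value of `Render`) are simplified stand-ins for illustration, not the real ui_lite interfaces.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

struct Rect {
    int16_t left, top, right, bottom;
};

// Simplified stand-in for RootView: it collects dirty areas reported by
// components and redraws only those areas on the next refresh.
class RootView {
public:
    // A component calls this (via its Invalidate function) when its
    // display changes, marking the area as dirty.
    void AddInvalidArea(const Rect& rect) { dirtyAreas_.push_back(rect); }

    // Called on each screen refresh signal: layout first, then redraw the
    // dirty areas, then clear the list. Returns how many areas were redrawn.
    int Render()
    {
        Measure();
        int redrawn = static_cast<int>(dirtyAreas_.size());
        dirtyAreas_.clear();  // All dirty areas have been flushed.
        return redrawn;
    }
private:
    void Measure() { /* recalculate the child layout here */ }
    std::vector<Rect> dirtyAreas_;
};
```

A second render pass with no new invalidations redraws nothing, which is the point of dirty-area tracking: work is proportional to what changed, not to the whole screen.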
# Small-System Graphics Framework Integration
The small-system graphics modules run in OpenHarmony as a framework. Therefore, you only need to adapt to and implement APIs at the OpenHarmony HDF layer. The graphics subsystem can run on different platforms. For example, when developing an application on Windows or macOS, you can use Qt Creator for simple page layout, development, and debugging. The graphics framework has been adapted to the Windows and macOS platforms. To integrate the graphics framework into an existing project independently, you need to carry out the following adaptation operations:
1. Initialize the graphics engine.
2. Adapt to the display.
3. Adapt to the input device.
4. Initialize the font engine.
5. Interconnect to screen flush.
For details, see [OpenHarmony Small-System Graphics Simulator Adaptation Implementation](https://gitee.com/openharmony/arkui_ui_lite/tree/master/tools/qt/simulator).
### Initializing the Graphics Engine
Initialize UI tasks, rendering modules, animation modules, and default styles.
```c++
// graphic_startup.h
GraphicStartUp::Init();
```
### Adapting to the Display
Set the screen size, connect to basic component rendering, obtain the graphics rendering buffer, and refresh the graphics rendering data to the screen for display.
During display layer adaptation, note that the class to inherit from differs between hardware rendering and software rendering. The **BaseGfxEngine** class in [gfx_engine_manager.h](https://gitee.com/openharmony/arkui_ui_lite/blob/master/interfaces/innerkits/engines/gfx/gfx_engine_manager.h) is an abstract class: it only declares functions and contains no implementation, so it can serve as the parent class for hardware rendering. The **SoftEngine** class in [soft_engine.h](https://gitee.com/openharmony/arkui_ui_lite/blob/master/interfaces/innerkits/engines/gfx/soft_engine.h) inherits from **BaseGfxEngine** and implements its functions in software, so it can serve as the parent class for software rendering.
The **BaseGfxEngine** class has three types of functions:
Type 1: functions used to obtain the display memory, apply for the buffer, and release the buffer.
Type 2: basic rendering functions, such as line drawing, **Blit**, and **Fill**.
Type 3: display functions, which are called to send the rendered content to the display.
The functions for obtaining the display memory and the display functions (types 1 and 3) must be implemented for each platform. The basic rendering functions (type 2) are implemented in software by default in the graphics UI framework, that is, by **SoftEngine** in **soft_engine.h**; you can inherit from **SoftEngine** to extend software rendering. If hardware acceleration (for example, DMA2D) is available on your platform, you can instead inherit from **BaseGfxEngine** in **gfx_engine_manager.h**, implement all of its pure virtual functions, and adapt the optional functions as needed.
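As a sketch of extending software rendering with an accelerated path, the following self-contained example uses simplified stand-ins (`SoftEngine`, `BufferInfo`, `Rect`, and the `Dma2dEngine` class with its threshold are all illustrative assumptions, not the real ui_lite interfaces): large fills would be handed to hardware, small ones fall back to the inherited software path.

```cpp
#include <cassert>
#include <cstdint>

using ColorType = uint32_t;
struct Rect { int16_t left, top, right, bottom; };
struct BufferInfo { uint32_t* virAddr; uint16_t width; uint16_t height; };

// Simplified stand-in for the framework's software engine.
class SoftEngine {
public:
    virtual ~SoftEngine() = default;
    // Software fill: write the color pixel by pixel into the buffer.
    virtual void Fill(BufferInfo& dst, const Rect& area, ColorType color)
    {
        for (int16_t y = area.top; y <= area.bottom; ++y) {
            for (int16_t x = area.left; x <= area.right; ++x) {
                dst.virAddr[y * dst.width + x] = color;
            }
        }
    }
};

// Hypothetical accelerated engine: route large fills to hardware and fall
// back to the inherited software path for small ones.
class Dma2dEngine : public SoftEngine {
public:
    void Fill(BufferInfo& dst, const Rect& area, ColorType color) override
    {
        int32_t pixels = (area.right - area.left + 1) * (area.bottom - area.top + 1);
        if (pixels >= kHwThreshold) {
            hwFills_++;  // A real port would start a DMA2D transfer here.
            SoftEngine::Fill(dst, area, color);  // Stand-in for the hardware result.
        } else {
            SoftEngine::Fill(dst, area, color);
        }
    }
    int hwFills_ = 0;
private:
    static constexpr int32_t kHwThreshold = 64;
};
```

The threshold-based dispatch reflects a common porting trade-off: very small fills are often faster on the CPU than the setup cost of a hardware transfer.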
The code for interconnecting to the graphics functions is as follows:
```c++
// gfx_engine_manager.h
virtual void DrawArc(BufferInfo& dst,
                     ArcInfo& arcInfo,
                     const Rect& mask,
                     const Style& style,
                     OpacityType opacity,
                     uint8_t cap) = 0;
virtual void DrawLine(BufferInfo& dst,
                      const Point& start,
                      const Point& end,
                      const Rect& mask,
                      int16_t width,
                      ColorType color,
                      OpacityType opacity) = 0;
virtual void DrawLetter(BufferInfo& gfxDstBuffer,
                        const uint8_t* fontMap,
                        const Rect& fontRect,
                        const Rect& subRect,
                        const uint8_t fontWeight,
                        const ColorType& color,
                        const OpacityType opa) = 0;
virtual void DrawCubicBezier(BufferInfo& dst,
                             const Point& start,
                             const Point& control1,
                             const Point& control2,
                             const Point& end,
                             const Rect& mask,
                             int16_t width,
                             ColorType color,
                             OpacityType opacity) = 0;
virtual void
    DrawRect(BufferInfo& dst, const Rect& rect, const Rect& dirtyRect, const Style& style, OpacityType opacity) = 0;
virtual void DrawTransform(BufferInfo& dst,
                           const Rect& mask,
                           const Point& position,
                           ColorType color,
                           OpacityType opacity,
                           const TransformMap& transMap,
                           const TransformDataInfo& dataInfo) = 0;
// x/y: center of a circle
virtual void ClipCircle(const ImageInfo* info, float x, float y, float radius) = 0;
virtual void Blit(BufferInfo& dst,
                  const Point& dstPos,
                  const BufferInfo& src,
                  const Rect& subRect,
                  const BlendOption& blendOption) = 0;
virtual void Fill(BufferInfo& dst, const Rect& fillArea, const ColorType color, const OpacityType opacity) = 0;
virtual void DrawPath(BufferInfo& dst,
                      void* param,
                      const Paint& paint,
                      const Rect& rect,
                      const Rect& invalidatedArea,
                      const Style& style) = 0;
virtual void FillPath(BufferInfo& dst,
                      void* param,
                      const Paint& paint,
                      const Rect& rect,
                      const Rect& invalidatedArea,
                      const Style& style) = 0;
virtual uint8_t* AllocBuffer(uint32_t size, uint32_t usage) = 0;
virtual void FreeBuffer(uint8_t* buffer, uint32_t usage) = 0;
virtual BufferInfo* GetFBBufferInfo()
{
    return nullptr;
}
virtual void AdjustLineStride(BufferInfo& info) {}
virtual void Flush(const Rect& flushRect) {}
virtual uint16_t GetScreenWidth()
{
    return screenWidth_;
}
virtual uint16_t GetScreenHeight()
{
    return screenHeight_;
}
virtual void SetScreenShape(ScreenShape screenShape)
{
    screenShape_ = screenShape;
}
virtual ScreenShape GetScreenShape()
{
    return screenShape_;
}
```
### Adapting to the Input Device
The graphics framework supports touch, key, and rotation devices. Currently, for each input device you must inherit from the corresponding **InputDevice** subclass and implement its **Read** function.
For touch devices, inherit from the **PointerInputDevice** class to implement the **Read** function. The x and y coordinates and pressed status need to be returned.
For key input devices, inherit from the **KeyInputDevice** class to implement the **Read** function. The key ID and key status need to be set.
For rotation input devices, inherit from the **RotateInputDevice** class to implement the **Read** function. The rotate value needs to be set.
The code for interconnecting to the **InputDevice** instance is as follows:
```c++
// Read function in input_device.h
/**
 * @brief Reads data from hardware. You should override this to set the data.
 * @param data [out] Indicates the device data.
 * @return Returns <b>true</b> if there is no more data to read; returns <b>false</b> otherwise.
*/
virtual bool Read(DeviceData& data) = 0;
// Inherit from and implement the Read function of the InputDevice base class. The following uses a touch event as an example:
class TouchInput : public OHOS::PointerInputDevice {
public:
    TouchInput() {}
    virtual ~TouchInput() {}
    // Implement the Read function.
    bool Read(OHOS::DeviceData& data) override
    {
        // Set the position and state. You should update the values while the screen is being touched.
        data.point.x = g_lastX;
        data.point.y = g_lastY;
        data.state = g_leftButtonDown ? STATE_PRESS : STATE_RELEASE;
        return false;
    }
};
```
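A rotation device follows the same pattern. The sketch below is self-contained for illustration: `DeviceData` and `RotateInputDevice` are simplified stand-ins for the framework's types (in a real port you would include **input_device.h** and inherit from **OHOS::RotateInputDevice**), and `CrownInput` with its `OnHardwareRotate` hook is a hypothetical adapter.

```cpp
#include <cassert>
#include <cstdint>

// Simplified stand-ins for the framework's input types.
struct DeviceData {
    int16_t rotate = 0;
};

class RotateInputDevice {
public:
    virtual ~RotateInputDevice() = default;
    virtual bool Read(DeviceData& data) = 0;
};

// Hypothetical crown/knob adapter: report the rotation delta accumulated
// since the last poll, then reset it.
class CrownInput : public RotateInputDevice {
public:
    void OnHardwareRotate(int16_t delta) { pending_ += delta; }  // Called from the driver side.
    bool Read(DeviceData& data) override
    {
        data.rotate = pending_;
        pending_ = 0;
        return false;  // No more data to read.
    }
private:
    int16_t pending_ = 0;
};
```

Accumulating between polls means no rotation is lost even if the hardware reports faster than the GUI engine reads.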
### Initializing the Font Engine
Fonts are classified into dot matrix fonts and vector fonts.
For dot matrix fonts, use the font packaging tool to generate a **font.bin** file. The tool supports packaging of Chinese and English fonts. For details about the supported font sizes and font IDs, see **ui_text_language.h** generated by the tool.
For vector fonts, **DEFAULT_VECTOR_FONT_FILENAME** is registered by default. To use other fonts, call **RegisterFontInfo** to register the required font files. Vector font parsing and layout depend on the third-party open-source software FreeType and ICU. To support complex languages such as Arabic, you must introduce the third-party open-source software HarfBuzz and enable **ENABLE_SHAPING** and **ENABLE_ICU**.
The code of font engine initialization is as follows:
```c++
// graphic_config.h
#define DEFAULT_VECTOR_FONT_FILENAME "SourceHanSansSC-Regular.otf"
// Vector font switch
#define ENABLE_VECTOR_FONT 1
// Dot-matrix font switch
#define ENABLE_BITMAP_FONT 0
#define ENABLE_ICU 0
#define ENABLE_SHAPING 0
// ui_font.h
uint8_t RegisterFontInfo(const char* ttfName, uint8_t shaping = 0);
// graphic_startup.h
static void InitFontEngine(uintptr_t psramAddr, uint32_t psramLen, const char* dPath, const char* ttfName);
```
### Interconnecting to Screen Flush
Periodically call **TaskHandler** based on the screen hardware refresh signal (similar to Vsync).
The code for interconnecting to screen flush is as follows:
```c++
TaskManager::GetInstance()->TaskHandler();
```
### Sample Code
The macro definitions related to interconnection are described as follows:
```c++
// graphic_config.h
// By default, macro definitions of different levels of devices are defined. For lightweight devices, enable the VERSION_LITE macro.
/**
* Defines three graphics library versions: lightweight, standard, and extended versions.
* The three versions have different requirements on the memory and hardware.
* The standard version is enabled by default.
*
* The macros of the versions are defined as follows:
* Name | Version Description
* ------------------- | ----------
* VERSION_LITE | Lightweight version
* VERSION_STANDARD | Standard version
* VERSION_EXTENDED | Extended version
*/
#ifdef _LITEOS
#define VERSION_LITE
#elif defined _WIN32 || defined __APPLE__
#define VERSION_LITE
#else
#define VERSION_STANDARD
#endif
// Disable window synthesis. Enabling window synthesis depends on the WMS window synthesis service.
/**
* @brief Multi-window, which is disabled by default on WIN32.
*/
#define ENABLE_WINDOW 0
// Disable the support for images in PNG and JPEG formats. To enable the support, introduce a third-party library.
#define ENABLE_JPEG_AND_PNG 0
// Hardware acceleration. If hardware acceleration is disabled, CPU soft rendering is used by default. You can disable this feature first and enable it after the page is displayed.
/**
* @brief Graphics rendering hardware acceleration, which is disabled by default on WIN32.
*/
#define ENABLE_HARDWARE_ACCELERATION 0
```
The code snippet is as follows:
```c++
using namespace OHOS;

int main(int argc, char** argv)
{
    // Initialize the graphics engine.
    GraphicStartUp::Init();
    // Initialize the display and input devices.
    InitHal();
    // Initialize the font engine.
    InitFontEngine();
    // Run your application code.
    RunApp();
    // Use a while loop to simulate the hardware flush callback.
    // In a real port, call TaskHandler in the screen flush signal callback (similar to Vsync).
    while (1) {
        TaskManager::GetInstance()->TaskHandler();
        Sleep(DEFAULT_TASK_PERIOD);
    }
    return 0;
}
// Below is the memory pool.
static uint8_t g_fontPsramBaseAddr[MIN_FONT_PSRAM_LENGTH];
#if ENABLE_SHAPING
static uint8_t g_shapePsramBaseAddr[MIN_SHAPING_PSRAM_LENGTH];
#else
static uint8_t* g_shapePsramBaseAddr = nullptr;
#endif

static void InitFontEngine()
{
#if ENABLE_VECTOR_FONT
    GraphicStartUp::InitFontEngine(reinterpret_cast<uintptr_t>(g_fontPsramBaseAddr), MIN_FONT_PSRAM_LENGTH,
                                   VECTOR_FONT_DIR, DEFAULT_VECTOR_FONT_FILENAME);
#else
    BitmapFontInit();
    std::string dPath(_pgmptr);
    size_t len = dPath.size();
    size_t pos = dPath.find_last_of('\\');
    dPath.replace((pos + 1), (len - pos), "..\\..\\simulator\\font\\font.bin");
    GraphicStartUp::InitFontEngine(reinterpret_cast<uintptr_t>(g_fontPsramBaseAddr), MIN_FONT_PSRAM_LENGTH,
                                   dPath.c_str(), nullptr);
#endif
#if ENABLE_ICU
    GraphicStartUp::InitLineBreakEngine(reinterpret_cast<uintptr_t>(g_icuMemBaseAddr), SHAPING_WORD_DICT_LENGTH,
                                        VECTOR_FONT_DIR, DEFAULT_LINE_BREAK_RULE_FILENAME);
#endif
}
// Display adapter
class SDLMonitorGfxEngine : public BaseGfxEngine {
public:
    BufferInfo* GetFBBufferInfo() override
    {
        static BufferInfo* bufferInfo = nullptr;
        if (bufferInfo == nullptr) {
            bufferInfo = new BufferInfo;
            bufferInfo->rect = {0, 0, HORIZONTAL_RESOLUTION - 1, VERTICAL_RESOLUTION - 1};
            bufferInfo->mode = ARGB8888;
            bufferInfo->color = 0x44;
            bufferInfo->virAddr = GetFramBuff();
            bufferInfo->phyAddr = bufferInfo->virAddr;
            // 4: bytes per pixel of ARGB8888
            bufferInfo->stride = HORIZONTAL_RESOLUTION * 4;
            bufferInfo->width = HORIZONTAL_RESOLUTION;
            bufferInfo->height = VERTICAL_RESOLUTION;
        }
        return bufferInfo;
    }

    void Flush(const Rect& flushRect) override
    {
        MonitorRenderFinish();
    }
};
class TouchInput : public OHOS::PointerInputDevice {
public:
    TouchInput() {}
    virtual ~TouchInput() {}
    // Implement the Read function.
    bool Read(OHOS::DeviceData& data) override
    {
        // Set the position x, y, and state. You should update
        // g_lastX/g_lastY/g_leftButtonDown while the screen is being touched.
        data.point.x = g_lastX;
        data.point.y = g_lastY;
        data.state = g_leftButtonDown ? STATE_PRESS : STATE_RELEASE;
        return false;
    }
};

class KeyInput : public OHOS::KeyInputDevice {
public:
    KeyInput() {}
    virtual ~KeyInput() {}
    // Implement the Read function.
    bool Read(OHOS::DeviceData& data) override
    {
        data.keyId = g_lastKeyId;
        data.state = g_lastState;
        g_lastState = INVALID_KEY_STATE;
        return false;
    }
};

// Other devices can be added in the same way, if required.
class XXInput : public OHOS::XXInputDevice {
public:
    XXInput() {}
    virtual ~XXInput() {}
    // Implement the Read function.
    bool Read(OHOS::DeviceData& data) override
    {
        // Set the device data information.
        return false;
    }
};

static void InitHal()
{
    // Set up the GFX engine.
    BaseGfxEngine::GetInstance()->InitEngine(new SDLMonitorGfxEngine());
    // Set up the touch device.
    TouchInput* touch = new TouchInput();
    InputDeviceManager::GetInstance()->Add(touch);
    // Set up the key device if required.
    KeyInput* key = new KeyInput();
    InputDeviceManager::GetInstance()->Add(key);
    // Set up the xx device if required.
    XXInput* inputXX = new XXInput();
    InputDeviceManager::GetInstance()->Add(inputXX);
}
```
# Using Qt Creator on Windows
Qt Creator is a cross-platform integrated development environment that enables you to get started and perform application development operations efficiently and easily. The graphics framework provides a Qt Creator project for you to quickly get familiar with the graphics framework.
This topic describes how to install Qt Creator and Git on the Windows PC, obtain the minimum code repository of the UI simulator, and build and run the project.
## Software Installation
You need to download and install Qt Creator and Git.
### Installing Qt Creator
Download Qt from the official website [https://www.qt.io/offline-installers](https://www.qt.io/offline-installers).
Select the following three components during the installation:
![Select installation components](figures/graphic_lite_qt_install.png)
### Installing and Configuring Git
Download Git from the [official website](https://git-scm.com/).
![Git official website](figures/graphic_lite_git_download.png "Git official website")
Double-click the downloaded installation program and complete the installation as prompted.
## Obtaining the Minimum Code Repository of the UI Simulator
### Source Code Acquisition
Run the following Git commands to pull the minimum code repository of the UI simulator:
```git
git clone https://gitee.com/openharmony/arkui_ui_lite.git -b master foundation/arkui/ui_lite
git clone https://gitee.com/openharmony/graphic_graphic_utils_lite.git -b master foundation/graphic/graphic_utils_lite
git clone https://gitee.com/openharmony/graphic_surface_lite.git -b master foundation/graphic/surface_lite
git clone https://gitee.com/openharmony/window_window_manager_lite.git -b master foundation/window/window_window_manager_lite
git clone https://gitee.com/openharmony/third_party_zlib.git -b master third_party/zlib
git clone https://gitee.com/openharmony/third_party_qrcodegen.git -b master third_party/qrcodegen
git clone https://gitee.com/openharmony/third_party_libpng.git -b master third_party/libpng
git clone https://gitee.com/openharmony/third_party_libjpeg.git -b master third_party/libjpeg
git clone https://gitee.com/openharmony/third_party_icu.git -b master third_party/icu
git clone https://gitee.com/openharmony/third_party_harfbuzz.git -b master third_party/harfbuzz
git clone https://gitee.com/openharmony/third_party_freetype.git -b master third_party/freetype
git clone https://gitee.com/openharmony/third_party_bounds_checking_function.git -b master third_party/bounds_checking_function
git clone https://gitee.com/openharmony/third_party_cJSON.git -b master third_party/cJSON
git clone https://gitee.com/openharmony/third_party_giflib.git -b master third_party/giflib
git clone https://gitee.com/openharmony/third_party_libjpeg-turbo.git -b master third_party/libjpeg-turbo
```
1. Create a source code project directory.
2. Right-click the new directory and choose **Git Bash Here**.
3. Copy and paste the preceding commands to the terminal, press **Enter**, and wait until the download is complete. Alternatively, create a **clone.bat** file in the directory, copy and save the preceding commands, double-click **clone.bat**, and wait until the download is complete.
![Downloading source code](figures/graphic_lite_git_clone.png "Download source code")
### Opening Qt Creator
1. Choose **File > Open File or Project**.
2. Select the project in the displayed dialog box.
The path of the source code is as follows:
```bash
foundation/arkui/ui_lite/tools/qt/simulator/simulator.pro
```
Note: When you open the project for the first time, select only **MinGW** in the **Kits** list on the **Configure Project** page.
![Opening a project](figures/graphic_lite_qt_project_open.png "Open project")
3. Click **Configure Project** to load the project.
![Selecting minGW](figures/graphic_lite_qt_project_open2.png "Select minGW")
### UI Test Application Running Entry
Expand the project tree.
```
simulator
|-UITest
|-Sources
|-main.cpp
```
![Project tree](figures/graphic_lite_qt_project_run.png "Project tree")
### Build
Choose **Build > Build Project "simulator"**, or right-click the project tree and choose **Rebuild**.
![Build](figures/graphic_lite_qt_project_build.png "Build")
### Debugging
Click the green triangle in the lower left corner to run the project, or click the debug button next to it to start debugging.
![Running window](figures/graphic_lite_qt_project_debug.png "Running window")
![Test UI](figures/graphic_lite_qt_project_demo.png "Test UI")