# Camera Overview
## Basic Concepts
Camera is one of the services provided by the OpenHarmony multimedia subsystem. The camera module provides recording, preview, and photographing features and supports concurrent stream reading by multiple users.
It is considered good practice that you understand the following concepts before starting development:
- Video frame
A video frame is a single image in the stream data of a video. A video data stream consists of a series of image frames arranged at a fixed time interval.
- Frames per second (FPS)
FPS represents the frame rate at which images are refreshed during video playback, that is, the number of frames displayed per second. A higher frame rate means smoother video playback.
- Resolution
Each image frame consists of pixels, and the number of pixels in an image is represented by the resolution. For example, 1080p (1920 x 1080) indicates that the image width is 1920 pixels and the image height is 1080 pixels.
## Working Principles
- Multimedia services
Multimedia services are started by the **Init** process upon system startup, and media hardware resources (such as memory, display hardware, image sensors, and codecs) are initialized and allocated. During the initialization, the configuration file is parsed, which determines the upper limit of capabilities and resources of each service. Generally, the upper limit is configured by original equipment manufacturers (OEMs) in the configuration file. The following configuration items are available for the camera service during multimedia service initialization:
- Memory pool: Memory blocks in the memory pool are accessed and released continuously by all multimedia services.
- Image sensor: sensor type, resolution, ISP, and more.
- Image processor: resolution, bit rate, image inversion, and more.
- Image encoder: encoding format, bit rate, resolution, and more.
- Major classes
You can use the **Camera** class and its asynchronous callback classes to configure and access the camera functionalities. The three callback classes correspond to different asynchronous processing scenarios, as described in the table below.
**Table 1** Class description
| Class| Description| Example|
| -------- | -------- | -------- |
| Camera | Configures the static camera capability through the configuration class to use basic camera functionalities.| Photographing, video recording, and previewing|
| CameraDeviceCallback | Handles camera hardware state changes.| Available/Unavailable|
| CameraStateCallback | Handles camera instance state changes.| Created or released|
| FrameStateCallback | Handles frame status changes.| Start and end of photographing, and frame rate changes|
- Stream transfer
A surface is the basic data structure for transferring audio and video data. A camera is generally used as the data producer of a surface and has specific consumers in different scenarios.
Camera preview and recording outputs are video streams, and photographing outputs are image frames. The outputs are transferred through the **Surface** class. A surface can transmit media information streams within and across processes.
Take video recording as an example. You create a **Recorder** instance, obtain its surface, and then transfer the surface to the **Camera** instance. The **Camera** instance then works as a producer that injects video streams into the surface, and the **Recorder** instance acts as the consumer that obtains video streams from the surface for storage. In this way, the recorder and camera are connected through the surface.
Similarly, you can create a surface, implement consumer logic for it, and transfer it to the **Camera** instance, for example, to transmit video streams over the network or to save captured frame data as image files.
The graphics module also obtains stream resources from the camera module through surfaces. For details, see [Overview of Small-System Graphics](../subsystems/subsys-graphics-overview.md).
- Camera running process
1. Creating a camera
This process creates a **Camera** instance through **CameraManager**, binds the camera device to the server, and asynchronously notifies you of the successful creation. The following figure shows the time sequence between classes.
**Figure 1** Sequence diagram for creating a camera
![en-us_image_0000001200114819](figures/en-us_image_0000001200114819.png)
2. Taking a video/Previewing
This process creates a **Camera** instance via **CameraKit**, and configures frame attributes via **FrameConfig** for recording or previewing. The following figure shows the time sequence.
**Figure 2** Sequence diagram for recording/previewing
![en-us_image_0000001200115193](figures/en-us_image_0000001200115193.png)
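The running process above can be sketched end to end. The outline below is an illustration of the call order only, based on the classes named in this overview (**CameraKit**, **CameraStateCallback**, **FrameConfig**); the exact headers, signatures, and constants are assumptions and may differ by OpenHarmony version, so treat it as a sketch rather than reference code.

```cpp
/* Illustrative outline only; API names and constants are assumptions
 * based on the classes described in this overview. */
class SampleStateCallback : public CameraStateCallback {
public:
    void OnCreated(Camera &camera) override {
        camera_ = &camera;   // The camera is ready; frame capture can start.
    }
    Camera *camera_ = nullptr;
};

void CreateAndStartCapture() {
    CameraKit *cameraKit = CameraKit::GetInstance();   // 1. Get the camera service entry.
    std::list<std::string> camIds;
    cameraKit->GetCameraIds(camIds);                   // 2. Enumerate camera devices.

    SampleStateCallback stateCallback;
    EventHandler handler;
    cameraKit->CreateCamera(camIds.front(), stateCallback, handler); // 3. Asynchronous creation.

    // 4. After OnCreated() fires, configure frame attributes and start capture.
    FrameConfig *fc = new FrameConfig(FRAME_CONFIG_RECORD);
    // ... add a surface to fc, as shown in the development guides below ...
    stateCallback.camera_->TriggerLoopingCapture(*fc);
}
```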
# Previewing Development
## When to Use
Use the camera module APIs to generate and play video streams.
## Available APIs
For details, see [Available APIs](subsys-multimedia-camera-photo-guide.md#available-apis).
## Constraints
None
## How to Develop
1. Perform step 1 through step 4 described in [Photographing Development](subsys-multimedia-camera-photo-guide.md).
2. Set the preview area.
```
Surface *surface = Surface::CreateSurface();
/* Set the display area. */
surface->SetUserData("region_position_x", "480"); // X-coordinate of the upper left corner of the rectangle.
surface->SetUserData("region_position_y", "270"); // Y-coordinate of the upper left corner of the rectangle.
surface->SetUserData("region_width", "960"); // Width.
surface->SetUserData("region_height", "540"); // Height.
fc->AddSurface(*surface);
```
3. Start and stop previewing.
```
stateCallback->camera_->TriggerLoopingCapture(*fc); // Start previewing.
stateCallback->camera_->StopLoopingCapture(); // Stop previewing.
```
# Video Recording Development
## When to Use
Use the camera module APIs to capture video streams.
## Available APIs
For details, see [Available APIs](subsys-multimedia-camera-photo-guide.md#available-apis).
## Constraints
None
## How to Develop
1. Perform step 1 through step 4 described in [Photographing Development](subsys-multimedia-camera-photo-guide.md).
2. Obtain the **FrameConfig** instance for video recording.
```
/* Obtain the surface from the recorder. */
Surface *surface = recorder_->GetSurface(0);
surface->SetWidthAndHeight(1920, 1080);
surface->SetQueueSize(3);
surface->SetSize(1024 * 1024);
/* Add the surface to the FrameConfig instance. */
FrameConfig *fc = new FrameConfig(FRAME_CONFIG_RECORD);
fc->AddSurface(*surface);
```
3. Start and stop video recording.
```
stateCallback->camera_->TriggerLoopingCapture(*fc); // Start recording.
stateCallback->camera_->StopLoopingCapture(); // Stop recording.
```
# Audio/Video Overview
OpenHarmony multimedia services help you develop audio and video playback and recording.
- The media playback module facilitates the development of audio and video playback, including media file and stream playback, volume control, and playback progress control.
- The media recording module supports the development of audio and video recording and provides functions to set the size of captured video, encoding bit rate, encoder type, video frame rate, audio sampling rate, and output file format.
## Basic Concepts
It is considered good practice that you understand the following concepts before starting development:
- Streaming media technology
Streaming media technology encodes continuous video and audio data and stores the data on a network server, so that viewers can watch and listen while the data is being downloaded, with no need to wait for the download to finish.
- Video frame rate
The frame rate measures the number of images displayed per second. The more frames per second (FPS), the smoother the video.
- Bit rate
Bit rate is the number of bits transmitted per unit of time. The commonly used unit is kbit/s.
- Sampling rate
The sampling rate is the number of samples per second taken from continuous signals to form discrete signals. The unit is hertz (Hz).
## Codec Specifications
Available audio and video codecs vary depending on device types. The following table lists supported specifications for available development boards.
**Table 1** Codec specifications for different development boards
| Device Type| Development Board| Decoding| Encoding|
| -------- | -------- | -------- | -------- |
| Cameras with a screen| Hi3516 | - Audio: MPEG-4 AAC Profile (AAC LC), mono and dual channels, and the MPEG-4 (.mp4 and .m4a) container formats are supported.<br>- Video: H.265 (HEVC) and H.264 (AVC) decoding (for streams encoded using a chip of the same type) and the MPEG-4 (.mp4) container format are supported.| - Audio: AAC-LC encoding, mono and dual channels, and the MPEG-4 (.mp4) container format are supported.<br>- Video: H.264 and H.265 encoding and the MPEG-4 (.mp4) container format are supported.|
| Cameras without a screen| Hi3518 | - Audio: MPEG-4 AAC Profile (AAC LC), mono and dual channels, and the MPEG-4 (.mp4 and .m4a) container format are supported.<br>- Video: none| - Audio: AAC-LC encoding, mono and dual channels, and the MPEG-4 (.mp4) container format are supported.<br>- Video: H.264 and H.265 encoding and the MPEG-4 (.mp4) container format are supported.|
| WLAN connecting devices| Hi3861 | N/A| N/A|
For details about the codec specifications of Hi3516 and Hi3518, refer to their documentation.