diff --git a/en/device-dev/subsystems/figures/en-us_image_0000001054101094.png b/en/device-dev/subsystems/figures/en-us_image_0000001200114819.png similarity index 100% rename from en/device-dev/subsystems/figures/en-us_image_0000001054101094.png rename to en/device-dev/subsystems/figures/en-us_image_0000001200114819.png diff --git a/en/device-dev/subsystems/figures/en-us_image_0000001054421113.png b/en/device-dev/subsystems/figures/en-us_image_0000001200115193.png similarity index 100% rename from en/device-dev/subsystems/figures/en-us_image_0000001054421113.png rename to en/device-dev/subsystems/figures/en-us_image_0000001200115193.png diff --git a/en/device-dev/subsystems/subsys-multimedia-camera-overview.md b/en/device-dev/subsystems/subsys-multimedia-camera-overview.md index 34f8ddb97762ff4c28bc609d9505af457b329018..6e3cdaf627fc8fe294b5ccde5f1b2dccada1d47a 100644 --- a/en/device-dev/subsystems/subsys-multimedia-camera-overview.md +++ b/en/device-dev/subsystems/subsys-multimedia-camera-overview.md @@ -1,112 +1,75 @@ -# Camera Overview +# Camera Overview -## Basic Concepts + +## Basic Concepts Camera is one of the services provided by the OpenHarmony multimedia subsystem. The camera module provides recording, preview, and photographing features and supports concurrent stream reading by multiple users. It is considered good practice that you understand the following concepts before starting development: -- Video frame - - A video frame is formed by the stream data of a video image. Video data streams are formed by a series of image data arranged at a fixed time interval. - -- Frame per second \(FPS\) - - FPS is used to represent the frame rate at which images are refreshed during video playback, or the number of frames per second during video playback. A higher frame rate means smoother video playback. - -- Resolution - - Information about each image frame consists of pixels. The number of pixels in an image is presented by the resolution. 
For example, 1080p \(1920 x 1080\) indicates that the image width is 1920 pixels and the image height is 1080 pixels. - - -## Working Principles - -- Multimedia services - - Multimedia services are started by the **Init** process upon system startup, and media hardware resources \(such as memory, display hardware, image sensors, and codecs\) are initialized and allocated. During the initialization, the configuration file is parsed, which determines the upper limit of capabilities and resources of each service. Generally, the upper limit is configured by original equipment manufacturers \(OEMs\) in the configuration file. The following configuration items are available for the camera service during multimedia service initialization: - - - Memory pool: Memory blocks in the memory pool are accessed and released continuously by all multimedia services. - - Image sensor: sensor type, resolution, ISP, and more - - Image processor: resolution, bit rate, image inversion, and more - - Image encoder: encoding format, bit rate, resolution, and more - - -- Major classes +- Video frame - You can use the **Camera** class and its asynchronous callback classes to configure and access the camera functionalities. The three callback classes correspond to different asynchronous processing scenarios, as described in [Table 1](#table486418149411). + A video frame is formed by the stream data of a video image. Video data streams are formed by a series of image data arranged at a fixed time interval. - **Table 1** Class description +- Frames per second (FPS) - - - - - - - - - - - - - - - - - - - - - - - -

Class

-

Description

-

Examples

-

Camera

-

Configures the static camera capability through the configuration class to use basic camera functionalities.

-

Photographing, video recording, and previewing

-

CameraDeviceCallback

-

Handles camera hardware state changes.

-

Available or unavailable

-

CameraStateCallback

-

Handles camera instance state changes.

-

Created or released

-

FrameStateCallback

-

Handles frame status changes.

-

Start and end of photographing, and frame rate changes

-
+ FPS is used to represent the frame rate at which images are refreshed during video playback, or the number of frames per second during video playback. A higher frame rate means smoother video playback. -- Stream transfer +- Resolution - A surface is the basic data structure for transferring audio and video data. A camera is generally used as the data producer of a surface and has specific consumers in different scenarios. + Information about each image frame consists of pixels. The number of pixels in an image is presented by the resolution. For example, 1080p (1920 x 1080) indicates that the image width is 1920 pixels and the image height is 1080 pixels. - Camera preview and recording outputs are video streams, and photographing outputs are image frames. The outputs are transferred through the **Surface** class. A surface can transmit media information streams within and cross processes. - Take video recording as an example. You create a **Recorder** instance, obtain the surface of the **Recorder** instance, and then transfer the surface to the **Camera** instance. In this case, the **Camera** instance works as a producer to inject video streams to the surface, and the **Recorder** instance act as the consumer to obtain video streams from the surface for storage. In this case, you connect the recorder and camera through the surface. +## Working Principles - Similarly, you can create a surface, implement consumer logic for it, and transfer it to the **Camera** instance. For example, transmit video streams over the network or save captured frame data as an image file. +- Multimedia services + + Multimedia services are started by the **Init** process upon system startup, and media hardware resources (such as memory, display hardware, image sensors, and codecs) are initialized and allocated. During the initialization, the configuration file is parsed, which determines the upper limit of capabilities and resources of each service. 
Generally, the upper limit is configured by original equipment manufacturers (OEMs) in the configuration file. The following configuration items are available for the camera service during multimedia service initialization:

  - Memory pool: Memory blocks in the memory pool are accessed and released continuously by all multimedia services.
  - Image sensor: sensor type, resolution, ISP, and more.
  - Image processor: resolution, bit rate, image inversion, and more.
  - Image encoder: encoding format, bit rate, resolution, and more.

- Major classes

  You can use the **Camera** class and its asynchronous callback classes to configure and access the camera functionalities. The three callback classes correspond to different asynchronous processing scenarios, as described in the table below.

  **Table 1** Class description

  | Class| Description| Example|
  | -------- | -------- | -------- |
  | Camera | Configures the static camera capability through the configuration class to use basic camera functionalities.| Photographing, video recording, and previewing|
  | CameraDeviceCallback | Handles camera hardware state changes.| Available/Unavailable|
  | CameraStateCallback | Handles camera instance state changes.| Created or released|
  | FrameStateCallback | Handles frame status changes.| Start and end of photographing, and frame rate changes|

- Stream transfer

  A surface is the basic data structure for transferring audio and video data.
A camera is generally used as the data producer of a surface and has specific consumers in different scenarios.

  Camera preview and recording outputs are video streams, and photographing outputs are image frames. The outputs are transferred through the **Surface** class. A surface can transmit media information streams within and across processes.

  Take video recording as an example. You create a **Recorder** instance, obtain the surface of the **Recorder** instance, and then transfer the surface to the **Camera** instance. The **Camera** instance works as the producer to inject video streams into the surface, and the **Recorder** instance acts as the consumer to obtain video streams from the surface for storage. In this way, the recorder and the camera are connected through the surface.

  Similarly, you can create a surface, implement consumer logic for it, and transfer it to the **Camera** instance, for example, to transmit video streams over the network or to save captured frame data as an image file.

  The graphics module also obtains stream resources from the camera module through surfaces. For details, see [Overview of Small-System Graphics](../subsystems/subsys-graphics-overview.md).

- Camera running process

  1. Creating a camera

     This process creates a **Camera** instance using **CameraManager**, binds the camera device to the server, and asynchronously notifies you of the successful creation. The following figure shows the time sequence between classes.
+ **Figure 1** Sequence diagram for creating a camera + + ![en-us_image_0000001200114819](figures/en-us_image_0000001200114819.png) + 2. Taking a video/Previewing + + This process creates a **Camera** instance via **CameraKit**, and configures frame attributes via **FrameConfig** for recording or previewing. The following figure shows the time sequence. + **Figure 2** Sequence diagram for recording/previewing + + ![en-us_image_0000001200115193](figures/en-us_image_0000001200115193.png) diff --git a/en/device-dev/subsystems/subsys-multimedia-camera-photo-guide.md b/en/device-dev/subsystems/subsys-multimedia-camera-photo-guide.md index ec5d8870524f3a581fb178d56205f2ac2bcc99cf..f9ccef96ef42c34e28f4d8f7b7c76b10ade98a5c 100644 --- a/en/device-dev/subsystems/subsys-multimedia-camera-photo-guide.md +++ b/en/device-dev/subsystems/subsys-multimedia-camera-photo-guide.md @@ -1,440 +1,192 @@ -# Photographing Development - -## When to Use - -Use the camera module APIs to capture frames \(photographing\). - -## Available APIs - -**Table 1** APIs for photographing - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Class

-

Function

-

Description

-

CameraKit

-

int32_t GetCameraIds(std::list<string> cameraList)

-

Obtains IDs of cameras that are currently available.

-

CameraKit

-

CameraAbility& GetCameraAbility(string cameraId)

-

Obtains the camera capability

-

CameraKit

-

void RegisterCameraDeviceCallback(CameraDeviceCallback* callback, EventHandler* handler)

-

Registers a camera callback for camera status changes.

-

CameraKit

-

void UnregisterCameraDeviceCallback(CameraDeviceCallback* callback)

-

Unregisters a camera callback.

-

CameraKit

-

void CreateCamera(string cameraId, CameraStateCallback* callback, EventHandler* handler)

-

Creates a Camera instance.

-

CameraKit

-

const CameraInfo *GetCameraInfo(std::string cameraId);

-

Creates a CameraInfo instance.

-

Camera

-

string GetCameraId()

-

Obtains the camera ID.

-

Camera

-

CameraConfig& GetCameraConfig()

-

Obtains the camera configuration.

-

Camera

-

FrameConfig& GetFrameConfig(int32_t type)

-

Obtains the frame configuration.

-

Camera

-

void Configure(CameraConfig& config)

-

Configures the camera using the CameraConfig object.

-

Camera

-

void Release()

-

Releases the Camera object and associated resources.

-

Camera

-

int TriggerLoopingCapture(FrameConfig& frameConfig)

-

Starts looping-frame capture.

-

Camera

-

void StopLoopingCapture()

-

Stops looping-frame capture.

-

Camera

-

int32_t TriggerSingleCapture(FrameConfig& frameConfig)

-

Starts single-frame capture.

-

CameraConfig

-

void SetFrameStateCallback(FrameStateCallback* callback, EventHandler* handler);

-

Sets a frame state callback to respond to state changes.

-

CameraConfig

-

static CameraConfig* CreateCameraConfig()

-

Creates a CameraConfig instance.

-

CameraAbility

-

std::list<Size> GetSupportedSizes(int format)

-

Obtains the supported image sizes for a specified image format.

-

CameraAbility

-

std::list<T> GetParameterRange(uint32_t key)

-

Obtains the parameter value range based on a specified parameter key.

-

CameraAbility

-

std::list<int32_t> GetSupportedAfModes() const;

-

Gets a list of supported autofocus modes.

-

CameraAbility

-

std::list<int32_t> GetSupportedAeModes() const;

-

Gets a list of supported auto exposure modes.

-

CameraDevice

-

CameraDeviceCallback()

-

A constructor used to create a CameraDeviceCallback instance.

-

CameraDevice

-

void OnCameraStatus​(std::string cameraId, int32_t status)

-

Called when the camera device status changes.

-

CameraStateCallback

-

CameraStateCallback​()

-

A constructor used to create a CameraStateCallback instance.

-

CameraStateCallback

-

void OnConfigured​(Camera& camera)

-

Called when the camera is configured.

-

CameraStateCallback

-

void OnConfigureFailed​(Camera& camera,int32_t errorCode)

-

Called when the camera fails to be configured.

-

CameraStateCallback

-

void OnCreated​(Camera& camera)

-

Called when the camera is successfully created.

-

CameraStateCallback

-

void OnCreateFailed​(std::string cameraId,int32_t errorCode)

-

Called when the camera fails to be created.

-

CameraStateCallback

-

void OnReleased​(Camera& camera)

-

Called when the camera is released.

-

FrameStateCallback

-

FrameStateCallback​()

-

A constructor used to create a FrameStateCallback instance.

-

FrameStateCallback

-

void OnFrameFinished(Camera& camera, FrameConfig& frameConfig, FrameResult& frameResult)

-

Called when the frame capture is completed.

-

FrameStateCallback

-

void OnFrameError​(Camera& camera, FrameConfig& frameConfig, int32_t errorCode, FrameResult& frameResult)

-

Called when the frame capture fails.

-

FrameConfig

-

int32_t GetFrameConfigType()

-

Obtains the frame configuration type.

-

FrameConfig

-

std::list<OHOS::Surface> GetSurfaces()

-

Obtains a list of surface objects (shared memories).

-

FrameConfig

-

void AddSurface(OHOS::AGP::UISurface& surface);

-

Adds a surface.

-

FrameConfig

-

void RemoveSurface(OHOS::AGP::UISurface& surface);

-

Removes a surface.

-

FrameConfig

-

void GetVendorParameter(uint8_t *value, uint32_t len);

-

Gets a vendor parameter.

-

FrameConfig

-

void SetVendorParameter(uint8_t *value, uint32_t len);

-

Sets a vendor parameter.

-

CameraInfo

-

int32_t GetCameraType() const;

-

Gets a camera type.

-

CameraInfo

-

int32_t GetCameraFacingType() const;

-

Gets a camera facing type.

-
- -## Limitations and Constraints +# Photographing Development + + +## When to Use + +Use the camera module APIs to capture frames (photographing). + + +## Available APIs + +**Table 1** APIs for photographing + +| Class| API| Description| +| -------- | -------- | -------- | +| CameraKit | int32_t GetCameraIds(std::list<string> cameraList) | Obtains IDs of cameras that are currently available.| +| CameraKit | CameraAbility& GetCameraAbility(string cameraId) | Obtains the camera capability.| +| CameraKit | void RegisterCameraDeviceCallback(CameraDeviceCallback\* callback, EventHandler\* handler) | Registers a camera callback for camera status changes.| +| CameraKit | void UnregisterCameraDeviceCallback(CameraDeviceCallback\* callback) | Unregisters a camera callback.| +| CameraKit | void CreateCamera(string cameraId, CameraStateCallback\* callback, EventHandler\* handler) | Creates a **Camera** instance.| +| Camera | string GetCameraId() | Obtains the camera ID.| +| Camera | CameraConfig& GetCameraConfig() | Obtains the camera configuration.| +| Camera | FrameConfig& GetFrameConfig(int32_t type) | Obtains the frame configuration.| +| Camera | void Configure(CameraConfig& config) | Configures the camera using a **CameraConfig** instance.| +| Camera | void Release() | Releases the **Camera** object and associated resources.| +| Camera | int TriggerLoopingCapture(FrameConfig& frameConfig) | Starts looping-frame capture.| +| Camera | void StopLoopingCapture() | Stops looping-frame capture.| +| Camera | int32_t TriggerSingleCapture(FrameConfig& frameConfig) | Starts single-frame capture.| +| CameraConfig | void SetFrameStateCallback(FrameStateCallback\* callback, EventHandler\* handler); | Sets a frame state callback to respond to state changes.| +| CameraConfig | static CameraConfig\* CreateCameraConfig() | Creates a **CameraConfig** instance.| +| CameraAbility | std::list<Size> GetSupportedSizes(int format) | Obtains the supported image sizes for a specified image 
format.|
| CameraAbility | std::list&lt;T&gt; GetParameterRange(uint32_t key) | Obtains the parameter value range based on a specified parameter key.|
| CameraDevice | CameraDeviceCallback() | A constructor used to create a **CameraDeviceCallback** instance.|
| CameraDevice | void OnCameraStatus(std::string cameraId, int32_t status) | Called when the camera device status changes.|
| CameraStateCallback | CameraStateCallback() | A constructor used to create a **CameraStateCallback** instance.|
| CameraStateCallback | void OnConfigured(Camera& camera) | Called when the camera is configured successfully.|
| CameraStateCallback | void OnConfigureFailed(Camera& camera, int32_t errorCode) | Called when the camera fails to be configured.|
| CameraStateCallback | void OnCreated(Camera& camera) | Called when the camera is created successfully.|
| CameraStateCallback | void OnCreateFailed(std::string cameraId, int32_t errorCode) | Called when the camera fails to be created.|
| CameraStateCallback | void OnReleased(Camera& camera) | Called when the camera is released.|
| FrameStateCallback | FrameStateCallback() | A constructor used to create a **FrameStateCallback** instance.|
| FrameStateCallback | void OnFrameFinished(Camera& camera, FrameConfig& frameConfig, FrameResult& frameResult) | Called when the frame capture is completed.|
| FrameStateCallback | void OnFrameError(Camera& camera, FrameConfig& frameConfig, int32_t errorCode, FrameResult& frameResult) | Called when the frame capture fails.|
| FrameConfig | int32_t GetFrameConfigType() | Obtains the frame configuration type.|
| FrameConfig | std::list&lt;OHOS::Surface&gt; GetSurfaces() | Obtains a list of surface objects.|
| FrameConfig | void AddSurface(OHOS::AGP::UISurface& surface); | Adds a surface.|
| FrameConfig | void RemoveSurface(OHOS::AGP::UISurface& surface); | Removes a surface.|

## Constraints

None

## How to Develop

1.
Extend the **CameraDeviceCallback** class and call **OnCameraStatus** to customize operations when the camera device changes, for example, when a camera becomes available or unavailable. - - ``` - class SampleCameraDeviceCallback : public CameraDeviceCallback { - void OnCameraStatus(std::string cameraId, int32_t status) override - { - // Do something when camera is available or unavailable. - } - }; - ``` - -2. Extend the **FrameStateCallback** class. After obtaining the frame data, save the data as a file. - - ``` - static void SampleSaveCapture(const char *p, uint32_t size) - { - cout << "Start saving picture" << endl; - struct timeval tv; - gettimeofday(&tv, NULL); - struct tm *ltm = localtime(&tv.tv_sec); - if (ltm != nullptr) { - ostringstream ss("Capture_"); - ss << "Capture" << ltm->tm_hour << "-" << ltm->tm_min << "-" << ltm->tm_sec << ".jpg"; - - ofstream pic("/sdcard/" + ss.str(), ofstream::out | ofstream::trunc); - cout << "write " << size << " bytes" << endl; - pic.write(p, size); - cout << "Saving picture end" << endl; - } - } - - class TestFrameStateCallback : public FrameStateCallback { - void OnFrameFinished(Camera &camera, FrameConfig &fc, FrameResult &result) override - { - cout << "Receive frame complete inform." << endl; - if (fc.GetFrameConfigType() == FRAME_CONFIG_CAPTURE) { - cout << "Capture frame received." << endl; - list surfaceList = fc.GetSurfaces(); - for (Surface *surface : surfaceList) { - SurfaceBuffer *buffer = surface->AcquireBuffer(); - if (buffer != nullptr) { - char *virtAddr = static_cast(buffer->GetVirAddr()); - if (virtAddr != nullptr) { - SampleSaveCapture(virtAddr, buffer->GetSize()); - } - surface->ReleaseBuffer(buffer); - } - delete surface; - } - delete &fc; - } - } - }; - ``` - -3. Extend the **CameraStateCallback** class and customize operations when the camera state changes \(configuration successful or failed, and creation successful or failed\). 
- - ``` - class SampleCameraStateMng : public CameraStateCallback { - public: - SampleCameraStateMng() = delete; - SampleCameraStateMng(EventHandler &eventHdlr) : eventHdlr_(eventHdlr) {} - ~SampleCameraStateMng() - { - if (recordFd_ != -1) { - close(recordFd_); - } - } - void OnCreated(Camera &c) override - { - cout << "Sample recv OnCreate camera." << endl; - auto config = CameraConfig::CreateCameraConfig(); - config->SetFrameStateCallback(&fsCb_, &eventHdlr_); - c.Configure(*config); - cam_ = &c; - } - void OnCreateFailed(const std::string cameraId, int32_t errorCode) override {} - void OnReleased(Camera &c) override {} - }; - ``` - -4. Create a **CameraKit** instance to set and obtain camera information. - - ``` - CameraKit *camKit = CameraKit::GetInstance(); - list camList = camKit->GetCameraIds(); - string camId; - for (auto &cam : camList) { - cout << "camera name:" << cam << endl; - const CameraAbility *ability = camKit->GetCameraAbility(cam); - /* Find the camera that fits your ability. */ - list sizeList = ability->GetSupportedSizes(0); - if (find(sizeList.begin(), sizeList.end(), CAM_PIC_1080P) != sizeList.end()) { - camId = cam; - break; - } - } - ``` - -5. Create a **Camera** instance. - - ``` - EventHandler eventHdlr; // Create a thread to handle callback events. - SampleCameraStateMng CamStateMng(eventHdlr); - - camKit->CreateCamera(camId, CamStateMng, eventHdlr); - ``` - -6. In the main process, synchronize configurations set by callback functions implemented in [step 1](#li378084192111), [step 2](#li8716104682913), and [step 3](#li6671035102514). - - ``` - void OnCreated(Camera &c) override - { - cout << "Sample recv OnCreate camera." << endl; - auto config = CameraConfig::CreateCameraConfig(); - config->SetFrameStateCallback(&fsCb_, &eventHdlr_); - c.Configure(*config); - cam_ = &c; - } - - void Capture() - { - if (cam_ == nullptr) { - cout << "Camera is not ready." 
<< endl; - return; - } - FrameConfig *fc = new FrameConfig(FRAME_CONFIG_CAPTURE); - Surface *surface = Surface::CreateSurface(); - if (surface == nullptr) { - delete fc; - return; - } - surface->SetWidthAndHeight(1920, 1080); /* 1920:width,1080:height */ - fc->AddSurface(*surface); - cam_->TriggerSingleCapture(*fc); - } - ``` - +## How to Develop + +1. Implement the **CameraDeviceCallback** class and call **OnCameraStatus** to customize operations when the camera device changes, for example, when a camera becomes available or unavailable. + + ``` + class SampleCameraDeviceCallback : public CameraDeviceCallback { + void OnCameraStatus(std::string cameraId, int32_t status) override + { + //do something when camera is available/unavailable + } + }; + ``` + +2. Implement the **FrameStateCallback** class. After obtaining the frame data, save the data as a file. + + ``` + static void SampleSaveCapture(const char *p, uint32_t size) + { + cout << "Start saving picture" << endl; + struct timeval tv; + gettimeofday(&tv, NULL); + struct tm *ltm = localtime(&tv.tv_sec); + if (ltm != nullptr) { + ostringstream ss("Capture_"); + ss << "Capture" << ltm->tm_hour << "-" << ltm->tm_min << "-" << ltm->tm_sec << ".jpg"; + + ofstream pic("/sdcard/" + ss.str(), ofstream::out | ofstream::trunc); + cout << "write " << size << " bytes" << endl; + pic.write(p, size); + cout << "Saving picture end" << endl; + } + } + + class TestFrameStateCallback : public FrameStateCallback { + void OnFrameFinished(Camera &camera, FrameConfig &fc, FrameResult &result) override + { + cout << "Receive frame complete inform." << endl; + if (fc.GetFrameConfigType() == FRAME_CONFIG_CAPTURE) { + cout << "Capture frame received." 
<< endl;
               list<Surface *> surfaceList = fc.GetSurfaces();
               for (Surface *surface : surfaceList) {
                   SurfaceBuffer *buffer = surface->AcquireBuffer();
                   if (buffer != nullptr) {
                       char *virtAddr = static_cast<char *>(buffer->GetVirAddr());
                       if (virtAddr != nullptr) {
                           SampleSaveCapture(virtAddr, buffer->GetSize());
                       }
                       surface->ReleaseBuffer(buffer);
                   }
                   delete surface;
               }
               delete &fc;
           }
       }
   };
   ```

3. Implement the **CameraStateCallback** class and customize operations when the camera state changes (configuration successful or failed, and creation successful or failed).

   ```
   class SampleCameraStateMng : public CameraStateCallback {
   public:
       SampleCameraStateMng() = delete;
       SampleCameraStateMng(EventHandler &eventHdlr) : eventHdlr_(eventHdlr) {}
       ~SampleCameraStateMng()
       {
           if (recordFd_ != -1) {
               close(recordFd_);
           }
       }
       void OnCreated(Camera &c) override
       {
           cout << "Sample recv OnCreate camera." << endl;
           auto config = CameraConfig::CreateCameraConfig();
           config->SetFrameStateCallback(&fsCb_, &eventHdlr_);
           c.Configure(*config);
           cam_ = &c;
       }
       void OnCreateFailed(const std::string cameraId, int32_t errorCode) override {}
       void OnReleased(Camera &c) override {}
   };
   ```

4. Create a **CameraKit** instance to set and obtain camera information.

   ```
   CameraKit *camKit = CameraKit::GetInstance();
   list<string> camList = camKit->GetCameraIds();
   string camId;
   for (auto &cam : camList) {
       cout << "camera name:" << cam << endl;
       const CameraAbility *ability = camKit->GetCameraAbility(cam);
       /* Find a camera that provides the required ability. */
       list<Size> sizeList = ability->GetSupportedSizes(0);
       if (find(sizeList.begin(), sizeList.end(), CAM_PIC_1080P) != sizeList.end()) {
           camId = cam;
           break;
       }
   }
   ```

5. Create a **Camera** instance.

   ```
   EventHandler eventHdlr; // Create a thread to handle callback events.
   SampleCameraStateMng CamStateMng(eventHdlr);

   camKit->CreateCamera(camId, CamStateMng, eventHdlr);
   ```

6.
Based on the callback design in steps 1, 2, and 3, perform related operations until the **OnCreated** callback obtains **cam_**. + + ``` + void OnCreated(Camera &c) override + { + cout << "Sample recv OnCreate camera." << endl; + auto config = CameraConfig::CreateCameraConfig(); + config->SetFrameStateCallback(&fsCb_, &eventHdlr_); + c.Configure(*config); + cam_ = &c; + } + + void Capture() + { + if (cam_ == nullptr) { + cout << "Camera is not ready." << endl; + return; + } + FrameConfig *fc = new FrameConfig(FRAME_CONFIG_CAPTURE); + Surface *surface = Surface::CreateSurface(); + if (surface == nullptr) { + delete fc; + return; + } + surface->SetWidthAndHeight(1920, 1080); /* 1920:width,1080:height */ + fc->AddSurface(*surface); + cam_->TriggerSingleCapture(*fc); + } + ``` diff --git a/en/device-dev/subsystems/subsys-multimedia-camera-preview-guide.md b/en/device-dev/subsystems/subsys-multimedia-camera-preview-guide.md index ff82dd15e65c193109e2d51ed2eca5aebc35d4c1..aeb3f8fc3926388fe682bce43863243141597747 100644 --- a/en/device-dev/subsystems/subsys-multimedia-camera-preview-guide.md +++ b/en/device-dev/subsystems/subsys-multimedia-camera-preview-guide.md @@ -1,38 +1,41 @@ -# Previewing Development +# Previewing Development -## When to Use -Use the camera module APIs to generate and play video streams. - -## Available APIs - -For details, see the available APIs described in development guidelines on photographing. +## When to Use -## Limitations and Constraints +Use the camera module APIs to generate and play video streams. -None -## How to Develop +## Available APIs -1. Perform step 1 through step 4 described in development guidelines on photographing. -2. Set the preview area. +For details, see [Available APIs](subsys-multimedia-camera-photo-guide.md#available-apis). - ``` - Surface *surface = Surface::CreateSurface(); - /* Set the display area. 
*/ - surface->SetUserData("region_position_x", "480"); // X-coordinate of the upper left corner of the rectangle - surface->SetUserData("region_position_y", "270"); // Y-coordinate of the upper left corner of the rectangle - surface->SetUserData("region_width", "960"); // Width - surface->SetUserData("region_height", "540"); // Height - - fc->AddSurface(*surface); - ``` -3. Start and stop previewing. +## Constraints - ``` - stateCallback->camera_->TriggerLoopingCapture(*fc); // Start previewing. - stateCallback->camera_->StopLoopingCapture(); // Stop previewing. - ``` +None +## How to Develop + +1. Perform step 1 through step 4 described in [Photographing Development](subsys-multimedia-camera-photo-guide.md). + +2. Set the preview area. + + ``` + Surface *surface = Surface::CreateSurface(); + /* Set the display area. */ + surface->SetUserData("region_position_x", "480"); // X-coordinate of the upper left corner of the rectangle. + surface->SetUserData("region_position_y", "270"); // Y-coordinate of the upper left corner of the rectangle. + surface->SetUserData("region_width", "960"); // Width. + surface->SetUserData("region_height", "540"); // Height. + + fc->AddSurface(*surface); + ``` + +3. Start and stop previewing. + + ``` + stateCallback->camera_->TriggerLoopingCapture(*fc); // Start previewing. + stateCallback->camera_->StopLoopingCapture(); // Stop previewing. + ``` diff --git a/en/device-dev/subsystems/subsys-multimedia-camera-record-guide.md b/en/device-dev/subsystems/subsys-multimedia-camera-record-guide.md index b590d6222e362130e3d1ee3c2d8a20bb9e8c4049..79aaa6814385c42f6d6c26ad00526d0281efdb0e 100644 --- a/en/device-dev/subsystems/subsys-multimedia-camera-record-guide.md +++ b/en/device-dev/subsystems/subsys-multimedia-camera-record-guide.md @@ -1,38 +1,41 @@ -# Video Recording Development +# Video Recording Development -## When to Use -Use the camera module APIs to capture video streams. 
- -## Available APIs - -For details, see the available APIs described in development guidelines on photographing. +## When to Use -## Limitations and Constraints +Use the camera module APIs to capture video streams. -None -## How to Develop +## Available APIs -1. Perform step 1 through step 4 described in development guidelines on photographing. -2. Obtain the **FrameConfig** instance for audio recording. +For details, see [Available APIs](subsys-multimedia-camera-photo-guide.md#available-apis). - ``` - /* Obtain the surface from the recorder. */ - Surface *surface = recorder_->GetSurface(0); - surface->SetWidthAndHeight(1920, 1080); - surface->SetQueueSize(3); - surface->SetSize(1024 * 1024); - /* Add the surface to the FrameConfig instance. */ - FrameConfig *fc = new FrameConfig(FRAME_CONFIG_RECORD); - fc->AddSurface(*surface); - ``` -3. Start and stop video recording. +## Constraints - ``` - stateCallback->camera_->TriggerLoopingCapture(*fc); // Start recording. - stateCallback->camera_->StopLoopingCapture(); // Stop recording. - ``` +None +## How to Develop + +1. Perform step 1 through step 4 described in [Photographing Development](subsys-multimedia-camera-photo-guide.md). + +2. Obtain the **FrameConfig** instance for video recording. + + ``` + /* Obtain the surface from the recorder. */ + Surface *surface = recorder_->GetSurface(0); + surface->SetWidthAndHeight(1920, 1080); + surface->SetQueueSize(3); + surface->SetSize(1024 * 1024); + /* Add the surface to the FrameConfig instance. */ + FrameConfig *fc = new FrameConfig(FRAME_CONFIG_RECORD); + fc->AddSurface(*surface); + ``` + +3. Start and stop video recording. + + ``` + stateCallback->camera_->TriggerLoopingCapture(*fc); // Start recording. + stateCallback->camera_->StopLoopingCapture(); // Stop recording. 
+ ``` diff --git a/en/device-dev/subsystems/subsys-multimedia-video-overview.md b/en/device-dev/subsystems/subsys-multimedia-video-overview.md index 2fa5db56aa9c14509d780f843ac5e6a575585297..c8d2824378f7a8401aa9fbb85af4f70ea4bde300 100644 --- a/en/device-dev/subsystems/subsys-multimedia-video-overview.md +++ b/en/device-dev/subsystems/subsys-multimedia-video-overview.md @@ -1,43 +1,45 @@ # Audio/Video Overview -OpenHarmony multimedia services help you to develop for audio and video playback and recording. -- The media playback module facilitates the development of audio and video playback, including media file and stream playback, volume control, and playback progress control. -- The media recording module supports the development of audio and video recording and provides functions to set the size of captured video, encoding bit rate, encoder type, video frame rate, audio sampling rate, and output file format. +OpenHarmony multimedia services help you to develop audio and video playback and recording. -## Basic Concepts -It is considered good practice that you understand the following concepts before starting development: +- The media playback module facilitates the development of audio and video playback, including media file and stream playback, volume control, and playback progress control. -- Streaming media technology +- The media recording module supports the development of audio and video recording and provides functions to set the size of captured video, encoding bit rate, encoder type, video frame rate, audio sampling rate, and output file format. - The streaming media technology refers to a process to encode continuous video and audio data and store the data on a network server. A viewer can watch and listen to the video and audio during download with no need to wait for the completion of download.
+## Basic Concepts + +It is considered good practice that you understand the following concepts before starting development: -- Video frame rate +- Streaming media technology - The frame rate is used to measure the number of displayed frames, which is the number of images transmitted per second. The more frames per second \(FPS\), the smoother the video. + The streaming media technology refers to a process to encode continuous video and audio data and store the data on a network server. A viewer can watch and listen to the video and audio during download with no need to wait for the completion of download. -- Bit rate -- Video frame rate + + The frame rate is used to measure the number of displayed frames, which is the number of images transmitted per second. The more frames per second (FPS), the smoother the video. - Bit rate is the number of bits transmitted per unit of time. The commonly used unit is kbit/s. +- Bit rate -- Sampling rate + Bit rate is the number of bits transmitted per unit of time. The commonly used unit is kbit/s. - The sampling rate is the number of samples per second taken from continuous signals to form discrete signals. The unit is hertz \(Hz\). +- Sampling rate + The sampling rate is the number of samples per second taken from continuous signals to form discrete signals. The unit is hertz (Hz). -## Encoding and Decoding -Available audio and video encoding and decoding capabilities vary depending on device types. The following table lists supported specifications for available development boards. +## Codec Specifications -**Table 1** Encoding and decoding specifications for different development boards +Available audio and video codecs vary depending on device types. The following table lists supported specifications for available development boards.
-| Device Type | Development Board | Decoding | Encoding | -| ------------------------ | ----------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | -| Cameras with a screen | Hi3516 | - Audio: MPEG-4 AAC Profile (AAC LC), MPEG Audio Layer 3 (MP3), mono and dual channels, MPEG-4 (.mp4 and .m4a), and MP3 (.mp3) are supported.
- Video: The H.265 (HEVC) and H.264 (AVC) (for streams encoded using a chip of the same type) and the MPEG-4 (.mp4) container format are supported. | - Audio: AAC-LC encoding, mono and dual channels, and the MPEG-4 (.mp4) container format are supported.
- Video: H.264 and H.265 encoding and the MPEG-4 (.mp4) container format are supported. | -| Cameras without a screen | Hi3518 | - Audio: MPEG-4 AAC Profile (AAC LC), MPEG Audio Layer 3 (MP3), mono and dual channels, MPEG-4 (.mp4 and .m4a), and MP3 (.mp3) are supported.
- Video: none |
  • Audio: AAC-LC encoding, mono and dual channels, and the MPEG-4 (.mp4) container format are supported.
    - Video: H.264 and H.265 encoding and the MPEG-4 (.mp4) container format are supported. | -| WLAN connecting devices | Hi3861 | N/A | N/A | +**Table 1** Codec specifications for different development boards -For details about the encoding and decoding specifications of Hi3516 and Hi3518, refer to their documentation. +| Device Type| Development Board| Decoding| Encoding|
+| -------- | -------- | -------- | -------- |
+| Cameras with a screen| Hi3516 | - Audio: MPEG-4 AAC Profile (AAC LC), mono and dual channels, and the MPEG-4 (.mp4 and .m4a) container format are supported.<br>- Video: H.265 (HEVC) and H.264 (AVC) decoding (for streams encoded using a chip of the same type) and the MPEG-4 (.mp4) container format are supported.| - Audio: AAC-LC encoding, mono and dual channels, and the MPEG-4 (.mp4) container format are supported.<br>- Video: H.264 and H.265 encoding and the MPEG-4 (.mp4) container format are supported.|
+| Cameras without a screen| Hi3518 | - Audio: MPEG-4 AAC Profile (AAC LC), mono and dual channels, and the MPEG-4 (.mp4 and .m4a) container format are supported.<br>- Video: none| - Audio: AAC-LC encoding, mono and dual channels, and the MPEG-4 (.mp4) container format are supported.<br>- Video: H.264 and H.265 encoding and the MPEG-4 (.mp4) container format are supported.|
+| WLAN connecting devices| Hi3861 | N/A| N/A|
+For details about the codec specifications of Hi3516 and Hi3518, refer to their documentation.