**[how to build ncnn library](https://github.com/Tencent/ncnn/wiki/how-to-build) on Linux / Windows / macOS / Raspberry Pi3, Pi4 / Android / NVIDIA Jetson / iOS / WebAssembly / AllWinner D1 / Loongson 2K1000**
- [Build for Linux / NVIDIA Jetson / Raspberry Pi3, Pi4 / POWER9](https://github.com/Tencent/ncnn/wiki/how-to-build#build-for-linux)
- [Build for Windows x64 using Visual Studio Community 2017](https://github.com/Tencent/ncnn/wiki/how-to-build#build-for-windows-x64-using-visual-studio-community-2017)
- [Build for macOS](https://github.com/Tencent/ncnn/wiki/how-to-build#build-for-macos)
- [Build for ARM Cortex-A family with cross-compiling](https://github.com/Tencent/ncnn/wiki/how-to-build#build-for-arm-cortex-a-family-with-cross-compiling)
For a fully customizable op, first change it to one that can be exported (e.g. concat, slice), then modify the param file after exporting.
First of all, you need to manage the memory you allocate yourself; ncnn::Mat will not automatically free the float data you pass to it.
``` c++
std::vector<float> testData(60, 1.0f); // use std::vector<float> to manage allocation and release yourself
ncnn::Mat in1 = ncnn::Mat(60, (void*)testData.data()).reshape(4, 5, 3); // pass the pointer to the float data as a void*, then set the dimensions; reshape is recommended so the per-channel alignment gap is handled correctly
float* a = new float[60]; // memory you allocate yourself must be released later with delete[]
ncnn::Mat in2 = ncnn::Mat(60, (void*)a).reshape(4, 5, 3).clone(); // same approach as above, but clone() copies the data so the ncnn::Mat owns it
```
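For context, a minimal sketch of feeding such a Mat into a network is shown below; the model file names and blob names (`data`, `output`) are placeholders for your own model, not values from this FAQ:

``` c++
#include <vector>

#include "net.h" // ncnn::Net, ncnn::Extractor

int main()
{
    std::vector<float> testData(60, 1.0f);
    ncnn::Mat in1 = ncnn::Mat(60, (void*)testData.data()).reshape(4, 5, 3);

    ncnn::Net net;
    net.load_param("model.param"); // placeholder file names
    net.load_model("model.bin");

    ncnn::Extractor ex = net.create_extractor();
    ex.input("data", in1); // placeholder input blob name

    ncnn::Mat out;
    ex.extract("output", out); // placeholder output blob name; out is allocated and owned by ncnn

    return 0;
}
```

The difference between in1 and in2 above matters here: in1 only references testData, so testData must outlive the extraction, while in2 holds its own copy thanks to clone().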
...
...
You can add `-GNinja` to `cmake` above to use the Ninja build system (invoke the build using `ninja` or `cmake --build .`).
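For example, a typical configure-and-build sequence with Ninja might look like the sketch below (`-DCMAKE_BUILD_TYPE` and `-DNCNN_VULKAN` are the usual ncnn cmake options; adjust them for your setup):

``` shell
cd ncnn
mkdir -p build && cd build
cmake -GNinja -DCMAKE_BUILD_TYPE=Release -DNCNN_VULKAN=ON ..
ninja   # or: cmake --build .
```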
For Raspberry Pi 3 on a 32-bit OS, add `-DCMAKE_TOOLCHAIN_FILE=../toolchains/pi3.toolchain.cmake` to the cmake command. You can also consider disabling Vulkan support, as the Vulkan drivers for Raspberry Pi are still not mature; it doesn't hurt to build the support in and simply not use it.
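A cross-compiling invocation for that case might look roughly like this (shown with Vulkan disabled via the standard `-DNCNN_VULKAN=OFF` option; the build directory name is arbitrary):

``` shell
cd ncnn
mkdir -p build-pi3 && cd build-pi3
cmake -DCMAKE_TOOLCHAIN_FILE=../toolchains/pi3.toolchain.cmake -DNCNN_VULKAN=OFF -DCMAKE_BUILD_TYPE=Release ..
make -j$(nproc)
```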
Earlier versions of Clang may fail to build ncnn due to [Bug 49864](https://github.com/llvm/llvm-project/issues/49864). To use GCC instead, pass the `power9le-linux-gnu-vsx.toolchain.cmake` toolchain file. Note that, according to benchmarks, Clang appears to produce noticeably faster CPU inference than GCC for POWER9 targets.
Note that the POWER9 toolchain files only support little-endian mode.
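As a sketch, the GCC-based configure step mentioned above could look like this (build directory name and build type are placeholders):

``` shell
cd ncnn
mkdir -p build-power9 && cd build-power9
cmake -DCMAKE_TOOLCHAIN_FILE=../toolchains/power9le-linux-gnu-vsx.toolchain.cmake -DCMAKE_BUILD_TYPE=Release ..
make -j$(nproc)
```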