![](img/c960034e-9433-4aa2-9418-3442cd67dac3.png)
Example of average linkage. C[1] and C[2] are selected for merging. The highlighted points are the averages.
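As an illustrative sketch (not the book's code), the distance between the two clusters' averages, the highlighted points in the figure, can be computed as follows:

```python
import numpy as np

def cluster_mean_distance(C1, C2):
    # Euclidean distance between the two cluster averages
    # (the highlighted points in the figure above)
    m1 = np.mean(np.asarray(C1, dtype=float), axis=0)
    m2 = np.mean(np.asarray(C2, dtype=float), axis=0)
    return float(np.linalg.norm(m1 - m2))
```

Note that SciPy's `'average'` linkage is defined slightly differently, as the mean of all pairwise inter-cluster distances; the sketch above only mirrors the averages highlighted in the figure.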
The corresponding plots are shown in the following screenshot:
![](img/22e87346-d2f3-4423-b75b-c700146d7550.png)
Cophenetic correlation (left) and silhouette score (right) for different numbers of clusters and four linkage methods
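A minimal sketch of how these two metrics can be computed with SciPy and scikit-learn; the synthetic dataset and the fixed cluster count of 4 are assumptions for illustration:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet, fcluster
from scipy.spatial.distance import pdist
from sklearn.metrics import silhouette_score

rng = np.random.RandomState(1000)
X = rng.rand(100, 2)  # synthetic data for illustration

scores = {}
for method in ('single', 'complete', 'average', 'ward'):
    Z = linkage(X, method=method)
    cpc, _ = cophenet(Z, pdist(X))            # cophenetic correlation
    labels = fcluster(Z, t=4, criterion='maxclust')
    scores[method] = (cpc, silhouette_score(X, labels))
```

Higher values of both metrics indicate a dendrogram that better preserves the original distances and better-separated clusters, respectively.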
The resulting plots are shown in the following screenshot:
![](img/006b6068-1d96-4ada-86ca-44bdd17cd9b0.png)
AICs, BICs, and log-likelihoods for Gaussian mixtures with the number of components in the range (1, 20)
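A minimal sketch of such a sweep with scikit-learn; the synthetic three-blob dataset is an assumption for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(1000)
X = np.vstack([rng.randn(100, 2) + mu for mu in ([0, 0], [6, 0], [0, 6])])

n_max_components = 20
aics, bics, lls = [], [], []
for n in range(1, n_max_components + 1):
    gm = GaussianMixture(n_components=n, random_state=1000).fit(X)
    aics.append(gm.aic(X))
    bics.append(gm.bic(X))
    lls.append(gm.score(X) * X.shape[0])  # total log-likelihood
```

The log-likelihood keeps rising with more components, while the AIC and BIC penalize model complexity, so their minima suggest a reasonable number of components.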
![](img/c8c4fa68-5db2-406d-9444-fbf1fdeb5418.png)
Figure 2.1 Dogset images separated by folders, or labels of dog breeds

You may download the dataset and then run the `retrain.py` script on a Mac, as the script doesn't take too long (less than an hour) to run on this relatively small dataset (about 20,000 images in total); but if you do that on a GPU-powered Ubuntu machine, as set up in the last chapter, the script can complete in just a few minutes. In addition, when retraining with a large image dataset, running on a Mac may take hours or days, so it makes sense to run it on a GPU-powered machine.
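`retrain.py` infers the label set from the folder layout shown in Figure 2.1: each subfolder name becomes one label. A minimal sketch of that convention (the directory and breed names used in the example are assumptions):

```python
import os

def list_labels(image_dir):
    # each subfolder of image_dir is one label (a dog breed),
    # the convention retrain.py relies on; plain files are ignored
    return sorted(d for d in os.listdir(image_dir)
                  if os.path.isdir(os.path.join(image_dir, d)))
```

For example, a folder containing `beagle/` and `poodle/` subfolders yields the two labels `beagle` and `poodle`.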
![](img/b89b3444-a6a9-45b4-bb02-ddf8388a6992.png)
Figure 2.6 Adding utility files, model files, the label file, and image files
6. Rename `ViewController.m` to `ViewController.mm`, as we'll mix C++ code with Objective-C code in that file to call the TensorFlow C++ API and process the image input and the inference results. Then, before `@interface ViewController`, add the following `#include` statements and function prototypes:
![](img/f2841f1f-b0c3-412d-822b-5e9d0f34f235.png)
Figure 4.6 Showing the original content image
Tap anywhere and you'll see two style choices, as shown in Figure 4.7:
![](img/c462927f-71be-46f1-995f-5cbd536ae175.png)
Figure 4.7 Showing the choice of two style models
The results for the two style-transferred images, with `out_style[19] = 1.0;`, are shown in Figure 4.8:
![](img/cc28a550-1a66-4845-b330-1c01d3806e62.png)
Figure 4.8 The style-transferred results with two different models (fast style transfer on the left, multi-style on the right)
![](img/c5dd63f4-32a7-4177-afb8-af0e8a8770e8.png)
Figure 4.9 Results of different mixes of multiple styles (half starry night and half another style on the left, a mix of all 26 styles on the right)
![](img/f582db90-6795-43d5-8d6d-28e2835e58de.png)
Figure 4.10 The original content image and the style transferred image with a mix of the 5th image and the starry night image
If you replace the following two lines of code:
...
...
You'll then see the effect shown in Figure 4.11:
![](img/a6009bea-e770-4c76-b7ab-9f426ea66e0b.png)
Figure 4.11 Images stylized with the starry night style only and with all the 26 styles mixed equally
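The mixing-weight vectors behind Figure 4.11 can be sketched in Python for clarity (the iOS code itself is Objective-C++); `NUM_STYLES` and the starry-night index 19 come from the text above, and the helper name is hypothetical:

```python
import numpy as np

NUM_STYLES = 26  # the multi-style model supports 26 styles

def style_weights(indices):
    # equal weights over the chosen styles, zero elsewhere; weights sum to 1
    w = np.zeros(NUM_STYLES, dtype=np.float32)
    w[list(indices)] = 1.0 / len(indices)
    return w

starry_night_only = style_weights([19])            # out_style[19] = 1.0
all_styles_mixed = style_weights(range(NUM_STYLES))  # 1/26 each
```

Any other combination, such as half starry night and half another style, is just a different choice of indices and weights summing to 1.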
![](img/a88ae370-8a39-4ee3-8ce0-ebaa5fd8a8ed.png)
Figure 7.4: Finding out possible input node names

You may wonder why we can't fix the `Not found: Op type not registered 'OneShotIterator'` error with a technique we used before: first find out which source file contains the op using the command `grep 'REGISTER.*"OneShotIterator"' tensorflow/core/ops/*.cc` (you'll see the output `tensorflow/core/ops/dataset_ops.cc:REGISTER_OP("OneShotIterator")`), then add `tensorflow/core/ops/dataset_ops.cc` to `tf_op_files.txt` and rebuild the TensorFlow library. Even if this were feasible, it would complicate the solution, as we'd now need to feed the model with some data related to `OneShotIterator`, instead of directly feeding it the points of the user's drawing.
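The grep lookup described above can be sketched in Python; the helper is hypothetical and for illustration only:

```python
import pathlib
import re

def find_op_registration(op_name, ops_dir):
    # scan *.cc files for a REGISTER...("<op_name>") line, mimicking:
    #   grep 'REGISTER.*"OneShotIterator"' tensorflow/core/ops/*.cc
    pattern = re.compile(r'REGISTER.*"%s"' % re.escape(op_name))
    return sorted(str(p) for p in pathlib.Path(ops_dir).glob('*.cc')
                  if pattern.search(p.read_text(errors='ignore')))
```

Running it with `'OneShotIterator'` over `tensorflow/core/ops` would point at `dataset_ops.cc`, matching the grep output quoted above.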
One last tip before we have to end our fun journey in this chapter: if you build the TensorFlow native library for Android with the wrong ABI, you can still build and run the app from Android Studio, but you'll get the runtime error `java.lang.RuntimeException: Native TF methods not found; check that the correct native libraries are present in the APK.`, which means your app doesn't have the correct TensorFlow native library inside its `jniLibs` folder (Figure 7.9). To find out whether the file is missing from a specific ABI folder inside `jniLibs`, you can open `Device File Explorer` from `Android Studio | View | Tool Windows`, then select the device's `data | app | package | lib` to take a look, as shown in Figure 7.12. If you prefer the command line, you can also use the `adb` tool to find out.
![](img/16494075-d42a-4b86-a48a-4cb0bc2ec865.png)
Figure 7.12: Checking out the TensorFlow native library file with Device File Explorer
Figure 9.1 shows the original test images, their blurry versions, and the generator output of the trained GAN model. The results are not ideal, but the GAN model does deliver better resolution without the blurry effect:
![](img/b73deef3-1598-4019-ac72-5c0212d53c74.png)
Figure 9.1: The original, the blurry, and the generated
10. Now, copy the `newckpt` directory to `/tmp`, and we can freeze the model as follows:
...
...
Your Xcode should now look like Figure 9.2:
![](img/9cb1fa66-5a4e-4a1b-aa51-a16fd9051f57.png)
Figure 9.2: Showing the GAN app in Xcode
We'll create a button that, when tapped, prompts the user to select a model to either generate a digit or enhance an image:
...
...
Now run the app in an iOS simulator or on a device, tap the GAN button, and select the digit-generating model; you'll see the results of the GAN-generated handwritten digits, as shown in Figure 9.3:
![](img/0e0f47b1-2efa-4ec0-9eaa-4b21356cad23.png)
Figure 9.3: Showing GAN model selection and results of generated handwritten digits
These digits look a lot like real handwritten digits, all after training a basic GAN model. If you go back and review the training code, and pause to think about how GANs work in general, with the generator and the discriminator competing with each other and trying to reach a stable Nash equilibrium state, in which the generator produces fake data so realistic that the discriminator cannot tell real from fake, you'll probably appreciate the charm of GANs even more.
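The equilibrium intuition can be made concrete with the standard GAN losses; this is a sketch in NumPy, not the chapter's training code:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    # binary cross-entropy: D wants d_real -> 1 and d_fake -> 0
    return float(-(np.log(d_real) + np.log(1.0 - d_fake)).mean())

def generator_loss(d_fake):
    # G wants the discriminator to score its fakes as real
    return float(-np.log(d_fake).mean())
```

At the Nash equilibrium, the discriminator outputs 0.5 everywhere (it can no longer tell real from fake), and its loss settles at 2·log 2 ≈ 1.386.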
6. After loading the `labels.txt` file the same way as in the `HelloTensorFlow` app, also load the image to be classified in the same way, but use the `typed_tensor` method of TensorFlow Lite's `Interpreter` instead of TensorFlow Mobile's `Tensor` class and its `tensor` method. Figure 11.2 compares the TensorFlow Mobile and Lite code for loading and processing the image file data:
![](img/8df9836e-fbf9-4caf-97af-7d415b6dc5bb.png)
Figure 11.2 The TensorFlow Mobile (left) and TensorFlow Lite (right) code for loading and processing the image input
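What both code paths in Figure 11.2 do, scale the pixel bytes into the float range the model expects, can be sketched in Python; the mean and std values are the common MobileNet defaults, an assumption here:

```python
import numpy as np

def preprocess(pixels, input_mean=127.5, input_std=127.5):
    # map uint8 RGB values in [0, 255] to floats in [-1, 1]
    return (np.asarray(pixels, dtype=np.float32) - input_mean) / input_std
```

The C++ versions write these floats directly into the input tensor buffer (via `tensor` in Mobile and `typed_tensor` in Lite) rather than returning a new array.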
7. Before calling the `GetTopN` helper method to get the top `N` classification results, call the `Invoke` method on the `Interpreter` to run the model, and call the `typed_out_tensor` method to get the model's output. The code differences between TensorFlow Mobile and Lite are shown in Figure 11.3:
![](img/c3e31c5c-775f-4af8-a000-d8d65f75f657.png)
Figure 11.3 The TensorFlow Mobile (left) and TensorFlow Lite (right) code for running the model and getting the output
8. Implement the `GetTopN` method in a way similar to the one in HelloTensorFlow, using the `const float* prediction` type for TensorFlow Lite instead of `const Eigen::TensorMap<Eigen::Tensor<float, 1, Eigen::RowMajor>, Eigen::Aligned>& prediction` for TensorFlow Mobile. The comparison of the `GetTopN` methods in TensorFlow Mobile and Lite is shown in Figure 11.4:
![](img/0f0c6dc7-939f-4b15-9542-c56a9f3f427f.png)
Figure 11.4 The TensorFlow Mobile (left) and TensorFlow Lite (right) code for processing the model output to return the top results
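A Python analogue of what `GetTopN` does (the C++ versions are the ones compared in Figure 11.4), for illustration only:

```python
import heapq

def get_top_n(prediction, n=5, threshold=0.1):
    # (index, score) pairs of the n largest scores at or above threshold,
    # best first -- mirroring what GetTopN does with the model output
    candidates = ((score, i) for i, score in enumerate(prediction)
                  if score >= threshold)
    return [(i, score) for score, i in heapq.nlargest(n, candidates)]
```

The threshold filters out low-confidence classes before ranking, matching the `0.1f` cutoff used in the next step.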
9. If the value is larger than a threshold (set to `0.1f`), use a simple `UIAlertController` to show the best result, with the confidence value returned by the TensorFlow Lite model:
![](img/9df4ffa8-cd63-4b4c-a728-47cba1394711.png)
Figure 11.6 New Android app using TensorFlow Lite and the prebuilt MobileNet image classification model
![](img/fdf48153-937d-4066-9dd3-cc13ce841e6e.png)
Figure 11.8 Showing the stock prediction Core ML model converted from Keras and TensorFlow in an Objective-C app
Another way to generate the Core ML format of the model with coremltools is to first save the Keras-built model in the Keras HDF5 model format, the format we used in Chapter 10, Building an AlphaZero-like Mobile Game App, before converting it to the AlphaZero TensorFlow checkpoint file. To do that, simply run `model.save('stock.h5')`.
![](img/b77107fa-7de7-4602-84d1-d4c5b01c6efc.png)
Figure 11.9 Showing the stock prediction Core ML model converted from Keras and TensorFlow in a Swift app