2.Open the URL provided by the application in your browser and open the Jupyter Notebook named `Activity_6_Creating_an_active_training_environment.ipynb`:
```py
result = model.evaluate(x=X_test, y=Y_test, verbose=0)
```
9.Each evaluation result is now stored in the variable `evaluated_weeks`. That variable is a simple array containing one MSE value for each week in the test set. Go ahead and plot these results as well:
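As a sketch of what that plot can look like, using `matplotlib` (the `evaluated_weeks` values below are made-up stand-ins, and the labels are assumptions, not the Notebook's exact cell):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, safe in scripts
import matplotlib.pyplot as plt

# Made-up stand-in values: one MSE result per evaluated week.
evaluated_weeks = [0.0041, 0.0038, 0.0052, 0.0047, 0.0036]

plt.figure(figsize=(8, 4))
plt.plot(range(len(evaluated_weeks)), evaluated_weeks, marker="o")
plt.xlabel("Week")
plt.ylabel("MSE")
plt.title("MSE per week in the test set")
plt.savefig("evaluated_weeks.png")
```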
10.Navigate to the section **Interpreting Model Results** and execute the code cells under the sub-header **Make Predictions**. Notice that we are calling the method `model.predict()`, but with a slightly different combination of parameters. Instead of using both `X` and `Y` values, we only use `X`:
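To illustrate the call pattern with a minimal stand-in network (the layer sizes and array shapes below are illustrative, not the Notebook's actual configuration): `predict()` takes only `X` because it returns the model's estimate of `Y` rather than consuming it.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Tiny stand-in model; only the call pattern matches the Notebook's.
model = Sequential([LSTM(4, input_shape=(7, 1)), Dense(1)])
model.compile(loss="mse", optimizer="rmsprop")

X_test = np.random.rand(19, 7, 1)  # 19 weeks of 7 daily observations

# Only X is passed; predict() returns the predicted values.
predicted_weeks = model.predict(X_test, verbose=0)
```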
12.In this section, we defined the function `denormalize()`, which performs the complete de-normalization process. Unlike the other functions, this one takes a Pandas DataFrame instead of a NumPy array, so that we can use dates as an index. This is the most relevant cell block from that header:
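The Notebook's cell itself is not reproduced above. As a rough sketch of what a de-normalization of that shape can look like, assuming a point-relative normalization of the form `x_norm = x / x_0 - 1`; the column name, parameter names, and dates here are invented for illustration:

```python
import pandas as pd

def denormalize(df: pd.DataFrame, base_value: float,
                column: str = "close") -> pd.DataFrame:
    """Invert a point-relative normalization (x_norm = x / x_0 - 1),
    returning absolute values. `base_value` is x_0, the value the
    series was originally normalized against."""
    result = df.copy()
    result[column] = (df[column] + 1) * base_value
    return result

# A date-indexed DataFrame, as the function expects.
normalized = pd.DataFrame(
    {"close": [0.00, 0.05, -0.02]},
    index=pd.to_datetime(["2017-01-01", "2017-01-02", "2017-01-03"]),
)
absolute = denormalize(normalized, base_value=1000.0)
```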
3.Now, open the Jupyter Notebook called `Activity_7_Optimizing_a_deep_learning_model.ipynb`. Navigate to the title of the Notebook and import all of the required libraries.
4.Now, in the open Jupyter Notebook, navigate to the header **Adding Layers and Nodes**. You will recognize our first model in the next cell. This is the basic LSTM network that we built in *Lesson 2*, *Model Architecture*. Now, we have to add a new LSTM layer to this network.
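A sketch of what the stacked network can look like; treat the exact unit counts as assumptions. The key detail is `return_sequences=True` on the first LSTM layer, which makes it emit a full sequence for the second LSTM layer to consume:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Activation

model = Sequential()
# return_sequences=True keeps the output 3D (one vector per
# time step), which the next LSTM layer requires as input.
model.add(LSTM(units=7, input_shape=(7, 1), return_sequences=True))
model.add(LSTM(units=7, return_sequences=False))
model.add(Dense(units=1))
model.add(Activation("linear"))
model.compile(loss="mse", optimizer="rmsprop")
```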
5.Now, navigate to the header **Epochs**. In this section, we are interested in exploring different magnitudes of epochs. Use the utility function `train_model()` to name different model versions and runs:
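`train_model()` is defined in the Notebook itself; the simplified stand-in below only illustrates the naming idea, tagging each run with a version and run number so that experiments with different epoch counts stay distinguishable. All parameter names here are assumptions, not the Notebook's actual signature:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

def train_model(model, X, Y, epochs=2, version=0, run_number=0):
    """Simplified stand-in: trains the model and returns a name
    identifying this version and run of the experiment."""
    model_name = f"bitcoin_lstm_v{version}_run_{run_number}"
    model.fit(x=X, y=Y, epochs=epochs, verbose=0)
    return model_name

model = Sequential([LSTM(4, input_shape=(7, 1)), Dense(1)])
model.compile(loss="mse", optimizer="rmsprop")

X_train = np.random.rand(5, 7, 1)  # tiny stand-in training set
Y_train = np.random.rand(5, 1)
name = train_model(model, X_train, Y_train, epochs=2,
                   version=0, run_number=0)
```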
8.You can also try L2 regularization here (or combine both techniques). Do the same as with `Dropout()`, but now using `ActivityRegularization(l2=0.0001)`.
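Combining both techniques could look like the following sketch; the surrounding architecture and the dropout rate are assumptions, while `ActivityRegularization(l2=0.0001)` is the call named in the step:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (
    LSTM, Dense, Activation, Dropout, ActivityRegularization)

model = Sequential()
model.add(LSTM(units=7, input_shape=(7, 1), return_sequences=True))
model.add(Dropout(0.2))  # randomly drop 20% of activations in training
model.add(LSTM(units=7, return_sequences=False))
model.add(Dropout(0.2))
# L2 activity regularization, combined with Dropout as the step suggests.
model.add(ActivityRegularization(l2=0.0001))
model.add(Dense(units=1))
model.add(Activation("linear"))
model.compile(loss="mse", optimizer="rmsprop")
```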
9.Now, navigate to the header **Evaluate Models** in the Notebook. In this section, we will evaluate the model predictions for the next 19 weeks of data in the test set. Then, we will compute the RMSE and MAPE of the predicted series versus the test series.
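The Notebook uses its own helper functions for these metrics; the standard definitions can be written directly with NumPy:

```python
import numpy as np

def rmse(actual, predicted):
    """Root mean squared error: penalizes large misses quadratically."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))

def mape(actual, predicted):
    """Mean absolute percentage error, expressed in percent."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return float(np.mean(np.abs((actual - predicted) / actual)) * 100)

# Example: a predicted series versus the test series.
actual = [100.0, 110.0, 120.0]
predicted = [98.0, 112.0, 119.0]
error_rmse = rmse(actual, predicted)  # in the series' own units
error_mape = mape(actual, predicted)  # scale-free, in percent
```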
3.In the Jupyter Notebook instance, navigate to the header **Fetching Real-Time Data**. We will now fetch updated historical data from CoinMarketCap. Simply call the method:
6.Navigate to the header **Re-Train Old Model** in the Jupyter Notebook. Now, complete the `range` function and the `model_data` filtering parameters, using an index to split the data into overlapping groups of seven days. Then, re-train our model and collect the results:
```py
results = []
...
...
BITCOIN_START_DATE = # Use other date here
```
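One way to sketch the overlapping split described in step 6 — here rendered with stand-in data and illustrative window arithmetic, not the Notebook's exact code (which also performs the re-training inside the loop):

```python
import numpy as np

# Stand-in for model_data: 77 days of normalized observations.
model_data = np.arange(77, dtype=float)

results = []
window = 7  # one week
# Each iteration trains on everything up to the current week and
# evaluates on the week that follows, so successive training
# sets overlap.
for i in range(0, len(model_data) - window, window):
    train = model_data[: i + window]
    test = model_data[i + window : i + 2 * window]
    # In the Notebook, model.fit(...) and model.evaluate(...) run
    # here; this sketch only records the split sizes.
    results.append((len(train), len(test)))
```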
4.As a final step, deploy your web application locally using `docker-compose`, as follows:
```sh
docker-compose up
...
...
```
You should see an activity log on your terminal, including the training epochs of your model.
5.After the model has been trained, you can visit your application at `http://localhost:5000` and make predictions at `http://localhost:5000/predict`:

![Activity 9 – Deploying a Deep Learning Application](img/image04_02.jpg)