* Neural networks require, and in fact capitalize on, vast amounts of labeled data to reach an optimal solution. For the algorithm to produce an outstanding model, it needs hundreds of thousands or even millions of entries containing both the features and the target values. In the image recognition of cats, for instance, the more images you have, the more features the model can detect, and the better it performs.
* Neural networks require considerable computing power to process such large amounts of data without taking weeks (or even longer) to train. Because arriving at the best possible model is a process of trial and error, it is essential that the training process runs as efficiently as possible.
1. Import **torch**, the **optim** package from PyTorch, and **matplotlib**:
```py
import torch
import torch.optim as optim
import matplotlib.pyplot as plt
```
2. Create dummy input data (`x`) of random values and dummy target data (`y`) that only contains zeros and ones. Tensor `x` should have a size of (**20, 10**), while the size of `y` should be (**20, 1**):
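   A minimal sketch of this step might look as follows, using `torch.randn` for the random inputs and `torch.randint` for the binary targets (other sampling functions would work equally well):

   ```py
   import torch

   # Dummy input: 20 samples with 10 random features each
   x = torch.randn(20, 10)

   # Dummy target: 20 binary labels (only zeros and ones)
   y = torch.randint(0, 2, (20, 1)).float()

   print(x.shape)  # torch.Size([20, 10])
   print(y.shape)  # torch.Size([20, 1])
   ```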
3. Separate the input features from the target. Note that the target is located in the first column of the CSV file. Then, convert the values into float tensors.
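   This step could be sketched as follows. A tiny inline sample stands in for the exercise's actual CSV file (whose path is not given here), and `pandas` is assumed for reading it:

   ```py
   import io

   import pandas as pd
   import torch

   # Inline sample standing in for the exercise's CSV file:
   # first column = target, remaining columns = features
   csv_text = "1,0.5,0.3\n0,0.1,0.9\n1,0.7,0.2\n"
   data = pd.read_csv(io.StringIO(csv_text), header=None)

   # Separate the target (first column) from the features (the rest)
   y = data.iloc[:, 0]
   x = data.iloc[:, 1:]

   # Convert the values into float tensors
   x = torch.tensor(x.values).float()
   y = torch.tensor(y.values).float()

   print(x.shape)  # torch.Size([3, 2])
   print(y.shape)  # torch.Size([3])
   ```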