1. Separate the features from the target. We are only doing this to rescale the features data:
```py
X = data.iloc[:, 1:]
Y = data.iloc[:, 0]
```
The preceding code snippet takes the data and uses slicing to separate the features from the target.
2. Rescale the features data by using the normalization methodology. Display the head (that is, the top five instances) of the resulting DataFrame to verify the result:
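A minimal sketch of min-max normalization for this step (the sample values and column names are placeholders, not the exercise's actual data):

```py
import pandas as pd

# Hypothetical feature data; in the exercise, X comes from slicing the dataset
X = pd.DataFrame({"f1": [2.0, 4.0, 6.0], "f2": [10.0, 20.0, 30.0]})

# Min-max normalization: rescale each column to the [0, 1] range
X = (X - X.min()) / (X.max() - X.min())

# Display the top five instances to verify the result
print(X.head())
```

After normalization, every column's minimum is 0 and maximum is 1, which keeps features on a comparable scale for training.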
2. Separate the feature columns from the target for each of the sets we created in the previous exercise. Additionally, convert the final DataFrames into tensors:
```py
x_train = torch.tensor(x_train.values).float()
y_train = torch.tensor(y_train.values).float()
x_dev = torch.tensor(x_dev.values).float()
y_dev = torch.tensor(y_dev.values).float()
x_test = torch.tensor(x_test.values).float()
y_test = torch.tensor(y_test.values).float()
```
3. Define the network architecture using the **nn.Sequential()** container. Make sure to create a four-layer network. Use ReLU activation functions for the first three layers and leave the last layer without an activation function, given that we are dealing with a regression problem.
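One way this architecture can be sketched is shown below. The input size and hidden-layer widths here are illustrative assumptions, since the exercise's actual feature count depends on the dataset:

```py
import torch
import torch.nn as nn

D_i = 10  # hypothetical number of input features

# Four-layer network: ReLU after the first three linear layers,
# no activation on the output layer (raw value for regression)
model = nn.Sequential(
    nn.Linear(D_i, 100),
    nn.ReLU(),
    nn.Linear(100, 50),
    nn.ReLU(),
    nn.Linear(50, 25),
    nn.ReLU(),
    nn.Linear(25, 1),
)

print(model)
```

A forward pass on a batch of shape `(batch_size, D_i)` produces an output of shape `(batch_size, 1)`, one predicted value per instance.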