Commit ece7257a authored by Amirsina Torfi

travis

Parent 1e6e401d
......@@ -8,12 +8,18 @@ git:
python: # The following versions
- "3.6"
# command to install dependencies
only:
changes:
- codes/*
install:
- pip install numpy
- pip install matplotlib
- pip install pandas
- pip install seaborn
- pip install pathlib
- pip install tensorflow_datasets
# install TensorFlow from https://storage.googleapis.com/tensorflow/
- if [[ "$TRAVIS_PYTHON_VERSION" == "3.5" ]]; then
pip install https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow_cpu-2.3.0-cp35-cp35m-manylinux2010_x86_64.whl;
......
==================
Linear Regression
==================
This document explains how to run the Python script for this tutorial. The documentation is available `here <Documentationlinearregression_>`_. Alternatively, you can check the ``Linear Regression using TensorFlow`` `blog post <blogpostlinearregression_>`_ for further details.

.. _blogpostlinearregression: http://www.machinelearninguru.com/deep_learning/tensorflow/machine_learning_basics/linear_regresstion/linear_regression.html
.. _Documentationlinearregression: https://github.com/astorfi/TensorFlow-World/wiki/Linear-Regeression
-------------------
Python Environment
-------------------
``WARNING:`` If TensorFlow is installed in an environment (a virtual environment, ...), that environment must be activated first. So, before running anything, make sure TensorFlow is available in the current environment.
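A quick check, assuming ``python`` resolves to the environment's interpreter (any equivalent import test works):

.. code:: shell

    python -c "import tensorflow as tf; print(tf.__version__)"
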
--------------------------------
How to run the code in Terminal?
--------------------------------
Please navigate to the ``code/`` directory and run the Python script in the general form below:

.. code:: shell

    python [python_code_file.py]

For example, the code can be executed as follows:

.. code:: shell

    python linear_regression.py --num_epochs=50
The ``--num_epochs`` flag sets the number of epochs used for training. It is optional, because its default value of ``50`` is provided in the source code as follows:

.. code:: python

    tf.app.flags.DEFINE_integer(
        'num_epochs', 50, 'The number of epochs for training the model. Default=50')
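As a sketch of how such TF1-style flags are typically consumed (illustrative only, not necessarily the tutorial's exact code):

.. code:: python

    FLAGS = tf.app.flags.FLAGS

    # Values passed on the command line override the declared defaults.
    for epoch in range(FLAGS.num_epochs):
        ...  # run one training epoch
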
----------------------------
How to run the code in IDEs?
----------------------------
Since the code is ready to go, as long as TensorFlow can be called from the IDE (PyCharm, Spyder, ...), the code can be executed successfully.
===========
Linear SVM
===========
This document explains how to run the Python script for this tutorial. In this tutorial, we create a linear SVM to separate the data. The data used for this code is linearly separable.
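As a rough illustration of linear separability (a sketch only, not the tutorial's own data pipeline):

.. code:: python

    import numpy as np

    # Two Gaussian blobs with well-separated means: a single straight
    # line (hyperplane) can split the two classes.
    class_a = np.random.randn(100, 2) + [2.0, 2.0]
    class_b = np.random.randn(100, 2) + [-2.0, -2.0]
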
-------------------
Python Environment
-------------------
``WARNING:`` If TensorFlow is installed in an environment (a virtual environment, ...), that environment must be activated first. So, before running anything, make sure TensorFlow is available in the current environment.
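As in the previous tutorial, a quick availability check (assuming ``python`` is the environment's interpreter):

.. code:: shell

    python -c "import tensorflow as tf; print(tf.__version__)"
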
--------------------------------
How to run the code in Terminal?
--------------------------------
Please navigate to the ``code/`` directory and run the Python script in the general form below:

.. code:: shell

    python [python_code_file.py]

For example, the code can be executed as follows:

.. code:: shell

    python linear_svm.py
----------------------------
How to run the code in IDEs?
----------------------------
Since the code is ready to go, as long as TensorFlow can be called from the IDE (PyCharm, Spyder, ...), the code can be executed successfully.
===================
Logistic Regression
===================
This document explains how to run the Python script for this tutorial. ``Logistic regression`` is a binary
classification algorithm in which `yes` or `no` are the only possible responses. The linear output is transformed into a probability between zero and one, and the decision is made by thresholding that probability to choose a class. We use ``Softmax`` with a ``cross entropy`` loss as the training objective.
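As a minimal numeric sketch of this objective (NumPy only; the names are illustrative):

.. code:: python

    import numpy as np

    # Two-class logits for one sample, e.g., the output of a linear layer.
    logits = np.array([1.2, -0.4])

    # Softmax maps logits to probabilities in (0, 1) that sum to one.
    probs = np.exp(logits) / np.sum(np.exp(logits))

    # Cross-entropy loss for the true class (here, class 0).
    loss = -np.log(probs[0])
    print(probs, loss)
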
-------------------
Python Environment
-------------------
``WARNING:`` If TensorFlow is installed in an environment (a virtual environment, ...), that environment must be activated first. So, before running anything, make sure TensorFlow is available in the current environment.
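Again, a quick availability check (assuming ``python`` is the environment's interpreter):

.. code:: shell

    python -c "import tensorflow as tf; print(tf.__version__)"
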
--------------------------------
How to run the code in Terminal?
--------------------------------
Please navigate to the ``code/`` directory and run the Python script in the general form below:

.. code:: shell

    python [python_code_file.py]

For example, the code can be executed as follows:

.. code:: shell

    python logistic_regression.py --num_epochs=50 --batch_size=512 --max_num_checkpoint=10 --num_classes=2
Several ``flags`` are available for training; for the full list, please refer to the source code. The command above is only an example.
----------------------------
How to run the code in IDEs?
----------------------------
Since the code is ready to go, as long as TensorFlow can be called from the IDE (PyCharm, Spyder, ...), the code can be executed successfully.
{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"import tensorflow as tf\n",
"import tempfile\n",
"import urllib\n",
"import pandas as pd\n",
"import os\n",
"from tensorflow.examples.tutorials.mnist import input_data"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"######################################\n",
"######### Necessary flags ############\n",
"######################################\n",
"\n",
"max_num_checkpoint = 10\n",
"num_classes = 2\n",
"batch_size = 512\n",
"num_epochs = 10\n",
"\n",
"##########################################\n",
"######## Learning rate flags #############\n",
"##########################################\n",
"\n",
"initial_learning_rate = 0.001\n",
"learning_rate_decay_factor = 0.95\n",
"num_epochs_per_decay = 1\n",
"\n",
"#########################################\n",
"########## status flags #################\n",
"#########################################\n",
"\n",
"is_training = False\n",
"fine_tuning = False\n",
"online_test = True\n",
"allow_soft_placement = True\n",
"log_device_placement = False\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.\n",
"Extracting MNIST_data/train-images-idx3-ubyte.gz\n",
"Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.\n",
"Extracting MNIST_data/train-labels-idx1-ubyte.gz\n",
"Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.\n",
"Extracting MNIST_data/t10k-images-idx3-ubyte.gz\n",
"Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.\n",
"Extracting MNIST_data/t10k-labels-idx1-ubyte.gz\n"
]
}
],
"source": [
"# Download and get MNIST dataset(available in tensorflow.contrib.learn.python.learn.datasets.mnist)\n",
"# It checks and download MNIST if it's not already downloaded then extract it.\n",
"# The 'reshape' is True by default to extract feature vectors but we set it to false to we get the original images.\n",
"mnist = input_data.read_data_sets(\"MNIST_data/\", reshape=True, one_hot=False)\n",
"\n",
"########################\n",
"### Data Processing ####\n",
"########################\n",
"# Organize the data and feed it to associated dictionaries.\n",
"data={}\n",
"\n",
"data['train/image'] = mnist.train.images\n",
"data['train/label'] = mnist.train.labels\n",
"data['test/image'] = mnist.test.images\n",
"data['test/label'] = mnist.test.labels\n",
"\n",
"def extract_samples_Fn(data):\n",
" index_list = []\n",
" for sample_index in range(data.shape[0]):\n",
" label = data[sample_index]\n",
" if label == 1 or label == 0:\n",
" index_list.append(sample_index)\n",
" return index_list\n",
"\n",
"\n",
"# Get only the samples with zero and one label for training.\n",
"index_list_train = extract_samples_Fn(data['train/label'])\n",
"\n",
"\n",
"# Get only the samples with zero and one label for test set.\n",
"index_list_test = extract_samples_Fn(data['test/label'])\n",
"\n",
"# Reform the train data structure.\n",
"data['train/image'] = mnist.train.images[index_list_train]\n",
"data['train/label'] = mnist.train.labels[index_list_train]\n",
"\n",
"# Reform the test data structure.\n",
"data['test/image'] = mnist.test.images[index_list_test]\n",
"data['test/label'] = mnist.test.labels[index_list_test]\n",
"\n",
"# Dimentionality of train\n",
"dimensionality_train = data['train/image'].shape\n",
"\n",
"# Dimensions\n",
"num_train_samples = dimensionality_train[0]\n",
"num_features = dimensionality_train[1]"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Epoch 1, Training Loss= 0.32686\n",
"Epoch 2, Training Loss= 0.13760\n",
"Epoch 3, Training Loss= 0.08637\n",
"Epoch 4, Training Loss= 0.06380\n",
"Epoch 5, Training Loss= 0.05090\n",
"Epoch 6, Training Loss= 0.04240\n",
"Epoch 7, Training Loss= 0.03636\n",
"Epoch 8, Training Loss= 0.03186\n",
"Epoch 9, Training Loss= 0.02838\n",
"Epoch 10, Training Loss= 0.02562\n",
"Final Test Accuracy is % 99.95\n"
]
}
],
"source": [
"#######################################\n",
"########## Defining Graph ############\n",
"#######################################\n",
"\n",
"graph = tf.Graph()\n",
"with graph.as_default():\n",
" ###################################\n",
" ########### Parameters ############\n",
" ###################################\n",
"\n",
" # global step\n",
" global_step = tf.Variable(0, name=\"global_step\", trainable=False)\n",
"\n",
" # learning rate policy\n",
" decay_steps = int(num_train_samples / batch_size *\n",
" num_epochs_per_decay)\n",
" learning_rate = tf.train.exponential_decay(initial_learning_rate,\n",
" global_step,\n",
" decay_steps,\n",
" learning_rate_decay_factor,\n",
" staircase=True,\n",
" name='exponential_decay_learning_rate')\n",
" ###############################################\n",
" ########### Defining place holders ############\n",
" ###############################################\n",
" image_place = tf.placeholder(tf.float32, shape=([None, num_features]), name='image')\n",
" label_place = tf.placeholder(tf.int32, shape=([None,]), name='gt')\n",
" label_one_hot = tf.one_hot(label_place, depth=num_classes, axis=-1)\n",
" dropout_param = tf.placeholder(tf.float32)\n",
"\n",
" ##################################################\n",
" ########### Model + Loss + Accuracy ##############\n",
" ##################################################\n",
" # A simple fully connected with two class and a softmax is equivalent to Logistic Regression.\n",
" logits = tf.contrib.layers.fully_connected(inputs=image_place, num_outputs = num_classes, scope='fc')\n",
"\n",
" # Define loss\n",
" with tf.name_scope('loss'):\n",
" loss_tensor = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=label_one_hot))\n",
"\n",
" # Accuracy\n",
" # Evaluate the model\n",
" prediction_correct = tf.equal(tf.argmax(logits, 1), tf.argmax(label_one_hot, 1))\n",
"\n",
" # Accuracy calculation\n",
" accuracy = tf.reduce_mean(tf.cast(prediction_correct, tf.float32))\n",
"\n",
" #############################################\n",
" ########### training operation ##############\n",
" #############################################\n",
"\n",
" # Define optimizer by its default values\n",
" optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)\n",
"\n",
" # 'train_op' is a operation that is run for gradient update on parameters.\n",
" # Each execution of 'train_op' is a training step.\n",
" # By passing 'global_step' to the optimizer, each time that the 'train_op' is run, Tensorflow\n",
" # update the 'global_step' and increment it by one!\n",
"\n",
" # gradient update.\n",
" with tf.name_scope('train_op'):\n",
" gradients_and_variables = optimizer.compute_gradients(loss_tensor)\n",
" train_op = optimizer.apply_gradients(gradients_and_variables, global_step=global_step)\n",
"\n",
"\n",
" ############################################\n",
" ############ Run the Session ###############\n",
" ############################################\n",
" session_conf = tf.ConfigProto(\n",
" allow_soft_placement=allow_soft_placement,\n",
" log_device_placement=log_device_placement)\n",
" sess = tf.Session(graph=graph, config=session_conf)\n",
"\n",
" with sess.as_default():\n",
"\n",
" # The saver op.\n",
" saver = tf.train.Saver()\n",
"\n",
" # Initialize all variables\n",
" sess.run(tf.global_variables_initializer())\n",
"\n",
" # The prefix for checkpoint files\n",
" checkpoint_prefix = 'model'\n",
"\n",
" # If fie-tuning flag in 'True' the model will be restored.\n",
" if fine_tuning:\n",
" saver.restore(sess, os.path.join(checkpoint_path, checkpoint_prefix))\n",
" print(\"Model restored for fine-tuning...\")\n",
"\n",
" ###################################################################\n",
" ########## Run the training and loop over the batches #############\n",
" ###################################################################\n",
"\n",
" # go through the batches\n",
" test_accuracy = 0\n",
" for epoch in range(num_epochs):\n",
" total_batch_training = int(data['train/image'].shape[0] / batch_size)\n",
"\n",
" # go through the batches\n",
" for batch_num in range(total_batch_training):\n",
" #################################################\n",
" ########## Get the training batches #############\n",
" #################################################\n",
"\n",
" start_idx = batch_num * batch_size\n",
" end_idx = (batch_num + 1) * batch_size\n",
"\n",
" # Fit training using batch data\n",
" train_batch_data, train_batch_label = data['train/image'][start_idx:end_idx], data['train/label'][\n",
" start_idx:end_idx]\n",
"\n",
" ########################################\n",
" ########## Run the session #############\n",
" ########################################\n",
"\n",
" # Run optimization op (backprop) and Calculate batch loss and accuracy\n",
" # When the tensor tensors['global_step'] is evaluated, it will be incremented by one.\n",
" batch_loss, _, training_step = sess.run(\n",
" [loss_tensor, train_op,\n",
" global_step],\n",
" feed_dict={image_place: train_batch_data,\n",
" label_place: train_batch_label,\n",
" dropout_param: 0.5})\n",
"\n",
" ########################################\n",
" ########## Write summaries #############\n",
" ########################################\n",
"\n",
"\n",
" #################################################\n",
" ########## Plot the progressive bar #############\n",
" #################################################\n",
"\n",
" print(\"Epoch \" + str(epoch + 1) + \", Training Loss= \" + \\\n",
" \"{:.5f}\".format(batch_loss))\n",
"\n",
" ############################################################################\n",
" ########## Run the session for pur evaluation on the test data #############\n",
" ############################################################################\n",
"\n",
" # Evaluation of the model\n",
" test_accuracy = 100 * sess.run(accuracy, feed_dict={\n",
" image_place: data['test/image'],\n",
" label_place: data['test/label'],\n",
" dropout_param: 1.})\n",
"\n",
" print(\"Final Test Accuracy is %% %.2f\" % test_accuracy)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 2",
"language": "python",
"name": "python2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.13"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
=======================
Multi-Class Kernel SVM
=======================
This document explains how to run the Python script for this tutorial. In this tutorial, we create a kernel SVM to separate the data; the dataset used for this code is MNIST. This document is inspired by the `Implementing Multiclass SVMs <Multiclasssvm_>`_ open-source code; however, we extend it to the MNIST dataset and modify its method.
.. _Multiclasssvm: https://github.com/nfmcclure/tensorflow_cookbook/tree/master/04_Support_Vector_Machines/06_Implementing_Multiclass_SVMs
-------------------
Python Environment
-------------------
``WARNING:`` If TensorFlow is installed in an environment (a virtual environment, ...), that environment must be activated first. So, before running anything, make sure TensorFlow is available in the current environment.
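As before, TensorFlow availability can be verified quickly (assuming ``python`` is the environment's interpreter):

.. code:: shell

    python -c "import tensorflow as tf; print(tf.__version__)"
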
--------------------------------
How to run the code in Terminal?
--------------------------------
Please navigate to the ``code/`` directory and run the Python script in the general form below:

.. code:: shell

    python [python_code_file.py]

For example, the code can be executed as follows:

.. code:: shell

    python multiclass_SVM.py
----------------------------
How to run the code in IDEs?
----------------------------
Since the code is ready to go, as long as TensorFlow can be called from the IDE (PyCharm, Spyder, ...), the code can be executed successfully.
{
"cells": [
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"import tensorflow as tf\n",
"from sklearn import datasets\n",
"from tensorflow.python.framework import ops\n",
"from tensorflow.examples.tutorials.mnist import input_data\n",
"from sklearn.decomposition import PCA"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"#######################\n",
"### Necessary Flags ###\n",
"#######################\n",
"\n",
"batch_size = 50\n",
"num_steps = 1000\n",
"log_steps = 50\n",
"is_evaluation = True\n",
"gamma = -15.0\n",
"initial_learning_rate = 0.01"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"###########################\n",
"### Necessary Functions ###\n",
"###########################\n",
"def cross_class_label_fn(A):\n",
" \"\"\"\n",
" This function take the matrix of size (num_classes, batch_size) and return the cross-class label matrix\n",
" in which Yij are the elements where i,j are class indices.\n",
" :param A: The input matrix of size (num_classes, batch_size).\n",
" :return: The output matrix of size (num_classes, batch_size, batch_size).\n",
" \"\"\"\n",
" label_class_i = tf.reshape(A, [num_classes, 1, batch_size])\n",
" label_class_j = tf.reshape(label_class_i, [num_classes, batch_size, 1])\n",
" returned_mat = tf.matmul(label_class_j, label_class_i)\n",
" return returned_mat\n",
"\n",
"\n",
"# Compute SVM loss.\n",
"def loss_fn(alpha, label_placeholder):\n",
" term_1 = tf.reduce_sum(alpha)\n",
" alpha_cross = tf.matmul(tf.transpose(alpha), alpha)\n",
" cross_class_label = cross_class_label_fn(label_placeholder)\n",
" term_2 = tf.reduce_sum(tf.multiply(my_kernel, tf.multiply(alpha_cross, cross_class_label)), [1, 2])\n",
" return tf.reduce_sum(tf.subtract(term_2, term_1))\n",
"\n",
"\n",
"# Gaussian (RBF) prediction kernel\n",
"def kernel_pred(x_data, prediction_grid):\n",
" A = tf.reshape(tf.reduce_sum(tf.square(x_data), 1), [-1, 1])\n",
" B = tf.reshape(tf.reduce_sum(tf.square(prediction_grid), 1), [-1, 1])\n",
" square_distance = tf.add(tf.subtract(A, tf.multiply(2., tf.matmul(x_data, tf.transpose(prediction_grid)))),\n",
" tf.transpose(B))\n",
" return tf.exp(tf.multiply(gamma, tf.abs(square_distance)))\n",
"\n",
"\n",
"def kernel_fn(x_data, gamma):\n",
" \"\"\"\n",
" This function generates the RBF kernel.\n",
" :param x_data: Input data\n",
" :param gamma: Hyperparamet.\n",
" :return: The RBF kernel.\n",
" \"\"\"\n",
" square_distance = tf.multiply(2., tf.matmul(x_data, tf.transpose(x_data)))\n",
" kernel = tf.exp(tf.multiply(gamma, tf.abs(square_distance)))\n",
" return kernel\n",
"\n",
"\n",
"def prepare_label_fn(label_onehot):\n",
" \"\"\"\n",
" Label preparation. Since we are dealing with one vs all scenario, for each sample\n",
" all the labels other than the current class must be set to -1. It can be done by simply\n",
" Setting all the zero values to -1 in the return one_hot array for classes.\n",
"\n",
" :param label_onehot: The input as one_hot label which shape (num_samples,num_classes)\n",
" :return: The output with the same shape and all zeros tured to -1.\n",
" \"\"\"\n",
" labels = label_onehot\n",
" labels[labels == 0] = -1\n",
" labels = np.transpose(labels)\n",
" return labels\n",
"\n",
"\n",
"def next_batch(X, y, batch_size):\n",
" \"\"\"\n",
" Generating a batch of random data.\n",
" :param x_train:\n",
" :param batch_size:\n",
" :return:\n",
" \"\"\"\n",
" idx = np.random.choice(len(X), size=batch_size)\n",
" X_batch = X[idx]\n",
" y_batch = y[:, idx]\n",
" return X_batch, y_batch"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Extracting MNIST_data/train-images-idx3-ubyte.gz\n",
"Extracting MNIST_data/train-labels-idx1-ubyte.gz\n",
"Extracting MNIST_data/t10k-images-idx3-ubyte.gz\n",
"Extracting MNIST_data/t10k-labels-idx1-ubyte.gz\n",
"The variance of the chosen components = %91.41\n"
]
}
],
"source": [
"########################\n",
"### Data Preparation ###\n",
"########################\n",
"\n",
"# Read MNIST data. It has a data structure.\n",
"# mnist.train.images, mnist.train.labels: The training set images and their associated labels.\n",
"# mnist.validation.images, mnist.validation.labels: The validation set images and their associated labels.\n",
"# mnist.test.images, mnist.test.labels: The test set images and their associated labels.\n",
"\n",
"# Flags:\n",
"# \"reshape=True\", by this flag, the data will be reshaped to (num_samples,num_features)\n",
"# and since each image is 28x28, the num_features = 784\n",
"# \"one_hot=True\", this flag return one_hot labeling format\n",
"# ex: sample_label [1 0 0 0 0 0 0 0 0 0] says the sample belongs to the first class.\n",
"mnist = input_data.read_data_sets(\"MNIST_data/\", reshape=True, one_hot=True)\n",
"\n",
"# Label preparation.\n",
"y_train = prepare_label_fn(mnist.train.labels)\n",
"y_test = prepare_label_fn(mnist.test.labels)\n",
"\n",
"# Get the number of classes.\n",
"num_classes = y_train.shape[0]\n",
"\n",
"##########################################\n",
"### Dimensionality Reduction Using PCA ###\n",
"##########################################\n",
"pca = PCA(n_components=100)\n",
"pca.fit(mnist.train.images)\n",
"\n",
"# print the accumulative variance for the returned principle components.\n",
"print(\"The variance of the chosen components = %{0:.2f}\".format(100 * np.sum(pca.explained_variance_ratio_)))\n",
"x_train = pca.transform(mnist.train.images)\n",
"x_test = pca.transform(mnist.test.images)\n",
"num_fetures = x_train.shape[1]"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"############################\n",
"### Graph & Optimization ###\n",
"############################\n",
"# Create graph\n",
"sess = tf.Session()\n",
"\n",
"# Initialize placeholders\n",
"data_placeholder = tf.placeholder(shape=[None, num_fetures], dtype=tf.float32)\n",
"label_placeholder = tf.placeholder(shape=[num_classes, None], dtype=tf.float32)\n",
"pred_placeholder = tf.placeholder(shape=[None, num_fetures], dtype=tf.float32)\n",
"\n",
"# The alpha variable for solving the dual optimization problem.\n",
"alpha = tf.Variable(tf.random_normal(shape=[num_classes, batch_size]))\n",
"\n",
"# Gaussian (RBF) kernel\n",
"gamma = tf.constant(gamma)\n",
"\n",
"# RBF kernel\n",
"my_kernel = kernel_fn(data_placeholder, gamma)\n",
"\n",
"# Loss calculation.\n",
"loss = loss_fn(alpha, label_placeholder)\n",
"\n",
"# Generating the prediction kernel.\n",
"pred_kernel = kernel_pred(data_placeholder, pred_placeholder)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"#############################\n",
"### Prediction & Accuracy ###\n",
"#############################\n",
"prediction_output = tf.matmul(tf.multiply(label_placeholder, alpha), pred_kernel)\n",
"prediction = tf.arg_max(prediction_output - tf.expand_dims(tf.reduce_mean(prediction_output, 1), 1), 0)\n",
"accuracy = tf.reduce_mean(tf.cast(tf.equal(prediction, tf.argmax(label_placeholder, 0)), tf.float32))"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Optimizer\n",
"train_op = tf.train.AdamOptimizer(initial_learning_rate).minimize(loss)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# Variables Initialization.\n",
"init = tf.global_variables_initializer()\n",
"sess.run(init)"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Step #50, Loss= -2258.316895, training accuracy= 0.360000, testing accuracy= 0.320000 \n",
"Step #100, Loss= -4122.960938, training accuracy= 0.560000, testing accuracy= 0.540000 \n",
"Step #150, Loss= -5949.908203, training accuracy= 0.660000, testing accuracy= 0.860000 \n",
"Step #200, Loss= -7223.067871, training accuracy= 0.920000, testing accuracy= 0.920000 \n",
"Step #250, Loss= -7966.063965, training accuracy= 0.980000, testing accuracy= 0.980000 \n",
"Step #300, Loss= -9594.546875, training accuracy= 1.000000, testing accuracy= 1.000000 \n",
"Step #350, Loss= -8474.500977, training accuracy= 1.000000, testing accuracy= 1.000000 \n",
"Step #400, Loss= -10710.136719, training accuracy= 1.000000, testing accuracy= 1.000000 \n",
"Step #450, Loss= -11123.141602, training accuracy= 1.000000, testing accuracy= 1.000000 \n",
"Step #500, Loss= -10979.212891, training accuracy= 1.000000, testing accuracy= 1.000000 \n",
"Step #550, Loss= -12162.305664, training accuracy= 1.000000, testing accuracy= 1.000000 \n",
"Step #600, Loss= -14245.701172, training accuracy= 1.000000, testing accuracy= 1.000000 \n",
"Step #650, Loss= -10697.230469, training accuracy= 1.000000, testing accuracy= 1.000000 \n",
"Step #700, Loss= -14184.848633, training accuracy= 1.000000, testing accuracy= 1.000000 \n",
"Step #750, Loss= -10400.685547, training accuracy= 1.000000, testing accuracy= 1.000000 \n",
"Step #800, Loss= -13207.020508, training accuracy= 1.000000, testing accuracy= 1.000000 \n",
"Step #850, Loss= -11359.216797, training accuracy= 1.000000, testing accuracy= 1.000000 \n",
"Step #900, Loss= -17082.535156, training accuracy= 1.000000, testing accuracy= 1.000000 \n",
"Step #950, Loss= -14653.086914, training accuracy= 1.000000, testing accuracy= 1.000000 \n",
"Step #1000, Loss= -18085.871094, training accuracy= 1.000000, testing accuracy= 1.000000 \n"
]
}
],
"source": [
"# Training loop\n",
"for i in range(num_steps):\n",
"\n",
" batch_X, batch_y = next_batch(x_train, y_train, batch_size)\n",
" sess.run(train_op, feed_dict={data_placeholder: batch_X, label_placeholder: batch_y})\n",
"\n",
" temp_loss = sess.run(loss, feed_dict={data_placeholder: batch_X, label_placeholder: batch_y})\n",
"\n",
" acc_train_batch = sess.run(accuracy, feed_dict={data_placeholder: batch_X,\n",
" label_placeholder: batch_y,\n",
" pred_placeholder: batch_X})\n",
"\n",
" batch_X_test, batch_y_test = next_batch(x_test, y_test, batch_size)\n",
" acc_test_batch = sess.run(accuracy, feed_dict={data_placeholder: batch_X_test,\n",
" label_placeholder: batch_y_test,\n",
" pred_placeholder: batch_X_test})\n",
"\n",
" if (i + 1) % log_steps == 0:\n",
" print('Step #%d, Loss= %f, training accuracy= %f, testing accuracy= %f ' % (\n",
" (i+1), temp_loss, acc_train_batch, acc_test_batch))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 2",
"language": "python",
"name": "python2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.13"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
......@@ -83,12 +83,6 @@ def linear_model():
# Create model instance
model = linear_model()
# Model plot
tf.keras.utils.plot_model(
model, to_file='model.png', show_shapes=True, show_layer_names=True,
rankdir='TB', expand_nested=False, dpi=100
)
# Print the model summary
model.summary()
......@@ -219,5 +213,4 @@ if model_improvement_progress:
# Plot the line
y_hat = w1*x + w0
plt.plot(x, y_hat, '-r')
plt.savefig(os.path.join('/content/drive/linearregression', str(checkpoint)+'.png'))
\ No newline at end of file
plt.plot(x, y_hat, '-r')
\ No newline at end of file