TensorFlow Mechanics 101

This page complements TensorFlow Mechanics 101 and describes how I ran this example successfully on my machine.

To run the Python script fully_connected_feed.py successfully, change the following two import lines, which otherwise cause errors.
From

from tensorflow.g3doc.tutorials.mnist import input_data
from tensorflow.g3doc.tutorials.mnist import mnist

to

import input_data
import mnist

The errors occur because fully_connected_feed.py tries to import input_data.py and mnist.py from the package tensorflow.g3doc.tutorials.mnist, which is not on the Python path. The fix imports them as local modules instead; both files live in the same directory as fully_connected_feed.py.

The step-by-step tutorial is below.

Step 1. Create a directory tutorial/mnist under ~/tensorflow

$ cd ~/tensorflow/
$ ls
bin include lib pip-selfcheck.json
$ mkdir tutorial
$ cd tutorial/
$ mkdir mnist
$ cd mnist/

Step 2. Download the tutorial source file mnist.tar.gz.

$ wget https://tensorflow.googlesource.com/tensorflow/+archive/master/tensorflow/g3doc/tutorials/mnist.tar.gz
$ ls
mnist.tar.gz

Step 3. Extract the tutorial source archive.

$ tar zxf mnist.tar.gz
$ ls
__init__.py download input_data.py mnist.tar.gz pros
beginners fully_connected_feed.py mnist.py mnist_softmax.py tf

Step 4. Fix two errors in fully_connected_feed.py.
Use your preferred text editor to make the following changes
on lines 23 and 24. (I use nano.)
From

from tensorflow.g3doc.tutorials.mnist import input_data
from tensorflow.g3doc.tutorials.mnist import mnist

to

import input_data
import mnist

The first 24 lines of fully_connected_feed.py are shown below. Remove or comment out the two package imports on lines 23 and 24.

$ cat -n fully_connected_feed.py | head -n 24
1 """Trains and Evaluates the MNIST network using a feed dictionary.
2
3 TensorFlow install instructions:
4 https://tensorflow.org/get_started/os_setup.html
5
6 MNIST tutorial:
7 https://tensorflow.org/tutorials/mnist/tf/index.html
8
9 """
10 # pylint: disable=missing-docstring
11 from __future__ import absolute_import
12 from __future__ import division
13 from __future__ import print_function
14
15 import os.path
16 import time
17
18 import tensorflow.python.platform
19 import numpy
20 from six.moves import xrange # pylint: disable=redefined-builtin
21 import tensorflow as tf
22
23 from tensorflow.g3doc.tutorials.mnist import input_data
24 from tensorflow.g3doc.tutorials.mnist import mnist

To understand what’s going on with input_data.py, refer to MNIST Data Download.

Step 5. Run the Python script.

$ python fully_connected_feed.py
[many lines of log messages; the full output is reproduced at the end of this post.]
$

If TensorFlow was installed properly, the script pauses for a few seconds and then runs to completion. If you hit an error, suspect the installation rather than this Python code. In that case, installing TensorFlow with VirtualEnv is recommended, because TensorFlow then runs in an isolated environment on your operating system.
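What the script does at each training step is the feed-dictionary pattern: it slices one batch of images and labels out of the training set and binds it to placeholder tensors. Stripped of TensorFlow, the batching logic behind the script's fill_feed_dict might look like the sketch below (the function and data here are illustrative, not the script's exact code):

```python
def next_batch(images, labels, start, batch_size):
    """Return one (images, labels) batch and the next start index.
    Illustrative stand-in for the batching inside fill_feed_dict."""
    end = start + batch_size
    return images[start:end], labels[start:end], end % len(images)


# Hypothetical toy data standing in for the 55,000 MNIST training images.
images = list(range(10))
labels = [i % 2 for i in images]

batch_x, batch_y, start = next_batch(images, labels, 0, 4)
# In the real script this batch is then bound to placeholders, roughly:
#   feed_dict = {images_placeholder: batch_x, labels_placeholder: batch_y}
print(batch_x, batch_y, start)
```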

Step 6. Run TensorBoard.

Step 6.1. In the terminal that you are currently working on, type in:

$ pwd
/Users/user_id/tensorflow/tutorial/mnist/data
$ tensorboard --logdir=/Users/user_id/tensorflow/tutorial/mnist/data
Starting TensorBoard on port 6006
(You can navigate to http://localhost:6006)

Make sure user_id in the directory name above is YOUR user ID.

Note also that the absolute path to the data directory is used. Do not use a relative path (or ~), because then no data will be displayed in TensorBoard. In other words, don't do this.

$ tensorboard --logdir=~/tensorflow/tutorial/mnist/data
Starting TensorBoard on port 6006
(You can navigate to http://localhost:6006)

TensorBoard may launch, but no data will be passed to it. So you won’t be able to see the graphical representation of the result and the neural network graph.
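A safe way to supply the absolute path without typing your user ID is to build it from $HOME (or from $(pwd) while inside the directory). A sketch, assuming the data directory created in Step 5:

```shell
# Build the absolute log directory from $HOME instead of hard-coding the
# user ID; this avoids both relative paths and the literal '~' problem.
logdir="$HOME/tensorflow/tutorial/mnist/data"
echo "tensorboard --logdir=$logdir"
```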

Step 6.2. Open a web browser and enter "http://localhost:6006" in the address bar.

[Screenshot 2015-11-18 7:14:20 PM]

Then TensorBoard launches.

[Screenshot 2015-11-18 7:00:30 PM]

Clicking "xentropy_mean" displays the loss graph.

[Screenshot 2015-11-18 7:00:54 PM]

Clicking “Graph” shows a graph of this neural network structure.

[Screenshot 2015-11-18 7:02:00 PM]

Nothing is shown under "IMAGES" and "HISTOGRAMS".

[Screenshot 2015-11-18 7:01:28 PM]

[Screenshot 2015-11-18 7:01:45 PM]

Caution: if a relative path is used, TensorBoard may launch but show no EVENTS and no GRAPH, as below.

[Screenshot 2015-11-18 7:13:33 PM]

[Screenshot 2015-11-18 7:13:44 PM]

Step 7. Bring your tea, coffee, or snacks and enjoy analyzing the results. 🙂
It may be a good idea to revisit the related tutorials.

Step 8. Stop TensorBoard.

Step 8.1. Close the web browser window.
Step 8.2. Hit Ctrl+C in the terminal that launched TensorBoard. This terminates the TensorBoard process.


Just for your information, the output of TensorBoard's --help option is given below.

$ cd ~/tensorflow/
$ tensorboard --help
usage: tensorboard [-h] [--logdir LOGDIR] [--debug DEBUG] [--nodebug]
                   [--host HOST] [--port PORT]

optional arguments:
  -h, --help       show this help message and exit
  --logdir LOGDIR  logdir specifies where TensorBoard will look to find
                   TensorFlow event files that it can display. In the simplest
                   case, logdir is a directory containing tfevents files.
                   TensorBoard also supports comparing multiple TensorFlow
                   executions: to do this, you can use directory whose
                   subdirectories contain tfevents files, as in the following
                   example: foo/bar/logdir/
                   foo/bar/logdir/mnist_1/events.out.tfevents.1444088766
                   foo/bar/logdir/mnist_2/events.out.tfevents.1444090064 You
                   may also pass a comma seperated list of log directories,
                   and you can assign names to individual log directories by
                   putting a colon between the name and the path, as in
                   tensorboard
                   --logdir=name1:/path/to/logs/1,name2:/path/to/logs/2
  --debug DEBUG    Whether to run the app in debug mode. This increases log
                   verbosity to DEBUG.
  --nodebug
  --host HOST      What host to listen to. Defaults to allowing remote access,
                   set to 127.0.0.1 to serve only on localhost.
  --port PORT      What port to serve TensorBoard on.
$
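The help text above says --logdir accepts a comma-separated list where each entry may carry a "name:" prefix. To make that syntax concrete, here is a small parser for it; this is an illustrative sketch, not TensorBoard's actual parsing code:

```python
def parse_logdir_spec(spec):
    """Parse a comma-separated logdir spec where each entry may be
    "name:path" or a bare path (mirrors the --logdir syntax above;
    illustrative only, not TensorBoard's real parser)."""
    runs = {}
    for entry in spec.split(","):
        name, sep, path = entry.partition(":")
        if sep and not entry.startswith("/"):
            runs[name] = path          # "name:path" form
        else:
            runs[entry] = entry        # bare path: named after itself
    return runs


print(parse_logdir_spec("name1:/path/to/logs/1,name2:/path/to/logs/2"))
```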


Just for reference, the entire terminal session is given below; the commands are the lines beginning with $.

$ cd ~/tensorflow/
$ ls
bin include lib pip-selfcheck.json
$ mkdir tutorial
$ cd tutorial/
$ mkdir mnist
$ cd mnist/
$ wget https://tensorflow.googlesource.com/tensorflow/+archive/master/tensorflow/g3doc/tutorials/mnist.tar.gz
--2015-11-18 09:00:40-- https://tensorflow.googlesource.com/tensorflow/+archive/master/tensorflow/g3doc/tutorials/mnist.tar.gz
Resolving tensorflow.googlesource.com... 173.194.72.82, 2404:6800:4008:c07::52
Connecting to tensorflow.googlesource.com|173.194.72.82|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [application/x-gzip]
Saving to: 'mnist.tar.gz'

[ <=> ] 1,179,671 835KB/s in 1.4s

2015-11-18 09:00:43 (835 KB/s) - 'mnist.tar.gz' saved [1179671]

$ ls
mnist.tar.gz
$ tar zxf mnist.tar.gz
$ ls
__init__.py download input_data.py mnist.tar.gz pros
beginners fully_connected_feed.py mnist.py mnist_softmax.py tf
$ python fully_connected_feed.py
Traceback (most recent call last):
File "fully_connected_feed.py", line 23, in <module>
from tensorflow.g3doc.tutorials.mnist import input_data
ImportError: No module named g3doc.tutorials.mnist
$ nano fully_connected_feed.py
$ python fully_connected_feed.py
Succesfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
Extracting data/train-images-idx3-ubyte.gz
Succesfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
Extracting data/train-labels-idx1-ubyte.gz
Succesfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
Extracting data/t10k-images-idx3-ubyte.gz
Succesfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
Extracting data/t10k-labels-idx1-ubyte.gz
can't determine number of CPU cores: assuming 4
I tensorflow/core/common_runtime/local_device.cc:25] Local device intra op parallelism threads: 4
can't determine number of CPU cores: assuming 4
I tensorflow/core/common_runtime/local_session.cc:45] Local session inter op parallelism threads: 4
Step 0: loss = 2.31 (0.081 sec)
Step 100: loss = 2.18 (0.006 sec)
Step 200: loss = 1.98 (0.006 sec)
Step 300: loss = 1.72 (0.006 sec)
Step 400: loss = 1.39 (0.006 sec)
Step 500: loss = 1.00 (0.006 sec)
Step 600: loss = 0.88 (0.006 sec)
Step 700: loss = 0.85 (0.006 sec)
Step 800: loss = 0.64 (0.006 sec)
Step 900: loss = 0.71 (0.006 sec)
Training Data Eval:
Num examples: 55000 Num correct: 46515 Precision @ 1: 0.8457
Validation Data Eval:
Num examples: 5000 Num correct: 4276 Precision @ 1: 0.8552
Test Data Eval:
Num examples: 10000 Num correct: 8500 Precision @ 1: 0.8500
Step 1000: loss = 0.58 (0.013 sec)
Step 1100: loss = 0.51 (0.115 sec)
Step 1200: loss = 0.60 (0.006 sec)
Step 1300: loss = 0.34 (0.006 sec)
Step 1400: loss = 0.45 (0.006 sec)
Step 1500: loss = 0.52 (0.006 sec)
Step 1600: loss = 0.46 (0.006 sec)
Step 1700: loss = 0.49 (0.007 sec)
Step 1800: loss = 0.36 (0.006 sec)
Step 1900: loss = 0.27 (0.006 sec)
Training Data Eval:
Num examples: 55000 Num correct: 49009 Precision @ 1: 0.8911
Validation Data Eval:
Num examples: 5000 Num correct: 4485 Precision @ 1: 0.8970
Test Data Eval:
Num examples: 10000 Num correct: 8952 Precision @ 1: 0.8952
$
