2 votes

I am trying to write a two layer neural network to train a class labeler. The input to the network is a 150-feature list of about 1000 examples; all features on all examples have been L2 normalized.

I only have two outputs, and they should be disjoint--I am just attempting to predict whether the example is a one or a zero.

My code is relatively simple; I am feeding the input data into the hidden layer, and then the hidden layer into the output. As I really just want to see this working in action, I am training on the entire data set with each step.

My code is below. Based on the other NN implementations I have referred to, I believe that the performance of this network should improve over time. However, regardless of the number of epochs I set, I get back an accuracy of about 20%. The accuracy does not change when the number of steps is changed, so I don't believe that my weights and biases are being updated.

Is there something obvious I am missing with my model? Thanks!

import numpy as np
import tensorflow as tf

sess = tf.InteractiveSession()

# generate data

np.random.seed(10)

inputs = np.random.normal(size=[1000,150]).astype('float32')*1.5

label = np.round(np.random.uniform(low=0,high=1,size=[1000,1])*0.8)
reverse_label = 1-label
labels = np.append(label,reverse_label,1)

# parameters

learn_rate = 0.01
epochs = 200
n_input = 150
n_hidden = 75
n_output = 2

# set weights/biases

x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_output])

b0 = tf.Variable(tf.truncated_normal([n_hidden]))
b1 = tf.Variable(tf.truncated_normal([n_output]))

w0 = tf.Variable(tf.truncated_normal([n_input,n_hidden]))
w1 = tf.Variable(tf.truncated_normal([n_hidden,n_output]))

# step function

def returnPred(x,w0,w1,b0,b1):

    z1 = tf.add(tf.matmul(x, w0), b0)
    a2 = tf.nn.relu(z1)

    z2 = tf.add(tf.matmul(a2, w1), b1)
    h = tf.nn.relu(z2)

    return h  # return the output layer activations

y_ = returnPred(x,w0,w1,b0,b1) # predict operation

loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=y_,labels=y) # calculate loss between prediction and actual
model = tf.train.GradientDescentOptimizer(learning_rate=learn_rate).minimize(loss) # apply gradient descent based on loss

init = tf.global_variables_initializer() 
tf.Session = sess
sess.run(init) #initialize graph

for step in range(0,epochs):
    sess.run(model,feed_dict={x: inputs, y: labels }) #train model

correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1)) 
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: inputs, y: labels})) # print accuracy
It will help if you can produce some "toy" input and labels (it doesn't have to be your specific input; you could use numpy random), so that readers can run your code. – Miriam Farber

Hi Miriam. I've updated my code to include some "toy" input per your request. Thanks! – newtensorflowguy

3 Answers

5 votes

I changed your optimizer to AdamOptimizer (in many cases it performs better than GradientDescentOptimizer).

I also played a bit with the parameters. In particular, I used a smaller standard deviation for your variable initialization, decreased the learning rate (as your loss was unstable and "jumped around"), and increased the number of epochs (as I noticed that your loss continued to decrease).

I also reduced the size of the hidden layer. It is harder to train a network with a large hidden layer when you don't have much data.

Regarding your loss, it is better to apply tf.reduce_mean to it so that the loss is a single scalar. In addition, following the answer of ml4294, I used softmax instead of sigmoid, so the loss looks like:

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y_,labels=y))

The code below achieves an accuracy of around 99.9% on the training data:

import numpy as np
import tensorflow as tf

sess = tf.InteractiveSession()

# generate data

np.random.seed(10)

inputs = np.random.normal(size=[1000,150]).astype('float32')*1.5

label = np.round(np.random.uniform(low=0,high=1,size=[1000,1])*0.8)
reverse_label = 1-label
labels = np.append(label,reverse_label,1)

# parameters

learn_rate = 0.002
epochs = 400
n_input = 150
n_hidden = 60
n_output = 2

# set weights/biases

x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_output])

b0 = tf.Variable(tf.truncated_normal([n_hidden],stddev=0.2,seed=0))
b1 = tf.Variable(tf.truncated_normal([n_output],stddev=0.2,seed=0))

w0 = tf.Variable(tf.truncated_normal([n_input,n_hidden],stddev=0.2,seed=0))
w1 = tf.Variable(tf.truncated_normal([n_hidden,n_output],stddev=0.2,seed=0))

# step function

def returnPred(x,w0,w1,b0,b1):

    z1 = tf.add(tf.matmul(x, w0), b0)
    a2 = tf.nn.relu(z1)

    z2 = tf.add(tf.matmul(a2, w1), b1)
    h = tf.nn.relu(z2)

    return h  # return the output layer activations

y_ = returnPred(x,w0,w1,b0,b1) # predict operation

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y_,labels=y)) # calculate loss between prediction and actual
model = tf.train.AdamOptimizer(learning_rate=learn_rate).minimize(loss) # apply gradient descent based on loss


init = tf.global_variables_initializer()
tf.Session = sess
sess.run(init) #initialize graph

for step in range(0,epochs):
    sess.run([model,loss],feed_dict={x: inputs, y: labels }) #train model

correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1)) 
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: inputs, y: labels})) # print accuracy
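As a side note, if you want to see the behaviour described above (the loss "jumping around" with the original settings versus steadily decreasing with the new ones), a minimal sketch for monitoring it during training, reusing the session, placeholders, and loss already defined in this answer, could look like this:

for step in range(epochs):
    # one training step on the full data set; also fetch the current loss value
    _, current_loss = sess.run([model, loss], feed_dict={x: inputs, y: labels})
    if step % 50 == 0:
        print("step %d, loss %.4f" % (step, current_loss))  # check that the loss keeps decreasing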
2 votes

Just a suggestion in addition to the answer provided by Miriam Farber: you use a multi-dimensional output label ([0., 1.]) for the classification. I suggest using the softmax cross entropy tf.nn.softmax_cross_entropy_with_logits() instead of the sigmoid cross entropy, since you assume the outputs to be disjoint (see softmax on Wikipedia). I achieved much faster convergence with this small modification. This should also improve your performance once you decide to increase your output dimensionality from 2 to a higher number.
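Concretely, the swap suggested here would look roughly like this in the original code (tf.reduce_mean is included as well, following Miriam Farber's answer, so that the optimizer minimizes a scalar); this is a sketch, and the rest of the graph stays unchanged:

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y_, labels=y))  # disjoint two-class targets -> softmax cross entropy
model = tf.train.GradientDescentOptimizer(learning_rate=learn_rate).minimize(loss)   # same optimizer as in the question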

0 votes

I think the problem is this line: loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=y_,labels=y) # calculate loss between prediction and actual

It should look something like this: loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=y_,labels=y))

I didn't look at your code much, so if this doesn't work out, you can check the Udacity deep learning course or forum; they have good examples of what you are trying to do. Good luck!