
My model's accuracy and loss both evaluate to 0.
The global step should be 1625, but it is 1.
Accuracy and loss should not both be 0: a loss of 0 would imply near-perfect predictions, so the two values contradict each other.

My input function, Keras estimator, and train_and_evaluate are as follows:

def make_input_fn(addrs, labels, batch_size, mode):
    filename_dataset = tf.data.Dataset.from_tensor_slices((addrs, labels))

    dataset = filename_dataset.apply(tf.contrib.data.map_and_batch(
        lambda addrs, labels: tuple(tf.py_func(
            process, [addrs, labels], [tf.uint8, labels.dtype])),
        batch_size,
        num_parallel_batches=2,
        drop_remainder=False))

    if mode == tf.estimator.ModeKeys.TRAIN:
        num_epochs = None  # indefinitely
        dataset = dataset.apply(tf.contrib.data.shuffle_and_repeat(buffer_size=10000))
    else:
        num_epochs = 1
        dataset = dataset.repeat(num_epochs)

    dataset = dataset.prefetch(buffer_size=batch_size)
    images, labels = dataset.make_one_shot_iterator().get_next()
    images.set_shape([None, 512, 512, 3])
    labels.set_shape([None, 1])
    return images, labels
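
(The process function passed to tf.py_func is not shown above. For context, a minimal sketch of what it presumably does is below; this assumes it reads each image with OpenCV, resizes it to 512x512, and returns uint8 pixels plus the unchanged label, matching the [tf.uint8, labels.dtype] output dtypes. The actual implementation may differ.)

import cv2
import numpy as np

# Hypothetical sketch of the `process` callable used in tf.py_func above.
# tf.py_func passes the file path in as bytes, so it is decoded first.
def process(addr, label):
    addr = addr.decode('utf-8')
    img = cv2.imread(addr)             # returns None if the file is missing
    img = cv2.resize(img, (512, 512))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    return img.astype(np.uint8), label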

def keras_estimator(model_dir, config):
    base_model = Xception(weights='imagenet', include_top=False,
                          input_shape=(512, 512, 3), classes=5)
    x = base_model.output
    x = GlobalAveragePooling2D()(x)

    x = Dense(1024, activation='relu')(x)
    x = Dropout(0.2)(x)
    x = Dense(256, activation='relu')(x)
    x = Dropout(0.2)(x)

    predictions = Dense(5, activation='softmax')(x)

    model = Model(inputs=base_model.input, outputs=predictions)

    # Freeze the pretrained Xception base; only the new dense head is trained.
    for layer in base_model.layers:
        layer.trainable = False

    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['acc'])

    estimator = tf.keras.estimator.model_to_estimator(keras_model=model,
                                                      model_dir=model_dir,
                                                      config=config)
    return estimator
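
(The snippet above uses Xception, GlobalAveragePooling2D, Dense, Dropout, and Model without showing the imports. Assuming everything comes from tf.keras, which is what tf.keras.estimator.model_to_estimator works with, the imports would presumably be:)

import tensorflow as tf
from tensorflow.keras.applications import Xception
from tensorflow.keras.layers import Dense, Dropout, GlobalAveragePooling2D
from tensorflow.keras.models import Model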

def train_and_evaluate(model_dir):
    t_batch_size = 512
    e_batch_size = 64
    num_epochs = 25

    import pandas as pd
    from random import shuffle

    df = pd.read_csv('/content/trainLabels.csv')
    addrs = ['/content/train/train/' + str(df.iloc[i]['image']) + '.jpeg'
             for i in range(len(df))]
    labels = df['level'].values.tolist()

    # Shuffle the (path, label) pairs together, then split 90/10 into train/val.
    c = list(zip(addrs, labels))
    shuffle(c)
    addrs1, labels1 = zip(*c)
    train_addrs = list(addrs1[0:int(0.9 * len(addrs))])
    train_labels = list(labels1[0:int(0.9 * len(labels))])
    val_addrs = list(addrs1[int(0.9 * len(addrs)):])
    val_labels = list(labels1[int(0.9 * len(addrs)):])

    run_config = tf.estimator.RunConfig(save_checkpoints_secs=300)

    estimator = keras_estimator(model_dir, run_config)

    t_max_steps = (len(train_addrs) // t_batch_size) * num_epochs

    train_spec = tf.estimator.TrainSpec(
        input_fn=lambda: make_input_fn(train_addrs, train_labels, t_batch_size,
                                       mode=tf.estimator.ModeKeys.TRAIN),
        max_steps=t_max_steps)

    eval_spec = tf.estimator.EvalSpec(
        input_fn=lambda: make_input_fn(val_addrs, val_labels, e_batch_size,
                                       mode=tf.estimator.ModeKeys.EVAL),
        steps=None,
        start_delay_secs=10,
        throttle_secs=300)

    tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)

Here is the log output:

INFO:tensorflow:Running training and evaluation locally (non-distributed).
INFO:tensorflow:Start train and evaluate loop. The evaluate will happen after every checkpoint. Checkpoint frequency is determined based on RunConfig arguments: save_checkpoints_steps None or save_checkpoints_secs 300.
WARNING:tensorflow:From :9: map_and_batch (from tensorflow.contrib.data.python.ops.batching) is deprecated and will be removed in a future version. Instructions for updating: Use tf.data.experimental.map_and_batch(...).
WARNING:tensorflow:From :12: shuffle_and_repeat (from tensorflow.contrib.data.python.ops.shuffle_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.data.experimental.shuffle_and_repeat(...).
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Warm-starting with WarmStartSettings: WarmStartSettings(ckpt_to_initialize_from='/content/training/keras/keras_model.ckpt', vars_to_warm_start='.*', var_name_to_vocab_info={}, var_name_to_prev_var_name={})
INFO:tensorflow:Warm-starting from: ('/content/training/keras/keras_model.ckpt',)
INFO:tensorflow:Warm-starting variable: dense/kernel; prev_var_name: Unchanged
INFO:tensorflow:Warm-starting variable: dense/bias; prev_var_name: Unchanged
INFO:tensorflow:Warm-starting variable: dense_1/kernel; prev_var_name: Unchanged
INFO:tensorflow:Warm-starting variable: dense_1/bias; prev_var_name: Unchanged
INFO:tensorflow:Warm-starting variable: dense_2/kernel; prev_var_name: Unchanged
INFO:tensorflow:Warm-starting variable: dense_2/bias; prev_var_name: Unchanged
INFO:tensorflow:Warm-starting variable: Adam/iterations; prev_var_name: Unchanged
INFO:tensorflow:Warm-starting variable: Adam/lr; prev_var_name: Unchanged
INFO:tensorflow:Warm-starting variable: Adam/beta_1; prev_var_name: Unchanged
INFO:tensorflow:Warm-starting variable: Adam/beta_2; prev_var_name: Unchanged
INFO:tensorflow:Warm-starting variable: Adam/decay; prev_var_name: Unchanged
INFO:tensorflow:Warm-starting variable: training/Adam/Variable; prev_var_name: Unchanged
INFO:tensorflow:Warm-starting variable: training/Adam/Variable_1; prev_var_name: Unchanged
INFO:tensorflow:Warm-starting variable: training/Adam/Variable_2; prev_var_name: Unchanged
INFO:tensorflow:Warm-starting variable: training/Adam/Variable_3; prev_var_name: Unchanged
INFO:tensorflow:Warm-starting variable: training/Adam/Variable_4; prev_var_name: Unchanged
INFO:tensorflow:Warm-starting variable: training/Adam/Variable_5; prev_var_name: Unchanged
INFO:tensorflow:Warm-starting variable: training/Adam/Variable_6; prev_var_name: Unchanged
INFO:tensorflow:Warm-starting variable: training/Adam/Variable_7; prev_var_name: Unchanged
INFO:tensorflow:Warm-starting variable: training/Adam/Variable_8; prev_var_name: Unchanged
INFO:tensorflow:Warm-starting variable: training/Adam/Variable_9; prev_var_name: Unchanged
INFO:tensorflow:Warm-starting variable: training/Adam/Variable_10; prev_var_name: Unchanged
INFO:tensorflow:Warm-starting variable: training/Adam/Variable_11; prev_var_name: Unchanged
INFO:tensorflow:Warm-starting variable: training/Adam/Variable_12; prev_var_name: Unchanged
INFO:tensorflow:Warm-starting variable: training/Adam/Variable_13; prev_var_name: Unchanged
INFO:tensorflow:Warm-starting variable: training/Adam/Variable_14; prev_var_name: Unchanged
INFO:tensorflow:Warm-starting variable: training/Adam/Variable_15; prev_var_name: Unchanged
INFO:tensorflow:Warm-starting variable: training/Adam/Variable_16; prev_var_name: Unchanged
INFO:tensorflow:Warm-starting variable: training/Adam/Variable_17; prev_var_name: Unchanged
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Saving checkpoints for 0 into /content/training/model.ckpt.
INFO:tensorflow:Saving checkpoints for 1 into /content/training/model.ckpt.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2018-11-05-13:21:17
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from /content/training/model.ckpt-1
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Finished evaluation at 2018-11-05-13:22:08
INFO:tensorflow:Saving dict for global step 1: acc = 0.0, global_step = 1, loss = 0.0
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 1: /content/training/model.ckpt-1
INFO:tensorflow:Loss for final step: None.

Welcome to SO, have you tried debugging and narrowing down the possible issues? At the moment it's close to "my code is broken, please fix", which is unlikely to attract answers. Try to figure out what you think is causing the accuracy and loss issues, and then edit this to include a specific question about a section of your code. - shayaan
I'm not able to figure out how to debug my code. Everything seems to be all right. - Aarya Patel

1 Answer


I had this issue before. It turned out I had specified the wrong directory for the dataset, so TensorFlow had no input data to train on. I hope this helps.
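
To check whether that is what is happening here, one sanity check is to confirm that the image paths exist on disk and that the input function actually yields a batch. This is only a sketch, assuming TF 1.x graph mode and that the make_input_fn, process, train_addrs, and train_labels from the question are in scope:

import os
import tensorflow as tf

# 1) Verify the image paths resolve to real files.
missing = [p for p in train_addrs if not os.path.isfile(p)]
print('missing files:', len(missing), 'out of', len(train_addrs))

# 2) Pull a single small batch out of the input function and inspect it.
images, labels = make_input_fn(train_addrs, train_labels, batch_size=4,
                               mode=tf.estimator.ModeKeys.EVAL)
with tf.Session() as sess:
    img_batch, label_batch = sess.run([images, labels])
    print('images:', img_batch.shape, 'labels:', label_batch.shape)

If the path check reports missing files, or the batch fetch fails or returns empty arrays, the problem is in the data location rather than the model.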