4
votes

I am trying to stop the daemon processes in hadoop by ./stop-all.sh script but it gives following message:

no jobtracker to stop
localhost: no tasktracker to stop
no namenode to stop
localhost: no datanode to stop
localhost: no secondarynamenode to stop

I tried to see if the hadoop is running after this using jps and it showed:

27948 SecondaryNameNode
27714 NameNode
28136 TaskTracker
27816 DataNode
28022 JobTracker
8174 Jps

That is, all daemons are still running. I also ran hadoop dfs -ls / to check whether I can connect to HDFS, and that works too.

I am running the stop-all.sh script as a supergroup user, so there should be no permissions issue.


1 Answer

4
votes

This message is shown when the start/stop scripts cannot find a pid file for the daemon in the $HADOOP_PID_DIR directory (which defaults to /tmp).

If:

  • these files have been deleted (by someone or something), or
  • the environment variable $HADOOP_PID_DIR has been changed since you started the daemons, or
  • the user stopping the daemons is not the user that started them

then Hadoop will show the error messages you are seeing.
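A quick way to check which of these cases applies is to list the pid files the scripts expect. This is a minimal sketch; the directory and file-name pattern follow the hadoop-daemon.sh defaults quoted below:

```shell
#!/bin/sh
# List the pid files that the stop scripts look for.
# HADOOP_PID_DIR defaults to /tmp, matching hadoop-daemon.sh.
PID_DIR="${HADOOP_PID_DIR:-/tmp}"
echo "Looking for pid files in $PID_DIR"
found=0
for f in "$PID_DIR"/hadoop-*.pid; do
  # If the glob matched nothing, $f is the literal pattern; skip it.
  [ -e "$f" ] || continue
  found=1
  printf '%s -> pid %s\n' "$f" "$(cat "$f")"
done
if [ "$found" -eq 0 ]; then
  echo "No pid files found: this is why the stop scripts report 'no ... to stop'."
fi
```

If the files exist but are owned by a different user, that points to the third case above.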

Selected portions from the hadoop-daemon.sh file (for 1.0.0):

#   HADOOP_IDENT_STRING   A string representing this instance of hadoop. $USER by default

if [ "$HADOOP_IDENT_STRING" = "" ]; then
  export HADOOP_IDENT_STRING="$USER"
fi

# ....

if [ "$HADOOP_PID_DIR" = "" ]; then
  HADOOP_PID_DIR=/tmp
fi    

# ....

pid=$HADOOP_PID_DIR/hadoop-$HADOOP_IDENT_STRING-$command.pid

# ....

(stop)

  if [ -f $pid ]; then
    if kill -0 `cat $pid` > /dev/null 2>&1; then
      echo stopping $command
      kill `cat $pid`
    else
      echo no $command to stop
    fi
  else
    echo no $command to stop
  fi
  ;;
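If the daemons are still running but the pid files have been deleted, one workaround is to recreate a pid file by hand so hadoop-daemon.sh can find it again. In this sketch, 27714 is the NameNode pid taken from the jps output in the question; repeat for each daemon, and note that HADOOP_IDENT_STRING defaults to $USER as shown above:

```shell
#!/bin/sh
# Recreate a missing pid file so the stop script can stop the daemon.
# Assumption: 27714 is the NameNode pid from the jps output in the question.
PID_DIR="${HADOOP_PID_DIR:-/tmp}"
IDENT="${HADOOP_IDENT_STRING:-${USER:-$(id -un)}}"
pidfile="$PID_DIR/hadoop-$IDENT-namenode.pid"
echo 27714 > "$pidfile"
echo "recreated $pidfile"
```

After recreating the pid files for each running daemon (namenode, datanode, jobtracker, tasktracker, secondarynamenode), run stop-all.sh again. Alternatively, you can simply kill the pids listed by jps directly.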