
Running the following Sqoop export command:

[cloudera@quickstart ~]$ sqoop export --connect "jdbc:mysql://quickstart.cloudera:3306/retail_db" --username retail_dba --password cloudera --table department_export --export-dir /home/cloudera/sqoop_import/departments -m 12

Error:

16/12/24 22:29:48 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
16/12/24 22:29:49 INFO mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/cloudera/.staging/job_1482646432089_0001
16/12/24 22:29:49 WARN security.UserGroupInformation: PriviledgedActionException as:cloudera (auth:SIMPLE) cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot delete /tmp/hadoop-yarn/staging/cloudera/.staging/job_1482646432089_0001. Name node is in safe mode.
The reported blocks 1268 needs additional 39 blocks to reach the threshold 0.9990 of total blocks 1308. The number of live datanodes 1 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1446)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:4072)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:4030)
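For reference, this message means the NameNode is still below its block-report threshold, so HDFS refuses the delete. Safe mode can be inspected and, once the DataNode has reported enough blocks, released with the standard HDFS admin commands (a reference sketch, nothing Cloudera-specific assumed):

# Show whether the NameNode is currently in safe mode
hdfs dfsadmin -safemode get

# Block until safe mode ends on its own (threshold reached)
hdfs dfsadmin -safemode wait

# Or force it off manually
hdfs dfsadmin -safemode leave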

I tried running "hdfs dfsadmin -safemode leave", but I am still getting an error:

16/12/24 10:37:59 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
16/12/24 10:38:00 INFO mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/cloudera/.staging/job_1482602419946_0007
16/12/24 10:38:00 WARN security.UserGroupInformation: PriviledgedActionException as:cloudera (auth:SIMPLE) cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot delete /tmp/hadoop-yarn/staging/cloudera/.staging/job_1482602419946_0007. Name node is in safe mode. It was turned on manually. Use "hdfs dfsadmin -safemode leave" to turn safe mode off.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.
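Since this second message says safe mode "was turned on manually", it is worth confirming that the leave command actually took effect and that HDFS is healthy before resubmitting the Sqoop job. A hedged sketch using standard HDFS commands:

# Confirm safe mode is really off right before resubmitting the export
hdfs dfsadmin -safemode get        # should report "Safe mode is OFF"

# Look for missing or corrupt blocks that could be keeping HDFS unhealthy
hdfs fsck / | tail -n 20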


1 Answer


Make sure that you have the HCAT_HOME environment variable set properly for the Sqoop runtime. The error you are getting occurs because Sqoop isn't able to find the required dependency "org.apache.hive.hcatalog*", which ships with Hive's HCatalog.
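A minimal sketch of what that could look like on the quickstart VM; the HCatalog install path below is an assumption, so point it at wherever hive-hcatalog actually lives on your machine:

# HCAT_HOME path is an assumption for the CDH quickstart VM; adjust to your install
export HCAT_HOME=/usr/lib/hive-hcatalog
export PATH=$PATH:$HCAT_HOME/bin

# Re-run the export in the same shell so Sqoop picks up the variable
sqoop export --connect "jdbc:mysql://quickstart.cloudera:3306/retail_db" \
  --username retail_dba --password cloudera \
  --table department_export \
  --export-dir /home/cloudera/sqoop_import/departments -m 12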