
Current setup

MySQL connector version: mysql-connector-java-5.1.13
Sqoop version: sqoop-1.4.6
Hadoop version: hadoop-2.7.3
Java version: jdk-8u171-linux-x64 / jdk1.8.0_171 (Oracle JDK)
OS: Ubuntu

Note: I also tried with OpenJDK; the same issue exists with that version as well.

Sqoop command:

bin/sqoop import --connect jdbc:mysql://localhost:3306/testDb --username root --password root --table student --target-dir /user/hadoop/student -m 1 --driver com.mysql.jdbc.Driver

[screenshot of the error output attached to the question]

How large is the table you're importing? It says you're using over 4GB of memory... How much is on your machine? - OneCricketeer
The table contains only three records and my machine RAM is 4GB - ruchika doifode
Okay, so you're on a single node... The DataNode, NodeManager, and ResourceManager each take 1 GB of memory by default, not leaving much room for the OS and anything else... You need to tune down the yarn-site.xml memory settings, to at least about 256MB per container and for the application master. - OneCricketeer
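For reference, a minimal sketch of the kind of settings that comment points at, for a small single-node setup. The property names are standard Hadoop/YARN settings; the values are illustrative assumptions for a 4GB machine, not tuned recommendations.

In yarn-site.xml:

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>2048</value> <!-- total memory YARN may hand out on this node -->
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>256</value> <!-- smallest container YARN will allocate -->
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>1024</value> <!-- largest container a single request may ask for -->
</property>

In mapred-site.xml:

<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>256</value> <!-- application master container size -->
</property>
<property>
  <name>yarn.app.mapreduce.am.command-opts</name>
  <value>-Xmx200m</value> <!-- AM heap must fit inside its container -->
</property>
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>256</value> <!-- each map task container size -->
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx200m</value> <!-- map task heap must fit inside its container -->
</property>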

1 Answer


Try increasing mapper parallelism (in your command this is the -m 1 parameter). Set it to a higher value so that each mapper processes less data and needs less memory (see the example command below).
Also, --split-by is required when the number of mappers is greater than 1.

See the suggestions about choosing a --split-by column here.

An evenly distributed integer column is preferable.
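As an illustrative sketch only, the question's command rewritten along those lines, with more mappers and an explicit split column. The column name id is an assumption; replace it with an evenly distributed integer column (ideally the primary key) of the student table.

bin/sqoop import \
  --connect jdbc:mysql://localhost:3306/testDb \
  --username root --password root \
  --table student \
  --target-dir /user/hadoop/student \
  --driver com.mysql.jdbc.Driver \
  --split-by id \
  -m 4

Each mapper then imports roughly one quarter of the id range, so no single task has to hold the whole result set.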