It is common for Hadoop services to look for jars in HDFS because all nodes in the cluster can access files in HDFS. This is important if the MapReduce job being kicked off by the Hadoop service, in this case Sqoop, depends on those jars. Remember, the Mappers run on DataNodes, not the NameNode, even though you are (probably) running the Sqoop command from the NameNode. Putting the jars on HDFS is not the only possible solution to this problem, but it is a sensible one.
Now we can deal with the actual error. At least one, and probably all, of your Mappers are unable to find a jar they need. That means either the jar does not exist at that HDFS path or the user trying to access it does not have the required permissions. First, check whether the file exists by running hadoop fs -ls /home/SqoopUser/sqoop-1.4.3-cdh4.4.0/sqoop-1.4.3-cdh4.4.0.jar (note the leading slash) as a user with superuser privileges on the cluster. If it does not exist, put it there with hadoop fs -put {jarLocationOn/NameNode/fileSystem/sqoop-1.4.3-cdh4.4.0.jar} /home/SqoopUser/sqoop-1.4.3-cdh4.4.0/sqoop-1.4.3-cdh4.4.0.jar. I haven't worked with Cloudera specifically, so you will have to track down the jar's location on the NameNode yourself. If Cloudera is anything like Hortonworks, there will be occasional issues like this where the cluster deployment scripts/documentation miss a couple of required steps to get everything up and running.
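Putting that together, a rough sketch (the local source path /usr/lib/sqoop/... is an assumption for illustration; substitute wherever the jar actually lives on your NameNode):

    # Check whether the jar exists at the expected HDFS path
    hadoop fs -ls /home/SqoopUser/sqoop-1.4.3-cdh4.4.0/sqoop-1.4.3-cdh4.4.0.jar

    # If it is missing, upload it from the local filesystem.
    # One way to locate it: find / -name 'sqoop-*.jar' 2>/dev/null
    hadoop fs -put /usr/lib/sqoop/sqoop-1.4.3-cdh4.4.0.jar \
        /home/SqoopUser/sqoop-1.4.3-cdh4.4.0/sqoop-1.4.3-cdh4.4.0.jar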
Now that we know the file exists, we can check whether the user SqoopUser has permission to read it. Again, run hadoop fs -ls /home/SqoopUser/sqoop-1.4.3-cdh4.4.0/sqoop-1.4.3-cdh4.4.0.jar and look at the file permissions. Also check the permissions of the directories containing the jar. Explaining POSIX file permissions is outside the scope of this answer, so if you are not familiar with them you might need to read up. One important note: HDFS does not have its own concept of groups; it uses the groups of the underlying OS. Just make sure the jar is readable by SqoopUser and all of the parent directories are executable by SqoopUser. Indiscriminate use of chmod 777 will take care of this, i.e. hadoop fs -chmod 777 /home/SqoopUser/sqoop-1.4.3-cdh4.4.0/sqoop-1.4.3-cdh4.4.0.jar. But of course be more discerning about the permissions you grant if your environment requires it.
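For example, a more targeted alternative to 777 might look like the following, run as an HDFS superuser (assuming SqoopUser should own its own home directory):

    # Inspect permissions of the jar and of each directory on the path
    hadoop fs -ls /home/SqoopUser/sqoop-1.4.3-cdh4.4.0
    hadoop fs -ls /home/SqoopUser

    # Give SqoopUser ownership, make the directories traversable
    # and the jar itself readable
    hadoop fs -chown -R SqoopUser /home/SqoopUser
    hadoop fs -chmod 755 /home/SqoopUser /home/SqoopUser/sqoop-1.4.3-cdh4.4.0
    hadoop fs -chmod 644 /home/SqoopUser/sqoop-1.4.3-cdh4.4.0/sqoop-1.4.3-cdh4.4.0.jar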
If you find file permissions are tripping you up more often than you would like, the nuclear option is to set dfs.permissions to false in hdfs-site.xml. This disables permission checking entirely, letting any user access any file on HDFS. That can be very useful for rapid development, but it is safer to leave dfs.permissions enabled.
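If you do go that route, the setting looks like this in hdfs-site.xml (the NameNode must be restarted for it to take effect; on newer Hadoop versions the property is named dfs.permissions.enabled):

    <property>
      <name>dfs.permissions</name>
      <value>false</value>
    </property>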