5
votes

I have installed and configured Hadoop 2.7.0 on Windows.

I can successfully run "sbin\start-dfs": the DataNode and NameNode both start, and I can create directories and add files in the Hadoop file system.

But when I run "sbin\start-yarn", the "resourcemanager" window shows no error; it is YARN's "nodemanager" that fails, with this error:

15/06/21 17:26:49 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
15/06/21 17:26:49 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
15/06/21 17:26:49 INFO impl.MetricsSystemImpl: NodeManager metrics system started
15/06/21 17:26:49 FATAL nodemanager.NodeManager: Error starting NodeManager
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode0(Ljava/lang/String;I)V
        at org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode0(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode(NativeIO.java:524)
        at org.apache.hadoop.fs.RawLocalFileSystem.mkOneDirWithMode(RawLocalFileSystem.java:473)
        at org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:526)
        at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:504)
        at org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:1064)
        at org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:161)
        at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:197)
        at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:730)
        at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:726)
        at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
        at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:726)
        at org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.createDir(DirectoryCollection.java:365)
        at org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.createNonExistentDirs(DirectoryCollection.java:199)
        at org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.serviceInit(LocalDirsHandlerService.java:152)
        at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
        at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
        at org.apache.hadoop.yarn.server.nodemanager.NodeHealthCheckerService.serviceInit(NodeHealthCheckerService.java:48)
        at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
        at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
        at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:254)
        at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
        at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:463)
        at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:511)
15/06/21 17:26:49 INFO impl.MetricsSystemImpl: Stopping NodeManager metrics system...
15/06/21 17:26:49 INFO impl.MetricsSystemImpl: NodeManager metrics system stopped.
15/06/21 17:26:49 INFO impl.MetricsSystemImpl: NodeManager metrics system shutdown complete.
15/06/21 17:26:49 INFO nodemanager.NodeManager: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NodeManager at idea-PC/27.4.177.205
************************************************************/

I had faced a similar problem with "sbin\start-dfs" and tried different things; it appears it was solved once I added Hadoop's "bin" and "sbin" directories to my PATH.

Can you please suggest a solution for this YARN problem?


6 Answers

2
votes

There should be a %HADOOP_HOME%\bin\hadoop.dll that contains the native method, and %HADOOP_HOME%\bin should be on your PATH. If you built from source, make sure hadoop.dll was actually built and placed there.
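
If you want to check this programmatically, here is a minimal Java sketch (the class name FindHadoopDll is mine, not part of Hadoop) that scans java.library.path, which on Windows is derived from PATH, for hadoop.dll:

import java.io.File;

// Hypothetical diagnostic class: scan each java.library.path entry for hadoop.dll.
public class FindHadoopDll {
    public static void main(String[] args) {
        String libPath = System.getProperty("java.library.path");
        boolean found = false;
        for (String dir : libPath.split(File.pathSeparator)) {
            File dll = new File(dir, "hadoop.dll");
            if (dll.isFile()) {
                System.out.println("Found: " + dll.getAbsolutePath());
                found = true;
            }
        }
        if (!found) {
            System.out.println("hadoop.dll not found on java.library.path: " + libPath);
        }
    }
}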

1
votes

If you look at the history of the native method that is missing here, you will see that it was added recently.

So this error means that you are using a newer version of Hadoop, but your hadoop.dll is from an older version of Hadoop.

Either obtaining or building a newer hadoop.dll, or downgrading Hadoop, should avoid this problem.

For me downgrading to Hadoop 2.3.0 did the trick.
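
As a quick sanity check, a small sketch (assuming hadoop-common is on the classpath; the class name is mine) can print which Hadoop version your jars report and whether the native library could be loaded at all. If the jar version is newer than the one your hadoop.dll was built from, you get exactly this kind of mismatch:

import org.apache.hadoop.util.NativeCodeLoader;
import org.apache.hadoop.util.VersionInfo;

// Hypothetical diagnostic class: show the jar-side Hadoop version and
// whether the native library (hadoop.dll on Windows) was loaded.
public class CheckHadoopVersion {
    public static void main(String[] args) {
        System.out.println("Hadoop jars: " + VersionInfo.getVersion());
        System.out.println("Native code loaded: " + NativeCodeLoader.isNativeCodeLoaded());
    }
}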

1
votes

In my case I have Hadoop 3.1.1, and I replaced the bin folder with the one from https://github.com/s911415/apache-hadoop-3.1.0-winutils. With that dll I managed to start YARN with one node (a single-node YARN cluster). You can find the settings I followed here. You also have to set up hdfs-site.xml like this:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/hadoop-3.1.1/data/namenode</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/hadoop-3.1.1/data/datanode</value>
    </property>
</configuration>
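
To verify those values are actually being picked up, a minimal sketch (assuming hadoop-common is on the classpath and hdfs-site.xml is reachable as a classpath resource; the class name is mine) is:

import org.apache.hadoop.conf.Configuration;

// Hypothetical check: load hdfs-site.xml and echo the three properties above.
public class CheckHdfsSite {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.addResource("hdfs-site.xml"); // read from the classpath
        System.out.println("dfs.replication = " + conf.get("dfs.replication"));
        System.out.println("dfs.namenode.name.dir = " + conf.get("dfs.namenode.name.dir"));
        System.out.println("dfs.datanode.data.dir = " + conf.get("dfs.datanode.data.dir"));
    }
}
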
0
votes

In my case the exception occurred because Hadoop didn't find the precompiled hadoop DLL. I put the path of hadoop.dll's folder into the PATH environment variable and it worked.

The description of the exception you got is misleading; the original exception is thrown in the java.lang.ClassLoader class: throw new UnsatisfiedLinkError("no " + name + " in java.library.path");
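
You can reproduce that underlying loader error with a few lines of plain Java (no Hadoop classes needed); if hadoop.dll's folder is not on PATH / java.library.path, the catch block prints the "no hadoop in java.library.path" message:

// Minimal reproduction sketch: System.loadLibrary resolves "hadoop" to
// hadoop.dll on Windows, searching the java.library.path entries.
public class LoadHadoopDll {
    public static void main(String[] args) {
        try {
            System.loadLibrary("hadoop");
            System.out.println("hadoop.dll loaded successfully");
        } catch (UnsatisfiedLinkError e) {
            System.out.println("Load failed: " + e.getMessage());
        }
    }
}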

0
votes

In my case, the Hadoop path was already added to the PATH variable, and it was giving the error "Another instance of Derby has already booted".

0
votes

For me, setting the VM argument -Djava.library.path=C:\winutils-master\hadoop-3.0.0 resolved the issue.