0 votes

I just installed Hadoop 3.3.0 with JDK 1.8. During installation I edited core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml, and hadoop-env.cmd, and created datanode and namenode folders inside the data folder. When I run the `hdfs namenode -format` command, I get the error below. Can someone please help me understand what this error is and how to overcome it?

WARN namenode.NameNode: Encountered exception during format ExitCodeException exitCode=-1073741515:

below is the log:

2020-12-24 08:52:03,258 INFO namenode.NameNode: createNameNode [-format]

2020-12-24 08:52:03,523 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your 
platform... using builtin-java classes where applicable
2020-12-24 08:52:04,872 INFO common.Util: Assuming 'file' scheme for path /C:/hadoop/data/namenode in configuration.
2020-12-24 08:52:04,872 INFO common.Util: Assuming 'file' scheme for path /C:/hadoop/data/namenode in configuration.
2020-12-24 08:52:04,904 INFO namenode.NameNode: Formatting using clusterid: CID-ed417e3b-49d3-4bb5-bf77-341a24a3f9e4
2020-12-24 08:52:04,997 INFO namenode.FSEditLog: Edit logging is async:true
2020-12-24 08:52:05,060 INFO namenode.FSNamesystem: KeyProvider: null
2020-12-24 08:52:05,060 INFO namenode.FSNamesystem: fsLock is fair: true
2020-12-24 08:52:05,060 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2020-12-24 08:52:05,091 INFO namenode.FSNamesystem: fsOwner                = admin (auth:SIMPLE)
2020-12-24 08:52:05,091 INFO namenode.FSNamesystem: supergroup             = supergroup
2020-12-24 08:52:05,091 INFO namenode.FSNamesystem: isPermissionEnabled    = true
2020-12-24 08:52:05,091 INFO namenode.FSNamesystem: isStoragePolicyEnabled = true
2020-12-24 08:52:05,091 INFO namenode.FSNamesystem: HA Enabled: false
2020-12-24 08:52:05,201 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2020-12-24 08:52:05,216 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2020-12-24 08:52:05,216 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2020-12-24 08:52:05,232 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2020-12-24 08:52:05,232 INFO blockmanagement.BlockManager: The block deletion will start around 2020 Dec 24 08:52:05
2020-12-24 08:52:05,232 INFO util.GSet: Computing capacity for map BlocksMap
2020-12-24 08:52:05,232 INFO util.GSet: VM type       = 64-bit
2020-12-24 08:52:05,232 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
2020-12-24 08:52:05,232 INFO util.GSet: capacity      = 2^21 = 2097152 entries
2020-12-24 08:52:05,263 INFO blockmanagement.BlockManager: Storage policy satisfier is disabled
2020-12-24 08:52:05,263 INFO blockmanagement.BlockManager: dfs.block.access.token.enable = false
2020-12-24 08:52:05,279 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.999
2020-12-24 08:52:05,279 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
2020-12-24 08:52:05,279 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
2020-12-24 08:52:05,294 INFO blockmanagement.BlockManager: defaultReplication         = 1
2020-12-24 08:52:05,294 INFO blockmanagement.BlockManager: maxReplication             = 512
2020-12-24 08:52:05,294 INFO blockmanagement.BlockManager: minReplication             = 1
2020-12-24 08:52:05,294 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
2020-12-24 08:52:05,294 INFO blockmanagement.BlockManager: redundancyRecheckInterval  = 3000ms
2020-12-24 08:52:05,294 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
2020-12-24 08:52:05,294 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2020-12-24 08:52:05,357 INFO namenode.FSDirectory: GLOBAL serial map: bits=29 maxEntries=536870911
2020-12-24 08:52:05,357 INFO namenode.FSDirectory: USER serial map: bits=24 maxEntries=16777215
2020-12-24 08:52:05,357 INFO namenode.FSDirectory: GROUP serial map: bits=24 maxEntries=16777215
2020-12-24 08:52:05,357 INFO namenode.FSDirectory: XATTR serial map: bits=24 maxEntries=16777215
2020-12-24 08:52:05,404 INFO util.GSet: Computing capacity for map INodeMap
2020-12-24 08:52:05,404 INFO util.GSet: VM type       = 64-bit
2020-12-24 08:52:05,404 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
2020-12-24 08:52:05,404 INFO util.GSet: capacity      = 2^20 = 1048576 entries
2020-12-24 08:52:05,435 INFO namenode.FSDirectory: ACLs enabled? true
2020-12-24 08:52:05,435 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true
2020-12-24 08:52:05,435 INFO namenode.FSDirectory: XAttrs enabled? true
2020-12-24 08:52:05,435 INFO namenode.NameNode: Caching file names occurring more than 10 times
2020-12-24 08:52:05,451 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
2020-12-24 08:52:05,451 INFO snapshot.SnapshotManager: SkipList is disabled
2020-12-24 08:52:05,466 INFO util.GSet: Computing capacity for map cachedBlocks
2020-12-24 08:52:05,466 INFO util.GSet: VM type       = 64-bit
2020-12-24 08:52:05,466 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
2020-12-24 08:52:05,466 INFO util.GSet: capacity      = 2^18 = 262144 entries
2020-12-24 08:52:05,482 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2020-12-24 08:52:05,482 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2020-12-24 08:52:05,482 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2020-12-24 08:52:05,497 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
2020-12-24 08:52:05,497 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2020-12-24 08:52:05,513 INFO util.GSet: Computing capacity for map NameNodeRetryCache
2020-12-24 08:52:05,513 INFO util.GSet: VM type       = 64-bit
2020-12-24 08:52:05,513 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
2020-12-24 08:52:05,513 INFO util.GSet: capacity      = 2^15 = 32768 entries
Re-format filesystem in Storage Directory root= C:\hadoop\data\namenode; location= null ? (Y or N) Y
2020-12-24 08:52:13,682 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1539316638-192.168.0.198-1608780133666
2020-12-24 08:52:13,682 INFO common.Storage: Will remove files: []
2020-12-24 08:52:13,744 WARN namenode.NameNode: Encountered exception during format
ExitCodeException exitCode=-1073741515:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:1008)
        at org.apache.hadoop.util.Shell.run(Shell.java:901)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1213)
        at org.apache.hadoop.util.Shell.execCommand(Shell.java:1307)
        at org.apache.hadoop.util.Shell.execCommand(Shell.java:1289)
        at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1341)
        at org.apache.hadoop.fs.FileUtil.execSetPermission(FileUtil.java:1332)
        at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:1285)
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:456)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:591)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:613)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:188)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1271)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1713)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1821)
2020-12-24 08:52:13,760 ERROR namenode.NameNode: Failed to start namenode.
ExitCodeException exitCode=-1073741515:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:1008)
        at org.apache.hadoop.util.Shell.run(Shell.java:901)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1213)
        at org.apache.hadoop.util.Shell.execCommand(Shell.java:1307)
        at org.apache.hadoop.util.Shell.execCommand(Shell.java:1289)
        at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1341)
        at org.apache.hadoop.fs.FileUtil.execSetPermission(FileUtil.java:1332)
        at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:1285)
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:456)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:591)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:613)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:188)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1271)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1713)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1821)
2020-12-24 08:52:13,760 INFO util.ExitUtil: Exiting with status 1: ExitCodeException exitCode=-1073741515:
2020-12-24 08:52:13,775 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at DESKTOP-S0HFRUB/192.168.0.198
************************************************************/

2 Answers

0 votes

I got this error too, and solved it with the steps below.

Try running winutils.exe from the Hadoop bin folder.

If any DLL is missing, it will tell you which one; download that DLL, paste it into the bin folder, and run the command again.

For me msvcr100.dll was missing; after copying it to the bin folder, the format worked: https://www.dll-files.com/msvcr100.dll.html
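As a side note, the exit code in the log already points at a missing DLL: -1073741515 is the signed 32-bit form of the Windows NTSTATUS code 0xC0000135, which is STATUS_DLL_NOT_FOUND. A quick sketch to verify the conversion:

```python
# Interpret the NameNode's exit code as an unsigned 32-bit NTSTATUS value.
exit_code = -1073741515
ntstatus = exit_code & 0xFFFFFFFF  # two's-complement -> unsigned 32-bit
print(hex(ntstatus))  # 0xc0000135, i.e. STATUS_DLL_NOT_FOUND on Windows
```

So whenever you see this particular exit code from a Hadoop process on Windows, a missing runtime DLL (or a missing/broken winutils.exe) is the first thing to check.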

0 votes

It might be a problem with the native Hadoop libraries, as the warning in your log suggests; don't ignore it. Please share your .bashrc (or its Windows equivalent) for better help.

This line should be present in that file:

export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$HADOOP_HOME/lib/native"
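Since you are on Windows, the equivalent setting would go in hadoop-env.cmd (which you already edited) rather than .bashrc. A sketch, assuming HADOOP_HOME points at your install directory:

```shell
@rem hadoop-env.cmd -- Windows equivalent of the .bashrc line above
set HADOOP_OPTS=%HADOOP_OPTS% -Djava.library.path=%HADOOP_HOME%\lib\native
```

Note that on Windows the NativeCodeLoader warning is often benign (the native library is primarily built for Linux), so fix the missing-DLL problem first before chasing this.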