I have done my homework and searched everywhere, but I can't find any solution for java.lang.NoSuchFieldError: IS_SECURITY_ENABLED.

The CDH parcel contains conflicting jars (jsp-api-2.1-6.1.14.jar and jasper-runtime-5.5.23.jar): each of them contains a different version of the org.apache.Constants class.

The jasper-runtime-* jar does not contain the IS_SECURITY_ENABLED field, so Jetty throws "java.lang.NoSuchFieldError: IS_SECURITY_ENABLED" when it tries to access that field on org.apache.Constants, which ultimately makes the Hadoop job fail.
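To see which of the two jars actually wins on the classpath, a quick check like the one below can help. This is only a diagnostic sketch (the class name ConstantsOrigin is made up for illustration; the class it inspects, org.apache.Constants, is the one named in the error), meant to be run with the same classpath the failing job uses:

    // Diagnostic sketch: print which jar org.apache.Constants is loaded from,
    // and whether that copy actually declares IS_SECURITY_ENABLED.
    public class ConstantsOrigin {
        public static void main(String[] args) throws Exception {
            Class<?> c = Class.forName("org.apache.Constants");
            System.out.println("Loaded from: "
                    + c.getProtectionDomain().getCodeSource().getLocation());
            // getField throws NoSuchFieldException if this copy lacks the field,
            // which corresponds to the NoSuchFieldError seen at runtime.
            System.out.println("Field present: " + c.getField("IS_SECURITY_ENABLED"));
        }
    }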

Is there any option in Oozie to predefine the order in which jars are picked up from the Oozie ShareLib?

Stacktrace

    java.lang.NoSuchFieldError: IS_SECURITY_ENABLED
2017-01-26 09:34:36,853 ERROR [main] org.mortbay.log: Error starting handlers
java.lang.NoSuchFieldError: IS_SECURITY_ENABLED
at org.apache.jasper.compiler.JspRuntimeContext.<init>(JspRuntimeContext.java:197)
at org.apache.jasper.servlet.JspServlet.init(JspServlet.java:150)
at org.mortbay.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:440)
at org.mortbay.jetty.servlet.ServletHolder.doStart(ServletHolder.java:263)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:736)
at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
at org.mortbay.jetty.Server.doStart(Server.java:224)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:895)
at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
at org.apache.hadoop.mapreduce.v2.app.client.MRClientService.serviceStart(MRClientService.java:142)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1128)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$4.run(MRAppMaster.java:1540)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1536)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1469)
2017-01-26 09:34:36,866 INFO [main] org.mortbay.log: Started [email protected]:42435
2017-01-26 09:34:36,876 ERROR [main] org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Webapps failed to start. Ignoring for now:
org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:279)
at org.apache.hadoop.mapreduce.v2.app.client.MRClientService.serviceStart(MRClientService.java:142)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1128)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$4.run(MRAppMaster.java:1540)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1536)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1469)
Caused by: java.io.IOException: Problem in starting http server. Server handlers failed
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:907)
at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
... 10 more
2017-01-26 09:34:36,909 INFO [main] org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2017-01-26 09:34:36,918 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2017-01-26 09:34:36,926 INFO [Socket Reader #1 for port 41967] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 41967
2017-01-26 09:34:36,926 INFO [IPC Server listener on 41967] org.apache.hadoop.ipc.Server: IPC Server listener on 41967: starting
2017-01-26 09:34:37,048 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: nodeBlacklistingEnabled:true
2017-01-26 09:34:37,048 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: maxTaskFailuresPerNode is 3
2017-01-26 09:34:37,048 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: blacklistDisablePercent is 33
2017-01-26 09:34:37,222 ERROR [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Exception while registering
java.lang.NullPointerException
at org.apache.hadoop.mapreduce.v2.app.client.MRClientService.getHttpPort(MRClientService.java:174)
at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.register(RMCommunicator.java:157)
at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.serviceStart(RMCommunicator.java:122)
at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.serviceStart(RMContainerAllocator.java:250)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter.serviceStart(MRAppMaster.java:851)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1131)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$4.run(MRAppMaster.java:1540)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1536)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1469)
2017-01-26 09:34:37,223 INFO [main] org.apache.hadoop.service.AbstractService: Service RMCommunicator failed in state STARTED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.NullPointerException
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.NullPointerException
at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.register(RMCommunicator.java:178)
at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.serviceStart(RMCommunicator.java:122)
at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.serviceStart(RMContainerAllocator.java:250)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter.serviceStart(MRAppMaster.java:851)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1131)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$4.run(MRAppMaster.java:1540)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1536)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1469)
Caused by: java.lang.NullPointerException
at org.apache.hadoop.mapreduce.v2.app.client.MRClientService.getHttpPort(MRClientService.java:174)
at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.register(RMCommunicator.java:157)
... 14 more    

1 Answer

How to Install and Use the ShareLib

By default, the ShareLib should be placed in the home folder in HDFS of the user who started the Oozie web server; this is not necessarily the same user as the one submitting a job. In CDH3 and CDH4, this user is named ‘oozie’. The property in oozie-site.xml for setting the location of the ShareLib is called oozie.service.WorkflowAppService.system.libpath and its default value is /user/${user.name}/share/lib, where ${user.name} gets resolved to the user who started the Oozie server. Hence, the default location to install the ShareLib is /user/oozie/share/lib. More detailed instructions for installing the ShareLib can be found in the CDH4 Oozie documentation here. (A future release of Cloudera Manager will be able to install the ShareLib automatically.)
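For reference, setting that property explicitly in oozie-site.xml would look like the snippet below (the value shown is just the default described above):

    <!-- oozie-site.xml: where Oozie looks for the ShareLib in HDFS.
         ${user.name} resolves to the user who started the Oozie server,
         so with the default 'oozie' user this becomes /user/oozie/share/lib. -->
    <property>
        <name>oozie.service.WorkflowAppService.system.libpath</name>
        <value>/user/${user.name}/share/lib</value>
    </property>

A workflow that needs the ShareLib on its classpath also has to ask for it, typically by setting oozie.use.system.libpath=true in its job.properties.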

Find the full documentation here:

http://blog.cloudera.com/blog/2012/12/how-to-use-the-sharelib-in-apache-oozie/
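The post above covers where the ShareLib lives rather than the jar-ordering question itself. For the ordering part, one approach that is sometimes used (stated here as an assumption, since support depends on the CDH/Oozie/Hadoop versions involved) is to ask the launcher and the MapReduce job to put user-supplied jars ahead of the system ones, for example in the action configuration of workflow.xml:

    <!-- Illustrative workflow.xml action configuration (not from the quoted post).
         oozie.launcher.* properties are forwarded to the Oozie launcher job;
         mapreduce.job.user.classpath.first asks MapReduce to put user jars
         before the cluster classpath when building task classpaths. -->
    <configuration>
        <property>
            <name>oozie.launcher.mapreduce.job.user.classpath.first</name>
            <value>true</value>
        </property>
        <property>
            <name>mapreduce.job.user.classpath.first</name>
            <value>true</value>
        </property>
    </configuration>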