
I am trying to write the Spark logs to a custom location on the edge node, but my log4j.properties file is overridden by the default cluster properties file at spark2-client/conf/log4j.properties.

Please help me fix this.

Below are the details:

I am using the following versions: Spark 2.1.1.2.6.2.25-1, Scala 2.11.8

Below is my spark-submit command:

spark-submit \
--files file:///home/abcdadevadmin/spark_jar/log4j/log4j.properties \
--class com.abc.datalake.ingestion.DataCleansingValidation \
--master yarn --deploy-mode cluster \
--conf spark.executor.memory=12G \
--conf spark.serializer=org.apache.spark.serializer.KryoSerializer \
--conf spark.driver.memory=2g \
--conf salience=no \
--conf spark.executor.instances=10 \
--conf spark.executor.cores=3 \
--conf spark.rule_src_path='adl://abcdadatalakedev.azuredatalakestore.net/Intake/CDCTest/Meta_RV' \
--conf spark.num_of_partition=200 \
--conf 'spark.eventLog.dir=file:///home/abcdadevadmin/spark_jar/logs/' \
adl://abcdadatalakedev.azuredatalakestore.net/Intake/jar/DataValidationFrameWorkBaselineCDC.jar cat_1 

Below is my properties file:

# Set everything to be logged to the console
log4j.rootCategory=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

# Settings to quiet third party logs that are too verbose
log4j.logger.org.eclipse.jetty=WARN
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO

# Set everything to be logged to the console
log4j.rootCategory=DEBUG, console, FILE
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

# User log
log4j.logger.DataValidationFramework=DEBUG,ROLLINGFILE
log4j.appender.ROLLINGFILE=org.apache.log4j.DailyRollingFileAppender
log4j.appender.ROLLINGFILE.File=file:///home/abcdadevadmin/spark_jar/logs/log.out
log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
log4j.appender.ROLLINGFILE.MaxBackupIndex=10
log4j.appender.ROLLINGFILE.MaxFileSize=10MB
log4j.appender.ROLLINGFILE.DatePattern='.'yyyy-MM-dd-HH-mm

Below is the log from the Spark job.

In the log below, the -Dlog4j.configuration property is set twice: one entry points to my custom properties file and the other to the default cluster properties file.

SLF4J: Found binding in [jar:file:/usr/hdp/2.6.2.25-1/spark2/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.2.25-1/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
 14291046      4 -r-x------   1 yarn     hadoop       3635 Apr  6 05:34 ./__spark_conf__/log4j.properties
 14291064      8 -r-x------   1 yarn     hadoop       4221 Apr  6 05:34 ./__spark_conf__/task-log4j.properties
    exec /bin/bash -c "LD_LIBRARY_PATH="/usr/hdp/current/hadoop-client/lib/native:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:$LD_LIBRARY_PATH" $JAVA_HOME/bin/java -server -Xmx12288m 
 '-Dhdp.version=' 
 '-Detwlogger.component=sparkexecutor' 
 '-DlogFilter.filename=SparkLogFilters.xml' 
 '-Dlog4j.configuration=file:/home/abcdadevadmin/spark_jar/log4j/log4j.properties' 
 '-DpatternGroup.filename=SparkPatternGroups.xml' 
 '-Dlog4jspark.root.logger=INFO,console,RFA,ETW,Anonymizer' 
 '-Dlog4jspark.log.dir=/var/log/sparkapp/\${user.name}' 
 '-Dlog4jspark.log.file=sparkexecutor.log' 
 '-Dlog4j.configuration=file:/usr/hdp/current/spark2-client/conf/log4j.properties' 
 '-Djavax.xml.parsers.SAXParserFactory=com.sun.org.apache.xerces.internal.jaxp.SAXParserFactoryImpl' -Djava.io.tmpdir=$PWD/tmp 
 '-Dspark.driver.port=34369' 
 '-Dspark.history.ui.port=18080' 
 '-Dspark.ui.port=0' -Dspark.yarn.app.container.log.dir=/mnt/resource/hadoop/yarn/log/application_1522782395512_1033/container_1522782395512_1033_01_000010 -XX:OnOutOfMemoryError='kill %p' org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://[email protected]:34369 --executor-id 9 --hostname wn8-da0001.zu4isz2uwtcuhdu3c5h0tllmhh.cx.internal.cloudapp.net --cores 3 --app-id application_1522782395512_1033 --user-class-path file:$PWD/__app__.jar 1>/mnt/resource/hadoop/yarn/log/application_1522782395512_1033/container_1522782395512_1033_01_000010/stdout 2>/mnt/resource/hadoop/yarn/log/application_1522782395512_1033/container_1522782395512_1033_01_000010/stderr"

I have also tried the options below, but no luck:

--conf 'spark.executor.extraJavaOptions=Dlog4j.configuration=file:///home/abcdadevadmin/spark_jar/log4j/log4j.properties'
--driver-java-options '-Dlog4j.configuration=file:///home/abcdadevadmin/spark_jar/log4j/log4j.properties' 

1 Answer


If you are using cluster deploy mode, you have to point to a local path on the driver and the executors, which is the container's working (base) directory.

Try this:

--conf 'spark.executor.extraJavaOptions=-Dlog4j.configuration=file:./log4j.properties'
--conf 'spark.driver.extraJavaOptions=-Dlog4j.configuration=file:./log4j.properties' 
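
Files passed with --files are copied into each YARN container's working directory on both the driver and the executors, which is why the relative path file:./log4j.properties resolves to the shipped copy in cluster mode.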

Don't forget to distribute your file with:

--files file:///home/abcdadevadmin/spark_jar/log4j/log4j.properties
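
Put together, the submit command from the question could look roughly like this (a sketch only; the remaining --conf settings from the original command are unchanged and omitted for brevity):

# ship the custom log4j.properties and point both driver and executors at the local copy
spark-submit \
--files file:///home/abcdadevadmin/spark_jar/log4j/log4j.properties \
--conf 'spark.driver.extraJavaOptions=-Dlog4j.configuration=file:./log4j.properties' \
--conf 'spark.executor.extraJavaOptions=-Dlog4j.configuration=file:./log4j.properties' \
--class com.abc.datalake.ingestion.DataCleansingValidation \
--master yarn --deploy-mode cluster \
adl://abcdadatalakedev.azuredatalakestore.net/Intake/jar/DataValidationFrameWorkBaselineCDC.jar cat_1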