0
votes

I'm trying to parse a log file in Spark 1.6 using Scala. Here is the sample data:

2017-02-04 04:48:11,123 DEBUG [org.quartz.core.QuartzSchedulerThread] - <batch acquisition of 0 triggers>
2017-02-04 04:48:20,892 INFO [org.jasig.inspektr.audit.support.Slf4jLoggingAuditTrailManager] - <Audit trail record BEGIN
=============================================================
WHO: audit:unknown
WHAT: TGT-7d937-yRqp6ObM7JOtkUZ7Ff4yEo95-casino1.example.org
ACTION: TICKET_GRANTING_TICKET_DESTROYED
APPLICATION: CASINO
WHEN: Sat Feb 04 04:48:20 AEDT 2017
CLIENT IP ADDRESS: 160.50.201.557
SERVER IP ADDRESS: login.cfu.asg
=============================================================

>
2017-02-04 04:48:32,165 INFO [org.jasig.cas.services.DefaultServicesManagerImpl] - <Reloading registered services.>
2017-02-04 04:48:32,167 INFO [org.jasig.casino.services.DefaultServicesManagerImpl] - <Loaded 2 services.>
2017-02-04 04:48:38,889 DEBUG [org.quartz.core.QuartzSchedulerThread] - <batch acquisition of 1 triggers>
2017-02-04 04:48:52,790 DEBUG [org.quartz.core.QuartzSchedulerThread] - <batch acquisition of 0 triggers>
2017-02-04 04:48:52,790 DEBUG [org.quartz.core.JobRunShell] - <Calling execute on job DEFAULT.serviceRegistryReloaderJobDetail>
2017-02-04 04:48:52,790 INFO [org.jasig.casino.services.DefaultServicesManagerImpl] - <Reloading registered services.>
2017-02-04 04:48:52,792 DEBUG [org.jasig.casino.services.DefaultServicesManagerImpl] - <Adding registered service ^(https?|imaps?)://.*>
2017-02-04 04:48:52,792 DEBUG [org.jasig.casino.services.DefaultServicesManagerImpl] - <Adding registered service
2017-02-04 04:48:52,792 INFO [org.jasig.casino.services.DefaultServicesManagerImpl] - <Loaded 2 services.>
2017-02-04 04:49:14,365 INFO [org.jasig.casino.services.DefaultServicesManagerImpl] - <Reloading registered services.>
2017-02-04 04:49:14,366 INFO [org.jasig.casino.services.DefaultServicesManagerImpl] - <Loaded 2 services.>
2017-02-04 04:49:19,699 DEBUG [org.quartz.core.QuartzSchedulerThread] - <batch acquisition of 0 triggers>
2017-02-04 04:49:43,465 DEBUG [org.quartz.core.QuartzSchedulerThread] - <batch acquisition of 0 triggers>
2017-02-04 04:50:00,978 INFO [org.jasig.casino.authentication.PolicyBasedAuthenticationManager] - <JaasAuthenticationHandler successfully authenticated >
2017-02-04 04:50:00,978 INFO [org.jasig.casino.authentication.PolicyBasedAuthenticationManager] - <Authenticated 3785973 with credentials.>
2017-02-04 04:50:00,978 INFO [org.jasig.inspektr.nhgij.support.Slf4jLogggbhAuditTrailManaver] - <Audit trail record BEGIN
=============================================================
WHO: z3705z73
WHAT: supplied credentials: [d37c5973]
ACTION: AUTHENTICATION_SUCCESS
APPLICATION: casinoINO
WHEN: Sat Feb 04 04:50:00 AEDT 2017
CLIENT IP ADDRESS: 101.181.28.555
SERVER IP ADDRESS: login.cfu.asg
=============================================================

>

And the data goes on; there can be other log data in between these patterns, which is not relevant for my parsing. I have about 40 GB of files, each containing one day's data.

All these files are gzip compressed. I tried using sc.wholeTextFiles to get a pair RDD, but I ran into Java heap space errors, as each file is between 400 MB and 800 MB uncompressed.

So I started using sc.textFile and experimented with reading one file. I can create an RDD[String], and luckily sc.textFile does not give me any heap space issues when I run an action on this RDD.

Here is the code I tried:

val casinop2 = sc.wholeTextFiles("/logdata/casino/catalina.out-20150228.gz")

// wholeTextFiles yields (path, content) pairs, so split the content part into lines
val casop = casinop2.flatMap { case (_, content) => content.split("\n") }
  .filter(x => !(x.contains("Reloading registered services") || x.contains("Loaded 2 services.") || x.contains("DEBUG") || x.contains("ERROR") || x.contains("java.lang.RuntimeException") || x.contains("Caused by:") || x.contains("Granted ticket") || x.contains("java.lang.IllegalStateException") || x.startsWith("\t") || x.contains("org.jasig.cas.authentication.PolicyBasedAuthenticationManager")))

import scala.util.matching.Regex

val pattern = new Regex("""((\d{4})-(\d{2})-\d{2}\s\d{2}:\d{2}:\d{2}),\d{3}\s+(\w+)\s+\[(.*)\]\s+\-\s+\<.*\s\=*\s+([W][H][O]\:)\s+(.*)\s+([W][H][A][T]\:)\s+(.*)\s+([A][C][T][I][O][N]\:)\s+(.*)\s+([A][P][P][L][I][C][A][T][I][O][N]\:)\s+(.*)\s+([W][H][E][N]\:)\s+(.*)\s+([A-Z\s]{17}\:)\s+(.*)\s+([A-Z\s]{17}\:)\s+(.*)\s+\=*\s\s\>""")

case class MLog(datetime: String, message: String, process: String, who: String, what: String, action: String, application: String, when: String, clientipaddress: String, serveripaddress: String,year: String, month: String)

// join the collected lines back into one big string so the regex sees multiline records
pattern.findAllMatchIn(casop.collect.mkString("\n")).toList

Now the last statement throws a heap space error. The reason I want the RDD in a single string variable is that the regex needs multiline input, not a single line. For single-line matching, I would use map, flatMap, etc.
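Just to illustrate, here is a rough sketch of the kind of per-file matching I am after, where the regex sees one whole file's content at a time instead of one giant collected string (it still keeps whole files in memory on the executors, so it may hit the same limits):

// Sketch only: run the multiline regex per file on the executors instead of
// collecting everything to the driver. "content" is one whole file's text.
val perFile = sc.wholeTextFiles("/logdata/casino/catalina.out-20150228.gz")

val matches = perFile.flatMap { case (_, content) =>
  pattern.findAllMatchIn(content).map(_.matched)
}

matches.take(5).foreach(println)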

The output I should get from the log file is:

|2017-02-04 04:54:41|   INFO|org.jasig.inspekt...|     s4542732|supplied credenti...|AUTHENTICATION_SU...|        CAS|Sat Feb 04 04:54:...|  175.163.28.77|login.vu.edu.au|2017|   02|
|2017-02-04 04:54:41|   INFO|org.jasig.inspekt...|     s4542732|TGT-78959-EX63Wf2...|TICKET_GRANTING_T...|        CAS|Sat Feb 04 04:54:...|  175.163.28.77|login.vu.edu.au|2017|   02|
|2017-02-04 04:54:41|   INFO|org.jasig.inspekt...|      4542732|ST-474481-jTxCJFB...|SERVICE_TICKET_CR...|        CAS|Sat Feb 04 04:54:...|  175.163.28.77|login.vu.edu.au|2017|   02|
|2017-02-04 04:54:44|   INFO|org.jasig.inspekt...|audit:unknown|ST-474481-jTxCJFB...|SERVICE_TICKET_VA...|        CAS|Sat Feb 04 04:54:...|  203.13.194.68|login.vu.edu.au|2017|   02|
|2017-02-04 04:55:02|   INFO|org.jasig.inspekt...|     s3785573|supplied credenti...|AUTHENTICATION_SU...|        CAS|Sat Feb 04 04:55:...| 101.181.28.125|login.vu.edu.au|2017|   02|
|2017-02-04 04:55:02|   INFO|org.jasig.inspekt...|     s3785573|TGT-78960-yWaWkcN...|TICKET_GRANTING_T...|        CAS|Sat Feb 04 04:55:...| 101.181.28.125|login.vu.edu.au|2017|   02|
|2017-02-04 04:55:02|   INFO|org.jasig.inspekt...|      3785573|ST-474482-rARxdUG...|SERVICE_TICKET_CR...|        CAS|Sat Feb 04 04:55:...| 101.181.28.125|login.vu.edu.au|2017|   02|
|2017-02-04 04:55:02|   INFO|org.jasig.inspekt...|audit:unknown|ST-474482-rARxdUG...|SERVICE_TICKET_VA...|        CAS|Sat Feb 04 04:55:...|  203.13.194.68|login.vu.edu.au|2017|   02|
+-------------------+-------+--------------------+-------------+--------------------+--------------------+-----------+--------------------+---------------+---------------+----+-----+

How can I read a multiline input and feed it to the regex?

Have you tried increasing your heap size? E.g. --executor-memory 10g – Allan
Satisfied with my answer? I hope that it helped you! – Allan
Thanks for the improved regex. I tried with --executor-memory 10g, but it still throws "java.lang.OutOfMemoryError: GC overhead limit exceeded". – Marsi
For some reason the garbage collector is taking an excessive amount of time (98% of the CPU time of the process) and recovering very little memory each time (2% of the heap). This effectively means that your program stops making progress and is busy running only garbage collection. To prevent your application from soaking up CPU time without getting anything done, the JVM throws this error so that you have a chance of diagnosing the problem. It happens in code where tons of temporary objects are created in an already very memory-constrained environment. – Allan
@Allan, setting the heap size to 10 GB did not solve the OutOfMemoryError, and I couldn't figure out what else could be going wrong. I used a workaround to continue with my processing: I read the large input files with sc.textFile, filter them, save the result to a temporary location, and then read that back in with sc.wholeTextFiles. The temporary files were less than half the size of the originals, so they did not throw the OutOfMemoryError. – Marsi
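In code, that workaround looks roughly like this (the paths and filter conditions are placeholders, not the exact code used):

// Sketch of the workaround from the comment above: pre-filter the big files
// line by line, write the much smaller result out, then re-read it with
// wholeTextFiles so the regex can see whole records.
val filtered = sc.textFile("/logdata/casino/catalina.out-20150228.gz")
  .filter(line => !(line.contains("DEBUG") || line.startsWith("\t")))

filtered.saveAsTextFile("/tmp/casino-filtered")

val records = sc.wholeTextFiles("/tmp/casino-filtered")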

1 Answer

1
vote

I have fixed and improved your regex, and it should now work for your last log records, which span several lines.

The regex is the following beast:

(\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}),\d{3}\s+(\w+)\s+\[(.*)\]\s+\-\s+<[^>]*\s\=*\s+WHO\:\s+([^>\n]*)\s+WHAT\:\s+([^>\n]*)\s+ACTION\:\s+([^>\n]*)\s+APPLICATION\:\s+([^>\n]*)\s+WHEN\:\s+([^>\n]*)\s+([A-Z\s]{17}\:)\s+([^>\n]*)\s+([A-Z\s]{17}\:)\s+([^>\n]*)\s+\=*\s\s>

I have tried it with your logs by using the following replacement pattern that you should adapt depending on your exact needs:

\1 | \2 | \3 | WHO:\4 | WHAT: \5 | ACTION: \6 | APPLICATION: \7 | WHEN: \8 | \9  $10 | $11  $12

Here is the result (screenshots in the original post: "before changes" and "after changes").

Last but not least, you might have to increase your heap size: --executor-memory 10g
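For completeness, here is a sketch of how the fixed regex could be wired up to the MLog case class from your question. The group numbers follow the regex above, and year and month are sliced out of the datetime string; this is untested against your full data, so treat it as a starting point rather than a finished implementation:

import scala.util.matching.Regex

// Sketch only: the fixed regex as a Scala literal, with its 12 capture groups
// mapped onto the MLog case class from the question.
val fixed: Regex =
  """(\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}),\d{3}\s+(\w+)\s+\[(.*)\]\s+\-\s+<[^>]*\s\=*\s+WHO\:\s+([^>\n]*)\s+WHAT\:\s+([^>\n]*)\s+ACTION\:\s+([^>\n]*)\s+APPLICATION\:\s+([^>\n]*)\s+WHEN\:\s+([^>\n]*)\s+([A-Z\s]{17}\:)\s+([^>\n]*)\s+([A-Z\s]{17}\:)\s+([^>\n]*)\s+\=*\s\s>""".r

def toMLog(record: String): Iterator[MLog] =
  fixed.findAllMatchIn(record).map { m =>
    MLog(
      datetime        = m.group(1),
      message         = m.group(2),        // log level, e.g. INFO
      process         = m.group(3),
      who             = m.group(4),
      what            = m.group(5),
      action          = m.group(6),
      application     = m.group(7),
      when            = m.group(8),
      clientipaddress = m.group(10),       // group 9 is the "CLIENT IP ADDRESS:" label
      serveripaddress = m.group(12),       // group 11 is the "SERVER IP ADDRESS:" label
      year            = m.group(1).substring(0, 4),
      month           = m.group(1).substring(5, 7)
    )
  }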