
Splunk Version 6.5.3.1

Splunk Build bf0ff7c2ab8b

Jenkins Version 1.642.3 or 2.32.3

On every Jenkins master, there's a Splunk (universal forwarder) process running.

$ ps -eAf|grep splunk
splunk    58877      1 20 Feb16 ?        42-23:27:37 splunkd -p 8089 restart
splunk    58878  58877  0 Feb16 ?        00:00:00 [splunkd pid=58877] splunkd -p 8089 restart [process-runner]
asangal   91197  91175  0 12:38 pts/2    00:00:00 grep --color=auto splunk

The Splunk process monitors/scans the log file of every Jenkins job we have in our instance, i.e. the $JENKINS_HOME/jobs/<JOB_NAME>/builds/<BUILD_NUMBER>/log file.

$ pwd
/opt/splunkforwarder/etc/system/local
$ cat inputs.conf
[default]
host = jenkins-master-project-prod-1-609 

[monitor:///var/log/jenkins]
index = some-jenkins-prod-index-in-splunk
disabled = False
recursive = True

[monitor:///home/jenkins/jobs/.../builds/.../log]
index = some-jenkins-prod-index-in-splunk
disabled = False
recursive = True
crcSalt = <SOURCE>

... more config here ...

In the Splunk GUI, when I run a simple query to look for anything Splunk captured for the same index, coming from any source (file), I do see valid output. Note: the actual row output is truncated. As the bar chart shows, the data is there and the table is populated. [screenshot]

In my Jenkins jobs, I sometimes get WARNINGs, INFOs, and ERRORs (for which I'm already using the Log Parser Plugin at the Jenkins level). I'm trying to write a script that will fetch the LOG output of a Jenkins job from Splunk for the last 15 or 30 minutes, the last 1-7 hours, or the last 1-30 days, and find how many warnings, errors, etc. (based on some keywords and regexes) were found in that time period. NOTE: There are various such Jenkins masters where Splunk is running, and my goal is to talk to Splunk and get the data I need (rather than talking to 500 Jenkins masters).
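In SPL terms, the counting side of what I'm after would look roughly like this (a sketch only; the index, source path, and keywords are the ones used throughout this question):

search earliest=-30m index=some-jenkins-prod-index-in-splunk source=/home/jenkins/jobs/*/builds/*/log
| eval kind=case(searchmatch("WARNING: "), "warning", searchmatch("npm WARN retry"), "npm-warn-retry", true(), "other")
| stats count by kind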

I tried the following curl commands. One of them returns a SEARCH ID, but I can't get any job output with it.

In the following curl command, I'm passing a more refined query to fetch data. I'm saying: fetch all the info Splunk has (fields can be added as per the GUI) within the LAST 30 minutes, where the index is some-jenkins-prod-index-in-splunk and where the source of the log is /home/jenkins/jobs/*/builds/*/log (the first * is the job name, the second * is the build number); then search the LOG in Splunk for any of the lines/keywords/regexes listed below (combined with OR) and display the output in JSON format.

➜  ~ p=$(cat ~/AKS/rOnly/od.p.txt)
➜  ~ curl --connect-time 10 --max-time 900 -ks https://splunk.server.mycompany.com:8089/services/search -umy_splunk_user:$p --data search='search earliest=-30m index=some-jenkins-prod-index-in-splunk source=/home/jenkins/jobs/*/builds/*/log ("WARNING: " OR "npm WARN retry" OR "svn: E200033: " OR ": binary operator expected" OR ": too many arguments" OR ": No such file or directory" OR "rsync: failed to set times on")' -d output_mode=json
{"messages":[{"type":"ERROR","text":"Method Not Allowed"}]}%                                                                                  ➜  ~

As you can see, it's giving me "Method Not Allowed".

When I gave the following query with /jobs in the URL part, I got a valid SEARCH ID.

➜  ~ curl --connect-time 10 --max-time 900 -ks https://splunk.server.mycompany.com:8089/services/search/jobs -umy_splunk_user:$p --data search='search earliest=-30m index=some-jenkins-prod-index-in-splunk source=/home/jenkins/jobs/*/builds/*/log ("WARNING: " OR "npm WARN retry" OR "svn: E200033: " OR ": binary operator expected" OR ": too many arguments" OR ": No such file or directory" OR "rsync: failed to set times on")' -d output_mode=json
{"sid":"1505493838.3723_ACEB82F4-AA21-4AE2-95A3-566F6BCAA05A"}%

Using this SEARCH ID, I'm trying to get to the actual logs, but it's not working. I'm using jq to pretty-print the JSON output into a readable layout.

 ➜  ~ curl --connect-time 10 --max-time 900 -ks https://splunk.server.mycompany.com:8089/services/search/jobs/1505493838.3723_ACEB82F4-AA21-4AE2-95A3-566F6BCAA05A -umy_splunk_user:$p --data search='search earliest=-30m index=some-jenkins-prod-index-in-splunk source=/home/jenkins/jobs/*/builds/*/log ("WARNING: " OR "npm WARN retry" OR "svn: E200033: " OR ": binary operator expected" OR ": too many arguments" OR ": No such file or directory" OR "rsync: failed to set times on")' -d output_mode=json|jq .
{
  "links": {},
  "origin": "http://splunk.server.mycompany.com/services/search/jobs",
  "updated": "2017-09-15T09:44:33-07:00",
  "generator": {
    "build": "bf0ff7c2ab8b",
    "version": "6.5.3.1"
  },
  "entry": [
    {
      "name": "search earliest=-30m index=some-jenkins-prod-index-in-splunk source=/home/jenkins/jobs/*/builds/*/log (\"WARNING: \" OR \"npm WARN retry\" OR \"svn: E200033: \" OR \": binary operator expected\" OR \": too many arguments\" OR \": No such file or directory\" OR \"rsync: failed to set times on\") | regex source=\".*/[0-9][0-9]*/log\" | table host, source, _raw",
      "id": "http://splunk.server.mycompany.com/services/search/jobs/1505493838.3723_ACEB82F4-AA21-4AE2-95A3-566F6BCAA05A",
      "updated": "2017-09-15T09:44:33.942-07:00",
      "links": {
        "alternate": "/services/search/jobs/1505493838.3723_ACEB82F4-AA21-4AE2-95A3-566F6BCAA05A",
        "search.log": "/services/search/jobs/1505493838.3723_ACEB82F4-AA21-4AE2-95A3-566F6BCAA05A/search.log",
        "events": "/services/search/jobs/1505493838.3723_ACEB82F4-AA21-4AE2-95A3-566F6BCAA05A/events",
        "results": "/services/search/jobs/1505493838.3723_ACEB82F4-AA21-4AE2-95A3-566F6BCAA05A/results",
        "results_preview": "/services/search/jobs/1505493838.3723_ACEB82F4-AA21-4AE2-95A3-566F6BCAA05A/results_preview",
        "timeline": "/services/search/jobs/1505493838.3723_ACEB82F4-AA21-4AE2-95A3-566F6BCAA05A/timeline",
        "summary": "/services/search/jobs/1505493838.3723_ACEB82F4-AA21-4AE2-95A3-566F6BCAA05A/summary",
        "control": "/services/search/jobs/1505493838.3723_ACEB82F4-AA21-4AE2-95A3-566F6BCAA05A/control"
      },
      "published": "2017-09-15T09:43:59.000-07:00",
      "author": "my_splunk_user",
      "content": {
        "bundleVersion": "17557160226808436058",
        "canSummarize": false,
        "cursorTime": "1969-12-31T16:00:00.000-08:00",
        "defaultSaveTTL": "2592000",
        "defaultTTL": "600",
        "delegate": "",
        "diskUsage": 561152,
        "dispatchState": "DONE",
        "doneProgress": 1,
        "dropCount": 0,
        "earliestTime": "2017-09-15T09:13:58.000-07:00",
        "eventAvailableCount": 0,
        "eventCount": 30,
        "eventFieldCount": 0,
        "eventIsStreaming": true,
        "eventIsTruncated": true,
        "eventSearch": "search (earliest=-30m index=some-jenkins-prod-index-in-splunk source=/home/jenkins/jobs/*/builds/*/log (\"WARNING: \" OR \"npm WARN retry\" OR \"svn: E200033: \" OR \": binary operator expected\" OR \": too many arguments\" OR \": No such file or directory\" OR \"rsync: failed to set times on\")) | regex source=\".*/[0-9][0-9]*/log\" ",
        "eventSorting": "none",
        "isBatchModeSearch": true,
        "isDone": true,
        "isEventsPreviewEnabled": false,
        "isFailed": false,
        "isFinalized": false,
        "isPaused": false,
        "isPreviewEnabled": false,
        "isRealTimeSearch": false,
        "isRemoteTimeline": false,
        "isSaved": false,
        "isSavedSearch": false,
        "isTimeCursored": true,
        "isZombie": false,
        "keywords": "\"*: binary operator expected*\" \"*: no such file or directory*\" \"*: too many arguments*\" \"*npm warn retry*\" \"*rsync: failed to set times on*\" \"*svn: e200033: *\" \"*warning: *\" earliest::-30m index::some-jenkins-prod-index-in-splunk source::/home/jenkins/jobs/*/builds/*/log",
        "label": "",
        "latestTime": "2017-09-15T09:43:59.561-07:00",
        "normalizedSearch": "litsearch ( index=some-jenkins-prod-index-in-splunk source=/home/jenkins/jobs/*/builds/*/log ( \"WARNING: \" OR \"npm WARN retry\" OR \"svn: E200033: \" OR \": binary operator expected\" OR \": too many arguments\" OR \": No such file or directory\" OR \"rsync: failed to set times on\" ) _time>=1505492038.000 ) | regex source=\".*/[0-9][0-9]*/log\" | fields keepcolorder=t \"_raw\" \"host\" \"source\"",
        "numPreviews": 0,
        "optimizedSearch": "| search (earliest=-30m index=some-jenkins-prod-index-in-splunk source=/home/jenkins/jobs/*/builds/*/log (\"WARNING: \" OR \"npm WARN retry\" OR \"svn: E200033: \" OR \": binary operator expected\" OR \": too many arguments\" OR \": No such file or directory\" OR \"rsync: failed to set times on\")) | regex source=\".*/[0-9][0-9]*/log\" | table host, source, _raw",
        "pid": "2174",
        "priority": 5,
        "remoteSearch": "litsearch ( index=some-jenkins-prod-index-in-splunk source=/home/jenkins/jobs/*/builds/*/log ( \"WARNING: \" OR \"npm WARN retry\" OR \"svn: E200033: \" OR \": binary operator expected\" OR \": too many arguments\" OR \": No such file or directory\" OR \"rsync: failed to set times on\" ) _time>=1505492038.000 ) | regex  source=\".*/[0-9][0-9]*/log\"  | fields  keepcolorder=t \"_raw\" \"host\" \"source\"",
        "reportSearch": "table  host, source, _raw",
        "resultCount": 30,
        "resultIsStreaming": false,
        "resultPreviewCount": 30,
        "runDuration": 0.579,
        "sampleRatio": "1",
        "sampleSeed": "0",
        "scanCount": 301,
        "searchCanBeEventType": false,
        "searchEarliestTime": 1505492038,
        "searchLatestTime": 1505493839.21872,
        "searchTotalBucketsCount": 37,
        "searchTotalEliminatedBucketsCount": 0,
        "sid": "1505493838.3723_ACEB82F4-AA21-4AE2-95A3-566F6BCAA05A",
        "statusBuckets": 0,
        "ttl": 600,
        "performance": {
          "command.fields": {
            "duration_secs": 0.035,
            "invocations": 48,
            "input_count": 30,
            "output_count": 30
          },
          "command.regex": {
            "duration_secs": 0.048,
            "invocations": 48,
            "input_count": 30,
            "output_count": 30
          },
          "command.search": {
            "duration_secs": 1.05,
            "invocations": 48,
            "input_count": 0,
            "output_count": 30
          },
          "command.search.calcfields": {
            "duration_secs": 0.013,
            "invocations": 16,
            "input_count": 301,
            "output_count": 301
          },
          "dispatch.optimize.reparse": {
            "duration_secs": 0.001,
            "invocations": 1
          },
          "dispatch.optimize.toJson": {
            "duration_secs": 0.001,
            "invocations": 1
          },
          "dispatch.optimize.toSpl": {
            "duration_secs": 0.001,
            "invocations": 1
          },
          "dispatch.parserThread": {
            "duration_secs": 0.048,
            "invocations": 48
          },
          "dispatch.reduce": {
            "duration_secs": 0.001,
            "invocations": 1
          },
          "dispatch.stream.remote": {
            "duration_secs": 1.05,
            "invocations": 48,
            "input_count": 0,
            "output_count": 332320
          },
          "dispatch.stream.remote.mr11p01if-ztbv02090901.mr.if.mycompany.com-8081": {
            "duration_secs": 0.001,
            "invocations": 1,
            "input_count": 0,
            "output_count": 5422
          },
          "dispatch.stream.remote.mr11p01if-ztbv02090901.mr.if.mycompany.com-8082": {
            "duration_secs": 0.001,
            "invocations": 1,
            "input_count": 0,
            "output_count": 5422
          },
          "dispatch.stream.remote.mr11p01if-ztbv11204201.mr.if.mycompany.com-8081": {
            "duration_secs": 0.001,
            "invocations": 1,
            "input_count": 0,
            "output_count": 5422
          },
          "dispatch.stream.remote.mr11p01if-ztbv11204201.mr.if.mycompany.com-8082": {
            "duration_secs": 0.001,
            "invocations": 1,
            "input_count": 0,
            "output_count": 5422
          },
          "dispatch.stream.remote.mr11p01if-ztbv11204401.mr.if.mycompany.com-8081": {
            "duration_secs": 0.001,
            "invocations": 1,
            "input_count": 0,
            "output_count": 5422
          },
          "dispatch.stream.remote.mr11p01if-ztbv11204401.mr.if.mycompany.com-8082": {
            "duration_secs": 0.001,
            "invocations": 1,
            "input_count": 0,
            "output_count": 5422
          },
          "dispatch.stream.remote.mr11p01if-ztbv16142101.mr.if.mycompany.com-8081": {
            "duration_secs": 0.001,
            "invocations": 1,
            "input_count": 0,
            "output_count": 5422
          },
          "dispatch.stream.remote.mr11p01if-ztbv16142101.mr.if.mycompany.com-8082": {
            "duration_secs": 0.001,
            "invocations": 1,
            "input_count": 0,
            "output_count": 5422
          },
          "dispatch.stream.remote.mr11p01if-ztbv16142301.mr.if.mycompany.com-8081": {
            "duration_secs": 0.001,
            "invocations": 1,
            "input_count": 0,
            "output_count": 5422
          },
          "dispatch.stream.remote.mr11p01if-ztbv16142301.mr.if.mycompany.com-8082": {
            "duration_secs": 0.001,
            "invocations": 1,
            "input_count": 0,
            "output_count": 5422
          },
          "dispatch.stream.remote.mr21p01if-ztbv14080101.mr.if.mycompany.com-8081": {
            "duration_secs": 0.001,
            "invocations": 1,
            "input_count": 0,
            "output_count": 5422
          },
          "dispatch.stream.remote.mr21p01if-ztbv14080101.mr.if.mycompany.com-8082": {
            "duration_secs": 0.001,
            "invocations": 1,
            "input_count": 0,
            "output_count": 5422
          },
          "dispatch.stream.remote.mr22p01if-ztbv07132101.mr.if.mycompany.com-8081": {
            "duration_secs": 0.001,
            "invocations": 1,
            "input_count": 0,
            "output_count": 5422
          },
          "dispatch.stream.remote.mr22p01if-ztbv07132101.mr.if.mycompany.com-8082": {
            "duration_secs": 0.001,
            "invocations": 1,
            "input_count": 0,
            "output_count": 5422
          },
          "dispatch.stream.remote.mr22p01if-ztbv09013201.mr.if.mycompany.com-8081": {
            "duration_secs": 0.001,
            "invocations": 1,
            "input_count": 0,
            "output_count": 5422
          },
          "dispatch.stream.remote.mr22p01if-ztbv09013201.mr.if.mycompany.com-8082": {
            "duration_secs": 0.001,
            "invocations": 1,
            "input_count": 0,
            "output_count": 5422
          },
          "dispatch.stream.remote.mr90p01if-ztep02103701.mr.if.mycompany.com-8081": {
            "duration_secs": 0.058,
            "invocations": 2,
            "input_count": 0,
            "output_count": 16948
          },
          "dispatch.stream.remote.mr90p01if-ztep02103701.mr.if.mycompany.com-8082": {
            "duration_secs": 0.066,
            "invocations": 2,
            "input_count": 0,
            "output_count": 14415
          },
          "dispatch.stream.remote.mr90p01if-ztep04044101.mr.if.mycompany.com-8081": {
            "duration_secs": 0.059,
            "invocations": 2,
            "input_count": 0,
            "output_count": 15858
          },
          "dispatch.stream.remote.mr90p01if-ztep04044101.mr.if.mycompany.com-8082": {
            "duration_secs": 0.065,
            "invocations": 2,
            "input_count": 0,
            "output_count": 11867
          },
          "dispatch.stream.remote.mr90p01if-ztep06024101.mr.if.mycompany.com-8081": {
            "duration_secs": 0.061,
            "invocations": 2,
            "input_count": 0,
            "output_count": 20695
          },
          "dispatch.stream.remote.mr90p01if-ztep06024101.mr.if.mycompany.com-8082": {
            "duration_secs": 0.06,
            "invocations": 2,
            "input_count": 0,
            "output_count": 15193
          },
          "dispatch.stream.remote.mr90p01if-ztep12023601.mr.if.mycompany.com-8081": {
            "duration_secs": 0.063,
            "invocations": 2,
            "input_count": 0,
            "output_count": 15932
          },
          "dispatch.stream.remote.mr90p01if-ztep12023601.mr.if.mycompany.com-8082": {
            "duration_secs": 0.064,
            "invocations": 2,
            "input_count": 0,
            "output_count": 14415
          },
          "dispatch.stream.remote.mr90p01if-ztep12043901.mr.if.mycompany.com-8081": {
            "duration_secs": 0.061,
            "invocations": 2,
            "input_count": 0,
            "output_count": 15418
          },
          "dispatch.stream.remote.mr90p01if-ztep12043901.mr.if.mycompany.com-8082": {
            "duration_secs": 0.058,
            "invocations": 2,
            "input_count": 0,
            "output_count": 11866
          },
          "dispatch.stream.remote.pv31p01if-ztbv08050801.pv.if.mycompany.com-8081": {
            "duration_secs": 0.075,
            "invocations": 2,
            "input_count": 0,
            "output_count": 15661
          },
          "dispatch.stream.remote.pv31p01if-ztbv08050801.pv.if.mycompany.com-8082": {
            "duration_secs": 0.071,
            "invocations": 2,
            "input_count": 0,
            "output_count": 15845
          },
          "dispatch.stream.remote.pv31p01if-ztbv08051001.pv.if.mycompany.com-8081": {
            "duration_secs": 0.066,
            "invocations": 2,
            "input_count": 0,
            "output_count": 14406
          },
          "dispatch.stream.remote.pv31p01if-ztbv08051001.pv.if.mycompany.com-8082": {
            "duration_secs": 0.072,
            "invocations": 2,
            "input_count": 0,
            "output_count": 15524
          },
          "dispatch.stream.remote.pv31p01if-ztbv08051201.pv.if.mycompany.com-8081": {
            "duration_secs": 0.067,
            "invocations": 2,
            "input_count": 0,
            "output_count": 16009
          },
          "dispatch.stream.remote.pv31p01if-ztbv08051201.pv.if.mycompany.com-8082": {
            "duration_secs": 0.068,
            "invocations": 2,
            "input_count": 0,
            "output_count": 15516
          },
          "dispatch.writeStatus": {
            "duration_secs": 0.012,
            "invocations": 7
          },
          "startup.configuration": {
            "duration_secs": 2.045,
            "invocations": 33
          },
          "startup.handoff": {
            "duration_secs": 14.595,
            "invocations": 33
          }
        },
        "messages": [
          {
            "type": "INFO",
            "text": "Your timerange was substituted based on your search string"
          },
          {
            "type": "WARN",
            "text": "Unable to distribute to peer named pv31p01if-ztbv08050601.pv.if.mycompany.com:8081 at uri=pv31p01if-ztbv08050601.pv.if.mycompany.com:8081 using the uri-scheme=http because peer has status=\"Down\".  Please verify uri-scheme, connectivity to the search peer, that the search peer is up, and an adequate level of system resources are available. See the Troubleshooting Manual for more information."
          },
          {
            "type": "WARN",
            "text": "Unable to distribute to peer named pv31p01if-ztbv08050601.pv.if.mycompany.com:8082 at uri=pv31p01if-ztbv08050601.pv.if.mycompany.com:8082 using the uri-scheme=http because peer has status=\"Down\".  Please verify uri-scheme, connectivity to the search peer, that the search peer is up, and an adequate level of system resources are available. See the Troubleshooting Manual for more information."
          }
        ],
        "request": {
          "search": "search earliest=-30m index=some-jenkins-prod-index-in-splunk source=/home/jenkins/jobs/*/builds/*/log (\"WARNING: \" OR \"npm WARN retry\" OR \"svn: E200033: \" OR \": binary operator expected\" OR \": too many arguments\" OR \": No such file or directory\" OR \"rsync: failed to set times on\") | regex source=\".*/[0-9][0-9]*/log\" | table host, source, _raw"
        },
        "runtime": {
          "auto_cancel": "0",
          "auto_pause": "0"
        },
        "searchProviders": [
          "mr11p01if-ztbv02090901.mr.if.mycompany.com-8081",
          "mr11p01if-ztbv16142101.mr.if.mycompany.com-8082",
          "mr11p01if-ztbv16142301.mr.if.mycompany.com-8081",
          "mr11p01if-ztbv16142301.mr.if.mycompany.com-8082",
          "mr21p01if-ztbv14080101.mr.if.mycompany.com-8081",
          "mr21p01if-ztbv14080101.mr.if.mycompany.com-8082",
          "mr22p01if-ztbv07132101.mr.if.mycompany.com-8081",
          "mr22p01if-ztbv07132101.mr.if.mycompany.com-8082",
          "mr22p01if-ztbv09013201.mr.if.mycompany.com-8081",
          "mr22p01if-ztbv09013201.mr.if.mycompany.com-8082",
          "mr90p01if-ztep02103701.mr.if.mycompany.com-8081",
          "mr90p01if-ztep02103701.mr.if.mycompany.com-8082",
          "mr90p01if-ztep04044101.mr.if.mycompany.com-8081",
          "mr90p01if-ztep04044101.mr.if.mycompany.com-8082",
          "mr90p01if-ztep06024101.mr.if.mycompany.com-8081",
          "mr90p01if-ztep06024101.mr.if.mycompany.com-8082",
          "mr90p01if-ztep12023601.mr.if.mycompany.com-8081",
          "mr90p01if-ztep12023601.mr.if.mycompany.com-8082",
          "mr90p01if-ztep12043901.mr.if.mycompany.com-8081",
          "mr90p01if-ztep12043901.mr.if.mycompany.com-8082",
          "pv31p01if-ztbv08050801.pv.if.mycompany.com-8081",
          "pv31p01if-ztbv08050801.pv.if.mycompany.com-8082",
          "pv31p01if-ztbv08051001.pv.if.mycompany.com-8081",
          "pv31p01if-ztbv08051001.pv.if.mycompany.com-8082",
          "pv31p01if-ztbv08051201.pv.if.mycompany.com-8081",
          "pv31p01if-ztbv08051201.pv.if.mycompany.com-8082"
        ]
      },
      "acl": {
        "perms": {
          "read": [
            "my_splunk_user"
          ],
          "write": [
            "my_splunk_user"
          ]
        },
        "owner": "my_splunk_user",
        "modifiable": true,
        "sharing": "global",
        "app": "search",
        "can_write": true,
        "ttl": "600"
      }
    }
  ],
  "paging": {
    "total": 1,
    "perPage": 0,
    "offset": 0
  }
}

BUT, as you can see, the generated JSON output is of no use, as it doesn't show or contain any of the Jenkins job's output that I can use.

If, in the curl command, I try any of the following endpoints in the Splunk URL, I get an error.

    "search.log": "/services/search/jobs/1505493838.3723_ACEB82F4-AA21-4AE2-95A3-566F6BCAA05A/search.log",
    "events": "/services/search/jobs/1505493838.3723_ACEB82F4-AA21-4AE2-95A3-566F6BCAA05A/events",
    "results": "/services/search/jobs/1505493838.3723_ACEB82F4-AA21-4AE2-95A3-566F6BCAA05A/results",
    "results_preview": "/services/search/jobs/1505493838.3723_ACEB82F4-AA21-4AE2-95A3-566F6BCAA05A/results_preview",
    "timeline": "/services/search/jobs/1505493838.3723_ACEB82F4-AA21-4AE2-95A3-566F6BCAA05A/timeline",
    "summary": "/services/search/jobs/1505493838.3723_ACEB82F4-AA21-4AE2-95A3-566F6BCAA05A/summary",
    "control": "/services/search/jobs/1505493838.3723_ACEB82F4-AA21-4AE2-95A3-566F6BCAA05A/control"

For example, if I try URL.../<SEARCH_ID>/events or URL.../<SEARCH_ID>/results, etc., I get the following error.

curl --connect-time 10 --max-time 900 -ks https://splunk.server.mycompany.com:8089/services/search/jobs/1505493838.3723_ACEB82F4-AA21-4AE2-95A3-566F6BCAA05A/events -umy_splunk_user:$p --data search='search earliest=-30m index=some-jenkins-prod-index-in-splunk source=/home/jenkins/jobs/*/builds/*/log ("WARNING: " OR "npm WARN retry" OR "svn: E200033: " OR ": binary operator expected" OR ": too many arguments" OR ": No such file or directory" OR "rsync: failed to set times on")' -d output_mode=json|jq .

{
  "messages": [
    {
      "type": "FATAL",
      "text": "Method Not Allowed"
    }
  ]
}

I'm trying to find the hostname, the source (the Jenkins job log's path), and the actual job's console output (which I can read and parse to generate meaningful information): in the last N time period, how many errors, warnings, and weird lines showed up. Then, depending on some thresholds, if the numbers cross them, I need to send an email notification.

I can code all of this, but I'm not getting past the very first piece of the puzzle here, which is getting Splunk to spit out the CONSOLE OUTPUT of the Jenkins jobs that Splunk is monitoring on the file system.

The end goal is to dump the meaningful data into a text file in JSON or CSV form and convert that data into some meaningful bar/pie charts, etc.

For ex: if data.csv contains:

age,population
<5,2704659
5-13,4499890
14-17,2159981
18-24,3853788
25-44,14106543
45-64,8819342
65-85,312463
≥85,81312463

Then, using the following file, I can convert this raw data into a pie chart like the one in the image snapshot shown below.

<!DOCTYPE html>
<meta charset="utf-8">
<style>

.arc text {
  font: 10px sans-serif;
  text-anchor: middle;
}

.arc path {
  stroke: #fff;
}

</style>
<svg width="960" height="500"></svg>
<script src="https://d3js.org/d3.v4.min.js"></script>
<script>

// Set up the SVG canvas and a group centered in it.
var svg = d3.select("svg"),
    width = +svg.attr("width"),
    height = +svg.attr("height"),
    radius = Math.min(width, height) / 2,
    g = svg.append("g").attr("transform", "translate(" + width / 2 + "," + height / 2 + ")");

// Ordinal color scale: one color per age bucket (recycled if buckets outnumber colors).
var color = d3.scaleOrdinal(["#98abc5", "#8a89a6", "#7b6888", "#6b486b", "#a05d56", "#d0743c", "#ff8c00"]);

// Pie layout: compute each wedge's angles from the population values.
var pie = d3.pie()
    .sort(null)
    .value(function(d) { return d.population; });

// Arc generator for the wedges (innerRadius 0 = full pie, not a donut).
var path = d3.arc()
    .outerRadius(radius - 10)
    .innerRadius(0);

// A second arc used only to position the text labels.
var label = d3.arc()
    .outerRadius(radius - 40)
    .innerRadius(radius - 40);

d3.csv("data.csv", function(d) {
  d.population = +d.population;
  return d;
}, function(error, data) {
  if (error) throw error;

  var arc = g.selectAll(".arc")
    .data(pie(data))
    .enter().append("g")
      .attr("class", "arc");

  arc.append("path")
      .attr("d", path)
      .attr("fill", function(d) { return color(d.data.age); });

  arc.append("text")
      .attr("transform", function(d) { return "translate(" + label.centroid(d) + ")"; })
      .attr("dy", "0.35em")
      .text(function(d) { return d.data.age; });
});

</script>

Generated pie chart (from the CSV and HTML files above): [screenshot]


2 Answers


Found the solution.

I just had to use the services/search/jobs/export endpoint.
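(As an aside: the earlier two-step attempts were most likely rejected because curl's --data turns the request into a POST, while the per-job /events and /results endpoints expect a GET once the job has been created. A GET variant, sketched here with curl's -G, should therefore also work once the job is DONE, but export is simpler since it runs the search and streams the results in one call.)

$ curl --connect-time 10 --max-time 900 -ks -G -umy_splunk_user:$p https://splunk.mycompany.com:8089/services/search/jobs/<SEARCH_ID>/results -d output_mode=json | jq .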

Let's find out which Jenkins host contained the Jenkins job, what the job's name was (this can be parsed/grepped from the source path of the log file), and what the Jenkins job's actual console output (_raw) was. Also, let's limit our search to data from the last 30 minutes (i.e. earliest=-30m).

There are actually 3 ways to do it.

1) By passing the user name and password on the command line.

2) By generating a SESSION TOKEN that we can pass in the headers of any future curl command.

3) By generating a --cookie "${COOKIE}" ID and using that. This is the preferred method of the three, as it replicates cookie values to any backend servers Splunk uses. The cookie name to use is: splunkd_8081.

The last 2 solutions depend upon the first method using a user's credentials to create either the SESSION or COOKIE ID.


Solution 1:

1) Here we'll use our Splunk server.

2) Pass the username and password on the command line.

3) Provide Splunk options for finding/fetching the Splunk data (for Jenkins logs containing specific lines), plus a little extra regex matching so that it returns only source paths containing the exact NUMBERED Jenkins build# just before the log file, rather than 3 more source entries for the same console output. Jenkins' latestBuild, latestSuccessfulBuild, etc. are symlinks that point to a numbered build, and we don't want those symlinked source entries in our output.

4) Then, I'm using | to keep only 3 fields: host, source, and _raw (which Splunk returns). host says which Jenkins server ran the Jenkins job; source contains the Jenkins job name, build#, etc. in its value; and _raw contains the Jenkins job's console output (the few lines around the string/line we are searching for).

NOTE: All 3 of these fields are available inside a dictionary variable, result, so I'm just outputting that.

5) Then, I'm requesting the output in JSON format (you can also use csv). Finally, I'm using jq to filter the information.

NOTE: If you use jq -r ".result._raw" (i.e. the _raw field inside the dictionary variable result), it'll give you LINE by LINE output of the console output (rather than one blob with \n embedded in it). You could also use sed 's/\\n/\n/g', but jq -r ".result._raw" is easy enough.

Commands run:

$ p="$(cat ~/my_secret_password.txt)"
$
$ # The above command will set my password in variable 'p'
$
$ curl --connect-time 10 --max-time 900 -ks https://splunk.mycompany.com:8089/services/search/jobs/export -umy_splunk_user:$p --data search='search earliest=-30m index=some-jenkins-prod-index source=/home/jenkins/jobs/*/builds/*/log ("WARNING: " OR "npm WARN retry" OR "svn: E200033: " OR ": binary operator expected" OR ": too many arguments" OR ": No such file or directory" OR "rsync: failed to set times on") | regex source=".*/[0-9][0-9]*/log" | table host, source, _raw' -d output_mode=json | jq ".result"
$
$ # The following will give you LINE by LINE output for the console output 
$ curl --connect-time 10 --max-time 900 -ks https://splunk.mycompany.com:8089/services/search/jobs/export -umy_splunk_user:$p --data search='search earliest=-30m index=some-jenkins-prod-index source=/home/jenkins/jobs/*/builds/*/log ("WARNING: " OR "npm WARN retry" OR "svn: E200033: " OR ": binary operator expected" OR ": too many arguments" OR ": No such file or directory" OR "rsync: failed to set times on") | regex source=".*/[0-9][0-9]*/log" | table host, source, _raw' -d output_mode=json | jq -r ".result._raw"
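If CSV suits the downstream charting better, the export endpoint can emit it directly by switching the output mode; a sketch (the output file name is just illustrative):

$ curl --connect-time 10 --max-time 900 -ks https://splunk.mycompany.com:8089/services/search/jobs/export -umy_splunk_user:$p --data search='search earliest=-30m index=some-jenkins-prod-index source=/home/jenkins/jobs/*/builds/*/log ("WARNING: ") | table host, source, _raw' -d output_mode=csv > jenkins_warnings.csv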


NOTE: The user ID and password are passed as -umy_splunk_user:$p (no space is required between -u and the actual Splunk username).


Solution 2:

Solution 2 uses a SESSION KEY/ID. You first have to use the services/auth/login endpoint.

To generate the SESSION KEY/ID, run the following command.

NOTE: To generate the SESSION key, you do need to provide your credentials first, but in later curl / API calls / commands you can just pass the SESSION key in the headers.

1) Generate the session key / id.

$ p=$(cat ~/my_secret_password.txt)
$ curl -k https://splunk.mycompany.com:8089/services/auth/login --data-urlencode username=my_splunk_userid --data-urlencode password=$p 
<response>
  <sessionKey>192fd3e46a31246da7ea7f109e7f95fd</sessionKey>
</response>

2) Use the session key / id henceforth in subsequent searches.

In subsequent requests, set the Authorization header value to the session key (Authorization: Splunk <sessionKey>); now you don't need to pass your credentials using -uYourUserID:YourPassword.

$ curl -k -H "Authorization: Splunk 192fd3e46a31246da7ea7f109e7f95fd" --connect-time 10 --max-time 900 https://splunk.mycompany.com:8089/services/search/jobs/export --data search='search earliest=-30m index=some-jenkins-prod-index  source=/home/jenkins/jobs/*/builds/*/log ("WARNING: " OR "npm WARN retry" OR "svn: E200033: " OR ": binary operator expected" OR ": too many arguments" OR ": No such file or directory" OR "rsync: failed to set times on") | regex source=".*/[0-9][0-9]*/log" | table host, source, _raw' -d output_mode=json | jq ".result"


NOTE:

1) For line-by-line output of the console output, use: jq -r ".result._raw"

2) For a count of the matches found, you can use | stats count (see the sketch below).

Now I can come up with the data I need in either CSV or JSON format and use graphing capabilities to show the data via meaningful charts, or send email notifications if the numbers cross given / expected thresholds (as per my automation script).
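As a sketch of that automation (the threshold value, recipient address, and mailx are illustrative assumptions, not part of the setup above):

# Count matching events in the last 30 minutes; alert above a threshold.
# The export endpoint streams one JSON object per row, so keep the last count.
count=$(curl --connect-time 10 --max-time 900 -ks https://splunk.mycompany.com:8089/services/search/jobs/export -umy_splunk_user:$p --data search='search earliest=-30m index=some-jenkins-prod-index source=/home/jenkins/jobs/*/builds/*/log "WARNING: " | stats count' -d output_mode=json | jq -r '.result.count' | tail -1)
if [ "${count:-0}" -gt 100 ]; then
  echo "Found ${count} warnings in the last 30m" | mailx -s "Jenkins log alert" you@mycompany.com
fi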

For more info, see the Splunk REST API doc for the search endpoint (http://docs.splunk.com/Documentation/Splunk/6.6.3/RESTREF/RESTsearch) and the search time modifiers reference (https://docs.splunk.com/Documentation/Splunk/6.5.3/SearchReference/SearchTimeModifiers):

 second: s, sec, secs, second, seconds
 minute: m, min, minute, minutes
 hour: h, hr, hrs, hour, hours
 day: d, day, days
 week: w, week, weeks
 month: mon, month, months
 quarter: q, qtr, qtrs, quarter, quarters
 year: y, yr, yrs, year, years

If you want to search data older than the last 30 days but within the 30 days prior to that point, you need earliest=-60d latest=-30d.
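For example, to count warnings in that window (a sketch reusing the search terms above):

search earliest=-60d latest=-30d index=some-jenkins-prod-index source=/home/jenkins/jobs/*/builds/*/log "WARNING: " | stats count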


Solution 3:

1) To create the COOKIE ID, run the following command.

curl -sSv https://splunk.mycompany.com:8089/services/auth/login --data-urlencode username=your_splunk_userid --data-urlencode password=your_splunk_secret_password -o /dev/null -d cookie=1 2>&1 

It will spit out something like:

< Set-Cookie: splunkd_8081=5omeJunk_ValueHere^kjadaf33999dasdx0ihe28gcEYvbP1yhTjcTjgQCRaOUhco6wwLf5YLsay_2JgZ^J^SEYF9f2nSYkyS0qbu_RE; Path=/; HttpOnly; Max-Age=28800; Expires=Wed, 20 Sep 2017 00:23:39 GMT

Now grab the value part of the Set-Cookie: header (everything up to the semicolon) and store it in a variable, i.e.:

export COOKIE="splunkd_8081=5omeJunk_ValueHere^kjadaf33999dasdx0ihe28gcEYvbP1yhTjcTjgQCRaOUhco6wwLf5YLsay_2JgZ^J^SEYF9f2nSYkyS0qbu_RE"
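If you'd rather script that extraction, something like the following could work (a sketch: -D - dumps the response headers to stdout, -o /dev/null discards the body, and sed keeps the splunkd cookie up to the first semicolon):

export COOKIE=$(curl -ks https://splunk.mycompany.com:8089/services/auth/login --data-urlencode username=your_splunk_userid --data-urlencode password=your_splunk_secret_password -d cookie=1 -o /dev/null -D - | sed -n 's/^Set-Cookie: \(splunkd[^;]*\).*/\1/p')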

2) Now, use the cookie in your curl commands to run queries like the ones above. You do NOT need to pass credentials with -uYourUserID:Password anymore.

$ curl -k --cookie "${COOKIE}" --connect-time 10 --max-time 900 ... rest of the command here similar to examples shown above ... ...

For a better implementation, do the following:

  1. Jenkins's Splunk plugin: https://wiki.jenkins.io/display/JENKINS/Splunk+Plugin+for+Jenkins

  2. Splunk's Jenkins Add-on/App: https://splunkbase.splunk.com/app/3332/