0 votes

I'm investigating an instance where reported license/index usage is rising, but I'm not seeing a corresponding increase in searchable data.

For about 7 of the 100+ machines sending syslog data, the license reports show a lot of data usage (more than 10x the 'normal' machines), but when I query for data from those hosts I find either very little data or none at all.
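For what it's worth, the per-host numbers I'm quoting come from license_usage.log, with a search along these lines (b, h, and idx are the byte, host, and index fields in that log; I'm just summing usage per host and index):

    index=_internal source=*license_usage.log* type="Usage"
    | eval MB = b / 1024 / 1024
    | stats sum(MB) as MB_indexed by h, idx
    | sort - MB_indexed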

I've confirmed I'm searching all indexes that exist. I've also confirmed those machines are sending data by blocking the UDP port outbound and verifying that Splunk stopped receiving data from them.
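When I say "all indexes", I mean I'm running the broadest search I can think of, roughly the following (the host value is a placeholder for one of the noisy machines):

    index=* OR index=_* host=<noisy-host> earliest=0
    | stats count by index, sourcetype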

My next steps are:
- confirm at the source machine how much UDP data it's actually sending and compare that to the amount recorded by the indexer
- review the setup of the wonky-behaving machines to be very sure there isn't some delta in the system or syslog configuration causing a lot of data to be generated
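For the indexer-side half of that comparison, my plan is to lean on the per-host throughput numbers in metrics.log, roughly like this (series holds the host name in the per_host_thruput group; the host value is again a placeholder):

    index=_internal source=*metrics.log* group=per_host_thruput series=<noisy-host>
    | timechart span=1h sum(kb) as KB_indexed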

Does anyone have advice for troubleshooting data that's recorded by the indexer but not actually stored? I'm wondering if it's possible that these hosts are so noisy they've effectively been cut off with regard to saving their events.

Thanks


1 Answer

0 votes

Check $SPLUNK_HOME/var/lib/splunk on disk and make sure your destination index(es) are actually growing in size.
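If it's easier than poking around on disk, you can get a rough size check from the search bar with dbinspect, something like this (substitute the index name you expect the data to land in):

    | dbinspect index=<your_index>
    | stats sum(sizeOnDiskMB) as size_MB, count as buckets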

Make sure you are searching as a user that is allowed to search all indexes.
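You can check what your role is actually allowed to search via REST, assuming I'm remembering the setting names correctly:

    | rest /services/authorization/roles
    | table title srchIndexesAllowed srchIndexesDefault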

Check if you have a timestamping or timezone issue by searching over all time, or with latest=+24h. Splunk will decline to index data whose timestamps are too far out of bounds, but I don't think that shows up in license_usage.log when it happens.
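A quick way to spot a timestamp problem on events that did get indexed is to compare _time with _indextime for the noisy hosts; large gaps mean the events are landing far from "now" (the host value is a placeholder):

    index=* host=<noisy-host> earliest=0 latest=+24h
    | eval lag_hours = (_indextime - _time) / 3600
    | stats min(lag_hours) max(lag_hours) count by sourcetype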