1 vote

I need to use Redis as a message key-value store for Logstash to read from. The idea is to use the existing Syslog-ng server to route the syslog traffic from all servers to the Redis server so that Logstash can read from it. I have my Redis server set up and can connect and write to it from the Syslog-ng server using:

telnet redis.somedomain.com 6379

So the port is open and can be written to; however, the key-value entries are never stored. I already have most of this system working using UDP as well as appending to individual host files under /var/log/hosts. The change I have made to my existing syslog-ng.conf file is as follows:

# In Redis Protocol Notation
# $5 = 5 characters(LPUSH), $4 = 4 characters(logs), $(length $MSG) = character length of $MSG,
# $MSG = Log Message per syslog-ng symbols

template t_redis_lpush { template("*3\r\n$5\r\nLPUSH\r\n$4\r\nlogs\r\n$(length $MSG)\r\n$MSG\r\n"); };
destination d_redis_tcp { tcp("redis.somedomain.com" port(6379) template(t_redis_lpush)); };
log { source(remote); source(noforward); filter(f_messages);  destination(d_redis_tcp); flags(final); };

I did not include the contents of the f_messages filter since it already works and is in use to send logs over UDP and to /var/log/hosts. If anyone would like me to extract the filter functions, I can post those as well. filter(f_messages) ends up producing a result along the lines of:

"Jan 21 14:27:23 www1/www1 10.252.4.152 - - [21/Jan/2014:14:27:23 -0700] "POST /service.php?session_name=6tiqbpfeu1uc31pg1eimjqpvt0&url=%2Fseo%2FinContentLinks%2Fblogs.somedomain.com%7Cmusic%7C2013%7C12%7Cinterview_fredo.php%2F HTTP/1.1" 200 536 www1.nyc.somedomain.com "66.156.238.1" "-" "Arch Quickcurl" "8126464" 0 92878"

Does anyone have any idea why my Redis template, destination, and log shipper for Syslog-ng are not working?

Thanks in advance! Cole

1
How about trying to set up centralized Logstash? logstash.net/docs/1.3.3/tutorials/getting-started-centralized. Set up a Logstash shipper on the syslog-ng server. – Ben Lim
The infrastructure that is in place takes logs over UDP from every server and feeds them through syslog-ng. It won't really allow me to make changes without taking down the existing framework, and we cannot be without logging on production systems, or we would need to code in a solution for every machine, as is the case with production deployments using various forms of automation. If I could tap it at the syslog-ng level and send a key-store copy to Redis, then I could feed it into the new Logstash instance and into the new Elasticsearch, and that would work for this. – Cole Shores
I believe I have narrowed it down to something with the protocol spec or the bash shell. When I telnet in to the Redis server and run *3\r\n$5\r\nLPUSH\r\n$4\r\nlogs\r\n$20\r\n"this is some data!"\r\n, it throws an error: -ERR Protocol error: invalid multibulk length. It does this even when I just try to write *3\r\n for a simple carriage return. My Redis server is 2.6, so it should be supported; I am running SUSE Linux Enterprise 11. – Cole Shores
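A hedged aside on that telnet test: telnet transmits the typed \r\n as two literal characters (a backslash and a letter), not as actual CR/LF bytes, which is why Redis cannot parse the multibulk length. A shell printf, by contrast, expands those escapes into real bytes, so piping its output to the server is one way to verify the protocol framing. The host name below is the hypothetical redis.somedomain.com from the question; "hello" is 5 bytes, hence the $5 header.

```shell
# Build a valid RESP LPUSH command. printf expands \r\n into real CR/LF
# bytes; telnet would transmit them as literal backslash characters.
# "hello" is 5 bytes long, so its bulk-string header is $5.
printf '*3\r\n$5\r\nLPUSH\r\n$4\r\nlogs\r\n$5\r\nhello\r\n' | od -c

# To actually send it (hypothetical host from the question):
#   printf '*3\r\n$5\r\nLPUSH\r\n$4\r\nlogs\r\n$5\r\nhello\r\n' | nc redis.somedomain.com 6379
```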

1 Answer

0
votes

Sorry for not seeing this earlier. Have you looked at using format-json()?

Here is a destination that I have been using that works quite well (most of the macros come from a patterndb parser):

destination d_redis {
  redis (
    host("localhost")
    command("LPUSH", "logstash", "$(format-json type=bluecoat proxy_time=${PROXY.TIME} proxy_time_taken=${PROXY.TIME_TAKEN} proxy_c_ip=${PROXY.C_IP} proxy_sc_status=${PROXY.SC_STATUS} proxy_s_action=${PROXY.S_ACTION} proxy_sc_bytes=int64(${PROXY.SC_BYTES}) proxy_cs_bytes=int64(${PROXY.CS_BYTES}) proxy_cs_method=${PROXY.CS_METHOD} proxy_cs_uri_scheme=${PROXY.CS_URI_SCHEME} proxy_cs_host=${PROXY.CS_HOST} proxy_cs_uri_port=${PROXY.CS_URI_PORT} proxy_cs_uri_path=${PROXY.CS_URI_PATH} proxy_cs_uri_equery=${PROXY.CS_URI_EQUERY}  proxy_cs_username=${PROXY.CS_USERNAME} proxy_cs_auth_group=${PROXY.CS_AUTH__GROUP} proxy_s_supplier_name=${PROXY.S_SUPPLIER_NAME} proxy_content_type=${PROXY.CONTENT_TYPE} proxy_referrer=${PROXY.REFERRER} proxy_user_agent=${PROXY.USER_AGENT} proxy_filter_result=${PROXY.FILTER_RESULT} proxy_cs_categories=${PROXY.CS_CATEGORIES} proxy_x_virus_id=${PROXY.X_VIRUS_ID} proxy_s_ip=${PROXY.S_IP} proxy_any=${PROXY.ANYREST})\n")
  );
};
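On the Logstash side, a minimal input sketch that would drain that list (hedged: the key name "logstash" matches the destination above; the exact codec/format option varies across Logstash versions, so check the docs for yours):

```
input {
  redis {
    host      => "localhost"
    data_type => "list"
    key       => "logstash"
    codec     => "json"
  }
}
```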

BTW - I really prefer having syslog-ng do the parsing over Logstash. In my experience patterndb is much faster than grok, and having the parsing done by syslog-ng makes the configuration more flexible as well.
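For illustration, a minimal patterndb rule sketch; everything in it is hypothetical (the program name, the field names, and the ruleset/rule ids, which in practice should be unique UUIDs):

```xml
<patterndb version="4" pub_date="2014-01-21">
  <ruleset name="example-proxy" id="ruleset-id-placeholder">
    <!-- match on the syslog program name -->
    <pattern>proxyd</pattern>
    <rules>
      <rule provider="example" id="rule-id-placeholder" class="system">
        <patterns>
          <!-- parse "GET /some/path 200" into three named macros -->
          <pattern>@ESTRING:PROXY.CS_METHOD: @@ESTRING:PROXY.CS_URI_PATH: @@NUMBER:PROXY.SC_STATUS@</pattern>
        </patterns>
      </rule>
    </rules>
  </ruleset>
</patterndb>
```

Macros parsed this way (e.g. ${PROXY.CS_METHOD}) can then be referenced directly in the format-json() destination shown above.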

Good luck, Jim