We have a classic Logstash + Elasticsearch + Kibana setup for log aggregation, which we use to aggregate logs from all of our servers and applications. We've run into the following problem: the first time ES receives a log line (a JSON document, in our case), it creates a dynamic mapping for that document (see http://bit.ly/1h3qwC9). Most of the time properties are mapped as strings, but in some cases they are mapped as dates or numbers. In the latter case, if another log line (from a different application) contains the same field but with a string value, ES fails to index it, logging an exception and continuing as usual. As a workaround we've configured ES to ignore malformed documents (index.mapping.ignore_malformed: true), but that feels more like a hack than a solution.
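For illustration, here's a rough repro of the conflict against a fresh index (index, type, and field names are made up; the second request is rejected with a MapperParsingException):

    curl -XPUT 'localhost:9200/logs/app/1' -d '{"status": 200}'   # dynamic mapping maps "status" as a number
    curl -XPUT 'localhost:9200/logs/app/2' -d '{"status": "OK"}'  # fails: "OK" can't be parsed as a number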
Any ideas on how to solve this in a more elegant way?