I can give you some insight into how to accomplish this with Telegraf:
Option 1: Telegraf output over TCP into Logstash. This is what I do personally, because I like to have all of my data go through Logstash for tagging and mutations.
Telegraf output config:
[[outputs.socket_writer]]
  ## URL to connect to
  address = "tcp://$LOGSTASH_IP:8094"
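  ## Optional: socket_writer serializes metrics as InfluxDB line
  ## protocol by default; if that's awkward to parse on the Logstash
  ## side, the plugin supports other serializers, e.g.:
  # data_format = "json"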
Logstash input config:
input {
  tcp {
    port => 8094
  }
}
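The rest of the pipeline is where the tagging and mutations come in. Here's a rough sketch of what that can look like; the tag name, host, and index pattern are illustrative placeholders, not lifted from my actual config:

filter {
  ## Example mutation: tag everything that arrived from Telegraf
  mutate {
    add_tag => ["telegraf"]
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "metrics-%{+YYYY.MM.dd}"
  }
}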
Option 2: Telegraf directly to Elasticsearch. The docs for this are good and should tell you what to do!
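For reference, the output config is roughly along these lines (double-check the plugin README for the full option list; the URL and index pattern here are just examples):

[[outputs.elasticsearch]]
  ## Elasticsearch endpoint(s)
  urls = ["http://localhost:9200"]
  ## Index name; date patterns like %Y.%m.%d are supported
  index_name = "telegraf-%Y.%m.%d"
  ## Let Telegraf create and manage the index template
  manage_template = true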
From an ideological perspective, inserting metrics into the ELK stack may or may not be the right thing to do - it depends on your use case. I switched to using Telegraf/InfluxDB because I had a lot of metrics and my consumers preferred the Influx query syntax for time-series data and some other Influx features such as rollups.
But there is something to be said for reducing complexity by having all of your data "under one hood". Elastic is also making a push toward being more suitable for time-series data with Timelion, and there were a few talks at Elasticon about storing time-series data in Elasticsearch. Here's one. I would say that storing your metrics in ELK is a completely reasonable thing to do. :)
Let me know if this helps.