I've run into a situation where I'm operating in a data center and am limited by server size. I run a single Prometheus instance and an exporter on one server. The exporter has a large number of targets, in the thousands. This is too much load for the server, and I cannot scale up. I can, however, add more servers of the same size.
I think I could federate: run multiple identical Prom instances with the exporter on each (like my current setup) and feed them into a leader Prom instance. However, I'm scraping one long list of targets, and the Prom instance isn't using many resources; the exporter is using far more (about 85% of resource usage). So it might make more sense to set up a few identical exporters, each on its own server, and have my single Prom instance send one third of the targets from the list to each exporter server.
This is a little different from the federation use case because I'd prefer not to run multiple Prom servers. Additionally, the file with the list of targets is generated, and it's difficult to split it into multiple files; otherwise I could just create separate jobs in prometheus.yml, each using file_sd_configs pointing to a unique file containing one third of the targets (e.g. targets1.json, targets2.json, targets3.json).
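For reference, if splitting the file were feasible, the per-job setup I have in mind would look roughly like this. The hostnames and port are placeholders, and I'm assuming a multi-target exporter that takes the scrape target as a `?target=` query parameter:

```yaml
scrape_configs:
  - job_name: exporter-a
    file_sd_configs:
      - files: ['targets1.json']
    relabel_configs:
      # pass the real target to the exporter as a query parameter
      - source_labels: [__address__]
        target_label: __param_target
      # scrape via the exporter on server A (placeholder address/port)
      - target_label: __address__
        replacement: exporter-a.example.com:9100
  # ...repeated for exporter-b with targets2.json, exporter-c with targets3.json
```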
Ideally, I would like to keep a single file for file_sd_configs, "targets.json", and then use relabeling/hashmod (or something similar?) to divide the targets up equally and route each subset to a specific exporter server. Is this something that's possible?
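What I'm imagining is something like the following: one job per exporter server, all reading the same targets.json, with hashmod picking a different shard in each job. The hostnames, port, and the 3-way split are placeholders, and again I'm assuming a multi-target exporter that accepts the target as a `?target=` query parameter:

```yaml
scrape_configs:
  - job_name: exporter-shard-0
    file_sd_configs:
      - files: ['targets.json']
    relabel_configs:
      # hash each target address into one of 3 buckets
      - source_labels: [__address__]
        modulus: 3
        target_label: __tmp_shard
        action: hashmod
      # this job keeps only bucket 0
      - source_labels: [__tmp_shard]
        regex: '0'
        action: keep
      # pass the original target to the exporter, and keep it as the instance label
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      # scrape via the exporter on shard-0's server (placeholder address/port)
      - target_label: __address__
        replacement: exporter-0.example.com:9100
  # exporter-shard-1 and exporter-shard-2 would be identical except for
  # regex: '1' / '2' and the exporter replacement address
```

Is this hashmod approach a reasonable way to do it, or is there a better mechanism?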