I came across two different approaches to scaling on a specific metric, and I wonder what the difference is and whether it matters in my case.
I have a Deployment on GKE that scrapes a specific metric from the application and exports it to Stackdriver using a prometheus-to-sd sidecar. The metric appears in Stackdriver as custom.googleapis.com/dummy/foo.
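For context, the sidecar is wired up roughly like this (a minimal sketch; the app container name, image, and port are placeholders for my actual setup, not the real values):

    containers:
    - name: my-app                    # hypothetical app container
      image: gcr.io/my-project/my-app:latest
      ports:
      - containerPort: 8080           # Prometheus metrics endpoint
    - name: prometheus-to-sd
      image: gcr.io/google-containers/prometheus-to-sd:v0.5.0
      command:
      - /monitor
      - --source=:http://localhost:8080             # scrape the app's metrics
      - --stackdriver-prefix=custom.googleapis.com  # prefix for exported metrics
      - --pod-id=$(POD_ID)
      - --namespace-id=$(POD_NAMESPACE)
      env:
      - name: POD_ID
        valueFrom:
          fieldRef:
            fieldPath: metadata.uid
      - name: POD_NAMESPACE
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace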
Usually, when I create an HPA for a custom metric, I define it like this:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    metadata:
      name: custom-metric-prometheus-sd
      namespace: default
    spec:
      scaleTargetRef:
        apiVersion: apps/v1beta1
        kind: Deployment
        name: custom-metric-prometheus-sd
      minReplicas: 1
      maxReplicas: 5
      metrics:
      - type: External
        external:
          metricName: "custom.googleapis.com|dummy|foo"
          targetAverageValue: 20
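As a sanity check, with the Stackdriver custom-metrics adapter installed, I can see the metric through the External Metrics API with something like this (jq is just for readability):

    kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/default/custom.googleapis.com|dummy|foo" | jq .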
Now, the same HPA also works using the Pods metrics approach:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    metadata:
      name: custom-metric-prometheus-sd
      namespace: default
    spec:
      scaleTargetRef:
        apiVersion: apps/v1beta1
        kind: Deployment
        name: custom-metric-prometheus-sd
      minReplicas: 1
      maxReplicas: 5
      metrics:
      - type: Pods
        pods:
          metricName: "custom.googleapis.com|dummy|foo"
          targetAverageValue: 20
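And the same metric also shows up per pod through the Custom Metrics API:

    kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/custom.googleapis.com|dummy|foo" | jq .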
Both behave the same. My understanding is that with Pods metrics the HPA fetches the metric from every pod, computes the average, and compares it to the target value to decide the replica count, which is essentially what targetAverageValue does for an External metric (there the single external value is divided by the current replica count; see the worked example below). So in my case both should do basically the same thing, right? Is there any difference, maybe in terms of performance, latency, or anything else?
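To make sure my mental model is right, here is the replica calculation as I understand it (the numbers are made up):

    # Core HPA formula for both metric types:
    #   desiredReplicas = ceil(currentReplicas * currentValue / targetValue)

    # Pods metric: currentValue is the average of the per-pod values.
    #   3 pods reporting 30 each -> average = 30
    #   desiredReplicas = ceil(3 * 30 / 20) = ceil(4.5) = 5

    # External metric with targetAverageValue: the single external value
    # is first divided by the current replica count.
    #   external value = 90, 3 replicas -> per-pod value = 90 / 3 = 30
    #   desiredReplicas = ceil(3 * 30 / 20) = ceil(4.5) = 5
    #
    # So the two approaches line up whenever the external value equals
    # the sum of the per-pod values.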
Thanks, Chen