My question is specific to Databricks. I am trying to expose Spark metrics through the Graphite sink in Databricks by passing the Spark configuration below, and I want this configuration to be applied when the cluster is created.

spark.metrics.conf.*.sink.graphite.class org.apache.spark.metrics.sink.GraphiteSink
spark.metrics.conf.*.sink.graphite.host myhost
spark.metrics.conf.*.sink.graphite.port 2003
spark.metrics.conf.*.sink.graphite.period 10
spark.metrics.conf.*.sink.graphite.unit seconds
spark.metrics.conf.*.source.jvm.class org.apache.spark.metrics.source.JvmSource

However, the above configuration only reports driver-level metrics. I read in some posts that to get executor-level metrics, one has to pass the parameters below:

spark-submit <other parameters> --files metrics.properties 
--conf spark.metrics.conf=metrics.properties

My question is: how can I pass the --files parameter in Databricks while creating a cluster (since I am not doing a spark-submit), or is there any other way to get executor-, worker- and master-level metrics?

Cluster JSON

{
    "num_workers": 0,
    "cluster_name": "mycluster",
    "spark_version": "5.5.x-scala2.11",
    "spark_conf": {
        "spark.metrics.conf.*.sink.graphite.unit": "seconds",
        "spark.metrics.conf.*.sink.graphite.class": "org.apache.spark.metrics.sink.GraphiteSink",
        "spark.metrics.conf.*.sink.graphite.period": "10",
        "spark.databricks.delta.preview.enabled": "true",
        "spark.metrics.conf.*.source.jvm.class": "org.apache.spark.metrics.source.JvmSource",
        "spark.metrics.conf.*.sink.graphite.host": "myhost",
        "spark.metrics.conf.*.sink.graphite.port": "2003"
    },
    "aws_attributes": {
        "first_on_demand": 0,
        "availability": "ON_DEMAND",
        "zone_id": "us-west-2c",
        "spot_bid_price_percent": 100,
        "ebs_volume_count": 0
    },
    "node_type_id": "dev-tier-node",
    "driver_node_type_id": "dev-tier-node",
    "ssh_public_keys": [],
    "custom_tags": {},
    "spark_env_vars": {
        "PYSPARK_PYTHON": "/databricks/python3/bin/python3"
    },
    "autotermination_minutes": 120,
    "enable_elastic_disk": false,
    "cluster_source": "UI",
    "init_scripts": [],
    "cluster_id": "0604-114345-oboe241"
}

1 Answer


The --jars, --py-files, --files arguments support DBFS paths.

You can pass the DBFS path to the metrics file in the spark-submit parameters as shown below:

--files=dbfs:/yourPath/metrics.properties --conf spark.metrics.conf=./metrics.properties
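
For reference, the metrics.properties file shipped with --files carries the same settings as the spark_conf entries in the question, just without the spark.metrics.conf. prefix. A minimal sketch (host and port are placeholders) would be:

# Graphite sink, applied to all instances (driver, executors, master, worker)
*.sink.graphite.class=org.apache.spark.metrics.sink.GraphiteSink
*.sink.graphite.host=myhost
*.sink.graphite.port=2003
*.sink.graphite.period=10
*.sink.graphite.unit=seconds
# JVM source, enabled for all instances
*.source.jvm.class=org.apache.spark.metrics.source.JvmSource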

Reference: Azure Databricks Jobs API - SparkSubmitTask
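
As a rough sketch, assuming a Jobs API runs-submit payload, a spark_submit_task carrying those arguments could look like the following. The run_name, main class com.example.MyApp and jar path are placeholders, and the cluster fields are copied from the cluster JSON above:

{
    "run_name": "metrics-test",
    "new_cluster": {
        "spark_version": "5.5.x-scala2.11",
        "node_type_id": "dev-tier-node",
        "num_workers": 0
    },
    "spark_submit_task": {
        "parameters": [
            "--files", "dbfs:/yourPath/metrics.properties",
            "--conf", "spark.metrics.conf=./metrics.properties",
            "--class", "com.example.MyApp",
            "dbfs:/yourPath/myapp.jar"
        ]
    }
}

Note that a spark_submit_task runs on a new cluster created for that run, so the metrics configuration travels with the job rather than with an interactive cluster.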

This article gives an example of how to monitor Apache Spark components using the Spark configurable metrics system. Specifically, it shows how to set a new source and enable a sink.