From the SparkContext documentation:
```scala
def addJar(path: String): Unit
```

> Adds a JAR dependency for all tasks to be executed on this SparkContext in the future. The path passed can be either a local file, a file in HDFS (or other Hadoop-supported filesystems), an HTTP, HTTPS or FTP URI, or local:/path for a file on every worker node.
So I think it is enough to call this after initializing your SparkContext:

```scala
sc.addJar("hdfs://your/path/to/whatever.jar")
```
If you want to distribute just a file rather than a JAR, there is an analogous addFile() method.
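On the executor side, files shipped with addFile() are resolved with SparkFiles.get. A small sketch, assuming a placeholder file path and name:

```scala
import org.apache.spark.SparkFiles

// Ship a plain file to every executor; the path is a placeholder.
sc.addFile("hdfs://your/path/to/lookup.txt")

// Inside a task, SparkFiles.get returns the local path of the downloaded copy.
val firstLines = sc.parallelize(1 to 2).map { _ =>
  val source = scala.io.Source.fromFile(SparkFiles.get("lookup.txt"))
  try source.getLines().next() finally source.close()
}.collect()
```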
See docs for more.