In the Hadoop/YARN world, you always need the Hadoop configuration files on your client machine, so you will have to fetch them locally. Usually only a subset is required: in most cases hdfs-site.xml, core-site.xml and yarn-site.xml should be enough, if I am not mistaken. To be on the safe side, copy all of them into a local directory.
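A minimal sketch of that step. The source path /etc/hadoop/conf is an assumption; adjust it to wherever your distribution keeps the files, or scp them from a cluster node first:

```shell
# Copy the client-side Hadoop configs into one local directory.
# /etc/hadoop/conf is an assumed location -- adapt to your setup.
src=/etc/hadoop/conf
dst="$HOME/hadoop-conf"
mkdir -p "$dst"
for f in core-site.xml hdfs-site.xml yarn-site.xml; do
  # Copy each file only if it is actually present at the source.
  if [ -f "$src/$f" ]; then
    cp "$src/$f" "$dst/"
  fi
done
```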
Then set the following parameter in the flink-conf.yaml file on the machine that will act as the client, i.e. the machine you will launch your job from:
fs.hdfs.hadoopconf: path_to_hadoop_conf_dir
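For example, assuming the files were copied to /home/user/hadoop-conf (a hypothetical path), the entry would look like this:

```yaml
# Hypothetical path -- point this at the directory holding
# core-site.xml, hdfs-site.xml and yarn-site.xml.
fs.hdfs.hadoopconf: /home/user/hadoop-conf
```

Alternatively, Flink can also pick the directory up from the HADOOP_CONF_DIR environment variable, if that is set on the client machine.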
Then you should be able to launch a YARN job by telling the flink tool to use a YARN cluster as its job manager:
flink run -m yarn-cluster -yn <num_task_managers> -yjm <job_manager_memory> -ytm <task_manager_memory> -c <main_class> <jar>
If you have configured the memory parameters above in your flink-conf.yaml, you can launch the job with the default values by omitting those verbose parameters:
flink run -m yarn-cluster -yn <num_task_managers> -c <main_class> <jar>
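A sketch of what those defaults could look like in flink-conf.yaml. The sizes are placeholders, and jobmanager.heap.mb / taskmanager.heap.mb are the heap-size keys used by older Flink versions; check the configuration reference for your release:

```yaml
# Example memory defaults -- tune the sizes for your cluster.
jobmanager.heap.mb: 1024
taskmanager.heap.mb: 4096
```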
As a quick test, you can try launching a Scala shell on YARN:
start-scala-shell.sh yarn -n <num_task_managers> -nm test_job