I'll assume you're using Java, but the equivalent process applies in Python.
You need to migrate your pipeline to the Apache Beam SDK, replacing your Google Dataflow SDK dependency with:
<dependency>
    <groupId>org.apache.beam</groupId>
    <artifactId>beam-sdks-java-core</artifactId>
    <version>2.4.0</version>
</dependency>
Then add the dependency for the runner you wish to use, e.g. for Spark:
<dependency>
    <groupId>org.apache.beam</groupId>
    <artifactId>beam-runners-spark</artifactId>
    <version>2.4.0</version>
</dependency>
Finally, pass the pipeline option --runner=SparkRunner (note: the value is the runner class name, not "spark") when submitting the pipeline to specify that this runner should be used.
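As a minimal sketch of how that option is picked up (the class and package names here are placeholders, not from your project), your pipeline's entry point would parse the command-line args with `PipelineOptionsFactory`, which reads `--runner=SparkRunner` and selects the Spark runner at submission time:

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class MyPipeline {
    public static void main(String[] args) {
        // fromArgs() reads flags such as --runner=SparkRunner from the
        // command line; the runner dependency must be on the classpath.
        PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
        Pipeline p = Pipeline.create(options);
        // ... apply your transforms here ...
        p.run().waitUntilFinish();
    }
}
```

With no --runner flag, Beam falls back to the DirectRunner, which is handy for local testing before submitting to Spark.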
See https://beam.apache.org/documentation/runners/capability-matrix/ for the full list of runners and comparison of their capabilities.