In a Dataflow/Apache Beam pipeline, I would like to read from a source, write to a sink, then read from another source and write again, strictly in that order. How do I enforce the order R->W->R->W in the code below? I believe that, as written, the two R->W branches run as parallel pipelines. I am also not sure whether the PDone object is the way to achieve this.
(In the example below, BIGQUERYVIEWB is a BigQuery view built from TESTDATASET1.TABLE2 and a few other tables, which is why the second read must only run after the first write has completed.)
//Read 1
PCollection<TableRow> tr = pipeline.apply(BigQueryIO.readTableRows().fromQuery("SELECT ID FROM `TESTDATASET1.BIGQUERYVIEWA`").usingStandardSql());
PCollection<TableRow> tr1 = tr.apply(ParDo.of(new SomeFn()));
//Write 1
tr1.apply(BigQueryIO.writeTableRows().withoutValidation()
.withSchema(FormatRemindersFn.getSchema())
.withWriteDisposition(WriteDisposition.WRITE_APPEND)
.to("TESTDATASET1.TABLE2"));
//Read 2
PCollection<TableRow> tr2 = pipeline.apply(BigQueryIO.readTableRows().fromQuery("SELECT ID FROM `TESTDATASET1.BIGQUERYVIEWB`").usingStandardSql());
PCollection<TableRow> tr3 = tr2.apply(ParDo.of(new SomeFn()));
//Write 2
tr3.apply(BigQueryIO.writeTableRows().withoutValidation()
.withSchema(FormatRemindersFn.getSchema())
.withWriteDisposition(WriteDisposition.WRITE_APPEND)
.to("TESTDATASET1.TABLE3"));