
I am trying to create a streaming pipeline with Apache Beam that reads sentences from Google Pub/Sub and writes the individual words into a BigQuery table.

I am using Apache Beam version 0.6.0.

Following the examples, I wrote this:

public class StreamingWordExtract {

/**
 * A DoFn that tokenizes lines of text into individual words.
 */
static class ExtractWords extends DoFn<String, String> {
    @ProcessElement
    public void processElement(ProcessContext c) {
        String[] words = c.element().split("[^a-zA-Z']+");
        for (String word : words) {
            if (!word.isEmpty()) {
                c.output(word);
            }
        }
    }
}

/**
 * A DoFn that uppercases a word.
 */
static class Uppercase extends DoFn<String, String> {
    @ProcessElement
    public void processElement(ProcessContext c) {
        c.output(c.element().toUpperCase());
    }
}


/**
 * A DoFn that converts each String into a BigQuery TableRow.
 */
static class StringToRowConverter extends DoFn<String, TableRow> {
    @ProcessElement
    public void processElement(ProcessContext c) {
        c.output(new TableRow().set("string_field", c.element()));
    }

    static TableSchema getSchema() {
        return new TableSchema().setFields(new ArrayList<TableFieldSchema>() {
            // Compose the list of TableFieldSchema from tableSchema.
            {
                add(new TableFieldSchema().setName("string_field").setType("STRING"));
            }
        });
    }

}

private interface StreamingWordExtractOptions extends ExampleBigQueryTableOptions, ExamplePubsubTopicOptions {
    @Description("Input file to inject to Pub/Sub topic")
    @Default.String("gs://dataflow-samples/shakespeare/kinglear.txt")
    String getInputFile();

    void setInputFile(String value);
}

public static void main(String[] args) {
    StreamingWordExtractOptions options = PipelineOptionsFactory.fromArgs(args)
            .withValidation()
            .as(StreamingWordExtractOptions.class);

    options.setBigQuerySchema(StringToRowConverter.getSchema());

    Pipeline p = Pipeline.create(options);

    String tableSpec = new StringBuilder()
            .append(options.getProject()).append(":")
            .append(options.getBigQueryDataset()).append(".")
            .append(options.getBigQueryTable())
            .toString();

    p.apply(PubsubIO.read().topic(options.getPubsubTopic()))
            .apply(ParDo.of(new ExtractWords()))
            .apply(ParDo.of(new StringToRowConverter()))
            .apply(BigQueryIO.Write.to(tableSpec)
                    .withSchema(StringToRowConverter.getSchema())
                    .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
                    .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));

    PipelineResult result = p.run();
}
}

I have an error near:

apply(ParDo.of(new ExtractWords()))

because the previous apply does not return a String but an Object.

I suppose the problem is the type returned by PubsubIO.read().topic(options.getPubsubTopic()): it is PTransform<PBegin, PCollection<T>> instead of PTransform<PBegin, PCollection<String>>.

What is the correct way to read from Google Pub/Sub using Apache Beam?


1 Answer


You are hitting a recent backwards-incompatible change in Beam -- sorry about that!

Starting with Apache Beam version 0.5.0, PubsubIO.Read and PubsubIO.Write need to be instantiated using PubsubIO.<T>read() and PubsubIO.<T>write() instead of the static factory methods such as PubsubIO.Read.topic(String).

For Read, you must specify a coder for the output type via .withCoder(Coder). For Write, you must either specify a coder for the input type, or specify a format function via .withAttributes(SimpleFunction<T, PubsubMessage>).
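
Applied to your pipeline, the read would look something like this (a minimal sketch against Beam 0.6.0, using Beam's built-in StringUtf8Coder to decode the UTF-8 payloads into Strings):

p.apply(PubsubIO.<String>read()
        .topic(options.getPubsubTopic())
        // Required since 0.5.0: tells Beam how to decode each Pub/Sub payload.
        .withCoder(StringUtf8Coder.of()))
        .apply(ParDo.of(new ExtractWords()))
        .apply(ParDo.of(new StringToRowConverter()))
        .apply(BigQueryIO.Write.to(tableSpec)
                .withSchema(StringToRowConverter.getSchema())
                .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
                .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));

With the explicit type parameter and coder, the source produces a PCollection<String>, so the downstream ParDo.of(new ExtractWords()) now type-checks. You will also need the import org.apache.beam.sdk.coders.StringUtf8Coder.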