I did some tests simulating the typical auto-detect pattern: first I run through all the data to build a Map of every field that appears and its type (here I just considered String or Integer for simplicity). I use a stateful pipeline to keep track of the fields that have already been seen and save the result as a PCollectionView. This way I can use .withSchemaFromView(), since the schema is unknown at pipeline construction time. Note that this approach is only valid for batch jobs.
First, I create some dummy data without a strict schema where each row may or may not contain any of the fields:
PCollection<KV<Integer, String>> input = p
.apply("Create data", Create.of(
KV.of(1, "{\"user\":\"Alice\",\"age\":\"22\",\"country\":\"Denmark\"}"),
KV.of(1, "{\"income\":\"1500\",\"blood\":\"A+\"}"),
KV.of(1, "{\"food\":\"pineapple pizza\",\"age\":\"44\"}"),
KV.of(1, "{\"user\":\"Bob\",\"movie\":\"Inception\",\"income\":\"1350\"}"))
);
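In a real job the input would come from an actual source rather than Create. As a rough sketch (the bucket path is a placeholder and assumes one JSON object per line), an equivalent keyed PCollection could be built with TextIO, mirroring the constant key used above:

PCollection<KV<Integer, String>> input = p
    // Placeholder path; expects newline-delimited JSON
    .apply("Read JSON lines", TextIO.read().from("gs://YOUR_BUCKET/path/*.json"))
    // Same constant key as the dummy data, so the stateful DoFn sees a single group
    .apply("Key by constant", WithKeys.<Integer, String>of(1)
        .withKeyType(TypeDescriptors.integers()));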
We'll read the input data and build a Map of the different field names that we see in the data, with a basic type check to determine whether each field contains an INTEGER or a STRING. Of course, this could be extended if desired. Note that all the data created above was assigned to the same key so that the elements are grouped together and we can build a complete list of fields, but this can become a performance bottleneck (an alternative is sketched after the snippet below). We materialize the output so that we can use it as a side input:
PCollectionView<Map<String, String>> schemaSideInput = input
.apply("Build schema", ParDo.of(new DoFn<KV<Integer, String>, KV<String, String>>() {
// A map containing field-type pairs
@StateId("schema")
private final StateSpec<ValueState<Map<String, String>>> schemaSpec =
StateSpecs.value(MapCoder.of(StringUtf8Coder.of(), StringUtf8Coder.of()));
@ProcessElement
public void processElement(ProcessContext c,
@StateId("schema") ValueState<Map<String, String>> schemaSpec) {
JSONObject message = new JSONObject(c.element().getValue());
Map<String, String> current = firstNonNull(schemaSpec.read(), new HashMap<String, String>());
// iterate through fields
message.keySet().forEach(key ->
{
Object value = message.get(key);
if (!current.containsKey(key)) {
String type = "STRING";
try {
Integer.parseInt(value.toString());
type = "INTEGER";
}
catch(Exception e) {}
// uncomment if debugging is needed
// LOG.info("key: "+ key + " value: " + value + " type: " + type);
c.output(KV.of(key, type));
current.put(key, type);
schemaSpec.write(current);
}
});
}
})).apply("Save as Map", View.asMap());
Now we can use the previous Map to build the PCollectionView containing the BigQuery table schema:
PCollectionView<Map<String, String>> schemaView = p
.apply("Start", Create.of("Start"))
.apply("Create Schema", ParDo.of(new DoFn<String, Map<String, String>>() {
@ProcessElement
public void processElement(ProcessContext c) {
Map<String, String> schemaFields = c.sideInput(schemaSideInput);
List<TableFieldSchema> fields = new ArrayList<>();
for (Map.Entry<String, String> field : schemaFields.entrySet())
{
fields.add(new TableFieldSchema().setName(field.getKey()).setType(field.getValue()));
// LOG.info("key: "+ field.getKey() + " type: " + field.getValue());
}
TableSchema schema = new TableSchema().setFields(fields);
String jsonSchema;
try {
jsonSchema = Transport.getJsonFactory().toString(schema);
} catch (IOException e) {
throw new RuntimeException(e);
}
c.output(ImmutableMap.of("PROJECT_ID:DATASET_NAME.dynamic_bq_schema", jsonSchema));
}}).withSideInputs(schemaSideInput))
.apply("Save as Singleton", View.asSingleton());
Change the fully-qualified table name PROJECT_ID:DATASET_NAME.dynamic_bq_schema accordingly.
Finally, in our pipeline we read the data, convert it to TableRow, and write it to BigQuery using .withSchemaFromView(schemaView):
input
.apply("Convert to TableRow", ParDo.of(new DoFn<KV<Integer, String>, TableRow>() {
@ProcessElement
public void processElement(ProcessContext c) {
JSONObject message = new JSONObject(c.element().getValue());
TableRow row = new TableRow();
message.keySet().forEach(key ->
{
Object value = message.get(key);
row.set(key, value);
});
c.output(row);
}}))
.apply(BigQueryIO.writeTableRows()
.to("PROJECT_ID:DATASET_NAME.dynamic_bq_schema")
.withSchemaFromView(schemaView)
.withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
.withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));
Full code here.
(Screenshots: the BigQuery table schema as created by the pipeline, and the resulting sparse data.)
For streaming pipelines, a similar approach should be possible by using the FILE_LOADS method and setting .withTriggeringFrequency (docs). – Guillem Xercavins
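A minimal sketch of what that write could look like, assuming an unbounded input; the 5-minute frequency and single file shard are illustrative values only, and the schema side input above would also need to be rethought for streaming:

.apply(BigQueryIO.writeTableRows()
    .to("PROJECT_ID:DATASET_NAME.dynamic_bq_schema")
    .withSchemaFromView(schemaView)
    // Load jobs from a streaming pipeline need a triggering frequency and shard count
    .withMethod(BigQueryIO.Write.Method.FILE_LOADS)
    .withTriggeringFrequency(Duration.standardMinutes(5)) // org.joda.time.Duration
    .withNumFileShards(1)
    .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
    .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));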