There are a couple of different ways to approach this, but much of your use case can be handled with the Google-provided Dataflow templates. When using the templates, the light transformations can be done within a JavaScript UDF. This saves you from maintaining an entire pipeline; you only write the transformations needed for your incoming data.
If you're accepting many files as a stream into Cloud Pub/Sub, keep in mind that Cloud Pub/Sub makes no ordering guarantees, so records from different files would likely get intermixed in the output. If you're looking to capture an entire file as-is, uploading directly to GCS would be the better approach.
Using the provided templates (either Cloud Pub/Sub to BigQuery or GCS Text to BigQuery), you could use a simple UDF to transform the data from CSV into JSON matching the BigQuery output table's schema.
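For the GCS Text to BigQuery template, the destination schema is supplied as a JSON file in GCS. Here's a rough sketch matching the example below; the field names and types are assumptions based on the sample data, so double-check the exact file format against the template's documentation:

{
  "BigQuery Schema": [
    {"name": "transactionDate", "type": "DATE"},
    {"name": "product", "type": "STRING"},
    {"name": "retailPrice", "type": "FLOAT"},
    {"name": "cost", "type": "FLOAT"},
    {"name": "marginPct", "type": "FLOAT"},
    {"name": "paymentType", "type": "STRING"}
  ]
}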
For example, if you had CSV records such as:
transactionDate,product,retailPrice,cost,paymentType
2018-01-08,Product1,99.99,79.99,Visa
You could write a UDF like the following to transform that data into your output schema (parsing the numeric columns so the margin calculation and the BigQuery load treat them as numbers):
function transform(line) {
  var values = line.split(',');

  // Construct the output record and add transformations
  var obj = new Object();
  obj.transactionDate = values[0];
  obj.product = values[1];
  obj.retailPrice = parseFloat(values[2]);
  obj.cost = parseFloat(values[3]);
  obj.marginPct = (obj.retailPrice - obj.cost) / obj.retailPrice;
  obj.paymentType = values[4];

  // Return the record as a JSON string matching the BigQuery table schema
  return JSON.stringify(obj);
}
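To sanity-check the UDF before deploying, you could run something like this locally (e.g. in Node.js); the sample line here is just for illustration, since Dataflow will call transform() once per input record for you:

// Quick local test of the UDF; Dataflow invokes transform() per record.
var sample = '2018-01-08,Product1,99.99,79.99,Visa';
console.log(transform(sample));
// Prints the transformed record as a JSON string, e.g.
// {"transactionDate":"2018-01-08","product":"Product1","retailPrice":99.99,...}

When launching the template, you'd upload the .js file to GCS and reference it through the template's UDF parameters (typically javascriptTextTransformGcsPath and javascriptTextTransformFunctionName, with the function name here being transform).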