
I have a job defined as below:

<step id="file-transfer">
    <chunk checkpoint-policy="item" item-count="10" retry-limit="10">
        <reader ref="allTrusteeCustomerFilesReader">
            <properties>
                <property name="part-page-first-offset" value="#{partitionPlan['part-page-first-offset']}"/>
                <property name="part-page-last-offset" value="#{partitionPlan['part-page-last-offset']}"/>
                <property name="part-page-length" value="#{partitionPlan['part-page-length']}"/>
                <property name="part-sort-field" value="#{partitionPlan['part-sort-field']}"/>
                <property name="part-sort-ascending" value="#{partitionPlan['part-sort-ascending']}"/>
            </properties>
        </reader>
        <processor ref="customerFileLocalToGoogleStorageProcessor"/>
        <writer ref="customerFileWriter">
            <properties>
                <property name="set-google-cloud-migrated" value="true"/>
            </properties>
        </writer>


        <skippable-exception-classes>
            <include class="be.valuya.gestemps.server.file.batch.error.NoLocalStorageDataException"/>
            <include class="be.valuya.gestemps.server.file.batch.error.TargetAlreadyPresentException"/>
            <include class="be.valuya.gestemps.server.file.batch.error.CustomerFileAlreadyMigratedException"/>
        </skippable-exception-classes>
        <retryable-exception-classes>
            <include class="be.valuya.gestemps.server.file.batch.error.TransferToGoogleFailedException"/>
        </retryable-exception-classes>
    </chunk>

    <partition>
        <mapper ref="customerFilePartitionMapper"/>
    </partition>

    <end on="COMPLETED"/>
</step>

The referenced mapper creates an array of Properties objects with the expected values and returns it inside a PartitionPlan:

    PartitionPlanImpl partitionPlan = new PartitionPlanImpl();
    partitionPlan.setPartitions(partitionCount);
    partitionPlan.setPartitionProperties(partitionProperties);

    return partitionPlan;
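For context, the per-partition Properties array is built with one entry per partition. A runnable sketch of that construction (the paging scheme, property names, and counts here are illustrative, not the actual mapper code):

```java
import java.util.Properties;

public class PartitionPropsBuilder {

    // Builds one Properties object per partition, the way a PartitionMapper might
    public static Properties[] buildPartitionProperties(int partitionCount, int pageLength) {
        Properties[] partitionProperties = new Properties[partitionCount];
        for (int i = 0; i < partitionCount; i++) {
            Properties p = new Properties();
            // Partition property values must be Strings; setProperty enforces that
            p.setProperty("part-page-first-offset", String.valueOf(i * pageLength));
            p.setProperty("part-page-last-offset", String.valueOf((i + 1) * pageLength - 1));
            p.setProperty("part-page-length", String.valueOf(pageLength));
            partitionProperties[i] = p;
        }
        return partitionProperties;
    }

    public static void main(String[] args) {
        Properties[] props = buildPartitionProperties(3, 10);
        System.out.println(props[1].getProperty("part-page-first-offset")); // 10
        System.out.println(props[2].getProperty("part-page-last-offset"));  // 29
    }
}
```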

It is correctly called when the step starts and returns properties with the correct keys defined.

However, I'm not able to read the partition plan properties from my reader. Neither the job context properties nor the step context properties contain anything, and I see no error in the console. The job instance parameters contain only the parameters set at runtime. None of the parameter/property names conflict. Attempts to inject them using @BatchProperty leave the fields null.

The batch is started as follows:

    Properties properties = new Properties();
    // Fill parameters...
    long executionId = jobOperator.start(jobName, properties);

What am I missing?

Can you share a simplified test app through e.g., github? It will be easier for me to debug it. Or you can also file a JIRA issue and attach a reproducer test app at - cheng
I created a minimal app to reproduce: github.com/cghislai/jberet-partitoin-test - cghislai
I'll be damned, setting a primitive int in the partition plan properties was the issue :/ - cghislai

2 Answers


To see the partition properties at work, you need to inject the corresponding item reader properties into your item reader class. For example:


    import javax.batch.api.BatchProperty;
    import javax.batch.api.chunk.ItemReader;
    import javax.inject.Inject;

    public class AllTrusteeCustomerFilesReader implements ItemReader {

        @Inject
        @BatchProperty(name = "part-page-first-offset")
        String partPageFirstOffset;

        // ...
    }

The field partPageFirstOffset above will hold the value of the item reader property part-page-first-offset, which is defined in job.xml to reference the partition property of the same name.
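Since batch properties are always injected as Strings, the reader typically parses them before use, for example in its open() method. A minimal pure-Java sketch of that parsing step (in a real reader the field would be populated by @Inject @BatchProperty; the default of 0 for an unset property is an assumption):

```java
public class ReaderPropertyParsing {

    // Populated by the container via @Inject @BatchProperty in a real reader;
    // a plain field here so the sketch runs standalone
    public String partPageFirstOffset;

    public int firstOffset; // parsed numeric form, set in open()

    // Mirrors what open(Serializable checkpoint) might do with the injected value
    public void open() {
        // Injected batch properties arrive as Strings (or null if unset)
        firstOffset = partPageFirstOffset == null ? 0 : Integer.parseInt(partPageFirstOffset);
    }

    public static void main(String[] args) {
        ReaderPropertyParsing reader = new ReaderPropertyParsing();
        reader.partPageFirstOffset = "40";
        reader.open();
        System.out.println(reader.firstOffset); // 40
    }
}
```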


The partition plan was putting integers into the Properties map, but it turns out strings are expected:

    Properties partProperties = new Properties();
    // Use setProperty to enforce String values rather than put
    partProperties.setProperty("plan-property", "20");
    // ...

    PartitionPlanImpl partitionPlan = new PartitionPlanImpl();
    partitionPlan.setPartitions(partitionCount);
    // setPartitionProperties expects a Properties array, one entry per partition
    partitionPlan.setPartitionProperties(new Properties[]{partProperties});

    return partitionPlan;
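The underlying reason is java.util.Properties.getProperty, which only returns values that are actually stored as Strings; a value added with the raw put as an Integer sits in the map but is silently invisible to getProperty. A minimal demonstration:

```java
import java.util.Properties;

public class PartitionPropsDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bad", 20);              // raw put with an Integer value
        props.setProperty("good", "20");   // setProperty enforces String values

        // getProperty only returns values stored as Strings
        System.out.println(props.getProperty("bad"));  // null
        System.out.println(props.getProperty("good")); // 20
    }
}
```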