Unfortunately your batch will succeed or fail in an atomic fashion. There is no way to ignore errors for just the operations that fail.
What you probably want to do is implement some intelligent error handling here. The issue is that checking for duplicates up front will be very expensive, because there is no batch GET operation (strictly speaking there is support, but only for one query per batch). My initial thought is that the most efficient way to deal with this is to take a failed batch and recursively split it, binary-search style.
Proposed approach for handling failed batches
Take your failed batch and split it in half; so if you had a batch of 100 operations you'd end up with two batches of 50 operations. Execute these two batches. Keep splitting each batch that fails and eliminating batches that have succeeded; once a failing batch has shrunk to a single operation, that operation is the duplicate and can simply be skipped. You could write this as a reasonably efficient and parallelisable algorithm by modelling your entire dataset as a single 'batch', applying a rule of max batch size = 100, and then splitting from there. Each batch can be executed independently of the others; because you'll just ignore duplicates, it doesn't matter which copy of the duplicated row is inserted first.
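A minimal sketch of that split-and-retry loop is below. It assumes a `submit_batch` callable you supply yourself (for example, a thin wrapper around whatever SDK call performs the entity-group transaction) that raises an exception when the batch fails; the names here are illustrative, not a specific Azure API.

```python
from typing import Any, Callable, List


def insert_ignoring_duplicates(
    operations: List[Any],
    submit_batch: Callable[[List[Any]], None],
    max_batch_size: int = 100,
) -> None:
    """Insert all operations, skipping individual ones that fail (assumed duplicates)."""
    # Start by chunking the whole dataset into maximum-size batches.
    pending = [
        operations[i:i + max_batch_size]
        for i in range(0, len(operations), max_batch_size)
    ]

    while pending:
        batch = pending.pop()
        try:
            submit_batch(batch)  # whole batch succeeded, nothing more to do
        except Exception:
            if len(batch) == 1:
                # A single failing operation is treated as a duplicate: ignore it.
                continue
            # Split the failing batch in half and retry each half independently.
            mid = len(batch) // 2
            pending.append(batch[:mid])
            pending.append(batch[mid:])
```

Since each batch in `pending` is independent, you could also fan the retries out across threads or tasks rather than processing them one at a time.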
Others may like to chime in, but I would think this gives you the most efficient way to insert your data while ignoring duplicates.
The other option would be to de-dupe the data before it hits Azure Table Storage, but I'd want to know the total number of rows and the relative duplicate frequency before commenting on whether that is the better approach.
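If de-duping up front does turn out to be viable, it's cheap to do in memory: keep the first entity seen for each (PartitionKey, RowKey) pair, since that pair is what makes a row unique in table storage. The sketch below assumes your entities are plain dicts with those two keys.

```python
from typing import Dict, List


def dedupe_entities(entities: List[Dict]) -> List[Dict]:
    """Keep only the first entity seen for each (PartitionKey, RowKey) pair."""
    seen = set()
    unique = []
    for entity in entities:
        key = (entity["PartitionKey"], entity["RowKey"])
        if key not in seen:
            seen.add(key)
            unique.append(entity)
    return unique
```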