I have a couple of pointers, just in case. First, make sure your Lambda concurrency limit is actually above 10. It should be, since the account limit defaults to 1000, but it doesn't hurt to check.
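If you want to check it programmatically rather than in the console, here is a minimal sketch using boto3; the function name "my-stream-consumer" is just a placeholder, not something from your setup:

```python
# Minimal sketch: check the account-level concurrency limit and any reserved
# concurrency configured on the consumer function itself.
import boto3

lambda_client = boto3.client("lambda")

# Account-wide limit (defaults to 1000 in most regions).
settings = lambda_client.get_account_settings()
print("Account concurrent executions:",
      settings["AccountLimit"]["ConcurrentExecutions"])

# Per-function reserved concurrency, if any ("my-stream-consumer" is a placeholder).
try:
    reserved = lambda_client.get_function_concurrency(FunctionName="my-stream-consumer")
    print("Reserved concurrency:",
          reserved.get("ReservedConcurrentExecutions", "not set"))
except lambda_client.exceptions.ResourceNotFoundException:
    print("Function not found")
```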
For an explanation of how Lambda stream consumers work, the details are in the Lambda docs.
One thing I've often seen with Kinesis Data Streams is trouble with the partition key of the records. As you probably know, Kinesis Data Streams sends all records with the same partition key to the same shard, so they can be processed in the right order. If records were sent to arbitrary shards (for example with simple round-robin), there would be no ordering guarantee, since different shards are read by different processors.
It's important, then, to distribute your keys as evenly as possible. If most records share the same partition key, one shard will be very busy while the others get no traffic. It may be that you are using only 10 distinct partition key values, in which case you would be writing to only 10 shards, and since each Lambda invocation is tied to a single shard, you would get only 10 concurrent executions.
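To get a feel for how your keys spread across shards, here is a rough sketch that reproduces the hashing Kinesis uses (MD5 of the partition key interpreted as a 128-bit integer, matched against each shard's hash key range). The stream name and sample keys are placeholders, and it ignores shard pagination and resharding for brevity:

```python
# Sketch: estimate which shard each partition key maps to, and how evenly a
# sample of keys is distributed. Assumes the stream has not been resharded.
import hashlib
from collections import Counter

import boto3

kinesis = boto3.client("kinesis")

def shard_for_key(partition_key: str, shards: list) -> str:
    # Kinesis hashes the partition key with MD5 to a 128-bit integer and routes
    # the record to the shard whose hash key range contains that value.
    hash_value = int(hashlib.md5(partition_key.encode("utf-8")).hexdigest(), 16)
    for shard in shards:
        key_range = shard["HashKeyRange"]
        if int(key_range["StartingHashKey"]) <= hash_value <= int(key_range["EndingHashKey"]):
            return shard["ShardId"]
    raise ValueError("no shard matched")

shards = kinesis.list_shards(StreamName="my-stream")["Shards"]   # placeholder name
sample_keys = [f"device-{i}" for i in range(100)]                # placeholder keys
distribution = Counter(shard_for_key(key, shards) for key in sample_keys)
print(distribution)  # ideally roughly even across all shards
```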
You can see which shard each record lands on by checking the ShardId in the PutRecord response. You can also bypass the hashing mechanism and target a specific point of the hash key range with an explicit hash key. There is more information about partition key processing and record ordering in the SDK docs.
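As a concrete example, here is a sketch of reading the ShardId from the PutRecord response, and of the ExplicitHashKey parameter that overrides the partition key hashing; the stream name and payload are placeholders:

```python
# Sketch: PutRecord returns the shard the record landed on, and ExplicitHashKey
# lets you override the MD5 hashing of the partition key.
import boto3

kinesis = boto3.client("kinesis")

response = kinesis.put_record(
    StreamName="my-stream",          # placeholder
    Data=b'{"hello": "world"}',      # placeholder payload
    PartitionKey="device-42",        # high-cardinality key keeps shards balanced
    # Optional: pin the record to a specific point of the 128-bit hash key range
    # instead of hashing the partition key.
    ExplicitHashKey="0",
)
print("ShardId:", response["ShardId"])
print("SequenceNumber:", response["SequenceNumber"])
```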
Also make sure you read the troubleshooting guide: sometimes the same record can be processed by two consumers concurrently, and you want to be prepared for that, for example by making your processing idempotent.
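Purely as an illustration (this pattern is not from your setup), one way to handle duplicate deliveries is a conditional write to a hypothetical DynamoDB table ("processed-records") keyed on the record's sequence number, so a record seen twice is only processed once:

```python
# Sketch of an idempotent Kinesis consumer handler: a conditional DynamoDB write
# rejects sequence numbers that have already been recorded.
import base64

import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")

def process(payload: bytes) -> None:
    # Placeholder for your actual business logic.
    print("processing", payload)

def handler(event, context):
    for record in event["Records"]:
        sequence_number = record["kinesis"]["sequenceNumber"]
        payload = base64.b64decode(record["kinesis"]["data"])
        try:
            dynamodb.put_item(
                TableName="processed-records",                     # placeholder
                Item={"sequenceNumber": {"S": sequence_number}},
                ConditionExpression="attribute_not_exists(sequenceNumber)",
            )
        except ClientError as error:
            if error.response["Error"]["Code"] == "ConditionalCheckFailedException":
                continue  # already handled by another invocation, skip it
            raise
        process(payload)
```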
I don't know whether your issue is related to any of these pointers, but partition keys are a recurring problem, so I thought I would mention it. Good luck!