I will provide hypothetical guidance, since you chose to ignore my questions.
For a logging use case (time-based indices) it is imperative to know a few things about your future plans: how long you want to keep the logging data around (retention period), what the usage pattern for the collected data will be (query frequency, indexing frequency), and how much data there will be each day (meaning data on disk, i.e. shard size). Before weighing "per-app index" against "single index", consider the advice below. Once you have done the math on shard sizes and on how many shards there will be over the chosen retention period, you can think about per-app versus single index.
Depending primarily on the shard sizes, and secondarily on the retention period, you need to decide whether the time-based indices will be daily, weekly or monthly. A good rule of thumb is a maximum shard size of 30-50GB; anything above that can make recovery, shard relocation and searching slower, and can affect cluster stability.
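As a rough sketch of that math, here is how you might pick the index interval from an assumed daily volume and retention period (all the numbers below are placeholders, plug in your own):

```python
# Hypothetical sizing sketch: every number here is an assumption.
daily_gb = 10          # data indexed per day, measured on disk
retention_days = 90    # how long indices are kept
max_shard_gb = 50      # upper bound from the 30-50GB rule of thumb

# Assuming one primary shard per index, a daily index holds one day of
# data, a weekly index seven days, a monthly index roughly thirty.
# Pick the coarsest interval whose shard stays under the limit, so you
# carry fewer shards for the same retention period.
if daily_gb * 30 <= max_shard_gb:
    interval = "monthly"
elif daily_gb * 7 <= max_shard_gb:
    interval = "weekly"
else:
    interval = "daily"

# Total primary shards alive at once for the chosen retention period.
total_shards = {"daily": retention_days,
                "weekly": retention_days // 7,
                "monthly": retention_days // 30}[interval]
```

With 10GB/day, even a weekly shard would be ~70GB, so this sketch lands on daily indices and 90 primary shards for a 90-day retention, which is the kind of number you then weigh against your node count.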
If your apps generate enough data daily to go over the number mentioned above, you shouldn't make the indices per application. If the sizes are smaller, then again it depends. A huge number of shards on one node wastes resources and makes searching slow. Each shard uses a fixed amount of memory simply because it exists. Also, each shard is searched by one thread, and one thread is basically one CPU core. The longer the time span covered by a search (more indices being searched) and the more concurrent searches there are, the more context switching happens at the OS level between threads competing for CPU cores. All in all, don't try to squeeze hundreds of shards into a single node unless only some of them will be used at any given time. If you plan on querying all the data in your cluster most of the time, the number of shards you want on each node shrinks drastically; otherwise your cluster will not be able to keep up with the load.
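To make the thread arithmetic above concrete, here is a back-of-the-envelope sketch of how many shard-level search tasks a cluster juggles under load (all figures are assumptions for illustration):

```python
# Hypothetical load sketch: estimates the shard-level fan-out of searches.
indices_in_time_span = 30   # e.g. a 30-day query over daily indices
shards_per_index = 1        # primary shards per index
concurrent_searches = 5     # queries running at the same time
cpu_cores = 8               # cores available on the node

# Each shard is searched by one thread, so a single query fans out to:
shard_tasks_per_query = indices_in_time_span * shards_per_index

# Under concurrent load the node is juggling this many search tasks:
total_tasks = shard_tasks_per_query * concurrent_searches

# When this ratio is far above 1, the OS spends significant time
# context-switching threads on and off the CPU cores.
oversubscription = total_tasks / cpu_cores
```

Even with modest numbers like these, 150 tasks contend for 8 cores, which is the context-switching pressure described above.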
If your logging use case mostly has high activity on the most recent data (the last few days to one week), then consider the hot-warm architecture: https://www.elastic.co/blog/hot-warm-architecture-in-elasticsearch-5-x
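In rough outline, the approach from that blog post looks like this (the `box_type` attribute name follows the blog's convention; index names here are made up):

```shell
# 1) Tag nodes in elasticsearch.yml (one line per node; not runnable here):
#      node.attr.box_type: hot    # nodes holding fresh, heavily used indices
#      node.attr.box_type: warm   # nodes holding older, rarely queried indices

# 2) Pin new indices to hot nodes at creation time:
curl -XPUT 'localhost:9200/logs-2017.01.01' -H 'Content-Type: application/json' -d'
{ "settings": { "index.routing.allocation.require.box_type": "hot" } }'

# 3) Once an index ages out of the hot window, relocate it to warm nodes:
curl -XPUT 'localhost:9200/logs-2017.01.01/_settings' -H 'Content-Type: application/json' -d'
{ "index.routing.allocation.require.box_type": "warm" }'
```

This keeps the shards that receive almost all indexing and querying on your fastest hardware, while cheaper nodes carry the long retention tail.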
Building and configuring a cluster always involves testing. So please do test the performance of your queries on data that is as close as possible to real-life data, and do it on a single node with the same hardware specs as the nodes in the production cluster.