It's explained elsewhere in the paper:
Existing abstractions for in-memory storage on clusters, such as distributed shared memory [24], key-value stores [25], databases, and Piccolo [27], offer an interface based on fine-grained updates to mutable state (e.g., cells in a table). With this interface, the only ways to provide fault tolerance are to replicate the data across machines or to log updates across machines. Both approaches are expensive for data-intensive workloads, as they require copying large amounts of data over the cluster network, whose bandwidth is far lower than that of RAM, and they incur substantial storage overhead.
In contrast to these systems, RDDs provide an interface based on coarse-grained transformations (e.g., map, filter and join) that apply the same operation to many data items. This allows them to efficiently provide fault tolerance by logging the transformations used to build a dataset (its lineage) rather than the actual data. If a partition of an RDD is lost, the RDD has enough information about how it was derived from other RDDs to recompute just that partition. Thus, lost data can be recovered, often quite quickly, without requiring costly replication.
As I interpret that, handling streaming applications on those older systems would mean lots of fine-grained writes to individual cells, which in turn means replicating or logging data across the network and to disk for fault tolerance, all of which is expensive. RDDs are meant to avoid that by exposing coarse-grained, functional-style transformations that can be composed, so the system only has to remember the short chain of operations (the lineage) rather than the data itself. A rough sketch of what that looks like in practice is below.
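Here is a minimal Scala sketch in the spirit of the log-mining example from the paper; the input path and the tab-separated field layout are made up for illustration. The point is that each step is one transformation over the whole dataset, so the lineage is just a short chain of operations:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object LineageSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("lineage-sketch").setMaster("local[*]"))

    // Each transformation applies one operation to every item, so Spark only has
    // to record "filter with this predicate" / "map with this function" in the
    // lineage, not fine-grained updates to individual cells.
    val lines    = sc.textFile("hdfs://example/logs.txt") // hypothetical input path
    val errors   = lines.filter(_.contains("ERROR"))
    val messages = errors.map(_.split("\t")(1))           // assumes tab-separated fields

    // If a partition of `messages` is lost, Spark re-reads the matching input
    // partition and re-applies the filter and map to rebuild just that partition,
    // instead of restoring it from a replica.
    println(messages.count())

    sc.stop()
  }
}
```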
This is consistent with my recollection from a Spark-based MOOC I did on edX about 9 months ago (sadly I haven't touched it since): as I remember, Spark doesn't even compute the results of transformations like map on an RDD until you call an action that actually asks for output, and that laziness saves a ton of computation.
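That behaviour is lazy evaluation: transformations only build up the lineage graph, and a job runs only when an action is called. A minimal sketch (using a local SparkContext purely for illustration):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object LazySketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("lazy-sketch").setMaster("local[*]"))

    val nums = sc.parallelize(1 to 1000000)

    // Transformations return new RDDs immediately; nothing is computed yet.
    val squares = nums.map(n => n.toLong * n)
    val bigOnes = squares.filter(_ > 1000000L)

    // Only an action such as count() or take() launches a job, and Spark runs
    // the whole pipeline in one pass over the data instead of materializing
    // each intermediate RDD.
    println(bigOnes.count())
    println(bigOnes.take(3).mkString(", "))

    sc.stop()
  }
}
```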