13 votes

Are there any limitations in Core Data? For example, how many rows can a table/entity hold at maximum? How much data can reside in the database?

In general, is there a document that describes all the limitations that exist in Core Data (for iOS)?

Update: With respect to the answer given by @TechZen, my question assumed that Core Data will be using SQLite as the backing store. To make the point clear: I intend to use SQLite, and when I talk about the limitations of Core Data, I am indirectly asking about the limits of the SQLite (database) store.

So are there any limitations imposed by Core Data, other than the limitations of SQLite, when we are talking about the iOS environment?


2 Answers

27 votes

There are no logical limitations on Core Data itself beyond those imposed by available memory, disk space, etc. However, if you use an SQLite store, you inherit the default limitations of SQLite itself. If you are writing for iOS, you will never hit those limits.

Really, the only practical limitation you hit with Core Data comes from memory issues caused by reading in large BLOBs, e.g. trying to store images or audio in an SQLite store. That can be avoided by storing the BLOBs in external files.
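One way to get external storage is the "Allows External Storage" option on a binary attribute, which lets Core Data decide to keep large values outside the SQLite file. Below is a minimal programmatic sketch, assuming a hypothetical Photograph entity with an imageData attribute; in a real project you would normally just tick the checkbox in the model editor instead.

```swift
import CoreData

// Sketch: building a binary attribute that Core Data may store outside SQLite.
// "Photograph" and "imageData" are assumed names for illustration only.
let photoEntity = NSEntityDescription()
photoEntity.name = "Photograph"

let imageData = NSAttributeDescription()
imageData.name = "imageData"
imageData.attributeType = .binaryDataAttributeType
imageData.isOptional = true
// Equivalent to "Allows External Storage" in the Xcode model editor.
imageData.allowsExternalBinaryDataStorage = true

photoEntity.properties = [imageData]
```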

As an aside, I would warn you that I can tell by the way you phrased the question that you are thinking about Core Data wrong.

Core Data is not an object wrapper for SQL. Core Data is not SQL. Entities are not tables. Objects are not rows. Attributes are not columns. Core Data is an object graph management system that may or may not persist the object graph, and may or may not use SQL far behind the scenes to do so. Trying to think of Core Data in SQL terms will cause you to completely misunderstand Core Data and result in much grief and wasted time.

0 votes

Core Data is a rich and sophisticated object graph management framework capable of dealing with large volumes of data. The SQLite store can scale to terabyte-sized databases with billions of rows, tables, and columns. Unless your entities themselves have very large attributes or large numbers of properties, 10,000 objects is considered a fairly small size for a data set. When working with large binary objects, review Binary Large Data Objects (BLOBs).

Binary Large Data Objects (BLOBs)

If your application uses Binary Large OBjects (BLOBs) such as image and sound data, you need to take care to minimize overheads. Whether an object is considered small or large depends on an application’s usage. A general rule is that objects smaller than a megabyte are small or medium-sized and those larger than a megabyte are large. Some developers have achieved good performance with 10 MB BLOBs in a database. On the other hand, if an application has millions of rows in a table, even 128 bytes might be a CLOB (Character Large OBject) that needs to be normalized into a separate table.

In general, if you need to store BLOBs in a persistent store, use an SQLite store. The other stores require that the whole object graph reside in memory, and store writes are atomic (see Persistent Store Types and Behaviors), which means that they do not efficiently deal with large data objects. SQLite can scale to handle extremely large databases. Properly used, SQLite provides good performance for databases up to 100 GB, and a single row can hold up to 1 GB (although of course reading 1 GB of data into memory is an expensive operation no matter how efficient the repository).
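For reference, here is a minimal sketch of loading an SQLite-backed store with NSPersistentContainer, assuming a model named "Model" in the app bundle. SQLite is already the default store type for NSPersistentContainer, so this mostly just makes the choice explicit.

```swift
import CoreData

// Sketch: explicitly requesting the SQLite store type.
let container = NSPersistentContainer(name: "Model") // "Model" is an assumed model name
if let description = container.persistentStoreDescriptions.first {
    // As opposed to NSBinaryStoreType or NSInMemoryStoreType, which load the
    // whole object graph into memory and write atomically.
    description.type = NSSQLiteStoreType
}
container.loadPersistentStores { _, error in
    if let error = error {
        fatalError("Failed to load store: \(error)")
    }
}
```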

A BLOB often represents an attribute of an entity—for example, a photograph might be an attribute of an Employee entity. For small to modest BLOBs (and CLOBs), create a separate entity for the data and create a to-one relationship in place of the attribute. For example, you might create Employee and Photograph entities with a one-to-one relationship between them, where the relationship from Employee to Photograph replaces the Employee's photograph attribute. This pattern maximizes the benefits of object faulting (see Faulting and Uniquing). Any given photograph is only retrieved if it is actually needed (if the relationship is traversed).
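A rough sketch of that pattern is below, assuming hypothetical NSManagedObject subclasses generated from Employee and Photograph entities joined by a to-one "photograph" relationship. Fetching employees does not pull in the image data; each Photograph stays a fault until the relationship is traversed.

```swift
import CoreData

// Assumed classes for illustration; in practice these are generated from the model.
final class Photograph: NSManagedObject {
    @NSManaged var imageData: Data?
}

final class Employee: NSManagedObject {
    @NSManaged var name: String?
    @NSManaged var photograph: Photograph? // replaces a binary "photograph" attribute
}

func listNames(in context: NSManagedObjectContext) throws {
    let request = NSFetchRequest<Employee>(entityName: "Employee")
    let employees = try context.fetch(request)
    for employee in employees {
        // Only the Employee data is loaded here; the related Photograph
        // objects remain faults.
        print(employee.name ?? "unnamed")
    }
    // Traversing the relationship is what finally loads the BLOB:
    // let data = employees.first?.photograph?.imageData
}
```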

It is better, however, if you are able to store BLOBs as resources on the file system and to maintain links (such as URLs or paths) to those resources. You can then load a BLOB as and when necessary.
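A minimal sketch of that approach, assuming a hypothetical entity with a string "fileName" attribute: the image bytes live in the Documents directory, and Core Data persists only the lightweight reference.

```swift
import CoreData

// Sketch: keep the BLOB on disk, store only its file name in Core Data.
// The "fileName" attribute is an assumed name for illustration.
func savePhoto(_ data: Data, for photo: NSManagedObject, in context: NSManagedObjectContext) throws {
    let fileName = UUID().uuidString + ".jpg"
    let directory = try FileManager.default.url(for: .documentDirectory,
                                                in: .userDomainMask,
                                                appropriateFor: nil,
                                                create: true)
    try data.write(to: directory.appendingPathComponent(fileName), options: .atomic)

    // Persist only the reference, not the data itself.
    photo.setValue(fileName, forKey: "fileName")
    try context.save()
}

func loadPhoto(for photo: NSManagedObject) throws -> Data? {
    guard let fileName = photo.value(forKey: "fileName") as? String else { return nil }
    let directory = try FileManager.default.url(for: .documentDirectory,
                                                in: .userDomainMask,
                                                appropriateFor: nil,
                                                create: false)
    // The BLOB is read only when it is actually needed.
    return try Data(contentsOf: directory.appendingPathComponent(fileName))
}
```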