Consume data in the background, and lower memory usage by batching imports and preventing duplicate records in the Core Data store.
- iOS 11.0+
- macOS 10.13+
- Xcode 11.3+
- Core Data
This sample app shows a list of earthquakes recorded in the United States in the past 30 days by consuming a U.S. Geological Survey (USGS) real-time data feed.
Press the app’s refresh button to load the USGS JSON feed on the URLSession’s default delegate queue, which is a serial operation queue running in the background. Once the feed downloads, continue working on this queue to import the large number of feed elements to the store without blocking the main queue.
Import Data in the Background
To import data in the background, you need two managed object contexts: a main queue context to provide data to the user interface, and a private queue context to perform the import on a background queue.
Create a private queue context by calling the persistent container’s newBackgroundContext() method.
When the feed download finishes, use the task context to consume the feed in the background. Wrap your work in a perform(_:) closure so that it executes on the context’s private queue.
For more information about working with concurrency, see Using Core Data in the Background.
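The import flow described above can be sketched as follows. QuakeImporter, importFeed, and the completion handler are illustrative names assumed for this sketch, not part of the sample:

```swift
import CoreData

// Hypothetical importer illustrating the background-import pattern.
final class QuakeImporter {
    private let container: NSPersistentContainer

    init(container: NSPersistentContainer) {
        self.container = container
    }

    func importFeed(_ data: Data, completion: @escaping (Error?) -> Void) {
        // newBackgroundContext() returns a private queue context backed
        // by the same persistent store coordinator as the view context.
        let taskContext = container.newBackgroundContext()

        // perform(_:) runs the closure asynchronously on the context's
        // private queue, so the main queue stays responsive.
        taskContext.perform {
            // Decode `data` and insert managed objects here
            // (JSON decoding and entity setup omitted in this sketch).
            do {
                try taskContext.save()
                completion(nil)
            } catch {
                completion(error)
            }
        }
    }
}
```

The completion handler is called on the context’s private queue; dispatch any UI work it triggers back to the main queue.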
Update the User Interface
To show the imported data in the user interface, merge it from the private queue into the main queue.
Set the view context’s automaticallyMergesChangesFromParent property to true. Both contexts are connected to the same persistent store coordinator, which serves as their parent for data merging purposes. This is more efficient than merging between parent and child contexts.
When the background context saves, Core Data observes the changes to the store and merges them into the view context automatically. Then NSFetchedResultsController observes the changes to the view context, and updates the user interface accordingly.
Finally, dispatch any user interface state updates back to the main queue.
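A minimal sketch of this configuration; the function names are assumptions for illustration:

```swift
import CoreData

// Configure the view context to automatically merge saves made by
// background contexts that share its persistent store coordinator.
func configureViewContext(_ container: NSPersistentContainer) {
    container.viewContext.automaticallyMergesChangesFromParent = true
}

// After background work completes, hop back to the main queue for any
// user interface state changes (ending a refresh control, and so on).
func finishRefresh(updateUI: @escaping () -> Void) {
    DispatchQueue.main.async {
        updateUI()
    }
}
```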
Work in Batches to Lower Your Memory Footprint
Core Data caches the objects that are fetched or created in a context, to avoid a round trip to the store file when these objects are needed again. However, your app’s memory footprint grows as you import more and more objects. To avoid a low memory warning or termination by iOS, perform the import in batches and reset the context after each batch.
Split the import into batches by dividing the total number of records by your chosen batch size.
Reset the context after importing each batch by calling its reset() method, which frees the objects the context cached during that batch.
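The batch arithmetic can be sketched with a small helper; batchRanges is an illustrative name, not part of the sample:

```swift
import Foundation

// Split `count` records into consecutive ranges of at most `batchSize`
// elements; each range becomes one import batch.
func batchRanges(count: Int, batchSize: Int) -> [Range<Int>] {
    stride(from: 0, to: count, by: batchSize).map { start in
        start..<min(start + batchSize, count)
    }
}

// In the import loop, insert the objects for one range, save the
// context, then call taskContext.reset() before starting the next
// range so the memory footprint stays flat across batches.
```

For example, 10 records with a batch size of 4 yield three batches of 4, 4, and 2 records.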
Prevent Duplicate Data in the Store
Every time you refresh the feed, the data downloaded from the remote server contains all earthquake records for the past month, so it can have many duplicates of data you’ve already imported. To avoid creating duplicate records, constrain an attribute, or combination of attributes, to be unique across all instances.
The code attribute uniquely identifies an earthquake record, so constraining the Quake entity on code ensures that no two stored records have the same code value.
Select the Quake entity in the data model editor. In the data model inspector, add a new constraint by clicking the + button under the Constraints list. A constraint placeholder appears.
Double-click the placeholder to edit it. Enter the name of the attribute (or comma-separated list of attributes) to serve as unique constraints on the entity.
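The same constraint can also be expressed on a programmatically built model; this sketch is an equivalent of the editor steps above, not how the sample defines its model:

```swift
import CoreData

// Build a Quake entity whose `code` attribute must be unique
// across all stored instances.
func makeQuakeEntity() -> NSEntityDescription {
    let codeAttribute = NSAttributeDescription()
    codeAttribute.name = "code"
    codeAttribute.attributeType = .stringAttributeType

    let entity = NSEntityDescription()
    entity.name = "Quake"
    entity.properties = [codeAttribute]
    // A uniqueness constraint is a list of attribute-name groups;
    // each inner array is one constraint.
    entity.uniquenessConstraints = [["code"]]
    return entity
}
```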
When saving a new record, the store now checks whether any record already exists with the same value for the constrained attribute. In the case of a conflict, the context’s NSMergePolicy comes into play. With NSMergeByPropertyObjectTrumpMergePolicy, the new record overwrites all fields in the existing record.
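Setting the merge policy on the importing context is a one-line configuration; this sketch assumes the overwrite-the-existing-record behavior described above, which corresponds to NSMergeByPropertyObjectTrumpMergePolicy:

```swift
import CoreData

// Resolve uniqueness-constraint conflicts in favor of the incoming
// object, so a re-downloaded earthquake overwrites the stored record
// that has the same code value.
func configureMergePolicy(for taskContext: NSManagedObjectContext) {
    taskContext.mergePolicy = NSMergeByPropertyObjectTrumpMergePolicy
}
```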