Optimizing Your App for Multitasking on iPad in iOS 9
Multitasking in iOS 9 allows two side-by-side apps and the Picture-in-Picture window to run onscreen simultaneously. Discover essential techniques for designing efficient, responsive apps to give your users a fluid, immersive experience with Slide Over, Split View, and Picture-in-Picture.
[Cheering and applause]
BRITTANY PAINE: Hi, everyone.
My name is Brittany Paine,
and later you will meet Jon Drummond.
We are engineers on the SpringBoard team.
Today we are going to talk to you about optimizing your app
for multitasking on iPad in iOS 9.
This is actually the third talk
in the series related to multitasking.
The first two already happened.
If you didn't get a chance to watch them,
you should go watch them.
So I have a lot to say today.
Jon does too.
We are going to go pretty fast.
Put on your listening ears.
So this is your app.
In iOS 8, your app had the full device at your disposal.
It could use as much system resources
as the device had available.
However, in multitasking in iOS 9 this is no longer the case.
There can and probably will be more
than one app on screen at a time.
And all of the apps on screen now have
to share the system resources.
Some system resources like CPU, GPU,
and disk I/O degrade gradually
as multiple processes compete for them.
Let's look at an example of CPU.
So for the new developers in the audience, the Holy Grail
of app responsiveness is to be able to update your UI
at 60 frames per second, or 60 FPS.
This means you have about 16 milliseconds
to get all your work done in response to a user event.
So if this is your app,
let's say it is doing a great job rendering at 60 FPS.
It can get all its work done in only 10 milliseconds.
Each of these slices represents one millisecond.
Then, the user brings in a secondary app.
That app is also doing a great job updating at 60 FPS.
And it gets all its work finished in 6 milliseconds.
Both apps combined, as you can see, are already using
up all 16 milliseconds that we have in order
to be able to render at 60 FPS.
And then the user starts a PiP,
and the PiP is also doing a great job,
and he needs eight milliseconds to render his UI.
The combined total of all three apps
on screen are 24 milliseconds, and that means
that the rendering is actually at about 40 FPS instead
of the 60 FPS that we want.
The user will notice stuttering.
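The frame-budget arithmetic above can be sketched out directly; the 10, 6, and 8 millisecond figures are the example numbers from the talk.

```swift
// Frame budget math from the example: three apps sharing one 60 FPS display.
let frameBudgetMs = 1000.0 / 60.0   // ≈ 16.7 ms available per frame at 60 FPS

let primaryMs = 10.0    // primary app's per-frame work
let secondaryMs = 6.0   // secondary app's per-frame work
let pipMs = 8.0         // Picture-in-Picture's per-frame work

let totalMs = primaryMs + secondaryMs + pipMs   // 24 ms combined
let effectiveFPS = 1000.0 / totalMs             // ≈ 41.7 FPS, well short of 60

print("Budget: \(frameBudgetMs) ms, used: \(totalMs) ms, effective: \(effectiveFPS) FPS")
```

Since 24 ms exceeds the 16.7 ms budget, frames are dropped and the effective rate falls to roughly the 40 FPS the talk mentions.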
The same type of problem applies to the GPU.
However, some system resources like memory can result
in a much worse user experience
when multiple processes are competing for them.
Look at the example again.
Here is the app again.
Below the iPad is the system memory footprint.
You can see on the very left we have the system using some
amount of memory.
In the middle, we have your app, the blue app,
using some amount of memory.
Then we have all of this free space.
There is so much room for activities.
You can do all kind of stuff.
And then, the user brings in a secondary app,
and the secondary app needs memory, too.
But we are still in good shape.
We have a tiny bit of free memory left.
Then guess what happens next?
The PiP happens.
Now we are out of memory.
When the system can't find free memory,
it has to kill a process.
In that instance, the user gets ripped out of his or her --
we'll say her -- current context,
and we are taken back to SpringBoard.
In my opinion, that is a much worse user experience
than just the stuttering UI that can happen
when multiple processes compete for CPU or GPU.
So now you may be asking yourself, what is SpringBoard,
and why are SpringBoard engineers here today to talk
to you about multitasking?
Well, SpringBoard is a lot of things.
It is the Home screen.
The Lock screen.
Icons, wallpaper, system gestures, notification center.
I got lost.
Okay, all of these things and more.
But most importantly, we are a UI application just like y'all.
SpringBoard is also the original multitasking app.
Prior to iOS 9 and also in iOS 9,
SpringBoard is always considered to be running in the foreground,
even though your app is visible to the user.
Because of that, we face the same challenges
that y'all do today in the new multitasking environment.
And we've learned a lot of lessons along the way.
And we would like to share some of those lessons
with y'all today so you don't make the same mistakes.
Let's get started.
Optimizing your app, the easy stuff, part one.
First thing, use the Leaks instrument and find
and fix your memory leaks.
How many of you have forgotten to write a dealloc?
There has to be more hands than that.
Well, I have.
Happens to the best of us.
And the Leaks instrument can help you find those problems,
and they are usually really easy to fix.
However, the best way to avoid leaks is to use Swift.
You should do that instead.
Next, you should use the Allocations instrument to find
and fix retained cycles and unbounded memory growth.
Last, you should use the Time Profiler instrument to find
and fix inefficient algorithms.
I'm not going to talk about any of these problems today.
These types of problems apply to all apps, not just apps
that are interested in multitasking.
Instead, we are going to focus on the things most important
in the new multitasking environment.
In our experience, the biggest lesson we learned is
that great performance involves trade-offs.
Are you going to precompute your data and keep it in memory?
Or are you going to calculate it on the fly and use CPU?
Are you going to keep all your resources locally on disk?
Or are you going to keep them in the cloud
and fetch them whenever you need them?
Are you going to run your animations
on the CPU or on the GPU?
Let's look at an example
of a sample app we have been working on called IconReel.
So here is our app right out of the box.
It starts with a handful of icons.
When the user taps on an icon, we zoom up the icon
to show a more detailed view.
There is a sticky dock at the bottom
where users can save their favorite icons.
When a user adds more icons, we add more pages,
and the user can scroll between the pages.
Does this look familiar to anyone?
Yeah, that was on purpose.
BRITTANY PAINE: So we can keep all of these icons
in memory because each icon is only about 60 kilobytes.
We will store them all in an NSDictionary.
It turns out scrolling is great.
Why? Because we have no other pages to scroll to.
So some of our users like to add more icons.
So we add a few dozen more icons to our NSDictionary.
So far scrolling is okay.
Now, this animation may look familiar to you.
That's because there are a lot of apps
that have an animation like this.
Think of photos scrolling around in a photos or videos app.
You may have an animation like this in your app.
Everything is going great.
Some of our really good customers download even more icons.
And now we have several dozen more icons in our NSDictionary,
and it turns out scrolling is still perfect.
That is, until sometimes we are seeing IconReel crash,
and sometimes in a multitasking environment we are seeing other
foreground apps crash.
Let's investigate what is going on here.
We took a time profiler trace and saw that CPU
and disk I/O were minimal while scrolling.
But the Allocations instrument showed us our memory usage was
very high because all of our icons were kept in memory.
What is happening here is
that IconReel is quickly exhausting the available system
memory, and when the system gets short on free memory,
it tries to free some up.
If there's none it can find, it has to terminate processes.
Sometimes this means terminating IconReel,
and sometimes this means terminating another foreground app.
But all of the time this is a horrible user experience.
We want to avoid this.
As a good multitasking citizen, IconReel needs
to get its memory usage under control so all the apps
on screen can coexist and create the great user experience
that our users expect.
This brings me to the idea of the working set.
One of the most important things you can do
to optimize your memory usage is to understand
and manage your working set.
Your working set should consist of only the critical objects
and resources that your app needs right now.
You should keep it small in order
to keep your memory usage low.
It might change based on context.
For example, your working set may contain different objects
when the app is in the foreground
versus when the app is in the background, or it might change
when you change view controllers.
Last, you shouldn't let it grow unbounded.
We saw what happens when the system runs out of memory.
We don't want you to be that app.
Let's look at IconReel's working set.
At the end of the last example,
IconReel's working set was every icon we were keeping in memory.
Is this the best working set?
No. So what do we really need right now?
Well, all we really need is the current page of icons
that the user is viewing.
So this is a much better working set.
So now let's try scrolling with our working set
of only one page of icons.
Oh! Oh! Okay.
That was horrible.
And if you didn't see it because maybe I was standing
in your way, scrolling was horrible.
There was a multisecond hang before the next page
of icons scrolled into view.
Let's investigate what is going on.
We took another time profiler trace, and we saw that CPU
and disk I/O are actually very high while scrolling,
and Allocations showed us that our memory usage is very low.
This is the exact opposite problem from what we just had.
Like I said, great performance involves making a series of trade-offs.
What can we do to fix this?
The answer is to manage our CPU time better.
So the most important thing you can do
to keep your app responsive is to do as little work
as possible on your main thread.
The main thread's top priority is to respond to user events,
and doing unnecessary work on your main thread means
that the main thread has less time to respond to user events.
Because you're sharing the CPU time with all of the apps
on screen, you need to be hyperaware
of what your main thread is doing.
Any work being performed on the main thread that is not directly
in response to a user event should be performed elsewhere.
What tools can we use to make sure that we keep responding
to user events but also make sure we get the extra work
finished in a timely manner?
Well, the answer is we can use GCD and Quality of Service.
There was a great talk last year at WWDC 2014,
and there's a great talk coming up on Friday on GCD
and Quality of Service.
So I'm not going to go into a lot of detail here.
There's two important things.
The first thing is that your main thread is running
at the highest priority, that's the user interactive priority.
The other thing is that the Quality
of Service bands are shared
across all foreground applications.
So everyone's user-initiated queues get equal access
to the CPU, and everyone's background queues get equal
access to the CPU.
No single foreground app is prioritized above any other.
If you prioritize your work appropriately,
the system can guarantee that we get the best possible
performance while there are multiple apps on screen.
So how does this apply to IconReel?
Well, we are going to load the icons
on a separate thread using GCD
at the user-initiated Quality of Service.
The first thing we do, we create our dispatch queue,
called Icon Generation Queue, and it's a serial queue.
While we are on the main thread, we're going to dispatch async,
generating each icon on the next page onto our icon queue.
It isn't obvious from this code snippet,
but dispatch asyncing work from the main thread to a queue
like this downgrades the quality of service
to the user-initiated quality of service,
which is the second-highest quality of service.
So this code is effectively loading all the icons
at the user-initiated quality of service.
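The pattern Brittany describes might look like the following sketch in Swift; the queue label, `generateIcon(at:)`, and `iconsPerPage` are hypothetical stand-ins for IconReel's real code.

```swift
import Dispatch

// A serial queue dedicated to icon generation. Work dispatched onto it from
// the main thread runs at the user-initiated quality of service, leaving the
// main thread free to respond to user events.
let iconGenerationQueue = DispatchQueue(label: "com.example.IconReel.iconGeneration")

let iconsPerPage = 16           // hypothetical page size
var generatedIcons: [Int] = []  // only touched on iconGenerationQueue

// Hypothetical stand-in for IconReel's expensive per-icon work:
// read the image from disk, decode it, round the corners, and so on.
func generateIcon(at index: Int) {
    generatedIcons.append(index)
}

// Called on the main thread: asynchronously generate each icon on a page,
// so the main thread never waits on disk I/O or decoding.
func preloadPage(_ page: Int) {
    for index in (page * iconsPerPage)..<((page + 1) * iconsPerPage) {
        iconGenerationQueue.async {
            generateIcon(at: index)
        }
    }
}
```

Because the queue is serial, the icons for a page are generated in order, one at a time, without any additional locking around the shared array.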
Okay. So let's try scrolling again.
That was better.
This solution could work for you and your apps
if your design allows for it.
Let's say you can show placeholder images while you are
waiting for the real content to load.
However, IconReel's design doesn't allow for it.
We have to have the icon already loaded before it scrolls on screen.
So what we really need, we need a way
to temporarily boost the icon generation queue
up to the same priority as the main thread,
right before the icon scrolls on to screen
so that it can finish generating the icon faster.
There's a way to do that, too.
It's by using the quality of service overrides.
This comes in handy when we have this type of priority inversion
where we have a high-priority thread or queue blocked,
waiting on a low-priority thread or queue to finish some work.
And the awesome part is that quality
of service overrides can happen for you automatically
if you provide the system with enough information.
You may be thinking: Brittany, how can I provide the system
with enough information?
Well, here is a handy chart.
Again, the talk from last year and the talk coming
up on Friday go into this in more depth.
But the take-home here is that Dispatch Group Wait
and Dispatch Semaphore Wait are not your friends.
You should audit your code for uses of these functions
and be aware that they don't fix these types
of priority inversions.
So let's look at what we are going to do in IconReel.
Right before that first column of icons scroll on screen,
on the main thread we are going
to dispatch sync this empty block onto our icon queue.
What that will do is boost the priority of the icon queue
up to match the priority of the main thread
until that block executes,
at which point the icon queue will go back
down to its normal priority.
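The empty-block boost might be sketched like this; the queue label and function name are illustrative, not IconReel's actual code.

```swift
import Dispatch

// The serial icon-generation queue (hypothetical label).
let iconQueue = DispatchQueue(label: "com.example.IconReel.iconGeneration")

// Called on the main thread right before a column of icons scrolls on screen.
// Submitting an empty block with sync makes the main thread wait on the serial
// queue, so GCD boosts the queue's priority up to the main thread's until the
// block executes; the queue then drops back to its normal priority.
func finishPendingIconWork() {
    iconQueue.sync { }
}
```

A useful side effect is that when `finishPendingIconWork()` returns, every block enqueued before it has finished, so the icons for the incoming column are guaranteed to be ready.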
Now let's look at scrolling and see what it looks like.
Oh, yeah, looks great, with a nice slow swipe.
So let's try a slow swipe followed by a fast swipe.
Oh, man! We are right back to where we started
where that next page, there's
like a hang before it comes on screen.
So now what?
There is a pattern here.
Pulled out Instruments again.
We took another trace.
This time we also got a calculator,
and we did some math, and we found that we just can't load,
read the icon image from disk, decode the icon image,
and make the icon nice and pretty in the amount of time
that it takes for a fast user to swipe to the next page.
So now what?
I guess we can just wait for faster devices.
BRITTANY PAINE: Some applause for that.
We can do better.
We have to be smarter.
Math calculations told us we would have much better
performance if we had the next page
of icons already loaded in memory.
This ensures that even when you are scrolling quickly
through multiple pages, we have enough time
to load the next page of icons before the user can swipe again.
So let's increase our working set size from one page
to three pages: the page you're currently viewing
and the page on either side.
We don't have a magic eight ball.
So we don't know which way the user is about to scroll.
Having one page on either side seems like the best compromise.
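The three-page working set is simple enough to express directly: the current page plus one page on either side, clamped to the valid range. The function name is illustrative.

```swift
// A sketch of the three-page working set: the page the user is viewing plus
// the page on either side, clamped so we never reference a nonexistent page.
func workingSetPages(current: Int, pageCount: Int) -> [Int] {
    guard pageCount > 0 else { return [] }
    let lower = max(0, current - 1)
    let upper = min(pageCount - 1, current + 1)
    return Array(lower...upper)
}
```

In the middle of the reel this returns three pages, e.g. `workingSetPages(current: 3, pageCount: 10)` gives `[2, 3, 4]`; at either end it shrinks to two.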
Now let's try scrolling.
That looks much better.
Now by the time the user gets to the next page,
we've already started loading the next page of icons.
However, this actually increases our memory usage.
When we had our working set that contained only one page,
we had this memory footprint.
Now we have this memory footprint.
We need to be aware of how we are affecting other apps.
So now let's look at what happens when the user brings
in a secondary app with IconReel already on screen.
IconReel resizes to show only three columns instead of four.
This is a great opportunity to reassess our working set.
Do we still need three pages of four columns
of icons in memory at a time?
No. We only need three pages of three columns
of icons in memory at a time.
So effectively it looks a little bit like this.
Now, I could go through
and manually throw away those extra columns of icons
that we don't need anymore, but it was a lot
of work to generate them.
I don't want to redo it if I don't have to.
It would be great if there was a place
where I could put these icons where,
if the system needed the memory, we could get rid of them.
If the system didn't need the memory, then they could stay
in memory for us so that we could use them again
when we need them.
There is a way to do that,
and that is by listening to memory warnings.
Memory warnings happen when the system is under memory pressure
or your process is approaching its memory limit.
I'd really like to give you a number that you could code
to for your memory limit.
Unfortunately, there's no such number.
The limit is different per device
and per application context.
So my best advice is to just listen
for memory warnings and react to them.
So what should you do?
Well, you should remove anything not in your working set.
This includes clearing cache data, releasing images,
and releasing view controllers.
Here are the APIs that you can use to listen
for memory warnings, and I want to make it clear that none
of these are new in iOS 9.
They have been around for a while.
Hopefully y'all are already using all of them.
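The memory-warning APIs themselves are UIKit-only (on iOS you would observe `UIApplication.didReceiveMemoryWarningNotification` or override `didReceiveMemoryWarning` in a view controller), but the observer pattern is plain Foundation. In this sketch a hypothetical notification name stands in for the UIKit one so the whole flow is visible.

```swift
import Foundation

// Hypothetical stand-in for UIApplication.didReceiveMemoryWarningNotification.
let memoryWarning = Notification.Name("ExampleDidReceiveMemoryWarning")

final class IconStore {
    // Pretend cache contents outside the working set.
    private(set) var cachedCount = 3
    private var token: NSObjectProtocol?

    init() {
        // queue: nil delivers the notification synchronously on the posting
        // thread, which keeps this sketch simple.
        token = NotificationCenter.default.addObserver(
            forName: memoryWarning, object: nil, queue: nil
        ) { [weak self] _ in
            // On a memory warning, drop everything outside the working set:
            // cached data, prerendered images, off-screen view controllers.
            self?.cachedCount = 0
        }
    }
}
```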
I could go through on a memory warning and manually get rid
of these icons myself.
I'm lazy and don't want to do that.
It would be great if there was a tool that managed all that crap
for me so I didn't have to do it.
Turns out we have one of those, too.
It is called NSCache.
It is similar to an NSDictionary, and it's great
for objects that can be re-created quickly.
It also handles the memory warnings for you
by automatically evicting items from itself, and it's also aware
of application context like foreground versus background
and evicts items when necessary.
It does other cool stuff, too, but we don't have time
to talk about that today.
Make sure to check out the NSCache documentation.
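A minimal icon cache built on NSCache might look like this; the key scheme and function names are illustrative. The important property is that NSCache evicts entries on its own under memory pressure, so a miss simply means regenerating the icon.

```swift
import Foundation

// Keys are icon names; values are the expensively generated icon data.
// NSCache evicts entries itself when memory gets tight, so entries may
// disappear between calls.
let iconCache = NSCache<NSString, NSData>()

func store(icon: Data, named name: String) {
    iconCache.setObject(NSData(data: icon), forKey: NSString(string: name))
}

func cachedIcon(named name: String) -> Data? {
    // nil means either "never stored" or "evicted": regenerate either way.
    guard let object = iconCache.object(forKey: NSString(string: name)) else {
        return nil
    }
    return Data(referencing: object)
}
```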
Now when we have IconReel up in a split view,
instead of throwing away these icons,
let's put them in an NSCache.
In fact, let's just put every icon we ever generate that's not
in our working set in NSCache.
And when we have a memory warning,
we can let NSCache do all the work of evicting all those icons
that are not in our working set.
We started here, where we had every icon in memory.
So our memory usage was high, but scrolling was great and CPU
and disk I/O were low.
But sometimes we were seeing apps crash,
and that is not good.
So then we adjusted our working set size to only one page
of icons, so the memory usage was low, but our CPU
and disk I/O were very high while scrolling,
and this resulted in crappy scrolling performance.
Then, we revised our working set size to be three pages of icons,
which increased our memory usage a little bit.
And then every other icon that we generated that was not
in our working set we threw into NSCache.
We have increased our overall memory usage,
but most of our increase is now adaptable
to the surrounding circumstances.
For many of you this is good enough.
If your app can run in a multitasking environment
with Maps running Flyover, then you are probably in great shape.
I presented a bunch of things that your app can do today
to be a good multitasking citizen,
but sometimes these things are not enough.
To talk to you about what you should do next,
I would like to present Jon Drummond.
JON DRUMMOND: Thank you, Brittany.
So understanding how your app uses memory can pay off
tremendously for both your app's performance
and the rest of the system.
But sometimes that just isn't enough.
What if you are doing everything we talked about
and doing it all correctly, but things are still going wrong?
We will talk about some ways to be even more adaptive
with the way that your apps manage their memory footprint.
To start, I would like to go back to the multitasking example
where we have a primary and a secondary app.
We are under memory pressure right now,
so the system issues a memory warning and the apps,
being good multitasking citizens, will take care
of trimming their caches and other objects they don't need.
Great. Now, the user brings in a PiP, okay?
We have space, it's good.
We are not under pressure, but we are getting close.
Now the user goes and resizes the secondary app
to be fifty-fifty with the primary.
This causes a spike in memory growth, and CPU for that matter.
The system did not have time to react to this.
Unfortunately, it was forced
to kill the primary app here even though the primary app was
not the cause of the memory growth.
Dropping back into SpringBoard is never a good experience,
as Brittany showed us.
Before I finish or keep going here,
I want to share a quote with you.
That is that "the world outside your process should be regarded
as hostile and bent upon your destruction."
I don't mean that to sound ominous and I don't want you
to think I'm paranoid, but part
of being a good multitasking citizen is adapting
to everything that is happening around you.
Sometimes what is happening around you is extreme.
You might be doing everything fine,
but the system might conspire to terminate you.
It is not really, but being prepared for this kind
of situation will help you survive.
So first things first.
Memory is the most constrained resource on iOS.
That is not to say that other resources are not constrained.
They are, but they just degrade differently.
When the system runs out of memory,
it's like hitting a wall.
Something has to go.
It's got to get the memory back.
As we saw from the previous example,
sometimes the system can acquire memory faster
than it can be released.
Even if we had time to issue a memory warning there,
and even if the apps, all three of them running,
had time to respond to it, it's not clear
that they would have had enough CPU cycles
to actually do something meaningful and clean
up enough memory to accommodate the growth.
To understand how the system can reclaim memory,
we have to understand how memory is classified.
I'm going to break it down into three different groups,
the first of which we call dirty.
This is memory in active use by your process.
These are your objects, these are your heap allocations,
statics, globals, everything you have cached,
actually pretty much everything.
And it is not reclaimable by the system
because it's in use by you.
The second is called purgeable.
This is otherwise dirty memory that has been explicitly marked
as not in use by the app,
so the system knows it can take it back if needed.
The third type we call clean, and that is read-only memory
that is backed by files on disk.
The system can reclaim the memory
because it can always bring it back because the file is there.
Back to our system memory bars.
What does this look like?
We have a scenario with very, very little free memory.
But it doesn't look like that to the system.
The system knows it has leeway it can use any time it wants to free
up memory for growth, and it can do this
without issuing memory warnings
or requiring the apps to intervene.
The goals for your app and for adapting the memory usage here
are to minimize your dirty memory
and maximize your purgeable and clean memory.
We are going to start with minimizing dirty.
First, yeah, use less of it.
I know that's really easy for me to say standing up here,
but if you manage your working set
and track the objects you are allocating, use Instruments,
do all that, once you've taken care of that, the next step is
about reclassifying this dirty memory as purgeable.
If you do this, it can be automatically reclaimed
by the system, and it is best for nice-to-have data,
data that you don't need right now, might need in the future,
and anything you might otherwise put in a cache, for example.
Let's apply this to IconReel.
This resembles the scenario we left off.
We have caches of icons on either side in our working set.
For this example, I'm going to reclassify our data a bit.
Rather than working on a per-icon chunk,
I'm going to group the icons into sets of columns,
just because it makes this a bit easier here
but it doesn't otherwise change the dynamic of our app.
As the user scrolls around, we update our working set,
things get cached, things get pulled
out of the cache, it's all the same.
Let's classify this memory use.
First, we've got our memory in use.
Classified as dirty.
That's our working set.
We've got all the objects in our caches, those are purgeable.
This presents us with an interesting opportunity
to be even fancier with the way we classify memory.
Let's, for example, mark even more of the data as in use
or dirty, even some that's in the cache.
What this has done for us is introduced a second level of caching.
The first one, the outside-most icons,
the ones we are least likely to need soon,
we'll let the system reclaim whenever the system needs it.
We don't care.
But we'd like to retain some control
over icons we might need soon.
So even though they're still in the cache,
we're going to mark them in use
so the system can't take them without asking.
These are the ones we'll release
when we respond to a memory warning.
This leaves us with the working set,
which we absolutely need right now,
and there's nothing to do about that.
What does this look like in the multitasking example?
I'll return to the scenario where things went wrong before.
The PiP has come in.
We're out of memory.
But now, the system knows that both the primary
and the secondary app, being good multitasking citizens,
have this region of purgeable memory that can be cleared.
The system, without asking or telling anybody anything,
can take that from the app
and return the system to a good state.
But of course, the user keeps using the device
and memory grows again.
We have another memory warning.
But since the apps have only lost the purgeable data,
they are free to respond to memory warnings and clear
up the caches, thus returning the system to a good state.
How can you use purgeable data in your apps?
It is simple.
There's a class called NSPurgeableData.
And it is a subclass of NSMutableData
with no other properties; it is simple.
The only additions are these three methods,
the first being beginContentAccess.
This tells the system you are using the memory:
don't take it away from me.
The second, to go along with it, is endContentAccess.
The memory is now considered purgeable,
and you may lose it at any time.
And the third, isContentDiscarded, tells you whether the system has taken it
from you while you weren't using it.
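Wrapping an icon in NSPurgeableData could look like the following sketch; the function names are hypothetical, and the key detail is that a new NSPurgeableData starts out "accessed," so you must balance that with an end-access call before the bytes become purgeable.

```swift
import Foundation

// Sketch: wrap a generated icon so the system may reclaim the bytes between
// accesses. Names are illustrative, not IconReel's real API.
func makePurgeableIcon(from bytes: [UInt8]) -> NSPurgeableData {
    let data = NSPurgeableData(bytes: bytes, length: bytes.count)
    // A fresh NSPurgeableData is created in the "accessed" state; balance it
    // so the contents actually become purgeable.
    data.endContentAccess()
    return data
}

// Returns false if the system already discarded the content, in which case
// the caller must regenerate the icon.
func withIcon(_ data: NSPurgeableData, _ body: (NSData) -> Void) -> Bool {
    guard data.beginContentAccess() else {
        return false
    }
    defer { data.endContentAccess() }   // purgeable again once we're done
    body(data)
    return true
}
```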
To return to the system resource bars, this is pretty much
where we left off with Brittany.
I haven't changed anything.
The height of these bars is all the same.
What we have done is take a segment of our adaptive memory
and reclassified it as purgeable.
This has made us a much better multitasking citizen
because the system can do work for us on our behalf
but has the same intrinsic performance characteristic
where, if I've lost my purgeable memory
and I've cleared my caches, what am I going to do now?
I need to rebuild any data I need,
and we've established that's very expensive.
Disk I/O and CPU spike, that's not great.
Let's, though, analyze the data we use in our app to see
if there's something more we can do with that.
Well, what is the data that IconReel uses?
Well, they are icons, and the first property is
that they are absolutely essential.
We don't have an app if we don't have icons.
At some point, the user will scroll to them.
We need them.
No getting away from it.
Second, they are expensive to generate.
Read them off the disk, decode them,
do nice rounded corners, whatever you want.
The third point, though, they can actually be precomputed.
We know ahead of time what the images are going to be.
If we have the spare cycles, we can calculate them.
And the last point is that they are largely static.
There is a very, very good chance
if I pregenerated something, it is going to be the same
when I actually need it because it doesn't change frequently.
All of these points combine
to make this data a perfect candidate for caching to a file.
Before I move on from this,
I want to make the point this is still a cache even
if it's a file.
Don't go writing such caches to the user's documents folder.
Keep it in your app's Caches folder
or the system's temp directory.
Back to our system resources.
We just introduced a new one.
That is disk space.
We can trade some CPU cycles up front to generate the data,
save it off, and then when we actually need to pull it in,
we virtually eliminated the need for the CPU at all.
Now, you may have noticed that I/O went up a bit there
because the fully rendered images may be larger
than the source files.
These things are all about trade-offs.
That leads me directly to maximizing clean memory.
You may recall I said earlier that memory backed by a file
on disk is considered clean.
Well, we have a file on disk now.
What a coincidence.
Data in such a file can be memory mapped.
The system will allocate a chunk of memory for you
that directly maps the file, the contents of the file, on disk.
It is most definitely worth noting that the memory
and the file contents must match exactly.
You cannot memory map a file that needs further decoding,
or what have you, after it is loaded in.
This is ideal for read-only data, as I mentioned,
that doesn't change frequently.
The coolest part about all of this is that, the same as
with purgeable memory,
the system can reclaim this memory from you as needed.
But, it is not gone.
When you need it again, when you access again,
the system loads it back from the disk,
and it's like it never went away.
Furthermore, it has great random access properties.
Just because your working set is here and you need a piece
of data over here, doesn't matter.
The system can just read that portion of the file,
hand it off to you, and you're good to go.
What does this look like?
We start with our data in memory.
And you write it out to disk.
The system now considers that memory mapped and clean.
If you go to access a chunk of it, all it does is load
that in, hand it off to you.
When you are not using it anymore, it takes it away.
How can we apply this to IconReel?
We'll start with the working set, but actually we are going
to combine the working set
with every other icon image we generated
and create one big data file and write that to disk.
Now, back in our app, we bring back our working set.
We access those three pages of memory,
and the system just loads only that portion of the file
into memory for us, keeping the rest of this clean.
Doesn't matter if that file grows to hundreds of megabytes.
It is not there in memory.
As we scroll around, the same happens.
System loads in the data we need.
Say if we introduce a feature that shows notifications
for icons that may be elsewhere on the home screen,
we can pull those in without worrying
about where they are, what they are.
Virtual memory system does all the heavy lifting for us.
How does this change our system resource bars?
We've already killed CPU there.
But now our cache is actually on disk.
So that goes away, too.
And now our only memory footprint is the working set.
Everything else is considered clean
and thus does not count against us.
It is also worth noting that we effectively eliminated the CPU
requirements on both the front and back end
of our data's life cycle.
We no longer need CPU to generate data to show
to the user, nor do we need CPU to clean it
up in response to memory warnings.
The system is doing all of that for us.
How can you use memory map data?
Once again, it's a simple API on NSData.
There are some options you can use
to initialize the data object.
If you specified the mapped option in your initializer,
you will get a memory map data file.
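Mapping a prerendered cache file might look like this sketch; the path and the offset/length layout are illustrative. With `.mappedIfSafe`, the bytes are backed by the file on disk, so the system can reclaim them as clean memory and fault them back in on the next access.

```swift
import Foundation

// Sketch: memory-map a prerendered icon-cache file so its contents count as
// clean memory. The system pages ranges in and out on demand.
func openIconCache(at path: String) throws -> Data {
    return try Data(contentsOf: URL(fileURLWithPath: path),
                    options: .mappedIfSafe)
}

// Random access is cheap: slicing only faults in the pages backing that range,
// no matter how large the whole file is.
func iconBytes(in cache: Data, offset: Int, length: Int) -> Data {
    return cache.subdata(in: offset..<(offset + length))
}
```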
I would be remiss, however, if I did not mention
that there are caveats to using this system.
The first being that it is not appropriate
for small chunks of data.
The virtual memory itself works on pages,
which are small chunks of data.
If yours are even smaller, you will introduce problems
that overshadow what you are actually trying to solve.
Furthermore, with memory-mapped files,
if you memory mapped a thousand icon files, say,
there is actually a limit to the number of open files any process can have.
You can get yourself into trouble that way.
That's one of the main reasons
in our examples we grouped the data into larger chunks
to make it more manageable for the VM system.
You can also just misuse the virtual memory system.
Each purgeable data object you create creates a region
in your virtual memory space.
If you create too many of these memory map files, too,
you can fragment your entire space.
You can similarly exhaust it.
If you decide to memory map a file that is ten gigabytes,
you're going to run out of space entirely.
Unfortunately, abusing the virtual memory system results
in your process being terminated,
which is what we are here to avoid in the first place, right?
In order to give the best multitasking experience
to the user, it is really important to understand the data
and the characteristics of the data that your app uses.
You must be able to differentiate nice-to-have data
from the essential data you need right now.
Can your data be precomputed?
How expensive is it to re-create?
How frequently does it change?
Understanding this about the data you use can help you pick
the right tools to make your memory as adaptable as possible.
Together we can improve the adaptivity of our apps
and provide the best multitasking experience
for our users.
In the coming weeks, as you update your apps
for multitasking in iOS 9,
I hope you keep these topics in mind.
First, using Instruments to identify and fix bugs.
This is the easy stuff.
Your leaks, inefficient data structures, algorithms,
abandoned memory, too.
Get that stuff fixed.
Second, prioritize your work appropriately
and don't block the main thread.
I don't know how many of you caught the first session
in this series, but the system may actually terminate you
for blocking the main thread for too long.
It is best to understand
where your work should go and put it there.
The third was identifying and managing your working set,
and be aware that this working set may change based
on the application's current executing context:
are you foreground, are you background, are you a PiP?
Understand where you are right now
because your working set is likely not the same.
The fourth is to use caches and respond to memory warnings.
This is the basics for being a good multitasking citizen.
Please respond to them.
Next, leverage the virtual memory system,
which is what we talked about.
Understand the data characteristics of your app,
and see if you can leverage these tools
to let the system manage your memory for you.
The last is that great performance requires trade-offs.
Your apps have requirements.
The apps running next to you have requirements.
The system has limitations.
Identify ways your app can adapt to the environment
with more constrained resources as this is the key
to improving the user experience for everybody.
Because I started with a quote, I'm going to end with one.
I found this on the Internet.
Allegedly attributed to Charles Darwin: "It is not the strongest
or most intelligent who survive,
but those who can best manage change."
Multitasking is about adapting, and the apps
that best adapt are going
to provide the best user experience.
For more information I really encourage you to check
out the documentation.
There's a new Adopting Multitasking guide
that is great.
Technical support, come to the forums.
If you have any general questions,
there's Curt Rothert, contact him.
He's also an ex-SpringBoard engineer, by the way,
so I'm sure he would love to hear from you.
We had two sessions earlier in this track.
I encourage you to watch them.
There are also Performance and GCD talks coming up.
Thank you, everybody, for coming out!
I can't wait to see your apps!