Have you saved our 63-page iOS security white paper to your Reading List but find yourself too busy making great apps to get through it? You can keep your good intentions to devour every last detail, but meanwhile come join us for an illuminating talk on why we care so deeply about security as a design philosophy central to all our iOS products.
You're all amazing for coming to a security talk
at 4:00 p.m. I know it's been a long day,
but thank you so much for coming.
My name is Ivan Krstic, and I head Security Engineering
and Architecture at Apple, the group that builds security
from the ground up into every product we ship to users.
Today the focus is iOS.
And I am so proud to tell you
about the role security has played
as a key design philosophy for that platform.
Before we dive in, we need to set some context.
Why is security so important to Apple,
and why do we believe it is so critical to our users?
The answer is because our mobile devices are an unprecedented
record of our lives.
Never before in history have there been objects
that know so much about us.
About how we spend money.
About the emails we send, and the photos we take.
About our messages.
From our quickest hellos to our most intimate conversations.
When you think of it that way,
you realize that protecting the security of all
that information is so much more than just about technology.
For us it's a mission, and it's a mission that is nothing short
of protecting the digital personal sovereignty
of our users.
And when you think about it that way, you also realize
that the very definition of attackers has changed.
Attackers today might be criminals looking
to hold your data for ransom.
They might be unscrupulous business competitors looking
to gain an edge.
They might be internet service providers looking
to indelibly mark your online activity for tracking
and advertising without your consent.
It could be nation states,
like in the 2014 hack of Sony Pictures.
And sometimes curiosity can get the better
of even those close to us.
And then, of course, we must never underestimate the power
of the advanced feline threat.
So when you think about all
of these attackers, what do they want?
We find that the motives tend to fall into one of three groups.
There is personal stalking and surveillance: trying
to access your photos and your messages,
or gain access to your camera and your microphone.
Then there is corporate espionage,
which is gaining access to your business emails, documents,
and intellectual property. And then finally,
there is direct financial benefit: stealing money directly
from your online banking session, or injecting ads
and committing click fraud while you browse the web.
And to do these things, attackers have created adware,
spyware, ransomware, remote access Trojans,
and a variety of other malware.
How do we know this?
Because we've seen it on other platforms.
But not on iOS.
Nearly a decade after it came into the world,
there hasn't been a single piece of iOS malware
that has affected our users at scale.
This is because nearly 10 years ago,
Apple realized what role mobile devices would come to play
in the lives of our users.
We realized that existing security technologies would be
woefully inadequate, and could not step up to the challenge,
and so we decided to build the best security technologies
that we could imagine to protect our customers,
at a scale that is staggering.
We are protecting users that use
over a billion active iOS devices all over the globe.
And every single one
of our security features protects a real user
from real threats.
But we are not in this alone.
There are really three key pillars to iOS security.
There is platform security which are the technologies
that we build into our software and hardware.
There is users who upgrade their software to the latest
and most secure versions, and then there's developers
like you, who use our security technologies
to build secure apps.
We're going to talk about all three pillars today.
We'll start with platform security.
When you think about how security used to work,
especially in enterprise settings, it was a long
and complex list of things that users have
to do to try and be secure.
Loading secure configuration into their devices,
complex patch management schemes,
complex password policies.
It was difficult, it was heavy handed,
and it set up users to fail.
It was so hard to be secure.
But because Apple owns all the hardware and software,
we were in a position to address this
in a unique and innovative way.
When you look at iOS security,
we've built security directly into the silicon.
We have made devices that are secure
out of the box, by default.
We've made it really easy for our users to update
and run the latest and most secure version.
We have made it really easy to log
into the devices securely with Touch ID.
We have a curated App Store, and we've made security easy to use.
iOS platform security consists of a great many features,
and today I only have time to highlight five.
We'll start with Secure Boot.
One of the most important ways of being able
to trust a device is to trust the software that runs on it.
And the way we do this is by building trust into the silicon.
The Apple-designed system chip inside every iOS device holds
Apple's public keys in an area
of memory called Boot ROM, which is read only.
It's written in the factory and cannot be changed after that.
And when your iOS device starts, the Boot ROM uses that public key
to validate the next step in the boot chain,
which is the low-level boot loader.
And only if that validation passes will we move on to the next step
in the chain, and we repeat this until iOS has fully booted,
which gives us confidence that every piece of software
that was used in boot was signed with Apple's private key.
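The chain of trust described here can be sketched in a few lines. This is a toy model only: it uses bare SHA-256 digests as trust anchors, where the real Boot ROM verifies cryptographic signatures against Apple's public key.

```python
import hashlib

def digest(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

# Stand-ins for the boot stage images.
LLB = b"low-level boot loader image"
IBOOT = b"iBoot image"
KERNEL = b"iOS kernel image"

# Immutable trust anchors, analogous to values burned into Boot ROM
# at the factory and unchangeable after that.
TRUSTED = {"llb": digest(LLB), "iboot": digest(IBOOT), "kernel": digest(KERNEL)}

def boot(stages):
    """Verify every stage in order; halt on the first mismatch."""
    for name, image in stages:
        if digest(image) != TRUSTED[name]:
            raise RuntimeError(f"boot halted: {name} failed verification")
    return "booted"
```

The key property the sketch preserves is that each stage is measured before it runs, so a single tampered stage stops the whole chain.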
When you think about secure boot, it's really interesting
that we don't rely on any third parties
to deliver secure boot trust.
We don't rely on certificate authorities
that are outside of our control.
The keys used for secure boot are generated, managed, and provisioned by Apple.
The code that does the verification
that I just showed you is written by Apple,
and only Apple is in possession of the private keys required
to sign all the software.
But there is another interesting element of this,
which is that when an iOS device goes to update the version
of the software, it has
to contact our installation authorization server,
and ask for permission to update to a given version.
And it does this by taking cryptographic measurements
of the update, sending them to the server,
and asking whether it's okay to update.
We do this because the server is now in a position
to deny an iOS device the ability to move to an older,
less secure version of iOS.
And so when you put these two things together,
you get a strong and trustworthy mechanism for relying
on the software that runs every iOS device.
It's not possible to copy an old version of iOS
from one device to another.
It's not possible to tamper with the integrity
of the software at any point in this process.
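The downgrade-prevention side of this can be sketched as follows. The version strings, field names, and `authorize_update` function are illustrative stand-ins, not Apple's actual protocol.

```python
import hashlib

# Hypothetical: the set of versions the authorization server is still
# willing to sign; older versions have been dropped from the window.
SIGNING_WINDOW = {"9.3.1", "9.3.2"}

def authorize_update(version: str, update_image: bytes, device_id: str):
    """Return a signed-approval record, or None to deny the update."""
    if version not in SIGNING_WINDOW:
        return None  # downgrade (or unknown build) is denied
    # Tie the approval to this exact measurement and this device, so
    # the response cannot be replayed for a different image or device.
    measurement = hashlib.sha256(update_image).hexdigest()
    return {"version": version, "measurement": measurement, "device": device_id}
```

Because the approval is bound to the measurement and the device, copying an old iOS image between devices gains an attacker nothing.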
Now, let's talk about protecting users' data at rest.
If you're really serious about doing this, you don't want
to take the cryptographic keys that protect user data
and make them available to the application processor
or the normal processor in the device.
That is because the attack surface there is simply too large.
If you're serious about protecting user data at rest,
you build dedicated silicon that holds those cryptographic keys.
We've done it.
We call it the secure enclave.
Now, when you think about passcodes,
normally they're very short, four to six digits.
And if somehow an attacker were able
to take the encrypted data off a phone, or off an iOS device,
and attempt every single possible passcode,
it wouldn't take them very long.
So what we do instead is we take the user's passcode,
and we derive a key from it that is entangled
with the hardware key that is only available
to our secure enclave.
Which means it's not possible to guess passcodes offline.
Passcode guesses must happen on the device,
and the device is free to limit the number of attempts.
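A minimal sketch of that entanglement, assuming a stand-in hardware key and a PBKDF2 construction; iOS's real tangling algorithm inside the Secure Enclave differs, but the off-device-guessing property is the same.

```python
import hashlib
import hmac

# Illustrative stand-in for the device-unique key fused into the
# Secure Enclave; on real hardware it never leaves the chip.
HARDWARE_UID = b"\x13" * 32

def derive_unlock_key(passcode: str) -> bytes:
    # Entangle the short passcode with the hardware key, then stretch
    # the result. Without HARDWARE_UID, no amount of offline guessing
    # against the encrypted data can recover this key.
    tangled = hmac.new(HARDWARE_UID, passcode.encode(), hashlib.sha256).digest()
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), tangled, 100_000)
```

This is why a four-to-six-digit passcode can still be safe: the brute-force search is forced onto the device itself, where attempts can be limited.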
In fact, this is exactly how your iOS devices work.
After a few incorrect passcode attempts,
we start imposing a time delay.
But after 10 incorrect passcode attempts,
the secure enclave will simply not unlock that device again.
This has nothing to do with the erase data feature,
where if it's on, the data will actually be erased
after 10 incorrect passcode attempts.
Even if you have this feature off,
once 10 incorrect passcode attempts are made,
the secure enclave will not unlock
that device again regardless of how much time you give it.
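The lockout behavior can be modeled in a few lines. The delay schedule here is illustrative, not the exact timings iOS uses.

```python
class EnclaveModel:
    """Toy model of Secure Enclave attempt limiting: escalating delays
    after a few failures, permanent lockout at ten, independent of the
    separate erase-data setting."""

    MAX_ATTEMPTS = 10
    DELAYS = {5: 60, 7: 300, 9: 3600}  # illustrative delays, in seconds

    def __init__(self, passcode: str):
        self._passcode = passcode
        self._failed = 0
        self.locked_forever = False

    def try_unlock(self, guess: str) -> bool:
        if self.locked_forever:
            raise RuntimeError("enclave will never unlock this device")
        if guess == self._passcode:
            self._failed = 0
            return True
        self._failed += 1
        if self._failed >= self.MAX_ATTEMPTS:
            self.locked_forever = True
        return False

    def delay_before_next_attempt(self) -> int:
        return self.DELAYS.get(self._failed, 0)
```
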
So we built the system using industry standard algorithms.
We've subjected it to rigorous internal security audits,
and third-party code review,
and then we've taken it a step further.
We have taken our core cryptographic libraries
that underpin data protection, and we've posted them on the web
for everyone to download and inspect.
Let's talk about sandboxing.
Sandboxing is a method for isolating data between apps.
This is because even with the best of intentions,
developers sometimes make mistakes,
and sandboxing is a way of mitigating some
of the potential harm from those mistakes.
It's a little bit like air bags or seatbelts in a car
which mitigate the risk of injury in an accident.
We also use sandboxing to put some of the choices
around security directly in the hands of our users.
We have a mechanism called TCC, or Transparency, Consent
and Control, where we can ask a user whether they trust a given
app with some of their sensitive data,
like location, photos and contacts.
And once a user makes a decision in a dialog like this,
the sandbox mechanism in the kernel enforces that decision.
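A toy model of that enforcement follows; the resource names and the `record_user_decision`/`access` functions are purely illustrative, not the real TCC API.

```python
# The user's per-app decision is recorded once; the access path
# consults that record every single time, and defaults to deny.
_decisions: dict[tuple[str, str], bool] = {}

def record_user_decision(app: str, resource: str, allowed: bool) -> None:
    """Store the answer the user gave in the consent dialog."""
    _decisions[(app, resource)] = allowed

def access(app: str, resource: str) -> str:
    # Default-deny: no recorded grant means no access.
    if not _decisions.get((app, resource), False):
        raise PermissionError(f"{app} may not read {resource}")
    return f"{resource} data"
```

The important design point is that the app never holds the decision itself; enforcement lives below the app, so a buggy or malicious app cannot simply skip the check.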
Another critical element
of our platform security is code signing.
When an attacker is trying to attack a device,
the very first step is attempting
to get their malicious code to run.
Because of this, iOS code signing doesn't just cover the operating system.
It covers every app that runs on the device.
In fact, when you go to upload your app into the App Store,
Xcode will calculate the cryptographic hash
of every executable and resource in your app bundle,
and will write them into a code directory alongside the app,
which is then sent to the App Store.
When the user then downloads an app and runs it,
our kernel will look at every executable page of memory
and compare it against the code directory to make sure
that it hasn't been tampered with.
As a result, an attacker can't simply take some malicious code,
inject it into memory and then divert the control flow to it.
Instead, they have to resort to much more complex
and difficult schemes to get their malicious code to run.
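A sketch of the per-page check follows. The 4 KB page size is typical, but the real code directory format, and the fact that the directory itself is covered by the developer's signature, are simplified away here.

```python
import hashlib

PAGE_SIZE = 4096  # executable pages are hashed individually

def build_code_directory(binary: bytes) -> list[str]:
    """Per-page hashes, like the code directory written at signing time."""
    return [hashlib.sha256(binary[i:i + PAGE_SIZE]).hexdigest()
            for i in range(0, len(binary), PAGE_SIZE)]

def page_is_valid(binary: bytes, index: int, directory: list[str]) -> bool:
    # What the kernel checks before executing a page: its hash must
    # match the corresponding signed directory entry.
    page = binary[index * PAGE_SIZE:(index + 1) * PAGE_SIZE]
    return hashlib.sha256(page).hexdigest() == directory[index]
```

Because every executable page is checked against the directory, injected code has no valid entry to match, which is what forces attackers into the more complex schemes mentioned above.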
Let's talk a bit about Touch ID.
We know that for our iPhone users
on average they unlock their phone 80 times a day.
And if you have a passcode set, it adds friction
to each one of those 80 unlocks.
As a result, almost half
of our users didn't have a passcode set,
and we knew that because the passcode is quite literally part
of the key used to protect the user's data at rest,
this is a problem we had to solve.
We needed a solution that was easy, that was fast,
and that provided great protection
to the data of our users.
For us, that solution was Touch ID.
But we didn't just take a fingerprint sensor
and put it in a phone.
You can change your passwords,
but you can't change your biometric properties.
You can't change your fingerprint.
And so we knew that we had
to afford the biometric data the highest level
of protection we could build.
So what we did was we connected the fingerprint sensor directly
to the secure enclave, with an encrypted link.
And we made it so that when you touch your finger to the sensor,
that fingerprint is transmitted directly to the secure enclave,
where it is processed, and then encrypted.
And it's never made available to the application processor
that runs normal applications on the phone.
Now, I mentioned that before Touch ID,
about half of our users had a passcode set.
Now, 9 out of 10 do.
This is a shining example of what Apple is amazing at.
It takes our hardware engineering expertise,
our software engineering expertise, our dedication
to ease of use, and combines them in a way
where a normal user doesn't care about fingerprint sensors,
doesn't care about secure enclaves or hash functions.
What they get is simply an amazing step forward
for the security of their data.
The second key pillar
of iOS security is users upgrading their software.
The latest versions of iOS are always the most secure,
and we're constantly making iOS more secure based
on the threats we see today and threats we see in the future.
And we've built all this deep technology
for trusting the software that runs on our hardware,
but it doesn't matter how secure a version of iOS is,
if people are not installing the latest one,
the one that is most secure.
So when we think about software updates, it is not just
about the device and the software it runs,
it is also about our relationship with carriers,
and our ability to get updates out there quickly,
and also the user experience
for how a user sees an update on their device.
One thing we've done with iOS 9 is we greatly shrank the amount
of space it took to install updates, so that users
who had relatively little free space could still get the latest
and most secure version of the software.
We then took a look at the user experience
and said sometimes users don't want to be interrupted
to do an update, so we'll give them a choice
to install an update right then and there,
or to install it later that night
when they're probably not using their phone
or be reminded the next day.
The results are amazing.
Nearly 85% of our customers are running the latest version
of iOS, the most secure version we have,
and if you think this is easy or obvious, you only need look
at some of the other platforms [audience murmurs].
This, by the way, is a fair comparison, because Android 6
and iOS 9 were released within just a couple of weeks
of each other, and for some perspective, every version
of Android before 5.1.1 has a vulnerability called Stagefright,
which is remotely exploitable.
You can exploit it by sending a specially crafted message
to a vulnerable device, and gain complete control
over that device.
And Google patched this bug very quickly.
But it doesn't matter, because most
of their users don't have the fix.
And a fix that is not installed doesn't do anyone any good,
which is why we invest so much effort not just
in building secure versions of iOS, but also making it easy
for our users to have the latest and most secure version.
The third pillar in iOS security is developers like you,
using the security technologies we provide,
to build secure apps.
This means following best practices on the platform.
We have an amazing number of apps that use the Touch ID API
to save their users from having to remember complex passwords
and type them directly into the app, and last year,
we introduced a feature called App Transport Security
that provides strong protection to app information as it travels
across the network to that app's servers.
Today I'm proud to say that at the end of 2016,
App Transport Security is becoming a requirement
for App Store apps.
This means that by the end of 2016, when your apps communicate
with your own server back-ends, they must do so
using a secure TLS channel with TLS 1.2,
unless the data being communicated is bulk data,
such as media streaming, or data that is already encrypted.
This is going to provide a great deal of real security
for our users, and the communications
that your apps have over the network.
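For reference, ATS behavior is controlled from an app's Info.plist. A minimal sketch, with a purely illustrative `legacy.example.com` exception domain:

```xml
<key>NSAppTransportSecurity</key>
<dict>
    <!-- Keep ATS on: connections must use TLS 1.2. -->
    <key>NSAllowsArbitraryLoads</key>
    <false/>
    <key>NSExceptionDomains</key>
    <dict>
        <!-- A narrow, justified exception for one legacy host only. -->
        <key>legacy.example.com</key>
        <dict>
            <key>NSExceptionAllowsInsecureHTTPLoads</key>
            <true/>
        </dict>
    </dict>
</dict>
```

Exceptions should be scoped to individual domains like this rather than disabling ATS globally with `NSAllowsArbitraryLoads`.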
There is another piece of responsibility that rests
on you, the developers, when you build your apps,
and this is to know your code, and not just the code you write,
but also any third-party code you actually include
in your apps.
This is because the libraries you use may undermine
the security of your app.
Maybe because they're doing things you're not aware
of behind your back.
But maybe because there is a more recent
and more secure version out there.
There is kind of an amazing example here
where there is a very popular third-party networking library
that had a critical flaw in the validation
of security for TLS connections.
And this flaw was patched very quickly, but for a period
of time, there were up to 25,000 apps in the App Store
that didn't have that latest, more secure version.
So it's really important that you get your third-party code
from trustworthy locations, that you know what it does,
and that you keep it current.
So these are our three key pillars of iOS security.
Platform security, which is the best set of security
mechanisms that we can build into our hardware and software;
our users, who download the latest and most secure versions
of iOS; and developers like you,
using our security technologies to build secure apps.
And it's a fair question to ask, so how are we doing?
How good is iOS security?
Security is interesting, because unlike performance,
you can't measure it directly.
You can't run a test, and time how long it takes
to get an objective answer.
So what you're left with are indirect metrics.
And I have three that I'll share with you today.
The first one is that after a decade of its existence,
there has still not been a single piece
of iOS malware affecting our users at scale.
This one is the one that is most important to us.
Our users have been fantastically well protected
for nearly 10 years.
The next indirect metric that is interesting is to look at some
of the untethered jailbreaks that are coming out.
These are jailbreaks that defeat iOS security mechanisms,
and what is interesting about them is
that the latest ones we've looked at frequently have
to find and chain together between five
and ten distinct vulnerabilities in order to be able
to defeat platform security mechanisms.
This is much higher than on any other platform.
And the reason is that iOS security measures work together
in lockstep in such a way that defeating all
of them takes a long time and a lot of work.
For my last indirect metric,
I want you to take it with a grain of salt.
As probably most of you know, there is a black market
for software vulnerabilities.
And every once in a while, some of the prices
on the black market become known.
Usually these prices are tens of thousands of dollars,
sometimes a hundred thousand.
But in 2013, the New York Times learned
that an iOS vulnerability sold for $500,000, and last year,
Forbes interviewed a jailbreak developer who said
that most experts agree that the black market price
of an untethered iOS jailbreak is a million dollars.
Take that with a grain of salt,
but it's a fascinating number to think about.
So where does this leave us?
Clearly, security is a process and not a destination.
We're not done with iOS security and we never will be.
It's also not something that we just got interested in recently.
What you're seeing now is a result of a decade
of our best work in protecting our users.
But the bad guys aren't standing still
and we will constantly evolve our security
to defeat not only the threats of today,
but also the threats of tomorrow.
We will take our hardware expertise
and our software expertise,
and we will build the best security technologies
that we can imagine to protect our customers
and how they use their mobile devices today
and also how they'll use them tomorrow,
and protect our customers against today's attackers
and tomorrow's, even if they are cats [laughter].
Thank you very much.
For those of you who want to learn more, I invite you
to read the iOS Security Guide, which provides a great amount
of technical detail about all the great features I just covered,
as well as many others that I didn't have time for,
like iCloud Keychain and two-factor authentication.
A link will be available on this session's site.
Thank you again.