The same remote that you use to watch TV can also be used as a dynamic game controller. Tap into the Siri Remote touch surface and harness the built-in accelerometer and gyroscope to deliver engaging gameplay. See how the Game Controller framework can be used to fully integrate MFi-based controllers into gaming designed for the living room.
We're going to continue this morning.
Our next topic this morning is Game Controllers
and Siri Remote.
And actually, I'll be picking up right
where the last session left off.
You know, as we were saying, most apps will take their input
through the remote up into the focus engine, or they can step
down sort of one level and take input either
as gestures or as touches.
But there's a third option and that's
to read the input directly off of the Siri Remote
or from an MFi Game Controller using the Game Controller framework.
So I want to actually start there.
Let me give you a slide here, a quick refresher
about the Game Controller framework
if you're not already familiar with it.
So, this is something that we introduced a few years ago
as an API for supporting third party Game Controllers.
And now, we've added the Siri Remote to that as well.
And really what this is about is having one single unified API
that works across all of the controllers
that we support regardless of the vendor.
And it standardizes a lot of how you read those things
from those controllers.
So it standardizes how we detect whether a controller is present.
It can handle controllers being connected
or disconnected while your app is running
and gives you notifications when that occurs.
And then some real simple APIs to read those inputs either
by polling them directly and just reading the properties
or through an event-driven callback model.
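To make that concrete, here is a sketch of the notification wiring. On tvOS the framework posts the connect and disconnect notifications itself as controllers come and go; since running that requires a device, this sketch builds the notification name from its raw string and posts one manually, purely to show the shape of the observer.

```swift
import Foundation

// On tvOS the Game Controller framework posts this itself; we post it
// manually below only to demonstrate the wiring.
let didConnect = Notification.Name("GCControllerDidConnectNotification")

var connectedEvents = 0
let token = NotificationCenter.default.addObserver(
    forName: didConnect, object: nil, queue: nil) { _ in
    connectedEvents += 1   // e.g. rescan the controllers array here
}
_ = token   // keep the observer alive

NotificationCenter.default.post(name: didConnect, object: nil)
```

A real app would observe the framework-posted notification and refresh its view of the connected controllers in the handler.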
Now, this is also an API that's in common, of course, across iOS
and OS X, and now with Apple TV it comes to tvOS as well.
So from the perspective of the Game Controller framework,
we see the Siri Remote as just another GCController object
in an array of controllers that are connected to the device.
And from that perspective, it's really nice because it means
that if you have already written code
to support Game Controllers in your iOS apps,
it's going to be very easy for you to add support
in that same way for the Siri Remote.
So the way that you do that is
that the Game Controller framework has this concept
called profiles for different kinds of controllers.
And so, here the Siri Remote supports two profiles.
The first one is the GCMicroGamepad
and that correlates to the touch surface
and the various buttons on the remote.
And then, as you know, the Siri Remote has an accelerometer
and a gyroscope.
And those are supported through another profile
that we call GCMotion.
Now, something to note is that that's a little bit different
than what you might be used to on iOS.
You know, nowadays on iOS, motion data is usually reported
up through the Core Motion framework.
But here on tvOS, it's coming
in through the Game Controller framework.
And here is one last thing,
something that we call a GCEventViewController.
And this is new and it's specific to tvOS
and what this does is to give your app explicit control
over whether the inputs from the remote should be caught
by the Game Controller framework exclusively
or whether they should also be sent along to UIKit
and the responder chain and ultimately
up into the focus engine and there's different implications
to each of those choices.
So let me dive now into GCMicroGamepad.
So if you're using the Game Controller framework,
this is how you can get access to the buttons
and to the data from the touch surface.
So the first property that it provides you is the DPAD,
correlating to the touch surface.
It reports X and Y values from negative one
to positive one on each axis
as you touch down on the surface and move your thumb
around on the glass; those are the analog properties coming back.
Or it can give you so-called digital values,
which amount to button presses, or taps, of up,
down, left and right from the various edges.
All right, the second one: from the perspective
of the Game Controller framework,
clicking the remote is something that we see
as pressing the A button, as if it were a controller.
And that's typically something that's used
as the primary action in a game or other apps
when you're holding the remote in portrait orientation.
And then the third one is the X button, how we see it
from the Game Controller framework.
And that's physically the play/pause button here
on the lower left.
So this is typically useful as a secondary button
in a game, especially in games
where you might be holding the remote horizontally.
It becomes very natural to put one thumb
on the DPAD, the touch surface, and your other thumb
on the X button as you're holding it that way.
Now, the rest of the buttons: the menu button I'll get to
in a moment, because that's handled a little differently.
The Siri button, the home button and the volume up
and down are reserved for the system.
So now, let me show you an example of using this API.
It's really very, very simple.
Here, I'm just assuming that this controller has already connected,
meaning the remote is already connected.
I'm just reading its values, the microGamepad DPAD X and Y
axis values, and I can get the current state
of the two buttons.
So here is just-- if I'm polling it,
I can just read these properties anytime.
The other option, if an event-driven model is better
for your app then you can set up event callbacks here.
And these are called either valueChangedHandlers
or pressedChangedHandlers, depending on the kind
of data that's coming off of this particular input.
So here, I'm setting it for the DPAD, a value changed handler
and just then any time that there's any input change
on the DPAD, that will be called,
the same with the two buttons.
Any time I either click or unclick those buttons,
it will call those two handlers.
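The two access styles can be sketched with a small stand-in type. AxisModel here is illustrative, not the real API (GCMicroGamepad needs tvOS to run), but it shows the shape of polling a value versus registering a value-changed handler.

```swift
// A schematic stand-in for how the framework exposes an input: poll the
// current value whenever you like, or register a value-changed handler.
final class AxisModel {
    private(set) var value: Float = 0
    var valueChangedHandler: ((Float) -> Void)?

    func update(_ newValue: Float) {
        value = newValue
        valueChangedHandler?(newValue)   // event-driven path fires here
    }
}

let xAxis = AxisModel()
var lastReported: Float = 0
xAxis.valueChangedHandler = { v in lastReported = v }   // callback style
xAxis.update(0.5)
let polled = xAxis.value                                // polling style
```

Polling suits games that read input once per frame in their update loop; the callback style suits apps that only need to react when something changes.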
So now, there are some really unique aspects
about using the DPAD and I want to go into those.
The first is that it supports a concept called DPAD windowing,
and what this defines is the placement of the origin of the X
and Y axes on the touch surface.
You might think, you know, shouldn't this just be
at the center of the touch surface?
And the answer actually is no, not necessarily.
You know, most of the time, actually in most games,
you're really using the DPAD surface more
like a virtual joystick, where the game doesn't really care
exactly where the player touched down on the surface;
what it cares about is that they have touched down.
And then the game wants to monitor the movement from that point
and give you back locations that are relative to it.
So that's what we provide by default: if you're using
the Game Controller framework,
the origin of the X and Y axes is placed wherever your finger
lands on the touch surface.
And then subsequent values are going to be relative
to that point as you move around.
Until you get to the edge of an imaginary window that's
at positive one, negative one in each axis.
Once you get to that point, if you keep on moving,
it sort of drags the window, and drags the origin along with it,
and then you'll be moving relative to that new origin.
I'll show you that in just a second here.
The other option, you can choose for your apps
that you want the opposite, right?
That you want to get absolute values off the DPAD instead
and you don't want this windowing mode to be enabled.
If that's the case, then you set reportsAbsoluteDpadValues
to true on the GCMicroGamepad
and then it will just be giving you the absolute values
with the origin remaining fixed
at the physical center of the touch surface.
So let's take a look at a couple of examples here.
So the first one is probably the most simple,
so this is the default case, windowing is enabled.
And let's say that the player touches down at that point.
OK, so then that defines the origin
in this imaginary window around it.
And then as I move my thumb around,
all the values coming back are going to be relative
to that origin, all right.
And then you see here, I'm right by the edge of the window.
If I keep moving to the right, here we go,
it's going to drag the window along with me
and move the origin correspondingly.
And now, if I start moving
around again then the new values are going to be relative
to this new origin point, OK.
Another example, what if I start out and I touch right
at the edge of the remote?
Well, that's going to define the origin just as before.
But there's now-- there is no way for me to move any further
to the left but if I move to the right, that's going
to drag the window along with me again.
And now, values will be reported relative
to this new origin once again.
And as I start to move around,
I might even shift it back as I'm moving.
Final example here, what
if I don't want this kind of behavior?
I can just turn all of this off and get absolute values instead.
So in this case now, zero,
zero is at the physical center of the touch surface.
And wherever I move around, wherever I touch down,
the values are reported back to me relative
to that absolute point.
OK, so that's windowing.
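The windowing behavior just walked through can be modeled in a few lines of pure Swift. This mirrors the documented behavior, including the reportsAbsoluteDpadValues escape hatch, but it's an illustrative model, not the framework's actual implementation.

```swift
// Touch-down places the origin; reports are relative to it; pushing past
// +/-1 drags the window (and the origin) along with the thumb. With
// reportsAbsoluteDpadValues set, the origin stays at the physical center.
struct DpadModel {
    var reportsAbsoluteDpadValues = false
    private var origin: (x: Double, y: Double)? = nil

    mutating func touchDown(atX x: Double, y: Double) { origin = (x, y) }
    mutating func touchUp() { origin = nil }

    // Feed an absolute touch location; get back the reported DPAD value.
    mutating func move(toX x: Double, y: Double) -> (x: Double, y: Double) {
        if reportsAbsoluteDpadValues { return (x, y) }  // fixed center origin
        guard var o = origin else { return (0, 0) }
        // Drag the origin just enough to keep the report inside [-1, 1].
        o.x = min(max(o.x, x - 1), x + 1)
        o.y = min(max(o.y, y - 1), y + 1)
        origin = o
        return (x - o.x, y - o.y)
    }
}
```

Touching down at the left edge and sweeping right reproduces the second example above: the report pegs at 1.0 while the origin slides along underneath.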
Next, let me talk about the rotation of the remote.
So by default, the values, the X and Y values
from the touch surface are going to be reported to you
as if the remote is being held in portrait orientation,
meaning X to the right and Y going up here.
And by default, that stays the same regardless
of the way the player is actually holding the remote.
So that presents a bit of a problem
if your game wants the remote to be held on its side,
one way or maybe the other, right?
You'd have to transform those values yourself.
Now the transformation is, you know, just easy math,
but the problem is, how do you know?
How do you know to do that?
Well, you'd either have to be reading the accelerometer
to figure it out or forcing the player
to hold the remote, one way or the other.
And neither one of those is really what you want to do.
So we provide you with a better option.
There's a property
on the GCMicroGamepad called allowsRotation.
And if you set that to true, then X and Y are going
to remain fixed relative to the player instead
of relative to the orientation of the remote.
So now, if I set that to true, whether I'm in portrait,
X to the right and Y is up or if I go landscape left,
still X to the right, Y is up, relative to the player now,
and the other way, X to the right and Y is up.
OK. So, now that way, if you set this then your game doesn't have
to worry about which way it's being held and you don't have
to present any messaging to the user, forcing them
to hold it one way or the other.
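That remapping, which allowsRotation performs for you, amounts to rotating the raw axes by the remote's orientation. A pure sketch follows; the raw axes are fixed to the remote's body, and the sign conventions here are illustrative rather than the framework's documented ones.

```swift
// Map raw touch-surface axes (fixed to the remote body) into
// player-relative axes, the way allowsRotation does internally.
enum RemoteOrientation { case portrait, landscapeLeft, landscapeRight }

func playerRelative(x: Float, y: Float,
                    held orientation: RemoteOrientation) -> (x: Float, y: Float) {
    switch orientation {
    case .portrait:       return (x, y)    // already player-relative
    case .landscapeLeft:  return (-y, x)   // remote rotated 90° one way
    case .landscapeRight: return (y, -x)   // ...and 90° the other way
    }
}
```

The point of the property is that your game never needs to run this math or detect the orientation itself.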
Now, I mentioned also about the menu button,
so the menu button gets handled just a little bit differently
than the other buttons do.
So when you're using the Game Controller framework,
pressing the menu button triggers what's called your
controllerPausedHandler.
So that's code that you write.
And there's some complexity here because the thing--
your response to the menu button being pressed may need
to be different depending on where the player is
in your game or in your app.
Because the behavior that we want to get here is
that if the player is sitting on your title screen
or on the main menu, they may have just launched into the game,
and menu should behave like a back button and take them back
to the home screen of the Apple TV.
Likewise, once they go into your submenus and your level choosers,
or sort of non-immersive gameplay,
pressing menu there should act like a back button
and lift the user, effectively,
one level up in a hierarchy of windows.
The unique aspect to this is once you're in immersive gameplay,
and what I'm defining as immersive gameplay
is any gameplay that has the idea
of potentially becoming paused
and then unpausing and resuming again.
In that case, the menu button should toggle that behavior.
So if you're playing the game, you hit menu, it should bring
up a pause screen or whatever is your natural pause behavior.
And then hitting menu again should dismiss that
and bring you back into the game again.
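The whole menu-button policy can be sketched as one function over your game state. The phase and action names here are illustrative; in a real app you'd track this alongside whatever state machine your game already has.

```swift
// Where should a menu press take the player? A sketch of the
// recommended policy per phase of the game.
enum GamePhase { case titleScreen, submenu, playing, paused }
enum MenuAction { case exitToHomeScreen, backOneLevel, pause, resume }

func menuPressed(during phase: GamePhase) -> MenuAction {
    switch phase {
    case .titleScreen: return .exitToHomeScreen  // let the system handle it
    case .submenu:     return .backOneLevel      // act like a back button
    case .playing:     return .pause             // immersive gameplay toggles
    case .paused:      return .resume
    }
}
```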
Now, the way you go about that is very tied
into my next topic here and that's how you integrate
with UIKit if you are using the Siri Remote
from the Game Controller framework.
So here is the situation.
So if you are linking with the Game Controller framework,
then all of the touches and button presses and so
on get reported to that framework
through the GCMicroGamepad.
They also get reported up into UIKit.
So that will show up as touches and gestures going
up into the responder chain and potentially
up into the focus engine.
So uh-oh, it seems like they're going to both places.
This may be a behavior that you don't want to have,
and so we provide you a way to turn that off.
Now most of the time with games,
if it's a big full screen Metal view or OpenGL view
and that's the only screen
of the game then really this behavior ends
up being pretty harmless, right?
You're swiping around, the focus engine doesn't have anything
else to transition to so it doesn't really matter.
But the thing that does matter, that you do care about,
is the behavior of the menu button.
So if you hit the menu button, then it's being sent
to the Game Controller framework and going
through that pause handler, and it's also going
up to the focus engine
and whatever the focus engine thinks,
menu should do at that moment.
So we need some way for a game to say: no,
no, I'm handling this. I'll catch that menu event and deal
with it myself, because right now I'm in a part of the game
where I want, for example, to go to a pause screen instead of up
into the focus engine.
So the class that you use to do this, or to control this,
is called GCEventViewController.
So this is a view controller class
that gives your app explicit control over whether events
from the remote will be seen just
by the Game Controller framework or whether you also want them
to go up the responder chain into UIKit.
And effectively, any game or app that's
presenting a kind of full-screen Metal or OpenGL view
and wanting to handle remote input
through the Game Controller framework
should typically use this as its root view controller.
And there's a property on this view controller class called
controllerUserInteractionEnabled.
By default it's false, so once you adopt this view controller,
any input coming off the remote will only
go to the Game Controller framework.
But you have explicit control over it.
You can set it to true and have
that input also go up the responder chain.
Now, here's how you'd typically use this, because it depends
on where you are in your game again.
So let's say you're sitting at the home screen,
you launch into a game and now you are
in the main menu of the game.
In this spot, you actually want controllerUserInteractionEnabled
to be set to true, so that events also go up to UIKit,
so that if the player hits the menu button,
it's going to return them to the home screen of the Apple TV.
But if instead they picked the option
to play your game, and it's kind of an immersive game,
then during the gameplay portion you should be setting
that to false so that menu goes to your pause handler, all right?
And keep changing this from true
to false as the player crosses that threshold between the part
of the game where you want the menu button to take them out
and the part of the game where you want the menu button
to just go to your controllerPausedHandler.
OK, there's one other part about integrating with UIKit
that you should also know.
And that's dealing with the idle time out.
So we have an issue here where the screen saver will kick in
if you are using a Game Controller
in your game, unless you disable that.
And that's managed through this property
on UIApplication called idleTimerDisabled.
If you set that to true then that disables the idleTimer
and therefore disables the screen saver.
But again, this is another one just the same
as controllerUserInteractionEnabled.
This is a case where you should be managing this actively
depending on where the player is in your game.
So if they are sitting, again, on your title screen
or your main menu, well, at that particular moment,
if the player is just sitting there
for 10 minutes or whatever the timeout
they've set happens to be, then it's probably OK
for the screen saver to kick in, right?
They might have walked away or whatever the case is.
So you'd set that to false, but once they get
into your gameplay,
that's where you should set idleTimerDisabled to true.
And again, actively manage this as they go back and forth
between different modes of your app.
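Since the two flags flip together at the same threshold, one way to keep them in sync is to derive both from a single phase value. A sketch, with an illustrative Phase type; the two property names match the GCEventViewController and UIApplication APIs discussed above.

```swift
// Menus: let events through to UIKit (so menu exits to the home screen)
// and allow the screen saver. Gameplay: capture events for the pause
// handler and keep the screen awake.
enum Phase { case menus, gameplay }

struct RemoteRouting {
    let controllerUserInteractionEnabled: Bool   // GCEventViewController
    let idleTimerDisabled: Bool                  // UIApplication
}

func routing(for phase: Phase) -> RemoteRouting {
    switch phase {
    case .menus:
        return RemoteRouting(controllerUserInteractionEnabled: true,
                             idleTimerDisabled: false)
    case .gameplay:
        return RemoteRouting(controllerUserInteractionEnabled: false,
                             idleTimerDisabled: true)
    }
}
```

You'd apply the returned values to the real view controller and application objects each time the player crosses between menus and gameplay.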
OK, one more thing to mention here just about integrating
with UIKit, I've been discussing this all in the context
of integrating with the Siri Remote but these two topics
about the controllerUserInteractionEnabled
and the idle timer, they apply to games
that are using Game Controllers as well and I just want
to mention that here, I'm going to get
into Game Controllers a little bit later.
So now let me go back and talk about device motion.
So as I mentioned earlier,
so the Siri Remote also supports the GCMotion profile
to take input off of the accelerometer
and the gyroscope that's built into the Siri Remote.
So as the remote is moving around,
we are able to get the gravity vector, which is just an X, Y,
Z value, and the user acceleration vector,
another X, Y, Z value.
But there's a few things to note here.
The first is that this data coming off
of the motion profile is already being filtered before it comes
to you and before it hits that API.
So if your code is also doing filtering of accelerometer data,
then you should remove that for the tvOS version of your app
because if you don't, we'll filter it and then hand it
to you and then you would be introducing unnecessary lag
if you were to filter it twice.
Also note that this API is giving you what we call fused
motion data from both the accelerometer
and the gyroscope together through the gravity vector
and through the user acceleration vector.
And what that-- what fusing means is that each
of those sensors really serves to reinforce the other.
So data from the accelerometer helps
to correct gyroscope drift.
The gyroscope data helps to smooth out the accelerometer.
And so together, they give you much more accurate data
than either one of them would separately.
You know, just because they are both reinforcing each other
and this is before the values are passed to you.
But something to note now is that really vigorous
user acceleration, and I mean really, you know,
shaking the remote, can overwhelm all of that.
So what we recommend is that you avoid creating situations
in your games, in your apps where the player has
to really vigorously shake the remote
or make super aggressive movements
because that's creating data that's difficult
for the framework to correct
until the sensors have had a chance to quiesce.
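If your design can't fully avoid big movements, one defensive option is to discard motion samples whose user-acceleration magnitude is implausibly large, giving the fused sensors a chance to quiesce. This is a sketch of that guard; the 3g threshold is an illustrative number, not a framework constant.

```swift
// True when a user-acceleration sample (in units of g) looks like
// vigorous shaking rather than deliberate motion.
func isVigorousShake(ax: Double, ay: Double, az: Double,
                     thresholdG: Double = 3.0) -> Bool {
    let magnitude = (ax * ax + ay * ay + az * az).squareRoot()
    return magnitude > thresholdG
}
```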
OK, so that takes me through the Siri Remote.
So next, let's-- let me talk about MFi Game Controllers.
So Apple TV supports the full set
of MFi Game Controllers which is awesome.
And the controllers that we support all follow what we call the
Wireless Extended Gamepad profile.
Wireless, of course, refers to being standalone, you know,
communicating over Bluetooth, and extended gamepad
describes the button layout, which is in common
across all of the controllers that we support.
There's always a DPAD on the left side; A, B, X,
and Y on the right side arranged in this diamond formation;
two thumbsticks on the face; and two triggers
and two shoulder buttons on the front.
And something to note also, all of these buttons
that I'm mentioning, they're all pressure sensitive.
So even the buttons, you know, A, B,
X and Y are pressure sensitive and of course,
the triggers and so on.
So you can get really precise control in your games
if you're using a Game Controller.
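Since every button reports an analog value alongside its pressed state, games can scale an action by pressure rather than treating input as on or off. A sketch, with illustrative types; the 0-to-1 range matches the profile, but the throttle mapping itself is just an example.

```swift
// An extended-gamepad button: an analog value plus a derived pressed flag.
struct PressureButton {
    var value: Float = 0            // 0 = released, 1 = fully pressed
    var isPressed: Bool { value > 0 }
}

// Example use of the pressure: a light squeeze drives slowly,
// a full pull drives at top speed.
func throttle(from trigger: PressureButton, maxSpeed: Float) -> Float {
    trigger.value * maxSpeed
}
```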
And then the menu button behaves just the same as the menu button
on the Siri Remote in terms of how it integrates with UIKit
and the behavior that you should have there as I was mentioning.
All right, and then there are also player indicators
that you can set in your apps, particularly
if you have a multiplayer game, to relate
which controller goes to which player.
OK. Now as I said, we really support the full set
of MFi controllers on Apple TV.
I've shown some of them here,
some of the newer ones at the top.
All those pictures I have been showing
of a controller are the Steelseries Nimbus,
but we also have great support for the Horipad Ultimate,
which is another great controller, the Madcatz controller,
as well as all of those controllers that were supported
on the iPhone.
And to just reiterate that point, what this means is
that if your app already had support for MFi controllers
in it, the expectation really is that that code should be able
to just come straight over.
You'll need to add support
for the GCMicroGamepad for the Siri Remote.
And you'll need to add the handling of the menu button,
you know, specifically for Apple TV.
But all of the code that you already had
for supporting your MFi controllers should come straight over.
Now there is one thing, though, that's unique,
or at least that you might not have handled before,
and that's how you might handle multiple controllers.
Now, this is a question that we get all the time.
How many controllers are supported simultaneously?
So here is that answer.
Apple TV can simultaneously support one Siri Remote plus
up to two MFi controllers.
And each of these just appears as another GCController
in the controllers array.
Now, obviously most people
who ask this question are usually asking it in the context
of a multiplayer game; that's what they're envisioning.
But actually, the most common scenario where this applies,
and the one you need to be thinking about,
is a single-player game.
And here's what I mean by that.
So a single-player game needs to be ready for the possibility
that at any given moment,
you might just have the one Siri Remote connected,
or the person playing might have the Siri Remote
and a Game Controller,
or they might even have two Game Controllers, right?
So there could potentially be three different inputs coming
into a single-player game,
and they're all showing up as GCController objects
in that controllers array.
But your game does not know in advance
which input the player is actually holding on to
and giving input from.
So you need to build in some logic to be able to handle that.
Now let me describe the goals of that logic.
The first is to just really be flexible:
follow the player onto whatever device they've chosen to pick up,
and try to use their actions
as an implicit expression of their intent.
So if your game supports the remote
and it supports Game Controllers then obviously you need to build
in the flexibility to deal
with either one independently, of course, right?
But you also need to build in the flexibility
so that the player can change between them.
And how you do that is going to be different
for different kinds of games.
You know, for some of the games that you're all working on,
there's just no issue.
If you take a game like Crossy Road, something like that,
you are swiping left and right and up and down with the remote,
maybe clicking the remote to move forward
or if you're on an MFi controller, you are using the DPAD
to go left and right and up and down
and maybe the A button to go forward.
So in reality, that kind of game can just write code
that is listening to both
and implicitly follow the player just by listening
to all the inputs and taking input from all of them when it's
in single player mode.
But there's other games that might use the remote
in a way that's very different
from the way it uses an MFi controller, right?
So you might be using device motion of the remote
to steer a vehicle or, you know,
to drive a car, that kind of thing.
Whereas, on a Game Controller, you would probably do
that with the left thumbstick, right?
And so in that case, your game has
to make a much more binary choice
between which device it wants to listen to.
You know, should I pay attention to the accelerometer,
which is always reporting its data even
if the remote is just sitting there on the couch
and the player has picked up the Game Controller,
or should I, you know, take values from the Game Controller?
And so again, you want to be flexible and you want
to follow the player, so a good approach there is
to just use whichever one had some kind of explicit action,
meaning an explicit button press most recently --
and do a binary switch between them based on that.
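That binary switch can be sketched as a small arbiter: explicit button presses move the "active" device, and passive streams like an idle remote's accelerometer are only honored from whichever device is active. The device names here are illustrative.

```swift
// "Follow the player": the most recent explicit button press decides
// which device's passive data (e.g. motion) the game listens to.
enum InputDevice { case siriRemote, gameController }

struct InputArbiter {
    private(set) var active: InputDevice = .siriRemote

    // Call for explicit actions only (button presses), not passive motion.
    mutating func explicitAction(on device: InputDevice) { active = device }

    // Passive streams are only honored from the active device.
    func shouldAccept(motionFrom device: InputDevice) -> Bool {
        device == active
    }
}
```

With this in place, a remote left on the couch stops steering the car the moment the player presses a button on the controller.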
Another option, or sort of a variant on that that I've seen
in some games, is: which of the inputs did they use
to start the level? And we'll stick with that
through the course of the level,
maybe if it's something that's timed and you don't expect them
to pick the other one up right in the middle.
Or maybe they started the level but then they paused it.
So which device did they use to unpause that level?
Because maybe the reason they paused it was
to switch their inputs.
In doing so, you're letting their actions, you know,
give you a good impression of their intent.
Another one to watch for in code:
did the controllers array itself change,
meaning did a new controller get added,
or did a controller disappear from the array?
You'll get a notification and you can see then what happened.
And typically, this will happen
when maybe someone will start a game and then switch
to the other device, but maybe the other device was asleep
or needed to be powered on.
And it'll connect, it'll show up in that array and again,
use that as an implicit expression of intent
and switch over to it.
All right, and then finally, well, maybe all of these ways
of just figuring it out on your own aren't sufficient,
and you need to really allow the player to just tell you
which one they're using and let them decide.
And that's fine as well, but the point here is just to say
that there's a lot of options between the extremes
of, on the one hand, providing something like
an input selector UI, or on the other hand, forcing the player
to quit the game or go into settings or something
like that to switch inputs.
We want it to be a lot more flexible
and to just follow the player kind of live
as the game play is going.
And incidentally, not to draw this too closely, you know,
we have nothing against an input selector UI.
In fact, that's probably going to be necessary
in a lot of multiplayer games.
But here really I'm talking about the single-player game
where the player has both a remote and one
or more Game Controllers.
OK, so we've just talked about a lot here,
everything from the DPAD on the Siri Remote up through,
you know, my soliloquy on handling multiple controllers.
There's a lot more resources here for you to dive in to now.
In particular, it's all on our landing page, but the place
where I always start is the GCController reference docs;
that's where you can get all the details of GCMicroGamepad
and all these properties.
And then sample code, I just want to point
out the Fox sample is something that's new for Apple TV
since we introduced that sample back at WWDC earlier this year
and now we've ported it over onto the Apple TV.
And then Demobots is available as well and both
of those have great, you know, code that shows you exactly how
to integrate with the Siri Remote
and with an MFi controller.