Designing Fluid Interfaces
Discover the techniques used to create the fluid gestural interface of iPhone X, and learn how designing with intuitive, natural gestures and motion can make your app a delight to experience.
Hey, guys. Thank you.
Thanks for coming by, guys.
Welcome to Designing Fluid Interfaces. My name is Chan. And, I work on the human interface team here at Apple. And, most recently, I worked on this, the fluid gestural interface for iPhone X.
So, me, Marcos, and Nathan, we want to share a little bit about what we learned working on this, and other projects like this in the past.
So, the question we ask ourselves a lot is, what actually makes an interface feel fluid? And, we've noticed that a lot of people actually describe it differently. You know, sometimes, when people actually try this stuff, when we show them a demo, and they try it, and they hold it in their hands, they sometimes say it feels fast. Or, other people sometimes say it feels smooth. And, when it's feeling really good, sometimes people even say it feels natural, or magical.
But, when it comes down to it, it really feels like, it's one of those things where you just know it when you feel it. It just feels right.
And, you can have a gestural UI, and we've seen lots of gestural UIs out there, but if it's not done right, something just feels off about it. And, it's oftentimes hard to put your finger on why.
And, it's more than just about frame rates, you know. You can have something chugging along at a nice 60 frames per second, but it just feels off. So, what gives us this feeling? Well, we think it boils down to when the tool feels like an extension of your mind. An extension of your mind.
So, why is this important? Well, if you look at it, the iPhone is a tool, right? It's a hand tool for information and communication. And, it works by marrying our tactile senses with our sense of vision.
But, if you think about it, it's actually part of a long line of hand tools extending back thousands of years.
The tool on the left here was used to extract bone marrow 150,000 years ago, extending the sharpness of what our fingers could do. So, we've been making hand tools for some time now. And, the most amazing thing is that our hands have actually evolved and adapted alongside our tools. We've evolved a huge concentration of muscles, nerves, blood vessels that can perform the most delicate gestures, and sense the lightest of touches.
So, we're extremely adapted to this tactile world we all live in.
But, if you look at the history of computers, we started in a place where there were a lot of layers of abstraction between you and the interface. There was so much you had to know just to operate it. And, that made it out of reach for a lot of people.
But, over the last few decades or so, we've sort of been stripping those layers back, you know, starting with indirect manipulation, where things were a little bit more one-to-one, a little bit more direct, all the way to now, where we're finally stripping away all those layers, to where you're directly interacting with the content. This to us is the magical element. It's when it stops feeling like a computer, and starts feeling more like an extension of the natural world. This means the interface is now communicating with us at a much more ancient level than interfaces have ever done. And, we have really high standards for it. You know, if the slightest thing feels wrong, boom, the illusion is just shattered. But, when it feels right, it feels like an extension of yourself, an extension of your physical body. It's a tool that's in sync with your thought. It feels delightful to use, and it feels really low-friction, and even playful.
So, what gives us this feeling? And, when it feels off, how do we make it feel right? That's what this presentation's all about.
We're going to talk about four things today. And, we're going to start with some design principles, talking about how we build interfaces that feel like an extension of us.
How to design motion that feels in tune with the motion of our own bodies, and the world around us.
And, also designing gestures that feel elegant and intelligent. We're also going to talk about how, now that we've built this kind of stuff, we build interactions on top of it that feel native to the medium of touch. So, let's get started. How do we design an interface that actually extends our mind? How do we do this? Well, we think the way to do it, is to align the interface to the way we think and the way we move.
So, the most important part of that is that our minds are constantly responding to changes in stimulus and thought, you know? Our minds and bodies are constantly in a state of dynamic change. So, it's not that our interfaces should be fluid, it's that we're fluid, and our interfaces need to be able to respond to that.
So, that begins with response.
You know, our tools depend on low latency. Think about how hard it would be to use any tool, or play an instrument, or do anything in the physical world, if there was a delay in using it. And, we found that people are really, really sensitive to latency. You know? If you introduce any amount of lag, things all of a sudden just kind of fall off a cliff in terms of how they respond to you. There's all this additional mental burden. It feels super disconnected. It doesn't feel like an extension of you anymore.
So, we work so hard to reduce latency. Where, we actually engineered the latest iPhone to respond quicker to your finger, so we can detect all the nuances of your gestures as instantly as possible.
So, we really care about this stuff, and we think you should too. And, that means look for delays everywhere. It's not just swipes. It's taps, it's presses, it's every interaction with the object. Everything needs to respond instantly.
And, during the process of designing this stuff, you know, oftentimes the delays kind of tend to seep in a little bit. You know? So, it's really important to keep an eye out for delays. Be vigilant and mindful of all the latencies or timers that we could introduce into the interface so that it always feels responsive.
So, that's the topic of response. It's really simple, but it makes an interface feel lively and dynamic. Next, we want to allow for constant redirection and interruption. This one's big.
So, our bodies and minds are constantly in a state of redirecting in response to change in thought, like we talked about.
So, if I was walking to the end of this stage here, and I realize I forgot something back there, I could just turn back immediately. And, I wouldn't have to wait for my body to reach the end before I did that, right? So, it's important for our interfaces to be able to reflect that ability of constantly redirecting. And, it makes it feel connected to you.
That's why for iPhone X, we built a fully redirectable interface.
So, what's that? Well, the iPhone X interface is actually a pretty simple two-axis gesture. You go horizontally between apps. And, you go vertically to go home. But you can also mix the two axes, so you can be on your way home, and peek at multitasking and decide whether or not to go there. Or, you can go to multitasking and decide, actually, no, I want to go home.
So, this might not seem that important, but what if we didn't do this? What if it wasn't redirectable? What if the only gestures you could do were this horizontal gesture between apps, and then a vertical gesture to go home, and that's it? You couldn't do any of that in-between stuff I just mentioned.
Well, what would happen is that you would have to think about what you wanted to do before you performed the gesture.
And so, the series of events would be very linear, right? So, you'd have to think, do I want to go home? Do I want to go to multitasking? Then you make your decision, then you perform the gesture, and then you release.
But, the cool thing is when it's redirectable, the thought and gesture happen in parallel. And, you sort of think it with the gesture, and it turns out this is way faster than thinking before doing. You know? Because it's a multi-axis gestural space. It's not separate gestures. It's one gesture that does all this stuff. Home, multitasking, quick app switching, so you don't have to think about it as a separate gesture.
And, it helps with discovery. Because you can discover a new gesture along the path of an existing gesture.
And, it allows you to layer gestures at the speed of thought. So, what does that last one mean? So, let me show you some examples. And, we've slowed down the physics on the simulation, so you can actually see a little bit what I'm talking about.
So, I can swipe to go home, and then swipe to the next page of the springboard while I'm going home. I can layer these two gestures once I've internalized them.
Another example is that I can launch an app and realize, oh, actually I need to go to multitasking, and I can interrupt the app and go straight to multitasking, while the app is launching.
Or, I can launch an app and realize, oh, that was the wrong app. And, I can shoot it back home, while I'm launching it. Now, there's one other one where I can actually just launch an app, and if I'm in a hurry, I can start interacting with the app as it's launching.
So, this stuff might not seem really important, but we've found it's super important for the interface to be always responding, always understanding you. It always feels alive. And, that's really important for your expectation and understanding of the interface, to be comfortable with it. To realize that it's always going to respond to you when you need it. And, that applies as well to changes in motion, not just to the start of an interaction, but when you're in the middle of an interaction, and you're changing. It's important for us to be responsive to interruption as well. So, a good example is multitasking on iPhone X.
So, we have this pause gesture where you slide your finger halfway up the screen, and pause, and so we need to figure out how to detect this change in motion. And so, how do we do this? How do we detect this change in motion? Should we use a timer? Should we wait until your finger has come below a certain velocity for a certain amount of time, and then bring in the multitasking cards? Well, it turns out that's too slow. People expect to be able to get to multitasking instantly. And, we need a way that can respond as fast as them. So, instead we look at your finger's acceleration.
It turns out there's a huge spike in the acceleration of your finger when you pause. And, actually the faster you stop, the faster we can detect it. So, it's actually responding to the change in motion, as fast as we know how, instead of waiting for some timer.
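As a rough illustration of this idea (this is not Apple's actual implementation), you could watch for a sharp negative acceleration between touch samples instead of waiting on a timer. The names `PauseDetector` and `didPause`, and the threshold value, are all hypothetical:

```swift
import UIKit

// Hypothetical sketch: detect a pause by watching for a sharp drop in
// gesture velocity between touch samples, instead of waiting on a timer.
final class PauseDetector {
    // Tuned by feel; a large negative acceleration means "the finger just stopped."
    var accelerationThreshold: CGFloat = -5000  // points per second squared
    var didPause: (() -> Void)?

    private var previousVelocity: CGFloat = 0
    private var previousTimestamp: TimeInterval = 0

    func addSample(velocity: CGFloat, timestamp: TimeInterval) {
        defer {
            previousVelocity = velocity
            previousTimestamp = timestamp
        }
        guard previousTimestamp > 0, timestamp > previousTimestamp else { return }

        // The faster the finger stops, the bigger this spike, and the
        // sooner we can respond.
        let acceleration = (velocity - previousVelocity) / CGFloat(timestamp - previousTimestamp)
        if acceleration < accelerationThreshold {
            didPause?()
        }
    }
}
```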
So, this is a good example of responding to redirection as fast as possible. So, this is the concept of interruption and redirection. This stuff makes the interface feel really, really connected to you.
Next, we want to talk a little bit about the architecture of the interface. How you lay it out, conceptually.
And, we think when you're doing that, it's important to maintain spatial consistency throughout movement.
What does that mean? This kind of mimics the way our sense of object permanence works in the real world. So, things smoothly leave and enter our perception in symmetric paths.
So, if something disappears one way, we expect it to emerge from where it came, right? So, if I walked off this stage this way, and then emerged that way, you'd be pretty amazed, right? Because that's impossible. So, we wanted to play into this consistent sense of space that we all have in the world. And so, what that means is, if something is going out of view in your interface, and coming back into view, it should do so in symmetric paths. It should have a consistent offscreen path as it enters and leaves. A good example of this is actually iOS navigation. When I tap on an element in this list here, it slides in from the right.
When I tap the back button, it goes back to the right. It's a symmetric path. Each element has a consistent place where it lives at both states. This also reinforces the gesture. If I choose to slide it myself to the right, because I know that's where it lives, I can do that. It's expected.
So, what if we didn't do this? Here's an example, where when I tap on something, it slides in, and then when I hit back it goes down. And, it feels disconnected and confusing, right? It feels like I'm sending it somewhere. In fact, if I wanted to communicate that I was sending it somewhere, this is how I could do it, right? So, that's the topic of spatial consistency.
It helps the gesture feel aligned with our spatial understanding of the world. Now, the next one is to hint in the direction of the gesture.
You know, we humans are always, kind of, predicting the next few steps of our experience. We're always using the, kind of, trajectories of everything that's happening in the world to predict the next few steps of motion.
So, we think it's great when an interface plays into that prediction. So, if you have two states here, an initial state and a final state, and an intermediate transition between them, the object should transition smoothly between the two states in a way that it grows from the initial state to the final state, whether it's through a gesture or an animation.
So, a good example is Control Center, actually. We have these modules here in Control Center, where as you press they grow up and out towards your finger in the direction of the final state, where it finally just pops open. So, that's hinting. It makes the gestures feel expected, and predictable. Now, the next important principle is to keep touch interactions lightweight.
You know, the lightness of multitouch is one of the most underrated aspects of it, I think. It enables the airy swipes and scrolls, and all the taps and stuff that we're all used to. It's all super lightweight. But, we also want to amplify their motion. You want to take a small input and make a big output, to give that satisfying feeling of moving or throwing something and having a magnified result. So, how does this apply to our interfaces? Well, it starts with a short interaction.
A short, lightweight interaction.
And, we use all our sensors, all our technology, to understand as much about it as we can. To, sort of, generate a profile of energy and momentum contained within the gesture.
Using everything we know, including position, velocity, speed, force, everything we know about it to generate a kind of, inertial profile of this gesture. And then we take that, and generate an amplified extension of your movement. It still feels like an extension of you. So, you get that satisfying result with a light interaction.
So, a good example of this is scrolling, actually. Your finger's only onscreen for a brief amount of time, but the system is preserving all your energy and momentum, and gracefully transferring it into the interface.
So, what if it didn't have this? Those same swipes, well, they wouldn't get you very far.
And, in order to scroll, you'd have to do these long, laborious swipes that would require a lot more manual input. It would be a huge pain to use.
Another good example of this is swipe to go home.
The amount of time that your finger's onscreen is very brief. And, it ends up making it a much more liquid and lightweight gesture that still feels native to the medium of multitouch.
While still being able to reuse a lot of your muscle memory from a button, because you move your finger down on the screen, and back up to the springboard. And, it's not just swipes, it's taps too. It's important for an interface to respond satisfyingly to every interaction. The interface is signaling to you that it understood you. It's so important for the interface to feel alive and connected to you.
So, that's the topic of lightweightness and amplification.
The next one is called rubberbanding.
It means we're softly indicating boundaries of the interface. So, in this example, the interface is gradually and softly letting you know that there's nothing there. And, it's tracking you throughout. It's always letting you know that it's understanding you.
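The session doesn't give the exact curve, but a commonly cited approximation of this kind of rubberbanding looks like the sketch below: the further you pull past the edge, the less the content moves, asymptotically approaching a fraction of the view's dimension. The 0.55 coefficient is illustrative, not an official value.

```swift
import UIKit

// Softly diminishing overscroll: the output grows ever more slowly as the
// raw offset grows, and never exceeds dimension * coefficient.
func rubberBand(offset: CGFloat, dimension: CGFloat, coefficient: CGFloat = 0.55) -> CGFloat {
    return (1.0 - (1.0 / ((offset * coefficient / dimension) + 1.0))) * dimension
}

// Dragging 300 points past the edge of a 667-point-tall view moves the
// content only about 132 points.
let overscroll = rubberBand(offset: 300, dimension: 667)
```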
What happens if you didn't do that? Well, it would feel like this. It would feel super harsh and disconcerting. You kind of hit a wall there. It would feel broken, right? And, you actually wouldn't know the difference between a frozen phone, and a phone that's just at the top edge of its content, right? So, it's really important that it's always telling you that you've reached the edge.

And, this applies to transitions, too. It's not just about when you hit the edge, it's also when you hand off from one thing to another thing. Tracking. So, a good example of this is when you transition from sliding up the dock to sliding up the app. It doesn't just hit a wall, where one thing stops tracking, and then the other thing takes over. They both smoothly hand off in smooth curves, so that you don't feel like there's this harsh moment where you hand off from one thing to another. The next one is to design smooth frames of motion.
So, imagine I have a little object here moving up and down. It's very simple. But, we all know this object is not really moving, right? We're all just having the perception of it moving. Because we're seeing a bunch of frames on the screen all at once, and it's giving us the illusion of it moving. So, if we took all of those frames of motion and, kind of, spread them out here, we see the ball's motion over time. The thing that we're concerned about is right around here, where there's too much visual change between the adjacent frames. This is when the perception of the interface becomes a little choppy. You get this visual strobing.
And, this is because the difference between the two frames is too much.
And, it strobes against your vision. So, here's an example where you have two things both moving at 30 frames per second. But the one on the left looks a bit smoother than the one on the right, because the one on the right is moving so fast that it's strobing. My perception of vision is, kind of, breaking down. I don't believe that it's moving smoothly any more.
So, the important thing to take away is that it's not just about framerate. It's what's in the frames.
So, we're kind of limited by the framerate, and how fast we can move and still preserve a smooth motion.
So, this one's in 30 frames per second. If we move it up to 60 frames per second, you can see that we can actually go a little bit faster, and still preserve smooth motion. We can do faster movement without strobing.
And, there are additional tricks we can do too, like motion blur. Motion blur basically bakes more information into each frame about the movement, like the way your eyes work, and the way a camera works.
And, you can also take a page from 2D animation and video games with a technique called motion stretching, which stretches the content in each frame to provide this elastic look as it moves with velocity. And so, in motion, it kind of looks like this. So, each of the different techniques, kind of, tries to encode more information visually about what's going on in the motion. And, I want to focus a little bit on this last one here, motion stretching, because we do this on iPhone X, actually. You know, when you launch an app, the icon elastically stretches down to become the app as it opens.
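As a hedged sketch of how a velocity-proportional stretch like this might be applied (the constants and function name here are purely illustrative, not how iOS implements it):

```swift
import UIKit

// Illustrative motion stretching: scale the view along its direction of
// travel in proportion to speed, shrinking the other axis to roughly
// preserve its area. The 5000 and 0.2 constants are arbitrary.
func applyMotionStretch(to view: UIView, verticalVelocity: CGFloat) {
    let speed = abs(verticalVelocity)              // points per second
    let stretch = 1.0 + min(speed / 5000.0, 0.2)   // up to 20% longer
    view.transform = CGAffineTransform(scaleX: 1.0 / stretch, y: stretch)
}
```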
And, it stretches up in the opposite direction as you close the app, to give you that little bit of extra information between each frame of motion to make it a little bit smoother-looking.

Lastly, we want to work with behavior rather than animation. You know, things in the real world are always in a state of dynamic motion, and they're always being influenced by you. They don't really work like animations in the animation sense, right? There's no animation curve prescribed by real life. So, we want to think about animation and behavior more as a conversation between you and the object. Not prescribed by the interface. So, move away from static things transitioning into animated things, and instead think about behavior. Nathan's about to dive deep into this one. But, here's a quick example. So, in Photos, there's less mass on the photos, because they're conceptually lighter. But, then when you swipe apps, there's more mass on the apps. They're conceptually heavier, so we give them more mass in the system.
So, that's a little bit about how to design interfaces that think and work like us.
It starts with response. To make things feel connected to you, and to accommodate the way our minds are constantly in motion.
To maintain spatial consistency, to reinforce a consistent sense of space, and symmetric transitions within that space.
And, to hint in the direction of the gesture. To play into our prediction of the future. And, to maintain lightweight interactions, but amplify their output.
To get that satisfying response, while still keeping the interaction airy and lightweight. And, to have soft boundaries and edges to the interface. That interface is always gracefully responding to you, even when you hit an edge, or transition from tracking one thing to tracking the other. And, to design smooth dynamic behavior that works in concert with you. So, that's some principles for how to approach building interfaces that feel like an extension of our minds.
So, let's dive in a little deeper. I'm going to turn it over to Nathan de Vries, my colleague, to talk about designing motion in a way that feels connected to the motion of both you and the natural world. Thanks, Chan.
Hi everyone. My name's Nathan, and I'm super excited to be here today to talk to you about designing with dynamic motion.
So, as Chan mentioned, our minds and our bodies are constantly in a state of change. The world around us is in a state of change. And, this introduces this expectation that our interfaces behave the same way; as they become more tactile, our expectations shift to a much higher fidelity.
Now, one way we've used motion in interfaces is through timed animations. A button is tapped on the screen, and the reins are, kind of, handed over to the designer.
And, their job is to craft these perfect frames of animation through time. And, once that animation is complete, the controls are handed back to the person using the interface, for them to continue interacting.
So, you can kind of think of animation and interaction as moving linearly through time in this, kind of, call and response pattern. In a fluid interface, the dynamic nature of the person using the interface kind of shifts control over time away from us as designers.
And, instead, our role is to design how the motion behaves in concert with an interaction. And, we do this through these continuous dynamic behaviors that are always running, that are always active. So, it's these dynamic behaviors that I'm going to, really focus on today. First of all, we're going to talk about seamless motion. And, it's this element of motion that makes it feel like the dynamic motion is an extension of yourself.
Then, we're going to take a look at character. How, even without timing curves, and timed animations, we can introduce the concept of playfulness, or character, or texture to motion in your interfaces.
And finally, we'll look at how motion itself gives us some clues about what people intend to do with your interface. How we can resolve some uncertainty about what a gesture is trying to do by really looking at the motion of the gesture. So, to kick things off, let's look at seamless motion. What do I mean by seamless motion? Well, let's look at an example that I think we're all familiar with.
So, here we have a car, and it's cruising along at a constant speed. And then, the brakes are applied, slowing it down to a complete stop. Let's look at it again, but this time we'll plot out the position of the car over time. So, at the very start of this curve it's, kind of, straight, and pointing up to the right. And, this shows that the car's position is moving at a constant rate, it's kind of unchanging.
But then, you'll notice the curve starts to bend, to smoothly curve away from this straight line. And, this is the brakes being applied. The car is decelerating from friction being introduced.
And, by the end of the curve, the curve is completely flat, horizontal, showing that the position is now unchanging. That the car is stopped. So, this position curve is visualizing essentially what we call seamless motion. The line is completely unbroken, and there are no sudden changes in direction.
So, it's smooth and it's seamless. Even when, actually, new dynamic behaviors are being introduced to the motion of the car, like a brake, which is applying friction to the car.
And, even when the car comes to a complete stop, you'll notice that the curve is completely smooth. There's this indiscernible quality to it. You can't tell when the car precisely stopped. So, why am I talking about cars? This is a talk about fluid interfaces, right? So, we feel like the characteristics of the physical world make for great behaviors.
Everyone in this room finds the car example so simple because we have a shared understanding, or a shared intuition for how an object like a car moves through the world.
And, this makes it a great reference point. Now, I don't mean that we need to build perfect physical simulations of cars that literally drive our interface. But, we can draw on the motion of a car, or of objects that we throw or move around in the physical world around us, and use them in our own dynamic behaviors to make their motion feel familiar, or relatable, or even believable, which is the most important thing. Now, this idea of referencing the physical world in dynamic behaviors has been in the iPhone since the very beginning with scrolling.
A child can pick up an iPhone and scroll to their favorite app on the Home screen, just as easily as they can push a toy car across the floor. So, what are some key, kind of, characteristics of this scrolling, dynamic behavior that we have? Well, firstly it's tapping into that intuition, that shared understanding that we all have for objects moving around in the world. And, our influence on those objects.
The motion of the content is perfectly seamless, so while I'm interacting with it, while I'm dragging the content around, my body is providing the fluidity of the movement, because my body is fluid.
But, as soon as I let go of the content, it seamlessly coasts to a stop. So, we're kind of maintaining the momentum of the effort being put into the interface.
The amount of friction that's being used for scrolling is consistent, which makes it predictable, and very easy to master.
And finally, the content comes to an imperceptible stop, kind of like the car, not really knowing precisely when it came to a stop.
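Here's a sketch of what this kind of deceleration looks like numerically, assuming UIScrollView's convention that velocity decays by a constant factor every millisecond (the function name is my own):

```swift
import UIKit

// Scrolling-style deceleration: velocity fades geometrically, so the
// motion never hits a hard stop; it just drops below perception.
func stepDeceleration(position: inout CGFloat,
                      velocity: inout CGFloat,  // points per second
                      dt: TimeInterval,
                      rate: CGFloat = UIScrollView.DecelerationRate.normal.rawValue) {
    velocity *= pow(rate, CGFloat(dt * 1000.0))   // ~0.998 per millisecond
    position += velocity * CGFloat(dt)
}
```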
And, we feel that this distinct lack of an ending kind of reinforces this idea that the content is always moving, and always able to move, so while content is scrolling, it makes it feel like you can just put your finger down again, and continue scrolling. You don't have to wait for anything to finish. So, there are innumerable characteristics of the physical world that would make for great behaviors.
We don't have time to talk about them all, but I'd like to focus on this next one, because we personally find it incredibly indispensable in our own design work.
So, materials like this beautiful flower here, the natural fibers of this flower have this organic characteristic called elasticity.
And, elasticity is this tendency for a material to gracefully return into a resting state once stress or strain is removed.
Our own bodies are incredibly elastic.
Now, we're capable of running incredibly long distances, not because of the strength of our muscles, but because of their ability to relax.
It's their elasticity that's doing this.
So, our muscles contract and relax once stress and strain is removed. And, this is how we conserve energy. It's what makes us feel natural and organic. The same elasticity is used in iPhone X.
Tap an icon on the Home screen, and an elastic behavior pulls the app towards you, bringing it exactly where you want it to be. And, when you swipe from the bottom, the app is placed back on the Home screen in its perfect position. We also use elasticity in scrolling. So, if I scroll too far and rubberband, like Chan was talking about, when you let go, the content uses elasticity to pull back within the boundaries, helping you get into this resting position, ready for the next time you want to scroll. So, let's dig in a little deeper on how this elasticity works behind the scenes. You can think of the scrolling content as a ball attached to a spring. On one end of the spring is the current value. This is where the content is on the display.
And, the other end of the spring is where the content wants to go because of its elasticity. So, you've got this spring that's pulling the current value towards the target. Its behavior is influencing the position of the content.
Now, the spring is essentially pulling that current value towards the target.
And, what's interesting about a spring is, it does this seamlessly. This seamlessness is, kind of, built in to the behavior.
And, this is what makes them such versatile tools for doing fluid interface design. You, kind of, get this stuff for free. It's baked into the behavior itself. So, we love this behavior of a value moving towards a target. We can just tell the ball where to go, and we'll get this seamless motion of the ball moving towards the target. But, we want a little bit more control over how fast it moves. And, whether it overshoots. So, how do we do that? Well, we could give the ball a little more mass, like make it bigger, or make it heavier.
And, if we do that, then it changes the inertia of the ball, or its willingness to want to start moving. Or, maybe its unwillingness to want to stop moving. And, you end up with this little overshoot that happens.

Another property that we could change is the stiffness of the spring, or the tensile strength of the spring. And, what this does, is it affects the force that's being applied to the ball, changing how quickly it moves towards the target.

And, finally, much like the braking of a car, we can change the damping, or the friction, of the surface that the ball is sitting on. And, this will act as, kind of, a brake that slows the ball down over time, also affecting our ability to overshoot.

So, the physical properties of a ball and a spring are, kind of, physics class material, right? It's super useful in a scientific context, but we've found that in our own design work they can be a little bit overwhelming or unwieldy for controlling the behavior of objects on the screen.
So, we think our design tool should have a bit of a human interface to them. That they need to reflect the needs of the designer that's using the tool.
And so, how do we go about that? How do we simplify these properties down to make them more design friendly? So, mass, stiffness, and damping will remain behind the scenes; they're the fundamental properties of the spring system that we're using. But, we can simplify our interface down to two simple properties.
The first is damping, which controls how much or how little overshoot there is, from 100% damping, where there will be no overshoot, to 0% damping, where the spring would oscillate indefinitely.
The second property is response.
And, this controls how quickly the value will try and get to the target.
And, you might notice that I haven't used the word duration. We actually like to avoid using duration when we're describing elastic behaviors, because it reinforces this concept of constant dynamic change. The spring is always moving, and it's ready to move somewhere else.
Now, the technical terms for these two properties are damping ratio and frequency response. So, if you'd like to use these for your own design work, you can look up those terms, and you'll find easy ways to convert them. So, we now have these two simple properties for controlling elastic behaviors. But, there's still an infinite number of possibilities that we can have with these curves. Like, there's just hundreds, thousands, millions of different ways we can configure those two simple properties and get very different behavior.
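For reference, here's one sketch of that conversion, assuming unit mass (a common simplification; verify it against whatever spring system you're actually using):

```swift
import Foundation

// Convert the two designer-facing properties back into physical spring
// constants, assuming mass = 1. `response` is the frequency response in
// seconds; `dampingRatio` runs from 0 (oscillates forever) to 1 (no overshoot).
func springConstants(dampingRatio: Double, response: Double) -> (stiffness: Double, damping: Double) {
    let mass = 1.0
    let stiffness = pow(2.0 * .pi / response, 2) * mass
    let damping = 4.0 * .pi * dampingRatio * mass / response
    return (stiffness, damping)
}

// A quick, non-overshooting behavior:
let snappy = springConstants(dampingRatio: 1.0, response: 0.15)
```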
How do we use these to craft a character in our app? To control the feel of our app? Well, first and foremost, we need to remember that our devices are tools.
And, we need to respect that tools, when they're used with purpose, require us to not get in the way by introducing unnecessary motion.
So, we think that you should start simple.
A spring doesn't need to overshoot. You don't need to use springy springs.
So, we recommend starting with 100% damping, or no overshoot when you're tuning elastic behaviors.
That way you'll get smooth, graceful, and seamless motion that doesn't distract from the task at hand. Like, just quickly shooting off an email.
So, when is it appropriate to use bounciness? There's got to be a time when that's appropriate, right? Well, we feel if the gesture that's driving the motion itself has momentum, then you should reward that momentum with a little bit of overshoot.
Put another way, if a gesture has momentum, and there isn't any overshoot, it can often feel broken or unsatisfying to have the motion follow that gesture. An example of where we use this is in the Music app.
So, the Music app has a small minibar representing Now Playing at the bottom of the screen, and you can tap the bar to show Now Playing. Because the tap doesn't have any momentum in the direction of the presentation of Now Playing, we use 100% damping to make sure it doesn't overshoot. But, if you swipe to dismiss Now Playing, there is momentum in the direction of the dismissal, and so we use 80% damping to have a little bit of bounce and squish, making the gesture a lot more satisfying.
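In UIKit terms, those two cases might look something like this sketch (the durations and velocity value are illustrative; only the 100% and 80% damping figures come from the session):

```swift
import UIKit

// Tap on the minibar: no momentum in the presentation direction,
// so present Now Playing with 100% damping (no overshoot).
let present = UIViewPropertyAnimator(
    duration: 0.5,
    timingParameters: UISpringTimingParameters(dampingRatio: 1.0))

// Swipe to dismiss: there is momentum in the dismissal direction,
// so reward it with 80% damping, seeding the spring with the
// gesture's velocity for a little bounce and squish.
let dismiss = UIViewPropertyAnimator(
    duration: 0.5,
    timingParameters: UISpringTimingParameters(
        dampingRatio: 0.8,
        initialVelocity: CGVector(dx: 0, dy: 1.5)))  // from the pan gesture
```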
Bounciness can also be used as a utility, as a functional means.
It can serve as a helpful hint that there's something more below the surface. With iPhone 10, we introduced two buttons to the cover sheet for turning on the Flashlight, and for launching the Camera.
To avoid turning on the flashlight by mistake, we require a more intentional gesture to activate it.
But, if you don't know that there's a more intentional gesture needed to activate it, when you tap on the button, it responds with bounciness. It has this kind of playful feel to it.
And, that hint is teaching you not only that the button is working and responding to you, but that if you press just a little bit more firmly, it'll activate. It's hinting in the direction of the motion. So, bounciness can be used to indicate this kind of thing. Now, so far we've been talking about using motion to move things around, or to change their scale, change their visual representation on the screen.
But, we perceive motion in many different ways.
Through changes in light and color, or texture and feel.
Or even sound, and many other sensations our senses can detect. We feel this is an opportunity to go even further, to go beyond motion, when you're tuning the character of your app.
By combining dynamic behaviors for motion with dynamic behaviors for sound and haptics, you can really fundamentally change the way an interface feels.
So, when you see, and you hear, and you feel the result of the gesture, it can transform what was otherwise just a scrolling behavior into something that feels like a very tactile interface. Now, there's one final note I want you thinking about when you're crafting the character of your app.
And, that's that it feels cohesive, that you're staying in character. Now, what does this mean? So, even within your app, or across the whole system, it's important that you treat behaviors as a family of behaviors.
So, in scrolling for example, I can scroll down a page using a scrolling behavior, and then tap the status bar to scroll to the top of the page, using an elastic behavior.
In both cases, the page itself feels like it's moving in the same way, that it has the same behavior, even though two different types of behaviors are driving its motion, are influencing its motion.
Now, this extends beyond a single interaction like scrolling.
It applies to your whole app. If you have a playful app, then you should embrace that character, and make your whole app feel the same way. So, that people-- once they learn one behavior of your app, they can pick up another behavior really easily, because we learn through repetition. And, what we learn bleeds over into other behaviors.
So, next up, I'd like to talk a little bit about aligning motion, or dynamic motion, with intent.
So, for a discrete interaction like a button, it's pretty clear what the intent of the gesture is. Right? You've got three distinct visual representations on screen here.
And, when I tap one of them, the outcome is clear.
But, with a gesture like a swipe, the intent is less immediately clear. You could say that the intent is almost encoded in the motion of the gesture, and so it's our job, our role, to interpret what the motion means to decide what we should do with it. Let's look at an example.
So, let's say I made a FaceTime call, a one-on-one FaceTime call, and in FaceTime, we have a small video representation of yourself in the corner of the screen. And, this is so I can see what the person on the other end sees.
We call this floating video the PIP, short for picture in picture.
Now, we give the PIP a floating appearance to make it clear that it can be moved.
And, it can be moved to any corner of the screen, with just a really lightweight flick.
So, if we compare that to the Play, Pause, and Skip buttons, like, what's the difference here? So, in this case, there's actually four invisible regions that we're dealing with. No longer do we have these three distinct visual representations on screen that are being tapped. We kind of have to look at the motion that's happening through the gesture, and intuit what was meant. Which corner did we intend to go to? Now, we call these regions of the screen endpoints of the gesture.
And, when the PIP is thrown, our goal is to find the correct endpoint, the one that was intended. And, we call this aligning the endpoint with the intent of the gesture. So, one approach for this is to keep track of the closest endpoint as I'm dragging the PIP. Now, this kind of works. I can move the PIP to the other corner of the screen, but it starts to break down as soon as I move the PIP a little bit further. Now, I actually need to drag the PIP quite far, like past halfway over the screen. Pretty close to the other corner. So, it's not really magnifying my input. It's not really working for me. And, if I try and flick the PIP, it kind of goes back to the nearest corner, which isn't necessarily what I expected. So, the issue here is that we're only looking at position. We're completely ignoring the momentum of the PIP, and its velocity when it's thrown.
So, how can we incorporate momentum into deciding which endpoint we go to? So, to think about this, I think we can set aside endpoints for a moment, and take a step back.
And, just really simplify the problem. Ultimately, what I'm trying to do here is move content around on the screen.
And, I actually already have a lot of muscle memory for doing exactly that with scrolling. So, why don't we use that here? We use scrolling behaviors all the time, so we have this natural intuition for how far content goes when I scroll.
So, here you can see that when I scroll the PIP instead, it coasts along, and it slows down, using the deceleration that we're familiar with from scrolling.
And, basically by taking advantage of that here, we're reinforcing things that people have learned elsewhere. That the behavior is just doing what was expected of the system. Now this new, hypothetical, imaginary PIP position is not real. We're not going to show the PIP go here in the interface. This is what we call a projection.
So, we've taken the velocity of the PIP, when it was thrown. We've, kind of, mixed in the deceleration rate, and we end up with this projected position where it could go if we scrolled it there. And so, now instead of finding the nearest endpoint to the PIP when we throw, we can calculate its projected position and move there instead. So now, when I swipe from one corner of the screen to another with just a lightweight flick, it goes to the endpoint that I expected.
So, this idea of projecting momentum is incredibly useful. And, we think it's super important. I'd like to share some code for doing this with you, so that you can do this in your own apps. So, this function will take a velocity, like the PIP's position velocity, and a deceleration rate, and it'll give you the value that you could use as an endpoint for a dynamic behavior.
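The function described (velocity in points per second, and a per-millisecond deceleration rate like UIScrollView's) comes down to summing the geometric series of each millisecond's remaining travel:

```swift
import UIKit

// Distance travelled after decelerating to zero velocity at a constant rate.
func project(initialVelocity: CGFloat, decelerationRate: CGFloat) -> CGFloat {
    // Per-millisecond velocity, times the sum of the geometric series
    // rate + rate^2 + rate^3 + ... = rate / (1 - rate).
    return (initialVelocity / 1000.0) * decelerationRate / (1.0 - decelerationRate)
}
```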
It's pretty simple. If we look at my FaceTime example of the pan gesture ending code, you can see that I'm just using the UIScrollView.DecelerationRate. So, we're leaning on that familiarity people have with scrolling and how far content will go when scrolled.
And, I'm using that with my projection. So, I take the velocity of the PIP and the deceleration rate, and I create that imaginary PIP position. And, it's this imaginary, projected position that I then use as the nearest corner position.
And, I send my PIP there, by retargeting it.
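Put together, the gesture-ending code could look like this sketch; `pip`, `view`, and `nearestCorner(to:)` are hypothetical helpers inside a view controller, and `project` is the function above:

```swift
@objc func handlePan(_ recognizer: UIPanGestureRecognizer) {
    switch recognizer.state {
    case .changed:
        // Track one-to-one, preserving the relative grab point.
        let translation = recognizer.translation(in: view)
        pip.center = CGPoint(x: pip.center.x + translation.x,
                             y: pip.center.y + translation.y)
        recognizer.setTranslation(.zero, in: view)
    case .ended, .cancelled:
        let velocity = recognizer.velocity(in: view)  // points per second
        let rate = UIScrollView.DecelerationRate.normal.rawValue

        // Project where the PIP would coast to if it were scrolled freely...
        let projected = CGPoint(
            x: pip.center.x + project(initialVelocity: velocity.x, decelerationRate: rate),
            y: pip.center.y + project(initialVelocity: velocity.y, decelerationRate: rate))

        // ...and retarget to the corner nearest that imaginary position.
        let target = nearestCorner(to: projected)
        let spring = UISpringTimingParameters(dampingRatio: 1.0)
        let animator = UIViewPropertyAnimator(duration: 0.4, timingParameters: spring)
        animator.addAnimations { self.pip.center = target }
        animator.startAnimation()
    default:
        break
    }
}
```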
So, this idea of using projection to find out the endpoint of a position, is incredibly useful for things being dragged or swiped, where you really need to respect the momentum of the gesture.
But, this projection function isn't just useful for positions, you can also use it for scales, or even for rotations.
Or, even combinations of the two.
It's a really versatile tool that you should really be using to make sure that you're respecting the momentum of a gesture, and making it feel like the dynamic motion in your app is an extension of yourself.
So, that's designing with motion. Dynamic motion.
Behaviors should continuously and seamlessly work in concert with interactions.
We should be leaning on that shared intuition that we have for the physical world around us. The things that we learn as children about how objects behave and move in the physical world, apply just as readily to our dynamic interfaces.
You should remember that bounciness needs to be purposeful. Think about why you're using it, and whether it's appropriate. And, make sure that as you add character, and texture, that you're balancing it with utility.
And finally, remember to project momentum. Don't just use position, use all of the information that's at your disposal to ensure that motion is aligned with the intent of where people actually want to go. And then, take them there. So, to talk a little bit more about how to fluidly respond to gestures and interactions, I'd like to introduce my colleague, Marcos, to the stage. Thanks for having me, everyone. That was great.
Thanks, Nathan.
Hi everyone.
My name is Marcos.
So far, we've seen how important fluidity is when designing interfaces.
And, a lot of that comes from your interaction with a device.
So, in this section, we're going to show you how touches on the screen become gestures in your apps. And, how to design these gestures to capture all the expression and intent in your interfaces.
So, we're going to start by looking at the design of some core gestures like taps and swipes.
Then, we'll look at some interaction principles, that you should follow when designing gestures for your interface.
And then, we'll see how to deal with multiple gestures, and how to combine them into your apps.
We're going to start by looking at a gesture that is apparently very simple, a tap.
You would think that a tap is something that doesn't have to be designed, but you'll see how its behavior has more nuance than it seems.
In our example, we're going to look at tapping on a button, in this case, on the Calculator app.
The first thing to remember is that the button should highlight immediately when I touch down on it.
This shows me the button is working, and that the system is reacting to my gesture.
But, we shouldn't confirm the tap until my touch goes up.
The next thing to remember is to create an extra margin around the tap area. This extra margin will make our taps more comfortable, and avoid accidental cancellations if a touch moves during the interaction. And, like my colleague Chan was saying, I should be able to change my mind after I've touched down on the button. So, if I drag my finger outside the tap area, and lift it, I can cancel the tap. The same way, if I slide it back onto the button, the button should highlight again, and let me confirm the tap.
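As a sketch, UIControl's tracking methods map onto this behavior fairly directly; the 20-point margin and the class name are illustrative, not official values:

```swift
import UIKit

// Highlight on touch down, allow cancellation by dragging out, re-highlight
// on dragging back in, and only confirm on touch up inside the margin.
final class ComfortableButton: UIControl {
    private let tapMargin: CGFloat = 20

    private var extendedBounds: CGRect {
        bounds.insetBy(dx: -tapMargin, dy: -tapMargin)
    }

    override func beginTracking(_ touch: UITouch, with event: UIEvent?) -> Bool {
        isHighlighted = true  // respond immediately on touch down
        return true
    }

    override func continueTracking(_ touch: UITouch, with event: UIEvent?) -> Bool {
        isHighlighted = extendedBounds.contains(touch.location(in: self))
        return true
    }

    override func endTracking(_ touch: UITouch?, with event: UIEvent?) {
        if let touch = touch, extendedBounds.contains(touch.location(in: self)) {
            sendActions(for: .primaryActionTriggered)  // confirm on touch up
        }
        isHighlighted = false
    }
}
```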
The next gesture we're going to talk about is swipe.
Swipes are one of the core gestures of iOS, and they're used for multiple actions like scrolling, dragging, and paging.
But, no matter what you use it for, or what you call it, the core principles of the gesture are always the same. In this example, we're going to use a swipe to drag this image to the right.
So, the interaction starts the moment I touch down on the image with intention to drag it.
But, before we can be sure it's a swipe, the touch has to move a certain distance. This helps us differentiate swipes from other gestures. This distance is called hysteresis, and it's usually 10 points in iOS.
So, once the touch reaches this distance, the swipe begins.
This is also a good moment to decide the direction of the swipe. If it's horizontal, or vertical, for instance. We don't really need it for our example, but it's very useful in some situations. So, now that the swipe has been detected, this is the initial position of the gesture.
After this moment, the touch and the image should stay together and move as one thing. We should respect the relative position, and never use the center of the image as the dragging point.
During the drag, we should also keep track of the position and speed of the touch, so when the drag is over, we don't use just the last position. We use the history of the touch, to ensure that all the motion is transferred fluidly into the image.
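Here's a sketch of those three ideas together, using raw touches (the sample count and class name are illustrative):

```swift
import UIKit

// A draggable view with ~10 points of hysteresis, a preserved grab point,
// and a release velocity estimated from recent touch history.
final class DraggableView: UIView {
    private let hysteresis: CGFloat = 10
    private var isDragging = false
    private var touchStart: CGPoint = .zero
    private var grabOffset: CGPoint = .zero
    private var samples: [(position: CGPoint, time: TimeInterval)] = []

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first, let superview = superview else { return }
        let location = touch.location(in: superview)
        touchStart = location
        // Drag from where the finger landed, never from the view's center.
        grabOffset = CGPoint(x: center.x - location.x, y: center.y - location.y)
        samples = [(location, touch.timestamp)]
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first, let superview = superview else { return }
        let location = touch.location(in: superview)
        if !isDragging {
            // Not a swipe until the touch has moved far enough.
            isDragging = hypot(location.x - touchStart.x,
                               location.y - touchStart.y) > hysteresis
        }
        if isDragging {
            center = CGPoint(x: location.x + grabOffset.x,
                             y: location.y + grabOffset.y)
        }
        samples.append((location, touch.timestamp))
        if samples.count > 8 { samples.removeFirst(samples.count - 8) }
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        defer { isDragging = false }
        // Use the touch history, not just the final position.
        guard let first = samples.first, let last = samples.last,
              last.time > first.time else { return }
        let dt = CGFloat(last.time - first.time)
        let velocity = CGPoint(x: (last.position.x - first.position.x) / dt,
                               y: (last.position.y - first.position.y) / dt)
        _ = velocity  // hand this off to a deceleration or projection behavior
    }
}
```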
So, as we've seen, touch and content should move together. One-to-one tracking is extremely important. When swiping or dragging, the contents should stay attached to the gesture.
This is one of the core principles of iOS. It enables scrolling, and makes the device feel natural and intuitive. It's so recognizable and expected that the moment the touch and content stop tracking one-to-one, we immediately notice it. And, in the case of scrolling, it shows us that we've reached the end of the content. But, one-to-one tracking is not limited to touch screens. For instance, manipulating UI on the Apple TV was designed around this concept.
So, even if the touch is not manipulating the content directly, having a direct connection between the gesture and the interface puts you in control of the action, and makes the interaction intuitive.
Another core principle when designing gestures, is to provide continuous feedback during the interaction. And, this is not just limited to swipes or drags. It applies to all interactions. So, if you look again at the Flashlight button on the iPhone X, the size of the button changes based on the pressure of my touch. And, this gives me a confirmation of my action. It shows me the system is responding to my gesture, but it also teaches me that pressing harder will eventually turn on the flashlight. Another good example of continuous feedback, is the focus engine on the Apple TV.
So, the movements on the Siri Remote are continuously represented on the screen. And, they show me the item that is currently selected, the moment the selection is going to change, and the direction the selection is going to go. So, having our UI respond during the gesture is critical to creating a fluid experience. For that reason, when implementing your gestures, you should avoid recognizers that only fire at the end of the gesture, like UISwipeGestureRecognizer. And, use the actual touches, or other gesture recognizers that provide all possible information about the gesture.
So, not just the position, but also the velocity, the pressure, the size of the touch. In most situations though, your interfaces must respond to more than one gesture.
As you keep adding features to your apps, the complexity and number of gestures increases, too. For instance, almost all UIs that use a scroll view will have other gestures like taps and swipes competing with each other. Like in this example, I can scroll the list of Contacts, or freely touch on one of them to preview it.
So, if we had to wait for the final gesture, before we show any feedback, we would have to introduce a delay. And, during that wait, the interface wouldn't feel responsive. For that reason, we should detect all possible gestures from the beginning of the action. And, once we are confident of the intention, cancel all the other gestures.
So, if we go back to our example, I start pressing that contact, but I decide to scroll instead. And, it's at that moment that we cancel the 3D touch action, and transition into the right gesture.
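A sketch of that pattern with UIKit's gesture system; `pressRecognizer` stands in for whatever preview gesture you're using, and its delegate is assumed to be set to this controller:

```swift
import UIKit

final class ContactsViewController: UIViewController,
                                    UIGestureRecognizerDelegate, UIScrollViewDelegate {
    let pressRecognizer = UILongPressGestureRecognizer()

    // Detect all possible gestures from the beginning of the action.
    func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                           shouldRecognizeSimultaneouslyWith otherGestureRecognizer: UIGestureRecognizer) -> Bool {
        return true
    }

    // The intention is clearly a scroll now, so cancel the competing press.
    func scrollViewWillBeginDragging(_ scrollView: UIScrollView) {
        pressRecognizer.isEnabled = false  // disabling cancels an in-flight gesture
        pressRecognizer.isEnabled = true
    }
}
```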
Sometimes, though, it's inevitable to introduce delay.
For instance, every time we use the double-tap in our UIs, all normal taps will be delayed.
The system has to wait after the tap, to see if it's a tap or a double-tap. In this example, since I can double-tap to zoom in and out of a photo, tapping to show the app menu is delayed by about half a second.
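That dependency is exactly what `require(toFail:)` expresses. A sketch, assuming this runs inside a view controller's setup with hypothetical `zoomPhoto` and `toggleMenu` actions:

```swift
import UIKit

// The single tap can't fire until the double tap has failed, which is
// what introduces the roughly half-second delay described above.
let doubleTap = UITapGestureRecognizer(target: self, action: #selector(zoomPhoto))
doubleTap.numberOfTapsRequired = 2

let singleTap = UITapGestureRecognizer(target: self, action: #selector(toggleMenu))
singleTap.require(toFail: doubleTap)

view.addGestureRecognizer(doubleTap)
view.addGestureRecognizer(singleTap)
```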
So, when designing gestures for your applications, you should be aware of these situations, and try to avoid delays whenever possible.
So, to summarize, we've seen how to design some core gestures, like taps and swipes. We've seen that content and touch should move one-to-one, and that is one of the core concepts of iOS.
You should also provide continuous feedback during all interactions, and when having multiple gestures, detect them in parallel from the beginning. And now, I'd like to hand it back to Chan, who will talk about working with fluid interfaces. Thanks, everyone. Nice job.
Alright, I'm back.
So, we just learned about how to approach building interfaces that feel as fluid, as responsive, and as lively as we are.
So, lets talk about some considerations now that we're feeling a little bit more comfortable with this, for working within the medium of fluid interfaces. And that begins with teaching.
So, one downside to a gestural interface is that it's not immediately obvious what the gestures are. So, we have to be clever about how we bring users along with us, in a way that's friendly and inviting.
And so, one way we can do that is with visual cues. So, the world is filled with these things, right? You can learn them once, and you can use them everywhere. They're portable. And so, when you see this, you know how to use it.
So, we've tried to establish similar conventions in iOS. Here's a couple examples.
So, if you have a scrolling list of content, you can clip the content off the bottom there, to indicate that there's more to see. That invites me to try to reveal what's under there. And, if we're dealing with pages of content, you can use a paging indicator to indicate that there are multiple pages of content.
And, for sliding panes of content, you can use an affordance, or a grabber handle like this, to indicate that it's grabbable and slidable.
Another technique you can use is to elevate interactive elements to a separate plane. So, if you have an interactive element, lifting it up to a separate plane can help distinguish it from the content.
So, a good example of this is our on/off switch. We want to indicate that the knob of the switch is grabbable, so we elevate it to another plane. This helps visually separate it, and indicate its draggable nature.
So, floating interactive elements like this above the interface can help indicate that they're grabbable. Next, we can use behavior, you know, to show rather than tell how to use an interface. So, we can reinforce a dynamic behavior with a static animation.
So, an example of this is Safari. In Safari, we have this x icon at the top left to close the tab, and when you hit that button, we slide the tab left to indicate it's deleted.
This hints to me that I can slide it myself to the left. And, accomplish the same action of deleting the tab through a gesture.
So, by keeping the discrete animation and the gesture aligned, we can use one to teach the other.
And, there's another technique we can use, which is explanations. This is when you explicitly tell users how to use a gesture.
So, this is best used sparingly. It works best when you have one gesture that's used repeatedly in a bunch of places, and you explain it once up front, and then you just keep using it, and keep reinforcing it.
Don't use it for a gesture that's used only intermittently. People won't remember that.
Now, I want to talk a little bit about fun and playfulness. Because this is one of the most important aspects of a fluid interface. And, it only happens when you nail everything.
It's a natural consequence of a fluid interface. It's when the interface is responding instantly and satisfyingly. When it's redirectable and forgiving. When the motion and gestures are smooth. And, everything we just talked about. The interface starts to feel in sync with you.
And, something magical happens where you don't feel like you need to learn the interface, you feel like you're discovering the interface.
And so, we think it's great when we allow people to discover the interface through play. And, it doesn't even feel like they're learning it, it feels fun.
So, people love playing with stuff. So, we think it's great to play into our natural fiddle factor.
You know, play is our mind internalizing the feel of an interface. So, it's great when we're building this stuff, when we're prototyping it, just to build it. You know, play with it yourself. See how you fiddle with it. Hand it to others and see how they play with it. And, think about how you can reinforce that with something like an animation, a behavior, or an explanation.
And, it's surprising how far play can go in having an interface teach itself to people. Let's talk a little bit about fluidity as a medium. How we actually go about building this stuff. You know, we think interfaces like this are a unique medium, and it's important that we approach it right.
So, the first thing is to design the interactions to be inseparable from the visuals, not an afterthought. The interaction design should be done in concert with the visuals. You shouldn't be able to even tell when one ends and another begins.
And, it's really important that we build demos of this stuff. The interactive demo we think is really worth a million static designs. Not just to show other people, but to also understand the true nature of the interface yourself.
And, when you prototype this stuff, it's so valuable for you because you get to almost discover the interface as you're building it.
You know, this technique is actually how we built the iPhone X interface.
And, it's really important because it also sets a goal for the implementation. We're so lucky here at Apple that we have this amazing engineering team to build this stuff, because it's really hard to build. And, it's so important to have that kind of magical example that reminds yourself and the engineering teams what it can feel like, you know? It's really important to, kind of, keep reminding yourself of that.
And, when you actually build it, it makes something that's hard to copy, and it gives your app a unique character.
So, you know, multitouch is such an amazing medium we all get to play in.
We get to use technology to interface with people at an ancient, tactile level. It's actually really cool.
You know, all those principles we talked about today, they're at the core of the design of the iPhone X gestural interface, you know, responsive, redirectable, interruptible gestures, dynamic motion, elegant gesture handling.
In a lot of ways, it's kind of the embodiment of what we think a fluid interface could be.
When we align the interface to the way we think and move, something kind of magical happens.
It really stops feeling like a computer, and starts feeling like a seamless extension of us.
You know, as we design the future of interfaces, we think it's really important to try and capture our humanity in the technology like this.
So, that one of the most important tools of humankind is not a burden, but a pleasure and a delight to use.
Thank you very much. [ Applause ]