Optimizing App Launch
A slow app launch is disruptive. Learn about the new app launch tooling and see how to make your app launch quickly. Find out how an app launch unfolds, and how to minimize, prioritize, and optimize work during this critical window. You'll also hear tips and tricks from the engineers who made fast iOS app launches possible.
Hello everyone, my name is Spencer Lewson, and I'm an engineer on the Performance Team here at Apple. Today I'm very excited to tell you about how you can optimize your app's launch. We'll be covering these four main topics today. First, what is launch? What are the different types of launches and how do we break them down into their different subphases? Next, we'll be talking about how to properly measure your app's launch. Out in the field, iOS devices can be in a variety of different states and conditions, and these states and conditions can produce inconsistent launch results. So, it's important to understand these states and how to reduce their impact when you're taking measurements. Once you've done that, you can take a look at how to use Instruments to profile and understand your app to find opportunities to improve it.
And finally, we'll leave you with some tips and some tricks on how to monitor your app's launch, both over time and in the field, to ensure that you consistently deliver a delightful experience to all of your users. So, what is that app launch I was talking about? Well, app launch is a user experience interruption.
What do I mean by this? Let's take a look. Okay, ready, set, go.
Wow, on the iPhone 6S Plus, launch took nearly 2.5 seconds, and this wasn't as delightful as our users expect it to be.
You know, it's really important for launch to be delightful, because it happens a lot. In fact, across all iOS devices, it happens billions of times a day.
So, we did some number crunching, and we figured out that if we saved only one millisecond on each of those launches, we would save an astounding 162 days of launch time. In other words, that's the amount of time it takes to send a rocket to Mars. But launch is also important for a number of other reasons. First of all, your app's launch is your user's first experience with your app, and as such, it should be delightful.
Now it's important to remember that as developers, we tend to gravitate towards newer devices. So, it's important to ensure that the experience that you see in your hand is the same experience that your users have in their hands, on different iOS devices and under different conditions.
Furthermore, launch covers a huge part of your code base, from framework loading, to initialization, to view creation, and more. And as such, if you see that your launch isn't as delightful as your users expect it to be, that may be an indication that other parts of your code base aren't as delightful either. Finally, launch is a very intense time for the phone. It involves a lot of CPU work and a lot of memory work. So, you should try to reduce this, as it impacts system performance and, of course, your user's battery life. So, let's take a look at those launches I talked about before: there's a cold launch, a warm launch, and something that is often referred to as a launch but isn't quite a launch, a resume.
Cold launches occur after a reboot, or when your app has not been launched for a very long time.
In order to launch your app, we need to bring it from disk into memory, start up the system-side services that support your app, and then spawn your process.
As you'd expect, this can take a little time, but fortunately, once it's happened once, you'll experience a warm launch. In this case, your app still needs to be spawned, but we've already brought your app into memory and started up some of those system-side services. So, this will be a little bit faster and a little bit more consistent.
Finally, there's that resume. This occurs when a user reenters your app from either the home screen or the app switcher. As you know, the app is already launched at this point, so it's going to be very fast.
What you need to remember from this is not to confuse resumes with launches when you're taking measurements. So, given this information, wouldn't it be great if launches were as quick and as delightful as resumes? How can we achieve that? Well, we need to hit the goal of rendering our first frame within 400 milliseconds.
That's so that we have pixels displayed to the user during the launch animation, and by the time that launch animation is complete, your app is interactive and responsive. The first step to doing that is understanding what is happening during launch. So, let's launch Maps.
As you know, launch generally starts when the user taps your icon on your home screen. Then over the next 100 or so milliseconds, iOS will do the necessary system-side work in order to initialize your app. That leaves you as developers about 300 milliseconds to create your views, load your content, and generate your first frame.
Now this frame doesn't necessarily need to be fully complete. It can have some placeholders for asynchronously loading data, but at this point, your app should be interactive and responsive.
So, in the case of Maps, all the tiles have not yet loaded, but you can still initiate a search and browse your favorites. Then over the next couple hundred milliseconds, you can display that asynchronously loaded data and generate your final frame for your user. Let's take a closer look at these phases.
These six phases cover everything from system initialization, to app initialization, to view creation and layout, and then, depending on your app, potentially an asynchronous loading phase for your data, the extended phase.
The first half of the system interface phase is dyld. For those of you unfamiliar, the dynamic linker loads your shared libraries and frameworks.
Now in 2017, we introduced dyld 3, which added exciting optimizations to the system.
Well, we're happy to announce that in iOS 13, we're bringing these optimizations to your apps.
That means we are now caching your runtime dependencies for warm launches, which should give them a significant speed improvement.
Now with a new linker, comes some new recommendations.
To take full advantage of these new improvements, we recommend that you avoid linking unused frameworks, as this can have hidden costs, which we'll show you later.
We also recommend that you avoid dynamic library loading, such as dlopen or NSBundle load, because this forfeits the wins you would have gotten by having those dependencies in the cache.
Finally, that means that you should be hard linking all of your dependencies, as it's now even faster than it was before.
The second half of the system interface phase is libSystem init.
This is when we initialize the low-level system components within your application.
Now this is mostly system-side work with a fixed cost. So, you as developers don't need to focus on this section.
Now we have static runtime initialization.
This is when the system initializes your Objective-C and Swift runtimes.
Now in general, your app shouldn't be doing any work here unless you have static initializer methods, which are possibly present in your code, or more likely, a surprise from the frameworks that you link. In general, we don't recommend static initialization. So, let's take a moment to talk about how to reduce its impact.
If you own a framework which uses static initialization, consider exposing API to initialize your stack early. But if you must use static initialization, consider moving code out of the +load method, which is invoked every time during launch, into +initialize, which is lazily invoked the first time a method in your class is used. Next up is UIKit initialization.
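To illustrate the first recommendation, here's a minimal Swift sketch of a framework exposing an explicit entry point instead of relying on a static initializer; FastLogger and its setup work are hypothetical:

```swift
// Hypothetical framework-side API; names are illustrative only.
public enum FastLogger {
    private static var isConfigured = false

    /// Called explicitly by the app (for example, just before the first log
    /// statement) rather than from a static initializer such as +load.
    public static func configure() {
        guard !isConfigured else { return }
        isConfigured = true
        // ... open log files, allocate buffers, register handlers ...
    }
}
```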
This is when the system instantiates your UIApplication and your UIApplicationDelegate. For the most part, this is system-side work, setting up event processing and integration with the system. But you can still affect this phase if you subclass UIApplication or do any work in your UIApplicationDelegate initializers. Now we have application initialization. This is where the good stuff is. This is where you as developers can likely have the biggest impact on your app's launch. For those of you who have not yet adopted the new UIScene APIs or are targeting iOS 12 or earlier, application init works, again, with these delegate callback methods: application(_:willFinishLaunchingWithOptions:) and application(_:didFinishLaunchingWithOptions:). As your app is displayed to the user, a further method, applicationDidBecomeActive(_:), will be invoked.
Now it's important to know that if you have not adopted UIScenes, you should be creating your view controllers in didFinishLaunchingWithOptions.
That's because with UIScene, application init works a little bit differently. You will still get willFinishLaunchingWithOptions and didFinishLaunchingWithOptions, but as your app is displayed to the user, you will get the UISceneDelegate lifecycle callbacks.
Those are, of course, scene(_:willConnectTo:options:), sceneWillEnterForeground(_:), and sceneDidBecomeActive(_:). You should be creating your view controllers in scene(_:willConnectTo:options:).
It's important to note that you should be creating your view controllers only in scene(_:willConnectTo:options:), not in both that method and didFinishLaunchingWithOptions. That's a common pitfall, which, of course, results in performance losses and, likely, unpredictable bugs in your code base. Regardless of whether or not you've adopted the new UIScene APIs, our advice for this phase is generally the same. You should be deferring any work that's not necessary to commit your first frame, by either pushing it to background queues or just doing it later entirely.
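As a rough sketch of where that view controller creation belongs under UIScene (using a plain UIViewController as a stand-in for your real root view controller):

```swift
import UIKit

class SceneDelegate: UIResponder, UIWindowSceneDelegate {
    var window: UIWindow?

    func scene(_ scene: UIScene,
               willConnectTo session: UISceneSession,
               options connectionOptions: UIScene.ConnectionOptions) {
        guard let windowScene = scene as? UIWindowScene else { return }

        // Create the window and root view controller here, not in
        // application(_:didFinishLaunchingWithOptions:).
        let window = UIWindow(windowScene: windowScene)
        window.rootViewController = UIViewController() // your real root view controller goes here
        self.window = window
        window.makeKeyAndVisible()
    }
}
```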
If you did adopt UIScenes, you can do one more thing, and that's make sure that you're sharing your resources between your Scenes. This is, of course, to reduce the overhead of doing any work unnecessarily multiple times. To learn more about UIScenes, please take a look at these two talks from earlier this week.
Next is the first frame render phase. Now, this is relatively straightforward. This is when we create your views, perform layout, and then draw them.
We then take that information, and we commit and render your first frame into nice, shiny pixels.
You can affect this phase by reducing the number of views in your hierarchy. You can do that by flattening your views so you use fewer of them, or by lazily loading views that are not shown during launch. You should also take a look at your Auto Layout and see if you can reduce the number of constraints you're using.
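For example, a small sketch of lazily creating a view that isn't part of the first frame (the view controller and its views are hypothetical):

```swift
import UIKit

final class HomeViewController: UIViewController {
    // Part of the first frame, so it's created eagerly.
    private let listView = UITableView()

    // Not visible during launch, so it's built lazily the first time it's accessed.
    private lazy var onboardingBanner: UIView = {
        let banner = UIView()
        banner.backgroundColor = .secondarySystemBackground
        view.addSubview(banner)
        return banner
    }()

    override func viewDidLoad() {
        super.viewDidLoad()
        listView.frame = view.bounds
        view.addSubview(listView)
    }
}
```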
Finally, we have the extended phase. This is the app-specific period from your first commit until when you show your final frame to your user. This is when you load that asynchronous data we talked about. Now not every app has this phase.
But if you do have this phase, your app should already be interactive and responsive. Our advice on how to approach it is general: understand what is happening, and you can do that by leveraging the os_signpost APIs to mark out and measure the work that occurs between your first frame and your final frame.
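For instance, a minimal sketch of wrapping the extended-phase work in os_signpost calls (the subsystem name and the placeholder work are assumptions):

```swift
import Foundation
import os.signpost

// Hypothetical subsystem; use your own bundle identifier.
let launchLog = OSLog(subsystem: "com.example.StarSearcher", category: "Launch")

func loadExtendedContent() {
    let signpostID = OSSignpostID(log: launchLog)
    // Begins when the first frame has been committed.
    os_signpost(.begin, log: launchLog, name: "ExtendedPhase", signpostID: signpostID)

    DispatchQueue.global(qos: .utility).async {
        // ... fetch the asynchronously loaded data here ...
        DispatchQueue.main.async {
            // Ends when the final frame is ready for the user.
            os_signpost(.end, log: launchLog, name: "ExtendedPhase", signpostID: signpostID)
        }
    }
}
```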
Now that we talked about what launch is, let's talk about how to get usable measurements.
At any given time, an iOS device is under a variety of different states and conditions, and this can introduce substantial variance into launch. So, when we're analyzing and comparing our launch results, it's critical to ensure that we're making apples-to-apples comparisons, because how do you know if you're making any progress if, before you make any changes, your launch results are completely unpredictable? The first step to making them predictable is removing those sources of variance, such as networking interference and interference from background processes.
Now we realize that this sounds counterintuitive, as this may result in a launch that's not entirely representative of regular usage, but we wanted to let you know that that's okay.
It's more important to have consistent results with which you can evaluate progress. At Apple, we've been using this technique to successfully detect regressions during development and drive down launch times. We then validate these performance improvements by using telemetry collected from the field. Fortunately, we have some tips on setting up that clean and consistent environment.
First, reboot your device. This will clear out any unnecessary state, and then let it settle down over the next few minutes to clear up any boot time work.
You could also reduce your dependence on the network by either turning on airplane mode or mocking out your network dependencies in code. Networking can introduce a fair amount of variance. Next is iCloud.
iCloud is a great feature which works in the background to deliver a seamless experience to our users, but that work in the background can interfere with app launch. So, during your measurements, use an unchanging iCloud account with unchanging data, or log out of iCloud entirely. Next, be sure to use the release build of your application when you're making measurements. This is, of course, to reduce the overhead of unnecessary debugging code during your measurements and to take advantage of the compile-time optimizations.
Finally, you should be measuring with warm launches, which as mentioned before, are more consistent, because some of your app may already be in memory, and some of those system-side services may already be running. Now we can set up some data to test with.
It's important to create a mock data set which is consistent, and you might need a couple data sets for different types of users, such as users with small amounts of data and users with large amounts of data, though, in the ideal situation, your app should be able to scale to any amount of data.
That's why you should load only the data that is necessary to show your first frame.
Now we're ready to pick out some devices.
You should pick out a variety of devices that are important to your users and then stick to them for consistency.
Be sure to include your oldest devices for your oldest-supported releases. This is because performance characteristics look different between older devices and newer devices, which have different amounts of RAM and CPU cores.
This will ensure that your launch is delightful for all of your users on all of their devices.
Now we're ready to take some measurements.
We can leverage the new XCTest support for app launch performance in Xcode 11. With just a few lines of code, Xcode will launch your app repeatedly and then provide statistical results about how it performs.
We'll talk about this more later.
So, now that we've talked about what launch is and how to measure it, let's talk a little bit about how to improve it.
When you're reviewing your app's launch, both in code and in Instruments, you should keep these three tips and tricks in mind.
That is to first minimize your work, then prioritize your work, and finally, optimize your work.
When minimizing work, you should be deferring anything unrelated to generating the first frame. That means deferring things like undisplayed views or pre-warming features that are not yet used.
You should also avoid blocking the main thread, either with network I/O, file I/O, or more, as this will affect launch. Move it to a background thread. Finally, you should take care to reduce your memory usage. Allocating and manipulating memory can take time. Next, prioritize your work.
This is when you should make sure that work is scheduled at the right quality of service.
Now in iOS 13, we've made some exciting optimizations to the scheduler to make your apps launch even faster. But that means it's more critical than ever to preserve priority as you propagate work across threads. You should take a look at Modernizing Grand Central Dispatch Usage from WWDC 2017, which goes into depth about how to handle concurrency correctly.
Finally, we have optimizing work.
Anything that's remaining after you've minimized it and prioritized it should be optimized. That is to say, it should be simplified and limited. For example, limit the amount of data that you fetch to only what you need during launch, or lazily compute any variables and results that you need.
When you're doing this, take a look at your methods and algorithms and see if you can optimize them, as you could get significant improvements by calculating a result differently or by using a different data structure. And finally, you should be caching your resources and your computations. This is, of course, to reduce the CPU and memory overhead of doing work multiple times unnecessarily. So, I'd love to hand the stage over to Dan, who is going to give you a great demo on how to use the new App Launch template in Instruments to understand and improve your app's launch. Thank you, Spencer. Hi, everyone, my name is Dan Sawada, and I'm also one of the performance engineers here at Apple. Today I will be going over a typical workflow of understanding your app's launch and looking for opportunities to minimize, prioritize, and optimize the work, so that you can actually deliver a delightful first user experience.
So, let's get started. The app that I'm going to be demonstrating today is called Star Searcher. It's an example app that we wrote specifically for this session. As you can see, it's a very typical table view that lists all of my imaginary stars. If you click on a cell, or a star, it shows you a little description blurb, in addition to a picture.
However, we have one problem, let's go ahead and launch it.
Ready, go.
So, that took an astounding 2.5 seconds to launch, not sure if I could call that delightful. So, let's use Xcode and Instruments to see if there's anything we can do about this.
So, here we have our Xcode project for Star Searcher. Now one important thing that we should do before any performance-related analysis is select the profile scheme in Xcode. This ensures that Xcode recompiles your app in release mode, so that you can take advantage of the compile-time optimizations.
Once Xcode recompiles your app, it will install it on your device and launch Instruments. Now we are happy to announce that as of iOS 13 and Xcode 11, we have the App Launch template, which we can use specifically for triage purposes like this, figuring out what's wrong with app launch. So, let's go ahead and double-click on App Launch. Now the first thing we want to do here is hit the record button.
At this point, Instruments will automatically launch Star Searcher, our app, gather all of the metrics and telemetry data, analyze them, and create visualizations for all of the app launch phases. So, let's take a look. The first few phases, marked in purple, are the phases that occur before your main function is invoked within your app.
On to the green phases: these are the phases that occur once your main function is invoked, as your app finishes its launch and draws its first frame of UI. Let's go ahead and expand the lanes. As we expand the lanes, you can see the detailed states of all of the threads that exist within your app's process. Obviously, the most important one would be the main thread, also known as the UI thread, which is responsible for handling user input and drawing your UI. Let's go ahead and pin down the lanes that are relevant for our purpose, starting with the app launch phases, our main thread, and there's one more worker thread that's doing a substantial amount of work during launch. So, let's go ahead and pin this down, too.
Speaking of thread states -- oops.
Like that.
Speaking of thread states, gray means it's blocked, meaning that the thread isn't doing any work.
Red means it's runnable, meaning that there's work scheduled to be done, but lacking CPU resources. Orange means it's preempted, meaning that it was doing work but got interrupted in favor of other competing work that has a higher priority. And last but not least, blue means it's running, meaning that it's actually doing work on the CPU core. So, with that information, let's take a look phase by phase starting with the system interface initialization.
As we triple-click on a phase, we can highlight the phase and get detailed information toward the bottom half of the screen. To your left, you can see the detailed stack trace of all the work that's being done during this time period. To your right, you can see an aggregated stack trace, which lists all of the symbols ordered by the number of CPU samples. Now notice that this initial phase only took 6 milliseconds as it set up its system interfaces. This is primarily due to the benefits of dyld3 being introduced to third-party apps, in addition to other system-layer enhancements. So, as developers, we can take advantage of all of those enhancements without writing a single line of code. Let's move on, but before we do so, there's one other thing I should point out here.
Notice that while this phase only spent 6 milliseconds on the CPU clock for Star Searcher, it spent 149 milliseconds on the wall clock. This discrepancy comes from the overhead of the profiling mechanism itself, which does give us a lot of information and insight, but has a cost of its own. So, this is why it's very important to distinguish profiling from measurement, which I will explain more later on. On to the next phase, which is static runtime initialization. Now notice this phase took an astonishing 375 milliseconds. Now that's a little bit too long.
So, let's take a look. Looking at the detailed stack trace, we see a highlighted symbol with a blue icon marking 370 milliseconds' worth of work on the CPU. Now all of these highlighted symbols indicate code that's declared within our sources. Let's click on it.
Now by expanding the stack trace, it points us to SLSuperfastLogger. Now, if a library is calling itself superfast, that implies some fishiness, but let's take a look. So, SLSuperfastLogger is an external framework that we've imported specifically into Star Searcher to get the benefits of powerful and convenient logging. However, the only place we invoke this framework is within the table view controller. Specifically, within the didSelectRowAt callback.
Now this callback is completely out of the launch path, because it's only invoked when the user taps on a cell. So, why is it doing over 300 milliseconds' worth of work during early launch and even before our main function is invoked? Well, let's investigate.
By searching for the symbol, it points us to a +load method declared within the SLSuperfastLogger class. Now, this is a static initializer, meaning that all of this work is done very early in launch, before our main function is invoked, simply because we link against it. Now, the takeaway here is that it's very important to understand the impact of your dependencies and the frameworks that you use. External libraries and frameworks may be convenient and may be powerful, but they may come with a heavy cost. So, if those costs justify the benefits, great. But for our case, 300 milliseconds during launch is a little bit too much for what it's worth. So, let's go ahead and pursue alternatives.
In our case, let's use os_log, which is a very lightweight and efficient logging mechanism that comes right with iOS as well as other Apple platforms. Now once we remove the dependency, there's one additional thing that we absolutely need to remember to do, which is to remove the actual linkage. Because the cost here is in a static initializer, we need to make sure to remove the linkage in order for it not to impact us. So, with that, let's go back to our trace. The next phase is UIKit initialization, which took 28 milliseconds on the wall clock. Now this is pretty much a fixed cost for all applications. So, unless you subclass UIApplication or do custom initialization work in your UIApplicationDelegate, it's pretty much something that we can disregard for now. So, let's move on.
The next chunk of work is your application's initialization, which is pretty much what you control. Now notice there is a big amount of work being done within the didFinishLaunchingWithOptions callback, which took 791 milliseconds on the wall clock. Now that's very long. Let's take a look. So, this phase immediately points us to heavy amounts of work in the StarDataProvider class.
It says, "loading stars." Okay, now, notice that there's a huge blockage in the main thread, which essentially is a delay in our launch. Our main thread was blocked for 754 milliseconds. Now that's not nice.
Let's take a look.
So, in order to inspect the detailed states, we should look at the event list.
By looking at the event list, we notice that it was blocked for 754 milliseconds, and afterwards, it was unblocked, or made runnable, by thread 0x12253. Now this corresponds to this worker thread that was doing a lot of work.
So, there's some relationship here.
Now going back to the main thread, notice that it's scheduled to do work at priority 47. Forty-seven is equivalent to the user interactive QoS. Now look at all this red, meaning there's a lot of work to do, but it's lacking CPU resources. Well, let's figure out why.
As we click on the worker thread, we notice that it's scheduled to do a lot of work at priority 4. This is equivalent to the background QoS. What we're actually seeing here is a symptom known as priority inversion, where a given thread is being blocked by a separate thread that has a lower QoS, or priority, than itself. Obviously, this isn't ideal, because it stalls our launch more than it should. So, let's go ahead and try to fix that. Looking back at the StarDataProvider, which is at the core of this issue: it's a very simple class that's responsible for fetching data for our stars from a SQLite database. It has a dedicated dispatch queue with a background QoS, and note that this is to ensure that data fetching doesn't compete with the UI. And there are two APIs exposed: one for loading data asynchronously using Grand Central Dispatch's async primitive, and another synchronous API that loads the data in a synchronous fashion.
Now looking at the actual call sites within didFinishLaunchingWithOptions, we are leveraging the asynchronous API, but also leveraging a dispatch semaphore to ensure that we wait for all of the data to be fetched before we proceed on to drawing the actual first frame of our table view. Now if we're going to be doing this, we should use the correct concurrency primitive, which is the sync primitive in GCD. Using the correct concurrency primitive, Grand Central Dispatch will temporarily propagate the priority of the main thread to the worker thread and boost it up to user interactive so that it matches. So, at this point, I think we have the potential to resolve the priority inversion, but there's one more issue that I notice here.
The loadStarDataSync API accepts a range of rows to load the data for. In our case, we're loading from row 0 to the very last row, which is essentially everything. Now when you think about it, the first frame can only fit the limited number of cells that fit on the screen. In the case of Star Searcher, perhaps around 10 to 15, depending on the device. So, let's go ahead and optimize that, and instead of loading everything, let's just load the first 20 rows, just enough to draw the first frame of the table view, in a synchronous fashion. Afterwards, we should load all of the rest lazily in the background and only update the table view when that finishes after launch.
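Putting the two fixes together, here's a hedged sketch with hypothetical names loosely modeled on the demo's StarDataProvider: the synchronous path uses GCD's sync primitive so the caller's priority is propagated, and only enough rows for the first frame are loaded before the first commit.

```swift
import UIKit

// Hypothetical stand-in for the demo's data provider.
final class StarDataProvider {
    private let queue = DispatchQueue(label: "com.example.stars", qos: .background)
    let totalRowCount = 10_000

    // queue.sync lets GCD propagate the caller's priority to the worker queue,
    // avoiding the semaphore-based priority inversion.
    func loadStarDataSync(rows: Range<Int>) -> [String] {
        return queue.sync { rows.map { "Star \($0)" } } // stand-in for the SQLite fetch
    }

    // Asynchronous API for everything that isn't needed for the first frame.
    func loadStarDataAsync(rows: Range<Int>, completion: @escaping ([String]) -> Void) {
        queue.async { completion(rows.map { "Star \($0)" }) }
    }
}

final class StarTableViewController: UITableViewController {
    private let provider = StarDataProvider()
    private var stars: [String] = []

    override func viewDidLoad() {
        super.viewDidLoad()

        // Load just enough rows synchronously to draw the first frame.
        stars = provider.loadStarDataSync(rows: 0..<20)

        // Load the rest lazily and refresh the table view after launch.
        provider.loadStarDataAsync(rows: 20..<provider.totalRowCount) { [weak self] rest in
            DispatchQueue.main.async {
                self?.stars.append(contentsOf: rest)
                self?.tableView.reloadData()
            }
        }
    }

    override func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        return stars.count
    }

    override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCell(withIdentifier: "StarCell")
            ?? UITableViewCell(style: .default, reuseIdentifier: "StarCell")
        cell.textLabel?.text = stars[indexPath.row]
        return cell
    }
}
```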
Let's move on. Back to the trace, last but not least. The last phase is our first frame rendering.
Notice that this phase took 951 milliseconds, which is very long, considering that this is only responsible for doing the layout work and the rendering of our first frame.
Now taking a deeper dive, it points us to the StarTableViewController, and looking at the detailed stack trace, we see a lot of work in the cellForRowAt callback, which is responsible for doing the layout work of the cells. Let's go ahead and expand that. As we expand the stack trace, it points us to a lot of initialization work for the StarDetailViewController, which took 882 milliseconds on the CPU. So, at this point, we've identified that this is pretty much the bottleneck here.
Let's take a look at our code. Now looking at the table view controller, within the cellForRowAt callback, we create the cells using our custom cell, and at the same time, we put in a speculative optimization, which is to pre-warm and cache the detail view controllers for the detail view as we do the layout work. This is with the hope of streamlining the transition from the table view to a detail view. But as we saw in the trace, this comes at a high cost.
Now stepping back a little bit, when you think about it, the detail view doesn't really make sense for our first frame. It only makes sense when the user taps on a cell. So, let's go ahead and defer that work.
Where should we defer it to? Perhaps the didSelectRowAt callback, which is invoked when the user taps on a cell.
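A sketch of that deferral, with hypothetical class names standing in for the demo's controllers:

```swift
import UIKit

// Hypothetical stand-in for the demo's detail view controller.
final class StarDetailViewController: UIViewController {
    init(starName: String) {
        super.init(nibName: nil, bundle: nil)
        title = starName
    }
    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }
}

final class StarListViewController: UITableViewController {
    var stars: [String] = []

    // No pre-warming in cellForRowAt; the detail controller is created only
    // when the user actually taps a cell.
    override func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) {
        let detail = StarDetailViewController(starName: stars[indexPath.row])
        navigationController?.pushViewController(detail, animated: true)
    }
}
```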
So, at this point, we've made several enhancements, or optimizations, to Star Searcher. So, let's go ahead and re-profile it.
Now one thing to note here is that as you make incremental changes, you should consistently remeasure and re-profile as you make progress. That way, you can actually understand the exact impact of your incremental change set. But for the sake of this demo, we've aggregated all the changes into one for the sake of time, and boom. There's a little UI glitch, but we can immediately see that our launch is under 500 milliseconds. Now, as I said earlier, the profiling mechanism does come with a cost of its own. So, to get a better understanding of what our users would experience, let's go ahead and leverage the new XCTest APIs to measure our launch performance within our tests.
With just a few lines of code, we can integrate launch performance tests, or any performance tests, with XCTest. Let's go ahead and kick this off.
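The test looks roughly like this, a minimal sketch using the Xcode 11 XCTest metrics API (the test class name is arbitrary):

```swift
import XCTest

class StarSearcherLaunchTests: XCTestCase {
    func testLaunchPerformance() {
        // Launches the app several times and reports statistics for the
        // time from process creation to the first frame.
        measure(metrics: [XCTOSSignpostMetric.applicationLaunch]) {
            XCUIApplication().launch()
        }
    }
}
```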
Now at this point, XCTest will do one throwaway launch attempt, which cancels out the variance that would come from a cold launch. Afterwards, it will do the specified number of iterations, or by default five iterations, of launches and measure the time each took. Afterwards, it will produce nice statistics from that data. It's going to take a few minutes for the test to complete, and now we've taken the launch of Star Searcher from 2.5 seconds to just over 300 milliseconds. And to wrap up the demo, I'd like to show you what this actually looks like in the UI. So, let's make sure we kill Star Searcher. That was quick.
Thank you. Back to you, Spencer.
Thanks, Dan, for that awesome demo on how to use the App Launch template in Instruments to improve our app launch experience. So, we realize that in your code bases, you're not going to find just a couple of places that you can fix with just a few lines and get such substantial improvements. It's more likely that you're going to have to find a bunch of 5-to-10-millisecond wins and then stack all those together.
We want to let you know that we've got your back.
We've been making a ton of iOS optimizations to improve your app's launch and help you reach your goal with very little to no adoption from your side. I want to call out a few in particular. As mentioned before, dyld3 brings caching of your runtime dependencies to your apps, which, as you saw in the demo, provided a huge improvement. The scheduler has also been optimized to help prioritize the work that happens during launch. We also put Auto Layout and Objective-C under the microscope and made a bunch of optimizations there.
And then finally, we have exciting changes to app packaging coming later this year. We think that altogether these changes should result in a huge improvement for your apps with very little to no adoption. So, let's wrap things up with some tips and tricks on how to make sure your app stays delightful once you've done all this work.
First of all, don't let performance be an afterthought. You should start working on it and thinking about it at the beginning of every bug fix, at the beginning of every refactor, and at the beginning of every feature.
This is because it's incredibly easy to introduce a regression, especially a little one, like 2 milliseconds. The problem is these little ones add up to a big problem, and if you don't address them immediately, it becomes very hard to find them all. In order to detect those regressions, you should be plotting your app's launch time over time and running tests regularly. This will ensure that you're meeting your target and that you immediately know if you've regressed from that target. You should also take a look at the new Xcode Organizer, which lets you know how your app performs in the field.
In iOS 13, for users that have opted in, power and performance metrics will be gathered about your app. They will then be aggregated over 24-hour periods and sent back to your organizer where you can view them in the form of histograms by software version and device version.
However, if you desire a little bit more control over that data, you can adopt MetricKit.
MetricKit allows you to specify custom power and performance metrics.
Now like the Organizer, this data will be gathered and aggregated over 24-hour periods and then delivered back to you through a delegate method in your app. From there, you're free to handle the data as you see fit. To learn more about this, we recommend you check out Improving Battery Life and Performance from WWDC 2019.
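As a sketch of that subscriber pattern (the class and the payload handling here are illustrative, not the session's code):

```swift
import MetricKit

final class MetricsSubscriber: NSObject, MXMetricManagerSubscriber {
    func start() {
        // Register to receive the daily aggregated metric payloads.
        MXMetricManager.shared.add(self)
    }

    // Delivered roughly once per 24 hours with the previous day's data.
    func didReceive(_ payloads: [MXMetricPayload]) {
        for payload in payloads {
            // Handle the data as you see fit; here we just inspect the
            // time-to-first-draw launch histogram, if one is present.
            if let launchMetrics = payload.applicationLaunchMetrics {
                print(launchMetrics.histogrammedTimeToFirstDraw)
            }
        }
    }
}
```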
So, in summary, we'd love for you to start understanding your app's launch today with the new App Launch template in Instruments. See if you can find opportunities to minimize, prioritize, and optimize your work.
Next, although well intended, not all optimizations work out, such as the pre-warming DetailView controllers that Dan addressed in his demo. So, be sure to measure, not estimate, performance whenever you're making changes. Again, it's very easy to introduce regressions unintentionally.
Finally, you should be tracking your performance in all phases of development. This means utilizing the new XCTest app launch measurements on a variety of devices and possibly integrating this with continuous integration.
This will ensure that you're consistently delivering a delightful app launch to all of your users on all of their devices.
For more information, please view the talks that we referenced today, and have a great rest of your Friday afternoon. Thank you. [ Applause ]