Performance


Improve your app's performance.

Posts under Performance tag

54 Posts
Post not yet marked as solved
2 Replies
485 Views
In our AR app and App Clip built with SceneKit, we see very significant framerate drops when we make our 3D content appear at different steps of the experience. For now, all of our 3D objects live in our main scene. Those that are supposed to appear at some point in the experience start with their opacity set to 0.01 and then fade in with an SCNAction (we set the opacity to 0.01 rather than 0 at the start to make sure these objects are rendered from the beginning of the experience). However, if all the objects have their opacity set to 1 from the start, we see no fps drop at all. It is worth noting that the fps drops only happen the first time the app is opened; if I close and re-open it, the experience unfolds without any freeze. What would be the best way to load (or pre-load) these 3D elements to avoid these freezes? We have tested on an iPhone X (iOS 15.2.1), an iPhone 12 Pro (iOS 14), and an iPad Pro 2020 (iPadOS 14.8.1).
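A commonly suggested approach for this situation (a hedged sketch, not from the post; `sceneView` and `deferredNodes` are hypothetical names) is SCNSceneRenderer's `prepare(_:completionHandler:)`, which compiles shaders and uploads textures for the given objects before they first become visible, so the first fade-in does not trigger that work on the render thread:

```swift
import SceneKit

/// Sketch: preload the nodes that will fade in later, so their shaders and
/// textures are ready before the first frame that shows them.
func preloadDeferredContent(in sceneView: SCNView, deferredNodes: [SCNNode]) {
    sceneView.prepare(deferredNodes) { success in
        // When `success` is true, the objects can be rendered without a
        // first-appearance hitch; opacity can stay at 0 until needed.
        print("Preload finished: \(success)")
    }
}
```

With this in place, the opacity-0.01 workaround should be unnecessary. It may also explain why only the first run of the app freezes: later runs can benefit from cached shader compilation.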
Posted by
Post not yet marked as solved
1 Reply
476 Views
Below, the sampleBufferProcessor closure is where the Vision body pose detection occurs.

/// Transfers the sample data from the AVAssetReaderOutput to the AVAssetWriterInput,
/// processing via a CMSampleBufferProcessor.
///
/// - Parameters:
///   - readerOutput: The source sample data.
///   - writerInput: The destination for the sample data.
///   - queue: The DispatchQueue.
///   - sampleBufferProcessor: An optional closure that processes each sample.
///   - completionHandler: The completion handler to run when the transfer finishes.
/// - Tag: transferSamplesAsynchronously
private func transferSamplesAsynchronously(from readerOutput: AVAssetReaderOutput,
                                           to writerInput: AVAssetWriterInput,
                                           onQueue queue: DispatchQueue,
                                           sampleBufferProcessor: SampleBufferProcessor?,
                                           completionHandler: @escaping () -> Void) {
    /*
     The writerInput continuously invokes this closure until finished or cancelled.
     It throws an NSInternalInconsistencyException if called more than once for
     the same writer.
     */
    writerInput.requestMediaDataWhenReady(on: queue) {
        var isDone = false
        /*
         While the writerInput accepts more data, process the sampleBuffer
         and then transfer the processed sample to the writerInput.
         */
        while writerInput.isReadyForMoreMediaData {
            if self.isCancelled {
                isDone = true
                break
            }
            // Get the next sample from the asset reader output.
            guard let sampleBuffer = readerOutput.copyNextSampleBuffer() else {
                // The asset reader output has no more samples to vend.
                isDone = true
                break
            }
            // Process the sample, if requested.
            do {
                try sampleBufferProcessor?(sampleBuffer)
            } catch {
                // The `readingAndWritingDidFinish()` function picks up this error.
                self.sampleTransferError = error
                isDone = true
            }
            // Append the sample to the asset writer input.
            guard writerInput.append(sampleBuffer) else {
                /*
                 The writer could not append the sample buffer. The
                 `readingAndWritingDidFinish()` function handles any error
                 information from the asset writer.
                 */
                isDone = true
                break
            }
        }
        if isDone {
            /*
             Calling `markAsFinished()` on the asset writer input does the following:
             1. Unblocks any other inputs needing more samples.
             2. Cancels further invocations of this "request media data" callback block.
             */
            writerInput.markAsFinished()
            // Tell the caller the reader output and writer input finished
            // transferring samples.
            completionHandler()
        }
    }
}

The processor closure runs body pose detection on every sample buffer so that later, in the VNDetectHumanBodyPoseRequest completion handler, the VNHumanBodyPoseObservation results are fed into a custom Core ML action classifier.

private func videoProcessorForActivityClassification() -> SampleBufferProcessor {
    let videoProcessor: SampleBufferProcessor = { sampleBuffer in
        do {
            let requestHandler = VNImageRequestHandler(cmSampleBuffer: sampleBuffer)
            try requestHandler.perform([self.detectHumanBodyPoseRequest])
        } catch {
            print("Unable to perform the request: \(error.localizedDescription).")
        }
    }
    return videoProcessor
}

How could I improve the performance of this pipeline? After testing with an hour-long 4K video at 60 fps, it took several hours to process when running as a Mac Catalyst app on an M1 Max.
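One way to reduce the cost of a pipeline like this, assuming the classifier can tolerate a lower pose-sampling rate (an assumption, not something the post states), is to run the Vision request only on every Nth buffer. A minimal sketch; the class name and the stride value are hypothetical:

```swift
import Vision
import CoreMedia

// Sketch: run body-pose detection only on every `stride`-th sample buffer,
// passing the rest through untouched. An action classifier rarely needs all
// 60 fps of pose observations.
final class StridedPoseProcessor {
    private var frameIndex = 0
    private let stride = 4  // assumed value; tune against classifier accuracy
    let detectHumanBodyPoseRequest = VNDetectHumanBodyPoseRequest()

    func process(_ sampleBuffer: CMSampleBuffer) throws {
        frameIndex += 1
        guard frameIndex % stride == 0 else { return }  // skip most frames
        let requestHandler = VNImageRequestHandler(cmSampleBuffer: sampleBuffer)
        try requestHandler.perform([detectHumanBodyPoseRequest])
    }
}
```

At 60 fps a stride of 4 still yields 15 pose observations per second, which is in the range Create ML action classifiers are typically trained at.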
Posted by
Post not yet marked as solved
2 Replies
463 Views
I just got an app feature working where the user imports a video file, each frame is fed to a custom action classifier, and only the frames where a certain action is classified are exported. However, I'm finding that testing with a one-hour 4K video at 60 fps takes an unreasonably long time: it has been processing for 7 hours now as a Mac Catalyst app on a MacBook Pro with an M1 Max. Are there any techniques or general guidance that would help improve performance? As much as possible I'd like to preserve the input video quality, especially the frame rate. One hour is an expected length, as the video is of a tennis session (anywhere from 10 minutes to a couple of hours). I made the body-pose action classifier with Create ML.
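Since the classifier works on poses rather than full-resolution pixels, one option is to ask the asset reader to decode a downscaled stream for the Vision analysis while the export path still uses the original samples. A sketch under stated assumptions: `videoTrack` is hypothetical, the 960x540 size is an assumption, and whether the decoder honors the size keys by scaling is worth verifying on the target OS:

```swift
import AVFoundation

// Sketch: request a reduced-size BGRA decode for the analysis-only reader
// output. Decoding and converting 4K frames tends to dominate a pipeline
// like this, so analysing a quarter-size stream can cut wall-clock time.
func makeAnalysisOutput(for videoTrack: AVAssetTrack) -> AVAssetReaderTrackOutput {
    let settings: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
        kCVPixelBufferWidthKey as String: 960,   // assumed analysis size
        kCVPixelBufferHeightKey as String: 540,
    ]
    return AVAssetReaderTrackOutput(track: videoTrack, outputSettings: settings)
}
```

The trade-off is reading the track twice (once for analysis, once for export), but each pass is much cheaper than running Vision on full 4K frames.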
Posted by
Post not yet marked as solved
2 Replies
396 Views
In our app we see a 4-second loading time at launch. Sometimes it is up to 8 seconds, but the average is 4 seconds. What could be the reason? There are no server calls in the AppDelegate; a breakpoint inside the AppDelegate's didFinishLaunchingWithOptions takes 4 seconds to be hit. I do make some API calls, but they are fired after that breakpoint, and their responses come back quickly. In Instruments I see some blocked time. Is that normal? Is something wrong in the log?
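To find out where those seconds go before didFinishLaunchingWithOptions is even reached, a useful first step is the App Launch template in Instruments; signposts can then narrow down specific phases. A minimal sketch (the subsystem string and names are hypothetical):

```swift
import os.signpost

let launchLog = OSLog(subsystem: "com.example.myapp", category: "launch")

// Sketch: bracket suspected launch work with signpost intervals, then inspect
// them in Instruments alongside the App Launch template's phases.
func runMeasuredLaunchStep() {
    os_signpost(.begin, log: launchLog, name: "launch-setup")
    // ... the work performed during app start-up ...
    os_signpost(.end, log: launchLog, name: "launch-setup")
}
```

Time spent before the breakpoint is hit is usually pre-main work (dynamic library loading, static initializers) or a blocked main thread, both of which the App Launch template breaks out explicitly.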
Posted by
Post not yet marked as solved
4 Replies
3.1k Views
Hello, I'm investigating a crash where keyWindow is nil when I don't expect it to be. I would like to understand when this can happen. I could make a fix by guarding against nil values, but that would leave the app in an invalid state.
Context: We want to return quickly from application(didFinishLaunchingWithOptions:), so:
1. We set a view controller as a splash-screen rootViewController and mark the window with makeKeyAndVisible().
2. We queue initializations asynchronously on the main queue. Basically, while the app is initializing, we're displaying a splash-screen view controller.
3. When the app is done initializing, it needs to present the actual UI; a last asynchronous task on the main queue does this. We get keyWindow from UIApplication to set the new view controller with the actual UI. That's where we assume it shouldn't be nil and force-unwrap it, but alas, in some instances it is nil.
Misc: This crash only happens when the app is built with Xcode 13.x and runs on iOS 15. I cannot reproduce the bug, and it occurs fairly rarely (hundreds out of 100,000 sessions). I also attached a sample crash.
Questions: Given that I made the window key and visible in step 1, what could cause the window to stop being key? What would be the correct way to get the window to set the new root view controller? I don't really want to guard against a nil value, because then I wouldn't be able to set my new rootViewController and the app would be stuck on the launch screen.
2021-12-01_18-16-29.7629_-0700-9b5855582b13262f154acae64dd3140ad49c84f3.crash
Thanks a lot! Bruno
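For the second question, a more defensive lookup than force-unwrapping UIApplication's keyWindow is to walk the connected scenes (iOS 13+) and fall back to any window the app owns; alternatively, keeping a strong reference to the window created in step 1 avoids the lookup entirely. A sketch, with a hypothetical helper name:

```swift
import UIKit

// Sketch: find the key window via the scene API rather than the deprecated
// UIApplication.keyWindow, falling back to the first window the app owns.
func appKeyWindow() -> UIWindow? {
    let windows = UIApplication.shared.connectedScenes
        .compactMap { $0 as? UIWindowScene }
        .flatMap { $0.windows }
    return windows.first(where: { $0.isKeyWindow }) ?? windows.first
}
```

The fallback to `windows.first` means the root-view-controller swap can still proceed even if the window has transiently lost key status (for example, around an alert or system UI presentation).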
Posted by
Post not yet marked as solved
0 Replies
650 Views
When running on an iPhone 13 Pro Max (iOS 15.1) and tapping the screen with several fingers in succession, the app may drop to 30 fps at the moment of the tap.

self.displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(repaintDisplayLink)];
[self.displayLink setPreferredFramesPerSecond:60];
// or
// [self.displayLink setPreferredFrameRateRange:CAFrameRateRangeMake(60, 60, 60)];
[self.displayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];

Unity and cocos2d-x use almost the same code, and the same phenomenon occurs in both. It is not reproducible on an iPhone XS Max (iOS 15.1), iPhone SE 2, iPhone 7, etc. Also, if I set 120 fps and then tap continuously, the 120 fps turns into 60 fps in the same way.
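Since the affected devices are the ProMotion models, this may be related to the system's frame-rate management on iOS 15 rather than to the CADisplayLink configuration itself. Apple's iOS 15 guidance for apps that drive their own render loop on ProMotion displays is to declare the following Info.plist key to opt out of the default frame-duration cap. This is a hedged suggestion and is not certain to address the tap-triggered drop:

```xml
<key>CADisableMinimumFrameDurationOnPhone</key>
<true/>
```

Combining this key with an explicit preferredFrameRateRange is the documented path for requesting sustained high frame rates on the iPhone 13 Pro line.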
Posted by
Post not yet marked as solved
0 Replies
403 Views
I have a multi-render-backend application on macOS, so I can compare offscreen-render performance between Metal and OpenGL. I found that when I do not use MSAA, Metal performs better, but when I enable MSAA, especially with large textures such as 3840x1920, Metal seems slower. I then simplified the render pass to no draw calls; the difference between Metal and OpenGL seems to come from clearing the texture, though I'm not sure. In OpenGL I use glClearBufferfv; in Metal I set colorAttachments.loadAction = MTLLoadActionClear (I tried enabling and disabling triple buffering). When I disable the clear action on both, the OpenGL render-pass time drops by about 3 ms and Metal's by about 8 ms. Does that mean Metal takes more time to clear the texture? How can I resolve this problem? (I need the clear action.)
Device: Mac mini 2018. Xcode: Xcode 13.0. Here is the Metal Instruments screenshot (the GPU idle time seems too long, about 50 ms):
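One thing worth trying for the MSAA case, offered as a sketch rather than a sure fix: on GPUs that support memoryless storage (Apple-family GPUs; the 2018 Mac mini's Intel GPU likely does not, but any Apple silicon target would), the large multisample attachment can be made memoryless and resolved directly, so the 3840x1920 MSAA surface is never allocated or written back to memory, which reduces clear/store traffic:

```swift
import Metal

// Sketch: a memoryless 4x MSAA color attachment that resolves straight into
// a single-sample texture. The MSAA data lives only in tile memory, so the
// clear and store never touch the big surface in system memory.
func makeMSAAAttachment(device: MTLDevice, width: Int, height: Int)
        -> MTLRenderPassColorAttachmentDescriptor {
    let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                        width: width,
                                                        height: height,
                                                        mipmapped: false)
    desc.textureType = .type2DMultisample
    desc.sampleCount = 4
    desc.usage = .renderTarget
    desc.storageMode = .memoryless   // requires an Apple-family GPU

    let attachment = MTLRenderPassColorAttachmentDescriptor()
    attachment.texture = device.makeTexture(descriptor: desc)
    attachment.loadAction = .clear
    attachment.storeAction = .multisampleResolve  // resolveTexture must also be set
    return attachment
}
```

On hardware without memoryless support, pairing loadAction = .clear with storeAction = .multisampleResolve (rather than .store) still avoids writing the full MSAA surface back after the pass.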
Posted by
Post not yet marked as solved
0 Replies
415 Views
It seems that SpriteKit doesn't batch the draw calls for textures in the same texture atlas. There are two different behaviors, depending on how the SKTextureAtlas is initialized:
1. The draw calls are not batched when the texture atlas is initialized using SKTextureAtlas(named:) (loading the texture atlas from data stored in the app bundle).
2. The draw calls seem to be batched when the texture atlas is created dynamically using SKTextureAtlas(dictionary:).
The following images show the two different behaviors and the sprite atlas inside the asset catalog. 1. Draw calls not batched: 2. Draw calls batched: The sprite atlas:
I created a sample Xcode 13 project to show the different behavior: https://github.com/clns/spritekit-atlas-batching. I have tried this on the iOS 15 simulator on macOS Big Sur 11.6.1. The code to reproduce is very simple:

import SpriteKit

class GameScene: SKScene {
    override func didMove(to view: SKView) {
        let atlas = SKTextureAtlas(named: "Sprites")
        // let atlas = SKTextureAtlas(dictionary: ["costume": UIImage(named: "costume")!,
        //                                         "tank": UIImage(named: "tank")!])
        let costume = SKSpriteNode(texture: atlas.textureNamed("costume"))
        costume.setScale(0.3)
        costume.position = CGPoint(x: 200, y: 650)
        let tank = SKSpriteNode(texture: atlas.textureNamed("tank"))
        tank.setScale(0.3)
        tank.position = CGPoint(x: 500, y: 650)
        addChild(costume)
        addChild(tank)
    }
}

Am I missing something?
Posted by
Post not yet marked as solved
0 Replies
313 Views
We have an iPhone app where we use NSPredicate extensively for array filtering. With the iOS 15.1 upgrade, we started seeing performance issues in our app. After troubleshooting, we traced the slowness to array filtering, which otherwise works fast on iOS 15.0.2 and below. Has anyone else faced this issue? If so, is there a resolution? We couldn't find anything in the iOS 15.1 release notes that could explain it.
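As a possible workaround while the regression stands (a sketch; `Person` and the age predicate are made-up stand-ins for the app's real filtering), replacing NSPredicate-based array filtering with a native Swift filter closure bypasses the NSPredicate evaluation machinery that appears to have slowed down:

```swift
// Sketch: a plain Swift equivalent of filtering an NSArray with
// NSPredicate(format: "age >= 18"). The closure is type-checked at compile
// time and avoids per-element predicate evaluation overhead.
struct Person {
    let name: String
    let age: Int
}

func adults(in people: [Person]) -> [Person] {
    people.filter { $0.age >= 18 }
}
```

This only helps for in-memory array filtering; Core Data fetch requests still require NSPredicate, since the predicate is translated to SQL rather than evaluated per object.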
Posted by
Post not yet marked as solved
0 Replies
416 Views
I'm using xcrun xctrace export --output results.xml --input test_trace.trace --xpath '//trace-toc[1]/run[1]/data[1]/table' to export data from a test run with Instruments as part of my app's CI. With Xcode 12 this produced an XML file that I could parse relatively quickly, but with Xcode 13 the export itself takes 90+ seconds and generates a 160 MB XML file for a 10-second recording. The table that has grown is the time-sample schema; just exporting that table with --xpath '//trace-toc[1]/run[1]/data[1]/table[4]' takes quite a while, and it has about 790 thousand rows. I'm using a custom instrument based on the Time Profiler, and I still have about the same number of stack-trace samples in my output. Did anything change in Xcode 13 that causes Instruments to include many more time samples that don't correspond to a stack trace? Is it possible to disable this, so there are fewer time samples in my trace (while preserving the stack-trace frequency) and the XML can be parsed more quickly?
Posted by
Post not yet marked as solved
1 Reply
647 Views
Starting from Xcode 12.4 (I think), failing unit tests in framework targets take about 5 seconds to complete, while successful tests complete almost instantly (as expected). For example, the following test:

func testExampleFailure() {
    XCTFail()
}

takes 4.356 seconds to execute on my 2019 MacBook Pro:

Test Case '-[FrameworkTests.FrameworkTests testExampleFailure]' failed (4.356 seconds).

This seems to affect only unit tests in framework targets (unit tests for app targets are unaffected). I have also reproduced it in fresh framework template projects across multiple Macs, so it appears to be an Xcode bug. I'd hoped Xcode 13 would fix the issue, but it persists for me on Monterey. Perhaps someone could suggest a workaround?
Posted by
Post not yet marked as solved
1 Reply
365 Views
Hello, I'm attempting to automate some performance tests we currently do manually using signposts and Instruments. It looks like XCTOSSignpostMetric is the perfect tool for the job, but I can't get it to play nicely. If I use the pre-defined signpost metric constants (customNavigationTransitionMetric, scrollDecelerationMetric, etc.), it works fine. If I use a custom signpost via the XCTOSSignpostMetric.init(subsystem:category:name:) initializer, nothing happens. The documentation is very sparse on this topic, and Googling, Binging, Githubing and Twittering have come up empty. I reduced the issue to the smallest example I could (https://github.com/tspike/SignpostTest). What am I doing wrong? Thanks, Tres
Environment details: macOS 11.6, Xcode 12.5.1, iOS 14.6, iPhone SE 1st gen.

In the app:

...
let signpostLog = OSLog(subsystem: "com.tspike.signpost", category: "signpost")
let signpostName: StaticString = "SignpostTest"

@main
struct SignpostTestApp: App {
    init() {
        os_signpost(.begin, log: signpostLog, name: signpostName)
        DispatchQueue.main.asyncAfter(deadline: .now() + .milliseconds(500)) {
            os_signpost(.end, log: signpostLog, name: signpostName)
        }
    }
    ...
}

In the test:

func testSignposts() throws {
    let app = XCUIApplication()

    // No performance data
    let metric = XCTOSSignpostMetric(subsystem: "com.tspike.signpost",
                                     category: "signpost",
                                     name: "SignpostTest")
    // Works as expected
    // let metric = XCTOSSignpostMetric.applicationLaunch

    let options = XCTMeasureOptions()
    options.iterationCount = 1
    measure(metrics: [metric], options: options) {
        app.launch()
    }
}
Posted by
Post not yet marked as solved
1 Reply
502 Views
My app uses Core ML to run a neural network (Core ML uses the CPU for most layers). Sometimes performance is very good, but after 30 seconds it slows down (fps drops noticeably). I profiled it and found that iOS uses the performance cores for my app at the beginning; after 30 seconds it stops using the performance cores and starts using the efficiency cores (whose frequency is lower), as you can see in the screenshot. The QoS of my queue is .userInitiated; I also tried .userInteractive, but it doesn't change anything. I assume this is iOS's core-scheduling behavior, but I cannot find any information about it. Is there documentation describing this behavior? Can I make iOS use the performance cores for my app all the time? I'm using an iPhone XR with iOS 14.7.1.
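If the model can run off the CPU at all, one thing to try (a sketch; `MyModel` stands in for the Xcode-generated model class) is asking Core ML for GPU/Neural Engine execution, which takes the sustained load off the CPU cores and sidesteps the performance-to-efficiency core migration entirely:

```swift
import CoreML

// Sketch: prefer GPU and Neural Engine execution where the layers allow it.
// `MyModel` is a hypothetical generated model class name.
func loadModelPreferringAcceleration() throws -> MyModel {
    let config = MLModelConfiguration()
    config.computeUnits = .all   // CPU, GPU, and ANE as available
    return try MyModel(configuration: config)
}
```

If most layers fall back to the CPU because they are unsupported on the GPU/ANE, this won't help, so it is worth checking which layers force the CPU path first.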
Posted by
Post not yet marked as solved
9 Replies
752 Views
Hi, I am developing a simple passthrough proxy system extension using NETransparentProxyProvider. Fundamentally the extension does this: in handleNewFlow, it opens a connection to the remote endpoint using the tunnel provider's createTCPConnection method; once the remote endpoint is connected, it opens the NEAppProxyTCPFlow and starts both ends of the flow. When I use iperf to test network speed while sending, I see a 10x drop in throughput with my system extension:

iperf -c <server_address>

iperf sends 131072-byte blocks by default for 10 seconds. My code for the inbound and outbound flows is quite simple. For the inbound flow: read from the remote connection, write to the flow in the read's completion handler, and start another read from the remote in the flow write's completion handler. For the outbound flow: read from the flow, write to the remote in the completion handler, and trigger another read from the flow in the remote write's completion handler. Is there any problem with this approach that could cause the network slowdown? I also captured Wireshark traces with and without my system extension, and I see a pattern. When I read from the flow, the system extension reads chunks of varying sizes irrespective of what the application is sending, e.g. 4096, 16384, 8192 bytes. When I send these chunks to the remote connection, it waits for ACKs for each chunk irrespective of the TCP window size, and I see a [PSH, ACK] in the last packet of each chunk. Without my system extension, iperf sends many packets in a short time without [PSH, ACK], because it uses bigger buffers, does not wait for ACKs so frequently, and respects the TCP window size. I can provide any details needed to help root-cause this problem. I am testing on macOS Big Sur 11.5.1. Any help is greatly appreciated. Regards
Posted by
Post not yet marked as solved
1 Reply
420 Views
I'm using Xcode 13.0 (13A233) and want to use the enablePerformanceTestsDiagnostics feature. When I run

xcodebuild test -project PerformanceTest.xcodeproj -scheme PerformanceTest -destination "platform=iOS Simulator,id=D43B8013-11A6-4E66-A42A-7174B9109276" -enablePerformanceTestsDiagnostics YES

in the terminal, the xcresult does not contain the memgraphset.zip file. I don't know why this happens. Can anyone confirm whether this is a bug, or give me some hints about what I'm doing wrong?
Posted by
Post not yet marked as solved
1 Reply
953 Views
Hi, I updated iOS from 14.8 to 15. Since then I've been facing issues: many apps are lagging that worked perfectly fine the same day, prior to the update; iPhone Storage takes 3+ minutes to load completely; the App Store takes a couple of seconds to load; the iPhone gets extremely hot while in use; and charging takes longer. I chatted with the Apple support team and, per their suggestion, updated iOS again using my PC, but I'm still getting the same issues. They are also suggesting a factory reset. Why would I do a factory reset and restore again? Just let me downgrade to iOS 14.8.
Posted by
Post not yet marked as solved
1 Reply
408 Views
After installing iOS 15, my iPhone battery has been just straight up worse. I get only a minute of screen-on time for every 1% of battery consumed. Moreover, my phone is plagued with other bugs and random shutdowns.
Posted by